Preconditioned Krylov subspace methods for eigenvalue problems
Wu, Kesheng; Saad, Y.; Stathopoulos, A.
1996-12-31
The Lanczos algorithm is a commonly used method for finding a few extreme eigenvalues of symmetric matrices. It is effective if the wanted eigenvalues have large relative separations. If the separations are small, several alternatives are often used, including the shift-invert Lanczos method, the preconditioned Lanczos method, and the Davidson method. The shift-invert Lanczos method requires a direct factorization of the matrix, which is often impractical if the matrix is large. In these cases preconditioned schemes are preferred. Many applications require the solution of hundreds or thousands of eigenvalues of large sparse matrices, which poses serious challenges for both the iterative eigenvalue solver and the preconditioner. In this paper we will explore several preconditioned eigenvalue solvers and identify the ones best suited for finding a large number of eigenvalues. The methods discussed in this paper make up the core of a preconditioned eigenvalue toolkit under construction.
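As a concrete reference point for the abstract above, the following is a minimal sketch of the plain (unpreconditioned) Lanczos iteration for extreme eigenvalues of a symmetric matrix. The test matrix and all parameter choices are illustrative, no reorthogonalization is done, and none of the paper's preconditioned variants are represented here.

```python
import numpy as np

def lanczos_extreme(A, k=40, seed=0):
    # Plain Lanczos three-term recurrence; T_k is tridiagonal and its
    # eigenvalues (Ritz values) approximate A's extreme eigenvalues first.
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev = np.zeros(n)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(k):
        w = A @ q - beta * q_prev
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        if beta < 1e-12:          # invariant subspace found
            break
        betas.append(beta)
        q_prev, q = q, w / beta
    m = len(alphas)
    T = np.diag(alphas) + np.diag(betas[:m-1], 1) + np.diag(betas[:m-1], -1)
    return np.linalg.eigvalsh(T)

# Illustrative matrix: well separated extreme eigenvalues 1 and 100 with a
# cluster in the middle -- the favorable case described in the abstract.
A = np.diag(np.concatenate([[1.0], np.linspace(10.0, 90.0, 198), [100.0]]))
ritz = lanczos_extreme(A)
print(ritz[0], ritz[-1])   # close to 1.0 and 100.0
```

With small relative separations (the unfavorable case the abstract discusses), far more iterations would be needed, which is what motivates the preconditioned alternatives.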
Preserving Symmetry in Preconditioned Krylov Subspace Methods
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Chow, E.; Saad, Y.; Yeung, M. C.
1996-01-01
We consider the problem of solving a linear system Ax = b when A is nearly symmetric and when the system is preconditioned by a symmetric positive definite matrix M. In the symmetric case, one can recover symmetry by using M-inner products in the conjugate gradient (CG) algorithm. This idea can also be used in the nonsymmetric case, and near symmetry can be preserved similarly. Like CG, the new algorithms are mathematically equivalent to split preconditioning, but do not require M to be factored. Better robustness in a specific sense can also be observed. When combined with truncated versions of iterative methods, tests show that this is more effective than the common practice of forfeiting near-symmetry altogether.
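A sketch of the symmetric-case building block referred to above: preconditioned CG, which is mathematically equivalent to split preconditioning yet only applies M^{-1} (here the `M_solve` callback) and never factors M. The Jacobi preconditioner and test matrix are illustrative choices, not taken from the paper.

```python
import numpy as np

def pcg(A, b, M_solve, tol=1e-10, maxit=500):
    # Preconditioned CG: the M-inner products are implicit in the
    # r @ z terms; M must be symmetric positive definite.
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Illustrative SPD system with a Jacobi preconditioner M = diag(A)
rng = np.random.default_rng(1)
Q = rng.standard_normal((100, 100))
A = Q @ Q.T + 100 * np.eye(100)
b = rng.standard_normal(100)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
print(np.linalg.norm(A @ x - b))  # small residual
```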
Pipelined Flexible Krylov Subspace Methods
NASA Astrophysics Data System (ADS)
Sanan, Patrick; Schnepp, Sascha M.; May, Dave A.
2015-04-01
State-of-the-art geophysical forward models expend most of their computational resources solving large, sparse linear systems. To date, preconditioned Krylov subspace methods have proven to be the only algorithmically scalable approach to solving these systems. However, at `extreme scale', the global reductions required by the inner products within these algorithms become a computational bottleneck, and it becomes advantageous to use pipelined Krylov subspace methods. These allow overlap of global reductions with other work, at the expense of using more storage and local computational effort, including overhead required to synchronize overlapping work. An impediment to using currently-available pipelined solvers for relevant geophysical forward modeling is that they are not `flexible', meaning that they cannot support nonlinear or varying preconditioners. Such preconditioners are effective for solving challenging linear systems, notably those arising from modelling of Stokes flow with highly heterogeneous viscosity structure. To this end, we introduce, for the first time, Krylov subspace methods which are both pipelined and flexible. We implement and demonstrate pipelined, flexible Conjugate Gradient, GMRES, and Conjugate Residual methods, which will be made publicly available via the open source PETSc library. Our algorithms are nontrivial modifications of the flexible methods they are based on (that is, they are not equivalent in exact arithmetic), so we analyze them mathematically and through a number of numerical experiments employing multi-level preconditioners. We highlight the benefits of these algorithms by solving variable viscosity Stokes problems directly relevant to lithospheric dynamics.
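To illustrate what "flexible" means here, below is a compact single-cycle sketch of flexible GMRES (FGMRES) in the style of Saad: the preconditioned directions z_j are stored explicitly, so the preconditioner may change from one iteration to the next. This is a textbook sketch, not the pipelined algorithms of the paper; the alternating preconditioner in the example is purely illustrative.

```python
import numpy as np

def fgmres(A, b, M_solve, m, tol=1e-10):
    # Flexible GMRES, one cycle: because the preconditioner may differ at
    # every step, the preconditioned directions Z are stored and the
    # solution is formed from Z (not from V as in standard GMRES).
    n = b.size
    V = np.zeros((n, m + 1))
    Z = np.zeros((n, m))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)           # x0 = 0, so r0 = b
    V[:, 0] = b / beta
    for j in range(m):
        Z[:, j] = M_solve(V[:, j], j)  # preconditioner may depend on j
        w = A @ Z[:, j]
        for i in range(j + 1):         # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        # small least-squares problem min || beta*e1 - H y ||
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        res = np.linalg.norm(H[:j + 2, :j + 1] @ y - e1)
        if H[j + 1, j] < 1e-14 or res < tol * beta:
            break
        V[:, j + 1] = w / H[j + 1, j]
    return Z[:, :j + 1] @ y

# Illustrative use: SPD test matrix, preconditioner alternating between
# Jacobi and identity from one iteration to the next.
rng = np.random.default_rng(2)
n = 40
Q = rng.standard_normal((n, n))
A = Q @ Q.T + 50 * np.eye(n)
b = rng.standard_normal(n)
d = np.diag(A)
x = fgmres(A, b, lambda v, j: v / d if j % 2 == 0 else v, m=n)
print(np.linalg.norm(A @ x - b))   # small residual
```

Standard GMRES would be incorrect with a varying preconditioner; storing Z is the extra memory cost that buys the flexibility.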
Krylov subspace methods on supercomputers
NASA Technical Reports Server (NTRS)
Saad, Youcef
1988-01-01
A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.
Krylov subspace methods - Theory, algorithms, and applications
NASA Technical Reports Server (NTRS)
Saad, Youcef
1990-01-01
Projection methods based on Krylov subspaces for solving various types of scientific problems are reviewed. The main idea of this class of methods, when applied to a linear system Ax = b, is to generate in some manner an approximate solution to the original problem from the Krylov subspace span{v, Av, A^2 v, ..., A^(m-1) v}. Thus, the original problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. The main ideas in Krylov subspace methods are shown, and their use in solving linear systems, eigenvalue problems, parabolic partial differential equations, Liapunov matrix equations, and nonlinear systems of equations is discussed.
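The projection idea in the abstract can be made concrete with the Arnoldi process: build an orthonormal basis V of the Krylov subspace and the small projected matrix H, reducing the size-N problem to size m. All sizes here are illustrative.

```python
import numpy as np

def arnoldi(A, v, m):
    # Build an orthonormal basis V of span{v, Av, ..., A^(m-1) v} and the
    # projected upper Hessenberg matrix H = V^T A V of size m x m.
    n = v.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):        # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:       # happy breakdown: invariant subspace
            break
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

# The eigenvalues of the small H (Ritz values) approximate eigenvalues of
# the large A; the 300-to-30 reduction here is illustrative.
rng = np.random.default_rng(0)
A = rng.standard_normal((300, 300))
V, H = arnoldi(A, rng.standard_normal(300), 30)
```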
Overview of Krylov subspace methods with applications to control problems
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
An overview of projection methods based on Krylov subspaces is given, with emphasis on their application to solving matrix equations that arise in control problems. The main idea of Krylov subspace methods is to generate a basis of the Krylov subspace span{v, Av, ..., A^(m-1) v} and seek an approximate solution to the original problem from this subspace. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now just becoming popular for solving nonlinear equations. It is shown how they can be used to solve partial pole placement problems, Sylvester's equation, and Lyapunov's equation.
Solving nonlinear heat conduction problems with multigrid preconditioned Newton-Krylov methods
Rider, W.J.; Knoll, D.A.
1997-09-01
Our objective is to investigate the utility of employing multigrid preconditioned Newton-Krylov methods for solving initial value problems. Multigrid-based methods promise better performance through the linear scaling associated with them. Our model problem is nonlinear heat conduction, which can model idealized Marshak waves. Here we will investigate the efficiency of using a linear multigrid method to precondition a Krylov subspace method. In effect, we will show that a fixed point nonlinear iterative method provides an effective preconditioner for the nonlinear problem.
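The Newton-Krylov combination rests on the fact that a Krylov solver needs only Jacobian-vector products, which can be approximated matrix-free by a directional finite difference. A sketch of that building block follows; the nonlinear residual F is an illustrative stand-in, not the Marshak-wave model of the paper.

```python
import numpy as np

def jfnk_matvec(F, u, v, eps=1e-7):
    # Jacobian-free Newton-Krylov kernel: approximate J(u) @ v by a
    # finite-difference directional derivative, so the Krylov solver
    # never needs the Jacobian matrix explicitly.
    norm_v = np.linalg.norm(v)
    if norm_v == 0:
        return np.zeros_like(v)
    h = eps * max(1.0, np.linalg.norm(u)) / norm_v
    return (F(u + h * v) - F(u)) / h

# Illustrative nonlinear residual F(u) = A u + u**3, whose Jacobian
# J(u) = A + 3 diag(u**2) is known, so the approximation can be checked.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
u = rng.standard_normal(50)
v = rng.standard_normal(50)
F = lambda u: A @ u + u**3
Jv_exact = A @ v + 3 * u**2 * v
Jv_fd = jfnk_matvec(F, u, v)
print(np.linalg.norm(Jv_fd - Jv_exact) / np.linalg.norm(Jv_exact))  # small
```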
Reduced-Rank Adaptive Filtering Using Krylov Subspace
NASA Astrophysics Data System (ADS)
Burykh, Sergueï; Abed-Meraim, Karim
2003-12-01
A unified view of several recently introduced reduced-rank adaptive filters is presented. As all of the considered methods use Krylov subspaces for rank reduction, the approach taken in this work is inspired by Krylov subspace methods for iterative solutions of linear systems. The alternative interpretation so obtained is used to study the properties of each considered technique and to relate one reduced-rank method to another, as well as to algorithms used in computational linear algebra. Practical issues are discussed and low-complexity versions are also included in our study. It is believed that the insight developed in this paper can be further used to improve existing reduced-rank methods according to known results in the domain of Krylov subspace methods.
Krylov-subspace acceleration of time periodic waveform relaxation
Lumsdaine, A.
1994-12-31
In this paper the author uses Krylov-subspace techniques to accelerate the convergence of waveform relaxation applied to solving systems of first order time periodic ordinary differential equations. He considers the problem in the frequency domain and presents frequency dependent waveform GMRES (FDWGMRES), a member of a new class of frequency dependent Krylov-subspace techniques. FDWGMRES exhibits many desirable properties, including finite termination independent of the number of timesteps and, for certain problems, a convergence rate which is bounded from above by the convergence rate of GMRES applied to the static matrix problem corresponding to the linear time-invariant ODE.
An adaptation of Krylov subspace methods to path following
Walker, H.F.
1996-12-31
Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
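A toy sketch of the predictor-corrector scheme described above, with the orthogonality condition imposed by augmenting the underdetermined Newton system with the tangent row (the "straightforward" augmentation the abstract mentions). The curve, step size, and direct solver are illustrative; the paper's point is precisely about replacing the direct solve with a Krylov method.

```python
import numpy as np

def continuation_step(f, jac, y, t, h, newton_iters=8):
    # One predictor-corrector step for tracking f(y) = 0, y = (x, lam).
    # Predictor: Euler step along the tangent t. Corrector: Newton on the
    # underdetermined system augmented with the condition t . dy = 0.
    y = y + h * t                      # predictor
    for _ in range(newton_iters):      # corrector
        J = jac(y)                     # 1 x 2 Jacobian of f
        A = np.vstack([J, t])          # augment with the tangent row
        rhs = np.array([-f(y), 0.0])
        dy = np.linalg.solve(A, rhs)   # direct solve stands in for Krylov
        y = y + dy
        if abs(f(y)) < 1e-12:
            break
    return y

# Illustrative problem: follow the unit circle f(x, lam) = x^2 + lam^2 - 1
f = lambda y: y[0]**2 + y[1]**2 - 1.0
jac = lambda y: np.array([[2 * y[0], 2 * y[1]]])
y = np.array([1.0, 0.0])
t = np.array([0.0, 1.0])               # tangent at (1, 0)
for _ in range(20):
    y_new = continuation_step(f, jac, y, t, h=0.1)
    t_new = y_new - y                  # secant approximation of the tangent
    t = t_new / np.linalg.norm(t_new)
    y = y_new
print(y, f(y))                         # the iterate stays on the circle
```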
NASA Astrophysics Data System (ADS)
Gatsis, John
An investigation of preconditioning techniques is presented for a Newton-Krylov algorithm that is used for the computation of steady, compressible, high Reynolds number flows about airfoils. A second-order centred-difference method is used to discretize the compressible Navier-Stokes (NS) equations that govern the fluid flow. The one-equation Spalart-Allmaras turbulence model is used. The discretized equations are solved using Newton's method and the generalized minimal residual (GMRES) Krylov subspace method is used to approximately solve the linear system. These preconditioning techniques are first applied to the solution of the discretized steady convection-diffusion equation. Various orderings, iterative block incomplete LU (BILU) preconditioning and multigrid preconditioning are explored. The baseline preconditioner is a BILU factorization of a lower-order discretization of the system matrix in the Newton linearization. An ordering based on the minimum discarded fill (MDF) ordering is developed and compared to the widely popular reverse Cuthill-McKee (RCM) ordering. An evolutionary algorithm is used to investigate and enhance this ordering. For the convection-diffusion equation, the MDF-based ordering performs well, while RCM is superior for the NS equations. Experiments for inviscid, laminar and turbulent cases are presented to show the effectiveness of iterative BILU preconditioning in terms of reducing the number of GMRES iterations, and hence the memory requirements of the Newton-Krylov algorithm. Multigrid preconditioning also reduces the number of GMRES iterations. The framework for the iterative BILU and BILU-smoothed multigrid preconditioning algorithms is presented in detail.
Application of Block Krylov Subspace Spectral Methods to Maxwell's Equations
Lambers, James V.
2009-10-08
Ever since its introduction by Kane Yee over forty years ago, the finite-difference time-domain (FDTD) method has been a widely-used technique for solving the time-dependent Maxwell's equations. This paper presents an alternative approach to these equations in the case of spatially-varying electric permittivity and/or magnetic permeability, based on Krylov subspace spectral (KSS) methods. These methods have previously been applied to the variable-coefficient heat equation and wave equation, and have demonstrated high-order accuracy, as well as stability characteristic of implicit time-stepping schemes, even though KSS methods are explicit. KSS methods for scalar equations compute each Fourier coefficient of the solution using techniques developed by Gene Golub and Gerard Meurant for approximating elements of functions of matrices by Gaussian quadrature in the spectral, rather than physical, domain. We show how they can be generalized to coupled systems of equations, such as Maxwell's equations, by choosing appropriate basis functions that, while induced by this coupling, still allow efficient and robust computation of the Fourier coefficients of each spatial component of the electric and magnetic fields. We also discuss the implementation of appropriate boundary conditions for simulation on infinite computational domains, and how discontinuous coefficients can be handled.
Domain decomposed preconditioners with Krylov subspace methods as subdomain solvers
Pernice, M.
1994-12-31
Domain decomposed preconditioners for nonsymmetric partial differential equations typically require the solution of problems on the subdomains. Most implementations employ exact solvers to obtain these solutions. Consequently work and storage requirements for the subdomain problems grow rapidly with the size of the subdomain problems. Subdomain solves constitute the single largest computational cost of a domain decomposed preconditioner, and improving the efficiency of this phase of the computation will have a significant impact on the performance of the overall method. The small local memory available on the nodes of most message-passing multicomputers motivates consideration of the use of an iterative method for solving subdomain problems. For large-scale systems of equations that are derived from three-dimensional problems, memory considerations alone may dictate the need for using iterative methods for the subdomain problems. In addition to reduced storage requirements, use of an iterative solver on the subdomains allows flexibility in specifying the accuracy of the subdomain solutions. Substantial savings in solution time are possible if the quality of the domain decomposed preconditioner is not degraded too much by relaxing the accuracy of the subdomain solutions. While some work in this direction has been conducted for symmetric problems, similar studies for nonsymmetric problems appear not to have been pursued. This work represents a first step in this direction, and explores the effectiveness of performing subdomain solves using several transpose-free Krylov subspace methods: GMRES, transpose-free QMR, CGS, and a smoothed version of CGS. Depending on the difficulty of the subdomain problem and the convergence tolerance used, a reduction in solution time is possible in addition to the reduced memory requirements. The domain decomposed preconditioner is a Schur complement method in which the interface operators are approximated using interface probing.
Krylov subspace methods for computing hydrodynamic interactions in Brownian dynamics simulations
Ando, Tadashi; Chow, Edmond; Saad, Yousef; Skolnick, Jeffrey
2012-01-01
Hydrodynamic interactions play an important role in the dynamics of macromolecules. The most common way to take into account hydrodynamic effects in molecular simulations is in the context of a Brownian dynamics simulation. However, the calculation of correlated Brownian noise vectors in these simulations is computationally very demanding and alternative methods are desirable. This paper studies methods based on Krylov subspaces for computing Brownian noise vectors. These methods are related to Chebyshev polynomial approximations, but do not require eigenvalue estimates. We show that only low accuracy is required in the Brownian noise vectors to accurately compute values of dynamic and static properties of polymer and monodisperse suspension models. With this level of accuracy, the computational time of Krylov subspace methods scales very nearly as O(N^2) for the number of particles N up to 10 000, which was the limit tested. The performance of the Krylov subspace methods, especially the “block” version, is slightly better than that of the Chebyshev method, even without taking into account the additional cost of eigenvalue estimates required by the latter. Furthermore, at N = 10 000, the Krylov subspace method is 13 times faster than the exact Cholesky method. Thus, Krylov subspace methods are recommended for performing large-scale Brownian dynamics simulations with hydrodynamic interactions. PMID:22897254
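The core kernel here is applying the square root of an SPD matrix D to a random vector z. A Lanczos sketch of f(D)z ≈ ||z|| V_m f(T_m) e_1 with f = sqrt follows; as the abstract notes, no eigenvalue estimates are needed, unlike Chebyshev approaches. The test matrix is illustrative and far better conditioned than a real hydrodynamic mobility matrix, and no breakdown handling or reorthogonalization is included.

```python
import numpy as np

def krylov_sqrt_action(D, z, m=30):
    # Lanczos approximation of D^{1/2} z: run m steps starting from z,
    # then evaluate sqrt on the small tridiagonal projection T_m.
    n = z.size
    V = np.zeros((n, m))
    alphas, betas = [], []
    beta0 = np.linalg.norm(z)
    V[:, 0] = z / beta0
    beta = 0.0
    for j in range(m):
        w = D @ V[:, j]
        if j > 0:
            w -= beta * V[:, j - 1]
        alpha = V[:, j] @ w
        w -= alpha * V[:, j]
        alphas.append(alpha)
        if j + 1 < m:
            beta = np.linalg.norm(w)
            betas.append(beta)
            V[:, j + 1] = w / beta
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    lam, U = np.linalg.eigh(T)
    fTe1 = U @ (np.sqrt(np.maximum(lam, 0.0)) * U[0, :])  # sqrt(T) e_1
    return beta0 * (V @ fTe1)

# Illustrative SPD matrix with spectrum in [1, 4]
rng = np.random.default_rng(3)
n = 80
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
D = Q @ np.diag(np.linspace(1.0, 4.0, n)) @ Q.T
z = rng.standard_normal(n)
y = krylov_sqrt_action(D, z, m=30)
```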
Druskin, V.; Lee, Ping; Knizhnerman, L.
1996-12-31
There is now a growing interest in the area of using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems obtained after discretization of partial differential equations by the method of lines. In the event that computing the matrix inverse is relatively inexpensive, it is sometimes attractive to solve the ODE using the extended Krylov subspaces, originated by actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.
A subspace preconditioning algorithm for eigenvector/eigenvalue computation
Bramble, J.H.; Knyazev, A.V.; Pasciak, J.E.
1996-12-31
We consider the problem of computing a modest number of the smallest eigenvalues along with orthogonal bases for the corresponding eigenspaces of a symmetric positive definite matrix. In our applications, the dimension of the matrix is large and the cost of inverting it is prohibitive. In this paper, we shall develop an effective parallelizable technique for computing these eigenvalues and eigenvectors utilizing subspace iteration and preconditioning. Estimates will be provided which show that the preconditioned method converges linearly and uniformly in the matrix dimension when used with a uniform preconditioner under the assumption that the approximating subspace is close enough to the span of desired eigenvectors.
A linear system solver based on a modified Krylov subspace method for breakdown recovery
NASA Astrophysics Data System (ADS)
Tong, Charles; Ye, Qiang
1996-03-01
Despite its usefulness in solving eigenvalue problems and linear systems of equations, the nonsymmetric Lanczos method is known to suffer from a potential breakdown problem. Previous and recent approaches for handling the Lanczos exact and near-breakdowns include, for example, the look-ahead schemes by Parlett-Taylor-Liu [23], Freund-Gutknecht-Nachtigal [9], and Brezinski-Redivo Zaglia-Sadok [4]; the combined look-ahead and restart scheme by Joubert [18]; and the low-rank modified Lanczos scheme by Huckle [17]. In this paper, we present yet another scheme based on a modified Krylov subspace approach for the solution of nonsymmetric linear systems. When a breakdown occurs, our approach seeks a modified dual Krylov subspace, which is the sum of the original subspace and a new Krylov subspace K_m(w_j, A^T), where w_j is a new start vector (this approach has been studied by Ye [26] for eigenvalue computations). Based on this strategy, we have developed a practical algorithm for linear systems called the MLAN/QM algorithm, which also incorporates the residual quasi-minimization as proposed in [12]. We present a few convergence bounds for the method as well as numerical results to show its effectiveness.
A hierarchical Krylov-Bayes iterative inverse solver for MEG with physiological preconditioning
NASA Astrophysics Data System (ADS)
Calvetti, D.; Pascarella, A.; Pitolli, F.; Somersalo, E.; Vantaggi, B.
2015-12-01
The inverse problem of MEG aims at estimating electromagnetic cerebral activity from measurements of the magnetic fields outside the head. After formulating the problem within the Bayesian framework, a hierarchical conditionally Gaussian prior model is introduced, including a physiologically inspired prior model that takes into account the preferred directions of the source currents. The hyperparameter vector consists of prior variances of the dipole moments, assumed to follow a non-conjugate gamma distribution with variable scaling and shape parameters. A point estimate of both dipole moments and their variances can be computed using an iterative alternating sequential updating algorithm, which is shown to be globally convergent. The numerical solution is based on computing an approximation of the dipole moments using a Krylov subspace iterative linear solver equipped with statistically inspired preconditioning and a suitable termination rule. The shape parameters of the model are shown to control the focality, and furthermore, using an empirical Bayes argument, it is shown that the scaling parameters can be naturally adjusted to provide a statistically well justified depth sensitivity scaling. The validity of this interpretation is verified through computed numerical examples. Also, a computed example showing the applicability of the algorithm to analyze realistic time series data is presented.
Krylov subspace algorithms for computing GeneRank for the analysis of microarray data mining.
Wu, Gang; Zhang, Ying; Wei, Yimin
2010-04-01
GeneRank is a new engine technology for the analysis of microarray experiments. It combines gene expression information with a network structure derived from gene notations or expression profile correlations. Using matrix decomposition techniques, we first give a matrix analysis of the GeneRank model. We reformulate the GeneRank vector as a linear combination of three parts in the general case when the matrix in question is non-diagonalizable. We then propose two Krylov subspace methods for computing GeneRank. Numerical experiments show that, when the GeneRank problem is very large, the new algorithms are appropriate choices. PMID:20426695
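For orientation, a hedged sketch of the GeneRank model as it is usually stated in the literature: a PageRank-like linear system (I - d W D^{-1}) x = (1 - d) ex. The exact formulation, the damping value, and the fixed-point solver below are assumptions for illustration, not quoted from this paper; the paper's contribution is solving the same kind of system with Krylov methods when it is very large.

```python
import numpy as np

def generank(W, ex, d=0.5, iters=200):
    # Assumed GeneRank model: solve (I - d W D^{-1}) x = (1 - d) ex, where
    # W is a symmetric gene-network adjacency matrix, D holds the degrees,
    # and ex the nonnegative expression values. A simple fixed-point
    # (Richardson-type) iteration stands in for a Krylov solver here.
    deg = np.maximum(W.sum(axis=1), 1.0)
    P = W / deg                 # column scaling: W D^{-1} (W is symmetric)
    x = ex.copy()
    for _ in range(iters):
        x = (1 - d) * ex + d * (P @ x)
    return x

# Illustrative random gene network
rng = np.random.default_rng(4)
n = 30
W = (rng.random((n, n)) < 0.2).astype(float)
W = np.maximum(W, W.T)          # symmetrize the adjacency matrix
np.fill_diagonal(W, 0.0)
ex = rng.random(n)
x = generank(W, ex, d=0.5)
```

Since d < 1 and W D^{-1} is column-stochastic on connected nodes, the iteration is a contraction and converges to the unique GeneRank vector.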
Generalization of the residual cutting method based on the Krylov subspace
NASA Astrophysics Data System (ADS)
Abe, Toshihiko; Sekine, Yoshihito; Kikuchi, Kazuo
2016-06-01
The residual cutting (RC) method has been reported to have superior converging characteristics in numerically solving elliptic partial differential equations. However, its application is limited to linear problems with diagonal-dominant matrices in general, for which convergence of a relaxation method such as SOR is guaranteed. In this study, we propose the generalized residual cutting (GRC) method, which is based on the Krylov subspace and applicable to general unsymmetric linear problems. Also, we perform numerical experiments with various coefficient matrices, and show that the GRC method has some desirable properties such as convergence characteristics and memory usage, in comparison to the conventional RC, BiCGSTAB and GMRES methods. At the request of the author of this paper, a corrigendum was issued on 22 June 2016 to correct an error in Eq. (2) and Eq. (3).
A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester
2010-01-01
A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides Ax = b^(i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
Druskin, V.; Knizhnerman, L.
1994-12-31
The authors solve the Cauchy problem for an ODE system Au + ∂u/∂t = 0, u|_{t=0} = φ, where A is a square real nonnegative definite symmetric matrix of order N and φ is a vector from R^N. The stiffness matrix A is obtained from semi-discretization of a parabolic equation or system with time-independent coefficients. The authors are particularly interested in large stiff 3-D problems for the scalar diffusion and vectorial Maxwell's equations. First they consider an explicit method in which the solution on a whole time interval is projected on a Krylov subspace originated by A. Then they suggest another Krylov subspace with better approximating properties using powers of an implicit transition operator. These Krylov subspace methods generate polynomial approximations for the solution of the ODE that are optimal in a spectral sense, similar to CG for systems of linear equations.
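A sketch of the explicit variant described above: project the solution u(t) = exp(-tA)φ onto a Krylov subspace built from A and evaluate the exponential on the small tridiagonal projection. The 1-D Laplacian stencil and all sizes are illustrative, not the 3-D problems of the paper.

```python
import numpy as np

def krylov_expm_action(A, phi, t, m=25):
    # Lanczos approximation u(t) = exp(-t A) phi ~= ||phi|| V_m exp(-t T_m) e_1
    # for symmetric nonnegative definite A (no breakdown handling; sketch).
    n = phi.size
    V = np.zeros((n, m))
    alphas, betas = [], []
    beta0 = np.linalg.norm(phi)
    V[:, 0] = phi / beta0
    beta = 0.0
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta * V[:, j - 1]
        alpha = V[:, j] @ w
        w -= alpha * V[:, j]
        alphas.append(alpha)
        if j + 1 < m:
            beta = np.linalg.norm(w)
            betas.append(beta)
            V[:, j + 1] = w / beta
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    lam, U = np.linalg.eigh(T)
    return beta0 * (V @ (U @ (np.exp(-t * lam) * U[0, :])))

# Illustrative stiffness matrix: 1-D Laplacian stencil (semi-discretized
# diffusion), random initial vector phi
n = 60
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
rng = np.random.default_rng(5)
phi = rng.standard_normal(n)
u = krylov_expm_action(A, phi, t=0.1, m=25)
```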
A numerical solution of a Cauchy problem for an elliptic equation by Krylov subspaces
NASA Astrophysics Data System (ADS)
Eldén, Lars; Simoncini, Valeria
2009-06-01
We study the numerical solution of a Cauchy problem for a self-adjoint elliptic partial differential equation u_zz - Lu = 0 in three space dimensions (x, y, z), where the domain is cylindrical in z. Cauchy data are given on the lower boundary and the boundary values on the upper boundary are sought. The problem is severely ill-posed. The formal solution is written as a hyperbolic cosine function in terms of the two-dimensional elliptic operator L (via its eigenfunction expansion), and it is shown that the solution is stabilized (regularized) if the large eigenvalues are cut off. We suggest a numerical procedure based on the rational Krylov method, where the solution is projected onto a subspace generated using the operator L^(-1). This means that in each Krylov step, a well-posed two-dimensional elliptic problem involving L is solved. Furthermore, the hyperbolic cosine is evaluated explicitly only for a small symmetric matrix. A stopping criterion for the Krylov recursion is suggested based on the relative change of an approximate residual, which can be computed very cheaply. Two numerical examples are given that demonstrate the accuracy of the method and the efficiency of the stopping criterion.
Krylov subspace iterative methods for boundary element method based near-field acoustic holography.
Valdivia, Nicolas; Williams, Earl G
2005-02-01
The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resultant matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence," i.e., the optimal regularization solution is obtained after a few iterations. If the iteration is not stopped, the method converges to a solution that generally is totally corrupted by errors in the measurements. For these methods the number of iterations plays the role of the regularization parameter. We will focus our attention on the study of the regularizing properties of Krylov subspace methods such as conjugate gradients, least squares QR (LSQR), and the recently proposed hybrid method. A discussion and comparison of the available stopping rules will be included. A vibrating plate is considered as an example to validate our results. PMID:15759691
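Semi-convergence can be demonstrated with CGLS (CG applied implicitly to the normal equations, closely related to LSQR) on a synthetic ill-posed problem: the error against the true solution first decreases and then grows as the iteration starts fitting noise, so the iteration count acts as the regularization parameter. The test problem below is illustrative, not the holography system of the paper.

```python
import numpy as np

def cgls(A, b, iters):
    # CGLS: CG on A^T A x = A^T b without forming A^T A. Every iterate is
    # recorded so the semi-convergence of the error can be observed.
    x = np.zeros(A.shape[1])
    r = b.copy()
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    history = []
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
        history.append(x.copy())
    return history

# Illustrative ill-posed problem: rapidly decaying singular values + noise
rng = np.random.default_rng(6)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
Vr, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** (-np.arange(n) / 4.0)
A = U @ np.diag(s) @ Vr.T
x_true = Vr[:, 0] + Vr[:, 1]
b = A @ x_true + 1e-3 * rng.standard_normal(n)
history = cgls(A, b, iters=40)
errs = [np.linalg.norm(x - x_true) for x in history]
# the error reaches its minimum early, then grows as noise is fitted
```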
3D-marine tCSEM inversion using model reduction in the Rational Krylov subspace
NASA Astrophysics Data System (ADS)
Sommer, M.; Jegen, M. D.
2014-12-01
Computationally, the most expensive part of a 3D time domain CSEM inversion is the computation of the Jacobian matrix in every Gauss-Newton step. Another problem is its size for large data sets. We use a model reduction method (Zaslavsky et al., 2013) that compresses the Jacobian by projecting it with a rational Krylov subspace (RKS). It also reduces the runtime drastically compared to the most common adjoint approach, and was implemented on GPU. It depends on an analytic derivation of the implicit ansatz function, which solves Maxwell's diffusion equation in the eigenspace, giving a Jacobian dependent on the eigenpairs of the forward problem and their derivatives. The eigenpairs are approximated by Ritz pairs in the rational Krylov subspace. Determination of the derived Ritz pairs is the most time-consuming step and was fully GPU-optimized. Furthermore, the number of inversion cells is reduced by using octree meshes. The gridding allows for the incorporation of complicated survey geometries, as they are encountered in marine CSEM datasets. As a first result, the Jacobian computation is, even on a desktop, faster than the most common adjoint approach on a supercomputer for realistic data sets. We will present careful benchmarking and accuracy tests of the new method and show how it can be applied to a real marine scenario.
A new Krylov-subspace method for symmetric indefinite linear systems
Freund, R.W.; Nachtigal, N.M.
1994-10-01
Many important applications involve the solution of large linear systems with symmetric, but indefinite coefficient matrices. For example, such systems arise in incompressible flow computations and as subproblems in optimization algorithms for linear and nonlinear programs. Existing Krylov-subspace iterations for symmetric indefinite systems, such as SYMMLQ and MINRES, require the use of symmetric positive definite preconditioners, which is a rather unnatural restriction when the matrix itself is highly indefinite with both many positive and many negative eigenvalues. In this note, the authors describe a new Krylov-subspace iteration for solving symmetric indefinite linear systems that can be combined with arbitrary symmetric preconditioners. The algorithm can be interpreted as a special case of the quasi-minimal residual method for general non-Hermitian linear systems, and like the latter, it produces iterates defined by a quasi-minimal residual property. The proposed method has the same work and storage requirements per iteration as SYMMLQ or MINRES; however, it usually converges in considerably fewer iterations. Results of numerical experiments are reported.
NASA Technical Reports Server (NTRS)
Sidi, Avram
1992-01-01
Let F(z) be a vector-valued function F: C → C^N, which is analytic at z=0 and meromorphic in a neighborhood of z=0, and let its Maclaurin series be given. We use vector-valued rational approximation procedures for F(z) that are based on its Maclaurin series in conjunction with power iterations to develop bona fide generalizations of the power method for an arbitrary N X N matrix that may be diagonalizable or not. These generalizations can be used to obtain simultaneously several of the largest distinct eigenvalues and the corresponding invariant subspaces, and we present a detailed convergence theory for them. In addition, it is shown that the generalized power methods of this work are equivalent to some Krylov subspace methods, among them the methods of Arnoldi and Lanczos. Thus, the theory provides a set of completely new results and constructions for these Krylov subspace methods. This theory suggests at the same time a new mode of usage for these Krylov subspace methods that was observed to possess computational advantages over their common mode of usage.
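The classical power method that the paper generalizes fits in a few lines; the test matrix with a planted dominant eigenvalue is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.linspace(1.0, 2.0, n)
eigs[-1] = 3.0                     # planted dominant eigenvalue
A = (Q * eigs) @ Q.T               # symmetric matrix with known spectrum

v = rng.standard_normal(n)
for _ in range(100):
    w = A @ v                      # one matrix-vector product per step...
    v = w / np.linalg.norm(w)      # ...but only the latest iterate is kept
lam = v @ A @ v                    # Rayleigh-quotient estimate of the dominant eigenvalue

# A Krylov method (Lanczos/Arnoldi) instead retains the whole sequence
# v, Av, A^2 v, ... and extracts several eigenpairs from it at once,
# which is the equivalence the paper develops.
```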
Radio astronomical image formation using constrained least squares and Krylov subspaces
NASA Astrophysics Data System (ADS)
Mouri Sardarabadi, Ahmad; Leshem, Amir; van der Veen, Alle-Jan
2016-04-01
Aims: Image formation for radio astronomy can be defined as estimating the spatial intensity distribution of celestial sources throughout the sky, given an array of antennas. One of the challenges with image formation is that the problem becomes ill-posed as the number of pixels becomes large. The introduction of constraints that incorporate a priori knowledge is crucial. Methods: In this paper we show that in addition to non-negativity, the magnitude of each pixel in an image is also bounded from above. Indeed, the classical "dirty image" is an upper bound, but a much tighter upper bound can be formed from the data using array processing techniques. This formulates image formation as a least squares optimization problem with inequality constraints. We propose to solve this constrained least squares problem using active set techniques, and the steps needed to implement it are described. It is shown that the least squares part of the problem can be efficiently implemented with Krylov-subspace-based techniques. We also propose a method for correcting for the possible mismatch between source positions and the pixel grid. This correction improves both the detection of sources and their estimated intensities. The performance of these algorithms is evaluated using simulations. Results: Based on parametric modeling of the astronomical data, a new imaging algorithm based on convex optimization, active sets, and Krylov-subspace-based solvers is presented. The relation between the proposed algorithm and sequential source removing techniques is explained, and it gives a better mathematical framework for analyzing existing algorithms. We show that by using the structure of the algorithm, an efficient implementation that allows massive parallelism and storage reduction is feasible. Simulations are used to compare the new algorithm to classical CLEAN. Results illustrate that for a discrete point model, the proposed algorithm is capable of detecting the correct number of sources
Krylov methods preconditioned with incompletely factored matrices on the CM-2
NASA Technical Reports Server (NTRS)
Berryman, Harry; Saltz, Joel; Gropp, William; Mirchandaney, Ravi
1989-01-01
The performance of the components of the key iterative kernel of a preconditioned Krylov-space iterative linear system solver is measured. In some sense, these numbers can be regarded as best-case timings for these kernels. Sweeps over meshes, sparse triangular solves, and inner products were timed on a large 3-D model problem over a cube-shaped domain discretized with a seven-point template. The performance of the CM-2 is highly dependent on the use of very specialized programs. These programs mapped a regular problem domain onto the processor topology in a careful manner and used the optimized local NEWS communications network. A rather dramatic deterioration in performance is documented when these ideal conditions no longer apply. A synthetic workload generator was developed to produce and solve a parameterized family of increasingly irregular problems.
Comparison of Krylov subspace methods on the PageRank problem
NASA Astrophysics Data System (ADS)
Del Corso, Gianna M.; Gulli, Antonio; Romani, Francesco
2007-12-01
The PageRank algorithm plays a very important role in search engine technology and consists of computing the eigenvector corresponding to the eigenvalue one of a matrix whose size is now in the billions. The problem incorporates a parameter α that determines the difficulty of the problem. In this paper, the effectiveness of stationary and nonstationary methods is compared on some portion of real web matrices for different choices of α. We see that stationary methods are very reliable and more competitive when the problem is well conditioned, that is, for small values of α. However, for large values of the parameter α the problem becomes more difficult, and methods such as preconditioned BiCGStab or restarted preconditioned GMRES become competitive with stationary methods in terms of Mflops count as well as in the number of iterations necessary to reach convergence.
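The two solution families compared in the paper can be sketched on a toy problem: power iteration (stationary) versus a Krylov solver applied to the equivalent linear system (I - αP)x = (1-α)v. The 4-page link matrix below is hypothetical, for illustration only.

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Hypothetical 4-page web: column-stochastic link matrix P.
P = np.array([[0.0, 0.0, 1.0, 0.5],
              [1/3, 0.0, 0.0, 0.0],
              [1/3, 0.5, 0.0, 0.5],
              [1/3, 0.5, 0.0, 0.0]])
alpha = 0.99                       # large alpha: the harder, ill-conditioned regime
v = np.full(4, 0.25)               # uniform teleportation vector

# Stationary view: power iteration on x = alpha*P*x + (1-alpha)*v.
x_pow = v.copy()
for _ in range(2000):
    x_pow = alpha * (P @ x_pow) + (1 - alpha) * v

# Nonstationary view: the same vector solves (I - alpha*P) x = (1-alpha)*v,
# so any Krylov solver (here GMRES, unpreconditioned) applies.
x_krylov, info = gmres(np.eye(4) - alpha * P, (1 - alpha) * v)
```

Because P is column-stochastic, both vectors sum to one and agree up to the solver tolerances.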
ODE System Solver W. Krylov Iteration & Rootfinding
Hindmarsh, Alan C.
1991-09-09
LSODKR is a new initial value ODE solver for stiff and nonstiff systems. It is a variant of the LSODPK and LSODE solvers, intended mainly for large stiff systems. The main differences between LSODKR and LSODE are the following: (a) for stiff systems, LSODKR uses a corrector iteration composed of Newton iteration and one of four preconditioned Krylov subspace iteration methods, for which the user must supply routines for the preconditioning operations; (b) within the corrector iteration, LSODKR does automatic switching between functional (fixpoint) iteration and modified Newton iteration; and (c) LSODKR includes the ability to find roots of given functions of the solution during the integration.
ODE System Solver W. Krylov Iteration & Rootfinding
Energy Science and Technology Software Center (ESTSC)
1991-09-09
LSODKR is a new initial value ODE solver for stiff and nonstiff systems. It is a variant of the LSODPK and LSODE solvers, intended mainly for large stiff systems. The main differences between LSODKR and LSODE are the following: (a) for stiff systems, LSODKR uses a corrector iteration composed of Newton iteration and one of four preconditioned Krylov subspace iteration methods, for which the user must supply routines for the preconditioning operations; (b) within the corrector iteration, LSODKR does automatic switching between functional (fixpoint) iteration and modified Newton iteration; and (c) LSODKR includes the ability to find roots of given functions of the solution during the integration.
Jiang, Mingfeng; Xia, Ling; Huang, Wenqing; Shou, Guofa; Liu, Feng; Crozier, Stuart
2009-10-01
Regularization is an effective method for the solution of ill-posed ECG inverse problems, such as computing epicardial potentials from body surface potentials. The aim of this work was to explore more robust regularization-based solutions through the application of subspace preconditioned LSQR (SP-LSQR) to the study of model-based ECG inverse problems. Here, we applied three different subspace splitting methods, i.e., SVD, wavelet transform and cosine transform schemes, to the design of the preconditioners for ill-posed problems, and evaluated the performance of the algorithms using a realistic heart-torso model simulation protocol. The results demonstrated that, when compared with the LSQR, LSQR-Tik and Tik-LSQR methods, the SP-LSQR produced higher efficiency and reconstructed more accurate epicardial potential distributions. Amongst the three applied subspace splitting schemes, the SVD-based preconditioner yielded the best convergence rate and outperformed the other two in seeking the inverse solutions. Moreover, when optimized by genetic algorithms (GA), the performance of the SP-LSQR method was further enhanced. The results from this investigation suggested that the SP-LSQR is a useful regularization technique for cardiac inverse problems. PMID:19564127
Luanjing Guo; Chuan Lu; Hai Huang; Derek R. Gaston
2012-06-01
Systems of multicomponent reactive transport in porous media that are large, highly nonlinear, and tightly coupled due to complex nonlinear reactions and strong solution-media interactions are often described by a system of coupled nonlinear partial differential algebraic equations (PDAEs). A preconditioned Jacobian-Free Newton-Krylov (JFNK) solution approach is applied to solve the PDAEs in a fully coupled, fully implicit manner. The advantage of the JFNK method is that it avoids explicitly computing and storing the Jacobian matrix during Newton nonlinear iterations, which improves computational efficiency. This solution approach is further enhanced by physics-based block preconditioning and a multigrid algorithm for efficient inversion of the preconditioner. Based on this solution approach, we have developed a reactive transport simulator named RAT. Numerical results are presented to demonstrate the efficiency and massive scalability of the simulator for reactive transport problems involving strong solution-mineral interactions and fast kinetics. It has been applied to study the highly nonlinearly coupled reactive transport system of a promising in situ environmental remediation that involves urea hydrolysis and calcium carbonate precipitation.
HyeongKae Park; Robert R. Nourgaliev; Richard C. Martineau; Dana A. Knoll
2008-09-01
We present a high-order accurate spatiotemporal discretization of all-speed flow solvers using a Jacobian-free Newton-Krylov framework. One of the key developments in this work is the physics-based preconditioner for all-speed flow, which makes use of traditional semi-implicit schemes. The physics-based preconditioner is developed in the primitive variable form, which allows a straightforward separation of physical phenomena. Numerical examples demonstrate that the developed preconditioner effectively reduces the number of Krylov iterations, and that the efficiency is independent of the Mach number and mesh size under a fixed CFL condition.
Luanjing Guo; Hai Huang; Derek Gaston; Cody Permann; David Andrs; George Redden; Chuan Lu; Don Fox; Yoshiko Fujita
2013-03-01
Modeling large multicomponent reactive transport systems in porous media is particularly challenging when the governing partial differential algebraic equations (PDAEs) are highly nonlinear and tightly coupled due to complex nonlinear reactions and strong solution-media interactions. Here we present a preconditioned Jacobian-Free Newton-Krylov (JFNK) solution approach to solve the governing PDAEs in a fully coupled and fully implicit manner. A well-known advantage of the JFNK method is that it does not require explicitly computing and storing the Jacobian matrix during Newton nonlinear iterations. Our approach further enhances the JFNK method by utilizing physics-based, block preconditioning and a multigrid algorithm for efficient inversion of the preconditioner. This preconditioning strategy accounts for self- and optionally, cross-coupling between primary variables using diagonal and off-diagonal blocks of an approximate Jacobian, respectively. Numerical results are presented demonstrating the efficiency and massive scalability of the solution strategy for reactive transport problems involving strong solution-mineral interactions and fast kinetics. We found that the physics-based, block preconditioner significantly decreases the number of linear iterations, directly reducing computational cost; and the strongly scalable algebraic multigrid algorithm for approximate inversion of the preconditioner leads to excellent parallel scaling performance.
Starke, G.
1994-12-31
For nonselfadjoint elliptic boundary value problems which are preconditioned by a substructuring method, i.e., nonoverlapping domain decomposition, the author introduces and studies the concept of subspace orthogonalization. In subspace orthogonalization variants of Krylov methods the computation of inner products and vector updates, and the storage of basis elements is restricted to a (presumably small) subspace, in this case the edge and vertex unknowns with respect to the partitioning into subdomains. The author investigates subspace orthogonalization for two specific iterative algorithms, GMRES and the full orthogonalization method (FOM). This is intended to eliminate certain drawbacks of the Arnoldi-based Krylov subspace methods mentioned above. Above all, the length of the Arnoldi recurrences grows linearly with the iteration index which is therefore restricted to the number of basis elements that can be held in memory. Restarts become necessary and this often results in much slower convergence. The subspace orthogonalization methods, in contrast, require the storage of only the edge and vertex unknowns of each basis element which means that one can iterate much longer before restarts become necessary. Moreover, the computation of inner products is also restricted to the edge and vertex points which avoids the disturbance of the computational flow associated with the solution of subdomain problems. The author views subspace orthogonalization as an alternative to restarting or truncating Krylov subspace methods for nonsymmetric linear systems of equations. Instead of shortening the recurrences, one restricts them to a subset of the unknowns which has to be carefully chosen in order to be able to extend this partial solution to the entire space. The author discusses the convergence properties of these iteration schemes and its advantages compared to restarted or truncated versions of Krylov methods applied to the full preconditioned system.
NASA Astrophysics Data System (ADS)
Viallet, M.; Goffrey, T.; Baraffe, I.; Folini, D.; Geroux, C.; Popov, M. V.; Pratt, J.; Walder, R.
2016-02-01
This work is a continuation of our efforts to develop an efficient implicit solver for multidimensional hydrodynamics for the purpose of studying important physical processes in stellar interiors, such as turbulent convection and overshooting. We present an implicit solver that results from the combination of a Jacobian-free Newton-Krylov method and a preconditioning technique tailored to the inviscid, compressible equations of stellar hydrodynamics. We assess the accuracy and performance of the solver for both 2D and 3D problems for Mach numbers down to 10^-6. Although our applications concern flows in stellar interiors, the method can be applied to general advection- and/or diffusion-dominated flows. The method presented in this paper opens up new avenues in 3D modeling of realistic stellar interiors allowing the study of important problems in stellar structure and evolution.
Globally convergent techniques in nonlinear Newton-Krylov
NASA Technical Reports Server (NTRS)
Brown, Peter N.; Saad, Youcef
1989-01-01
Some convergence theory is presented for nonlinear Krylov subspace methods. The basic idea of these methods is to use variants of Newton's iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods where the approximate Newton direction is taken from a subspace of small dimensions. The main focus is to analyze these methods when they are combined with global strategies such as linesearch techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.
Polynomial preconditioning for conjugate gradient methods
Ashby, S.F.
1987-12-01
The solution of a linear system of equations, Ax = b, arises in many scientific applications. If A is large and sparse, an iterative method is required. When A is hermitian positive definite (hpd), the conjugate gradient method of Hestenes and Stiefel is popular. When A is hermitian indefinite (hid), the conjugate residual method may be used. If A is ill-conditioned, these methods may converge slowly, in which case a preconditioner is needed. In this thesis we examine the use of polynomial preconditioning in CG methods for both hermitian positive definite and indefinite matrices. Such preconditioners are easy to employ and well-suited to vector and/or parallel architectures. We first show that any CG method is characterized by three matrices: an hpd inner product matrix B, a preconditioning matrix C, and the hermitian matrix A. The resulting method, CG(B,C,A), minimizes the B-norm of the error over a Krylov subspace. We next exploit the versatility of polynomial preconditioners to design several new CG methods. To obtain an optimum preconditioner, we solve a constrained minimax approximation problem. The preconditioning polynomial, C(lambda), is optimum in that it minimizes a bound on the condition number of the preconditioned matrix, p_m(A). An adaptive procedure for dynamically determining the optimum preconditioner is also discussed. Finally, in a variety of numerical experiments, conducted on a Cray X-MP/48, we demonstrate the effectiveness of polynomial preconditioning. 66 ref., 19 figs., 39 tabs.
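A polynomial preconditioner in its simplest form can be sketched with a few Jacobi sweeps, which apply a fixed low-degree polynomial in A to the residual; the model matrix and the degree are illustrative choices, not the thesis's optimized minimax polynomial.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

n = 400
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()  # SPD model matrix
b = np.ones(n)
d_inv = 1.0 / A.diagonal()

def poly_prec(r):
    # A few Jacobi sweeps on Az = r apply a fixed polynomial p(A) to r;
    # since the diagonal here is constant (D = 2I), p(A) is itself
    # symmetric positive definite, so CG remains applicable.
    z = np.zeros_like(r)
    for _ in range(3):
        z = z + d_inv * (r - A @ z)
    return z

M = LinearOperator((n, n), matvec=poly_prec, dtype=float)
x, info = cg(A, b, M=M)
```

Only matrix-vector products and diagonal scalings appear in the preconditioner, which is what makes such schemes attractive on vector and parallel architectures.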
McHugh, P.R.
1995-10-01
Fully coupled, Newton-Krylov algorithms are investigated for solving strongly coupled, nonlinear systems of partial differential equations arising in the field of computational fluid dynamics. Primitive variable forms of the steady incompressible and compressible Navier-Stokes and energy equations that describe the flow of a laminar Newtonian fluid in two dimensions are specifically considered. Numerical solutions are obtained by first integrating over discrete finite volumes that compose the computational mesh. The resulting system of nonlinear algebraic equations is linearized using Newton's method. Preconditioned Krylov subspace based iterative algorithms then solve these linear systems on each Newton iteration. Selected Krylov algorithms include the Arnoldi-based Generalized Minimal RESidual (GMRES) algorithm, and the Lanczos-based Conjugate Gradients Squared (CGS), Bi-CGSTAB, and Transpose-Free Quasi-Minimal Residual (TFQMR) algorithms. Both Incomplete Lower-Upper (ILU) factorization and domain-based additive and multiplicative Schwarz preconditioning strategies are studied. Numerical techniques such as mesh sequencing, adaptive damping, pseudo-transient relaxation, and parameter continuation are used to improve the solution efficiency, while algorithm implementation is simplified using a numerical Jacobian evaluation. The capabilities of standard Newton-Krylov algorithms are demonstrated via solutions to both incompressible and compressible flow problems. Incompressible flow problems include natural convection in an enclosed cavity, and mixed/forced convection past a backward-facing step.
Combined incomplete LU and strongly implicit procedure preconditioning
Meese, E.A.
1996-12-31
For the solution of large sparse linear systems of equations, the Krylov-subspace methods have gained great merit. Their efficiency is, however, largely dependent upon preconditioning of the equation system. A family of matrix factorisations often used for preconditioning is obtained from a truncated Gaussian elimination, ILU(p). Less common, supposedly due to its restriction to certain sparsity patterns, are factorisations generated by the strongly implicit procedure (SIP). The ideas from ILU(p) and SIP are used in this paper to construct a generalized strongly implicit procedure, applicable to matrices with any sparsity pattern. The new algorithm has been run on some test equations, and efficiency improvements over ILU(p) were found.
Multigrid in energy preconditioner for Krylov solvers
Slaybaugh, R.N.; Evans, T.M.; Davidson, G.G.; Wilson, P.P.H.
2013-06-01
We have added a new multigrid in energy (MGE) preconditioner to the Denovo discrete-ordinates radiation transport code. This preconditioner takes advantage of a new multilevel parallel decomposition. A multigroup Krylov subspace iterative solver that is decomposed in energy as well as space-angle forms the backbone of the transport solves in Denovo. The space-angle-energy decomposition facilitates scaling to hundreds of thousands of cores. The multigrid in energy preconditioner scales well in the energy dimension and significantly reduces the number of Krylov iterations required for convergence. This preconditioner is well-suited for use with advanced eigenvalue solvers such as Rayleigh Quotient Iteration and Arnoldi.
Krylov subspace acceleration of waveform relaxation
Lumsdaine, A.; Wu, Deyun
1996-12-31
Standard solution methods for numerically solving time-dependent problems typically begin by discretizing the problem on a uniform time grid and then sequentially solving for successive time points. The initial time discretization imposes a serialization to the solution process and limits parallel speedup to the speedup available from parallelizing the problem at any given time point. This bottleneck can be circumvented by the use of waveform methods in which multiple time-points of the different components of the solution are computed independently. With the waveform approach, a problem is first spatially decomposed and distributed among the processors of a parallel machine. Each processor then solves its own time-dependent subsystem over the entire interval of interest using previous iterates from other processors as inputs. Synchronization and communication between processors take place infrequently, and communication consists of large packets of information - discretized functions of time (i.e., waveforms).
NASA Astrophysics Data System (ADS)
Jia, Jinhong; Wang, Hong
2015-10-01
Numerical methods for fractional differential equations generate full stiffness matrices, which were traditionally solved via Gaussian-type direct solvers that require O(N^3) computational work and O(N^2) memory to store, where N is the number of spatial grid points in the discretization. We develop a preconditioned fast Krylov subspace iterative method for the efficient and faithful solution of finite volume schemes defined on a locally refined composite mesh for fractional differential equations to resolve boundary layers of the solutions. Numerical results are presented to show the utility of the method.
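The structural trick behind such fast Krylov solvers is that a Toeplitz matrix can be embedded in a circulant and applied in O(N log N) via the FFT, so the Krylov iteration never forms the full matrix. A sketch with a stand-in symmetric Toeplitz matrix (the actual fractional-diffusion stiffness matrix is dense, unlike this toy, but the mechanism is the same):

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import cg, LinearOperator

n = 256
# Stand-in symmetric Toeplitz matrix, defined entirely by its first column.
c = np.zeros(n)
c[0], c[1] = 2.0, -1.0
T = toeplitz(c)                          # dense reference: O(N^2) storage and matvec

# Embed T in a 2n x 2n circulant; its eigenvalues are one FFT of the first
# column, giving an O(N log N) matvec without ever forming T.
col = np.concatenate([c, [0.0], c[:0:-1]])
lam = np.fft.fft(col)

def matvec(v):
    w = np.fft.fft(v, 2 * n)             # zero-pad v to length 2n
    return np.real(np.fft.ifft(lam * w)[:n])

A = LinearOperator((n, n), matvec=matvec, dtype=float)
b = np.ones(n)
x, info = cg(A, b)                       # Krylov solve touching only FFT matvecs
```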
Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems
Benzi, M.; Tuma, M.
1996-12-31
A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.
Portable, parallel, reusable Krylov space codes
Smith, B.; Gropp, W.
1994-12-31
Krylov space accelerators are an important component of many algorithms for the iterative solution of linear systems. Each Krylov space method has its own particular advantages and disadvantages, therefore it is desirable to have a variety of them available, all with an identical, easy-to-use interface. A common complaint application programmers have with available software libraries for the iterative solution of linear systems is that they require the programmer to use the data structures provided by the library; the library is not able to work with the data structures of the application code. Hence, application programmers find themselves constantly recoding the Krylov space algorithms. The Krylov space package (KSP) is a data-structure-neutral implementation of a variety of Krylov space methods, including preconditioned conjugate gradient, GMRES, BiCG-Stab, transpose-free QMR and CGS. Unlike all other software libraries for linear systems that the authors are aware of, KSP will work with any application code's data structures, in Fortran or C. Due to its data-structure-neutral design, KSP runs unchanged on both sequential and parallel machines. KSP has been tested on workstations, the Intel i860 and Paragon, Thinking Machines CM-5 and the IBM SP1.
Block-Krylov component synthesis method for structural model reduction
NASA Technical Reports Server (NTRS)
Craig, Roy R., Jr.; Hale, Arthur L.
1988-01-01
A new analytical method is presented for generating component shape vectors, or Ritz vectors, for use in component synthesis. Based on the concept of a block-Krylov subspace, easily derived recurrence relations generate blocks of Ritz vectors for each component. The subspace spanned by the Ritz vectors is called a block-Krylov subspace. The synthesis uses the new Ritz vectors rather than component normal modes to reduce the order of large, finite-element component models. An advantage of the Ritz vectors is that they involve significantly less computation than component normal modes. Both 'free-interface' and 'fixed-interface' component models are derived. They yield block-Krylov formulations paralleling the concepts of free-interface and fixed-interface component modal synthesis. Additionally, block-Krylov reduced-order component models are shown to have special controllability/observability properties. Consequently, the method is attractive in active structural control applications, such as large space structures. The new fixed-interface methodology is demonstrated by a numerical example. The accuracy is found to be comparable to that of fixed-interface component modal synthesis.
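The block-Krylov recurrence itself is short. A sketch on a toy stiffness/mass pair (illustrative, not the paper's finite-element component models) shows the reduced-order Ritz values reproducing the lowest generalized eigenvalues:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, blk, steps = 100, 4, 8
# Toy "component" model: tridiagonal stiffness K, diagonal mass M.
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.diag(np.linspace(1.0, 2.0, n))

# Block-Krylov recurrence: each new block of Ritz-vector candidates is
# K^{-1} M times the previous block (one factorization of K, reused).
X = rng.standard_normal((n, blk))
blocks = [X]
for _ in range(steps - 1):
    X = np.linalg.solve(K, M @ X)
    blocks.append(X)
Q, _ = np.linalg.qr(np.hstack(blocks))   # orthonormal basis of the subspace

# Reduced-order model: project K and M onto the block-Krylov basis.
Kr, Mr = Q.T @ K @ Q, Q.T @ M @ Q
ritz = eigh(Kr, Mr, eigvals_only=True)   # lowest values approximate K v = lam M v
```

Only linear solves with K and products with M are needed, which is why the Ritz vectors are much cheaper than computing component normal modes.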
Preconditioned conjugate gradient methods for the Navier-Stokes equations
Ajmani, K.; Ng, Wing Fai; Liou, Meng Sing
1994-01-01
A preconditioned Krylov subspace method (GMRES) is used to solve the linear systems of equations formed at each time-integration step of the unsteady, two-dimensional, compressible Navier-Stokes equations of fluid flow. The Navier-Stokes equations are cast in an implicit, upwind finite-volume, flux-split formulation. Several preconditioning techniques are investigated to enhance the efficiency and convergence rate of the implicit solver based on the GMRES algorithm. The superiority of the new solver is established by comparisons with a conventional implicit solver, namely line Gauss-Seidel relaxation (LGSR). Computational test results for low-speed (incompressible flow over a backward-facing step at Mach 0.1), transonic flow (trailing edge flow in a transonic turbine cascade), and hypersonic flow (shock-on-shock interactions on a cylindrical leading edge at Mach 6.0) are presented. For the Mach 0.1 case, overall speedup factors of up to 17 (in terms of time-steps) and 15 (in terms of CPU times on a CRAY-YMP/8) are found in favor of the preconditioned GMRES solver, when compared with the LGSR solver. The corresponding speedup factors for the transonic flow cases are 17 and 23, respectively. The hypersonic flow case shows slightly lower speedup factors of 9 and 13, respectively. The study of preconditioners conducted in this research reveals that a new LUSGS-type preconditioner is much more efficient than a conventional incomplete LU-type preconditioner. 34 refs., 15 figs.
Preconditioned conjugate gradient methods for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Ng, Wing-Fai; Liou, Meng-Sing
1994-01-01
A preconditioned Krylov subspace method (GMRES) is used to solve the linear systems of equations formed at each time-integration step of the unsteady, two-dimensional, compressible Navier-Stokes equations of fluid flow. The Navier-Stokes equations are cast in an implicit, upwind finite-volume, flux-split formulation. Several preconditioning techniques are investigated to enhance the efficiency and convergence rate of the implicit solver based on the GMRES algorithm. The superiority of the new solver is established by comparisons with a conventional implicit solver, namely line Gauss-Seidel relaxation (LGSR). Computational test results for low-speed (incompressible flow over a backward-facing step at Mach 0.1), transonic flow (trailing edge flow in a transonic turbine cascade), and hypersonic flow (shock-on-shock interactions on a cylindrical leading edge at Mach 6.0) are presented. For the Mach 0.1 case, overall speedup factors of up to 17 (in terms of time-steps) and 15 (in terms of CPU time on a CRAY-YMP/8) are found in favor of the preconditioned GMRES solver, when compared with the LGSR solver. The corresponding speedup factors for the transonic flow case are 17 and 23, respectively. The hypersonic flow case shows slightly lower speedup factors of 9 and 13, respectively. The study of preconditioners conducted in this research reveals that a new LUSGS-type preconditioner is much more efficient than a conventional incomplete LU-type preconditioner.
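A common modern incarnation of the incomplete-LU preconditioning studied in these papers can be sketched with SciPy's spilu. The tridiagonal convection-diffusion stand-in is illustrative only (for a tridiagonal matrix the incomplete factorization is essentially exact, so GMRES converges almost immediately; the actual flux-split Jacobians are far larger and block-structured):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n = 400
# Nonsymmetric convection-diffusion-like model matrix.
A = diags([-1.2, 2.0, -0.8], [-1, 0, 1], shape=(n, n)).tocsc()
b = np.ones(n)

ilu = spilu(A, drop_tol=1e-4, fill_factor=10)   # incomplete LU factors of A
M = LinearOperator((n, n), matvec=ilu.solve)    # apply M^{-1} r via the ILU solve
x, info = gmres(A, b, M=M)
```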
Schwarz Preconditioners for Krylov Methods: Theory and Practice
Szyld, Daniel B.
2013-05-10
Several numerical methods were produced and analyzed. The main thrust of the work relates to inexact Krylov subspace methods for the solution of linear systems of equations arising from the discretization of partial differential equations. These are iterative methods, i.e., methods in which an approximation to the solution is obtained at each step. Usually, a matrix-vector product is needed at each iteration. In the inexact methods, this product (or the application of a preconditioner) can be done inexactly. Schwarz methods, based on domain decompositions, are excellent preconditioners for these systems. We contributed towards their understanding from an algebraic point of view, developed new ones, and studied their performance in the inexact setting. We also worked on combinatorial problems to help define the algebraic partition of the domains, with the needed overlap, as well as on PDE-constrained optimization using the above-mentioned inexact Krylov subspace methods.
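A one-level additive Schwarz preconditioner of the kind analyzed here fits in a few lines: overlapping subdomains, a direct solve on each local block, corrections summed. The 1-D model problem, subdomain count, and overlap are illustrative choices, and no coarse-level correction is included.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, LinearOperator

n = 300
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)

# One-level additive Schwarz: overlapping 1-D subdomains with a direct
# factorization of each local block (precomputed once, reused every apply).
nsub, overlap = 6, 5
edges = np.linspace(0, n, nsub + 1, dtype=int)
doms = [np.arange(max(lo - overlap, 0), min(hi + overlap, n))
        for lo, hi in zip(edges[:-1], edges[1:])]
local = [np.linalg.inv(A[idx][:, idx].toarray()) for idx in doms]

def schwarz(r):
    z = np.zeros_like(r)
    for idx, Ainv in zip(doms, local):
        z[idx] += Ainv @ r[idx]        # restrict, solve locally, prolongate, sum
    return z

M = LinearOperator((n, n), matvec=schwarz, dtype=float)
x, info = gmres(A, b, M=M, restart=30, maxiter=300)
```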
Newton-Krylov methods applied to nonequilibrium radiation diffusion
Knoll, D.A.; Rider, W.J.; Olsen, G.L.
1998-03-10
The authors present results of applying a matrix-free Newton-Krylov method to a nonequilibrium radiation diffusion problem. Here, there is no use of operator splitting, and Newton's method is used to converge the nonlinearities within a time step. Since the nonlinear residual is formed, it is used to monitor convergence. It is demonstrated that a simple Picard-based linearization produces a sufficient preconditioning matrix for the Krylov method, thus alleviating the need to form or store a Jacobian matrix for Newton's method. They discuss the possibility that the Newton-Krylov approach may allow larger time steps, without loss of accuracy, as compared to an operator-split approach where nonlinearities are not converged within a time step.
Accelerating molecular property calculations with nonorthonormal Krylov space methods.
Furche, Filipp; Krull, Brandon T; Nguyen, Brian D; Kwon, Jake
2016-05-01
We formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved. PMID:27155623
Lattice QCD computations: Recent progress with modern Krylov subspace methods
Frommer, A.
1996-12-31
Quantum chromodynamics (QCD) is the fundamental theory of the strong interaction of matter. In order to compare the theory with results from experimental physics, the theory has to be reformulated as a discrete problem of lattice gauge theory using stochastic simulations. The computational challenge consists in solving several hundred very large linear systems with several right hand sides. A considerable part of the world's supercomputer time is spent in such QCD calculations. This paper presents results on solving systems for the Wilson fermions. Recent progress is reviewed on algorithms obtained in cooperation with partners from theoretical physics.
A multigrid Newton-Krylov method for flux-limited radiation diffusion
Rider, W.J.; Knoll, D.A.; Olson, G.L.
1998-09-01
The authors focus on the integration of radiation diffusion including flux-limited diffusion coefficients. The nonlinear integration is accomplished with a Newton-Krylov method preconditioned with a multigrid Picard linearization of the governing equations. They investigate the efficiency of the linear and nonlinear iterative techniques.
Simple preconditioning for time-dependent density functional perturbation theory.
Lehtovaara, Lauri; Marques, Miguel A L
2011-07-01
By far, the most common use of time-dependent density functional theory is in the linear-response regime, where it provides information about electronic excitations. Ideally, the linear-response equations should be solved by a method that avoids the use of the unoccupied Kohn-Sham states--such as the Sternheimer method--as this reduces the complexity and increases the precision of the calculation. However, the Sternheimer equation becomes ill-conditioned near and indefinite above the first resonant frequency, seriously hindering the use of efficient iterative solution methods. To overcome this serious limitation, and to improve the general convergence properties of the iterative techniques, we propose a simple preconditioning strategy. In our method, the Sternheimer equation is solved directly as a linear equation using an iterative Krylov subspace method, i.e., no self-consistent cycle is required. Furthermore, the preconditioner uses the information of just a few unoccupied states and requires simple and minimal modifications to existing implementations. In this way, convergence can be reached faster and in a considerably wider frequency range than the traditional approach. PMID:21744884
Conformal mapping and convergence of Krylov iterations
Driscoll, T.A.; Trefethen, L.N.
1994-12-31
Connections between conformal mapping and matrix iterations have been known for many years. The idea underlying these connections is as follows. Suppose the spectrum of a matrix or operator A is contained in a Jordan region E in the complex plane with 0 not an element of E. Let phi(z) denote a conformal map of the exterior of E onto the exterior of the unit disk, with phi(infinity) = infinity. Then 1/|phi(0)| is an upper bound for the optimal asymptotic convergence factor of any Krylov subspace iteration. This idea can be made precise in various ways, depending on the matrix iterations, on whether A is finite or infinite dimensional, and on what bounds are assumed on the non-normality of A. This paper explores these connections for a variety of matrix examples, making use of a new MATLAB Schwarz-Christoffel Mapping Toolbox developed by the first author. Unlike the earlier Fortran Schwarz-Christoffel package SCPACK, the new toolbox computes exterior as well as interior Schwarz-Christoffel maps, making it easy to experiment with spectra that are not necessarily symmetric about an axis.
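For the simplest spectral set, a real interval E = [a, b] with 0 < a < b, the exterior map is the inverse Joukowski transformation and 1/|phi(0)| reduces to the classical conjugate-gradient factor (sqrt(b/a) - 1)/(sqrt(b/a) + 1). A small sketch (the function name is ours):

```python
import numpy as np

def krylov_factor_interval(a, b):
    """1/|phi(0)| for E = [a, b]: phi maps the exterior of E onto the
    exterior of the unit disk via the inverse Joukowski transformation."""
    # image of 0 under z -> (2z - (b + a)) / (b - a), absolute value taken
    s0 = (b + a) / (b - a)
    return 1.0 / (s0 + np.sqrt(s0**2 - 1.0))
```

For a = 1, b = 4 this gives 1/3, matching (sqrt(4) - 1)/(sqrt(4) + 1); general regions E require a numerical Schwarz-Christoffel map as in the paper's toolbox.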
Application of nonlinear Krylov acceleration to radiative transfer problems
Till, A. T.; Adams, M. L.; Morel, J. E.
2013-07-01
The iterative solution technique used for radiative transfer is normally nested, with outer thermal iterations and inner transport iterations. We implement a nonlinear Krylov acceleration (NKA) method in the PDT code for radiative transfer problems that breaks nesting, resulting in more thermal iterations but significantly fewer total inner transport iterations. Using the metric of total inner transport iterations, we investigate a crooked-pipe-like problem and a pseudo-shock-tube problem. Using only sweep preconditioning, we compare NKA against a typical inner/outer method employing GMRES/Newton and find NKA to be comparable or superior. Finally, we demonstrate the efficacy of applying diffusion-based preconditioning to grey problems in conjunction with NKA.
Krylov methods for compressible flows
NASA Technical Reports Server (NTRS)
Tidriri, M. D.
1995-01-01
We investigate the application of Krylov methods to compressible flows, and the effect of implicit boundary conditions on the implicit solution of nonlinear problems. Two defect-correction procedures, namely, approximate factorization (AF) for structured grids and ILU/GMRES for general grids, are considered. Also considered here are Newton-Krylov matrix-free methods that we combined with the use of mixed discretization schemes in the implicitly defined Jacobian and its preconditioner. Numerical experiments that show the performance of our approaches are then presented.
Harris, D B
2006-07-11
Broadband subspace detectors are introduced for seismological applications that require the detection of repetitive sources that produce similar, yet significantly variable seismic signals. Like correlation detectors, of which they are a generalization, subspace detectors often permit remarkably sensitive detection of small events. The subspace detector derives its name from the fact that it projects a sliding window of data drawn from a continuous stream onto a vector signal subspace spanning the collection of signals expected to be generated by a particular source. Empirical procedures are presented for designing subspaces from clusters of events characterizing a source. Furthermore, a solution is presented for the problem of selecting the dimension of the subspace to maximize the probability of detecting repetitive events at a fixed false alarm rate. An example illustrates subspace design and detection using events in the 2002 San Ramon, California earthquake swarm.
Improvements in Block-Krylov Ritz Vectors and the Boundary Flexibility Method of Component Synthesis
NASA Technical Reports Server (NTRS)
Carney, Kelly Scott
1997-01-01
A method of dynamic substructuring is presented which utilizes a set of static Ritz vectors as a replacement for normal eigenvectors in component mode synthesis. This set of Ritz vectors is generated in a recurrence relationship, proposed by Wilson, which has the form of a block-Krylov subspace. The initial seed to the recurrence algorithm is based upon the boundary flexibility vectors of the component. Improvements have been made in the formulation of the initial seed to the Krylov sequence, through the use of block-filtering. A method to shift the Krylov sequence to create Ritz vectors that will represent the dynamic behavior of the component at target frequencies, the target frequency being determined by the applied forcing functions, has been developed. A method to terminate the Krylov sequence has also been developed. Various orthonormalization schemes have been developed and evaluated, including the Cholesky/QR method. Several auxiliary theorems and proofs which illustrate issues in component mode synthesis and loss of orthogonality in the Krylov sequence have also been presented. The resulting methodology is applicable to both fixed and free-interface boundary components, and results in a general component model appropriate for any type of dynamic analysis. The accuracy is found to be comparable to that of component synthesis based upon normal modes, using fewer generalized coordinates. In addition, the block-Krylov recurrence algorithm is a series of static solutions and so requires significantly less computation than solving the normal eigenspace problem. The requirement for fewer vectors to form the component, coupled with the lower computational expense of calculating these Ritz vectors, combine to create a method more efficient than traditional component mode synthesis.
NASA Astrophysics Data System (ADS)
Bisetti, Fabrizio
2012-06-01
Recent trends in hydrocarbon fuel research indicate that the number of species and reactions in chemical kinetic mechanisms is rapidly increasing in an effort to provide predictive capabilities for fuels of practical interest. In order to cope with the computational cost associated with the time integration of stiff, large chemical systems, a novel approach is proposed. The approach combines an exponential integrator and Krylov subspace approximations to the exponential function of the Jacobian matrix. The components of the approach are described in detail and applied to the ignition of stoichiometric methane-air and iso-octane-air mixtures, here described by two widely adopted chemical kinetic mechanisms. The approach is found to be robust even at relatively large time steps and the global error displays a nominal third-order convergence. The performance of the approach is improved by utilising an adaptive algorithm for the selection of the Krylov subspace size, which guarantees an approximation to the matrix exponential within user-defined error tolerance. The Krylov projection of the Jacobian matrix onto a low-dimensional space is interpreted as a local model reduction with a well-defined error control strategy. Finally, the performance of the approach is discussed with regard to the optimal selection of the parameters governing the accuracy of its individual components.
Projection preconditioning for Lanczos-type methods
Bielawski, S.S.; Mulyarchik, S.G.; Popov, A.V.
1996-12-31
We show how auxiliary subspaces and related projectors may be used for preconditioning nonsymmetric systems of linear equations. It is shown that a system preconditioned in such a way (or projected) is better conditioned than the original system (at least if the coefficient matrix of the system to be solved is symmetrizable). Two approaches for solving the projected system are outlined. The first one implies straightforward computation of the projected matrix and subsequent use of some direct or iterative method. The second approach is the projection preconditioning of a conjugate gradient-type solver. The latter approach is developed here in the context of biconjugate gradient iteration and some related Lanczos-type algorithms. Some possible particular choices of auxiliary subspaces are discussed. It is shown that one of them is equivalent to using colorings. Some results of numerical experiments are reported.
NASA Astrophysics Data System (ADS)
Koldan, Jelena; Puzyrev, Vladimir; de la Puente, Josep; Houzeaux, Guillaume; Cela, José María
2014-06-01
We present an elaborate preconditioning scheme for Krylov subspace methods which has been developed to improve the performance and reduce the execution time of parallel node-based finite-element (FE) solvers for 3-D electromagnetic (EM) numerical modelling in exploration geophysics. This new preconditioner is based on algebraic multigrid (AMG) that uses different basic relaxation methods, such as Jacobi, symmetric successive over-relaxation (SSOR) and Gauss-Seidel, as smoothers and the wave front algorithm to create groups, which are used for a coarse-level generation. We have implemented and tested this new preconditioner within our parallel nodal FE solver for 3-D forward problems in EM induction geophysics. We have performed a series of experiments for several models with different conductivity structures and characteristics to test the performance of our AMG preconditioning technique when combined with the biconjugate gradient stabilized method. The results have shown that the more challenging the problem is in terms of conductivity contrasts, ratio between the sizes of grid elements and/or frequency, the more benefit is obtained by using this preconditioner. Compared to other preconditioning schemes, such as diagonal, SSOR and truncated approximate inverse, the AMG preconditioner greatly improves the convergence of the iterative solver for all tested models. Also, in cases in which other preconditioners succeed in converging to a desired precision, AMG is able to considerably reduce the total execution time of the forward-problem code, up to an order of magnitude. Furthermore, the tests have confirmed that our AMG scheme ensures a grid-independent rate of convergence, as well as improvement in convergence regardless of how big local mesh refinements are. In addition, AMG is designed to be a black-box preconditioner, which makes it easy to use and combine with different iterative methods. Finally, it has proved to be very practical and efficient in the
Implementation of the block-Krylov boundary flexibility method of component synthesis
NASA Technical Reports Server (NTRS)
Carney, Kelly S.; Abdallah, Ayman A.; Hucklebridge, Arthur A.
1993-01-01
A method of dynamic substructuring is presented which utilizes a set of static Ritz vectors as a replacement for normal eigenvectors in component mode synthesis. This set of Ritz vectors is generated in a recurrence relationship, which has the form of a block-Krylov subspace. The initial seed to the recurrence algorithm is based on the boundary flexibility vectors of the component. This algorithm is not load-dependent, is applicable to both fixed and free-interface boundary components, and results in a general component model appropriate for any type of dynamic analysis. This methodology was implemented in the MSC/NASTRAN normal modes solution sequence using DMAP. The accuracy is found to be comparable to that of component synthesis based upon normal modes. The block-Krylov recurrence algorithm is a series of static solutions and so requires significantly less computation than solving the normal eigenspace problem.
Newton-Raphson preconditioner for Krylov type solvers on GPU devices.
Kushida, Noriyuki
2016-01-01
A new Newton-Raphson method based preconditioner for Krylov type linear equation solvers for GPGPU is developed, and the performance is investigated. Conventional preconditioners improve the convergence of Krylov type solvers, and perform well on CPUs. However, they do not perform well on GPGPUs, because of the complexity of implementing powerful preconditioners. The developed preconditioner is based on the BFGS Hessian matrix approximation technique, which is well known as a robust and fast nonlinear equation solver. Because the Hessian matrix in the BFGS represents the coefficient matrix of a system of linear equations in some sense, the approximated Hessian matrix can be a preconditioner. On the other hand, BFGS is required to store dense matrices and to invert them, which should be avoided on modern computers and supercomputers. To overcome these disadvantages, we therefore introduce a limited memory BFGS, which requires less memory space and less computational effort than the BFGS. In addition, a limited memory BFGS can be implemented with BLAS libraries, which are well optimized for target architectures. The Hessian matrix approximation becomes better as the Krylov solver iteration continues; this is both an advantage and a complication: the preconditioning matrix varies through Krylov solver iterations, and only flexible Krylov solvers can work well with the developed preconditioner. The GCR method, which is a flexible Krylov solver, is employed because of the prevalence of GCR as a Krylov solver with a variable preconditioner. As a result of the performance investigation, the new preconditioner indicates the following benefits: (1) The new preconditioner is robust; i.e., it converges while conventional preconditioners (the diagonal scaling, and the SSOR preconditioners) fail. (2) In the best case scenarios, it is over 10 times faster than conventional preconditioners on a CPU. (3) Because it requires only simple operations, it performs well on a GPGPU. In
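The limited-memory BFGS operation at the heart of such a preconditioner is the standard two-loop recursion: it applies the approximate inverse Hessian to a vector using only the stored (s, y) update pairs and BLAS-level dot products and vector updates, with no dense matrix ever formed. A NumPy sketch (illustrative, not the paper's code):

```python
import numpy as np

def lbfgs_apply(q, s_list, y_list):
    """Two-loop recursion: apply the L-BFGS inverse-Hessian approximation to q.
    Usable as a variable preconditioner inside a flexible solver such as GCR."""
    q = q.astype(float).copy()
    alphas, rhos = [], []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # backward pass
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        rhos.append(rho)
        alphas.append(alpha)
    if s_list:  # initial scaling H0 = gamma * I (a common heuristic choice)
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), rho, alpha in zip(zip(s_list, y_list),     # forward pass
                                  reversed(rhos), reversed(alphas)):
        beta = rho * (y @ q)
        q += (alpha - beta) * s
    return q
```

With a single stored pair the result satisfies the secant condition exactly: applying the recursion to y returns s.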
Left and right preconditioning for electrical impedance tomography with structural information
NASA Astrophysics Data System (ADS)
Calvetti, Daniela; McGivney, Debra; Somersalo, Erkki
2012-05-01
A common problem in computational inverse problems is to find an efficient way of solving linear or nonlinear least-squares problems. For large-scale problems, iterative solvers are the method of choice for solving the associated linear systems, and for nonlinear problems, an additional effective local linearization method is required. In this paper, we discuss an efficient preconditioning scheme for Krylov subspace methods, based on the Bayesian analysis of the inverse problem. The model problem to which we apply this methodology is electrical impedance tomography (EIT) augmented with prior information coming from a complementary modality, such as x-ray imaging. The particular geometry considered here models the x-ray-guided EIT for breast imaging. The interest in applying EIT concurrently with x-ray breast imaging arises from the experimental observation that the impedivity spectra of certain types of malignant and benign tissues differ significantly from each other, thus offering a possibility of diagnosis without more invasive tissue sampling. After setting up the EIT inverse problem within a Bayesian framework, we present an inner and outer iteration scheme for computing a maximum a posteriori estimate. The prior covariance provides a right preconditioner and the modeling error covariance provides a left preconditioner for the iterative method used to solve the linear least-squares problem at each outer iteration of the optimization problem. Moreover, the stopping criterion for the inner iterations is coupled with the progress of the solution of the outer iteration. Besides the preconditioning scheme, the computational efficiency relies on a very efficient method to compute the Jacobian, obtained by carefully organizing the forward computation. Computed examples illustrate the robustness and computational efficiency of the proposed algorithm.
Acceleration of k-Eigenvalue / Criticality Calculations using the Jacobian-Free Newton-Krylov Method
Dana Knoll; HyeongKae Park; Chris Newman
2011-02-01
We present a new approach for the k-eigenvalue problem using a combination of classical power iteration and the Jacobian-free Newton-Krylov method (JFNK). The method poses the k-eigenvalue problem as a fully coupled nonlinear system, which is solved by JFNK with an effective block preconditioning consisting of the power iteration and algebraic multigrid. We demonstrate effectiveness and algorithmic scalability of the method on a 1-D, one-group problem and two 2-D, two-group problems and provide comparison to other efforts using similar algorithmic approaches.
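The classical power-iteration building block that the JFNK formulation wraps can be sketched on a toy matrix problem; here M plays the role of the loss operator (its explicit inverse standing in for a transport or diffusion solve) and F the fission source, and all matrices and names are illustrative, not from the paper:

```python
import numpy as np

def power_iteration_k(M, F, iters=200):
    """Classical power iteration for the generalized problem M phi = (1/k) F phi."""
    phi = np.ones(M.shape[0])
    k = 1.0
    Minv = np.linalg.inv(M)                   # stand-in for an operator solve
    for _ in range(iters):
        src = F @ phi                          # fission source
        phi_new = Minv @ src / k               # one "transport solve"
        k = k * (F @ phi_new).sum() / src.sum()  # eigenvalue update
        phi = phi_new / np.linalg.norm(phi_new)
    return k, phi
```

At convergence k equals the dominant eigenvalue of M^-1 F; the JFNK approach of the paper instead treats (phi, k) as one coupled nonlinear unknown and uses this iteration only inside the preconditioner.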
Efficient solution of parabolic equations by Krylov approximation methods
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Y.
1990-01-01
Numerical techniques for solving parabolic equations by the method of lines are addressed. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of a very small dimension to a known vector which is, in turn, computed accurately by exploiting well-known rational approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Some relevant approximation and stability issues are discussed. We present some numerical experiments with the method and compare its performance with a few explicit and implicit algorithms.
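The projection idea can be sketched directly: an Arnoldi process builds an orthonormal basis V_m and the small Hessenberg matrix H_m, and exp(A) v is approximated by beta V_m exp(H_m) e1, so the approximation of the exponential (delegated here to SciPy's expm for brevity, rather than the rational approximations of the paper) is applied only to an m x m matrix:

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm_v(A, v, m=20):
    """Approximate exp(A) @ v from an m-dimensional Krylov subspace."""
    n = v.size
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]                  # the only large matrix-vector product
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:          # happy breakdown: subspace is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)
```

For time stepping one applies the same routine to exp(dt * A) by passing dt * A, and only the small expm call is repeated when dt changes.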
NASA Astrophysics Data System (ADS)
Aliaga, José I.; Alonso, Pedro; Badía, José M.; Chacón, Pablo; Davidović, Davor; López-Blanco, José R.; Quintana-Ortí, Enrique S.
2016-03-01
We introduce a new iterative Krylov subspace-based eigensolver for the simulation of macromolecular motions on desktop multithreaded platforms equipped with multicore processors and, possibly, a graphics accelerator (GPU). The method consists of two stages, with the original problem first reduced into a simpler band-structured form by means of a high-performance compute-intensive procedure. This is followed by a memory-intensive but low-cost Krylov iteration, which is off-loaded to be computed on the GPU by means of an efficient data-parallel kernel. The experimental results reveal the performance of the new eigensolver. Concretely, when applied to the simulation of macromolecules with a few thousand degrees of freedom, and when the number of eigenpairs to be computed is small to moderate, the new solver outperforms other methods implemented as part of high-performance numerical linear algebra packages for multithreaded architectures.
An Inexact Newton–Krylov Algorithm for Constrained Diffeomorphic Image Registration*
Mang, Andreas; Biros, George
2016-01-01
We propose numerical algorithms for solving large deformation diffeomorphic image registration problems. We formulate the nonrigid image registration problem as a problem of optimal control. This leads to an infinite-dimensional partial differential equation (PDE) constrained optimization problem. The PDE constraint consists, in its simplest form, of a hyperbolic transport equation for the evolution of the image intensity. The control variable is the velocity field. Tikhonov regularization on the control ensures well-posedness. We consider standard smoothness regularization based on H1- or H2-seminorms. We augment this regularization scheme with a constraint on the divergence of the velocity field (control variable) rendering the deformation incompressible (Stokes regularization scheme) and thus ensuring that the determinant of the deformation gradient is equal to one, up to the numerical error. We use a Fourier pseudospectral discretization in space and a Chebyshev pseudospectral discretization in time. The latter allows us to reduce the number of unknowns and enables the time-adaptive inversion for nonstationary velocity fields. We use a preconditioned, globalized, matrix-free, inexact Newton–Krylov method for numerical optimization. A parameter continuation is designed to estimate an optimal regularization parameter. Regularity is ensured by controlling the geometric properties of the deformation field. Overall, we arrive at a black-box solver that exploits computational tools that are precisely tailored for solving the optimality system. We study spectral properties of the Hessian, grid convergence, numerical accuracy, computational efficiency, and deformation regularity of our scheme. We compare the designed Newton–Krylov methods with a globalized Picard method (preconditioned gradient descent). We study the influence of a varying number of unknowns in time. The reported results demonstrate excellent numerical accuracy, guaranteed local deformation
Notes on Newton-Krylov based Incompressible Flow Projection Solver
Robert Nourgaliev; Mark Christon; J. Bakosi
2012-09-01
The purpose of the present document is to formulate a Jacobian-free Newton-Krylov algorithm for the approximate projection method used in the Hydra-TH code. Hydra-TH is developed by Los Alamos National Laboratory (LANL) under the auspices of the Consortium for Advanced Simulation of Light-Water Reactors (CASL) for thermal-hydraulics applications ranging from grid-to-rod fretting (GTRF) to multiphase flow subcooled boiling. Currently, Hydra-TH is based on the semi-implicit projection method, which provides an excellent platform for simulation of transient single-phase thermal-hydraulics problems. This algorithm, however, is not efficient when applied to very slow or steady-state problems, as well as to highly nonlinear multiphase problems relevant to nuclear reactor thermal-hydraulics with boiling and condensation. These applications require fully implicit, tightly coupled algorithms. The major technical contribution of the present report is the formulation of a fully implicit projection algorithm which fulfills this purpose. This includes the definition of nonlinear residuals used for GMRES-based linear iterations, as well as physics-based preconditioning techniques.
Newton-Krylov-Schwarz: An implicit solver for CFD
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Keyes, David E.; Venkatakrishnan, V.
1995-01-01
Newton-Krylov methods and Krylov-Schwarz (domain decomposition) methods have begun to become established in computational fluid dynamics (CFD) over the past decade. The former employ a Krylov method inside of Newton's method in a Jacobian-free manner, through directional differencing. The latter employ an overlapping Schwarz domain decomposition to derive a preconditioner for the Krylov accelerator that relies primarily on local information, for data-parallel concurrency. They may be composed as Newton-Krylov-Schwarz (NKS) methods, which seem particularly well suited for solving nonlinear elliptic systems in high-latency, distributed-memory environments. We give a brief description of this family of algorithms, with an emphasis on domain decomposition iterative aspects. We then describe numerical simulations with Newton-Krylov-Schwarz methods on aerodynamics applications emphasizing comparisons with a standard defect-correction approach, subdomain preconditioner consistency, subdomain preconditioner quality, and the effect of a coarse grid.
Newton-Krylov-Schwarz methods in unstructured grid Euler flow
Keyes, D.E.
1996-12-31
Newton-Krylov methods and Krylov-Schwarz (domain decomposition) methods have begun to become established in computational fluid dynamics (CFD) over the past decade. The former employ a Krylov method inside of Newton's method in a Jacobian-free manner, through directional differencing. The latter employ an overlapping Schwarz domain decomposition to derive a preconditioner for the Krylov accelerator that relies primarily on local information, for data-parallel concurrency. They may be composed as Newton-Krylov-Schwarz (NKS) methods, which seem particularly well suited for solving nonlinear elliptic systems in high-latency, distributed-memory environments. We give a brief description of this family of algorithms, with an emphasis on domain decomposition iterative aspects. We then describe numerical simulations with Newton-Krylov-Schwarz methods on an aerodynamic application emphasizing comparisons with a standard defect-correction approach and subdomain preconditioner consistency.
Nonlinear Krylov acceleration of reacting flow codes
Kumar, S.; Rawat, R.; Smith, P.; Pernice, M.
1996-12-31
We are working on computational simulations of three-dimensional reactive flows in applications encompassing a broad range of chemical engineering problems. Examples of such processes are coal (pulverized and fluidized bed) and gas combustion, petroleum processing (cracking), and metallurgical operations such as smelting. These simulations involve an interplay of various physical and chemical factors such as fluid dynamics with turbulence, convective and radiative heat transfer, multiphase effects such as fluid-particle and particle-particle interactions, and chemical reaction. The governing equations resulting from modeling these processes are highly nonlinear and strongly coupled, thereby rendering their solution by traditional iterative methods (such as nonlinear line Gauss-Seidel methods) very difficult and sometimes impossible. Hence we are exploring the use of nonlinear Krylov techniques (such as GMRES and Bi-CGSTAB) to accelerate and stabilize the existing solver. This strategy allows us to take advantage of the problem-definition capabilities of the existing solver. The overall approach amounts to using the SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) method and its variants as nonlinear preconditioners for the nonlinear Krylov method. We have also adapted a backtracking approach for inexact Newton methods to damp the Newton step in the nonlinear Krylov method. This is a report on work in progress. Preliminary results with nonlinear GMRES have been very encouraging: in many cases the number of line Gauss-Seidel sweeps has been reduced by about a factor of 5, and increased robustness of the underlying solver has also been observed.
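As an illustrative sketch (not the authors' solver), the nonlinear-Krylov acceleration idea can be demonstrated with SciPy's `newton_krylov`; the toy residual below is a hypothetical stand-in for the discretized, coupled transport equations:

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy nonlinear system F(u) = 0 standing in for the coupled transport
# equations; in the actual application a SIMPLE-like sweep would play
# the role of the nonlinear preconditioner.
def residual(u):
    # mildly nonlinear system: u - 0.1*cos(u) - 1 = 0
    return u - 0.1 * np.cos(u) - 1.0

# Nonlinear GMRES (Newton-Krylov) drives the residual to zero
sol = newton_krylov(residual, np.zeros(50), method="gmres", f_tol=1e-10)
print(np.max(np.abs(residual(sol))))
```

The same pattern applies whenever an existing fixed-point solver is wrapped as the residual evaluation of a Newton-Krylov outer iteration.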
Texture Representations Using Subspace Embeddings
Yang, Xiaodong; Tian, YingLi
2013-01-01
In this paper, we propose a texture representation framework to map local texture patches into a low-dimensional texture subspace. In natural texture images, textons are entangled with multiple factors, such as rotation, scaling, viewpoint variation, illumination change, and non-rigid surface deformation. Mapping local texture patches into a low-dimensional subspace can alleviate or eliminate these undesired variation factors resulting from both geometric and photometric transformations. We observe that texture representations based on subspace embeddings have strong resistance to image deformations while being more distinctive and more compact than traditional representations. We investigate both linear and non-linear embedding methods, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Locality Preserving Projections (LPP), to compute the essential texture subspace. Experiments in the context of texture classification on benchmark datasets demonstrate that the proposed subspace embedding representations achieve state-of-the-art results with far fewer feature dimensions. PMID:23710105
NASA Astrophysics Data System (ADS)
Jiang, Tian; Zhang, Yong-Tao
2016-04-01
Implicit integration factor (IIF) methods were developed in the literature for solving time-dependent stiff partial differential equations (PDEs). Recently, IIF methods were combined with weighted essentially non-oscillatory (WENO) schemes in Jiang and Zhang (2013) [19] to efficiently solve stiff nonlinear advection-diffusion-reaction equations. The methods can be designed for arbitrary order of accuracy. The stiffness of the system is resolved well, and the methods are stable with time-step sizes determined solely by the non-stiff hyperbolic part of the system. To efficiently calculate large matrix exponentials, Krylov subspace approximation is directly applied to the IIF methods. So far, the IIF methods developed in the literature are multistep methods. In this paper, we develop Krylov single-step IIF-WENO methods for solving stiff advection-diffusion-reaction equations. The methods are designed carefully to avoid generating positive exponentials in the matrix exponentials, which is necessary for the stability of the schemes. We analyze the stability and truncation errors of the single-step IIF schemes. Numerical examples of both scalar equations and systems are shown to demonstrate the accuracy, efficiency and robustness of the new methods.
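A minimal sketch of the Krylov approximation of a large matrix exponential used by such integrators: project onto an m-dimensional Krylov subspace with Arnoldi, then exponentiate the small Hessenberg matrix. The operator and dimensions below are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm(A, v, m=30):
    """Approximate exp(A) @ v from an m-dimensional Krylov subspace."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # exp(A) v ~= beta * V_m exp(H_m) e_1
    return beta * V[:, :m] @ expm(H[:m, :m])[:, 0]

# 1-D diffusion stencil playing the stiff part of an IIF scheme
n = 100
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
v = np.random.default_rng(0).random(n)
err = np.linalg.norm(krylov_expm(A, v) - expm(A) @ v)
print(err)
```

For a stiff diffusion operator the Krylov approximation converges superlinearly in m, which is what makes these schemes practical for large systems.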
Unsupervised Discovery of Subspace Trends.
Xu, Yan; Qiu, Peng; Roysam, Badrinath
2015-10-01
This paper presents unsupervised algorithms for discovering previously unknown subspace trends in high-dimensional data sets without the benefit of prior information. A subspace trend is a sustained pattern of gradual/progressive changes within an unknown subset of feature dimensions. A fundamental challenge to subspace trend discovery is the presence of irrelevant data dimensions, noise, outliers, and confusion from multiple subspace trends driven by independent factors that are mixed in with each other. These factors can obscure the trends in conventional dimension reduction & projection based data visualizations. To overcome these limitations, we propose a novel graph-theoretic neighborhood similarity measure for detecting concordant progressive changes across data dimensions. Using this measure, we present an unsupervised algorithm for trend-relevant feature selection, subspace trend discovery, quantification of trend strength, and validation. Our method successfully identified verifiable subspace trends in diverse synthetic and real-world biomedical datasets. Visualizations derived from the selected trend-relevant features revealed biologically meaningful hidden subspace trend(s) that were obscured by irrelevant features and noise. Although our examples are drawn from the biological domain, the proposed algorithm is broadly applicable to exploratory analysis of high-dimensional data including visualization, hypothesis generation, knowledge discovery, and prediction in diverse other applications. PMID:26353189
A compressible Navier-Stokes flow solver using the Newton-Krylov method on unstructured grids
NASA Astrophysics Data System (ADS)
Wong, Peterson
A Newton-Krylov algorithm is presented for the compressible Navier-Stokes equations on hybrid unstructured grids. The Spalart-Allmaras turbulence model is used for turbulent flows. The spatial discretization is based on a finite-volume matrix dissipation scheme. A preconditioned matrix-free generalized minimal residual method is used to solve the linear system that arises in the Newton iterations. The incomplete lower-upper factorization based on an approximate Jacobian is used as the preconditioner after applying the reverse Cuthill-McKee reordering. Various aspects of the Newton-Krylov algorithm are studied to improve efficiency and reliability. The inexact Newton method is studied to avoid over-solving of the linear system to reduce computational cost. The ILU(1) approach is selected in three dimensions, based on a comparison among various preconditioners. Approximate viscous formulations involving only the nearest neighboring terms are studied to reduce the cost of preconditioning. The resulting preconditioners are found to be effective and provide Newton-type convergence. Scaling of the linear system is studied to improve convergence of the inexact matrix-free approach. Numerical studies are performed for two-dimensional cases as well as flows over the ONERA M6 wing and the DLR-F6 wing-body configuration. A ten-order-of-magnitude residual reduction can be obtained with a computing cost equivalent to 4,000 residual function evaluations for two-dimensional cases, while the same convergence can be obtained in 5,500 and 8,000 function evaluations for the wing and wing-body configuration, respectively, on grids with a half million nodes.
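The ILU-preconditioned GMRES combination at the heart of such Newton-Krylov solvers can be sketched with SciPy; the tridiagonal system below is a hypothetical stand-in for one Newton linearization, not the thesis code:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres, spilu

# Toy sparse "Jacobian" and right-hand side
n = 200
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU of the (approximate) Jacobian as the preconditioner
ilu = spilu(A)
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))   # info == 0 on convergence
```

In a full solver the ILU factors would be built from a cheaper approximate Jacobian (after reordering, e.g. reverse Cuthill-McKee), while the GMRES matrix-vector products remain matrix-free.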
Covariance Modifications to Subspace Bases
Harris, D B
2008-11-19
Adaptive signal processing algorithms that rely upon representations of signal and noise subspaces often require updates to those representations when new data become available. Subspace representations frequently are estimated from available data with singular value decompositions (SVDs). Subspace updates require modifications to these decompositions. Updates can be performed inexpensively provided they are low-rank. A substantial literature on SVD updates exists, frequently focusing on rank-1 updates (see e.g. [Karasalo, 1986; Comon and Golub, 1990; Badeau, 2004]). In these methods, data matrices are modified by addition or deletion of a row or column, or data covariance matrices are modified by addition of the outer product of a new vector. A recent paper by Brand [2006] provides a general and efficient method for arbitrary rank updates to an SVD. The purpose of this note is to describe a closely-related method for applications where right singular vectors are not required. This note also describes the application of SVD updates to a particular scenario of interest in seismic array signal processing. The particular application involves updating the wideband subspace representation used in seismic subspace detectors [Harris, 2006]. These subspace detectors generalize waveform correlation algorithms to detect signals that lie in a subspace of waveforms of dimension d {ge} 1. They potentially are of interest because they extend the range of waveform variation over which these sensitive detectors apply. Subspace detectors operate by projecting waveform data from a detection window into a subspace specified by a collection of orthonormal waveform basis vectors (referred to as the template). Subspace templates are constructed from a suite of normalized, aligned master event waveforms that may be acquired by a single sensor, a three-component sensor, an array of such sensors or a sensor network. The template design process entails constructing a data matrix whose columns contain the
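A Brand-style rank-1 update that avoids the right singular vectors can be sketched as follows: project the new column onto the current left subspace, form a small core matrix, and re-SVD only that core. This is an illustrative sketch (assuming the new column is not already contained in the subspace), not the note's exact algorithm:

```python
import numpy as np

def update_left_svd(U, s, c):
    """Left singular vectors/values of [X, c] from those of X, without
    the right singular vectors (a Brand-style rank-1 update sketch)."""
    p = U.T @ c                    # component inside the current subspace
    r = c - U @ p                  # component orthogonal to it
    rho = np.linalg.norm(r)
    # small core matrix [[diag(s), p], [0, rho]]
    K = np.block([[np.diag(s), p[:, None]],
                  [np.zeros((1, len(s))), np.array([[rho]])]])
    Uq = np.column_stack([U, r / rho])
    Uk, sk, _ = np.linalg.svd(K, full_matrices=False)
    return Uq @ Uk, sk

# Check against a full SVD of the augmented data matrix
rng = np.random.default_rng(0)
X = rng.random((50, 5))
U, s, _ = np.linalg.svd(X, full_matrices=False)
c = rng.random(50)
U2, s2 = update_left_svd(U, s, c)
_, s_ref, _ = np.linalg.svd(np.column_stack([X, c]), full_matrices=False)
print(np.max(np.abs(s2 - s_ref)))
```

The cost is dominated by the small (r+1)-dimensional SVD rather than a full decomposition of the augmented data matrix.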
Some experiences with Krylov vectors and Lanczos vectors
NASA Technical Reports Server (NTRS)
Craig, Roy R., Jr.; Su, Tzu-Jeng; Kim, Hyoung M.
1993-01-01
This paper illustrates the use of Krylov vectors and Lanczos vectors for reduced-order modeling in structural dynamics and for control of flexible structures. Krylov vectors and Lanczos vectors are defined and illustrated, and several applications that have been under study at The University of Texas at Austin are reviewed: model reduction for undamped structural dynamics systems, component mode synthesis using Krylov vectors, model reduction of damped structural dynamics systems, and one-sided and two-sided unsymmetric block-Lanczos model-reduction algorithms.
Fattebert, J
2008-07-29
We describe an iterative algorithm to solve electronic structure problems in Density Functional Theory. The approach is presented as a Subspace Accelerated Inexact Newton (SAIN) solver for the non-linear Kohn-Sham equations. It is related to a class of iterative algorithms known as RMM-DIIS in the electronic structure community. The method is illustrated with examples of real applications using a finite difference discretization and multigrid preconditioning.
Higher order stationary subspace analysis
NASA Astrophysics Data System (ADS)
Panknin, Danny; von Bünau, Paul; Kawanabe, Motoaki; Meinecke, Frank C.; Müller, Klaus-Robert
2016-03-01
Non-stationarity in data is a ubiquitous problem in signal processing. The recent stationary subspace analysis (SSA) procedure makes it possible to decompose such data into a stationary subspace and a non-stationary part. Algorithmically, only weak non-stationarities could be tackled by SSA. The present paper takes the conceptual step of generalizing from the use of first and second moments, as in SSA, to higher order moments, thus defining the proposed higher order stationary subspace analysis procedure (HOSSA). The paper derives the novel procedure and shows simulations. An obvious trade-off is observed between the necessity of estimating higher moments and the accuracy and robustness with which they can be estimated. In an ideal setting with plenty of data, where higher-moment information dominates, the novel approach can outperform standard SSA. With limited data, however, SSA may still perform on par even when higher moments dominate the underlying data.
Molecular mechanism of preconditioning.
Das, Manika; Das, Dipak K
2008-04-01
During the last 20 years, since the appearance of the first publication on ischemic preconditioning (PC), our knowledge of this phenomenon has increased exponentially. PC is defined as an increased tolerance to ischemia and reperfusion induced by a previous sublethal period of ischemia. This is the most powerful mechanism known to date for limiting infarct size. This adaptation occurs in a biphasic pattern: (i) early preconditioning (lasts for 2-3 h) and (ii) late preconditioning (starting at 24 h and lasting until 72-96 h after the initial ischemia). Early preconditioning is more potent than delayed preconditioning in reducing infarct size. Late preconditioning attenuates myocardial stunning and requires genomic activation with de novo protein synthesis. Early preconditioning depends on adenosine, opioids and, to a lesser degree, on bradykinin and prostaglandins released during ischemia. These molecules activate G-protein-coupled receptors, initiate activation of K(ATP) channels, generate oxygen-free radicals, and stimulate a series of protein kinases, which include protein kinase C, tyrosine kinase, and members of the MAP kinase family. Late preconditioning is triggered by a similar sequence of events, but in addition essentially depends on newly synthesized proteins, which comprise iNOS, COX-2, manganese superoxide dismutase, and possibly heat shock proteins. The final mechanism of PC is still not very clear. The present review focuses on the possible role of signaling molecules that regulate cardiomyocyte life and death during ischemia and reperfusion. PMID:18344203
A Parallel Newton-Krylov-Schur Algorithm for the Reynolds-Averaged Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Osusky, Michal
Aerodynamic shape optimization and multidisciplinary optimization algorithms have the potential not only to improve conventional aircraft, but also to enable the design of novel configurations. By their very nature, these algorithms generate and analyze a large number of unique shapes, resulting in high computational costs. In order to improve their efficiency and enable their use in the early stages of the design process, a fast and robust flow solution algorithm is necessary. This thesis presents an efficient parallel Newton-Krylov-Schur flow solution algorithm for the three-dimensional Navier-Stokes equations coupled with the Spalart-Allmaras one-equation turbulence model. The algorithm employs second-order summation-by-parts (SBP) operators on multi-block structured grids with simultaneous approximation terms (SATs) to enforce block interface coupling and boundary conditions. The discrete equations are solved iteratively with an inexact-Newton method, while the linear system at each Newton iteration is solved using the flexible Krylov subspace iterative method GMRES with an approximate-Schur parallel preconditioner. The algorithm is thoroughly verified and validated, highlighting the correspondence of the current algorithm with several established flow solvers. The solution for a transonic flow over a wing on a mesh of medium density (15 million nodes) shows good agreement with experimental results. Using 128 processors, deep convergence is obtained in under 90 minutes. The solution of transonic flow over the Common Research Model wing-body geometry with grids with up to 150 million nodes exhibits the expected grid convergence behavior. This case was completed as part of the Fifth AIAA Drag Prediction Workshop, with the algorithm producing solutions that compare favourably with several widely used flow solvers. The algorithm is shown to scale well on over 6000 processors. The results demonstrate the effectiveness of the SBP-SAT spatial discretization, which can
NASA Astrophysics Data System (ADS)
Asgharzadeh, Hafez; Borazjani, Iman
2014-11-01
Time-step-size restrictions and low convergence rates are major bottlenecks for implicit solution of the Navier-Stokes equations in simulations involving complex geometries with moving boundaries. The Newton-Krylov method (NKM) is a combination of a Newton-type method for super-linearly convergent solution of nonlinear equations and Krylov subspace methods for solving the Newton correction equations, which can theoretically address both bottlenecks. The efficiency of this method depends greatly on the Jacobian-forming scheme; e.g., automatic differentiation is very expensive and Jacobian-free methods slow down as the mesh is refined. A novel, computationally efficient analytical Jacobian for NKM was developed to solve the unsteady incompressible Navier-Stokes momentum equations on staggered curvilinear grids with immersed boundaries. The NKM was validated and verified against Taylor-Green vortex flow and pulsatile flow in a 90 degree bend, and efficiently handles complex geometries such as an intracranial aneurysm with multiple overset grids, pulsatile inlet flow and immersed boundaries. The NKM is shown to be more efficient than semi-implicit Runge-Kutta methods and Jacobian-free Newton-Krylov methods. We believe NKM can be applied to many CFD techniques to decrease the computational cost. This work was supported partly by NIH Grant R03EB014860, and the computational resources were partly provided by the Center for Computational Research (CCR) at University at Buffalo.
Subspace methods for computational relighting
NASA Astrophysics Data System (ADS)
Nguyen, Ha Q.; Liu, Siying; Do, Minh N.
2013-02-01
We propose a vector space approach for relighting a Lambertian convex object with a distant light source, whose crucial task is the decomposition of the reflectance function into albedos (or reflection coefficients) and lightings based on a set of images of the same object and its 3-D model. Making use of the fact that reflectance functions are well approximated by a low-dimensional linear subspace spanned by the first few spherical harmonics, this inverse problem can be formulated as a matrix factorization, in which the basis of the subspace is encoded in the spherical harmonic matrix S. A necessary and sufficient condition on S for unique factorization is derived, with an introduction of a new notion of matrix rank called nonseparable full rank. An SVD-based algorithm for exact factorization in the noiseless case is introduced. In the presence of noise, the algorithm is slightly modified by incorporating the positivity of albedos into a convex optimization problem. The proposed algorithms are demonstrated on a set of synthetic data.
An accelerated subspace iteration for eigenvector derivatives
NASA Technical Reports Server (NTRS)
Ting, Tienko
1991-01-01
An accelerated subspace iteration method for calculating eigenvector derivatives has been developed. Factors affecting the effectiveness and the reliability of the subspace iteration are identified, and effective strategies concerning these factors are presented. The method has been implemented, and the results of a demonstration problem are presented.
Preconditioned Iterative Solver
Energy Science and Technology Software Center (ESTSC)
2002-08-01
AztecOO contains a collection of preconditioned iterative methods for the solution of sparse linear systems of equations. In addition to providing many of the common algebraic preconditioners and basic iterative methods, AztecOO can be easily extended to interact with user-provided preconditioners and matrix operators.
Face recognition with L1-norm subspaces
NASA Astrophysics Data System (ADS)
Maritato, Federica; Liu, Ying; Colonnese, Stefania; Pados, Dimitris A.
2016-05-01
We consider the problem of representing individual faces by maximum L1-norm projection subspaces calculated from available face-image ensembles. In contrast to conventional L2-norm subspaces, L1-norm subspaces are seen to offer significant robustness to image variations, disturbances, and rank selection. Face recognition becomes then the problem of associating a new unknown face image to the "closest," in some sense, L1 subspace in the database. In this work, we also introduce the concept of adaptively allocating the available number of principal components to different face image classes, subject to a given total number/budget of principal components. Experimental studies included in this paper illustrate and support the theoretical developments.
Shadid, J. N.; Pawlowski, R. P.; Cyr, E. C.; Tuminaro, R. S.; Chacon, L.; Weber, P. D.
2016-02-10
The computational solution of the governing balance equations for mass, momentum, heat transfer and magnetic induction for resistive magnetohydrodynamics (MHD) systems can be extremely challenging. These difficulties arise from both the strong nonlinear, nonsymmetric coupling of fluid and electromagnetic phenomena, as well as the significant range of time- and length-scales that the interactions of these physical mechanisms produce. This paper explores the development of a scalable, fully-implicit stabilized unstructured finite element (FE) capability for 3D incompressible resistive MHD. The discussion considers the development of a stabilized FE formulation in the context of the variational multiscale (VMS) method, and describes the scalable implicit time integration and direct-to-steady-state solution capability. The nonlinear solver strategy employs Newton–Krylov methods, which are preconditioned using fully-coupled algebraic multilevel preconditioners. These preconditioners are shown to enable a robust, scalable and efficient solution approach for the large-scale sparse linear systems generated by the Newton linearization. Verification results demonstrate the expected order-of-accuracy for the stabilized FE discretization. The approach is tested on a variety of prototype problems, including MHD duct flows, an unstable hydromagnetic Kelvin–Helmholtz shear layer, and a 3D island coalescence problem used to model magnetic reconnection. Initial results that explore the scaling of the solution methods are also presented on up to 128K processors for problems with up to 1.8B unknowns on a Cray XK7.
A Hybrid, Parallel Krylov Solver for MODFLOW using Schwarz Domain Decomposition
NASA Astrophysics Data System (ADS)
Sutanudjaja, E.; Verkaik, J.; Hughes, J. D.
2015-12-01
In order to support decision makers in solving hydrological problems, detailed high-resolution models are often needed. These models typically consist of a large number of computational cells and have large memory requirements and long run times. An efficient technique for obtaining realistic run times and memory requirements is parallel computing, where the problem is divided over multiple processor cores. The new Parallel Krylov Solver (PKS) for MODFLOW-USG is presented. It combines both distributed memory parallelization by the Message Passing Interface (MPI) and shared memory parallelization by Open Multi-Processing (OpenMP). PKS includes conjugate gradient and biconjugate gradient stabilized linear accelerators that are both preconditioned by an overlapping additive Schwarz preconditioner in a way that: a) subdomains are partitioned using the METIS library; b) each subdomain uses local memory only and communicates with other subdomains by MPI within the linear accelerator; c) is fully integrated in the MODFLOW-USG code. PKS is based on the unstructured PCGU-solver, and supports OpenMP. Depending on the available hardware, PKS can run exclusively with MPI, exclusively with OpenMP, or with a hybrid MPI/OpenMP approach. Benchmarks were performed on the Cartesius Dutch supercomputer (https://userinfo.surfsara.nl/systems/cartesius) using up to 144 cores, for a synthetic test (~112 million cells) and the Indonesia groundwater model (~4 million 1km cells). The latter, which includes all islands in the Indonesian archipelago, was built using publicly available global datasets, and is an ideal test bed for evaluating the applicability of PKS parallelization techniques to a global groundwater model consisting of multiple continents and islands. Results show that run time reductions can be greatest with the hybrid parallelization approach for the problems tested.
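The overlapping additive Schwarz idea behind PKS can be sketched in serial form: split a 1-D Poisson problem into two overlapping subdomains, and let the preconditioner sum the local solves (the real solver partitions with METIS and communicates via MPI; everything below is an illustrative toy):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg, splu

# 1-D Poisson matrix and two overlapping subdomains
n, overlap = 100, 10
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
doms = [np.arange(0, n // 2 + overlap), np.arange(n // 2 - overlap, n)]
local_lu = [splu(A[d, :][:, d].tocsc()) for d in doms]

def schwarz(r):
    # restrict the residual, solve locally, prolong and sum
    z = np.zeros_like(r)
    for d, lu in zip(doms, local_lu):
        z[d] += lu.solve(r[d])
    return z

M = LinearOperator((n, n), matvec=schwarz)
b = np.ones(n)
x, info = cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))
```

Each term of the preconditioner touches only local data, which is what makes the approach attractive for distributed-memory parallelism.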
Bakhos, Tania; Saibaba, Arvind K.; Kitanidis, Peter K.
2015-10-15
We consider the problem of estimating parameters in large-scale weakly nonlinear inverse problems for which the underlying governing equation is a linear, time-dependent, parabolic partial differential equation. A major challenge in solving these inverse problems using Newton-type methods is the computational cost associated with solving the forward problem and with repeated construction of the Jacobian, which represents the sensitivity of the measurements to the unknown parameters. Forming the Jacobian can be prohibitively expensive because it requires repeated solutions of the forward and adjoint time-dependent parabolic partial differential equations corresponding to multiple sources and receivers. We propose an efficient method based on a Laplace transform-based exponential time integrator combined with a flexible Krylov subspace approach to solve the resulting shifted systems of equations efficiently. Our proposed solver speeds up the computation of the forward and adjoint problems, thus yielding significant speedup in total inversion time. We consider an application from Transient Hydraulic Tomography (THT), which is an imaging technique to estimate hydraulic parameters related to the subsurface from pressure measurements obtained by a series of pumping tests. The algorithms discussed are applied to a synthetic example taken from THT to demonstrate the resulting computational gains of this proposed method.
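The property exploited by such shifted-system solvers is that Krylov subspaces are shift-invariant: a single Arnoldi factorization of A serves every shift sigma, since (A + sigma*I) V_m = V_{m+1} (Hbar_m + sigma*[I; 0]). A small numerical check with an arbitrary random operator (illustrative only):

```python
import numpy as np

def arnoldi(A, b, m):
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

n, m = 200, 30
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) / np.sqrt(n) + 2.0 * np.eye(n)
b = rng.standard_normal(n)
V, H = arnoldi(A, b, m)

# verify the shifted Arnoldi relation for several shifts
E = np.vstack([np.eye(m), np.zeros((1, m))])
errs = [np.max(np.abs((A + s * np.eye(n)) @ V[:, :m] - V @ (H + s * E)))
        for s in (0.5, 1.0, 2.0)]
print(errs)
```

Because the basis is built once and reused for every shift, the cost of solving all the shifted systems arising from the exponential integrator is close to that of a single solve.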
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-08-24
This study presents a numerical investigation on using the Jacobian-free Newton–Krylov (JFNK) method to solve the two-phase flow four-equation drift flux model with realistic constitutive correlations (‘closure models’). The drift flux model is based on Ishii and his collaborators’ work. Additional constitutive correlations for vertical channel flow, such as two-phase flow pressure drop, flow regime map, wall boiling and interfacial heat transfer models, were taken from the RELAP5-3D Code Manual and included to complete the model. The staggered grid finite volume method and the fully implicit backward Euler method were used for the spatial discretization and time integration schemes, respectively. The Jacobian-free Newton–Krylov method shows no difficulty in solving the two-phase flow drift flux model with a discrete flow regime map. In addition to the Jacobian-free approach, the preconditioning matrix is obtained by using the default finite differencing method provided in the PETSc package, and consequently the labor-intensive implementation of a complex analytical Jacobian matrix is avoided. Extensive and successful numerical verification and validation have been performed to prove the correct implementation of the models and methods. Code-to-code comparison with RELAP5-3D has further demonstrated the successful implementation of the drift flux model.
Signal subspace integration for improved seizure localization
Stamoulis, Catherine; Fernández, Iván Sánchez; Chang, Bernard S.; Loddenkemper, Tobias
2012-01-01
A subspace signal processing approach is proposed for improved scalp EEG-based localization of broad-focus epileptic seizures, and estimation of the directions of source arrivals (DOA). Ictal scalp EEGs from adult and pediatric patients with broad-focus seizures were first decomposed into dominant signal modes, and signal and noise subspaces at each modal frequency, to improve the signal-to-noise ratio while preserving the original data correlation structure. Transformed (focused) modal signals were then resynthesized into wideband signals from which the number of sources and DOA were estimated. These were compared to denoised signals via principal components analysis (PCA). Coherent subspace processing performed better than PCA, significantly improved the localization of ictal EEGs and the estimation of distinct sources and corresponding DOAs. PMID:23366067
Numerical solution of large nonsymmetric eigenvalue problems
NASA Technical Reports Server (NTRS)
Saad, Youcef
1988-01-01
Several methods are described that combine Krylov subspace techniques, deflation procedures, and preconditioning for computing a small number of eigenvalues and eigenvectors or Schur vectors of large sparse matrices. The most effective techniques for solving realistic problems from applications are those methods based on some form of preconditioning and one of several Krylov subspace techniques, such as Arnoldi's method or the Lanczos procedure. Two forms of preconditioning are considered: shift-and-invert and polynomial acceleration. The latter presents some advantages for parallel/vector processing but may be ineffective if eigenvalues inside the spectrum are sought. Some algorithmic details are provided that improve the reliability and effectiveness of these techniques.
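A quick illustration of the shift-and-invert idea (Lanczos/Arnoldi applied to (A - sigma*I)^{-1}, so eigenvalues nearest the shift converge fastest) using SciPy's ARPACK wrapper on a diagonal test matrix; the matrix and shift are illustrative:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Diagonal test matrix with eigenvalues 1, 2, ..., 500
n = 500
A = sp.diags(np.arange(1.0, n + 1.0)).tocsc()

# Shift-and-invert mode: passing sigma makes ARPACK factor
# (A - sigma*I) and target the eigenvalues nearest the shift,
# giving access to interior eigenvalues.
vals = eigsh(A, k=3, sigma=100.3, return_eigenvectors=False)
print(np.sort(vals))
```

This is exactly the regime where plain Lanczos struggles: interior eigenvalues have poor relative separation, but the spectral transformation makes them extremal.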
Sparse subspace clustering: algorithm, theory, and applications.
Elhamifar, Ehsan; Vidal, René
2013-11-01
Many real-world problems deal with collections of high-dimensional data, such as images, videos, text and web documents, and DNA microarray data. Often, such high-dimensional data lie close to low-dimensional structures corresponding to several classes or categories to which the data belong. In this paper, we propose and study an algorithm, called sparse subspace clustering, to cluster data points that lie in a union of low-dimensional subspaces. The key idea is that, among the infinitely many possible representations of a data point in terms of other points, a sparse representation corresponds to selecting a few points from the same subspace. This motivates solving a sparse optimization program whose solution is used in a spectral clustering framework to infer the clustering of the data into subspaces. Since solving the sparse optimization program is in general NP-hard, we consider a convex relaxation and show that, under appropriate conditions on the arrangement of the subspaces and the distribution of the data, the proposed minimization program succeeds in recovering the desired sparse representations. The proposed algorithm is efficient and can handle data points near the intersections of subspaces. Another key advantage of the proposed algorithm with respect to the state of the art is that it can deal directly with data nuisances, such as noise, sparse outlying entries, and missing entries, by incorporating the model of the data into the sparse optimization program. We demonstrate the effectiveness of the proposed algorithm through experiments on synthetic data as well as the two real-world problems of motion segmentation and face clustering. PMID:24051734
Applications of Subspace Seismicity Detection in Antarctica
NASA Astrophysics Data System (ADS)
Myers, E. K.; Aster, R. C.; Benz, H.; McMahon, N. D.; McNamara, D. E.; Lough, A. C.; Wiens, D. A.; Wilson, T. J.
2014-12-01
Subspace detection can improve event recognition by enhancing the completeness of earthquake catalogs and by improving the characterization and interpretation of seismic events, particularly in regions of clustered seismicity. Recent deployments of dense networks of seismometers enable subspace detection methods to be more broadly applied to intraplate Antarctica, where historically very limited and sporadic network coverage has inhibited understanding of dynamic glacial, volcanic, and tectonic processes. In particular, recent broad seismographic networks such as POLENET/A-Net and AGAP provide significant new opportunities for characterizing and understanding the low seismicity rates of this continent. Our methodology incorporates three-component correlation to detect events in a statistical and adaptive framework. Detection thresholds are statistically assessed using phase-randomized template correlation levels. As new events are detected and the set of subspace basis vectors is updated, the algorithm can also be directed to scan back in a search for weaker prior events that have significant correlations with the updated basis vectors. This method has the resolving power to identify previously undetected areas of seismic activity under very low signal-to-noise conditions, and thus holds promise for revealing new seismogenic phenomena within and around Antarctica. In this study we investigate two intriguing seismogenic regions and demonstrate the methodology, reporting on a subspace detection-based study of recently identified clusters of deep long-period magmatic earthquakes in Marie Byrd Land, and on shallow icequakes that are dynamically triggered by teleseismic surface waves.
Subspace Identification with Multiple Data Sets
NASA Technical Reports Server (NTRS)
Duchesne, Laurent; Feron, Eric; Paduano, James D.; Brenner, Marty
1995-01-01
Most existing subspace identification algorithms assume that a single input-output data set is available. Motivated by a real-life problem on the F18-SRA experimental aircraft, we show how these algorithms are readily adapted to handle multiple data sets. We show by means of an example the relevance of such an improvement.
Real Space DFT by Locally Optimal Block Preconditioned Conjugate Gradient Method
NASA Astrophysics Data System (ADS)
Michaud, Vincent; Guo, Hong
2012-02-01
Real-space approaches solve the Kohn-Sham (KS) DFT problem as a system of partial differential equations (PDEs) on real-space numerical grids. In such techniques, the Hamiltonian matrix is typically much larger but sparser than the matrix arising in state-of-the-art DFT codes, which are often based on directly minimizing the total energy functional. Evidence of good performance of real-space methods - by Chebyshev-filtered subspace iteration (CFSI) - was reported by Zhou, Saad, Tiago and Chelikowsky [1]. We found that the performance of the locally optimal block preconditioned conjugate gradient method (LOBPCG) introduced by Knyazev [2], when used in conjunction with CFSI, generally exceeds that of CFSI for solving the KS equations. We will present our implementation of the LOBPCG-based real-space electronic structure calculator. [1] Y. Zhou, Y. Saad, M. L. Tiago, and J. R. Chelikowsky, ``Self-consistent-field calculations using Chebyshev-filtered subspace iteration,'' J. Comput. Phys., vol. 219, pp. 172-184, November 2006. [2] A. V. Knyazev, ``Toward the optimal preconditioned eigensolver: Locally optimal block preconditioned conjugate gradient method,'' SIAM J. Sci. Comput., vol. 23, pp. 517-541, 2001.
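The LOBPCG eigensolver of Knyazev [2] is available in SciPy; as a hedged sketch of the kind of computation the abstract describes, the following applies it to a toy 1-D discrete Laplacian (standing in for a Kohn-Sham Hamiltonian) with a simple diagonal preconditioner. The matrix and preconditioner are illustrative choices, not the paper's.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

n = 100
# 1-D discrete Laplacian: sparse, symmetric positive definite.
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 4))        # block of 4 starting vectors
M = sp.diags(1.0 / A.diagonal())       # Jacobi preconditioner stand-in

# largest=False requests the algebraically smallest eigenpairs.
vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=500)
```

The computed eigenvalues can be checked against the known spectrum of the 1-D Laplacian, 2 - 2 cos(k*pi/(n+1)).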
Exploiting Unsupervised and Supervised Constraints for Subspace Clustering.
Hu, Han; Feng, Jianjiang; Zhou, Jie
2015-08-01
Data in many image and video analysis tasks can be viewed as points drawn from multiple low-dimensional subspaces, with each subspace corresponding to one category or class. One basic task for processing such data is to separate the points according to the underlying subspace, referred to as subspace clustering. Extensive studies have been made on this subject, and nearly all of them use unconstrained subspace models, meaning the points can be drawn from anywhere in a subspace, to represent the data. In this paper, we attempt to do subspace clustering based on a constrained subspace assumption that the data is further restricted in the corresponding subspaces, e.g., belonging to a submanifold or satisfying a spatial regularity constraint. This assumption usually describes real data better, such as differently moving objects in a video scene and face images of different subjects under varying illumination. A unified integer linear programming optimization framework is used to approach subspace clustering, which can be efficiently solved by a branch-and-bound (BB) method. We also show that various kinds of supervised information, such as subspace number, outlier ratio, pairwise constraints, size priors, etc., can be conveniently incorporated into the proposed framework. Experiments on real data show that the proposed method outperforms the state-of-the-art algorithms significantly in clustering accuracy. The effectiveness of the proposed method in exploiting supervised information is also demonstrated. PMID:26352994
Vecharynski, Eugene; Yang, Chao; Pask, John E.
2015-06-01
We present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
General purpose nonlinear system solver based on Newton-Krylov method.
Energy Science and Technology Software Center (ESTSC)
2013-12-01
KINSOL is part of a software family called SUNDIALS: SUite of Nonlinear and Differential/Algebraic equation Solvers [1]. KINSOL is a general-purpose nonlinear system solver based on Newton-Krylov and fixed-point solver technologies [2].
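KINSOL itself is a C library distributed with SUNDIALS; as a stand-in illustration of the same Newton-Krylov idea, SciPy's `newton_krylov` can solve a small discretized nonlinear boundary-value problem. The equation and grid below are invented for the example.

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    # Residual of -u'' + u^3 = 1 on a uniform grid in (0, 1)
    # with zero Dirichlet boundary conditions.
    n = len(u)
    h = 1.0 / (n + 1)
    upad = np.concatenate(([0.0], u, [0.0]))
    lap = (upad[:-2] - 2 * upad[1:-1] + upad[2:]) / h**2
    return -lap + u**3 - 1.0

# Newton's outer iteration; each linear solve uses a Krylov method
# (matrix-free, via finite-difference Jacobian-vector products).
u0 = np.zeros(100)
sol = newton_krylov(residual, u0, f_tol=1e-10)
```

The returned solution drives the nonlinear residual to (near) machine-level smallness without ever forming the Jacobian explicitly.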
Discriminative Semantic Subspace Analysis for Relevance Feedback.
Zhang, Lining; Shum, Hubert P H; Shao, Ling
2016-03-01
Content-based image retrieval (CBIR) has attracted much attention during the past decades for its potential practical applications to image database management. A variety of relevance feedback (RF) schemes have been designed to bridge the gap between low-level visual features and high-level semantic concepts for an image retrieval task. In the process of RF, it would be impractical or too expensive to provide explicit class label information for each image. Instead, similar or dissimilar pairwise constraints between two images can be acquired more easily. However, most of the conventional RF approaches can only deal with training images with explicit class label information. In this paper, we propose a novel discriminative semantic subspace analysis (DSSA) method, which can directly learn a semantic subspace from similar and dissimilar pairwise constraints without using any explicit class label information. In particular, DSSA can effectively integrate the local geometry of labeled similar images, the discriminative information between labeled similar and dissimilar images, and the local geometry of labeled and unlabeled images together to learn a reliable subspace. Compared with the popular distance metric analysis approaches, our method can also learn a distance metric but perform more effectively when dealing with high-dimensional images. Extensive experiments on both the synthetic data sets and a real-world image database demonstrate the effectiveness of the proposed scheme in improving the performance of the CBIR. PMID:26780793
HyeongKae Park; R. Nourgaliev; Richard C. Martineau; Dana A. Knoll
2008-09-01
Multidimensional, higher-order (2nd and higher) numerical methods have come to the forefront in recent years due to significant advances in computer technology and numerical algorithms, and have shown great potential as viable design tools for realistic applications. To achieve this goal, implicit, high-order accurate coupling of the multiphysics simulations is a critical component. One of the issues that arise from multiphysics simulation is the necessity to resolve multiple time scales. For example, the dynamical time scales of neutron kinetics, fluid dynamics, and heat conduction differ significantly (typically by >10^10 in magnitude), with the dominant (fastest) physical mode also changing during the course of a transient [Pope and Mousseau, 2007]. This leads to severe time-step restrictions for stability in traditional multiphysics (i.e., operator-split, semi-implicit discretization) simulations. Lower-order methods suffer from undesirable numerical dissipation. Thus an implicit, higher-order accurate scheme is necessary to perform seamlessly coupled multiphysics simulations that can be used to analyze "what-if" regulatory accident scenarios, or to design and optimize engineering systems.
Iterative methods for large scale nonlinear and linear systems. Final report, 1994--1996
Walker, H.F.
1997-09-01
The major goal of this research has been to develop improved numerical methods for the solution of large-scale systems of linear and nonlinear equations, such as occur almost ubiquitously in the computational modeling of physical phenomena. The numerical methods of central interest have been Krylov subspace methods for linear systems, which have enjoyed great success in many large-scale applications, and Newton-Krylov methods for nonlinear problems, which use Krylov subspace methods to solve approximately the linear systems that characterize Newton steps. Krylov subspace methods have undergone a remarkable development over the last decade or so and are now very widely used for the iterative solution of large-scale linear systems, particularly those that arise in the discretization of partial differential equations (PDEs) that occur in computational modeling. Newton-Krylov methods have enjoyed parallel success and are currently used in many nonlinear applications of great scientific and industrial importance. In addition to their effectiveness on important problems, Newton-Krylov methods also offer a nonlinear framework within which to transfer to the nonlinear setting any advances in Krylov subspace methods or preconditioning techniques, or new algorithms that exploit advanced machine architectures. This research has resulted in a number of improved Krylov and Newton-Krylov algorithms together with applications of these to important linear and nonlinear problems.
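A preconditioned Krylov solve of the kind this report concerns can be sketched with SciPy's GMRES and an incomplete-LU preconditioner; the matrix below is a toy nonsymmetric stand-in, not one of the report's applications.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n = 200
# Convection-diffusion-like nonsymmetric, diagonally dominant matrix.
A = sp.diags([-1.2, 2.5, -0.8], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

ilu = spilu(A)                                # incomplete LU factors
M = LinearOperator((n, n), matvec=ilu.solve)  # applies M^{-1} ~ A^{-1}

x, info = gmres(A, b, M=M)                    # info == 0 on convergence
```

With a good preconditioner the Krylov iteration converges in a handful of steps; here the ILU factors of a tridiagonal matrix are nearly exact, so GMRES converges almost immediately.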
Indoor Subspacing to Implement Indoorgml for Indoor Navigation
NASA Astrophysics Data System (ADS)
Jung, H.; Lee, J.
2015-10-01
According to an increasing demand for indoor navigation, there have been great efforts to develop applicable indoor networks. Representing a room as a single node is not sufficient for complex and large buildings. As OGC established IndoorGML, subspacing to partition space for constructing a logical network was introduced. Concerning subspacing for indoor networks, transition spaces such as halls or corridors also have to be considered. This study presents the subspacing process for creating an indoor network in a shopping mall. Furthermore, categorization of transition spaces is performed and subspacing of these spaces is considered. Halls and squares in the mall are specifically defined for subspacing. Finally, an implementation of the subspacing process for an indoor network is presented.
Biomarkers for ischemic preconditioning: finding the responders
Koch, Sebastian; Della-Morte, David; Dave, Kunjan R; Sacco, Ralph L; Perez-Pinzon, Miguel A
2014-01-01
Ischemic preconditioning is emerging as an innovative and novel cytoprotective strategy to counter ischemic vascular disease. At the root of the preconditioning response is the upregulation of endogenous defense systems to achieve ischemic tolerance. Identifying suitable biomarkers to show that a preconditioning response has been induced remains a translational research priority. Preconditioning leads to a widespread genomic and proteomic response with important effects on hemostatic, endothelial, and inflammatory systems. The present article summarizes the relevant preclinical studies defining the mechanisms of preconditioning, reviews how the human preconditioning response has been investigated, and which of these bioresponses could serve as a suitable biomarker. Human preconditioning studies have investigated the effects of preconditioning on coagulation, endothelial factors, and inflammatory mediators as well as on genetic expression and tissue blood flow imaging. A biomarker for preconditioning would significantly contribute to define the optimal preconditioning stimulus and the extent to which such a response can be elicited in humans and greatly aid in dose selection in the design of phase II trials. Given the manifold biologic effects of preconditioning, a panel of multiple serum biomarkers or genomic assessments of upstream regulators may most accurately reflect the full spectrum of a preconditioning response. PMID:24643082
Orderings for conjugate gradient preconditionings
NASA Technical Reports Server (NTRS)
Ortega, James M.
1991-01-01
The effect of orderings on the rate of convergence of the conjugate gradient method with SSOR or incomplete Cholesky preconditioning is examined. Some results are also presented that help to explain why red/black ordering gives an inferior rate of convergence.
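The preconditioned conjugate gradient iteration whose convergence the paper studies can be written in a few lines. This sketch uses a diagonal (Jacobi) preconditioner as a stand-in for the SSOR or incomplete Cholesky factorizations actually examined, and the test matrix is invented.

```python
import numpy as np

def pcg(A, b, M_solve, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradients; M_solve applies M^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test matrix and a Jacobi (diagonal) preconditioner stand-in.
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)
b = rng.standard_normal(50)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
```

Swapping `M_solve` for an SSOR or incomplete-Cholesky solve (under any matrix ordering) changes only the closure passed in, which is what makes ordering experiments like the paper's easy to run.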
Hyperbaric oxygen pretreatment and preconditioning.
Camporesi, Enrico M; Bosco, Gerardo
2014-01-01
Exposure to hyperbaric oxygen (HBO2) before a crucial event, with the plan to create a preventing therapeutic situation, has been defined "preconditioning" and is emerging as a useful adjunct both in diving medicine as well before ischemic or inflammatory events. Oxygen pre-breathing before diving has been extensively documented in recreational, technical, commercial and military diving for tissue denitrogenation, resulting in reduced post-diving bubble loads, reduced decompression requirements and more rapid return to normal platelet function after a decompression. Preoxygenation at high atmospheric pressure has also been used in patients before exposure to clinical situations with beneficial effects, but the mechanisms of action have not yet been ascertained. During the reperfusion of ischemic tissue, oxygenated blood increases numbers and activities of oxidants generated in tissues. Previous reports showed that HBO2 preconditioning caused the activation of antioxidative enzymes and related genes in the central nervous system, including catalase (CAT), superoxide dismutase and heme oxygenase-1. Despite the increasing number of basic science publications on this issue, studies describing HBO2 preconditioning in the clinical practice remain scarce. To date, only a few studies have investigated the preconditioning effects of HBO2 in relation to the human brain and myocardium with robust and promising results. PMID:24984322
The variational subspace valence bond method
Fletcher, Graham D.
2015-04-07
The variational subspace valence bond (VSVB) method, based on overlapping orbitals, is introduced. VSVB provides variational support against collapse for the optimization of overlapping linear combinations of atomic orbitals (OLCAOs) using modified orbital expansions, without recourse to orthogonalization. OLCAOs have the advantage of being naturally localized, chemically intuitive (individually modeling bonds and lone pairs, for example), and transferable between different molecular systems. Such features are exploited to avoid key computational bottlenecks. Since the OLCAOs can be doubly occupied, VSVB can access very large problems, and calculations on systems with several hundred atoms are presented.
Signal Subspace Processing Of Experimental Radio Data
NASA Astrophysics Data System (ADS)
Martin, Gordon E.
1988-02-01
The research related to this paper was concerned with the application of eigenvector-eigenvalue (EVEV) signal processing techniques to experimental data. The signal subspace methods of Schmidt (called MUSIC), Johnson, and Pisarenko were considered and compared with results of conventional beamformers. Almost all oral and written papers regarding these EVEV processors involve theoretical studies, possibly using simulated data and incoherent noise, but not experimental data. Contrary to that trend, we have reported the behavior of EVEV processors using experimental data in this and other papers. The data used here are predominantly from an HF radio experiment, but the distribution of eigenvalues is also reported for acoustic data. The paper emphasizes two general subtopics of signal subspace processing. First, the eigenvalues of sampled covariance matrices are examined and related to those of incoherent noise. These results include actual data, none of which we found to be Gaussian incoherent noise. A new test related to the ratio of eigenvalues is developed. The MDL and AIC criteria give misleading results with actual noise. Second, directional responses of EVEV and conventional processors are compared using HF radio data that have high signal-to-noise ratio in the non-Gaussian noise. MUSIC is found to have very favorable directional characteristics.
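Schmidt's MUSIC estimator mentioned in the abstract can be sketched on simulated uniform-linear-array data (the experimental HF radio setup is not reproduced here); the array geometry, source angle, and noise level are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
m, snapshots = 8, 200          # sensors, time samples
true_doa = 20.0                # source direction, degrees

def steering(theta_deg, m):
    # Half-wavelength spacing: phase shift pi*sin(theta) per sensor.
    phase = np.pi * np.sin(np.radians(theta_deg))
    return np.exp(1j * phase * np.arange(m))

# One narrowband source plus white complex noise.
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
X = np.outer(steering(true_doa, m), s)
X += 0.1 * (rng.standard_normal((m, snapshots))
            + 1j * rng.standard_normal((m, snapshots)))

R = X @ X.conj().T / snapshots          # sample covariance matrix
w, V = np.linalg.eigh(R)                # eigenvalues in ascending order
En = V[:, :-1]                          # noise subspace (one source)

# MUSIC pseudospectrum: peaks where the steering vector is
# orthogonal to the noise subspace.
grid = np.arange(-90.0, 90.0, 0.1)
spectrum = [1.0 / np.linalg.norm(En.conj().T @ steering(t, m))**2
            for t in grid]
est = grid[int(np.argmax(spectrum))]
```

At this signal-to-noise ratio the pseudospectrum peak lands essentially on the true direction of arrival.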
On the dimension of subspaces with bounded Schmidt rank
Cubitt, Toby; Montanaro, Ashley; Winter, Andreas
2008-02-15
We consider the question of how large a subspace of a given bipartite quantum system can be when the subspace contains only highly entangled states. This is motivated in part by results of Hayden et al. [e-print arXiv:quant-ph/0407049; Commun. Math. Phys., 265, 95 (2006)], which show that in large d×d-dimensional systems there exist random subspaces of dimension almost d^2, all of whose states have entropy of entanglement at least log d - O(1). It is also a generalization of results on the dimension of completely entangled subspaces, which have connections with the construction of unextendible product bases. Here we take as entanglement measure the Schmidt rank, and determine, for every pair of local dimensions d_A and d_B, and every r, the largest dimension of a subspace consisting only of entangled states of Schmidt rank r or larger. This exact answer is a significant improvement on the best bounds that can be obtained using the random subspace techniques in Hayden et al. We also determine the converse: the largest dimension of a subspace with an upper bound on the Schmidt rank. Finally, we discuss the question of subspaces containing only states with Schmidt rank equal to r.
Subspace segmentation by dense block and sparse representation.
Tang, Kewei; Dunson, David B; Su, Zhixun; Liu, Risheng; Zhang, Jie; Dong, Jiangxin
2016-03-01
Subspace segmentation is a fundamental topic in computer vision and machine learning. However, the success of many popular methods concerns independent subspace segmentation rather than the more flexible and realistic disjoint subspace segmentation. Focusing on disjoint subspaces, we provide theoretical and empirical evidence of inferior performance for popular algorithms such as LRR. To solve these problems, we propose a novel dense block and sparse representation (DBSR) for subspace segmentation and provide related theoretical results. DBSR minimizes a combination of the 1,1-norm and the maximum singular value of the representation matrix, leading to a combination of dense block and sparsity. We provide experimental results for synthetic and benchmark data showing that our method can outperform the state-of-the-art. PMID:26720247
Classes of Invariant Subspaces for Some Operator Algebras
NASA Astrophysics Data System (ADS)
Hamhalter, Jan; Turilova, Ekaterina
2014-10-01
New results are proved showing connections between structural properties of von Neumann algebras and order-theoretic properties of the structures of invariant subspaces given by them. We show that for any properly infinite von Neumann algebra M there is an affiliated subspace such that all important subspace classes living on it are different. Moreover, we show that it can be chosen such that the set of σ-additive measures on its subspace classes is empty. We generalize a measure-theoretic criterion on completeness of inner product spaces to affiliated subspaces corresponding to a Type I factor with finite-dimensional commutant. We summarize hitherto known results in this area, discuss their importance for the mathematical foundations of quantum theory, and outline perspectives of further research.
Preconditioning Operators on Unstructured Grids
NASA Technical Reports Server (NTRS)
Nepomnyaschikh, S. V.
1996-01-01
We consider systems of mesh equations that approximate elliptic boundary value problems on arbitrary (unstructured) quasi-uniform triangulations and propose a method for constructing optimal preconditioning operators. The method is based upon two approaches: (1) the fictitious space method, i.e., the reduction of the original problem to a problem in an auxiliary (fictitious) space, and (2) the multilevel decomposition method, i.e., the construction of preconditioners by decomposing functions on hierarchical meshes. The convergence rate of the corresponding iterative process with the preconditioner obtained is independent of the mesh step. The preconditioner has an optimal computational cost: the number of arithmetic operations required for its implementation is proportional to the number of unknowns in the problem. The construction of the preconditioning operators for three dimensional problems can be done in the same way.
Cerebral Ischemic Preconditioning: the Road So Far….
Thushara Vijayakumar, N; Sangwan, Amit; Sharma, Bhargy; Majid, Arshad; Rajanikant, G K
2016-05-01
Cerebral preconditioning constitutes the brain's adaptation to lethal ischemia when first exposed to mild doses of a subtoxic stressor. The phenomenon of preconditioning has been largely studied in the heart, and data from in vivo and in vitro models from past 2-3 decades have provided sufficient evidence that similar machinery exists in the brain as well. Since preconditioning results in a transient protective phenotype labeled as ischemic tolerance, it can open many doors in the medical warfare against stroke, a debilitating cerebrovascular disorder that kills or cripples thousands of people worldwide every year. Preconditioning can be induced by a variety of stimuli from hypoxia to pharmacological anesthetics, and each, in turn, induces tolerance by activating a multitude of proteins, enzymes, receptors, transcription factors, and other biomolecules eventually leading to genomic reprogramming. The intracellular signaling pathways and molecular cascades behind preconditioning are extensively being investigated, and several first-rate papers have come out in the last few years centered on the topic of cerebral ischemic tolerance. However, translating the experimental knowledge into the clinical scaffold still evades practicality and faces several challenges. Of the various preconditioning strategies, remote ischemic preconditioning and pharmacological preconditioning appears to be more clinically relevant for the management of ischemic stroke. In this review, we discuss current developments in the field of cerebral preconditioning and then examine the potential of various preconditioning agents to confer neuroprotection in the brain. PMID:26081149
Preconditioning for traumatic brain injury
Yokobori, Shoji; Mazzeo, Anna T; Hosein, Khadil; Gajavelli, Shyam; Dietrich, W. Dalton; Bullock, M. Ross
2016-01-01
Traumatic brain injury (TBI) treatment is now focused on the prevention of primary injury and reduction of secondary injury. However, no single effective treatment is available as yet for the mitigation of traumatic brain damage in humans. Both chemical and environmental stresses applied before injury have been shown to induce consequent protection against post-TBI neuronal death. This concept, termed "preconditioning", is achieved by exposure to different pre-injury stressors to induce "tolerance" to the effect of the TBI. However, the precise mechanisms underlying this "tolerance" phenomenon are not fully understood in TBI, and therefore even less information is available about possible indications in clinical TBI patients. In this review we will summarize TBI pathophysiology, and discuss existing animal studies demonstrating the efficacy of preconditioning in diffuse and focal types of TBI. We will also review other non-TBI preconditioning studies, including ischemic, environmental, and chemical preconditioning, which may be relevant to TBI. To date, no clinical studies exist in this field, and we speculate on possible future clinical situations in which pre-TBI preconditioning could be considered. PMID:24323189
[Pharmacological preconditioning in carotid endarterectomy].
Kuznetsov, M R; Karalkin, A V; Fedin, A I; Virganskii, A O; Kunitsyn, N V; Kholopova, E A; Yumin, S M
2015-01-01
The study was aimed at examining efficacy of preoperative preparation (pharmacological preconditioning) for carotid endarterectomy in patients with chronic cerebrovascular insufficiency. For this purpose, we analysed the outcomes of surgical treatment in a total of 80 patients presenting with haemodynamically significant unilateral and bilateral lesions of carotid arteries. Of these, 40 patients were operated on immediately and a further 40 patients underwent surgery after pharmacological preconditioning with Actovegin taken at a daily dose of 1,200 mg for 1.5 months. It was demonstrated that preoperative preparation prior to surgery increases cerebral perfusion which is determined by means of single-photon emission computed tomography, thus substantially improving the outcomes of surgical treatment. Statistically significant differences in cognitive function of these groups of patients were revealed 7 days and 6 months after the operation. Improvement of cognitive functions was associated with fewer symptom-free postoperative cerebral ischaemic foci in various regions of the brain. A conclusion was made on a positive role of pharmacological preconditioning with Actovegin in surgical management of cerebrovascular insufficiency, first of all in relation to more complete restoration of cognitive functions. PMID:26355920
NASA Astrophysics Data System (ADS)
Borazjani, Iman; Asgharzadeh, Hafez
2015-11-01
Flow simulations involving complex geometries and moving boundaries suffer from time-step size restrictions and low convergence rates with explicit and semi-implicit schemes. Implicit schemes can be used to overcome these restrictions. However, implementing an implicit solver for nonlinear equations, including the Navier-Stokes equations, is not straightforward. Newton-Krylov subspace methods (NKMs) are among the most advanced iterative methods for solving nonlinear equations such as implicit discretizations of the Navier-Stokes equations. The efficiency of NKMs depends heavily on the Jacobian formation method: e.g., automatic differentiation is very expensive, and matrix-free methods slow down as the mesh is refined. An analytical Jacobian is an inexpensive alternative, but its derivation for the Navier-Stokes equations on a staggered grid is challenging. An NKM with a novel analytical Jacobian was developed and validated against the Taylor-Green vortex and pulsatile flow in a 90-degree bend. The developed method successfully handled complex geometries such as an intracranial aneurysm with multiple overset grids and immersed boundaries. It is shown that the NKM with an analytical Jacobian is 3 to 25 times faster than the fixed-point implicit Runge-Kutta method, and more than 100 times faster than automatic differentiation, depending on the grid (size) and the flow problem. The developed methods are fully parallelized with a parallel efficiency of 80-90% on the problems tested.
NASA Astrophysics Data System (ADS)
De Maio, Antonio; Orlando, Danilo
2016-04-01
This paper deals with adaptive radar detection of a subspace signal competing with two sources of interference. The former is Gaussian with unknown covariance matrix and accounts for the joint presence of clutter plus thermal noise. The latter is structured as a subspace signal and models coherent pulsed jammers impinging on the radar antenna. The problem is solved via the Principle of Invariance which is based on the identification of a suitable group of transformations leaving the considered hypothesis testing problem invariant. A maximal invariant statistic, which completely characterizes the class of invariant decision rules and significantly compresses the original data domain, as well as its statistical characterization are determined. Thus, the existence of the optimum invariant detector is addressed together with the design of practically implementable invariant decision rules. At the analysis stage, the performance of some receivers belonging to the new invariant class is established through the use of analytic expressions.
Faces from sketches: a subspace synthesis approach
NASA Astrophysics Data System (ADS)
Li, Yung-hui; Savvides, Marios
2006-04-01
In real-life scenarios, we may need to perform face recognition for identification when only a sketch of the face is available; for example, when police try to identify criminals from sketches of a suspect drawn by artists according to witness descriptions, what they have in hand is a sketch of the suspect, while the gallery consists of real face images acquired from video surveillance. So far, the state-of-the-art approach to this problem transforms all real face images into sketches and performs recognition in the sketch domain. We propose the opposite, which we argue is a better approach: we generate a realistic face image from the composite sketch using a Hybrid subspace method and then build an illumination-tolerant correlation filter that can recognize the person under different illumination variations. We show experimental results of our approach on the CMU PIE (Pose, Illumination and Expression) database, demonstrating the effectiveness of our novel approach.
Coherent signal-subspace transformation beam former
NASA Astrophysics Data System (ADS)
Yang, J.-F.; Kaveh, M.
1990-08-01
A family of coherent signal-subspace transformation (CST) preprocessors for improving the performance of beam formers is presented. The proposed beam former includes a CST preprocessor followed by a frequency-independent nulling procedure. If the exact CST preprocessor is used, it is shown that the steering and nulling responses of the CST beam former become frequency independent. This CST beam former is also interpreted as a wideband minimum-variance distortionless response (MVDR) beam former for minimizing the total interference and noise powers. Several classes of the CST matrix are proposed and discussed. Unlike traditional linearly constrained beam formers, the CST beam former effectively nulls narrowband and wideband correlated interference(s). Simulation results show that even low-order approximate CST preprocessors substantially improve the nulling and the steering bandwidth of general arrays.
Robust video hashing via multilinear subspace projections.
Li, Mu; Monga, Vishal
2012-10-01
The goal of video hashing is to design hash functions that summarize videos by short fingerprints or hashes. While traditional applications of video hashing lie in database searches and content authentication, the emergence of websites such as YouTube and DailyMotion poses a challenging problem of anti-piracy video search. That is, hashes or fingerprints of an original video (provided to YouTube by the content owner) must be matched against those uploaded to YouTube by users to identify instances of "illegal" or undesirable uploads. Because the uploaded videos invariably differ from the original in their digital representation (owing to incidental or malicious distortions), robust video hashes are desired. We model videos as order-3 tensors and use multilinear subspace projections, such as a reduced-rank parallel factor analysis (PARAFAC), to construct video hashes. We observe that, unlike most standard descriptors of video content, tensor-based subspace projections can offer excellent robustness while effectively capturing the spatio-temporal essence of the video for discriminability. We introduce randomization in the hash function by dividing the video into (secret-key-based) pseudo-randomly selected overlapping sub-cubes to protect against intentional guessing and forgery. A detection-theoretic analysis of the proposed hash-based video identification is presented, in which we derive analytical approximations for error probabilities. Remarkably, these theoretical error estimates closely mimic the empirically observed error probabilities of our hash algorithm. Furthermore, experimental receiver operating characteristic (ROC) curves reveal that the proposed tensor-based video hash exhibits enhanced robustness against both spatial and temporal video distortions over state-of-the-art video hashing techniques. PMID:22752130
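A reduced-rank PARAFAC model of the kind used for the hash can be sketched with a bare-bones alternating least squares (ALS) loop. The code below (NumPy assumed; tensor sizes, rank, and iteration count are illustrative, and the paper's quantization/hashing step is omitted) recovers the factors of an exact rank-3 tensor.

```python
import numpy as np

def kr(U, V):
    # Khatri-Rao (column-wise Kronecker) product.
    r = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, r)

def unfold(X, mode):
    # Mode-n matricization consistent with the kr() convention above.
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def parafac(X, rank, n_iter=200, seed=0):
    # Bare-bones rank-R PARAFAC via alternating least squares:
    # each factor update is an exact linear least-squares solve.
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in X.shape)
    for _ in range(n_iter):
        A = unfold(X, 0) @ kr(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ kr(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(X, 2) @ kr(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# A small order-3 tensor of exact rank 3 is recovered closely.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((8, 3)) for _ in range(3))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = parafac(X, 3)
err = (np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C))
       / np.linalg.norm(X))
print(err)   # small relative reconstruction error
```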
Preconditioned iterations to calculate extreme eigenvalues
Brand, C.W.; Petrova, S.
1994-12-31
Common iterative algorithms to calculate a few extreme eigenvalues of a large, sparse matrix are Lanczos methods or power iterations. They converge at a rate proportional to the separation of the extreme eigenvalues from the rest of the spectrum. Appropriate preconditioning improves the separation of the eigenvalues. Davidson's method and its generalizations exploit this fact. The authors examine a preconditioned iteration that resembles a truncated version of Davidson's method with a different preconditioning strategy.
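A truncated Davidson-style iteration of the kind discussed can be sketched as follows (NumPy assumed; the diagonal Jacobi preconditioner and the test matrix are illustrative, not the authors' preconditioning strategy):

```python
import numpy as np

def davidson_smallest(A, tol=1e-8, max_iter=200):
    # Bare-bones Davidson iteration for the smallest eigenpair of a
    # symmetric matrix A, with the classic diagonal (Jacobi)
    # preconditioner (diag(A) - theta)^{-1} applied to the residual.
    n = A.shape[0]
    V = np.zeros((n, 0))
    t = np.random.default_rng(1).standard_normal(n)
    for _ in range(min(max_iter, n)):
        # Orthonormalize the new direction against the search space
        # (twice, for numerical safety) and append it.
        for _ in range(2):
            t = t - V @ (V.T @ t)
        t = t / np.linalg.norm(t)
        V = np.column_stack([V, t])
        # Rayleigh-Ritz extraction on the subspace.
        theta_all, S = np.linalg.eigh(V.T @ A @ V)
        theta, x = theta_all[0], V @ S[:, 0]
        r = A @ x - theta * x
        if np.linalg.norm(r) < tol:
            break
        # Davidson correction: diagonally preconditioned residual.
        denom = np.diag(A) - theta
        denom = np.where(np.abs(denom) < 1e-6, 1e-6, denom)
        t = r / denom
    return theta, x

# Diagonally dominant test matrix: convergence is rapid.
rng = np.random.default_rng(0)
n = 100
N = rng.standard_normal((n, n))
A = np.diag(np.arange(1.0, n + 1.0)) + 0.005 * (N + N.T)
theta, x = davidson_smallest(A)
print(theta)   # close to np.linalg.eigvalsh(A)[0]
```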
Scalable parallel Newton-Krylov solvers for discontinuous Galerkin discretizations
Persson, P.-O.
2008-12-31
We present techniques for implicit solution of discontinuous Galerkin discretizations of the Navier-Stokes equations on parallel computers. While a block-Jacobi method is simple and straightforward to parallelize, its convergence properties are poor except for simple problems. Therefore, we consider Newton-GMRES methods preconditioned with block-incomplete LU factorizations, with optimized element orderings based on a minimum discarded fill (MDF) approach. We discuss the difficulties with the parallelization of these methods, but also show that with a simple domain decomposition approach, most of the advantages of the block-ILU over the block-Jacobi preconditioner are retained. The convergence is further improved by incorporating the matrix connectivities into the mesh partitioning process, which aims at minimizing the errors introduced by separating the partitions. We demonstrate the performance of the schemes for realistic two- and three-dimensional flow problems.
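The Newton-GMRES building block is available off the shelf. The sketch below uses SciPy's `newton_krylov` on a small 1D nonlinear reaction-diffusion residual (the problem and tolerances are illustrative; the paper's block-ILU/MDF preconditioning is not reproduced):

```python
import numpy as np
from scipy.optimize import newton_krylov

# 1D nonlinear reaction-diffusion residual: u'' - u**3 + 1 = 0 on (0, 1)
# with u(0) = u(1) = 0, discretized by central differences on an
# interior grid of n points.
n = 64
h = 1.0 / (n + 1)

def residual(u):
    d2 = np.empty_like(u)
    d2[0] = (-2 * u[0] + u[1]) / h**2        # u(0) = 0 boundary
    d2[-1] = (u[-2] - 2 * u[-1]) / h**2      # u(1) = 0 boundary
    d2[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
    return d2 - u**3 + 1.0

# Outer Newton iteration, inner GMRES solves with finite-difference
# Jacobian-vector products (no explicit Jacobian is ever formed).
u = newton_krylov(residual, np.zeros(n), method='gmres', f_tol=1e-9)
print(np.abs(residual(u)).max())   # residual driven to near zero
```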
An alternative subspace approach to EEG dipole source localization
NASA Astrophysics Data System (ADS)
Xu, Xiao-Liang; Xu, Bobby; He, Bin
2004-01-01
In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.
An alternative subspace approach to EEG dipole source localization.
Xu, Xiao-Liang; Xu, Bobby; He, Bin
2004-01-21
In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist. PMID:15083674
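The noise-only-subspace projection underlying both MUSIC and FINES can be illustrated on a simple 1D direction-of-arrival problem. The sketch below (NumPy assumed; array size, angles, and noise level are illustrative) implements classic MUSIC, whose pseudospectrum peaks where a candidate steering vector is nearly orthogonal to the estimated noise subspace; FINES would replace `En` with the smaller FINES vector set for a brain region of interest.

```python
import numpy as np

def music_spectrum(R, n_sources, thetas, spacing=0.5):
    # Classic MUSIC: pseudospectrum 1/||En^H a(theta)||^2, where En is
    # the estimated noise-only subspace (eigenvectors of R belonging to
    # the smallest eigenvalues).
    n = R.shape[0]
    _, E = np.linalg.eigh(R)                  # ascending eigenvalues
    En = E[:, : n - n_sources]                # noise subspace
    sensors = np.arange(n)
    p = np.empty(len(thetas))
    for i, th in enumerate(thetas):
        a = np.exp(1j * 2 * np.pi * spacing * np.sin(th) * sensors)
        p[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return p

# Two closely spaced sources (0.1 and 0.2 rad) on a 10-sensor array.
rng = np.random.default_rng(2)
n, T = 10, 5000
A = np.stack([np.exp(1j * 2 * np.pi * 0.5 * np.sin(th) * np.arange(n))
              for th in (0.1, 0.2)], axis=1)
S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
noise = 0.1 * (rng.standard_normal((n, T)) + 1j * rng.standard_normal((n, T)))
X = A @ S + noise
R = X @ X.conj().T / T
thetas = np.linspace(-0.5, 0.5, 201)
p = music_spectrum(R, 2, thetas)
# The pseudospectrum is far larger at the true angles than elsewhere.
```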
Learning Markov Random Walks for robust subspace clustering and estimation.
Liu, Risheng; Lin, Zhouchen; Su, Zhixun
2014-11-01
Markov Random Walks (MRW) has proven to be an effective way to understand spectral clustering and embedding. However, because it lacks a global structural measure, the conventional MRW (e.g., the Gaussian kernel MRW) cannot handle data points drawn from a mixture of subspaces. In this paper, we introduce a regularized MRW learning model, using a low-rank penalty to constrain the global subspace structure, for subspace clustering and estimation. In our framework, both the local pairwise similarity and the global subspace structure can be learnt from the transition probabilities of the MRW. We prove that under suitable conditions, our proposed local/global criteria can exactly capture the multiple-subspace structure and learn a low-dimensional embedding for the data that gives the true segmentation of the subspaces. To improve robustness in real situations, we also propose an extension of the MRW learning model that integrates transition matrix learning and error correction in a unified framework. Experimental results on both synthetic data and real applications demonstrate that our proposed MRW learning model and its robust extension outperform state-of-the-art subspace clustering methods. PMID:25005156
Latent subspace sparse representation-based unsupervised domain adaptation
NASA Astrophysics Data System (ADS)
Shuai, Liu; Sun, Hao; Zhao, Fumin; Zhou, Shilin
2015-12-01
In this paper, we introduce and study a novel unsupervised domain adaptation (DA) algorithm, called latent subspace sparse representation based domain adaptation, built on the observation that source and target data lie in different but related low-dimensional subspaces. The key idea is that each point in a union of subspaces can be constructed from a combination of other points in the dataset. In this method, we propose to project the source and target data onto a common latent generalized subspace, which is a union of the subspaces of the source and target domains, and to learn the sparse representation in this latent generalized subspace. By employing minimum reconstruction error and maximum mean discrepancy (MMD) constraints, the structures of the source and target domains are preserved and the discrepancy between them is reduced, and both are thus reflected in the sparse representation. We then use the sparse representation to build a weighted graph that reflects the relationships among points from the different domains (source-source, source-target, and target-target) to predict the labels of the target domain. We also propose an efficient optimization method for the algorithm. Our method does not need to be combined with any classifier and therefore requires no separate training and testing procedures. Various experiments show that the proposed method performs better than competitive state-of-the-art subspace-based domain adaptation methods.
Management of Preconditioned Calves and Impacts of Preconditioning.
Hilton, W Mark
2015-07-01
When studying the practice of preconditioning (PC) calves, many factors need to be examined to determine whether cow-calf producers should make this investment. Factors such as average daily gain, feed efficiency, available labor, length of the PC period, genetics, and marketing options must be analyzed. The health-related sales price advantage is an additional benefit of producing and selling PC calves, but not the sole determinant of PC's financial feasibility. Studies show that a substantial advantage of PC is the selling of additional pounds at a cost of gain well below the marginal return of producing those additional pounds. PMID:26139187
Preconditioning for multidimensional TOMBO imaging.
Horisaki, Ryoichi; Tanida, Jun
2011-06-01
In this Letter, we propose a preconditioning method to improve the convergence speed of iterative reconstruction algorithms in a compact, multidimensional, compound-eye imaging system called the thin observation module by bound optics. The condition number of the system matrix is improved by using a preconditioner matrix. To calculate the preconditioner matrix, the system model is expressed in the frequency domain. The proposed method is simulated by using a compressive sensing algorithm called the two-step iterative shrinkage/thresholding algorithm. The results showed improved reconstruction fidelity with a certain number of iterations for high signal-to-noise ratio measurements. PMID:21633452
Tensor-Krylov methods for solving large-scale systems of nonlinear equations.
Bader, Brett William
2004-08-01
This paper develops and investigates iterative tensor methods for solving large-scale systems of nonlinear equations. Direct tensor methods for nonlinear equations have performed especially well on small, dense problems where the Jacobian matrix at the solution is singular or ill-conditioned, which may occur when approaching turning points, for example. This research extends direct tensor methods to large-scale problems by developing three tensor-Krylov methods that base each iteration upon a linear model augmented with a limited second-order term, which provides information lacking in a (nearly) singular Jacobian. The advantage of the new tensor-Krylov methods over existing large-scale tensor methods is their ability to solve the local tensor model to a specified accuracy, which produces a more accurate tensor step. The performance of these methods in comparison to Newton-GMRES and tensor-GMRES is explored on three Navier-Stokes fluid flow problems. The numerical results provide evidence that tensor-Krylov methods are generally more robust and more efficient than Newton-GMRES on some important and difficult problems. In addition, the results show that the new tensor-Krylov methods and tensor-GMRES each perform better in certain situations.
NASA Astrophysics Data System (ADS)
Calef, Matthew T.; Fichtl, Erin D.; Warsa, James S.; Berndt, Markus; Carlson, Neil N.
2013-04-01
We compare a variant of Anderson Mixing with the Jacobian-Free Newton-Krylov and Broyden methods applied to an instance of the k-eigenvalue formulation of the linear Boltzmann transport equation. We present evidence that one variant of Anderson Mixing finds solutions in the fewest iterations. We examine and strengthen theoretical results of Anderson Mixing applied to linear problems.
Manifold learning-based subspace distance for machinery damage assessment
NASA Astrophysics Data System (ADS)
Sun, Chuang; Zhang, Zhousuo; He, Zhengjia; Shen, Zhongjie; Chen, Binqiang
2016-03-01
Damage assessment is essential for maintaining the safety and reliability of machinery components, and vibration analysis is an effective way to carry it out. In this paper, a damage index is designed by performing manifold distance analysis on vibration signals. To calculate the index, vibration signals are first collected, and feature extraction is carried out to obtain statistical features that capture the signal characteristics comprehensively. Then, a manifold learning algorithm is used to decompose the feature matrix into a subspace, the manifold subspace. The manifold learning algorithm seeks to preserve the local relationships within the feature matrix, which is more meaningful for damage assessment. Finally, the Grassmann distance between manifold subspaces is defined as the damage index. The Grassmann distance, which reflects manifold structure, is a suitable metric for measuring the distance between subspaces on the manifold. The defined damage index is applied to damage assessment of a rotor and a bearing, and the results validate its effectiveness for damage assessment of machinery components.
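The Grassmann distance between subspaces can be computed from principal angles, for which SciPy provides a routine. A minimal sketch (the subspace dimensions and data are illustrative, not from the paper):

```python
import numpy as np
from scipy.linalg import subspace_angles

def grassmann_distance(U, V):
    # Geodesic (arc-length) Grassmann distance: the 2-norm of the
    # vector of principal angles between span(U) and span(V).
    return np.linalg.norm(subspace_angles(U, V))

rng = np.random.default_rng(0)
U = rng.standard_normal((20, 3))
# Any invertible recombination of the columns spans the same subspace.
same = U @ rng.standard_normal((3, 3))
other = rng.standard_normal((20, 3))
print(grassmann_distance(U, same))    # ≈ 0: identical subspace
print(grassmann_distance(U, other))   # clearly nonzero
```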
Ischemic preconditioning protects against ischemic brain injury.
Ma, Xiao-Meng; Liu, Mei; Liu, Ying-Ying; Ma, Li-Li; Jiang, Ying; Chen, Xiao-Hong
2016-05-01
In this study, we hypothesized that an increase in integrin αvβ3 and its co-activator vascular endothelial growth factor play important neuroprotective roles in ischemic injury. We performed ischemic preconditioning with bilateral common carotid artery occlusion for 5 minutes in C57BL/6J mice. This was followed by ischemic injury with bilateral common carotid artery occlusion for 30 minutes. The time interval between ischemic preconditioning and lethal ischemia was 48 hours. Histopathological analysis showed that ischemic preconditioning substantially diminished damage to neurons in the hippocampus 7 days after ischemia. Evans Blue dye assay showed that ischemic preconditioning reduced damage to the blood-brain barrier 24 hours after ischemia. This demonstrates the neuroprotective effect of ischemic preconditioning. Western blot assay revealed a significant reduction in protein levels of integrin αvβ3, vascular endothelial growth factor and its receptor in mice given ischemic preconditioning compared with mice not given ischemic preconditioning 24 hours after ischemia. These findings suggest that the neuroprotective effect of ischemic preconditioning is associated with lower integrin αvβ3 and vascular endothelial growth factor levels in the brain following ischemia. PMID:27335560
Ischemic preconditioning protects against ischemic brain injury
Ma, Xiao-meng; Liu, Mei; Liu, Ying-ying; Ma, Li-li; Jiang, Ying; Chen, Xiao-hong
2016-01-01
In this study, we hypothesized that an increase in integrin αvβ3 and its co-activator vascular endothelial growth factor play important neuroprotective roles in ischemic injury. We performed ischemic preconditioning with bilateral common carotid artery occlusion for 5 minutes in C57BL/6J mice. This was followed by ischemic injury with bilateral common carotid artery occlusion for 30 minutes. The time interval between ischemic preconditioning and lethal ischemia was 48 hours. Histopathological analysis showed that ischemic preconditioning substantially diminished damage to neurons in the hippocampus 7 days after ischemia. Evans Blue dye assay showed that ischemic preconditioning reduced damage to the blood-brain barrier 24 hours after ischemia. This demonstrates the neuroprotective effect of ischemic preconditioning. Western blot assay revealed a significant reduction in protein levels of integrin αvβ3, vascular endothelial growth factor and its receptor in mice given ischemic preconditioning compared with mice not given ischemic preconditioning 24 hours after ischemia. These findings suggest that the neuroprotective effect of ischemic preconditioning is associated with lower integrin αvβ3 and vascular endothelial growth factor levels in the brain following ischemia. PMID:27335560
40 CFR 80.52 - Vehicle preconditioning.
Code of Federal Regulations, 2013 CFR
2013-07-01
... accordance with the “General vehicle handling requirements” per 40 CFR 86.132-96, up to and including the completion of the hot start exhaust test. (b) The preconditioning procedure prescribed at 40 CFR 86.132-96... 40 Protection of Environment 17 2013-07-01 2013-07-01 false Vehicle preconditioning. 80.52...
Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter
Zhou, Ning; Huang, Zhenyu; Welch, Greg; Zhang, J.
2012-05-24
To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on generalized eigenvalue decomposition is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. The algorithm can also be extended to other Kalman filters for measurement subspace selection.
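One way to read the generalized eigenvalue criterion is as ranking measurement-space directions by information content relative to noise. The sketch below (NumPy/SciPy assumed; the criterion (HPHᵀ)v = λRv and all matrices are an illustrative guess, not necessarily the paper's exact formulation) keeps only the top generalized eigendirections:

```python
import numpy as np
from scipy.linalg import eigh

# Rank measurement-space directions by state information relative to
# noise via the generalized eigenproblem (H P H^T) v = lambda R v, and
# keep only the top eigendirections as the measurement subspace.
rng = np.random.default_rng(3)
n_state, n_meas = 6, 30
H = rng.standard_normal((n_meas, n_state))     # observation operator
P = np.eye(n_state)                            # forecast covariance
R = np.diag(rng.uniform(0.5, 2.0, n_meas))     # measurement noise covariance
lam, V = eigh(H @ P @ H.T, R)                  # ascending generalized eigenvalues
subspace = V[:, -n_state:]                     # most informative directions
# Everything informative lives in a rank-n_state subspace: the discarded
# directions carry (numerically) zero generalized eigenvalue.
print(lam[: n_meas - n_state].max(), lam[-1])
```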
Object classification using local subspace projection
NASA Astrophysics Data System (ADS)
Nealy, Jennifer; Muise, Robert
2011-06-01
We consider the problem of object classification from image data. Significant challenges are presented when objects can be imaged from different view angles and have different distortions. For example, a vehicle will appear completely different depending on the viewing angle of the sensor but must still be classified as the same vehicle. With regard to face recognition, a person may have a variety of facial expressions, and a pattern recognition algorithm would need to account for these distortions. Traditional algorithms such as PCA filters are linear in nature and cannot account for the underlying non-linear structure which characterizes an object. We examine nonlinear manifold techniques applied to the pattern recognition problem. One mathematical construct receiving significant research attention is diffusion maps, whereby the underlying training data are remapped so that Euclidean distance in the mapped data is equivalent to the manifold distance of the original dataset. This technique has been used successfully for applications such as data organization, noise filtering, and anomaly detection, but with only limited experiments in object classification. For very large datasets (size N), pattern classification with diffusion maps becomes rather onerous, as it requires the eigenvectors of an NxN matrix. We characterize the performance of a 40-person facial recognition problem with a standard K-NN classifier, a diffusion distance classifier, and standard PCA. We then develop a local subspace projection algorithm which approximates the diffusion distance without the prohibitive computations and shows comparable classification performance.
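The diffusion-map construction mentioned above is short to write down: build a Gaussian kernel, row-normalize it into a Markov transition matrix, and embed with the leading non-trivial eigenvectors. A minimal sketch (NumPy assumed; the data and kernel bandwidth are illustrative):

```python
import numpy as np

def diffusion_map(X, eps, n_comp=1, t=1):
    # Gaussian kernel -> row-stochastic Markov matrix -> embed with the
    # leading non-trivial eigenvectors scaled by eigenvalue**t, so that
    # Euclidean distance in the embedding approximates diffusion
    # distance on the original data.
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / eps)
    P = K / K.sum(axis=1, keepdims=True)
    w, V = np.linalg.eig(P)
    order = np.argsort(-w.real)
    w, V = w.real[order], V.real[:, order]
    # Column 0 is the trivial constant eigenvector (eigenvalue 1).
    return V[:, 1 : n_comp + 1] * w[1 : n_comp + 1] ** t

# Two well-separated clusters: the first diffusion coordinate splits
# them, one cluster mapping to positive values and the other negative.
rng = np.random.default_rng(5)
X = np.vstack([0.3 * rng.standard_normal((10, 2)),
               0.3 * rng.standard_normal((10, 2)) + [3.0, 0.0]])
emb = diffusion_map(X, eps=1.0)
s = np.sign(emb[:, 0])
print(s[:10].sum(), s[10:].sum())
```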
Application of Subspace Clustering in DNA Sequence Analysis.
Wallace, Tim; Sekmen, Ali; Wang, Xiaofei
2015-10-01
Identification and clustering of orthologous genes plays an important role in developing evolutionary models such as validating convergent and divergent phylogeny and predicting functional proteins in newly sequenced species of unverified nucleotide protein mappings. Here, we introduce an application of subspace clustering as applied to orthologous gene sequences and discuss the initial results. The working hypothesis is based upon the concept that genetic changes between nucleotide sequences coding for proteins among selected species and groups may lie within a union of subspaces for clusters of the orthologous groups. Estimates for the subspace dimensions were computed for a small population sample. A series of experiments was performed to cluster randomly selected sequences. The experimental design allows for both false positives and false negatives, and estimates for the statistical significance are provided. The clustering results are consistent with the main hypothesis. A simple random mutation binary tree model is used to simulate speciation events that show the interdependence of the subspace rank versus time and mutation rates. The simple mutation model is found to be largely consistent with the observed subspace clustering singular value results. Our study indicates that the subspace clustering method may be applied in orthology analysis. PMID:26162018
Preconditioning Strategy in Stem Cell Transplantation Therapy
Yu, Shan Ping; Wei, Zheng; Wei, Ling
2013-01-01
Stem cell transplantation therapy has emerged as a promising regenerative medicine for ischemic stroke and other neurodegenerative disorders. However, many issues and problems remain to be resolved before successful clinical applications of the cell-based therapy. To this end, some recent investigations have sought to benefit from well-known mechanisms of ischemic/hypoxic preconditioning. Ischemic/hypoxic preconditioning activates endogenous defense mechanisms that show marked protective effects against multiple insults found in ischemic stroke and other acute attacks. As in many other cell types, a sub-lethal hypoxic exposure significantly increases the tolerance and regenerative properties of stem cells and progenitor cells. So far, a variety of preconditioning triggers have been tested on different stem cells and progenitor cells. Preconditioned stem cells and progenitors generally show much better cell survival, increased neuronal differentiation, enhanced paracrine effects leading to increased trophic support, and improved homing to the lesion site. Transplantation of preconditioned cells helps to suppress inflammatory factors and immune responses, and promote functional recovery. Although the preconditioning strategy in stem cell therapy is still an emerging research area, accumulating information from reports over the last few years already indicates it as an attractive, if not essential, prerequisite for transplanted cells. It is expected that stem cell preconditioning and its clinical applications will attract more attention both in the basic research field of preconditioning and in the field of stem cell translational research. This review summarizes the most important findings in this active research area, covering the preconditioning triggers, potential mechanisms, mediators, and functional benefits for stem cell transplant therapy. PMID:23914259
Global and Local Sparse Subspace Optimization for Motion Segmentation
NASA Astrophysics Data System (ADS)
Yang, M. Ying; Feng, S.; Ackermann, H.; Rosenhahn, B.
2015-08-01
In this paper, we propose a new framework for segmenting feature-based moving objects under the affine subspace model. Since feature trajectories in practice are high-dimensional and contain a lot of noise, we first apply sparse PCA to represent the original trajectories with a low-dimensional global subspace, which consists of the orthogonal sparse principal vectors. Subsequently, local subspace separation is achieved by automatically searching for the sparse representation of the nearest neighbors of each projected datum. To refine the local subspace estimate, we propose an error estimation that encourages projected data spanning the same local subspace to be clustered together. Finally, the segmentation of different motions is achieved through spectral clustering on an affinity matrix constructed from both the error estimation and the sparse neighbor optimization. We test our method extensively and compare it with state-of-the-art methods on the Hopkins 155 dataset. The results show that our method is comparable with other motion segmentation methods and in many cases exceeds them in terms of precision and computation time.
Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.
1996-01-01
We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.
Newton-Krylov-Schwarz algorithms for the 2D full potential equation
Cai, Xiao-Chuan; Gropp, W.D.; Keyes, D.E.
1996-12-31
We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The main algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, can be made robust for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report favorable choices for numerical convergence rate and overall execution time on a distributed-memory parallel computer.
40 CFR 80.52 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... accordance with the “General vehicle handling requirements” per 40 CFR 86.132-96, up to and including the completion of the hot start exhaust test. (b) The preconditioning procedure prescribed at 40 CFR...
40 CFR 80.52 - Vehicle preconditioning.
Code of Federal Regulations, 2011 CFR
2011-07-01
... accordance with the “General vehicle handling requirements” per 40 CFR 86.132-96, up to and including the completion of the hot start exhaust test. (b) The preconditioning procedure prescribed at 40 CFR...
A Newton-Krylov Solver for Implicit Solution of Hydrodynamics in Core Collapse Supernovae
Reynolds, D R; Swesty, F D; Woodward, C S
2008-06-12
This paper describes an implicit approach and nonlinear solver for solution of radiation-hydrodynamic problems in the context of supernovae and proto-neutron star cooling. The robust approach applies Newton-Krylov methods and overcomes the difficulties of discontinuous limiters in the discretized equations and scaling of the equations over wide ranges of physical behavior. We discuss these difficulties, our approach for overcoming them, and numerical results demonstrating accuracy and efficiency of the method.
Laser thermal preconditioning enhances dermal wound repair
NASA Astrophysics Data System (ADS)
Wilmink, Gerald J.; Carter, Terry; Davidson, Jeffrey M.; Jansen, E. Duco
2008-02-01
Preconditioning tissues with an initial mild thermal stress, thereby eliciting a stress response, can serve to protect tissue from subsequent stresses. Patients at risk for impaired healing, such as diabetics, can benefit from therapeutic methods which enhance wound repair. We present a laser thermal preconditioning protocol that accelerates cutaneous wound repair in a murine model. A pulsed diode laser (λ = 1.86 μm, τp = 2 ms, 50 Hz, H = 7.64 mJ/cm2) was used to precondition mouse skin before incisional wounds were made. The preconditioning protocol was optimized in vitro and in vivo using hsp70 expression, cell viability, and temperature measurements as benchmarks. Hsp70 expression was non-invasively monitored using a transgenic mouse strain with the hsp70 promoter driving luciferase expression. Tissue temperature recordings were acquired in real time using an infrared camera. Wound repair was assessed by measuring hsp70 expression, biomechanical properties, and wound histology for up to 24 d. Bioluminescence (BLI) was monitored with the IVIS 200 System (Xenogen) and tensile properties with a tensiometer (BTC-2000). The in vivo BLI studies indicated that the optimized laser preconditioning protocol increased hsp70 expression by 15-fold. The tensiometer data revealed that laser-preconditioned wounds are ~40% stronger than control wounds at 10 days post surgery. Similar experiments in a diabetic mouse model also enhanced wound repair strength. These results indicate that 1) noninvasive imaging methods can aid in the optimization of novel laser preconditioning methods; and 2) optimized preconditioning with a 1.86 μm diode laser enhances early wound repair.
Bradykinin mediates cardiac preconditioning at a distance.
Schoemaker, R G; van Heijningen, C L
2000-05-01
Preconditioning the heart by brief coronary (CAO) or mesenteric artery occlusion (MAO) can protect against damage during subsequent prolonged CAO and reperfusion. The role of bradykinin (BK) in remote cardiac preconditioning by MAO is investigated by antagonizing the BK B(2) receptor [Hoechst 140 (HOE-140)] or simulating local BK release by mesenteric intra-arterial infusion. Anesthetized male Wistar rats (n = 6-8) were treated with HOE-140 or saline before starting the preconditioning protocol, CAO, MAO, or non-preconditioned control. Infarct size related to risk area [ratio of infarct area to area at risk (IA/AR)] was determined after 3 h of reperfusion following a 60-min CAO. IA/AR was 62 +/- 5% in controls and not affected by HOE-140 (58 +/- 6%). CAO as well as MAO significantly protected the heart (IA/AR, 37 +/- 3 and 35 +/- 5%), which was prevented by HOE-140 (IA/AR, 71 +/- 6 and 65 +/- 7%, respectively). Brief intramesenteric BK infusion mimicked MAO (IA/AR, 26 +/- 3%). Pretreatment with hexamethonium could abolish this protection (IA/AR, 67 +/- 4%). These data indicate an important role for BK in remote preconditioning by MAO. Results support the hypothesis that remote preconditioning acts through sensory nerve stimulation in the ischemic organ. PMID:10775135
Selective control of the symmetric Dicke subspace in trapped ions
Lopez, C. E.; Retamal, J. C.; Solano, E.
2007-09-15
We propose a method of manipulating selectively the symmetric Dicke subspace in the internal degrees of freedom of N trapped ions. We show that the direct access to ionic-motional subspaces, based on a suitable tuning of motion-dependent ac Stark shifts, induces a two-level dynamics involving previously selected ionic Dicke states. In this manner, it is possible to produce, sequentially and unitarily, ionic Dicke states with increasing excitation number. Moreover, we propose a probabilistic technique to produce directly any ionic Dicke state assuming suitable initial conditions.
Decoherence free subspaces of a quantum Markov semigroup
Agredo, Julián; Fagnola, Franco; Rebolledo, Rolando
2014-11-15
We give a full characterisation of decoherence-free subspaces of a given quantum Markov semigroup with generator in a generalised Lindblad form, which is also valid for infinite-dimensional systems. Our results, extending those available in the literature concerning finite-dimensional systems, are illustrated by some examples.
Computational Complexity of Subspace Detectors and Matched Field Processing
Harris, D B
2010-12-01
Subspace detectors implement a correlation type calculation on a continuous (network or array) data stream [Harris, 2006]. The difference between subspace detectors and correlators is that the former projects the data in a sliding observation window onto a basis of template waveforms that may have a dimension (d) greater than one, and the latter projects the data onto a single waveform template. A standard correlation detector can be considered to be a degenerate (d=1) form of a subspace detector. Figure 1 below shows a block diagram for the standard formulation of a subspace detector. The detector consists of multiple multichannel correlators operating on a continuous data stream. The correlation operations are performed with FFTs in an overlap-add approach that allows the stream to be processed in uniform, consecutive, contiguous blocks. Figure 1 is slightly misleading for a calculation of computational complexity, as it is possible, when treating all channels with the same weighting (as shown in the figure), to perform the indicated summations in the multichannel correlators before the inverse FFTs and to get by with a single inverse FFT and overlap add calculation per multichannel correlator. In what follows, we make this simplification.
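The projection at the heart of a subspace detector can be sketched compactly. The statistic below (fraction of sliding-window energy captured by an orthonormal template basis of dimension d) is a simplified time-domain stand-in for the FFT overlap-add formulation described above; the signal lengths and test data are invented for illustration.

```python
import numpy as np

def subspace_detector(stream, basis):
    """Sliding-window subspace detection statistic.

    stream: 1-D continuous data stream
    basis:  (w, d) orthonormal template basis (d template waveforms of length w)
    Returns c[k], the fraction of window energy captured by the template
    subspace; a correlation detector is the degenerate d = 1 case.
    """
    w, d = basis.shape
    n = len(stream) - w + 1
    c = np.zeros(n)
    for k in range(n):
        x = stream[k:k + w]
        proj = basis @ (basis.T @ x)    # orthogonal projection onto the templates
        c[k] = (proj @ proj) / (x @ x)  # captured energy / total energy, in [0, 1]
    return c

# Toy check: a stream containing one of the templates peaks at its offset.
rng = np.random.default_rng(0)
templates, _ = np.linalg.qr(rng.standard_normal((64, 2)))  # orthonormal, d = 2
stream = 0.01 * rng.standard_normal(512)
stream[100:164] += templates[:, 0]                         # embed template 0
stat = subspace_detector(stream, templates)
```

The production formulation replaces the explicit loop with FFT-based correlation over uniform blocks, which is where the complexity analysis above applies.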
Minimal residual method stronger than polynomial preconditioning
Faber, V.; Joubert, W.; Knill, E.
1994-12-31
Two popular methods for solving symmetric and nonsymmetric systems of equations are the minimal residual method, implemented by algorithms such as GMRES, and polynomial preconditioning methods. In this study results are given on the convergence rates of these methods for various classes of matrices. It is shown that for some matrices, such as normal matrices, the convergence rates for GMRES and for the optimal polynomial preconditioning are the same, and for other matrices such as the upper triangular Toeplitz matrices, it is at least assured that if one method converges then the other must converge. On the other hand, it is shown that matrices exist for which restarted GMRES always converges but any polynomial preconditioning of corresponding degree makes no progress toward the solution for some initial error. The implications of these results for these and other iterative methods are discussed.
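As a concrete, illustrative instance of the two method families compared above, the sketch below applies a truncated Neumann-series polynomial p(A) = I + N + N² as a preconditioner for restarted GMRES on an upper triangular Toeplitz system A = I − N (one of the matrix classes discussed in the abstract). The matrix, degree, and sizes are invented for the demonstration.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

# Upper triangular Toeplitz system A = I - N.
n = 200
N = diags([0.4 * np.ones(n - 1)], [1])
A = diags([np.ones(n)], [0]) - N
b = np.ones(n)

# Polynomial preconditioner: truncated Neumann series p(A) = I + N + N^2,
# an approximation to A^{-1} = I + N + N^2 + ...
def poly_prec(x):
    y = N @ np.ravel(x)
    return np.ravel(x) + y + N @ y

M = LinearOperator((n, n), matvec=poly_prec)

x_plain, info_plain = gmres(A, b, restart=5, maxiter=200)
x_prec, info_prec = gmres(A, b, M=M, restart=5, maxiter=200)
```

With the preconditioner, GMRES effectively iterates on I − N³, so each restart cycle gains roughly three powers of N over the unpreconditioned run.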
Preconditioning the Helmholtz Equation for Rigid Ducts
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.; Kreider, Kevin L.
1998-01-01
An innovative hyperbolic preconditioning technique is developed for the numerical solution of the Helmholtz equation which governs acoustic propagation in ducts. Two pseudo-time parameters are used to produce an explicit iterative finite difference scheme. This scheme eliminates the large matrix storage requirements normally associated with numerical solutions to the Helmholtz equation. The solution procedure is very fast when compared to other transient and steady methods. Optimization and an error analysis of the preconditioning factors are presented. For validation, the method is applied to sound propagation in a 2D semi-infinite hard wall duct.
Solving Nonlinear Solid Mechanics Problems with the Jacobian-Free Newton Krylov Method
J. D. Hales; S. R. Novascone; R. L. Williamson; D. R. Gaston; M. R. Tonks
2012-06-01
The solution of the equations governing solid mechanics is often obtained via Newton's method. This approach can be problematic if the determination, storage, or solution cost associated with the Jacobian is high. These challenges are magnified for multiphysics applications with many coupled variables. Jacobian-free Newton-Krylov (JFNK) methods avoid many of the difficulties associated with the Jacobian by using a finite difference approximation. BISON is a parallel, object-oriented, nonlinear solid mechanics and multiphysics application that leverages JFNK methods. We overview JFNK, outline the capabilities of BISON, and demonstrate the effectiveness of JFNK for solid mechanics and solid mechanics coupled to other PDEs using a series of demonstration problems.
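The finite-difference trick at the core of JFNK, and an end-to-end solve, can be sketched as follows. The residual function and tolerances are invented for illustration, and SciPy's `newton_krylov` is used as a generic stand-in for the solvers in BISON.

```python
import numpy as np
from scipy.optimize import newton_krylov

def jacobian_vector(F, u, v, eps=1e-7):
    """The JFNK kernel: J(u) v approximated without ever forming J."""
    return (F(u + eps * v) - F(u)) / eps

# A small decoupled nonlinear residual, invented for illustration:
# F_i(u) = u_i^3 + 2 u_i - 1 - i/10 = 0.
def F(u):
    return u**3 + 2.0 * u - 1.0 - np.arange(len(u)) / 10.0

# newton_krylov performs exactly this scheme: Newton outer iterations with
# a Krylov inner solve whose matrix-vector products are finite differences.
u = newton_krylov(F, np.zeros(50), f_tol=1e-9)
```

Because only residual evaluations are needed, the same code applies unchanged when F couples many physics variables, which is the multiphysics advantage noted above.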
Using generalized Cayley transformations within an inexact rational Krylov sequence method.
Lehoucq, R. B.; Meerbergen, K.; Mathematics and Computer Science; Utrecht Univ.
1999-01-01
The rational Krylov sequence (RKS) method is a generalization of Arnoldi's method. It constructs an orthogonal reduction of a matrix pencil into an upper Hessenberg pencil. The RKS method is useful when the matrix pencil may be efficiently factored. This article considers approximately solving the resulting linear systems with iterative methods. We show that a Cayley transformation leads to a more efficient and robust eigensolver than the usual shift-invert transformation when the linear systems are solved inexactly within the RKS method. A relationship with the recently introduced Jacobi--Davidson method is also established.
NASA Astrophysics Data System (ADS)
Hayes, Charles E.; McClellan, James H.; Scott, Waymond R.; Kerr, Andrew J.
2016-05-01
This work introduces two advances in wide-band electromagnetic induction (EMI) processing: a novel adaptive matched filter (AMF) and matched subspace detection methods. Both advances make use of recent work with a subspace SVD approach to separating the signal, soil, and noise subspaces of the frequency measurements. The proposed AMF provides a direct approach to removing the EMI self-response while improving the signal-to-noise ratio of the data. Unlike previous EMI adaptive downtrack filters, this new filter will not erroneously optimize the EMI soil response instead of the EMI target response, because these two responses are projected into separate frequency subspaces. The EMI detection methods in this work elaborate on how the signal and noise subspaces in the frequency measurements are ideal for creating the matched subspace detection (MSD) and constant false alarm rate matched subspace detection (CFAR) metrics developed by Scharf. The CFAR detection metric has been shown to be the uniformly most powerful invariant detector.
NASA Astrophysics Data System (ADS)
Aguiar, Manuela A. D.; Dias, Ana Paula S.
2014-12-01
Coupled cell systems are networks of dynamical systems (the cells), where the links between the cells are described through the network structure, the coupled cell network. Synchrony subspaces are spaces defined in terms of equalities of certain cell coordinates that are flow-invariant for all coupled cell systems associated with a given network structure. The intersection of synchrony subspaces of a network is also a synchrony subspace of the network. It follows, then, that, given a coupled cell network, its set of synchrony subspaces, taking the inclusion partial order relation, forms a lattice. In this paper we show how to obtain the lattice of synchrony subspaces for a general network and present an algorithm that generates that lattice. We prove that this problem is reduced to obtain the lattice of synchrony subspaces for regular networks. For a regular network we obtain the lattice of synchrony subspaces based on the eigenvalue structure of the network adjacency matrix.
Resveratrol and ischemic preconditioning in the brain.
Raval, Ami P; Lin, Hung Wen; Dave, Kunjan R; Defazio, R Anthony; Della Morte, David; Kim, Eun Joo; Perez-Pinzon, Miguel A
2008-01-01
Cardiovascular pathologies in the French are not prevalent despite high dietary saturated fat consumption. This is commonly referred to as the "French Paradox," with the anti-lipidemic effects attributed to moderate consumption of red wine. Resveratrol, a phytoalexin found in red wine, is currently the focus of intense research both in the cardiovascular system and the brain. Current research suggests resveratrol may improve the prognosis of neurological disorders such as Parkinson's, Huntington's, and Alzheimer's diseases and stroke. The beneficial effects of resveratrol include antioxidation, free-radical scavenging, and modulation of neuronal energy homeostasis and glutamatergic receptors/ion channels. Resveratrol directly increases sirtuin 1 (SIRT1) activity, a NAD(+) (oxidized form of nicotinamide adenine dinucleotide)-dependent histone deacetylase related to increased lifespan in various species, similar to calorie restriction. We recently demonstrated that brief resveratrol pretreatment conferred neuroprotection against cerebral ischemia via SIRT1 activation. This neuroprotective effect produced by resveratrol was similar to ischemic preconditioning-induced neuroprotection, which protects against lethal ischemic insults in the brain and other organ systems. Inhibition of SIRT1 abolished ischemic preconditioning-induced neuroprotection in the CA1 region of the hippocampus. Since resveratrol- and ischemic preconditioning-induced neuroprotection require activation of SIRT1, this common signaling pathway may provide targeted therapeutic treatment modalities as it relates to stroke and other brain pathologies. In this review, we will examine common signaling pathways, cellular targets of resveratrol, and ischemic preconditioning-induced neuroprotection as it relates to the brain. PMID:18537630
Health and Nutrition: Preconditions for Educational Achievement.
ERIC Educational Resources Information Center
Negussie, Birgit
This paper discusses the importance of maternal and infant health for children's educational achievement. Education, health, and nutrition are so closely related that changes in one causes changes in the others. Improvement of maternal and preschooler health and nutrition is a precondition for improved educational achievement. Although parental…
Preconditioning matrices for Chebyshev derivative operators
NASA Technical Reports Server (NTRS)
Rothman, Ernest E.
1986-01-01
The problem of preconditioning the matrices arising from pseudo-spectral Chebyshev approximations of first order operators is considered in both one and two dimensions. In one dimension a preconditioner represented by a full matrix, which leads to preconditioned eigenvalues that are real, positive, and lie between 1 and π/2, is already available. Since there are cases in which it is not computationally convenient to work with such a preconditioner, a large number of preconditioners were studied which were more sparse (in particular three- and four-diagonal matrices). The eigenvalues of such preconditioned matrices are compared. The results were applied to the problem of finding the steady state solution to an equation of the type u_t = u_x + f, where Chebyshev collocation is used for the spatial variable and time discretization is performed by the Richardson method. In two dimensions different preconditioners are proposed for the matrix which arises from the pseudo-spectral discretization of the steady state problem. Results are given for the CPU time and the number of iterations using a Richardson iteration method for the unpreconditioned and preconditioned cases.
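The steady-state iteration referred to above is, generically, a preconditioned Richardson method. A minimal sketch follows, with an invented SPD test matrix and a Jacobi (diagonal) preconditioner standing in for the Chebyshev-specific preconditioners of the abstract.

```python
import numpy as np

def richardson(A, b, Minv, omega, iters):
    """Preconditioned Richardson iteration: x <- x + omega * Minv(b - A x).
    Converges when the eigenvalues of Minv A lie in (0, 2/omega)."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + omega * Minv(b - A @ x)
    return x

# Invented test problem: 1-D Laplacian with a Jacobi preconditioner.
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
d = np.diag(A)
x = richardson(A, b, lambda r: r / d, omega=0.9, iters=5000)
```

The tighter the preconditioned spectrum (e.g., the [1, π/2] interval quoted above), the larger the admissible ω and the faster the iteration.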
Revealing Preconditions for Trustful Collaboration in CSCL
ERIC Educational Resources Information Center
Gerdes, Anne
2010-01-01
This paper analyses preconditions for trust in virtual learning environments. The concept of trust is discussed with reference to cases reporting trust in cyberspace and through a philosophical clarification holding that trust in the form of self-surrender is a common characteristic of all human co-existence. In virtual learning environments,…
40 CFR 1065.518 - Engine preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., such as with a diesel engine that relies on urea-based selective catalytic reduction. Note that § 1065... cycle specified in 40 CFR 1039.505(b)(1), the second half of the cycle consists of modes three through... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Engine preconditioning....
Smooth local subspace projection for nonlinear noise reduction
Chelidze, David
2014-03-15
Many nonlinear or chaotic time series exhibit an innate broad spectrum, which makes noise reduction difficult. Local projective noise reduction is one of the most effective tools. It is based on proper orthogonal decomposition (POD) and works for both map-like and continuously sampled time series. However, POD only looks at geometrical or topological properties of data and does not take into account the temporal characteristics of time series. Here, we present a new smooth projective noise reduction method. It uses smooth orthogonal decomposition (SOD) of bundles of reconstructed short-time trajectory strands to identify smooth local subspaces. Restricting trajectories to these subspaces imposes temporal smoothness on the filtered time series. It is shown that SOD-based noise reduction significantly outperforms the POD-based method for continuously sampled noisy time series.
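A drastically simplified, global (rather than local) POD-projection filter conveys the basic idea of projective noise reduction on delay-embedded strands. The window length, rank, and test signal below are invented for illustration; the SOD method described above would replace the SVD step with a smooth orthogonal decomposition of the strand bundles.

```python
import numpy as np

def pod_projective_filter(x, window=10, rank=2):
    """Global POD projection of a delay-embedded series: a simplified
    stand-in for the local projective filtering described in the text."""
    n = len(x) - window + 1
    X = np.lib.stride_tricks.sliding_window_view(x, window)  # (n, window) strands
    U, s, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
    Xf = (U[:, :rank] * s[:rank]) @ Vt[:rank] + X.mean(0)    # rank-r projection
    # Average the overlapping window estimates back into one series.
    y = np.zeros(len(x)); w = np.zeros(len(x))
    for i in range(n):
        y[i:i + window] += Xf[i]; w[i:i + window] += 1
    return y / w

rng = np.random.default_rng(1)
t = np.linspace(0, 20 * np.pi, 2000)
clean = np.sin(t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
filtered = pod_projective_filter(noisy, window=20, rank=2)
```

A sinusoid is exactly rank-2 in delay coordinates, so the projection keeps the signal while discarding most of the noise energy spread over the remaining dimensions.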
Low complex subspace minimum variance beamformer for medical ultrasound imaging.
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh
2016-03-01
Minimum variance (MV) beamforming enhances the resolution and contrast in medical ultrasound imaging at the expense of higher computational complexity with respect to the non-adaptive delay-and-sum beamformer. The major complexity arises from the estimation of the L×L array covariance matrix using spatial averaging, which is required for a more accurate estimation of the covariance matrix of correlated signals, and from its inversion, which is required for calculating the MV weight vector; these costs are as high as O(L^2) and O(L^3), respectively. Reducing the number of array elements decreases the computational complexity but degrades the imaging resolution. In this paper, we propose a subspace MV beamformer which preserves the advantages of the MV beamformer with lower complexity. The subspace MV neglects some rows of the array covariance matrix instead of reducing the array size. If we keep η rows of the array covariance matrix, which leads to a thin non-square matrix, the weight vector of the subspace beamformer can be obtained in the same way as in the MV beamformer, with complexity as low as O(η^2 L). Further computation is saved because an η×L covariance matrix must be estimated instead of an L×L one. We simulated a wire-target phantom and a cyst phantom to evaluate the performance of the proposed beamformer. The results indicate that we can keep about 16 of the 43 rows of the array covariance matrix, which reduces the complexity to 14% while the image resolution is still comparable to that of the standard MV beamformer. We also applied the proposed method to experimental RF data and showed that the subspace MV beamformer performs like the standard MV with lower computational complexity. PMID:26678788
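For reference, the standard MV (Capon) weight computation that the subspace method approximates is sketched below on a small invented narrowband example. The diagonal loading and the pre-steered unit steering vector are illustrative choices; the O(L³) cost referred to above sits in the linear solve with the L×L covariance matrix.

```python
import numpy as np

def mv_weights(R, a, loading=1e-3):
    """Minimum-variance (Capon) weights w = R^{-1} a / (a^H R^{-1} a),
    with diagonal loading for numerical stability."""
    Rl = R + loading * np.trace(R) / len(R) * np.eye(len(R))
    Ria = np.linalg.solve(Rl, a)          # the O(L^3) step
    return Ria / (a.conj() @ Ria)

# Toy example: L-element array, pre-steered look direction a = ones,
# one strong interferer with signature v, weak sensor noise.
L = 16
a = np.ones(L)
v = np.exp(1j * np.pi * np.arange(L) * 0.5)
rng = np.random.default_rng(2)
snaps = (rng.standard_normal((200, L)) * 0.1       # noise snapshots
         + rng.standard_normal((200, 1)) * v)      # interferer snapshots
R = snaps.conj().T @ snaps / 200                   # sample covariance
w = mv_weights(R, a)
```

The weights pass the look direction with unit gain while placing a deep null on the interferer, which is the resolution/contrast advantage the abstract trades against complexity.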
Condition number estimation of preconditioned matrices.
Kushida, Noriyuki
2015-01-01
The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, while the conventional method, which is based on the Lanczos connection, gives meaningless results. The Lanczos connection based method provides the condition numbers of coefficient matrices of systems of linear equations with information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Operating a preconditioner on a coefficient matrix is the simplest method of estimation. However, this is not possible for large-scale computing, especially if computation is performed on distributed-memory parallel computers. This is because the preconditioned matrices become dense, even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered applicable to large-scale problems because of its weakness with respect to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager's method. Feasibility studies are carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tridiagonal matrix, and Pei's matrix. As a result, the Lanczos connection method contains around 10% error in the results even for a simple problem. On the other hand, the new method contains negligible errors. In addition, the newly developed method returns reasonable solutions when the Lanczos connection method fails with Pei's matrix and matrices generated with the finite element method. PMID:25816331
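SciPy's `onenormest` implements the Hager/Higham 1-norm estimator that the abstract builds on. The sketch below estimates the 1-norm condition number of a Jacobi-preconditioned matrix D⁻¹A without ever forming it, accessing it only through matrix-vector products; the tridiagonal test matrix and the use of a sparse LU for the inverse action are invented for the demonstration.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, onenormest, splu

n = 400
A = diags([-np.ones(n - 1), 2.01 * np.ones(n), -np.ones(n - 1)],
          [-1, 0, 1], format='csc')
d = A.diagonal()                       # Jacobi preconditioner M = D
lu = splu(A)

# The preconditioned matrix B = D^{-1} A and its inverse A^{-1} D,
# both available only through their action on vectors.
B = LinearOperator((n, n),
                   matvec=lambda x: (A @ np.ravel(x)) / d,
                   rmatvec=lambda x: A.T @ (np.ravel(x) / d))
Binv = LinearOperator((n, n),
                      matvec=lambda x: lu.solve(d * np.ravel(x)),
                      rmatvec=lambda x: d * lu.solve(np.ravel(x), trans='T'))

cond_est = onenormest(B) * onenormest(Binv)
```

Because only products with B and B⁻¹ (and their transposes) are needed, the estimator parallelizes over the same kernels as the iterative solver itself, which is the point made above about distributed-memory computing.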
Globalized Newton-Krylov-Schwarz Algorithms and Software for Parallel Implicit CFD
NASA Technical Reports Server (NTRS)
Gropp, W. D.; Keyes, D. E.; McInnes, L. C.; Tidriri, M. D.
1998-01-01
Implicit solution methods are important in applications modeled by PDEs with disparate temporal and spatial scales. Because such applications require high resolution with reasonable turnaround, "routine" parallelization is essential. The pseudo-transient matrix-free Newton-Krylov-Schwarz (Psi-NKS) algorithmic framework is presented as an answer. We show that, for the classical problem of three-dimensional transonic Euler flow about an M6 wing, Psi-NKS can simultaneously deliver: globalized, asymptotically rapid convergence through adaptive pseudo-transient continuation and Newton's method; reasonable parallelizability for an implicit method through deferred synchronization and favorable communication-to-computation scaling in the Krylov linear solver; and high per-processor performance through attention to distributed memory and cache locality, especially through the Schwarz preconditioner. Two discouraging features of Psi-NKS methods are their sensitivity to the coding of the underlying PDE discretization and the large number of parameters that must be selected to govern convergence. We therefore distill several recommendations from our experience and from our reading of the literature on various algorithmic components of Psi-NKS, and we describe a freely available, MPI-based portable parallel software implementation of the solver employed here.
Krylov iterative methods and synthetic acceleration for transport in binary statistical media
Fichtl, Erin D; Warsa, James S; Prinja, Anil K
2008-01-01
In particle transport applications there are numerous physical constructs in which heterogeneities are randomly distributed. The quantity of interest in these problems is the ensemble average of the flux, or the average of the flux over all possible material 'realizations.' The Levermore-Pomraning closure assumes Markovian mixing statistics and allows a closed, coupled system of equations to be written for the ensemble averages of the flux in each material. Generally, binary statistical mixtures are considered in which there are two (homogeneous) materials and corresponding coupled equations. The solution process is iterative, but convergence may be slow as either or both materials approach the diffusion and/or atomic mix limits. A three-part acceleration scheme is devised to expedite convergence, particularly in the atomic mix-diffusion limit where computation is extremely slow. The iteration is first divided into a series of 'inner' material and source iterations to attenuate the diffusion and atomic mix error modes separately. Secondly, atomic mix synthetic acceleration is applied to the inner material iteration and S_2 synthetic acceleration to the inner source iterations to offset the cost of doing several inner iterations per outer iteration. Finally, a Krylov iterative solver is wrapped around each iteration, inner and outer, to further expedite convergence. A spectral analysis is conducted and iteration counts and computing cost for the new two-step scheme are compared against those for a simple one-step iteration, to which a Krylov iterative method can also be applied.
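The "Krylov solver wrapped around an iteration" step works for any linear fixed-point sweep: writing the stationary scheme x ← Gx + c as the system (I − G)x = c lets GMRES accelerate it matrix-free. The operator below is an invented toy, symmetric with spectral radius 0.99, mimicking the slow convergence typical of the diffusion limit mentioned above.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Invented toy sweep operator G with eigenvalues in [0, 0.99]: the plain
# stationary iteration x <- G x + c converges at rate 0.99 (very slow).
n = 300
rng = np.random.default_rng(3)
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
G = (V * np.linspace(0.0, 0.99, n)) @ V.T
c = rng.standard_normal(n)

# GMRES applied to (I - G) x = c needs only the ability to apply the sweep.
op = LinearOperator((n, n), matvec=lambda x: np.ravel(x) - G @ np.ravel(x))
x, info = gmres(op, c, restart=40, maxiter=50)
```

The Krylov solve builds a near-optimal polynomial in G instead of the fixed geometric series of the stationary iteration, which is why wrapping each inner and outer iteration pays off.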
Relations Among Some Low-Rank Subspace Recovery Models.
Zhang, Hongyang; Lin, Zhouchen; Zhang, Chao; Gao, Junbin
2015-09-01
Recovering intrinsic low-dimensional subspaces from data distributed on them is a key preprocessing step to many applications. In recent years, a lot of work has modeled subspace recovery as low-rank minimization problems. We find that some representative models, such as robust principal component analysis (R-PCA), robust low-rank representation (R-LRR), and robust latent low-rank representation (R-LatLRR), are actually deeply connected. More specifically, we discover that once a solution to one of the models is obtained, we can obtain the solutions to other models in closed-form formulations. Since R-PCA is the simplest, our discovery makes it the center of low-rank subspace recovery models. Our work has two important implications. First, R-PCA has a solid theoretical foundation. Under certain conditions, we could find globally optimal solutions to these low-rank models with overwhelming probability, although these models are nonconvex. Second, we can obtain significantly faster algorithms for these models by solving R-PCA first. The computation cost can be further cut by applying low-complexity randomized algorithms, for example, our novel l2,1 filtering algorithm, to R-PCA. Although for the moment the formal proof of our l2,1 filtering algorithm is not yet available, experiments verify the advantages of our algorithm over other state-of-the-art methods based on the alternating direction method. PMID:26161818
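To make R-PCA's central role concrete, here is a minimal ADMM solver for principal component pursuit, min ‖L‖* + λ‖S‖₁ subject to L + S = X. The parameter choices follow the common λ = 1/√max(m, n) heuristic and the data are synthetic; this is a sketch of standard R-PCA via the alternating direction method, not of the l2,1 filtering algorithm proposed in the abstract.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def shrink(X, tau):
    """Soft thresholding: prox of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

def rpca(X, lam=None, mu=None, iters=200):
    """R-PCA by ADMM: min ||L||_* + lam ||S||_1  s.t.  L + S = X."""
    m, n = X.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / np.abs(X).sum()
    L = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(iters):
        L = svt(X - S + Y / mu, 1.0 / mu)      # low-rank update
        S = shrink(X - L + Y / mu, lam / mu)   # sparse update
        Y = Y + mu * (X - L - S)               # dual ascent
    return L, S

# Synthetic low-rank + sparse data.
rng = np.random.default_rng(4)
L0 = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 60))
S0 = np.zeros((60, 60))
idx = rng.random((60, 60)) < 0.05
S0[idx] = 10 * rng.standard_normal(idx.sum())
L, S = rpca(L0 + S0)
```

Per the abstract's closed-form connections, the (L, S) pair recovered here could then be mapped to solutions of R-LRR or R-LatLRR without re-solving those models.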
Subspace-based Inverse Uncertainty Quantification for Nuclear Data Assessment
NASA Astrophysics Data System (ADS)
Khuwaileh, B. A.; Abdel-Khalik, H. S.
2015-01-01
Safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. An inverse problem can be defined and solved to assess the sources of uncertainty, and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources. In this work a subspace-based algorithm for inverse sensitivity/uncertainty quantification (IS/UQ) has been developed to enable analysts to account for all sources of nuclear data uncertainties in support of target accuracy assessment-type analysis. An approximate analytical solution of the optimization problem is used to guide the search for the dominant uncertainty subspace. By limiting the search to a subspace, the degrees of freedom available for the optimization search are significantly reduced. A quarter PWR fuel assembly is modeled and the accuracy of the multiplication factor and the fission reaction rate are used as reactor attributes whose uncertainties are to be reduced. Numerical experiments are used to demonstrate the computational efficiency of the proposed algorithm. Our ongoing work is focusing on extending the proposed algorithm to account for various forms of feedback, e.g., thermal-hydraulics and depletion effects.
Improved Stochastic Subspace System Identification for Structural Health Monitoring
NASA Astrophysics Data System (ADS)
Chang, Chia-Ming; Loh, Chin-Hsiung
2015-07-01
Structural health monitoring acquires structural information through numerous sensor measurements. Vibrational measurement data allow the dynamic characteristics of structures to be extracted, in particular the modal properties such as natural frequencies, damping, and mode shapes. Stochastic subspace system identification has been recognized as a powerful tool that can represent a structure in modal coordinates. To obtain high-quality identification results, this tool requires considerable computation on a large set of measurements. In this study, a stochastic system identification framework is proposed to improve the efficiency and quality of the conventional stochastic subspace system identification. This framework includes 1) measured-signal processing, 2) efficient space projection, 3) system order selection, and 4) modal property derivation. The measured-signal processing employs the singular spectrum analysis algorithm to lower the noise components as well as to present a data set in a reduced dimension. The subspace is subsequently derived from the data set presented in delayed coordinates. With the proposed order selection criteria, the number of structural modes is determined, resulting in the modal properties. This system identification framework is applied to a real-world bridge to explore its feasibility in real-time applications. The results show that the improved system identification method significantly decreases computational time, while high-quality modal parameters are still attained.
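A covariance-driven variant of stochastic subspace identification can be sketched compactly: build a Hankel matrix of output correlations, take its SVD, and read the system matrix off the shift-invariance of the observability factor. The single-channel simulation, block sizes, and model order below are invented for illustration, and the singular-spectrum preprocessing step of the proposed framework is omitted.

```python
import numpy as np

def ssi_cov(y, dt, i=20, order=2):
    """Covariance-driven stochastic subspace identification (sketch).
    y: single-channel output record; returns identified frequencies in Hz."""
    n = len(y)
    # Output correlations R_k for lags 1..2i
    R = np.array([y[:n - k] @ y[k:] / (n - k) for k in range(1, 2 * i + 1)])
    # (Scalar-block) Hankel matrix of correlations, H = O * C
    H = np.array([[R[a + b] for b in range(i)] for a in range(i)])
    U, s, Vt = np.linalg.svd(H)
    O = U[:, :order] * np.sqrt(s[:order])      # observability matrix
    A = np.linalg.pinv(O[:-1]) @ O[1:]         # shift invariance -> system matrix
    lam = np.linalg.eigvals(A)
    freqs = np.abs(np.log(lam)) / (2 * np.pi * dt)
    return np.unique(np.round(freqs, 3))

# Toy data: a lightly damped 2 Hz oscillator driven by white noise,
# x'' + 2 zeta w x' + w^2 x = noise, integrated semi-implicitly.
rng = np.random.default_rng(5)
dt = 0.01
f0, zeta = 2.0, 0.02
w = 2 * np.pi * f0
x = np.zeros(20000); v = np.zeros_like(x)
for k in range(1, len(x)):
    acc = -2 * zeta * w * v[k - 1] - w**2 * x[k - 1] + 50 * rng.standard_normal()
    v[k] = v[k - 1] + dt * acc
    x[k] = x[k - 1] + dt * v[k]
freqs = ssi_cov(x, dt)
```

The order-selection and noise-reduction stages of the proposed framework would respectively choose `order` from the singular value spectrum of H and pre-filter `y` before the correlations are formed.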
Universal Subspaces for Local Unitary Groups of Fermionic Systems
NASA Astrophysics Data System (ADS)
Chen, Lin; Chen, Jianxin; Đoković, Dragomir Ž.; Zeng, Bei
2015-01-01
Let ∧^N V be the N-fermion Hilbert space with M-dimensional single-particle space V and 2N ≤ M. We refer to the unitary group G of V as the local unitary (LU) group. We fix an orthonormal (o.n.) basis |v_1⟩, ..., |v_M⟩ of V. Then the Slater determinants |v_{i_1} ∧ ... ∧ v_{i_N}⟩ with i_1 < ... < i_N form an o.n. basis of ∧^N V. Let S be the subspace spanned by all Slater determinants such that the set {i_1, ..., i_N} contains no pair {2k-1, 2k}, k an integer. We say that these are single-occupancy states (with respect to the basis |v_1⟩, ..., |v_M⟩). We prove that for N = 3 the subspace S is universal, i.e., each G-orbit in ∧^N V meets S, and that this is false for N > 3. If M is even, the well-known BCS states are not LU-equivalent to any single-occupancy state. Our main result is that for N = 3 and M even there is a universal subspace spanned by M(M-1)(M-5)/6 single-occupancy states. Moreover, the number M(M-1)(M-5)/6 is minimal.
Self-adjoint time operators and invariant subspaces
NASA Astrophysics Data System (ADS)
Gómez, Fernando
2008-02-01
The question of existence of self-adjoint time operators for unitary evolutions in classical and quantum mechanics is revisited on the basis of Halmos-Helson theory of invariant subspaces, Sz.-Nagy-Foiaş dilation theory and Misra-Prigogine-Courbage theory of irreversibility. It is shown that the existence of self-adjoint time operators is equivalent to the intertwining property of the evolution plus the existence of simply invariant subspaces or rigid operator-valued functions for its Sz.-Nagy-Foiaş functional model. Similar equivalent conditions are given in terms of intrinsic randomness in the context of statistical mechanics. The rest of the contents are mainly a unifying review of the subject scattered throughout an unconnected literature. A well-known extensive set of equivalent conditions is derived from the above results; such conditions are written in terms of Schrödinger couples, the Weyl commutation relation, incoming and outgoing subspaces, innovation processes, Lax-Phillips scattering, translation and spectral representations, and spectral properties. Also the natural procedure dealing with symmetric time operators in standard quantum mechanics involving their self-adjoint extensions is illustrated by considering the quantum Aharonov-Bohm time-of-arrival operator.
Analysis of some large-scale nonlinear stochastic dynamic systems with subspace-EPC method
NASA Astrophysics Data System (ADS)
Er, GuoKang; Iu, VaiPan
2011-09-01
The probabilistic solutions to some nonlinear stochastic dynamic (NSD) systems with various polynomial types of nonlinearities in displacements are analyzed with the subspace-exponential polynomial closure (subspace-EPC) method. The space of the state variables of the large-scale nonlinear stochastic dynamic system excited by Gaussian white noises is separated into two subspaces. Both sides of the Fokker-Planck-Kolmogorov (FPK) equation corresponding to the NSD system are then integrated over one of the subspaces. The FPK equation for the joint probability density function of the state variables in the other subspace is formulated. Therefore, the FPK equations in low dimensions are obtained from the original FPK equation in high dimensions, and the FPK equations in low dimensions are solvable with the exponential polynomial closure method. Examples of multi-degree-of-freedom NSD systems with various polynomial types of nonlinearities in displacements are given to show the effectiveness of the subspace-EPC method in these cases.
Preconditioning Stem Cells for In Vivo Delivery
Sart, Sébastien; Ma, Teng
2014-01-01
Stem cells have emerged as promising tools for the treatment of incurable neural and heart diseases and tissue damage. However, the survival of transplanted stem cells is reported to be low, reducing their therapeutic effects. The major causes of poor survival of stem cells in vivo are linked to anoikis, potential immune rejection, and oxidative damage mediating apoptosis. This review investigates novel methods and potential molecular mechanisms for stem cell preconditioning in vitro to increase their retention after transplantation in damaged tissues. Microenvironmental preconditioning (e.g., hypoxia, heat shock, and exposure to oxidative stress), aggregate formation, and hydrogel encapsulation have been revealed as promising strategies to reduce cell apoptosis in vivo while maintaining biological functions of the cells. Moreover, this review seeks to identify methods of optimizing cell dose preparation to enhance stem cell survival and therapeutic function after transplantation. PMID:25126478
A Hybrid Parallel Preconditioning Algorithm For CFD
NASA Technical Reports Server (NTRS)
Barth,Timothy J.; Tang, Wei-Pai; Kwak, Dochan (Technical Monitor)
1995-01-01
A new hybrid preconditioning algorithm will be presented which combines the favorable attributes of incomplete lower-upper (ILU) factorization with the favorable attributes of the approximate inverse method recently advocated by numerous researchers. The quality of the preconditioner is adjustable and can be increased at the cost of additional computation while at the same time the storage required is roughly constant and approximately equal to the storage required for the original matrix. In addition, the preconditioning algorithm suggests an efficient and natural parallel implementation with reduced communication. Sample calculations will be presented for the numerical solution of multi-dimensional advection-diffusion equations. The matrix solver has also been embedded into a Newton algorithm for solving the nonlinear Euler and Navier-Stokes equations governing compressible flow. The full paper will show numerous examples in CFD to demonstrate the efficiency and robustness of the method.
M-step preconditioned conjugate gradient methods
NASA Technical Reports Server (NTRS)
Adams, L.
1983-01-01
Preconditioned conjugate gradient methods for solving sparse symmetric and positive definite systems of linear equations are described. Necessary and sufficient conditions are given for when these preconditioners can be used, and an analysis of their effectiveness is given. Efficient computer implementations of these methods are discussed, and results on the CYBER 203 and the Finite Element Machine under construction at NASA Langley Research Center are included.
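The preconditioned conjugate gradient iteration that these m-step methods build on can be sketched as follows; the diagonal (Jacobi) preconditioner used here is only a stand-in for the m-step preconditioners of the paper, and the small test system is invented for illustration.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=200):
    """Preconditioned conjugate gradient for an SPD matrix A.
    M_inv is a callback applying the preconditioner inverse, z = M^{-1} r."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# small SPD test system, preconditioned by its diagonal (Jacobi scaling)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
```

Swapping `M_inv` for an m-step polynomial application changes only the callback, not the iteration itself.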
Video background tracking and foreground extraction via L1-subspace updates
NASA Astrophysics Data System (ADS)
Pierantozzi, Michele; Liu, Ying; Pados, Dimitris A.; Colonnese, Stefania
2016-05-01
We consider the problem of online foreground extraction from compressed-sensed (CS) surveillance videos. A technically novel approach is suggested and developed by which the background scene is captured by an L1- norm subspace sequence directly in the CS domain. In contrast to conventional L2-norm subspaces, L1-norm subspaces are seen to offer significant robustness to outliers, disturbances, and rank selection. Subtraction of the L1-subspace tracked background leads then to effective foreground/moving objects extraction. Experimental studies included in this paper illustrate and support the theoretical developments.
On polynomial preconditioning for indefinite Hermitian matrices
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1989-01-01
The minimal residual method is studied combined with polynomial preconditioning for solving large linear systems (Ax = b) with indefinite Hermitian coefficient matrices (A). The standard approach for choosing the polynomial preconditioners leads to preconditioned systems which are positive definite. Here, a different strategy is studied which leaves the preconditioned coefficient matrix indefinite. More precisely, the polynomial preconditioner is designed to cluster the positive, resp. negative eigenvalues of A around 1, resp. around some negative constant. In particular, it is shown that such indefinite polynomial preconditioners can be obtained as the optimal solutions of a certain two-parameter family of Chebyshev approximation problems. Some basic results are established for these approximation problems and a Remez-type algorithm is sketched for their numerical solution. The problem of selecting the parameters such that the resulting indefinite polynomial preconditioner speeds up the convergence of the minimal residual method optimally is also addressed. An approach is proposed based on the concept of asymptotic convergence factors. Finally, some numerical examples of indefinite polynomial preconditioners are given.
Subspace-based analysis of the ERT inverse problem
NASA Astrophysics Data System (ADS)
Ben Hadj Miled, Mohamed Khames; Miller, Eric L.
2004-05-01
In a previous work, we proposed a source-type formulation to the electrical resistance tomography (ERT) problem. Specifically, we showed that inhomogeneities in the medium can be viewed as secondary sources embedded in the homogeneous background medium and located at positions associated with variation in electrical conductivity. Assuming a piecewise constant conductivity distribution, the support of equivalent sources is equal to the boundary of the inhomogeneity. The estimation of the anomaly shape takes the form of an inverse source-type problem. In this paper, we explore the use of subspace methods to localize the secondary equivalent sources associated with discontinuities in the conductivity distribution. Our first alternative is the multiple signal classification (MUSIC) algorithm, which is commonly used in the localization of multiple sources. The idea is to project a finite collection of plausible pole (or dipole) sources onto an estimated signal subspace and select those with largest correlations. In ERT, secondary sources are excited simultaneously but in different ways, i.e. with distinct amplitude patterns, depending on the locations and amplitudes of primary sources. If the number of receivers is "large enough", different source configurations can lead to a set of observation vectors that span the data subspace. However, since sources that are spatially close to each other have highly correlated signatures, separation of such signals becomes very difficult in the presence of noise. To overcome this problem we consider iterative MUSIC algorithms like R-MUSIC and RAP-MUSIC. These recursive algorithms pose a computational burden as they require multiple large combinatorial searches. Results obtained with these algorithms using simulated data of different conductivity patterns are presented.
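The projection idea behind MUSIC can be sketched with a generic sensor-array toy problem; the steering model, array size, and source parameters below are invented for illustration and are not taken from the ERT formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 8, 2
true_params = [0.2, 0.7]   # hypothetical source locations on a normalized grid

def steering(p):
    # hypothetical steering vector: response of the array to a source at p
    return np.exp(2j * np.pi * p * np.arange(n_sensors))

# sources excited simultaneously with distinct random amplitude patterns, plus noise
A = np.column_stack([steering(p) for p in true_params])
X = A @ rng.standard_normal((n_sources, 50)) + 0.01 * rng.standard_normal((n_sensors, 50))

# split signal/noise subspaces from the sample covariance (eigh sorts ascending)
R = (X @ X.conj().T) / X.shape[1]
w, V = np.linalg.eigh(R)
En = V[:, : n_sensors - n_sources]     # noise subspace

# MUSIC pseudospectrum: peaks where steering vectors are orthogonal to En
grid = np.linspace(0.0, 1.0, 500, endpoint=False)
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(p)) ** 2
                     for p in grid])
```

The recursive variants (R-MUSIC, RAP-MUSIC) repeat this scan after projecting out each found source.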
SKRYN: A fast semismooth-Krylov-Newton method for controlling Ising spin systems
NASA Astrophysics Data System (ADS)
Ciaramella, G.; Borzì, A.
2015-05-01
The modeling and control of Ising spin systems is of fundamental importance in NMR spectroscopy applications. In this paper, two computer packages, ReHaG and SKRYN, are presented. Their purpose is to set up and solve quantum optimal control problems governed by the Liouville master equation modeling Ising spin-1/2 systems with pointwise control constraints. In particular, the MATLAB package ReHaG allows one to compute a real matrix representation of the master equation. The MATLAB package SKRYN implements a new strategy resulting in a globalized semismooth matrix-free Krylov-Newton scheme. To discretize the real representation of the Liouville master equation, a norm-preserving modified Crank-Nicolson scheme is used. Results of numerical experiments demonstrate that the SKRYN code is able to provide fast and accurate solutions to the Ising spin quantum optimization problem.
The Krylov accelerated SIMPLE(R) method for flow problems in industrial furnaces
NASA Astrophysics Data System (ADS)
Vuik, C.; Saghir, A.; Boerstoel, G. P.
2000-08-01
Numerical modeling of the melting and combustion process is an important tool in gaining understanding of the physical and chemical phenomena that occur in a gas- or oil-fired glass-melting furnace. The incompressible Navier-Stokes equations are used to model the gas flow in the furnace. The discrete Navier-Stokes equations are solved by the SIMPLE(R) pressure-correction method. In these applications, many SIMPLE(R) iterations are necessary to obtain an accurate solution. In this paper, Krylov accelerated versions are proposed: GCR-SIMPLE(R). The properties of these methods are investigated for a simple two-dimensional flow. Thereafter, the efficiencies of the methods are compared for three-dimensional flows in industrial glass-melting furnaces.
Software for computing eigenvalue bounds for iterative subspace matrix methods
NASA Astrophysics Data System (ADS)
Shepard, Ron; Minkoff, Michael; Zhou, Yunkai
2005-07-01
This paper describes software for computing eigenvalue bounds to the standard and generalized hermitian eigenvalue problem as described in [Y. Zhou, R. Shepard, M. Minkoff, Computing eigenvalue bounds for iterative subspace matrix methods, Comput. Phys. Comm. 167 (2005) 90-102]. The software discussed in this manuscript applies to any subspace method, including Lanczos, Davidson, SPAM, Generalized Davidson Inverse Iteration, Jacobi-Davidson, and the Generalized Jacobi-Davidson methods, and it is applicable to either outer or inner eigenvalues. This software can be applied during the subspace iterations in order to truncate the iterative process and to avoid unnecessary effort when converging specific eigenvalues to a required target accuracy, and it can be applied to the final set of Ritz values to assess the accuracy of the converged results. Program summary: Title of program: SUBROUTINE BOUNDS_OPT Catalogue identifier: ADVE Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVE Computers: any computer that supports a Fortran 90 compiler Operating systems: any operating system that supports a Fortran 90 compiler Programming language: Standard Fortran 90 High speed storage required: 5m+5 working-precision and 2m+7 integer for m Ritz values No. of bits in a word: The floating point working precision is parameterized with the symbolic constant WP No. of lines in distributed program, including test data, etc.: 2452 No. of bytes in distributed program, including test data, etc.: 281 543 Distribution format: tar.gz Nature of physical problem: The computational solution of eigenvalue problems using iterative subspace methods has widespread applications in the physical sciences and engineering as well as other areas of mathematical modeling (economics, social sciences, etc.). The accuracy of the solution of such problems and the utility of those errors is a fundamental problem that is of…
Accurate Excited State Geometries within Reduced Subspace TDDFT/TDA.
Robinson, David
2014-12-01
A method for the calculation of TDDFT/TDA excited state geometries within a reduced subspace of Kohn-Sham orbitals has been implemented and tested. Accurate geometries are found for all of the fluorophore-like molecules tested, with at most all valence occupied orbitals and half of the virtual orbitals included, and for some molecules even fewer orbitals. Efficiency gains of between 15 and 30% are found for essentially the same level of accuracy as a standard TDDFT/TDA excited state geometry optimization calculation. PMID:26583218
Is longer sevoflurane preconditioning neuroprotective in permanent focal cerebral ischemia?
Qiu, Caiwei; Sheng, Bo; Wang, Shurong; Liu, Jin
2013-08-15
Sevoflurane preconditioning has neuroprotective effects in the cerebral ischemia/reperfusion model. However, its influence on permanent cerebral ischemia remains unclear. In the present study, the rats were exposed to sevoflurane for 15, 30, 60, and 120 minutes, followed by induction of permanent cerebral ischemia. Results demonstrated that 30- and 60-minute sevoflurane preconditioning significantly reduced the infarct volume at 24 hours after cerebral ischemia, and 60-minute sevoflurane preconditioning additionally reduced the number of TUNEL- and caspase-3-positive cells in the ischemic penumbra. However, 120-minute sevoflurane preconditioning did not show evident neuroprotective effects. Moreover, 60-minute sevoflurane preconditioning significantly attenuated neurological deficits and infarct volume in rats at 4 days after cerebral ischemia. These findings indicated that 60-minute sevoflurane preconditioning can induce the best neuroprotective effects in rats with permanent cerebral ischemia through the inhibition of apoptosis. PMID:25206521
Implicit preconditioned WENO scheme for steady viscous flow computation
NASA Astrophysics Data System (ADS)
Huang, Juan-Chen; Lin, Herng; Yang, Jaw-Yen
2009-02-01
A class of lower-upper symmetric Gauss-Seidel implicit weighted essentially nonoscillatory (WENO) schemes is developed for solving the preconditioned Navier-Stokes equations of primitive variables with the Spalart-Allmaras one-equation turbulence model. The numerical flux of the present preconditioned WENO schemes consists of a first-order part and a high-order part. For the first-order part, we adopt the preconditioned Roe scheme, and for the high-order part, we employ preconditioned WENO methods. For comparison purposes, a preconditioned TVD scheme is also given and tested. A time-derivative preconditioning algorithm is devised, with a discriminant for adjusting the preconditioning parameters at low Mach numbers and turning off the preconditioning at intermediate or high Mach numbers. The computations are performed for the two-dimensional lid-driven cavity flow, low subsonic viscous flow over the S809 airfoil, three-dimensional low speed viscous flow over a 6:1 prolate spheroid, transonic flow over the ONERA-M6 wing, and hypersonic flow over the HB-2 model. The solutions of the present algorithms are in good agreement with the experimental data. The application of the preconditioned WENO schemes to viscous flows at all speeds not only enhances the accuracy and robustness of resolving shocks and discontinuities for supersonic flows, but also improves the accuracy of low Mach number flow with complicated smooth solution structures.
Ischemic preconditioning protects against gap junctional uncoupling in cardiac myofibroblasts.
Sundset, Rune; Cooper, Marie; Mikalsen, Svein-Ole; Ytrehus, Kirsti
2004-01-01
Ischemic preconditioning increases the heart's tolerance to a subsequent longer ischemic period. The purpose of this study was to investigate the role of gap junction communication in simulated preconditioning in cultured neonatal rat cardiac myofibroblasts. Gap junctional intercellular communication was assessed by Lucifer yellow dye transfer. Preconditioning preserved intercellular coupling after prolonged ischemia. An initial reduction in coupling in response to the preconditioning stimulus was also observed. This may protect neighboring cells from damaging substances produced during subsequent regional ischemia in vivo, and may preserve gap junctional communication required for enhanced functional recovery during subsequent reperfusion. PMID:16247851
Preconditioning and the limit to the incompressible flow equations
NASA Technical Reports Server (NTRS)
Turkel, E.; Fiterman, A.; Vanleer, B.
1993-01-01
The use of preconditioning methods to accelerate the convergence to a steady state for both the incompressible and compressible fluid dynamic equations are considered. The relation between them for both the continuous problem and the finite difference approximation is also considered. The analysis relies on the inviscid equations. The preconditioning consists of a matrix multiplying the time derivatives. Hence, the steady state of the preconditioned system is the same as the steady state of the original system. For finite difference methods the preconditioning can change and improve the steady state solutions. An application to flow around an airfoil is presented.
LogDet Rank Minimization with Application to Subspace Clustering.
Kang, Zhao; Peng, Chong; Cheng, Jie; Cheng, Qiang
2015-01-01
Low-rank matrices are desired in many machine learning and computer vision problems. Most of the recent studies use the nuclear norm as a convex surrogate of the rank operator. However, all singular values are simply added together by the nuclear norm, and thus the rank may not be well approximated in practical problems. In this paper, we propose using a log-determinant (LogDet) function as a smooth and closer, though nonconvex, approximation to rank for obtaining a low-rank representation in subspace clustering. An augmented Lagrange multiplier strategy is applied to iteratively optimize the LogDet-based nonconvex objective function on potentially large-scale data. By making use of the angular information of principal directions of the resultant low-rank representation, an affinity graph matrix is constructed for spectral clustering. Experimental results on motion segmentation and face clustering data demonstrate that the proposed method often outperforms state-of-the-art subspace clustering algorithms. PMID:26229527
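One common form of such a surrogate is log det(I + XXᵀ/δ) = Σᵢ log(1 + σᵢ²/δ), in which each large singular value contributes only logarithmically, rather than linearly as in the nuclear norm, so the surrogate tracks the rank more closely. A toy comparison (the matrix sizes, noise level, and δ below are arbitrary illustrative choices, not the paper's settings):

```python
import numpy as np

# a nearly rank-2 matrix: two large singular values carry the structure,
# the remaining small ones come from additive noise
rng = np.random.default_rng(1)
X = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
X = X + 0.01 * rng.standard_normal((30, 30))

s = np.linalg.svd(X, compute_uv=False)
delta = 1e-2                               # hypothetical smoothing parameter

nuclear = s.sum()                          # convex surrogate: sums all singular values
logdet_terms = np.log(1.0 + s**2 / delta)  # LogDet surrogate, term by term
```

The LogDet terms for the structural singular values dwarf those from the noise floor, while the nuclear norm keeps accumulating the noise singular values linearly.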
Efficient Calibration of Categorical Parameter Distributions using Subspace Methods
NASA Astrophysics Data System (ADS)
Khambhammettu, P.; Renard, P.; Doherty, J.
2014-12-01
Categorical parameter distributions are commonplace in hydrogeological systems consisting of rock types / aquifer materials with distinct properties, e.g., sand channels in a clay matrix. Model calibration is difficult in such systems because the inverse problem is hindered by the discontinuities in the parameter space. In this paper, we present two approaches based on subspace methods to generate categorical parameter distributions of aquifer parameters that meet calibration constraints (e.g., measured water-level data, gradients) while honoring prior geological constraints. In the first approach, the prior geological information and acceptable parameter distributions are encapsulated in a simple object-based model. In the second approach, a Multiple-Point Statistics simulator is used to represent the prior geological information. Subspace methods in conjunction with dynamic pilot points are then employed to explore the parameter space and determine the parameter combinations that optimally honor geologic and calibration constraints. Using a simple aquifer system, we demonstrate that the new approach is capable of quickly generating multiple parameter distributions that honor both geological and calibration constraints. We also explore the underlying parameter and predictive uncertainty using Null Space Monte Carlo techniques.
Inverse rendering of Lambertian surfaces using subspace methods.
Nguyen, Ha Q; Do, Minh N
2014-12-01
We propose a vector space approach for inverse rendering of a Lambertian convex object with distant light sources. In this problem, the texture of the object and arbitrary lightings are both to be recovered from multiple images of the object and its 3D model. Our work is motivated by the observation that all possible images of a Lambertian object lie around a low-dimensional linear subspace spanned by the first few spherical harmonics. The inverse rendering can therefore be formulated as a matrix factorization, in which the basis of the subspace is encoded in a spherical harmonic matrix S associated with the object’s geometry. A necessary and sufficient condition on S for unique factorization is derived with an introduction to a new notion of matrix rank called nonseparable full rank. A singular value decomposition-based algorithm for exact factorization in the noiseless case is introduced. In the presence of noise, two algorithms, namely, alternating and optimization based are proposed to deal with two different types of noise. A random sample consensus-based algorithm is introduced to reduce the size of the optimization problem, which is equal to the number of pixels in each image. Implementations of the proposed algorithms are done on a real data set. PMID:25373083
Conformal Laplace superintegrable systems in 2D: polynomial invariant subspaces
NASA Astrophysics Data System (ADS)
Escobar-Ruiz, M. A.; Miller, Willard, Jr.
2016-07-01
2nd-order conformal superintegrable systems in n dimensions are Laplace equations on a manifold with an added scalar potential and 2n-1 independent 2nd order conformal symmetry operators. They encode all the information about Helmholtz (eigenvalue) superintegrable systems in an efficient manner: there is a 1-1 correspondence between Laplace superintegrable systems and Stäckel equivalence classes of Helmholtz superintegrable systems. In this paper we focus on superintegrable systems in two dimensions, n = 2, where there are 44 Helmholtz systems, corresponding to 12 Laplace systems. For each Laplace equation we determine the possible two-variate polynomial subspaces that are invariant under the action of the Laplace operator, thus leading to families of polynomial eigenfunctions. We also study the behavior of the polynomial invariant subspaces under a Stäckel transform. The principal new results are the details of the polynomial variables and the conditions on parameters of the potential corresponding to polynomial solutions. The hidden gl(3) algebraic structure is exhibited for the exact and quasi-exact systems. For physically meaningful solutions, the orthogonality properties and normalizability of the polynomials are presented as well. Finally, for all Helmholtz superintegrable solvable systems we give a unified construction of one-dimensional (1D) and two-dimensional (2D) quasi-exactly solvable potentials possessing polynomial solutions, and a construction of new 2D PT-symmetric potentials is established.
NASA Astrophysics Data System (ADS)
Zhang, Xing; Wen, Gongjian
2015-10-01
Anomaly detection (AD) becomes increasingly important in hyperspectral imagery analysis with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector which exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace only takes advantage of the spectral information, while the spatial correlation of the background clutter is neglected, which makes the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. Firstly, by jointly using spectral and spatial information, three directional background subspaces are created along the image height direction, the image width direction, and the spectral direction, respectively. Then, the three corresponding orthogonal subspaces are calculated. After that, each vector along the three directions of the local cube is projected onto the corresponding orthogonal subspace. Finally, a composite score is formed from the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background but also spatially distinct. Thanks to the addition of the spatial information, the robustness of the anomaly detection result is greatly improved by the proposed 3D-LOSP algorithm. It is noteworthy that the proposed algorithm is an extension of LOSP, and this idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images have demonstrated the stability of the detection result.
ERIC Educational Resources Information Center
Wawro, Megan; Sweeney, George F.; Rabin, Jeffrey M.
2011-01-01
This paper reports on a study investigating students' ways of conceptualizing key ideas in linear algebra, with the particular results presented here focusing on student interactions with the notion of subspace. In interviews conducted with eight undergraduates, we found students' initial descriptions of subspace often varied substantially from…
Parallel preconditioning techniques for sparse CG solvers
Basermann, A.; Reichel, B.; Schelthoff, C.
1996-12-31
Conjugate gradient (CG) methods to solve sparse systems of linear equations play an important role in numerical methods for solving discretized partial differential equations. The large size and the condition of many technical or physical applications in this area result in the need for efficient parallelization and preconditioning techniques of the CG method. In particular for very ill-conditioned matrices, sophisticated preconditioners are necessary to obtain both acceptable convergence and accuracy of CG. Here, we investigate variants of polynomial and incomplete Cholesky preconditioners that markedly reduce the iteration counts of simple diagonally scaled CG and are shown to be well suited for massively parallel machines.
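A minimal example of the polynomial family is the truncated Neumann-series preconditioner, sketched below on an invented diagonally dominant test matrix; this is a generic illustration under those assumptions, not the specific variants studied in the paper.

```python
import numpy as np

def poly_prec(A, r, m=3):
    """Truncated Neumann-series polynomial preconditioner (a sketch):
    z approximates A^{-1} r via m extra damped sweeps of
    z_{k+1} = z_k + D^{-1}(r - A z_k), with D = diag(A).
    Only matrix-vector products are needed, which is why polynomial
    preconditioners parallelize well."""
    d = np.diag(A)
    z = r / d                    # start from one diagonal-scaling step
    for _ in range(m):
        z = z + (r - A @ z) / d
    return z

# diagonally dominant SPD tridiagonal test matrix
n = 20
A = (np.diag(np.full(n, 4.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
r = np.ones(n)
z = poly_prec(A, r)
```

Compared with plain diagonal scaling, the extra polynomial sweeps leave a noticeably smaller residual per application.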
Domain-decomposed preconditionings for transport operators
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Gropp, William D.; Keyes, David E.
1991-01-01
The performance of five different interface preconditionings for domain decomposed convection diffusion problems was tested, including a novel one known as the spectral probe, while varying mesh parameters, Reynolds number, ratio of subdomain diffusion coefficients, and domain aspect ratio. The preconditioners are representative of the range of practically computable possibilities that have appeared in the domain decomposition literature for the treatment of nonoverlapping subdomains. It is shown through a large number of numerical examples that no single preconditioner can be considered uniformly superior or uniformly inferior to the rest, but that knowledge of particulars, including the shape and strength of the convection, is important in selecting among them in a given problem.
Extremely Intense Magnetospheric Substorms : External Triggering? Preconditioning?
NASA Astrophysics Data System (ADS)
Tsurutani, Bruce; Echer, Ezequiel; Hajra, Rajkumar
2016-07-01
We study particularly intense substorms using a variety of near-Earth spacecraft data and ground observations. We will determine the solar cycle dependence of these events, whether the supersubstorms are externally or internally triggered, and their relationship to other factors such as magnetospheric preconditioning. If time permits, we will explore the details of the events and whether they are similar to regular (Akasofu, 1964) substorms or not. These intense substorms are an important feature of space weather since they may be responsible for power outages.
H(curl) Auxiliary Mesh Preconditioning
Kolev, T V; Pasciak, J E; Vassilevski, P S
2006-08-31
This paper analyzes a two-level preconditioning scheme for H(curl) bilinear forms. The scheme utilizes an auxiliary problem on a related mesh that is more amenable for constructing optimal order multigrid methods. More specifically, we analyze the case when the auxiliary mesh only approximately covers the original domain. The latter assumption is important since it allows for easy construction of nested multilevel spaces on regular auxiliary meshes. Numerical experiments in both two and three space dimensions illustrate the optimal performance of the method.
Towards bulk based preconditioning for quantum dot computations
Dongarra, Jack; Langou, Julien; Tomov, Stanimire; Channing,Andrew; Marques, Osni; Vomel, Christof; Wang, Lin-Wang
2006-05-25
This article describes how to accelerate the convergence of Preconditioned Conjugate Gradient (PCG) type eigensolvers for the computation of several states around the band gap of colloidal quantum dots. Our new approach uses the Hamiltonian of the bulk material constituent of the quantum dot to design an efficient preconditioner for the folded spectrum PCG method. The technique described shows promising results when applied to CdSe quantum dot model problems. We show a decrease in the number of iteration steps by at least a factor of 4 compared to the previously used diagonal preconditioner.
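The folded spectrum idea the eigensolver targets can be illustrated densely: the lowest eigenstate of (H − σI)² is the eigenstate of H nearest a chosen reference energy σ, which is how interior band-gap states are reached with a minimization-style solver. The matrix and spectrum below are invented for illustration:

```python
import numpy as np

# hypothetical dense "Hamiltonian" with a known spectrum
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
evals = np.array([-3.0, -1.0, 0.5, 1.2, 2.0, 3.5, 5.0, 7.0])
H = Q @ np.diag(evals) @ Q.T

sigma = 1.0                                  # reference energy near the "gap"
S = H - sigma * np.eye(8)
F = S @ S                                    # folded operator (H - sigma I)^2

w, V = np.linalg.eigh(F)
v = V[:, 0]                                  # ground state of the folded operator
rayleigh = v @ H @ v                         # interior eigenvalue of H nearest sigma
```

In the actual method, this minimization is done iteratively by PCG rather than by a dense eigendecomposition, and the bulk-Hamiltonian preconditioner speeds up exactly that iteration.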
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... tested for the SFTP supplemental tests of aggressive driving (US06) and air conditioning (SC03). Section...) Air Conditioning Test (SC03) Preconditioning. (1) If the SC03 test follows the exhaust emission FTP or... preconditioning cycles for the SC03 air conditioning test and the 10 minute soak are conducted at the same...
40 CFR 1066.407 - Vehicle preparation and preconditioning.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) Prepare the vehicle for testing as described in 40 CFR 86.131. (b) If testing will include measurement of refueling emissions, perform the vehicle preconditioning steps as described in 40 CFR 86.153. Otherwise, perform the vehicle preconditioning steps as described in 40 CFR 86.132....
40 CFR 1066.407 - Vehicle preparation and preconditioning.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) Prepare the vehicle for testing as described in 40 CFR 86.131. (b) If testing will include measurement of refueling emissions, perform the vehicle preconditioning steps as described in 40 CFR 86.153. Otherwise, perform the vehicle preconditioning steps as described in 40 CFR 86.132....
[STRESS AND INFARCT LIMITING EFFECTS OF EARLY HYPOXIC PRECONDITIONING].
Lishmanov, Yu B; Maslov, L N; Sementsov, A S; Naryzhnaya, N V; Tsibulnikov, S Yu
2015-09-01
It was established that early hypoxic preconditioning is an adaptive state different from eustress and distress. Hypoxic preconditioning has the cross effects, increasing the tolerance of the heart to ischemia-reperfusion and providing antiulcerogenic effect during immobilization stress. PMID:26672158
40 CFR 86.532-78 - Vehicle preconditioning.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Vehicle preconditioning. 86.532-78... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.532-78 Vehicle preconditioning. (a) The...
40 CFR 86.532-78 - Vehicle preconditioning.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Vehicle preconditioning. 86.532-78... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.532-78 Vehicle preconditioning. (a) The...
40 CFR 86.532-78 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Vehicle preconditioning. 86.532-78... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.532-78 Vehicle preconditioning. (a) The...
Parallel Preconditioning for CFD Problems on the CM-5
NASA Technical Reports Server (NTRS)
Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)
1994-01-01
To date, preconditioning methods on massively parallel systems have faced a major difficulty. The preconditioning methods most successful at accelerating the convergence of the iterative solver, such as incomplete LU factorizations, are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive, but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e. triangular solves) is difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric, linear systems, which avoids both difficulties. We explicitly compute an approximate inverse to our original matrix. This new preconditioning matrix can be applied most efficiently for iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, with possibly a dense matrix. Furthermore, the actual computation of the preconditioning matrix has natural parallelism. For a problem of size n, the preconditioning matrix can be computed by solving n independent small least squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
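The column-wise construction described above can be sketched as follows; the tridiagonal test matrix and the sparsity pattern are invented for illustration, and dense `lstsq` stands in for the small sparse least squares solves.

```python
import numpy as np

def approx_inverse(A, pattern):
    """Sketch of a sparse-approximate-inverse preconditioner: each column
    m_j of M minimizes ||A m_j - e_j||_2 over a prescribed sparsity pattern.
    The n least squares problems are independent, hence naturally parallel."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        J = pattern[j]                     # allowed nonzero rows of column j
        e = np.zeros(n)
        e[j] = 1.0
        mj, *_ = np.linalg.lstsq(A[:, J], e, rcond=None)
        M[J, j] = mj
    return M

n = 10
A = (np.diag(np.full(n, 3.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
# tridiagonal sparsity pattern for M (an assumption for this toy problem)
pattern = [sorted({max(j - 1, 0), j, min(j + 1, n - 1)}) for j in range(n)]
M = approx_inverse(A, pattern)
```

Because each pattern contains the diagonal, every column residual is at least as small as the diagonal-scaling choice, so ||I − AM|| improves over Jacobi scaling.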
Reduced-order preconditioning for bidomain simulations.
Deo, Makarand; Bauer, Steffen; Plank, Gernot; Vigmond, Edward
2007-05-01
Simulations of the bidomain equations involve solving large, sparse linear systems of the form Ax = b. Because the bidomain equations constitute an initial value problem, such a system must be solved at every time step, so efficient solvers are essential to keep simulations tractable. Iterative solvers, especially the preconditioned conjugate gradient (PCG) method, are attractive since memory demands are minimized compared to direct methods, albeit at the cost of solution speed. However, a proper preconditioner can drastically speed up the solution process by reducing the number of iterations. In this paper, a novel preconditioner for the PCG method based on system order reduction using the Arnoldi method (A-PCG) is proposed. Large-order systems generated during cardiac bidomain simulations employing a finite element formulation are solved with the A-PCG method, and its performance is compared with incomplete LU (ILU) preconditioning. Results indicate that the A-PCG estimates an approximate solution considerably faster than the ILU, often within a single iteration. To reduce the computational demands in terms of memory and run time, the use of a cascaded preconditioner is suggested: the A-PCG is applied to quickly obtain an approximate solution, and subsequently a cheap iterative method such as successive overrelaxation (SOR) is applied to refine the solution to the desired accuracy. The memory requirements are less than those of direct LU but more than those of the ILU method. The proposed scheme is shown to yield significant speedups when solving time-evolving systems. PMID:17518292
The multigrid preconditioned conjugate gradient method
NASA Technical Reports Server (NTRS)
Tatebe, Osamu
1993-01-01
A multigrid preconditioned conjugate gradient method (MGCG method), which uses the multigrid method as a preconditioner for the PCG method, is proposed. The multigrid method has inherent high parallelism and improves the convergence of long-wavelength components, which is important in iterative methods. By using this method as a preconditioner for the PCG method, an efficient method with high parallelism and fast convergence is obtained. First, a necessary condition for the multigrid method to satisfy the requirements of a PCG preconditioner is considered. Next, numerical experiments show the behavior of the MGCG method and that it is superior to both the ICCG method and the multigrid method in terms of fast convergence and high parallelism. This fast convergence is understood through an eigenvalue analysis of the preconditioned matrix. From this observation of the multigrid preconditioner, it is seen that the MGCG method converges in very few iterations and that the multigrid preconditioner is a desirable preconditioner for the conjugate gradient method.
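The way a multigrid cycle slots into PCG can be seen from a generic PCG skeleton that takes the preconditioner solve as a callable. The sketch below is illustrative only: where the MGCG method would perform one multigrid V-cycle for z = M^{-1} r, a simple Jacobi (diagonal) solve is substituted for brevity, and the 1D Poisson test matrix is an assumption of the example.

```python
import numpy as np

def pcg(A, b, apply_M, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradients.

    `apply_M` computes z = M^{-1} r; in the MGCG method this callable
    would be one multigrid V-cycle, here a Jacobi sweep stands in."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_M(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, k

# 1D Poisson model problem with a Jacobi preconditioner as placeholder
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
d = np.diag(A)
x, iters = pcg(A, b, lambda r: r / d)
```

Swapping the lambda for a V-cycle changes nothing else in the driver, which is exactly why the multigrid preconditioner composes so cleanly with CG.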
Universal quantum computation in waveguide QED using decoherence free subspaces
NASA Astrophysics Data System (ADS)
Paulisch, V.; Kimble, H. J.; González-Tudela, A.
2016-04-01
The interaction of quantum emitters with one-dimensional photon-like reservoirs induces strong and long-range dissipative couplings that give rise to the emergence of the so-called decoherence free subspaces (DFSs) which are decoupled from dissipation. When introducing weak perturbations on the emitters, e.g., driving, the strong collective dissipation enforces an effective coherent evolution within the DFS. In this work, we show explicitly how by introducing single-site resolved drivings, we can use the effective dynamics within the DFS to design a universal set of one and two-qubit gates within the DFS of an ensemble of two-level atom-like systems. Using Liouvillian perturbation theory we calculate the scaling with the relevant figures of merit of the systems, such as the Purcell factor and imperfect control of the drivings. Finally, we compare our results with previous proposals using atomic Λ systems in leaky cavities.
Cross-Modal Subspace Learning via Pairwise Constraints.
He, Ran; Zhang, Man; Wang, Liang; Ji, Ye; Yin, Qiyue
2015-12-01
In multimedia applications, the text and image components in a web document form a pairwise constraint that potentially indicates the same semantic concept. This paper studies cross-modal learning via the pairwise constraint and aims to find the common structure hidden in different modalities. We first propose a compound regularization framework to address the pairwise constraint, which can be used as a general platform for developing cross-modal algorithms. For unsupervised learning, we propose a multi-modal subspace clustering method to learn a common structure for different modalities. For supervised learning, to reduce the semantic gap and the outliers in pairwise constraints, we propose a cross-modal matching method based on compound ℓ21 regularization. Extensive experiments demonstrate the benefits of joint text and image modeling with semantically induced pairwise constraints, and they show that the proposed cross-modal methods can further reduce the semantic gap between different modalities and improve the clustering/matching accuracy. PMID:26259218
A nested iterative scheme for computation of incompressible flows in long domains
NASA Astrophysics Data System (ADS)
Manguoglu, Murat; Sameh, Ahmed H.; Tezduyar, Tayfun E.; Sathe, Sunil
2008-12-01
We present an effective preconditioning technique for solving the nonsymmetric linear systems encountered in computation of incompressible flows in long domains. The application category we focus on is arterial fluid mechanics. These linear systems are solved using a nested iterative scheme with an outer Richardson scheme and an inner iteration that is handled via a Krylov subspace method. Test computations that demonstrate the robustness of our nested scheme are presented.
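A minimal sketch of such a nested scheme follows: an outer Richardson iteration whose correction is obtained by an inner Krylov (here CG) solve with a symmetric approximation M of the nonsymmetric matrix A. The specific matrices and iteration counts are invented for illustration and are not taken from the paper.

```python
import numpy as np

def inner_cg(M, r, iters=5):
    """A few CG steps on M z = r; this is the inner Krylov solve."""
    z = np.zeros_like(r)
    res = r - M @ z
    p = res.copy()
    for _ in range(iters):
        Mp = M @ p
        alpha = (res @ res) / (p @ Mp)
        z += alpha * p
        res_new = res - alpha * Mp
        p = res_new + ((res_new @ res_new) / (res @ res)) * p
        res = res_new
    return z

def nested_richardson(A, M, b, outer=50, tol=1e-8):
    """Outer Richardson iteration, preconditioned by an inexact inner
    Krylov solve with the symmetric approximation M of A."""
    x = np.zeros_like(b)
    for _ in range(outer):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        x += inner_cg(M, r)     # approximate correction M^{-1} r
    return x

# Mildly nonsymmetric test matrix; M is its symmetric part.
n = 40
A = 4.0 * np.eye(n) - np.eye(n, k=1) - 0.5 * np.eye(n, k=-1)
M = 0.5 * (A + A.T)
b = np.ones(n)
x = nested_richardson(A, M, b)
```

The outer loop converges as long as M captures A well enough that the iteration matrix I - M^{-1}A is a contraction, while the inner Krylov solve keeps each correction cheap.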
Parallel iterative methods for sparse linear and nonlinear equations
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
As three-dimensional models gain importance, iterative methods will become almost mandatory. Among these, preconditioned Krylov subspace methods have been viewed as the most efficient and reliable for solving linear as well as nonlinear systems of equations. Several different approaches have been taken to adapt iterative methods for supercomputers. Some of these approaches are discussed, and the methods that deal more specifically with general unstructured sparse matrices, such as those arising from finite element methods, are emphasized.
Recovery Discontinuous Galerkin Jacobian-free Newton-Krylov Method for all-speed flows
HyeongKae Park; Robert Nourgaliev; Vincent Mousseau; Dana Knoll
2008-07-01
There is increasing interest in developing the next generation of simulation tools for advanced nuclear energy systems. These tools will utilize state-of-the-art numerical algorithms and computer science technology in order to maximize predictive capability, support advanced reactor designs, reduce uncertainty, and increase safety margins. In analyzing nuclear energy systems, we are interested in compressible low-Mach-number, high-heat-flux flows with a wide range of Re, Ra, and Pr numbers. Under these conditions, the focus is placed on turbulent heat transfer, in contrast to other industries whose main interest is in capturing turbulent mixing. Our objective is to develop single-point turbulence closure models for large-scale engineering CFD codes, using Direct Numerical Simulation (DNS) or Large Eddy Simulation (LES) tools, which requires very accurate and efficient numerical algorithms. The focus of this work is placed on a fully implicit, high-order spatiotemporal discretization based on the discontinuous Galerkin method solving the conservative form of the compressible Navier-Stokes equations. The method utilizes a local reconstruction procedure derived from a weak formulation of the problem, which is inspired by the recovery diffusion flux algorithm of van Leer and Nomura [?] and by the piecewise parabolic reconstruction [?] in the finite volume method. The developed methodology is integrated into the Jacobian-free Newton-Krylov framework [?] to allow a fully implicit solution of the problem.
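The defining mechanism of a Jacobian-free Newton-Krylov framework is that the Krylov solver only ever needs Jacobian-vector products, which can be approximated by a first-order finite difference of the residual function, so the Jacobian is never formed. A minimal sketch of that product (the nonlinear test system is a made-up example):

```python
import numpy as np

def jv(F, u, v, eps=1e-7):
    """Matrix-free Jacobian-vector product: J(u) v ~ (F(u+eps v) - F(u)) / eps.

    This is the only access to the Jacobian that a Newton-Krylov method
    needs; each Krylov iteration costs one extra residual evaluation."""
    return (F(u + eps * v) - F(u)) / eps

# Small nonlinear test system F(u) = u^3 - b with known Jacobian 3*diag(u^2)
b = np.array([1.0, 8.0, 27.0])
F = lambda u: u**3 - b
u = np.array([0.9, 2.1, 2.8])
v = np.array([1.0, -1.0, 0.5])
approx = jv(F, u, v)          # finite-difference J(u) v
exact = 3 * u**2 * v          # analytic J(u) v, for comparison
```

Inside the framework, `jv` would be wrapped as the operator handed to GMRES at each Newton step, with the preconditioner applied around it.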
A fast, preconditioned conjugate gradient Toeplitz solver
NASA Technical Reports Server (NTRS)
Pan, Victor; Schreiber, Robert
1989-01-01
A simple factorization is given of an arbitrary hermitian, positive definite matrix in which the factors are well-conditioned, hermitian, and positive definite. In fact, given knowledge of the extreme eigenvalues of the original matrix A, an optimal improvement can be achieved, making the condition numbers of each of the two factors equal to the square root of the condition number of A. This technique is then applied to the solution of hermitian, positive definite Toeplitz systems. Large linear systems with hermitian, positive definite Toeplitz matrices arise in some signal processing applications. A stable fast algorithm is given for solving these systems, based on the preconditioned conjugate gradient method. The algorithm exploits Toeplitz structure to reduce the cost of an iteration to O(n log n) by applying the fast Fourier transform to compute matrix-vector products. Matrix factorization is used as a preconditioner.
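The O(n log n) Toeplitz matrix-vector product used inside each iteration rests on a standard construction: embed the n x n Toeplitz matrix in a 2n x 2n circulant matrix, which the FFT diagonalizes. A small sketch (the test matrix is arbitrary, chosen only to check the embedding):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix by a vector in O(n log n) time.

    c: first column, r: first row (with c[0] == r[0]).  The matrix is
    embedded in a circulant of order 2n whose first column is
    [c, 0, r[n-1], ..., r[1]]; circulants are diagonalized by the FFT."""
    n = len(x)
    col = np.concatenate([c, [0.0], r[:0:-1]])            # circulant column
    y = np.fft.ifft(np.fft.fft(col) *
                    np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Verify against the dense Toeplitz product
n = 6
c = np.array([4.0, 1.0, 0.5, 0.25, 0.1, 0.05])            # first column
r = np.array([4.0, 2.0, 1.0, 0.5, 0.2, 0.1])              # first row
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])
x = np.arange(1.0, n + 1)
y = toeplitz_matvec(c, r, x)
```

Within the preconditioned CG iteration of the paper, every matrix-vector product with the Toeplitz matrix would go through such an FFT-based routine.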
Iterative image restoration using approximate inverse preconditioning.
Nagy, J G; Plemmons, R J; Torgersen, T C
1996-01-01
Removing a linear shift-invariant blur from a signal or image can be accomplished by inverse or Wiener filtering, or by an iterative least-squares deblurring procedure. Because of the ill-posed characteristics of the deconvolution problem, in the presence of noise, filtering methods often yield poor results. On the other hand, iterative methods often suffer from slow convergence at high spatial frequencies. This paper concerns solving deconvolution problems for atmospherically blurred images by the preconditioned conjugate gradient algorithm, where a new approximate inverse preconditioner is used to increase the rate of convergence. Theoretical results are established to show that fast convergence can be expected, and test results are reported for a ground-based astronomical imaging problem. PMID:18285203
Mitochondria: the missing link between preconditioning and neuroprotection.
Correia, Sónia C; Santos, Renato X; Perry, George; Zhu, Xiongwei; Moreira, Paula I; Smith, Mark A
2010-01-01
The quote "what does not kill you makes you stronger" perfectly describes the preconditioning phenomenon - a paradigm that affords robust brain tolerance in the face of neurodegenerative insults. Over the last few decades, many attempts have been made to identify the molecular mechanisms involved in preconditioning-induced protective responses, and recent data suggests that many of these mechanisms converge on the mitochondria, positing mitochondria as master regulators of preconditioning-triggered endogenous neuroprotection. In this review, we critically discuss evidence for the involvement of mitochondria within the preconditioning paradigm. We will highlight the crucial targets and mediators by which mitochondria are integrated into neuroprotective signaling pathways that underlie preconditioning, putting focus on mitochondrial respiratory chain and mitochondrial reactive oxygen species, mitochondrial ATP-sensitive potassium channels, mitochondrial permeability transition pore, uncoupling proteins, and mitochondrial antioxidant enzyme manganese superoxide dismutase. We also discuss the role of mitochondria in the induction of hypoxia-inducible factor-1, a transcription factor engaged in preconditioning-mediated neuroprotective effects. The identification of intrinsic mitochondrial mechanisms involved in preconditioning will provide new insights which can be translated into potential pharmacological interventions aimed at counteracting neurodegeneration. PMID:20463394
Hyperbaric oxygen preconditioning protects rats against CNS oxygen toxicity.
Arieli, Yehuda; Kotler, Doron; Eynan, Mirit; Hochman, Ayala
2014-06-15
We examined the hypothesis that repeated exposure to non-convulsive hyperbaric oxygen (HBO) as preconditioning provides protection against central nervous system oxygen toxicity (CNS-OT). Four groups of rats were used in the study. Rats in the control and the negative control (Ctl-) groups were kept in normobaric air. Two groups of rats were preconditioned to non-convulsive HBO at 202 kPa for 1 h once every other day for a total of three sessions. Twenty-four hours after preconditioning, one of the preconditioned groups and the control rats were exposed to convulsive HBO at 608 kPa, and latency to CNS-OT was measured. Ctl- rats and the second preconditioned group (PrC-) were not subjected to convulsive HBO exposure. Tissues harvested from the hippocampus and frontal cortex were evaluated for enzymatic activity and nitrotyrosine levels. In the group exposed to convulsive oxygen at 608 kPa, latency to CNS-OT increased from 12.8 to 22.4 min following preconditioning. A significant decrease in the activity of glutathione reductase and glucose-6-phosphate dehydrogenase, and a significant increase in glutathione peroxidase activity, was observed in the hippocampus of preconditioned rats. Nitrotyrosine levels were significantly lower in the preconditioned animals, the highest level being observed in the control rats. In the cortex of the preconditioned rats, a significant increase was observed in glutathione S-transferase and glutathione peroxidase activity. Repeated exposure to non-convulsive HBO provides protection against CNS-OT. The protective mechanism involves alterations in the enzymatic activity of the antioxidant system and lower levels of peroxynitrite, mainly in the hippocampus. PMID:24675062
Building Ultra-Low False Alarm Rate Support Vector Classifier Ensembles Using Random Subspaces
Chen, B Y; Lemmond, T D; Hanley, W G
2008-10-06
This paper presents the Cost-Sensitive Random Subspace Support Vector Classifier (CS-RS-SVC), a new learning algorithm that combines random subspace sampling and bagging with Cost-Sensitive Support Vector Classifiers to more effectively address detection applications burdened by unequal misclassification requirements. When compared to its conventional, non-cost-sensitive counterpart on a two-class signal detection application, random subspace sampling is shown to very effectively leverage the additional flexibility offered by the Cost-Sensitive Support Vector Classifier, yielding a more than four-fold increase in the detection rate at a false alarm rate (FAR) of zero. Moreover, the CS-RS-SVC is shown to be fairly robust to constraints on the feature subspace dimensionality, enabling reductions in computation time of up to 82% with minimal performance degradation.
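Random subspace sampling with voting can be sketched generically. The toy below substitutes a nearest-centroid base learner where the paper uses cost-sensitive SVMs, and all data and parameter choices are invented for illustration; only the subspace-sampling and voting structure is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_centroid(X, y):
    """Tiny stand-in base learner (nearest class centroid); the paper's
    base learner is a cost-sensitive support vector classifier."""
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def subspace_ensemble(X, y, n_learners=15, k=2):
    """Random subspace sampling: each learner sees only k randomly
    chosen features; predictions are later combined by majority vote."""
    models = []
    for _ in range(n_learners):
        feats = rng.choice(X.shape[1], size=k, replace=False)
        models.append((feats, fit_centroid(X[:, feats], y)))
    return models

def predict(models, X):
    votes = np.zeros(len(X))
    for feats, (c0, c1) in models:
        Xs = X[:, feats]
        d0 = np.linalg.norm(Xs - c0, axis=1)
        d1 = np.linalg.norm(Xs - c1, axis=1)
        votes += (d1 < d0).astype(float)      # vote for class 1
    return (votes > len(models) / 2).astype(int)

# Two well-separated Gaussian classes in 5 dimensions
X0 = rng.normal(0.0, 1.0, size=(40, 5))
X1 = rng.normal(4.0, 1.0, size=(40, 5))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(40), np.ones(40)].astype(int)
models = subspace_ensemble(X, y)
acc = (predict(models, X) == y).mean()
```

In the cost-sensitive setting, the voting threshold (here a simple majority) becomes the knob that trades detection rate against false alarm rate.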
Subspace-based CFAR detection of vehicles from forests in SAR image
NASA Astrophysics Data System (ADS)
Zhang, Yanfei; Guan, Jian; Wang, Jie; Li, Hongwei
2005-11-01
Automatic detection of military vehicles hidden among forests in synthetic aperture radar (SAR) imagery is a challenging and difficult task, because tree trunk clutter often appears locally as bright as the obscured targets. We apply subspace-based detection methods, treating the problem as detecting subspace signals in structured subspace interference and broadband noise of unknown level. Specifically, tree trunk clutter is modeled as structured isotropic interference with unknown amplitudes, while the dominant vehicle scatterers are modeled as anisotropic dihedral responses. A matched subspace detector (MSD) with the constant false alarm rate (CFAR) property is derived. Experiments on both simulated and real foliage-penetrating (FOPEN) SAR images show that the proposed detection scheme has good detection performance even at low false alarm rates.
Modulated Hebb-Oja learning rule--a method for principal subspace analysis.
Jankovic, Marko V; Ogawa, Hidemitsu
2006-03-01
This paper presents an analysis of the recently proposed modulated Hebb-Oja (MHO) method, which performs a linear mapping to a lower-dimensional subspace. The principal component subspace is the subspace that will be analyzed. Compared to some other well-known methods for yielding the principal component subspace (e.g., Oja's Subspace Learning Algorithm), the proposed method has one feature that could be seen as desirable from a biological point of view: the synaptic efficacy learning rule does not need explicit information about the values of the other efficacies to make an individual efficacy modification. Also, the simplicity of the "neural circuits" that perform global computations, and the fact that their number does not depend on the number of input and output neurons, can be seen as good features of the proposed method. PMID:16566463
Visual Exploration of High-Dimensional Data through Subspace Analysis and Dynamic Projections
Liu, S.; Wang, B.; Thiagarajan, Jayaraman J.; Bremer, Peer-Timo; Pascucci, Valerio
2015-06-01
We introduce a novel interactive framework for visualizing and exploring high-dimensional datasets based on subspace analysis and dynamic projections. We assume the high-dimensional dataset can be represented by a mixture of low-dimensional linear subspaces with mixed dimensions, and provide a method to reliably estimate the intrinsic dimension and linear basis of each subspace extracted from the subspace clustering. Subsequently, we use these bases to define unique 2D linear projections as viewpoints from which to visualize the data. To understand the relationships among the different projections and to discover hidden patterns, we connect these projections through dynamic projections that create smooth animated transitions between pairs of projections. We introduce the view transition graph, which provides flexible navigation among these projections to facilitate an intuitive exploration. Finally, we provide detailed comparisons with related systems, and use real-world examples to demonstrate the novelty and usability of our proposed framework.
Universal quantum computation in decoherence-free subspaces with hot trapped ions
Aolita, Leandro; Davidovich, Luiz; Kim, Kihwan; Haeffner, Hartmut
2007-05-15
We consider interactions that generate a universal set of quantum gates on logical qubits encoded in a collective-dephasing-free subspace, and discuss their implementations with trapped ions. This allows for the removal of the by-far largest source of decoherence in current trapped-ion experiments, collective dephasing. In addition, an explicit parametrization of all two-body Hamiltonians able to generate such gates without the system's state ever exiting the protected subspace is provided.
Quantum Recurrence of a Subspace and Operator-Valued Schur Functions
NASA Astrophysics Data System (ADS)
Bourgain, J.; Grünbaum, F. A.; Velázquez, L.; Wilkening, J.
2014-08-01
A notion of monitored recurrence for discrete-time quantum processes was recently introduced in Grünbaum et al. (Commun Math Phys (2), 320:543-569,
Efficient variational Bayesian approximation method based on subspace optimization.
Zheng, Yuling; Fraysse, Aurélia; Rodet, Thomas
2015-02-01
Variational Bayesian approximations have been widely used in fully Bayesian inference for approximating an intractable posterior distribution by a separable one. Nevertheless, the classical variational Bayesian approximation (VBA) method suffers from slow convergence to the approximate solution when tackling large-dimensional problems. To address this problem, we propose in this paper a more efficient VBA method. In fact, the variational Bayesian problem can be seen as a functional optimization problem. The proposed method is based on adapting subspace optimization methods in Hilbert spaces to the function space involved, in order to solve this optimization problem in an iterative way. The aim is to determine an optimal direction at each iteration in order to obtain a more efficient method. We highlight the efficiency of our new VBA method and demonstrate its application to image processing by considering an ill-posed linear inverse problem using a total variation prior. Comparisons with state-of-the-art variational Bayesian methods through a numerical example show a notable improvement in computation time. PMID:25532179
Linear parameter varying battery model identification using subspace methods
NASA Astrophysics Data System (ADS)
Hu, Y.; Yurkovich, S.
2011-03-01
The advent of hybrid and plug-in hybrid electric vehicles has created a demand for more precise battery pack management systems (BMS). Among methods used to design various components of a BMS, such as state-of-charge (SoC) estimators, model based approaches offer a good balance between accuracy, calibration effort and implementability. Because models used for these approaches are typically low in order and complexity, the traditional approach is to identify linear (or slightly nonlinear) models that are scheduled based on operating conditions. These models, formally known as linear parameter varying (LPV) models, tend to be difficult to identify because they contain a large amount of coefficients that require calibration. Consequently, the model identification process can be very laborious and time-intensive. This paper describes a comprehensive identification algorithm that uses linear-algebra-based subspace methods to identify a parameter varying state variable model that can describe the input-to-output dynamics of a battery under various operating conditions. Compared with previous methods, this approach is much faster and provides the user with information on the order of the system without placing an a priori structure on the system matrices. The entire process and various nuances are demonstrated using data collected from a lithium ion battery, and the focus is on applications for energy storage in automotive applications.
NASA Astrophysics Data System (ADS)
Sekihara, Kensuke; Kawabata, Yuya; Ushio, Shuta; Sumiya, Satoshi; Kawabata, Shigenori; Adachi, Yoshiaki; Nagarajan, Srikantan S.
2016-06-01
Objective. In functional electrophysiological imaging, signals are often contaminated by interference that can be of considerable magnitude compared to the signals of interest. This paper proposes a novel algorithm for removing such interference that does not require separate noise measurements. Approach. The algorithm is based on a dual definition of the signal subspace in the spatial and time domains. Since the algorithm makes use of this duality, it is named the dual signal subspace projection (DSSP). The DSSP algorithm first projects the columns of the measured data matrix onto the inside and outside of the spatial-domain signal subspace, creating a set of two preprocessed data matrices. The intersection of the row spans of these two matrices is estimated as the time-domain interference subspace. The original data matrix is then projected onto the subspace orthogonal to this interference subspace. Main results. The DSSP algorithm is validated using computer simulations and two sets of real biomagnetic data: spinal cord evoked field data measured from a healthy volunteer and magnetoencephalography data from a patient with a vagus nerve stimulator. Significance. The proposed DSSP algorithm is effective for removing overlapping interference in a wide variety of biomagnetic measurements.
Subspace Leakage Analysis and Improved DOA Estimation With Small Sample Size
NASA Astrophysics Data System (ADS)
Shaghaghi, Mahdi; Vorobyov, Sergiy A.
2015-06-01
Classical methods of DOA estimation such as the MUSIC algorithm are based on estimating the signal and noise subspaces from the sample covariance matrix. For a small number of samples, such methods are exposed to performance breakdown, as the sample covariance matrix can deviate greatly from the true covariance matrix. In this paper, the problem of DOA estimation performance breakdown is investigated. We consider the structure of the sample covariance matrix and the dynamics of the root-MUSIC algorithm. The performance breakdown in the threshold region is associated with subspace leakage, where some portion of the true signal subspace resides in the estimated noise subspace. In this paper, the subspace leakage is theoretically derived. We also propose a two-step method which improves performance by modifying the sample covariance matrix such that the amount of subspace leakage is reduced. Furthermore, we introduce a phenomenon, termed root-swap, that occurs in the root-MUSIC algorithm in the low-sample-size region and degrades the performance of DOA estimation. A new method is then proposed to alleviate this problem. Numerical examples and simulation results are given for uncorrelated and correlated sources to illustrate the improvement achieved by the proposed methods. Moreover, the proposed algorithms are combined with the pseudo-noise resampling method to further improve the performance.
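The signal/noise subspace split that subspace leakage degrades can be illustrated with a small spectral-MUSIC example on a uniform linear array. The scenario and parameters below are invented for the illustration; the paper analyzes root-MUSIC, which relies on the same eigendecomposition of the sample covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(2)

def music_spectrum(R, n_sources, angles, n_sensors):
    """MUSIC pseudospectrum from an estimated covariance matrix.

    The noise subspace is spanned by the eigenvectors of the
    n_sensors - n_sources smallest eigenvalues; with few snapshots, part
    of the signal subspace leaks into exactly this estimate."""
    _, V = np.linalg.eigh(R)                       # ascending eigenvalues
    En = V[:, : n_sensors - n_sources]             # noise subspace estimate
    p = []
    for th in angles:
        a = np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(th))
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(p)

# Half-wavelength ULA, one source at 20 degrees, 200 snapshots
m, n_snap, theta = 8, 200, np.deg2rad(20.0)
a = np.exp(1j * np.pi * np.arange(m) * np.sin(theta))
s = rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)
noise = 0.1 * (rng.normal(size=(m, n_snap)) + 1j * rng.normal(size=(m, n_snap)))
X = np.outer(a, s) + noise
R = X @ X.conj().T / n_snap                        # sample covariance
grid = np.deg2rad(np.linspace(-90, 90, 361))
spec = music_spectrum(R, 1, grid, m)
est = np.rad2deg(grid[np.argmax(spec)])            # estimated DOA in degrees
```

Shrinking `n_snap` toward `m` in this sketch reproduces the breakdown regime the paper studies: the smallest-eigenvalue eigenvectors no longer cleanly span the noise subspace.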
Robust Semi-Supervised Subspace Clustering via Non-Negative Low-Rank Representation.
Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung
2016-08-01
Low-rank representation (LRR) has been successfully applied in exploring the subspace structures of data. However, in previous LRR-based semi-supervised subspace clustering methods, the label information is not used to guide the affinity matrix construction, so the affinity matrix cannot deliver strong discriminant information. Moreover, these methods cannot guarantee an overall optimum, since the affinity matrix construction and subspace clustering are often independent steps. In this paper, we propose a robust semi-supervised subspace clustering method based on non-negative LRR (NNLRR) to address these problems. By combining the LRR framework and the Gaussian fields and harmonic functions method in a single optimization problem, the supervision information is explicitly incorporated to guide the affinity matrix construction, and the affinity matrix construction and subspace clustering are accomplished in one step to guarantee the overall optimum. The affinity matrix is obtained by seeking a non-negative low-rank matrix that represents each sample as a linear combination of others. We also explicitly impose a sparse constraint on the affinity matrix, such that the affinity matrix obtained by NNLRR is non-negative, low-rank, and sparse. We introduce an efficient linearized alternating direction method with adaptive penalty to solve the corresponding optimization problem. Extensive experimental results demonstrate that NNLRR is effective in semi-supervised subspace clustering and more robust to different types of noise than other state-of-the-art methods. PMID:26259210
Preconditioning methods for improved convergence rates in iterative reconstructions
Clinthorne, N.H.; Chiao, Pingchun; Rogers, W.L. (Div. of Nuclear Medicine); Pan, T.S. (Dept. of Nuclear Medicine); Stamos, J.A. (Dept. of Nuclear Engineering)
1993-03-01
Because of the characteristics of the tomographic inversion problem, iterative reconstruction techniques often suffer from poor convergence rates--especially at high spatial frequencies. By using preconditioning methods, the convergence properties of most iterative methods can be greatly enhanced without changing their ultimate solution. To increase reconstruction speed, the authors have applied spatially-invariant preconditioning filters that can be designed using the tomographic system response and implemented using 2-D frequency-domain filtering techniques. In a sample application, the authors performed reconstructions from noiseless, simulated projection data, using preconditioned and conventional steepest-descent algorithms. The preconditioned methods demonstrated residuals that were up to a factor of 30 lower than the unassisted algorithms at the same iteration. Applications of these methods to regularized reconstructions from projection data containing Poisson noise showed similar, although not as dramatic, behavior.
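A spatially invariant preconditioner applied through frequency-domain filtering can be illustrated in 1-D with a circular blur model: the gradient of the least-squares objective is filtered by 1/(|H|^2 + lambda) before each steepest-descent update, boosting the high spatial frequencies where unassisted iterations stall. The blur, lambda, and iteration count are assumptions of the sketch, not the authors' filter design.

```python
import numpy as np

rng = np.random.default_rng(3)

def precond_sd(y, H, lam=1e-3, iters=50):
    """Preconditioned steepest descent on ||h * x - y||^2 for a circular
    blur with frequency response H.  The spatially invariant
    preconditioner 1/(|H|^2 + lam) is applied to the gradient via FFTs."""
    x = np.zeros_like(y)
    W = 1.0 / (np.abs(H) ** 2 + lam)
    for _ in range(iters):
        r = np.fft.ifft(H * np.fft.fft(x)).real - y            # residual h*x - y
        g = np.fft.ifft(np.conj(H) * np.fft.fft(r)).real       # gradient
        x -= np.fft.ifft(W * np.fft.fft(g)).real               # filtered step
    return x

n = 64
h = np.zeros(n)
h[:5] = 1.0 / 5.0                       # circular moving-average blur
H = np.fft.fft(h)
x_true = rng.normal(size=n)
y = np.fft.ifft(H * np.fft.fft(x_true)).real                   # noiseless data
x_rec = precond_sd(y, H)
```

In the Fourier domain each error component contracts by lam / (|H(k)|^2 + lam) per iteration, so the preconditioner equalizes convergence across frequencies instead of letting high frequencies lag, which is the effect the abstract reports.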
The Galvanotactic Migration of Keratinocytes is Enhanced by Hypoxic Preconditioning
Guo, Xiaowei; Jiang, Xupin; Ren, Xi; Sun, Huanbo; Zhang, Dongxia; Zhang, Qiong; Zhang, Jiaping; Huang, Yuesheng
2015-01-01
The endogenous electric field (EF)-directed migration of keratinocytes (galvanotaxis) into wounds is an essential step in wound re-epithelialization. Hypoxia, which occurs immediately after injury, acts as an early stimulus to initiate the healing process; however, the mechanisms for this effect remain elusive. We show here that the galvanotactic migration of keratinocytes was enhanced by hypoxic preconditioning as a result of the increased directionality rather than the increased motility of keratinocytes. This enhancement was both oxygen tension- and preconditioning time-dependent, with the maximum effects achieved using 2% O2 preconditioning for 6 hours. Hypoxic preconditioning (2% O2, 6 hours) decreased the threshold voltage of galvanotaxis to < 25 mV/mm, whereas this value was between 25 and 50 mV/mm in the normal culture control. In a scratch-wound monolayer assay in which the applied EF was in the default healing direction, hypoxic preconditioning accelerated healing by 1.38-fold compared with the control conditions. Scavenging of the induced ROS by N-acetylcysteine (NAC) abolished the enhanced galvanotaxis and the accelerated healing by hypoxic preconditioning. Our data demonstrate a novel and unsuspected role of hypoxia in supporting keratinocyte galvanotaxis. Enhancing the galvanotactic response of cells might therefore be a clinically attractive approach to induce improved wound healing. PMID:25988491
A Weakest Precondition Approach to Robustness
NASA Astrophysics Data System (ADS)
Balliu, Musard; Mastroeni, Isabella
With the increasing complexity of information management computer systems, security becomes a real concern. E-government, web-based financial transactions, and military and health care information systems are only a few examples where large amounts of information can reside on different hosts distributed worldwide. It is clear that any disclosure or corruption of confidential information in these contexts can be fatal. Information flow controls constitute an appealing and promising technology for protecting both data confidentiality and data integrity. The certification of the security degree of a program that runs in untrusted environments still remains an open problem in the area of language-based security. Robustness asserts that an active attacker, who can modify program code at some fixed points (holes), is unable to disclose more private information than a passive attacker, who merely observes unclassified data. In this paper, we extend a method recently proposed for checking declassified non-interference in the presence of passive attackers only, in order to check robustness by means of weakest precondition semantics. In particular, this semantics simulates the kind of analysis that can be performed by an attacker, i.e., from public output towards private input. The choice of semantics allows us to distinguish between different attack models and to characterize the security of applications in different scenarios.
Responsive corneosurfametry following in vivo skin preconditioning.
Uhoda, E; Goffin, V; Pierard, G E
2003-12-01
Skin is subjected to many environmental threats, some of which alter the structure and function of the stratum corneum. Among them, surfactants are recognized factors that may influence irritant contact dermatitis. The present study was conducted to compare the variations in skin capacitance and corneosurfametry (CSM) reactivity before and after skin exposure to repeated subclinical injuries by 2 hand dishwashing liquids. A forearm immersion test was performed on 30 healthy volunteers. 2 daily soak sessions were performed for 5 days. At inclusion and on the day following the last soak session, skin capacitance was measured and cyanoacrylate skin-surface strippings were harvested. The latter specimens were used for ex vivo microwave CSM. Both types of assessment clearly differentiated the 2 hand dishwashing liquids. The forearm immersion test increased the discriminant sensitivity of CSM. Intact skin capacitance did not predict CSM data. By contrast, a significant correlation was found between the post-test conductance and the corresponding CSM data. In conclusion, a forearm immersion test under realistic conditions can discriminate the irritation potential between surfactant-based products by measuring skin conductance and performing CSM. In vivo skin preconditioning by surfactants increases CSM sensitivity to the same surfactants. PMID:15025702
Discrete sensitivity derivatives of the Navier-Stokes equations with a parallel Krylov solver
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Taylor, Arthur C., III
1994-01-01
This paper solves an 'incremental' form of the sensitivity equations derived by differentiating the discretized thin-layer Navier Stokes equations with respect to certain design variables of interest. The equations are solved with a parallel, preconditioned Generalized Minimal RESidual (GMRES) solver on a distributed-memory architecture. The 'serial' sensitivity analysis code is parallelized by using the Single Program Multiple Data (SPMD) programming model, domain decomposition techniques, and message-passing tools. Sensitivity derivatives are computed for low and high Reynolds number flows over a NACA 1406 airfoil on a 32-processor Intel Hypercube, and found to be identical to those computed on a single-processor Cray Y-MP. It is estimated that the parallel sensitivity analysis code has to be run on 40-50 processors of the Intel Hypercube in order to match the single-processor processing time of a Cray Y-MP.
Evaluating the utility of mid-infrared spectral subspaces for predicting soil properties
Sila, Andrew M.; Shepherd, Keith D.; Pokhariyal, Ganesh P.
2016-01-01
We propose four methods for finding local subspaces in large spectral libraries: (a) cosine angle spectral matching; (b) hit quality index spectral matching; (c) self-organizing maps; and (d) archetypal analysis. We then evaluate prediction accuracies for global and subspace calibration models. These methods were tested on a mid-infrared spectral library containing 1907 soil samples collected from 19 different countries under the Africa Soil Information Service project. Calibration models for pH, Mehlich-3 Ca, Mehlich-3 Al, total carbon and clay soil properties were developed for the whole library and for each subspace. Root mean square error of prediction, computed on a one-third-holdout validation set, was used to evaluate the predictive performance of subspace and global models. The effect of spectral pretreatment was tested for the first- and second-derivative Savitzky–Golay algorithm, multiplicative scatter correction, standard normal variate, and standard normal variate followed by detrending. In summary, the results show that global models outperformed the subspace models; we therefore conclude that global models are more accurate than local models except in a few cases. For instance, sand and clay root mean square error values from local models built with the archetypal analysis method were 50% poorer than the global models, except for subspace models obtained using multiplicative scatter corrected spectra, which were 12% better. However, the subspace approach provides novel methods for discovering data patterns that may exist in large spectral libraries. PMID:27110048
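The cosine-angle matching in method (a) reduces to ranking library spectra by the cosine of their spectral angle to a query spectrum. A minimal sketch, assuming spectra stored as rows of a NumPy array; the library, query, and neighborhood size `k` are illustrative, not the paper's data:

```python
import numpy as np

# A minimal cosine-angle spectral matcher: return the k library spectra
# closest in spectral angle to a query spectrum. All names are illustrative.
def cosine_match(library, spectrum, k=5):
    L = library / np.linalg.norm(library, axis=1, keepdims=True)
    s = spectrum / np.linalg.norm(spectrum)
    cos = L @ s                          # cosine of the spectral angle
    return np.argsort(-cos)[:k]          # best matches first

rng = np.random.default_rng(3)
lib = rng.random((100, 50))              # 100 library spectra, 50 bands
query = 3.0 * lib[7]                     # scaling leaves the angle unchanged
best = cosine_match(lib, query, k=1)
assert best[0] == 7
```

The returned indices would define the local subspace from which a local calibration model is fitted; the angle is scale-invariant, so intensity differences between instruments do not affect the match.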
Sensitivity analysis of a nonlinear Newton-Krylov solver for heat transfer with phase change.
Henninger, Rudolph J.; Knoll, D. A.; Kothe, D. B.; Lally, B. R.
2002-01-01
Development of a complex metal-casting computer model requires information about how varying the problem parameters affects the results (metal flow and solidification). For example, we would like to know how the last point to solidify or the cooling rate at a given location changes when the physical properties of the metal, boundary conditions, or mold geometry are changed. As a preliminary step towards a complete sensitivity analysis of a three-dimensional casting simulation, we examine a one-dimensional version of a metal-alloy phase-change conductive-heat-transfer model by means of Automatic Differentiation (AD). This non-linear 'Jacobian-free' method is a combination of an outer Newton-based iteration and an inner conjugate gradient-like (Krylov) iteration. The implicit solution algorithm has enthalpy as the dependent variable from which temperatures are determined. We examine the sensitivities of the difference between an exact analytical solution for the final temperature and that produced by this algorithm to the problem parameters. In all there are 17 parameters (12 physical constants such as liquid density, heat capacity, and thermal conductivity, 2 initial and boundary condition parameters, the final solution time, and 2 algorithm tolerances). We apply AD in the forward and reverse mode and verify the sensitivities by means of finite differences. In general, the finite-difference method requires at least N+1 computer runs to determine sensitivities for N problem parameters. By forward and reverse, we mean the direction through the solution and in time and space in which the derivative values are obtained. The forward mode is typically more efficient for determining the sensitivity of many responses to one or a few parameters, while the reverse mode is better suited for sensitivities of one or a few responses with respect to many parameters. The sensitivities produced by all the methods agreed to at least three significant figures. The forward and reverse
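The N+1-run cost of the finite-difference check mentioned above is easy to illustrate on a toy scalar response; the exponential cooling model and its parameters below are illustrative stand-ins, not the paper's phase-change model:

```python
import numpy as np

# Toy scalar response standing in for "final temperature": exponential
# cooling with parameters (initial temperature T0, rate k, final time t).
def response(p):
    T0, k, t = p
    return T0 * np.exp(-k * t)

def fd_sensitivities(f, p, h=1e-6):
    """One-sided finite differences: N+1 evaluations for N parameters."""
    base = f(p)
    grad = np.zeros_like(p)
    for i in range(len(p)):
        q = p.copy()
        q[i] += h
        grad[i] = (f(q) - base) / h
    return grad

p = np.array([900.0, 0.5, 2.0])
grad = fd_sensitivities(response, p)
T0, k, t = p
exact = np.array([np.exp(-k*t), -T0*t*np.exp(-k*t), -T0*k*np.exp(-k*t)])
assert np.allclose(grad, exact, rtol=1e-3)
```

Forward- or reverse-mode AD delivers the same derivatives without the N+1 evaluations and without the step-size tuning that finite differences require.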
Fetal asphyctic preconditioning alters the transcriptional response to perinatal asphyxia
2014-01-01
Background Genomic reprogramming is thought to be, at least in part, responsible for the protective effect of brain preconditioning. Unraveling mechanisms of this endogenous neuroprotection, activated by preconditioning, is an important step towards new clinical strategies for treating asphyctic neonates. Therefore, we investigated whole-genome transcriptional changes in the brain of rats which underwent perinatal asphyxia (PA), and rats where PA was preceded by fetal asphyctic preconditioning (FAPA). Offspring were sacrificed 6 h and 96 h after birth, and whole-genome transcription was investigated using the Affymetrix Gene1.0ST chip. Microarray data were analyzed with the Bioconductor Limma package. In addition to univariate analysis, we performed Gene Set Enrichment Analysis (GSEA) in order to derive results with maximum biological relevance. Results We observed minimal, 25% or less, overlap of differentially regulated transcripts across different experimental groups which leads us to conclude that the transcriptional phenotype of these groups is largely unique. In both the PA and FAPA group we observe an upregulation of transcripts involved in cellular stress. Contrastingly, transcripts with a function in the cell nucleus were mostly downregulated in PA animals, while we see considerable upregulation in the FAPA group. Furthermore, we observed that histone deacetylases (HDACs) are exclusively regulated in FAPA animals. Conclusions This study is the first to investigate whole-genome transcription in the neonatal brain after PA alone, and after perinatal asphyxia preceded by preconditioning (FAPA). We describe several genes/pathways, such as ubiquitination and proteolysis, which were not previously linked to preconditioning-induced neuroprotection. Furthermore, we observed that the majority of upregulated genes in preconditioned animals have a function in the cell nucleus, including several epigenetic players such as HDACs, which suggests that epigenetic
Reduction in postsystolic wall thickening during late preconditioning.
Monnet, Xavier; Lucats, Laurence; Colin, Patrice; Derumeaux, Geneviève; Dubois-Rande, Jean-Luc; Hittinger, Luc; Ghaleh, Bijan; Berdeaux, Alain
2007-01-01
Brief coronary artery occlusion (CAO) and reperfusion induce myocardial stunning and late preconditioning. Postsystolic wall thickening (PSWT) also develops with CAO and reperfusion. However, the time course of PSWT during stunning and the regional function pattern of the preconditioned myocardium remain unknown. The goal of this study was to investigate the evolution of PSWT during myocardial stunning and its modifications during late preconditioning. Dogs were chronically instrumented to measure (sonomicrometry) systolic wall thickening (SWT), PSWT, total wall thickening (TWT = SWT + PSWT), and maximal rate of thickening (dWT/dt(max)). Two 10-min CAOs (circumflex artery) were performed 24 h apart (day 0 and day 1, n = 7). At day 0, CAO decreased SWT and increased PSWT. During the first hours of the subsequent stunning, the evolution of PSWT was symmetrical to that of SWT. At day 1, baseline SWT was similar to day 0, but PSWT was reduced (-66%), while dWT/dt(max) and the SWT/TWT ratio increased (+48 and +14%, respectively). After CAO at day 1, stunning was reduced, indicating late preconditioning. Simultaneously, vs. day 0, PSWT was significantly reduced, and dWT/dt(max) as well as the SWT/TWT ratio were increased, i.e., a greater part of TWT was devoted to ejection. A similar decrease in PSWT was observed with a nonischemic preconditioning stimulus (rapid ventricular pacing, n = 4). In conclusion, a major contractile adaptation occurs during late preconditioning, i.e., the rate of wall thickening is enhanced and PSWT is almost abolished. These phenotype adaptations represent potential approaches for characterizing stunning and late preconditioning with repetitive ischemia in humans. PMID:16920813
Modified principal component analysis: an integration of multiple similarity subspace models.
Fan, Zizhu; Xu, Yong; Zuo, Wangmeng; Yang, Jian; Tang, Jinhui; Lai, Zhihui; Zhang, David
2014-08-01
We modify the conventional principal component analysis (PCA) and propose a novel subspace learning framework, modified PCA (MPCA), using multiple similarity measurements. MPCA computes three similarity matrices exploiting the similarity measurements: 1) mutual information; 2) angle information; and 3) Gaussian kernel similarity. We employ the eigenvectors of the similarity matrices to produce new subspaces, referred to as similarity subspaces. A new integrated similarity subspace is then generated using a novel feature selection approach. This approach constructs a vector set, termed a weak machine cell (WMC), which contains an appropriate number of the eigenvectors spanning the similarity subspaces. Combining the wrapper method and the forward selection scheme, MPCA selects a WMC at a time that has a powerful discriminative capability to classify samples. MPCA is very suitable for application scenarios in which the number of training samples is less than the data dimensionality. MPCA outperforms other state-of-the-art PCA-based methods in terms of both classification accuracy and clustering results. In addition, MPCA can be applied to face image reconstruction, and it can use other types of similarity measurements. Extensive experiments on many popular real-world data sets, such as face databases, show that MPCA achieves desirable classification results and has a powerful capability to represent data. PMID:25050950
NASA Astrophysics Data System (ADS)
Hu, Xingzhi; Parks, Geoffrey T.; Chen, Xiaoqian; Seshadri, Pranay
2016-03-01
Uncertainty quantification has recently been receiving much attention from the aerospace engineering community. With ever-increasing requirements for robustness and reliability, it is crucial to quantify multidisciplinary uncertainty in satellite system design, which dominates overall design direction and cost. However, coupled multi-disciplines and cross propagation hamper the efficiency and accuracy of high-dimensional uncertainty analysis. In this study, an uncertainty quantification methodology based on active subspaces is established for satellite conceptual design. The active subspace effectively reduces the dimension and measures the contributions of input uncertainties. A comprehensive characterization of the associated uncertain factors is made and all subsystem models are built for uncertainty propagation. By integrating a system decoupling strategy, the multidisciplinary uncertainty effect is efficiently represented by a one-dimensional active subspace for each design. The identified active subspace is checked by bootstrap resampling for confidence intervals and verified by Monte Carlo propagation for accuracy. To show the performance of active subspaces, 18 uncertainty parameters of an Earth observation small satellite are considered first, and then another 5 design uncertainties are incorporated. The uncertainties that contribute the most to satellite mass and total cost are ranked, and the quantification of high-dimensional uncertainty is achieved with a relatively small number of support samples. The methodology, at considerably less cost, exhibits high accuracy and strong adaptability, which provides a potential template for tackling multidisciplinary uncertainty in practical satellite systems.
Ischemic Preconditioning in White Matter: Magnitude and Mechanism
Hamner, Margaret A.; Ye, Zucheng; Lee, Richard V.; Colman, Jamie R.; Le, Thu; Gong, Davin C.; Weinstein, Jonathan R.
2015-01-01
Ischemic preconditioning (IPC) is a robust neuroprotective phenomenon whereby brief ischemic exposure confers tolerance to a subsequent ischemic challenge. IPC has not been studied selectively in CNS white matter (WM), although stroke frequently involves WM. We determined whether IPC is present in WM and, if so, its mechanism. We delivered a brief in vivo preconditioning ischemic insult (unilateral common carotid artery ligation) to 12- to 14-week-old mice and determined WM ischemic vulnerability [oxygen–glucose deprivation (OGD)] 72 h later, using acutely isolated optic nerves (CNS WM tracts) from the preconditioned (ipsilateral) and control (contralateral) hemispheres. Functional and structural recovery was assessed by quantitative measurement of compound action potentials (CAPs) and immunofluorescent microscopy. Preconditioned mouse optic nerves (MONs) showed better functional recovery after OGD than the non-preconditioned MONs (31 ± 3 vs 17 ± 3% normalized CAP area, p < 0.01). Preconditioned MONs also showed improved axon integrity and reduced oligodendrocyte injury compared with non-preconditioned MONs. Toll-like receptor-4 (TLR4) and type 1 interferon receptor (IFNAR1), key receptors in innate immune response, are implicated in gray matter preconditioning. Strikingly, IPC-mediated WM protection was abolished in both TLR4−/− and IFNAR1−/− mice. In addition, IPC-mediated protection in WM was also abolished in IFNAR1fl/fl LysMcre, but not in IFNAR1fl/fl control, mice. These findings demonstrated for the first time that IPC was robust in WM, the phenomenon being intrinsic to WM itself. Furthermore, WM IPC was dependent on innate immune cell signaling pathways. Finally, these data demonstrated that microglial-specific expression of IFNAR1 plays an indispensable role in WM IPC. SIGNIFICANCE STATEMENT Ischemic preconditioning (IPC) has been studied predominantly in gray matter, but stroke in humans frequently involves white matter (WM) as well. Here we
Heat shock proteins, end effectors of myocardium ischemic preconditioning?
Guisasola, María Concepcion; Desco, Maria del Mar; Gonzalez, Fernanda Silvana; Asensio, Fernando; Dulin, Elena; Suarez, Antonio; Garcia Barreno, Pedro
2006-01-01
The purpose of this study was to investigate (1) whether ischemia-reperfusion increased the content of heat shock protein 72 (Hsp72) transcripts and (2) whether the myocardial content of Hsp72 is increased by ischemic preconditioning, so that Hsp72 can be considered an end effector of preconditioning. Twelve male minipigs (8 protocol, 4 sham) were used, with the following ischemic preconditioning protocol: three alternating 5-minute cycles of ischemia and reperfusion, with a final reperfusion period of 3 hours. Initial and final transmural biopsies (both in healthy and ischemic areas) were taken in all animals. Heat shock protein 72 messenger ribonucleic acid (mRNA) expression was measured by a semiquantitative reverse transcriptase-polymerase chain reaction (RT-PCR) method using complementary DNA normalized against the housekeeping gene cyclophilin. The identification of heat shock protein 72 was performed by immunoblot. In our “classic” preconditioning model, we found no changes in hsp72 mRNA levels or heat shock protein 72 content in the myocardium after 3 hours of reperfusion. Our experimental model is valid and the experimental techniques are appropriate, but the induction of heat shock protein 72 as an end effector of cardioprotection in ischemic preconditioning does not occur in the first hours after ischemia, but probably at least 24 hours after it, in the so-called “second protection window.” PMID:17009598
Implementation of Preconditioned Dual-Time Procedures in OVERFLOW
NASA Technical Reports Server (NTRS)
Pandya, Shishir A.; Venkateswaran, Sankaran; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2003-01-01
Preconditioning methods have become the method of choice for the solution of flowfields involving the simultaneous presence of low Mach and transonic regions. It is well known that these methods are important for ensuring accurate numerical discretization as well as convergence efficiency over various operating conditions such as low Mach number, low Reynolds number and high Strouhal number. For unsteady problems, the preconditioning is introduced within a dual-time framework wherein the physical time-derivatives are used to march the unsteady equations and the preconditioned time-derivatives are used for purposes of numerical discretization and iterative solution. In this paper, we describe the implementation of the preconditioned dual-time methodology in the OVERFLOW code. To demonstrate the performance of the method, we employ both simple and practical unsteady flowfields, including vortex propagation in a low Mach number flow, the flowfield of an impulsively started plate (Stokes' first problem) and a cylindrical jet in a low Mach number crossflow with ground effect. All the results demonstrate that the preconditioning algorithm is responsible for improvements to both numerical accuracy and convergence efficiency and thereby enables low Mach number unsteady computations to be performed at a fraction of the cost of traditional time-marching methods.
Crevecoeur, Guillaume; Yitembe, Bertrand; Dupre, Luc; Van Keer, Roger
2013-01-01
This paper proposes a modification of the subspace correlation cost function and the Recursively Applied and Projected Multiple Signal Classification (RAP-MUSIC) method for electroencephalography (EEG) source analysis in epilepsy. This enables to reconstruct neural source locations and orientations that are less degraded due to the uncertain knowledge of the head conductivity values. An extended linear forward model is used in the subspace correlation cost function that incorporates the sensitivity of the EEG potentials to the uncertain conductivity value parameter. More specifically, the principal vector of the subspace correlation function is used to provide relevant information for solving the EEG inverse problems. A simulation study is carried out on a simplified spherical head model with uncertain skull to soft tissue conductivity ratio. Results show an improvement in the reconstruction accuracy of source parameters compared to traditional methodology, when using conductivity ratio values that are different from the actual conductivity ratio. PMID:24111154
Estimation of direction of arrival of a moving target using subspace based approaches
NASA Astrophysics Data System (ADS)
Ghosh, Ripul; Das, Utpal; Akula, Aparna; Kumar, Satish; Sardana, H. K.
2016-05-01
In this work, array processing techniques based on subspace decomposition of the signal have been evaluated for estimating the direction of arrival (DOA) of moving targets using acoustic signatures. Three subspace-based approaches are considered: Incoherent Wideband Multiple Signal Classification (IWM), Least Squares Estimation of Signal Parameters via Rotational Invariance Techniques (LS-ESPRIT), and Total Least Squares ESPRIT (TLS-ESPRIT). Their performance is compared with conventional time delay estimation (TDE) approaches such as Generalized Cross Correlation (GCC) and Average Square Difference Function (ASDF). Performance evaluation has been conducted on experimentally generated data consisting of acoustic signatures of four different types of civilian vehicles moving in defined geometrical trajectories. Mean absolute error and standard deviation of the DOA estimates w.r.t. ground truth are used as performance evaluation metrics. Lower mean error values confirm the superiority of subspace-based approaches over TDE-based techniques. Amongst the compared methods, LS-ESPRIT showed the best performance.
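The subspace idea behind these estimators can be sketched with narrowband MUSIC on a uniform linear array: eigendecompose the sample covariance, keep the noise subspace, and scan steering vectors for orthogonality. The array geometry, noise level, and single-source scenario below are illustrative assumptions; the paper's wideband acoustic setting is more involved:

```python
import numpy as np

def music_spectrum(X, n_sources, grid_deg, d=0.5):
    """Narrowband MUSIC pseudospectrum for a uniform linear array.
    X: (sensors, snapshots) complex data; d: element spacing in wavelengths."""
    m, snaps = X.shape
    R = X @ X.conj().T / snaps                 # sample covariance
    _, V = np.linalg.eigh(R)                   # eigenvalues in ascending order
    En = V[:, : m - n_sources]                 # noise-subspace basis
    k = np.arange(m)
    P = np.empty(len(grid_deg))
    for i, th in enumerate(np.deg2rad(grid_deg)):
        a = np.exp(2j * np.pi * d * k * np.sin(th))   # steering vector
        P[i] = 1.0 / (np.linalg.norm(En.conj().T @ a) ** 2)
    return P

# one narrowband source at 20 degrees, 8-element half-wavelength array
rng = np.random.default_rng(0)
m, snaps, true_deg = 8, 200, 20.0
a = np.exp(2j * np.pi * 0.5 * np.arange(m) * np.sin(np.deg2rad(true_deg)))
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
noise = 0.05 * (rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps)))
X = np.outer(a, s) + noise
grid = np.arange(-90.0, 90.5, 1.0)
est = grid[np.argmax(music_spectrum(X, 1, grid))]
assert abs(est - true_deg) <= 1.0
```

The pseudospectrum peaks where the steering vector is (nearly) orthogonal to the noise subspace; ESPRIT variants avoid the grid search by exploiting the array's shift invariance instead.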
Multi-qubit non-adiabatic holonomic controlled quantum gates in decoherence-free subspaces
NASA Astrophysics Data System (ADS)
Hu, Shi; Cui, Wen-Xue; Guo, Qi; Wang, Hong-Fu; Zhu, Ai-Dong; Zhang, Shou
2016-06-01
Non-adiabatic holonomic quantum gates in decoherence-free subspaces are of great practical importance due to their built-in fault tolerance, coherence stabilization virtues, and short run-time. Here, we propose some compact schemes to implement two- and three-qubit controlled unitary quantum gates and the Fredkin gate. For the controlled unitary quantum gates, the unitary operator acting on the target qubit is an arbitrary single-qubit gate operation. The controlled quantum gates can be directly implemented by utilizing non-adiabatic holonomies in decoherence-free subspaces, and the required resource for the decoherence-free subspace encoding is minimal, using only two neighboring physical qubits undergoing collective dephasing to encode a logical qubit.
Gpu Implementation of Preconditioning Method for Low-Speed Flows
NASA Astrophysics Data System (ADS)
Zhang, Jiale; Chen, Hongquan
2016-06-01
An improved preconditioning method for low-Mach-number flows is implemented on a GPU platform. The improved preconditioning method employs the fluctuation of the fluid variables to reduce the loss of accuracy caused by truncation error. The GPU parallel computing platform is used to accelerate the calculations. Details of both the improved preconditioning method and the GPU implementation are described in this paper. A set of typical low-speed flow cases is then simulated for both validation and performance analysis of the resulting GPU solver. Numerical results show that speedups of dozens of times relative to a serial CPU implementation can be achieved on a single-GPU desktop platform, which demonstrates that the GPU desktop can serve as a cost-effective parallel computing platform to substantially accelerate CFD simulations of low-speed flows.
Operator-Based Preconditioning of Stiff Hyperbolic Systems
Reynolds, Daniel R.; Samtaney, Ravi; Woodward, Carol S.
2009-02-09
We introduce an operator-based scheme for preconditioning stiff components encountered in implicit methods for hyperbolic systems of partial differential equations posed on regular grids. The method is based on a directional splitting of the implicit operator, followed by a characteristic decomposition of the resulting directional parts. This approach allows for solution to any number of characteristic components, from the entire system to only the fastest, stiffness-inducing waves. We apply the preconditioning method to stiff hyperbolic systems arising in magnetohydrodynamics and gas dynamics. We then present numerical results showing that this preconditioning scheme works well on problems where the underlying stiffness results from the interaction of fast transient waves with slowly-evolving dynamics, scales well to large problem sizes and numbers of processors, and allows for additional customization based on the specific problems under study.
Liquid hydrogen turbopump rapid start program. [thermal preconditioning using coatings
NASA Technical Reports Server (NTRS)
Wong, G. S.
1973-01-01
The objective of this program was to analyze, test, and evaluate methods of achieving rapid start of a liquid hydrogen feed system (inlet duct and turbopump) using a minimum of thermal preconditioning time and propellant. The program was divided into four tasks. Task 1 includes analytical studies of the testing conducted in the other three tasks. Task 2 describes the results from laboratory testing of coating samples and the successful adherence of a KX-635 coating to the internal surfaces of the feed system tested in Task 4. Task 3 presents results of testing an uncoated feed system. Tank pressure was varied to determine the effect of flowrate on preconditioning. The discharge volume and the discharge pressure which initiates opening of the discharge valve were varied to determine the effect on deadhead (no through-flow) start transients. Task 4 describes results of testing a similar, internally coated feed system and illustrates the savings in preconditioning time and propellant resulting from the coatings.
On adaptive weighted polynomial preconditioning for Hermitian positive definite matrices
NASA Technical Reports Server (NTRS)
Fischer, Bernd; Freund, Roland W.
1992-01-01
The conjugate gradient algorithm for solving Hermitian positive definite linear systems is usually combined with preconditioning in order to speed up convergence. In recent years, there has been a revival of polynomial preconditioning, motivated by the attractive features of the method on modern architectures. Standard techniques for choosing the preconditioning polynomial are based only on bounds for the extreme eigenvalues. Here a different approach is proposed, which aims at adapting the preconditioner to the eigenvalue distribution of the coefficient matrix. The technique is based on the observation that good estimates for the eigenvalue distribution can be derived after only a few steps of the Lanczos process. This information is then used to construct a weight function for a suitable Chebyshev approximation problem. The solution of this problem yields the polynomial preconditioner. In particular, we investigate the use of Bernstein-Szego weights.
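The mechanics of polynomial preconditioning inside CG can be sketched with a simple Neumann-series polynomial standing in for the paper's weighted Chebyshev construction; the test matrix, polynomial degree, and scaling are illustrative assumptions:

```python
import numpy as np

def pcg(A, b, apply_Minv, tol=1e-8, maxit=2000):
    """Preconditioned conjugate gradients for SPD A; returns (x, iterations)."""
    x = np.zeros_like(b)
    r = b.copy()
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

n = 100                                   # 1-D Laplacian test matrix
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# degree-3 Neumann-series polynomial: M^{-1} = s(I + B + B^2 + B^3),
# with B = I - sA and s small enough that the series makes sense
s = 1.0 / np.abs(A).sum(axis=1).max()     # crude bound: s <= 1/||A||_inf

def poly_Minv(r, degree=3):
    z = s * r
    y = r.copy()
    for _ in range(degree):
        y = y - s * (A @ y)               # y <- B y
        z = z + s * y
    return z

x_poly, it_poly = pcg(A, b, poly_Minv)
x_none, it_none = pcg(A, b, lambda r: r)
assert np.allclose(A @ x_poly, b, atol=1e-5)
assert it_poly < it_none                  # fewer (though costlier) iterations
```

Each preconditioned iteration costs extra matrix-vector products, which is precisely why the choice of polynomial (here naive, in the paper adapted to the Lanczos-estimated spectrum via weighted Chebyshev approximation) matters.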
Preconditioning boosts regenerative programmes in the adult zebrafish heart
de Preux Charles, Anne-Sophie; Bise, Thomas; Baier, Felix; Sallin, Pauline; Jaźwińska, Anna
2016-01-01
During preconditioning, exposure to a non-lethal harmful stimulus triggers a body-wide increase of survival and pro-regenerative programmes that enable the organism to better withstand the deleterious effects of subsequent injuries. This phenomenon was first described in the mammalian heart, where it leads to a reduction of infarct size and limits the dysfunction of the injured organ. Despite its important clinical outcome, the actual mechanisms underlying preconditioning-induced cardioprotection remain unclear. Here, we describe two independent models of cardiac preconditioning in the adult zebrafish. As noxious stimuli, we used either a thoracotomy procedure or an induction of sterile inflammation by intraperitoneal injection of immunogenic particles. Similar to mammalian preconditioning, the zebrafish heart displayed increased expression of cardioprotective genes in response to these stimuli. As zebrafish cardiomyocytes have an endogenous proliferative capacity, preconditioning further elevated re-entry into the cell cycle in the intact heart. This enhanced cycling activity led to a long-term modification of the myocardium architecture. Importantly, the protected phenotype brought beneficial effects for heart regeneration within one week after cryoinjury, such as more effective cell-cycle re-entry, enhanced reactivation of embryonic gene expression at the injury border, and improved cell survival shortly after injury. This study reveals that exposure to antecedent stimuli induces adaptive responses that render the fish more efficient in the activation of the regenerative programmes following heart damage. Our results open a new field of research by providing the adult zebrafish as a model system to study remote cardiac preconditioning. PMID:27440423
Recursive encoding and decoding of the noiseless subsystem and decoherence-free subspace
Li, Chi-Kwong; Nakahara, Mikio; Poon, Yiu-Tung; Sze, Nung-Sing; Tomita, Hiroyuki
2011-10-15
When an environmental disturbance to a quantum system has a wavelength much larger than the system size, all qubits in the system are under the action of the same error operator. The noiseless subsystem and decoherence-free subspace are immune to such collective noise. We construct simple quantum circuits that implement these error-avoiding codes for a small number n of physical qubits. A single logical qubit is encoded with n=3 and 4, while two and three logical qubits are encoded with n=5 and 7, respectively. Recursive relations among subspaces employed in these codes play essential roles in our implementation.
The Subspace Projected Approximate Matrix (SPAM) modification of the Davidson method
Shepard, R.; Tilson, J.L.; Wagner, A.F.; Minkoff, M.
1997-12-31
A modification of the Davidson subspace expansion method, a Ritz approach, is proposed in which the expansion vectors are computed from a "cheap" approximating eigenvalue equation. This approximate eigenvalue equation is assembled using projection operators constructed from the subspace expansion vectors. The method may be implemented using an inner/outer iteration scheme, or it may be implemented by modifying the usual Davidson algorithm in such a way that exact and approximate matrix-vector product computations are interspersed. A multi-level algorithm is proposed in which several levels of approximate matrices are used.
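The baseline Davidson iteration that SPAM modifies can be sketched as follows, with the usual diagonal preconditioner standing in for the "cheap" approximating eigenvalue equation; the test matrix, tolerances, and small-denominator guard are illustrative assumptions:

```python
import numpy as np

def davidson(A, tol=1e-8, maxit=60):
    """Plain Davidson for the smallest eigenpair of a symmetric matrix,
    expanding the subspace with a diagonally preconditioned residual."""
    n = A.shape[0]
    d = np.diag(A)
    V = np.empty((n, 0))
    v = np.zeros(n)
    v[np.argmin(d)] = 1.0                      # start at the smallest diagonal
    for _ in range(maxit):
        V = np.column_stack([V, v])
        V, _ = np.linalg.qr(V)                 # keep the basis orthonormal
        H = V.T @ A @ V                        # Rayleigh-Ritz projection
        w, U = np.linalg.eigh(H)
        theta, x = w[0], V @ U[:, 0]
        r = A @ x - theta * x                  # residual
        if np.linalg.norm(r) < tol:
            break
        denom = d - theta
        denom[np.abs(denom) < 1e-3] = 1e-3     # guard small denominators
        v = r / denom                          # Davidson expansion vector
    return theta, x

rng = np.random.default_rng(1)
n = 100
A = np.diag(np.arange(1.0, n + 1)) + 1e-2 * rng.standard_normal((n, n))
A = (A + A.T) / 2                              # symmetric, diagonally dominant
theta, x = davidson(A)
assert abs(theta - np.linalg.eigvalsh(A)[0]) < 1e-6
```

SPAM's change lives in the expansion step: instead of the diagonal solve `r / (d - theta)`, the new vector comes from solving an approximate eigenproblem built by projecting a cheap surrogate of A.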
Boundary regularity of Nevanlinna domains and univalent functions in model subspaces
NASA Astrophysics Data System (ADS)
Baranov, Anton D.; Fedorovskiy, Konstantin Yu
2011-12-01
In the paper we study boundary regularity of Nevanlinna domains, which have appeared in problems of uniform approximation by polyanalytic polynomials. A new method for constructing Nevanlinna domains with essentially irregular nonanalytic boundaries is suggested; this method is based on finding appropriate univalent functions in model subspaces, that is, in subspaces of the form K_\\varTheta=H^2\\ominus\\varTheta H^2, where \\varTheta is an inner function. To describe the irregularity of the boundaries of the domains obtained, recent results by Dolzhenko about boundary regularity of conformal mappings are used. Bibliography: 18 titles.
Incomplete block SSOR preconditionings for high order discretizations
Kolotilina, L.
1994-12-31
This paper considers the solution of linear algebraic systems Ax = b resulting from the p-version of the Finite Element Method (FEM) using PCG iterations. Contrary to the h-version, the p-version ensures the desired accuracy of a discretization not by refining an original finite element mesh but by introducing higher degree polynomials as additional basis functions, which makes it possible to reduce the size of the resulting linear system as compared with the h-version. The suggested preconditionings are the so-called Incomplete Block SSOR (IBSSOR) preconditionings.
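The pointwise analogue of the (block) SSOR preconditioner can be sketched directly from its factored form M = (D + L) D^{-1} (D + L)^T for symmetric A = L + D + L^T; the dense solves below are purely for clarity, and the test matrix is an illustrative 1-D Laplacian:

```python
import numpy as np

# Pointwise SSOR preconditioner solve, M z = r, via the factored form
# M = (D + L) D^{-1} (D + L)^T for symmetric A = L + D + L^T.
def ssor_solve(A, r):
    D = np.diag(np.diag(A))
    Lo = np.tril(A, k=-1)
    y = np.linalg.solve(D + Lo, r)         # forward (lower-triangular) sweep
    z = np.linalg.solve(D + Lo.T, D @ y)   # backward (upper-triangular) sweep
    return z

n = 50                                      # 1-D Laplacian as a test matrix
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
r = np.sin(np.arange(n))
z = ssor_solve(A, r)

# check: z really solves M z = r for the factorization above
D = np.diag(np.diag(A)); Lo = np.tril(A, -1)
M = (D + Lo) @ np.linalg.inv(D) @ (D + Lo).T
assert np.allclose(M @ z, r)
```

The incomplete *block* variant in the paper applies the same two-sweep pattern to the block structure induced by the p-version basis, dropping fill outside a prescribed sparsity pattern.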
Choice of Variables and Preconditioning for Time Dependent Problems
NASA Technical Reports Server (NTRS)
Turkel, Eli; Vatsa, Veer N.
2003-01-01
We consider the use of low speed preconditioning for time dependent problems. These are solved using a dual time step approach. We consider the effect of this dual time step on the parameter of the low speed preconditioning. In addition, we compare the use of two sets of variables, conservation and primitive variables, to solve the system. We show the effect of these choices on both the convergence to a steady state and the accuracy of the numerical solutions for low Mach number steady state and time dependent flows.
Fourier analysis of finite element preconditioned collocation schemes
NASA Technical Reports Server (NTRS)
Deville, Michel O.; Mund, Ernest H.
1990-01-01
The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions for the eigenvalues are obtained with the use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.
Preconditioning for multiplexed imaging with spatially coded PSFs.
Horisaki, Ryoichi; Tanida, Jun
2011-06-20
We propose a preconditioning method to improve the convergence of iterative reconstruction algorithms in multiplexed imaging based on convolution-based compressive sensing with spatially coded point spread functions (PSFs). The system matrix is converted to improve the condition number with a preconditioner matrix. The preconditioner matrix is calculated by Tikhonov regularization in the frequency domain. The method was demonstrated with simulations and an experiment involving a range detection system with a grating based on the multiplexed imaging framework. The results of the demonstrations showed improved reconstruction fidelity by using the proposed preconditioning method. PMID:21716495
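A frequency-domain Tikhonov preconditioner of the kind described can be sketched for a circular-convolution model; the PSF, regularization weight, and signal here are hypothetical stand-ins, not the paper's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 256, 1e-2

# Hypothetical spatially coded PSF: a handful of impulses.
h = np.zeros(n)
h[rng.integers(0, n, size=5)] = 1.0
H = np.fft.fft(h)

# Tikhonov-regularized preconditioner, computed in the frequency domain:
P = np.conj(H) / (np.abs(H) ** 2 + lam)

x_true = rng.standard_normal(n)
y = np.fft.ifft(np.fft.fft(x_true) * H).real   # multiplexed measurement
x_hat = np.fft.ifft(np.fft.fft(y) * P).real    # preconditioned estimate

# The preconditioned transfer function |P*H| = |H|^2/(|H|^2 + lam) lies
# in [0, 1), clustering the system spectrum and improving conditioning.
print(np.abs(P * H).max() < 1.0)
```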
Joseph, Ilon
2014-05-27
Jacobian-free Newton-Krylov (JFNK) algorithms are a potentially powerful class of methods for solving the problem of coupling codes that address different physics models. As the communication capability between individual submodules varies, different choices of coupling algorithms are required. The more communication that is available, the more possible it becomes to exploit the simple sparsity pattern of the Jacobian, albeit of a large system. The less communication that is available, the denser the Jacobian matrices become, and new types of preconditioners must be sought to efficiently take large time steps. In general, methods that use constrained or reduced subsystems can offer a compromise in complexity. The specific problem of coupling a fluid plasma code to a kinetic neutrals code is discussed as an example.
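The core JFNK trick, needing only residual evaluations rather than an assembled Jacobian, is the finite-difference approximation of Jacobian-vector products. A small sketch (the residual function is a made-up two-field system for illustration):

```python
import numpy as np

def F(u):
    # Hypothetical coupled residual: a two-field nonlinear system.
    return np.array([u[0]**2 + u[1] - 3.0,
                     u[0] + u[1]**3 - 5.0])

def jacobian_vector(F, u, v, eps=1e-7):
    """Jacobian-free approximation of J(u) @ v via a finite difference;
    this is the only 'Jacobian' a JFNK Krylov solver ever needs."""
    return (F(u + eps * v) - F(u)) / eps

u = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])

# Analytic Jacobian at u, for comparison only:
J = np.array([[2 * u[0], 1.0],
              [1.0, 3 * u[1]**2]])
print(np.allclose(jacobian_vector(F, u, v), J @ v, atol=1e-5))
```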
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 19 2014-07-01 2014-07-01 false Vehicle preconditioning. 86.132-00 Section 86.132-00 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1977 and Later Model Year New...
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Vehicle preconditioning. 86.132-00 Section 86.132-00 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1977 and Later Model Year New...
40 CFR 1066.405 - Vehicle preparation and preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Vehicle preparation and preconditioning. 1066.405 Section 1066.405 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Preparing Vehicles and Running an Exhaust...
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Vehicle preconditioning. 86.132-00 Section 86.132-00 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1977 and Later Model Year New...
33 CFR 183.220 - Preconditioning for tests.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Preconditioning for tests. 183.220 Section 183.220 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) BOATING SAFETY BOATS AND ASSOCIATED EQUIPMENT Flotation Requirements for Outboard Boats Rated for Engines of More Than 2 Horsepower General...
33 CFR 183.320 - Preconditioning for tests.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Preconditioning for tests. 183.320 Section 183.320 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) BOATING SAFETY BOATS AND ASSOCIATED EQUIPMENT Flotation Requirements for Outboard Boats Rated for Engines of 2 Horsepower or Less General §...
40 CFR 1065.516 - Sample system decontamination and preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Sample system decontamination and preconditioning. 1065.516 Section 1065.516 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Performing an Emission Test Over Specified Duty Cycles § 1065.516 Sample...
Combustion of coal/water mixtures with thermal preconditioning
Novack, M.; Roffe, G.; Miller, G.
1987-01-01
Thermal preconditioning is a process in which coal/water mixtures are vaporized to produce coal/steam suspensions, and then superheated to allow the coal to devolatilize, producing suspensions of char particles in hydrocarbon gases and steam. This final product of the process can be injected without atomization and burned directly in a gas turbine combustor. This paper reports on the results of an experimental program in which thermally preconditioned coal/water mixture was successfully burned with a stable flame in a gas turbine combustor test rig. Tests were performed at a mixture flowrate of 300 lb/hr and a combustor pressure of 8 atmospheres. The coal/water mixture was thermally preconditioned and injected into the combustor over a temperature range from 350°F to 600°F, and combustion air was supplied at between 600°F and 725°F. Test durations varied between 10 and 20 minutes. The original mean coal particle size for these tests, prior to preconditioning, was 25 microns. Results of additional tests showed that one-third of the sulfur contained in the solids of a coal/water mixture with 3 percent sulfur was evolved in gaseous form (under mild thermolized conditions), mainly as H₂S with the remainder as light mercaptans.
Improvement in computational fluid dynamics through boundary verification and preconditioning
NASA Astrophysics Data System (ADS)
Folkner, David E.
This thesis provides improvements to computational fluid dynamics accuracy and efficiency through two main methods: a new boundary condition verification procedure and preconditioning techniques. First, a new verification approach that addresses boundary conditions was developed. In order to apply the verification approach to a large range of arbitrary boundary conditions, it was necessary to develop a unifying mathematical formulation. A framework was developed that allows for the application of Dirichlet, Neumann, and extrapolation boundary conditions, or in some cases the equations of motion directly. Verification of boundary condition techniques was performed using exact solutions from canonical fluid dynamic test cases. Second, to reduce computation time and improve accuracy, preconditioning algorithms were applied via artificial dissipation schemes. A new convective upwind and split pressure (CUSP) scheme was devised and shown to be more effective than traditional preconditioning schemes in certain scenarios. The new scheme was compared with traditional schemes for unsteady flows in which both convective and acoustic effects dominate. Both the boundary condition techniques and the preconditioning algorithms were implemented in the context of a "strand grid" solver. While not the focus of this thesis, strand grids provide automatic viscous-quality meshing and are suitable for moving-mesh overset problems.
40 CFR 86.132-96 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Vehicle preconditioning. 86.132-96 Section 86.132-96 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1977 and Later Model Year New...
40 CFR 86.1774-99 - Vehicle preconditioning.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 19 2011-07-01 2011-07-01 false Vehicle preconditioning. 86.1774-99... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) General Provisions for the Voluntary National Low Emission Vehicle Program for Light-Duty Vehicles and...
40 CFR 86.1774-99 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 19 2010-07-01 2010-07-01 false Vehicle preconditioning. 86.1774-99... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) General Provisions for the Voluntary National Low Emission Vehicle Program for Light-Duty Vehicles and...
40 CFR 86.1774-99 - Vehicle preconditioning.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 20 2013-07-01 2013-07-01 false Vehicle preconditioning. 86.1774-99... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) General Provisions for the Voluntary National Low Emission Vehicle Program for Light-Duty Vehicles and...
Subspace scheduling and parallel implementation of non-systolic regular iterative algorithms
Roychowdhury, V.P.; Kailath, T.
1989-01-01
The study of Regular Iterative Algorithms (RIAs) was introduced in a seminal paper by Karp, Miller, and Winograd in 1967. In more recent years, the study of systolic architectures has led to renewed interest in this class of algorithms, and the class of algorithms implementable on systolic arrays (as commonly understood) has been identified as a precise subclass of RIAs; non-systolic RIAs include matrix pivoting algorithms and certain forms of numerically stable two-dimensional filtering algorithms. It has been shown that the hyperplanar scheduling used for systolic algorithms can no longer be used to schedule and implement non-systolic RIAs. Based on the analysis of a so-called computability tree, we generalize the concept of hyperplanar scheduling and determine linear subspaces in the index space of a given RIA such that all variables lying in the same subspace can be scheduled at the same time. This subspace scheduling technique is shown to be asymptotically optimal, and formal procedures are developed for designing processor arrays compatible with our scheduling schemes. Explicit formulas for the schedule of a given variable are determined whenever possible; subspace scheduling is also applied to obtain lower-dimensional processor arrays for systolic algorithms.
Detecting and characterizing coal mine related seismicity in the Western U.S. using subspace methods
NASA Astrophysics Data System (ADS)
Chambers, Derrick J. A.; Koper, Keith D.; Pankow, Kristine L.; McCarter, Michael K.
2015-11-01
We present an approach for subspace detection of small seismic events that includes methods for estimating magnitudes and associating detections from multiple stations into unique events. The process is used to identify mining related seismicity from a surface coal mine and an underground coal mining district, both located in the Western U.S. Using a blasting log and a locally derived seismic catalogue as ground truth, we assess detector performance in terms of verified detections, false positives and failed detections. We are able to correctly identify over 95 per cent of the surface coal mine blasts and about 33 per cent of the events from the underground mining district, while keeping the number of potential false positives relatively low by requiring all detections to occur on two stations. We find that most of the potential false detections for the underground coal district are genuine events missed by the local seismic network, demonstrating the usefulness of regional subspace detectors in augmenting local catalogues. We note a trade-off in detection performance between stations at smaller source-receiver distances, which have increased signal-to-noise ratio, and stations at larger distances, which have greater waveform similarity. We also explore the increased detection capabilities of a single higher dimension subspace detector, compared to multiple lower dimension detectors, in identifying events that can be described as linear combinations of training events. We find, in our data set, that such an advantage can be significant, justifying the use of a subspace detection scheme over conventional correlation methods.
Reference antenna-based subspace tracking for RFI mitigation in radio astronomy
NASA Astrophysics Data System (ADS)
Hellbourg, G.; Chippendale, A. P.; Kesteven, M. J.; Jeffs, B. D.
2014-12-01
Interference mitigation is becoming necessary to make radio astronomy work in bands that are heavily used to support our modern lives. It is becoming particularly difficult to work at frequencies between 1100 MHz and 1300 MHz, which are rapidly filling up with satellite navigation signals. Antenna array radio telescopes present the possibility of applying spatial Radio Frequency Interference (RFI) mitigation. Spatial filtering techniques for RFI mitigation have been introduced to radio astronomy over the last few decades. The success of these techniques relies on accurately estimating the RFI spatial signature (or RFI subspace). A reference antenna steered toward the RFI sources provides a good estimate of the RFI subspace when correlated with an array radio telescope. However, predicting the evolution of this subspace with time is necessary in a multiple-RFI scenario, when only a single RFI source can be monitored at a time with the reference antenna. This paper introduces a subspace tracking approach based on the power method applied to covariance data. The RFI spatial signature estimates provided by the reference antenna are used to initialize the power method to support faster convergence. Practical examples are shown, applying the method to real data from a single 188-element phased array feed designed for the Australian Square Kilometre Array Pathfinder (ASKAP) telescope.
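The power-method subspace tracking idea can be sketched as repeated multiplication of a basis by the covariance matrix with QR re-orthonormalization, warm-started from an initial estimate (here a random stand-in for the reference-antenna estimate; the covariance model is synthetic):

```python
import numpy as np

def power_subspace(R, Q0, iters=50):
    """Track the dominant subspace of covariance R by the power method
    with QR re-orthonormalization, warm-started from Q0."""
    Q = Q0
    for _ in range(iters):
        Q, _ = np.linalg.qr(R @ Q)
    return Q

rng = np.random.default_rng(1)
n, r = 20, 2
# Covariance with two strong "interferer" directions above a noise floor.
S = rng.standard_normal((n, r))
R = S @ S.T * 10.0 + np.eye(n)

# Hypothetical initial subspace estimate (random here):
Q0, _ = np.linalg.qr(rng.standard_normal((n, r)))
Q = power_subspace(R, Q0)

# The tracked subspace should span the interferer directions:
proj = Q @ Q.T
err = np.linalg.norm(S - proj @ S) / np.linalg.norm(S)
print(err)
```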
Jankovic, Marko; Ogawa, Hidemitsu
2004-10-01
Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful in cases of tracking slow changes of correlations in the input data or in updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modification of one of the best-known PSA learning algorithms, the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On the faster time scale, the PSA algorithm is responsible for the "behavior" of all output neurons. On the slower scale, output neurons compete for the fulfillment of their "own interests": basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper we briefly analyze how (and why) the time-oriented hierarchical method can be used to transform any existing neural network PSA method into a PCA method. PMID:15593379
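The SLA that the paper modifies is, in its basic form, Oja's subspace rule W ← W + η(xyᵀ − Wyyᵀ) with y = Wᵀx. A stochastic sketch on synthetic data (dimensions, learning rate, and data model are illustrative; this is the unmodified PSA rule, not the proposed TOHM variant):

```python
import numpy as np

def sla_step(W, x, eta):
    """One step of the Subspace Learning Algorithm (Oja's subspace rule):
    W <- W + eta * (x y^T - W y y^T), with y = W^T x."""
    y = W.T @ x
    return W + eta * (np.outer(x, y) - W @ np.outer(y, y))

rng = np.random.default_rng(2)
d, r = 10, 2
# Data with variance concentrated in the first two coordinates.
C = np.diag([10.0, 8.0] + [0.1] * (d - 2))
L = np.linalg.cholesky(C)
W = rng.standard_normal((d, r)) * 0.1
for _ in range(20000):
    x = L @ rng.standard_normal(d)
    W = sla_step(W, x, eta=1e-3)

# The columns of W should approximately span the top-2 principal subspace:
Q, _ = np.linalg.qr(W)
energy = np.linalg.norm(Q[:2, :]) ** 2 / r  # subspace mass in top-2 coords
print(energy)
```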
Locally indistinguishable subspaces spanned by three-qubit unextendible product bases
Duan Runyao; Ying Mingsheng; Xin Yu
2010-03-15
We study the local distinguishability of general multiqubit states and show that local projective measurements and classical communication are as powerful as the most general local measurements and classical communication. Remarkably, this indicates that the local distinguishability of multiqubit states can be decided efficiently. Another useful consequence is that a set of orthogonal n-qubit states is locally distinguishable only if the summation of their orthogonal Schmidt numbers is less than the total dimension 2{sup n}. Employing these results, we show that any orthonormal basis of a subspace spanned by arbitrary three-qubit orthogonal unextendible product bases (UPB) cannot be exactly distinguishable by local operations and classical communication. This not only reveals another intrinsic property of three-qubit orthogonal UPB but also provides a class of locally indistinguishable subspaces with dimension 4. We also explicitly construct locally indistinguishable subspaces with dimensions 3 and 5, respectively. Similar to the bipartite case, these results on multipartite locally indistinguishable subspaces can be used to estimate the one-shot environment-assisted classical capacity of a class of quantum broadcast channels.
Adaptive Projection Subspace Dimension for the Thick-Restart Lanczos Method
Yamazaki, Ichitaro; Bai, Zhaojun; Simon, Horst; Wang, Lin-Wang; Wu, K.
2008-10-01
The Thick-Restart Lanczos (TRLan) method is an effective method for solving large-scale Hermitian eigenvalue problems. However, its performance strongly depends on the dimension of the projection subspace. In this paper, we propose an objective function to quantify the effectiveness of a chosen subspace dimension, and then introduce an adaptive scheme to dynamically adjust the dimension at each restart. An open-source software package, nu-TRLan, which implements the TRLan method with this adaptive projection subspace dimension is available in the public domain. The numerical results of synthetic eigenvalue problems are presented to demonstrate that nu-TRLan achieves speedups of between 0.9 and 5.1 over the static method using a default subspace dimension. To demonstrate the effectiveness of nu-TRLan in a real application, we apply it to the electronic structure calculations of quantum dots. We show that nu-TRLan can achieve speedups of greater than 1.69 over the state-of-the-art eigensolver for this application, which is based on the Conjugate Gradient method with a powerful preconditioner.
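The adaptive restart logic of nu-TRLan is beyond a short sketch, but the m-step Lanczos projection whose dimension it tunes is easy to illustrate. The sketch below (with full reorthogonalization for clarity, and a synthetic diagonal matrix) shows the largest Ritz value converging to the largest eigenvalue well before m reaches the matrix dimension:

```python
import numpy as np

def lanczos(A, v0, m):
    """m-step Lanczos factorization with full reorthogonalization:
    returns V (orthonormal basis) and tridiagonal T = V^T A V."""
    n = len(v0)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    for j in range(m):
        V[:, j] = v
        w = A @ v
        alpha[j] = v @ w
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # full reorthogonalization
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            v = w / beta[j]
    return V, np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

rng = np.random.default_rng(3)
n = 100
# Synthetic spectrum with a well-separated top eigenvalue at 200.
A = np.diag(np.concatenate([np.arange(1.0, n), [200.0]]))
V, T = lanczos(A, rng.standard_normal(n), m=20)
ritz = np.linalg.eigvalsh(T)
print(abs(ritz[-1] - 200.0) < 1e-8)
```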
The Role of Preconditioning on Thermosphere Mass Density (Invited)
NASA Astrophysics Data System (ADS)
Thayer, J. P.; Liu, X.
2013-12-01
The ability to determine the amount of change expected in thermosphere mass density at a specific altitude in response to solar and geomagnetic disturbances is of major importance in predicting how low-earth orbiting satellites will be affected by atmospheric drag. It is also of importance in understanding how energy deposited in the thermosphere results in a certain change in thermosphere mass density. The response at a particular altitude will depend on the type of energy input, the altitude distribution of energy input, the internal processing of the energy input, and the state properties of the thermosphere gas (i.e. density, temperature, composition, winds) prior to the energy input - that is, the preconditioned state. As an example of preconditioning, the mass density at an altitude a few scale heights above the altitude where a fixed amount of energy is deposited will have a greater relative change when the initial state of the thermosphere exospheric temperature is cold. Thus, the preconditioned state of the thermosphere must be described properly in order to adequately predict the mass density response to a given energy input. Furthermore, the preconditioned composition structure will be important and can also cause the level of response in mass density to vary with altitude. The solar EUV flux is the major contributor in establishing the preconditioned state of thermosphere and the recent solar cycle with its extreme solar minimum EUV flux has resulted in an unprecedented cold and contracted thermosphere. High-resolution mass density observations inferred from accelerometer measurements on the CHAMP and GRACE satellites are used to demonstrate the role of preconditioning in mass density response to geomagnetic disturbances. Temperature and composition conditions are evaluated to explain observed changes in mass density. In particular, the dynamic action of the oxygen to helium transition region in both latitude and altitude is used to explain complex
Thermal preconditioning of coal/water mixtures. Final report
Roffe, G.; Miller, G.
1984-10-01
Thermal preconditioning of coal/water mixtures is a process proposed for use with stationary gas turbine engines in which the CWM is heated before delivery to the combustor in order to accomplish the water vaporization and coal pyrolysis/devolatilization steps prior to injection. The process offers a number of potential advantages such as the ability to start the engine without the use of an auxiliary fuel system, the elimination of atomizing nozzles, increased flame stability for proper turndown, compatibility with NOx-control techniques such as rich-burn/quick-quench combustors, and potentially faster char burnout. The objective of the program was to obtain information which will allow the feasibility of thermal preconditioning to be evaluated. The economics of the process and its impact on a combined cycle system have been addressed. The slurry heating and boiling processes have been studied, and the relationships between fuel properties, temperature and residence time in the processing apparatus and the evolution of combustible gases have been measured. A special apparatus was designed and constructed for the experimental portion of the program. Results indicate that at temperatures above 900°F significant devolatilization can be accomplished in residence times on the order of one second. A preliminary economic and performance analysis has been completed for the thermal preconditioning process. Four gas turbine power plant concepts incorporating thermal preconditioning of CWS have been investigated. These concepts differ from one another in the source of heat used for the preconditioning process. Heat paths have been defined and the relationships between the efficiencies and operating conditions of the various components on heat rate and plant output have been determined. The analysis indicates that increases in heat rate of less than 5% can be expected. 4 refs., 10 figs., 2 tabs.
Combustion of coal/water mixtures with thermal preconditioning
Novack, M.; Roffe, G.; Miller, G.
1987-07-01
Thermal preconditioning is a process in which coal/water mixtures are vaporized to produce coal/steam suspensions, and then superheated to allow the coal to devolatilize, producing suspensions of char particles in hydrocarbon gases and steam. This final product of the process can be injected without atomization and burned directly in a gas turbine combustor. This paper reports on the results of an experimental program in which thermally preconditioned coal/water mixture was successfully burned with a stable flame in a gas turbine combustor test rig. Tests were performed at a mixture flowrate of 300 lb/hr and a combustor pressure of 8 atm. The coal/water mixture was thermally preconditioned and injected into the combustor over a temperature range from 340°F to 600°F, and combustion air was supplied at between 600°F and 725°F. Test durations varied between 10 and 20 min. Major results of the combustion testing were that: A stable flame was maintained over a wide equivalence ratio range, between phi = 2.2 (rich) and 0.2 (lean); and combustion efficiency of over 99 percent was achieved when the mixture was preconditioned to 600°F and the combustion air preheated to 725°F. Measurements of ash particulates, captured in the exhaust sampling probe located 20 in. from the injector face, show typical sizes collected to be about 1 µm, with agglomerates of these particulates not more than 8 µm. The original mean coal particle size for these tests, prior to preconditioning, was 25 µm. Results of additional tests showed that one third of the sulfur contained in the solids of a coal/water mixture with 3 percent sulfur was evolved in gaseous form (under mild thermolized conditions), mainly as H₂S with the remainder as light mercaptans.
PSPIKE: A Parallel Hybrid Sparse Linear System Solver
NASA Astrophysics Data System (ADS)
Manguoglu, Murat; Sameh, Ahmed H.; Schenk, Olaf
The availability of large-scale computing platforms comprising tens of thousands of multicore processors motivates the need for the next generation of highly scalable sparse linear system solvers. These solvers must optimize parallel performance, processor (serial) performance, and memory requirements, while being robust across broad classes of applications and systems. In this paper, we present a new parallel solver that combines the desirable characteristics of direct methods (robustness) and effective iterative solvers (low computational cost), while alleviating their drawbacks (memory requirements, lack of robustness). Our proposed hybrid solver is based on the general sparse solver PARDISO and the “Spike” family of hybrid solvers. The resulting algorithm, called PSPIKE, is as robust as direct solvers, more reliable than classical preconditioned Krylov subspace methods, and much more scalable than direct sparse solvers. We support our performance and parallel scalability claims using detailed experimental studies and comparisons with direct solvers, as well as classical preconditioned Krylov methods.
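PSPIKE couples PARDISO factorizations with the Spike algorithm; as a toy analogue of the hybrid direct/iterative idea, one can factor the diagonal blocks of a matrix directly and use them as a preconditioner for an outer Krylov iteration. The sketch below only checks the resulting spectrum (the 1D Laplacian and block size are illustrative):

```python
import numpy as np

n, nb = 64, 8
# 1D Laplacian: a simple SPD test matrix.
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Block-Jacobi preconditioner: each diagonal block handled "directly".
M = np.zeros_like(A)
for i in range(0, n, nb):
    M[i:i + nb, i:i + nb] = A[i:i + nb, i:i + nb]
Minv_A = np.linalg.solve(M, A)

# The preconditioned spectrum is better clustered than A's, so an outer
# Krylov method (CG/GMRES) would converge in fewer iterations.
eigs = np.sort(np.linalg.eigvals(Minv_A).real)
cond_pre = eigs[-1] / eigs[0]
cond_raw = np.linalg.cond(A)
print(cond_pre < cond_raw)
```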
On the role of subspace zeros in retrospective cost adaptive control of non-square plants
NASA Astrophysics Data System (ADS)
Dogan Sumer, E.; Bernstein, Dennis S.
2015-02-01
We consider adaptive control of non-square plants, that is, plants that have an unequal number of inputs and outputs. In particular, we focus on retrospective cost adaptive control (RCAC), which is a direct, discrete-time adaptive control algorithm that is applicable to stabilisation, command following, disturbance rejection, and model reference control problems. Previous studies on RCAC have focused on control of square plants. In the square case, RCAC requires knowledge of the first non-zero Markov parameter and the non-minimum-phase (NMP) transmission zeros of the plant, if any. No additional information about the plant or the exogenous signals need be known. The goal of the present paper is to consider RCAC for non-square plants. Unlike the square case, we show that the assumption that the non-square plant is minimum phase does not guarantee closed-loop stability and signal boundedness. The main purpose of this paper is to establish the existence of time-invariant input and output subspaces corresponding to the adaptive controller. In particular, we show that RCAC implicitly squares down non-square plants through pre-/post-compensation of the non-square plant with a constant matrix. We show that, for wide plants, the control input generated by RCAC lies in a time-invariant 'input subspace', which is equivalent to pre-compensating the plant with a constant matrix. On the other hand, for tall plants, we show that the controller update is driven by the output of the plant post-compensated with a constant matrix. Accordingly, in either case, signal boundedness properties of the closed-loop system are determined by the transmission zeros of the squared system, which we call the 'subspace zeros'. To deal with NMP subspace zeros, we introduce a robustness modification, which prevents RCAC from cancelling the NMP subspace zeros.
Brunton, Steven L.; Brunton, Bingni W.; Proctor, Joshua L.; Kutz, J. Nathan
2016-01-01
In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control. PMID:26919740
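The DMD computation referenced here (dominant terms of the Koopman expansion from linear measurements) can be sketched in a few lines of exact DMD; the linear system generating the snapshots is a made-up decaying rotation, chosen so the recovered eigenvalues can be checked against the truth:

```python
import numpy as np

# Snapshots of a known linear system x_{k+1} = A x_k.
theta = 0.3
A = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])  # decaying rotation
X = np.zeros((2, 50))
X[:, 0] = [1.0, 0.0]
for k in range(49):
    X[:, k + 1] = A @ X[:, k]

# Exact DMD: reduced operator from the SVD of the first snapshot matrix.
X1, X2 = X[:, :-1], X[:, 1:]
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
Atilde = U.T @ X2 @ Vt.T @ np.diag(1.0 / s)
dmd_eigs = np.linalg.eigvals(Atilde)

# For exactly linear data, DMD recovers the true eigenvalues of A.
true_eigs = np.linalg.eigvals(A)
print(np.allclose(sorted(dmd_eigs, key=np.angle),
                  sorted(true_eigs, key=np.angle), atol=1e-6))
```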
Improved Detection of Local Earthquakes in the Vienna Basin (Austria), using Subspace Detectors
NASA Astrophysics Data System (ADS)
Apoloner, Maria-Theresia; Caffagni, Enrico; Bokelmann, Götz
2016-04-01
The Vienna Basin in Eastern Austria is densely populated and highly developed; it is also a region of low to moderate seismicity, yet the seismological network coverage is relatively sparse. This motivates improving our earthquake detection capability by testing new methods and enlarging the existing local earthquake catalogue, which contributes to imaging tectonic fault zones and to a better understanding of seismic hazard, including through improved earthquake statistics (b-value, magnitude of completeness). Detection of low-magnitude earthquakes, or of events whose highest amplitudes only slightly exceed the signal-to-noise ratio (SNR), may be possible using standard methods like the short-term over long-term average (STA/LTA). However, due to sparse network coverage and high background noise, such a technique may not detect all potentially recoverable events. Yet earthquakes originating from the same source region, relatively close to each other, should be characterized by similar seismic waveforms at a given station. This waveform similarity can be exploited by specific techniques such as correlation-template matching (also known as matched filtering) or subspace detection methods (based on subspace theory). Matching techniques require a reference or template event, usually characterized by high waveform coherence across the array receivers and high SNR, which is cross-correlated with the continuous data. Subspace detection methods, in contrast, overcome in principle the necessity of defining single template events and instead use a subspace extracted from multiple events. This approach should in theory be more robust in detecting signals that exhibit strong variability (e.g. because of source or magnitude). In this study we scan the continuous data recorded in the Vienna Basin with a subspace detector to identify additional events. This will allow us to estimate the increase of the seismicity rate in the local earthquake catalogue
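A subspace detector of the kind described can be sketched as follows: orthonormalize a few template events into a basis U, then slide a window over the continuous data and compute the fraction of window energy captured by the subspace. Waveforms, noise level, and event location are all synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
L = 100
# Two hypothetical "training event" waveforms from the same source region:
t = np.arange(L)
e1 = np.sin(2 * np.pi * t / 20) * np.exp(-t / 40)
e2 = np.sin(2 * np.pi * t / 20 + 0.3) * np.exp(-t / 35)
U, _, _ = np.linalg.svd(np.column_stack([e1, e2]), full_matrices=False)

# Continuous data: noise with a mixture of the templates buried at sample 400.
data = 0.1 * rng.standard_normal(1000)
data[400:400 + L] += 0.7 * e1 + 0.4 * e2

def subspace_stat(data, U):
    """Sliding-window detection statistic: fraction of window energy
    captured by the template subspace, in [0, 1]."""
    L = U.shape[0]
    c = np.empty(len(data) - L + 1)
    for i in range(len(c)):
        w = data[i:i + L]
        c[i] = np.sum((U.T @ w) ** 2) / np.sum(w ** 2)
    return c

c = subspace_stat(data, U)
print(int(np.argmax(c)))  # detector peaks at the embedded event
```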
On the convergence of (ensemble) Kalman filters and smoothers onto the unstable subspace
NASA Astrophysics Data System (ADS)
Bocquet, Marc
2016-04-01
The characteristics of the model dynamics are critical to the performance of (ensemble) Kalman filters and smoothers. In particular, as emphasised in the seminal work of Anna Trevisan and co-authors, the error covariance matrix is asymptotically supported by the unstable and neutral subspace only, i.e. it is spanned by the backward Lyapunov vectors with non-negative exponents. This behaviour is at the heart of algorithms known as Assimilation in the Unstable Subspace, although its formal proof was still missing. This convergence property, its analytic proof, and its meaning and implications for the design of efficient reduced-order data assimilation algorithms are the topics of this talk. The structure of the talk is as follows. Firstly, we provide the analytic proof of the convergence onto the unstable and neutral subspace in the case of linear dynamics and a linear observation operator, along with rigorous results giving the rate of such convergence. The derivation is based on an expression that explicitly relates the covariance matrix at an arbitrary time to the initial error covariance. Numerical results are also shown to illustrate and support the mathematical claims. Secondly, we discuss how this neat picture is modified when the dynamics become nonlinear and chaotic and it is no longer possible to derive analytic formulas. In this case an ensemble Kalman filter (EnKF) is used, and the connection between the convergence properties on the unstable-neutral subspace and EnKF covariance inflation is discussed. We also explain why, in the perfect-model setting, the iterative ensemble Kalman smoother (IEnKS), as an efficient filtering and smoothing technique, has an error covariance matrix whose projection is more focused on the unstable-neutral subspace than that of the EnKF. This contribution results from collaborations with A. Carrassi, K. S. Gurumoorthy, A. Apte, C. Grudzien, and C. K. R. T. Jones.
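The mechanism behind this convergence can be illustrated with a deliberately minimal toy (no assimilation, just free linear dynamics, not the proof in the talk): under repeated application of dynamics with one unstable and one stable direction, a full-rank covariance matrix collapses, after normalization, onto the unstable direction.

```python
import numpy as np

M = np.diag([2.0, 0.5])                  # one unstable, one stable direction
P = np.array([[1.0, 0.3], [0.3, 1.0]])   # full-rank initial error covariance
for _ in range(20):
    P = M @ P @ M.T                      # forecast step: P <- M P M^T
    P /= np.trace(P)                     # normalize to watch the *shape* of P
stable_fraction = P[1, 1]                # variance left in the stable direction
```

After a few iterations essentially all normalized variance sits in the unstable direction, which is the one-dimensional analogue of the covariance being supported by the backward Lyapunov vectors with non-negative exponents.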
Zhang, Ying; Liu, Xiangrong; Yan, Feng; Min, Lianqiu; Ji, Xunming; Luo, Yumin
2012-01-01
Three cycles of remote ischemic pre-conditioning, induced by temporarily occluding the bilateral femoral arteries (10 minutes) prior to 10 minutes of reperfusion, were given once a day for 3 days before the animals received middle cerebral artery occlusion and reperfusion surgery. The results showed that brain infarct volume was significantly reduced after remote ischemic pre-conditioning. Scores in the forelimb placing test and the postural reflex test were significantly lower in rats having undergone remote ischemic pre-conditioning than in those that did not receive it; thus, neurological function was better in the pre-conditioned rats. These results indicate that remote ischemic pre-conditioning of the rat hindlimb exerts protective effects against ischemia-reperfusion injury. PMID:25745448
Shape reanalysis and sensitivities utilizing preconditioned iterative boundary solvers
NASA Technical Reports Server (NTRS)
Guru Prasad, K.; Kane, J. H.
1992-01-01
The computational advantages associated with the utilization of preconditioned iterative equation solvers are quantified for the reanalysis of perturbed shapes using continuum structural boundary element analysis (BEA). Both single- and multi-zone three-dimensional problems are examined. Significant reductions in computer time are obtained by making use of previously computed solution vectors and preconditioners in subsequent analyses. The effectiveness of this technique is demonstrated for the computation of shape response sensitivities required in shape optimization. Computer times and accuracies achieved using the preconditioned iterative solvers are compared with those obtained via direct solvers and implicit differentiation of the boundary integral equations. It is concluded that this approach, employing preconditioned iterative equation solvers in reanalysis and sensitivity analysis, can be competitive with, if not superior to, approaches involving direct solvers.
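The core idea of reusing previously computed solution vectors in a reanalysis can be sketched generically: warm-start the iterative solver for a slightly perturbed system from the previous solution and count iterations. The sketch below uses plain conjugate gradients on a random SPD system (the paper's BEA matrices are dense and generally nonsymmetric, so this is only an analogy; all names and values are invented).

```python
import numpy as np

def cg(A, b, x0, tol=1e-10, maxit=500):
    """Plain conjugate gradient; returns the solution and iteration count."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(maxit):
        if np.sqrt(rs) < tol:
            return x, k
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxit

rng = np.random.default_rng(1)
n = 200
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)              # SPD stand-in for a system matrix
b = rng.standard_normal(n)
x_base, it_cold = cg(A, b, np.zeros(n))  # original analysis

A_pert = A + 0.01 * np.eye(n)            # small "shape perturbation"
_, it_restart = cg(A_pert, b, np.zeros(n))   # reanalysis from scratch
_, it_warm = cg(A_pert, b, x_base)           # reuse the previous solution
```

Because the perturbed system is close to the original, the warm-started residual is small from the outset and convergence takes noticeably fewer iterations, mirroring the savings the abstract reports.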
Investigation of Reperfusion Injury and Ischemic Preconditioning in Microsurgery
Wang, Wei Zhong
2008-01-01
Ischemia/reperfusion (I/R) is inevitable in many vascular and musculoskeletal traumas, diseases, free tissue transfers, and time-consuming reconstructive surgeries in the extremities. Salvage of a prolonged ischemic extremity or flap still remains a challenge for the microvascular surgeon. One of the common complications after microsurgery is I/R-induced tissue death, or I/R injury. Twenty years after its discovery, ischemic preconditioning (IPC) has emerged as a powerful method for attenuating I/R injury in a variety of organs and tissues. However, its therapeutic expectations still need to be fulfilled. In this article, the author reviews some important experimental evidence of I/R injury, as well as of preconditioning-induced protection, in fields relevant to microsurgery. PMID:18946882
Finding Chemical Reaction Paths with a Multilevel Preconditioning Protocol
Kale, Seyit; Sode, Olaseni; Weare, Jonathan; Dinner, Aaron R.
2015-01-01
Finding transition paths for chemical reactions can be computationally costly owing to the level of quantum-chemical theory needed for accuracy. Here, we show that a multilevel preconditioning scheme that was recently introduced (Tempkin et al. J. Chem. Phys. 2014, 140, 184114) can be used to accelerate quantum-chemical string calculations. We demonstrate the method by finding minimum-energy paths for two well-characterized reactions: tautomerization of malonaldehyde and the Claisen rearrangement of chorismate to prephenate. For these reactions, we show that preconditioning density functional theory (DFT) with a semiempirical method reduces the computational cost for reaching a converged path that is an optimum under DFT severalfold. The approach also shows promise for free energy calculations when thermal noise can be controlled. PMID:25516726
Reconstructing Clusters for Preconditioned Short-term Load Forecasting
NASA Astrophysics Data System (ADS)
Itagaki, Tadahiro; Mori, Hiroyuki
This paper presents a new preconditioned method for short-term load forecasting that focuses on obtaining more accurate predicted values. In recent years, the deregulated and competitive power market has increased the degree of uncertainty. As a result, more sophisticated short-term load forecasting techniques are required to deal with more complicated load behavior. To alleviate this complexity, this paper presents a new preconditioned model in which clustering results are reconstructed to equalize the number of learning data in each cluster after clustering with the Kohonen-based neural network; this enhances the short-term load forecasting model at each reconstructed cluster. The proposed method is successfully applied to real data for one-step-ahead daily maximum load forecasting.
Fan, Ran; Yu, Tao; Lin, Jia-Li; Ren, Guang-Dong; Li, Yi; Liao, Xiao-Xing; Huang, Zi-Tong; Jiang, Chong-Hui
2016-10-01
In this study, we investigated the effects of remote ischemic preconditioning on post-resuscitation cerebral function in a rat model of cardiac arrest and resuscitation. The animals were randomized into six groups: 1) sham operation, 2) lateral ventricle injection and sham operation, 3) cardiac arrest induced by ventricular fibrillation, 4) lateral ventricle injection and cardiac arrest, 5) remote ischemic preconditioning initiated 90 min before induction of ventricular fibrillation, and 6) lateral ventricle injection and remote ischemic preconditioning before cardiac arrest. The lateral ventricle injections consisted of neuroglobin antisense oligodeoxynucleotides, administered 24 h before sham operation, cardiac arrest, or remote ischemic preconditioning. Remote ischemic preconditioning was induced by four cycles of 5 min of limb ischemia followed by 5 min of reperfusion. Ventricular fibrillation was induced by electrical current and lasted for 6 min. Defibrillation was attempted after 6 min of cardiopulmonary resuscitation. The animals were then monitored for 2 h and observed for an additional maximum of 70 h. Post-resuscitation cerebral function was evaluated by the neurologic deficit score at 72 h after return of spontaneous circulation. Results showed that remote ischemic preconditioning increased neurologic deficit scores. To investigate the neuroprotective effects of remote ischemic preconditioning, we observed neuronal injury at 48 and 72 h after return of spontaneous circulation and found that remote ischemic preconditioning significantly decreased the occurrence of neuronal apoptosis and necrosis. To further probe the mechanism of the neuroprotection induced by remote ischemic preconditioning, we found that expression of neuroglobin at 24 h after return of spontaneous circulation was enhanced. Furthermore, administration of neuroglobin antisense oligodeoxynucleotides before induction of remote ischemic preconditioning showed that the level of neuroglobin was decreased then partly abrogated
Parallel Domain Decomposition Preconditioning for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai; Kutler, Paul (Technical Monitor)
1998-01-01
This viewgraph presentation gives an overview of the parallel domain decomposition preconditioning for computational fluid dynamics. Details are given on some difficult fluid flow problems, stabilized spatial discretizations, and Newton's method for solving the discretized flow equations. Schur complement domain decomposition is described through basic formulation, simplifying strategies (including iterative subdomain and Schur complement solves, matrix element dropping, localized Schur complement computation, and supersparse computations), and performance evaluation.
Object-oriented design of preconditioned iterative methods
Bruaset, A.M.
1994-12-31
In this talk the author discusses how object-oriented programming techniques can be used to develop a flexible software package for preconditioned iterative methods. The ideas described have been used to implement the linear algebra part of Diffpack, which is a collection of C++ class libraries that provides high-level tools for the solution of partial differential equations. In particular, this software package is aimed at rapid development of PDE-based numerical simulators, primarily using finite element methods.
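The design idea described here (Diffpack itself is a C++ library) can be sketched in a few lines: an abstract preconditioner interface that a generic preconditioned-CG solver is written against, so that new preconditioners plug in without touching the solver. The class and function names below are invented for illustration, not Diffpack's API.

```python
import numpy as np

class Preconditioner:
    """Abstract interface: subclasses implement apply(r) ~ M^{-1} r."""
    def apply(self, r):
        raise NotImplementedError

class IdentityPC(Preconditioner):
    def apply(self, r):
        return r

class JacobiPC(Preconditioner):
    def __init__(self, A):
        self.dinv = 1.0 / np.diag(A)
    def apply(self, r):
        return self.dinv * r

def pcg(A, b, pc, tol=1e-10, maxit=1000):
    """Preconditioned CG written only against the Preconditioner interface."""
    x = np.zeros_like(b)
    r = b.copy()
    z = pc.apply(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        if np.linalg.norm(r) < tol:
            return x, k
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = pc.apply(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# Badly scaled SPD system: Jacobi preconditioning should help markedly.
n = 100
A = np.diag(np.linspace(1.0, 1e4, n)) + 0.1 * np.ones((n, n))
b = np.ones(n)
x_plain, it_plain = pcg(A, b, IdentityPC())
x_jac, it_jac = pcg(A, b, JacobiPC(A))
```

The solver never sees the concrete preconditioner type, which is the object-oriented flexibility the talk describes: swapping Jacobi for, say, an incomplete factorization changes one constructor call.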
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction.
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution, via the proximity operators that define two projection operators, naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
The evolving concept of physiological ischemia training vs. ischemia preconditioning.
Ni, Jun; Lu, Hongjian; Lu, Xiao; Jiang, Minghui; Peng, Qingyun; Ren, Caili; Xiang, Jie; Mei, Chengyao; Li, Jianan
2015-11-01
Ischemic heart diseases are the leading cause of death, with increasing numbers of patients worldwide. Despite advances in revascularization techniques, angiogenic therapies remain highly attractive. Physiological ischemia training, first proposed in our laboratory, refers to reversible ischemia training of normal skeletal muscles, using a tourniquet or isometric contraction to cause physiological ischemia for about 4 weeks, in order to trigger molecular and cellular mechanisms that promote angiogenesis and the formation of collateral vessels and protect remote ischemic areas. Physiological ischemia training therapy augments angiogenesis in the ischemic myocardium by inducing differential expression of proteins involved in energy metabolism, cell migration, protein folding, and generation. It upregulates the expression of vascular endothelial growth factor and induces angiogenesis, protecting the myocardium when infarction occurs by increasing circulating endothelial progenitor cells and enhancing their migration, in accordance with physical training in heart disease rehabilitation. These findings may lead to a new approach of therapeutic angiogenesis for patients with ischemic heart diseases. On the basis of the promising results in animal studies, studies were also conducted in patients with coronary artery disease without any adverse effect in vivo, indicating that physiological ischemia training therapy is a safe, effective and non-invasive angiogenic approach for cardiovascular rehabilitation. Preconditioning is considered to be the most protective intervention against myocardial ischemia-reperfusion injury to date. Physiological ischemia training is different from preconditioning. This review summarizes the preclinical and clinical data on physiological ischemia training and its differences from preconditioning. PMID:26664354
Preconditioning the bidomain model with almost linear complexity
NASA Astrophysics Data System (ADS)
Pierre, Charles
2012-01-01
The bidomain model is widely used in electro-cardiology to simulate the spreading of excitation in the myocardium and electrocardiograms. It consists of a system of two parabolic reaction-diffusion equations coupled with an ODE system. Its discretisation yields an ill-conditioned system matrix to be inverted at each time step: simulations based on the bidomain model are therefore associated with high computational costs. In this paper we propose a preconditioning for the bidomain model, either for an isolated heart or in an extended framework including coupling with the surrounding tissues (the torso). The preconditioning is based on a formulation of the discrete problem that is shown to be symmetric positive semi-definite. A block LU decomposition of the system, together with a heuristic approximation (referred to as the monodomain approximation), are the key ingredients of the preconditioning definition. Numerical results are provided for two test cases: a 2D test case on a realistic slice of the thorax based on a segmented heart medical image geometry, and a 3D test case involving a small cubic slab of tissue with orthotropic anisotropy. The analysis of the resulting computational cost (both in terms of CPU time and of iteration number) shows an almost linear complexity with the problem size, i.e. of type n log^α(n) for some constant α, which is the optimal complexity for such problems.
Preconditioning for edge-preserving image super resolution.
Pelletier, Stéphane; Cooperstock, Jeremy R
2012-01-01
We propose a simple preconditioning method for accelerating the solution of edge-preserving image super-resolution (SR) problems in which a linear shift-invariant point spread function is employed. Our technique involves reordering the high-resolution (HR) pixels in a similar manner to what is done in preconditioning methods for quadratic SR formulations. However, due to the edge preserving requirements, the Hessian matrix of the cost function varies during the minimization process. We develop an efficient update scheme for the preconditioner in order to cope with this situation. Unlike some other acceleration strategies that round the displacement values between the low-resolution (LR) images on the HR grid, the proposed method does not sacrifice the optimality of the observation model. In addition, we describe a technique for preconditioning SR problems involving rational magnification factors. The use of such factors is motivated in part by the fact that, under certain circumstances, optimal SR zooms are nonintegers. We show that, by reordering the pixels of the LR images, the structure of the problem to solve is modified in such a way that preconditioners based on circulant operators can be used. PMID:21693419
Universal holonomic quantum gates in decoherence-free subspace on superconducting circuits
NASA Astrophysics Data System (ADS)
Xue, Zheng-Yuan; Zhou, Jian; Wang, Z. D.
2015-08-01
To implement a set of universal quantum logic gates based on non-Abelian geometric phases, it is conventional wisdom that quantum systems beyond two levels are required, which is extremely difficult to fulfill for superconducting qubits and appears to be a main reason why only single-qubit gates were implemented in a recent experiment [A. A. Abdumalikov, Jr. et al., Nature (London) 496, 482 (2013), 10.1038/nature12010]. Here we propose to realize nonadiabatic holonomic quantum computation in decoherence-free subspace on circuit QED, where one can use only the two levels in transmon qubits, a usual interaction, and a minimal resource for the decoherence-free subspace encoding. In particular, our scheme not only overcomes the difficulties encountered in previous studies but also can still achieve considerably large effective coupling strength, such that high-fidelity quantum gates can be achieved. Therefore, the present scheme makes realizing robust holonomic quantum computation with superconducting circuits very promising.
Cumulant-Based Coherent Signal Subspace Method for Bearing and Range Estimation
NASA Astrophysics Data System (ADS)
Saidi, Zineb; Bourennane, Salah
2006-12-01
A new method for simultaneous range and bearing estimation of buried objects in the presence of an unknown Gaussian noise is proposed. This method uses the MUSIC algorithm with the noise subspace estimated from the slice fourth-order cumulant matrix of the received data. The higher-order statistics aim at the removal of the additive unknown Gaussian noise. The bilinear focusing operator is used to decorrelate the received signals and to estimate the coherent signal subspace. A new source steering vector is proposed, including the acoustic scattering model at each sensor. Range and bearing of the objects at each sensor are expressed as functions of those at the first sensor. This improves object localization anywhere, whether in the near-field or the far-field zone of the sensor array. Finally, the performance of the proposed method is validated on data recorded during experiments in a water tank.
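The noise-subspace idea at the core of MUSIC can be shown in a compact far-field sketch. For brevity this toy estimates the noise subspace from the ordinary sample covariance rather than the fourth-order cumulant matrix the abstract uses, and it does bearing only, on a uniform linear array; all parameter values are invented.

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """MUSIC pseudo-spectrum for a uniform linear array.
    X: (sensors, snapshots) complex data; d: element spacing in wavelengths."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance matrix
    _, vecs = np.linalg.eigh(R)              # eigenvalues ascending
    En = vecs[:, : m - n_sources]            # noise-subspace eigenvectors
    k = np.arange(m)
    spec = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d * k * np.sin(th))   # steering vector
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)

# One source at 20 degrees, 8 sensors, mild complex white noise.
rng = np.random.default_rng(2)
m, snapshots, true_angle = 8, 400, 20.0
k = np.arange(m)
a = np.exp(-2j * np.pi * 0.5 * k * np.sin(np.deg2rad(true_angle)))
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
X = np.outer(a, s) + 0.1 * (rng.standard_normal((m, snapshots))
                            + 1j * rng.standard_normal((m, snapshots)))
grid = np.arange(-90.0, 90.5, 0.5)
est = grid[int(np.argmax(music_spectrum(X, 1, grid)))]
```

Replacing `R` with a fourth-order cumulant slice is what suppresses Gaussian noise of unknown covariance, since Gaussian processes have vanishing cumulants beyond second order.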
NASA Astrophysics Data System (ADS)
Moroz, Alexander
2016-03-01
A discrete parity Z_2 symmetry of a two-parameter extension of the quantum Rabi model, which smoothly interpolates between the latter and the Jaynes-Cummings model, and of the two-photon and the two-mode quantum Rabi models, enables their diagonalization in the spin subspace. A more general statement is that the respective sets of 2×2 Hermitian operators of the Fulton-Gouterman type and those diagonal in the spin subspace are unitarily equivalent. The diagonalized representation makes it transparent that any question about integrability and solvability can be addressed only at the level of ordinary differential operators of Dunkl type. Braak's definition of integrability is shown (i) to contradict earlier numerical studies and (ii) to imply that any physically reasonable differential operator of Fulton-Gouterman type is integrable.
Hyperspectral Image Kernel Sparse Subspace Clustering with Spatial Max Pooling Operation
NASA Astrophysics Data System (ADS)
Zhang, Hongyan; Zhai, Han; Liao, Wenzhi; Cao, Liqin; Zhang, Liangpei; Pižurica, Aleksandra
2016-06-01
In this paper, we present a kernel sparse subspace clustering with spatial max pooling operation (KSSC-SMP) algorithm for hyperspectral remote sensing imagery. Firstly, the feature points are mapped from the original space into a higher dimensional space with a kernel strategy. In particular, the sparse subspace clustering (SSC) model is extended to nonlinear manifolds, which can better explore the complex nonlinear structure of hyperspectral images (HSIs) and obtain a much more accurate representation coefficient matrix. Secondly, through the spatial max pooling operation, the spatial contextual information is integrated to obtain a smoother clustering result. Through experiments, it is verified that the KSSC-SMP algorithm is a competitive clustering method for HSIs and outperforms the state-of-the-art clustering methods.
Entanglement properties of positive operators with ranges in completely entangled subspaces
NASA Astrophysics Data System (ADS)
Sengupta, R.; Arvind, Singh, Ajit Iqbal
2014-12-01
We prove that the projection on a completely entangled subspace S of maximum dimension obtained by Parthasarathy [K. R. Parthasarathy, Proc. Indian Acad. Sci. Math. Sci. 114, 365 (2004), 10.1007/BF02829441] in a multipartite quantum system is not positive under partial transpose. We next show that a large number of positive operators with a range in S also have the same property. In this process we construct an orthonormal basis for S and provide a theorem to link the constructions of completely entangled subspaces due to Parthasarathy (as cited above), Bhat [B. V. R. Bhat, Int. J. Quantum Inf. 4, 325 (2006), 10.1142/S0219749906001797], and Johnston [N. Johnston, Phys. Rev. A 87, 064302 (2013), 10.1103/PhysRevA.87.064302].
NASA Astrophysics Data System (ADS)
Hsu, Wei-Ting; Loh, Chin-Hsiung; Chao, Shu-Hsien
2015-03-01
The stochastic subspace identification (SSI) method has been proven to be an efficient algorithm for the identification of linear time-invariant systems using multivariate measurements. Generally, the modal parameters estimated through SSI may be afflicted with statistical uncertainty, e.g., ill-defined measurement noise, non-stationary excitation, a finite number of data samples, etc. Therefore, the identified results are subject to variance errors. Accordingly, the concept of the stabilization diagram can help users identify the correct model, i.e., by removing the spurious modes. Modal parameters are estimated at successive model orders, where the physical modes of the system are extracted and separated from the spurious modes. In addition, an uncertainty computation scheme was derived for the calculation of uncertainty bounds on modal parameters at a given model order. The uncertainty bounds on damping ratios are particularly interesting, as damping ratios are difficult to estimate accurately. In this paper, an automated stochastic subspace identification algorithm is addressed. First, the identification of modal parameters through covariance-driven stochastic subspace identification from output-only measurements is discussed. A systematic investigation of the criteria for the stabilization diagram is presented. Secondly, an automated algorithm for post-processing the stabilization diagram is demonstrated. Finally, the computation of uncertainty bounds for each mode at every model order in the stabilization diagram is utilized to determine the system's natural frequencies and damping ratios. A demonstration of this study on the system identification of a three-span steel bridge under operational conditions is presented. It is shown that the proposed operating procedure for automated covariance-driven stochastic subspace identification can enhance robustness and reliability in structural health monitoring.
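The covariance-driven SSI pipeline (output covariances → block Hankel matrix → SVD → shift-invariant extraction of the state matrix → modal parameters) can be sketched on a noise-free single mode, using analytically computed covariances in place of measured ones. This is a minimal illustration of the algorithm family the abstract discusses, not the authors' automated procedure; all values are invented.

```python
import numpy as np

# True second-order mode: 1 Hz natural frequency, 2% damping, dt = 0.01 s.
f_n, zeta, dt = 1.0, 0.02, 0.01
wn = 2 * np.pi * f_n
wd = wn * np.sqrt(1 - zeta**2)
rho, th = np.exp(-zeta * wn * dt), wd * dt
A = rho * np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
C = np.array([[1.0, 0.0]])
Q = np.eye(2)                            # white process-noise covariance

# State covariance from the discrete Lyapunov equation (fixed-point iteration).
S = np.eye(2)
for _ in range(2000):
    S = A @ S @ A.T + Q
G = A @ S @ C.T                          # next-state/output covariance

# Output covariances Lambda_i = C A^(i-1) G, stacked in a block Hankel matrix.
lam = [C @ np.linalg.matrix_power(A, i) @ G for i in range(20)]
H = np.array([[lam[i + j][0, 0] for j in range(10)] for i in range(10)])

U, s, Vt = np.linalg.svd(H)
O = U[:, :2] * np.sqrt(s[:2])            # observability matrix (model order 2)
A_id = np.linalg.pinv(O[:-1]) @ O[1:]    # shift invariance of O gives A

mu = np.log(np.linalg.eigvals(A_id)) / dt    # continuous-time poles
f_id = np.abs(mu[0]) / (2 * np.pi)           # identified natural frequency (Hz)
zeta_id = -mu.real[0] / np.abs(mu[0])        # identified damping ratio
```

With measured data, `lam` would be estimated from finite output records, and repeating the SVD truncation at successive model orders is exactly what populates a stabilization diagram.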
Transfer and teleportation of quantum states encoded in decoherence-free subspace
Wei Hua; Deng Zhijao; Zhang Xiaolong; Feng Mang
2007-11-15
Quantum state transfer and teleportation, with qubits encoded in internal states of atoms in cavities, among spatially separated nodes of a quantum network in a decoherence-free subspace are proposed, based on a cavity-assisted interaction with single-photon pulses. We show in detail the implementation of a logic-qubit Hadamard gate and a two-logic-qubit conditional gate, and discuss the experimental feasibility of our scheme.
A repeatable inverse kinematics algorithm with linear invariant subspaces for mobile manipulators.
Tchoń, Krzysztof; Jakubiak, Janusz
2005-10-01
On the basis of a geometric characterization of repeatability we present a repeatable extended Jacobian inverse kinematics algorithm for mobile manipulators. The algorithm's dynamics have linear invariant subspaces in the configuration space. A standard Ritz approximation of platform controls results in a band-limited version of this algorithm. Computer simulations involving an RTR manipulator mounted on a kinematic car-type mobile platform are used in order to illustrate repeatability and performance of the algorithm. PMID:16240778
NASA Astrophysics Data System (ADS)
Jefferson, Jennifer L.; Gilbert, James M.; Constantine, Paul G.; Maxwell, Reed M.
2016-05-01
Integrated hydrologic models coupled to land surface models require several input parameters to characterize the land surface and to estimate energy fluxes. Uncertainty of input parameter values is inherent in any model, and the sensitivity of output to these uncertain parameters becomes an important consideration. To better understand these connections in the context of hydrologic models, we use the ParFlow-Common Land Model (PF-CLM) to estimate energy fluxes given variations in 19 vegetation and land surface parameters over a 144-hour period of time. Latent, sensible and ground heat fluxes from bare soil and grass vegetation were estimated using single-column and tilted-v domains. Energy flux outputs, along with the corresponding input parameters, from each of the four scenario simulations were evaluated using active subspaces. The active subspace method considers parameter sensitivity by quantifying a weight for each parameter. The method also evaluates the potential for dimension reduction by identifying the input-output relationship through the active variable, a linear combination of input parameters. The aerodynamic roughness length was the most important parameter for bare soil energy fluxes. Multiple parameters were important for energy fluxes from vegetated surfaces, depending on the type of energy flux. Relationships between land surface inputs and output fluxes varied between latent, sensible and ground heat, but were consistent between domain setups (i.e., with or without lateral flow) and vegetation types. A quadratic polynomial was used to describe the input-output relationship for these energy fluxes. The reduced-dimension model of land surface dynamics can be compared to observations or used to solve the inverse problem. Considering this work as a proof-of-concept, the active subspace method can be applied and extended to a range of domain setups, land cover types and time periods to obtain a reduced-form representation of any output of interest.
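The active subspace machinery referenced here reduces to an eigendecomposition of the average outer product of the output's gradient with respect to the inputs. The toy below (not the PF-CLM study; the model and all values are invented) builds a function that depends on five inputs only through one linear combination, and recovers that direction as the dominant eigenvector.

```python
import numpy as np

# Toy model: f(x) = sin(w . x) depends on 5 inputs only through w . x,
# so its active subspace is one-dimensional and spanned by w.
w = np.array([0.6, 0.3, 0.05, 0.03, 0.02])
w = w / np.linalg.norm(w)

def grad_f(x):
    return np.cos(w @ x) * w           # gradient of f(x) = sin(w . x)

rng = np.random.default_rng(3)
samples = rng.uniform(-1, 1, size=(500, 5))
G = np.array([grad_f(x) for x in samples])
C = G.T @ G / len(G)                   # average gradient outer product
vals, vecs = np.linalg.eigh(C)         # eigenvalues in ascending order
active_dir = vecs[:, -1]               # dominant eigenvector, ~ +/- w
alignment = abs(active_dir @ w)        # ~ 1 for a 1-D active subspace
```

The eigenvalue gap (here, everything after the first eigenvalue is numerically zero) is what justifies fitting a low-dimensional response surface, such as the quadratic polynomial used in the study, in the active variable alone.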
Robust Subspace Clustering for Multi-View Data by Exploiting Correlation Consensus.
Wang, Yang; Lin, Xuemin; Wu, Lin; Zhang, Wenjie; Zhang, Qing; Huang, Xiaodi
2015-11-01
More often than not, multimedia data described by multiple features, such as color and shape, can be naturally decomposed into multiple views. Since these views provide complementary information to each other, considerable effort has been devoted to leveraging multiple views instead of a single view to achieve better clustering performance. To effectively exploit data correlation consensus among multi-views, in this paper, we study subspace clustering for multi-view data while keeping individual views well encapsulated. For characterizing data correlations, we generate a similarity matrix in a way that high affinity values are assigned to data objects within the same subspace across views, while the correlations among data objects from distinct subspaces are minimized. Before generating this matrix, however, we should consider that multi-view data in practice might be corrupted by noise. The corrupted data will significantly degrade clustering results. We first present a novel objective function coupled with an angular-based regularizer. By minimizing this function, multiple sparse vectors are obtained for each data object as its multiple representations. In fact, these sparse vectors result from reaching data correlation consensus on all views. For tackling noise corruption, we present a sparsity-based approach that refines the angular-based data correlation. Using this approach, a more ideal data similarity matrix is generated for multi-view data. Spectral clustering is then applied to the similarity matrix to obtain the final subspace clustering. Extensive experiments have been conducted to validate the effectiveness of our proposed approach. PMID:26353354
Subspace based adaptive denoising of surface EMG from neurological injury patients
NASA Astrophysics Data System (ADS)
Liu, Jie; Ying, Dongwen; Zev Rymer, William; Zhou, Ping
2014-10-01
Objective: After neurological injuries such as spinal cord injury, voluntary surface electromyogram (EMG) signals recorded from affected muscles are often corrupted by interferences, such as spurious involuntary spikes and background noises produced by physiological and extrinsic/accidental origins, imposing difficulties for signal processing. Such interferences are difficult to mitigate using conventional methods. The aim of this study was to develop a subspace-based denoising method to suppress involuntary background spikes contaminating voluntary surface EMG recordings. Approach: The Karhunen-Loeve transform was utilized to decompose a noisy signal into a signal subspace and a noise subspace. An optimal estimate of the EMG signal is derived from the signal subspace and the noise power. Specifically, this estimator is capable of making a tradeoff between interference reduction and signal distortion. Since the estimator partially relies on the estimate of noise power, an adaptive method was presented to sequentially track the variation of interference power. The proposed method was evaluated using both semi-synthetic and real surface EMG signals. Main results: The experiments confirmed that the proposed method can effectively suppress interferences while keeping the distortion of the voluntary EMG signal at a low level. The proposed method can greatly facilitate further signal processing, such as onset detection of voluntary muscle activity. Significance: The proposed method can provide a powerful tool for suppressing background spikes and noise contaminating voluntary surface EMG signals of paretic muscles after neurological injuries, which is of great importance for their multi-purpose applications.
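The signal/noise subspace split described above can be sketched on semi-synthetic frames. The frame sizes, the noise-floor estimate, and the Wiener-style gain rule below are illustrative assumptions, not the authors' exact estimator:

```python
import numpy as np

# Sketch of Karhunen-Loeve (eigen-)subspace denoising: a low-rank
# "voluntary EMG" component plus white noise, framed into a data matrix.
rng = np.random.default_rng(1)
n_frames, frame_len = 400, 32
t = np.arange(frame_len)
basis = np.vstack([np.sin(2 * np.pi * 3 * t / frame_len),
                   np.cos(2 * np.pi * 5 * t / frame_len)])  # rank-2 signal
clean = rng.standard_normal((n_frames, 2)) @ basis
noisy = clean + 0.3 * rng.standard_normal((n_frames, frame_len))

# Eigenvectors of the sample covariance split the space into signal and
# noise subspaces; eigenvalues near the noise floor are discarded.
C = noisy.T @ noisy / n_frames
eigvals, V = np.linalg.eigh(C)
noise_power = np.median(eigvals)            # crude noise-floor estimate
keep = eigvals > 3 * noise_power            # signal-subspace selection

# Wiener-style gain trades interference reduction against signal distortion.
gain = np.maximum(eigvals - noise_power, 0.0) / eigvals
denoised = ((noisy @ V) * (gain * keep)) @ V.T

err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
print(err_noisy, err_denoised)              # denoising lowers the error
```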
N-Screen Aware Multicriteria Hybrid Recommender System Using Weight Based Subspace Clustering
Ullah, Farman; Lee, Sungchang
2014-01-01
This paper presents a recommender system for N-screen services in which users have multiple devices with different capabilities. In N-screen services, a user can use various devices at different locations and times and can change a device while the service is running. N-screen aware recommendation seeks to improve the user experience with recommended content by considering the user N-screen device attributes, such as screen resolution, media codec, remaining battery time, and access network, and the user temporal usage pattern information that are not considered in existing recommender systems. For N-screen aware recommendation support, this work introduces a user device profile collaboration agent, manager, and N-screen control server to acquire and manage the user N-screen devices profile. Furthermore, a multicriteria hybrid framework is suggested that incorporates the N-screen devices information with user preferences and demographics. In addition, we propose an individual feature and subspace weight based clustering (IFSWC) to assign different weights to each subspace and each feature within a subspace in the hybrid framework. The proposed system improves accuracy, precision, and scalability, and mitigates sparsity and cold-start issues. The simulation results demonstrate its effectiveness and support these claims. PMID:25152921
Shahbazi Avarvand, Forooz; Ewald, Arne; Nolte, Guido
2012-01-01
To address the problem of mixing in EEG or MEG connectivity analysis we exploit that noninteracting brain sources do not contribute systematically to the imaginary part of the cross-spectrum. Firstly, we propose to apply the existing subspace method "RAP-MUSIC" to the subspace found from the dominant singular vectors of the imaginary part of the cross-spectrum rather than to the conventionally used covariance matrix. Secondly, to estimate the specific sources interacting with each other, we use a modified LCMV-beamformer approach in which the source direction for each voxel was determined by maximizing the imaginary coherence with respect to a given reference. These two methods are applicable in this form only if the number of interacting sources is even, because odd-dimensional subspaces collapse to even-dimensional ones. Simulations show that (a) RAP-MUSIC based on the imaginary part of the cross-spectrum accurately finds the correct source locations, that (b) conventional RAP-MUSIC fails to do so since it is highly influenced by noninteracting sources, and that (c) the second method correctly identifies those sources which are interacting with the reference. The methods are also applied to real data for a motor paradigm, resulting in the localization of four interacting sources presumably in sensory-motor areas. PMID:22919429
Tao, Dacheng; Tang, Xiaoou; Li, Xuelong; Wu, Xindong
2006-07-01
Relevance feedback schemes based on support vector machines (SVM) have been widely used in content-based image retrieval (CBIR). However, the performance of SVM-based relevance feedback is often poor when the number of labeled positive feedback samples is small. This is mainly due to three reasons: 1) an SVM classifier is unstable on a small-sized training set, 2) SVM's optimal hyperplane may be biased when there are far fewer positive feedback samples than negative feedback samples, and 3) overfitting happens because the number of feature dimensions is much higher than the size of the training set. In this paper, we develop a mechanism to overcome these problems. To address the first two problems, we propose an asymmetric bagging-based SVM (AB-SVM). For the third problem, we combine the random subspace method and SVM for relevance feedback, which is named random subspace SVM (RS-SVM). Finally, by integrating AB-SVM and RS-SVM, an asymmetric bagging and random subspace SVM (ABRS-SVM) is built to solve these three problems and further improve the relevance feedback performance. PMID:16792098
Dong, Daoyi; Chen, Chunlin; Tarn, Tzyh-Jong; Pechen, Alexander; Rabitz, Herschel
2008-08-01
In this paper, an incoherent control scheme for accomplishing the state control of a class of quantum systems which have wavefunction-controllable subspaces is proposed. This scheme includes the following two steps: projective measurement on the initial state and learning control in the wavefunction-controllable subspace. The first step probabilistically projects the initial state into the wavefunction-controllable subspace. The probability of success is sensitive to the initial state; however, it can be greatly improved through multiple experiments on several identical initial states even in the case with a small probability of success for an individual measurement. The second step finds a local optimal control sequence via quantum reinforcement learning and drives the controlled system to the objective state through a set of suitable controls. In this strategy, the initial states can be unknown identical states, the quantum measurement is used as an effective control, and the controlled system is not necessarily unitarily controllable. This incoherent control scheme provides an alternative quantum engineering strategy for locally controllable quantum systems. PMID:18632384
A signal subspace approach for modeling the hemodynamic response function in fMRI.
Hossein-Zadeh, Gholam-Ali; Ardekani, Babak A; Soltanian-Zadeh, Hamid
2003-10-01
Many fMRI analysis methods use a model for the hemodynamic response function (HRF). Common models of the HRF, such as the Gaussian or Gamma functions, have parameters that are usually selected a priori by the data analyst. A new method is presented that characterizes the HRF over a wide range of parameters via three basis signals derived using principal component analysis (PCA). Covering the HRF variability, these three basis signals together with the stimulation pattern define signal subspaces which are applicable to both linear and nonlinear modeling and identification of the HRF and to various activation detection strategies. Analysis of simulated fMRI data using the proposed signal subspace showed increased detection sensitivity compared to the case of using a previously proposed trigonometric subspace. The methodology was also applied to activation detection in both event-related and block design experimental fMRI data using both linear and nonlinear modeling of the HRF. The activated regions were consistent with previous studies, indicating the ability of the proposed approach to detect brain activation without a priori assumptions about the shape parameters of the HRF. The utility of the proposed basis functions in identifying the HRF is demonstrated by estimating the HRF in different activated regions. PMID:14599533
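Deriving a small set of HRF basis signals by PCA, as described above, can be sketched as follows. The gamma-function family and the parameter grid are assumed for illustration and may differ from the paper's actual ranges:

```python
import math
import numpy as np

# Sketch: PCA over a parametric family of gamma-shaped HRFs.
t = np.linspace(0.0, 30.0, 120)
family = []
for a in np.linspace(5.0, 7.0, 10):          # shape parameter (illustrative)
    for scale in np.linspace(0.9, 1.1, 10):  # scale parameter (illustrative)
        h = t ** (a - 1) * np.exp(-t / scale) / (math.gamma(a) * scale ** a)
        family.append(h / np.linalg.norm(h))
H = np.array(family)                         # 100 HRFs x 120 time samples

# PCA via SVD of the mean-centered family: the leading right singular
# vectors are basis signals spanning HRF variability.
Hc = H - H.mean(axis=0)
U, s, Vt = np.linalg.svd(Hc, full_matrices=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)
basis = Vt[:3]                               # three basis signals
print(basis.shape, explained[2])             # variance captured by 3 PCs
```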
[Preconditioning impact on coronary perfusion during ischemia and reperfusion of heart].
Maslov, L N; Lishmanov, Iu B; Oeltgen, P; Peĭ, J-M; Krylatov, A V; Barzakh, E I; Portnichenko, A G; Meshoulam, R
2012-04-01
Recent studies have confirmed that ischemic preconditioning prevents the appearance of reperfusion endothelial dysfunction. However, the impact of preconditioning on the no-reflow phenomenon remains unresolved. The receptor mechanisms involved in the cardioprotective and vasoprotective effects of preconditioning are different. The ability of preconditioning to prevent reperfusion endothelial dysfunction depends upon bradykinin B2-receptor activation and not upon adenosine receptor stimulation. The vasoprotective effect of preconditioning is mediated via mechanisms relying in part on activation of protein kinase C, NO-synthase, cyclooxygenase, mitochondrial K(ATP)-channel opening and an enhancement of antioxidative protection of the heart. Delayed preconditioning also exerts an endothelium-protective effect. Peroxynitrite, NO* and O2* are the triggers of this effect, but a possible end-effector involves endothelial NO-synthase. PMID:22834333
Preconditioned iterative methods for space-time fractional advection-diffusion equations
NASA Astrophysics Data System (ADS)
Zhao, Zhi; Jin, Xiao-Qing; Lin, Matthew M.
2016-08-01
In this paper, we propose practical numerical methods for solving a class of initial-boundary value problems of space-time fractional advection-diffusion equations. First, we propose an implicit method based on two-sided Grünwald formulae and discuss its stability and consistency. Then, we develop the preconditioned generalized minimal residual (preconditioned GMRES) method and the preconditioned conjugate gradient normal residual (preconditioned CGNR) method with easily constructed preconditioners. Importantly, because the resulting systems are Toeplitz-like, the fast Fourier transform can be applied to significantly reduce the computational cost. We perform numerical experiments to demonstrate the efficiency of our preconditioners, even in cases with variable coefficients.
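For Toeplitz-like systems such as these, a circulant (Strang-type) preconditioner applied via the FFT is a standard, easily constructed choice. The sketch below builds a symmetric Toeplitz matrix from Grünwald-type weights and solves it with preconditioned GMRES; the construction is a generic illustration, not necessarily the authors' exact preconditioner or discretization:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

# Symmetric Toeplitz system mimicking a discretized space-fractional
# diffusion operator (Grunwald weights); sizes/parameters are illustrative.
n, alpha = 256, 1.5
g = np.empty(n + 1)
g[0] = 1.0
for k in range(1, n + 1):                  # g_k = (-1)^k * C(alpha, k)
    g[k] = g[k - 1] * (k - 1 - alpha) / k
col = np.zeros(n)
col[0] = -2 * g[1]                         # diagonal entry
col[1] = -(g[0] + g[2])
col[2:] = -g[3:n + 1]
A = toeplitz(col)

# Strang circulant preconditioner: copy central Toeplitz diagonals into a
# circulant, whose inverse applies in O(n log n) via the FFT.
c = col.copy()
c[n // 2 + 1:] = col[1:n // 2][::-1]       # wrap-around part
lam = np.fft.fft(c)                        # circulant eigenvalues

def apply_Minv(x):
    return np.real(np.fft.ifft(np.fft.fft(x) / lam))

M = LinearOperator((n, n), matvec=apply_Minv)
b = np.ones(n)
x, info = gmres(A, b, M=M, atol=1e-10, maxiter=200)
print(info, np.linalg.norm(A @ x - b))     # info == 0 means converged
```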
NASA Astrophysics Data System (ADS)
Kovalevsky, Louis; Gosselet, Pierre
2016-09-01
The Variational Theory of Complex Rays (VTCR) is an indirect Trefftz method designed to study systems governed by Helmholtz-like equations. It uses wave functions to represent the solution inside elements, which reduces the dispersion error compared to classical polynomial approaches, but the resulting system is prone to ill-conditioning. This paper gives a simple and original presentation of the VTCR using the discontinuous Galerkin framework and traces the ill-conditioning back to the accumulation of eigenvalues near zero for the formulation written in terms of wave amplitude. The core of this paper presents an efficient solving strategy that overcomes this issue. The key element is the construction of a search subspace where the condition number is controlled at the cost of a limited decrease of attainable precision. An augmented LSQR solver is then proposed to solve the complete system efficiently and accurately. The approach is successfully applied to different examples.
Steps to translate preconditioning from basic research to the clinic
Bahjat, Frances R; Gesuete, Raffaella; Stenzel-Poore, Mary P
2012-01-01
Efforts to treat cardiovascular and cerebrovascular diseases often focus on the mitigation of ischemia-reperfusion (I/R) injury. Many treatments or “preconditioners” are known to provide substantial protection against the I/R injury when administered prior to the event. Brief periods of ischemia itself have been validated as a means to achieve neuroprotection in many experimental disease settings, in multiple organ systems, and in multiple species suggesting a common pathway leading to tolerance. In addition, pharmacological agents that act as potent preconditioners have been described. Experimental induction of neuroprotection using these various preconditioning paradigms has provided a unique window into the brain’s endogenous protective mechanisms. Moreover, preconditioning agents themselves hold significant promise as clinical-stage therapies for prevention of I/R injury. The aim of this article is to explore several key steps involved in the preclinical validation of preconditioning agents prior to the conduct of clinical studies in humans. Drug development is difficult, expensive and relies on multi-factorial analysis of data from diverse disciplines. Importantly, there is no single path for the preclinical development of a novel therapeutic and no proven strategy to ensure success in clinical translation. Rather, the conduct of a diverse array of robust preclinical studies reduces the risk of clinical failure by varying degrees depending upon the relevance of preclinical models and drug pharmacology to humans. A strong sense of urgency and high tolerance of failure are often required to achieve success in the development of novel treatment paradigms for complex human conditions. PMID:23504609
Dexmedetomidine preconditioning ameliorates kidney ischemia-reperfusion injury
Lempiäinen, Juha; Finckenberg, Piet; Mervaala, Elina E; Storvik, Markus; Kaivola, Juha; Lindstedt, Ken; Levijoki, Jouko; Mervaala, Eero M
2014-01-01
Kidney ischemia-reperfusion (I/R) injury is a common cause of acute kidney injury. We tested whether dexmedetomidine (Dex), an alpha2 adrenoceptor (α2-AR) agonist, protects against kidney I/R injury. Sprague–Dawley rats were divided into four groups: (1) Sham-operated group; (2) I/R group (40 min ischemia followed by 24 h reperfusion); (3) I/R group + Dex (1 μg/kg i.v. 60 min before the surgery), (4) I/R group + Dex (10 μg/kg). The effects of Dex postconditioning (Dex 1 or 10 μg/kg i.v. after reperfusion) as well as the effects of peripheral α2-AR agonism with fadolmidine were also examined. Hemodynamic effects were monitored, renal function measured, and acute tubular damage along with monocyte/macrophage infiltration scored. Kidney protein kinase B, toll like receptor 4, light chain 3B, p38 mitogen-activated protein kinase (p38 MAPK), sirtuin 1, adenosine monophosphate kinase (AMPK), and endothelial nitric oxide synthase (eNOS) expressions were measured, and kidney transcriptome profiles analyzed. Dex preconditioning, but not postconditioning, attenuated I/R injury-induced renal dysfunction, acute tubular necrosis and inflammatory response. Neither pre- nor postconditioning with fadolmidine protected kidneys. Dex decreased blood pressure more than fadolmidine, ameliorated I/R-induced impairment of autophagy and increased renal p38 and eNOS expressions. Dex downregulated 245 and upregulated 61 genes representing 17 enriched Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, in particular, integrin pathway and CD44. Ingenuity analysis revealed inhibition of Rac and nuclear factor (erythroid-derived 2)-like 2 pathways, whereas aryl hydrocarbon receptor (AHR) pathway was activated. Dex preconditioning ameliorates kidney I/R injury and inflammatory response, at least in part, through p38-CD44-pathway and possibly also through ischemic preconditioning. PMID:25505591
Calcium preconditioning triggers neuroprotection in retinal ganglion cells
Brandt, Sean K.; Weatherly, Monique E.; Ware, Lillian; Linn, David M.; Linn, Cindy L.
2010-01-01
In the mammalian retina, excitotoxicity has been shown to be involved in apoptotic retinal ganglion cell (RGC) death and is associated with certain retinal disease states including glaucoma, diabetic retinopathy and retinal ischemia. Previous studies from this lab (Wehrwein et al., 2004) have demonstrated that acetylcholine (ACh) and nicotine protect against glutamate-induced excitotoxicity in isolated adult pig RGCs through nicotinic acetylcholine receptors (nAChRs). Activation of nAChRs in these RGCs triggers cell survival signaling pathways and inhibits apoptotic enzymes (Asomugha et al., 2010). However, the link between binding of nAChRs and activation of neuroprotective pathways is unknown. In this study, we examine the hypothesis that calcium permeation through nAChR channels is required for ACh-induced neuroprotection against glutamate-induced excitotoxicity in isolated pig RGCs. RGCs were isolated from other retinal tissue using a two-step panning technique and cultured for 3 days under different conditions. In some studies, calcium imaging experiments were performed using the fluorescent calcium indicator, fluo-4, and demonstrated that calcium permeates the nAChR channels located on pig RGCs. In other studies, the extracellular calcium concentration was altered to determine the effect on nicotine-induced neuroprotection. Results support the hypothesis that calcium is required for nicotine-induced neuroprotection in isolated pig RGCs. Lastly, studies were performed to analyze the effects of preconditioning on glutamate-induced excitotoxicity and neuroprotection. In these studies, a preconditioning dose of calcium was introduced to cells using a variety of mechanisms before a large glutamate insult was applied to cells. Results from these studies support the hypothesis that preconditioning cells with a relatively low level of calcium before an excitotoxic insult leads to neuroprotection. In the future, these results could provide important information
Preconditioned Conjugate Gradient methods for low speed flow calculations
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Ng, Wing-Fai; Liou, Meng-Sing
1993-01-01
An investigation is conducted into the viability of using a generalized Conjugate Gradient-like method as an iterative solver to obtain steady-state solutions of very low-speed fluid flow problems. Low-speed flow at Mach 0.1 over a backward-facing step is chosen as a representative test problem. The unsteady form of the two-dimensional, compressible Navier-Stokes equations is integrated in time using discrete time-steps. The Navier-Stokes equations are cast in an implicit, upwind finite-volume, flux split formulation. The new iterative solver is used to solve a linear system of equations at each step of the time-integration. Preconditioning techniques are used with the new solver to enhance the stability and the convergence rate of the solver and are found to be critical to the overall success of the solver. A study of various preconditioners reveals that a preconditioner based on the lower-upper (L-U)-successive symmetric over-relaxation iterative scheme is more efficient than a preconditioner based on incomplete L-U factorizations of the iteration matrix. The performance of the new preconditioned solver is compared with a conventional line Gauss-Seidel relaxation (LGSR) solver. Overall speed-up factors of 28 (in terms of global time-steps required to converge to a steady-state solution) and 20 (in terms of total CPU time on one processor of a CRAY-YMP) are found in favor of the new preconditioned solver, when compared with the LGSR solver.
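A minimal preconditioned conjugate gradient loop illustrates how a preconditioner enters a Krylov iteration and why it can be critical to convergence. The badly scaled SPD test matrix and the Jacobi preconditioner below are illustrative stand-ins, far simpler than the paper's flux-split Navier-Stokes solver with its SSOR/ILU preconditioning:

```python
import numpy as np

# Minimal preconditioned conjugate gradient (PCG); all names illustrative.
def pcg(A, b, apply_Minv, tol=1e-10, maxiter=500):
    x = np.zeros_like(b)
    r = b.copy()                       # residual for initial guess x = 0
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxiter + 1):
        Ap = A @ p
        a = rz / (p @ Ap)
        x += a * p
        r -= a * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

# Badly scaled SPD system: a simple Jacobi (diagonal) preconditioner undoes
# the scaling and sharply reduces the iteration count.
rng = np.random.default_rng(2)
n = 200
C = rng.standard_normal((n, n))
B = np.eye(n) + C @ C.T / n            # well-conditioned SPD core
s = np.sqrt(np.logspace(0, 6, n))      # wildly varying row/column scales
A = s[:, None] * B * s[None, :]
b = rng.standard_normal(n)

x_plain, it_plain = pcg(A, b, lambda r: r)
x_prec, it_prec = pcg(A, b, lambda r: r / np.diag(A))
print(it_plain, it_prec)               # preconditioning needs far fewer
```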
Chiueh, C.C. . E-mail: chiueh@tmu.edu.tw; Andoh, Tsugunobu; Chock, P. Boon
2005-09-01
Hormesis, a stress tolerance, can be induced by ischemic preconditioning stress. In addition to preconditioning, it may be induced by other means, such as gas anesthetics. Preconditioning mechanisms, which may be mediated by reprogramming survival genes and proteins, are obscure. A known neurotoxicant, 1-Methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP), causes less neurotoxicity in mice that are preconditioned. Pharmacological evidence suggests that the signaling pathway of •NO-cGMP-PKG (protein kinase G) may mediate the preconditioning phenomenon. We developed a human SH-SY5Y cell model for investigating the •NO-mediated signaling pathway, gene regulation, and protein expression following a sublethal preconditioning stress caused by a brief 2-h serum deprivation. Preconditioned human SH-SY5Y cells are more resistant against severe oxidative stress and apoptosis caused by lethal serum deprivation and 1-methyl-4-phenylpyridinium (MPP+). Both sublethal and lethal oxidative stress caused by serum withdrawal increased neuronal nitric oxide synthase (nNOS/NOS1) expression and •NO levels to a similar extent. In addition to free radical scavengers, inhibition of nNOS, guanylyl cyclase, and PKG blocks hormesis induced by preconditioning. S-nitrosothiols and 6-Br-cGMP produce a cytoprotection mimicking the action of preconditioning tolerance. There are two distinct cGMP-mediated survival pathways: (i) the up-regulation of a redox protein thioredoxin (Trx) for elevating mitochondrial levels of antioxidant protein Mn superoxide dismutase (MnSOD) and antiapoptotic protein Bcl-2, and (ii) the activation of mitochondrial ATP-sensitive potassium channels [K(ATP)]. Preconditioning induction of Trx increased tolerance against MPP+, which was blocked by Trx mRNA antisense oligonucleotide and Trx reductase inhibitor. It is concluded that Trx plays a pivotal role in •NO-dependent preconditioning hormesis against
Incomplete block factorization preconditioning for indefinite elliptic problems
Guo, Chun-Hua
1996-12-31
The application of the finite difference method to approximate the solution of an indefinite elliptic problem produces a linear system whose coefficient matrix is block tridiagonal and symmetric indefinite. Such a linear system can be solved efficiently by a conjugate residual method, particularly when combined with a good preconditioner. We show that a specific incomplete block factorization exists for the indefinite matrix if the mesh size is reasonably small, and that this factorization can serve as an efficient preconditioner. Some efforts are made to estimate the eigenvalues of the preconditioned matrix. Numerical results are also given.
Preconditioning a product of matrices arising in trust region subproblems
Hribar, M.E.; Plantenga, T.D.
1996-03-01
In solving large scale optimization problems, we find it advantageous to use iterative methods to solve the sparse linear systems that arise. In the ETR software for solving equality constrained optimization problems, we use a conjugate gradient method to approximately solve the trust region subproblems. To speed up the convergence of the conjugate gradient routine, we need to precondition matrices of the form Z^T W Z, which are not explicitly stored. Four preconditioners were implemented and the results for each are given.
Ischemic preconditioning enhances integrity of coronary endothelial tight junctions
Li, Zhao; Jin, Zhu-Qiu
2012-08-31
Highlights: • Cardiac tight junctions are present between coronary endothelial cells. • Ischemic preconditioning preserves the structural and functional integrity of tight junctions. • Myocardial edema is prevented in hearts subjected to ischemic preconditioning. • Ischemic preconditioning enhances translocation of ZO-2 from cytosol to cytoskeleton. -- Abstract: Ischemic preconditioning (IPC) is one of the most effective procedures known to protect hearts against ischemia/reperfusion (IR) injury. Tight junction (TJ) barriers occur between coronary endothelial cells. TJs provide barrier function to maintain the homeostasis of the inner environment of tissues. However, the effect of IPC on the structure and function of cardiac TJs remains unknown. We tested the hypothesis that myocardial IR injury ruptures the structure of TJs and impairs endothelial permeability whereas IPC preserves the structural and functional integrity of TJs in the blood-heart barrier. Langendorff hearts from C57BL/6J mice were prepared and perfused with Krebs-Henseleit buffer. Cardiac function, creatine kinase release, and myocardial edema were measured. Cardiac TJ function was evaluated by measuring Evans blue-conjugated albumin (EBA) content in the extravascular compartment of hearts. Expression and translocation of zonula occludens (ZO)-2 in IR and IPC hearts were detected with Western blot. A subset of hearts was processed for the observation of ultra-structure of cardiac TJs with transmission electron microscopy. There were clear TJs between coronary endothelial cells of mouse hearts. IR caused the collapse of TJs whereas IPC sustained the structure of TJs. IR increased extravascular EBA content in the heart and myocardial edema but decreased the expression of ZO-2 in the cytoskeleton. IPC maintained the structure of TJs. Cardiac EBA content and edema were reduced in IPC hearts. IPC
Preconditioning methods for ideal and multiphase fluid flows
NASA Astrophysics Data System (ADS)
Gupta, Ashish
The objective of this study is to develop a preconditioning method for an ideal and multiphase multispecies compressible fluid flow solver using the homogeneous equilibrium mixture model. The mathematical model for fluid flow going through phase change uses density and temperature in the formulation, where the density represents the multiphase mixture density. The change of phase of the fluid is then explicitly determined using the equation of state of the fluid, which only requires temperature and mixture density. The method developed is based on a finite-volume framework in which the numerical fluxes are computed using Roe's approximate Riemann solver and the modified Harten-Lax-van Leer (HLLC) scheme. All-speed Roe and HLLC flux based schemes have been developed either by using preconditioning or by directly modifying dissipation to reduce the effect of acoustic speed in the numerical dissipation as the Mach number decreases. Preconditioners proposed by Briley, Taylor and Whitfield, by Eriksson, and by Turkel are studied in this research, whereas low-dissipation schemes proposed by Rieper and by Thornber, Mosedale, Drikakis, Youngs and Williams are also considered. Various preconditioners are evaluated in terms of development, performance, accuracy and limitations in simulations at various Mach numbers. A generalized preconditioner is derived which possesses a well-conditioned eigensystem for multiphase multispecies flow simulations. Validation and verification of the solution procedure are carried out on several small model problems with comparison to experimental, theoretical, and other numerical results. Preconditioning methods are evaluated using three basic geometries: 1) a bump in a channel, 2) flow over a NACA0012 airfoil, and 3) flow over a cylinder, which are then compared with theoretical and numerical results. Multiphase capabilities of the solver are evaluated in cryogenic and non-cryogenic conditions. For cryogenic conditions the solver is evaluated by predicting
Weighted graph based ordering techniques for preconditioned conjugate gradient methods
NASA Technical Reports Server (NTRS)
Clift, Simon S.; Tang, Wei-Pai
1994-01-01
We describe the basis of a matrix ordering heuristic for improving the incomplete factorization used in preconditioned conjugate gradient techniques applied to anisotropic PDEs. Several new matrix ordering techniques, derived from well-known algorithms in combinatorial graph theory, which attempt to implement this heuristic, are described. These ordering techniques are tested against a number of matrices arising from linear anisotropic PDEs, and compared with other matrix ordering techniques. A variation of RCM is shown to generally improve the quality of incomplete factorization preconditioners.
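The effect of an RCM-style reordering on matrix bandwidth, a property that typically influences incomplete-factorization quality, can be demonstrated with SciPy's built-in routine. The scrambled 2D Laplacian below is a generic test matrix, not one of the paper's anisotropic problems:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Build a 2D five-point Laplacian on a 20x20 grid, scramble its ordering,
# then let RCM recover a narrow band.
m = 20
T = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(m, m))
S = sp.diags([-1, -1], [-1, 1], shape=(m, m))
A = (sp.kron(sp.eye(m), T) + sp.kron(S, sp.eye(m))).tocsr()
n = A.shape[0]

rng = np.random.default_rng(3)
scramble = rng.permutation(n)
A_bad = A[scramble][:, scramble]           # destroys the natural band

def bandwidth(M):
    coo = sp.coo_matrix(M)
    return int(np.abs(coo.row - coo.col).max())

perm = reverse_cuthill_mckee(sp.csr_matrix(A_bad), symmetric_mode=True)
A_rcm = A_bad[perm][:, perm]
print(bandwidth(A_bad), bandwidth(A_rcm))  # RCM restores a narrow band
```

A narrow band limits fill-in locations, which is one reason reordering can improve the quality of incomplete factorization preconditioners.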
Interface preconditionings for domain-decomposed convection-diffusion operators
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Keyes, David E.
1990-01-01
The performance of five different interface preconditionings for domain-decomposed convection-diffusion problems, including a novel one known as the spectral probe, is tested in a three-dimensional parameter space consisting of mesh parameters, Reynolds number, and domain aspect ratio. The preconditioners are representative of the range of practically computable possibilities that have appeared in the literature for the treatment of nonoverlapping subdomains. Numerical examples show that no single preconditioner can be considered uniformly superior or uniformly inferior to the rest, but that knowledge of the particulars of the shape and strength of the convection is important in selecting among them in a given problem.
NASA Astrophysics Data System (ADS)
Zimmerling, Jörn; Wei, Lei; Urbach, Paul; Remis, Rob
2016-03-01
We present a Krylov model-order reduction approach to efficiently compute the spontaneous decay (SD) rate of arbitrarily shaped 3D nanosized resonators. We exploit the symmetry of Maxwell's equations to efficiently construct so-called reduced-order models that approximate the SD rate of a quantum emitter embedded in a resonating nanostructure. The models allow for frequency sweeps, meaning that a single model provides SD rate approximations over an entire spectral interval of interest. Field approximations and dominant quasinormal modes can be determined at low cost as well.
Kerfriden, P.; Gosselet, P.; Adhikari, S.; Bordas, S.
2013-01-01
This article describes a bridge between POD-based model order reduction techniques and classical Newton/Krylov solvers. This bridge is used to derive an efficient algorithm to correct, "on-the-fly", the reduced order modelling of highly nonlinear problems undergoing strong topological changes. Damage initiation problems are addressed and tackled via a corrected hyperreduction method. It is shown that the relevance of the reduced order model can be significantly improved with reasonable additional costs when using this algorithm, even when strong topological changes are involved. PMID:27076688
NASA Astrophysics Data System (ADS)
Xu, Y.; Tuttas, S.; Heogner, L.; Stilla, U.
2016-06-01
This paper presents an approach for the classification of photogrammetric point clouds of scaffolding components on a construction site, as a preparation for automatic monitoring of the site by reconstructing an as-built Building Information Model (as-built BIM). Points belonging to tubes and toeboards of scaffolds are distinguished via a subspace clustering process and a principal component analysis (PCA) algorithm. The overall workflow includes four essential processing steps. Initially, the spherical support region of each point is selected. In the second step, the normalized cut algorithm based on spectral clustering theory is introduced for the subspace clustering, so as to select suitable subspace clusters of points and avoid outliers. Then, in the third step, the feature of each point is calculated by measuring distances between points and the plane of the local reference frame defined by PCA in each cluster. Finally, the types of points are distinguished and labelled through a supervised classification method using a random forest algorithm. The effectiveness and applicability of the proposed steps are investigated on both simulated test data and a real scenario. The results obtained in the two experiments reveal that the proposed approach is suitable for classifying points belonging to linear-shaped objects with different cross sections. For the tests using a synthetic point cloud, the classification accuracy reaches 80%, even when the data are contaminated by noise and outliers. For the application in the real scenario, the method also achieves a classification accuracy of better than 63%, without using any information about the normal vector of the local surface.
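A minimal sketch of the PCA-feature-plus-random-forest stage, assuming synthetic line-like ("tube") and plane-like ("toeboard") point sets and two eigenvalue-ratio features; the paper's normalized-cut clustering step is omitted:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
# A line-like object (tube) and a plane-like object (toeboard), offset
# in z so their neighborhoods do not mix; small isotropic noise added.
line = np.c_[np.linspace(0, 10, 300), np.zeros(300), np.zeros(300)]
plane = np.c_[rng.uniform(0, 10, (300, 2)), np.full(300, 5.0)]
pts = np.vstack([line, plane]) + 0.02 * rng.standard_normal((600, 3))
y = np.r_[np.zeros(300), np.ones(300)]

# Local PCA: eigenvalue ratios of the neighborhood covariance tell
# line-like (lambda2/lambda1 small) from plane-like (lambda2/lambda1 ~ 1).
nbrs = NearestNeighbors(n_neighbors=15).fit(pts)
_, idx = nbrs.kneighbors(pts)
feats = []
for nb in idx:
    ev = np.linalg.eigvalsh(np.cov(pts[nb].T))[::-1]   # descending
    feats.append([ev[1] / ev[0], ev[2] / ev[0]])
feats = np.array(feats)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
train = rng.random(600) < 0.7
clf.fit(feats[train], y[train])
print("held-out accuracy:", clf.score(feats[~train], y[~train]))
```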
A fast algorithm for the recursive calculation of dominant singular subspaces
NASA Astrophysics Data System (ADS)
Mastronardi, N.; van Barel, M.; Vandebril, R.
2008-09-01
In many engineering applications it is required to compute the dominant subspace of a matrix A of dimension m×n, with m ≫ n. Often the matrix A is produced incrementally, so all the columns are not available simultaneously. This problem arises, e.g., in image processing, where each column of the matrix A represents an image of a given sequence leading to a singular value decomposition-based compression [S. Chandrasekaran, B.S. Manjunath, Y.F. Wang, J. Winkeler, H. Zhang, An eigenspace update algorithm for image analysis, Graphical Models and Image Process. 59 (5) (1997) 321-332]. Furthermore, the so-called proper orthogonal decomposition approximation uses the left dominant subspace of a matrix A where a column consists of a time instance of the solution of an evolution equation, e.g., the flow field from a fluid dynamics simulation. Since these flow fields tend to be very large, only a small number can be stored efficiently during the simulation, and therefore an incremental approach is useful [P. Van Dooren, Gramian based model reduction of large-scale dynamical systems, in: Numerical Analysis 1999, Chapman & Hall, CRC Press, London, Boca Raton, FL, 2000, pp. 231-247]. In this paper an algorithm for computing an approximation of the left dominant subspace of size k of A, with k ≪ m, n, is proposed, requiring at each iteration O(mk + k^2) floating point operations. Moreover, the proposed algorithm exhibits a great deal of parallelism that can be exploited in a suitable implementation on a parallel computer.
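The flavor of such recursive updates can be sketched in a few lines. This naive version costs O(mk + k^3) per column because of the dense SVD of the small core matrix, whereas the paper's algorithm reaches O(mk + k^2); the low-rank-plus-noise data are synthetic:

```python
import numpy as np

def update(U, s, a, k):
    """Fold one new column a into the rank-k factorization with basis U."""
    p = U.T @ a                      # coefficients inside the current subspace
    r = a - U @ p                    # residual orthogonal to the subspace
    rho = np.linalg.norm(r)
    # Small (k+1) x (k+1) core matrix whose SVD rotates the augmented basis.
    K = np.block([[np.diag(s), p[:, None]],
                  [np.zeros((1, k)), np.array([[rho]])]])
    Uk, sk, _ = np.linalg.svd(K)
    Ua = np.hstack([U, (r / rho)[:, None]])
    return (Ua @ Uk)[:, :k], sk[:k]  # truncate back to rank k

rng = np.random.default_rng(2)
m, n, k = 200, 50, 5
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n)) \
    + 0.01 * rng.standard_normal((m, n))       # low rank plus noise

U, s, _ = np.linalg.svd(A[:, :k], full_matrices=False)
for a in A[:, k:].T:                 # columns arrive one at a time
    U, s = update(U, s, a, k)

err = np.linalg.norm(A - U @ (U.T @ A)) / np.linalg.norm(A)
print("relative residual outside tracked subspace:", err)
```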
Investigation into on-road vehicle parameter identification based on subspace methods
NASA Astrophysics Data System (ADS)
Dong, Guangming; Chen, Jin; Zhang, Nong
2014-12-01
The randomness of road-tyre excitations can excite the low frequency ride vibrations of the bounce, pitch and roll modes of an on-road vehicle. In this paper, modal parameters and mass moments of inertia of an on-road vehicle are estimated with acceptable accuracy only by measuring accelerations of the vehicle sprung mass and unsprung masses, based on subspace identification methods. The vehicle bounce, pitch and roll modes are characterized by their large damping (damping ratio 0.2-0.3). Two kinds of subspace identification methods, one that uses input/output data and the other that uses output data only, are compared for the highly damped modes. It is shown that, for the same data length, clearly larger errors in the modal identification results are observed for the method using output data only, while the additional use of input data significantly reduces the estimation variance. Instead of using tyre forces as inputs, which are difficult to measure or estimate, vertical accelerations of the unsprung masses are used as inputs. Theoretical analysis and Monte Carlo experiments show that, when the vehicle speed is not very high, the subspace identification method using accelerations of unsprung masses as inputs can give more accurate results than the method using road-tyre forces as inputs. After the modal parameters are identified, and if the vehicle mass and its center of gravity are pre-determined, the roll and pitch moments of inertia of an on-road vehicle can be computed directly using the identified frequencies only, without requiring accurate estimation of mode shape vectors or multi-variable optimization algorithms.
Blended particle methods with adaptive subspaces for filtering turbulent dynamical systems
NASA Astrophysics Data System (ADS)
Qi, Di; Majda, Andrew J.
2015-04-01
It is a major challenge throughout science and engineering to improve uncertain model predictions by utilizing noisy data sets from nature. Hybrid methods combining the advantages of traditional particle filters and the Kalman filter offer a promising direction for filtering or data assimilation in high dimensional turbulent dynamical systems. In this paper, blended particle filtering methods that exploit the physical structure of turbulent dynamical systems are developed. Non-Gaussian features of the dynamical system are captured adaptively in an evolving-in-time low dimensional subspace through particle methods, while at the same time statistics in the remaining portion of the phase space are amended by conditional Gaussian mixtures interacting with the particles. The importance of both using the adaptively evolving subspace and introducing conditional Gaussian statistics in the orthogonal part is illustrated here by simple examples. For practical implementation of the algorithms, finding the most probable distributions that characterize the statistics in the phase space, as well as effective resampling strategies to handle realizability and stability issues, are discussed. To test the performance of the blended algorithms, the forty dimensional Lorenz 96 system is utilized with a five dimensional subspace to run particles. The filters are tested extensively in various turbulent regimes with distinct statistics, with changing observation time frequency, and with both dense and sparse spatial observations. In real applications, perfect dynamical models are always inaccessible given the complexities of both modeling and computing high dimensional turbulent systems. The effects of model errors from imperfect modeling of the systems are also checked for these methods. The blended methods show uniformly high skill both in capturing non-Gaussian statistics and in achieving accurate filtering results in various dynamical regimes with and without model errors.
An adaptive subspace trust-region method for frequency-domain seismic full waveform inversion
NASA Astrophysics Data System (ADS)
Zhang, Huan; Li, Xiaofan; Song, Hanjie; Liu, Shaolin
2015-05-01
Full waveform inversion is currently considered a promising seismic imaging method for obtaining high-resolution and quantitative images of the subsurface. It is a nonlinear ill-posed inverse problem, and the main difficulty preventing its widespread application to real data is its sensitivity to incorrect initial models and noisy data. Local optimization methods, including Newton's method and gradient methods, tend to converge to local minima, while global optimization algorithms such as simulated annealing are computationally costly. To confront this issue, in this paper we investigate the possibility of applying the trust-region method to the full waveform inversion problem. Different from line search methods, trust-region methods force the new trial step to lie within a certain neighborhood of the current iterate. Theoretically, trust-region methods are reliable and robust, and they have very strong convergence properties. The capability of this inversion technique is tested with the synthetic Marmousi velocity model and the SEG/EAGE Salt model. Numerical examples demonstrate that the adaptive subspace trust-region method can provide solutions closer to the global minimum, with a higher convergence rate, than the conventional approximate Hessian approach and the L-BFGS method. In addition, the match between the inverted model and the true model remains excellent even when the initial model deviates far from the true model. Inversion results with noisy data also exhibit the remarkable capability of the adaptive subspace trust-region method for low signal-to-noise data inversions. These promising numerical results suggest that the adaptive subspace trust-region method is well suited for full waveform inversion, owing to its stronger convergence and higher convergence rate.
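The trust-region idea itself (not the paper's adaptive-subspace variant for waveform inversion) can be tried on a toy problem with SciPy's Newton-CG trust-region solver; the Rosenbrock function and starting point are standard illustrations:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

# Each iteration restricts the trial step to a ball of adaptive radius
# around the current iterate, instead of a line search along one direction.
x0 = np.array([-1.2, 1.0])
res = minimize(rosen, x0, method="trust-ncg",
               jac=rosen_der, hess=rosen_hess,
               options={"initial_trust_radius": 1.0, "gtol": 1e-8})
print(res.x, res.fun)   # the global minimum is at (1, 1)
```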
Saxena, Saurabh; Shukla, Dhananjay; Bansal, Anju
2016-03-01
Previously, we have reported the regulation of monocarboxylate transporters (MCT)1 and MCT4 by physiological stimuli such as hypoxia and exercise. In the present study, we evaluated the effect of hypoxic preconditioning and training on the expression of different MCT isoforms in muscle. We found increased mRNA expression of MCT1, MCT11, and MCT12 after hypoxic preconditioning with cobalt chloride and training. However, the expression of other MCT isoforms increased marginally or was even reduced after hypoxic preconditioning. Only the protein expression of MCT1 increased after hypoxic preconditioning. MCT2 protein expression increased after training only, and MCT4 protein expression decreased in both preconditioning and hypoxic training. Furthermore, we found decreased plasma lactate levels during hypoxic preconditioning (0.74-fold), exercise (0.78-fold), and hypoxic preconditioning along with exercise (0.67-fold), which indicates increased lactate uptake by skeletal muscle. Protein-protein interactions between hypoxia inducible factor-1 and MCT isoforms were also evaluated, but no interaction was found. In conclusion, almost all MCTs are expressed in red gastrocnemius muscle at the mRNA level, and their expression is regulated differently under hypoxic preconditioning and exercise conditions. PMID:26716978
Wan Ahmad Kamal, Wan Syazli Rodzaia; Noor, Norizal Mohd; Abdullah, Shafie
2013-01-01
Background Ischemic preconditioning has been shown to improve the outcomes of hypoxic tolerance of the heart, brain, lung, liver, jejunum, skin, and muscle tissues. However, to date, no report of ischemic preconditioning on vascularized bone grafts has been published. Methods Sixteen rabbits were divided into four groups with ischemic times of 2, 6, 14, and 18 hours. Half of the rabbits in each group underwent ischemic preconditioning. The osteomyocutaneous flaps consisted of the tibia bone, from which the overlying muscle and skin were raised. The technique of ischemic preconditioning involved applying a vascular clamp to the pedicle for 3 cycles of 10 minutes each. The rabbits then underwent serial plain radiography and computed tomography imaging on the first, second, fourth, and sixth postoperative weeks. Following this, all of the rabbits were sacrificed and histological examinations were performed. Results The results showed that for clinical analysis of the skin flaps and bone grafts, the preconditioned groups showed better survivability. In the plain radiographs, except for two non-preconditioned rabbits with intraoperative ischemic times of 6 hours, all began to show early callus formation at the fourth week. The computed tomography findings showed more callus formation in the preconditioned groups for all of the ischemic times except for the 18-hour group. The histological findings correlated with the radiological findings. There was no statistical significance in the difference between the two groups. Conclusions In conclusion, ischemic preconditioning improved the survivability of skin flaps and increased callus formation during the healing process of vascularized bone grafts. PMID:24286040
Sensory Preconditioning in Newborn Rabbits: From Common to Distinct Odor Memories
ERIC Educational Resources Information Center
Coureaud, Gerard; Tourat, Audrey; Ferreira, Guillaume
2013-01-01
This study evaluated whether olfactory preconditioning is functional in newborn rabbits and based on joined or independent memory of odorants. First, after exposure to odorants A+B, the conditioning of A led to high responsiveness to odorant B. Second, responsiveness to B persisted after amnesia of A. Third, preconditioning was also functional…
40 CFR 85.2220 - Preconditioned two speed idle test-EPA 91.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Preconditioned two speed idle test-EPA... Warranty Short Tests § 85.2220 Preconditioned two speed idle test—EPA 91. (a) General requirements—(1...-speed mode followed immediately by a first-chance idle mode. (ii) The second-chance test as...
Barbosa, V; Sievers, R E; Zaugg, C E; Wolfe, C L
1996-02-01
The cardioprotective effect of preconditioning is associated with glycogen depletion and attenuation of intracellular acidosis during subsequent prolonged ischemia. This study determined the effects of increasing preconditioning ischemia time on myocardial glycogen depletion and on infarct size reduction. In addition, this study determined whether infarct size reduction by preconditioning correlates with glycogen depletion before prolonged ischemia. Anesthetized rats underwent a single episode of preconditioning lasting 1.25, 2.5, 5, or 10 minutes or multiple episodes totaling 10 (2 x 5 min) or 20 minutes (4 x 5 or 2 x 10 min) of preconditioning ischemia time, each followed by 5 minutes of reperfusion. Then both preconditioned and control rats underwent 45 minutes of ischemia induced by left coronary artery (LCA) occlusion and 120 minutes of reperfusion. After prolonged ischemia, infarct size was determined by dual staining with triphenyltetrazolium chloride and phthalocyanine blue dye. Glycogen levels were determined by an enzymatic assay in selected rats from each group before prolonged ischemia. We found that increasing preconditioning ischemia time resulted in glycogen depletion and infarct size reduction that could both be described by exponential functions. Furthermore, infarct size reduction correlated with glycogen depletion before prolonged ischemia (r = 0.98; p < 0.01). These findings suggest a role for glycogen depletion in reducing ischemic injury in the preconditioned heart. PMID:8579012
Towards automatic music transcription: note extraction based on independent subspace analysis
NASA Astrophysics Data System (ADS)
Wellhausen, Jens; Hoynck, Michael
2005-01-01
Due to the increasing amount of music available electronically, the need for automatic search, retrieval and classification systems for music becomes more and more important. In this paper an algorithm for the automatic transcription of polyphonic piano music into MIDI data is presented, which is an interesting basis for database applications, music analysis and music classification. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined using Independent Subspace Analysis to extract the sounding notes. Finally, the results are used to build a MIDI file as a new representation of the piece of music being examined.
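Independent Subspace Analysis builds on independent component analysis. As a minimal stand-in for the note-extraction step, FastICA below unmixes two synthetic "note" waveforms from linear mixtures; the actual system works on spectrogram segments of polyphonic piano audio:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic source signals: a sine "note" and a square-wave "note".
t = np.linspace(0, 1, 4000)
s1 = np.sin(2 * np.pi * 440 * t)            # A4 sine
s2 = np.sign(np.sin(2 * np.pi * 330 * t))   # square wave
S = np.c_[s1, s2]
A = np.array([[1.0, 0.5], [0.4, 1.0]])      # mixing matrix
X = S @ A.T                                  # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)

# Recovered sources match the originals up to permutation and scale;
# the cross-correlation matrix makes this visible.
corr = np.abs(np.corrcoef(S.T, S_est.T)[:2, 2:])
print(corr.round(2))
```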
Learning a subspace for face image clustering via trace ratio criterion
NASA Astrophysics Data System (ADS)
Hou, Chenping; Nie, Feiping; Zhang, Changshui; Wu, Yi
2009-06-01
Face clustering is gaining ever-increasing attention due to its importance in optical image processing. Because traditional clustering methods do not account for the particular characteristics of face images, they are not well suited to face image clustering. We propose a novel approach that employs the trace ratio criterion and requires the face images to be spatially smooth. A graph regularization technique is also applied to constrain nearby images to have similar cluster indicators. We alternately learn the optimal subspace and the clusters. Experimental results demonstrate that the proposed approach performs better than other learning methods for face image clustering.
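The trace ratio criterion maximizes tr(W^T A W) / tr(W^T B W) over orthonormal W. A common iterative solver for this problem (assumed here; the paper additionally alternates with cluster updates, and the scatter-like matrices A and B below are synthetic) alternates a scalar update with an eigenproblem:

```python
import numpy as np

def trace_ratio(A, B, d, iters=50):
    """Maximize tr(W^T A W) / tr(W^T B W) over orthonormal n x d matrices W."""
    n = A.shape[0]
    W = np.eye(n)[:, :d]
    for _ in range(iters):
        lam = np.trace(W.T @ A @ W) / np.trace(W.T @ B @ W)
        _, V = np.linalg.eigh(A - lam * B)   # eigenvalues in ascending order
        W = V[:, -d:]                        # top-d eigenvectors of A - lam*B
    return W, lam

rng = np.random.default_rng(3)
X = rng.standard_normal((8, 8))
A = X @ X.T                      # plays the role of between-class scatter
Y = rng.standard_normal((8, 8))
B = Y @ Y.T + np.eye(8)          # plays the role of within-class scatter (PD)
W, lam = trace_ratio(A, B, d=3)
print("optimal trace ratio:", lam)
```

The scalar iteration is known to converge monotonically to the global maximum of the ratio, which is what makes the criterion attractive compared with the ratio-trace relaxation.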
Random subspaces for encryption based on a private shared Cartesian frame
Bartlett, Stephen D.; Hayden, Patrick; Spekkens, Robert W.
2005-11-15
A private shared Cartesian frame is a novel form of private shared correlation that allows for both private classical and quantum communication. Cryptography using a private shared Cartesian frame has the remarkable property that asymptotically, if perfect privacy is demanded, the private classical capacity is three times the private quantum capacity. We demonstrate that if the requirement for perfect privacy is relaxed, then it is possible to use the properties of random subspaces to nearly triple the private quantum capacity, almost closing the gap between the private classical and quantum capacities.
Preconditioning 2D Integer Data for Fast Convex Hull Computations.
Cadenas, José Oswaldo; Megson, Graham M; Luengo Hendriks, Cris L
2016-01-01
In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved. PMID:26938221
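A simplified version of the reduction illustrates why it preserves the hull: for integer points in a p × q box, only the minimum- and maximum-y point of each x column can be hull vertices, so the reduced set still contains the hull. (The paper's algorithm additionally emits the reduced points as a simple polygonal chain; this sketch performs only the reduction.)

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(4)
pts = rng.integers(0, 100, size=(5000, 2))   # integer points in a 100 x 100 box

# Keep only the extreme-y points of each x column; any other point in a
# column is a convex combination of the two extremes, hence not a vertex.
reduced = []
for x in np.unique(pts[:, 0]):
    ys = pts[pts[:, 0] == x, 1]
    reduced.append((x, ys.min()))
    reduced.append((x, ys.max()))
reduced = np.array(reduced)

full_hull = ConvexHull(pts.astype(float))
red_hull = ConvexHull(reduced.astype(float))
print(len(pts), "->", len(reduced), "points; same hull area:",
      np.isclose(full_hull.volume, red_hull.volume))
```

(`ConvexHull.volume` is the enclosed area in 2D, so equal areas confirm the two hulls coincide.)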
Stress Preconditioning of Spreading Depression in the Locust CNS
Rodgers, Corinne I.; Armstrong, Gary A. B.; Shoemaker, Kelly L.; LaBrie, John D.; Moyes, Christopher D.; Robertson, R. Meldrum
2007-01-01
Cortical spreading depression (CSD) is closely associated with important pathologies including stroke, seizures and migraine. The mechanisms underlying SD in its various forms are still incompletely understood. Here we describe SD-like events in an invertebrate model, the ventilatory central pattern generator (CPG) of locusts. Using K+ -sensitive microelectrodes, we measured extracellular K+ concentration ([K+]o) in the metathoracic neuropile of the CPG while monitoring CPG output electromyographically from muscle 161 in the second abdominal segment to investigate the role of K+ in failure of neural circuit operation induced by various stressors. Failure of ventilation in response to different stressors (hyperthermia, anoxia, ATP depletion, Na+/K+ ATPase impairment, K+ injection) was associated with a disturbance of CNS ion homeostasis that shares the characteristics of CSD and SD-like events in vertebrates. Hyperthermic failure was preconditioned by prior heat shock (3 h, 45°C) and induced-thermotolerance was associated with an increase in the rate of clearance of extracellular K+ that was not linked to changes in ATP levels or total Na+/K+ ATPase activity. Our findings suggest that SD-like events in locusts are adaptive to terminate neural network operation and conserve energy during stress and that they can be preconditioned by experience. We propose that they share mechanisms with CSD in mammals suggesting a common evolutionary origin. PMID:18159249
Cardioprotection Acquired Through Exercise: The Role of Ischemic Preconditioning
Marongiu, Elisabetta; Crisafulli, Antonio
2014-01-01
A great bulk of evidence supports the concept that regular exercise training can reduce the incidence of coronary events and increase survival chances after myocardial infarction. These exercise-induced beneficial effects on the myocardium are reached by means of the reduction of several risk factors relating to cardiovascular disease, such as high cholesterol, hypertension, obesity etc. Furthermore, it has been demonstrated that exercise can reproduce the “ischemic preconditioning” (IP), which refers to the capacity of short periods of ischemia to render the myocardium more resistant to subsequent ischemic insult and to limit infarct size during prolonged ischemia. However, IP is a complex phenomenon which, along with infarct size reduction, can also provide protection against arrhythmia and myocardial stunning due to ischemia-reperfusion. Several clues demonstrate that preconditioning may be directly induced by exercise, thus inducing a protective phenotype at the heart level without the necessity of causing ischemia. Exercise appears to act as a physiological stress that induces beneficial myocardial adaptive responses at cellular level. The purpose of the present paper is to review the latest data on the role played by exercise in triggering myocardial preconditioning. PMID:24720421
Parallelizable approximate solvers for recursions arising in preconditioning
Shapira, Y.
1996-12-31
For the recursions used in the Modified Incomplete LU (MILU) preconditioner, namely, the incomplete decomposition, forward elimination and back substitution processes, a parallelizable approximate solver is presented. The present analysis shows that the solutions of the recursions depend only weakly on their initial conditions and may be interpreted to indicate that the inexact solution is close, in some sense, to the exact one. The method is based on a domain decomposition approach, suitable for parallel implementations with message passing architectures. It requires a fixed number of communication steps per preconditioned iteration, independently of the number of subdomains or the size of the problem. The overlapping subdomains are either cubes (suitable for mesh-connected arrays of processors) or constructed by the data-flow rule of the recursions (suitable for line-connected arrays with possibly SIMD or vector processors). Numerical examples show that, in both cases, the overhead in the number of iterations required for convergence of the preconditioned iteration is small relative to the speed-up gained.
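A serial analogue of the domain-decomposition idea, reduced to a block-Jacobi preconditioner whose subdomain solves are mutually independent (exact local factorizations stand in for the paper's approximate MILU recursions; the 1-D model matrix is an illustrative assumption):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg, splu

n, nblocks = 400, 4
size = n // nblocks
A = sp.diags([-1.0, 2.001, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()
b = np.ones(n)

# Factor each diagonal block once; the block solves inside the
# preconditioner are independent and could run on separate processors.
lus = [splu(A[i*size:(i+1)*size, i*size:(i+1)*size].tocsc())
       for i in range(nblocks)]

def precond(r):
    z = np.empty_like(r)
    for i, lu in enumerate(lus):
        z[i*size:(i+1)*size] = lu.solve(r[i*size:(i+1)*size])
    return z

M = LinearOperator((n, n), matvec=precond)
iters = {"plain": 0, "block-Jacobi": 0}
def counter(key):
    def cb(xk):
        iters[key] += 1
    return cb

cg(A, b, callback=counter("plain"))
cg(A, b, M=M, callback=counter("block-Jacobi"))
print(iters)
```

Because the off-block coupling of this tridiagonal matrix has very low rank, the preconditioned iteration converges in far fewer steps than plain CG.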
Ischemic preconditioning stimulates sodium and proton transport in isolated rat hearts.
Ramasamy, R; Liu, H; Anderson, S; Lundmark, J; Schaefer, S
1995-01-01
One or more brief periods of ischemia, termed preconditioning, dramatically limits infarct size and reduces intracellular acidosis during subsequent ischemia, potentially via enhanced sarcolemmal proton efflux mechanisms. To test the hypothesis that preconditioning increases the functional activity of sodium-dependent proton efflux pathways, isolated rat hearts were subjected to 30 min of global ischemia with or without preconditioning. Intracellular sodium (Nai) was assessed using 23Na magnetic resonance spectroscopy, and the activity of the Na-H exchanger and Na-K-2Cl cotransporter was measured by transiently exposing the hearts to an acid load (NH4Cl washout). Creatine kinase release was reduced by greater than 60% in the preconditioned hearts (P < 0.05) and was associated with improved functional recovery on reperfusion. Preconditioning increased Nai by 6.24 +/- 2.04 U, resulting in a significantly higher level of Nai before ischemia than in the control hearts. Nai increased significantly at the onset of ischemia (8.48 +/- 1.21 vs. 2.57 +/- 0.81 U, preconditioned vs. control hearts; P < 0.01). Preconditioning did not reduce Nai accumulation during ischemia, but the decline in Nai during the first 5 min of reperfusion was significantly greater in the preconditioned than in the control hearts (13.48 +/- 1.73 vs. 2.54 +/- 0.41 U; P < 0.001). Exposure of preconditioned hearts to ethylisopropylamiloride or bumetanide in the last reperfusion period limited the increase in Nai during ischemia and reduced the beneficial effects of preconditioning. After the NH4Cl prepulse, preconditioned hearts acidified significantly more than control hearts and had significantly more rapid recovery of pH (preconditioned, delta pH = 0.35 +/- 0.04 U over 5 min; control, delta pH = 0.15 +/- 0.02 U over 5 min). This rapid pH recovery was not affected by inhibition of the Na-K-2Cl cotransporter but was abolished by inhibition of the Na-H exchanger. These results demonstrate that
Kamon, M.; Phillips, J.R.
1994-12-31
In this paper techniques are presented for preconditioning equations generated by discretizing constrained vector integral equations associated with magnetoquasistatic analysis. Standard preconditioning approaches often fail on these problems. The authors present a specialized preconditioning technique and prove convergence bounds independent of the constraint equations and electromagnetic excitation frequency. Computational results from analyzing several electronic packaging examples are given to demonstrate that the new preconditioning approach can sometimes reduce the number of GMRES iterations by more than an order of magnitude.
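The reported iteration reduction can be reproduced in miniature with GMRES on a small convection-diffusion matrix, using a generic ILU preconditioner as a stand-in for the paper's specialized integral-equation preconditioner:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres, spilu

n = 50                                  # 50 x 50 grid, 2500 unknowns
I = sp.identity(n)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
C = sp.diags([-1.0, 1.0], [-1, 1], shape=(n, n)) * 2.0   # convection term
A = (sp.kron(I, T + C) + sp.kron(T, I)).tocsc()
b = np.ones(n * n)

counts = {}
def make_cb(key):
    counts[key] = 0
    def cb(res_norm):                   # called once per inner iteration
        counts[key] += 1
    return cb

gmres(A, b, callback=make_cb("plain"), callback_type="pr_norm")
ilu = spilu(A, drop_tol=1e-4)
M = LinearOperator(A.shape, matvec=ilu.solve)
gmres(A, b, M=M, callback=make_cb("ILU"), callback_type="pr_norm")
print(counts)
```

The preconditioned count is typically smaller by an order of magnitude or more on this model problem, mirroring the kind of saving the abstract reports.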
Erling, Nilon; de Souza Montero, Edna Frasson; Sannomiya, Paulina; Poli-de-Figueiredo (in memoriam), Luiz Francisco
2013-01-01
OBJECTIVES: This study tests the hypothesis that local or remote ischemic preconditioning may protect the intestinal mucosa against ischemia and reperfusion injuries resulting from temporary supraceliac aortic clamping. METHODS: Twenty-eight Wistar rats were divided into four groups: the sham surgery group, the supraceliac aortic occlusion group, the local ischemic preconditioning prior to supraceliac aortic occlusion group, and the remote ischemic preconditioning prior to supraceliac aortic occlusion group. Tissue samples from the small bowel were used for quantitative morphometric analysis of mucosal injury, and blood samples were collected for laboratory analyses. RESULTS: Supraceliac aortic occlusion decreased intestinal mucosal length by reducing villous height and elevated serum lactic dehydrogenase and lactate levels. Both local and remote ischemic preconditioning mitigated these histopathological and laboratory changes. CONCLUSIONS: Both local and remote ischemic preconditioning protect intestinal mucosa against ischemia and reperfusion injury following supraceliac aortic clamping. PMID:24473514
NASA Astrophysics Data System (ADS)
Kubota, H.; Yoneyama, K.; Nasuno, T.; Hamada, J.
2013-12-01
During the international field experiment 'Cooperative Indian Ocean experiment on intraseasonal variability in the Year 2011 (CINDY2011)', the preconditioning process of the MJO was observed. In this study, we focus on the contribution of maritime continent convection to the preconditioning process of the third MJO. During the preconditioning stage of the MJO, westward propagating disturbances were observed from Sumatera Island to the central Indian Ocean and moistened the atmosphere. Convection over Sumatera Island was activated around December 15th, when a moist air mass arrived from the South China Sea. The origin of the moist air mass was a tropical cyclone that formed in the South China Sea on December 10th. The high moisture associated with the tropical cyclone activated the convection over Sumatera Island, promoted the westward propagating disturbances, and provided a favorable environment for the preconditioning of the MJO. This preconditioning stage of the MJO is simulated with the Nonhydrostatic ICosahedral Atmospheric Model (NICAM) to investigate the moistening process.
Argon Induces Protective Effects in Cardiomyocytes during the Second Window of Preconditioning
Mayer, Britta; Soppert, Josefin; Kraemer, Sandra; Schemmel, Sabrina; Beckers, Christian; Bleilevens, Christian; Rossaint, Rolf; Coburn, Mark; Goetzenich, Andreas; Stoppe, Christian
2016-01-01
Increasing evidence indicates that argon has organoprotective properties. So far, the underlying mechanisms remain poorly understood. Therefore, we investigated the effect of argon preconditioning in cardiomyocytes within the first and second window of preconditioning. Primary isolated cardiomyocytes from neonatal rats were subjected to 50% argon for 1 h, and subsequently exposed to a sublethal dosage of hypoxia (<1% O2) for 5 h either within the first (0–3 h) or second window (24–48 h) of preconditioning. Subsequently, cell viability and proliferation were measured. The argon-induced effects were assessed by evaluation of mRNA and protein expression after preconditioning. Argon preconditioning did not show any cardioprotective effects in the early window of preconditioning, whereas it led to a significant increase of cell viability 24 h after preconditioning compared to untreated cells (p = 0.015), independent of proliferation. Argon preconditioning significantly increased the mRNA expression of heat shock protein (HSP) B1 (HSP27) (p = 0.048), superoxide dismutase 2 (SOD2) (p = 0.001), vascular endothelial growth factor (VEGF) (p < 0.001) and inducible nitric oxide synthase (iNOS) (p = 0.001). No difference was found with respect to activation of pro-survival kinases in the early and late window of preconditioning. The findings provide the first evidence of argon-induced effects on the survival of cardiomyocytes during the second window of preconditioning, which may be mediated through the induction of HSP27, SOD2, VEGF and iNOS. PMID:27447611
Support vector machine classifiers for large data sets.
Gertz, E. M.; Griffin, J. D.
2006-01-31
This report concerns the generation of support vector machine classifiers for solving the pattern recognition problem in machine learning. Several methods are proposed based on interior point methods for convex quadratic programming. Software implementations are developed by adapting the object-oriented package OOQP to the problem structure and by using the software package PETSc to perform time-intensive computations in a distributed setting. Linear systems arising from classification problems with moderately large numbers of features are solved by using two techniques: one a parallel direct solver, the other a Krylov-subspace method incorporating novel preconditioning strategies. Numerical results are provided, and computational experience is discussed.
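The Krylov-subspace solve with preconditioning mentioned above can be illustrated with a minimal sketch: conjugate gradients with a Jacobi (diagonal) preconditioner via SciPy. The matrix, sizes, and preconditioner are illustrative assumptions, not the paper's OOQP/PETSc setup.

```python
# Minimal sketch: preconditioned conjugate gradients (a Krylov-subspace
# method) with a Jacobi preconditioner on a small SPD model problem.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
diag = 2.0 + 0.01 * np.arange(n)                 # SPD tridiagonal model matrix
A = sp.diags([diag, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csr")
b = np.ones(n)

# Jacobi preconditioner: apply diag(A)^{-1} as a LinearOperator.
M = spla.LinearOperator((n, n), matvec=lambda v: v / diag)

x, info = spla.cg(A, b, M=M)
residual = np.linalg.norm(A @ x - b)
print(info, residual)        # info == 0 signals convergence
```

For genuinely large problems the diagonal would be replaced by an incomplete factorization or a domain-decomposition preconditioner; the calling pattern stays the same.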
NLTE water lines in Betelgeuse-like atmospheres
NASA Astrophysics Data System (ADS)
Lambert, J.; Josselin, E.; Ryde, N.; Faure, A.
2013-05-01
The interpretation of water lines in red supergiant stellar atmospheres has been much debated over the past decade. The introduction of the so-called MOLspheres to account for near-infrared "extra" absorption has been controversial. We propose that non-LTE effects should be taken into account before considering any extra-photospheric contribution. After a brief introduction on the radiative transfer treatment and the inadequacy of classical treatments in the case of large-scale systems such as molecules, we present a new code, based on preconditioned Krylov subspace methods. Preliminary results suggest that NLTE effects lead to deeper water bands, as well as extra cooling.
High resolution through-the-wall radar image based on beamspace eigenstructure subspace methods
NASA Astrophysics Data System (ADS)
Yoon, Yeo-Sun; Amin, Moeness G.
2008-04-01
Through-the-wall imaging (TWI) is a challenging problem, even if the wall parameters and characteristics are known to the system operator. Proper target classification and correct imaging interpretation require the application of high resolution techniques using limited array size. In inverse synthetic aperture radar (ISAR), signal subspace methods such as Multiple Signal Classification (MUSIC) are used to obtain high resolution imaging. In this paper, we adopt signal subspace methods and apply them to the 2-D spectrum obtained from the delay-and-sum beamforming image. This is in contrast to ISAR, where raw data, in frequency and angle, is directly used to form the estimate of the covariance matrix and array response vector. Using beams rather than raw data has two main advantages, namely, it improves the signal-to-noise ratio (SNR) and can correctly image typical indoor extended targets, such as tables and cabinets, as well as point targets. The paper presents both simulated and experimental results using synthesized and real data. It compares the performance of beam-space MUSIC and the Capon beamformer. The experimental data is collected at the test facility in the Radar Imaging Laboratory, Villanova University.
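As a rough illustration of the subspace idea behind MUSIC (plain element-space MUSIC on an invented uniform linear array, not the beamspace variant or the through-the-wall setup of the paper):

```python
# Sketch of MUSIC direction finding on a half-wavelength ULA.
# Array geometry, source angles, and SNR are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
M, snapshots = 8, 200                      # sensors, time snapshots
true_doas = np.deg2rad([-20.0, 25.0])

def steering(theta, m=M):
    # Half-wavelength ULA steering vector.
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

A = np.stack([steering(t) for t in true_doas], axis=1)          # M x 2
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
N = 0.05 * (rng.standard_normal((M, snapshots))
            + 1j * rng.standard_normal((M, snapshots)))
X = A @ S + N

R = X @ X.conj().T / snapshots             # sample covariance
w, V = np.linalg.eigh(R)                   # eigenvalues ascending
En = V[:, :M - 2]                          # noise subspace (2 sources)

grid = np.deg2rad(np.linspace(-90, 90, 721))
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t))**2 for t in grid])
peaks = [i for i in range(1, len(P) - 1) if P[i - 1] < P[i] > P[i + 1]]
est = sorted(np.rad2deg(grid[i]) for i in sorted(peaks, key=lambda i: P[i])[-2:])
print(est)    # two peaks near -20 and 25 degrees
```

The beamspace version in the paper forms the covariance from beamformed outputs instead of raw sensor data, but the noise-subspace projection step is the same.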
Adaptive subspace detection of extended target in white Gaussian noise using sinc basis
NASA Astrophysics Data System (ADS)
Zhang, Xiao-Wei; Li, Ming; Qu, Jian-She; Yang, Hui
2016-01-01
For the high resolution radar (HRR), the problem of detecting the extended target is considered in this paper. Based on a single observation, a new two-step detection based on sparse representation (TSDSR) method is proposed to detect the extended target in the presence of Gaussian noise with unknown covariance. In the new method, the Sinc dictionary is introduced to sparsely represent the high resolution range profile (HRRP). Meanwhile, adaptive subspace pursuit (ASP) is presented to recover the HRRP embedded in the Gaussian noise and estimate the noise covariance matrix. Based on the Sinc dictionary and the estimated noise covariance matrix, one step subspace detector (OSSD) for the first-order Gaussian (FOG) model without secondary data is adopted to realise the extended target detection. Finally, the proposed TSDSR method is applied to raw HRR data. Experimental results demonstrate that HRRPs of different targets can be sparsely represented very well with the Sinc dictionary. Moreover, the new method can estimate the noise power with tiny errors and have a good detection performance.
Multiple Dipole Sources Localization from the Scalp EEG Using a High-resolution Subspace Approach.
Ding, Lei; He, Bin
2005-01-01
We have developed a new algorithm, FINE, to enhance the spatial resolution and localization accuracy for closely-spaced sources, in the framework of subspace source localization. Computer simulations were conducted in the present study to evaluate the performance of FINE, as compared with classic subspace source localization algorithms, i.e. MUSIC and RAP-MUSIC, in a realistic geometry head model by means of the boundary element method (BEM). The results show that FINE could distinguish superficial simulated sources with distances as low as 8.5 mm, and deep simulated sources with distances as low as 16.3 mm. Our results also show that the accuracy of source orientation estimates from FINE is better than that of MUSIC and RAP-MUSIC for closely-spaced sources. Motor potentials, obtained during finger movements in a human subject, were analyzed using FINE. The detailed neural activity distribution within the contralateral premotor areas and supplemental motor areas (SMA) is revealed by FINE as compared with MUSIC. The present study suggests that FINE has excellent spatial resolution in imaging neural sources. PMID:17282374
Multicomponent dynamics of coupled quantum subspaces and field-induced molecular ionizations.
Nguyen-Dang, Thanh-Tung; Viau-Trudel, Jérémy
2013-12-28
To describe successive ionization steps of a many-electron atom or molecule driven by an ultrashort, intense laser pulse, we introduce a hierarchy of successive two-subspace Feshbach partitions of the N-electron Hilbert space, and solve the partitioned time-dependent Schrödinger equation by a short-time unitary algorithm. The partitioning scheme allows one to use different level of theory to treat the many-electron dynamics in different subspaces. We illustrate the procedure on a simple two-active-electron model molecular system subjected to a few-cycle extreme Ultra-Violet (XUV) pulse to study channel-resolved photoelectron spectra as a function of the pulse's central frequency and duration. We observe how the momentum and kinetic-energy distributions of photoelectrons accompanying the formation of the molecular cation in a given electronic state (channel) change as the XUV few-cycle pulse's width is varied, from a form characteristic of an impulsive ionization regime, corresponding to the limit of a delta-function pulse, to a form characteristic of multiphoton above-threshold ionization, often associated with continuous-wave infinitely long pulse. PMID:24387352
3D deformable image matching: a hierarchical approach over nested subspaces
NASA Astrophysics Data System (ADS)
Musse, Olivier; Heitz, Fabrice; Armspach, Jean-Paul
2000-06-01
This paper presents a fast hierarchical method to perform dense deformable inter-subject matching of 3D MR images of the brain. To recover the complex morphological variations in neuroanatomy, a hierarchy of 3D deformation fields is estimated by minimizing a global energy function over a sequence of nested subspaces. The nested subspaces, generated from a single scaling function, consist of deformation fields constrained at different scales. The highly nonlinear energy function, describing the interactions between the target and the source images, is minimized using a coarse-to-fine continuation strategy over this hierarchy. The resulting deformable matching method shows low sensitivity to local minima and is able to track large nonlinear deformations, with moderate computational load. The performance of the approach is assessed both on simulated 3D transformations and on a real database of 3D brain MR images from different individuals. The method has proven efficient in putting the principal anatomical structures of the brain into correspondence. An application to atlas-based MRI segmentation, by transporting a labeled segmentation map onto patient data, is also presented.
A tensor-based subspace approach for bistatic MIMO radar in spatial colored noise.
Wang, Xianpeng; Wang, Wei; Li, Xin; Wang, Junxiang
2014-01-01
In this paper, a new tensor-based subspace approach is proposed to estimate the direction of departure (DOD) and the direction of arrival (DOA) for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise. Firstly, the received signals can be packed into a third-order measurement tensor by exploiting the inherent structure of the matched filter. Then, the measurement tensor can be divided into two sub-tensors, and a cross-covariance tensor is formulated to eliminate the spatial colored noise. Finally, the signal subspace is constructed by utilizing the higher-order singular value decomposition (HOSVD) of the cross-covariance tensor, and the DOD and DOA can be obtained through the estimation of signal parameters via rotational invariance technique (ESPRIT) algorithm, which are paired automatically. Since the multidimensional inherent structure and the cross-covariance tensor technique are used, the proposed method provides better angle estimation performance than Chen's method, the ESPRIT algorithm and the multi-SVD method. Simulation results confirm the effectiveness and the advantage of the proposed method. PMID:24573313
Wavelet subspace decomposition of thermal infrared images for defect detection in artworks
NASA Astrophysics Data System (ADS)
Ahmad, M. Z.; Khan, A. A.; Mezghani, S.; Perrin, E.; Mouhoubi, K.; Bodnar, J. L.; Vrabie, V.
2016-07-01
Health of ancient artworks must be routinely monitored for their adequate preservation. Faults in these artworks may develop over time and must be identified as precisely as possible. The classical acoustic testing techniques, being invasive, risk causing permanent damage during periodic inspections. Infrared thermometry offers a promising solution to map faults in artworks. It involves heating the artwork and recording its thermal response using an infrared camera. A novel strategy based on the pseudo-random binary excitation principle is used in this work to suppress the risks associated with prolonged heating. The objective of this work is to develop an automatic scheme for detecting faults in the captured images. An efficient scheme based on wavelet-based subspace decomposition is developed which favors identification of the otherwise invisible, weaker faults. Two major problems addressed in this work are the selection of the optimal wavelet basis and the subspace level selection. A novel criterion based on regional mutual information is proposed for the latter. The approach is successfully tested on a laboratory-based sample as well as real artworks. A new contrast enhancement metric is developed to demonstrate the quantitative efficiency of the algorithm. The algorithm is successfully deployed for both laboratory-based and real artworks.
Quantum probabilities as Dempster-Shafer probabilities in the lattice of subspaces
NASA Astrophysics Data System (ADS)
Vourdas, A.
2014-08-01
The orthocomplemented modular lattice of subspaces L[H(d)], of a quantum system with d-dimensional Hilbert space H(d), is considered. A generalized additivity relation which holds for Kolmogorov probabilities is violated by quantum probabilities in the full lattice L[H(d)] (it is only valid within the Boolean subalgebras of L[H(d)]). This suggests the use of more general (than Kolmogorov) probability theories, and here the Dempster-Shafer probability theory is adopted. An operator D(H1, H2), which quantifies deviations from Kolmogorov probability theory, is introduced, and it is shown to be intimately related to the commutator of the projectors P(H1), P(H2) onto the subspaces H1, H2. As an application, it is shown that the proof of the inequalities of Clauser, Horne, Shimony, and Holt for a system of two spin 1/2 particles is valid for Kolmogorov probabilities, but it is not valid for Dempster-Shafer probabilities. The violation of these inequalities in experiments supports the interpretation of quantum probabilities as Dempster-Shafer probabilities.
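The failure of Kolmogorov additivity for non-commuting subspaces can be illustrated numerically. The deviation computed below is a simple stand-in chosen for the example, not necessarily the paper's operator D(H1, H2); the state and subspaces are invented.

```python
# For non-commuting subspace projectors, the Kolmogorov additivity
# p(H1 v H2) + p(H1 ^ H2) = p(H1) + p(H2) fails for quantum
# probabilities p(H) = Tr(rho P_H).
import numpy as np

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
P1 = np.outer(e1, e1)                      # projector onto H1 = span(e1)
v = (e1 + e2) / np.sqrt(2.0)
P2 = np.outer(v, v)                        # projector onto H2 = span(v)

comm = P1 @ P2 - P2 @ P1                   # non-zero commutator
rho = np.outer(e1, e1)                     # pure state along e1

Pjoin = np.diag([1.0, 1.0, 0.0])           # H1 v H2 = span(e1, e2)
Pmeet = np.zeros((3, 3))                   # H1 ^ H2 = {0}

p = lambda P: float(np.trace(rho @ P))
deviation = p(Pjoin) + p(Pmeet) - p(P1) - p(P2)
print(np.linalg.norm(comm), deviation)     # ~0.7071 and -0.5: additivity fails
```

For commuting projectors the commutator vanishes and the deviation is exactly zero, matching the statement that the relation holds within Boolean subalgebras.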
A chi-squared-transformed subspace of LBP histogram for visual recognition.
Ren, Jianfeng; Jiang, Xudong; Yuan, Junsong
2015-06-01
Local binary pattern (LBP) and its variants have been widely used in many recognition tasks. Subspace approaches are often applied to the LBP feature in order to remove unreliable dimensions, or to derive a compact feature representation. It is well-known that subspace approaches utilizing up to the second-order statistics are optimal only when the underlying distribution is Gaussian. However, due to its nonnegative and simplex constraints, the LBP feature deviates significantly from Gaussian distribution. To alleviate this problem, we propose a chi-squared transformation (CST) to transfer the LBP feature to a feature that fits better to Gaussian distribution. The proposed CST leads to the formulation of a two-class classification problem. Due to its asymmetric nature, we apply asymmetric principal component analysis (APCA) to better remove the unreliable dimensions in the CST feature space. The proposed CST-APCA is evaluated extensively on spatial LBP for face recognition, protein cellular classification, and spatial-temporal LBP for dynamic texture recognition. All experiments show that the proposed feature transformation significantly enhances the recognition accuracy. PMID:25769153
Fringe filtering technique based on local signal reconstruction using noise subspace inflation
NASA Astrophysics Data System (ADS)
Kulkarni, Rishikesh; Rastogi, Pramod
2016-03-01
A noise filtering technique is proposed to filter the fringe pattern recorded in the optical measurement set-up. A single fringe pattern carrying the information on the measurand is treated as a data matrix which can be either complex or real valued. In the first approach, noise filtering is performed pixel-wise in a windowed data segment generated around each pixel. The singular value decomposition of an enhanced form of this data segment is performed to extract the signal component from the noisy background. This enhancement of the matrix has the effect of noise subspace inflation, which accommodates the maximum amount of noise. In another, computationally efficient approach, the data matrix is divided into a number of small blocks and filtering is performed block-wise based on the same noise subspace inflation method. The proposed method has the important ability to identify spatially varying fringe density and regions of phase discontinuities. The performance of the proposed method is validated with numerical and experimental results.
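The SVD-based separation of signal and noise subspaces underlying such filtering can be sketched as a plain low-rank truncation on synthetic fringes; the paper's noise subspace inflation refinement is not reproduced here, and the fringe model is an assumption for the example.

```python
# Minimal sketch: block-wise SVD filtering of a noisy fringe-like image.
# Keeping only the dominant singular component suppresses the noise
# subspace; the clean pattern below is rank 1 by construction.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 4 * np.pi, 64)
clean = np.cos(np.outer(np.ones(64), x))        # horizontal fringes: rank 1
noisy = clean + 0.5 * rng.standard_normal(clean.shape)

def svd_filter_block(block, rank=1):
    # Truncated SVD: project the block onto its dominant singular vectors.
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

filtered = svd_filter_block(noisy)
err_noisy = np.linalg.norm(noisy - clean)
err_filt = np.linalg.norm(filtered - clean)
print(err_filt, err_noisy)      # truncation error is well below the noise level
```

Real fringe patterns are only locally low-rank, which is why the paper works on windowed segments or small blocks rather than the whole image.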
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-05-20
The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^3 to 10^5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
Incorporating patch subspace model in Mumford-Shah type active contours.
Wang, Junyan; Chan, Kap Luk
2013-11-01
In this paper, we propose a unified energy minimization model for segmentation of non-smooth image structures, e.g., textures, based on Mumford-Shah functional and linear patch model. We consider that image patches of a non-smooth image structure can be modeled by a patch subspace, and image patches of different non-smooth image structures belong to different patch subspaces, which leads to a computational framework for segmentation of non-smooth image structures. Motivated by the Mumford-Shah model, we show that this segmentation framework is equivalent to minimizing a piecewise linear patch reconstruction energy. We also prove that the error of segmentation is bounded by the error of the linear patch reconstruction, meaning that improving the linear patch reconstruction for each region leads to reduction of the segmentation error. In addition, we derive an algorithm for the linear patch reconstruction with proven global optimality and linear rate of convergence. The segmentation in our method is achieved by minimizing a single energy functional without requiring predefined features. Hence, compared with the previous methods that require predefined texture features, our method can be more suitable for handling general textures in unsupervised segmentation. As a by-product, our method also produces a dictionary of optimized orthonormal descriptors for each segmented region. We mainly evaluate our method on the Brodatz textures. The experiments validate our theoretical claims and show the clear superior performance of our methods over other related methods for segmentation of the textures. PMID:23893721
Renaut, R.; He, Q.
1994-12-31
A new parallel iterative algorithm for unconstrained optimization by multisplitting is proposed. In this algorithm the original problem is split into a set of small optimization subproblems which are solved using well-known sequential algorithms. These algorithms are iterative in nature, e.g. the DFP variable metric method. Here the authors use sequential algorithms based on an inexact subspace search, which is an extension of the usual idea of an inexact line search. Essentially, the idea of the inexact line search for nonlinear minimization is that at each iteration one only finds an approximate minimum in the line search direction. Hence by inexact subspace search they mean that, instead of finding the minimum of the subproblem at each iteration, they do an incomplete downhill search to give an approximate minimum. Some convergence and numerical results for this algorithm will be presented. Further, the original theory will be generalized to the situation with a singular Hessian. Applications to nonlinear least squares problems will be presented. Experimental results will be presented for implementations on an Intel iPSC/860 Hypercube with 64 nodes as well as on the Intel Paragon.
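The inexact subspace search idea can be sketched on a toy quadratic: each outer iteration only approximately minimizes over one block (subspace) of variables, doing a few downhill steps instead of an exact inner solve. Objective, block split, step size, and iteration counts are arbitrary choices for the example, not the authors' algorithm.

```python
# Sketch: inexact block (subspace) search for unconstrained minimization.
# Each inner loop is a deliberately incomplete downhill search.
import numpy as np

coeff = np.arange(1, 9, dtype=float)       # f(x) = 0.5 * sum(c_i * x_i^2)

def f(x):
    return 0.5 * x @ (coeff * x)

def grad(x):
    return coeff * x

x = np.ones(8)
blocks = [slice(0, 4), slice(4, 8)]        # two variable subspaces
for _ in range(200):                       # outer multisplitting sweeps
    for blk in blocks:
        for _ in range(3):                 # inexact: only 3 gradient steps
            x[blk] -= 0.1 * grad(x)[blk]
print(f(x))    # near the minimum at 0
```

In the parallel setting the subproblems for different blocks would run on different processors, with the blocks exchanged between sweeps.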
Analyzing the Subspace Structure of Related Images: Concurrent Segmentation of Image Sets*
Mukherjee, Lopamudra; Singh, Vikas; Xu, Jia; Collins, Maxwell D.
2013-01-01
We develop new algorithms to analyze and exploit the joint subspace structure of a set of related images to facilitate the process of concurrent segmentation of a large set of images. Most existing approaches for this problem are either limited to extracting a single similar object across the given image set or do not scale well to a large number of images containing multiple objects varying at different scales. One of the goals of this paper is to show that various desirable properties of such an algorithm (ability to handle multiple images with multiple objects showing arbitrary scale variations) can be cast elegantly using simple constructs from linear algebra: this significantly extends the operating range of such methods. While intuitive, this formulation leads to a hard optimization problem where one must perform the image segmentation task together with appropriate constraints which enforce desired algebraic regularity (e.g., common subspace structure). We propose efficient iterative algorithms (with small computational requirements) whose key steps reduce to objective functions solvable by max-flow and/or nearly closed form identities. We study the qualitative, theoretical, and empirical properties of the method, and present results on benchmark datasets. PMID:25267943
Moving Sound Source Localization Based on Sequential Subspace Estimation in Actual Room Environments
NASA Astrophysics Data System (ADS)
Tsuji, Daisuke; Suyama, Kenji
This paper presents a novel method for moving sound source localization and its performance evaluation in actual room environments. The method is based on MUSIC (MUltiple SIgnal Classification), one of the highest-resolution localization methods. When using MUSIC, a computation of the eigenvectors of the correlation matrix is required for the estimation, which often incurs a high computational cost. Especially in the situation of a moving source, this becomes a crucial drawback because the estimation must be conducted at every observation time. Moreover, since the correlation matrix varies in its characteristics due to spatial-temporal non-stationarity, the matrix has to be estimated using only a few observed samples, which degrades the estimation accuracy. In this paper, PAST (Projection Approximation Subspace Tracking) is applied to sequentially estimate the eigenvectors spanning the subspace. In PAST, the eigendecomposition is not required, and therefore it is possible to reduce the computational costs. Several experimental results in actual room environments are shown to demonstrate the superior performance of the proposed method.
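A minimal real-valued sketch of the PAST recursion follows, in the form published by Yang: the subspace estimate is updated per snapshot by a recursive least-squares rule, with no eigendecomposition. Dimensions, forgetting factor, and the data model are assumptions for the example, not the paper's acoustic setup.

```python
# Sketch of PAST (Projection Approximation Subspace Tracking): track the
# dominant r-dimensional subspace of streaming data without any
# eigendecomposition.
import numpy as np

rng = np.random.default_rng(2)
n, r, beta = 6, 2, 0.98                      # dim, subspace rank, forgetting
basis, _ = np.linalg.qr(rng.standard_normal((n, r)))   # true signal subspace

W = np.linalg.qr(rng.standard_normal((n, r)))[0]       # initial estimate
P = np.eye(r)                                          # inverse correlation of y

for _ in range(2000):
    x = basis @ rng.standard_normal(r) + 0.01 * rng.standard_normal(n)
    y = W.T @ x                              # project onto current estimate
    h = P @ y
    g = h / (beta + y @ h)                   # RLS gain
    P = (P - np.outer(g, h)) / beta
    W = W + np.outer(x - W @ y, g)           # rank-one subspace update

# Alignment: projector onto span(W) should match the true projector.
Q = np.linalg.qr(W)[0]
gap = np.linalg.norm(Q @ Q.T - basis @ basis.T)
print(gap)   # small: the tracked subspace matches the true one
```

The per-snapshot cost is O(nr), versus O(n^3) for a fresh eigendecomposition of the correlation matrix, which is the saving the paper exploits for moving sources.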
Calcium preconditioning triggers neuroprotection in retinal ganglion cells.
Brandt, S K; Weatherly, M E; Ware, L; Linn, D M; Linn, C L
2011-01-13
In the mammalian retina, excitotoxicity has been shown to be involved in apoptotic retinal ganglion cell (RGC) death and is associated with certain retinal disease states including glaucoma, diabetic retinopathy and retinal ischemia. Previous studies from this lab [Wehrwein E, Thompson SA, Coulibaly SF, Linn DM, Linn CL (2004) Invest Ophthalmol Vis Sci 45:1531-1543] have demonstrated that acetylcholine (ACh) and nicotine protect against glutamate-induced excitotoxicity in isolated adult pig RGCs through nicotinic acetylcholine receptors (nAChRs). Activation of nAChRs in these RGCs triggers cell survival signaling pathways and inhibits apoptotic enzymes [Asomugha CO, Linn DM, Linn CL (2010) J Neurochem 112:214-226]. However, the link between binding of nAChRs and activation of neuroprotective pathways is unknown. In this study, we examine the hypothesis that calcium permeation through nAChR channels is required for ACh-induced neuroprotection against glutamate-induced excitotoxicity in isolated pig RGCs. RGCs were isolated from other retinal tissue using a two-step panning technique and cultured for 3 days under different conditions. In some studies, calcium imaging experiments were performed using the fluorescent calcium indicator, fluo-4, and demonstrated that calcium permeates the nAChR channels located on pig RGCs. In other studies, the extracellular calcium concentration was altered to determine the effect on nicotine-induced neuroprotection. Results support the hypothesis that calcium is required for nicotine-induced neuroprotection in isolated pig RGCs. Lastly, studies were performed to analyze the effects of preconditioning on glutamate-induced excitotoxicity and neuroprotection. In these studies, a preconditioning dose of calcium was introduced to cells using a variety of mechanisms before a large glutamate insult was applied to cells. Results from these studies support the hypothesis that preconditioning cells with a relatively low level of calcium before
Ischemic Preconditioning and Placebo Intervention Improves Resistance Exercise Performance.
Marocolo, Moacir; Willardson, Jeffrey M; Marocolo, Isabela C; Ribeiro da Mota, Gustavo; Simão, Roberto; Maior, Alex S
2016-05-01
Marocolo, M, Willardson, JM, Marocolo, IC, da Mota, GR, Simão, R, and Maior, AS. Ischemic preconditioning and PLACEBO intervention improves resistance exercise performance. J Strength Cond Res 30(5): 1462-1469, 2016-This study evaluated the effect of ischemic preconditioning (IPC) on resistance exercise performance in the lower limbs. Thirteen men participated in a randomized crossover design that involved 3 separate sessions (IPC, PLACEBO, and control). A 12-repetition maximum (12RM) load for the leg extension exercise was assessed through test and retest sessions before the first experimental session. The IPC session consisted of 4 cycles of 5 minutes of occlusion at 220 mm Hg of pressure alternated with 5 minutes of reperfusion at 0 mm Hg for a total of 40 minutes. The PLACEBO session consisted of 4 cycles of 5 minutes of cuff administration at 20 mm Hg of pressure alternated with 5 minutes of pseudo-reperfusion at 0 mm Hg for a total of 40 minutes. The occlusion and reperfusion phases were conducted alternately between the thighs, with subjects remaining seated. No ischemic pressure was applied during the control (CON) session and subjects sat passively for 40 minutes. Eight minutes after IPC, PLACEBO, or CON, subjects performed 3 repetition maximum sets of the leg extension (2-minute rest between sets) with the predetermined 12RM load. Four minutes after the third set for each condition, blood lactate was assessed. The results showed that for the first set, the number of repetitions significantly increased for both the IPC (13.08 ± 2.11; p = 0.0036) and PLACEBO (13.15 ± 0.88; p = 0.0016) conditions, but not for the CON (11.88 ± 1.07; p > 0.99) condition. In addition, the IPC and PLACEBO conditions resulted in significantly greater repetitions vs. the CON condition on the first set (p = 0.015; p = 0.007) and second set (p = 0.011; p = 0.019), but not on the third set (p = 0.68; p > 0.99). No difference (p = 0.465) was found in the fatigue index and lactate
NASA Astrophysics Data System (ADS)
Wang, De-jun; Li, Feng-hua
2010-09-01
It has been proved theoretically that two incompletely correlated sources can be identified by linear signal processing methods. However, it is difficult in practice. A new method to separate two wideband sources with one vector sensor is presented in this paper. The method is a combination of subspace rotation and spatial matched filtering. Simulations show that this method is insensitive to the initial azimuth error, independent of the signal spectrum, and better than wideband focusing subspace methods at low SNR. A sea trial was performed, and the experimental results show that the proposed method is effective at separating and tracking two wideband sources in the underwater environment.
Parallel preconditioning for the solution of nonsymmetric banded linear systems
Amodio, P.; Mazzia, F.
1994-12-31
Many computational techniques require the solution of banded linear systems. Common examples derive from the solution of partial differential equations and of boundary value problems. In particular the authors are interested in the parallel solution of block Hessenberg linear systems Gx = f, arising from the solution of ordinary differential equations by means of boundary value methods (BVMs), although the considered preconditioning may be applied to any block banded linear system. BVMs have been extensively investigated in the last few years and their stability properties give promising results. A new class of BVMs called Reverse Adams, which are BV-A-stable for orders up to 6, and BV-A₀-stable for orders up to 9, has been studied.
Preconditioned Mixed Spectral Element Methods for Elasticity and Stokes Problems
NASA Technical Reports Server (NTRS)
Pavarino, Luca F.
1996-01-01
Preconditioned iterative methods for the indefinite systems obtained by discretizing the linear elasticity and Stokes problems with mixed spectral elements in three dimensions are introduced and analyzed. The resulting stiffness matrices have the structure of saddle point problems with a penalty term, which is associated with the Poisson ratio for elasticity problems or with stabilization techniques for Stokes problems. The main results of this paper show that the convergence rate of the resulting algorithms is independent of the penalty parameter, the number of spectral elements Nu and mildly dependent on the spectral degree eta via the inf-sup constant. The preconditioners proposed for the whole indefinite system are block-diagonal and block-triangular. Numerical experiments presented in the final section show that these algorithms are a practical and efficient strategy for the iterative solution of the indefinite problems arising from mixed spectral element discretizations of elliptic systems.
Institutionalized ignorance as a precondition for rational risk expertise.
Merkelsen, Henrik
2011-07-01
The present case study seeks to explain the conditions for experts' rational risk perception by analyzing the institutional contexts that constitute a field of food safety expertise in Denmark. The study highlights the role of risk reporting and how contextual factors affect risk reporting from the lowest organizational level, where concrete risks occur, to the highest organizational level, where the body of professional risk expertise is situated. The article emphasizes the role of knowledge, responsibility, loyalty, and trust as risk-attenuation factors and concludes by suggesting that the precondition for the expert's rationality may be a lack of risk-specific knowledge due to poor risk reporting, rather than a superior level of risk knowledge. PMID:21284683
A frequency dependent preconditioned wavelet method for atmospheric tomography
NASA Astrophysics Data System (ADS)
Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny
2013-12-01
Atmospheric tomography, i.e. the reconstruction of the turbulence in the atmosphere, is a main task for the adaptive optics systems of the next generation telescopes. For extremely large telescopes, such as the European Extremely Large Telescope, this problem becomes overly complex and an efficient algorithm is needed to reduce numerical costs. Recently, a conjugate gradient method based on wavelet parametrization of turbulence layers was introduced [5]. An iterative algorithm can only be numerically efficient when the number of iterations required for a sufficient reconstruction is low. A way to achieve this is to design an efficient preconditioner. In this paper we propose a new frequency-dependent preconditioner for the wavelet method. In the context of a multi conjugate adaptive optics (MCAO) system simulated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory we demonstrate robustness and speed of the preconditioned algorithm. We show that three iterations are sufficient for a good reconstruction.
NASA Astrophysics Data System (ADS)
Warner, Dennis B.
1984-02-01
Recognition of the socioeconomic preconditions for successful rural water-supply and sanitation projects in developing countries is the key to identifying a new project. Preconditions are the social, economic and technical characteristics defining the project environment. There are two basic types of preconditions: those existing at the time of the initial investigation and those induced by subsequent project activities. Successful project identification is dependent upon an accurate recognition of existing constraints and a carefully tailored package of complementary investments intended to overcome the constraints. This paper discusses the socioeconomic aspects of preconditions in the context of a five-step procedure for project identification. The procedure includes: (1) problem identification; (2) determination of socioeconomic status; (3) technology selection; (4) utilization of support conditions; and (5) benefit estimation. Although the establishment of specific preconditions should be based upon the types of projects likely to be implemented, the paper outlines a number of general relationships regarding favourable preconditions in water and sanitation planning. These relationships are used within the above five-step procedure to develop a set of general guidelines for the application of preconditions in the identification of rural water-supply and sanitation projects.
The Effect of Hypoxic Preconditioning on Induced Schwann Cells under Hypoxic Conditions
Chen, Ou; Wu, Miaomiao; Jiang, Liangfu
2015-01-01
Objective Our objective was to explore the protective effects of hypoxic preconditioning on induced Schwann cells exposed to an environment with low concentrations of oxygen. It has been observed that hypoxic preconditioning of induced Schwann cells can promote axonal regeneration under low oxygen conditions. Method Rat bone marrow mesenchymal stem cells (MSCs) were differentiated into Schwann cells and divided into a normal oxygen control group, a hypoxia-preconditioning group and a hypoxia group. The ultrastructure of each of these groups of cells was observed by electron microscopy. In addition, flow cytometry was used to measure changes in mitochondrial membrane potential. Annexin V-FITC/PI staining was used to detect apoptosis, and Western blots were used to detect the expression of Bcl-2/Bax. Fluorescence microscopic observations of axonal growth in NG-108 cells under hypoxic conditions were also performed. Results The hypoxia-preconditioning group maintained mitochondrial membrane and crista integrity, and these cells exhibited less edema than the hypoxia group. In addition, the cells in the hypoxia-preconditioning group were found to be in early stages of apoptosis, whereas cells from the hypoxia group were in the later stages of apoptosis. The hypoxia-preconditioning group also had higher levels of Bcl-2/Bax expression and longer NG-108 cell axons than were observed in the hypoxia group. Conclusion Hypoxic preconditioning can improve the physiological state of Schwann cells in a severely hypoxic environment and improve their ability to promote neurite outgrowth. PMID:26509259
Perfusion delay causes unintentional ischemic preconditioning in isolated heart preparation.
Minhaz, U; Koide, S; Shohtsu, A; Fujishima, M; Nakazawa, H
1995-01-01
This study sought to show that unintentional preconditioning can be induced in the isolated perfused heart during the preparation procedure. The following four groups were compared: hearts were placed in ice cold saline and cooled for 15 s and then mounted on the Langendorff apparatus (n = 5; cool immediate group); hearts were cooled for 60 s and mounted (n = 5; cool delay group); hearts were mounted directly on the apparatus within 15 s after the isolation without cooling (n = 5; noncool immediate group); hearts were mounted without cooling, but the mounting was delayed for 60 s after the isolation (n = 5; noncool delay group). All hearts were paced at a fixed rate of 300 bpm, and an occlusion of the left coronary artery (LCA) for 60 min was performed, which was followed by reperfusion for another 60 min. Coronary blood flow (CBF), left ventricular developed pressure (LVDP), and creatine phosphokinase (CPK) release did not change among the four groups during ischemia. At the end of reperfusion the LVDP values were 70 ± 1%, 66 ± 2%, 62 ± 3%, and 73 ± 2% of preischemic values in the cool immediate, cool delay, noncool immediate, and noncool delay groups, respectively. CPK values were 116 ± 4, 121 ± 7, 138 ± 6, and 29 ± 1 (×10³ U/g myocardium), and percentage necrosis/risk areas were 24 ± 1.0%, 21 ± 1.7%, 38 ± 2.6%, and 13 ± 0.5% in the cool immediate, cool delay, noncool immediate, and noncool delay groups, respectively. The noncool delay group demonstrated the highest LVDP, the least CPK release, and the smallest area of necrosis. These results indicate that an unintentional preconditioning effect can be induced when the cooling procedure is not applied and perfusion is delayed. PMID:8585864
Stetler, R. Anne; Leak, Rehana K.; Gan, Yu; Li, Peiying; Hu, Xiaoming; Jing, Zheng; Chen, Jun; Zigmond, Michael J.; Gao, Yanqin
2014-01-01
Preconditioning is a phenomenon in which brief episodes of a sublethal insult induce robust protection against subsequent lethal injuries. Preconditioning has been observed in multiple organisms and can occur in the brain as well as other tissues. Extensive animal studies suggest that the brain can be preconditioned to resist acute injuries, such as ischemic stroke, neonatal hypoxia/ischemia, trauma, and agents that are used in models of neurodegenerative diseases, such as Parkinson's disease and Alzheimer's disease. Effective preconditioning stimuli are numerous and diverse, ranging from transient ischemia, hypoxia, hyperbaric oxygen, hypothermia and hyperthermia, to exposure to neurotoxins and pharmacological agents. The phenomenon of "cross-tolerance," in which a sublethal stress protects against a different type of injury, suggests that different preconditioning stimuli may confer protection against a wide range of injuries. Research conducted over the past few decades indicates that brain preconditioning is complex, involving multiple effectors such as metabolic inhibition, activation of extra- and intracellular defense mechanisms, a shift in the neuronal excitatory/inhibitory balance, and reduction in inflammatory sequelae. An improved understanding of brain preconditioning should help us identify innovative therapeutic strategies that prevent or at least reduce neuronal damage in susceptible patients. In this review, we focus on the experimental evidence of preconditioning in the brain and systematically survey the models used to develop paradigms for neuroprotection, and then discuss the clinical potential of brain preconditioning. In the subsequent component of this two-part series, we will discuss the cellular and molecular events that are likely to underlie these phenomena. PMID:24389580
Eigenmode Analysis of Boundary Conditions for One-Dimensional Preconditioned Euler Equations
NASA Technical Reports Server (NTRS)
Darmofal, David L.
1998-01-01
An analysis of the effect of local preconditioning on boundary conditions for the subsonic, one-dimensional Euler equations is presented. Decay rates for the eigenmodes of the initial boundary value problem are determined for different boundary conditions. Riemann invariant boundary conditions based on the unpreconditioned Euler equations are shown to be reflective with preconditioning, and, at low Mach numbers, disturbances do not decay. Other boundary conditions are investigated which are non-reflective with preconditioning and numerical results are presented confirming the analysis.
Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing
NASA Astrophysics Data System (ADS)
Rowe, C. A.; Stead, R. J.; Begnaud, M. L.
2013-12-01
Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beam-forming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis, so the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals; instead, we examine the signal subspace dimensionality in much the same way that the method addresses node traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time-series. We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis), but here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for
EEG Subspace Analysis and Classification Using Principal Angles for Brain-Computer Interfaces
NASA Astrophysics Data System (ADS)
Ashari, Rehab Bahaaddin
Brain-Computer Interfaces (BCIs) help paralyzed people who have lost some or all of their ability to communicate and control the outside environment from loss of voluntary muscle control. Most BCIs are based on the classification of multichannel electroencephalography (EEG) signals recorded from users as they respond to external stimuli or perform various mental activities. The classification process is fraught with difficulties caused by electrical noise, signal artifacts, and nonstationarity. One approach to reducing the effects of similar difficulties in other domains is the use of principal angles between subspaces, which has been applied mostly to video sequences. This dissertation studies and examines different ideas using principal angle and subspace concepts. It introduces a novel mathematical approach for comparing sets of EEG signals for use in new BCI technology. The presented results show that principal angles are a useful approach to the classification of EEG signals that are recorded during a BCI typing application. In this application, the appearance of a subject's desired letter is detected by identifying a P300 wave within a one-second window of EEG following the flash of a letter. Smoothing the signals before using them is the only preprocessing step that was implemented in this study. The smoothing process, based on minimizing the second derivative in time, is implemented to increase the classification accuracy instead of using a bandpass filter that relies on assumptions about the frequency content of EEG. This study examines four different ways of removing outliers that are based on the principal angles and shows that the outlier removal methods did not help in the presented situations. One of the concepts that this dissertation focused on is the effect of the number of trials on the classification accuracies. Good classification results were achieved using a small number of trials, starting from as few as two trials.
NASA Astrophysics Data System (ADS)
Chambers, Derrick James Allen
An approach for subspace detection and magnitude estimation of small seismic events is proposed. The process is used to identify mining related seismicity from a surface coal mine and an underground coal mining district, both located in the Western U.S. Using a blasting log and a locally derived seismic catalog as ground truth, the detector performance is assessed in terms of verified detections, false positives, and failed detections. Over 95% of the surface coal mine blasts and about 33% of the events from the underground mining district are correctly identified. The number of potential false positives are kept relatively low by requiring detections to simultaneously occur on two stations. Many of the potential false detections for the underground coal district are genuine events missed by the local seismic network, demonstrating the usefulness of regional subspace detectors in augmenting local catalogs. A trade-off in detection performance between stations at smaller source-receiver distances, which have increased signal to noise ratios, and stations at larger distances, which have greater waveform similarity, is observed. The increased detection capabilities of a single higher dimension subspace detector, compared to multiple lower dimension detectors, are explored in identifying events that can be described as linear combinations of training events. In this data set, such an advantage can be significant, justifying the use of a subspace detection scheme over conventional correlation methods.
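The core statistic behind subspace detectors of this kind is the fraction of a sliding window's energy captured by an orthonormal basis built from training waveforms. A minimal single-channel sketch in Python (the waveforms, basis dimension, and threshold below are illustrative assumptions, not values from the study):

```python
import numpy as np

def subspace_detector(data, U, threshold=0.8):
    """Slide a window over `data` and flag windows whose energy is largely
    captured by the orthonormal template basis U (window_len x dim).

    Detection statistic: c = ||U.T @ x||^2 / ||x||^2, which lies in [0, 1].
    """
    n, _ = U.shape
    hits = []
    for start in range(len(data) - n + 1):
        x = data[start:start + n]
        energy = x @ x
        if energy == 0:
            continue
        c = np.sum((U.T @ x) ** 2) / energy
        if c >= threshold:
            hits.append((start, c))
    return hits

# Build a rank-2 template subspace from two training waveforms via SVD,
# then detect a scaled copy of one training event buried in noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
w1 = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)
w2 = np.sin(2 * np.pi * 7 * t) * np.exp(-3 * t)
U, _, _ = np.linalg.svd(np.column_stack([w1, w2]), full_matrices=False)

data = 0.01 * rng.standard_normal(500)
data[200:300] += 0.7 * w1                     # hidden event at sample 200
hits = subspace_detector(data, U, threshold=0.8)
print(hits)
```

A higher-dimensional basis captures linear combinations of training events, which is the advantage over single-template correlation noted in the abstract; in practice false positives are further suppressed by requiring simultaneous detections on multiple stations.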
Robust Quantum-Network Memory Using Decoherence-Protected Subspaces of Nuclear Spins
NASA Astrophysics Data System (ADS)
Reiserer, Andreas; Kalb, Norbert; Blok, Machiel S.; van Bemmelen, Koen J. M.; Taminiau, Tim H.; Hanson, Ronald; Twitchen, Daniel J.; Markham, Matthew
2016-04-01
The realization of a network of quantum registers is an outstanding challenge in quantum science and technology. We experimentally investigate a network node that consists of a single nitrogen-vacancy center electronic spin hyperfine coupled to nearby nuclear spins. We demonstrate individual control and readout of five nuclear spin qubits within one node. We then characterize the storage of quantum superpositions in individual nuclear spins under repeated application of a probabilistic optical internode entangling protocol. We find that the storage fidelity is limited by dephasing during the electronic spin reset after failed attempts. By encoding quantum states into a decoherence-protected subspace of two nuclear spins, we show that quantum coherence can be maintained for over 1000 repetitions of the remote entangling protocol. These results and insights pave the way towards remote entanglement purification and the realization of a quantum repeater using nitrogen-vacancy center quantum-network nodes.
Modal contributions and effects of spurious poles in nonlinear subspace identification
NASA Astrophysics Data System (ADS)
Marchesiello, S.; Fasana, A.; Garibaldi, L.
2016-06-01
Stabilisation diagrams have become a standard tool in linear system identification, due to their capability of reducing user interaction during the parameter extraction process. Their use in the presence of nonlinearity was recently introduced and was demonstrated to be effective even in the presence of non-smooth nonlinearities and high modal density. However, some variability of the identification results was reported, in particular concerning the quantification of the nonlinear effects, because of the presence of spurious modes due to an over-estimation of the system order. In this paper the impact of spurious poles on nonlinear subspace identification is investigated and some modal decoupling tools are introduced, which make it possible to identify the modal contributions of physical poles to the nonlinear dynamics. An experimental identification is then conducted on a multi-degree-of-freedom system with a local nonlinearity, and the significant improvements of the estimates obtained by the proposed approach are highlighted.
Li, Yongxiao; Wang, Zinan; Peng, Chao; Li, Zhengbin
2014-10-10
Conventional signal processing methods for improving the random walk coefficient and the bias stability of interferometric fiber-optic gyroscopes are usually implemented on one-dimensional sequences. In this paper, as a comparison, we combined synchronous adaptive filters with correlation calculations of multidimensional signals from the perspective of the signal subspace. First, two synchronous independent channels are obtained through quadrature demodulation. Next, synchronous adaptive filtering is carried out in order to project the original channels onto the highly related error channels and the approximation channels. The error channel signals are then processed by principal component analysis to suppress coherent noises. Finally, an optimal state estimation of these error channels and approximation channels based on the Kalman gain coefficient is performed. Experimental results show that this signal processing method improved the raw measurements' variance from 0.0630 (°/h)² to 0.0103 (°/h)². PMID:25322393
Joint DOA and multi-pitch estimation based on subspace techniques
NASA Astrophysics Data System (ADS)
Xi Zhang, Johan; Christensen, Mads Græsbøll; Jensen, Søren Holdt; Moonen, Marc
2012-12-01
In this article, we present a novel method for high-resolution joint direction-of-arrival (DOA) and multi-pitch estimation based on subspaces decomposed from a spatio-temporal data model. The resulting estimator is termed multi-channel harmonic MUSIC (MC-HMUSIC). It is capable of resolving sources under adverse conditions, unlike traditional methods, for example when multiple sources are impinging on the array from approximately the same angle or with similar pitches. The effectiveness of the method is demonstrated on simulated anechoic array recordings with source signals from real recorded speech and clarinet. Furthermore, a statistical evaluation with synthetic signals shows the increased robustness in DOA and fundamental frequency estimation, as compared with a state-of-the-art reference method.
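MC-HMUSIC builds on the classical MUSIC idea of splitting the sample covariance into signal and noise subspaces. A minimal sketch of plain narrowband MUSIC for a uniform linear array (not the authors' multi-channel harmonic variant; the array geometry, angles, and noise level are illustrative) conveys the core mechanism:

```python
import numpy as np

def music_doa(X, n_sources, d=0.5, grid=None):
    """Narrowband MUSIC for a uniform linear array with sensor spacing d
    (in wavelengths). X holds snapshots, shape (sensors, snapshots)."""
    if grid is None:
        grid = np.linspace(-90.0, 90.0, 361)      # 0.5-degree steps
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]               # sample covariance
    _, vecs = np.linalg.eigh(R)                   # eigenvalues ascending
    En = vecs[:, :m - n_sources]                  # noise subspace
    p = np.empty(len(grid))
    for i, theta in enumerate(grid):
        a = np.exp(2j * np.pi * d * np.arange(m) * np.sin(np.radians(theta)))
        p[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2   # pseudospectrum
    return grid, p

# Two uncorrelated sources at -20 and +30 degrees, 8 sensors, 200 snapshots.
rng = np.random.default_rng(3)
m, snaps = 8, 200
angles = np.array([-20.0, 30.0])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(np.radians(angles))))
S = rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))
N = 0.05 * (rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps)))
grid, p = music_doa(A @ S + N, n_sources=2)

# The two largest local maxima of the pseudospectrum are the DOA estimates.
loc = [i for i in range(1, len(p) - 1) if p[i - 1] < p[i] > p[i + 1]]
est = sorted(grid[i] for i in sorted(loc, key=lambda i: p[i])[-2:])
print(est)  # close to [-20.0, 30.0]
```

The spatio-temporal model in the paper extends this by adding harmonic (pitch) structure to the steering vectors, so that angle and fundamental frequency are estimated jointly.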
Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace
Zhang, Cheng; Lai, Chun-Liang; Pettitt, B. Montgomery
2016-01-01
The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal error. Given multiple simulations, WHAM uses the overlap of their distributions to obtain the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace (DIIS). We give examples from a lattice model, a simple liquid and an aqueous protein solution. PMID:27453632
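DIIS accelerates a fixed-point iteration by extrapolating over a short history of residuals: it finds coefficients summing to one that minimize the norm of the combined residual, via a Lagrange-bordered linear system. A hedged sketch on a toy fixed-point problem (a Richardson iteration for a stiff diagonal system, not the actual WHAM equations):

```python
import numpy as np

def diis_fixed_point(f, x0, m=5, tol=1e-10, max_iter=100):
    """Accelerate x <- f(x) with DIIS (direct inversion in the iterative
    subspace). Keeps up to m outputs f(x_i) and residuals r_i = f(x_i) - x_i,
    then extrapolates x = sum_i c_i f(x_i) with coefficients minimizing
    ||sum_i c_i r_i|| subject to sum_i c_i = 1."""
    xs, rs = [], []
    x = np.asarray(x0, dtype=float)
    for it in range(max_iter):
        fx = f(x)
        r = fx - x
        if np.linalg.norm(r) < tol:
            return x, it
        xs.append(fx); rs.append(r)
        if len(xs) > m:
            xs.pop(0); rs.pop(0)
        k = len(rs)
        B = np.zeros((k + 1, k + 1))              # bordered Gram matrix
        for i in range(k):
            for j in range(k):
                B[i, j] = rs[i] @ rs[j]
        B[k, :k] = B[:k, k] = 1.0                 # constraint sum(c) = 1
        rhs = np.zeros(k + 1); rhs[k] = 1.0
        # lstsq tolerates the (near-)singular B that appears once the
        # residual history becomes linearly dependent
        c = np.linalg.lstsq(B, rhs, rcond=None)[0][:k]
        x = sum(ci * xi for ci, xi in zip(c, xs))
    return x, max_iter

# Toy problem: Richardson iteration x <- x + (b - A x). Plain iteration
# contracts at rate 0.9 (hundreds of steps); DIIS needs only a handful.
A = np.diag([0.5, 1.0, 1.5, 1.9, 0.1])
b = np.ones(5)
x, iters = diis_fixed_point(lambda x: x + b - A @ x, np.zeros(5))
print(iters, np.max(np.abs(A @ x - b)))
```

For WHAM itself, `f` would be the self-consistency update of the per-window free energies, with the same DIIS wrapper applied unchanged.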
Handwritten digit recognition by adaptive-subspace self-organizing map (ASSOM).
Zhang, B; Fu, M; Yan, H; Jabri, M A
1999-01-01
The adaptive-subspace self-organizing map (ASSOM) proposed by Kohonen is a recent development in self-organizing map (SOM) computation. In this paper, we propose a method to realize ASSOM using a neural learning algorithm in nonlinear autoencoder networks. Our method has the advantage of numerical stability. We have applied our ASSOM model to build a modular classification system for handwritten digit recognition. Ten ASSOM modules are used to capture different features in the ten classes of digits. When a test digit is presented to all the modules, each module provides a reconstructed pattern and the system outputs a class label by comparing the ten reconstruction errors. Our experiments show promising results. For relatively small size modules, the classification accuracy reaches 99.3% on the training set and over 97% on the testing set. PMID:18252591
Application of the adaptive subspace detector to Raman spectra for biological threat detection
NASA Astrophysics Data System (ADS)
Russell, Thomas A.; Borchardt, Steven; Anderson, Richard; Treado, Patrick; Neiss, Jason
2006-10-01
Effective application of point detectors in the field to monitor the air for biological attack imposes a challenging set of requirements on threat detection algorithms. Raman spectra exhibit features that discriminate between threats and non-threats, and such spectra can be collected quickly, offering a potential solution given the appropriate algorithm. The algorithm must attempt to match to known threat signatures, while suppressing the background clutter in order to produce acceptable Receiver Operating Characteristic (ROC) curves. The radar space-time adaptive processing (STAP) community offers a set of tools appropriate to this problem, and these have recently crossed over into hyperspectral imaging (HSI) applications. The Adaptive Subspace Detector (ASD) is the Generalized Likelihood Ratio Test (GLRT) detector for structured backgrounds (which we expect for Raman background spectra) and mixed pixels, and supports the necessary adaptation to varying background environments. The structured background model reduces the training required for that adaptation, and the number of statistical assumptions required. We applied the ASD to large Raman spectral databases collected by ChemImage, developed spectral libraries of threat signatures and several backgrounds, and tested the algorithm against individual and mixture spectra, including in blind tests. The algorithm was successful in detecting threats, however, in order to maintain the desired false alarm rate, it was necessary to shift the decision threshold so as to give up some detection sensitivity. This was due to excess spread of the detector histograms, apparently related to variability in the signatures not captured by the subspaces, and evidenced by non-Gaussian residuals. We present here performance modeling, test data, algorithm and sensor performance results, and model validation conclusions.
Bayesian estimation of Karhunen-Loève expansions: a random subspace approach
NASA Astrophysics Data System (ADS)
Chowdhary, Kenny; Najm, Habib N.
2016-08-01
One of the most widely-used procedures for dimensionality reduction of high dimensional data is Principal Component Analysis (PCA). More broadly, low-dimensional stochastic representation of random fields with finite variance is provided via the well known Karhunen-Loève expansion (KLE). The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., which minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite dimensional stochastic process inspired by Brownian motion.
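The standard (non-Bayesian) estimation step described above, computing KLE/PCA modes from an SVD of the centered data matrix, can be sketched as follows (the Brownian-motion-like test process is illustrative, echoing the example in the abstract):

```python
import numpy as np

def kle_modes(samples, n_modes):
    """Estimate Karhunen-Loeve modes from `samples` (one realization of the
    random field per row) via SVD of the centered data matrix, which is
    equivalent to PCA on the sample covariance."""
    X = samples - samples.mean(axis=0)            # center each field point
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    eigvals = s**2 / (len(samples) - 1)           # sample covariance spectrum
    return eigvals[:n_modes], Vt[:n_modes]        # energies, orthonormal modes

# Discretized Brownian-motion-like process: 500 sample paths on 64 points.
rng = np.random.default_rng(1)
paths = np.cumsum(rng.standard_normal((500, 64)), axis=1) / np.sqrt(64)
energies, modes = kle_modes(paths, 4)
print(energies)                                   # decreasing spectrum
print(modes.shape)                                # (4, 64) orthonormal rows
```

The Bayesian procedure in the paper replaces this single point estimate with a posterior distribution over the orthonormal modes, so that the sampling error ignored here is quantified rather than discarded.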
NASA Astrophysics Data System (ADS)
Jefferson, J.; Gilbert, J. M.; Maxwell, R. M.; Constantine, P. G.
2015-12-01
Complex hydrologic models are commonly used as computational tools to assess and quantify fluxes at the land surface and for forecasting and prediction purposes. When estimating water and energy fluxes from vegetated surfaces, the equations solved within these models require that multiple input parameters be specified. Some parameters characterize land cover properties while others are constants used to model physical processes like transpiration. As a result, it becomes important to understand the sensitivity of output flux estimates to uncertain input parameters. The active subspace method identifies the most important direction in the high-dimensional space of model inputs. Perturbations of input parameters in this direction influence output quantities more, on average, than perturbations in other directions. The components of the vector defining this direction quantify the sensitivity of the model output to the corresponding inputs. Discovering whether or not an active subspace exists is computationally efficient compared to several other sensitivity analysis methods. Here, we apply this method to evaluate the sensitivity of latent, sensible and ground heat fluxes from the ParFlow-Common Land Model (PF-CLM). Of the 19 input parameters used to specify properties of a grass-covered surface, between three and six parameters are identified as important for heat flux estimates. Furthermore, the 19-dimensional input parameter space is reduced to one active variable, and the relationship between the inputs and output fluxes for this case is described by a quadratic polynomial. The input parameter weights and the input-output relationship provide a powerful combination of information that can be used to understand land surface dynamics. Given the success of this proof-of-concept example, extension of this method to identify important parameters within the transpiration computation will be explored.
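The active subspace computation itself is compact: average the outer products of sampled gradients and take the dominant eigenvectors of the result. A minimal sketch on a synthetic ridge function (illustrative only, not the PF-CLM model; the direction `w` is an assumed ground truth for checking the recovery):

```python
import numpy as np

def active_subspace(grads, k=1):
    """Estimate a k-dimensional active subspace from gradient samples.

    Forms C = (1/M) sum_j grad f(x_j) grad f(x_j)^T; the dominant
    eigenvectors of C span the directions along which f varies most,
    on average, and the eigenvalues rank their importance."""
    C = grads.T @ grads / len(grads)
    eigvals, eigvecs = np.linalg.eigh(C)           # ascending order
    return eigvals[::-1], eigvecs[:, ::-1][:, :k]  # descending, top-k modes

# Ridge function f(x) = sin(w.x): it varies only along w, so a
# one-dimensional active subspace should recover w (up to sign).
rng = np.random.default_rng(2)
w = np.array([3.0, 4.0]) / 5.0                     # assumed unit direction
X = rng.uniform(-1, 1, size=(200, 2))
grads = np.cos(X @ w)[:, None] * w                 # exact gradient of sin(w.x)
eigvals, W1 = active_subspace(grads)
print(W1[:, 0], eigvals)
```

The components of `W1[:, 0]` play the role of the "input parameter weights" mentioned in the abstract, and the one-dimensional reduction corresponds to plotting model output against the active variable `X @ W1[:, 0]`.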
Global spatial sensitivity of runoff to subsurface permeability using the active subspace method
NASA Astrophysics Data System (ADS)
Gilbert, James M.; Jefferson, Jennifer L.; Constantine, Paul G.; Maxwell, Reed M.
2016-06-01
Hillslope scale runoff is generated as a result of interacting factors that include water influx rate, surface and subsurface properties, and antecedent saturation. Heterogeneity of these factors affects the existence and characteristics of runoff. This heterogeneity becomes an increasingly relevant consideration as hydrologic models are extended and employed to capture greater detail in runoff generating processes. We investigate the impact of one type of heterogeneity - subsurface permeability - on runoff using the integrated hydrologic model ParFlow. Specifically, we examine the sensitivity of runoff to variation in three-dimensional subsurface permeability fields for scenarios dominated by either Hortonian or Dunnian runoff mechanisms. Ten thousand statistically consistent subsurface permeability fields are parameterized using a truncated Karhunen-Loève (KL) series and used as inputs to 48-h simulations of integrated surface-subsurface flow in an idealized 'tilted-v' domain. Coefficients of the spatial modes of the KL permeability fields provide the parameter space for analysis using the active subspace method. The analysis shows that for Dunnian-dominated runoff conditions the cumulative runoff volume is sensitive primarily to the first spatial mode, corresponding to permeability values in the center of the three-dimensional model domain. In the Hortonian case, runoff volume is sensitive to multiple smaller-scale spatial modes and the locus of that sensitivity is in the near-surface zone upslope from the domain outlet. Variation in runoff volume resulting from random heterogeneity configurations can be expressed as an approximately univariate function of the active variable, a weighted combination of spatial parameterization coefficients computed through the active subspace method. However, this relationship between the active variable and runoff volume is better defined for Dunnian runoff than for the Hortonian scenario.
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.; Chima, Rodrick V.; Turkel, Eli
1997-01-01
A preconditioning scheme has been implemented into a three-dimensional viscous computational fluid dynamics code for turbomachine blade rows. The preconditioning allows the code, originally developed for simulating compressible flow fields, to be applied to nearly-incompressible, low Mach number flows. A brief description is given of the compressible Navier-Stokes equations for a rotating coordinate system, along with the preconditioning method employed. Details about the conservative formulation of artificial dissipation are provided, and different artificial dissipation schemes are discussed and compared. The preconditioned code was applied to a well-documented case involving the NASA large low-speed centrifugal compressor for which detailed experimental data are available for comparison. Performance and flow field data are compared for the near-design operating point of the compressor, with generally good agreement between computation and experiment. Further, significant differences between computational results for the different numerical implementations, revealing different levels of solution accuracy, are discussed.
Analysis of a Lipid/Polymer Membrane for Bitterness Sensing with a Preconditioning Process.
Yatabe, Rui; Noda, Junpei; Tahara, Yusuke; Naito, Yoshinobu; Ikezaki, Hidekazu; Toko, Kiyoshi
2015-01-01
It is possible to evaluate the taste of foods or medicines using a taste sensor. The taste sensor converts information on taste into an electrical signal using several lipid/polymer membranes. A lipid/polymer membrane for bitterness sensing can evaluate aftertaste after immersion in monosodium glutamate (MSG), which is called "preconditioning". However, we have not yet analyzed the change in the surface structure of the membrane as a result of preconditioning. Thus, we analyzed the change in the surface by performing contact angle and surface zeta potential measurements, Fourier transform infrared spectroscopy (FTIR), X-ray photoelectron spectroscopy (XPS) and gas cluster ion beam time-of-flight secondary ion mass spectrometry (GCIB-TOF-SIMS). After preconditioning, the concentrations of MSG and tetradodecylammonium bromide (TDAB) contained in the lipid membrane were found to be higher in the surface region than in the bulk region. The effect of preconditioning was revealed by the above analysis methods. PMID:26404301
Cai, Yunfeng; Bai, Zhaojun; Pask, John E.; Sukumar, N.
2013-12-15
The iterative diagonalization of a sequence of large ill-conditioned generalized eigenvalue problems is a computational bottleneck in quantum mechanical methods employing a nonorthogonal basis for ab initio electronic structure calculations. We propose a hybrid preconditioning scheme to effectively combine global and locally accelerated preconditioners for rapid iterative diagonalization of such eigenvalue problems. In partition-of-unity finite-element (PUFE) pseudopotential density-functional calculations, employing a nonorthogonal basis, we show that the hybrid preconditioned block steepest descent method is a cost-effective eigensolver, outperforming current state-of-the-art global preconditioning schemes, and as efficient for the ill-conditioned generalized eigenvalue problems produced by PUFE as the locally optimal block preconditioned conjugate-gradient method is for the well-conditioned standard eigenvalue problems produced by planewave methods.
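As a generic illustration of the preconditioned iterative eigensolvers discussed here, the sketch below runs SciPy's LOBPCG (the locally optimal block preconditioned conjugate-gradient method named in the abstract) on a 1-D Laplacian with a sparse-factorization preconditioner. The matrix and preconditioner are stand-ins chosen for brevity; this is not the hybrid PUFE scheme:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, lobpcg, splu

# Small SPD test matrix: a 1-D Laplacian (stand-in for the Hamiltonian).
n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()

# Preconditioner M ~ A^{-1} applied via a sparse LU factorization; in
# practice an approximate inverse would replace the exact one.
lu = splu(A)
M = LinearOperator((n, n), matvec=lu.solve)

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 4))          # block of 4 starting vectors

# Smallest 4 eigenpairs with preconditioned LOBPCG.
vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=200)

# Residual check: ||A v - lambda v|| should be small for each pair.
res = np.linalg.norm(A @ vecs - vecs * vals, axis=0)
print("max residual:", res.max())
```

With a good preconditioner the block converges in a handful of iterations; a poor one drags convergence out, which is the practical motivation for the hybrid scheme the abstract proposes.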
Some Dilemmas of Institutional Evaluation and Their Relationship to Preconditions and Procedures.
ERIC Educational Resources Information Center
Adelman, Clem
1980-01-01
"A Study of Student Choice in Context of Institutional Change (SCIC)" is used as an example of an evaluation employing social anthropological methods. The problems of confidentiality, rapport, evaluator autonomy, and unclear preconditions are discussed and illustrated. (BW)
Liu, Xian-bao; Chen, Han; Chen, Hui-qiang; Zhu, Mei-fei; Hu, Xin-yang; Wang, Ya-ping; Jiang, Zhi; Xu, Yin-chuan; Xiang, Mei-xiang; Wang, Jian-an
2012-01-01
Objective: Mesenchymal stem cell (MSC) transplantation is a promising therapy for ischemic heart diseases. However, poor cell survival after transplantation greatly limits the therapeutic efficacy of MSCs. The purpose of this study was to investigate the protective effect of angiopoietin-1 (Ang1) preconditioning on MSC survival and subsequent heart function improvement after transplantation. Methods: MSCs were cultured with or without 50 ng/ml Ang1 in complete medium for 24 h prior to experiments on cell survival and transplantation. 3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and Hoechst staining were applied to evaluate MSC survival after serum deprivation in vitro, while cell survival in vivo was detected by terminal deoxynucleotidyl transferase biotin-dUTP nick end labeling (TUNEL) assay 24 and 72 h after transplantation. Heart function and infarct size were measured four weeks later by small animal echocardiography and Masson’s trichrome staining, respectively. Results: Ang1 preconditioning induced Akt phosphorylation and increased expression of Bcl-2 and the ratio of Bcl-2/Bax. In comparison with non-preconditioned MSCs, Ang1-preconditioned cell survival was significantly increased while the apoptotic rate decreased in vitro. However, the PI3K/Akt pathway inhibitor, LY294002, abrogated the protective effect of Ang1 preconditioning. After transplantation, the Ang1-preconditioned-MSC group showed a lower death rate, smaller infarct size, and better heart functional recovery compared to the non-preconditioned-MSC group. Conclusions: Ang1 preconditioning enhances MSC survival, contributing to further improvement of heart function. PMID:22843181
Preconditioned domain decomposition scheme for three-dimensional aerodynamic sensitivity analysis
NASA Technical Reports Server (NTRS)
Eleshaky, Mohammed E.; Baysal, Oktay
1993-01-01
A preconditioned domain decomposition scheme is introduced for the solution of the 3D aerodynamic sensitivity equation. This scheme uses the iterative GMRES procedure to solve the effective sensitivity equation of the boundary-interface cells in the sensitivity analysis domain-decomposition scheme. Excluding the dense matrices and the effect of cross terms between boundary-interfaces is found to produce an efficient preconditioning matrix.
Dekker, L.R.C.; van Bavel, E.; Opthof, T.; Coronel, R.; Janse, M.J.
2003-01-01
Background ATP-sensitive K+ (KATP) channels play an important role in the protective mechanism underlying ischaemic preconditioning. Ample evidence indicates, however, that action potential shortening is not a prerequisite for the cardioprotective effect of preconditioning. Methods Monophasic action potential duration (MAPD), tissue resistance, intracellular Ca2+ (Indo-1) and mechanical activity were simultaneously assessed in arterially perfused rabbit papillary muscles. We studied four experimental protocols preceding sustained ischaemia: 1. control perfusion (n=6), 2. ischaemic preconditioning (PC; n=4), 3. pretreatment with a KATP channel blocker, glibenclamide (15 μmol/l), prior to ischaemic preconditioning (PC+glib; n=3), 4. glibenclamide pretreatment only (Glib; n=2). Results In the PC group an increase in the diastolic Ca2+ level and a prolongation of the Ca2+ transient just prior to the induction of sustained ischaemia correlate to the postponement of the onset of irreversible ischaemic damage, as established by a rise in [Ca2+]i, electrical uncoupling and contracture. Glibenclamide antagonised these changes in the Ca2+ transient and the cardioprotection induced by preconditioning. MAPD was equal in all experimental groups. Conclusions Prolongation of the Ca2+ transient and increase of diastolic [Ca2+]i just prior to the induction of sustained ischaemia, and not action potential shortening, are involved in the cardioprotective effect of ischaemic preconditioning. Therefore, a glibenclamide-sensitive mechanism, other than the sarcolemmal KATP channels, is involved in the protective effect of ischaemic preconditioning. Changes in Ca2+ metabolism may play a crucial role in ischaemic preconditioning. PMID:25696182
Bernsen, Erik; Dijkstra, Henk A.; Thies, Jonas; Wubs, Fred W.
2010-10-20
In present-day forward time stepping ocean-climate models, capturing both the wind-driven and thermohaline components, a substantial amount of CPU time is needed in a so-called spin-up simulation to determine an equilibrium solution. In this paper, we present a methodology based on Jacobian-Free Newton-Krylov methods to reduce the computational time for such a spin-up problem. We apply the method to an idealized configuration of a state-of-the-art ocean model, the Modular Ocean Model version 4 (MOM4). It is shown that a typical speed-up of a factor of 10-25 with respect to the original MOM4 code can be achieved and that this speed-up increases with increasing horizontal resolution.
Kapoor, Sorabh; Berishvili, Ekaterine; Bandi, Sriram; Gupta, Sanjeev
2014-10-01
Despite the potential of ischemic preconditioning for organ protection, long-term effects in terms of molecular processes and cell fates are ill defined. We determined consequences of hepatic ischemic preconditioning in rats, including cell transplantation assays. Ischemic preconditioning induced persistent alterations; for example, after 5 days liver histology was normal, but γ-glutamyl transpeptidase expression was observed, with altered antioxidant enzyme content, lipid peroxidation, and oxidative DNA adducts. Nonetheless, ischemic preconditioning partially protected from toxic liver injury. Similarly, primary hepatocytes from donor livers preconditioned with ischemia exhibited undesirably altered antioxidant enzyme content and lipid peroxidation, but better withstood insults. However, donor hepatocytes from livers preconditioned with ischemia did not engraft better than hepatocytes from control livers. Moreover, proliferation of hepatocytes from donor livers preconditioned with ischemia decreased under liver repopulation conditions. Hepatocytes from donor livers preconditioned with ischemia showed oxidative DNA damage with expression of genes involved in MAPK signaling that impose G1/S and G2/M checkpoint restrictions, including p38 MAPK-regulated or ERK-1/2-regulated cell-cycle genes such as FOS, MAPK8, MYC, various cyclins, CDKN2A, CDKN2B, TP53, and RB1. Thus, although ischemic preconditioning allowed hepatocytes to better withstand secondary insults, accompanying DNA damage and molecular events simultaneously impaired their proliferation capacity over the long term. Mitigation of ischemic preconditioning-induced DNA damage and deleterious molecular perturbations holds promise for advancing clinical applications. PMID:25128377
Opioid-induced preconditioning: recent advances and future perspectives.
Peart, Jason N; Gross, Eric R; Gross, Garrett J
2005-01-01
Opioids, named by Acheson for compounds with morphine-like actions despite chemically distinct structures, have received much research interest, particularly for their central nervous system (CNS) actions involved in pain management, resulting in thousands of scientific papers focusing on their effects on the CNS and other organ systems. A more recent area which may have great clinical importance concerns the role of opioids, either endogenous or exogenous compounds, in limiting the pathogenesis of ischemia-reperfusion injury in heart and brain. The role of endogenous opioids in hibernation provides tantalizing evidence for the protective potential of opioids against ischemia or hypoxia. Mammalian hibernation, a distinct energy-conserving state, is associated with depletion of energy stores, intracellular acidosis and hypoxia, similar to those which occur during ischemia. However, despite the potentially detrimental cellular state induced with hibernation, the myocardium remains resilient for many months. What accounts for the hypoxia-tolerant state is of great interest. During hibernation, circulating levels of opioid peptides are increased dramatically, and indeed, are considered a "trigger" of hibernation. Furthermore, administration of opioid antagonists can effectively reverse hibernation in mammals. Therefore, it is not surprising that activation of opioid receptors has been demonstrated to preserve cellular status following a hypoxic insult, such as ischemia-reperfusion in many model systems including the intestine [Zhang, Y., Wu, Y.X., Hao, Y.B., Dun, Y. Yang, S.P., 2001. Role of endogenous opioid peptides in protection of ischemic preconditioning in rat small intestine. Life Sci. 68, 1013-1019], skeletal muscle [Addison, P.D., Neligan, P.C., Ashrafpour, H., Khan, A., Zhong, A., Moses, M., Forrest, C.R., Pang, C.Y., 2003. Noninvasive remote ischemic preconditioning for global protection of skeletal muscle against infarction. Am. J. Physiol. Heart Circ
Nie, Huang; Xiong, Lize; Lao, Ning; Chen, Shaoyang; Xu, Ning; Zhu, Zhenghua
2006-05-01
The present study examined the hypothesis that spinal cord ischemic tolerance induced by hyperbaric oxygen (HBO) preconditioning is triggered by an initial oxidative stress and is associated with an increase of antioxidant enzyme activities as one effector of the neuroprotection. New Zealand White rabbits were subjected to HBO preconditioning, hyperbaric air (HBA) preconditioning, or sham pretreatment once daily for five consecutive days before spinal cord ischemia. Activities of catalase (CAT) and superoxide dismutase were increased in spinal cord tissue in the HBO group 24 h after the last pretreatment and reached a higher level after spinal cord ischemia for 20 mins followed by reperfusion for 24 or 48 h, in comparison with those in control and HBA groups. The spinal cord ischemic tolerance induced by HBO preconditioning was attenuated when a CAT inhibitor, 3-amino-1,2,4-triazole, 1 g/kg, was administered intraperitoneally 1 h before ischemia. In addition, administration of a free radical scavenger, dimethylthiourea, 500 mg/kg, intravenous, 1 h before each day's preconditioning, reversed the increase of the activities of both enzymes in spinal cord tissue. The results indicate that an initial oxidative stress, as a trigger to upregulate the antioxidant enzyme activities, plays an important role in the formation of the tolerance against spinal cord ischemia by HBO preconditioning. PMID:16136055
Choi, Ji Ye; Park, Jeong-Min; Yi, Joo Mi; Leem, Sun-Hee; Kang, Tae-Hong
2015-01-01
The capacity of tumor cells for nucleotide excision repair (NER) is a major determinant of the efficacy of and resistance to DNA-damaging chemotherapeutics, such as cisplatin. Here, using lesion-specific monoclonal antibodies, we demonstrate that NER capacity is enhanced in human lung cancer cells after preconditioning with DNA-damaging agents. Preconditioning of cells with a nonlethal dose of UV radiation facilitated the kinetics of subsequent cisplatin repair and vice versa. Dual-incision assay confirmed that the enhanced NER capacity was sustained for 2 days. Checkpoint activation by ATR kinase and expression of NER factors were not altered significantly by the preconditioning, whereas association of XPA, the rate-limiting factor in NER, with chromatin was accelerated. In preconditioned cells, SIRT1 expression was increased, and this resulted in a decrease in acetylated XPA. Inhibition of SIRT1 abrogated the preconditioning-induced predominant XPA binding to DNA lesions. Taking these data together, we conclude that upregulated NER capacity in preconditioned lung cancer cells is caused partly by an increased level of SIRT1, which modulates XPA sensitivity to DNA damage. This study provides some insights into the molecular mechanism of chemoresistance through acquisition of enhanced DNA repair capacity in cancer cells. PMID:26317794
Hu, Xiaowu; Yang, Junjie; Wang, Ying; Zhang, You; Ii, Masaaki; Shen, Zhenya; Hui, Jie
2015-01-01
Background: Cell-based angiogenesis is a promising treatment for ischemic diseases; however, survival of implanted cells is impaired by the ischemic microenvironment. In this study, mesenchymal stem cells (MSCs) for cell transplantation were preconditioned with trimetazidine (TMZ). We hypothesized that TMZ enhances the survival rate of MSCs under hypoxic stimuli through up-regulation of HIF1-α. Methods and results: Bone marrow-derived rat mesenchymal stem cells were preconditioned with 10 μM TMZ for 6 h. TMZ preconditioning of MSCs remarkably increased cell viability and the expression of HIF1-α and Bcl-2 when cells were under hypoxia/reoxygenation (H/R) stimuli. However, the protective effects of TMZ were abolished after knockdown of HIF-1α. Three days after implantation of the cells into the peri-ischemic zone of a rat myocardial ischemia-reperfusion (I/R) injury model, survival of the TMZ-preconditioned MSCs was high. Furthermore, capillary density and cardiac function were significantly better in the rats implanted with TMZ-preconditioned MSCs 28 days after cell injection. Conclusions: TMZ preconditioning increased the survival rate of MSCs through up-regulation of HIF1-α, thus contributing to neovascularization and improved cardiac function of rats subjected to myocardial I/R injury. PMID:26629255
Intestinal ischemic preconditioning reduces liver ischemia reperfusion injury in rats
XUE, TONG-MIN; TAO, LI-DE; ZHANG, JIE; ZHANG, PEI-JIAN; LIU, XIA; CHEN, GUO-FENG; ZHU, YI-JIA
2016-01-01
The aim of the current study was to investigate whether intestinal ischemic preconditioning (IP) reduces damage to the liver during hepatic ischemia reperfusion (IR). Sprague Dawley rats were used to model liver IR injury, and were divided into the sham operation group (SO), IR group and IP group. The results indicated that IR significantly increased Bax, caspase 3 and NF-κBp65 expression levels, with reduced expression of Bcl-2 compared with the IP group. Compared with the IR group, the levels of AST, ALT, MPO, MDA, TNF-α and IL-1 were significantly reduced in the IP group. Immunohistochemistry for Bcl-2 and Bax indicated that Bcl-2 expression in the IP group was significantly increased compared with the IR group. In addition, IP reduced Bax expression compared with the IR group. Liver injury was more severe in the IR group and reduced in the IP group, as indicated by the morphological evaluation of liver tissues. The present study suggested that IP may alleviate apoptosis, reduce the release of pro-inflammatory cytokines, ameliorate reductions in liver function and reduce liver tissue injury. To conclude, IP provided protection against hepatic IR injury. PMID:26821057
Protective effects of remote ischemic preconditioning in isolated rat hearts
Teng, Xiao; Yuan, Xin; Tang, Yue; Shi, Jingqian
2015-01-01
The Langendorff model was used to investigate whether remote ischemic preconditioning (RIPC) attenuates post-ischemic mechanical dysfunction in isolated rat hearts and to explore possible mechanisms. SD rats were randomly divided into RIPC group, RIPC + norepinephrine (NE) depletion group, RIPC + pertussis toxin (PTX) pretreatment group, ischemia/reperfusion group without treatment (ischemia group) and time control (TC) group. RIPC was achieved through interrupted occlusion of the anterior mesenteric artery. Then, the Langendorff model was established using routine methods. Heart function was tested; immunohistochemistry and ELISA methods were used to detect various indices related to myocardial injury. Compared with the ischemia group, in which the hemodynamic parameters deteriorated significantly, heart function recovered to a certain degree in the RIPC, RIPC + NE depletion, and RIPC + PTX groups (P<0.05). More apoptotic nuclei were observed in the ischemia group than in the other three groups (P<0.05); more apoptotic nuclei were detected in the NE depletion and PTX groups than in the RIPC group (P<0.05). However, there was no significant difference between the NE depletion and PTX groups. In conclusion, RIPC protection of I/R myocardium extends to the period after hearts are isolated. NE and PTX-sensitive inhibitory G protein might have a role in the protection process. PMID:26550168
[Phytoadaptogens-induced phenomenon similar to ischemic preconditioning].
Arbuzov, A G; Maslov, L N; Burkova, V N; Krylatov, A V; Konkovskaia, Iu N; Safronov, S M
2009-04-01
The course administration (16 mg/kg per os for 5 days) of extracts of Panax ginseng or Rhodiola rosea induced a decrease in the infarct size/area at risk (IS/AAR) ratio during a 45-min local ischemia and a 2-hr reperfusion in artificially ventilated chloralose-anaesthetized rats. Single administration of ginseng or Rhodiola 24 h before ischemia did not affect the IS/AAR ratio. Chronic administration of extracts of Eleutherococcus senticosus, Leuzea carthamoides and Aralia mandshurica had no effect on the IS/AAR ratio. Pretreatment with extract of Aralia mandshurica prevented the appearance of ventricular arrhythmias during the first 10 min of coronary artery occlusion. Pretreatment with extract of Rhodiola rosea decreased the incidence of ventricular fibrillation during ischemia. Single administration of extracts of Panax ginseng or Rhodiola rosea in a dose of 16 mg/kg had no effect on the IS/AAR ratio. The authors conclude that extracts of ginseng and Rhodiola exhibit a powerful cardioprotective effect. Extract of Aralia exhibits a strong antiarrhythmic effect. Extracts of ginseng and Rhodiola do not mimic the phenomenon of ischemic preconditioning. PMID:19505042
Remote Limb Ischemic Preconditioning: A Neuroprotective Technique in Rodents.
Brandli, Alice
2015-01-01
Sublethal ischemia protects tissues against subsequent, more severe ischemia through the upregulation of endogenous mechanisms in the affected tissue. Sublethal ischemia has also been shown to upregulate protective mechanisms in remote tissues. A brief period of ischemia (5-10 min) in the hind limb of mammals induces self-protective responses in the brain, lung, heart and retina. The effect is known as remote ischemic preconditioning (RIP). It is a therapeutically promising way of protecting vital organs, and is already under clinical trials for heart and brain injuries. This publication demonstrates a controlled, minimally invasive method of making a limb - specifically the hind limb of a rat - ischemic. A blood pressure cuff developed for use in human neonates is connected to a manual sphygmomanometer and used to apply 160 mmHg pressure around the upper part of the hind limb. A probe designed to detect skin temperature is used to verify the ischemia, by recording the drop in skin temperature caused by pressure-induced occlusion of the leg arteries, and the rise in temperature which follows release of the cuff. This method of RIP affords protection to the rat retina against bright light-induced damage and degeneration. PMID:26065365
Concepts of hypoxic NO signaling in remote ischemic preconditioning
Totzeck, Matthias; Hendgen-Cotta, Ulrike; Rassaf, Tienush
2015-01-01
Acute coronary syndromes remain a leading single cause of death worldwide. Therapeutic strategies to treat cardiomyocyte-threatening ischemia/reperfusion injury are urgently needed. Remote ischemic preconditioning (rIPC) applied by brief ischemic episodes to heart-distant organs has been tested in several clinical studies, and the major body of evidence points to beneficial effects of rIPC for patients. The underlying signaling, however, remains incompletely understood. This relates particularly to the mechanism by which the protective signal is transferred from the remote site to the target organ. Many pathways have been put forward, but none can explain the protective effects completely. In light of recent experimental studies, we here outline the current knowledge relating to the generation of the protective signal in the remote organ, the signal transfer to the target organ and the transduction of the transferred signal into cardioprotection. The majority of studies favors a humoral factor that activates cardiomyocyte downstream signaling, both receptor-dependently and receptor-independently. Cellular targets include deleterious calcium (Ca2+) signaling, reactive oxygen species, mitochondrial function and structure, and cellular apoptosis and necrosis. Following an outline of the existing evidence, we will furthermore characterize the existing knowledge and discuss future perspectives with particular emphasis on the interaction between the recently discovered hypoxic nitrite-nitric oxide signaling in rIPC. This refers to the protective role of nitrite, which can be activated endogenously using rIPC and which then contributes to cardioprotection by rIPC. PMID:26516418
Effects of hypoxic preconditioning on synaptic ultrastructure in mice.
Liu, Yi; Sun, Zhishan; Sun, Shufeng; Duan, Yunxia; Shi, Jingfei; Qi, Zhifeng; Meng, Ran; Sun, Yongxin; Zeng, Xianwei; Chui, Dehua; Ji, Xunming
2015-01-01
Hypoxic preconditioning (HPC) elicits resistance to more drastic subsequent insults, which potentially provides a neuroprotective therapeutic strategy, but the underlying mechanisms remain to be fully elucidated. Here, we examined the effects of HPC on synaptic ultrastructure in the olfactory bulb of mice. Mice underwent up to five cycles of repeated HPC treatments, and hypoxic tolerance was assessed with a standard gasp reflex assay. As expected, HPC induced an increase in tolerance time. To assess synaptic responses, Western blots were used to quantify protein levels of representative markers for glia, neurons, and synapses, and transmission electron microscopy was used to examine synaptic ultrastructure and mitochondrial density. HPC did not significantly alter the protein levels of the astroglial marker (GFAP), neuron-specific markers (GAP43, Tuj-1, and OMP), synaptic number markers (synaptophysin and SNAP25) or the percentage of excitatory synapses versus inhibitory synapses. However, HPC significantly affected synaptic curvature and the percentage of synapses with presynaptic mitochondria, which showed a concomitant pattern of change. These findings demonstrate that HPC is associated with changes in synaptic ultrastructure. PMID:25155519
Cerenkov luminescence tomography based on preconditioning orthogonal matching pursuit
NASA Astrophysics Data System (ADS)
Liu, Haixiao; Hu, Zhenhua; Wang, Kun; Tian, Jie; Yang, Xin
2015-03-01
Cerenkov luminescence imaging (CLI) is a novel optical imaging method and has been proved to be a potential substitute for traditional radionuclide imaging such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT). This imaging method inherits the high sensitivity of nuclear medicine and the low cost of optical molecular imaging. To obtain the depth information of the radioactive isotope, Cerenkov luminescence tomography (CLT) is established and the 3D distribution of the isotope is reconstructed. However, because of strong absorption and scattering, the reconstruction of the CLT sources is always converted to an ill-posed linear system that is difficult to solve. In this work, the sparse nature of the light source was taken into account and the preconditioning orthogonal matching pursuit (POMP) method was established to effectively reduce the ill-posedness and obtain better reconstruction accuracy. To prove the accuracy and speed of this algorithm, a heterogeneous numerical phantom experiment and an in vivo mouse experiment were conducted. Both the simulation result and the mouse experiment showed that our reconstruction method can provide more accurate reconstruction results compared with the traditional Tikhonov regularization method and the ordinary orthogonal matching pursuit (OMP) method. Our reconstruction method will provide technical support for biological applications of Cerenkov luminescence.
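Orthogonal matching pursuit itself is a simple greedy scheme: pick the column most correlated with the current residual, then re-fit the coefficients on the selected support by least squares. The sketch below implements plain OMP, with column normalization as a crude stand-in for the paper's preconditioning step (the sensing matrix and sparse source are synthetic; the actual POMP preconditioner is not reproduced here):

```python
import numpy as np

def omp(A, b, k, tol=1e-10):
    """Plain orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit on the support by least
    squares."""
    m, n = A.shape
    support = []
    x = np.zeros(n)
    r = b.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))   # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        r = b - A @ x                         # residual orthogonal to support
        if np.linalg.norm(r) < tol:
            break
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 200))
A /= np.linalg.norm(A, axis=0)   # column normalization: a crude diagonal
                                 # "preconditioning", not the paper's POMP
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
b = A @ x_true
x = omp(A, b, k=3)
print("recovered support:", sorted(int(i) for i in np.flatnonzero(np.abs(x) > 1e-6)))
```

In the tomography setting A would be the photon-transport system matrix, whose high column coherence is what motivates replacing this simple normalization with a genuine preconditioner.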
Isoflurane Preconditioning Confers Cardioprotection by Activation of ALDH2
Lang, Xiao-E; Wang, Xiong; Zhang, Ke-Rang; Lv, Ji-Yuan; Jin, Jian-Hua; Li, Qing-Shan
2013-01-01
The volatile anesthetic, isoflurane, protects the heart from ischemia/reperfusion (I/R) injury. Aldehyde dehydrogenase 2 (ALDH2) is thought to be an endogenous mechanism against ischemia-reperfusion injury, possibly through detoxification of toxic aldehydes. We investigated whether cardioprotection by isoflurane depends on activation of ALDH2. Anesthetized rats underwent 40 min of coronary artery occlusion followed by 120 min of reperfusion and were randomly assigned to the following groups: untreated controls, isoflurane preconditioning with and without an ALDH2 inhibitor, the direct activator of ALDH2, or a protein kinase C (PKCε) inhibitor. Pretreatment with isoflurane prior to ischemia reduced LDH and CK-MB levels and infarct size, while it increased phosphorylation of ALDH2, which could be blocked by the ALDH2 inhibitor, cyanamide. Isolated neonatal cardiomyocytes were treated with hypoxia followed by reoxygenation. Hypoxia/reoxygenation (H/R) increased cardiomyocyte apoptosis and injury, which were attenuated by isoflurane and by forced activation of ALDH2. In contrast, the effect of isoflurane-induced protection was almost abolished by knockdown of ALDH2. Activation of ALDH2 and cardioprotection by isoflurane were substantially blocked by the PKCε inhibitor. Activation of ALDH2 by mitochondrial PKCε plays an important role in the cardioprotection of isoflurane in myocardium I/R injury. PMID:23468836
Remote Ischemic Preconditioning (RIPC) Modifies Plasma Proteome in Humans
Hepponstall, Michele; Ignjatovic, Vera; Binos, Steve; Monagle, Paul; Jones, Bryn; Cheung, Michael H. H.; d’Udekem, Yves; Konstantinov, Igor E.
2012-01-01
Remote Ischemic Preconditioning (RIPC) induced by brief episodes of ischemia of the limb protects against multi-organ damage by ischemia-reperfusion (IR). Although it has been demonstrated that RIPC affects gene expression, the proteomic response to RIPC has not been determined. This study aimed to examine RIPC-induced changes in the plasma proteome. Five healthy adult volunteers had 4 cycles of 5 min ischemia alternating with 5 min reperfusion of the forearm. Blood samples were taken from the ipsilateral arm prior to the first ischemia, immediately after each episode of ischemia, as well as at 15 min and 24 h after the last episode of ischemia. Plasma samples from five individuals were analysed using two complementary techniques. Individual samples were analysed using two-dimensional difference in-gel electrophoresis (2D DIGE) and mass spectrometry (MS). Pooled samples for each of the time-points underwent trypsin digestion and peptides generated were analysed in triplicate using Liquid Chromatography and MS (LC-MS). Six proteins changed in response to RIPC using 2D DIGE analysis, while 48 proteins were found to be differentially regulated using LC-MS. The proteins of interest were involved in acute phase response signalling, and physiological molecular and cellular functions. The RIPC stimulus modifies the plasma protein content in blood taken from the ischemic arm in a cumulative fashion and evokes a proteomic response in peripheral blood. PMID:23139772
Arnoldi preconditioning for solving large linear biomedical systems.
Deo, Makarand; Vigmond, Edward
2005-01-01
Simulations of biomedical systems often involve solving large, sparse, linear systems of the form Ax = b. In initial value problems, this system is solved at every time step, so a quick solution is essential for tractability. Iterative solvers, especially preconditioned conjugate gradient, are attractive since memory demands are minimized compared to direct methods, albeit at a cost of solution speed. A proper preconditioner can drastically reduce computation and remains an area of active research. In this paper, we propose a novel preconditioner based on system order reduction using the Arnoldi method. Systems of orders up to a million, generated from a finite element method formulation of the elliptic portion of the bidomain equations, are solved with the new preconditioner and performance is compared with that of other preconditioners. Results indicate that the new method converges considerably faster, often within a single iteration. It also uses less memory than an incomplete LU decomposition (ILU). For solving a system repeatedly, the Arnoldi transformation must be continually recomputed, unlike ILU, but this can be done quickly. In conclusion, for solving a system once, the Arnoldi preconditioner offers a greatly reduced solution time, and for repeated solves, will still be faster than an ILU preconditioner. PMID:17282853
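The paper's implementation is not reproduced here, but the core idea (project onto a small Arnoldi/Krylov subspace, solve the reduced system exactly there, and pass the orthogonal complement through unchanged) can be sketched roughly as follows; the 1-D Laplacian test matrix, the subspace size `m = 30`, and the identity treatment of the complement are illustrative assumptions, not the authors' choices:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def arnoldi(A, v0, m):
    """Arnoldi iteration: orthonormal basis V and projected matrix H = V^T A V."""
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

# SPD test system: 1-D Laplacian (illustrative, not a bidomain matrix)
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)

V, Hm = arnoldi(A, b, 30)
Hinv = np.linalg.inv(Hm)

def apply_prec(r):
    r = np.asarray(r).ravel()
    y = V.T @ r                       # coordinates in the Krylov subspace
    # solve exactly inside the subspace, pass the complement through
    return V @ (Hinv @ y) + (r - V @ y)

M = spla.LinearOperator((n, n), matvec=apply_prec)
x, info = spla.cg(A, b, M=M, atol=1e-10)
```

For a symmetric positive definite A the reduced matrix H is tridiagonal and positive definite, so the operator remains a valid CG preconditioner; unlike ILU, V and H must be rebuilt whenever A changes, which matches the trade-off the abstract describes.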
Accelerating large cardiac bidomain simulations by arnoldi preconditioning.
Deo, Makarand; Bauer, Steffen; Plank, Gernot; Vigmond, Edward
2006-01-01
Bidomain simulations of cardiac systems often involve solving large, sparse, linear systems of the form Ax = b. These simulations are computationally very expensive in terms of run time and memory requirements. Therefore, efficient solvers are essential to keep simulations tractable. In this paper, an efficient preconditioner for the conjugate gradient (CG) method based on system order reduction using the Arnoldi method (A-PCG) is explained. Large-order systems generated during cardiac bidomain simulations using a finite element method formulation are solved using the A-PCG method. Its performance is compared with incomplete LU (ILU) preconditioning. Results indicate that the A-PCG estimates an approximate solution considerably faster than the ILU, often within a single iteration. To reduce the computational demands in terms of memory and run time, the use of a cascaded preconditioner is suggested: the A-PCG can be applied to quickly obtain an approximate solution, and subsequently a cheap iterative method such as successive overrelaxation (SOR) is applied to further refine the solution to a desired accuracy. The memory requirements are less than those of direct LU but more than those of the ILU method. The proposed scheme is shown to yield significant speedups when solving time-evolving systems. PMID:17946209
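The cascade the abstract describes (a cheap approximate solve refined by SOR sweeps) might look like this minimal dense sketch; the 1-D Laplacian test system, the diagonal-scaling initial guess, and `omega = 1.7` are assumptions for illustration:

```python
import numpy as np

def sor_refine(A, b, x0, omega=1.7, sweeps=200):
    """Refine an approximate solution of A x = b with SOR sweeps."""
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            sigma = A[i] @ x - A[i, i] * x[i]    # off-diagonal row sum
            x[i] += omega * ((b[i] - sigma) / A[i, i] - x[i])
    return x

# SPD test system (1-D Laplacian); x0 mimics a cheap approximate solve
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x_true = np.linspace(0.0, 1.0, n)
b = A @ x_true
x0 = b / np.diag(A)                  # crude diagonal-scaling first guess
x = sor_refine(A, b, x0)
```

For symmetric positive definite A, SOR converges for any 0 < omega < 2, so the refinement stage cheaply drives the residual left by the approximate preconditioned solve down to the desired accuracy.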
Calik, Michael W; Shankarappa, Sahadev A; Langert, Kelly A; Stubbs, Evan B
2015-01-01
A short-term exposure to moderately intense physical exercise affords a novel measure of protection against autoimmune-mediated peripheral nerve injury. Here, we investigated the mechanism by which forced exercise attenuates the development and progression of experimental autoimmune neuritis (EAN), an established animal model of Guillain-Barré syndrome. Adult male Lewis rats remained sedentary (control) or were preconditioned with forced exercise (1.2 km/day × 3 weeks) prior to P2-antigen induction of EAN. Sedentary rats developed a monophasic course of EAN beginning on postimmunization day 12.3 ± 0.2 and reaching peak severity on day 17.0 ± 0.3 (N = 12). By comparison, forced-exercise preconditioned rats exhibited a similar monophasic course but with significant (p < .05) reduction of disease severity. Analysis of popliteal lymph nodes revealed a protective effect of exercise preconditioning on leukocyte composition and egress. Compared with sedentary controls, forced exercise preconditioning promoted a sustained twofold retention of P2-antigen responsive leukocytes. The percentage distribution of pro-inflammatory (Th1) lymphocytes retained in the nodes from sedentary EAN rats (5.1 ± 0.9%) was significantly greater than that present in nodes from forced-exercise preconditioned EAN rats (2.9 ± 0.6%) or from adjuvant controls (2.0 ± 0.3%). In contrast, the percentage of anti-inflammatory (Th2) lymphocytes (7-10%) and that of cytotoxic T lymphocytes (∼20%) remained unaltered by forced exercise preconditioning. These data do not support an exercise-inducible shift in Th1:Th2 cell bias. Rather, preconditioning with forced exercise elicits a sustained attenuation of EAN severity, in part, by altering the composition and egress of autoreactive proinflammatory (Th1) lymphocytes from draining lymph nodes. PMID:26186926
Hypoxic preconditioning with cobalt ameliorates hypobaric hypoxia induced pulmonary edema in rat.
Shukla, Dhananjay; Saxena, Saurabh; Purushothaman, Jayamurthy; Shrivastava, Kalpana; Singh, Mrinalini; Shukla, Shirish; Malhotra, Vineet Kumar; Mustoori, Sairam; Bansal, Anju
2011-04-10
Exposure to high altitude results in hypobaric hypoxia, which is considered an acute physiological stress and often leads to high-altitude maladies such as high altitude pulmonary edema (HAPE) and high altitude cerebral edema (HACE). The best way to prevent high-altitude injuries is hypoxic preconditioning, which has potential clinical usefulness and can be mimicked by cobalt chloride. Preconditioning with cobalt has been reported to provide protection in various tissues against ischemic injury. However, the effect of preconditioning with cobalt against high altitude induced pulmonary edema has not been investigated in vivo. Therefore, in the present study, rats pretreated with saline or cobalt (12.5 mg/kg body weight) for 7 days were exposed to hypobaric hypoxia of 9,142 m for 5 h at 24 °C. Formation of pulmonary edema was assessed by measuring transvascular leakage of sodium fluorescein dye and lung water content. Total protein content, albumin content, vascular endothelial growth factor (VEGF) and cytokine levels were measured in bronchoalveolar lavage fluid. Expression of HO-1, MT, NF-κB DNA binding activity and lung tissue pathology were evaluated to determine the effect of preconditioning on HAPE. The hypobaric hypoxia-induced increases in transvascular leakage of sodium fluorescein dye, lung water content, lavage total protein, albumin, VEGF levels, pro-inflammatory cytokine levels, tissue expression of cell adhesion molecules and NF-κB DNA binding activity were reduced significantly after hypoxic preconditioning with cobalt. Expression of the anti-inflammatory proteins HO-1 and MT, as well as TGF-β and IL-6, was increased after hypoxic preconditioning. These data suggest that hypoxic preconditioning with cobalt has a protective effect against HAPE. PMID:21296072
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Lung, Shu
2009-01-01
Modern airplane design is a multidisciplinary task which combines several disciplines such as structures, aerodynamics, flight controls, and sometimes heat transfer. Historically, analytical and experimental investigations concerning the interaction of the elastic airframe with aerodynamic and inertial loads have been conducted during the design phase to determine the existence of aeroelastic instabilities, so-called flutter. With the advent and increased usage of flight control systems, there is also a likelihood of instabilities caused by the interaction of the flight control system and the aeroelastic response of the airplane, known as aeroservoelastic instabilities. An in-house code, MPASES (Ref. 1), modified from PASES (Ref. 2), is a general-purpose digital computer program for the analysis of the closed-loop stability problem. This program used subroutines from the International Mathematical and Statistical Library (IMSL) (Ref. 3) to compute all of the real and/or complex conjugate pairs of eigenvalues of the Hessenberg matrix. For a high-fidelity configuration, these aeroelastic system matrices are large, and computing all eigenvalues would be time consuming. A subspace iteration method (Ref. 4) for complex eigenvalue problems with nonsymmetric matrices has been formulated and incorporated into the modified program for aeroservoelastic stability (the MPASES code). The subspace iteration method solves only for the lowest p eigenvalues and corresponding eigenvectors for aeroelastic and aeroservoelastic analysis. In general, p ranges from 10 for a wing flutter analysis to 50 for an entire-aircraft flutter analysis. The application of this newly incorporated code is an experiment known as the Aerostructures Test Wing (ATW), which was designed by the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center, Edwards, California, to research aeroelastic instabilities. Specifically, this experiment was used to study an instability
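As a hedged illustration of the general idea (not the MPASES/IMSL implementation), inverse subspace iteration with a Rayleigh-Ritz projection recovers the p smallest-magnitude eigenvalues of a nonsymmetric matrix; the triangular test matrix and iteration count below are assumptions:

```python
import numpy as np

def subspace_iteration(A, p, iters=200, seed=0):
    """Lowest-magnitude p eigenvalues of a (nonsymmetric) matrix A by
    inverse subspace iteration with a Rayleigh-Ritz projection."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
    for _ in range(iters):
        Z = np.linalg.solve(A, Q)      # amplifies small-magnitude modes
        Q, _ = np.linalg.qr(Z)         # re-orthonormalize the subspace
    w = np.linalg.eigvals(Q.conj().T @ A @ Q)
    return w[np.argsort(np.abs(w))]

# Nonsymmetric test matrix with known eigenvalues 1..50 on the diagonal
n = 50
rng = np.random.default_rng(1)
A = np.triu(0.1 * rng.standard_normal((n, n)), k=1) + np.diag(np.arange(1.0, n + 1))
w = subspace_iteration(A, p=3)
```

In practice one factors A once (LU) instead of calling `solve` every sweep, and for flutter problems the shift would target the frequency range of interest; this sketch only shows why computing p eigenpairs is far cheaper than a full eigendecomposition.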
NASA Astrophysics Data System (ADS)
Russakoff, Arthur; Li, Yonghui; He, Shenglai; Varga, Kalman
2016-05-01
Time-dependent Density Functional Theory (TDDFT) has become successful for its balance of economy and accuracy. However, the application of TDDFT to large systems or long time scales remains computationally prohibitively expensive. In this paper, we investigate the numerical stability and accuracy of two subspace propagation methods to solve the time-dependent Kohn-Sham equations with finite and periodic boundary conditions. The bases considered are the Lanczos basis and the adiabatic eigenbasis. The results are compared to a benchmark fourth-order Taylor expansion of the time propagator. Our results show that it is possible to use larger time steps with the subspace methods, leading to computational speedups by a factor of 2-3 over Taylor propagation. Accuracy is found to be maintained for certain energy regimes and small time scales. PMID:27250297
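A minimal sketch of the Lanczos-subspace propagator the abstract refers to, written for a generic Hermitian Hamiltonian; the random matrix, time step, and subspace size are illustrative assumptions, not the authors' TDDFT setup:

```python
import numpy as np
from scipy.linalg import expm

def lanczos_expm(H, v, dt, m=25):
    """Approximate exp(-1j*dt*H) @ v in an m-dimensional Lanczos subspace."""
    n = len(v)
    beta0 = np.linalg.norm(v)
    V = np.zeros((n, m), dtype=complex)
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = v / beta0
    for j in range(m):
        w = H @ V[:, j]
        alpha[j] = np.vdot(V[:, j], w).real      # real for Hermitian H
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    e1 = np.zeros(m)
    e1[0] = 1.0
    # exponentiate the small tridiagonal matrix and lift back to full space
    return beta0 * (V @ (expm(-1j * dt * T) @ e1))

# Random Hermitian "Hamiltonian" (illustrative, not a Kohn-Sham operator)
rng = np.random.default_rng(0)
n = 100
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (B + B.conj().T) / 2
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi = lanczos_expm(H, v, dt=0.05)
```

Because the propagator is applied inside a small Krylov subspace, each step costs only m matrix-vector products plus a dense m × m exponential, which is what permits the larger time steps reported above.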
Jankovic, Marko; Ogawa, Hidemitsu
2003-08-01
This paper presents one possible implementation of a transformation that performs a linear mapping to a lower-dimensional subspace. The principal component subspace is the one analyzed. The idea implemented in this paper is a generalization of the recently proposed infinity OH neural method for principal component extraction. The calculations in the newly proposed method are performed locally, a feature which is usually considered desirable from the biological point of view. Compared to some other well-known methods, the proposed synaptic efficacy learning rule requires less information about the values of the other efficacies to make a single efficacy modification. Synaptic efficacies are modified by implementation of a Modulated Hebb-type (MH) learning rule. A slightly modified MH algorithm, named the Modulated Hebb-Oja (MHO) algorithm, is also introduced. The structural similarity of the proposed network to part of the retinal circuit is presented as well. PMID:12964209
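The MH/MHO rules themselves are not reproduced here; their classic relative, Oja's local Hebbian rule, illustrates how a single-unit update using only local information extracts the first principal direction (the synthetic data and learning rate are illustrative assumptions):

```python
import numpy as np

def oja_first_pc(X, eta=0.005, epochs=20, seed=0):
    """Oja's local Hebbian rule for the first principal direction."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                       # neuron output
            w += eta * y * (x - y * w)      # Hebbian growth + local decay
    return w / np.linalg.norm(w)

# Synthetic data with a dominant direction u
rng = np.random.default_rng(1)
u = np.array([3.0, 1.0, 0.0, -1.0, 2.0])
u /= np.linalg.norm(u)
X = np.outer(rng.standard_normal(2000), 3.0 * u) + 0.3 * rng.standard_normal((2000, 5))
w = oja_first_pc(X)
```

The decay term `-eta * y**2 * w` keeps the weight vector normalized without any global operation, which is the sense in which such rules are "local" and biologically plausible.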
Resveratrol preconditioning protects against cerebral ischemic injury via Nrf2
Narayanan, Srinivasan V.; Dave, Kunjan R.; Saul, Isa; Perez-Pinzon, Miguel A.
2015-01-01
Background and Purpose: Nuclear erythroid 2-related factor 2 (Nrf2) is an astrocyte-enriched transcription factor that has previously been shown to upregulate cellular antioxidant systems in response to ischemia. While resveratrol preconditioning (RPC) has emerged as a potential neuroprotective therapy, the involvement of Nrf2 in RPC-induced neuroprotection and mitochondrial reactive oxygen species (ROS) production following cerebral ischemia remains unclear. The goal of our study was to determine the contribution of Nrf2 to RPC and its effects on mitochondrial function. Methods: We used rodent astrocyte cultures and an in vivo stroke model with RPC. An Nrf2 DNA-binding ELISA and protein analysis via Western blotting of downstream Nrf2 targets were performed to determine RPC-induced activation of Nrf2 in rat and mouse astrocytes. Following RPC, mitochondrial function was determined by measuring ROS production and mitochondrial respiration in both wild-type (WT) and Nrf2−/− mice. Infarct volume was measured to determine neuroprotection, while protein levels were measured by immunoblotting. Results: We report that Nrf2 is activated by RPC in rodent astrocyte cultures, and that loss of Nrf2 reduced RPC-mediated neuroprotection in a mouse model of focal cerebral ischemia. In addition, we observed that wild-type and Nrf2−/− cortical mitochondria exhibited increased uncoupling and ROS production following RPC treatments. Finally, Nrf2−/− astrocytes exhibited decreased mitochondrial antioxidant expression and were unable to upregulate cellular antioxidants following RPC treatment. Conclusion: Nrf2 contributes to RPC-induced neuroprotection through maintaining mitochondrial coupling and antioxidant protein expression. PMID:25908459
An overview of NSPCG: A nonsymmetric preconditioned conjugate gradient package
NASA Astrophysics Data System (ADS)
Oppe, Thomas C.; Joubert, Wayne D.; Kincaid, David R.
1989-05-01
The most recent research-oriented software package developed as part of the ITPACK Project is called "NSPCG" since it contains many nonsymmetric preconditioned conjugate gradient procedures. It is designed to solve large sparse systems of linear algebraic equations by a variety of different iterative methods. One of the main purposes for the development of the package is to provide a common modular structure for research on iterative methods for nonsymmetric matrices. Another purpose for the development of the package is to investigate the suitability of several iterative methods for vector computers. Since the vectorizability of an iterative method depends greatly on the matrix structure, NSPCG allows great flexibility in the operator representation. The coefficient matrix can be passed in one of several different matrix data storage schemes. These sparse data formats allow matrices with a wide range of structures from highly structured ones such as those with all nonzeros along a relatively small number of diagonals to completely unstructured sparse matrices. Alternatively, the package allows the user to call the accelerators directly with user-supplied routines for performing certain matrix operations. In this case, one can use the data format from an application program and not be required to copy the matrix into one of the package formats. This is particularly advantageous when memory space is limited. Some of the basic preconditioners that are available are point methods such as Jacobi, Incomplete LU Decomposition and Symmetric Successive Overrelaxation as well as block and multicolor preconditioners. The user can select from a large collection of accelerators such as Conjugate Gradient (CG), Chebyshev (SI, for semi-iterative), Generalized Minimal Residual (GMRES), Biconjugate Gradient Squared (BCGS) and many others. The package is modular so that almost any accelerator can be used with almost any preconditioner.
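In the same spirit as NSPCG's preconditioner/accelerator pairing (though using SciPy rather than the package itself), an incomplete LU factorization can be wrapped as a preconditioning operator for a GMRES accelerator; the nonsymmetric tridiagonal test matrix and drop tolerance are assumptions for illustration:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Nonsymmetric (convection-diffusion-like) sparse test system
n = 400
A = sp.diags([-1.3, 2.2, -0.7], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# Incomplete LU factorization exposed as a linear operator, so the
# accelerator never needs M in factored or explicit matrix form
ilu = spla.spilu(A, drop_tol=1e-5, fill_factor=10)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M, atol=1e-10)
```

The operator interface mirrors NSPCG's design point: the accelerator only requires the action of the preconditioner on a vector, so users can supply their own matrix format and routines without copying data into a package-specific storage scheme.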
Effects of ischemic preconditioning on short-duration cycling performance.
Cruz, Rogério Santos de Oliveira; de Aguiar, Rafael Alves; Turnes, Tiago; Salvador, Amadeo Félix; Caputo, Fabrizio
2016-08-01
It has been demonstrated that ischemic preconditioning (IPC) improves endurance performance. However, the potential benefits during anaerobic events and the mechanism(s) underlying these benefits remain unclear. Fifteen recreational cyclists were assessed to evaluate the effects of IPC of the upper thighs on anaerobic performance, skeletal muscle activation, and metabolic responses during a 60-s sprint performance. After an incremental test and a familiarization visit, subjects were randomly assigned in visits 3 and 4 to a performance protocol preceded by intermittent bilateral cuff inflation (4 × (5 min of blood flow restriction + 5 min reperfusion)) at either 220 mm Hg (IPC) or 20 mm Hg (control). To increase data reliability, each intervention was replicated, also in random order. In addition to the mean power output, the pulmonary oxygen uptake, blood lactate kinetics, and quadriceps electromyograms (EMGs) were analyzed during performance and throughout 45 min of passive recovery. After IPC, performance was improved by 2.1% compared with control (95% confidence interval of 0.8% to 3.3%, P = 0.001), followed by increases in (i) the accumulated oxygen deficit, (ii) the amplitude of blood lactate kinetics, (iii) the total amount of oxygen consumed during recovery, and (iv) the overall EMG amplitude (P < 0.05). In addition, the ratio between EMG and power output was higher during the final third of performance after IPC (P < 0.05). These results suggest increased skeletal muscle activation and a higher anaerobic contribution as the ultimate responses of IPC on short-term exercise performance. PMID:27404398
Exercise preconditioning attenuates pressure overload-induced pathological cardiac hypertrophy
Xu, Tongyi; Tang, Hao; Zhang, Ben; Cai, Chengliang; Liu, Xiaohong; Han, Qingqi; Zou, Liangjian
2015-01-01
Pathological cardiac hypertrophy, a common response of the heart to a variety of cardiovascular diseases, is typically associated with myocyte remodeling, fibrotic replacement and cardiac dysfunction. Exercise preconditioning (EP) increases the myocardial mechanical load and enhances tolerance of cardiac ischemia-reperfusion injury (IRI); however, its role in pathological cardiac hypertrophy is less well reported. To determine the effect of EP in pathological cardiac hypertrophy, male 10-wk-old Sprague-Dawley rats (n=30) were subjected to 4 weeks of EP followed by 4-8 weeks of pressure overload (transverse aortic constriction, TAC) to induce pathological remodeling. TAC in untrained controls (n=30) led to pathological cardiac hypertrophy and depressed systolic function. We observed that left ventricular wall thickness in end diastole, heart size, heart weight-to-body weight ratio, heart weight-to-tibia length ratio, cross-sectional area of cardiomyocytes and the reactivation of fetal genes (atrial natriuretic peptide and brain natriuretic peptide) were markedly increased, while left ventricular internal dimension at end-diastole and systolic function were significantly decreased by TAC at 4 wks after operation (P < 0.01), all of which were effectively inhibited by EP treatment (P < 0.05); the differences in these parameters were smaller at 8 wks after operation. Furthermore, EP treatment inhibited degradation of IκBα, decreased NF-κB p65 subunit levels in the nuclear fraction, and reduced IL2 levels in the myocardium of rats subjected to TAC. EP can effectively attenuate pathological cardiac hypertrophic responses induced by TAC, possibly through inhibition of degradation of IκB and blockade of the NF-κB signaling pathway in the early stage of pathological cardiac hypertrophy. PMID:25755743
Ischemic preconditioning reduces hemodynamic response during metaboreflex activation.
Mulliri, Gabriele; Sainas, Gianmarco; Magnani, Sara; Palazzolo, Girolamo; Milia, Nicola; Orrù, Andrea; Roberto, Silvana; Marongiu, Elisabetta; Milia, Raffaele; Crisafulli, Antonio
2016-05-01
Ischemic preconditioning (IP) has been shown to improve exercise performance and to delay fatigue. However, the precise mechanisms through which IP operates remain elusive. It has been hypothesized that IP lowers the sensation of fatigue by reducing the discharge of group III and IV nerve endings, which also regulate hemodynamics during the metaboreflex. We hypothesized that IP reduces the blood pressure response during the metaboreflex. Fourteen healthy males (age between 25 and 48 yr) participated in this study. They underwent the following randomly assigned protocol: a postexercise muscle ischemia (PEMI) test, during which the metaboreflex was elicited after dynamic handgrip; a control exercise recovery session (CER) test; and a PEMI after IP (IP-PEMI) test. IP was obtained by occluding forearm circulation for three cycles of 5 min spaced by 5 min of reperfusion. Hemodynamics were evaluated by echocardiography and impedance cardiography. The main result was that after IP the mean arterial pressure response was reduced compared with the PEMI test (means ± SD +3.37 ± 6.41 vs. +9.16 ± 7.09 mmHg, respectively). This was the consequence of an impaired venous return that reduced the stroke volume during the IP-PEMI more than during the PEMI test (-1.43 ± 15.35 vs. +10.28 ± 10.479 ml, respectively). It was concluded that during the metaboreflex, IP affects hemodynamics mainly because it impairs the capacity to augment venous return and to recruit the cardiac preload reserve. It was hypothesized that this is the consequence of an increased nitric oxide production, which reduces the ability to constrict venous capacitance vessels. PMID:26936782
Glaciations in response to climate variations preconditioned by evolving topography.
Pedersen, Vivi Kathrine; Egholm, David Lundbek
2013-01-10
Landscapes modified by glacial erosion show a distinct distribution of surface area with elevation (hypsometry). In particular, the height of these regions is influenced by climatic gradients controlling the altitude where glacial and periglacial processes are the most active, and as a result, surface area is focused just below the snowline altitude. Yet the effect of this distinct glacial hypsometric signature on glacial extent and therefore on continued glacial erosion has not previously been examined. Here we show how this topographic configuration influences the climatic sensitivity of Alpine glaciers, and how the development of a glacial hypsometric distribution influences the intensity of glaciations on timescales of more than a few glacial cycles. We find that the relationship between variations in climate and the resulting variation in areal extent of glaciation changes drastically with the degree of glacial modification in the landscape. First, in landscapes with novel glaciations, a nearly linear relationship between climate and glacial area exists. Second, in previously glaciated landscapes with extensive area at a similar elevation, highly nonlinear and rapid glacial expansions occur with minimal climate forcing, once the snowline reaches the hypsometric maximum. Our results also show that erosion associated with glaciations before the mid-Pleistocene transition at around 950,000 years ago probably preconditioned the landscape--producing glacial landforms and hypsometric maxima--such that ongoing cooling led to a significant change in glacial extent and erosion, resulting in more extensive glaciations and valley deepening in the late Pleistocene epoch. We thus provide a mechanism that explains previous observations from exposure dating and low-temperature thermochronology in the European Alps, and suggest that there is a strong topographic control on the most recent Quaternary period glaciations. PMID:23302860
Meclizine Preconditioning Protects the Kidney Against Ischemia-Reperfusion Injury.
Kishi, Seiji; Campanholle, Gabriela; Gohil, Vishal M; Perocchi, Fabiana; Brooks, Craig R; Morizane, Ryuji; Sabbisetti, Venkata; Ichimura, Takaharu; Mootha, Vamsi K; Bonventre, Joseph V
2015-09-01
Global or local ischemia contributes to the pathogenesis of acute kidney injury (AKI). Currently there are no specific therapies to prevent AKI. Potentiation of glycolytic metabolism and attenuation of mitochondrial respiration may decrease cell injury and reduce reactive oxygen species generation from the mitochondria. Meclizine, an over-the-counter anti-nausea and -dizziness drug, was identified in a 'nutrient-sensitized' chemical screen. Pretreatment with 100 mg/kg of meclizine, 17 h prior to ischemia protected mice from IRI. Serum creatinine levels at 24 h after IRI were 0.13 ± 0.06 mg/dl (sham, n = 3), 1.59 ± 0.10 mg/dl (vehicle, n = 8) and 0.89 ± 0.11 mg/dl (meclizine, n = 8). Kidney injury was significantly decreased in meclizine treated mice compared with vehicle group (p < 0.001). Protection was also seen when meclizine was administered 24 h prior to ischemia. Meclizine reduced inflammation, mitochondrial oxygen consumption, oxidative stress, mitochondrial fragmentation, and tubular injury. Meclizine preconditioned kidney tubular epithelial cells, exposed to blockade of glycolytic and oxidative metabolism with 2-deoxyglucose and NaCN, had reduced LDH and cytochrome c release. Meclizine upregulated glycolysis in glucose-containing media and reduced cellular ATP levels in galactose-containing media. Meclizine inhibited the Kennedy pathway and caused rapid accumulation of phosphoethanolamine. Phosphoethanolamine recapitulated meclizine-induced protection both in vitro and in vivo. PMID:26501107
Dynamic preconditioning of the September sea-ice extent minimum
NASA Astrophysics Data System (ADS)
Williams, James; Tremblay, Bruno; Newton, Robert; Allard, Richard
2016-04-01
There has been an increased interest in seasonal forecasting of the sea-ice extent in recent years, in particular the minimum sea-ice extent. We propose a dynamical mechanism, based on winter preconditioning through first-year ice formation, that explains a significant fraction of the variance in the anomaly of the September sea-ice extent from the long-term linear trend. To this end, we use a Lagrangian trajectory model to backtrack the September sea-ice edge to any time during the previous winter and quantify the amount of sea-ice divergence along the Eurasian and Alaskan coastlines as well as the Fram Strait sea-ice export. We find that coastal divergence that occurs later in the winter (March, April and May) is highly correlated with the following September sea-ice extent minimum (r = -0.73). This is because the newly formed first-year ice will melt earlier, allowing other feedbacks (e.g. the ice-albedo feedback) to start amplifying the signal early in the melt season when the solar input is large. We find that the winter mean Fram Strait sea-ice export anomaly is also correlated with the minimum sea-ice extent the following summer. Next we backtrack a synthetic ice edge initialized at the beginning of the melt season (June 1st) in order to develop hindcast models of the September sea-ice extent that do not rely on a priori knowledge of the minimum sea-ice extent. We find that using a multi-variate regression model of the September sea-ice extent anomaly, with coastal divergence and Fram Strait ice export as predictors, reduces the error by 41%. A hindcast model based on the mean DJFMA Arctic Oscillation index alone reduces the error by 24%.
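The paper's data are not available here, but the multi-variate hindcast it describes is ordinary least squares on two predictors; the synthetic anomaly series and coefficients below are purely illustrative stand-ins:

```python
import numpy as np

# Synthetic stand-ins for the two predictors (illustrative values only)
rng = np.random.default_rng(1)
n_years = 30
coastal_div = rng.standard_normal(n_years)   # late-winter coastal divergence anomaly
fram_export = rng.standard_normal(n_years)   # winter Fram Strait export anomaly
extent = -0.8 * coastal_div - 0.4 * fram_export + 0.1 * rng.standard_normal(n_years)

# Multi-variate regression hindcast of the September extent anomaly
X = np.column_stack([np.ones(n_years), coastal_div, fram_export])
coef, *_ = np.linalg.lstsq(X, extent, rcond=None)
pred = X @ coef
explained = 1.0 - np.var(extent - pred) / np.var(extent)
```

The fraction of variance explained by the fit plays the role of the error reduction quoted in the abstract; in a real hindcast the regression would be trained and verified on separate years.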
Analysis and modeling of neural processes underlying sensory preconditioning.
Matsumoto, Yukihisa; Hirashima, Daisuke; Mizunami, Makoto
2013-03-01
Sensory preconditioning (SPC) is a procedure demonstrating learned association between relatively neutral sensory stimuli in the absence of an external reinforcing stimulus, the underlying neural mechanisms of which have remained obscure. We address basic questions about neural processes underlying SPC, including whether neurons that mediate reward or punishment signals in reinforcement learning participate in association between neutral sensory stimuli. In crickets, we have suggested that octopaminergic (OA-ergic) or dopaminergic (DA-ergic) neurons participate in memory acquisition and retrieval in appetitive or aversive conditioning, respectively. Crickets that had been trained to associate an odor (CS2) with a visual pattern (CS1) (phase 1) and then to associate CS1 with water reward or quinine punishment (phase 2) exhibited a significantly increased or decreased preference for CS2, which had never been paired with the US, demonstrating successful SPC. Injection of an OA or DA receptor antagonist at different phases of the SPC training and testing showed that OA-ergic or DA-ergic neurons do not participate in learning of the CS2-CS1 association in phase 1, but that OA-ergic neurons participate in learning in phase 2 and in memory retrieval after appetitive SPC training. We also obtained evidence suggesting that the association between CS2 and US, which should underlie the conditioned response of crickets to CS2, is formed in phase 2, contrary to the standard theory of SPC, which assumes that it occurs in the final test. We propose models of SPC to account for these findings, by extending our model of classical conditioning. PMID:23380289
NASA Astrophysics Data System (ADS)
Baranov, Vitaly; Oseledets, Ivan
2015-11-01
This paper is the first application of the tensor-train (TT) cross approximation procedure for potential energy surface (PES) fitting. In order to reduce the complexity, we combine the TT approach with another technique recently introduced in the field of numerical analysis: an affine transformation of Cartesian coordinates into the active subspaces where the PES has the most variability. The numerical experiments for the water molecule and for the nitrous acid molecule confirm the efficiency of this approach.
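The TT-cross machinery is beyond a short sketch, but the active-subspace step can be illustrated: estimate the average gradient outer-product matrix from samples and take its dominant eigenvectors as the directions of most variability (the toy function, sampling box, and sample count are assumptions):

```python
import numpy as np

def active_subspace(grad_f, dim, k=1, n_samples=2000, seed=0):
    """Dominant directions of variability from sampled gradients."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n_samples, dim))
    G = np.array([grad_f(x) for x in X])
    C = G.T @ G / n_samples            # sample average of grad grad^T
    w, V = np.linalg.eigh(C)           # ascending eigenvalues
    return V[:, ::-1][:, :k], w[::-1]  # leading directions, spectrum

# Toy "PES" varying only along the direction a: f(x) = sin(a . x)
a = np.array([1.0, 2.0, 3.0])
a /= np.linalg.norm(a)
grad = lambda x: np.cos(a @ x) * a     # analytic gradient of sin(a . x)
U, spectrum = active_subspace(grad, dim=3, k=1)
```

An affine change of variables aligned with these eigenvectors concentrates the function's variability in a few coordinates, which is what makes the subsequent low-rank (TT) fit cheap.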
NASA Astrophysics Data System (ADS)
Pagnacco, E.; de Cursi, E. Souza; Sampaio, R.
2016-04-01
This study concerns the computation of frequency responses of linear stochastic mechanical systems through a modal analysis. A new strategy, based on transposing standard deterministic deflated and subspace inverse power methods into the stochastic framework, is introduced via polynomial chaos representation. The applicability and effectiveness of the proposed schemes are demonstrated through three simple application examples and one realistic application example. It is shown that null and repeated-eigenvalue situations are addressed successfully.
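A deterministic analogue of the deflated inverse power methods that the paper transposes into the polynomial-chaos framework can be sketched as follows (the diagonal test matrix and iteration counts are illustrative assumptions):

```python
import numpy as np

def deflated_inverse_power(A, k, iters=300, seed=0):
    """k smallest eigenpairs of symmetric A: inverse power iteration,
    deflating against already-converged eigenvectors each sweep."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    vals, vecs = [], []
    for _ in range(k):
        v = rng.standard_normal(n)
        for _ in range(iters):
            for u in vecs:                     # deflation step
                v = v - (u @ v) * u
            v = np.linalg.solve(A, v)          # inverse power step
            v = v / np.linalg.norm(v)
        vals.append(v @ A @ v)                 # Rayleigh quotient
        vecs.append(v)
    return np.array(vals), np.column_stack(vecs)

# Symmetric test matrix with known spectrum 1..10
A = np.diag(np.arange(1.0, 11.0))
vals, vecs = deflated_inverse_power(A, k=3)
```

In the stochastic version each eigenpair becomes a polynomial-chaos expansion, but the algorithmic skeleton (inverse iteration plus deflation against converged modes, which also handles repeated eigenvalues via a subspace variant) is the same.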
Zhang, Peng; Zhou, Ning; Abdollahi, Ali
2013-09-10
A Generalized Subspace-Least Mean Square (GSLMS) method is presented for accurate and robust estimation of oscillation modes from exponentially damped power system signals. The method is based on the orthogonality of the signal and noise eigenvectors of the signal autocorrelation matrix. Performance of the proposed method is evaluated using Monte Carlo simulation and compared with the Prony method. Test results show that the GSLMS is highly resilient to noise and significantly outperforms the Prony method in tracking power system modes under noisy environments.
Xu, Y; Li, N
2014-09-01
Biological species have produced many simple but efficient rules in their complex and critical survival activities such as hunting and mating. A common feature observed in several biological motion strategies is that the predator only moves along paths in a carefully selected or iteratively refined subspace (or manifold), which might be able to explain why these motion strategies are effective. In this paper, a unified linear algebraic formulation representing such a predator-prey relationship is developed to simplify the construction and refinement process of the subspace (or manifold). Specifically, the following three motion strategies are studied and modified: motion camouflage, constant absolute target direction and local pursuit. The framework constructed based on this varying subspace concept could significantly reduce the computational cost in solving a class of nonlinear constrained optimal trajectory planning problems, particularly for the case with severe constraints. Two non-trivial examples, a ground robot and a hypersonic aircraft trajectory optimization problem, are used to show the capabilities of the algorithms in this new computational framework. PMID:24713876
Sabra, Karim G; Anderson, Shaun D
2014-05-01
Structural echoes of underwater elastic targets, used for detection and classification purposes, can be highly localized in the time-frequency domain and can be aspect-dependent. Hence such structural echoes recorded along a distributed (synthetic) aperture, e.g., using a moving receiver platform, would not meet the stationarity and multiple snapshots requirements of common subspace array processing methods used for denoising array data based on their estimated covariance matrix. To address this issue, this article introduces a subspace array processing method based on the space-time-frequency distribution (STFD) of single-snapshots of non-stationary signals. This STFD is obtained by computing Cohen's class time-frequency distributions between all pairwise combination of the recorded signals along an arbitrary aperture array. This STFD is interpreted as a generalized array covariance matrix which automatically accounts for the inherent coherence across the time-frequency plane of the received nonstationary echoes emanating from the same target. Hence, identifying the signal's subspace from the eigenstructure of this STFD provides a means for denoising these non-stationary structural echoes by spreading the clutter and noise power in the time-frequency domain; as demonstrated here numerically and experimentally using the structural echoes of a thin steel spherical shell measured along a synthetic aperture. PMID:24815264
Eigen nodule: view-based recognition of lung nodule in chest x-ray CT images using subspace method
NASA Astrophysics Data System (ADS)
Nakamura, Yoshihiko; Fukano, Gentaro; Takizawa, Hotaka; Mizuno, Shinji; Yamamoto, Shinji; Matsumoto, Tohru; Tateno, Yukio; Iinuma, Takeshi
2004-05-01
We previously proposed a recognition method for lung nodules based on experimentally selected feature values (such as contrast, circularity, etc.) of the suspicious shadows detected by our Quoit filter. In this paper, we propose a new recognition method for lung nodules that uses the CT values themselves in an ROI (region of interest) as feature values. In the clustering stage, the suspicious shadows are first classified into clusters using principal component theory. The set of CT values in each ROI is regarded as a feature vector, and the eigenvectors and eigenvalues are calculated for each cluster by applying Principal Component Analysis (PCA). The eigenvectors (which we call Eigen Images) corresponding to the 10 largest eigenvalues are utilized as basis vectors for the subspaces of the clusters in the feature space. In the discrimination stage, correlations are measured between the unknown shadow and the subspace spanned by the Eigen Images. If the correlation with the abnormal subspace is large, the suspicious shadow is determined to be abnormal; otherwise, it is determined to be normal. Applying our new method, good results have been obtained.
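The two stages described above (learn a PCA subspace per cluster, then score an unknown vector by its correlation with each subspace) follow the classical subspace method of pattern recognition. A hedged sketch of that generic kernel, with a synthetic toy cluster rather than CT data and all names ours:

```python
import numpy as np

def eigen_subspace(X, k):
    """Top-k principal directions ('eigen images') of the row vectors in X."""
    mean = X.mean(axis=0)
    # Right singular vectors of the centered data = covariance eigenvectors
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return Vt[:k].T, mean                  # (d, k) orthonormal basis, mean

def subspace_similarity(v, basis, mean):
    """Fraction of the centered vector's energy inside the subspace (CLAFIC-style)."""
    u = v - mean
    proj = basis @ (basis.T @ u)           # orthogonal projection onto the subspace
    return np.dot(proj, proj) / np.dot(u, u)

rng = np.random.default_rng(1)
# Toy 'abnormal' cluster lying near a 2-D plane in 10-D, plus a random probe
abnormal = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 10))
abnormal += 0.01 * rng.standard_normal(abnormal.shape)
basis, mean = eigen_subspace(abnormal, 2)
score_in = subspace_similarity(abnormal[0], basis, mean)    # near 1
score_out = subspace_similarity(rng.standard_normal(10), basis, mean)
```

A shadow is then labeled by the subspace (normal or abnormal) with the higher similarity score.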
Wu, Xiao; Shen, Jiong; Li, Yiguo; Lee, Kwang Y
2014-05-01
This paper develops a novel data-driven fuzzy modeling strategy and predictive controller for a boiler-turbine unit using fuzzy clustering and subspace identification (SID) methods. To deal with the nonlinear behavior of the boiler-turbine unit, fuzzy clustering is used to provide an appropriate division of the operation region and develop the structure of the fuzzy model. Then, by combining the input data with the corresponding fuzzy membership functions, the SID method is extended to extract the local state-space model parameters. Owing to the advantages of both methods, the resulting fuzzy model can represent the boiler-turbine unit very closely, and a fuzzy model predictive controller is designed based on this model. As an alternative approach, a direct data-driven fuzzy predictive control is also developed following the same clustering and subspace methods, where intermediate subspace matrices developed during the identification procedure are utilized directly as the predictor. Simulation results show the advantages and effectiveness of the proposed approach. PMID:24559835
NASA Astrophysics Data System (ADS)
Chang, Chein-I.; Du, Qian
1999-12-01
Determination of the Intrinsic Dimensionality (ID) of remotely sensed imagery has been a challenging problem. For multispectral imagery it may be solvable by Principal Components Analysis (PCA) due to the small number of spectral bands, which implies that the ID is also small. However, the PCA method may not be effective when applied to hyperspectral images. This may be due to the fact that a high spectral-resolution hyperspectral sensor also extracts many unknown interfering signatures in addition to endmember signatures. Thus, determining the ID of hyperspectral imagery is more problematic than for multispectral imagery. This paper presents a Neyman-Pearson detection theory-based eigen analysis for determining the ID of hyperspectral imagery; in particular, a new approach referred to as the Noise Subspace Projection (NSP)-based eigen-thresholding method. It is derived from a noise-whitening process coupled with a Neyman-Pearson detector. The former estimates the noise covariance matrix, which is used to whiten the data sample correlation matrix, whereas the latter converts the problem of determining the ID into a Neyman-Pearson decision, with Receiver Operating Characteristics (ROC) analysis used as a thresholding technique to estimate the ID. To demonstrate the effectiveness of the proposed method, AVIRIS data are used in experiments.
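The whiten-then-threshold idea can be sketched generically: after whitening by the noise covariance, pure-noise eigenvalues cluster near 1, so eigenvalues clearly above a cutoff are attributed to signal. The fixed cutoff below is our simplified stand-in for the paper's Neyman-Pearson/ROC test, and the synthetic data and names are ours.

```python
import numpy as np

def estimate_id(R, Rn, thresh=1.5):
    """Estimate intrinsic dimensionality by noise-whitened eigen-thresholding.

    R  : sample correlation matrix of the data
    Rn : estimated noise covariance matrix
    """
    w, V = np.linalg.eigh(Rn)
    Fm12 = V @ np.diag(w ** -0.5) @ V.T    # noise-whitening matrix Rn^{-1/2}
    ev = np.linalg.eigvalsh(Fm12 @ R @ Fm12)
    return int(np.sum(ev > thresh))        # eigenvalues well above the noise floor

rng = np.random.default_rng(0)
d, n, k = 8, 5000, 3
S = (rng.standard_normal((n, k)) @ rng.standard_normal((k, d))) * 2.0  # 3 signal dirs
X = S + rng.standard_normal((n, d))        # additive white noise
R = X.T @ X / n
Rn = np.eye(d)                             # noise covariance assumed known here
id_est = estimate_id(R, Rn)                # recovers 3
```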
NASA Astrophysics Data System (ADS)
Song, Xue-Ke; Zhang, Hao; Ai, Qing; Qiu, Jing; Deng, Fu-Guo
2016-02-01
By using the transitionless quantum driving algorithm (TQDA), we present an efficient scheme for shortcuts to holonomic quantum computation (HQC). It works in a decoherence-free subspace (DFS), and the adiabatic process can be sped up in the shortest possible time. More interestingly, we give a physical implementation of our shortcuts to HQC with nitrogen-vacancy centers in diamonds dispersively coupled to a whispering-gallery-mode microsphere cavity. It can be efficiently realized by appropriately controlling the frequencies of the external laser pulses. Our scheme also has good scalability to more qubits. Different from previous works, we first use the TQDA to realize universal HQC in a DFS, including not only two noncommuting accelerated single-qubit holonomic gates but also an accelerated two-qubit holonomic controlled-phase gate, which provides the necessary shortcuts for the complete set of gates required for universal quantum computation. Moreover, our experimentally realizable shortcuts require only two-body interactions, not four-body ones, and they work in the dispersive regime, which greatly relaxes the difficulty of their physical implementation in experiment. Our numerical calculations show that the present scheme is robust against decoherence with current experimental parameters.
Subspace Compressive GLRT Detector for MIMO Radar in the Presence of Clutter
Bolisetti, Siva Karteek; Patwary, Mohammad; Ahmed, Khawza; Soliman, Abdel-Hamid; Abdel-Maguid, Mohamed
2015-01-01
The problem of optimising the target detection performance of MIMO radar in the presence of clutter is considered. The increased false alarm rate which is a consequence of the presence of clutter returns is known to seriously degrade the target detection performance of the radar target detector, especially under low SNR conditions. In this paper, a mathematical model is proposed to optimise the target detection performance of a MIMO radar detector in the presence of clutter. The number of samples that are required to be processed by a radar target detector regulates the amount of processing burden while achieving a given detection reliability. While Subspace Compressive GLRT (SSC-GLRT) detector is known to give optimised radar target detection performance with reduced computational complexity, it however suffers a significant deterioration in target detection performance in the presence of clutter. In this paper we provide evidence that the proposed mathematical model for SSC-GLRT detector outperforms the existing detectors in the presence of clutter. The performance analysis of the existing detectors and the proposed SSC-GLRT detector for MIMO radar in the presence of clutter are provided in this paper. PMID:26495422
Combinatorial chromatin modification patterns in the human genome revealed by subspace clustering
Ucar, Duygu; Hu, Qingyang; Tan, Kai
2011-01-01
Chromatin modifications, such as post-translational modification of histone proteins and incorporation of histone variants, play an important role in regulating gene expression. Joint analyses of multiple histone modification maps are starting to reveal combinatorial patterns of modifications that are associated with functional DNA elements, providing support to the ‘histone code’ hypothesis. However, due to the lack of analytical methods, only a small number of chromatin modification patterns have been discovered so far. Here, we introduce a scalable subspace clustering algorithm, coherent and shifted bicluster identification (CoSBI), to exhaustively identify the set of combinatorial modification patterns across a given epigenome. Performance comparisons demonstrate that CoSBI can generate biclusters with higher intra-cluster coherency and biological relevance. We apply our algorithm to a compendium of 39 genome-wide chromatin modification maps in human CD4+ T cells. We identify 843 combinatorial patterns that recur at >0.1% of the genome. A total of 19 chromatin modifications are observed in the combinatorial patterns, 10 of which occur in more than half of the patterns. We also identify combinatorial modification signatures for eight classes of functional DNA elements. Application of CoSBI to epigenome maps of different cells and developmental stages will aid in understanding how chromatin structure helps regulate gene expression. PMID:21266477
NASA Astrophysics Data System (ADS)
Asavaskulkiet, Krissada
2014-01-01
This paper proposes a novel face super-resolution reconstruction (hallucination) technique for the YCbCr color space. The underlying idea is to learn an error regression model together with multi-linear principal component analysis (MPCA). In this hallucination framework, color face images are represented in YCbCr space, and to reduce the time complexity of color face hallucination they are naturally described as tensors (multi-linear arrays). In addition, error regression analysis is used to obtain an error estimate from the existing low-resolution (LR) image in tensor space. In the learning stage, the reconstruction errors that MPCA makes on the training dataset are collected, and regression analysis captures the relationship between the input and this error. In the hallucination stage, a face is reconstructed by the standard MPCA backprojection method, and the result is then corrected with the estimated error. We show that this hallucination technique is suitable for color face images in both RGB and YCbCr space. By combining the MPCA subspace with the error regression model, we can generate photorealistic color face images. Our approach is demonstrated by extensive experiments with high-quality hallucinated color faces; comparison with existing algorithms shows the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Bajla, Ivan; Soukup, Daniel
2008-02-01
Non-negative matrix factorization (NMF) of an input data matrix into a matrix of basis vectors and a matrix of encoding coefficients is a subspace representation method that has attracted the attention of researchers in pattern recognition in recent years. We explored crucial aspects of NMF in extensive recognition experiments with the ORL database of faces, whose images contain intuitively clear parts constituting the whole. By fundamentally changing the structure of the learning stage and formulating a separate NMF problem for each a priori given part, we developed a novel modular NMF algorithm. Although this algorithm provides uniquely separated basis vectors that code individual face parts in accordance with the parts-based principle of the NMF methodology applied to object recognition problems, the significant improvement of recognition rates for occluded parts predicted in several papers was not achieved. We claim that using the parts-based concept in NMF as a basis for solving recognition problems with occluded objects has not been justified.
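The factorization V ≈ WH with non-negative factors that this abstract builds on is commonly computed with Lee-Seung multiplicative updates; a minimal sketch on random data (all names ours, not the authors' modular variant):

```python
import numpy as np

def nmf(V, r, n_iter=500, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F, V >= 0."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1           # strictly positive initialization
    H = rng.random((r, m)) + 0.1
    eps = 1e-12                            # guard against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update encodings
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis vectors
    return W, H

rng = np.random.default_rng(1)
V = rng.random((20, 15))                   # non-negative data matrix
W, H = nmf(V, r=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The updates preserve non-negativity by construction, which is what yields the parts-based (additive) basis vectors discussed in the abstract.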
Hunter, David W.; Hibbard, Paul B.
2016-01-01
An influential theory of mammalian vision, known as the efficient coding hypothesis, holds that early stages of the visual cortex attempt to form an efficient coding of ecologically valid stimuli. Although numerous authors have successfully modelled some aspects of early vision mathematically, closer inspection has found substantial discrepancies between the predictions of some of these models and observations of neurons in the visual cortex. In particular, analysis of linear-non-linear models of simple cells using Independent Component Analysis has found a strong bias towards features on the horopter. In order to investigate the link between the information content of binocular images, mathematical models of complex cells and physiological recordings, we applied Independent Subspace Analysis to binocular image patches in order to learn a set of complex-cell-like models. We found that these complex-cell-like models exhibited a wide range of binocular disparity-discriminability, although only a minority exhibited high binocular discrimination scores. However, in common with the linear-non-linear model case, we found that feature detection was limited to the horopter, suggesting that current mathematical models are limited in their ability to explain the functionality of the visual cortex. PMID:26982184
Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
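The population-based proposal at the core of this family of samplers (each chain moves along the difference of two other chains, so the proposal adapts to the posterior's scale and orientation) can be sketched minimally. This is a bare differential-evolution MCMC in the style DREAM extends, not DREAM itself; the toy target and names are ours.

```python
import numpy as np

def de_mc(logp, n_chains, n_iter, d, gamma=None, eps=1e-6, seed=0):
    """Minimal differential-evolution MCMC (ter Braak-style)."""
    rng = np.random.default_rng(seed)
    gamma = gamma or 2.38 / np.sqrt(2 * d)     # standard DE-MC jump scale
    X = rng.standard_normal((n_chains, d))     # initial population of chains
    lp = np.array([logp(x) for x in X])
    samples = []
    for _ in range(n_iter):
        for i in range(n_chains):
            # Propose along the difference of two randomly chosen other chains
            a, b = rng.choice([j for j in range(n_chains) if j != i], 2, replace=False)
            prop = X[i] + gamma * (X[a] - X[b]) + eps * rng.standard_normal(d)
            lp_prop = logp(prop)
            if np.log(rng.random()) < lp_prop - lp[i]:   # Metropolis accept
                X[i], lp[i] = prop, lp_prop
        samples.append(X.copy())
    return np.concatenate(samples[n_iter // 2:])         # discard burn-in half

# Toy target: standard normal in 2-D
target = lambda x: -0.5 * np.dot(x, x)
draws = de_mc(target, n_chains=10, n_iter=2000, d=2)
```

DREAM adds, among other things, updates restricted to randomized coordinate subspaces and outlier-chain handling on top of this kernel.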
Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.
Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David
2016-02-01
In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html. PMID:26701675
Data processing in subspace identification and modal parameter identification of an arch bridge
NASA Astrophysics Data System (ADS)
Fan, Jiangling; Zhang, Zhiyi; Hua, Hongxing
2007-05-01
A data-processing method concerning subspace identification is presented to improve the identification of modal parameters from measured response data only. The identification procedure of this method consists of two phases, first estimating frequencies and damping ratios and then extracting mode shapes. Elements of Hankel matrices are specially rearranged to enhance the identifiability of weak characteristics and the robustness to noise contamination. Furthermore, an alternative stabilisation diagram in combination with component energy index is adopted to effectively separate spurious and physical modes. On the basis of identified frequencies, mode shapes are extracted from the signals obtained by filtering measured data with a series of band-pass filters. The proposed method was tested with a concrete-filled steel tubular arch bridge, which was subjected to ambient excitation. Gabor representation was also employed to process measured signals before conducting parameter identification. Identified results show that the proposed method can give a reliable separation of spurious and physical modes as well as accurate estimates of weak modes only from response signals.
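The Hankel-matrix machinery behind output-only subspace identification can be illustrated on a single free-decay signal: stack the signal into a Hankel matrix, take the dominant singular subspace, and exploit its shift invariance to recover the system poles (an ERA/ESPRIT-style kernel). This is a generic sketch on a synthetic signal, not the authors' rearrangement or stabilisation-diagram procedure; all names are ours.

```python
import numpy as np

def modal_from_signal(y, order, rows=30, dt=1.0):
    """Estimate modal frequencies and damping ratios of a free-decay signal."""
    cols = len(y) - rows
    H = np.array([y[i:i + cols] for i in range(rows)])   # Hankel matrix
    U, s, _ = np.linalg.svd(H, full_matrices=False)
    Ob = U[:, :order]                      # range of the observability matrix
    # Shift invariance: Ob[:-1] A = Ob[1:]  ->  least-squares for A
    A = np.linalg.pinv(Ob[:-1]) @ Ob[1:]
    mu = np.linalg.eigvals(A)              # discrete-time poles
    lam = np.log(mu) / dt                  # continuous-time poles
    freqs = np.abs(lam.imag) / (2 * np.pi)
    damping = -lam.real / np.abs(lam)
    return freqs, damping

dt = 0.01
t = np.arange(400) * dt
y = np.exp(-0.2 * t) * np.cos(2 * np.pi * 2.0 * t)   # 2 Hz, lightly damped mode
freqs, zeta = modal_from_signal(y, order=2, dt=dt)
```

With noisy multi-channel data the same pole extraction is run at increasing model orders, and a stabilisation diagram (as in the paper) separates physical from spurious modes.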
Single-Photon Depth Imaging Using a Union-of-Subspaces Model
NASA Astrophysics Data System (ADS)
Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.
2015-12-01
Light detection and ranging systems reconstruct scene depth from time-of-flight measurements. For low light-level depth imaging applications, such as remote sensing and robot vision, these systems use single-photon detectors that resolve individual photon arrivals. Even so, they must detect a large number of photons to mitigate Poisson shot noise and reject anomalous photon detections from background light. We introduce a novel framework for accurate depth imaging using a small number of detected photons in the presence of an unknown amount of background light that may vary spatially. It employs a Poisson observation model for the photon detections plus a union-of-subspaces constraint on the discrete-time flux from the scene at any single pixel. Together, they enable a greedy signal-pursuit algorithm to rapidly and simultaneously converge on accurate estimates of scene depth and background flux, without any assumptions on spatial correlations of the depth or background flux. Using experimental single-photon data, we demonstrate that our proposed framework recovers depth features with 1.7 cm absolute error, using 15 photons per image pixel and an illumination pulse with 6.7-cm scaled root-mean-square length. We also show that our framework outperforms the conventional pixelwise log-matched filtering, which is a computationally-efficient approximation to the maximum-likelihood solution, by a factor of 6.1 in absolute depth error.
Subspace mapping of the three-dimensional spectral receptive field of macaque MT neurons.
Inagaki, Mikio; Sasaki, Kota S; Hashimoto, Hajime; Ohzawa, Izumi
2016-08-01
Neurons in the middle temporal (MT) visual area are thought to represent the velocity (direction and speed) of motion. Previous studies suggest the importance of both excitation and suppression for creating velocity representation in MT; however, details of the organization of excitation and suppression at the MT stage are not understood fully. In this article, we examine how excitatory and suppressive inputs are pooled in individual MT neurons by measuring their receptive fields in a three-dimensional (3-D) spatiotemporal frequency domain. We recorded the activity of single MT neurons from anesthetized macaque monkeys. To achieve both quality and resolution of the receptive field estimations, we applied a subspace reverse correlation technique in which a stimulus sequence of superimposed multiple drifting gratings was cross-correlated with the spiking activity of neurons. Excitatory responses tended to be organized in a manner representing a specific velocity independent of the spatial pattern of the stimuli. Conversely, suppressive responses tended to be distributed broadly over the 3-D frequency domain, supporting a hypothesis of response normalization. Despite the nonspecific distributed profile, the total summed strength of suppression was comparable to that of excitation in many MT neurons. Furthermore, suppressive responses reduced the bandwidth of velocity tuning, indicating that suppression improves the reliability of velocity representation. Our results suggest that both well-organized excitatory inputs and broad suppressive inputs contribute significantly to the invariant and reliable representation of velocity in MT. PMID:27193321
Characterizing two-timescale nonlinear dynamics using finite-time Lyapunov exponents and subspaces
NASA Astrophysics Data System (ADS)
Mease, K. D.; Topcu, U.; Aykutluğ, E.; Maggia, M.
2016-07-01
Finite-time Lyapunov exponents and subspaces are used to define and diagnose boundary-layer type, two-timescale behavior in the tangent linear dynamics and to determine the associated manifold structure in the flow of a finite-dimensional nonlinear autonomous dynamical system. Two-timescale behavior is characterized by a slow-fast splitting of the tangent bundle for a state space region. The slow-fast splitting is defined using finite-time Lyapunov exponents and vectors, guided by the asymptotic theory of partially hyperbolic sets, with important modifications for the finite-time case; for example, finite-time Lyapunov analysis relies more heavily on the Lyapunov vectors due to their relatively fast convergence compared to that of the corresponding exponents. The splitting is used to characterize and locate points approximately on normally hyperbolic center manifolds via tangency conditions for the vector field. Determining manifolds from tangent bundle structure is more generally applicable than approaches, such as the singular perturbation method, that require special normal forms or other a priori knowledge. The use, features, and accuracy of the approach are illustrated via several detailed examples.
Robust multipixel matched subspace detection with signal-dependent background power
NASA Astrophysics Data System (ADS)
Golikov, Victor; Rodriguez-Blanco, Marco; Lebedeva, Olga
2016-01-01
A modified matched subspace detector (MSD) has been recently proposed for detecting a barely discernible object in an additive Gaussian background clutter using a single pixel in a sequence of digital images. In contrast to this detector designed for the subpixel object, we developed a generalized likelihood ratio approach to the detection of a multipixel object of unknown shape, size, and position in an additive signal-dependent Gaussian background and noise. The proposed detector modifies the MSD by adding the additional term proportional to the square of the difference between the background variances under two statistical hypotheses. The performances of these detectors are evaluated for the example scenario of two multipixel floating objects on the agitated sea surface. The crucial characteristic of the proposed detector is that prior knowledge of the target size, shape, and position is not required. Computer simulation and experimental results have shown that the proposed detector outperforms the MSD, especially in the case of weak and poorly contrasted objects of unknown shape, size, and position.
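The baseline matched subspace detector that this work modifies reduces, for a known signal subspace, to comparing the energy a measurement has inside that subspace against the residual energy. A minimal single-pixel-style sketch on synthetic data (the basis, amplitudes, and names are our illustrative assumptions):

```python
import numpy as np

def msd_statistic(x, S):
    """Matched-subspace detection statistic for measurement x and basis S.

    Ratio of energy explained by range(S) to residual energy; monotone in
    the GLRT when the noise level is known.
    """
    P = S @ np.linalg.pinv(S)              # orthogonal projector onto range(S)
    e_sig = x @ P @ x
    e_res = x @ x - e_sig
    return e_sig / e_res

rng = np.random.default_rng(0)
n, k = 64, 3
S = rng.standard_normal((n, k))            # known signal subspace basis
noise = rng.standard_normal(n)             # H0: noise only
target = S @ np.array([2.0, -1.0, 1.5]) + noise   # H1: subspace signal + noise
t_h1 = msd_statistic(target, S)
t_h0 = msd_statistic(noise, S)             # thresholding t separates H1 from H0
```

The paper's modification adds a term depending on the difference of background variances under the two hypotheses, to handle signal-dependent background power.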
Human detection by quadratic classification on subspace of extended histogram of gradients.
Satpathy, Amit; Jiang, Xudong; Eng, How-Lung
2014-01-01
This paper proposes a quadratic classification approach on the subspace of Extended Histogram of Gradients (ExHoG) for human detection. By investigating the limitations of Histogram of Gradients (HG) and Histogram of Oriented Gradients (HOG), ExHoG is proposed as a new feature for human detection. ExHoG alleviates the problem of discrimination between a dark object against a bright background and vice versa inherent in HG. It also resolves an issue of HOG whereby gradients of opposite directions in the same cell are mapped into the same histogram bin. We reduce the dimensionality of ExHoG using Asymmetric Principal Component Analysis (APCA) for improved quadratic classification. APCA also addresses the asymmetry issue in training sets of human detection where there are much fewer human samples than non-human samples. Our proposed approach is tested on three established benchmarking data sets--INRIA, Caltech, and Daimler--using a modified Minimum Mahalanobis distance classifier. Results indicate that the proposed approach outperforms current state-of-the-art human detection methods. PMID:23708804
NASA Astrophysics Data System (ADS)
Geller, Michael R.; Martinis, John M.; Sornborger, Andrew T.; Stancil, Phillip C.; Pritchett, Emily J.; You, Hao; Galiautdinov, Andrei
2015-06-01
Current quantum computing architectures lack the size and fidelity required for universal fault-tolerant operation, limiting the practical implementation of key quantum algorithms to all but the smallest problem sizes. In this work we propose an alternative method for general-purpose quantum computation that is ideally suited for such "prethreshold" superconducting hardware. Computations are performed in the n -dimensional single-excitation subspace (SES) of a system of n tunably coupled superconducting qubits. The approach is not scalable, but allows many operations in the unitary group SU(n ) to be implemented by a single application of the Hamiltonian, bypassing the need to decompose a desired unitary into elementary gates. This feature makes large, nontrivial quantum computations possible within the available coherence time. We show how to use a programmable SES chip to perform fast amplitude amplification and phase estimation, two versatile quantum subalgorithms. We also show that an SES processor is well suited for Hamiltonian simulation, specifically simulation of the Schrödinger equation with a real but otherwise arbitrary n ×n Hamiltonian matrix. We discuss the utility and practicality of such a universal quantum simulator, and propose its application to the study of realistic atomic and molecular collisions.
Huang, Weimin; Yang, Yongzhong; Lin, Zhiping; Huang, Guang-Bin; Zhou, Jiayin; Duan, Yuping; Xiong, Wei
2014-01-01
This paper presents a new approach to detect and segment liver tumors. The detection and segmentation of liver tumors can be formulized as novelty detection or two-class classification problem. Each voxel is characterized by a rich feature vector, and a classifier using random feature subspace ensemble is trained to classify the voxels. Since Extreme Learning Machine (ELM) has advantages of very fast learning speed and good generalization ability, it is chosen to be the base classifier in the ensemble. Besides, majority voting is incorporated for fusion of classification results from the ensemble of base classifiers. In order to further increase testing accuracy, ELM autoencoder is implemented as a pre-training step. In automatic liver tumor detection, ELM is trained as a one-class classifier with only healthy liver samples, and the performance is compared with two-class ELM. In liver tumor segmentation, a semi-automatic approach is adopted by selecting samples in 3D space to train the classifier. The proposed method is tested and evaluated on a group of patients' CT data, and the experiments show promising results. PMID:25571035
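The Extreme Learning Machine used as the base classifier above has a very simple training rule: hidden-layer weights are drawn at random, and only the output weights are solved analytically by least squares, which is what gives ELM its speed. A hedged sketch on a toy two-class problem (our own data and names, not the liver-CT pipeline):

```python
import numpy as np

def elm_train(X, y, n_hidden=64, seed=0):
    """ELM training: random hidden layer, analytic (pseudoinverse) output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (fixed)
    b = rng.standard_normal(n_hidden)                # random biases (fixed)
    H = np.tanh(X @ W + b)                           # random nonlinear feature map
    beta = np.linalg.pinv(H) @ y                     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (300, 2))
y = np.sign(X[:, 0] * X[:, 1])                       # XOR-like two-class labels
W, b, beta = elm_train(X, y)
acc = np.mean(np.sign(elm_predict(X, W, b, beta)) == y)
```

An ensemble as in the paper would train several such ELMs on random feature subspaces and fuse their labels by majority voting.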
NASA Astrophysics Data System (ADS)
Reynders, Edwin; Maes, Kristof; Lombaert, Geert; De Roeck, Guido
2016-01-01
Identified modal characteristics are often used as a basis for the calibration and validation of dynamic structural models, for structural control, for structural health monitoring, etc. It is therefore important to know their accuracy. In this article, a method for estimating the (co)variance of modal characteristics that are identified with the stochastic subspace identification method is validated for two civil engineering structures. The first structure is a damaged prestressed concrete bridge for which acceleration and dynamic strain data were measured in 36 different setups. The second structure is a mid-rise building for which acceleration data were measured in 10 different setups. There is a good quantitative agreement between the predicted levels of uncertainty and the observed variability of the eigenfrequencies and damping ratios between the different setups. The method can therefore be used with confidence for quantifying the uncertainty of the identified modal characteristics, also when some or all of them are estimated from a single batch of vibration data. Furthermore, the method is seen to yield valuable insight in the variability of the estimation accuracy from mode to mode and from setup to setup: the more informative a setup is regarding an estimated modal characteristic, the smaller is the estimated variance.
Cho, Soojin; Park, Jong-Woong; Sim, Sung-Han
2015-01-01
Wireless sensor networks (WSNs) facilitate a new paradigm to structural identification and monitoring for civil infrastructure. Conventional structural monitoring systems based on wired sensors and centralized data acquisition systems are costly for installation as well as maintenance. WSNs have emerged as a technology that can overcome such difficulties, making deployment of a dense array of sensors on large civil structures both feasible and economical. However, as opposed to wired sensor networks in which centralized data acquisition and processing is common practice, WSNs require decentralized computing algorithms to reduce data transmission due to the limitation associated with wireless communication. In this paper, the stochastic subspace identification (SSI) technique is selected for system identification, and SSI-based decentralized system identification (SDSI) is proposed to be implemented in a WSN composed of Imote2 wireless sensors that measure acceleration. The SDSI is tightly scheduled in the hierarchical WSN, and its performance is experimentally verified in a laboratory test using a 5-story shear building model. PMID:25856325
Super-low Dose Endotoxin Pre-conditioning Exacerbates Sepsis Mortality
Chen, Keqiang; Geng, Shuo; Yuan, Ruoxi; Diao, Na; Upchurch, Zachary; Li, Liwu
2015-01-01
Sepsis mortality varies dramatically in individuals of variable immune conditions, with poorly defined mechanisms. This phenomenon complements the hypothesis that innate immunity may adopt rudimentary memory, as demonstrated in vitro with endotoxin priming and tolerance in cultured monocytes. However, previous in vivo studies only examined the protective effect of endotoxin tolerance in the context of sepsis. In sharp contrast, we report herein that pre-conditioning with super-low or low dose endotoxin lipopolysaccharide (LPS) causes strikingly opposite survival outcomes. Mice pre-conditioned with super-low dose LPS experienced severe tissue damage, inflammation, increased bacterial load in circulation, and elevated mortality when they were subjected to cecal ligation and puncture (CLP). This is in contrast to the well-reported protective phenomenon in CLP mice pre-conditioned with low dose LPS. Mechanistically, we demonstrated that super-low and low dose LPS differentially modulate the formation of neutrophil extracellular traps (NETs) in neutrophils. Instead of the increased ERK activation and NET formation seen in neutrophils pre-conditioned with low dose LPS, we observed significantly reduced ERK activation and compromised NET generation in neutrophils pre-conditioned with super-low dose LPS. Collectively, our findings reveal a mechanism potentially responsible for the dynamic programming of innate immunity in vivo as it relates to sepsis risks. PMID:26029736
Weinberg, Seth H.; Smith, Gregory D.
2012-01-01
Cardiac myocyte calcium signaling is often modeled using deterministic ordinary differential equations (ODEs) and mass-action kinetics. However, spatially restricted “domains” associated with calcium influx are small enough (e.g., 10−17 liters) that local signaling may involve 1–100 calcium ions. Is it appropriate to model the dynamics of subspace calcium using deterministic ODEs or, alternatively, do we require stochastic descriptions that account for the fundamentally discrete nature of these local calcium signals? To address this question, we constructed a minimal Markov model of a calcium-regulated calcium channel and associated subspace. We compared the expected value of fluctuating subspace calcium concentration (a result that accounts for the small subspace volume) with the corresponding deterministic model (an approximation that assumes large system size). When subspace calcium did not regulate calcium influx, the deterministic and stochastic descriptions agreed. However, when calcium binding altered channel activity in the model, the continuous deterministic description often deviated significantly from the discrete stochastic model, unless the subspace volume was unrealistically large and/or the kinetics of the calcium binding were sufficiently fast. This principle was also demonstrated using a physiologically realistic model of calmodulin regulation of L-type calcium channels introduced by Yue and coworkers. PMID:23509597
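The paper's central comparison (a discrete stochastic description of subspace calcium versus a deterministic ODE) can be illustrated with a minimal birth-death sketch. The rates below are invented for illustration only; in this unregulated case, where influx does not depend on the ion count, the stochastic time average agrees with the ODE steady state, matching the abstract's observation for influx that is not calcium-regulated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical birth-death model of the subspace calcium ion count:
# ions enter at constant rate k_in; each ion leaves at rate k_out.
k_in, k_out = 50.0, 1.0          # events per ms (illustrative values)

# Deterministic description: dn/dt = k_in - k_out * n, steady state k_in/k_out.
n_ode = k_in / k_out

# Stochastic description: Gillespie simulation, tracking the time average of n.
n, t, t_end, area = 0, 0.0, 500.0, 0.0
while t < t_end:
    a_in, a_out = k_in, k_out * n          # propensities of the two events
    a_tot = a_in + a_out
    dt = rng.exponential(1.0 / a_tot)      # exponential waiting time
    area += n * min(dt, t_end - t)         # accumulate n weighted by dwell time
    t += dt
    if t < t_end:
        n += 1 if rng.random() < a_in / a_tot else -1

n_mean = area / t_end
# With influx independent of n, the stochastic mean matches the ODE value.
```

When the influx rate is made a function of `n` (calcium-regulated gating), the two descriptions generally diverge, which is the regime the paper analyzes.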
Harada, Yuhei; Noda, Junpei; Yatabe, Rui; Ikezaki, Hidekazu; Toko, Kiyoshi
2016-01-01
A taste sensor that uses lipid/polymer membranes can evaluate aftertastes felt by humans using Change in membrane Potential caused by Adsorption (CPA) measurements. The sensor membrane for evaluating bitterness, which is caused by acidic bitter substances such as iso-alpha acid contained in beer, requires an immersion process in monosodium glutamate (MSG) solution, called "MSG preconditioning". However, what happens to the lipid/polymer membrane during MSG preconditioning is not clear. We therefore carried out three experiments to investigate the changes in the lipid/polymer membrane caused by MSG preconditioning: measurements with the taste sensor, measurements of the amount of the bitterness substance adsorbed onto the membrane, and measurements of the contact angle of the membrane surface. The CPA values increased as the preconditioning process progressed, and became stable after 3 d of preconditioning. The response potentials to the reference solution showed the same tendency during the preconditioning period. The contact angle of the lipid/polymer membrane surface decreased after 7 d of MSG preconditioning; in short, the surface of the lipid/polymer membrane became hydrophilic during MSG preconditioning. The amount of adsorbed iso-alpha acid increased until 5 d of preconditioning and then decreased. In this study, we revealed that the CPA values increased as MSG preconditioning progressed despite the decrease in the amount of iso-alpha acid adsorbed onto the lipid/polymer membrane, indicating that the CPA values increased because the sensor sensitivity was improved by the MSG preconditioning. PMID:26891299
Efficient Multi-Stage Time Marching for Viscous Flows via Local Preconditioning
NASA Technical Reports Server (NTRS)
Kleb, William L.; Wood, William A.; vanLeer, Bram
1999-01-01
A new method has been developed to accelerate the convergence of explicit time-marching, laminar, Navier-Stokes codes through the combination of local preconditioning and multi-stage time marching optimization. Local preconditioning is a technique to modify the time-dependent equations so that all information moves or decays at nearly the same rate, thus relieving the stiffness for a system of equations. Multi-stage time marching can be optimized by modifying its coefficients to account for the presence of viscous terms, allowing larger time steps. We show it is possible to optimize the time marching scheme for a wide range of cell Reynolds numbers for the scalar advection-diffusion equation, and local preconditioning allows this optimization to be applied to the Navier-Stokes equations. Convergence acceleration of the new method is demonstrated through numerical experiments with circular advection and laminar boundary-layer flow over a flat plate.
Preconditioned conjugate gradient technique for the analysis of symmetric anisotropic structures
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Peters, Jeanne M.
1987-01-01
An efficient preconditioned conjugate gradient (PCG) technique and a computational procedure are presented for the analysis of symmetric anisotropic structures. The technique is based on selecting the preconditioning matrix as the orthotropic part of the global stiffness matrix of the structure, with all the nonorthotropic terms set equal to zero. This particular choice of the preconditioning matrix results in reducing the size of the analysis model of the anisotropic structure to that of the corresponding orthotropic structure. The similarities between the proposed PCG technique and a reduction technique previously presented by the authors are identified and exploited to generate from the PCG technique direct measures for the sensitivity of the different response quantities to the nonorthotropic (anisotropic) material coefficients of the structure. The effectiveness of the PCG technique is demonstrated by means of a numerical example of an anisotropic cylindrical panel.
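A generic preconditioned conjugate gradient iteration of the kind this abstract builds on can be sketched as follows. The matrix split below is a hypothetical stand-in for the orthotropic/non-orthotropic decomposition described above: a dominant decoupled part D plays the role of the preconditioner, with a small symmetric coupling added on top.

```python
import numpy as np

def pcg(A, b, M_solve, tol=1e-10, maxiter=200):
    """Preconditioned conjugate gradients for a symmetric positive
    definite A; M_solve(r) applies the inverse of the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Invented SPD system: a dominant decoupled part D (the role played by the
# orthotropic stiffness in the abstract) plus a small symmetric coupling.
rng = np.random.default_rng(0)
n = 100
D = np.diag(rng.uniform(1.0, 10.0, n))
C = rng.normal(scale=0.02, size=(n, n))
A = D + C + C.T                            # symmetric, still positive definite
b = rng.normal(size=n)

# Precondition with the decoupled part only: solving M z = r is a division.
x = pcg(A, b, lambda r: r / np.diag(D))
```

Because M captures the dominant part of A, the preconditioned system is well conditioned and the iteration converges in few steps; the abstract's choice of the orthotropic stiffness as M follows the same logic at the level of the full structural model.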
Does nitric oxide generation contribute to the mechanism of remote ischemic preconditioning?
Petrishchev, N N.; Vlasov, T D.; Sipovsky, V G.; Kurapeev, D I.; Galagudza, M M.
2001-03-01
The protective effect of local or remote ischemic preconditioning (IPC) on subsequent 40-min ischemic and 120-min reperfusion myocardial damage was investigated. Preconditioned rats underwent one cycle of myocardial ischemia/reperfusion consisting of 5-min ischemia produced as a left coronary artery (LCA) occlusion and 5 min of reperfusion. Remote IPC was produced as 15 min of small intestinal ischemia with 15 min of reperfusion, as well as 30 min of limb ischemia with 15 min of reperfusion. A marked protective action was afforded by both IPC protocols, with a more significant effect of local (classic) ischemic preconditioning. Since the protective effect of remote IPC was not abolished by nitric oxide (NO) synthase inhibition with Nω-nitro-L-arginine (L-NNA), it is concluded that NO generation may not be involved in the mechanism of remote IPC. PMID:11228397
Preconditioning for the Navier-Stokes equations with finite-rate chemistry
NASA Technical Reports Server (NTRS)
Godfrey, Andrew G.; Walters, Robert W.; Van Leer, Bram
1993-01-01
The preconditioning procedure for generalized finite-rate chemistry and the proper preconditioning for the one-dimensional Navier-Stokes equations are presented. Eigenvalue stiffness is resolved and convergence-rate acceleration is demonstrated over the entire Mach-number range, from the incompressible to the hypersonic. Specific benefits are realized at low and transonic flow speeds. The extended preconditioning matrix accounts for thermal and chemical non-equilibrium, and its implementation is explained for both explicit and implicit time marching. The effect of higher-order spatial accuracy and various flux splittings is investigated. Numerical analysis reveals the possible theoretical improvements from using preconditioning at all Mach numbers. Numerical results confirm the expectations from the numerical analysis. Representative test cases include flows with previously troublesome embedded high-condition-number regions.
Manchenkov, Tania; Pasillas, Martina P.; Haddad, Gabriel G.; Imam, Farhad B.
2015-01-01
Severe hypoxia is a common cause of major brain, heart, and kidney injury in adults, children, and newborns. However, mild hypoxia can be protective against later, more severe hypoxia exposure via “hypoxic preconditioning,” a phenomenon that is not yet fully understood. Accordingly, we have established and optimized an embryonic zebrafish model to study hypoxic preconditioning. Using a functional genomic approach, we used this zebrafish model to identify and validate five novel hypoxia-protective genes, including irs2, crtc3, and camk2g2, which have been previously implicated in metabolic regulation. These results extend our understanding of the mechanisms of hypoxic preconditioning and affirm the discovery potential of this novel vertebrate hypoxic stress model. PMID:25840431
Maslov, L N; Lishmanov, Iu B; Kolar, F; Portnichenko, A G; Podoksenov, Iu K; Khaliulin, I G; Wang, H; Pei, J M
2010-12-01
The work covers the problem of hypoxic preconditioning (HP) carried out in isolated cardiomyocytes. Papers on delayed HP in vivo are comparatively few, and only single works are devoted to early preconditioning in vivo. It has been established that HP limits necrosis and apoptosis of cardiomyocytes and improves contractility of the isolated heart after ischemia (hypoxia) and reperfusion (reoxygenation). It was found that adenosine was a trigger of HP in vitro. It was proved that NO was a trigger of HP both in vitro and in vivo. It was shown that reactive oxygen species were also triggers of hypoxic preconditioning, and that ERK1/2 and p38 kinase played an important role in delayed HP in vitro. PMID:21473105
Wide-field fluorescence molecular tomography with compressive sensing based preconditioning
Yao, Ruoyang; Pian, Qi; Intes, Xavier
2015-01-01
Wide-field optical tomography based on structured light illumination and detection strategies enables efficient tomographic imaging of large tissues at very fast acquisition speeds. However, the optical inverse problem based on such an instrumental approach is still ill-conditioned. Herein, we investigate the benefit of employing compressive sensing-based preconditioning in wide-field structured illumination and detection approaches. We assess the performance of Fluorescence Molecular Tomography (FMT) when using such preconditioning methods, both in silico and with experimental data. Additionally, we demonstrate that this methodology can be used to select the subset of patterns that provides optimal reconstruction performance. Lastly, we compare preconditioning of data collected using a normal base that offers good experimental SNR against data directly acquired with an optimally designed base. An experimental phantom study is provided to validate the proposed technique. PMID:26713202
Hypoxic preconditioning: effect, mechanism and clinical implication (Part 1).
Lu, Guo-wei; Shao, Guo
2014-11-01
Hypoxic preconditioning (HPC) refers to exposure of organisms, systems, organs, tissues or cells to moderate hypoxia/ischemia that results in resistance to subsequent severe hypoxia/ischemia in tissues and cells. The effects exerted by HPC are well documented. The original local in situ HPC (LiHPC) has now been broadened to remote ectopic organs and tissues (ReHPC) and extended to cross-pluripotential HPC (CpHPC) induced by a variety of stresses other than hypoxia/ischemia, including cancer, for example. We developed a unique animal model of repetitive autohypoxia in adult mice and have studied systematically the effects and mechanisms of HPC in this model in our laboratory since the early 1960s. The tolerance to hypoxia and protection from injury increased significantly in this model. The adult mice behave like hypoxia-intolerant mammalian newborns and hypoxia-tolerant adult animals during their exposure to repetitive autohypoxia. The overall energy supply and demand decreased, the microorganization of the brain was maintained, and spatial learning and memory ability improved rather than deteriorated; detrimental neurochemicals such as free radicals were down-regulated, and beneficial neurochemicals such as adenosine (ADO) and antihypoxic gene(s)/factor(s) (AHGs/AHFs) were up-regulated. Accordingly, we hypothesize that the mechanisms for the tolerance/protective effects of HPC depend fundamentally on energy saving and on brain plasticity in particular. These two major mechanisms are thought to be triggered by exposure to hypoxia/ischemia via oxygen sensing-transduction pathways and HIF-1 initiation cascades. We suggest that HPC is an intrinsic mechanism developed in biological evolution and a novel potential strategy for fighting against hypoxia-ischemia and other stresses. Motivation of endogenous antihypoxic potential, activation of oxygen sensing-signal transduction systems and supplementation of exogenous antihypoxic substances, as well as development of HPC
Failure and rescue of preconditioning-induced neuroprotection in severe stroke-like insults.
Tauskela, Joseph S; Aylsworth, Amy; Hewitt, Melissa; Brunette, Eric; Blondeau, Nicolas
2016-06-01
Preconditioning is a well established neuroprotective modality. However, the mechanism and relative efficacy of neuroprotection between diverse preconditioners is poorly defined. Cultured neurons were preconditioned by 4-aminopyridine and bicuculline (4-AP/bic), rendering neurons tolerant to normally lethal (sufficient to kill most neurons) oxygen-glucose deprivation (OGD) or a chemical OGD-mimic, ouabain/TBOA, by suppression of extracellular glutamate (glutamateex) elevations. However, subjecting preconditioned neurons to longer-duration supra-lethal insults caused neurotoxic glutamateex elevations, thereby identifying a 'ceiling' to neuroprotection. Neuroprotective 'rescue' of neurons could be obtained by administration of an NMDA receptor antagonist, MK-801, just before glutamateex rose during these supra-lethal insults. Next, we evaluated if these concepts of glutamateex suppression during lethal OGD, and a neuroprotective ceiling requiring MK-801 rescue under supra-lethal OGD, extended to the preconditioning field. In screening a panel of 42 diverse putative preconditioners, neuroprotection against normally lethal OGD was observed in 12 cases, which correlated with glutamateex suppression, both of which could be reversed, either by the inclusion of a glutamate uptake inhibitor (TBOA, to increase glutamateex levels) during OGD or by exposure to supra-lethal OGD. Administrating MK-801 during the latter stages of supra-lethal OGD again rescued neurons, although to varying degrees dependent on the preconditioning agent. Thus, 'stress-testing' against the harshest ischemic-like insults yet tested identifies the most efficacious preconditioners, which dictates how early MK-801 needs to be administered during the insult in order to maintain neuroprotection. Preconditioning delays a neurotoxic rise in glutamateex levels, thereby 'buying time' for acute anti-excitotoxic pharmacologic rescue. PMID:26867506
Hosseini, Sayyed Mohsen; Wilson, Wouter; Ito, Keita; van Donkelaar, Corrinus C
2014-06-01
It is known that initial loading curves of soft biological tissues are substantially different from subsequent loadings. The later loading curves are generally used for assessing the mechanical properties of a tissue, and the first loading cycles, referred to as preconditioning, are omitted. However, slow viscoelastic phenomena related to fluid flow or collagen viscoelasticity are initiated during these first preconditioning loading cycles and may persist during the actual data collection. When these data are subsequently used for fitting of material properties, the viscoelastic phenomena that occurred during the initial cycles are not accounted for. The aim of the present study is to explore whether the above phenomena are significant for articular cartilage, by evaluating the effect of such time-dependent phenomena by means of computational modeling. Results show that under indentation, collagen viscoelasticity dominates the time-dependent behavior. Under unconfined compression (UC), fluid-dependent effects are more important. Interestingly, viscoelastic and poroelastic effects may act in opposite directions and may cancel each other out in a stress-strain curve. Therefore, equilibrium may be apparent in a stress-strain relationship, even though internally the tissue is not in equilibrium. Also, the time-dependent effects of viscoelasticity and poroelasticity may reinforce each other, resulting in a sustained effect that lasts longer than suggested by their individual effects. Finally, the results illustrate that data collected from a mechanical test may depend on the preconditioning protocol. In conclusion, preconditioning influences the mechanical response of articular cartilage significantly and therefore cannot be neglected when determining the mechanical properties. Determining the full viscoelastic and poroelastic properties of articular cartilage requires fitting to both preconditioning and post-preconditioned loading cycles. PMID:23864393
Combustion of coal/water mixtures with thermal preconditioning. Final report
Novack, M.; Roffe, G.; Miller, G.
1985-12-01
Thermal preconditioning is a process in which coal/water mixtures are vaporized to produce coal/steam suspensions, and then superheated to allow the coal to devolatilize, producing suspensions of char particles in hydrocarbon gases and steam. This final product of the process can be injected without atomization and burned directly in a gas turbine combustor. This paper reports on the results of an experimental program in which a thermally preconditioned coal/water mixture was successfully burned with a stable flame in a gas turbine combustor test rig. Tests were performed at a mixture flowrate of 300 lb/hr and a combustor pressure of 8 atmospheres. The coal/water mixture was thermally preconditioned and injected into the combustor over a temperature range from 350 to 600 °F, and combustion air was supplied at between 600 and 725 °F. Test durations generally varied from 10 to 20 minutes. Major results of the combustion testing were that a stable flame was maintained over a wide equivalence ratio range, from φ = 2.4 (rich) to 0.2 (lean), and that combustion efficiency of over 99% was achieved when the mixture was preconditioned to 600 °F and the combustion air preheated to 725 °F. Measurements of ash particulates captured in the exhaust sampling probe, located 20 inches from the injector face, show typical collected sizes of about 1 micron, with agglomerates of these particulates of not more than 8 microns. The original mean coal particle size for these tests, prior to preconditioning, was 25 microns. System studies indicate that preconditioning can be incorporated into either stationary or mobile power plant designs without system derating. On the basis of these results, thermal pretreatment offers a practical alternative to fuel atomization in gas turbine applications. 20 figs., 4 tabs.
Stathopoulos, A.; Fischer, C.F.; Saad, Y.
1994-12-31
The solution of the large, sparse, symmetric eigenvalue problem, Ax = λx, is central to many scientific applications. Among the many iterative methods that attempt to solve this problem, the Lanczos and the Generalized Davidson (GD) methods are the most widely used. The Lanczos method builds an orthogonal basis for the Krylov subspace, from which the required eigenvectors are approximated through a Rayleigh-Ritz procedure. Each Lanczos iteration is economical to compute, but the number of iterations may grow significantly for difficult problems. The GD method can be considered a preconditioned version of Lanczos. In each step the Rayleigh-Ritz procedure is solved and explicit orthogonalization of the preconditioned residual (M - λI)⁻¹(A - λI)x is performed. Therefore, the GD method attempts to improve convergence and robustness at the expense of a more complicated step.
Budas, Grant R.; Jovanovic, Sofija; Crawford, Russell M.; Jovanovic, Aleksandar
2007-01-01
The opening of sarcolemmal and mitochondrial ATP-sensitive K+ (KATP) channels in the heart is believed to mediate ischemic preconditioning, a phenomenon whereby brief periods of ischemia/reperfusion protect the heart against myocardial infarction. Here, we have applied digital epifluorescent microscopy, immunoprecipitation and Western blotting, perforated patch clamp electrophysiology, and immunofluorescence/laser confocal microscopy to examine the involvement of KATP channels in cardioprotection afforded by preconditioning. We have shown that adult, stimulated-to-beat, guinea-pig cardiomyocytes survived in sustained hypoxia for ∼17 min. An episode of 5-min-long hypoxia/5-min-long reoxygenation before sustained hypoxia dramatically increased the duration of cellular survival. Experiments with different antagonists of KATP channels, applied at different times during the experimental protocol, suggested that the opening of sarcolemmal KATP channels at the beginning of sustained hypoxia mediate preconditioning. This conclusion was supported by perforated patch clamp experiments that revealed activation of sarcolemmal KATP channels by preconditioning. Immunoprecipitation and Western blotting as well as immunofluorescence and laser confocal microscopy showed that the preconditioning is associated with the increase in KATP channel proteins in sarcolemma. Inhibition of trafficking of KATP channel subunits prevented preconditioning without affecting sensitivity of cardiomyocytes to hypoxia in the absence of preconditioning. We conclude that the preconditioning is mediated by the activation and trafficking of sarcolemmal KATP channels. PMID:15084521
Fattebert, J.-L.
2010-01-20
An Accelerated Block Preconditioned Gradient (ABPG) method is proposed to solve electronic structure problems in Density Functional Theory. This iterative algorithm is designed to solve directly the non-linear Kohn-Sham equations for accurate discretization schemes involving a large number of degrees of freedom. It makes use of an acceleration scheme similar to what is known as RMM-DIIS in the electronic structure community. The method is illustrated with examples of convergence for large scale applications using a finite difference discretization and multigrid preconditioning.