NASA Astrophysics Data System (ADS)
Daftardar-Gejji, Varsha; Jafari, Hossein
2005-01-01
The Adomian decomposition method is employed to obtain solutions of a system of fractional differential equations, and convergence of the method is discussed with illustrative examples. In particular, for the initial value problem D^{α_i} y_i(t) = Σ_{j=1}^{n} a_{ij} y_j(t), y_i(0) = c_i, where A = [a_{ij}] is a real square matrix, the solution turns out to be y(t) = E_{(α_1,…,α_n),1}(A_1 t^{α_1}, …, A_n t^{α_n}) y(0), where E_{(α_1,…,α_n),1} denotes the multivariate Mittag-Leffler function defined for matrix arguments and A_i is the matrix whose ith row is [a_{i1} … a_{in}], with all other entries zero. Fractional oscillation and Bagley-Torvik equations are solved as illustrative examples.
NASA Astrophysics Data System (ADS)
Gou, Ming-Jiang; Yang, Ming-Lin; Sheng, Xin-Qing
2016-10-01
Mature red blood cells (RBCs) do not contain nuclei or large organelles, so they can be approximately regarded as homogeneous medium particles. To compute the radiation pressure force (RPF) exerted by multiple laser beams on arbitrarily shaped homogeneous nano-particles of this kind, a fast electromagnetic optics method is demonstrated. Based on Maxwell's equations, the matrix equation formed by the method of moments (MOM) has many right-hand sides (RHSs) corresponding to the different laser beams. To accelerate its solution, the algorithm applies a low-rank decomposition to the excitation matrix consisting of all RHSs, using an interpolative decomposition (ID) to identify so-called skeleton laser beams. After the solutions corresponding to the skeletons are obtained, the remaining responses can be reconstructed efficiently. Numerical results are presented to validate the developed method.
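As a rough illustration of the skeleton idea (not the paper's electromagnetic code), the following sketch selects skeleton right-hand sides with a column-pivoted QR (a simple interpolative decomposition), solves the system only for those, and reconstructs the remaining solutions. The function names and the rank-2 toy data are assumptions for demonstration only:

```python
import numpy as np
from scipy.linalg import qr

def skeleton_columns(B, k):
    """Pick k 'skeleton' columns of B via column-pivoted QR (a simple
    interpolative decomposition) and a coefficient matrix T such that
    B is (approximately) B[:, idx] @ T."""
    Q, R, piv = qr(B, mode='economic', pivoting=True)
    idx = piv[:k]                               # skeleton column indices
    # Interpolation coefficients in the permuted ordering ...
    T_perm = np.linalg.solve(R[:k, :k], R[:k, :])
    T = np.empty_like(T_perm)
    T[:, piv] = T_perm                          # ... permutation undone here
    return idx, T

# Toy demo: 6 coherent right-hand sides of (numerical) rank 2.
rng = np.random.default_rng(0)
B = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 6))
idx, T = skeleton_columns(B, 2)
A = rng.standard_normal((8, 8)) + 8.0 * np.eye(8)   # stand-in system matrix
X_skel = np.linalg.solve(A, B[:, idx])   # solve only for the skeletons
X_full = X_skel @ T                      # reconstruct all solutions
```

Only two solves are performed instead of six; the remaining four solutions follow from linearity.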
Matrix decomposition graphics processing unit solver for Poisson image editing
NASA Astrophysics Data System (ADS)
Lei, Zhao; Wei, Li
2012-10-01
In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computation- and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address this problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining direct and iterative techniques) with a two-level architecture. These features enable MDGS to generate solutions identical to those of common Poisson methods and to achieve a high convergence rate in most cases. The approach is advantageous in terms of parallelizability, low memory consumption, and breadth of application, enabling real-time image processing.
Optical systolic solutions of linear algebraic equations
NASA Technical Reports Server (NTRS)
Neuman, C. P.; Casasent, D.
1984-01-01
The philosophy of and data encoding possible in the systolic array optical processor (SAOP) are reviewed, and the multitude of linear algebraic operations achievable on this architecture is examined. These operations include such linear algebraic algorithms as matrix decomposition, direct and indirect solutions, implicit and explicit methods for partial differential equations, eigenvalue and eigenvector calculations, and singular value decomposition. The architecture can be utilized to realize general techniques for solving matrix linear and nonlinear algebraic equations, least-mean-square-error solutions, FIR filters, and nested-loop algorithms for control engineering applications. Emphasis is placed on the data flow and pipelining of operations, the design of parallel algorithms and flexible architectures, the application of these architectures to computationally intensive physical problems, error-source modeling of optical processors, and the matching of the computational needs of practical engineering problems to the capabilities of optical processors.
Research on the application of a decoupling algorithm for structure analysis
NASA Technical Reports Server (NTRS)
Denman, E. D.
1980-01-01
The mathematical theory for decoupling mth-order matrix differential equations is presented. It is shown that the decoupling procedure can be developed from the algebraic theory of matrix polynomials. The role of eigenprojectors and latent projectors in the decoupling process is discussed, and the mathematical relationships between eigenvalues, eigenvectors, latent roots, and latent vectors are developed. It is shown that the eigenvectors of the companion form of a matrix contain the latent vectors as a subset. The spectral decomposition of a matrix and its application to differential equations are given.
NASA Astrophysics Data System (ADS)
Xie, Hang; Jiang, Feng; Tian, Heng; Zheng, Xiao; Kwok, Yanho; Chen, Shuguang; Yam, ChiYung; Yan, YiJing; Chen, Guanhua
2012-07-01
Based on our hierarchical equations of motion for time-dependent quantum transport [X. Zheng, G. H. Chen, Y. Mo, S. K. Koo, H. Tian, C. Y. Yam, and Y. J. Yan, J. Chem. Phys. 133, 114101 (2010), 10.1063/1.3475566], we develop an efficient and accurate numerical algorithm to solve the Liouville-von Neumann equation. We solve the real-time evolution of the reduced single-electron density matrix at the tight-binding level. Calculations are carried out to simulate the transient current through a linear chain of atoms, with each atom represented by a single orbital. The self-energy matrix is expanded in terms of multiple Lorentzian functions, and the Fermi distribution function is evaluated via the Padé spectrum decomposition. This Lorentzian-Padé decomposition scheme is employed to simulate the transient current. With sufficient Lorentzian functions used to fit the self-energy matrices, we show that the lead spectral function and the dynamic response can be treated accurately. Compared to conventional master equation approaches, our method is much more efficient, as the computational time scales cubically with the system size and linearly with the simulation time. As a result, simulations of the transient currents through systems containing up to one hundred atoms have been carried out. As density functional theory is also an effective one-particle theory, the Lorentzian-Padé decomposition scheme developed here can be generalized for first-principles simulation of realistic systems.
Solving periodic block tridiagonal systems using the Sherman-Morrison-Woodbury formula
NASA Technical Reports Server (NTRS)
Yarrow, Maurice
1989-01-01
Many algorithms for solving the Navier-Stokes equations require the solution of periodic block tridiagonal systems of equations. By applying a splitting to the matrix representing such a system, it may first be reduced to a block tridiagonal matrix plus an outer product of two block vectors, after which the Sherman-Morrison-Woodbury formula is applied. The algorithm thus reduces a periodic banded system to a non-periodic banded system with additional right-hand sides, and is more efficient than standard Thomas-algorithm/LU decompositions.
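A scalar (non-block) analogue of this reduction can be sketched in a few lines: the periodic corner entries are split off as a rank-1 update, and the Sherman-Morrison formula then requires only two ordinary Thomas solves. This is an illustrative sketch of the technique, not the paper's block implementation:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a (non-periodic) tridiagonal system by Gaussian elimination
    without pivoting (the Thomas algorithm). a: sub-diagonal (a[0] unused),
    b: diagonal, c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def solve_periodic_tridiag(a, b, c, alpha, beta, d):
    """Periodic tridiagonal solve: the corners A[0,n-1] = alpha and
    A[n-1,0] = beta are split off as a rank-1 update A = B + u v^T,
    so Sherman-Morrison reduces the problem to two Thomas solves."""
    n = len(b)
    gamma = -b[0]                 # any convenient nonzero choice
    bb = b.copy()
    bb[0] -= gamma
    bb[-1] -= alpha * beta / gamma
    u = np.zeros(n); u[0] = gamma; u[-1] = beta
    v = np.zeros(n); v[0] = 1.0;  v[-1] = alpha / gamma
    y = thomas(a, bb, c, d)       # B y = d
    q = thomas(a, bb, c, u)       # B q = u
    return y - q * (v @ y) / (1.0 + v @ q)

# Demo: n = 6 periodic system with corner entries alpha = beta = 1.
n = 6
rng = np.random.default_rng(1)
a = np.ones(n); b = 4.0 * np.ones(n); c = np.ones(n)
d = rng.standard_normal(n)
x = solve_periodic_tridiag(a, b, c, 1.0, 1.0, d)
```

In the block case described in the abstract, the scalars become blocks and the two extra solves become additional right-hand sides of the banded solver.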
Matrix approach to uncertainty assessment and reduction for modeling terrestrial carbon cycle
NASA Astrophysics Data System (ADS)
Luo, Y.; Xia, J.; Ahlström, A.; Zhou, S.; Huang, Y.; Shi, Z.; Wang, Y.; Du, Z.; Lu, X.
2017-12-01
Terrestrial ecosystems absorb approximately 30% of anthropogenic carbon dioxide emissions. This estimate has been deduced indirectly, by combining analyses of atmospheric carbon dioxide concentrations with ocean observations to infer the net terrestrial carbon flux. In contrast, when knowledge about the terrestrial carbon cycle is integrated into different terrestrial carbon models, those models make widely different predictions. To improve the terrestrial carbon models, we have recently developed a matrix approach to uncertainty assessment and reduction. The terrestrial carbon cycle is commonly represented in Earth system models by a series of carbon balance equations that track carbon influxes into and effluxes out of individual pools. This representation matches our understanding of carbon cycle processes well and can be reorganized into one matrix equation without changing any modeled carbon cycle processes or mechanisms. We have developed matrix equations of several global land carbon cycle models, including CLM3.5, 4.0 and 4.5, CABLE, LPJ-GUESS, and ORCHIDEE; indeed, the matrix equation is generic and can be applied to other land carbon models. The matrix approach offers a suite of new diagnostic tools for uncertainty analysis, such as the three-dimensional (3-D) parameter space, traceability analysis, and variance decomposition. For example, predictions of carbon dynamics with complex land models can be placed in a 3-D parameter space (carbon input, residence time, and storage potential) as a common metric to measure how much model predictions differ. Differences among predictions can then be traced to their source components by decomposing model predictions into a hierarchy of traceable components, and variance decomposition can help attribute the spread in predictions among multiple models to precisely identify sources of uncertainty. The highly uncertain components can be constrained by data, as the matrix equation makes data assimilation computationally feasible. We will illustrate various applications of this matrix approach to uncertainty assessment and reduction for terrestrial carbon cycle models.
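The reorganized matrix form referred to above is, in one commonly used generic notation (a sketch of the structure only; the symbols are illustrative and the exact form differs between models):

```latex
\frac{d\mathbf{X}(t)}{dt}
  = \mathbf{B}\,u(t) \;-\; \mathbf{A}\,\boldsymbol{\xi}(t)\,\mathbf{K}\,\mathbf{X}(t)
```

where X(t) is the vector of carbon pool sizes, u(t) the carbon input, B the allocation coefficients, A the inter-pool transfer matrix, ξ(t) a diagonal matrix of environmental modifiers, and K a diagonal matrix of baseline turnover rates.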
Randomized Dynamic Mode Decomposition
NASA Astrophysics Data System (ADS)
Erichson, N. Benjamin; Brunton, Steven L.; Kutz, J. Nathan
2017-11-01
The dynamic mode decomposition (DMD) is an equation-free, data-driven matrix decomposition that is capable of providing accurate reconstructions of spatio-temporal coherent structures arising in dynamical systems. We present randomized algorithms to compute the near-optimal low-rank dynamic mode decomposition for massive datasets. Randomized algorithms are simple, accurate and able to ease the computational challenges arising with `big data'. Moreover, randomized algorithms are amenable to modern parallel and distributed computing. The idea is to derive a smaller matrix from the high-dimensional input data matrix using randomness as a computational strategy. Then, the dynamic modes and eigenvalues are accurately learned from this smaller representation of the data, whereby the approximation quality can be controlled via oversampling and power iterations. Here, we present randomized DMD algorithms that are categorized by how many passes the algorithm takes through the data. Specifically, the single-pass randomized DMD does not require data to be stored for subsequent passes. Thus, it is possible to approximately decompose massive fluid flows (stored out of core memory, or not stored at all) using single-pass algorithms, which is infeasible with traditional DMD algorithms.
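The compress-then-decompose idea can be illustrated in a short numpy sketch: a random sketch with oversampling and power iterations produces a small basis, and the standard DMD steps are then carried out on the projected snapshots. This is a hedged illustration of the general approach, not the authors' code:

```python
import numpy as np

def randomized_dmd(X, Y, rank, oversample=10, n_power=2, seed=0):
    """Randomized DMD sketch: X, Y are snapshot matrices with Y ≈ A X
    for an unknown linear operator A. A random sketch compresses the
    data before the usual DMD steps. Returns eigenvalues and modes."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    l = rank + oversample
    # Randomized range finder, with power iterations for a sharper basis.
    Q = np.linalg.qr(X @ rng.standard_normal((n, l)))[0]
    for _ in range(n_power):
        Q = np.linalg.qr(X.T @ Q)[0]
        Q = np.linalg.qr(X @ Q)[0]
    # Project the snapshots onto the low-dimensional subspace.
    Xs, Ys = Q.T @ X, Q.T @ Y
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    Atilde = U.T @ Ys @ Vt.T / s          # small rank-by-rank operator
    evals, W = np.linalg.eig(Atilde)
    modes = Q @ (Ys @ Vt.T / s) @ W       # lifted (exact-DMD style) modes
    return evals, modes
```

On data generated by a low-rank linear map, the eigenvalues of the small operator recover those of the underlying dynamics.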
Aladko, E Ya; Dyadin, Yu A; Fenelonov, V B; Larionov, E G; Manakov, A Yu; Mel'gunov, M S; Zhurko, F V
2006-10-05
The experimental data on decomposition temperatures for the gas hydrates of ethane, propane, and carbon dioxide dispersed in silica gel mesopores are reported; the studies were performed at pressures up to 1 GPa. It is shown that the experimental dependence of hydrate decomposition temperature on the size of the pores that limit the size of the hydrate particles can be described on the basis of the Gibbs-Thomson equation only if one takes into account changes in the shape coefficient appearing in the equation; in turn, the value of this coefficient depends on the method of mesopore size determination. A mechanism of hydrate formation in a mesoporous medium is proposed. Experimental data providing evidence of the possibility of the formation of hydrate compounds in hydrophobic matrices under high pressure are also reported; the decomposition temperatures of these hydrate compounds are higher than those of the bulk hydrates of the corresponding gases.
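For reference, the Gibbs-Thomson relation invoked above is often written, for a hydrate particle confined in a pore of characteristic size d, in a form like (a schematic version only; exact forms and sign conventions vary between authors, and the symbols here are illustrative):

```latex
\frac{T_{\mathrm{bulk}} - T_{\mathrm{pore}}}{T_{\mathrm{bulk}}}
  \;=\; \frac{F\,\sigma_{hl}\cos\theta}{\rho_{h}\,\Delta H_{d}\,d}
```

where F is the shape coefficient discussed in the abstract, σ_hl the hydrate-liquid interfacial energy, θ the wetting angle, ρ_h the hydrate density, and ΔH_d the dissociation enthalpy. The abstract's point is that F cannot be treated as a universal constant.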
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1978-01-01
The paper describes the split-Cholesky strategy for banded matrices arising from the large systems of equations in certain fluid mechanics problems. The basic idea is that for a banded matrix the computation can be carried out in pieces, with only a small portion of the matrix residing in core. Mesh considerations are discussed by demonstrating the manner in which the assembly of finite element equations proceeds for linear trial functions on a triangular mesh. The FORTRAN code which implements the out-of-core decomposition strategy for banded symmetric positive definite matrices (mass matrices) of a coupled initial value problem is given.
Acoustooptic linear algebra processors - Architectures, algorithms, and applications
NASA Technical Reports Server (NTRS)
Casasent, D.
1984-01-01
Architectures, algorithms, and applications for systolic processors are described, with attention to the realization of parallel algorithms on various optical systolic array processors. Systolic processors for matrices of special and general structure, and the realization of matrix-vector, matrix-matrix, and triple-matrix products on such architectures, are described. Parallel algorithms for direct and indirect solutions to systems of linear algebraic equations and their implementation on optical systolic processors are detailed, with attention to the pipelining and flow of data and operations. Parallel algorithms and their optical realization for LU and QR matrix decomposition are specifically detailed; these represent the fundamental operations necessary in the implementation of least-squares, eigenvalue, and SVD solutions. Specific applications (e.g., the solution of partial differential equations, adaptive noise cancellation, and optimal control) are described to typify the use of matrix processors in modern advanced signal processing.
Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.
2015-03-01
In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, similar in spirit to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transform, which was introduced by Grigoryan (2006) and used in signal and image processing. Both real and complex nonsingular matrices are considered, and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for complex matrices is novel, differs from the known method of complex Givens rotations, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap-transform method of QR-decomposition are given, the algorithms are described in detail, and MATLAB-based codes are included.
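For comparison with the heap-transform approach, a plain Givens-rotation QR for real matrices looks like this (a textbook sketch in Python, not the paper's MATLAB code): each rotation zeroes one sub-diagonal entry while the orthogonal factor is accumulated.

```python
import numpy as np

def givens_qr(A):
    """QR decomposition of a real m-by-n matrix by Givens rotations.
    Each rotation mixes rows j (pivot) and i to zero the entry R[i, j];
    Q accumulates the transposed rotations so that A = Q @ R throughout."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):      # zero column j from the bottom up
            r = np.hypot(R[j, j], R[i, j])
            if r == 0.0:
                continue
            c, s = R[j, j] / r, R[i, j] / r
            G = np.array([[c, s], [-s, c]])
            R[[j, i], :] = G @ R[[j, i], :]       # R <- G R
            Q[:, [j, i]] = Q[:, [j, i]] @ G.T     # Q <- Q G^T
    return Q, R
```

Zeroing one entry per rotation is what makes Givens-based QR attractive for sparse and structured matrices, where Householder reflections would destroy sparsity.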
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fachruddin, Imam, E-mail: imam.fachruddin@sci.ui.ac.id; Salam, Agus
2016-03-11
A new momentum-space formulation for scattering of two spin-half particles, either identical or non-identical, is presented. As basis states, the free linear-momentum states are not expanded into angular-momentum states; the system's spin states are described by the product of the spin states of the two particles, and the system's isospin states by the total isospin states of the two particles. We evaluate the Lippmann-Schwinger equations for the T-matrix elements in these basis states. The azimuthal behavior of the potential and of the T-matrix elements leads to a set of coupled integral equations for the T-matrix elements in only two variables: the magnitude of the relative momentum and the scattering angle. Symmetry relations for the potential and the T-matrix elements reduce the number of integral equations to be solved. A set of six spin operators sufficient to express any interaction of two spin-half particles is introduced. We show the spin-averaged differential cross section calculated in terms of the solution of the set of integral equations.
Modeling of outgassing and matrix decomposition in carbon-phenolic composites
NASA Technical Reports Server (NTRS)
Mcmanus, Hugh L.
1993-01-01
A new release-rate equation to model the phase change of water to steam in composite materials was derived from the theory of molecular diffusion and equilibrium moisture concentration. The new model depends on internal pressure, the microstructure of the voids and channels in the composite material, and the diffusion properties of the matrix material; hence, it is more fundamental and accurate than the empirical Arrhenius rate equation currently in use. The model was mathematically formalized and integrated into the thermostructural analysis code CHAR. Parametric studies varying several parameters have been performed. Comparisons to the Arrhenius and straight-line models show that the new model produces physically realistic results under all conditions.
Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners
Li, Ruipeng; Saad, Yousef
2017-08-01
This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman-Morrison-Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
Users manual for the Variable dimension Automatic Synthesis Program (VASP)
NASA Technical Reports Server (NTRS)
White, J. S.; Lee, H. Q.
1971-01-01
A dictionary and example problems for the Variable dimension Automatic Synthesis Program (VASP) are presented. The dictionary contains a description of each subroutine and instructions on its use. The example problems give the user a better perspective on the use of VASP for solving problems in modern control theory; they include dynamic response, optimal control gain, solution of the sampled-data matrix Riccati equation, matrix decomposition, and the pseudoinverse of a matrix. Listings of all subroutines are also included. The VASP program has been adapted to run in conversational mode on the Ames 360/67 computer.
A decentralized process for finding equilibria given by linear equations.
Reiter, S
1994-01-01
I present a decentralized process for finding the equilibria of an economy characterized by a finite number of linear equilibrium conditions. The process finds all equilibria or, if there are none, reports that fact, in a finite number of steps at most equal to the number of equations. Its communication and computational complexity compare favorably with those of other decentralized processes. The process may also be interpreted as an algorithm for solving a distributed system of linear equations. Comparisons with the LINPACK program for LU decomposition (lower and upper triangular decomposition of the matrix of the equation system, a version of Gaussian elimination) are presented. PMID:11607486
NASA Astrophysics Data System (ADS)
Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo
2006-12-01
Singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is applied to an n-by-1 or 1-by-n array whose elements represent samples of a signal, it returns only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new feature extraction method which we call time-frequency moments singular value decomposition (TFM-SVD). In this method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure, and the SVs of that matrix are computed. The transform can be used as a preprocessing stage in pattern clustering methods. Our results indicate that the performance of a combined system comprising this transform and classifiers is comparable with that of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this method together with artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering, to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project; this kind of device, combined with automated recording and analysis, would be suitable for use in many settings, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
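The general idea — collect moments of the signal and of its spectrum into a small fixed-shape matrix and use that matrix's singular values as features — can be sketched as follows. The specific moment choices here are illustrative assumptions, not the authors' exact recipe:

```python
import numpy as np

def tfm_svd_features(x, order=4):
    """Hedged sketch of the TFM-SVD idea: stack the first few statistical
    moments of the time series and of its Fourier magnitude into a small
    fixed-shape matrix, and return that matrix's singular values as the
    feature vector."""
    X = np.abs(np.fft.rfft(x))                  # frequency-domain series
    def moments(v):
        v = (v - v.mean()) / (v.std() + 1e-12)  # standardize first
        return np.array([np.mean(v ** k) for k in range(1, order + 1)])
    M = np.vstack([moments(x), moments(X)])     # 2-by-order moment matrix
    return np.linalg.svd(M, compute_uv=False)   # fixed-length features
```

Because the matrix shape is fixed, the feature vector has the same length for any input signal, which is the property the abstract emphasizes.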
Scalable Parallel Computation for Extended MHD Modeling of Fusion Plasmas
NASA Astrophysics Data System (ADS)
Glasser, Alan H.
2008-11-01
Parallel solution of a linear system is scalable if simultaneously doubling the number of dependent variables and the number of processors results in little or no increase in the computation time to solution. Two approaches have this property for parabolic systems: multigrid and domain decomposition. Since extended MHD is primarily a hyperbolic rather than a parabolic system, additional steps must be taken to parabolize the linear system to be solved by such a method. Such physics-based preconditioning (PBP) methods have been pioneered by Chacón, using finite volumes for spatial discretization, multigrid for solution of the preconditioning equations, and matrix-free Newton-Krylov methods for the accurate solution of the full nonlinear preconditioned equations. The work described here extends these methods using high-order spectral element methods and FETI-DP domain decomposition. Application of PBP to a flux-source representation of the physics equations is discussed. The resulting scalability is demonstrated for simple waves and for ideal and Hall MHD waves.
NASA Astrophysics Data System (ADS)
Lanen, Theo A.; Watt, David W.
1995-10-01
Singular value decomposition has served as a diagnostic tool in optical computed tomography through its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectra of their weight matrices. The number of significant singular values in the spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis involves variation of five parameters characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. Factors such as the presence of noise in the views, the coding accuracy of the weight matrix, and the accuracy of the singular value decomposition procedure itself are also assessed.
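The diagnostic itself is easy to state in code. A minimal sketch (function name and threshold are illustrative) that counts the significant singular values of a weight matrix:

```python
import numpy as np

def significant_svs(W, rel_tol=1e-6):
    """Number of singular values of the weight matrix W above rel_tol
    times the largest one -- a simple quantitative measure of the
    conditioning of the tomographic system W f = p."""
    s = np.linalg.svd(W, compute_uv=False)   # descending singular values
    return int(np.sum(s > rel_tol * s[0]))
```

A geometry whose weight matrix keeps more singular values above the threshold constrains more independent components of the reconstruction.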
Efficient model reduction of parametrized systems by matrix discrete empirical interpolation
NASA Astrophysics Data System (ADS)
Negri, Federico; Manzoni, Andrea; Amsallem, David
2015-12-01
In this work, we apply the matrix version of the discrete empirical interpolation method (MDEIM) to the efficient reduction of nonaffine parametrized systems arising from the discretization of linear partial differential equations. Dealing with affinely parametrized operators is crucial in order to enhance the online solution of reduced-order models (ROMs). However, in many cases such an affine decomposition is not readily available and must be recovered through (often) intrusive procedures, such as the empirical interpolation method (EIM) and its discrete variant, DEIM. In this paper we show that MDEIM represents a very efficient approach to dealing with complex physical and geometrical parametrizations in a non-intrusive, efficient, and purely algebraic way. We propose different strategies to combine MDEIM with a state approximation resulting either from a reduced-basis greedy approach or from proper orthogonal decomposition. A posteriori error estimates accounting for the MDEIM error are also developed for parametrized elliptic and parabolic equations. Finally, the capability of MDEIM to generate accurate and efficient ROMs is demonstrated on the solution of two computationally intensive classes of problems occurring in engineering contexts, namely PDE-constrained shape optimization and parametrized coupled problems.
NASA Astrophysics Data System (ADS)
Zhang, Hongqin; Tian, Xiangjun
2018-04-01
Ensemble-based data assimilation methods often use a localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational cost of directly decomposing the local correlation matrix C is still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions, which avoids direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decompositions at low resolution. This procedure is followed by a 1-D spline interpolation to transform these decompositions to the high-resolution grid. Finally, an efficient decomposition of the correlation matrix is achieved by computing the corresponding Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed approach and its efficient localization implementation in the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
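The separability idea underlying this approach can be shown on a toy example: when the 3-D correlation factors into 1-D correlations along each coordinate direction, the full matrix is a Kronecker product of three small matrices, so decomposing the small factors is enough. This sketch omits the paper's EOF truncation and spline interpolation steps; the Gaussian correlation model and grid sizes are assumptions:

```python
import numpy as np

def corr_1d(n, L):
    """Gaussian 1-D correlation matrix with length scale L."""
    i = np.arange(n)
    return np.exp(-0.5 * ((i[:, None] - i[None, :]) / L) ** 2)

def sqrtm_sym(M):
    """Symmetric square root from an eigen-decomposition."""
    w, V = np.linalg.eigh(M)
    return V * np.sqrt(np.clip(w, 0.0, None)) @ V.T

# Separable 3-D correlation: C = Cz ⊗ Cy ⊗ Cx (x varying fastest), so
# decomposing three small 1-D matrices replaces decomposing one huge one.
nx, ny, nz = 4, 5, 3
Cx, Cy, Cz = corr_1d(nx, 2.0), corr_1d(ny, 2.0), corr_1d(nz, 1.5)
C = np.kron(Cz, np.kron(Cy, Cx))            # full (nx*ny*nz)^2 matrix

# Square roots of the 1-D factors combine into a square root of C.
S = np.kron(sqrtm_sym(Cz), np.kron(sqrtm_sym(Cy), sqrtm_sym(Cx)))
```

Since (A ⊗ B)(A ⊗ B) = A² ⊗ B², the Kronecker product of the 1-D square roots is a square root of the full correlation matrix, which is what makes the decomposition cheap.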
The pointwise estimates of diffusion wave of the compressible micropolar fluids
NASA Astrophysics Data System (ADS)
Wu, Zhigang; Wang, Weike
2018-09-01
Pointwise estimates for the compressible micropolar fluids in dimension three are given, which exhibit the generalized Huygens' principle for the fluid density and fluid momentum, as for the compressible Navier-Stokes equations, while the micro-rotational momentum behaves like the fluid momentum of the Euler equations with damping. To circumvent the complexity of the 7 × 7 Green's matrix, we use the decomposition of the momentums into a fluid part and an electromagnetic part to study three smaller Green's matrices. A consequence of this decomposition is that the nonlinear terms contain nonlocal operators; we handle this by using the natural match between these new Green's functions and the nonlinear terms. Moreover, to derive different pointwise estimates for the different unknown variables, such that the estimate of each unknown variable agrees with its Green's function, we develop some new estimates on the nonlinear interplay between different waves.
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Weissenberger, S.; Cuk, S. M.
1973-01-01
This report presents the development and description of the decomposition-aggregation approach to stability investigations of high-dimension mathematical models of dynamic systems. The high-dimension vector differential equation describing a large dynamic system is decomposed into a number of lower-dimension vector differential equations which represent interconnected subsystems. A method is then described by which the stability properties of each subsystem are aggregated into a single vector Liapunov function, representing the aggregate system model, with the subsystem Liapunov functions as components. A linear vector differential inequality is then formed in terms of the vector Liapunov function. The matrix of the model, which reflects the stability properties of the subsystems and the nature of their interconnections, is analyzed to conclude overall system stability characteristics. The technique is applied in detail to investigate the stability characteristics of a dynamic model of a hypothetical spinning Skylab.
Multi-color incomplete Cholesky conjugate gradient methods for vector computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poole, E.L.
1986-01-01
This research is concerned with the solution on vector computers of linear systems of equations Ax = b, where A is a large, sparse, symmetric positive definite matrix with non-zero elements lying only along a few diagonals. The system is solved using the incomplete Cholesky conjugate gradient method (ICCG). Multi-color orderings of the unknowns in the linear system are used to obtain p-color matrices, for which a no-fill block ICCG method is implemented on the CYBER 205 with O(N/p)-length vector operations both in the decomposition of A and, more importantly, in the forward and back solves necessary at each iteration of the method (N is the number of unknowns and p is a small constant). A p-color matrix is a matrix that can be partitioned into a p x p block matrix in which the diagonal blocks are diagonal matrices. The matrix is stored by diagonals, and matrix multiplication by diagonals is used to carry out the decomposition of A and the forward and back solves. Additionally, if the vectors across adjacent blocks line up, some of the overhead associated with vector startups can be eliminated in the matrix-vector multiplication necessary at each conjugate gradient iteration. Necessary and sufficient conditions are given to determine which multi-color orderings of the unknowns correspond to p-color matrices, and a process is indicated for choosing multi-color orderings.
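The p-color idea can be seen on a toy example: under a red-black (2-color) ordering of a 5-point Laplacian, no two neighboring grid points share a color, so both diagonal blocks of the permuted matrix become diagonal. A small numpy sketch (names and the dense construction are illustrative; in practice the matrix is stored by diagonals):

```python
import numpy as np

def laplacian_2d(n):
    """5-point Laplacian on an n-by-n grid, natural (row-major) ordering."""
    N = n * n
    A = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            k = i * n + j
            A[k, k] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= i + di < n and 0 <= j + dj < n:
                    A[k, (i + di) * n + (j + dj)] = -1.0
    return A

n = 4
A = laplacian_2d(n)
# Red-black coloring: neighbors always get opposite colors.
colors = np.array([(i + j) % 2 for i in range(n) for j in range(n)])
perm = np.argsort(colors, kind='stable')   # all red points first
P = A[np.ix_(perm, perm)]                  # symmetrically permuted matrix
nred = int(np.sum(colors == 0))
```

Because each diagonal block is diagonal, the forward and back solves of the no-fill block factorization reduce to long vector operations, which is exactly what the vector hardware exploits.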
Quantum groups, Yang-Baxter maps and quasi-determinants
NASA Astrophysics Data System (ADS)
Tsuboi, Zengo
2018-01-01
For any quasi-triangular Hopf algebra, there exists a universal R-matrix which satisfies the Yang-Baxter equation. It is known that the adjoint action of the universal R-matrix on the elements of the tensor square of the algebra constitutes a quantum Yang-Baxter map, which satisfies the set-theoretic Yang-Baxter equation. The map has a zero-curvature representation among L-operators defined as images of the universal R-matrix. We find that the zero-curvature representation can be solved by the Gauss decomposition of a product of L-operators. We thereby obtain a quasi-determinant expression of the quantum Yang-Baxter map associated with the quantum algebra Uq(gl(n)). Moreover, the map is identified with products of quasi-Plücker coordinates over a matrix composed of the L-operators. We also consider the quasi-classical limit, where the underlying quantum algebra reduces to a Poisson algebra. The quasi-determinant expression of the quantum Yang-Baxter map then reduces to ratios of determinants, which give a new expression of a classical Yang-Baxter map.
Systems of Inhomogeneous Linear Equations
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many problems in physics, and especially computational physics, involve systems of linear equations which arise e.g. from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large, standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives; they can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of the Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
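The specialized Gaussian elimination for tridiagonal systems mentioned above is commonly known as the Thomas algorithm. A minimal sketch follows; the function name and the test data are illustrative, not taken from the chapter.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, main diagonal b,
    super-diagonal c and right-hand side d (specialized Gaussian
    elimination, O(n) work; a[0] and c[-1] are unused)."""
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: a diagonally dominant tridiagonal system.
a = np.array([0.0, 1.0, 2.0, 1.0, 3.0, 1.0])   # sub-diagonal
b = np.array([4.0, 5.0, 6.0, 5.0, 7.0, 5.0])   # main diagonal
c = np.array([1.0, 2.0, 1.0, 2.0, 1.0, 0.0])   # super-diagonal
d = np.arange(6.0)
x = thomas(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
assert np.allclose(A @ x, d)
```

The O(n) cost versus O(n³) for dense elimination is why cubic-spline and finite-difference second-derivative systems are solved this way.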
Matrix eigenvalue method for free-oscillations modelling of spherical elastic bodies
NASA Astrophysics Data System (ADS)
Zábranová, E.; Hanyk, L.; Matyska, C.
2017-11-01
Deformations and changes of the gravitational potential of pre-stressed, self-gravitating elastic bodies caused by free oscillations are described by means of the momentum and Poisson equations and the constitutive relation. For spherically symmetric bodies, the equations and boundary conditions are transformed into second-order ordinary differential equations by spherical harmonic decomposition and further discretized by highly accurate pseudospectral difference schemes on Chebyshev grids; we pay special attention to the conditions at the centre of the models. We thus obtain a series of matrix eigenvalue problems for the eigenfrequencies and eigenfunctions of the free oscillations. The accuracy of the presented numerical approach is tested by means of Rayleigh quotients calculated for eigenfrequencies up to 500 mHz. Both the modal frequencies and the eigenfunctions are benchmarked against the output of the Mineos software package, which is based on shooting methods. The presented technique is a promising alternative to widely used methods because it is stable and performs well up to high frequencies.
Rank-based decompositions of morphological templates.
Sussner, P; Ritter, G X
2000-01-01
Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.
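A rank-1 template in minimax (max-plus) algebra is the outer product t_ij = a_i + b_j, and such a decomposition lets the max-plus template product separate into two cheap one-dimensional passes. A small illustrative sketch (not from the paper; names and data are our own):

```python
import numpy as np

def maxplus_outer(a, b):
    # Max-plus (minimax-algebra) outer product: T[i, j] = a[i] + b[j].
    return a[:, None] + b[None, :]

def maxplus_matvec(T, x):
    # Max-plus matrix-vector product: y[i] = max_j (T[i, j] + x[j]).
    return np.max(T + x[None, :], axis=1)

a = np.array([0.0, 1.0, 3.0])
b = np.array([2.0, 0.0, -1.0, 4.0])
T = maxplus_outer(a, b)          # a rank-1 template in minimax algebra

x = np.array([1.0, 5.0, 2.0, 0.0])
# For a rank-1 template the max-plus product separates into two cheap
# steps: a scalar reduction with b, followed by a shift by a.
y_full = maxplus_matvec(T, x)
y_sep = a + np.max(b + x)
assert np.allclose(y_full, y_sep)
```

This separability is exactly what makes decomposing a large morphological template into low-rank factors attractive: the decomposed template can be applied far more cheaply than the original.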
An invariant asymptotic formula for solutions of second-order linear ODE's
NASA Technical Reports Server (NTRS)
Gingold, H.
1988-01-01
An invariant-matrix technique for the approximate solution of second-order ordinary differential equations (ODEs) of the form y'' = phi(x) y is developed analytically and demonstrated. A set of linear transformations for the companion matrix differential system is proposed; the diagonalization procedure employed in the final stage of the asymptotic decomposition is explained; and a scalar formulation of solutions for the ODEs is obtained. Several typical ODEs are analyzed, and it is shown that the Liouville-Green or WKB approximation is a special case of the present formula, which provides an approximation valid on the entire interval (0, infinity).
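For reference, the Liouville-Green (WKB) approximation recovered as a special case can be written for y'' = phi(x) y in its standard form (this is the textbook formula, not the paper's invariant expression):

```latex
y(x) \approx \phi(x)^{-1/4}
\left[ c_1 \exp\!\left( \int^{x} \sqrt{\phi(t)}\, dt \right)
     + c_2 \exp\!\left( -\int^{x} \sqrt{\phi(t)}\, dt \right) \right]
```

The paper's formula reduces to this expression in the appropriate limit while remaining valid over the whole interval.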
Algorithms and Application of Sparse Matrix Assembly and Equation Solvers for Aeroacoustics
NASA Technical Reports Server (NTRS)
Watson, W. R.; Nguyen, D. T.; Reddy, C. J.; Vatsa, V. N.; Tang, W. H.
2001-01-01
An algorithm for symmetric sparse equation solutions on an unstructured grid is described. Efficient, sequential sparse algorithms for degree-of-freedom reordering, supernodes, symbolic/numerical factorization, and forward/backward solution phases are reviewed. Three sparse algorithms for the generation and assembly of symmetric systems of matrix equations are presented. The accuracy and numerical performance of the sequential version of the sparse algorithms are evaluated over the frequency range of interest in a three-dimensional aeroacoustics application. Results show that the solver solutions are accurate using a discretization of 12 points per wavelength. Results also show that the first assembly algorithm is impractical for high-frequency noise calculations. The second and third assembly algorithms have nearly equal performance at low source frequencies, but at higher source frequencies the third algorithm saves CPU time and RAM. The CPU time and RAM required by the second and third assembly algorithms are two orders of magnitude smaller than those required by the sparse equation solver. A sequential version of these sparse algorithms can, therefore, be conveniently incorporated into a substructuring (domain decomposition) formulation to achieve parallel computation, where different substructures are handled by different parallel processors.
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Mohammadi, Vahid
2017-08-01
In this research, we investigate the numerical solution of nonlinear Schrödinger equations in two and three dimensions. The numerical meshless method used here is the RBF-FD technique. The main advantage of this method is the approximation of the required derivatives by a finite difference technique on each local support domain Ωi. On each Ωi, we need to solve a small linear system of algebraic equations with a conditionally positive definite matrix of order 1 (the interpolation matrix). This scheme is efficient, and its computational cost is the same as that of the moving least squares (MLS) approximation. A challenging issue is choosing a suitable shape parameter for the interpolation matrix. To overcome this, an algorithm established by Sarra (2012) is applied. This algorithm monitors the condition number of the local interpolation matrix, using the singular value decomposition (SVD) to obtain the smallest and largest singular values of that matrix. Moreover, an explicit method based on the fourth-order Runge-Kutta formula is applied to approximate the time variable; this also decreases the computational cost at each time step, since no nonlinear system has to be solved. Finally, to compare the RBF-FD method with another meshless technique, the moving kriging least squares (MKLS) approximation is considered for the studied model. Our results demonstrate the ability of the present approach for solving the applicable model investigated in the current research work.
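The condition-number check can be sketched as follows. This is an illustrative reconstruction in the spirit of Sarra's algorithm, not the paper's code: the multiquadric basis, the target interval for the condition number, and the update factors are all assumptions made here.

```python
import numpy as np

def mq_matrix(centers, eps):
    # Multiquadric RBF interpolation matrix on one local support domain.
    r = np.abs(centers[:, None] - centers[None, :])
    return np.sqrt(1.0 + (eps * r) ** 2)

def tune_shape(centers, kmin=1e11, kmax=1e13, eps=1.0):
    # Adjust the shape parameter until the SVD-based condition number
    # (largest over smallest singular value) lies in [kmin, kmax].
    kappa = np.inf
    for _ in range(500):
        s = np.linalg.svd(mq_matrix(centers, eps), compute_uv=False)
        kappa = s[0] / s[-1]
        if kappa < kmin:
            eps *= 0.95      # too well conditioned: flatten the basis
        elif kappa > kmax:
            eps *= 1.05      # too ill conditioned: sharpen the basis
        else:
            break
    return eps, kappa

centers = np.linspace(0.0, 1.0, 9)
eps, kappa = tune_shape(centers)
assert 1e11 <= kappa <= 1e13
```

Keeping the local matrix near (but below) the ill-conditioning threshold is the usual trade-off: flat bases are more accurate but less stable.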
Algebraic multigrid domain and range decomposition (AMG-DD / AMG-RD)*
Bank, R.; Falgout, R. D.; Jones, T.; ...
2015-10-29
In modern large-scale supercomputing applications, algebraic multigrid (AMG) is a leading choice for solving matrix equations. However, the high cost of communication relative to that of computation is a concern for the scalability of traditional implementations of AMG on emerging architectures. This paper introduces two new algebraic multilevel algorithms, algebraic multigrid domain decomposition (AMG-DD) and algebraic multigrid range decomposition (AMG-RD), that replace traditional AMG V-cycles with a fully overlapping domain decomposition approach. While the methods introduced here are similar in spirit to the geometric methods developed by Brandt and Diskin [Multigrid solvers on decomposed domains, in Domain Decomposition Methods in Science and Engineering, Contemp. Math. 157, AMS, Providence, RI, 1994, pp. 135--155], Mitchell [Electron. Trans. Numer. Anal., 6 (1997), pp. 224--233], and Bank and Holst [SIAM J. Sci. Comput., 22 (2000), pp. 1411--1443], they differ primarily in that they are purely algebraic: AMG-RD and AMG-DD trade communication for computation by forming global composite “grids” based only on the matrix, not the geometry. (As is the usual AMG convention, “grids” here should be taken only in the algebraic sense, regardless of whether or not they correspond to any geometry.) Another important distinguishing feature of AMG-RD and AMG-DD is their novel residual communication process that enables effective parallel computation on composite grids, avoiding the all-to-all communication costs of the geometric methods. The main purpose of this paper is to study the potential of these two algebraic methods as possible alternatives to existing AMG approaches for future parallel machines. To that end, this paper develops some theoretical properties of these methods and reports on serial numerical tests of their convergence properties over a spectrum of problem parameters.
1998-09-01
      IF( ... .EQ. 1 .AND. ICOUNT .GT. ISTRAIN ) GOTO 55
C     Add additional terms in equations for interface nodes.
C     If radial loading is applied, add term BMAT(NTOT-1) = SR ... term in BMAT.
C     Using BMAT and the L-U decomposition of AMAT, determine XSOL,
C     the vector of radial and hoop stresses:
      CALL LUBKSB( AMAT, NRA, LDA, IPVT, BMAT, XSOL )
C     Compute stresses from the XSOL solution vector.
C     Use boundary conditions:
      S(1,NTOT2) = SR
      S(2,1) = S(1,1)
C     Compute total axial ...
Adomian decomposition method used to solve the one-dimensional acoustic equations
NASA Astrophysics Data System (ADS)
Dispini, Meta; Mungkasi, Sudi
2017-05-01
In this paper we propose the use of the Adomian decomposition method to solve the one-dimensional acoustic equations. This recursive method is easy to compute, and the result is an approximation of the exact solution. We use the Maple software to compute the series in the Adomian decomposition. We find that the Adomian decomposition method is able to solve the acoustic equations with the physically correct behavior.
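The flavor of the recursion can be seen on a toy problem (not the acoustic system of the paper): for u' = u with u(0) = 1, the Adomian components are u_0 = 1 and u_{n+1} = ∫_0^t u_n ds = t^(n+1)/(n+1)!, so the partial sums converge to the exact solution exp(t).

```python
import math

def adomian_exp(t, nterms):
    """Partial sum of the Adomian series for u'(t) = u(t), u(0) = 1.
    The recursion u_0 = 1, u_{n+1} = integral of u_n from 0 to t
    gives u_n = t**n / n!, the Taylor components of exp(t)."""
    return sum(t ** n / math.factorial(n) for n in range(nterms))

approx = adomian_exp(1.0, 10)
assert abs(approx - math.e) < 1e-6
```

For this linear problem the Adomian series coincides with the Taylor series; for nonlinear terms the method replaces them with Adomian polynomials, which is where symbolic tools such as Maple help.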
Benhammouda, Brahim
2016-01-01
Since 1980, the Adomian decomposition method (ADM) has been extensively used as a simple, powerful tool that applies directly to solve different kinds of nonlinear equations, including functional, differential, integro-differential and algebraic equations. However, for differential-algebraic equations (DAEs) the ADM has been applied in only four earlier works, where the DAEs are first pre-processed by transformations such as index reduction before the ADM is applied. The drawback of such transformations is that they can involve complex algorithms, can be computationally expensive, and may lead to non-physical solutions. The purpose of this paper is to propose a novel technique that applies the ADM directly to solve a class of nonlinear higher-index Hessenberg DAE systems efficiently. The main advantage of this technique is twofold: first, it avoids complex transformations such as index reduction and leads to a simple general algorithm; second, it reduces the computational work by solving only linear algebraic systems with a constant coefficient matrix at each iteration, except for the first iteration, where the algebraic system is nonlinear (if the DAE is nonlinear with respect to the algebraic variable). To demonstrate the effectiveness of the proposed technique, we apply it to a nonlinear index-three Hessenberg DAE system with nonlinear algebraic constraints. The technique is straightforward and can be programmed in Maple or Mathematica to simulate real application problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, S.A.; Shadid, J.N.; Tuminaro, R.S.
1995-10-01
Aztec is an iterative solver library that greatly simplifies the parallelization process when solving the linear system of equations Ax = b, where A is a user-supplied n x n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. Aztec is intended as a software tool for users who want to avoid cumbersome parallel programming details but who have large sparse linear systems that require an efficiently utilized parallel processing system. A collection of data transformation tools is provided that allows for easy creation of distributed sparse unstructured matrices for parallel solution. Once the distributed matrix is created, computation can be performed on any of the parallel machines running Aztec: the nCUBE 2, IBM SP2 and Intel Paragon, MPI platforms, as well as standard serial and vector platforms. Aztec includes a number of Krylov iterative methods, such as conjugate gradient (CG), generalized minimum residual (GMRES) and stabilized biconjugate gradient (BICGSTAB), to solve systems of equations. These Krylov methods are used in conjunction with various preconditioners, such as polynomial preconditioners or domain decomposition methods using LU or incomplete LU factorizations within subdomains. Although the matrix A can be general, the package has been designed for matrices arising from the approximation of partial differential equations (PDEs). In particular, the Aztec package is oriented toward systems arising from PDE applications.
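As a reminder of what the simplest of these Krylov methods does, here is a minimal serial, unpreconditioned conjugate gradient sketch in Python; it is illustrative only and is unrelated to Aztec's actual distributed C interface.

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=200):
    # Unpreconditioned conjugate gradient for symmetric positive
    # definite A (serial sketch; Aztec distributes the matrix and
    # wraps the iteration with preconditioning).
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Symmetric positive definite test system.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)
b = rng.standard_normal(20)
x = cg(A, b)
assert np.allclose(A @ x, b, atol=1e-8)
```

In a package like Aztec the matrix-vector product A @ p is the distributed kernel, and the preconditioner is applied to the residual before the search-direction update.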
3D tensor-based blind multispectral image decomposition for tumor demarcation
NASA Astrophysics Data System (ADS)
Kopriva, Ivica; Peršin, Antun
2010-03-01
Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is the identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution is a clustering-based estimation of the number of materials present in the image, as well as of the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. Tensor representation of the multi-spectral image preserves its local spatial structure, which is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. Superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth, as well as on RGB fluorescent images of skin tumors (basal cell carcinoma).
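The recovery step, a 3-mode multiplication of the image tensor with the inverse of the spectral-profile matrix, can be sketched as follows (illustrative shapes and synthetic data, not the paper's images):

```python
import numpy as np

rng = np.random.default_rng(1)
h, w, nmat = 4, 5, 3
S_true = rng.random((h, w, nmat))             # spatial distributions of materials
A = rng.random((nmat, nmat)) + np.eye(nmat)   # spectral profiles (square, invertible)

# Forward model: 3-mode product of the material tensor with A,
#   X[i, j, k] = sum_m A[k, m] * S[i, j, m]
X = np.einsum('km,ijm->ijk', A, S_true)

# Recovery by 3-mode multiplication with the inverse spectral matrix.
S_rec = np.einsum('km,ijm->ijk', np.linalg.inv(A), X)
assert np.allclose(S_rec, S_true)
```

Because the spatial modes (i, j) are untouched, the local spatial structure of the image survives the inversion, unlike with matricized (vectorized) factorizations.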
Fast polar decomposition of an arbitrary matrix
NASA Technical Reports Server (NTRS)
Higham, Nicholas J.; Schreiber, Robert S.
1988-01-01
The polar decomposition of an m x n matrix A of full rank, where m is greater than or equal to n, can be computed using a quadratically convergent algorithm. The algorithm is based on a Newton iteration involving a matrix inverse. With the use of a preliminary complete orthogonal decomposition the algorithm can be extended to arbitrary A. How to use the algorithm to compute the positive semi-definite square root of a Hermitian positive semi-definite matrix is described. A hybrid algorithm which adaptively switches from the matrix inversion based iteration to a matrix multiplication based iteration due to Kovarik, and to Bjorck and Bowie is formulated. The decision when to switch is made using a condition estimator. This matrix multiplication rich algorithm is shown to be more efficient on machines for which matrix multiplication can be executed 1.5 times faster than matrix inversion.
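The core Newton iteration, X_{k+1} = (X_k + X_k^{-T})/2, can be sketched as follows. This is an unscaled, unaccelerated version for real square nonsingular A; the paper's algorithm adds scaling, a condition estimator, and the adaptive switch to multiplication-rich steps.

```python
import numpy as np

def polar_newton(A, tol=1e-12, maxiter=50):
    # Newton iteration X <- (X + X^{-T}) / 2 converging quadratically
    # to the orthogonal polar factor of a square nonsingular A.
    X = A.copy()
    for _ in range(maxiter):
        Xn = 0.5 * (X + np.linalg.inv(X).T)
        done = np.linalg.norm(Xn - X) < tol * np.linalg.norm(Xn)
        X = Xn
        if done:
            break
    H = X.T @ A                       # Hermitian factor of A = U H
    return X, 0.5 * (H + H.T)         # symmetrize H against rounding

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
U, H = polar_newton(A)
assert np.allclose(U.T @ U, np.eye(5), atol=1e-8)   # U is orthogonal
assert np.allclose(U @ H, A, atol=1e-8)             # A = U H
assert np.all(np.linalg.eigvalsh(H) > 0)            # H positive definite
```

Each step costs one matrix inversion, which is exactly why the hybrid algorithm switches to a matrix-multiplication-based iteration once convergence sets in.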
Study of laser cooling in deep optical lattice: two-level quantum model
NASA Astrophysics Data System (ADS)
Prudnikov, O. N.; Il'enkov, R. Ya.; Taichenachev, A. V.; Yudin, V. I.; Rasel, E. M.
2018-01-01
We study the possibility of laser cooling of 24Mg atoms in a deep optical lattice formed by an intense off-resonant laser field in the presence of a cooling field resonant with the narrow (3s3s) 1S0 → (3s3p) 3P1 (λ = 457 nm) optical transition. To describe laser cooling while taking quantum recoil effects into account, we consider two quantum models. The first is based on direct numerical solution of the quantum kinetic equation for the atomic density matrix; the second is a simplified model based on decomposition of the atomic density matrix over vibrational states in the lattice wells. We search for the cooling-field intensity and detuning that yield the minimum cooling energy and fast laser cooling.
Newmark-Beta-FDTD method for super-resolution analysis of time reversal waves
NASA Astrophysics Data System (ADS)
Shi, Sheng-Bing; Shao, Wei; Ma, Jing; Jin, Congjun; Wang, Xiao-Hua
2017-09-01
In this work, a new unconditionally stable finite-difference time-domain (FDTD) method with the split-field perfectly matched layer (PML) is proposed for the analysis of time reversal (TR) waves. The proposed method is very suitable for multiscale problems involving microstructures. The spatial and temporal derivatives in this method are discretized by the central difference technique and the Newmark-Beta algorithm, respectively, and the derivation results in the calculation of a banded sparse matrix equation. Since the coefficient matrix remains unchanged during the whole simulation, the lower-upper (LU) decomposition of the matrix needs to be performed only once, at the beginning of the calculation. Moreover, the reverse Cuthill-McKee (RCM) technique, an effective preprocessing technique for bandwidth compression of sparse matrices, is used to improve computational efficiency. Super-resolution focusing of TR wave propagation in two- and three-dimensional spaces is included to validate the accuracy and efficiency of the proposed method.
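The two implementation points, RCM preprocessing and a one-time LU factorization reused at every time step, can be sketched with SciPy on a synthetic symmetric sparse matrix (illustrative only; the actual Newmark-Beta system matrix is not reproduced here):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee
from scipy.sparse.linalg import splu

# Symmetric sparse stand-in for the banded Newmark-Beta coefficient matrix.
rng = np.random.default_rng(3)
n = 200
A = sp.random(n, n, density=0.02, random_state=3, format='csr')
A = (A + A.T + n * sp.eye(n)).tocsr()

# Reverse Cuthill-McKee reordering compresses the matrix bandwidth.
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
Ap = A[perm, :][:, perm].tocsc()

# Factor once -- the coefficient matrix never changes during the run --
# then reuse the LU factors at every time step.
lu = splu(Ap)
for _ in range(5):                     # stand-in for the time-stepping loop
    b = rng.standard_normal(n)
    y = lu.solve(b[perm])              # solve the permuted system
    x = np.empty(n)
    x[perm] = y                        # undo the reordering
    assert np.allclose(A @ x, b, atol=1e-8)
```

Reordering before the single factorization reduces fill-in, and each subsequent step costs only the forward/back substitutions.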
High-performance computing on GPUs for resistivity logging of oil and gas wells
NASA Astrophysics Data System (ADS)
Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.
2017-10-01
We developed, and implemented in software, an algorithm for high-performance simulation of electrical logs from oil and gas wells using heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving a system of linear algebraic equations (SLAE). Software implementations of the algorithm, built on NVIDIA CUDA technology and computing libraries, allow us to perform the decomposition of the SLAE and find its solution on the central processing unit (CPU) and the graphics processing unit (GPU). The calculation time is analyzed as a function of the matrix size and the number of its non-zero elements. We estimated the computing speed on the CPU and GPU, including high-performance heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data in realistic models.
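The CPU side of the SLAE step can be sketched with SciPy's Cholesky routines; this is a dense stand-in, whereas the production code works with sparse finite-element matrices and CUDA libraries.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Dense stand-in for the finite-element SLAE (the real system is sparse).
rng = np.random.default_rng(4)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)       # symmetric positive definite
b = rng.standard_normal(n)

c, low = cho_factor(A)            # Cholesky decomposition A = L L^T
x = cho_solve((c, low), b)        # forward and back substitution
assert np.allclose(A @ x, b)
```

As in the previous abstract's LU setting, the factorization dominates the cost, so keeping the factors and re-solving for many right-hand sides (many source positions along the well) is the natural strategy.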
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolda, Tamara Gibson
We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
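For a third-order tensor, the Kruskal operator can be sketched directly with einsum; the tensor it builds equals the sum of outer products of corresponding columns (illustrative code, not the authors' notation):

```python
import numpy as np

def kruskal(matrices):
    """Kruskal operator for N = 3 factor matrices: the tensor
    T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r], i.e. the sum of
    outer products of the corresponding columns of A, B and C."""
    A, B, C = matrices
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(5)
r = 2
A, B, C = (rng.random((d, r)) for d in (3, 4, 5))
T = kruskal((A, B, C))

# The same tensor assembled column by column as a sum of outer products.
T_ref = sum(np.multiply.outer(np.multiply.outer(A[:, j], B[:, j]), C[:, j])
            for j in range(r))
assert np.allclose(T, T_ref)
assert T.shape == (3, 4, 5)
```

Written this way, a rank-r PARAFAC model is just `kruskal((A, B, C))` with r-column factors, with no matricization in sight.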
Finite elements and the method of conjugate gradients on a concurrent processor
NASA Technical Reports Server (NTRS)
Lyzenga, G. A.; Raefsky, A.; Hager, G. H.
1985-01-01
An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90 percent for sufficiently large problems.
NASA Astrophysics Data System (ADS)
Sarna, Neeraj; Torrilhon, Manuel
2018-01-01
We define certain criteria, using the characteristic decomposition of the boundary conditions and energy estimates, which a set of stable boundary conditions for a linear initial boundary value problem, involving a symmetric hyperbolic system, must satisfy. We first use these stability criteria to show the instability of the Maxwell boundary conditions proposed by Grad (Commun Pure Appl Math 2(4):331-407, 1949). We then recognise a special block structure of the moment equations which arises due to the recursion relations and the orthogonality of the Hermite polynomials; the block structure will help us in formulating stable boundary conditions for an arbitrary order Hermite discretization of the Boltzmann equation. The formulation of stable boundary conditions relies upon an Onsager matrix which will be constructed such that the newly proposed boundary conditions stay close to the Maxwell boundary conditions at least in the lower order moments.
Modeling of outgassing and matrix decomposition in carbon-phenolic composites
NASA Technical Reports Server (NTRS)
Mcmanus, Hugh L.
1994-01-01
Work done in the period January-June 1994 is summarized. Two threads of research have been followed. First, the thermodynamics approach was used to model the chemical and mechanical responses of composites exposed to high temperatures. The thermodynamics approach lends itself easily to the use of variational principles, and this thermodynamic-variational approach has been applied to the transpiration cooling problem. The second thread is the development of a better algorithm to solve the governing equations resulting from the modeling. The explicit finite difference method is explored for solving the governing nonlinear partial differential equations; it allows detailed material models to be included and solution on massively parallel supercomputers. To demonstrate the feasibility of the explicit scheme in solving nonlinear partial differential equations, a transpiration cooling problem was solved. Some interesting transient behaviors were captured, such as stress waves and small spatial oscillations of the transient pressure distribution.
Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saad, Yousef
2014-01-16
The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least-squares systems. The focus of the Minnesota team is on algorithm development, robustness issues, and tests and validation of the methods on realistic problems. 1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization when the coefficient matrix changes. 2. We investigated strategies to improve robustness of parallel preconditioners in the specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation, which are often difficult to solve by iterative methods. 4. We also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations when the matrix is highly indefinite; the strategy uses shifting in some optimal way and was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA; this was made publicly available and was the first such library to offer complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods. 9. We released a new version (version 3) of our parallel solver, pARMS, and as part of this we tested the code in complex settings, including the solution of Maxwell and Helmholtz equations and a problem of crystal growth. 10. As an application of polynomial preconditioning, we considered the problem of evaluating f(A)v, which arises in statistical sampling. 11. As an application of the methods we developed, we tackled the problem of computing the diagonal of the inverse of a matrix, which arises in statistical applications as well as in many applications in physics; we explored probing methods as well as domain-decomposition-type methods. 12. A collaboration with researchers from Toulouse, France, considered the important problem of computing the Schur complement in a domain decomposition approach. 13. We explored new ways of preconditioning linear systems based on low-rank approximations.
Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition
Ong, Frank; Lustig, Michael
2016-01-01
We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such a decomposition is well motivated in practice, as data matrices often exhibit local correlations at multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of dynamic contrast-enhanced magnetic resonance imaging, and collaborative filtering exploiting age information. PMID:28450978
VASP- VARIABLE DIMENSION AUTOMATIC SYNTHESIS PROGRAM
NASA Technical Reports Server (NTRS)
White, J. S.
1994-01-01
VASP is a variable dimension Fortran version of the Automatic Synthesis Program, ASP. The program is used to implement Kalman filtering and control theory. Basically, it consists of 31 subprograms for solving most modern control problems in linear, time-variant (or time-invariant) control systems. These subprograms include operations of matrix algebra, computation of the exponential of a matrix and its convolution integral, and the solution of the matrix Riccati equation. The user calls these subprograms by means of a FORTRAN main program, and so can easily obtain solutions to most general problems of extremization of a quadratic functional of the state of the linear dynamical system. Particularly, these problems include the synthesis of the Kalman filter gains and the optimal feedback gains for minimization of a quadratic performance index. VASP, as an outgrowth of the Automatic Synthesis Program, has the following improvements: more versatile programming language; more convenient input/output format; some new subprograms which consolidate certain groups of statements that are often repeated; and variable dimensioning. The pertinent difference between the two programs is that VASP has variable dimensioning and more efficient storage. The documentation for the VASP program contains a VASP dictionary and example problems. The dictionary contains a description of each subroutine and instructions on its use. The example problems include dynamic response, optimal control gain, solution of the sampled data matrix Riccati equation, matrix decomposition, and a pseudo-inverse of a matrix. This program is written in FORTRAN IV and has been implemented on the IBM 360. The VASP program was developed in 1971.
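As an illustration of one task VASP performs, the sketch below solves a continuous-time matrix Riccati equation with SciPy and forms the optimal feedback gain; the plant data are illustrative and unrelated to VASP's Fortran IV examples.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Continuous-time algebraic Riccati equation
#   A^T X + X A - X B R^{-1} B^T X + Q = 0
# for a double-integrator plant (illustrative data, not a VASP example).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

X = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ X)       # optimal feedback gain, u = -K x

# Verify the Riccati residual vanishes.
residual = A.T @ X + X @ A - X @ B @ np.linalg.solve(R, B.T @ X) + Q
assert np.allclose(residual, np.zeros((2, 2)), atol=1e-10)
```

For this classic plant the closed-form solution is X = [[sqrt(3), 1], [1, sqrt(3)]], so the numerical answer can be checked by hand.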
NASA Astrophysics Data System (ADS)
Badia, Santiago; Martín, Alberto F.; Planas, Ramon
2014-10-01
The thermally coupled incompressible inductionless magnetohydrodynamics (MHD) problem models the flow of an electrically charged fluid under the influence of an external electromagnetic field with thermal coupling. This system of partial differential equations is strongly coupled and highly nonlinear for real cases of interest. Therefore, fully implicit time integration schemes are very desirable in order to capture the different physical scales of the problem at hand. However, solving the multiphysics linear systems of equations resulting from such algorithms is a very challenging task which requires efficient and scalable preconditioners. In this work, a new family of recursive block LU preconditioners is designed and tested for solving the thermally coupled inductionless MHD equations. These preconditioners are obtained after splitting the fully coupled matrix into one-physics problems for every variable (velocity, pressure, current density, electric potential and temperature) that can be optimally solved, e.g., using preconditioned domain decomposition algorithms. The main idea is to arrange the original matrix into an (arbitrary) 2 × 2 block matrix, and consider an LU preconditioner obtained by approximating the corresponding Schur complement. For every one of the diagonal blocks in the LU preconditioner, if it involves more than one type of unknowns, we proceed the same way in a recursive fashion. This approach is stated in an abstract way, and can be straightforwardly applied to other multiphysics problems. Further, we precisely explain a flexible and general software design for the code implementation of this type of preconditioners.
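The 2 × 2 block LU idea can be illustrated on a small dense system with the exact Schur complement; an actual preconditioner of this family would replace the inner solves by cheap approximations (for instance, the preconditioned domain decomposition solvers mentioned above). All names and sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n1, n2 = 4, 3
A = rng.standard_normal((n1, n1)) + 5 * np.eye(n1)
B = rng.standard_normal((n1, n2))
C = rng.standard_normal((n2, n1))
D = rng.standard_normal((n2, n2)) + 5 * np.eye(n2)
M = np.block([[A, B], [C, D]])

def block_lu_solve(b):
    # Apply the block LU factorization
    #   M = [[I, 0], [C A^{-1}, I]] @ [[A, B], [0, S]],
    # with Schur complement S = D - C A^{-1} B.  With the exact S this
    # is a direct solve; a preconditioner approximates A^{-1} and S.
    b1, b2 = b[:n1], b[n1:]
    S = D - C @ np.linalg.solve(A, B)
    y2 = b2 - C @ np.linalg.solve(A, b1)   # forward substitution
    x2 = np.linalg.solve(S, y2)            # Schur-complement block solve
    x1 = np.linalg.solve(A, b1 - B @ x2)   # back substitution
    return np.concatenate([x1, x2])

b = rng.standard_normal(n1 + n2)
x = block_lu_solve(b)
assert np.allclose(M @ x, b)
```

The recursion in the paper amounts to applying this same 2 × 2 splitting again inside any diagonal block that still couples more than one physical variable.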
Domain decomposition methods in aerodynamics
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.; Saltz, Joel
1990-01-01
Compressible Euler equations are solved for two-dimensional problems by a preconditioned conjugate gradient-like technique. An approximate Riemann solver is used to compute the numerical fluxes to second order accuracy in space. Two ways to achieve parallelism are tested, one which makes use of parallelism inherent in triangular solves and the other which employs domain decomposition techniques. The vectorization/parallelism in triangular solves is realized by the use of a reordering technique called wavefront ordering. This process involves the interpretation of the triangular matrix as a directed graph and the analysis of the data dependencies. It is noted that the factorization can also be done in parallel with the wavefront ordering. The performances of two ways of partitioning the domain, strips and slabs, are compared. Results on the Cray Y-MP are reported for an inviscid transonic test case. The performances of linear algebra kernels are also reported.
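Wavefront ordering can be sketched on a small lower-triangular system (a toy dense-stored example with a hypothetical sparsity pattern; a production code would use sparse storage): each unknown's level is one more than the deepest unknown it depends on, and all unknowns on a level can be eliminated in parallel.

```python
import numpy as np

# Unit lower-triangular system L x = b; treat L as a directed graph with an
# edge j -> i whenever L[i, j] != 0 for j < i
L = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.3, 0.2, 1.0]])
n = L.shape[0]

# Assign each unknown to a wavefront (level) from its data dependencies
level = np.zeros(n, dtype=int)
for i in range(n):
    deps = [j for j in range(i) if L[i, j] != 0.0]
    level[i] = 1 + max((level[j] for j in deps), default=-1)

# Unknowns 0 and 2 share wavefront 0, so they can be solved simultaneously
assert list(level) == [0, 1, 0, 2]

# Solve level by level; rows within a level are mutually independent
b = np.array([1.0, 2.0, 3.0, 4.0])
x = np.zeros(n)
for lev in range(level.max() + 1):
    for i in np.where(level == lev)[0]:
        x[i] = b[i] - L[i, :i] @ x[:i]   # unit diagonal assumed

assert np.allclose(L @ x, b)
```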
ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.
Lee, Keunbaik; Baek, Changryong; Daniels, Michael J
2017-11-01
In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because there are many parameters in the matrix and the estimated covariance matrix should be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: the modified Cholesky decomposition for autoregressive (AR) structure and the moving average Cholesky decomposition for moving average (MA) structure. However, the correlations of repeated outcomes are often not captured parsimoniously using either approach separately. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models that exploits the structure allowed by combining the AR and MA modeling of the covariance matrix that we denote as ARMACD. We analyze a recent lung cancer study to illustrate the power of our proposed methods.
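The modified Cholesky decomposition underlying the AR part can be sketched as follows (the AR(1) covariance, `rho`, and `n` are illustrative assumptions): it factors a covariance Σ as T Σ Tᵀ = D with T unit lower triangular, so the below-diagonal entries of T play the role of generalized autoregressive parameters and D holds the innovation variances.

```python
import numpy as np

# Illustrative AR(1)-type covariance for 4 repeated measurements
rho, n = 0.6, 4
Sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

# Modified Cholesky: T @ Sigma @ T.T = D. Starting from the standard
# Cholesky factor Sigma = C C^T, set T = diag(C) C^{-1}
C = np.linalg.cholesky(Sigma)
T = np.diag(np.diag(C)) @ np.linalg.inv(C)
D = T @ Sigma @ T.T

assert np.allclose(np.diag(T), 1.0)          # unit lower triangular
assert np.allclose(D, np.diag(np.diag(D)))   # diagonal innovation variances
```

Because D is built from squared Cholesky diagonals, positive definiteness of the reconstructed covariance is automatic, which is the unconstrained-parametrization advantage the paper exploits.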
Angular-Rate Estimation Using Delayed Quaternion Measurements
NASA Technical Reports Server (NTRS)
Azor, R.; Bar-Itzhack, I. Y.; Harman, R. R.
1999-01-01
This paper presents algorithms for estimating the angular-rate vector of satellites using quaternion measurements. Two approaches are compared: one uses differentiated quaternion measurements to yield coarse rate measurements, which are then fed into two different estimators; in the other approach the raw quaternion measurements themselves are fed directly into the two estimators. The two estimators rely on the ability to decompose the non-linear part of the rotational dynamics equation of a body into a product of an angular-rate-dependent matrix and the angular-rate vector itself. This non-unique decomposition enables the treatment of the nonlinear spacecraft (SC) dynamics model as a linear one and, thus, the application of a Pseudo-Linear Kalman Filter (PSELIKA). It also enables the application of a special Kalman filter which is based on the use of the solution of the State Dependent Algebraic Riccati Equation (SDARE) in order to compute the gain matrix and thus eliminates the need to compute recursively the filter covariance matrix. The replacement of the rotational dynamics by a simple Markov model is also examined. In this paper special consideration is given to the problem of delayed quaternion measurements. Two solutions to this problem are suggested and tested. Real Rossi X-Ray Timing Explorer (RXTE) data is used to test these algorithms, and results are presented.
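The pseudo-linear decomposition can be sketched on the gyroscopic term of Euler's equation (the inertia matrix and rate vector below are hypothetical values): the term ω × (Jω) is written as M(ω)ω, and two equally valid choices of M(ω) illustrate the non-uniqueness.

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

J = np.diag([10.0, 15.0, 20.0])   # illustrative inertia matrix
w = np.array([0.1, -0.2, 0.05])   # illustrative angular-rate vector

# Nonlinear term of Euler's rotational dynamics: w x (J w)
nonlinear = np.cross(w, J @ w)

# Two different rate-dependent matrices that reproduce it when multiplied
# by w, showing the decomposition is not unique
M1 = skew(w) @ J
M2 = -skew(J @ w)
assert np.allclose(M1 @ w, nonlinear)
assert np.allclose(M2 @ w, nonlinear)
```

Treating M(ω) as a known "system matrix" at each step is what lets the nonlinear dynamics be handled by linear Kalman filtering machinery such as PSELIKA.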
On the local structure of spacetime in ghost-free bimetric theory and massive gravity
NASA Astrophysics Data System (ADS)
Hassan, S. F.; Kocic, Mikica
2018-05-01
The ghost-free bimetric theory describes interactions of gravity with another spin-2 field in terms of two Lorentzian metrics. However, if the two metrics do not admit compatible notions of space and time, the formulation of the initial value problem becomes problematic. Furthermore, the interaction potential is given in terms of the square root of a matrix which is in general nonunique and possibly nonreal. In this paper we show that both these issues are evaded by requiring reality and general covariance of the equations. First we prove that the reality of the square root matrix leads to a classification of the allowed metrics in terms of the intersections of their null cones. Then, the requirement of general covariance further restricts the allowed metrics to geometries that admit compatible notions of space and time. It also selects a unique definition of the square root matrix. The restrictions are compatible with the equations of motion. These results ensure that the ghost-free bimetric theory can be defined unambiguously and that the two metrics always admit compatible 3+1 decompositions, at least locally. In particular, these considerations rule out certain solutions of massive gravity with locally Closed Causal Curves, which have been used to argue that the theory is acausal.
Time integration algorithms for the two-dimensional Euler equations on unstructured meshes
NASA Technical Reports Server (NTRS)
Slack, David C.; Whitaker, D. L.; Walters, Robert W.
1994-01-01
Explicit and implicit time integration algorithms for the two-dimensional Euler equations on unstructured grids are presented. Both cell-centered and cell-vertex finite volume upwind schemes utilizing Roe's approximate Riemann solver are developed. For the cell-vertex scheme, a four-stage Runge-Kutta time integration, a four-stage Runge-Kutta time integration with implicit residual averaging, a point Jacobi method, a symmetric point Gauss-Seidel method, and two methods utilizing preconditioned sparse matrix solvers are presented. For the cell-centered scheme, a Runge-Kutta scheme, an implicit tridiagonal relaxation scheme modeled after line Gauss-Seidel, a fully implicit lower-upper (LU) decomposition, and a hybrid scheme utilizing both Runge-Kutta and LU methods are presented. A reverse Cuthill-McKee renumbering scheme is employed for the direct solver to decrease CPU time by reducing the fill of the Jacobian matrix. A comparison of the various time integration schemes is made for both first-order and higher-order accurate solutions using several mesh sizes; higher-order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The results obtained for a transonic flow over a circular arc suggest that the preconditioned sparse matrix solvers perform better than the other methods as the number of elements in the mesh increases.
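The bandwidth-reducing effect of reverse Cuthill-McKee renumbering can be sketched with SciPy (the sparsity pattern below is a hypothetical stand-in for a Jacobian):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Sparse symmetric matrix with scattered off-diagonal nonzeros
A = csr_matrix(np.array([[4, 0, 0, 1, 0],
                         [0, 4, 1, 0, 1],
                         [0, 1, 4, 0, 0],
                         [1, 0, 0, 4, 0],
                         [0, 1, 0, 0, 4]], dtype=float))

# RCM permutation; applying it symmetrically clusters nonzeros near the
# diagonal, which limits fill-in during direct factorization
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm, :][:, perm]

def bandwidth(M):
    r, c = M.nonzero()
    return int(np.max(np.abs(r - c)))

assert bandwidth(B) <= bandwidth(A)
```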
Students' Understanding of Quadratic Equations
ERIC Educational Resources Information Center
López, Jonathan; Robles, Izraim; Martínez-Planell, Rafael
2016-01-01
Action-Process-Object-Schema theory (APOS) was applied to study student understanding of quadratic equations in one variable. This required proposing a detailed conjecture (called a genetic decomposition) of mental constructions students may do to understand quadratic equations. The genetic decomposition which was proposed can contribute to help…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, Hong; Davidson, Ronald C.; Burby, Joshua W.
2014-04-08
The dynamics of charged particles in general linear focusing lattices with quadrupole, skew-quadrupole, dipole, and solenoidal components, as well as torsion of the fiducial orbit and variation of beam energy, is parametrized using a generalized Courant-Snyder (CS) theory, which extends the original CS theory for one degree of freedom to higher dimensions. The envelope function is generalized into an envelope matrix, and the phase advance is generalized into a 4D symplectic rotation, or a U(2) element. The 1D envelope equation, also known as the Ermakov-Milne-Pinney equation in quantum mechanics, is generalized to an envelope matrix equation in higher dimensions. Other components of the original CS theory, such as the transfer matrix, Twiss functions, and CS invariant (also known as the Lewis invariant) all have their counterparts, with remarkably similar expressions, in the generalized theory. The gauge group structure of the generalized theory is analyzed. By fixing the gauge freedom with a desired symmetry, the generalized CS parametrization assumes the form of the modified Iwasawa decomposition, whose importance in phase space optics and phase space quantum mechanics has been recently realized. This gauge fixing also symmetrizes the generalized envelope equation and expresses the theory using only the generalized Twiss function β. The generalized phase advance completely determines the spectral and structural stability properties of a general focusing lattice. For structural stability, the generalized CS theory enables application of the Krein-Moser theory to greatly simplify the stability analysis. The generalized CS theory provides an effective tool to study coupled dynamics and to discover more optimized lattice designs in the larger parameter space of general focusing lattices.
A Taxonomy of Latent Structure Assumptions for Probability Matrix Decomposition Models.
ERIC Educational Resources Information Center
Meulders, Michel; De Boeck, Paul; Van Mechelen, Iven
2003-01-01
Proposed a taxonomy of latent structure assumptions for probability matrix decomposition (PMD) that includes the original PMD model and a three-way extension of the multiple classification latent class model. Simulation study results show the usefulness of the taxonomy. (SLD)
On the parallel solution of parabolic equations
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Youcef
1989-01-01
Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Pade and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
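The partial-fraction trick can be sketched with the diagonal (1,1) Padé approximant R(z) = (1 + z/2)/(1 - z/2) = -1 - 4/(z - 2): each pole of the rational approximation to the exponential becomes an independent linear solve, which is where the time parallelism comes from (the test matrix below is a hypothetical symmetric stand-in for a discretized parabolic operator).

```python
import numpy as np
from scipy.linalg import expm

# Small symmetric negative-definite matrix, as parabolic problems produce
A = np.array([[-2.0, 1.0], [1.0, -2.0]]) * 0.1
v = np.array([1.0, 2.0])
I = np.eye(2)

# Partial-fraction form of the (1,1) Pade approximant to e^z applied to A:
# e^A v ≈ -v - 4 (A - 2I)^{-1} v.  With more poles, each shifted solve
# (A - theta_i I)^{-1} v runs on a separate processor.
approx = -v - 4.0 * np.linalg.solve(A - 2.0 * I, v)

exact = expm(A) @ v
assert np.allclose(approx, exact, atol=1e-2)
```

Higher-order Padé or Chebyshev approximants follow the same pattern with several complex poles, one independent solve per pole.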
Near-lossless multichannel EEG compression based on matrix and tensor decompositions.
Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej
2013-05-01
A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
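The "lossy plus residual" principle can be sketched with a rank-1 SVD lossy layer and uniform residual quantization (a toy multichannel signal, not EEG data): the guaranteed maximum absolute error is half the quantization step.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy multichannel signal: 8 channels x 500 samples sharing one waveform
t = np.linspace(0.0, 1.0, 500)
X = np.outer(rng.standard_normal(8), np.sin(2 * np.pi * 5 * t)) \
    + 0.05 * rng.standard_normal((8, 500))

# Lossy layer: rank-1 SVD approximation exploits inter-channel correlation
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_lossy = s[0] * np.outer(U[:, 0], Vt[0])

# Residual layer: uniform quantization with step q, then entropy coding
# (arithmetic coding in the paper); reconstruction error is bounded by q/2
q = 0.01
residual_q = np.round((X - X_lossy) / q) * q
X_rec = X_lossy + residual_q

assert np.max(np.abs(X - X_rec)) <= q / 2 + 1e-12
```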
NASA Astrophysics Data System (ADS)
Belitsky, A. V.
2017-10-01
The Operator Product Expansion for null polygonal Wilson loop in planar maximally supersymmetric Yang-Mills theory runs systematically in terms of multi-particle pentagon transitions which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unraveled in the past several years and culminated in a complete description of pentagons as an exact function of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4) matrix structure arising from internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.
NASA Astrophysics Data System (ADS)
Pomogailo, Anatolii D.; Dzhardimalieva, Gulzhian I.; Rozenberg, Aleksander S.; Muraviev, Dmitri N.
2003-12-01
The kinetic peculiarities of the thermal transformations of unsaturated metal carboxylates (transition metal acrylates and maleates as well as their cocrystallites) and properties of metal-polymer nanocomposites formed have been studied. The composition and structure of metal-containing precursors and the products of the thermolysis were identified by X-ray analysis, optical and electron microscopy, magnetic measurements, EXAFS, IR and mass spectroscopy. The thermal transformations of metal-containing monomers studied are the complex process including dehydration, solid phase polymerization, and thermolysis process which proceed at varied temperature ranges. At 200-300°C the rate of thermal decay can be described by first-order equations. The products of decompositions are nanometer-sized particles of metal or its oxides with a narrow size distribution (the mean particle diameter of 5-10nm) stabilized by the polymer matrix.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, T; Dong, X; Petrongolo, M
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.
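The noise statistics that the penalty weight exploits can be sketched at a single pixel (the 2 × 2 composition matrix and noise level below are hypothetical, not calibrated DECT values): direct inversion both amplifies and anticorrelates the noise across the decomposed material images.

```python
import numpy as np

# One pixel: high/low-energy measurements b relate to two material
# densities x through a 2x2 composition matrix A (illustrative numbers)
A = np.array([[0.8, 0.3],
              [0.4, 0.9]])
sigma_b = 0.01                 # i.i.d. noise level in the measured CT images

# Direct decomposition x = A^{-1} b propagates noise with full covariance
Ainv = np.linalg.inv(A)
Sigma_x = Ainv @ (sigma_b**2 * np.eye(2)) @ Ainv.T

# The decomposed channels are anticorrelated and amplified; the inverse of
# Sigma_x is the statistically motivated penalty weight in the least-square term
corr = Sigma_x[0, 1] / np.sqrt(Sigma_x[0, 0] * Sigma_x[1, 1])
assert corr < 0
assert np.trace(Sigma_x) > 2 * sigma_b**2   # noise amplification
```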
Separable decompositions of bipartite mixed states
NASA Astrophysics Data System (ADS)
Li, Jun-Li; Qiao, Cong-Feng
2018-04-01
We present a practical scheme for the decomposition of a bipartite mixed state into a sum of direct products of local density matrices, using the technique developed in Li and Qiao (Sci. Rep. 8:1442, 2018). In the scheme, the correlation matrix which characterizes the bipartite entanglement is first decomposed into two matrices composed of the Bloch vectors of local states. Then, we show that the symmetries of Bloch vectors are consistent with that of the correlation matrix, and the magnitudes of the local Bloch vectors are lower bounded by the correlation matrix. Concrete examples for the separable decompositions of bipartite mixed states are presented for illustration.
Introduction to quantized Lie groups and algebras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tjin, T.
1992-10-10
In this paper, the authors give a self-contained introduction to the theory of quantum groups according to Drinfeld, highlighting the formal aspects as well as the applications to the Yang-Baxter equation and representation theory. Introductions to Hopf algebras, Poisson structures and deformation quantization are also provided. After defining Poisson Lie groups the authors study their relation to Lie bialgebras and the classical Yang-Baxter equation. Then the authors explain in detail the concept of quantization for them. As an example the quantization of sl(2) is explicitly carried out. Next, the authors show how quantum groups are related to the Yang-Baxter equation and how they can be used to solve it. Using the quantum double construction, the authors explicitly construct the universal R matrix for the quantum sl(2) algebra. In the last section, the authors deduce all finite-dimensional irreducible representations for q a root of unity. The authors also give their tensor product decomposition (fusion rules), which is relevant to conformal field theory.
NASA Technical Reports Server (NTRS)
Lee, Jeh Won
1990-01-01
The objective is the theoretical analysis and the experimental verification of dynamics and control of a two link flexible manipulator with a flexible parallel link mechanism. Nonlinear equations of motion of the lightweight manipulator are derived by the Lagrangian method in symbolic form to better understand the structure of the dynamic model. The resulting equations of motion have a structure which is useful to reduce the number of terms calculated, to check correctness, or to extend the model to higher order. A manipulator with a flexible parallel link mechanism is a constrained dynamic system whose equations are sensitive to numerical integration error. This constrained system is solved using singular value decomposition of the constraint Jacobian matrix. Elastic motion is expressed by the assumed mode method. Mode shape functions of each link are chosen using the load interfaced component mode synthesis. The discrepancies between the analytical model and the experiment are explained using a simplified and a detailed finite element model.
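The null-space treatment of the constraint Jacobian can be sketched with NumPy's SVD (the Jacobian entries below are hypothetical): admissible velocities satisfy G v = 0, and the trailing right singular vectors give an orthonormal basis for that space.

```python
import numpy as np

# Constraint Jacobian G for a closed-loop linkage (illustrative values);
# admissible generalized velocities v satisfy G @ v = 0
G = np.array([[1.0, 0.5, -1.0, 0.0],
              [0.0, 1.0, 0.5, -1.0]])

U, s, Vt = np.linalg.svd(G)
rank = int(np.sum(s > 1e-12))
N = Vt[rank:].T          # columns span the null space of G

assert np.allclose(G @ N, 0.0)   # basis satisfies the constraints
assert N.shape == (4, 2)         # 4 coordinates, 2 independent motions

# Any admissible velocity is N @ z for reduced coordinates z, so the
# equations of motion can be projected onto the constraint manifold,
# avoiding the drift that plagues naive integration of the constrained system
```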
NASA Astrophysics Data System (ADS)
Ledet, Lasse S.; Sorokin, Sergey V.
2018-03-01
The paper addresses the classical problem of time-harmonic forced vibrations of a fluid-filled cylindrical shell considered as a multi-modal waveguide carrying infinitely many waves. The forced vibration problem is solved using tailored Green's matrices formulated in terms of eigenfunction expansions. The formulation of Green's matrix is based on special (bi-)orthogonality relations between the eigenfunctions, which are derived here for the fluid-filled shell. Further, the relations are generalised to any multi-modal symmetric waveguide. Using the orthogonality relations the transcendental equation system is converted into algebraic modal equations that can be solved analytically. Upon formulation of Green's matrices the solution space is studied in terms of completeness and convergence (uniformity and rate). Special features and findings exposed only through this modal decomposition method are elaborated and the physical interpretation of the bi-orthogonality relation is discussed in relation to the total energy flow which leads to derivation of simplified equations for the energy flow components.
Estimation of near-surface shear-wave velocity by inversion of Rayleigh waves
Xia, J.; Miller, R.D.; Park, C.B.
1999-01-01
The shear-wave (S-wave) velocity of near-surface materials (soil, rocks, pavement) and its effect on seismic-wave propagation are of fundamental interest in many groundwater, engineering, and environmental studies. Rayleigh-wave phase velocity of a layered-earth model is a function of frequency and four groups of earth properties: P-wave velocity, S-wave velocity, density, and thickness of layers. Analysis of the Jacobian matrix provides a measure of dispersion-curve sensitivity to earth properties. S-wave velocities are the dominant influence on a dispersion curve in the high-frequency range (>5 Hz), followed by layer thickness. An iterative solution technique for the weighted equation, using the Levenberg-Marquardt and singular-value decomposition techniques, proved very effective in the high-frequency range. Convergence of the weighted solution is guaranteed through selection of the damping factor using the Levenberg-Marquardt method. Synthetic examples demonstrate the calculation efficiency and stability of the inverse procedure. We verify our method using borehole S-wave velocity measurements.
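The damped (Levenberg-Marquardt) update and its SVD filter-factor form can be sketched as follows (random Jacobian and residual standing in for the linearized dispersion-curve problem):

```python
import numpy as np

rng = np.random.default_rng(2)
J = rng.standard_normal((6, 3))   # Jacobian of the dispersion curve (stand-in)
r = rng.standard_normal(6)        # data residual (stand-in)
lam = 0.1                         # Levenberg-Marquardt damping factor

# SVD form of the damped step: filter factors s/(s^2 + lam) suppress the
# small singular values that would otherwise amplify data noise
U, s, Vt = np.linalg.svd(J, full_matrices=False)
delta = Vt.T @ ((s / (s**2 + lam)) * (U.T @ r))

# Equivalent normal-equations form: (J^T J + lam I) delta = J^T r
delta_ne = np.linalg.solve(J.T @ J + lam * np.eye(3), J.T @ r)
assert np.allclose(delta, delta_ne)
```

Increasing `lam` shortens and stabilizes the step (steepest-descent-like), while `lam -> 0` recovers the plain least-squares (Gauss-Newton) update; tuning it per iteration is what guarantees convergence of the weighted solution.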
Angular-Rate Estimation Using Star Tracker Measurements
NASA Technical Reports Server (NTRS)
Azor, R.; Bar-Itzhack, I.; Deutschmann, Julie K.; Harman, Richard R.
1999-01-01
This paper presents algorithms for estimating the angular-rate vector of satellites using quaternion measurements. Two approaches are compared: one uses differentiated quaternion measurements to yield coarse rate measurements, which are then fed into two different estimators; in the other approach the raw quaternion measurements themselves are fed directly into the two estimators. The two estimators rely on the ability to decompose the non-linear rate-dependent part of the rotational dynamics equation of a rigid body into a product of an angular-rate-dependent matrix and the angular-rate vector itself. This decomposition, which is not unique, enables the treatment of the nonlinear spacecraft dynamics model as a linear one and, consequently, the application of a Pseudo-Linear Kalman Filter (PSELIKA). It also enables the application of a special Kalman filter which is based on the use of the solution of the State Dependent Algebraic Riccati Equation (SDARE) in order to compute the Kalman gain matrix and thus eliminates the need to propagate and update the filter covariance matrix. The replacement of the elaborate rotational dynamics by a simple first order Markov model is also examined. In this paper special consideration is given to the problem of delayed quaternion measurements. Two solutions to this problem are suggested and tested. Real Rossi X-Ray Timing Explorer (RXTE) data is used to test these algorithms, and results of these tests are presented.
Matrix decompositions of two-dimensional nuclear magnetic resonance spectra.
Havel, T F; Najfeld, I; Yang, J X
1994-08-16
Two-dimensional NMR spectra are rectangular arrays of real numbers, which are commonly regarded as digitized images to be analyzed visually. If one treats them instead as mathematical matrices, linear algebra techniques can also be used to extract valuable information from them. This matrix approach is greatly facilitated by means of a physically significant decomposition of these spectra into a product of matrices, namely S = PAP^T. Here, P denotes a matrix whose columns contain the digitized contours of each individual peak or multiplet in the one-dimensional spectrum, P^T is its transpose, and A is an interaction matrix specific to the experiment in question. The practical applications of this decomposition are considered in detail for two important types of two-dimensional NMR spectra, double quantum-filtered correlated spectroscopy and nuclear Overhauser effect spectroscopy, both in the weak-coupling approximation. The elements of A are the signed intensities of the cross-peaks in a double quantum-filtered correlated spectrum, or the integrated cross-peak intensities in the case of a nuclear Overhauser effect spectrum. This decomposition not only permits these spectra to be efficiently simulated but also permits the corresponding inverse problems to be given an elegant mathematical formulation to which standard numerical methods are applicable. Finally, the extension of this decomposition to the case of strong coupling is given.
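The S = PAP^T construction can be sketched directly (the Gaussian peak contours and cross-peak intensities below are illustrative, not measured data):

```python
import numpy as np

# Digitized contours of two 1D peaks on a 100-point frequency axis
x = np.linspace(0.0, 10.0, 100)
peak = lambda mu: np.exp(-(x - mu) ** 2 / 0.1)
P = np.column_stack([peak(3.0), peak(7.0)])   # one column per peak

# Interaction matrix: diagonal = auto-peaks, off-diagonal = cross-peaks
A = np.array([[1.0, 0.4],
              [0.4, 1.0]])

# Simulated 2D spectrum
S = P @ A @ P.T
assert np.allclose(S, S.T)   # symmetric, since A is symmetric here

# Inverse-problem sketch: with P known, A is recovered by least squares,
# since P has full column rank (pinv(P) @ P = I)
A_rec = np.linalg.pinv(P) @ S @ np.linalg.pinv(P).T
assert np.allclose(A_rec, A)
```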
NASA Astrophysics Data System (ADS)
Ford, Neville J.; Connolly, Joseph A.
2009-07-01
We give a comparison of the efficiency of three alternative decomposition schemes for the approximate solution of multi-term fractional differential equations using the Caputo form of the fractional derivative. The schemes we compare are based on conversion of the original problem into a system of equations. We review alternative approaches and consider how the most appropriate numerical scheme may be chosen to solve a particular equation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Xiang; Yang, Chao; State Key Laboratory of Computer Science, Chinese Academy of Sciences, Beijing 100190
2015-03-15
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
2016-11-21
Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.
Implementation of a kappa-epsilon turbulence model to RPLUS3D code
NASA Technical Reports Server (NTRS)
Chitsomboon, Tawit
1992-01-01
The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three dimensional flowfields with finite rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, there is no inversion of a matrix even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations into the code are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomoda, T.
1982-07-01
The method developed in the preceding paper is applied to the calculation of the spectra of positrons produced in the U + U collision. Matrix elements of the radial derivative operator between adiabatic basis states are calculated in the monopole approximation, with the finite nuclear size taken into account. These matrix elements are then modified for the supercritical case with the use of the analytical method presented in paper I of this series. The coupled differential equations for the occupation amplitudes of the basis states are solved and the positron spectra are obtained for the U + U collision. It is shown that the decomposition of the production probability into a spontaneous and an induced part depends on the definition of the resonance state and cannot be given unambiguously. The results are compared with those obtained by Reinhardt et al.
Recursive partitioned inversion of large (1500 x 1500) symmetric matrices
NASA Technical Reports Server (NTRS)
Putney, B. H.; Brownd, J. E.; Gomez, R. A.
1976-01-01
A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
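The partitioned-inverse idea can be sketched with the standard Schur-complement identity, which is what allows such algorithms to hold only a few submatrices in core at a time. This minimal recursion is an illustration of the identity, not the SOLVE program's actual generalized Gaussian elimination:

```python
import numpy as np

def partitioned_inverse(A, block=2):
    """Invert a symmetric positive definite matrix one block at a time
    using the Schur-complement update: a sketch of the idea behind
    partitioned inversion with limited in-core storage."""
    n = A.shape[0]
    if n <= block:
        return np.linalg.inv(A)
    k = block
    A11, A12, A22 = A[:k, :k], A[:k, k:], A[k:, k:]
    inv11 = np.linalg.inv(A11)
    S = A22 - A12.T @ inv11 @ A12          # Schur complement of A11
    invS = partitioned_inverse(S, block)   # recurse on the trailing block
    B12 = -inv11 @ A12 @ invS
    B11 = inv11 + inv11 @ A12 @ invS @ A12.T @ inv11
    return np.block([[B11, B12], [B12.T, invS]])

M = np.array([[4., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
print(np.allclose(partitioned_inverse(M) @ M, np.eye(3)))  # True
```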
Repeated decompositions reveal the stability of infomax decomposition of fMRI data
Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott
2010-01-01
In this study, we decomposed 12 fMRI data sets from six subjects, each 101 times, using the infomax algorithm. The first decomposition was taken as a reference decomposition; the others were used to form a component matrix of 100 by 100 components. Equivalence relations between components in this matrix, defined as maximum spatial correlations to the components of the reference decomposition, were found by the Hungarian sorting method and used to form 100 equivalence classes for each data set. We then tested the reproducibility of the matched components in the equivalence classes using uncertainty measures based on component distributions, time courses, and ROC curves. Infomax ICA rarely failed to derive nearly the same components in different decompositions. Very few components per data set were poorly reproduced, even using vector angle uncertainty measures stricter than correlation and detection theory measures. PMID:17281453
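The matching step can be sketched as follows, assuming SciPy is available: components of a repeated decomposition are paired with reference components by maximizing total spatial correlation with the Hungarian method (`scipy.optimize.linear_sum_assignment`). The data here are synthetic stand-ins for component maps:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
ref = rng.standard_normal((5, 200))                       # 5 reference component maps
perm = rng.permutation(5)
test = ref[perm] + 0.01 * rng.standard_normal((5, 200))   # shuffled, noisy re-decomposition

# Correlation between every reference/test component pair
corr = np.corrcoef(ref, test)[:5, 5:]

# Hungarian method: maximize total |correlation| (minimize its negative)
rows, cols = linear_sum_assignment(-np.abs(corr))
recovered = cols                   # recovered[i] = test component matched to ref i
print(np.array_equal(perm[recovered], np.arange(5)))      # permutation recovered
```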
Curtis, Tyler E; Roeder, Ryan K
2017-10-01
Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. 
Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in magnitude by comparison. The material basis matrix calibration was more sensitive to changes in the calibration methods than the scaling factor calibration. The material basis matrix calibration significantly influenced both the quantitative and spatial accuracy of material decomposition, while the scaling factor calibration influenced quantitative but not spatial accuracy. Importantly, the median RMSE of material decomposition was as low as ~1.5 mM (~0.24 mg/mL gadolinium), which was similar in magnitude to that measured by optical spectroscopy on the same samples. The accuracy of quantitative material decomposition in photon-counting spectral CT was significantly influenced by calibration methods which must therefore be carefully considered for the intended diagnostic imaging application. © 2017 American Association of Physicists in Medicine.
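The core of supervised material decomposition can be sketched as a per-voxel linear inversion against the calibrated basis matrix. The attenuation values below are illustrative placeholders (not measured coefficients), and a plain least-squares fit stands in for the maximum a posteriori estimator used in the study:

```python
import numpy as np

# Hypothetical basis matrix: attenuation of gadolinium, calcium, and water
# (columns) in five energy bins (rows). Values are illustrative only.
M = np.array([
    [2.10, 0.90, 0.25],
    [1.60, 0.75, 0.23],
    [1.20, 0.60, 0.21],
    [2.60, 0.50, 0.20],   # bin just above the Gd k-edge: Gd attenuation jumps
    [2.20, 0.42, 0.19],
])

true_conc = np.array([0.5, 0.2, 1.0])        # arbitrary "ground truth" amounts
measured = M @ true_conc + 1e-3 * np.random.default_rng(2).standard_normal(5)

# Unregularized decomposition: least-squares fit of material amounts per voxel
conc, *_ = np.linalg.lstsq(M, measured, rcond=None)
print(np.round(conc, 2))
```

The accuracy of `conc` is governed directly by how well `M` is calibrated, which is the point the study makes experimentally.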
Chen, Nan; Majda, Andrew J
2017-12-05
Solving the Fokker-Planck equation for high-dimensional complex dynamical systems is an important issue. Recently, the authors developed efficient statistically accurate algorithms for solving the Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures, which contain many strong non-Gaussian features such as intermittency and fat-tailed probability density functions (PDFs). The algorithms involve a hybrid strategy with a small number of samples [Formula: see text], where a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious Gaussian kernel density estimation in the remaining low-dimensional subspace. In this article, two effective strategies are developed and incorporated into these algorithms. The first strategy involves a judicious block decomposition of the conditional covariance matrix such that the evolutions of different blocks have no interactions, which allows an extremely efficient parallel computation due to the small size of each individual block. The second strategy exploits statistical symmetry for a further reduction of [Formula: see text] The resulting algorithms can efficiently solve the Fokker-Planck equation with strongly non-Gaussian PDFs in much higher dimensions even with orders in the millions and thus beat the curse of dimension. The algorithms are applied to a [Formula: see text]-dimensional stochastic coupled FitzHugh-Nagumo model for excitable media. An accurate recovery of both the transient and equilibrium non-Gaussian PDFs requires only [Formula: see text] samples! In addition, the block decomposition facilitates the algorithms to efficiently capture the distinct non-Gaussian features at different locations in a [Formula: see text]-dimensional two-layer inhomogeneous Lorenz 96 model, using only [Formula: see text] samples. Copyright © 2017 the Author(s). Published by PNAS.
Polar decomposition for attitude determination from vector observations
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.
1993-01-01
This work treats the problem of weighted least squares fitting of a 3D Euclidean-coordinate transformation matrix to a set of unit vectors measured in the reference and transformed coordinates. A closed-form analytic solution to the problem is re-derived. The fact that the solution is the closest orthogonal matrix to some matrix defined on the measured vectors and their weights is clearly demonstrated. Several known algorithms for computing the analytic closed form solution are considered. An algorithm is discussed which is based on the polar decomposition of matrices into the closest unitary matrix to the decomposed matrix and a Hermitian matrix. A somewhat longer improved algorithm is suggested too. A comparison of several algorithms is carried out using simulated data as well as real data from the Upper Atmosphere Research Satellite. The comparison is based on accuracy and time consumption. It is concluded that the algorithms based on polar decomposition yield a simple although somewhat less accurate solution. The precision of the latter algorithms increases with the number of the measured vectors and with the accuracy of their measurement.
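The polar-decomposition route can be sketched directly, assuming SciPy: the attitude profile matrix built from weighted vector pairs has the sought attitude as its closest orthogonal matrix, i.e. the orthogonal factor of its polar decomposition. The setup below is synthetic:

```python
import numpy as np
from scipy.linalg import polar

rng = np.random.default_rng(3)
# True attitude: a rotation about z by 30 degrees
th = np.radians(30)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])

ref = rng.standard_normal((5, 3))
ref /= np.linalg.norm(ref, axis=1, keepdims=True)   # unit reference vectors
obs = ref @ R_true.T                                # same vectors, body frame
w = np.ones(5)                                      # equal weights

# Attitude profile matrix; its orthogonal polar factor is the
# weighted-least-squares estimate of the attitude matrix
B = sum(wi * np.outer(bi, ri) for wi, bi, ri in zip(w, obs, ref))
U, H = polar(B)          # B = U @ H, with U orthogonal and H symmetric PSD
print(np.allclose(U, R_true))
```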
NASA Technical Reports Server (NTRS)
Wade, T. O.
1984-01-01
Reduction techniques for traffic matrices are explored in some detail. These matrices arise in satellite switched time-division multiple access (SS/TDMA) techniques whereby switching of uplink and downlink beams is required to facilitate interconnectivity of beam zones. A traffic matrix is given to represent that traffic to be transmitted from n uplink beams to n downlink beams within a TDMA frame typically of 1 ms duration. The frame is divided into segments of time and during each segment a portion of the traffic is represented by a switching mode. This time slot assignment is characterized by a mode matrix in which there is not more than a single non-zero entry on each line (row or column) of the matrix. Investigation is confined to decomposition of an n x n traffic matrix by mode matrices with a requirement that the decomposition be 100 percent efficient or, equivalently, that the line(s) in the original traffic matrix whose sum is maximal (called critical line(s)) remain maximal as mode matrices are subtracted throughout the decomposition process. A method of decomposition of an n x n traffic matrix by mode matrices results in a number of steps that is bounded by n^2 - 2n + 2. It is shown that this upper bound exists for an n x n matrix wherein all the lines are maximal (called a quasi doubly stochastic (QDS) matrix) or for an n x n matrix that is completely arbitrary. That is, the fact that no method can exist with a lower upper bound is shown for both QDS and arbitrary matrices, in an elementary and straightforward manner.
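One greedy step of such a decomposition can be sketched with an assignment solver (SciPy assumed): select at most one entry per line, then subtract the smallest selected entry as the mode duration. This illustrates only the mode-matrix subtraction; it does not reproduce the paper's critical-line-preserving, n^2 - 2n + 2-step construction:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def extract_mode(T):
    """Peel one switching-mode matrix off traffic matrix T: at most one
    nonzero entry per line (row or column), active for the duration of the
    smallest selected entry. A greedy Birkhoff-type step for illustration."""
    rows, cols = linear_sum_assignment(-T)       # maximize covered traffic
    sel = T[rows, cols]
    dur = sel[sel > 0].min()                     # mode duration
    mode = np.zeros_like(T)
    mode[rows, cols] = np.where(sel > 0, dur, 0.0)
    return T - mode, mode

T = np.array([[3., 1., 0.],
              [0., 2., 2.],
              [1., 0., 3.]])
remaining, mode = extract_mode(T)
print(mode)   # here the assignment picks the diagonal, with duration 2
```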
Polystyrene Foam Products Equation of State as a Function of Porosity and Fill Gas
NASA Astrophysics Data System (ADS)
Mulford, R. N.; Swift, D. C.
2009-12-01
An accurate EOS for polystyrene foam is necessary for analysis of numerous experiments in shock compression, inertial confinement fusion, and astrophysics. Plastic to gas ratios vary between various samples of foam, according to the density and cell-size of the foam. A matrix of compositions has been investigated, allowing prediction of foam response as a function of the plastic-to-air ratio. The EOS code CHEETAH allows participation of the air in the decomposition reaction of the foam. Differences between air-filled, Ar-blown, and CO2-blown foams are investigated, to estimate the importance of allowing air to react with products of polystyrene decomposition. O2-blown foams are included in some comparisons, to amplify any consequences of reaction with oxygen in air. He-blown foams are included in some comparisons, to provide an extremum of density. Product pressures are slightly higher for oxygen-containing fill gases than for non-oxygen-containing fill gases. Examination of product species indicates that CO2 decomposes at high temperatures.
Mafusire, Cosmas; Krüger, Tjaart P J
2018-06-01
The concept of orthonormal vector circle polynomials is revisited by deriving a set from the Cartesian gradient of Zernike polynomials in a unit circle using a matrix-based approach. The heart of this model is a closed-form matrix equation of the gradient of Zernike circle polynomials expressed as a linear combination of lower-order Zernike circle polynomials related through a gradient matrix. This is a sparse matrix whose elements are two-dimensional standard basis transverse Euclidean vectors. Using the outer product form of the Cholesky decomposition, the gradient matrix is used to calculate a new matrix, which we used to express the Cartesian gradient of the Zernike circle polynomials as a linear combination of orthonormal vector circle polynomials. Since this new matrix is singular, the orthonormal vector polynomials are recovered by reducing the matrix to its row echelon form using the Gauss-Jordan elimination method. We extend the model to derive orthonormal vector general polynomials, which are orthonormal in a general pupil by performing a similarity transformation on the gradient matrix to give its equivalent in the general pupil. The outer form of the Gram-Schmidt procedure and the Gauss-Jordan elimination method are then applied to the general pupil to generate the orthonormal vector general polynomials from the gradient of the orthonormal Zernike-based polynomials. The performance of the model is demonstrated with a simulated wavefront in a square pupil inscribed in a unit circle.
Smallwood, D. O.
1996-01-01
It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as a SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
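The quantity all of these factorizations operate on is the cross-spectral density (CSD) matrix; the sketch below estimates a CSD matrix crudely by segment averaging on synthetic records and evaluates the standard multiple-coherence formula at one frequency bin. Segment counts and the bin index are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(4)
# Two input records and one output record = mixture of inputs plus noise
n = 4096
x = rng.standard_normal((2, n))
y = 0.8 * x[0] + 0.5 * x[1] + 0.1 * rng.standard_normal(n)

# Crude CSD estimate at one frequency bin via non-overlapping segments
# (a stand-in for a proper Welch-style estimate)
segs = np.stack([np.fft.rfft(s.reshape(16, 256), axis=1) for s in (*x, y)])
k = 10                                   # an arbitrary frequency bin
Z = segs[:, :, k]                        # records x segments, at bin k
G = (Z @ Z.conj().T) / Z.shape[1]        # 3x3 CSD matrix for [x1, x2, y]

Gxx, Gxy, Gyy = G[:2, :2], G[:2, 2], G[2, 2].real
# Multiple coherence of y with respect to both inputs
multiple_coh = (Gxy.conj() @ np.linalg.solve(Gxx, Gxy)).real / Gyy
print(round(multiple_coh, 3))            # near 1: y is mostly coherent with x
```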
Bienvenu, François; Akçay, Erol; Legendre, Stéphane; McCandlish, David M
2017-06-01
Matrix projection models are a central tool in many areas of population biology. In most applications, one starts from the projection matrix to quantify the asymptotic growth rate of the population (the dominant eigenvalue), the stable stage distribution, and the reproductive values (the dominant right and left eigenvectors, respectively). Any primitive projection matrix also has an associated ergodic Markov chain that contains information about the genealogy of the population. In this paper, we show that these facts can be used to specify any matrix population model as a triple consisting of the ergodic Markov matrix, the dominant eigenvalue and one of the corresponding eigenvectors. This decomposition of the projection matrix separates properties associated with lineages from those associated with individuals. It also clarifies the relationships between many quantities commonly used to describe such models, including the relationship between eigenvalue sensitivities and elasticities. We illustrate the utility of such a decomposition by introducing a new method for aggregating classes in a matrix population model to produce a simpler model with a smaller number of classes. Unlike the standard method, our method has the advantage of preserving reproductive values and elasticities. It also has conceptually satisfying properties such as commuting with changes of units. Copyright © 2017 Elsevier Inc. All rights reserved.
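The triple (Markov matrix, dominant eigenvalue, eigenvector) can be illustrated with a small hypothetical projection matrix. The construction below is one standard way to form the associated ergodic Markov chain from the dominant eigenpair, and it shows that the triple recovers the projection matrix, so the two parametrizations are equivalent:

```python
import numpy as np

# A primitive projection matrix for a hypothetical 3-stage life cycle
A = np.array([[0.0, 1.5, 2.0],
              [0.6, 0.0, 0.0],
              [0.0, 0.7, 0.4]])

evals, evecs = np.linalg.eig(A)
i = np.argmax(evals.real)
lam = evals[i].real                  # asymptotic growth rate (dominant eigenvalue)
w = np.abs(evecs[:, i].real)
w /= w.sum()                         # stable stage distribution

# Associated Markov matrix: P[i, j] = A[i, j] * w[j] / (lam * w[i]);
# rows sum to one because A @ w = lam * w
P = A * w[None, :] / (lam * w[:, None])
print(np.allclose(P.sum(axis=1), 1.0))

# The triple (P, lam, w) recovers A, so it is an equivalent parametrization
A_back = P * lam * w[:, None] / w[None, :]
print(np.allclose(A_back, A))
```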
NASA Astrophysics Data System (ADS)
Snakowska, Anna; Jurkiewicz, Jerzy; Gorazd, Łukasz
2017-05-01
The paper presents a derivation of the impedance matrix based on the rigorous solution of the wave equation obtained by the Wiener-Hopf technique for a semi-infinite unflanged cylindrical duct. The impedance matrix allows, in turn, calculation of the acoustic impedance along the duct and, as a special case, the radiation impedance. The analysis is carried out for a multimode incident wave, accounting for mode coupling at the duct outlet not only qualitatively but also quantitatively for a selected source operating inside. The quantitative evaluation of the acoustic impedance requires setting the mode amplitudes, which has been obtained by applying the mode decomposition method to far-field pressure radiation measurements and theoretical formulae for single-mode directivity characteristics of an unflanged duct. Calculation of the acoustic impedance for a non-uniform distribution of the sound pressure and the sound velocity on a duct cross section requires determination of the acoustic power transmitted along/radiated from the duct. In the paper, the impedance matrix, the power, and the acoustic impedance were derived as functions of Helmholtz number and distance from the outlet.
A Longitudinal Study on Human Outdoor Decomposition in Central Texas.
Suckling, Joanna K; Spradley, M Katherine; Godde, Kanya
2016-01-01
The development of a methodology that estimates the postmortem interval (PMI) from stages of decomposition is a goal for which forensic practitioners strive. A proposed equation (Megyesi et al. 2005) that utilizes total body score (TBS) and accumulated degree days (ADD) was tested using longitudinal data collected from human remains donated to the Forensic Anthropology Research Facility (FARF) at Texas State University-San Marcos. Exact binomial tests examined the rate at which the equation successfully predicted ADD. Statistically significant differences were found between ADD estimated by the equation and the observed value for decomposition stage. Differences remained significant after carnivore-scavenged donations were removed from analysis. Low success rates of the equation in predicting ADD from TBS and the wide standard errors demonstrate the need to re-evaluate the use of this equation and methodology for PMI estimation in different environments; rather, multivariate methods and equations should be derived that are environmentally specific. © 2015 American Academy of Forensic Sciences.
ADM For Solving Linear Second-Order Fredholm Integro-Differential Equations
NASA Astrophysics Data System (ADS)
Karim, Mohd F.; Mohamad, Mahathir; Saifullah Rusiman, Mohd; Che-Him, Norziha; Roslan, Rozaini; Khalid, Kamil
2018-04-01
In this paper, we apply the Adomian decomposition method (ADM) to numerically solve linear second-order Fredholm integro-differential equations. The approximate solutions of the problems are calculated by the Maple package. Some numerical examples have been considered to illustrate the ADM for solving this equation. The results are compared with the existing exact solution. Thus, the Adomian decomposition method can be the best alternative method for solving linear second-order Fredholm integro-differential equations. It converges to the exact solution quickly and at the same time reduces the computational work for solving the equation. The results obtained by ADM show its ability and efficiency for solving these equations.
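The ADM iteration can be sketched on a simpler first-order Fredholm integral equation (the second-order integro-differential case adds only an inverse differential operator): successive terms are generated by feeding the previous term back through the integral operator, and the partial sums converge geometrically to the exact solution. The specific kernel and right-hand side below are illustrative choices:

```python
import numpy as np

def trapz(y, x):
    """Composite trapezoidal rule (small helper to keep the sketch self-contained)."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2)

# Solve u(x) = x + \int_0^1 x t u(t) dt by Adomian decomposition:
# u = sum u_n, with u_0(x) = x and u_{n+1}(x) = x \int_0^1 t u_n(t) dt.
# The exact solution of this example is u(x) = 3x/2.
x = np.linspace(0.0, 1.0, 2001)
u_n = x.copy()                 # u_0 on the grid
u = u_n.copy()
for _ in range(30):
    c = trapz(x * u_n, x)      # \int_0^1 t u_n(t) dt
    u_n = c * x                # next Adomian term
    u += u_n                   # partial sum of the decomposition series

print(np.max(np.abs(u - 1.5 * x)) < 1e-6)  # converged to the exact solution
```

Each term shrinks by a factor of 1/3 here, which is the quick geometric convergence the abstract refers to.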
Auto-Bäcklund transformations for a matrix partial differential equation
NASA Astrophysics Data System (ADS)
Gordoa, P. R.; Pickering, A.
2018-07-01
We derive auto-Bäcklund transformations, analogous to those of the matrix second Painlevé equation, for a matrix partial differential equation. We also then use these auto-Bäcklund transformations to derive matrix equations involving shifts in a discrete variable, a process analogous to the use of the auto-Bäcklund transformations of the matrix second Painlevé equation to derive a discrete matrix first Painlevé equation. The equations thus derived then include amongst other examples a semidiscrete matrix equation which can be considered to be an extension of this discrete matrix first Painlevé equation. The application of this technique to the auto-Bäcklund transformations of the scalar case of our partial differential equation has not been considered before, and so the results obtained here in this scalar case are also new. Other equations obtained here using this technique include a scalar semidiscrete equation which arises in the case of the second Painlevé equation, and which does not seem to have been thus derived previously.
The Three-Component Defocusing Nonlinear Schrödinger Equation with Nonzero Boundary Conditions
NASA Astrophysics Data System (ADS)
Biondini, Gino; Kraus, Daniel K.; Prinari, Barbara
2016-12-01
We present a rigorous theory of the inverse scattering transform (IST) for the three-component defocusing nonlinear Schrödinger (NLS) equation with initial conditions approaching constant values with the same amplitude as x → ±∞. The theory combines and extends to a problem with non-zero boundary conditions three fundamental ideas: (i) the tensor approach used by Beals, Deift and Tomei for the n-th order scattering problem, (ii) the triangular decompositions of the scattering matrix used by Novikov, Manakov, Pitaevski and Zakharov for the N-wave interaction equations, and (iii) a generalization of the cross product via the Hodge star duality, which, to the best of our knowledge, is used in the context of the IST for the first time in this work. The combination of the first two ideas allows us to rigorously obtain a fundamental set of analytic eigenfunctions. The third idea allows us to establish the symmetries of the eigenfunctions and scattering data. The results are used to characterize the discrete spectrum and to obtain exact soliton solutions, which describe generalizations of the so-called dark-bright solitons of the two-component NLS equation.
Salient Object Detection via Structured Matrix Decomposition.
Peng, Houwen; Li, Bing; Ling, Haibin; Hu, Weiming; Xiong, Weihua; Maybank, Stephen J
2016-05-04
Low-rank recovery models have shown potential for salient object detection, where a matrix is decomposed into a low-rank matrix representing image background and a sparse matrix identifying salient objects. Two deficiencies, however, still exist. First, previous work typically assumes the elements in the sparse matrix are mutually independent, ignoring the spatial and pattern relations of image regions. Second, when the low-rank and sparse matrices are relatively coherent, e.g., when there are similarities between the salient objects and background or when the background is complicated, it is difficult for previous models to disentangle them. To address these problems, we propose a novel structured matrix decomposition model with two structural regularizations: (1) a tree-structured sparsity-inducing regularization that captures the image structure and enforces patches from the same object to have similar saliency values, and (2) a Laplacian regularization that enlarges the gaps between salient objects and the background in feature space. Furthermore, high-level priors are integrated to guide the matrix decomposition and boost the detection. We evaluate our model for salient object detection on five challenging datasets including single object, multiple objects and complex scene images, and show competitive results as compared with 24 state-of-the-art methods in terms of seven performance metrics.
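The unstructured baseline that this model extends, plain low-rank plus sparse recovery, can be sketched with a fixed-penalty ADMM iteration: singular value thresholding for the low-rank (background) part and soft thresholding for the sparse (salient) part. Parameter choices below follow common robust-PCA defaults and the data are synthetic; the paper's structured regularizers are not included:

```python
import numpy as np

def rpca(X, n_iter=200):
    """Low-rank + sparse split X ~ L + S via a fixed-penalty ADMM iteration."""
    m, n = X.shape
    lam = 1.0 / np.sqrt(max(m, n))          # standard sparsity weight
    mu = m * n / (4.0 * np.abs(X).sum())    # standard penalty parameter
    Y = np.zeros_like(X)                    # scaled dual variable
    S = np.zeros_like(X)
    for _ in range(n_iter):
        # Singular value thresholding for the low-rank part
        U, sig, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Soft thresholding for the sparse part
        R = X - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y += mu * (X - L - S)
    return L, S

rng = np.random.default_rng(5)
L0 = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))  # rank-2 background
S0 = np.zeros((40, 40))
mask = rng.random((40, 40)) < 0.05
S0[mask] = 10 * rng.standard_normal(mask.sum())                   # sparse outliers
L, S = rpca(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0) < 0.1)          # background recovered
```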
ML 3.0 smoothed aggregation user's guide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sala, Marzio; Hu, Jonathan Joseph; Tuminaro, Raymond Stephen
2004-05-01
ML is a multigrid preconditioning package intended to solve linear systems of equations Ax = b where A is a user supplied n x n sparse matrix, b is a user supplied vector of length n and x is a vector of length n to be computed. ML should be used on large sparse linear systems arising from partial differential equation (PDE) discretizations. While technically any linear system can be considered, ML should be used on linear systems that correspond to things that work well with multigrid methods (e.g. elliptic PDEs). ML can be used as a stand-alone package or to generate preconditioners for a traditional iterative solver package (e.g. Krylov methods). We have supplied support for working with the AZTEC 2.1 and AZTECOO iterative package [15]. However, other solvers can be used by supplying a few functions. This document describes one specific algebraic multigrid approach: smoothed aggregation. This approach is used within several specialized multigrid methods: one for the eddy current formulation for Maxwell's equations, and a multilevel and domain decomposition method for symmetric and non-symmetric systems of equations (like elliptic equations, or compressible and incompressible fluid dynamics problems). Other methods exist within ML but are not described in this document. Examples are given illustrating the problem definition and exercising multigrid options.
ML 3.1 smoothed aggregation user's guide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sala, Marzio; Hu, Jonathan Joseph; Tuminaro, Raymond Stephen
2004-10-01
ML is a multigrid preconditioning package intended to solve linear systems of equations Ax = b where A is a user supplied n x n sparse matrix, b is a user supplied vector of length n and x is a vector of length n to be computed. ML should be used on large sparse linear systems arising from partial differential equation (PDE) discretizations. While technically any linear system can be considered, ML should be used on linear systems that correspond to things that work well with multigrid methods (e.g. elliptic PDEs). ML can be used as a stand-alone package or to generate preconditioners for a traditional iterative solver package (e.g. Krylov methods). We have supplied support for working with the Aztec 2.1 and AztecOO iterative package [16]. However, other solvers can be used by supplying a few functions. This document describes one specific algebraic multigrid approach: smoothed aggregation. This approach is used within several specialized multigrid methods: one for the eddy current formulation for Maxwell's equations, and a multilevel and domain decomposition method for symmetric and nonsymmetric systems of equations (like elliptic equations, or compressible and incompressible fluid dynamics problems). Other methods exist within ML but are not described in this document. Examples are given illustrating the problem definition and exercising multigrid options.
Hypermatrix scheme for finite element systems on CDC STAR-100 computer
NASA Technical Reports Server (NTRS)
Noor, A. K.; Voigt, S. J.
1975-01-01
A study is made of the adaptation of the hypermatrix (block matrix) scheme for solving large systems of finite element equations to the CDC STAR-100 computer. Discussion is focused on the organization of the hypermatrix computation using Cholesky decomposition and the mode of storage of the different submatrices to take advantage of the STAR pipeline (streaming) capability. Consideration is also given to the associated data handling problems and the means of balancing the I/O and CPU times in the solution process. Numerical examples are presented showing the anticipated gain in CPU speed over the CDC 6600 to be obtained by using the proposed algorithms on the STAR computer.
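The hypermatrix organization of Cholesky decomposition can be sketched as a blocked factorization in which only a few submatrices are active at each step, which is what makes out-of-core storage and pipelined (streaming) arithmetic possible. Block size and names here are illustrative:

```python
import numpy as np

def block_cholesky(A, b=2):
    """Cholesky factorization A = L L^T organized by b-by-b submatrices
    (the 'hypermatrix' scheme): each step touches one block column."""
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(0, n, b):
        J = slice(j, j + b)
        # Diagonal block: subtract contributions of earlier block columns
        Ajj = A[J, J] - L[J, :j] @ L[J, :j].T
        L[J, J] = np.linalg.cholesky(Ajj)
        for i in range(j + b, n, b):
            I = slice(i, i + b)
            Aij = A[I, J] - L[I, :j] @ L[J, :j].T
            # Solve L[I, J] @ L[J, J].T = Aij for the subdiagonal block
            L[I, J] = np.linalg.solve(L[J, J], Aij.T).T
    return L

rng = np.random.default_rng(6)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)            # a symmetric positive definite matrix
L = block_cholesky(A)
print(np.allclose(L @ L.T, A))
```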
NASA Astrophysics Data System (ADS)
Tkachova, P. P.; Krot, A. M.
2009-04-01
This work investigates the conditions under which a growing rotational disturbance originates in a gas-liquid protoplanetary cloud under the action of a periodic force. A model, based on the Reynolds equations [1], is proposed to describe the self-organization of rotational disturbances of viscous gas-liquid substance in a protoplanetary cloud. The Reynolds equations together with the continuity equation in a cylindrical frame of reference (r, e, z) serve as the basic relations of this analytical model. The mean velocity is assumed to be zero from the onset of the exterior periodic force. The Reynolds tensor of turbulent stresses of the velocity disturbances in the nascent fluid flow is sought, with the z-component of the velocity disturbance assumed to be zero. Under the assumption that the z-components of the turbulent stresses also vanish, the (r, e) turbulent stress components are found. The Reynolds and continuity equations (in the cylindrical coordinate system) then reduce to a system of two partial differential equations for the (r, e) cylindrical components of the turbulent stress of the velocity disturbance. A common solution of these two equations reduces the task to a single differential equation for the (r, e) turbulent stress. This homogeneous differential equation is solved by the method of separation of variables. As a result, a superposition of cosine and sine waves yields an (r, e) turbulent stress wave with elliptic (or circular) polarization. Moreover, the amplitude of both the cosine and sine waves is shown to be an increasing function of r, growing as r^(n^2 - 2), and the oscillations intensify with increasing oscillation frequency. Computational experiments based on the STAR-CD package [2] confirm the main analytical statements of the proposed model of nascent self-rotation in a gas-liquid protoplanetary cloud.
This work also develops a nonlinear analysis of an attractor describing the hydrodynamic state of rotating flows, based on the matrix decomposition [3]. This analysis makes it possible to estimate the values of the characteristic parameters (including the control parameter) of the attractor and to predict its evolution in time, analogously to [4]. References: [1] Loytsyansky, L.G. Mechanics of Fluid and Gas, Nauka: Moscow, 1973 (in Russian). [2] Methodology for STAR-CD: Version 3.24. Computational Dynamics Limited, 2004. [3] Krot, A.M. Matrix decompositions of vector functions and shift operators on the trajectories of a nonlinear dynamical system, Nonlinear Phenomena in Complex Systems, vol. 4, no. 2, pp. 106-115, 2001. [4] Krot, A.M. and Tkachova, P.P. Investigation of geometrical shapes of hydrodynamic structures for identification of dynamical states of convective liquid, in: Lecture Notes in Computer Sciences, Berlin, Germany: Springer, Part 1, vol. 2667, pp. 398-406, 2003.
Decomposition of the Multistatic Response Matrix and Target Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambers, D H
2008-02-14
Decomposition of the time-reversal operator for an array, or equivalently the singular value decomposition of the multistatic response matrix, has been used to improve imaging and localization of targets in complicated media. Typically, each singular value is associated with one scatterer, even though it has been shown in several cases that a single scatterer can generate several singular values. In this paper we review the analysis of the time-reversal operator (TRO), or equivalently the multistatic response matrix (MRM), of an array system and a small target. We begin with two-dimensional scattering from a small cylinder, then show the results for a small non-spherical target in three dimensions. We show that the number and magnitudes of the singular values contain information about target composition, shape, and orientation.
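As a toy illustration of the point-target case described above, the multistatic response matrix of a single point scatterer is rank one, so its SVD shows one dominant singular value. All geometry and scattering parameters below are invented for the sketch, not taken from the paper:

```python
import numpy as np

k = 2 * np.pi                            # wavenumber, assuming unit wavelength
elements = np.linspace(-2.0, 2.0, 16)    # hypothetical linear-array positions
r = np.hypot(elements - 0.3, 5.0)        # element-to-target distances
g = np.exp(1j * k * r) / np.sqrt(r)      # 2-D free-space Green's-function vector

# For a single point scatterer the MRM is rank one: K = tau * g g^T
K = 1.5 * np.outer(g, g)
s = np.linalg.svd(K, compute_uv=False)

# one dominant singular value signals a single point-like scatterer
assert s[1] < 1e-8 * s[0]
```

Non-point targets break this rank-one structure, which is how extra singular values come to carry shape and orientation information.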
Pham, T. Anh; Nguyen, Huy-Viet; Rocca, Dario; ...
2013-04-26
In a recent paper we presented an approach to evaluate quasiparticle energies based on the spectral decomposition of the static dielectric matrix. This method does not require the calculation of unoccupied electronic states or the direct diagonalization of large dielectric matrices, and it avoids the use of plasmon-pole models. The numerical accuracy of the approach is controlled by a single parameter, i.e., the number of eigenvectors used in the spectral decomposition of the dielectric matrix. Here we present a comprehensive validation of the method, encompassing calculations of ionization potentials and electron affinities of various molecules and of band gaps for several crystalline and disordered semiconductors. Lastly, we demonstrate the efficiency of our approach by carrying out GW calculations for systems with several hundred valence electrons.
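The role of the single accuracy parameter can be illustrated on a stand-in matrix. This is schematic only (a random symmetric positive definite matrix, not a dielectric matrix): keeping more eigenvectors in the spectral decomposition monotonically improves the approximation:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
B = rng.standard_normal((n, n))
eps_m = np.eye(n) + B @ B.T / n          # SPD stand-in for a dielectric matrix
w, V = np.linalg.eigh(eps_m)             # full diagonalization, for reference only

errs = []
for k in (5, 20, 80):                    # number of eigenvectors kept
    idx = np.argsort(w)[::-1][:k]        # k dominant eigenpairs
    approx = V[:, idx] @ np.diag(w[idx]) @ V[:, idx].T
    errs.append(np.linalg.norm(eps_m - approx))

# accuracy improves monotonically with the truncation parameter k
assert errs[0] > errs[1] > errs[2]
```

In the method described above the dominant eigenvectors are obtained iteratively, precisely so that the full diagonalization used here for reference is never needed.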
A Natural Language for AdS/CFT Correlators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fitzpatrick, A. Liam (Boston U.); Kaplan, Jared
2012-02-14
We provide dramatic evidence that 'Mellin space' is the natural home for correlation functions in CFTs with weakly coupled bulk duals. In Mellin space, CFT correlators have poles corresponding to an OPE decomposition into 'left' and 'right' sub-correlators, in direct analogy with the factorization channels of scattering amplitudes. In the regime where these correlators can be computed by tree level Witten diagrams in AdS, we derive an explicit formula for the residues of Mellin amplitudes at the corresponding factorization poles, and we use the conformal Casimir to show that these amplitudes obey algebraic finite difference equations. By analyzing the recursive structure of our factorization formula we obtain simple diagrammatic rules for the construction of Mellin amplitudes corresponding to tree-level Witten diagrams in any bulk scalar theory. We prove the diagrammatic rules using our finite difference equations. Finally, we show that our factorization formula and our diagrammatic rules morph into the flat space S-Matrix of the bulk theory, reproducing the usual Feynman rules, when we take the flat space limit of AdS/CFT. Throughout we emphasize a deep analogy with the properties of flat space scattering amplitudes in momentum space, which suggests that the Mellin amplitude may provide a holographic definition of the flat space S-Matrix.
Iterative image-domain decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, Tianye; Dong, Xue; Petrongolo, Michael
2014-04-15
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom.
The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing the edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
Numerical simulations of incompressible laminar flows using viscous-inviscid interaction procedures
NASA Astrophysics Data System (ADS)
Shatalov, Alexander V.
The present method is based on Helmholtz velocity decomposition where velocity is written as a sum of irrotational (gradient of a potential) and rotational (correction due to vorticity) components. Substitution of the velocity decomposition into the continuity equation yields an equation for the potential, while substitution into the momentum equations yields equations for the velocity corrections. A continuation approach is used to relate the pressure to the gradient of the potential through a modified Bernoulli's law, which allows the elimination of the pressure variable from the momentum equations. The present work considers steady and unsteady two-dimensional incompressible flows over an infinite cylinder and NACA 0012 airfoil shape. The numerical results are compared against standard methods (stream function-vorticity and SMAC methods) and data available in literature. The results demonstrate that the proposed formulation leads to a good approximation with some possible benefits compared to the available formulations. The method is not restricted to two-dimensional flows and can be used for viscous-inviscid domain decomposition calculations.
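A minimal sketch of the Helmholtz split itself, on a periodic 2-D synthetic field via FFTs (not the thesis' viscous-inviscid solver): the gradient part is recovered from a potential satisfying a Poisson equation for the divergence, and the remaining correction is divergence-free by construction:

```python
import numpy as np

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
u = np.cos(X) + np.sin(Y)                   # synthetic velocity components
v = np.sin(X) * np.cos(Y)

kx = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers on [0, 2*pi)
KX, KY = np.meshgrid(kx, kx, indexing='ij')
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                              # avoid division by zero (mean mode)

uh, vh = np.fft.fft2(u), np.fft.fft2(v)
divh = 1j * KX * uh + 1j * KY * vh          # spectral divergence
phih = -divh / K2                           # solve lap(phi) = div spectrally

up = np.real(np.fft.ifft2(1j * KX * phih))  # irrotational part: grad(phi)
vp = np.real(np.fft.ifft2(1j * KY * phih))
wu, wv = u - up, v - vp                     # rotational correction

# the correction carries all the vorticity and no divergence
wdiv = np.real(np.fft.ifft2(1j * KX * np.fft.fft2(wu)
                            + 1j * KY * np.fft.fft2(wv)))
assert np.max(np.abs(wdiv)) < 1e-10
```

In the thesis' setting the potential instead satisfies the continuity equation and the correction is driven by the momentum equations, but the division of labor between the two parts is the same.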
A general solution strategy of modified power method for higher mode solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung, E-mail: deokjung@unist.ac.kr
2016-01-15
A general solution strategy of the modified power iteration method for calculating higher eigenmodes has been developed and applied in continuous energy Monte Carlo simulation. The new approach adopts four features: 1) the eigen decomposition of the transfer matrix, 2) weight cancellation for higher modes, 3) population control with higher mode weights, and 4) stabilization of statistical fluctuations using multi-cycle accumulations. The numerical tests of neutron transport eigenvalue problems successfully demonstrate that the new strategy can significantly accelerate the fission source convergence with stable convergence behavior while obtaining multiple higher eigenmodes at the same time. The advantages of the new strategy can be summarized as 1) the replacement of the cumbersome solution step of high order polynomial equations required by Booth's original method with the simple matrix eigen decomposition, 2) faster fission source convergence in inactive cycles, 3) more stable behaviors in both inactive and active cycles, and 4) smaller variances in active cycles. Advantages 3 and 4 can be attributed to the lower sensitivity of the new strategy to statistical fluctuations due to the multi-cycle accumulations. The application of the modified power method to continuous energy Monte Carlo simulation and the higher eigenmodes up to 4th order are reported for the first time in this paper. Highlights: •Modified power method is applied to continuous energy Monte Carlo simulation. •Transfer matrix is introduced to generalize the modified power method. •All mode based population control is applied to get the higher eigenmodes. •Statistic fluctuation can be greatly reduced using accumulated tally results. •Fission source convergence is accelerated with higher mode solutions.
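The eigen-decomposition idea can be sketched with plain subspace iteration on a small matrix of known spectrum (a deterministic stand-in, not a transport operator): the eigen decomposition of the small projected matrix yields several higher modes at once, which is the step that replaces Booth's high-order polynomial solve:

```python
import numpy as np

# build a symmetric operator with a known spectrum
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
M = Q @ np.diag([9.0, 7.0, 5.0, 3.0, 2.0, 1.0]) @ Q.T

X = rng.standard_normal((6, 3))      # three simultaneous "source" vectors
for _ in range(100):
    X, _ = np.linalg.qr(M @ X)       # block power iteration + normalization

T = X.T @ M @ X                      # small transfer (projected) matrix
modes = np.sort(np.linalg.eigvalsh(T))[::-1]   # its eigen decomposition

# the three dominant eigenmodes are recovered simultaneously
assert np.allclose(modes, [9.0, 7.0, 5.0])
```

The Monte Carlo version estimates the transfer matrix statistically from cycle-to-cycle source vectors, which is where weight cancellation and multi-cycle accumulation come in.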
Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.
Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong
2015-11-01
In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration and therefore suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function and obtain a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
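The factor-matrix/mode-n rank relation the model relies on can be checked numerically on a tiny synthetic CP tensor (toy sizes, not the paper's datasets): each mode-n unfolding of a rank-r CP tensor has matrix rank at most r:

```python
import numpy as np

rng = np.random.default_rng(6)
r = 2                                        # CP rank
A, B, C = (rng.standard_normal((dim, r)) for dim in (4, 5, 6))

# T = sum_k a_k (outer) b_k (outer) c_k
T = np.einsum('ik,jk,lk->ijl', A, B, C)

def unfold(T, mode):
    """Mode-n unfolding: mode-n fibers become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

for mode in range(3):
    assert np.linalg.matrix_rank(unfold(T, mode)) <= r
```

This is why penalizing the (much smaller) factor matrices can stand in for penalizing the trace norms of the large unfoldings.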
n + 1 formalism of f (Lovelock) gravity
NASA Astrophysics Data System (ADS)
Lachaume, Xavier
2018-06-01
In this note we perform the n + 1 decomposition, or Arnowitt–Deser–Misner (ADM) formulation, of f(Lovelock) gravity theory. The Hamiltonian form of Lovelock gravity has been known since the work of Teitelboim and Zanelli in 1987, but this result had not yet been extended to f(Lovelock) gravity. Besides, the field equations of f(Lovelock) gravity have recently been computed by Bueno et al, though without an ADM decomposition. We focus on the non-degenerate case, i.e. when the Hessian of f is invertible. Using the same Legendre transform as for f(R) theories, we can identify the partial derivatives of f as scalar fields, and consider the theory as a generalised scalar-tensor theory. We then derive the field equations, and project them along an n + 1 decomposition. We obtain an original system of constraint equations for f(Lovelock) gravity, as well as dynamical equations. We give explicit formulas for the f(Lovelock) case.
An asymptotic induced numerical method for the convection-diffusion-reaction equation
NASA Technical Reports Server (NTRS)
Scroggs, Jeffrey S.; Sorensen, Danny C.
1988-01-01
A parallel algorithm for the efficient solution of a time-dependent reaction-convection-diffusion equation with a small parameter on the diffusion term is presented. The method is based on a domain decomposition that is dictated by singular perturbation analysis. The analysis is used to determine regions where certain reduced equations may be solved in place of the full equation. Parallelism is evident at two levels. Domain decomposition provides parallelism at the highest level, and within each domain there is ample opportunity to exploit parallelism. Run-time results demonstrate the viability of the method.
Mode detection in turbofan inlets from near field sensor arrays.
Castres, Fabrice O; Joseph, Phillip F
2007-02-01
Knowledge of the modal content of the sound field radiated from a turbofan inlet is important for source characterization and for helping to determine noise generation mechanisms in the engine. An inverse technique for determining the mode amplitudes at the duct outlet is proposed using pressure measurements made in the near field. The radiated sound pressure from a duct is modeled by directivity patterns of cut-on modes in the near field using a model based on the Kirchhoff approximation for flanged ducts with no flow. The resulting system of equations is ill posed and it is shown that the presence of modes with eigenvalues close to a cutoff frequency results in a poorly conditioned directivity matrix. An analysis of the conditioning of this directivity matrix is carried out to assess the inversion robustness and accuracy. A physical interpretation of the singular value decomposition is given and allows us to understand the issues of ill conditioning as well as the detection performance of the radiated sound field by a given sensor array.
Intrasystem Analysis Program (IAP) code summaries
NASA Astrophysics Data System (ADS)
Dobmeier, J. J.; Drozd, A. L. S.; Surace, J. A.
1983-05-01
This report contains detailed descriptions and capabilities of the codes that comprise the Intrasystem Analysis Program. The four codes are: Intrasystem Electromagnetic Compatibility Analysis Program (IEMCAP), General Electromagnetic Model for the Analysis of Complex Systems (GEMACS), Nonlinear Circuit Analysis Program (NCAP), and Wire Coupling Prediction Models (WIRE). IEMCAP is used for computer-aided evaluation of electromagnetic compatibility (EMC) at all stages of an Air Force system's life cycle, and is applicable to aircraft, space/missile, and ground-based systems. GEMACS utilizes a Method of Moments (MOM) formalism with the Electric Field Integral Equation (EFIE) for the solution of electromagnetic radiation and scattering problems. The code employs both full matrix decomposition and Banded Matrix Iteration solution techniques and is expressly designed for large problems. NCAP is a circuit analysis code which uses the Volterra approach to solve for the transfer functions and node voltages of weakly nonlinear circuits. The WIRE programs deal with the application of multiconductor transmission line theory to the prediction of cable coupling for specific classes of problems.
Multi-color incomplete Cholesky conjugate gradient methods for vector computers. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Poole, E. L.
1986-01-01
In this research, we are concerned with the solution on vector computers of linear systems of equations, Ax = b, where A is a large, sparse, symmetric positive definite matrix. We solve the system using an iterative method, the incomplete Cholesky conjugate gradient method (ICCG). We apply a multi-color strategy to obtain p-color matrices for which a block-oriented ICCG method is implemented on the CYBER 205. (A p-colored matrix is a matrix which can be partitioned into a p×p block matrix where the diagonal blocks are diagonal matrices.) This algorithm, which is based on a no-fill strategy, achieves O(N/p)-length vector operations in both the decomposition of A and in the forward and back solves necessary at each iteration of the method. We discuss the natural ordering of the unknowns as an ordering that minimizes the number of diagonals in the matrix and define multi-color orderings in terms of disjoint sets of the unknowns. We give necessary and sufficient conditions to determine which multi-color orderings of the unknowns correspond to p-color matrices. A performance model is given which is used both to predict execution time for ICCG methods and also to compare an ICCG method to conjugate gradient without preconditioning or to another ICCG method. Results are given from runs on the CYBER 205 at NASA's Langley Research Center for four model problems.
Polystyrene foam products equation of state as a function of porosity and fill gas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mulford, Roberta N; Swift, Damian C
2009-01-01
An accurate EOS for polystyrene foam is necessary for analysis of numerous experiments in shock compression, inertial confinement fusion, and astrophysics. Plastic to gas ratios vary between various samples of foam, according to the density and cell-size of the foam. A matrix of compositions has been investigated, allowing prediction of foam response as a function of the plastic-to-air ratio. The EOS code CHEETAH allows participation of the air in the decomposition reaction of the foam. Differences between air-filled, Ar-blown, and CO{sub 2}-blown foams are investigated, to estimate the importance of allowing air to react with products of polystyrene decomposition. O{sub 2}-blown foams are included in some comparisons, to amplify any consequences of reaction with oxygen in air. He-blown foams are included in some comparisons, to provide an extremum of density. Product pressures are slightly higher for oxygen-containing fill gases than for non-oxygen-containing fill gases. Examination of product species indicates that CO{sub 2} decomposes at high temperatures.
NASA Astrophysics Data System (ADS)
Chen, Zhen; Chan, Tommy H. T.
2017-08-01
This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem reduces to solving a linear algebraic equation of the form Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement error, commonly referred to as "noise". Because the inverse problem is ill posed, the identified force is sensitive to the noise e. The proposed truncated generalized singular value decomposition (TGSVD) method aims at obtaining an acceptable solution that is less sensitive to such perturbations. The illustrated results show that the TGSVD has many advantages, such as higher precision, better adaptability, and noise immunity, compared with the TDM. In addition, choosing a proper regularization matrix L and truncation parameter k is very useful for improving identification accuracy and coping with the ill-posedness when the method is used to identify moving forces on a bridge.
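The ill-posedness and truncation idea can be sketched with plain truncated SVD (the special case of TGSVD with L equal to the identity) on a synthetic ill-conditioned system; the matrix, spectrum, and noise level below are invented, not a bridge-response model:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
W, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(10.0 ** np.linspace(0, -10, n)) @ W.T  # rapidly decaying spectrum
x_true = np.ones(n)
b = A @ x_true + 1e-6 * rng.standard_normal(n)          # noisy right-hand side e

def tsvd_solve(A, b, k):
    """Keep only the k largest singular triplets (truncation parameter k)."""
    Us, s, Vt = np.linalg.svd(A)
    return Vt[:k].T @ (Us[:, :k].T @ b / s[:k])

x_naive = np.linalg.solve(A, b)   # noise blown up by the tiny singular values
x_tsvd = tsvd_solve(A, b, k=8)
assert np.linalg.norm(x_tsvd - x_true) < np.linalg.norm(x_naive - x_true)
```

A nontrivial regularization matrix L generalizes this by truncating in the generalized SVD basis of the pair (A, L), which is what lets the choice of L shape the recovered force history.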
Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar
Sen, Satyabrata
2015-08-04
We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of the clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum on the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite matrix and a diagonal matrix. The solution of the RMP is obtained by applying the trace minimization technique and the singular value decomposition with a matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response-vector that has a sparse support on the spatio-temporal plane. We use convex relaxation based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performance of the proposed STAP approaches with respect to both ideal and practical scenarios, involving Doppler-ambiguous clutter ridges and spatial and temporal decorrelation effects. As a result, the low-rank matrix decomposition based solution requires secondary measurements as many as twice the clutter rank to attain a near-ideal STAP performance, whereas the spatio-temporal sparsity based approach needs a considerably smaller number of secondary data.
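The first approach's low-rank-plus-diagonal split can be sketched with a single singular-value shrinkage step on a toy covariance (invented dimensions, rank, and threshold; the paper's RMP solver iterates this together with trace minimization, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(4)
n, r = 20, 3
C = rng.standard_normal((n, r))
R = C @ C.T + 0.1 * np.eye(n)          # low-rank clutter + white-noise diagonal

U, s, Vt = np.linalg.svd(R)
tau = 0.5                              # assumed shrinkage threshold
s_shrunk = np.maximum(s - tau, 0.0)    # soft-threshold the singular values
L = U @ np.diag(s_shrunk) @ Vt         # low-rank (clutter) estimate
D = np.diag(np.diag(R - L))            # residual assigned to the diagonal part

# shrinkage recovers the clutter rank from the sample covariance
assert np.linalg.matrix_rank(L) == r
```

The threshold plays the same role as the trace penalty weight: too small and noise leaks into the clutter subspace, too large and clutter power is lost.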
A fast new algorithm for a robot neurocontroller using inverse QR decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, A.S.; Khemaissia, S.
2000-01-01
A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The INVQR and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array, and its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by QR decomposition approaches.
Bada, J.L.; Shou, M.-Y.; Man, E.H.; Schroeder, R.A.
1978-01-01
The diagenesis of the hydroxy amino acids serine and threonine in foraminiferal tests has been investigated. The decomposition pathways of these amino acids are complex; the principal reactions appear to be dehydration, aldol cleavage and decarboxylation. Stereochemical studies indicate that the α-amino-n-butyric acid (ABA) detected in foraminiferal tests is the end product of the threonine dehydration pathway. Decomposition of serine and threonine in foraminiferal tests from two well-dated Caribbean deep-sea cores, P6304-8 and -9, has been found to follow irreversible first-order kinetics. Three empirical equations were derived for the disappearance of serine and threonine and the appearance of ABA. These equations can be used as a new geochronological method for dating foraminiferal tests from other deep-sea sediments. Preliminary results suggest that ages deduced from the ABA kinetics equation are the most reliable, because "species effect" and contamination problems are not important for this nonbiological amino acid. Because of the variable serine and threonine contents of modern foraminiferal species, it is likely that accurate age estimates can be obtained from the serine and threonine decomposition equations only if a homogeneous species assemblage, or a single-species sample isolated from mixed natural assemblages, is used. © 1978.
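Numerically, an irreversible first-order decay law turns a measured remaining fraction into an age via t = -ln(C/C0)/k. The rate constant and measured fraction below are invented for illustration and are not values from the paper:

```python
import math

k_rate = 1.0e-6       # assumed first-order rate constant (per year), illustrative
thr_fraction = 0.60   # hypothetical threonine remaining relative to modern tests

# irreversible first-order kinetics: C(t) = C0 * exp(-k t)
age_years = -math.log(thr_fraction) / k_rate
assert 0.0 < age_years < 1.0e6
```

The paper's empirical equations play the role of calibrated versions of this relation, with the ABA appearance equation preferred because ABA is nonbiological.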
On the solutions of fractional order of evolution equations
NASA Astrophysics Data System (ADS)
Morales-Delgado, V. F.; Taneco-Hernández, M. A.; Gómez-Aguilar, J. F.
2017-01-01
In this paper we present a discussion of generalized Cauchy problems in a diffusion wave process, we consider bi-fractional-order evolution equations in the Riemann-Liouville, Liouville-Caputo, and Caputo-Fabrizio sense. Through Fourier transforms and Laplace transform we derive closed-form solutions to the Cauchy problems mentioned above. Similarly, we establish fundamental solutions. Finally, we give an application of the above results to the determination of decompositions of Dirac type for bi-fractional-order equations and write a formula for the moments for the fractional vibration of a beam equation. This type of decomposition allows us to speak of internal degrees of freedom in the vibration of a beam equation.
Cotrufo, M Francesca; Wallenstein, Matthew D; Boot, Claudia M; Denef, Karolien; Paul, Eldor
2013-04-01
The decomposition and transformation of above- and below-ground plant detritus (litter) is the main process by which soil organic matter (SOM) is formed. Yet, research on litter decay and SOM formation has been largely uncoupled, failing to provide an effective nexus between these two fundamental processes for carbon (C) and nitrogen (N) cycling and storage. We present the current understanding of the importance of microbial substrate use efficiency and C and N allocation in controlling the proportion of plant-derived C and N that is incorporated into SOM, and of soil matrix interactions in controlling SOM stabilization. We synthesize this understanding into the Microbial Efficiency-Matrix Stabilization (MEMS) framework. This framework leads to the hypothesis that labile plant constituents are the dominant source of microbial products, relative to input rates, because they are utilized more efficiently by microbes. These microbial products of decomposition would thus become the main precursors of stable SOM by promoting aggregation and through strong chemical bonding to the mineral soil matrix. © 2012 Blackwell Publishing Ltd.
NASA Technical Reports Server (NTRS)
Booth-Morrison, Christopher; Seidman, David N.; Noebe, Ronald D.
2009-01-01
The effects of a 2.0 at.% addition of Ta to a model Ni-10.0Al-8.5Cr (at.%) superalloy aged at 1073 K are assessed using scanning electron microscopy and atom-probe tomography. The gamma'(L1{sub 2})-precipitate morphology that develops as a result of gamma(fcc)-matrix phase decomposition is found to evolve from a bimodal distribution of spheroidal precipitates to {001}-faceted cuboids and parallelepipeds aligned along the elastically soft <001>-type directions. The phase compositions and the widths of the gamma'-precipitate/gamma-matrix heterophase interfaces evolve temporally as the Ni-Al-Cr-Ta alloy undergoes quasi-stationary state coarsening after 1 h of aging. Tantalum is observed to partition preferentially to the gamma'-precipitate phase, and suppresses the mobility of Ni in the gamma-matrix sufficiently to cause an accumulation of Ni on the gamma-matrix side of the gamma'/gamma interface. Additionally, computational modeling, employing Thermo-Calc, Dictra and PrecipiCalc, is employed to elucidate the kinetic pathways that lead to phase decomposition in this concentrated Ni-Al-Cr-Ta alloy.
NASA Technical Reports Server (NTRS)
Chew, W. C.; Song, J. M.; Lu, C. C.; Weedon, W. H.
1995-01-01
In the first phase of our work, we have concentrated on laying the foundation to develop fast algorithms, including the use of recursive structure like the recursive aggregate interaction matrix algorithm (RAIMA), the nested equivalence principle algorithm (NEPAL), the ray-propagation fast multipole algorithm (RPFMA), and the multi-level fast multipole algorithm (MLFMA). We have also investigated the use of curvilinear patches to build a basic method of moments code where these acceleration techniques can be used later. In the second phase, which is mainly reported on here, we have concentrated on implementing three-dimensional NEPAL on a massively parallel machine, the Connection Machine CM-5, and have been able to obtain some 3D scattering results. In order to understand the parallelization of codes on the Connection Machine, we have also studied the parallelization of 3D finite-difference time-domain (FDTD) code with PML material absorbing boundary condition (ABC). We found that simple algorithms like the FDTD with material ABC can be parallelized very well allowing us to solve within a minute a problem of over a million nodes. In addition, we have studied the use of the fast multipole method and the ray-propagation fast multipole algorithm to expedite matrix-vector multiplication in a conjugate-gradient solution to integral equations of scattering. We find that these methods are faster than LU decomposition for one incident angle, but are slower than LU decomposition when many incident angles are needed as in the monostatic RCS calculations.
Yang, Chifu; Zhao, Jinsong; Li, Liyi; Agrawal, Sunil K
2018-01-01
A robotic spine brace based on a parallel-actuated robotic system is a new device for the treatment and sensing of scoliosis; however, the strong dynamic coupling and anisotropy of parallel manipulators cause accuracy loss in rehabilitation force control, including large errors in both the direction and magnitude of the force. A novel active force control strategy named modal space force control is proposed to solve these problems. Considering the electrically driven system and the contact environment, the mathematical model of the spatial parallel manipulator is built. The strong dynamic coupling problem in the force field is described via experiments, as is the anisotropy of the workspace of parallel manipulators. The effects of dynamic coupling on control design and performance are discussed, and the influence of anisotropy on accuracy is also addressed. With the mass/inertia matrix and stiffness matrix of the parallel manipulator, a modal matrix can be calculated by eigenvalue decomposition. Making use of the orthogonality of the modal matrix with the mass matrix, the strongly coupled dynamic equations expressed in the work space or joint space of the parallel manipulator may be transformed into decoupled equations formulated in modal space. By this property, each force control channel is independent of the others in the modal space; we therefore propose the modal space force control concept, in which the force controller is designed in modal space. A modal space active force control is designed and implemented with only a simple PID controller, employed as an example control method to show the differences, uniqueness, and benefits of modal space force control.
Simulation and experimental results show that the proposed modal space force control concept can effectively overcome the effects of the strong dynamic coupling and anisotropy in the physical space; modal space force control is thus a very useful control framework, performing better than the current joint space and work space control. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
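The decoupling step at the heart of modal space control can be sketched on a toy 3-DOF system (assumed mass and stiffness values, not the brace's dynamics): an eigen decomposition after mass whitening yields a modal matrix that diagonalizes both M and K, so each modal force channel can be controlled independently:

```python
import numpy as np

M = np.diag([2.0, 1.0, 3.0])                       # assumed mass/inertia matrix
K = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  2.0]])                 # assumed stiffness matrix

# symmetric whitening: eigenproblem of M^{-1/2} K M^{-1/2}
Mi = np.diag(1.0 / np.sqrt(np.diag(M)))
w2, P = np.linalg.eigh(Mi @ K @ Mi)                # squared modal frequencies
Phi = Mi @ P                                       # modal matrix

# orthogonality: Phi^T M Phi = I and Phi^T K Phi = diag(w2),
# i.e. the coupled equations become independent modal channels
assert np.allclose(Phi.T @ M @ Phi, np.eye(3), atol=1e-10)
assert np.allclose(Phi.T @ K @ Phi, np.diag(w2), atol=1e-10)
```

A per-channel controller (such as the PID used in the paper) then acts on each decoupled modal coordinate without cross-coupling from the others.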
Krylov subspace iterative methods for boundary element method based near-field acoustic holography.
Valdivia, Nicolas; Williams, Earl G
2005-02-01
The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resulting matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence," i.e., the optimal regularized solution is obtained after a few iterations. If the iteration is not stopped, the method converges to a solution that is generally corrupted entirely by the measurement errors. For these methods the number of iterations plays the role of the regularization parameter. We focus on the regularizing properties of Krylov subspace methods such as conjugate gradients, least squares QR (LSQR), and the recently proposed hybrid method. A discussion and comparison of the available stopping rules are included. A vibrating plate is considered as an example to validate our results.
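Semi-convergence can be illustrated on a synthetic ill-posed system; the matrix, true solution, and noise level below are arbitrary stand-ins for the holography problem:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n = 20
# Severely ill-conditioned matrix built from a prescribed SVD.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -8, n)
A = U @ np.diag(s) @ V.T

# True solution on the leading right singular vectors, noisy data.
x_true = V[:, :3] @ np.ones(3)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

# A few LSQR iterations act as regularization (semi-convergence)...
x_early = lsqr(A, b, iter_lim=5)[0]
# ...whereas iterating to convergence fits the noise.
x_late = lsqr(A, b, atol=0, btol=0, conlim=0, iter_lim=5000)[0]

err_early = np.linalg.norm(x_early - x_true)
err_late = np.linalg.norm(x_late - x_true)
assert err_early < err_late   # stopping early gives the better solution
```

The iteration count plays exactly the role of a regularization parameter here: too few iterations under-fit, too many amplify the noise through the small singular values.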
Matrix with Prescribed Eigenvectors
ERIC Educational Resources Information Center
Ahmad, Faiz
2011-01-01
It is a routine matter for undergraduates to find eigenvalues and eigenvectors of a given matrix. But the converse problem of finding a matrix with prescribed eigenvalues and eigenvectors is rarely discussed in elementary texts on linear algebra. This problem is related to the "spectral" decomposition of a matrix and has important technical…
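The converse construction the abstract alludes to is a one-liner via the spectral decomposition A = V diag(lambda) V^(-1); the eigenpairs below are arbitrary examples:

```python
import numpy as np

# Prescribed eigenvalues and linearly independent eigenvectors (arbitrary).
eigvals = np.array([2.0, -1.0, 3.0])
V = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])   # columns are the desired eigenvectors

# Spectral synthesis: A = V diag(lambda) V^{-1}.
A = V @ np.diag(eigvals) @ np.linalg.inv(V)

# Each prescribed pair is indeed an eigenpair of A.
for lam, v in zip(eigvals, V.T):
    assert np.allclose(A @ v, lam * v)
```

The construction requires only that the prescribed eigenvectors be linearly independent, so that V is invertible.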
Data-adaptive harmonic spectra and multilayer Stuart-Landau models
NASA Astrophysics Data System (ADS)
Chekroun, Mickaël D.; Kondrashov, Dmitri
2017-09-01
Harmonic decompositions of multivariate time series are considered for which we adopt an integral operator approach with periodic semigroup kernels. Spectral decomposition theorems are derived that cover the important cases of two-time statistics drawn from a mixing invariant measure. The corresponding eigenvalues can be grouped per Fourier frequency and are actually given, at each frequency, as the singular values of a cross-spectral matrix depending on the data. These eigenvalues obey, furthermore, a variational principle that allows us to define naturally a multidimensional power spectrum. The eigenmodes, as far as they are concerned, exhibit a data-adaptive character manifested in their phase which allows us in turn to define a multidimensional phase spectrum. The resulting data-adaptive harmonic (DAH) modes allow for reducing the data-driven modeling effort to elemental models stacked per frequency, only coupled at different frequencies by the same noise realization. In particular, the DAH decomposition extracts time-dependent coefficients stacked by Fourier frequency which can be efficiently modeled—provided the decay of temporal correlations is sufficiently well-resolved—within a class of multilayer stochastic models (MSMs) tailored here on stochastic Stuart-Landau oscillators. Applications to the Lorenz 96 model and to a stochastic heat equation driven by a space-time white noise are considered. In both cases, the DAH decomposition allows for an extraction of spatio-temporal modes revealing key features of the dynamics in the embedded phase space. The multilayer Stuart-Landau models (MSLMs) are shown to successfully model the typical patterns of the corresponding time-evolving fields, as well as their statistics of occurrence.
Novel Calibration Algorithm for a Three-Axis Strapdown Magnetometer
Liu, Yan Xia; Li, Xi Sheng; Zhang, Xiao Juan; Feng, Yi Bo
2014-01-01
A complete error calibration model with 12 independent parameters is established by analyzing the three-axis magnetometer error mechanism. The said model conforms to an ellipsoid restriction, the parameters of the ellipsoid equation are estimated, and the ellipsoid coefficient matrix is derived. However, the calibration matrix cannot be determined completely, as there are fewer ellipsoid parameters than calibration model parameters. Mathematically, the calibration matrix derived from the ellipsoid coefficient matrix by a different matrix decomposition method is not unique, and there exists an unknown rotation matrix R between them. This paper puts forward a constant intersection angle method (angles between the geomagnetic field and gravitational field are fixed) to estimate R. The Tikhonov method is adopted to solve the problem that rounding errors or other errors may seriously affect the calculation results of R when the condition number of the matrix is very large. The geomagnetic field vector and heading error are further corrected by R. The constant intersection angle method is convenient and practical, as it is free from any additional calibration procedure or coordinate transformation. In addition, the simulation experiment indicates that the heading error declines from ±1° calibrated by classical ellipsoid fitting to ±0.2° calibrated by a constant intersection angle method, and the signal-to-noise ratio is 50 dB. The actual experiment exhibits that the heading error is further corrected from ±0.8° calibrated by the classical ellipsoid fitting to ±0.3° calibrated by a constant intersection angle method. PMID:24831110
Suseela, Vidya; Tharayil, Nishanth
2018-04-01
Decomposition of plant litter is a fundamental ecosystem process that can act as a feedback to climate change by simultaneously influencing both the productivity of ecosystems and the flux of carbon dioxide from the soil. The influence of climate on decomposition from a postsenescence perspective is relatively well known; in particular, climate is known to regulate the rate of litter decomposition via its direct influence on the reaction kinetics and microbial physiology on processes downstream of tissue senescence. Climate can alter plant metabolism during the formative stage of tissues and could shape the final chemical composition of plant litter that is available for decomposition, and thus indirectly influence decomposition; however, these indirect effects are relatively poorly understood. Climatic stress disrupts cellular homeostasis in plants and results in the reprogramming of primary and secondary metabolic pathways, which leads to changes in the quantity, composition, and organization of small molecules and recalcitrant heteropolymers, including lignins, tannins, suberins, and cuticle within the plant tissue matrix. Furthermore, by regulating metabolism during tissue senescence, climate influences the resorption of nutrients from senescing tissues. Thus, the final chemical composition of plant litter that forms the substrate of decomposition is a combined product of presenescence physiological processes through the production and resorption of metabolites. The changes in quantity, composition, and localization of the molecular construct of the litter could enhance or hinder tissue decomposition and soil nutrient cycling by altering the recalcitrance of the lignocellulose matrix, the composition of microbial communities, and the activity of microbial exo-enzymes via various complexation reactions. Also, the climate-induced changes in the molecular composition of litter could differentially influence litter decomposition and soil nutrient cycling. 
Compared with temperate ecosystems, the indirect effects of climate on litter decomposition in the tropics are not well understood, which underscores the need to conduct additional studies in tropical biomes. We also emphasize the need to focus on how climatic stress affects the root chemistry as roots contribute significantly to biogeochemical cycling, and on utilizing more robust analytical approaches to capture the molecular composition of tissue matrix that fuel microbial metabolism. © 2017 John Wiley & Sons Ltd.
Fast divide-and-conquer algorithm for evaluating polarization in classical force fields
NASA Astrophysics Data System (ADS)
Nocito, Dominique; Beran, Gregory J. O.
2017-03-01
Evaluation of the self-consistent polarization energy forms a major computational bottleneck in polarizable force fields. In large systems, the linear polarization equations are typically solved iteratively with techniques based on Jacobi iterations (JI) or preconditioned conjugate gradients (PCG). Two new variants of JI are proposed here that exploit domain decomposition to accelerate the convergence of the induced dipoles. The first, divide-and-conquer JI (DC-JI), is a block Jacobi algorithm which solves the polarization equations within non-overlapping sub-clusters of atoms directly via Cholesky decomposition, and iterates to capture interactions between sub-clusters. The second, fuzzy DC-JI, achieves further acceleration by employing overlapping blocks. Fuzzy DC-JI is analogous to an additive Schwarz method, but with distance-based weighting when averaging the fuzzy dipoles from different blocks. Key to the success of these algorithms is the use of K-means clustering to identify natural atomic sub-clusters automatically for both algorithms and to determine the appropriate weights in fuzzy DC-JI. The algorithm employs knowledge of the 3-D spatial interactions to group important elements in the 2-D polarization matrix. When coupled with direct inversion in the iterative subspace (DIIS) extrapolation, fuzzy DC-JI/DIIS in particular converges in a comparable number of iterations as PCG, but with lower computational cost per iteration. In the end, the new algorithms demonstrated here accelerate the evaluation of the polarization energy by 2-3 fold compared to existing implementations of PCG or JI/DIIS.
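The block Jacobi idea behind DC-JI can be sketched as follows; the matrix, block partition, and sweep count are illustrative stand-ins (no K-means clustering, fuzzy overlap, or DIIS extrapolation here):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(1)
n = 12
# A diagonally dominant SPD system standing in for the polarization equations.
B = rng.standard_normal((n, n))
A = B @ B.T + 2 * n * np.eye(n)
b = rng.standard_normal(n)

# Non-overlapping blocks standing in for the atomic sub-clusters.
blocks = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
# Each diagonal block is factored once and solved directly (Cholesky).
factors = [cho_factor(A[np.ix_(idx, idx)]) for idx in blocks]

x = np.zeros(n)
for _ in range(200):                      # block Jacobi sweeps
    x_new = np.zeros(n)
    for idx, f in zip(blocks, factors):
        # Move the coupling to the other blocks to the right-hand side.
        r = b[idx] - A[idx] @ x + A[np.ix_(idx, idx)] @ x[idx]
        x_new[idx] = cho_solve(f, r)
    x = x_new

assert np.allclose(A @ x, b, atol=1e-8)
```

Interactions inside a block are solved exactly each sweep; only the inter-block coupling is iterated, which is what accelerates convergence relative to point Jacobi.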
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antón, Luis; Martí, José M.; Ibáñez, José M.
2010-05-01
We obtain renormalized sets of right and left eigenvectors of the flux vector Jacobians of the relativistic MHD equations, which are regular and span a complete basis in any physical state, including degenerate ones. The renormalization procedure relies on the characterization of the degeneracy types in terms of the normal and tangential components of the magnetic field to the wave front in the fluid rest frame. Proper expressions of the renormalized eigenvectors in conserved variables are obtained through the corresponding matrix transformations. Our work completes previous analyses that present different sets of right eigenvectors for non-degenerate and degenerate states, and can be seen as a relativistic generalization of earlier work performed in classical MHD. Based on the full wave decomposition (FWD) provided by the renormalized set of eigenvectors in conserved variables, we have also developed a linearized (Roe-type) Riemann solver. Extensive testing against one- and two-dimensional standard numerical problems allows us to conclude that our solver is very robust. When compared with a family of simpler solvers that avoid the knowledge of the full characteristic structure of the equations in the computation of the numerical fluxes, our solver turns out to be less diffusive than HLL and HLLC, and comparable in accuracy to the HLLD solver. The amount of operations needed by the FWD solver makes it less efficient computationally than those of the HLL family in one-dimensional problems. However, its relative efficiency increases in multidimensional simulations.
An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation
Gao, Kai; Fu, Shubin; Chung, Eric T.
2018-02-13
The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium's fine-scale heterogeneity and the source's frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.
Beyond Principal Component Analysis: A Trilinear Decomposition Model and Least Squares Estimation.
ERIC Educational Resources Information Center
Pham, Tuan Dinh; Mocks, Joachim
1992-01-01
Sufficient conditions are derived for the consistency and asymptotic normality of the least squares estimator of a trilinear decomposition model for multiway data analysis. The limiting covariance matrix is computed. (Author/SLD)
NASA Astrophysics Data System (ADS)
Zou, Chunrong; Li, Bin; Zhang, Changrui; Wang, Siqing; Xie, Zhengfang; Shao, Changwei
2016-02-01
The structural evolution of a silicon oxynitride fiber reinforced boron nitride matrix (Si-N-O_f/BN) wave-transparent composite at high temperatures was investigated. When heat treated at 1600 °C, the composite retained a favorable bending strength of 55.3 MPa while partially crystallizing to Si2N2O and h-BN from the as-received amorphous structure. The Si-N-O fibers still performed as effective reinforcements despite the presence of small pores caused by fiber decomposition. Upon heat treatment at 1800 °C, the Si-N-O fibers had already lost their reinforcing function, and a rough hollow microstructure formed within the fibers because of the accelerated decomposition. Further heating to 2000 °C led to the complete decomposition of the reinforcing fibers, and only h-BN particles survived. The crystallization and decomposition behaviors of the composite at high temperatures are discussed.
Niederegger, Senta; Schermer, Julia; Höfig, Juliane; Mall, Gita
2015-01-01
Estimating the time of death of buried human bodies is a very difficult task. Casper's rule, dating from 1860, is still widely used, which illustrates the lack of suitable methods. In this case study, excavations in an arbor revealed the crouching body of a human being, dressed only in boxer shorts and socks. Witnesses could not give a precise answer as to when the person in question was last seen alive; the available information opened a window of 2-6 weeks for the possible time of death. To determine the post mortem interval (PMI), an experiment using a pig carcass was conducted to set up a decomposition matrix. Fitting the autopsy findings of the victim into the decomposition matrix yielded a time of death estimate of 2-3 weeks. This time frame was later confirmed by a new witness. The authors feel confident that the widespread construction of decomposition matrices using pig carcasses can greatly increase experience and knowledge in PMI estimation of buried bodies and will eventually lead to applicable new methods. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Darboux transformation and explicit solutions for some (2+1)-dimensional equations
NASA Astrophysics Data System (ADS)
Wang, Yan; Shen, Lijuan; Du, Dianlou
2007-06-01
Three systems of (2+1)-dimensional soliton equations and their decompositions into (1+1)-dimensional soliton equations are proposed. These equations include KPI, CKP, and MKPI. With the help of the Darboux transformation of the (1+1)-dimensional equations, we obtain explicit solutions of the (2+1)-dimensional equations.
NASA Astrophysics Data System (ADS)
Xu, Xiankun; Li, Peiwen
2017-11-01
Fixman's work in 1974 and the follow-up studies developed a method that factorizes the inverse of the mass matrix into an arithmetic combination of three sparse matrices, one of which is positive definite and needs to be further factorized using the Cholesky decomposition or similar methods. When the molecule under study has a serial chain structure, this method achieves O(n) time complexity. However, for molecules with long branches, Cholesky decomposition of the corresponding positive definite matrix introduces massive fill-in due to its nonzero structure. Although several methods can be used to reduce the fill-in, according to our tests none of them strictly guarantees zero fill-in for all molecules, so O(n) time complexity cannot be obtained with these traditional methods. In this paper we present a new method that guarantees no fill-in in the Cholesky decomposition, developed from the correlations between the mass matrix and the geometric structure of molecules. As a result, inverting the mass matrix retains O(n) time complexity regardless of whether the molecular structure has long branches.
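The fill-in issue can be demonstrated on a small arrow-structured matrix, a toy stand-in for a branched molecule's positive definite matrix: eliminating the hub node first fills the factor, while eliminating it last leaves no fill-in.

```python
import numpy as np

n = 6
# "Branched" sparsity: node 0 is a hub connected to all others (arrow matrix).
A = 4.0 * np.eye(n)
A[0, 1:] = A[1:, 0] = 1.0

L_bad = np.linalg.cholesky(A)          # hub eliminated first: dense fill-in
perm = [1, 2, 3, 4, 5, 0]              # reorder so the hub is eliminated last
L_good = np.linalg.cholesky(A[np.ix_(perm, perm)])  # no fill-in

nnz = lambda M: int(np.count_nonzero(np.abs(M) > 1e-12))
assert nnz(L_good) == 11               # just the original nonzero pattern
assert nnz(L_bad) > nnz(L_good)        # fill-in from the bad ordering
```

Reordering helps here because the hub has a single "supernode"; for general branched topologies no classical ordering guarantees zero fill-in, which is the gap the paper's structure-aware method addresses.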
Cao, Buwen; Deng, Shuguang; Qin, Hua; Ding, Pingjian; Chen, Shaopeng; Li, Guanghui
2018-06-15
High-throughput technology has generated large-scale protein interaction data, which are crucial to our understanding of biological organisms. Many complex-identification algorithms have been developed to determine protein complexes. However, these methods are only suitable for dense protein interaction networks, because their capabilities decrease rapidly when applied to sparse protein-protein interaction (PPI) networks. In this study, based on penalized matrix decomposition (PMD), a novel method for the identification of protein complexes (PMDpc) was developed to detect protein complexes in the human protein interaction network. This method mainly consists of three steps. First, the adjacency matrix of the protein interaction network is normalized. Second, the normalized matrix is decomposed into three factor matrices. The PMDpc method can detect protein complexes in sparse PPI networks by imposing appropriate constraints on the factor matrices. Finally, the results of our method are compared with those of other methods on the human PPI network. Experimental results show that our method not only outperforms classical algorithms such as CFinder, ClusterONE, RRW, HC-PIN, and PCE-FR, but also achieves an ideal overall performance in terms of a composite score consisting of F-measure, accuracy (ACC), and the maximum matching ratio (MMR).
High performance computation of radiative transfer equation using the finite element method
NASA Astrophysics Data System (ADS)
Badri, M. A.; Jolivet, P.; Rousseau, B.; Favennec, Y.
2018-05-01
This article deals with an efficient strategy for numerically simulating radiative transfer phenomena using distributed computing. The finite element method alongside the discrete ordinate method is used for the spatio-angular discretization of the monochromatic steady-state radiative transfer equation in an anisotropically scattering medium. Two very different parallelization methods, angular and spatial decomposition, are presented. To do so, the finite element method is used in a vectorial way. A detailed comparison of scalability, performance, and efficiency on thousands of processors is established for two- and three-dimensional heterogeneous test cases. Timings show that both algorithms scale well when proper preconditioners are used. It is also observed that our angular decomposition scheme outperforms our domain decomposition method. Overall, we perform numerical simulations at scales that were previously unattainable by standard radiative transfer equation solvers.
NASA Astrophysics Data System (ADS)
Gao, Pengzhi; Wang, Meng; Chow, Joe H.; Ghiocel, Scott G.; Fardanesh, Bruce; Stefopoulos, George; Razanousky, Michael P.
2016-11-01
This paper presents a new framework of identifying a series of cyber data attacks on power system synchrophasor measurements. We focus on detecting "unobservable" cyber data attacks that cannot be detected by any existing method that purely relies on measurements received at one time instant. Leveraging the approximate low-rank property of phasor measurement unit (PMU) data, we formulate the identification problem of successive unobservable cyber attacks as a matrix decomposition problem of a low-rank matrix plus a transformed column-sparse matrix. We propose a convex-optimization-based method and provide its theoretical guarantee in the data identification. Numerical experiments on actual PMU data from the Central New York power system and synthetic data are conducted to verify the effectiveness of the proposed method.
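A simplified stand-in for the idea (not the paper's convex program): with approximately low-rank PMU data, attacked columns reveal themselves by large residuals outside the estimated signal subspace. All signals and attack values below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 100, 20                      # time samples x PMU channels (synthetic)
t = np.linspace(0, 1, T, endpoint=False)
# Rank-2 "PMU" data: two oscillatory modes mixed across channels.
modes = np.stack([np.sin(2 * np.pi * 3 * t), np.cos(2 * np.pi * 5 * t)])
M = modes.T @ rng.standard_normal((2, n))   # (T, n), exactly rank 2

attacked = [4, 11]                  # column-sparse attack on two channels
D = M.copy()
D[:, attacked] += 0.5 * rng.standard_normal((T, len(attacked)))

# Leading singular vectors of D approximate the signal subspace; attacked
# columns stand out by their out-of-subspace residual.
U = np.linalg.svd(D, full_matrices=False)[0]
P = U[:, :2] @ U[:, :2].T           # projector onto the rank-2 subspace
resid = np.linalg.norm(D - P @ D, axis=0)
flagged = set(np.argsort(resid)[-2:].tolist())
assert flagged == set(attacked)
```

The paper's convex low-rank-plus-column-sparse decomposition formalizes this separation with recovery guarantees; the SVD projection above only illustrates why the low-rank structure makes such attacks identifiable at all.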
Thermodynamic properties of water in confined environments: a Monte Carlo study
NASA Astrophysics Data System (ADS)
Gladovic, Martin; Bren, Urban; Urbic, Tomaž
2018-05-01
Monte Carlo simulations of Mercedes-Benz water in a crowded environment were performed. The simulated systems are representative of both composite, porous or sintered materials and living cells with typical matrix packings. We studied the influence of overall temperature as well as the density and size of matrix particles on water density, particle distributions, hydrogen bond formation and thermodynamic quantities. Interestingly, temperature and space occupancy of matrix exhibit a similar effect on water properties following the competition between the kinetic and the potential energy of the system, whereby temperature increases the kinetic and matrix packing decreases the potential contribution. A novel thermodynamic decomposition approach was applied to gain insight into individual contributions of different types of inter-particle interactions. This decomposition proved to be useful and in good agreement with the total thermodynamic quantities especially at higher temperatures and matrix packings, where higher-order potential-energy mixing terms lose their importance.
NASA Technical Reports Server (NTRS)
Rancourt, J. D.; Porta, G. M.; Moyer, E. S.; Madeleine, D. G.; Taylor, L. T.
1988-01-01
Polyimide-metal oxide (Co3O4 or CuO) composite films have been prepared via in situ thermal decomposition of cobalt (II) chloride or bis(trifluoroacetylacetonato)copper(II). A soluble polyimide (XU-218) and its corresponding prepolymer (polyamide acid) were individually employed as the reaction matrix. The resulting composites exhibited a greater metal oxide concentration at the air interface with polyamide acid as the reaction matrix. The water of imidization that is released during the concurrent polyamide acid cure and additive decomposition is believed to promote metal migration and oxide formation. In contrast, XU-218 doped with either HAuCl4.3H2O or AgNO3 yields surface gold or silver when thermolyzed (300 C).
Pyrolysis and Matrix-Isolation FTIR of Acetoin
NASA Astrophysics Data System (ADS)
Cole, Sarah; Ellis, Martha; Sowards, John; McCunn, Laura R.
2017-06-01
Acetoin, CH_3C(O)CH(OH)CH_3, is an additive used in foods and cigarettes as well as a common component of biomass pyrolysate during the production of biofuels, yet little is known about its thermal decomposition mechanism. In order to identify the thermal decomposition products of acetoin, a gas-phase mixture of approximately 0.3% acetoin in argon was subjected to pyrolysis in a resistively heated SiC microtubular reactor at 1100-1500 K. Matrix-isolation FTIR spectroscopy was used to identify the pyrolysis products. Many products were observed in the analysis of the spectra, including acetylene, propyne, ethylene, and vinyl alcohol. These results provide clues to the overall mechanism of thermal decomposition and are important for predicting emissions from many industrial and residential processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kempka, S.N.; Strickland, J.H.; Glass, M.W.
1995-04-01
A formulation to satisfy velocity boundary conditions for the vorticity form of the incompressible, viscous fluid momentum equations is presented. The tangential and normal components of the velocity boundary condition are satisfied simultaneously by creating vorticity adjacent to boundaries. The newly created vorticity is determined using a kinematical formulation which is a generalization of Helmholtz's decomposition of a vector field. Though it has not been generally recognized, these formulations resolve the over-specification issue associated with creating vorticity to satisfy velocity boundary conditions. The generalized decomposition has not been widely used, apparently due to the lack of a useful physical interpretation. An analysis is presented which shows that the generalized decomposition has a relatively simple physical interpretation which facilitates its numerical implementation. The implementation of the generalized decomposition is discussed in detail. As an example, the flow in a two-dimensional lid-driven cavity is simulated. The solution technique is based on a Lagrangian transport algorithm in the hydrocode ALEGRA. ALEGRA's Lagrangian transport algorithm has been modified to solve the vorticity transport equation and the generalized decomposition, thus providing a new, accurate method to simulate incompressible flows. This numerical implementation and the new boundary condition formulation allow vorticity-based formulations to be used in a wider range of engineering problems.
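The classical Helmholtz decomposition that the generalized formulation extends can be sketched spectrally on a periodic 2-D field; this is a textbook FFT projection, not the paper's boundary formulation:

```python
import numpy as np

# 2-D periodic grid.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# Velocity = curl-free part (grad phi) + divergence-free part (curl psi).
phi = np.sin(X) * np.cos(2 * Y)
psi = np.cos(3 * X) * np.sin(Y)
u1 = np.cos(X) * np.cos(2 * Y) + np.cos(3 * X) * np.cos(Y)           # phi_x + psi_y
u2 = -2 * np.sin(X) * np.sin(2 * Y) + 3 * np.sin(3 * X) * np.sin(Y)  # phi_y - psi_x

# Helmholtz projection in Fourier space: compressive part = k (k.u)/|k|^2.
kx = np.fft.fftfreq(n, d=1.0 / n)
KX, KY = np.meshgrid(kx, kx, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                          # avoid dividing by zero at the mean mode
U1, U2 = np.fft.fft2(u1), np.fft.fft2(u2)
div_hat = (KX * U1 + KY * U2) / K2
c1 = np.real(np.fft.ifft2(KX * div_hat))   # curl-free (compressive) part
c2 = np.real(np.fft.ifft2(KY * div_hat))
s1, s2 = u1 - c1, u2 - c2                  # divergence-free remainder

# The solenoidal remainder is divergence-free to spectral accuracy...
S1, S2 = np.fft.fft2(s1), np.fft.fft2(s2)
assert np.max(np.abs(KX * S1 + KY * S2)) < 1e-8
# ...and the compressive part recovers grad(phi) exactly.
assert np.allclose(c1, np.cos(X) * np.cos(2 * Y), atol=1e-10)
```

The generalized decomposition of the abstract plays the same splitting role, but on bounded domains where boundary-adjacent vorticity creation replaces the periodic Fourier projection.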
Turbulent fluid motion IV-averages, Reynolds decomposition, and the closure problem
NASA Technical Reports Server (NTRS)
Deissler, Robert G.
1992-01-01
Ensemble, time, and space averages as applied to turbulent quantities are discussed, and pertinent properties of the averages are obtained. Those properties, together with Reynolds decomposition, are used to derive the averaged equations of motion and the one- and two-point moment or correlation equations. The terms in the various equations are interpreted. The closure problem of the averaged equations is discussed, and possible closure schemes are considered. Those schemes usually require an input of supplemental information unless the averaged equations are closed by calculating their terms by a numerical solution of the original unaveraged equations. The law of the wall for velocities and temperatures, the velocity- and temperature-defect laws, and the logarithmic laws for velocities and temperatures are derived. Various notions of randomness and their relation to turbulence are considered in light of ergodic theory.
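The Reynolds decomposition and the origin of the closure problem can be shown numerically on a synthetic signal:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic "turbulent" signal: mean flow plus zero-mean fluctuation.
N = 10000
u = 5.0 + 0.8 * rng.standard_normal(N)

# Reynolds decomposition: u = <u> + u', with <u'> = 0 by construction.
u_mean = u.mean()
u_fluct = u - u_mean
assert abs(u_fluct.mean()) < 1e-12

# The closure problem in one line: averaging the product u*u leaves
# behind the unknown second moment <u'u'> (a Reynolds-stress-like term).
assert np.isclose((u * u).mean(), u_mean**2 + (u_fluct**2).mean())
```

Averaging the nonlinear term always produces a moment one order higher than the averaged equations themselves determine, which is why a closure scheme (or a solution of the unaveraged equations) must supply the missing information.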
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Hajarian, Masoud
2012-08-01
A matrix P is called symmetric orthogonal if P = P^T = P^(-1). A matrix X is said to be generalised bisymmetric with respect to P if X = X^T = PXP. It is obvious that any symmetric matrix is also a generalised bisymmetric matrix with respect to I (the identity matrix). By extending the idea of the Jacobi and Gauss-Seidel iterations, this article proposes two new iterative methods for computing, respectively, the generalised bisymmetric (containing the symmetric solution as a special case) and skew-symmetric solutions of the generalised Sylvester matrix equation (which includes the Sylvester and Lyapunov matrix equations as special cases) encountered in many systems and control applications. When the generalised Sylvester matrix equation has a unique generalised bisymmetric (skew-symmetric) solution, the first (second) iterative method converges to the generalised bisymmetric (skew-symmetric) solution of this matrix equation for any initial generalised bisymmetric (skew-symmetric) matrix. Finally, some numerical results are given to illustrate the effect of the theoretical results.
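The definitions can be checked numerically; the flip (anti-identity) matrix is a convenient example of a symmetric orthogonal P, and the projection used below is a standard symmetrization, not the paper's iterative method:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
# The "flip" matrix is symmetric orthogonal: P = P.T = P^{-1}.
P = np.fliplr(np.eye(n))
assert np.allclose(P, P.T) and np.allclose(P @ P, np.eye(n))

# Project an arbitrary matrix onto the generalised bisymmetric set
# X = X.T = P X P: symmetrize, then average with P X P.
Y = rng.standard_normal((n, n))
S = (Y + Y.T) / 2
X = (S + P @ S @ P) / 2
assert np.allclose(X, X.T)       # symmetric
assert np.allclose(X, P @ X @ P) # and P-bisymmetric
```

Such a projection is handy for generating valid initial iterates, since the convergence statement above requires the starting matrix itself to be generalised bisymmetric.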
Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.
Park, Jongin; Wi, Seok-Min; Lee, Jin S
2016-02-01
Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer, which minimizes the power of the beamformed output while keeping the response from the direction of interest constant. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L^3) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix into a scalar matrix σI, after which we obtain the apodization weights and the beamformed output without computing the matrix inverse. To do so, the QR decomposition algorithm is used, which can also be executed at low cost; the computational complexity is therefore reduced to O(L^2). In addition, our approach is mathematically equivalent to the conventional MV beamformer, and thus shows equivalent performance. The simulation and experimental results support the validity of our approach.
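An inverse-free route to the MV weights via QR factors can be sketched as follows; this is a simplified illustration of avoiding the explicit inverse (not the paper's σI transformation), and the covariance and steering vector below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
L = 8                                    # subarray size
# A Hermitian, diagonally loaded "spatial covariance" (synthetic).
G = rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L))
R = G @ G.conj().T + L * np.eye(L)
a = np.ones(L, dtype=complex)            # broadside steering vector

# Reference: MV weights via the explicit inverse, w = R^{-1}a / (a^H R^{-1} a).
Ri_a = np.linalg.inv(R) @ a
w_ref = Ri_a / (a.conj() @ Ri_a)

# Inverse-free route: factor R = Q R_, then solve R_ x = Q^H a.
Q, R_ = np.linalg.qr(R)
y = np.linalg.solve(R_, Q.conj().T @ a)  # back-substitution on the triangular factor
w_qr = y / (a.conj() @ y)

assert np.allclose(w_qr, w_ref)                 # same weights, no explicit inverse
assert np.isclose(w_qr.conj() @ a, 1.0)         # distortionless constraint
```

Replacing the explicit inversion by triangular solves is what makes the mathematically equivalent beamformer cheaper per subarray.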
A Tensor-Train accelerated solver for integral equations in complex geometries
NASA Astrophysics Data System (ADS)
Corona, Eduardo; Rahimian, Abtin; Zorin, Denis
2017-04-01
We present a framework using the Quantized Tensor Train (QTT) decomposition to accurately and efficiently solve volume and boundary integral equations in three dimensions. We describe how the QTT decomposition can be used as a hierarchical compression and inversion scheme for matrices arising from the discretization of integral equations. For a broad range of problems, computational and storage costs of the inversion scheme are extremely modest, O(log N), and once the inverse is computed, it can be applied in O(N log N). We analyze the QTT ranks for hierarchically low-rank matrices and discuss its relationship to commonly used hierarchical compression techniques such as FMM and HSS. We prove that the QTT ranks are bounded for translation-invariant systems and argue that this behavior extends to non-translation invariant volume and boundary integrals. For volume integrals, the QTT decomposition provides an efficient direct solver requiring significantly less memory compared to other fast direct solvers. We present results demonstrating the remarkable performance of the QTT-based solver when applied to both translation and non-translation invariant volume integrals in 3D. For boundary integral equations, we demonstrate that using a QTT decomposition to construct preconditioners for a Krylov subspace method leads to an efficient and robust solver with a small memory footprint. We test the QTT preconditioners in the iterative solution of an exterior elliptic boundary value problem (Laplace) formulated as a boundary integral equation in complex, multiply connected geometries.
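The QTT idea, quantizing a length-2^d vector into a binary tensor and compressing it by successive truncated SVDs, can be sketched in a few lines; the tiny ranks for smooth (here, exponential) data become visible immediately:

```python
import numpy as np

def qtt_cores(v, d, eps=1e-10):
    """Quantize a length-2**d vector into tensor-train (QTT) cores
    by successive truncated SVDs (a minimal sketch of TT-SVD)."""
    cores = []
    C = v.reshape(1, -1)
    r = 1
    for _ in range(d - 1):
        C = C.reshape(2 * r, -1)            # split off the next binary index
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        keep = max(1, int(np.sum(s > eps * s[0])))
        cores.append(U[:, :keep].reshape(r, 2, keep))
        C = s[:keep, None] * Vt[:keep]      # carry the remainder forward
        r = keep
    cores.append(C.reshape(r, 2, 1))
    return cores

def qtt_reconstruct(cores):
    M = cores[0].reshape(-1, cores[0].shape[-1])
    for G in cores[1:]:
        M = (M @ G.reshape(G.shape[0], -1)).reshape(-1, G.shape[-1])
    return M.ravel()

# A sampled exponential is separable at every binary split: all QTT ranks are 1.
d = 10
x = np.exp(0.01 * np.arange(2**d))
cores = qtt_cores(x, d)
ranks = [G.shape[-1] for G in cores]
assert max(ranks) == 1
assert np.allclose(qtt_reconstruct(cores), x)
```

Storage drops from 2^d numbers to O(d) core entries here, the same logarithmic scaling the paper exploits for compressing and inverting discretized integral operators.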
Parsimonious extreme learning machine using recursive orthogonal least squares.
Wang, Ning; Er, Meng Joo; Han, Min
2014-10-01
Novel constructive and destructive parsimonious extreme learning machines (CP- and DP-ELM) are proposed in this paper. By virtue of the proposed ELMs, parsimonious structure and excellent generalization of multi-input multi-output single hidden-layer feedforward networks (SLFNs) are obtained. The proposed ELMs are developed by innovative decomposition of the recursive orthogonal least squares procedure into sequential partial orthogonalization (SPO). The salient features of the proposed approaches are as follows: 1) Initial hidden nodes are randomly generated by the ELM methodology and recursively orthogonalized into an upper triangular matrix with dramatic reduction in matrix size; 2) the constructive SPO in the CP-ELM focuses on the partial matrix with the subcolumn of the selected regressor including nonzeros as the first column, while the destructive SPO in the DP-ELM operates on the partial matrix including elements determined by the removed regressor; 3) termination criteria for CP- and DP-ELM are simplified by the additional residual error reduction method; and 4) the output weights of the SLFN need not be solved during the model selection procedure and are instead derived from the final upper triangular equation by backward substitution. Both single- and multi-output real-world regression data sets are used to verify the effectiveness and superiority of the CP- and DP-ELM in terms of parsimonious architecture and generalization accuracy. Innovative applications to nonlinear time-series modeling demonstrate superior identification results.
An algorithm for solving an arbitrary triangular fully fuzzy Sylvester matrix equations
NASA Astrophysics Data System (ADS)
Daud, Wan Suhana Wan; Ahmad, Nazihah; Malkawi, Ghassan
2017-11-01
Sylvester matrix equations play a prominent role in various areas, including control theory. Since uncertainty can occur at any time, the Sylvester matrix equation has to be adapted to the fuzzy environment. Therefore, in this study, an algorithm for solving an arbitrary triangular fully fuzzy Sylvester matrix equation is constructed. The construction of the algorithm is based on the max-min arithmetic multiplication operation. Besides that, an associated arbitrary matrix equation is modified to obtain the final solution. Finally, some numerical examples are presented to illustrate the proposed algorithm.
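For reference, the crisp (non-fuzzy) counterpart of the problem, AX + XB = C, can be solved directly; a fuzzy algorithm like the one above ultimately reduces to crisp systems of this type for the cores and spreads. A short SciPy sketch with illustrative triangular matrices of our own choosing:

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Crisp Sylvester equation AX + XB = C, solved by the Bartels-Stewart
# algorithm; A and B triangular, echoing the paper's triangular setting.
A = np.array([[2.0, 0.0], [1.0, 3.0]])
B = np.array([[4.0, 1.0], [0.0, 5.0]])
C = np.array([[1.0, 2.0], [3.0, 4.0]])
X = solve_sylvester(A, B, C)
assert np.allclose(A @ X + X @ B, C)   # residual check
```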
ERIC Educational Resources Information Center
Man, Yiu-Kwong
2012-01-01
Partial fraction decomposition is a useful technique often taught at senior secondary or undergraduate levels to handle integrations, inverse Laplace transforms or linear ordinary differential equations, etc. In recent years, an improved Heaviside's approach to partial fraction decomposition was introduced and developed by the author. An important…
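Heaviside's cover-up idea can be demonstrated in a few lines of SymPy: the residue of each simple linear factor is found by "covering up" that factor and evaluating the rest of the expression at its root. A small illustrative example (the rational function is our own choice, not one from the article):

```python
from sympy import symbols, apart, together, simplify, Rational

x = symbols('x')
expr = (3*x + 5) / ((x - 1)*(x + 2))      # illustrative rational function

# Full decomposition via SymPy:
decomp = apart(expr, x)

# Heaviside cover-up: evaluate with the corresponding factor removed.
A = ((3*x + 5) / (x + 2)).subs(x, 1)      # cover up (x - 1): gives 8/3
B = ((3*x + 5) / (x - 1)).subs(x, -2)     # cover up (x + 2): gives 1/3
assert A == Rational(8, 3) and B == Rational(1, 3)
assert simplify(decomp - (A/(x - 1) + B/(x + 2))) == 0
assert simplify(together(decomp) - expr) == 0
```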
Structure and decomposition of the silver formate Ag(HCO₂)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puzan, Anna N., E-mail: anna_puzan@mail.ru; Baumer, Vyacheslav N.; Mateychenko, Pavel V.
The crystal structure of the silver formate Ag(HCO₂) has been determined (orthorhombic, sp. gr. Pccn, a=7.1199(5), b=10.3737(4), c=6.4701(3) Å, V=477.88(4) Å³, Z=8). The structure contains isolated formate ions and Ag₂²⁺ pairs which form layers in the (001) planes (the shortest Ag–Ag distance is 2.919 Å within a pair, and 3.421 and 3.716 Å between the nearest Ag atoms of adjacent pairs). Silver formate is an unstable compound that decomposes spontaneously over time. The decomposition was studied using Rietveld analysis of the powder diffraction patterns. It was concluded that the diffusion of Ag atoms leads to the formation of plate-like metal particles as nuclei in the (100) planes which settle parallel to the (001) planes of the silver formate matrix. - Highlights: • Silver formate Ag(HCO₂) was synthesized and characterized. • Layered packing of Ag–Ag pairs in the structure was found. • Decomposition of Ag(HCO₂) and formation of the metal phase were studied. • Rietveld-refined microstructural characteristics during decomposition reveal the space relationship between the matrix structure and the forming Ag phase.
Two-point correlators revisited: fast and slow scales in multifield models of inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghersi, José T. Gálvez; Frolov, Andrei V., E-mail: joseg@sfu.ca, E-mail: frolov@sfu.ca
2017-05-01
We study the structure of two-point correlators of the inflationary field fluctuations in order to improve the accuracy and efficiency of the existing methods to calculate primordial spectra. We present a description motivated by the separation of the fast and slow evolving components of the spectrum which is based on Cholesky decomposition of the field correlator matrix. Our purpose is to rewrite all the relevant equations of motion in terms of slowly varying quantities. This is important in order to consider the contribution from high-frequency modes to the spectrum without affecting computational performance. The slow-roll approximation is not required to reproduce the main distinctive features in the power spectrum for each specific model of inflation.
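The numerical virtue of a Cholesky factorization of a correlator matrix is easy to illustrate: writing C = LLᴴ and working with the factor L keeps C Hermitian positive definite by construction. A toy NumPy sketch (the matrix is synthetic, not an inflationary correlator):

```python
import numpy as np

# A toy Hermitian positive-definite "field correlator" matrix and its
# Cholesky factor. Evolving L instead of C preserves positive definiteness
# of C = L L^H exactly, which is the structural property the decomposition
# exploits when separating fast and slow scales.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
C = A @ A.conj().T + 3 * np.eye(3)       # Hermitian positive definite
L = np.linalg.cholesky(C)
assert np.allclose(L @ L.conj().T, C)    # exact reconstruction
assert np.allclose(np.tril(L), L)        # L is lower triangular
```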
Multiresolution image gathering and restoration
NASA Technical Reports Server (NTRS)
Fales, Carl L.; Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur
1992-01-01
In this paper we integrate multiresolution decomposition with image gathering and restoration. This integration leads to a Wiener-matrix filter that accounts for the aliasing, blurring, and noise in image gathering, together with the digital filtering and decimation in signal decomposition. Moreover, as implemented here, the Wiener-matrix filter completely suppresses the blurring and raster effects of the image-display device. We demonstrate that this filter can significantly improve the fidelity and visual quality produced by conventional image reconstruction. The extent of this improvement, in turn, depends on the design of the image-gathering device.
Covariant Conformal Decomposition of Einstein Equations
NASA Astrophysics Data System (ADS)
Gourgoulhon, E.; Novak, J.
It has been shown [1,2] that the usual 3+1 form of Einstein's equations may be ill-posed. This result has been previously observed in numerical simulations [3,4]. We present a 3+1 type formalism inspired by these works to decompose Einstein's equations. This decomposition is motivated by the aim of stable numerical implementation and resolution of the equations. We introduce the conformal 3-"metric" (scaled by the determinant of the usual 3-metric), which is a tensor density of weight -2/3. The Einstein equations are then derived in terms of this "metric", of the conformal extrinsic curvature and in terms of the associated derivative. We also introduce a flat 3-metric (the asymptotic metric for isolated systems) and the associated derivative. Finally, the generalized Dirac gauge (introduced by Smarr and York [5]) is used in this formalism and some examples of formulation of Einstein's equations are shown.
NASA Astrophysics Data System (ADS)
Alshaery, Aisha; Ebaid, Abdelhalim
2017-11-01
Kepler's equation is one of the fundamental equations in orbital mechanics. It is a transcendental equation in terms of the eccentric anomaly of a planet which orbits the Sun. Determining the position of a planet in its orbit around the Sun at a given time depends upon the solution of Kepler's equation, which we solve in this paper by the Adomian decomposition method (ADM). Several properties of the periodicity of the obtained approximate solutions have been proved in lemmas. Our calculations demonstrate a rapid convergence of the obtained approximate solutions, which are displayed in tables and graphs. It is also shown that only a few terms of the Adomian decomposition series are sufficient to achieve highly accurate numerical results for any number of revolutions of the Earth around the Sun, as a consequence of the periodicity property. Numerically, the four-term approximate solution coincides with the Bessel-Fourier series solution in the literature up to seven decimal places at some values of the time parameter, and nine decimal places at other values. Moreover, the absolute error approaches zero using the nine-term approximate Adomian solution. In addition, the approximate Adomian solutions for the eccentric anomaly have been used to show the convergence of the approximate radial distances of the Earth from the Sun for any number of revolutions. The minimal distance (perihelion) and maximal distance (aphelion) approach 147 million kilometers and 152.505 million kilometers, respectively, and these coincide with the well-known results in astronomical physics. Therefore, the Adomian decomposition method is validated as an effective tool to solve Kepler's equation for elliptical orbits.
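The ADM recursion for Kepler's equation E = M + e sin E can be sketched with SymPy: E₀ = M, and each correction E_{n+1} = e·A_n uses the n-th Adomian polynomial A_n of sin built from the partial sums. This is a generic ADM sketch with our own function and variable names, not the authors' code:

```python
import math
import sympy as sp

def adomian_kepler(M, e, terms=6):
    """Eccentric anomaly from Kepler's equation E = M + e*sin(E) by the
    Adomian decomposition method: E0 = M and E_{n+1} = e*A_n, where
    A_n = (1/n!) d^n/dlam^n sin(sum_k E_k lam^k) evaluated at lam = 0."""
    lam = sp.symbols('lambda')
    E = [sp.Float(M, 20)]
    for n in range(terms - 1):
        s = sum(E[k] * lam**k for k in range(len(E)))
        A_n = sp.diff(sp.sin(s), lam, n).subs(lam, 0) / sp.factorial(n)
        E.append(e * A_n)
    return float(sum(E))
```

For Earth's eccentricity (e ≈ 0.0167), a handful of terms already agrees with a Newton-iteration solution to many decimal places, consistent with the rapid convergence reported above.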
Wave-filter-based approach for generation of a quiet space in a rectangular cavity
NASA Astrophysics Data System (ADS)
Iwamoto, Hiroyuki; Tanaka, Nobuo; Sanada, Akira
2018-02-01
This paper is concerned with the generation of a quiet space in a rectangular cavity using active wave control methodology. It is the purpose of this paper to present the wave filtering method for a rectangular cavity using multiple microphones and its application to an adaptive feedforward control system. Firstly, the transfer matrix method is introduced for describing the wave dynamics of the sound field, and then feedforward control laws for eliminating transmitted waves are derived. Furthermore, some numerical simulations are conducted that show the best possible result of active wave control. This is followed by the derivation of the wave filtering equations, which indicate the structure of the wave filter. It is clarified that the wave filter consists of three portions: a modal group filter, a rearrangement filter and a wave decomposition filter. Next, from a numerical point of view, the accuracy of the wave decomposition filter, which is expressed as a function of frequency, is investigated using condition numbers. Finally, an experiment on the adaptive feedforward control system using the wave filter is carried out, demonstrating that a quiet space is generated in the target space by the proposed method.
NASA Astrophysics Data System (ADS)
Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar
2017-09-01
The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
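The contrast between the two routes can be sketched numerically: the covariance-based decomposition is ordinary PCA, while the dissimilarity route eigendecomposes the double-centered pairwise dissimilarity matrix, as in classical multidimensional scaling. For squared Euclidean dissimilarities the two coincide up to sign; differences appear only for non-Euclidean dissimilarity measures such as those used in the paper. An illustrative NumPy sketch on synthetic two-cluster data (all names and data are ours):

```python
import numpy as np

# Two tight clusters: covariance PCA works from total variance, while the
# dissimilarity route works directly with between-sample contrasts.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.1, (20, 2)) + [0, 0],
               rng.normal(0, 0.1, (20, 2)) + [0, 1]])

# Covariance route: eigenvalue-eigenvector analysis of the covariance.
Xc = X - X.mean(axis=0)
w, V = np.linalg.eigh(np.cov(Xc.T))
pca_scores = Xc @ V[:, ::-1]                  # descending eigenvalue order

# Dissimilarity route: D_ij = ||x_i - x_j||^2, double-centered to
# B = -J D J / 2, then eigendecomposed (classical MDS).
D = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
B = -J @ D @ J / 2
wb, Vb = np.linalg.eigh(B)
mds_scores = Vb[:, ::-1] * np.sqrt(np.maximum(wb[::-1], 0))
```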
Sotiriou, Georgios A.; Singh, Dilpreet; Zhang, Fang; Chalbot, Marie-Cecile G.; Spielman-Sun, Eleanor; Hoering, Lutz; Kavouras, Ilias G.; Lowry, Gregory V.; Wohlleben, Wendel; Demokritou, Philip
2015-01-01
Nano-enabled products (NEPs) are currently part of our daily life, prompting detailed investigation of potential nano-release across their life cycle. Particularly interesting is their end-of-life thermal decomposition scenario. Here, we examine the thermal decomposition of a widely used NEP, namely thermoplastic nanocomposites, and assess the properties of the byproducts (released aerosol and residual ash) and possible environmental health and safety implications. We focus on establishing a fundamental understanding of the effect of thermal decomposition parameters, such as polymer matrix, nanofiller properties and decomposition temperature, on the properties of byproducts using a recently developed lab-based experimental integrated platform. Our results indicate that the thermoplastic polymer matrix strongly influences the size and morphology of the released aerosol, while there was minimal but detectable nano-release, especially when inorganic nanofillers were used. The chemical composition of the released aerosol was found not to be strongly influenced by the presence of nanofiller, at least for the low, industry-relevant loadings assessed here. Furthermore, the morphology and composition of the residual ash were found to be strongly influenced by the presence of nanofiller. The findings presented here on thermal decomposition/incineration of NEPs raise important questions and concerns regarding the potential fate and transport of released engineered nanomaterials in environmental media and potential environmental health and safety implications. PMID:26642449
Numerical Technology for Large-Scale Computational Electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharpe, R; Champagne, N; White, D
The key bottleneck of implicit computational electromagnetics tools for large complex geometries is the solution of the resulting linear system of equations. The goal of this effort was to research and develop critical numerical technology that alleviates this bottleneck for large-scale computational electromagnetics (CEM). The mathematical operators and numerical formulations used in this arena of CEM yield linear equations that are complex valued, unstructured, and indefinite. Also, simultaneously applying multiple mathematical modeling formulations to different portions of a complex problem (hybrid formulations) results in a mixed structure linear system, further increasing the computational difficulty. Typically, these hybrid linear systems are solved using a direct solution method, which was acceptable for Cray-class machines but does not scale adequately for ASCI-class machines. Additionally, LLNL's previously existing linear solvers were not well suited for the linear systems that are created by hybrid implicit CEM codes. Hence, a new approach was required to make effective use of ASCI-class computing platforms and to enable the next generation design capabilities. Multiple approaches were investigated, including the latest sparse-direct methods developed by our ASCI collaborators. In addition, approaches that combine domain decomposition (or matrix partitioning) with general-purpose iterative methods and special purpose pre-conditioners were investigated. Special-purpose pre-conditioners that take advantage of the structure of the matrix were adapted and developed based on intimate knowledge of the matrix properties. Finally, new operator formulations were developed that radically improve the conditioning of the resulting linear systems, thus greatly reducing solution time. The goal was to enable the solution of CEM problems that are 10 to 100 times larger than our previous capability.
Many Masses on One Stroke: Economic Computation of Quark Propagators
NASA Astrophysics Data System (ADS)
Frommer, Andreas; Nöckel, Bertold; Güsken, Stephan; Lippert, Thomas; Schilling, Klaus
The computational effort in the calculation of Wilson fermion quark propagators in Lattice Quantum Chromodynamics can be considerably reduced by exploiting the Wilson fermion matrix structure in inversion algorithms based on the non-symmetric Lanczos process. We consider two such methods: QMR (quasi minimal residual) and BCG (biconjugate gradients). Based on the decomposition M/κ = 1/κ − D of the Wilson mass matrix, using QMR, one can carry out inversions on a whole trajectory of masses simultaneously, merely at the computational expense of a single propagator computation. In other words, one has to compute the propagator corresponding to the lightest mass only, while all the heavier masses are given for free, at the price of extra storage. Moreover, the symmetry γ5M = M†γ5 can be used to cut the computational effort in QMR and BCG by a factor of two. We show that both methods then become — in the critical regime of small quark masses — competitive with BiCGStab and significantly better than the standard MR method, with optimal relaxation factor, and CG as applied to the normal equations.
A Thermodynamically Consistent Approach to Phase-Separating Viscous Fluids
NASA Astrophysics Data System (ADS)
Anders, Denis; Weinberg, Kerstin
2018-04-01
The de-mixing properties of heterogeneous viscous fluids are determined by an interplay of diffusion, surface tension and a superposed velocity field. In this contribution a variational model of the decomposition, based on the Navier-Stokes equations for incompressible laminar flow and the extended Korteweg-Cahn-Hilliard equations, is formulated. An exemplary numerical simulation using C1-continuous finite elements demonstrates the capability of this model to compute phase decomposition and coarsening of the moving fluid.
Transformation matrices between non-linear and linear differential equations
NASA Technical Reports Server (NTRS)
Sartain, R. L.
1983-01-01
In the linearization of systems of non-linear differential equations, those systems which can be exactly transformed into the second-order linear differential equation Y'' - AY' - BY = 0, where Y, Y', and Y'' are n × 1 vectors and A and B are constant n × n matrices of real numbers, were considered. A 2n × 2n matrix was used to transform the above matrix equation into the first-order matrix equation X' = MX. Specifically, the matrix M and the conditions which will diagonalize or triangularize M were studied. Transformation matrices P and P⁻¹ were used to accomplish this diagonalization or triangularization and to return to the solution of the second-order matrix differential equation system from the first-order system.
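The reduction described above can be written down concretely: with X = [Y; Y'], the system Y'' - AY' - BY = 0 becomes X' = MX for the 2n × 2n block companion matrix M = [[0, I], [B, A]]. A NumPy sketch with illustrative A and B of our own choosing, checking that an eigenvector matrix P diagonalizes M:

```python
import numpy as np

# Companion form: X = [Y; Y'] turns Y'' - A Y' - B Y = 0 into X' = M X.
n = 2
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[2.0, 0.0], [0.0, 3.0]])
M = np.block([[np.zeros((n, n)), np.eye(n)],
              [B,               A]])

# If M has a full set of eigenvectors, the eigenvector matrix P and its
# inverse diagonalize it: P^-1 M P = diag(eigenvalues).
w, P = np.linalg.eig(M)
assert np.linalg.matrix_rank(P) == 2 * n            # full eigenbasis here
D = np.linalg.inv(P) @ M @ P
assert np.allclose(D, np.diag(w), atol=1e-8)
```

For this A and B the characteristic polynomial λ⁴ − 4λ² + 6 has four distinct complex roots, so M is diagonalizable; a defective M could only be brought to triangular form.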
Lorentz force electrical impedance tomography using magnetic field measurements
NASA Astrophysics Data System (ADS)
Zengin, Reyhan; Güneri Gençer, Nevzat
2016-08-01
In this study, magnetic field measurement technique is investigated to image the electrical conductivity properties of biological tissues using Lorentz forces. This technique is based on electrical current induction using ultrasound together with an applied static magnetic field. The magnetic field intensity generated due to induced currents is measured using two coil configurations, namely, a rectangular loop coil and a novel xy coil pair. A time-varying voltage is picked-up and recorded while the acoustic wave propagates along its path. The forward problem of this imaging modality is defined as calculation of the pick-up voltages due to a given acoustic excitation and known body properties. Firstly, the feasibility of the proposed technique is investigated analytically. The basic field equations governing the behaviour of time-varying electromagnetic fields are presented. Secondly, the general formulation of the partial differential equations for the scalar and magnetic vector potentials are derived. To investigate the feasibility of this technique, numerical studies are conducted using a finite element method based software. To sense the pick-up voltages a novel coil configuration (xy coil pairs) is proposed. Two-dimensional numerical geometry with a 16-element linear phased array (LPA) ultrasonic transducer (1 MHz) and a conductive body (breast fat) with five tumorous tissues is modeled. The static magnetic field is assumed to be 4 Tesla. To understand the performance of the imaging system, the sensitivity matrix is analyzed. The sensitivity matrix is obtained for two different locations of the LPA transducer with eleven steering angles from −25° to 25° at intervals of 5°. The characteristics of the imaging system are shown with the singular value decomposition (SVD) of the sensitivity matrix. The images are reconstructed with the truncated SVD algorithm. The signal-to-noise ratio in measurements is assumed 80 dB. 
Simulation studies based on the sensitivity matrix analysis reveal that perturbations of 5 mm × 5 mm size can be detected up to a 3.5 cm depth.
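Truncated-SVD reconstruction as used above is compact to state: invert only the k largest singular values of the sensitivity matrix, discarding the noise-amplifying small-singular-value components. A generic NumPy sketch, not tied to the paper's sensitivity matrix (names are ours):

```python
import numpy as np

def tsvd_solve(S, y, k):
    """Truncated-SVD reconstruction: pseudo-invert only the k largest
    singular values of the sensitivity matrix S, suppressing the
    noise-amplifying components associated with small singular values."""
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])
```

With k equal to the full rank this reduces to the ordinary least-squares solution; decreasing k trades resolution for noise robustness, which is the regularization role it plays in the imaging system above.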
Kronecker-Basis-Representation Based Tensor Sparsity and Its Applications to Tensor Recovery.
Xie, Qi; Zhao, Qian; Meng, Deyu; Xu, Zongben
2017-08-02
It is well known that the sparsity/low-rankness of a vector/matrix can be rationally measured by the number of nonzero entries (the $l_0$ norm) and the number of nonzero singular values (the rank), respectively. However, data from real applications are often generated by the interaction of multiple factors, which obviously cannot be sufficiently represented by a vector/matrix, while a high-order tensor is expected to provide a more faithful representation to deliver the intrinsic structure underlying such data ensembles. Unlike the vector/matrix case, constructing a rational high-order sparsity measure for a tensor is a relatively harder task. To this end, in this paper we propose a measure for tensor sparsity, called the Kronecker-basis-representation based tensor sparsity measure (KBR briefly), which encodes both sparsity insights delivered by Tucker and CANDECOMP/PARAFAC (CP) low-rank decompositions for a general tensor. We then study the KBR regularization minimization (KBRM) problem and design an effective ADMM algorithm for solving it, where each involved parameter can be updated with closed-form equations. Such an efficient solver makes it possible to extend KBR to various tasks like tensor completion and tensor robust principal component analysis. A series of experiments, including multispectral image (MSI) denoising, MSI completion and background subtraction, substantiate the superiority of the proposed methods over the state-of-the-art.
NASA Technical Reports Server (NTRS)
Vlahopoulos, Nickolas
2005-01-01
The Energy Finite Element Analysis (EFEA) is a finite element based computational method for high frequency vibration and acoustic analysis. The EFEA solves with finite elements governing differential equations for energy variables. These equations are developed from wave equations. Recently, an EFEA method for computing high frequency vibration of structures either in vacuum or in contact with a dense fluid has been presented. The presence of fluid loading has been considered through added mass and radiation damping. The EFEA developments were validated by comparing EFEA results to solutions obtained by very dense conventional finite element models and solutions from classical techniques such as statistical energy analysis (SEA) and the modal decomposition method for bodies of revolution. EFEA results have also been compared favorably with test data for the vibration and the radiated noise generated by a large scale submersible vehicle. The primary variable in EFEA is defined as the energy density, time averaged over a period and space averaged over a wavelength. A joint matrix computed from the power transmission coefficients is utilized for coupling the energy density variables across any discontinuities, such as change of plate thickness, plate/stiffener junctions, etc. When considering the high frequency vibration of a periodically stiffened plate or cylinder, the flexural wavelength is smaller than the interval length between two periodic stiffeners; therefore the stiffener stiffness cannot be smeared by computing an equivalent rigidity for the plate or cylinder. The periodic stiffeners must be regarded as coupling components between periodic units. In this paper, Periodic Structure (PS) theory is utilized for computing the coupling joint matrix and for accounting for the periodicity characteristics.
Effect of dry torrefaction on kinetics of catalytic pyrolysis of sugarcane bagasse
NASA Astrophysics Data System (ADS)
Daniyanto, Sutijan, Deendarlianto, Budiman, Arief
2015-12-01
Decreasing world reserves of fossil resources (i.e. petroleum, coal and natural gas) encourage the discovery of renewable resources as substitutes for fossil resources. Biomass is one of the main natural renewable resources; it is a promising alternative for meeting the world's energy needs and a raw material for producing chemical platforms. Conversion of biomass, as a source of energy, fuel and biochemicals, is conducted using thermochemical processes such as pyrolysis-gasification. The pyrolysis step is an important step in the mechanism of pyrolysis-gasification of biomass. The objective of this study is to obtain the reaction kinetics of catalytic pyrolysis of dry-torrefied sugarcane bagasse with Ca and Mg as catalysts. The kinetic model is interpreted using an n-order single-reaction equation for biomass. The rate of the catalytic pyrolysis reaction depends on the weight of biomass converted into char and volatile matter. Based on TG/DTA analysis, the rate of the pyrolysis reaction is influenced by the composition of the biomass (i.e. hemicellulose, cellulose and lignin) and by inorganic components, especially alkali and alkaline earth metals (AAEM). From this study, two rate equations for the catalytic pyrolysis of sugarcane bagasse with the Ca and Mg catalysts were found: the first describes the rapid decomposition zone and the second the slow decomposition zone. The reaction order is n > 1 for rapid decomposition and n < 1 for slow decomposition. The rate constants and reaction orders for catalytic pyrolysis of dry-torrefied sugarcane bagasse in the presence of Ca tend to be higher than those in the presence of Mg.
A direct method for unfolding the resolution function from measurements of neutron induced reactions
NASA Astrophysics Data System (ADS)
Žugec, P.; Colonna, N.; Sabate-Gilarte, M.; Vlachoudis, V.; Massimi, C.; Lerendegui-Marco, J.; Stamatopoulos, A.; Bacak, M.; Warren, S. G.; n TOF Collaboration
2017-12-01
The paper explores the numerical stability and the computational efficiency of a direct method for unfolding the resolution function from measurements of neutron induced reactions. A detailed resolution function formalism is laid out, followed by an overview of challenges present in a practical implementation of the method. A special matrix storage scheme is developed both to facilitate the memory management of the resolution function matrix and to increase the computational efficiency of the matrix multiplication and decomposition procedures. Due to its admirable computational properties, a Cholesky decomposition is at the heart of the unfolding procedure. With the smallest but necessary modification of the matrix to be decomposed, the method is successfully applied to a system of size 10⁵ × 10⁵. However, the amplification of the uncertainties during the direct inversion procedures limits the applicability of the method to high-precision measurements of neutron induced reactions.
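The central linear-algebra step can be illustrated with SciPy: factor a symmetric positive-definite matrix once with a Cholesky decomposition, then unfold by triangular solves. The diagonal shift below stands in for the "smallest but necessary modification" mentioned above, applied here to a synthetic matrix of modest size (all data is ours):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(4)
n = 200
A = rng.standard_normal((n, n))
# SPD "resolution-like" matrix; the diagonal shift plays the role of the
# minimal modification that makes the decomposition well posed.
R = A @ A.T + n * np.eye(n)
x_true = rng.standard_normal(n)
y = R @ x_true

c, low = cho_factor(R)          # factor once ...
x = cho_solve((c, low), y)      # ... then unfold by two triangular solves
```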
NASA Astrophysics Data System (ADS)
Qing, Zhou; Weili, Jiao; Tengfei, Long
2014-03-01
The Rational Function Model (RFM) is a generalized sensor model. It does not need the physical parameters of sensors to achieve an accuracy comparable to that of rigorous sensor models. At present, the main method to solve for the RPCs is least squares estimation. But when the number of coefficients is large or the distribution of the control points is uneven, the classical least squares method loses its superiority due to ill-conditioning of the design matrix. Condition Index and Variance Decomposition Proportion (CIVDP) is a reliable method for diagnosing multicollinearity in the design matrix. It can not only detect the multicollinearity, but can also locate the parameters and show the corresponding columns in the design matrix. In this paper, the CIVDP method is used to diagnose the ill-conditioning of the RFM and to find the multicollinearity in the normal matrix.
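The CIVDP diagnostics can be sketched directly from an SVD of the column-equilibrated design matrix: condition indices are ratios of singular values, and each coefficient's variance is apportioned across the singular values to give the variance-decomposition proportions (Belsley's collinearity diagnostics). An illustrative NumPy implementation with our own naming:

```python
import numpy as np

def civdp(X):
    """Condition indices and variance-decomposition proportions for a
    design matrix X: scale columns to unit length, take the SVD, and
    attribute each coefficient's variance across the singular values."""
    Xs = X / np.linalg.norm(X, axis=0)           # column equilibration
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    cond_idx = s.max() / s                       # one index per singular value
    phi = (Vt.T / s) ** 2                        # phi[k, j] = v_kj^2 / s_j^2
    vdp = phi / phi.sum(axis=1, keepdims=True)   # rows sum to 1 per coefficient
    return cond_idx, vdp
```

A condition index above roughly 30, together with two or more coefficients having large proportions on the same singular value, flags a collinear group; this is how the method locates the offending columns.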
The study of Thai stock market across the 2008 financial crisis
NASA Astrophysics Data System (ADS)
Kanjamapornkul, K.; Pinčák, Richard; Bartoš, Erik
2016-11-01
The cohomology theory for financial markets allows us to deform the Kolmogorov space of time series data over a time period, with an explicit definition of eight market states in a grand unified theory. The anti-de Sitter space induced from a coupling behavior field among traders in the case of a financial market crash acts like a gravitational field in financial market spacetime. Under this hybrid mathematical superstructure, we redefine a behavior matrix by using the Pauli matrices and a modified Wilson loop for time series data. We use it to detect the 2008 financial market crash by using the degree of the cohomology group of the sphere over the tensor field in the correlation matrix over all possible dominated stocks underlying the Thai SET50 Index Futures. The empirical analysis of the financial tensor network was performed with the help of empirical mode decomposition and intrinsic time scale decomposition of the correlation matrix and the calculation of the closeness centrality of the planar graph.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Boian S.; Vesselinov, Velimir V.; Stanev, Valentin
The ShiftNMFk 1.2 code, or GreenNMFk as we call it, implements a hybrid algorithm combining unsupervised adaptive machine learning and a Green's-function inverse method. GreenNMFk allows efficient, high-performance de-mixing and feature extraction of a multitude of nonnegative signals that change their shape while propagating through a medium. The signals are mixed and recorded by a network of uncorrelated sensors. The code couples Non-negative Matrix Factorization (NMF) with the inverse-analysis Green's functions method. GreenNMF synergistically performs decomposition of the recorded mixtures, finds the number of the unknown sources, and uses the Green's function of the governing partial differential equation to identify the unknown sources and their characteristics. GreenNMF can be applied directly to any problem governed by a known parabolic partial differential equation where mixtures of an unknown number of sources are measured at multiple locations. The full GreenNMFk method is the subject of LANL U.S. Patent application S133364.000 (August 2017). The ShiftNMFk 1.2 version here is a toy version of this method that can work with a limited number of unknown sources (4 or fewer).
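The NMF half of such a de-mixing can be sketched with the classical Lee-Seung multiplicative updates; the Green's-function inverse step of GreenNMFk is not reproduced here. An illustrative NumPy version (function name and parameters are our own):

```python
import numpy as np

def nmf(V, r, iters=2000, seed=0):
    """Plain Lee-Seung multiplicative-update NMF, V ~= W H with all factors
    nonnegative -- only the decomposition half of a GreenNMF-style
    de-mixing, with a rank r assumed known in this toy sketch."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update H, nonnegativity kept
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update W likewise
    return W, H
```

In the full method, candidate source counts would be scanned and the factorization coupled to the Green's function of the governing equation; here the rank is simply supplied.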
NASA Astrophysics Data System (ADS)
Hu, Shujuan; Cheng, Jianbo; Xu, Ming; Chou, Jifan
2018-04-01
The three-pattern decomposition of global atmospheric circulation (TPDGAC) partitions three-dimensional (3D) atmospheric circulation into horizontal, meridional and zonal components to study the 3D structures of global atmospheric circulation. This paper incorporates the three-pattern decomposition model (TPDM) into the primitive equations of atmospheric dynamics and establishes a new set of dynamical equations for the horizontal, meridional and zonal circulations, in which the operator properties are studied and the energy conservation laws are preserved, as in the primitive equations. The physical significance of the newly established equations is demonstrated. Our findings reveal that the new equations are essentially the 3D vorticity equations of the atmosphere and that the time evolution of the horizontal, meridional and zonal circulations can be described from the perspective of 3D vorticity evolution. The new set of dynamical equations includes decomposed expressions that can be used to explore the source terms of large-scale atmospheric circulation variations. A simplified model is presented to demonstrate the potential applications of the new equations for studying the dynamics of the Rossby, Hadley and Walker circulations. The model shows that the horizontal air temperature anomaly gradient (ATAG) induces changes in the meridional and zonal circulations and promotes the baroclinic evolution of the horizontal circulation. The simplified model also indicates that the absolute vorticity of the horizontal circulation is not conserved, and that its changes can be described by changes in the vertical vorticities of the meridional and zonal circulations. Moreover, the thermodynamic equation shows that the induced meridional and zonal circulations and the advection transport by the horizontal circulation in turn cause a redistribution of the air temperature.
The simplified model reveals the fundamental rules between the evolution of the air temperature and the horizontal, meridional and zonal components of global atmospheric circulation.
The Rigid Orthogonal Procrustes Rotation Problem
ERIC Educational Resources Information Center
ten Berge, Jos M. F.
2006-01-01
The problem of rotating a matrix orthogonally to a best least squares fit with another matrix of the same order has a closed-form solution based on a singular value decomposition. The optimal rotation matrix is not necessarily rigid, but may also involve a reflection. In some applications, only rigid rotations are permitted. Gower (1976) has…
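The closed-form SVD solution the abstract refers to, with the rigidity (no-reflection) constraint enforced by flipping the sign of the last singular vector, can be sketched as follows; the function name and test data are illustrative, not from the paper:

```python
import numpy as np

def rigid_procrustes(A, B):
    """Rotation R (det = +1) minimising ||A @ R - B||_F over rigid rotations.

    The unconstrained orthogonal solution is R = U @ Vt from the SVD of
    A.T @ B; if that solution is a reflection, flipping the sign of the
    last singular direction gives the best proper rotation.
    """
    U, s, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(U @ Vt))        # +1 rotation, -1 reflection
    D = np.diag([1.0] * (len(s) - 1) + [d])   # flip last axis if needed
    return U @ D @ Vt

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 3))
# Build B as a pure rotation of A, so the fit should be exact.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1.0                            # force det(Q) = +1
B = A @ Q
R = rigid_procrustes(A, B)
print(np.linalg.det(R))  # +1: a proper rotation, which recovers Q here
```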
A knowledge-based tool for multilevel decomposition of a complex design problem
NASA Technical Reports Server (NTRS)
Rogers, James L.
1989-01-01
Although much work has been done in applying artificial intelligence (AI) tools and techniques to problems in different engineering disciplines, only recently has the application of these tools begun to spread to the decomposition of complex design problems. A new tool based on AI techniques has been developed to implement a decomposition scheme suitable for multilevel optimization and display of data in an N x N matrix format.
Matrix form of Legendre polynomials for solving linear integro-differential equations of high order
NASA Astrophysics Data System (ADS)
Kammuji, M.; Eshkuvatov, Z. K.; Yunus, Arif A. M.
2017-04-01
This paper presents an effective approximate solution of high-order Fredholm-Volterra integro-differential equations (FVIDEs) with boundary conditions. A truncated Legendre series is used as the basis functions to estimate the unknown function. Matrix operations on Legendre polynomials are used to transform the FVIDEs with boundary conditions into a matrix equation of Fredholm-Volterra type. The Gauss-Legendre quadrature formula and the collocation method are applied to transfer the matrix equation into a system of linear algebraic equations, which is then solved by the Gauss elimination method. The accuracy and validity of this method are discussed by solving two numerical examples and through comparisons with wavelet-based and other methods.
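The Gauss-Legendre quadrature ingredient of such a scheme can be illustrated in a few lines of NumPy (a generic sketch, not the authors' implementation): n nodes integrate polynomials of degree up to 2n - 1 exactly, which is why the rule pairs naturally with a truncated Legendre basis.

```python
import numpy as np

# Gauss-Legendre nodes and weights on [-1, 1].
n = 5
x, w = np.polynomial.legendre.leggauss(n)

# Exact for x^8 (degree 8 <= 2*5 - 1 = 9); the integral over [-1, 1] is 2/9.
approx = np.sum(w * x**8)
print(approx, 2.0 / 9.0)

# A smooth non-polynomial integrand also converges rapidly as n grows.
approx_cos = np.sum(w * np.cos(x))
print(approx_cos, 2.0 * np.sin(1.0))
```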
NASA Astrophysics Data System (ADS)
Roehl, Jan Hendrik; Oberrath, Jens
2016-09-01
``Active plasma resonance spectroscopy'' (APRS) is a widely used diagnostic method for measuring plasma parameters such as the electron density. Measurements with APRS probes in plasmas of a few Pa typically show a broadening of the spectrum due to kinetic effects. To analyze the broadening, a general kinetic model in the electrostatic approximation based on functional analytic methods has been presented [1]. One of the main results is that the system response function Y(ω) is given in terms of the matrix elements of the resolvent of the dynamic operator evaluated for values on the imaginary axis. To determine the response function of a specific probe, the resolvent has to be approximated by a huge matrix with a banded block structure. Due to this structure, a block-based LU decomposition can be implemented. It leads to a solution for Y(ω) given only by products of matrices of the inner block size. This LU decomposition makes it possible to analyze the influence of kinetic effects on the broadening and saves memory and calculation time. Gratitude is expressed to the internal funding of Leuphana University.
Yang, Haixuan; Seoighe, Cathal
2016-01-01
Nonnegative Matrix Factorization (NMF) has proved to be an effective method for unsupervised clustering analysis of gene expression data. Owing to the nonnegativity constraint, NMF provides a decomposition of the data matrix into two matrices that have been used for clustering analysis. However, the decomposition is not unique, which allows different clustering results to be obtained and leads to different interpretations of the decomposition. To alleviate this problem, some existing methods enforce uniqueness to some extent by adding regularization terms to the NMF objective function. Alternatively, various normalization methods have been applied to the factor matrices; however, the effects of the choice of normalization have not been carefully investigated. Here we investigate the performance of NMF for the task of cancer class discovery under a wide range of normalization choices. After extensive evaluations, we observe that the maximum norm shows the best performance, although it has not previously been used for NMF. Matlab codes are freely available from: http://maths.nuigalway.ie/~haixuanyang/pNMF/pNMF.htm.
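The normalization in question exploits the diagonal-rescaling ambiguity of NMF: scaling column k of W by 1/c and row k of H by c leaves the product W @ H unchanged. A minimal sketch (function name and data invented for illustration, using the maximum norm on the columns of W):

```python
import numpy as np

def maxnorm_normalise(W, H):
    """Rescale an NMF factor pair so each column of W has unit max norm.

    W @ H is unchanged: this is exactly the diagonal-rescaling
    non-uniqueness that different normalizations resolve differently.
    """
    c = W.max(axis=0)              # max norm of each (nonnegative) column
    return W / c, H * c[:, None]

rng = np.random.default_rng(0)
W = rng.random((6, 3)) * 5.0
H = rng.random((3, 4))
Wn, Hn = maxnorm_normalise(W, H)
print(Wn.max(axis=0))              # every column now peaks at 1
print(np.allclose(W @ H, Wn @ Hn)) # the factorization itself is unchanged
```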
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Minseok; Sapsis, Themistoklis P.; Karniadakis, George Em, E-mail: george_karniadakis@brown.edu
2014-08-01
The Karhunen–Loève (KL) decomposition provides a low-dimensional representation for random fields, as it is optimal in the mean square sense. Although for many stochastic systems of practical interest, described by stochastic partial differential equations (SPDEs), solutions possess this low-dimensional character, they also have a strongly time-dependent form, and to this end a fixed-in-time basis may not describe the solution in an efficient way. Motivated by this limitation of the standard KL expansion, Sapsis and Lermusiaux (2009) [26] developed the dynamically orthogonal (DO) field equations, which allow for the simultaneous evolution of both the spatial basis where uncertainty 'lives' and the stochastic characteristics of uncertainty. Recently, Cheng et al. (2013) [28] introduced an alternative approach, the bi-orthogonal (BO) method, which performs exactly the same tasks, i.e. it evolves the spatial basis and the stochastic characteristics of uncertainty. In the current work we examine the relation of the two approaches and we prove theoretically and illustrate numerically their equivalence, in the sense that one method is an exact reformulation of the other. We show this by deriving a linear and invertible transformation matrix, described by a matrix differential equation, that connects the BO and the DO solutions. We also examine a pathology of the BO equations that occurs when two eigenvalues of the solution cross, resulting in an instantaneous, infinite-speed, internal rotation of the computed spatial basis. We demonstrate that despite the instantaneous duration of the singularity, this has important implications for the numerical performance of the BO approach. On the other hand, it is observed that the BO is more stable in nonlinear problems involving a relatively large number of modes. Several examples, linear and nonlinear, are presented to illustrate the DO and BO methods as well as their equivalence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matuttis, Hans-Georg; Wang, Xiaoxing
Decomposition methods of the Suzuki-Trotter type of various orders have been derived in different fields. Applying them both to classical ordinary differential equations (ODEs) and to quantum systems makes it possible to judge their effectiveness and gives new insights for many-body quantum mechanics, where reference data are scarce. Further, based on data for a 6 × 6 system, we conclude that sampling with sign (the minus-sign problem) is probably detrimental to the accuracy of fermionic simulations with determinant algorithms.
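The order of such a decomposition can be checked numerically. The sketch below (illustrative, not the paper's many-body setup) applies the second-order Strang splitting exp(A dt/2) exp(B dt) exp(A dt/2) to a pair of non-commuting 2 × 2 matrices: halving the step size should quarter the global error.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli sigma_x
B = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli sigma_z (does not commute with A)
t = 1.0

def trotter2(A, B, t, n):
    """Second-order (Strang) Suzuki-Trotter approximation of expm(t*(A+B))."""
    dt = t / n
    step = expm(A * dt / 2) @ expm(B * dt) @ expm(A * dt / 2)
    return np.linalg.matrix_power(step, n)

exact = expm(t * (A + B))
err = [np.linalg.norm(trotter2(A, B, t, n) - exact) for n in (10, 20)]
print(err[0] / err[1])  # close to 4, confirming O(dt^2) global error
```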
Homotopy decomposition method for solving one-dimensional time-fractional diffusion equation
NASA Astrophysics Data System (ADS)
Abuasad, Salah; Hashim, Ishak
2018-04-01
In this paper, we present for the first time the homotopy decomposition method with a modified definition of the beta fractional derivative to find the exact solution of the one-dimensional time-fractional diffusion equation. In this method, the solution takes the form of a convergent series with easily computable terms. The exact solution obtained by the proposed method is compared with the exact solution obtained using the fractional variational homotopy perturbation iteration method via a modified Riemann-Liouville derivative.
Polar and singular value decomposition of 3×3 magic squares
NASA Astrophysics Data System (ADS)
Trenkler, Götz; Schmidt, Karsten; Trenkler, Dietrich
2013-07-01
In this note, we find polar as well as singular value decompositions of a 3×3 magic square, i.e. a 3×3 matrix M with real elements where each row, column and diagonal adds up to the magic sum s of the magic square.
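For the classical Lo Shu square, both decompositions can be computed directly with SciPy (an illustrative numerical check; the paper derives them in closed form). A pleasant feature: because every row and column sums to the magic sum s = 15, the all-ones vector is a singular vector and 15 appears among the singular values.

```python
import numpy as np
from scipy.linalg import polar, svdvals

# The classic Lo Shu magic square: rows, columns and diagonals sum to 15.
M = np.array([[2.0, 7.0, 6.0],
              [9.0, 5.0, 1.0],
              [4.0, 3.0, 8.0]])

s = svdvals(M)
print(s)                 # the magic sum 15 is the largest singular value

U, P = polar(M)          # M = U @ P with U orthogonal, P symmetric PSD
print(np.allclose(M, U @ P), np.allclose(P, P.T))
```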
Tarai, Madhumita; Kumar, Keshav; Divya, O; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar
2017-09-05
The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix. Copyright © 2017 Elsevier B.V. All rights reserved.
Scilab software as an alternative low-cost computing in solving the linear equations problem
NASA Astrophysics Data System (ADS)
Agus, Fahrul; Haviluddin
2017-02-01
Numerical computation packages are widely used in both teaching and research. These packages may be licensed (proprietary) or open-source (non-proprietary). One reason to use such a package is the complexity of mathematical functions (e.g., linear problems); moreover, the number of variables in linear and non-linear functions has increased. The aim of this paper is to reflect on key aspects of method, didactics and creative praxis in the teaching of linear equations in higher education. If implemented, this could contribute to better learning in mathematics (i.e., solving simultaneous linear equations), which is essential for future engineers. The focus of this study is to introduce the numerical computation package Scilab as an alternative low-cost computing environment. In this paper, we propose activities in Scilab related to mathematical models. In our experiments, four numerical methods were implemented: Gaussian Elimination, Gauss-Jordan, Inverse Matrix, and Lower-Upper (LU) Decomposition. The results show that routines for these numerical methods can be created and explored using Scilab procedures, and that these routines can then be exploited as teaching material for a course.
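Three of the four methods compared above can be sketched in a few lines (Python/SciPy stands in for Scilab here purely as an illustration; the coefficient matrix is invented):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, -2.0, 1.0],
              [3.0,  6.0, -4.0],
              [2.0,  1.0,  8.0]])
b = np.array([1.0, 2.0, 3.0])

x_direct = np.linalg.solve(A, b)   # Gaussian elimination with pivoting
x_inv = np.linalg.inv(A) @ b       # inverse-matrix method (didactic only)
lu, piv = lu_factor(A)             # LU decomposition, reusable for many b
x_lu = lu_solve((lu, piv), b)

print(np.allclose(x_direct, x_inv), np.allclose(x_direct, x_lu))
```

The LU route is the one worth teaching for repeated solves: the O(n³) factorization is done once, and each additional right-hand side costs only O(n²).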
Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.
Li, Xingyu; Plataniotis, Konstantinos N
2017-01-01
In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed preceding spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is naturally a nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms, where stain proportions are computed directly from estimated stain spectra using a matrix inverse operation, the introduced solution estimates stain spectra and stain depths individually via probabilistic reasoning. Since the proposed method pays extra attention to achromatic pixels in color analysis and to stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimal decomposition residue. In particular, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address color mixing introduced by stain overlap. This innovation, combined with saturation-weighted computation, makes our study effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.
Dominant modal decomposition method
NASA Astrophysics Data System (ADS)
Dombovari, Zoltan
2017-03-01
The paper deals with the automatic decomposition of experimental frequency response functions (FRF's) of mechanical structures. The decomposition of FRF's is based on the Green function representation of free vibratory systems. After the determination of the impulse dynamic subspace, the system matrix is formulated and the poles are calculated directly. By means of the corresponding eigenvectors, the contribution of each element of the impulse dynamic subspace is determined and a sufficient decomposition of the corresponding FRF is carried out. With the presented dominant modal decomposition (DMD) method, the mode shapes, the modal participation vectors and the modal scaling factors are identified using the decomposed FRF's. An analytical example is presented along with experimental case studies taken from the machine tool industry.
Application of Direct Parallel Methods to Reconstruction and Forecasting Problems
NASA Astrophysics Data System (ADS)
Song, Changgeun
Many important physical processes in nature are represented by partial differential equations. Numerical weather prediction, in particular, requires vast computational resources. We investigate the significance of parallel processing technology for the real-world problem of atmospheric prediction. In this paper we consider the classic problem of decomposing the observed wind field into its irrotational and nondivergent components. Recognizing the fact that on a limited domain this problem has a non-unique solution, Lynch (1989) described eight different ways to accomplish the decomposition. One set of elliptic equations is associated with the decomposition; this determines the initial nondivergent state for the forecast model. It is shown that the entire decomposition problem can be solved in a fraction of a second using a multi-vector processor such as the ALLIANT FX/8. Second, the barotropic model is used to track hurricanes, and one set of elliptic equations is solved to recover the streamfunction from the forecasted vorticity. A 72 h prediction of Hurricane Elena is made while it is in the Gulf of Mexico. During this time the hurricane executes a dramatic re-curvature that is captured by the model. Furthermore, an improvement in the track prediction results when a simple assimilation strategy is used. This technique makes use of the wind fields in the 24 h period immediately preceding the initial time of the prediction. In this particular application, solutions of systems of elliptic equations are the center of the computational mechanics. We demonstrate that direct, parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to the decomposition, the forecast and adjoint assimilation.
NASA Astrophysics Data System (ADS)
Cowperthwaite, M.
1994-03-01
Methods of differential geometry and Bernoulli's equation, written as B=0, are used to develop a new approach for constructing an exact solution for axial flow in a classical, two-dimensional, ZND detonation wave in a polytropic explosive with an arbitrary rate of decomposition. This geometric approach is fundamentally different from the traditional approaches to this axial flow problem formulated by Wood and Kirkwood (WK) and Fickett and Davis (FD), and gives equations for the axial particle velocity (u), the sound speed (c), the pressure (p), and the density (ρ), that are expressed in terms of the detonation velocity (D), the extent of decomposition (λ), the polytropic index (K), and two nonideal parameters ɛ3 and ɛ1, and reduce to the equations for steady-state, one-dimensional detonation as ɛ3 and ɛ1 approach zero. In contrast to the FD approach, the equations for u and c are obtained from first integrals of a tangent vector à on (u,c,λ) space, and the invariant condition, ÃB=aB=0, bypasses the FD eigenvalue problem by defining ɛ3 in terms of the detonation velocity deficit D/D∞ and K. In contrast to the WK approach, the equations for p and ρ are obtained from equations expressing the conservation of axial momentum and energy. Because the equations for these flow variables are derived without using the conservation of mass, the axial radial particle velocity gradient (war) associated with the flow can be obtained from the continuity equation without making approximations. The relationship between ɛ1 and ɛ3 that closes the solution is obtained from equations expressing constraints imposed on the axial flow at the shock front by the axial and radial momentum equations, the curved shock and the decomposition rate law, and a particular solution is constructed from the ɛ1-ɛ3 relationship determined by a prescribed rate law and value of K. 
Properties of particular solutions are presented to provide a better understanding of two-dimensional detonation, and a new axial condition for detonation failure is used to show that detonation failure can occur before the curve relating D/D∞ to the axial radius of curvature of the shock (Sa) becomes infinite.
Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta
2016-01-01
This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low- and high-frequency sub-bands. The low-frequency sub-band is further divided into blocks of the same size after shuffling it, and the singular value decomposition is then applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of the block of the low-frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images and is robust enough to withstand several image processing attacks. Comparison with other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.
Generalized decompositions of dynamic systems and vector Lyapunov functions
NASA Astrophysics Data System (ADS)
Ikeda, M.; Siljak, D. D.
1981-10-01
The notion of decomposition is generalized to provide more freedom in constructing vector Lyapunov functions for stability analysis of nonlinear dynamic systems. A generalized decomposition is defined as a disjoint decomposition of a system which is obtained by expanding the state-space of a given system. An inclusion principle is formulated for the solutions of the expansion to include the solutions of the original system, so that stability of the expansion implies stability of the original system. Stability of the expansion can then be established by standard disjoint decompositions and vector Lyapunov functions. The applicability of the new approach is demonstrated using the Lotka-Volterra equations.
Numeric Modified Adomian Decomposition Method for Power System Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimitrovski, Aleksandar D; Simunovic, Srdjan; Pannala, Sreekanth
This paper investigates the applicability of the numeric Wazwaz-El Sayed modified Adomian Decomposition Method (WES-ADM) for time-domain simulation of power systems. WES-ADM is a numerical method based on a modified Adomian decomposition (ADM) technique, and serves as a numerical approximation method for the solution of nonlinear ordinary differential equations. The non-linear terms in the differential equations are approximated using Adomian polynomials. In this paper WES-ADM is applied to time-domain simulations of multimachine power systems. The WECC 3-generator, 9-bus system and the IEEE 10-generator, 39-bus system have been used to test the applicability of the approach, and several fault scenarios have been tested. It has been found that the proposed approach is faster than the trapezoidal method with comparable accuracy.
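The Adomian idea underlying WES-ADM can be illustrated on a textbook nonlinear ODE, u' = u² with u(0) = 1, whose exact solution is 1/(1 − t). Each Adomian polynomial Aₙ is the n-th Taylor coefficient in λ of the nonlinearity evaluated on the series Σ uₖλᵏ, and each new term is uₙ₊₁ = ∫₀ᵗ Aₙ dt. This is a generic ADM sketch in SymPy, not the WES-ADM power-system code:

```python
import sympy as sp

t, lam = sp.symbols('t lambda')

# Adomian decomposition for u' = u**2, u(0) = 1 (exact solution 1/(1 - t)).
N = 6
u = [sp.Integer(1)]                          # u_0 = initial condition
for n in range(N):
    series = sum(u[k] * lam**k for k in range(len(u)))
    # Adomian polynomial: n-th lambda-Taylor coefficient of N(u) = u^2.
    A_n = sp.diff(series**2, lam, n).subs(lam, 0) / sp.factorial(n)
    u.append(sp.integrate(A_n, (t, 0, t)))   # u_{n+1} = integral of A_n

approx = sp.expand(sum(u))
print(approx)  # 1 + t + t**2 + ... + t**6: the Taylor series of 1/(1 - t)
```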
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomez, Thomas; Nagayama, Taisuke; Fontes, Chris
Atomic structure of N-electron atoms is often determined by solving the Hartree-Fock equations, which are a set of integro-differential equations. The integral part of the Hartree-Fock equations treats electron exchange, but the Hartree-Fock equations are not often treated as an integro-differential equation. The exchange term is often approximated as an inhomogeneous or an effective potential so that the Hartree-Fock equations become a set of ordinary differential equations (which can be solved using the usual shooting methods). Because the Hartree-Fock equations are an iterative-refinement method, the inhomogeneous term relies on the previous guess of the wavefunction. In addition, there are numerical complications associated with solving inhomogeneous differential equations. This work uses matrix methods to solve the Hartree-Fock equations as an integro-differential equation. It is well known that a derivative operator can be expressed as a matrix made of finite-difference coefficients; energy eigenvalues and eigenvectors can be obtained by using linear-algebra packages. The integral (exchange) part of the Hartree-Fock equation can be approximated as a sum and written as a matrix. The Hartree-Fock equations can be solved as a matrix that is the sum of the differential and integral matrices. We compare calculations using this method against experiment and standard atomic structure calculations. This matrix method can also be used to solve for free-electron wavefunctions, thus improving how the atoms and free electrons interact. Here, this technique is important for spectral line broadening in two ways: it improves the atomic structure calculations, and it improves the motion of the plasma electrons that collide with the atom.
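The core matrix idea — write the derivative operator as a finite-difference matrix, add the potential on the diagonal, and hand the eigenvalue problem to a linear-algebra routine — can be sketched on an exactly solvable one-electron problem (a 1D harmonic oscillator rather than Hartree-Fock; grid size and extent are arbitrary choices):

```python
import numpy as np

# H = -1/2 d^2/dx^2 + x^2/2 on a grid; exact eigenvalues are n + 1/2.
n, L = 1000, 10.0
x = np.linspace(-L, L, n)
h = x[1] - x[0]

# Central-difference kinetic term: -1/2 * (u[i+1] - 2 u[i] + u[i-1]) / h^2,
# i.e. +1/h^2 on the diagonal and -1/(2 h^2) on the off-diagonals.
main = np.full(n, 1.0 / h**2) + 0.5 * x**2
off = np.full(n - 1, -0.5 / h**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)     # symmetric eigensolver from LAPACK
print(E[:3])                  # close to [0.5, 1.5, 2.5]
```

The same skeleton extends to the abstract's setting by adding the (dense) exchange matrix to H before diagonalizing.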
Gomez, Thomas; Nagayama, Taisuke; Fontes, Chris; ...
2018-04-23
Deng, Xinyang; Jiang, Wen; Zhang, Jiandong
2017-01-01
The zero-sum matrix game is one of the most classic game models, and it is widely used in many scientific and engineering fields. In the real world, due to the complexity of the decision-making environment, the payoffs received by players may sometimes be inexact or uncertain, which requires that the matrix game model be able to represent and deal with imprecise payoffs. To meet such a requirement, this paper develops a zero-sum matrix game model with Dempster–Shafer belief structure payoffs, which effectively represents the ambiguity involved in the payoffs of a game. Then, a decomposition method is proposed to calculate the value of such a game, which is also expressed with belief structures. Moreover, for the possibly computation-intensive nature of the proposed decomposition method, a Monte Carlo simulation approach is presented as an alternative solution. Finally, the proposed zero-sum matrix game with payoffs of Dempster–Shafer belief structures is illustratively applied to sensor selection and intrusion detection in sensor networks, which shows its effectiveness and application process. PMID:28430156
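For crisp (precise) payoffs, the value of a zero-sum matrix game is a small linear program. The sketch below is a standard LP formulation for that crisp special case, as a building block, not the paper's belief-structure method: shift the payoffs positive, solve min Σx subject to AᵀX ≥ 1, and recover the value as 1/Σx minus the shift.

```python
import numpy as np
from scipy.optimize import linprog

def game_value(A):
    """Value of the zero-sum game with payoff matrix A (row player maximises)."""
    shift = 1.0 - A.min()
    B = A + shift                         # make all payoffs strictly positive
    m, n = B.shape
    # min sum(x)  s.t.  B.T @ x >= 1, x >= 0; value of shifted game = 1/sum(x).
    res = linprog(c=np.ones(m), A_ub=-B.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m, method='highs')
    return 1.0 / res.x.sum() - shift

# Matching pennies: no saddle point in pure strategies; mixed value is 0.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(game_value(A))  # approximately 0
```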
Solution of matrix equations using sparse techniques
NASA Technical Reports Server (NTRS)
Baddourah, Majdi
1994-01-01
The solution of large systems of matrix equations is key to the solution of a large number of scientific and engineering problems. This talk describes the sparse matrix solver developed at Langley which can routinely solve in excess of 263,000 equations in 40 seconds on one Cray C-90 processor. It appears that for large scale structural analysis applications, sparse matrix methods have a significant performance advantage over other methods.
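Sparse solves at a comparable scale are routine today with SciPy's sparse direct solver; the sketch below (a generic 1D Poisson system, unrelated to the Langley code) stores only the three nonzero diagonals, so memory and time scale with the number of nonzeros rather than n²:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Discretise -u'' = f on (0, 1) with zero boundary values.
n = 200_000
h = 1.0 / (n + 1)
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1],
          shape=(n, n), format='csc') / h**2

x_grid = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x_grid)   # chosen so the exact solution is sin(pi x)

u = spsolve(A, f)                       # sparse LU under the hood
err = np.max(np.abs(u - np.sin(np.pi * x_grid)))
print(err)                              # O(h^2) discretisation error
```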
NASA Astrophysics Data System (ADS)
Lin, Zeng; Wang, Dongdong
2017-10-01
Due to the nonlocal property of the fractional derivative, the finite element analysis of fractional diffusion equation often leads to a dense and non-symmetric stiffness matrix, in contrast to the conventional finite element formulation with a particularly desirable symmetric and banded stiffness matrix structure for the typical diffusion equation. This work first proposes a finite element formulation that preserves the symmetry and banded stiffness matrix characteristics for the fractional diffusion equation. The key point of the proposed formulation is the symmetric weak form construction through introducing a fractional weight function. It turns out that the stiffness part of the present formulation is identical to its counterpart of the finite element method for the conventional diffusion equation and thus the stiffness matrix formulation becomes trivial. Meanwhile, the fractional derivative effect in the discrete formulation is completely transferred to the force vector, which is obviously much easier and efficient to compute than the dense fractional derivative stiffness matrix. Subsequently, it is further shown that for the general fractional advection-diffusion-reaction equation, the symmetric and banded structure can also be maintained for the diffusion stiffness matrix, although the total stiffness matrix is not symmetric in this case. More importantly, it is demonstrated that under certain conditions this symmetric diffusion stiffness matrix formulation is capable of producing very favorable numerical solutions in comparison with the conventional non-symmetric diffusion stiffness matrix finite element formulation. The effectiveness of the proposed methodology is illustrated through a series of numerical examples.
Model and Data Reduction for Control, Identification and Compressed Sensing
NASA Astrophysics Data System (ADS)
Kramer, Boris
This dissertation focuses on problems in design, optimization and control of complex, large-scale dynamical systems from different viewpoints. The goal is to develop new algorithms and methods, that solve real problems more efficiently, together with providing mathematical insight into the success of those methods. There are three main contributions in this dissertation. In Chapter 3, we provide a new method to solve large-scale algebraic Riccati equations, which arise in optimal control, filtering and model reduction. We present a projection based algorithm utilizing proper orthogonal decomposition, which is demonstrated to produce highly accurate solutions at low rank. The method is parallelizable, easy to implement for practitioners, and is a first step towards a matrix-free approach to solve AREs. Numerical examples for n ≥ 10^6 unknowns are presented. In Chapter 4, we develop a system identification method which is motivated by tangential interpolation. This addresses the challenge of fitting linear time invariant systems to input-output responses of complex dynamics, where the number of inputs and outputs is relatively large. The method reduces the computational burden imposed by a full singular value decomposition, by carefully choosing directions on which to project the impulse response prior to assembly of the Hankel matrix. The identification and model reduction step follows from the eigensystem realization algorithm. We present three numerical examples, a mass spring damper system, a heat transfer problem, and a fluid dynamics system. We obtain error bounds and stability results for this method. Chapter 5 deals with control and observation design for parameter dependent dynamical systems. We address this by using local parametric reduced order models, which can be used online.
Data available from simulations of the system at various configurations (parameters, boundary conditions) is used to extract a sparse basis to represent the dynamics (via dynamic mode decomposition). Subsequently, a new, compressed sensing based classification algorithm is developed which incorporates the extracted dynamic information into the sensing basis. We show that this augmented classification basis makes the method more robust to noise, and results in superior identification of the correct parameter. Numerical examples consist of a Navier-Stokes, as well as a Boussinesq flow application.
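The dynamic mode decomposition step mentioned above can be sketched in a few lines (standard "exact DMD" on synthetic snapshot pairs from a known linear map; the function name and data are illustrative, not from the dissertation):

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: fit a linear map Y ~ A @ X from snapshot pairs, rank r."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    Atilde = U.conj().T @ Y @ Vt.conj().T / s   # operator projected onto POD basis
    evals, W = np.linalg.eig(Atilde)
    modes = Y @ Vt.conj().T / s @ W             # DMD modes in the full space
    return evals, modes

rng = np.random.default_rng(0)
P = rng.standard_normal((5, 5))
# A rank-3 "true" dynamics with known eigenvalues 0.9, 0.5, 0.1, 0, 0.
A_true = P @ np.diag([0.9, 0.5, 0.1, 0.0, 0.0]) @ np.linalg.inv(P)
X = rng.standard_normal((5, 40))
Y = A_true @ X
evals, modes = dmd(X, Y, r=5)
print(np.sort(evals.real)[::-1][:3])  # recovers 0.9, 0.5, 0.1
```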
Simple Derivation of the Lindblad Equation
ERIC Educational Resources Information Center
Pearle, Philip
2012-01-01
The Lindblad equation is an evolution equation for the density matrix in quantum theory. It is the general linear, Markovian form which ensures that the density matrix remains Hermitian, trace 1, positive and completely positive. Some elementary examples of the Lindblad equation are given. The derivation of the Lindblad equation presented here is…
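For reference, the standard (Gorini-Kossakowski-Sudarshan-Lindblad) form of the equation the abstract refers to is the textbook expression

```latex
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_k \gamma_k \left( L_k \rho L_k^\dagger
  - \tfrac{1}{2}\bigl\{ L_k^\dagger L_k,\, \rho \bigr\} \right),
  \qquad \gamma_k \ge 0,
```

where H is the Hamiltonian and the L_k are Lindblad (jump) operators; this is quoted for the reader's convenience, not taken from the article.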
Definition of a parametric form of nonsingular Mueller matrices.
Devlaminck, Vincent; Terrier, Patrick
2008-11-01
The goal of this paper is to propose a mathematical framework to define and analyze a general parametric form of an arbitrary nonsingular Mueller matrix. Starting from previous results about nondepolarizing matrices, we generalize the method to any nonsingular Mueller matrix. We address this problem in a six-dimensional space in order to introduce a transformation group with the same number of degrees of freedom, and explain why subsets of O(5,1), the orthogonal group associated with six-dimensional Minkowski space, are physically admissible solutions to this question. Generators of this group are used to define possible expressions of an arbitrary nonsingular Mueller matrix. Ultimately, the problem of decomposition of these matrices is addressed, and we point out that the "reverse" and "forward" decomposition concepts recently introduced may be inferred from the formalism we propose.
Factor Analytic Approach to Transitive Text Mining using Medline Descriptors
NASA Astrophysics Data System (ADS)
Stegmann, J.; Grohmann, G.
Matrix decomposition methods were applied to examples of noninteractive literature sets sharing implicit relations. Document-by-term matrices were created from downloaded PubMed literature sets, the terms being the Medical Subject Headings (MeSH descriptors) assigned to the documents. The loadings of the factors derived from singular value or eigenvalue matrix decomposition were sorted according to absolute values and subsequently inspected for positions of terms relevant to the discovery of hidden connections. It was found that only a small number of factors had to be screened to find key terms in close neighbourhood, being separated by a small number of terms only.
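The factor-screening procedure described above can be sketched in NumPy. The toy document-by-term matrix below is invented for illustration (two literature sets linked by bridging terms), not taken from the study:

```python
import numpy as np

# Toy document-by-term matrix (rows: documents, columns: MeSH-like terms);
# entries indicate term assignment. Term names are purely illustrative.
terms = ["term_A1", "term_A2", "bridge", "term_B1", "term_B2"]
X = np.array([
    [1, 1, 0, 0, 0],   # literature set A
    [1, 0, 1, 0, 0],
    [0, 1, 1, 0, 0],   # documents containing the bridging term
    [0, 0, 1, 1, 1],   # literature set B
    [0, 0, 0, 1, 1],
], dtype=float)

# Singular value decomposition of the document-term matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Sort the term loadings of the first factor by absolute value, as in the
# screening step: relevant terms should appear in close neighbourhood.
order = np.argsort(-np.abs(Vt[0]))
ranked = [terms[i] for i in order]
```

Inspecting `ranked` for the leading factors mimics the manual screening of loading vectors for hidden connections.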
Dispersion toughened ceramic composites and method for making same
Stinton, David P.; Lackey, Walter J.; Lauf, Robert J.
1986-01-01
Ceramic composites exhibiting increased fracture toughness are produced by the simultaneous codeposition of silicon carbide and titanium disilicide by chemical vapor deposition. A mixture of hydrogen, methyltrichlorosilane and titanium tetrachloride is introduced into a furnace containing a substrate such as graphite or silicon carbide. The thermal decomposition of the methyltrichlorosilane provides a silicon carbide matrix phase and the decomposition of the titanium tetrachloride provides a uniformly dispersed second phase of the intermetallic titanium disilicide within the matrix phase. The fracture toughness of the ceramic composite is in the range of about 6.5 to 7.0 MPa·√m, which represents a significant increase over that of silicon carbide.
Dispersion toughened ceramic composites and method for making same
Stinton, D.P.; Lackey, W.J.; Lauf, R.J.
1984-09-28
Ceramic composites exhibiting increased fracture toughness are produced by the simultaneous codeposition of silicon carbide and titanium disilicide by chemical vapor deposition. A mixture of hydrogen, methyltrichlorosilane and titanium tetrachloride is introduced into a furnace containing a substrate such as graphite or silicon carbide. The thermal decomposition of the methyltrichlorosilane provides a silicon carbide matrix phase and the decomposition of the titanium tetrachloride provides a uniformly dispersed second phase of the intermetallic titanium disilicide within the matrix phase. The fracture toughness of the ceramic composite is in the range of about 6.5 to 7.0 MPa·√m, which represents a significant increase over that of silicon carbide.
A technique for plasma velocity-space cross-correlation
NASA Astrophysics Data System (ADS)
Mattingly, Sean; Skiff, Fred
2018-05-01
An advance in experimental plasma diagnostics is presented and used to make the first measurement of a plasma velocity-space cross-correlation matrix. The velocity space correlation function can detect collective fluctuations of plasmas through a localized measurement. An empirical decomposition, singular value decomposition, is applied to this Hermitian matrix in order to obtain the plasma fluctuation eigenmode structure on the ion distribution function. A basic theory is introduced and compared to the modes obtained by the experiment. A full characterization of these modes is left for future work, but an outline of this endeavor is provided. Finally, the requirements for this experimental technique in other plasma regimes are discussed.
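The decomposition step described above can be sketched in NumPy: build a Hermitian velocity-space correlation matrix from fluctuation data and extract its leading eigenmode. The synthetic data below (one coherent mode plus noise) is an illustrative stand-in for the experiment's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fluctuation data: n_v velocity channels, n_t time samples.
# A single coherent mode on the ion distribution function plus noise.
n_v, n_t = 16, 2000
v = np.linspace(-3.0, 3.0, n_v)
mode = np.exp(-v**2)                      # assumed mode structure
signal = np.outer(mode, np.sin(0.3 * np.arange(n_t)))
data = signal + 0.1 * rng.standard_normal((n_v, n_t))

# Velocity-space cross-correlation matrix (Hermitian by construction)
Cvv = data @ data.conj().T / n_t

# For a Hermitian matrix, the singular and eigen decompositions coincide
# up to signs; eigh returns ascending eigenvalues, so the dominant
# fluctuation eigenmode is the last eigenvector.
w, V = np.linalg.eigh(Cvv)
leading = V[:, -1]
```

The leading eigenvector recovers the planted mode structure on the distribution function up to an overall sign.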
Reconstruction of Complex Network based on the Noise via QR Decomposition and Compressed Sensing.
Li, Lixiang; Xu, Dafei; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian
2017-11-08
It is generally known that the states of network nodes are stable and have strong correlations in a linear network system. We find that without a control input, compressed sensing cannot succeed in reconstructing complex networks in which the states of nodes are generated through the linear network system. However, noise can drive the dynamics between nodes to break the stability of the system state. Therefore, a new method integrating QR decomposition and compressed sensing is proposed to solve the reconstruction problem of complex networks with the assistance of input noise. The state matrix of the system is decomposed by QR decomposition. We construct the measurement matrix with the aid of Gaussian noise so that the sparse input matrix can be reconstructed by compressed sensing. We also discover that noise can build a bridge between the dynamics and the topological structure. Experiments show that the proposed method is more accurate and more efficient in reconstructing four model networks and six real networks, by comparison with compressed sensing alone. In addition, the proposed method can reconstruct not only sparse complex networks but also dense ones.
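The sparse-recovery step at the heart of such methods can be sketched generically. The code below uses plain orthogonal matching pursuit on a Gaussian measurement matrix as a stand-in; it is not the paper's QR-assisted construction, and all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = Phi x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated column
        support.append(j)
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None) # refit on the support
        residual = y - sub @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

n, m, k = 100, 40, 3                                   # unknowns, measurements, sparsity
Phi = rng.standard_normal((m, n)) / np.sqrt(m)         # Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x_true
x_hat = omp(Phi, y, k)
```

With far fewer measurements than unknowns (m = 40 < n = 100), the sparse vector is recovered exactly in the noiseless case.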
Decomposition of gas-phase trichloroethene by the UV/TiO2 process in the presence of ozone.
Shen, Y S; Ku, Y
2002-01-01
The decomposition of gas-phase trichloroethene (TCE) in air streams by direct photolysis and by the UV/TiO2 and UV/O3 processes was studied. The experiments were carried out under various UV light intensities and wavelengths, ozone dosages, and initial concentrations of TCE to investigate and compare the removal efficiency of the pollutant. For the UV/TiO2 process, the individual contributions of direct photolysis and hydroxyl-radical destruction to the decomposition of TCE were differentiated to discuss the quantum efficiency with 254 and 365 nm UV lamps. The removal of gaseous TCE by the UV/TiO2 process was found to be reduced in the presence of ozone, possibly because ozone molecules scavenge the hydroxyl radicals produced by the excitation of TiO2 under UV radiation, inhibiting the decomposition of TCE. A photoreactor design equation for the decomposition of gaseous TCE by the UV/TiO2 process in air streams was developed by combining the continuity equation of the pollutant and the surface catalysis reaction rate expression. With the proposed design scheme, the temporal distribution of TCE at various operating conditions in the UV/TiO2 process can be well modeled.
Application of singular value decomposition to structural dynamics systems with constraints
NASA Technical Reports Server (NTRS)
Juang, J.-N.; Pinson, L. D.
1985-01-01
Singular value decomposition is used to construct a coordinate transformation for a linear dynamic system subject to linear, homogeneous constraint equations. The method is compared with two commonly used methods, namely classical Gaussian elimination and the Walton-Steeves approach. Although the classical method requires fewer numerical operations, the singular value decomposition method is more accurate and more convenient for eliminating the dependent coordinates. Numerical examples are presented to demonstrate the application of the method.
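The constraint-elimination idea can be sketched in a few lines of NumPy, assuming homogeneous constraints of the form C x = 0; the constraint matrix below is illustrative:

```python
import numpy as np

# Two linear, homogeneous constraints on four coordinates: C x = 0.
C = np.array([[1.0, 1.0, 0.0, -1.0],
              [0.0, 2.0, 1.0,  1.0]])

# SVD exposes the null space of C: its orthonormal basis N defines the
# coordinate transformation x = N q to independent coordinates q.
U, s, Vt = np.linalg.svd(C)
rank = int(np.sum(s > 1e-12 * s[0]))
N = Vt[rank:].T                  # 4 x 2 null-space basis

# Any choice of the independent coordinates q satisfies the constraints:
q = np.array([0.3, -1.2])
x = N @ q
```

Substituting x = N q into the equations of motion eliminates the dependent coordinates without the pivoting decisions Gaussian elimination requires.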
Universal shocks in the Wishart random-matrix ensemble.
Blaizot, Jean-Paul; Nowak, Maciej A; Warchoł, Piotr
2013-05-01
We show that the derivative of the logarithm of the average characteristic polynomial of a diffusing Wishart matrix obeys an exact partial differential equation valid for an arbitrary value of N, the size of the matrix. In the large N limit, this equation generalizes the simple inviscid Burgers equation that has been obtained earlier for Hermitian or unitary matrices. The solution, through the method of characteristics, presents singularities that we relate to the precursors of shock formation in the Burgers equation. The finite N effects appear as a viscosity term in the Burgers equation. Using a scaling analysis of the complete equation for the characteristic polynomial, in the vicinity of the shocks, we recover in a simple way the universal Bessel oscillations (so-called hard-edge singularities) familiar in random-matrix theory.
Performance of Scattering Matrix Decomposition and Color Spaces for Synthetic Aperture Radar Imagery
2010-03-01
[Table-of-contents excerpt: color spaces and Synthetic Aperture Radar (SAR) multicolor imaging; colorimetry; decomposition techniques for SAR polarimetry and colorimetry applied to SAR imagery; fundamentals of the RGB and CMY color spaces, defined for full-polarimetric SAR systems.]
Scalable direct Vlasov solver with discontinuous Galerkin method on unstructured mesh.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, J.; Ostroumov, P. N.; Mustapha, B.
2010-12-01
This paper presents the development of parallel direct Vlasov solvers with the discontinuous Galerkin (DG) method for beam and plasma simulations in four dimensions. Both physical and velocity spaces are two-dimensional (2P2V) with unstructured mesh. Contrary to the standard particle-in-cell (PIC) approach for kinetic space plasma simulations, i.e., solving the Vlasov-Maxwell equations, a direct method has been used in this paper. There are several benefits to solving a Vlasov equation directly, such as avoiding the noise associated with a finite number of particles and the capability to capture fine structure in the plasma. The most challenging part of a direct Vlasov solver comes from the higher dimensionality, as the computational cost increases as N^{2d}, where d is the dimension of the physical space. Many efforts have been made to solve Vlasov equations in low dimensions before; with the fast development of supercomputers, more interest has now focused on higher dimensions. Different numerical methods have been tried so far, such as the finite difference method, the Fourier spectral method, the finite volume method, and the spectral element method. This paper builds on our previous efforts to use the DG method. The DG method has been proven very successful in solving the Maxwell equations, and this paper is our first effort in applying it to Vlasov equations. DG has several advantages, such as a local mass matrix, strong stability, and easy parallelization, which are particularly suitable for Vlasov equations. Domain decomposition in high dimensions has been used for parallelization, including a highly scalable parallel two-dimensional Poisson solver. Benchmark results are shown and simulation results will be reported.
NASA Technical Reports Server (NTRS)
Shertzer, Janine; Temkin, Aaron
2004-01-01
The development of a practical method for accurately calculating the full scattering amplitude, without making a partial wave decomposition, is continued. The method is developed in the context of electron-hydrogen scattering; here, exchange is dealt with by considering e-H scattering in the static exchange approximation. The Schroedinger equation in this approximation can be simplified to a set of coupled integro-differential equations, which are solved numerically for the full scattering wave function. The scattering amplitude is most accurately calculated from an integral expression, which can be formally simplified and then evaluated using the numerically determined wave function. The results are essentially identical to converged partial wave results.
COMPADRE: an R and web resource for pathway activity analysis by component decompositions.
Ramos-Rodriguez, Roberto-Rafael; Cuevas-Diaz-Duran, Raquel; Falciani, Francesco; Tamez-Peña, Jose-Gerardo; Trevino, Victor
2012-10-15
The analysis of biological networks has become essential to the study of functional genomic data. Compadre is a tool to estimate pathway/gene-set activity indexes using sub-matrix decompositions for biological network analyses. The Compadre pipeline also includes one of the direct uses of activity indexes: detecting altered gene sets. For this, the gene expression sub-matrix of a gene set is decomposed into components, which are used to test differences between groups of samples. This procedure is performed with and without differentially expressed genes to decrease false calls. During this process, Compadre also performs an over-representation test. Compadre already implements four decomposition methods [principal component analysis (PCA), Isomaps, independent component analysis (ICA) and non-negative matrix factorization (NMF)], six statistical tests (t- and f-test, SAM, Kruskal-Wallis, Welch and Brown-Forsythe), several gene sets (KEGG, BioCarta, Reactome, GO and MsigDB) and can be easily expanded. Our simulation results shown in Supplementary Information suggest that Compadre detects more pathways than over-representation tools like David, Babelomics and Webgestalt, and fewer false positives than PLAGE. The output is composed of results from decomposition and over-representation analyses, providing a more complete biological picture. Examples provided in Supplementary Information show the utility, versatility and simplicity of Compadre for analyses of biological networks. Compadre is freely available at http://bioinformatica.mty.itesm.mx:8080/compadre. The R package is also available at https://sourceforge.net/p/compadre.
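The core activity-index computation (the first principal component of a gene-set expression sub-matrix) can be sketched as follows. The expression data and group structure are synthetic, and this NumPy sketch simplifies Compadre's R pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative gene-set sub-matrix: 10 pathway genes x 12 samples, with the
# pathway up-shifted in the second group of 6 samples.
group = np.r_[np.zeros(6), np.ones(6)]
expr = rng.standard_normal((10, 12)) + 2.0 * np.outer(np.ones(10), group)

# PCA via SVD of the gene-centered sub-matrix; the first right singular
# vector is a per-sample pathway "activity index".
centered = expr - expr.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
activity = Vt[0]
```

The activity index can then be fed to a two-group test (t-test, Kruskal-Wallis, etc.) to flag the gene set as altered.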
Real-time optical laboratory solution of parabolic differential equations
NASA Technical Reports Server (NTRS)
Casasent, David; Jackson, James
1988-01-01
An optical laboratory matrix-vector processor is used to solve parabolic differential equations (the transient diffusion equation with two space variables and time) by an explicit algorithm. This includes optical matrix-vector nonbase-2 encoded laboratory data, the combination of nonbase-2 and frequency-multiplexed data on such processors, a high-accuracy optical laboratory solution of a partial differential equation, new data partitioning techniques, and a discussion of a multiprocessor optical matrix-vector architecture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Xiaojun; Lei, Guangtsai; Pan, Guangwen
In this paper, the continuous operator is discretized into matrix forms by Galerkin's procedure, using periodic Battle-Lemarie wavelets as basis/testing functions. The polynomial decomposition of wavelets is applied to the evaluation of matrix elements, which makes the computational effort of the matrix elements no more expensive than that of the method of moments (MoM) with conventional piecewise basis/testing functions. A new algorithm is developed employing the fast wavelet transform (FWT). Owing to the localization, cancellation, and orthogonality properties of wavelets, very sparse matrices have been obtained, which are then solved by the LSQR iterative method. This algorithm is also adaptive in that one can add at will finer wavelet bases in the regions where fields vary rapidly, without any damage to the orthogonality of the wavelet basis functions. To demonstrate the effectiveness of the new algorithm, we applied it to the evaluation of frequency-dependent resistance and inductance matrices of multiple lossy transmission lines. Numerical results agree with previously published data and laboratory measurements. The valid frequency range of the boundary integral equation results has been extended two to three decades in comparison with the traditional MoM approach. The new algorithm has been integrated into the computer-aided design tool MagiCAD, which is used for the design and simulation of high-speed digital systems and multichip modules.
On Partial Fraction Decompositions by Repeated Polynomial Divisions
ERIC Educational Resources Information Center
Man, Yiu-Kwong
2017-01-01
We present a method for finding partial fraction decompositions of rational functions with linear or quadratic factors in the denominators by means of repeated polynomial divisions. This method does not involve differentiation or solving linear equations for obtaining the unknown partial fraction coefficients, which is very suitable for either…
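For comparison, the target of such a decomposition can be computed symbolically with SymPy's built-in routine (which uses a different internal algorithm than the repeated-division method of the article); the rational function below is an invented example:

```python
import sympy as sp

x = sp.symbols('x')

# A rational function with a repeated linear factor and a simple one.
f = (3*x + 5) / ((x + 1)**2 * (x - 2))

# Partial fraction decomposition; the article's method reaches the same
# result by repeated polynomial divisions, without differentiation or
# solving linear systems for the coefficients.
pf = sp.apart(f, x)
```

Recombining the partial fractions recovers the original rational function, which is a quick correctness check for any decomposition method.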
Inverse solutions for electrical impedance tomography based on conjugate gradients methods
NASA Astrophysics Data System (ADS)
Wang, M.
2002-01-01
A multistep inverse solution for the two-dimensional electric field distribution is developed. It addresses the nonlinear dependence of the electric field distribution on its boundary condition, as well as the divergence caused by errors from the ill-conditioned sensitivity matrix and by noise from electrode modelling and instruments. This solution is based on a normalized linear approximation method in which the change in mutual impedance is derived from the sensitivity theorem and a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
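The iteration-limiting idea can be illustrated with a bare conjugate-gradient loop on a synthetic ill-conditioned symmetric positive-definite system; the matrix below is a stand-in for the sensitivity normal equations, not EIT data:

```python
import numpy as np

def cg(A, b, iters):
    """Plain conjugate gradients from a zero start; capping `iters` acts as
    regularization when A is ill-conditioned."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

rng = np.random.default_rng(3)
n = 50
# SPD surrogate with a wide spectrum (condition number ~1e6)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.logspace(0, -6, n)) @ Q.T
x_true = rng.standard_normal(n)
b = A @ x_true
x10 = cg(A, b, 10)          # early stopping: dominant components resolved first
```

Early iterations resolve the well-conditioned (large-eigenvalue) components; stopping before convergence leaves the noise-amplifying directions untouched, which is exactly the regularizing effect the paper exploits.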
NASA Astrophysics Data System (ADS)
Anderson, D. V.; Koniges, A. E.; Shumaker, D. E.
1988-11-01
Many physical problems require the solution of coupled partial differential equations on three-dimensional domains. When the time scales of interest dictate an implicit discretization of the equations, a rather complicated global matrix system needs solution. The exact form of the matrix depends on the choice of spatial grids and on the finite element or finite difference approximations employed. CPDES3 allows each spatial operator to have 7-, 15-, 19-, or 27-point stencils, allows for general couplings between all of the component PDEs, and automatically generates the matrix structures needed to perform the algorithm. The resulting sparse matrix equation is solved by either the preconditioned conjugate gradient (CG) method or the preconditioned biconjugate gradient (BCG) algorithm. An arbitrary number of component equations is permitted, limited only by available memory. In the sub-band representation used, we generate an algorithm that is written compactly in terms of indirect indices and is vectorizable on some of the newer scientific computers.
NASA Astrophysics Data System (ADS)
Anderson, D. V.; Koniges, A. E.; Shumaker, D. E.
1988-11-01
Many physical problems require the solution of coupled partial differential equations on two-dimensional domains. When the time scales of interest dictate an implicit discretization of the equations, a rather complicated global matrix system needs solution. The exact form of the matrix depends on the choice of spatial grids and on the finite element or finite difference approximations employed. CPDES2 allows each spatial operator to have 5- or 9-point stencils, allows for general couplings between all of the component PDEs, and automatically generates the matrix structures needed to perform the algorithm. The resulting sparse matrix equation is solved by either the preconditioned conjugate gradient (CG) method or the preconditioned biconjugate gradient (BCG) algorithm. An arbitrary number of component equations is permitted, limited only by available memory. In the sub-band representation used, we generate an algorithm that is written compactly in terms of indirect indices and is vectorizable on some of the newer scientific computers.
NASA Astrophysics Data System (ADS)
Kumar, Devendra; Singh, Jagdev; Baleanu, Dumitru
2018-02-01
The mathematical model of the breaking of non-linear dispersive water waves with memory effect is very important in mathematical physics. In the present article, we examine a novel fractional extension of the non-linear Fornberg-Whitham equation occurring in wave breaking. We adopt the most recent theory of differentiation, involving a non-singular kernel based on the extended Mittag-Leffler-type function, to modify the Fornberg-Whitham equation. We examine the existence of the solution of the non-linear Fornberg-Whitham equation of fractional order and further show its uniqueness. We obtain the numerical solution of the new arbitrary-order model with the aid of the Laplace decomposition technique. The numerical outcomes are displayed in the form of graphs and tables. The results indicate that the Laplace decomposition algorithm is a very user-friendly and reliable scheme for handling non-linear problems of fractional order.
Coupling lattice Boltzmann and continuum equations for flow and reactive transport in porous media.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coon, Ethan; Porter, Mark L.; Kang, Qinjun
2012-06-18
In spatially and temporally localized instances, capturing sub-reservoir-scale information is necessary; capturing it everywhere is neither necessary nor computationally possible. The lattice Boltzmann method (LBM) is used for solving pore-scale systems: at the pore scale, LBM provides an extremely scalable, efficient way of solving the Navier-Stokes equations on complex geometries. Pore-scale and continuum-scale systems are coupled via domain decomposition: by leveraging the interpolations implied by the pore-scale and continuum-scale discretizations, overlapping Schwarz domain decomposition is used to ensure continuity of pressure and flux. This approach is demonstrated on a fractured medium, in which the Navier-Stokes equations are solved within the fracture while Darcy's equation is solved away from the fracture. Coupling reactive transport to pore-scale flow simulators allows hybrid approaches to be extended to solve multi-scale reactive transport.
Numerical simulation of tonal fan noise of computers and air conditioning systems
NASA Astrophysics Data System (ADS)
Aksenov, A. A.; Gavrilyuk, V. N.; Timushev, S. F.
2016-07-01
Current approaches to fan noise simulation are mainly based on the Lighthill equation and the so-called aeroacoustic analogy, including methods based on the transformed Lighthill equation such as the well-known FW-H equation or the Kirchhoff theorem. A disadvantage of such methods, leading to significant modeling errors, is the incorrect solution of the decomposition problem, i.e., the separation of acoustic and vortex (pseudosound) modes in the area of the oscillation source. In this paper, we propose a method for tonal noise simulation based on the mesh solution of the Helmholtz equation for the Fourier transform of the pressure perturbation, with boundary conditions in the form of a complex impedance. A noise source is placed on the surface surrounding each fan rotor. The acoustic fan power is determined by the acoustic-vortex method, which ensures more accurate decomposition and determination of the pressure pulsation amplitudes in the near field of the fan.
Dynamics in the Decompositions Approach to Quantum Mechanics
NASA Astrophysics Data System (ADS)
Harding, John
2017-12-01
In Harding (Trans. Amer. Math. Soc. 348(5), 1839-1862, 1996) it was shown that the direct product decompositions of any non-empty set, group, vector space, and topological space X form an orthomodular poset Fact X. This is the basis for a line of study in foundational quantum mechanics replacing Hilbert spaces with other types of structures. Here we develop dynamics and an abstract version of a time independent Schrödinger's equation in the setting of decompositions by considering representations of the group of real numbers in the automorphism group of the orthomodular poset Fact X of decompositions.
A unique set of micromechanics equations for high temperature metal matrix composites
NASA Technical Reports Server (NTRS)
Hopkins, D. A.; Chamis, C. C.
1985-01-01
A unique set of micromechanics equations is presented for high temperature metal matrix composites. The set includes expressions to predict mechanical properties, thermal properties and constituent microstresses for the unidirectional fiber-reinforced ply. The equations are derived from a mechanics-of-materials formulation assuming a square-array unit cell model of a single fiber, surrounding matrix, and an interphase that accounts for the chemical reaction which commonly occurs between fiber and matrix. A three-dimensional finite element analysis was used to perform a preliminary validation of the equations. Excellent agreement between properties predicted using the micromechanics equations and properties simulated by the finite element analyses is demonstrated. Implementation of the micromechanics equations as part of an integrated computational capability for nonlinear structural analysis of high temperature multilayered fiber composites is illustrated.
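As a point of reference, the simplest textbook micromechanics estimates (rule of mixtures) can be sketched below. The paper's equation set is more elaborate, additionally modeling the interphase and the square-array unit cell, and the material values here are merely illustrative:

```python
# Textbook rule-of-mixtures estimates for a unidirectional ply; a deliberate
# simplification of the paper's equation set (no interphase, no unit-cell
# geometry). Moduli in GPa; values are illustrative only.

def longitudinal_modulus(Ef, Em, Vf):
    """Voigt (parallel) estimate: E11 = Vf*Ef + (1 - Vf)*Em."""
    return Vf * Ef + (1.0 - Vf) * Em

def transverse_modulus(Ef, Em, Vf):
    """Reuss (series) estimate: 1/E22 = Vf/Ef + (1 - Vf)/Em."""
    return 1.0 / (Vf / Ef + (1.0 - Vf) / Em)

# Nominal SiC fiber in a titanium-alloy matrix at 35% fiber volume fraction
E11 = longitudinal_modulus(Ef=400.0, Em=110.0, Vf=0.35)   # = 211.5 GPa
E22 = transverse_modulus(Ef=400.0, Em=110.0, Vf=0.35)
```

The Voigt and Reuss estimates bracket the true ply stiffness; the paper's unit-cell equations fall between these bounds while also resolving interphase effects.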
Solving Cubic Equations by Polynomial Decomposition
ERIC Educational Resources Information Center
Kulkarni, Raghavendra G.
2011-01-01
Several mathematicians struggled to solve cubic equations, and in 1515 Scipione del Ferro reportedly solved the cubic while participating in a local mathematical contest, but did not bother to publish his method. Then it was Cardano (1539) who first published the solution to the general cubic equation in his book "The Great Art, or, The Rules of…
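The classical solution that grew out of this history can be sketched for the one-real-root case, using the depressed-cubic substitution x = t - a/3 followed by Cardano's radical formula; the coefficients below are an invented example:

```python
import numpy as np

def cardano_real_root(a, b, c):
    """One real root of x^3 + a x^2 + b x + c = 0 when the discriminant of the
    depressed cubic is positive (single real root); Cardano's formula."""
    p = b - a * a / 3.0                        # depressed cubic t^3 + p t + q
    q = 2.0 * a**3 / 27.0 - a * b / 3.0 + c
    disc = (q / 2.0) ** 2 + (p / 3.0) ** 3
    if disc < 0:
        raise ValueError("three real roots; the trigonometric form is needed")
    s = np.sqrt(disc)
    t = np.cbrt(-q / 2.0 + s) + np.cbrt(-q / 2.0 - s)
    return t - a / 3.0                         # undo the substitution

root = cardano_real_root(a=-6.0, b=11.0, c=-7.0)   # x^3 - 6x^2 + 11x - 7 = 0
```

Substituting the returned root back into the cubic gives a residual at machine-precision level, which is the natural check for any such closed-form method.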
Covariance expressions for eigenvalue and eigenvector problems
NASA Astrophysics Data System (ADS)
Liounis, Andrew J.
There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue--eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
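The standard first-order sensitivity result for a simple eigenvalue, together with the kind of finite-difference validation the abstract describes, can be sketched in NumPy. This is the well-known textbook formula, assumed here rather than quoted from the thesis, and the 4x4 matrix is random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(4)

# For a simple eigenvalue lam with right eigenvector v and left eigenvector w
# (w^T A = lam w^T), the Jacobian is d(lam)/dA_ij = w_i v_j / (w^T v).
A = rng.standard_normal((4, 4))
vals, V = np.linalg.eig(A)
wals, W = np.linalg.eig(A.T)               # columns of W are left eigenvectors
k = int(np.argmax(vals.real))              # pick one (assumed simple) eigenvalue
lam, v = vals[k], V[:, k]
w = W[:, int(np.argmin(np.abs(wals - lam)))]
J = np.outer(w, v) / (w @ v)               # analytic Jacobian d(lam)/dA

# Forward finite-difference check on a single entry
eps = 1e-7
i, j = 1, 2
Ap = A.copy()
Ap[i, j] += eps
lam_p = np.linalg.eigvals(Ap)
dfd = (lam_p[int(np.argmin(np.abs(lam_p - lam)))] - lam) / eps
```

Note the plain (non-conjugated) transpose in w^T v, which is what makes the formula valid for complex eigenpairs as well.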
An overview of NSPCG: A nonsymmetric preconditioned conjugate gradient package
NASA Astrophysics Data System (ADS)
Oppe, Thomas C.; Joubert, Wayne D.; Kincaid, David R.
1989-05-01
The most recent research-oriented software package developed as part of the ITPACK Project is called "NSPCG" since it contains many nonsymmetric preconditioned conjugate gradient procedures. It is designed to solve large sparse systems of linear algebraic equations by a variety of different iterative methods. One of the main purposes for the development of the package is to provide a common modular structure for research on iterative methods for nonsymmetric matrices. Another purpose for the development of the package is to investigate the suitability of several iterative methods for vector computers. Since the vectorizability of an iterative method depends greatly on the matrix structure, NSPCG allows great flexibility in the operator representation. The coefficient matrix can be passed in one of several different matrix data storage schemes. These sparse data formats allow matrices with a wide range of structures from highly structured ones such as those with all nonzeros along a relatively small number of diagonals to completely unstructured sparse matrices. Alternatively, the package allows the user to call the accelerators directly with user-supplied routines for performing certain matrix operations. In this case, one can use the data format from an application program and not be required to copy the matrix into one of the package formats. This is particularly advantageous when memory space is limited. Some of the basic preconditioners that are available are point methods such as Jacobi, Incomplete LU Decomposition and Symmetric Successive Overrelaxation as well as block and multicolor preconditioners. The user can select from a large collection of accelerators such as Conjugate Gradient (CG), Chebyshev (SI, for semi-iterative), Generalized Minimal Residual (GMRES), Biconjugate Gradient Squared (BCGS) and many others. The package is modular so that almost any accelerator can be used with almost any preconditioner.
Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1995-01-01
The overarching theme was domain decomposition, which was intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrically complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives centered on the extensions and implementations of methodologies either previously developed or concurrently being developed: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.
Compressed Continuous Computation v. 12/20/2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorodetsky, Alex
2017-02-17
A library for performing numerical computation with low-rank functions. The Compressed Continuous Computation (C3) library enables continuous linear and multilinear algebra with multidimensional functions. Common tasks include taking "matrix" decompositions of vector- or matrix-valued functions, approximating multidimensional functions in low-rank format, adding or multiplying functions together, and integrating multidimensional functions.
Pulse reflectometry as an acoustical inverse problem: Regularization of the bore reconstruction
NASA Astrophysics Data System (ADS)
Forbes, Barbara J.; Sharp, David B.; Kemp, Jonathan A.
2002-11-01
The theoretical basis of acoustic pulse reflectometry, a noninvasive method for the reconstruction of an acoustical duct from the reflections measured in response to an input pulse, is reviewed in terms of the inversion of the central Fredholm equation. It is known that this is an ill-posed problem in the context of finite-bandwidth experimental signals. Recent work by the authors has proposed the truncated singular value decomposition (TSVD) in the regularization of the transient input impulse response, a non-measurable quantity from which the spatial bore reconstruction is derived. In the present paper we further emphasize the relevance of the singular system framework to reflectometry applications, examining for the first time the transient bases of the system. In particular, by varying the truncation point for increasing condition numbers of the system matrix, it is found that the effects of out-of-bandwidth singular functions on the bore reconstruction can be systematically studied.
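The truncated SVD regularization discussed above can be sketched with a generic ill-posed linear system (hypothetical demo data, not the reflectometry kernel of the paper):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Solve A x ~= b keeping only the k largest singular values,
    discarding the small, noise-amplifying singular functions."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]            # truncate: zero out small singular values
    return Vt.T @ (s_inv * (U.T @ b))

# Ill-conditioned demo matrix with rapidly decaying singular values
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10)) @ np.diag(10.0 ** -np.arange(10))
x_true = rng.standard_normal(10)
b = A @ x_true
x_k = tsvd_solve(A, b, k=5)            # regularized (truncated) solution
```

Varying `k` plays the role of varying the truncation point against the condition number of the system matrix, as studied in the paper.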
Compressed-sensing wavenumber-scanning interferometry
NASA Astrophysics Data System (ADS)
Bai, Yulei; Zhou, Yanzhou; He, Zhaoshui; Ye, Shuangli; Dong, Bo; Xie, Shengli
2018-01-01
The Fourier transform (FT), the nonlinear least-squares algorithm (NLSA), and eigenvalue decomposition algorithm (EDA) are used to evaluate the phase field in depth-resolved wavenumber-scanning interferometry (DRWSI). However, because the wavenumber series of the laser's output is usually accompanied by nonlinearity and mode-hop, FT, NLSA, and EDA, which are only suitable for equidistant interference data, often lead to non-negligible phase errors. In this work, a compressed-sensing method for DRWSI (CS-DRWSI) is proposed to resolve this problem. By using the randomly spaced inverse Fourier matrix and solving the underdetermined equation in the wavenumber domain, CS-DRWSI determines the nonuniform sampling and spectral leakage of the interference spectrum. Furthermore, it can evaluate interference data without prior knowledge of the object. The experimental results show that CS-DRWSI improves the depth resolution and suppresses sidelobes. It can replace the FT as a standard algorithm for DRWSI.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gropp, W.D.; Keyes, D.E.
1988-03-01
The authors discuss the parallel implementation of preconditioned conjugate gradient (PCG)-based domain decomposition techniques for self-adjoint elliptic partial differential equations in two dimensions on several architectures. The complexity of these methods is described on a variety of message-passing parallel computers as a function of the size of the problem, number of processors and relative communication speeds of the processors. They show that communication startups are very important, and that even the small amount of global communication in these methods can significantly reduce the performance of many message-passing architectures.
Effects of the oceans on polar motion: Extended investigations
NASA Technical Reports Server (NTRS)
Dickman, Steven R.
1986-01-01
A method was found for expressing the tide current velocities in terms of the tide height (with all variables expanded in spherical harmonics). All time equations were then combined into a single, nondifferential matrix equation involving only the unknown tide height. The pole tide was constrained so that no tidewater flows across continental boundaries. The constraint was derived for the case of turbulent oceans; with the tide velocities expressed in terms of the tide height. The two matrix equations were combined. Simple matrix inversion then yielded the constrained solution. Programs to construct and invert the matrix equations were written. Preliminary results were obtained and are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, J.J.; Hewitt, T.
1985-08-01
This note describes some experiments on simple, dense linear algebra algorithms. These experiments show that the CRAY X-MP is capable of small-grain multitasking arising from standard implementations of LU and Cholesky decomposition. The implementation described here provides the "fastest" execution rate for LU decomposition: 718 MFLOPS for a matrix of order 1000.
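At small scale, the LU factorization benchmarked in the note can be sketched in plain Python/NumPy (a didactic Doolittle implementation with partial pivoting, unrelated to the multitasked CRAY code):

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU factorization with partial pivoting: P A = L U."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))     # choose pivot row
        if p != k:                              # swap rows of U, P, and
            U[[k, p], :] = U[[p, k], :]         # the computed part of L
            P[[k, p], :] = P[[p, k], :]
            L[[k, p], :k] = L[[p, k], :k]
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]         # multiplier
            U[i, k:] -= L[i, k] * U[k, k:]      # eliminate below pivot
    return P, L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
P, L, U = lu_decompose(A)
```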
A systematic linear space approach to solving partially described inverse eigenvalue problems
NASA Astrophysics Data System (ADS)
Hu, Sau-Lon James; Li, Haujun
2008-06-01
Most applications of the inverse eigenvalue problem (IEP), which concerns the reconstruction of a matrix from prescribed spectral data, are associated with special classes of structured matrices. Solving the IEP requires one to satisfy both the spectral constraint and the structural constraint. If the spectral constraint consists of only one or few prescribed eigenpairs, this kind of inverse problem has been referred to as the partially described inverse eigenvalue problem (PDIEP). This paper develops an efficient, general and systematic approach to solve the PDIEP. Basically, the approach, applicable to various structured matrices, converts the PDIEP into an ordinary inverse problem that is formulated as a set of simultaneous linear equations. While solving simultaneous linear equations for model parameters, the singular value decomposition method is applied. Because of the conversion to an ordinary inverse problem, other constraints associated with the model parameters can be easily incorporated into the solution procedure. The detailed derivation and numerical examples to implement the newly developed approach to symmetric Toeplitz and quadratic pencil (including mass, damping and stiffness matrices of a linear dynamic system) PDIEPs are presented. Excellent numerical results for both kinds of problem are achieved under the situations that have either unique or infinitely many solutions.
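The SVD-based solve of the simultaneous linear equations can be sketched generically (hypothetical demo system, not the paper's Toeplitz or quadratic-pencil formulation):

```python
import numpy as np

def svd_lstsq(G, d, rcond=1e-12):
    """Minimum-norm least-squares solution of G m = d via the SVD,
    as used when fitting model parameters to prescribed spectral data."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    keep = s > rcond * s[0]                 # drop numerically zero modes
    return Vt[keep].T @ ((U[:, keep].T @ d) / s[keep])

# Overdetermined demo system
rng = np.random.default_rng(2)
G = rng.standard_normal((8, 3))
m_true = np.array([1.0, -2.0, 0.5])
d = G @ m_true
m = svd_lstsq(G, d)
```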
Preserving Lagrangian Structure in Nonlinear Model Reduction with Application to Structural Dynamics
Carlberg, Kevin; Tuminaro, Ray; Boggs, Paul
2015-03-11
Our work proposes a model-reduction methodology that preserves Lagrangian structure and achieves computational efficiency in the presence of high-order nonlinearities and arbitrary parameter dependence. As such, the resulting reduced-order model retains key properties such as energy conservation and symplectic time-evolution maps. We focus on parameterized simple mechanical systems subjected to Rayleigh damping and external forces, and consider an application to nonlinear structural dynamics. To preserve structure, the method first approximates the system's "Lagrangian ingredients" (the Riemannian metric, the potential-energy function, the dissipation function, and the external force) and subsequently derives reduced-order equations of motion by applying the (forced) Euler-Lagrange equation with these quantities. Moreover, from the algebraic perspective, key contributions include two efficient techniques for approximating parameterized reduced matrices while preserving symmetry and positive definiteness: matrix gappy proper orthogonal decomposition and reduced-basis sparsification. Our results for a parameterized truss-structure problem demonstrate the practical importance of preserving Lagrangian structure and illustrate the proposed method's merits: it reduces computation time while maintaining high accuracy and stability, in contrast to existing nonlinear model-reduction techniques that do not preserve structure.
Diffusion Forecasting Model with Basis Functions from QR-Decomposition
NASA Astrophysics Data System (ADS)
Harlim, John; Yang, Haizhao
2018-06-01
Diffusion forecasting is a nonparametric approach that provably solves the Fokker-Planck PDE corresponding to Itô diffusion without knowing the underlying equation. The key idea of this method is to approximate the solution of the Fokker-Planck equation with a discrete representation of the shift (Koopman) operator on a set of basis functions generated via the diffusion maps algorithm. While the choice of these basis functions is provably optimal under appropriate conditions, computing them is quite expensive since it requires the eigendecomposition of an N × N diffusion matrix, where N denotes the data size and could be very large. For large-scale forecasting problems, only a few leading eigenvectors are computationally achievable. To overcome this computational bottleneck, a new set of basis functions constructed by orthonormalizing selected columns of the diffusion matrix and its leading eigenvectors is proposed. This computation can be carried out efficiently via the unpivoted Householder QR factorization. The efficiency and effectiveness of the proposed algorithm are shown in both deterministically chaotic and stochastic dynamical systems; in the former case, the superiority of the proposed basis functions over eigenvectors alone is significant, while in the latter case forecasting accuracy is improved relative to using a small number of eigenvectors alone. Supporting arguments are provided on three- and six-dimensional chaotic ODEs, a three-dimensional SDE that mimics turbulent systems, and also on the two spatial modes associated with the boreal winter Madden-Julian Oscillation obtained by applying Nonlinear Laplacian Spectral Analysis to the measured Outgoing Longwave Radiation.
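A minimal sketch of the proposed basis construction, assuming a random symmetric stand-in for the diffusion matrix (not the authors' implementation or data), might read:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200
X = rng.standard_normal((N, N))
D = (X + X.T) / np.sqrt(N)            # stand-in symmetric "diffusion" matrix

# Leading eigenvectors (in practice only a few are affordable)
w, V = np.linalg.eigh(D)
lead = V[:, -5:]                      # 5 leading eigenvectors

# Augment with selected columns of D, then orthonormalize by QR
cols = D[:, ::40]                     # every 40th column (5 columns)
B = np.hstack([lead, cols])
Q, R = np.linalg.qr(B)                # Householder QR (unpivoted)
# The columns of Q form the enlarged orthonormal basis
```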
NASA Astrophysics Data System (ADS)
Kumar, Ravi; Bhaduri, Basanta; Nishchal, Naveen K.
2018-01-01
In this study, we propose a quick response (QR) code based nonlinear optical image encryption technique using the spiral phase transform (SPT), equal modulus decomposition (EMD) and singular value decomposition (SVD). First, the primary image is converted into a QR code and then multiplied with a spiral phase mask (SPM). Next, the product is spiral phase transformed with a particular spiral phase function, and further, EMD is performed on the output of the SPT, which results in two complex images, Z1 and Z2. Among these, Z1 is further Fresnel propagated with distance d, and Z2 is reserved as a decryption key. Afterwards, SVD is performed on the Fresnel propagated output to get three decomposed matrices, i.e., one diagonal matrix and two unitary matrices. The two unitary matrices are modulated with two different SPMs and then the inverse SVD is performed using the diagonal matrix and modulated unitary matrices to get the final encrypted image. Numerical simulation results confirm the validity and effectiveness of the proposed technique. The proposed technique is robust against noise attacks, specific attacks, and brute-force attacks. Simulation results are presented in support of the proposed idea.
On the computation and updating of the modified Cholesky decomposition of a covariance matrix
NASA Technical Reports Server (NTRS)
Vanrooy, D. L.
1976-01-01
Methods are described for obtaining and updating the modified Cholesky decomposition (MCD) of a covariance matrix when one is given only the original data. These methods are: the standard method of forming the covariance matrix K and then solving for the MCD factors L and D (where K = LDL^T); a method based on Householder reflections; and lastly, a method employing the composite-t algorithm. For many cases in the analysis of remotely sensed data, the composite-t method is the superior method despite being the slowest, since (1) the relative amount of time spent computing MCDs is often quite small, (2) its stability properties are the best of the three, and (3) it affords an efficient and numerically stable procedure for updating the MCD. The properties of these methods are discussed, and FORTRAN programs implementing these algorithms are listed.
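A minimal sketch of the "standard method" (form K, then factor K = LDL^T with unit-diagonal L), derived here from NumPy's ordinary Cholesky factor rather than any of the paper's FORTRAN routines:

```python
import numpy as np

def modified_cholesky(K):
    """Modified Cholesky decomposition K = L D L^T, with L unit lower
    triangular and D diagonal, obtained from the standard Cholesky factor."""
    C = np.linalg.cholesky(K)           # K = C C^T, C lower triangular
    d = np.diag(C)
    L = C / d                           # scale columns so diag(L) = 1
    D = np.diag(d ** 2)
    return L, D

# Covariance matrix of some demo data
rng = np.random.default_rng(5)
data = rng.standard_normal((100, 4))
K = np.cov(data, rowvar=False)
L, D = modified_cholesky(K)
```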
A study of the parallel algorithm for large-scale DC simulation of nonlinear systems
NASA Astrophysics Data System (ADS)
Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel
Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time-consuming process even if sparse matrix techniques and bypassing of nonlinear model calculations are used. A slight decrease in the time required for this task may be achieved on multi-core, multithreaded computers if the calculation of the mathematical models for the nonlinear elements, as well as the stamp management of the sparse matrix entries, is handled by concurrent processes. The numerical complexity can be further reduced via circuit decomposition and parallel solution of blocks, taking the BBD matrix structure as a departure point. This block-parallel approach may yield considerable gains, though it is strongly dependent on the system topology and, of course, on the processor type. This contribution presents an easily parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.
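The core Newton-Raphson iteration can be sketched on a single nonlinear node (a hypothetical series source-resistor-diode circuit, without the sparse-matrix and decomposition machinery):

```python
import numpy as np

# Node equation for a series Vs-R-diode circuit:
#   f(v) = (Vs - v)/R - Is*(exp(v/Vt) - 1) = 0
Vs, R, Is, Vt = 5.0, 1e3, 1e-12, 0.0259   # hypothetical element values

def f(v):
    """Kirchhoff current balance at the diode node."""
    return (Vs - v) / R - Is * (np.exp(v / Vt) - 1.0)

def df(v):
    """Derivative of f: the Jacobian 'stamp' of the nonlinear model."""
    return -1.0 / R - (Is / Vt) * np.exp(v / Vt)

v = 0.6                                   # initial guess near diode turn-on
for _ in range(50):
    step = f(v) / df(v)                   # Newton-Raphson update
    v -= step
    if abs(step) < 1e-12:
        break
```

In a full simulator, `f` and `df` are evaluated for every nonlinear element and stamped into a sparse system; those evaluations are the part the contribution distributes over concurrent processes.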
Controlled nucleation and growth of CdS nanoparticles in a polymer matrix.
Di Luccio, Tiziana; Laera, Anna Maria; Tapfer, Leander; Kempter, Susanne; Kraus, Robert; Nickel, Bert
2006-06-29
In situ synchrotron X-ray diffraction (XRD) was used to monitor the thermal decomposition (thermolysis) of Cd thiolate precursors embedded in a polymer matrix and the nucleation of CdS nanoparticles. A thiolate precursor/polymer solid foil was heated to 300 degrees C in the X-ray diffraction setup of beamline W1.1 at Hasylab, and diffraction curves were recorded at 10 degree C intervals. At temperatures above 240 degrees C, the precursor decomposition is complete, and CdS nanoparticles grow within the polymer matrix, forming a nanocomposite with interesting optical properties. The nanoparticle structural properties (size and crystal structure) depend on the annealing temperature. Transmission electron microscopy (TEM) and photoluminescence (PL) analyses were used to characterize the nanoparticles. A possible mechanism driving the structural transformation of the precursor is inferred from the diffraction features arising at the different temperatures.
Iterative methods for elliptic finite element equations on general meshes
NASA Technical Reports Server (NTRS)
Nicolaides, R. A.; Choudhury, Shenaz
1986-01-01
Iterative methods for arbitrary mesh discretizations of elliptic partial differential equations are surveyed. The methods discussed are preconditioned conjugate gradients, algebraic multigrid, deflated conjugate gradients, element-by-element techniques, and domain decomposition. Computational results are included.
Absolute Value Boundedness, Operator Decomposition, and Stochastic Media and Equations
NASA Technical Reports Server (NTRS)
Adomian, G.; Miao, C. C.
1973-01-01
The research accomplished during this period is reported. Published abstracts and technical reports are listed. Articles presented include: boundedness of absolute values of generalized Fourier coefficients, propagation in stochastic media, and stationary conditions for stochastic differential equations.
NASA Astrophysics Data System (ADS)
Elbeih, Ahmed; Abd-Elghany, Mohamed; Elshenawy, Tamer
2017-03-01
Vacuum stability test (VST) is mainly used to study the compatibility and stability of energetic materials. In this work, VST has been used to study the thermal decomposition kinetics of four cyclic nitramines, 1,3,5-trinitro-1,3,5-triazinane (RDX), 1,3,5,7-tetranitro-1,3,5,7-tetrazocane (HMX), cis-1,3,4,6-tetranitrooctahydroimidazo-[4,5-d]imidazole (BCHMX), and 2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane (ε-HNIW, CL-20), each bonded by a polyurethane matrix based on hydroxyl-terminated polybutadiene (HTPB). Model fitting and model free (isoconversional) methods have been applied to determine the decomposition kinetics from VST results. For comparison, the decomposition kinetics were determined isothermally by the ignition delay technique and non-isothermally by the Advanced Kinetics and Technology Solution (AKTS) software. The activation energies for thermolysis obtained by the isoconversional method based on the VST technique for RDX/HTPB, HMX/HTPB, BCHMX/HTPB and CL20/HTPB were 157.1, 203.1, 190.0 and 176.8 kJ mol^-1, respectively. The model fitting method proved that the mechanism of thermal decomposition of BCHMX/HTPB is controlled by the nucleation model, while all the other studied PBXs are controlled by diffusion models. A linear relationship between the ignition temperatures and the activation energies was observed. BCHMX/HTPB is an interesting new PBX still in the research stage.
Simple derivation of the Lindblad equation
NASA Astrophysics Data System (ADS)
Pearle, Philip
2012-07-01
The Lindblad equation is an evolution equation for the density matrix in quantum theory. It is the general linear, Markovian, form which ensures that the density matrix is Hermitian, trace 1, positive and completely positive. Some elementary examples of the Lindblad equation are given. The derivation of the Lindblad equation presented here is ‘simple’ in that all it uses is the expression of a Hermitian matrix in terms of its orthonormal eigenvectors and real eigenvalues. Thus, it is appropriate for students who have learned the algebra of quantum theory. Where helpful, arguments are first given in a two-dimensional Hilbert space.
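A minimal numerical illustration (a single qubit with H = σ_z and one decay operator, forward-Euler steps; all parameter choices here are hypothetical) shows the trace- and Hermiticity-preserving character of the Lindblad form:

```python
import numpy as np

# Pauli z and a lowering (decay) operator for a single qubit
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma-minus

H = sz                       # Hamiltonian
Ls = [0.5 * sm]              # one Lindblad (jump) operator
rho = np.array([[0.2, 0.3], [0.3, 0.8]], dtype=complex)  # initial density matrix

def lindblad_rhs(rho):
    """dρ/dt = -i[H, ρ] + Σ_L ( L ρ L† − ½{L†L, ρ} )"""
    out = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

dt = 1e-3
for _ in range(2000):        # evolve to t = 2 with forward Euler
    rho = rho + dt * lindblad_rhs(rho)
```

The right-hand side is traceless and maps Hermitian matrices to Hermitian matrices, so the evolved density matrix keeps trace 1 and Hermiticity, as the general form guarantees.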
Recursive inverse factorization.
Rubensson, Emanuel H; Bock, Nicolas; Holmström, Erik; Niklasson, Anders M N
2008-03-14
A recursive algorithm for the inverse factorization S^(-1) = ZZ^* of Hermitian positive definite matrices S is proposed. The inverse factorization is based on iterative refinement [A. M. N. Niklasson, Phys. Rev. B 70, 193102 (2004)] combined with a recursive decomposition of S. As the computational kernel is matrix-matrix multiplication, the algorithm can be parallelized, and the computational effort increases linearly with system size for systems with sufficiently sparse matrices. Recent advances in network theory are used to find appropriate recursive decompositions. We show that optimization of the so-called network modularity results in an improved partitioning compared to other approaches, in particular when the recursive inverse factorization is applied to overlap matrices of irregularly structured three-dimensional molecules.
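The iterative-refinement idea can be illustrated with a Newton-Schulz iteration converging to Z = S^(-1/2), an inverse factor satisfying Z S Z = I (a generic sketch under that substitution, not the paper's recursive algorithm):

```python
import numpy as np

def inverse_factor(S, iters=60):
    """Newton-Schulz iteration converging to Z = S^(-1/2),
    which satisfies Z S Z = I, i.e. S^(-1) = Z Z^T.
    The kernel is matrix-matrix multiplication only."""
    n = S.shape[0]
    I = np.eye(n)
    Z = I / np.linalg.norm(S, 2) ** 0.5   # scaling ensures convergence
    for _ in range(iters):
        Z = 0.5 * Z @ (3 * I - S @ Z @ Z)
    return Z

rng = np.random.default_rng(6)
A = rng.standard_normal((20, 20))
S = A @ A.T + 20 * np.eye(20)             # Hermitian positive definite
Z = inverse_factor(S)
```

Because each step uses only matrix products, such an iteration inherits the parallelism and (for sparse factors) linear-scaling potential the abstract describes.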
CP decomposition approach to blind separation for DS-CDMA system using a new performance index
NASA Astrophysics Data System (ADS)
Rouijel, Awatif; Minaoui, Khalid; Comon, Pierre; Aboutajdine, Driss
2014-12-01
In this paper, we present a canonical polyadic (CP) tensor decomposition isolating the scaling matrix. This has two major implications: (i) the problem conditioning shows up explicitly and can be controlled through a constraint on the so-called coherences, and (ii) a performance criterion concerning the factor matrices can be exactly calculated and is more realistic than performance metrics used in the literature. Two new algorithms optimizing the CP decomposition based on gradient descent are proposed. The decomposition is illustrated by an application to direct-sequence code division multiple access (DS-CDMA) systems; computer simulations are provided and demonstrate the good behavior of these algorithms compared to others in the literature.
Shock chemistry in SX358 foams
NASA Astrophysics Data System (ADS)
Maerzke, Katie; Coe, Joshua; Fredenburg, Anthony; Lang, John; Dattelbaum, Dana
2017-06-01
We have developed new equation of state models for SX358, a cross-linked PDMS polymer. Recent experiments on SX358 over a range of initial densities (0-65% porous) have yielded new data that allow for a more thorough calibration of the equations of state. SX358 chemically decomposes under shock compression, as evidenced by a cusp in the shock locus. We therefore treat this material using two equations of state: a SESAME model for the unreacted material, and a free energy minimization assuming full chemical and thermodynamic equilibrium for the decomposition products. The shock locus of porous SX358 is found to be "anomalous" in that the decomposition reaction causes a volume expansion rather than a volume collapse. Similar behavior has been observed in other polymer foams, notably polyurethane.
NASA Astrophysics Data System (ADS)
Lezina, Natalya; Agoshkov, Valery
2017-04-01
The domain decomposition method (DDM) allows one to represent a domain with complex geometry as a set of essentially simpler subdomains. This method is particularly useful in the hydrodynamics of oceans and seas. In each subdomain the system of thermo-hydrodynamic equations in the Boussinesq and hydrostatic approximations is solved. Obtaining the solution in the whole domain requires combining the solutions in the subdomains; for this purpose an iterative algorithm is created, and numerical experiments are conducted to investigate its effectiveness. For symmetric operators in DDM, Poincare-Steklov operators [1] are used, but these are not suitable for hydrodynamics problems. In this case the adjoint equation method [2] and inverse problem theory are used instead. In addition, DDM makes it possible to create algorithms for parallel calculations on multiprocessor computer systems. DDM for a model of the Baltic Sea dynamics is numerically studied. The results of numerical experiments using DDM are compared with the solution of the system of hydrodynamic equations in the whole domain. The work was supported by the Russian Science Foundation (project 14-11-00609, the formulation of the iterative process and numerical experiments). [1] V.I. Agoshkov, Domain Decomposition Methods in Mathematical Physics Problems // Numerical processes and systems, No 8, Moscow, 1991 (in Russian). [2] V.I. Agoshkov, Optimal Control Approaches and Adjoint Equations in Mathematical Physics Problems, Institute of Numerical Mathematics, RAS, Moscow, 2003 (in Russian).
The Existence of the Solution to One Kind of Algebraic Riccati Equation
NASA Astrophysics Data System (ADS)
Liu, Jianming
2018-03-01
The matrix equation A^T X + XA + XRX + Q = O is called the algebraic Riccati equation; it is very important in the fields of automatic control and other engineering applications. Many researchers have studied the solutions to various algebraic Riccati equations, and most of them mainly applied matrix methods, while few used functional analysis theories. This paper studies the existence of the solution to the following kind of algebraic Riccati equation from the functional viewpoint: A^T X + XA + XRX − λX + Q = O. Here X, A, R, Q ∈ R^(n×n), Q is a symmetric matrix, R is a positive or negative semi-definite matrix, and λ is an arbitrary constant. This paper uses functional approaches such as the fixed point theorem and contraction-mapping arguments to provide two sufficient conditions for the solvability of this kind of Riccati equation and to arrive at some relevant conclusions.
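The contraction-mapping viewpoint can be illustrated numerically: with A stable and R, Q small, the map that solves a Lyapunov equation with the quadratic term frozen is a contraction, and its fixed point solves the Riccati equation (a generic sketch with hypothetical matrices, not the paper's proof):

```python
import numpy as np

def lyap_solve(A, C):
    """Solve A^T X + X A = -C via the Kronecker-product linear system."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A.T) + np.kron(A.T, I)   # acts on vec(X), column-stacked
    x = np.linalg.solve(M, -C.flatten(order="F"))
    return x.reshape((n, n), order="F")

def riccati_fixed_point(A, R, Q, iters=200):
    """Fixed-point iteration for A^T X + X A + X R X + Q = O:
    each step solves the Lyapunov equation A^T X+ + X+ A = -(Q + X R X)."""
    X = np.zeros_like(Q)
    for _ in range(iters):
        X = lyap_solve(A, Q + X @ R @ X)
    return X

A = np.array([[-3.0, 1.0], [0.0, -2.0]])   # stable matrix
Q = np.array([[1.0, 0.2], [0.2, 1.0]])     # symmetric
R = -0.1 * np.eye(2)                        # negative semi-definite
X = riccati_fixed_point(A, R, Q)
```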
Shanableh, A
2005-01-01
The main objective of this study was to develop generalized first-order kinetic models to represent hydrothermal decomposition and oxidation of biosolids within a wide range of temperatures (200-450 degrees C). A lumping approach was used in which oxidation of the various organic ingredients was characterized by the chemical oxygen demand (COD), and decomposition was characterized by the particulate (i.e., nonfilterable) chemical oxygen demand (PCOD). Using the Arrhenius equation (k = k0 e^(-Ea/RT)), activation energy (Ea) levels were derived from 42 continuous-flow hydrothermal treatment experiments conducted at temperatures in the range of 200-450 degrees C. Using predetermined values for k0 in the Arrhenius equation, the activation energies of the various organic ingredients were separated into 42 values for oxidation and a similar number for decomposition. The activation energy values were then classified into levels representing the relative ease with which the organic ingredients of the biosolids were oxidized or decomposed. The resulting simple first-order kinetic models adequately represented, within the experimental data range, hydrothermal decomposition of the organic particles as measured by PCOD and oxidation of the organic content as measured by COD. The modeling approach presented in the paper provides a simple and general framework suitable for assessing the relative reaction rates of the various organic ingredients of biosolids.
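The Arrhenius step above can be sketched with synthetic rate constants (hypothetical numbers, not the paper's data): fitting ln k against 1/T recovers Ea from the slope.

```python
import numpy as np

R_GAS = 8.314                      # gas constant, J mol^-1 K^-1

# Synthetic first-order rate constants k at temperatures T (K), generated
# from k = k0 * exp(-Ea/(R*T)) with hypothetical Ea = 150 kJ/mol, k0 = 1e12 s^-1
Ea_true, k0 = 150e3, 1e12
T = np.array([473.0, 523.0, 573.0, 623.0, 673.0])
k = k0 * np.exp(-Ea_true / (R_GAS * T))

# Linearize: ln k = ln k0 - (Ea/R) * (1/T); the slope of the fit gives -Ea/R
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_est = -slope * R_GAS            # recovered activation energy (J/mol)
```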
Cai, Andong; Liang, Guopeng; Zhang, Xubo; Zhang, Wenju; Li, Ling; Rui, Yichao; Xu, Minggang; Luo, Yiqi
2018-05-01
Understanding the drivers of straw decomposition is essential for adopting appropriate management practices to improve soil fertility and promote carbon (C) sequestration in agricultural systems. However, predicting straw decomposition and its characteristics is difficult because of the interactions between many factors related to straw properties, soil properties, and climate, especially under future climate change conditions. This study investigated the driving factors of straw decomposition for six types of crop straw, including wheat, maize, rice, soybean, rape, and other straw, by synthesizing 1642 paired data points from 98 published papers at spatial and temporal scales across China. All data derived from field experiments using litter bags over twelve years. Overall, despite large differences in climatic and soil properties, the remaining straw carbon (C, %) could be accurately represented by a three-exponent equation with thermal time (accumulative temperature). The lignin/nitrogen and lignin/phosphorus ratios of straw can be used to define the sizes of the labile, intermediate, and recalcitrant C pools. The remaining C for an individual type of straw in the mild-temperature zone was higher than that in the warm-temperature and subtropical zones within one calendar year. The remaining straw C after one thermal year was 40.28%, 37.97%, 37.77%, 34.71%, 30.87%, and 27.99% for rice, soybean, rape, wheat, maize, and other straw, respectively. Soil available nitrogen and phosphorus influenced the remaining straw C at different decomposition stages. For one calendar year, the total amount of remaining straw C was estimated to be 29.41 Tg, and a future temperature increase of 2 °C could reduce the remaining straw C by 1.78 Tg.
These findings confirm that long-term straw decomposition is driven mainly by temperature and straw quality, and can be quantitatively predicted by thermal time with the three-exponent equation for a wide array of straw types at spatial and temporal scales in the agro-ecosystems of China.
NASA Astrophysics Data System (ADS)
Hu, Jie; Luo, Meng; Jiang, Feng; Xu, Rui-Xue; Yan, YiJing
2011-06-01
Padé spectrum decomposition is an optimal sum-over-poles expansion scheme for the Fermi and Bose functions [J. Hu, R. X. Xu, and Y. J. Yan, J. Chem. Phys. 133, 101106 (2010); doi:10.1063/1.3484491]. In this work, we report two additional members of this family, from which the best among all sum-over-poles methods can be chosen for different cases of application. Methods are developed for determining these three Padé spectrum decomposition expansions at machine precision via simple algorithms. We exemplify the applications of the present development with the optimal construction of hierarchical equations-of-motion formulations for nonperturbative quantum dissipation and quantum transport dynamics. Numerical demonstrations are given for two systems. One is the transient transport current through an interacting quantum-dot system, together with the involved high-order co-tunneling dynamics. Another is the non-Markovian dynamics of a spin-boson system.
NASA Astrophysics Data System (ADS)
Xie, Dexuan
2014-10-01
The Poisson-Boltzmann equation (PBE) is a widely used implicit-solvent continuum model for calculating the electrostatic potential energy of biomolecules in ionic solvent, but its numerical solution remains a challenge due to the strong singularity and nonlinearity caused by its singular distribution source terms and exponential nonlinear terms. To deal effectively with this challenge, new solution decomposition and minimization schemes are proposed in this paper, together with a new PBE analysis on solution existence and uniqueness. Moreover, a PBE finite element program package is developed in Python based on the FEniCS program library and GAMer, a molecular surface and volumetric mesh generation program package. Numerical tests on proteins and a nonlinear Born ball model with an analytical solution validate the new solution decomposition and minimization schemes, and demonstrate the effectiveness and efficiency of the new PBE finite element program package.
NASA Technical Reports Server (NTRS)
Morino, L.
1986-01-01
Using the decomposition for infinite space, the issue of the nonuniqueness of the Helmholtz decomposition for the problem of three-dimensional unsteady incompressible flow around a body is considered. A representation for the velocity that is valid for both the fluid region and the region inside the boundary surface is employed, and the motion of the boundary is described as the limiting case of a sequence of impulsive accelerations. At each instant of velocity discontinuity, vorticity is shown to be generated by the boundary condition on the normal component of the velocity, for both inviscid and viscous flows. In viscous flows, the vorticity is shown to diffuse into the surroundings, and the no-slip conditions are automatically satisfied. A trailing edge condition must be satisfied for the solution to the Euler equations to be the limit of the solution of the Navier-Stokes equations.
NASA Astrophysics Data System (ADS)
Teal, Paul D.; Eccles, Craig
2015-04-01
The two most successful methods of estimating the distribution of nuclear magnetic resonance relaxation times from two-dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior point method using preconditioned conjugate gradient. Both of these methods have previously been presented using a truncated singular value decomposition of matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adapting the truncation as the optimization process progresses, improving the accuracy as the optimal value is approached. A variation on the interior point method, viz. the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm (FISTA), is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition.
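The truncated-SVD compression step the abstract refers to can be sketched for a hypothetical 1D exponential relaxation kernel (the 2D case uses Kronecker products of such kernels); all grid values below are illustrative assumptions.

```python
import numpy as np

# Hypothetical 1D relaxation kernel K[i, j] = exp(-t_i / T2_j).
t = np.linspace(1e-3, 1.0, 200)        # acquisition times (s)
T2 = np.logspace(-3, 0, 100)           # candidate relaxation times (s)
K = np.exp(-t[:, None] / T2[None, :])

# Exponential kernels have rapidly decaying singular values, so a small
# numerical rank r captures K almost exactly and compresses the data.
U, s, Vt = np.linalg.svd(K, full_matrices=False)
r = int(np.sum(s > 1e-8 * s[0]))       # numerical rank at a 1e-8 cutoff
K_r = (U[:, :r] * s[:r]) @ Vt[:r, :]   # truncated reconstruction

rel_err = np.linalg.norm(K - K_r) / np.linalg.norm(K)
```

The same decay property is what makes the RRQR and Bunch-Kaufman-Parlett alternatives discussed in the paper viable: the kernel is numerically low rank however it is factorized.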
A New Ferroelectric Varactor from Water Based Inorganic Precursors
2003-04-03
See Equation 2. Equation 2: idealized reaction of titanium isopropoxide with 2-ethylhexanoic acid. Inconsistent results with the... Equation 3: reaction of 2-ethylhexanoic anhydride with titanium isopropoxide. We have made over one hundred batches of both BST and SBTN MOD... aliphatic acids used in the more common MOD precursors. Equation 4 shows a comparison of the decomposition products of titanium MOD precursors made from 2
TE/TM decomposition of electromagnetic sources
NASA Technical Reports Server (NTRS)
Lindell, Ismo V.
1988-01-01
Three methods are given by which bounded EM sources can be decomposed into two parts radiating transverse electric (TE) and transverse magnetic (TM) fields with respect to a given constant direction in space. The theory applies source equivalence and nonradiating source concepts, which lead to decomposition methods based on a recursive formula or two differential equations for the determination of the TE and TM components of the original source. Decompositions for a dipole in terms of point, line, and plane sources are studied in detail. The planar decomposition is seen to match an earlier result given by Clemmow (1963). As an application of the point decomposition method, it is demonstrated that the general exact image expression for the Sommerfeld half-space problem, previously derived through heuristic reasoning, can be obtained more straightforwardly through the present decomposition method.
Computational mechanics analysis tools for parallel-vector supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning
1993-01-01
Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.
NASA Astrophysics Data System (ADS)
Pan, Xiao-Min; Wei, Jian-Gong; Peng, Zhen; Sheng, Xin-Qing
2012-02-01
The interpolative decomposition (ID) is combined with the multilevel fast multipole algorithm (MLFMA), denoted by ID-MLFMA, to handle multiscale problems. The ID-MLFMA first generates ID levels by recursively dividing the boxes at the finest MLFMA level into smaller boxes. It is specifically shown that near-field interactions with respect to the MLFMA, in the form of the matrix vector multiplication (MVM), are efficiently approximated at the ID levels. Meanwhile, computations on far-field interactions at the MLFMA levels remain unchanged. Only a small portion of matrix entries are required to approximate coupling among well-separated boxes at the ID levels, and these submatrices can be filled without computing the complete original coupling matrix. It follows that the matrix filling in the ID-MLFMA becomes much less expensive. The memory consumed is thus greatly reduced and the MVM is accelerated as well. Several factors that may influence the accuracy, efficiency and reliability of the proposed ID-MLFMA are investigated by numerical experiments. Complex targets are calculated to demonstrate the capability of the ID-MLFMA algorithm.
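The skeleton idea behind the ID can be sketched with SciPy's interpolative-decomposition routines; the kernel and geometry below are illustrative stand-ins, not the MLFMA coupling operators of the paper.

```python
import numpy as np
import scipy.linalg.interpolative as sli

rng = np.random.default_rng(0)
# Coupling between two well-separated point groups: a smooth kernel
# evaluated across a gap is numerically low rank.
x = rng.uniform(0.0, 1.0, 300)              # "sources" in one box
y = rng.uniform(10.0, 11.0, 300)            # "observers" in a far box
A = 1.0 / np.abs(x[:, None] - y[None, :])

# ID picks k "skeleton" columns of A and expresses the remaining columns
# as linear combinations of them, so only a small portion of the matrix
# entries ever needs to be computed.
k, idx, proj = sli.interp_decomp(A, 1e-10)  # relative precision 1e-10
B = A[:, idx[:k]]                           # skeleton columns only
A_id = sli.reconstruct_matrix_from_id(B, idx, proj)

rel_err = np.linalg.norm(A - A_id) / np.linalg.norm(A)
```

In the paper's setting the skeletons are selected once per ID level and reused across right-hand sides, which is where the matrix-filling savings come from.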
Emergent causality and the N-photon scattering matrix in waveguide QED
NASA Astrophysics Data System (ADS)
Sánchez-Burillo, E.; Cadarso, A.; Martín-Moreno, L.; García-Ripoll, J. J.; Zueco, D.
2018-01-01
In this work we discuss the emergence of approximate causality in a general setup from waveguide QED, i.e. a one-dimensional propagating field interacting with a scatterer. We prove that this emergent causality translates into a structure for the N-photon scattering matrix. Our work builds on the derivation of a Lieb-Robinson-type bound for continuous models and for all coupling strengths, as well as on several intermediate results, of which we highlight: (i) the asymptotic independence of space-like separated wave packets, (ii) the proper definition of input and output scattering states, and (iii) the characterization of the ground state and correlations in the model. We illustrate our formal results by analyzing the two-photon scattering from a quantum impurity in the ultrastrong coupling regime, verifying the cluster decomposition and ground-state nature. Moreover, we generalize the cluster decomposition to the case where inelastic or Raman scattering occurs, finding the structure of the S-matrix in momentum space for linear dispersion relations. In this case, we compute the decay of the fluorescence (photon-photon correlations) caused by this S-matrix.
NASA Astrophysics Data System (ADS)
Fang, Dong-Liang; Faessler, Amand; Šimkovic, Fedor
2018-04-01
In this paper, with restored isospin symmetry, we evaluated the neutrinoless double-β-decay nuclear matrix elements for 76Ge, 82Se, 130Te, 136Xe, and 150Nd for both the light and heavy neutrino mass mechanisms, using the deformed quasiparticle random-phase approximation approach with realistic forces. We give detailed decompositions of the nuclear matrix elements over different intermediate states and nucleon pairs, and discuss how these decompositions are affected by the model-space truncations. Compared to the spherical calculations, our results show reductions from 30% to about 60% of the nuclear matrix elements for the calculated isotopes, mainly due to the presence of the BCS overlap factor between the initial and final ground states. The comparison between different nucleon-nucleon (NN) forces with corresponding short-range correlations shows that the choice of the NN force gives roughly 20% deviations for the light neutrino exchange mechanism and much larger deviations for the heavy neutrino exchange mechanism.
Corrigendum: New Form of Kane's Equations of Motion for Constrained Systems
NASA Technical Reports Server (NTRS)
Roithmayr, Carlos M.; Bajodah, Abdulrahman H.; Hodges, Dewey H.; Chen, Ye-Hwa
2007-01-01
A correction to the previously published article "New Form of Kane's Equations of Motion for Constrained Systems" is presented. Misuse of the transformation matrix between time rates of change of the generalized coordinates and generalized speeds (sometimes called motion variables) resulted in a false conclusion concerning the symmetry of the generalized inertia matrix. The generalized inertia matrix (sometimes referred to as the mass matrix) is in fact symmetric and usually positive definite when one forms nonminimal Kane's equations for holonomic or simple nonholonomic systems, systems subject to nonlinear nonholonomic constraints, and holonomic or simple nonholonomic systems subject to impulsive constraints according to Refs. 1, 2, and 3, respectively. The mass matrix is of course symmetric when one forms minimal equations for holonomic or simple nonholonomic systems using Kane's method as set forth in Ref. 4.
Computationally efficient modeling and simulation of large scale systems
NASA Technical Reports Server (NTRS)
Jain, Jitesh (Inventor); Cauley, Stephen F. (Inventor); Li, Hong (Inventor); Koh, Cheng-Kok (Inventor); Balakrishnan, Venkataramanan (Inventor)
2010-01-01
A method of simulating operation of a VLSI interconnect structure having capacitive and inductive coupling between nodes thereof. A matrix X and a matrix Y containing different combinations of passive circuit element values for the interconnect structure are obtained, where the element values for each matrix include inductance L and inverse capacitance P. An adjacency matrix A associated with the interconnect structure is obtained. Numerical integration is used to solve first and second equations, each including as a factor the product of the inverse matrix X^-1 and at least one other matrix, with the first equation including X^-1·Y, X^-1·A, and X^-1·P, and the second equation including X^-1·A and X^-1·P.
A matrix equation solution by an optimization technique
NASA Technical Reports Server (NTRS)
Johnson, M. J.; Mittra, R.
1972-01-01
The computer solution of matrix equations is often difficult to accomplish due to an ill-conditioned matrix or high noise levels. Two methods of solution are compared for matrices of various degrees of ill-conditioning and for various noise levels in the right hand side vector. One method employs the usual Gaussian elimination. The other solves the equation by an optimization technique and employs a function minimization subroutine.
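A minimal sketch of the comparison described above, on a well-conditioned toy system (the paper's interest is in how the two approaches degrade with ill-conditioning and noise); NumPy's LU-based solve stands in for Gaussian elimination and SciPy's BFGS for the function-minimization subroutine, both assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 8
# Diagonally dominant toy matrix, so both methods should succeed here.
A = n * np.eye(n) + rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true

# Method 1: the usual Gaussian elimination (LU factorization).
x_ge = np.linalg.solve(A, b)

# Method 2: recast as minimizing f(x) = ||Ax - b||^2 and hand it to a
# general function-minimization routine.
f = lambda x: np.sum((A @ x - b) ** 2)
grad = lambda x: 2.0 * A.T @ (A @ x - b)
res = minimize(f, np.zeros(n), jac=grad, method="BFGS",
               options={"gtol": 1e-12, "maxiter": 1000})
x_opt = res.x
```

The minimization route is slower here, but it degrades more gracefully when the matrix is ill-conditioned or the right-hand side is noisy, which is the regime the paper studies.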
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sechin, Ivan; Zotov, Andrei (E-mail: shnbuz@gmail.com, zotov@mi.ras.ru; ITEP, B. Cheremushkinskaya Str. 25, Moscow 117218)
In this paper we propose versions of the associative Yang-Baxter equation and higher order R-matrix identities which can be applied to quantum dynamical R-matrices. As is known, quantum non-dynamical R-matrices of Baxter-Belavin type satisfy this equation. Together with the unitarity condition and skew-symmetry, it provides the quantum Yang-Baxter equation and a set of identities useful for different applications in integrable systems. The dynamical R-matrices satisfy the Gervais-Neveu-Felder (or dynamical Yang-Baxter) equation. The relation between the dynamical and non-dynamical cases is described by the IRF (interaction-round-a-face)-Vertex transformation. An alternative approach to quantum (semi-)dynamical R-matrices and related quantum algebras was suggested by Arutyunov, Chekhov, and Frolov (ACF) in their study of the quantum Ruijsenaars-Schneider model. The purpose of this paper is twofold. First, we prove that the ACF elliptic R-matrix satisfies the associative Yang-Baxter equation with shifted spectral parameters. Second, we directly prove a simple relation of the IRF-Vertex type between the Baxter-Belavin and the ACF elliptic R-matrices predicted previously by Avan and Rollet. It provides the higher order R-matrix identities and an explanation of the obtained equations through those for non-dynamical R-matrices. As a by-product we also get an interpretation of the intertwining transformation as a matrix extension of the scalar theta function, likewise the R-matrix is interpreted as a matrix extension of the Kronecker function. Relations to the Gervais-Neveu-Felder equation and identities for Felder's elliptic R-matrix are also discussed.
Optical character recognition with feature extraction and associative memory matrix
NASA Astrophysics Data System (ADS)
Sasaki, Osami; Shibahara, Akihito; Suzuki, Takamasa
1998-06-01
A method is proposed in which handwritten characters are recognized using feature extraction and an associative memory matrix. In feature extraction, simple processes such as shifting and superimposing patterns are executed. A memory matrix is generated with singular value decomposition and by modifying small singular values. The method is optically implemented with two liquid crystal displays. Experimental results for the recognition of 25 handwritten alphabet characters clearly shows the effectiveness of the method.
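A hedged sketch of the recall scheme: the memory matrix is the target codes times an SVD pseudo-inverse of the stored patterns, with small singular values modified (here simply dropped). Dimensions, the threshold, and the one-hot class coding are illustrative assumptions, not the paper's optical implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
d, c, P = 64, 25, 25                   # feature dim, classes, stored patterns
X = rng.standard_normal((d, P))        # stand-in feature vectors (columns)
Y = np.eye(c)                          # one-hot code per stored character

# Memory matrix M = Y X^+ via SVD; singular values below a threshold are
# modified (dropped) so noise along weak directions is not amplified.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
s_inv = np.where(s > 1e-6 * s[0], 1.0 / s, 0.0)
M = Y @ (Vt.T * s_inv) @ U.T

# Recall: a slightly corrupted stored pattern still maps to its code.
probe = X[:, 3] + 0.01 * rng.standard_normal(d)
recalled = int(np.argmax(M @ probe))
```

Because M X = Y by construction, each stored pattern maps exactly to its class code; the singular-value modification controls how much a corrupted probe perturbs that mapping.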
NASA Astrophysics Data System (ADS)
Ballard, S.; Hipp, J. R.; Encarnacao, A.; Young, C. J.; Begnaud, M. L.; Phillips, W. S.
2012-12-01
Seismic event locations can be made more accurate and precise by computing predictions of seismic travel time through high-fidelity 3D models of the wave speed in the Earth's interior. Given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix, and show examples of path-dependent travel-time prediction uncertainty computed from SALSA3D, our global, seamless 3D tomographic P-velocity model. Typical global 3D models have on the order of half a million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB of storage for half of a symmetric matrix, necessitating an out-of-core (OOC) blocked matrix solution technique. With our approach, the tomography matrix G, which includes Tikhonov regularization terms, is multiplied by its transpose to form G^T G, written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (G^T G)^-1 by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of G^T G, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (G^T G)^-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel-time prediction uncertainty for a single path.
Tensor Decompositions for Learning Latent Variable Models
2012-12-08
and eigenvectors of tensors is generally significantly more complicated than their matrix counterpart (both algebraically [Qi05, CS11, Lim05] and... The reduction: first, let W ∈ R^(d×k) be a linear transformation such that M2(W, W) = W^T M2 W = I, where I is the k × k identity matrix (i.e., W whitens... approximate the whitening matrix W ∈ R^(d×k) from the second-moment matrix M2 ∈ R^(d×d). To do this, one first multiplies M2 by a random matrix R ∈ R^(d×k′) for some k′ ≥ k
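The whitening step quoted in this snippet can be sketched directly; the dimensions and the moment construction below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
# Rank-k second-moment matrix M2 = sum_i w_i mu_i mu_i^T, as in the
# latent-variable setting (dimensions illustrative).
d, k = 30, 4
mu = rng.standard_normal((d, k))
w = np.full(k, 1.0 / k)
M2 = (mu * w) @ mu.T

# Whitening: W in R^{d x k} with W^T M2 W = I_k, built from the thin
# eigendecomposition of the PSD matrix M2.
vals, vecs = np.linalg.eigh(M2)
vals, vecs = vals[::-1][:k], vecs[:, ::-1][:, :k]   # top-k pairs
W = vecs / np.sqrt(vals)                            # U_k Lambda_k^{-1/2}

I_k = W.T @ M2 @ W      # should equal the k x k identity
```

In the randomized variant the snippet goes on to describe, `vecs` would be obtained from M2 applied to a random d×k′ matrix rather than from a full eigendecomposition.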
Yamashita, Satoshi; Masuya, Hayato; Abe, Shin; Masaki, Takashi; Okabe, Kimiko
2015-01-01
We examined the relationship between the community structure of wood-decaying fungi, detected by high-throughput sequencing, and the decomposition rate using 13 years of data from a forest dynamics plot. For molecular analysis and wood density measurements, drill dust samples were collected from logs and stumps of Fagus and Quercus in the plot. Regression using a negative exponential model between wood density and time since death revealed that the decomposition rate of Fagus was greater than that of Quercus. The residual between the expected value obtained from the regression curve and the observed wood density was used as a decomposition rate index. Principal component analysis showed that the fungal community compositions of both Fagus and Quercus changed with time since death. Principal component analysis axis scores were used as an index of fungal community composition. A structural equation model for each wood genus was used to assess the effect of fungal community structure traits on the decomposition rate and how the fungal community structure was determined by the traits of coarse woody debris. Results of the structural equation model suggested that the decomposition rate of Fagus was affected by two fungal community composition components: one that was affected by time since death and another that was not affected by the traits of coarse woody debris. In contrast, the decomposition rate of Quercus was not affected by coarse woody debris traits or fungal community structure. These findings suggest that, in the case of Fagus coarse woody debris, the fungal community structure is related to the decomposition process of its host substrate. Because fungal community structure is affected partly by the decay stage and wood density of its substrate, these factors influence each other. Further research on interactive effects is needed to improve our understanding of the relationship between fungal community structure and the woody debris decomposition process. 
PMID:26110605
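The density regression in the abstract above can be sketched as follows; all numbers (initial density, decay constant, noise level, sample size) are hypothetical stand-ins for the Fagus/Quercus data.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical wood-density data following the negative exponential model
# rho(t) = rho0 * exp(-k t), with multiplicative scatter standing in for
# between-log variability.
rho0, k_true = 0.60, 0.08                 # g/cm^3 and 1/yr (illustrative)
t = rng.uniform(0.0, 13.0, 40)            # years since death
rho = rho0 * np.exp(-k_true * t) * np.exp(0.05 * rng.standard_normal(40))

# Fit the negative exponential model by log-linear least squares.
slope, intercept = np.polyfit(t, np.log(rho), 1)
k_hat = -slope                            # estimated decomposition rate

# Decomposition-rate index: residual between observed density and the
# regression curve (negative residual = faster-than-average decay).
rho_pred = np.exp(intercept) * np.exp(-k_hat * t)
index = rho - rho_pred
```

The residual-based index is what the structural equation model then relates to the fungal community composition axes.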
Efficient implementation of a 3-dimensional ADI method on the iPSC/860
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van der Wijngaart, R.F.
1993-12-31
A comparison is made between several domain decomposition strategies for the solution of three-dimensional partial differential equations on a MIMD distributed memory parallel computer. The grids used are structured, and the numerical algorithm is ADI. Important implementation issues regarding load balancing, storage requirements, network latency, and overlap of computations and communications are discussed. Results of the solution of the three-dimensional heat equation on the Intel iPSC/860 are presented for the three most viable methods. It is found that the Bruno-Cappello decomposition delivers optimal computational speed through an almost complete elimination of processor idle time, while providing good memory efficiency.
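For context, the tridiagonal structure that makes ADI worth parallelizing can be sketched in 2D (the paper's case is 3D on a distributed-memory machine; the grid size and time step below are illustrative):

```python
import numpy as np
from scipy.linalg import solve_banded

# One Peaceman-Rachford ADI step for the 2D heat equation u_t = u_xx + u_yy
# on the unit square with zero Dirichlet boundaries (interior nodes only).
# Each half-step solves a set of independent tridiagonal systems, one per
# grid line, which is what the domain decomposition distributes.
n, dt = 50, 1e-4
h = 1.0 / (n + 1)
r = dt / (2.0 * h * h)

# Banded storage of (I + r*T), with T = tridiag(-1, 2, -1).
ab = np.zeros((3, n))
ab[0, 1:] = -r
ab[1, :] = 1.0 + 2.0 * r
ab[2, :-1] = -r

def explicit(u):
    """Apply (I - r*T) along axis 0 (zero boundaries)."""
    v = (1.0 - 2.0 * r) * u
    v[1:] += r * u[:-1]
    v[:-1] += r * u[1:]
    return v

x = np.linspace(h, 1.0 - h, n)
u = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))  # smooth initial data
m0 = u.max()

# Half-step 1: implicit in x, explicit in y; half-step 2: the reverse.
u = solve_banded((1, 1), ab, explicit(u.T).T)
u = solve_banded((1, 1), ab, explicit(u).T).T
```

The independence of the line solves within each sweep is exactly what the Bruno-Cappello decomposition exploits to keep processors busy.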
A Fast Solver for Implicit Integration of the Vlasov--Poisson System in the Eulerian Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrett, C. Kristopher; Hauck, Cory D.
In this paper, we present a domain decomposition algorithm to accelerate the solution of Eulerian-type discretizations of the linear, steady-state Vlasov equation. The steady-state solver then forms a key component in the implementation of fully implicit or nearly fully implicit temporal integrators for the nonlinear Vlasov--Poisson system. The solver relies on a particular decomposition of phase space that enables the use of sweeping techniques commonly used in radiation transport applications. The original linear system for the phase space unknowns is then replaced by a smaller linear system involving only unknowns on the boundary between subdomains, which can then be solved efficiently with Krylov methods such as GMRES. Steady-state solves are combined to form an implicit Runge--Kutta time integrator, and the Vlasov equation is coupled self-consistently to the Poisson equation via a linearized procedure or a nonlinear fixed-point method for the electric field. Finally, numerical results for standard test problems demonstrate the efficiency of the domain decomposition approach when compared to the direct application of an iterative solver to the original linear system.
Quantum spectral curve for (q, t)-matrix model
NASA Astrophysics Data System (ADS)
Zenkevich, Yegor
2018-02-01
We derive the quantum spectral curve equation for the (q, t)-matrix model, which turns out to be a certain difference equation. We show that in the Nekrasov-Shatashvili limit this equation reproduces the Baxter TQ equation for the quantum XXZ spin chain. This chain is spectral dual to the Seiberg-Witten integrable system associated with the AGT-dual gauge theory.
Preconditioned conjugate residual methods for the solution of spectral equations
NASA Technical Reports Server (NTRS)
Wong, Y. S.; Zang, T. A.; Hussaini, M. Y.
1986-01-01
Conjugate residual methods for the solution of spectral equations are described. An inexact finite-difference operator is introduced as a preconditioner in the iterative procedures. Application of these techniques is limited to problems for which the symmetric part of the coefficient matrix is positive definite. Although the spectral equation is a very ill-conditioned and full matrix problem, the computational effort of the present iterative methods for solving such a system is comparable to that for the sparse matrix equations obtained from the application of either finite-difference or finite-element methods to the same problems. Numerical experiments are shown for a self-adjoint elliptic partial differential equation with Dirichlet boundary conditions, and comparison with other solution procedures for spectral equations is presented.
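A hedged sketch of the preconditioning idea: SciPy's conjugate gradient stands in for the conjugate residual iteration, and a banded operator playing the role of the inexact finite-difference preconditioner is applied at each step; the test matrix is an illustrative stand-in for a full spectral system, not one from the paper.

```python
import numpy as np
from scipy.linalg import toeplitz, solve_banded
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(5)
n = 400

# SPD test matrix: a 1D finite-difference Laplacian K plus a dense
# low-rank term, a stand-in for an ill-conditioned full system whose
# symmetric part is positive definite.
K = toeplitz(np.r_[2.0, -1.0, np.zeros(n - 2)])
v = rng.standard_normal(n)
A = K + np.outer(v, v) / n
b = rng.standard_normal(n)

# Preconditioner: solve with the banded finite-difference part K only.
ab = np.zeros((3, n))
ab[0, 1:] = -1.0
ab[1, :] = 2.0
ab[2, :-1] = -1.0
M = LinearOperator((n, n), matvec=lambda r: solve_banded((1, 1), ab, r))

counts = {"plain": 0, "pre": 0}
def make_cb(key):
    def cb(xk):
        counts[key] += 1
    return cb

x_plain, info_plain = cg(A, b, maxiter=5000, callback=make_cb("plain"))
x_pre, info_pre = cg(A, b, maxiter=5000, M=M, callback=make_cb("pre"))
```

Because the preconditioned operator is a low-rank perturbation of the identity, the preconditioned iteration converges in a handful of steps, mirroring the paper's observation that an inexact finite-difference operator tames the full spectral system.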
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Stouffer, Donald C.
1998-01-01
Recently, applications have exposed polymer matrix composite materials to very high strain rate loading conditions, requiring an ability to understand and predict the material behavior under these extreme conditions. In this first paper of a two-part report, background information is presented, along with the constitutive equations which will be used to model the rate-dependent nonlinear deformation response of the polymer matrix. Strain rate dependent inelastic constitutive models which were originally developed to model the viscoplastic deformation of metals have been adapted to model the nonlinear viscoelastic deformation of polymers. The modified equations were correlated by analyzing the tensile/compressive response of both 977-2 toughened epoxy matrix and PEEK thermoplastic matrix over a variety of strain rates. For the cases examined, the modified constitutive equations appear to do an adequate job of modeling the polymer deformation response. A second follow-up paper will describe the implementation of the polymer deformation model into a composite micromechanical model, to allow for the modeling of the nonlinear, rate-dependent deformation response of polymer matrix composites.
Final Report, DE-FG01-06ER25718 Domain Decomposition and Parallel Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widlund, Olof B.
2015-06-09
The goal of this project is to develop and improve domain decomposition algorithms for a variety of partial differential equations such as those of linear elasticity and electromagnetics. These iterative methods are designed for massively parallel computing systems and allow the fast solution of the very large systems of algebraic equations that arise in large scale and complicated simulations. A special emphasis is placed on problems arising from Maxwell's equations. The approximate solvers, the preconditioners, are combined with the conjugate gradient method and must always include a solver of a coarse model in order to have a performance which is independent of the number of processors used in the computer simulation. A recent development allows for an adaptive construction of this coarse component of the preconditioner.
Hamiltonian formulation of the KdV equation
NASA Astrophysics Data System (ADS)
Nutku, Y.
1984-06-01
We consider the canonical formulation of Whitham's variational principle for the KdV equation. This Lagrangian is degenerate, and we have found it necessary to use Dirac's theory of constrained systems in constructing the Hamiltonian. Earlier discussions of the Hamiltonian structure of the KdV equation were based on various decompositions of the field, which are avoided by this new approach.
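For reference, the first Hamiltonian structure commonly quoted for the KdV equation, which a constrained (Dirac) analysis of the degenerate Lagrangian recovers, is (sign conventions vary across the literature, and this is not necessarily Nutku's normalization):

```latex
u_t = 6\,u\,u_x + u_{xxx}
    = \partial_x \frac{\delta H}{\delta u},
\qquad
H[u] = \int \Bigl( u^3 - \tfrac{1}{2}\,u_x^2 \Bigr)\,dx,
\qquad
\frac{\delta H}{\delta u} = 3u^2 + u_{xx}.
```

Here the skew-symmetric operator \(\partial_x\) plays the role of the Poisson structure, so no decomposition of the field \(u\) is required.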
A parallel algorithm for nonlinear convection-diffusion equations
NASA Technical Reports Server (NTRS)
Scroggs, Jeffrey S.
1990-01-01
A parallel algorithm for the efficient solution of nonlinear time-dependent convection-diffusion equations with a small parameter on the diffusion term is presented. The method is based on a physically motivated domain decomposition that is dictated by singular perturbation analysis. The analysis is used to determine regions where certain reduced equations may be solved in place of the full equation. The method is suitable for the solution of problems arising in the simulation of fluid dynamics. Experimental results for a nonlinear equation in two dimensions are presented.
NASA Astrophysics Data System (ADS)
Maier-Paape, Stanislaus; Wanner, Thomas
This paper is the first in a series of two papers addressing the phenomenon of spinodal decomposition for the Cahn-Hilliard equation
Spinodal Decomposition for theCahn-Hilliard Equation in Higher Dimensions:Nonlinear Dynamics
NASA Astrophysics Data System (ADS)
Maier-Paape, Stanislaus; Wanner, Thomas
This paper addresses the phenomenon of spinodal decomposition for the Cahn-Hilliard equation
Computational mechanics analysis tools for parallel-vector supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.
1993-01-01
Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithm and domain decomposition. The source code for many of these algorithms is available from NASA Langley.
Thermal decomposition of pyrazole to vinylcarbene + N2: A first principles/RRKM study
NASA Astrophysics Data System (ADS)
da Silva, Gabriel
2009-05-01
Thermal decomposition of pyrazole, a five-membered nitrogen-containing heterocycle, has been studied using ab initio G3X theory and RRKM rate theory. The decomposition mechanism involves an intramolecular hydrogen shift to 3H-pyrazole, followed by ring opening to 3-diazo-1-propene and dissociation to vinylcarbene (CH2CHCH) + N2. At 1 atm the calculated rate equation k [s^-1] = 1.26 × 10^50 T^-10.699 exp(-41200/T) is obtained, which agrees with the results of flash vacuum pyrolysis experiments. The pyrazole decomposition product vinylcarbene is expected to rearrange to propyne, making pyrazole decomposition essentially thermoneutral. It is hypothesized that at high concentrations vinylcarbene could undergo a self-reaction to 1,3- and 1,4-cyclohexadiene.
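The fitted rate expression can be evaluated directly; the sketch below assumes the modified-Arrhenius form from the abstract with T in kelvin.

```python
import numpy as np

def k_pyrazole(T):
    """Modified-Arrhenius fit from the abstract (1 atm):
    k(T) [1/s] = 1.26e50 * T**(-10.699) * exp(-41200 / T), T in kelvin."""
    return 1.26e50 * T ** -10.699 * np.exp(-41200.0 / T)

# Decomposition is negligible near ambient temperature and becomes fast
# in the flash-vacuum-pyrolysis regime.
k_1000 = k_pyrazole(1000.0)   # of order unity per second
k_1500 = k_pyrazole(1500.0)   # orders of magnitude faster
```

The large negative temperature exponent reflects the strong falloff behavior captured by the RRKM fit.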
NASA Astrophysics Data System (ADS)
Klein, Ole; Cirpka, Olaf A.; Bastian, Peter; Ippisch, Olaf
2017-04-01
In the geostatistical inverse problem of subsurface hydrology, continuous hydraulic parameter fields, in most cases hydraulic conductivity, are estimated from measurements of dependent variables, such as hydraulic heads, under the assumption that the parameter fields are autocorrelated random space functions. Upon discretization, the continuous fields become large parameter vectors with O(10^4-10^7) elements. While cokriging-like inversion methods have been shown to be efficient for highly resolved parameter fields when the number of measurements is small, they require the calculation of the sensitivity of each measurement with respect to all parameters, which may become prohibitive with large sets of measured data such as those arising from transient groundwater flow. We present a Preconditioned Conjugate Gradient method for the geostatistical inverse problem, in which a single adjoint equation needs to be solved to obtain the gradient of the objective function. Using the autocovariance matrix of the parameters as preconditioning matrix, expensive multiplications with its inverse can be avoided, and the number of iterations is significantly reduced. We use a randomized spectral decomposition of the posterior covariance matrix of the parameters to perform a linearized uncertainty quantification of the parameter estimate. The feasibility of the method is tested by virtual examples of head observations in steady-state and transient groundwater flow. These synthetic tests demonstrate that transient data can reduce both parameter uncertainty and time spent conducting experiments, while the presented methods are able to handle the resulting large number of measurements.
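The randomized spectral decomposition used for the linearized uncertainty quantification can be sketched as follows; the covariance construction, target rank, and oversampling below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(6)
# Toy posterior covariance with a rapidly decaying spectrum, standing in
# for the (much larger) posterior covariance of the parameter field.
n, k = 500, 20
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = 1.0 / (1.0 + np.arange(n)) ** 2
C = (Q * eigs) @ Q.T

# Randomized spectral decomposition: probe the range of C with a Gaussian
# test matrix, orthonormalize, and solve a small dense eigenproblem.
Omega = rng.standard_normal((n, k + 10))   # oversampled random probes
Y = C @ Omega
Qr, _ = np.linalg.qr(Y)
B = Qr.T @ C @ Qr
w, V = np.linalg.eigh(B)
w, V = w[::-1], V[:, ::-1]                 # descending Ritz values
U = Qr @ V                                 # approximate eigenvectors

# Linearized uncertainty: parameter variances from the low-rank
# reconstruction diag(U_k diag(w_k) U_k^T).
var_approx = np.einsum("ij,j,ij->i", U[:, :k], w[:k], U[:, :k])
```

Only k + 10 applications of C are needed, which is what makes this tractable when C is available solely through matrix-vector products.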
Reduced order feedback control equations for linear time and frequency domain analysis
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1981-01-01
An algorithm was developed which can be used to obtain the equations. In a more general context, the algorithm computes a real nonsingular similarity transformation matrix which reduces a real nonsymmetric matrix to block diagonal form, each block of which is a real quasi upper triangular matrix. The algorithm works with both defective and derogatory matrices and when and if it fails, the resultant output can be used as a guide for the reformulation of the mathematical equations that lead up to the ill conditioned matrix which could not be block diagonalized.
A new lumped-parameter approach to simulating flow processes in unsaturated dual-porosity media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmerman, R.W.; Hadgu, T.; Bodvarsson, G.S.
We have developed a new lumped-parameter dual-porosity approach to simulating unsaturated flow processes in fractured rocks. Fluid flow between the fracture network and the matrix blocks is described by a nonlinear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. This equation is a generalization of the Warren-Root equation, but unlike the Warren-Root equation, is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into a computational module, compatible with the TOUGH simulator, to serve as a source/sink term for fracture elements.more » The new approach achieves accuracy comparable to simulations in which the matrix blocks are discretized, but typically requires an order of magnitude less computational time.« less
NASA Astrophysics Data System (ADS)
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2017-04-01
Physically-based modeling is a wide-spread tool in understanding and management of natural systems. With the high complexity of many such models and the huge amount of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem is model reduction methods. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time-steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). 
This method uses spatial interpolation points to build the equation system in the reduced model space, thereby allowing the recalculation of system matrices at every time-step, as necessary for non-linear models, while retaining the speed of the reduced model. This makes POD-DEIM applicable for groundwater models simulating unconfined aquifers. However, in our analysis, the method struggled to reproduce variable river boundaries accurately and gave no advantage for variable Dirichlet boundaries compared to the original POD method. We have developed another extension for POD that aims to address these remaining problems by performing a second POD operation on the model matrix on the left-hand side of the equation. The method aims to at least reproduce the accuracy of the other methods where they are applicable while outperforming them for setups with changing river boundaries or variable Dirichlet boundaries. We compared the new extension with original POD and POD-DEIM for different combinations of model structures and boundary conditions. The new method shows the potential of POD extensions for applications to non-linear groundwater systems and complex boundary conditions that go beyond the current, relatively limited range of applications. References: Siade, A. J., Putti, M., and Yeh, W. W.-G. (2010). Snapshot selection for groundwater model reduction using proper orthogonal decomposition. Water Resour. Res., 46(8):W08539. Stanko, Z. P., Boyce, S. E., and Yeh, W. W.-G. (2016). Nonlinear model reduction of unconfined groundwater flow using pod and deim. Advances in Water Resources, 97:130 - 143.
NASA Technical Reports Server (NTRS)
Lakin, W. D.
1981-01-01
The use of integrating matrices in solving differential equations associated with rotating beam configurations is examined. In vibration problems, by expressing the equations of motion of the beam in matrix notation, utilizing the integrating matrix as an operator, and applying the boundary conditions, the spatial dependence is removed from the governing partial differential equations and the resulting ordinary differential equations can be cast into standard eigenvalue form. Integrating matrices are derived based on two dimensional rectangular grids with arbitrary grid spacings allowed in one direction. The derivation of higher dimensional integrating matrices is the initial step in the generalization of the integrating matrix methodology to vibration and stability problems involving plates and shells.
A projection method for low speed flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colella, P.; Pao, K.
The authors propose a decomposition applicable to low speed, inviscid flows at all Mach numbers less than 1. By using the Hodge decomposition, they may write the velocity field as the sum of a divergence-free vector field and a gradient of a scalar function. Evolution equations for these parts are presented. A numerical procedure based on this decomposition is designed, using projection methods for solving the incompressible variables and a backward-Euler method for solving the potential variables. Numerical experiments are included to illustrate various aspects of the algorithm.
A novel iterative scheme and its application to differential equations.
Khan, Yasir; Naeem, F; Šmarda, Zdeněk
2014-01-01
The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we find that the former involves unnecessary calculation of the Lagrange multiplier and the latter repeats calculations in each iteration. Several examples are given to verify the reliability and efficiency of the method.
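For a linear problem the Adomian decomposition reduces to iterated integration, which makes the series structure easy to see. A small illustrative sketch (not from the paper) for the IVP u' = u, u(0) = 1, where the terms u_k = x^k/k! build the exponential series:

```python
# Adomian-style series for u' = u, u(0) = 1:
# u_0 = 1 and u_{k+1}(x) = integral_0^x u_k(t) dt, so u_k = x^k / k!.
# Polynomials are stored as coefficient lists, coeffs[i] for x^i.

def integrate_poly(coeffs):
    """Antiderivative with zero constant term."""
    return [0.0] + [c / (i + 1) for i, c in enumerate(coeffs)]

def adomian_series(n_terms):
    terms = [[1.0]]                   # u_0 = 1
    for _ in range(n_terms - 1):
        terms.append(integrate_poly(terms[-1]))
    total = [0.0] * n_terms           # sum the polynomial terms
    for t in terms:
        for i, c in enumerate(t):
            total[i] += c
    return total                      # coefficients of the partial sum

def poly_eval(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs))

approx = poly_eval(adomian_series(15), 1.0)   # partial sum at x = 1, approaches e
```

In nonlinear problems the integrand involves Adomian polynomials rather than the previous term alone; this linear case only shows the recursive structure the paper reformulates.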
A New Domain Decomposition Approach for the Gust Response Problem
NASA Technical Reports Server (NTRS)
Scott, James R.; Atassi, Hafiz M.; Susan-Resiga, Romeo F.
2002-01-01
A domain decomposition method is developed for solving the aerodynamic/aeroacoustic problem of an airfoil in a vortical gust. The computational domain is divided into inner and outer regions wherein the governing equations are cast in different forms suitable for accurate computations in each region. Boundary conditions which ensure continuity of pressure and velocity are imposed along the interface separating the two regions. A numerical study is presented for reduced frequencies ranging from 0.1 to 3.0. It is seen that the domain decomposition approach provides robust and grid-independent solutions.
Aquilante, Francesco; Gagliardi, Laura; Pedersen, Thomas Bondo; Lindh, Roland
2009-04-21
Cholesky decomposition of the atomic two-electron integral matrix has recently been proposed as a procedure for automated generation of auxiliary basis sets for the density fitting approximation [F. Aquilante et al., J. Chem. Phys. 127, 114107 (2007)]. In order to increase computational performance while maintaining accuracy, we propose here to reduce the number of primitive Gaussian functions of the contracted auxiliary basis functions by means of a second Cholesky decomposition. Test calculations show that this procedure is most beneficial in conjunction with highly contracted atomic orbital basis sets such as atomic natural orbitals, and that the error resulting from the second decomposition is negligible. We also demonstrate theoretically as well as computationally that the locality of the fitting coefficients can be controlled by means of the decomposition threshold even with the long-ranged Coulomb metric. Cholesky decomposition-based auxiliary basis sets are thus ideally suited for local density fitting approximations.
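The core numerical tool above, a Cholesky decomposition truncated at a threshold so that only the significant directions of a positive semidefinite matrix are kept, can be sketched with a generic pivoted Cholesky routine. This is an illustrative NumPy version on an invented Gram matrix, not the auxiliary-basis machinery of the paper:

```python
import numpy as np

def pivoted_cholesky(M, tol=1e-8):
    """Pivoted Cholesky of a symmetric PSD matrix M.
    Stops when the largest remaining diagonal falls below tol and
    returns L (n x k) with M ~= L @ L.T, plus the chosen pivots."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    d = np.diag(M).copy()
    L = np.zeros((n, n))
    pivots = []
    for k in range(n):
        p = int(np.argmax(d))
        if d[p] < tol:
            return L[:, :k], pivots
        pivots.append(p)
        L[:, k] = (M[:, p] - L[:, :k] @ L[p, :k]) / np.sqrt(d[p])
        d = d - L[:, k] ** 2
    return L, pivots

rng = np.random.default_rng(7)
X = rng.standard_normal((8, 30))
M = X.T @ X                      # rank-8 PSD matrix of size 30 x 30
L, pivots = pivoted_cholesky(M, tol=1e-8)
```

The decomposition threshold directly controls both the rank of the factor and the residual error, which is the handle the paper uses to control the locality of the fitting coefficients.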
Numerical solutions for Helmholtz equations using Bernoulli polynomials
NASA Astrophysics Data System (ADS)
Bicer, Kubra Erdem; Yalcinbas, Salih
2017-07-01
This paper reports a new numerical method based on Bernoulli polynomials for the solution of Helmholtz equations. The method uses matrix forms of Bernoulli polynomials and their derivatives by means of collocation points. The aim of this paper is to solve Helmholtz equations using these matrix relations.
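A Bernoulli-polynomial collocation scheme of this kind can be sketched for a 1-D Helmholtz problem: build the basis from the Bernoulli-number recurrence, evaluate basis values and second derivatives at collocation points, and solve the resulting matrix system. This is an illustrative NumPy sketch (the manufactured right-hand side and boundary values are invented so the exact solution is u(x) = x^2), not the authors' scheme:

```python
import numpy as np
from math import comb
from numpy.polynomial import Polynomial

def bernoulli_polys(N):
    """Bernoulli polynomials B_0..B_N as numpy Polynomial objects."""
    Bnum = [1.0]                                   # Bernoulli numbers via recurrence
    for m in range(1, N + 1):
        Bnum.append(-sum(comb(m + 1, j) * Bnum[j] for j in range(m)) / (m + 1))
    polys = []
    for n in range(N + 1):
        coef = np.zeros(n + 1)
        for k in range(n + 1):
            coef[n - k] = comb(n, k) * Bnum[k]     # coefficient of x^(n-k)
        polys.append(Polynomial(coef))
    return polys

# Collocation for u'' + kwave^2 u = f on [0, 1], u(0) = 0, u(1) = 1,
# with f = 2 + kwave^2 x^2 so that the exact solution is u(x) = x^2.
kwave = 3.0
basis = bernoulli_polys(5)
rows, rhs = [], []
for x in np.linspace(0, 1, 11)[1:-1]:              # interior collocation points
    rows.append([B.deriv(2)(x) + kwave**2 * B(x) for B in basis])
    rhs.append(2.0 + kwave**2 * x**2)
for x, bc in ((0.0, 0.0), (1.0, 1.0)):             # boundary conditions
    rows.append([B(x) for B in basis])
    rhs.append(bc)
c, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
u = lambda x: sum(cj * Bj(x) for cj, Bj in zip(c, basis))
```

Since the exact solution lies in the polynomial span, the collocation system recovers it to machine precision; for general data the residual decreases as the basis degree grows.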
Lossless and Sufficient - Invariant Decomposition of Deterministic Target
NASA Astrophysics Data System (ADS)
Paladini, Riccardo; Ferro Famil, Laurent; Pottier, Eric; Martorella, Marco; Berizzi, Fabrizio
2011-03-01
The symmetric radar scattering matrix of a reciprocal target is projected on the circular polarization basis and is decomposed into four orientation-invariant parameters, a relative phase, and a relative orientation. The physical interpretation of these results is found in the wave-particle nature of radar scattering, due to the circular polarization nature of elemental packets of energy. The proposed decomposition is based on a left orthogonal to left Special Unitary basis, providing the target description in terms of a unitary vector. A comparison between the proposed CTD and the Cameron, Kennaugh, and Krogager decompositions is also pointed out. A validation using both anechoic chamber data and airborne EMISAR data of DTU is used to show the effectiveness of this decomposition for the analysis of coherent targets. In a second paper we will show the application of the rotation group U(3) for the decomposition of distributed targets into nine meaningful parameters.
Multi-Centrality Graph Spectral Decompositions and Their Application to Cyber Intrusion Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Pin-Yu; Choudhury, Sutanay; Hero, Alfred
Many modern datasets can be represented as graphs and hence spectral decompositions such as graph principal component analysis (PCA) can be useful. Distinct from previous graph decomposition approaches based on subspace projection of a single topological feature, e.g., the centered graph adjacency matrix (graph Laplacian), we propose spectral decomposition approaches to graph PCA and graph dictionary learning that integrate multiple features, including graph walk statistics, centrality measures and graph distances to reference nodes. In this paper we propose a new PCA method for single graph analysis, called multi-centrality graph PCA (MC-GPCA), and a new dictionary learning method for ensembles of graphs, called multi-centrality graph dictionary learning (MC-GDL), both based on spectral decomposition of multi-centrality matrices. As an application to cyber intrusion detection, MC-GPCA can be an effective indicator of anomalous connectivity pattern and MC-GDL can provide discriminative basis for attack classification.
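The multi-centrality idea, stack several per-node features (degree, eigenvector centrality, distances to a reference node) into a matrix and take its spectral decomposition, can be sketched on a toy graph. A minimal NumPy illustration with an invented six-node graph, not the MC-GPCA algorithm of the paper:

```python
import numpy as np

# Small undirected graph given by its adjacency matrix
Adj = np.array([[0, 1, 1, 0, 0, 0],
                [1, 0, 1, 1, 0, 0],
                [1, 1, 0, 1, 0, 0],
                [0, 1, 1, 0, 1, 0],
                [0, 0, 0, 1, 0, 1],
                [0, 0, 0, 0, 1, 0]], dtype=float)
nn = len(Adj)

deg = Adj.sum(axis=1)                      # degree centrality
v = np.ones(nn)                            # eigenvector centrality via power iteration
for _ in range(200):
    v = Adj @ v
    v /= np.linalg.norm(v)
dist = np.full(nn, np.inf)                 # hop distance to reference node 0
dist[0] = 0.0
reach = np.zeros(nn, dtype=bool)
reach[0] = True
for h in range(1, nn):                     # breadth-first expansion
    reach = reach | (reach @ Adj > 0)
    dist[reach & np.isinf(dist)] = h

# Multi-centrality matrix and its spectral (PCA-style) decomposition
F = np.column_stack([deg, v, dist])
Fc = (F - F.mean(axis=0)) / F.std(axis=0)  # standardize each feature
U, sig, Vt = np.linalg.svd(Fc, full_matrices=False)
scores = U * sig                           # node coordinates on the principal axes
```

Nodes with anomalous combinations of centralities stand out as outliers in the leading principal scores, which is the intuition behind using such decompositions for intrusion detection.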
NASA Astrophysics Data System (ADS)
Mleczko, M.
2014-12-01
Polarimetric SAR data are not widely used in practice, because they are not yet available operationally from satellites. Currently we can distinguish two approaches in POL-In-SAR technology: alternating polarization imaging (Alt-POL) and fully polarimetric imaging (QuadPol). The first represents a subset of the second and is more operational, while the second is experimental, because classification of these data requires a polarimetric decomposition of the scattering matrix in the first stage. In the literature the decomposition process is divided into two types: coherent and incoherent decomposition. In this paper the decomposition methods have been tested using data from the high-resolution airborne F-SAR system. Results of the classification have been interpreted in the context of land cover mapping capabilities.
DOT National Transportation Integrated Search
2016-09-01
We consider the problem of solving mixed random linear equations with k components. This is the noiseless setting of mixed linear regression. The goal is to estimate multiple linear models from mixed samples in the case where the labels (which sample...
Application of symbolic/numeric matrix solution techniques to the NASTRAN program
NASA Technical Reports Server (NTRS)
Buturla, E. M.; Burroughs, S. H.
1977-01-01
The matrix-solving algorithm of any finite element program is extremely important, since solution of the matrix equations requires a large amount of elapsed time due to null calculations and excessive input/output operations. An alternate method of solving the matrix equations is presented. A symbolic processing step followed by numeric solution yields the solution very rapidly and is especially useful for nonlinear problems.
Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media
NASA Astrophysics Data System (ADS)
Jakobsen, Morten; Tveit, Svenn
2018-05-01
We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, which is based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which there already exist efficient (regularized) solution methods. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the Distorted Born Iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which could be an advantage when performing fast-repeat modelling and time-lapse inversion. The T-matrix is also compatible with the use of modern renormalization methods that can potentially help us to reduce the sensitivity of the CSEM inversion results on the starting model. 
To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data associated with 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBIT method for inversion of CSEM data in anisotropic media is both accurate and efficient.
Parallelization of the Physical-Space Statistical Analysis System (PSAS)
NASA Technical Reports Server (NTRS)
Larson, J. W.; Guo, J.; Lyster, P. M.
1999-01-01
Atmospheric data assimilation is a method of combining observations with model forecasts to produce a more accurate description of the atmosphere than the observations or forecast alone can provide. Data assimilation plays an increasingly important role in the study of climate and atmospheric chemistry. The NASA Data Assimilation Office (DAO) has developed the Goddard Earth Observing System Data Assimilation System (GEOS DAS) to create assimilated datasets. The core computational components of the GEOS DAS include the GEOS General Circulation Model (GCM) and the Physical-space Statistical Analysis System (PSAS). The need for timely validation of scientific enhancements to the data assimilation system poses computational demands that are best met by distributed parallel software. PSAS is implemented in Fortran 90 using object-based design principles. The analysis portions of the code solve two equations. The first of these is the "innovation" equation, which is solved on the unstructured observation grid using a preconditioned conjugate gradient (CG) method. The "analysis" equation is a transformation from the observation grid back to a structured grid, and is solved by a direct matrix-vector multiplication. Use of a factored-operator formulation reduces the computational complexity of both the CG solver and the matrix-vector multiplication, rendering the matrix-vector multiplications as a successive product of operators on a vector. Sparsity is introduced to these operators by partitioning the observations using an icosahedral decomposition scheme. PSAS builds a large (approx. 128MB) run-time database of parameters used in the calculation of these operators. Implementing a message passing parallel computing paradigm into an existing yet developing computational system as complex as PSAS is nontrivial. One of the technical challenges is balancing the requirements for computational reproducibility with the need for high performance. 
The problem of computational reproducibility is well known in the parallel computing community. It is a requirement that the parallel code perform calculations in a fashion that will yield identical results on different configurations of processing elements on the same platform. In some cases this problem can be solved by sacrificing performance. Meeting this requirement and still achieving high performance is very difficult. Topics to be discussed include: current PSAS design and parallelization strategy; reproducibility issues; load balance versus database memory demands; and possible solutions to these problems.
Electrochemical and Infrared Absorption Spectroscopy Detection of SF₆ Decomposition Products.
Dong, Ming; Zhang, Chongxing; Ren, Ming; Albarracín, Ricardo; Ye, Rixin
2017-11-15
Sulfur hexafluoride (SF₆) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF₆ decomposition and ultimately generates several types of decomposition products. These SF₆ decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF₆ decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF₆ gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF₆ decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF₆ gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.
NASA Astrophysics Data System (ADS)
Beretta, Gian Paolo; Rivadossi, Luca; Janbozorgi, Mohammad
2018-04-01
Rate-Controlled Constrained-Equilibrium (RCCE) modeling of complex chemical kinetics provides acceptable accuracies with much fewer differential equations than for the fully Detailed Kinetic Model (DKM). Since its introduction by James C. Keck, a drawback of the RCCE scheme has been the absence of an automatable, systematic procedure to identify the constraints that most effectively warrant a desired level of approximation for a given range of initial, boundary, and thermodynamic conditions. An optimal constraint identification has been recently proposed. Given a DKM with S species, E elements, and R reactions, the procedure starts by running a probe DKM simulation to compute an S-vector that we call overall degree of disequilibrium (ODoD) because its scalar product with the S-vector formed by the stoichiometric coefficients of any reaction yields its degree of disequilibrium (DoD). The ODoD vector evolves in the same (S-E)-dimensional stoichiometric subspace spanned by the R stoichiometric S-vectors. Next we construct the rank-(S-E) matrix of ODoD traces obtained from the probe DKM numerical simulation and compute its singular value decomposition (SVD). By retaining only the first C largest singular values of the SVD and setting to zero all the others we obtain the best rank-C approximation of the matrix of ODoD traces whereby its columns span a C-dimensional subspace of the stoichiometric subspace. This in turn yields the best approximation of the evolution of the ODoD vector in terms of only C parameters that we call the constraint potentials. The resulting order-C RCCE approximate model reduces the number of independent differential equations related to species, mass, and energy balances from S+2 to C+E+2, with substantial computational savings when C ≪ S-E.
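The central linear-algebra step above, keeping only the first C singular values of the matrix of ODoD traces to obtain its best rank-C approximation, is a direct application of the Eckart-Young theorem. A minimal NumPy sketch with an invented stand-in for the ODoD trace matrix (not a real kinetic model):

```python
import numpy as np

rng = np.random.default_rng(3)
S, T, C = 30, 200, 4          # species, time samples, retained constraints
# Stand-in for the S x T matrix of ODoD traces, built with a decaying spectrum
F = rng.standard_normal((S, 6)) * (2.0 ** -np.arange(6))
M = F @ rng.standard_normal((6, T))

U, sig, Vt = np.linalg.svd(M, full_matrices=False)
M_C = (U[:, :C] * sig[:C]) @ Vt[:C]      # best rank-C approximation

# Eckart-Young: the spectral-norm error equals the first discarded singular value
err = np.linalg.norm(M - M_C, ord=2)
```

The columns of U[:, :C] play the role of the constraint directions: the evolution of the full ODoD vector is then parametrized by only C constraint potentials.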
NASA Technical Reports Server (NTRS)
Leone, Frank A., Jr.
2015-01-01
A method is presented to represent the large-deformation kinematics of intraply matrix cracks and delaminations in continuum damage mechanics (CDM) constitutive material models. The method involves the additive decomposition of the deformation gradient tensor into 'crack' and 'bulk material' components. The response of the intact bulk material is represented by a reduced deformation gradient tensor, and the opening of an embedded cohesive interface is represented by a normalized cohesive displacement-jump vector. The rotation of the embedded interface is tracked as the material deforms and as the crack opens. The distribution of the total local deformation between the bulk material and the cohesive interface components is determined by minimizing the difference between the cohesive stress and the bulk material stress projected onto the cohesive interface. The improvements to the accuracy of CDM models that incorporate the presented method over existing approaches are demonstrated for a single element subjected to simple shear deformation and for a finite element model of a unidirectional open-hole tension specimen. The material model is implemented as a VUMAT user subroutine for the Abaqus/Explicit finite element software. The presented deformation gradient decomposition method reduces the artificial load transfer across matrix cracks subjected to large shearing deformations, and avoids the spurious secondary failure modes that often occur in analyses based on conventional progressive damage models.
Solving large tomographic linear systems: size reduction and error estimation
NASA Astrophysics Data System (ADS)
Voronin, Sergey; Mikesell, Dylan; Slezak, Inna; Nolet, Guust
2014-10-01
We present a new approach to reduce a sparse, linear system of equations associated with tomographic inverse problems. We begin by making a modification to the commonly used compressed sparse-row format, whereby our format is tailored to the sparse structure of finite-frequency (volume) sensitivity kernels in seismic tomography. Next, we cluster the sparse matrix rows to divide a large matrix into smaller subsets representing ray paths that are geographically close. Singular value decomposition of each subset allows us to project the data onto a subspace associated with the largest eigenvalues of the subset. After projection we reject those data that have a signal-to-noise ratio (SNR) below a chosen threshold. Clustering in this way assures that the sparse nature of the system is minimally affected by the projection. Moreover, our approach allows for a precise estimation of the noise affecting the data while also giving us the ability to identify outliers. We illustrate the method by reducing large matrices computed for global tomographic systems with cross-correlation body wave delays, as well as with surface wave phase velocity anomalies. For a massive matrix computed for 3.7 million Rayleigh wave phase velocity measurements, imposing a threshold of 1 for the SNR, we condensed the matrix size from 1103 to 63 Gbyte. For a global data set of multiple-frequency P wave delays from 60 well-distributed deep earthquakes we obtain a reduction to 5.9 per cent. This type of reduction allows one to avoid loss of information due to underparametrizing models. Alternatively, if data have to be rejected to fit the system into computer memory, it assures that the most important data are preserved.
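The per-cluster reduction step, project the data onto the singular basis of the cluster's sensitivity matrix and discard components whose SNR falls below a threshold, can be sketched as follows. This is a schematic NumPy illustration with invented matrix sizes and a known noise level, not the authors' tomography pipeline:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 120, 400                     # data rows and model parameters in one cluster
G = rng.standard_normal((m, 5)) @ rng.standard_normal((5, n))   # low-rank cluster matrix
d = G @ rng.standard_normal(n) + 0.01 * rng.standard_normal(m)  # noisy data

U, sig, Vt = np.linalg.svd(G, full_matrices=False)
proj = U.T @ d                      # data expressed in the cluster's singular basis
snr = np.abs(proj) / 0.01           # crude per-component SNR (noise level assumed known)
keep = snr > 1.0                    # reject components below the threshold
d_reduced = proj[keep]
G_reduced = sig[keep][:, None] * Vt[keep]   # reduced system: G_reduced @ x ~= d_reduced
```

Because the cluster matrix is low rank, most projected components carry only noise and are rejected, shrinking the system while preserving the informative directions.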
Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.
2017-09-04
In this paper, we present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Lastly, our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.
Chen, Hongmei; Oram, Natalie J; Barry, Kathryn E; Mommer, Liesje; van Ruijven, Jasper; de Kroon, Hans; Ebeling, Anne; Eisenhauer, Nico; Fischer, Christine; Gleixner, Gerd; Gessler, Arthur; González Macé, Odette; Hacker, Nina; Hildebrandt, Anke; Lange, Markus; Scherer-Lorenzen, Michael; Scheu, Stefan; Oelmann, Yvonne; Wagg, Cameron; Wilcke, Wolfgang; Wirth, Christian; Weigelt, Alexandra
2017-11-01
Plant diversity influences many ecosystem functions including root decomposition. However, due to the presence of multiple pathways via which plant diversity may affect root decomposition, our mechanistic understanding of their relationships is limited. In a grassland biodiversity experiment, we simultaneously assessed the effects of three pathways (root litter quality, soil biota, and soil abiotic conditions) on the relationships between plant diversity (in terms of species richness and the presence/absence of grasses and legumes) and root decomposition using structural equation modeling. Our final structural equation model explained 70% of the variation in root mass loss. However, different measures of plant diversity included in our model operated via different pathways to alter root mass loss. Plant species richness had a negative effect on root mass loss. This was partially due to increased Oribatida abundance, but was weakened by enhanced root potassium (K) concentration in more diverse mixtures. Equally, grass presence negatively affected root mass loss. This effect of grasses was mostly mediated via increased root lignin concentration and supported via increased Oribatida abundance and decreased root K concentration. In contrast, legume presence showed a net positive effect on root mass loss via decreased root lignin concentration and increased root magnesium concentration, both of which led to enhanced root mass loss. Overall, the different measures of plant diversity had contrasting effects on root decomposition. Furthermore, we found that root chemistry and soil biota but not root morphology or soil abiotic conditions mediated these effects of plant diversity on root decomposition.
NASA Astrophysics Data System (ADS)
Bai, Xue-Mei; Liu, Tie; Liu, De-Long; Wei, Yong-Ju
2018-02-01
A chemometrics-assisted excitation-emission matrix (EEM) fluorescence method was proposed for simultaneous determination of α-asarone and β-asarone in Acorus tatarinowii. Using the strategy of combining EEM data with chemometrics methods, the simultaneous determination of α-asarone and β-asarone in the complex Traditional Chinese medicine system was achieved successfully, even in the presence of unexpected interferents. The physical or chemical separation step was avoided due to the use of 'mathematical separation'. Six second-order calibration methods were used including parallel factor analysis (PARAFAC), alternating trilinear decomposition (ATLD), alternating penalty trilinear decomposition (APTLD), self-weighted alternating trilinear decomposition (SWATLD), the unfolded partial least-squares (U-PLS) and multidimensional partial least-squares (N-PLS) with residual bilinearization (RBL). In addition, an HPLC method was developed to further validate the presented strategy. Consequently, for the validation samples, the analytical results obtained by the six second-order calibration methods were almost equally accurate. But for the Acorus tatarinowii samples, the results indicated a slightly better predictive ability of the N-PLS/RBL procedure over the other methods.
Equation of State and Shock-Driven Decomposition of 'Soft' Materials
Coe, Joshua Damon; Dattelbaum, Dana Mcgraw
2017-12-01
Equation of state (EOS) efforts at National Nuclear Security Administration (NNSA) national laboratories tend to focus heavily on metals, and rightly so given their obvious primacy in nuclear weapons. Our focus here, however, is on the EOS of 'soft' matter such as polymers and their derived foams, which present a number of challenges distinct from those of other material classes. This brief description will cover only one aspect of polymer EOS modeling: treatment of shock-driven decomposition. Here, these interesting (and sometimes neglected) materials exhibit a number of other challenging features (glass transitions, complex thermal behavior, response that is both viscous and elastic), each warranting additional discussion of its own.
Three Interpretations of the Matrix Equation Ax = b
ERIC Educational Resources Information Center
Larson, Christine; Zandieh, Michelle
2013-01-01
Many of the central ideas in an introductory undergraduate linear algebra course are closely tied to a set of interpretations of the matrix equation Ax = b (A is a matrix, x and b are vectors): linear combination interpretations, systems interpretations, and transformation interpretations. We consider graphic and symbolic representations for each,…
Minimal parameter solution of the orthogonal matrix differential equation
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Markley, F. Landis
1990-01-01
As demonstrated in this work, all orthogonal matrices solve a first-order differential equation. The straightforward solution of this equation requires n^2 integrations to obtain the elements of the nth-order matrix. There are, however, only n(n-1)/2 independent parameters which determine an orthogonal matrix. The questions of choosing them, finding their differential equation and expressing the orthogonal matrix in terms of these parameters are considered. Several possibilities which are based on attitude determination in three dimensions are examined. It is shown that not all 3-D methods have useful extensions to higher dimensions. It is also shown why the rates of change of the matrix elements, which are the elements of the angular rate vector in 3-D, are the elements of a tensor of the second rank (dyadic) in spaces other than three dimensional. It is proven that the 3-D Gibbs vector (or Cayley parameters) is extendable to other dimensions. An algorithm is developed employing the resulting parameters, which are termed Extended Rodrigues Parameters, and numerical results are presented of the application of the algorithm to a fourth-order matrix.
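The differential equation above can be illustrated numerically: for a skew-symmetric generator Omega(t), the solution of dA/dt = Omega(t) A stays orthogonal, so orthogonality drift measures solver error. The sketch below (a minimal RK4 illustration with an arbitrary hypothetical generator, not the paper's Extended Rodrigues Parameters algorithm) integrates a fourth-order case.

```python
import numpy as np

def integrate_orthogonal(omega, a0, t1, steps=1000):
    """Classical RK4 for dA/dt = Omega(t) A; with skew-symmetric Omega(t)
    the exact solution stays orthogonal, so drift measures solver error."""
    a, h, t = a0.copy(), t1 / steps, 0.0
    for _ in range(steps):
        k1 = omega(t) @ a
        k2 = omega(t + h / 2) @ (a + h / 2 * k1)
        k3 = omega(t + h / 2) @ (a + h / 2 * k2)
        k4 = omega(t + h) @ (a + h * k3)
        a = a + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return a

# hypothetical constant skew-symmetric generator, fourth-order case
rng = np.random.default_rng(0)
m = rng.standard_normal((4, 4))
omega_const = (m - m.T) / 2
a_final = integrate_orthogonal(lambda t: omega_const, np.eye(4), 1.0)
drift = np.linalg.norm(a_final.T @ a_final - np.eye(4))
```

For a constant generator the exact solution is the matrix exponential exp(Omega t); the small remaining drift here is just RK4 truncation error, which a parameterization with n(n-1)/2 coordinates avoids by construction.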
Minimal parameter solution of the orthogonal matrix differential equation
NASA Technical Reports Server (NTRS)
Baritzhack, Itzhack Y.; Markley, F. Landis
1988-01-01
As demonstrated in this work, all orthogonal matrices solve a first-order differential equation. The straightforward solution of this equation requires n^2 integrations to obtain the elements of the nth-order matrix. There are, however, only n(n-1)/2 independent parameters which determine an orthogonal matrix. The questions of choosing them, finding their differential equation and expressing the orthogonal matrix in terms of these parameters are considered. Several possibilities which are based on attitude determination in three dimensions are examined. It is shown that not all 3-D methods have useful extensions to higher dimensions. It is also shown why the rates of change of the matrix elements, which are the elements of the angular rate vector in 3-D, are the elements of a tensor of the second rank (dyadic) in spaces other than three dimensional. It is proven that the 3-D Gibbs vector (or Cayley parameters) is extendable to other dimensions. An algorithm is developed employing the resulting parameters, which are termed Extended Rodrigues Parameters, and numerical results are presented of the application of the algorithm to a fourth-order matrix.
NASA Astrophysics Data System (ADS)
Özdemir, Gizem; Demiralp, Metin
2015-12-01
In this work, the Enhanced Multivariance Products Representation (EMPR) approach, an extension by Demiralp and his group of Sobol's High Dimensional Model Representation (HDMR), has been used as the basic tool. Its discrete forms have also been developed and used in practice by Demiralp and his group, in addition to some other authors, for the decomposition of arrays such as vectors, matrices, or multiway arrays. This work specifically focuses on the decomposition of infinite matrices involving denumerably infinitely many rows and columns. To this end the target matrix is first decomposed into a sum of certain outer products, and then each outer product is treated by Tridiagonal Matrix Enhanced Multivariance Products Representation (TMEMPR), which has been developed by Demiralp and his group. The result is a three-matrix-factor product whose kernel (the middle factor) is an arrowheaded matrix, while the pre- and post-factors are invertible matrices composed of the support vectors of TMEMPR. This new method is called Arrowheaded Enhanced Multivariance Products Representation for Matrices. The general purpose is the approximation of denumerably infinite matrices with the new method.
Singular value decomposition for collaborative filtering on a GPU
NASA Astrophysics Data System (ADS)
Kato, Kimikazu; Hosino, Tikara
2010-06-01
Collaborative filtering predicts customers' unknown preferences from known preferences. In collaborative filtering computations, a singular value decomposition (SVD) is needed to reduce the size of a large-scale matrix so that the burden of the next computation phase is decreased. In this application, SVD means a roughly approximated factorization of a given matrix into smaller matrices. Webb (a.k.a. Simon Funk) showed an effective algorithm to compute the SVD toward a solution of an open competition called the "Netflix Prize". The algorithm utilizes an iterative method so that the approximation error improves in each step of the iteration. We give a GPU version of Webb's algorithm. Our algorithm is implemented in CUDA and shown experimentally to be efficient.
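Webb's iterative approximate SVD can be sketched in a few lines of CPU-side Python (a hypothetical toy illustration of the general idea, not the authors' CUDA implementation; the learning rate, regularization, and ratings data are invented):

```python
import numpy as np

def funk_svd(ratings, rank=2, lr=0.02, reg=0.02, epochs=500, seed=0):
    """Webb/Funk-style approximate SVD: stochastic gradient descent on the
    observed entries only, with L2 regularization on the factors."""
    rng = np.random.default_rng(seed)
    n_users = 1 + max(u for u, _, _ in ratings)
    n_items = 1 + max(i for _, i, _ in ratings)
    p = 0.1 * rng.standard_normal((n_users, rank))
    q = 0.1 * rng.standard_normal((n_items, rank))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - p[u] @ q[i]
            # simultaneous update: right side uses the old factor values
            p[u], q[i] = (p[u] + lr * (err * q[i] - reg * p[u]),
                          q[i] + lr * (err * p[u] - reg * q[i]))
    return p, q

# hypothetical (user, item, rating) triples with near-rank-1 structure
data = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0),
        (1, 1, 2.4), (2, 0, 2.0), (2, 1, 1.2)]
p, q = funk_svd(data)
rmse = np.sqrt(np.mean([(r - p[u] @ q[i]) ** 2 for u, i, r in data]))
```

The per-rating inner loop is exactly the part that a GPU version would parallelize across many ratings at once.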
Fourier decomposition of payoff matrix for symmetric three-strategy games.
Szabó, György; Bodó, Kinga S; Allen, Benjamin; Nowak, Martin A
2014-10-01
In spatial evolutionary games, payoff matrices are used to describe pair interactions among neighboring players located on a lattice. We introduce a way in which the payoff matrices can be built up as a sum of payoff components reflecting basic symmetries. For two-strategy games this decomposition reproduces interactions characteristic of the Ising model. For three-strategy symmetric games the Fourier components can be classified into four types, representing games with self-dependent and cross-dependent payoffs, variants of three-strategy coordinations, and the rock-scissors-paper (RSP) game. In the absence of the RSP component the game is a potential game. The resultant potential matrix has been evaluated. The general features of these systems are analyzed when the game is expressed by linear combinations of these components.
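For the two-strategy case, the decomposition amounts to expanding the payoff matrix over a dyadic basis built from orthonormal vectors; a minimal sketch (the example payoffs are invented, and the paper's full four-type classification concerns the three-strategy case):

```python
import numpy as np

# orthonormal basis for the two-strategy case: uniform f0 and Ising-like f1
f = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

def decompose(a):
    """Coefficients c such that a = sum_{k,l} c[k,l] * outer(f[k], f[l])."""
    return f @ a @ f.T

def component(c, k, l):
    """One payoff component of the expansion."""
    return c[k, l] * np.outer(f[k], f[l])

# hypothetical prisoner's-dilemma-type payoff matrix
a = np.array([[3.0, 0.0], [5.0, 1.0]])
c = decompose(a)
recon = sum(component(c, k, l) for k in range(2) for l in range(2))
```

Because the basis is orthonormal, the coefficient matrix is obtained by two matrix products and the four components sum back to the original payoffs exactly.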
Unitary irreducible representations of SL(2,C) in discrete and continuous SU(1,1) bases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conrady, Florian; Hnybida, Jeff; Department of Physics, University of Waterloo, Waterloo, Ontario
2011-01-15
We derive the matrix elements of generators of unitary irreducible representations of SL(2,C) with respect to basis states arising from a decomposition into irreducible representations of SU(1,1). This is done with regard to a discrete basis diagonalized by J^3 and a continuous basis diagonalized by K^1, and for both the discrete and continuous series of SU(1,1). For completeness, we also treat the more conventional SU(2) decomposition as a fifth case. The derivation proceeds in a functional/differential framework and exploits the fact that state functions and differential operators have a similar structure in all five cases. The states are defined explicitly and related to SU(1,1) and SU(2) matrix elements.
Algebraic methods for the solution of some linear matrix equations
NASA Technical Reports Server (NTRS)
Djaferis, T. E.; Mitter, S. K.
1979-01-01
The characterization of polynomials whose zeros lie in certain algebraic domains (and the unification of the ideas of Hermite and Lyapunov) is the basis for developing finite algorithms for the solution of linear matrix equations. Particular attention is given to the equations PA + A'P = Q (the Lyapunov equation) and P - A'PA = Q (the discrete Lyapunov equation). The Lyapunov equation appears in several areas of control theory such as stability theory, optimal control (evaluation of quadratic integrals), stochastic control (evaluation of covariance matrices) and in the solution of the algebraic Riccati equation using Newton's method.
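For small matrices, the Lyapunov equation PA + A'P = Q can be solved by brute-force vectorization, which makes its linear-algebraic structure explicit (a sketch using the column-stacking identities vec(PA) = kron(A', I) vec(P) and vec(A'P) = kron(I, A') vec(P); this is not one of the finite algorithms developed in the paper, and the example matrices are hypothetical):

```python
import numpy as np

def solve_lyapunov(a, q):
    """Solve P A + A' P = Q by vectorization with Kronecker products."""
    n = a.shape[0]
    eye = np.eye(n)
    m = np.kron(a.T, eye) + np.kron(eye, a.T)
    # column-stacked unknown: solve M vec(P) = vec(Q)
    p_vec = np.linalg.solve(m, q.flatten(order="F"))
    return p_vec.reshape((n, n), order="F")

# hypothetical stable A (eigenvalues -1, -3) and symmetric Q
a = np.array([[-1.0, 2.0], [0.0, -3.0]])
q = -np.eye(2)
p = solve_lyapunov(a, q)
residual = np.linalg.norm(p @ a + a.T @ p - q)
```

When A is stable and Q symmetric, the unique solution P is symmetric, which the residual check below also confirms; the Kronecker system is O(n^6) to solve, which is why structured finite algorithms matter for larger n.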
EvolQG - An R package for evolutionary quantitative genetics
Melo, Diogo; Garcia, Guilherme; Hubbe, Alex; Assis, Ana Paula; Marroig, Gabriel
2016-01-01
We present an open source package for performing evolutionary quantitative genetics analyses in the R environment for statistical computing. Evolutionary theory shows that evolution depends critically on the available variation in a given population. When dealing with many quantitative traits this variation is expressed in the form of a covariance matrix, particularly the additive genetic covariance matrix or sometimes the phenotypic matrix, when the genetic matrix is unavailable and there is evidence the phenotypic matrix is sufficiently similar to the genetic matrix. Given this mathematical representation of available variation, the EvolQG package provides functions for calculation of relevant evolutionary statistics; estimation of sampling error; corrections for this error; matrix comparison via correlations, distances and matrix decomposition; analysis of modularity patterns; and functions for testing evolutionary hypotheses on taxa diversification. PMID:27785352
FPGA-based coprocessor for matrix algorithms implementation
NASA Astrophysics Data System (ADS)
Amira, Abbes; Bensaali, Faycal
2003-03-01
Matrix algorithms are important in many types of applications, including image and signal processing. These areas require enormous computing power. A close examination of the algorithms used in these and related applications reveals that many of the fundamental actions involve matrix operations such as matrix multiplication, which is of O(N^3) complexity on a sequential computer and O(N^3/p) on a parallel system with p processors. This paper presents an investigation into the design and implementation of different matrix algorithms such as matrix operations, matrix transforms and matrix decompositions using an FPGA-based environment. Solutions for the problem of processing large matrices have been proposed. The proposed system architectures are scalable, modular and require less area and time complexity with reduced latency when compared with existing structures.
NASA Astrophysics Data System (ADS)
Petrishcheva, E.; Abart, R.
2012-04-01
We address mathematical modeling and computer simulation of phase decomposition in a multicomponent system. As opposed to binary alloys with one common diffusion parameter, our main concern is phase decomposition in real geological systems under the influence of strongly different interdiffusion coefficients, as is frequently encountered in mineral solid solutions with coupled diffusion on different sub-lattices. Our goal is to explain deviations from equilibrium element partitioning which are often observed in nature, e.g., in a cooled ternary feldspar. To this end we first adapt the standard Cahn-Hilliard model to the multicomponent diffusion problem and account for arbitrary diffusion coefficients. This is done using Onsager's approach, such that the flux of each component results from the combined action of the chemical potentials of all components. In a second step the generalized Cahn-Hilliard equation is solved numerically using a finite-element approach. We introduce and investigate several decomposition scenarios that may produce systematic deviations from the equilibrium element partitioning. Both ideal solutions and ternary feldspar are considered. Typically, the slowest component is initially "frozen" and the decomposition effectively takes place only for the two "fast" components. At this stage deviations from the equilibrium element partitioning are indeed observed. These deviations may become "frozen" under conditions of cooling. The final equilibration of the system occurs on a considerably slower time scale, and the system may therefore remain incompletely equilibrated at the observation point. Our approach reveals the intrinsic reasons for the specific phase separation path and rigorously describes it by direct numerical solution of the generalized Cahn-Hilliard equation.
Radiative albedo from a linearly fibered half-space
NASA Astrophysics Data System (ADS)
Grzesik, J. A.
2018-05-01
A growing acceptance of fiber-reinforced composite materials imparts some relevance to exploring the effects which a predominantly linear scattering lattice may have upon interior radiative transport. Indeed, a central feature of electromagnetic wave propagation within such a lattice, if sufficiently dilute, is ray confinement to cones whose half-angles are set by that between lattice and the incident ray. When such propagation is subordinated to a viewpoint of an unpolarized intensity transport, one arrives at a somewhat simplified variant of the Boltzmann equation with spherical scattering demoted to its cylindrical counterpart. With a view to initiating a hopefully wider discussion of such phenomena, we follow through in detail the half-space albedo problem. This is done first along canonical lines that harness the Wiener-Hopf technique, and then once more in a discrete ordinates setting via flux decomposition along the eigenbasis of the underlying attenuation/scattering matrix. Good agreement is seen to prevail. We further suggest that the Case singular eigenfunction apparatus could likewise be evolved here in close analogy to its original, spherical scattering model. A cursory contact with related problems in the astrophysical literature suggests, in addition, that the basic physical fidelity of our scalar radiative transfer equation (RTE) remains open to improvement by passage to a (4×1) Stokes vector, (4×4) matricial setting.
Tsuchimoto, Masashi; Tanimura, Yoshitaka
2015-08-11
A system with many energy states coupled to a harmonic oscillator bath is considered. To study quantum non-Markovian system-bath dynamics numerically rigorously and nonperturbatively, we developed a computer code for the reduced hierarchy equations of motion (HEOM) for a graphics processing unit (GPU) that can treat systems as large as 4096 energy states. The code employs a Padé spectrum decomposition (PSD) for the construction of the HEOM, together with exponential integrators. Dynamics of a quantum spin glass system are studied by calculating the free induction decay signal for the cases of 3 × 2 to 3 × 4 triangular lattices with antiferromagnetic interactions. We found that spins relax faster at lower temperature due to transitions through a quantum coherent state, as represented by the off-diagonal elements of the reduced density matrix, while it has been known that spins relax more slowly in the classical case due to suppression of thermal activation. The decay of the spins is qualitatively similar regardless of the lattice size. The pathway of spin relaxation is analyzed under a sudden temperature-drop condition. The Compute Unified Device Architecture (CUDA) based source code used in the present calculations is provided as Supporting Information.
Parallel CE/SE Computations via Domain Decomposition
NASA Technical Reports Server (NTRS)
Himansu, Ananda; Jorgenson, Philip C. E.; Wang, Xiao-Yen; Chang, Sin-Chung
2000-01-01
This paper describes the parallelization strategy and achieved parallel efficiency of an explicit time-marching algorithm for solving conservation laws. The Space-Time Conservation Element and Solution Element (CE/SE) algorithm for solving the 2D and 3D Euler equations is parallelized with the aid of domain decomposition. The parallel efficiency of the resultant algorithm on a Silicon Graphics Origin 2000 parallel computer is checked.
Identification and modification of dominant noise sources in diesel engines
NASA Astrophysics Data System (ADS)
Hayward, Michael D.
Determination of dominant noise sources in diesel engines is an integral step in the creation of quiet engines, but is a process which can involve an extensive series of expensive, time-consuming fired and motored tests. The goal of this research is to determine dominant noise source characteristics of a diesel engine in the near and far fields with data from fewer tests than is currently required. Pre-conditioning and the use of numerically robust methods to solve a set of cross-spectral density equations result in accurate calculation of the transfer paths between the near- and far-field measurement points. Application of singular value decomposition to an input cross-spectral matrix determines the spectral characteristics of a set of independent virtual sources that, when scaled and added, result in the input cross-spectral matrix. Each virtual source power spectral density is a singular value resulting from the decomposition performed over a range of frequencies. The complex relationship between virtual and physical sources is estimated through determination of virtual source contributions to each input measurement power spectral density. The method is made more user-friendly through the use of a percentage-contribution color plotting technique, where different normalizations can be used to help determine the presence of sources and the strengths of their contributions. Convolution of input measurements with the estimated path impulse responses results in a set of far-field components, to which the same singular value contribution plotting technique can be applied, thus allowing dominant noise source characteristics in the far field to also be examined. Application of the methods presented results in determination of the spectral characteristics of dominant noise sources both in the near and far fields from one fired test, which significantly reduces the need for extensive fired and motored testing.
Finally, it is shown that the far-field noise time history of a physically altered engine can be simulated through modification of singular values and recalculation of transfer paths between input and output measurements of previously recorded data.
Using Strassen's algorithm to accelerate the solution of linear systems
NASA Technical Reports Server (NTRS)
Bailey, David H.; Lee, King; Simon, Horst D.
1990-01-01
Strassen's algorithm for fast matrix-matrix multiplication has been implemented for matrices of arbitrary shapes on the CRAY-2 and CRAY Y-MP supercomputers. Several techniques have been used to reduce the scratch space requirement for this algorithm while simultaneously preserving a high level of performance. When the resulting Strassen-based matrix multiply routine is combined with some routines from the new LAPACK library, LU decomposition can be performed with rates significantly higher than those achieved by conventional means. We succeeded in factoring a 2048 x 2048 matrix on the CRAY Y-MP at a rate equivalent to 325 MFLOPS.
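Strassen's recursion with a crossover to conventional multiplication below a cutoff, the same strategy used in practice to balance recursion overhead against the reduced operation count, can be sketched as follows (a NumPy illustration restricted to square power-of-two sizes; the cutoff value is arbitrary, and the CRAY implementation in the paper handles arbitrary shapes):

```python
import numpy as np

def strassen(a, b, cutoff=64):
    """Strassen multiply for square power-of-two matrices: 7 recursive
    products instead of 8, falling back to NumPy below `cutoff`."""
    n = a.shape[0]
    if n <= cutoff:
        return a @ b
    h = n // 2
    a11, a12, a21, a22 = a[:h, :h], a[:h, h:], a[h:, :h], a[h:, h:]
    b11, b12, b21, b22 = b[:h, :h], b[:h, h:], b[h:, :h], b[h:, h:]
    m1 = strassen(a11 + a22, b11 + b22, cutoff)
    m2 = strassen(a21 + a22, b11, cutoff)
    m3 = strassen(a11, b12 - b22, cutoff)
    m4 = strassen(a22, b21 - b11, cutoff)
    m5 = strassen(a11 + a12, b22, cutoff)
    m6 = strassen(a21 - a11, b11 + b12, cutoff)
    m7 = strassen(a12 - a22, b21 + b22, cutoff)
    c = np.empty_like(a)
    c[:h, :h] = m1 + m4 - m5 + m7
    c[:h, h:] = m3 + m5
    c[h:, :h] = m2 + m4
    c[h:, h:] = m1 - m2 + m3 + m6
    return c

rng = np.random.default_rng(7)
a = rng.standard_normal((128, 128))
b = rng.standard_normal((128, 128))
c = strassen(a, b, cutoff=32)
err = np.linalg.norm(c - a @ b) / np.linalg.norm(a @ b)
```

The seven intermediate products are where the scratch space goes, which is exactly the storage the paper works to reduce.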
Release from or through a wax matrix system. I. Basic release properties of the wax matrix system.
Yonezawa, Y; Ishida, S; Sunada, H
2001-11-01
Release properties of a wax matrix tablet were examined. To obtain basic release properties, the wax matrix tablet was prepared from a physical mixture of drug and wax powder (hydrogenated castor oil) at a fixed mixing ratio. Properties of release from the single flat-faced surface or the curved side surface of the wax matrix tablet were examined. The applicability of the square-root time law and of the Higuchi equation was confirmed. The release rate constant expressed in g/min^(1/2) changed with the release direction; however, the release rate constant expressed in g/(cm^2 min^(1/2)) was almost the same. Hence it was suggested that the release property was almost the same and the wax matrix structure was uniform, independent of release surface or direction, at a fixed mixing ratio. However, these equations could not explain the entire release process. The fit of a semilogarithmic equation was not as good as that of the square-root time law or the Higuchi equation; nevertheless, the semilogarithmic equation was able to simulate the entire release process, even though the fit was somewhat poor. Hence it was suggested that the semilogarithmic equation was sufficient to describe the release process. The release rate constant varied with the release direction, but these release rate constants could be expressed as a function of the effective surface area and initial amount, independent of the release direction.
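The square-root time law tested above amounts to fitting cumulative release Q = k * t^(1/2) with the line constrained through the origin; a minimal sketch with hypothetical release data (the times, amounts, and units are invented for illustration):

```python
import numpy as np

# hypothetical cumulative release (mg per cm^2) versus time (min)
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 90.0])
q_rel = np.array([2.3, 3.1, 4.5, 6.4, 7.7, 9.6])

# least-squares slope through the origin for Q = k * sqrt(t)
root_t = np.sqrt(t)
k = (q_rel @ root_t) / (root_t @ root_t)   # units: mg cm^-2 min^-1/2
```

Normalizing the released amount by the exposed surface area before fitting is what makes k comparable across release directions, which is the paper's observation about the area-normalized rate constant.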
Mlyniec, A; Ekiert, M; Morawska-Chochol, A; Uhl, T
2016-06-01
In this work, we investigate the influence of the surrounding environment and the initial density on the decomposition kinetics of polylactide (PLA). The decomposition of amorphous PLA was investigated by means of reactive molecular dynamics simulations. The computational model simulates the decomposition of the PLA polymer inside the bulk, due to the assumed lack of removal of reaction products from the polymer matrix. We tracked the temperature dependence of water and carbon monoxide production to extract the activation energy of thermal decomposition of PLA. We found that an increased density decreases the activation energy of decomposition by about 50%. Moreover, initiation of decomposition of the amorphous PLA is followed by a rapid decline in activation energy caused by reaction products, which accelerate the hydrolysis of esters. The addition of water molecules decreases the initial activation energy as well as accelerating the decomposition process. Additionally, we have investigated the dependence of density on external loading. Comparison of the pressures needed to obtain the assumed densities shows that this relationship is bilinear, with the slope changing around a density of 1.3 g/cm^3. The conducted analyses provide insight into the thermal decomposition process of the amorphous phase of PLA, which is particularly susceptible to decomposition in amorphous and semi-crystalline PLA polymers. Copyright © 2016 Elsevier Inc. All rights reserved.
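Extracting an activation energy from temperature-dependent rates, as described above, is an Arrhenius fit of ln k against 1/T (a sketch with invented rate constants and a hypothetical Ea, not the paper's reactive-MD data):

```python
import numpy as np

R = 8.314  # gas constant, J / (mol K)

# hypothetical rate constants at several temperatures, generated
# from an assumed activation energy of 150 kJ/mol
temps = np.array([1800.0, 2000.0, 2200.0, 2400.0])   # K
rates = 1e13 * np.exp(-150e3 / (R * temps))          # 1/s

# Arrhenius fit: ln k = ln A - (Ea / R) * (1 / T)
slope, intercept = np.polyfit(1.0 / temps, np.log(rates), 1)
ea_kj = -slope * R / 1e3   # recovered activation energy, kJ/mol
```

A density- or product-induced drop in activation energy, like the one reported above, would appear in this picture as a shallower slope of ln k versus 1/T.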
Domain decomposition for a mixed finite element method in three dimensions
Cai, Z.; Parashkevov, R.R.; Russell, T.F.; Wilson, J.D.; Ye, X.
2003-01-01
We consider the solution of the discrete linear system resulting from a mixed finite element discretization applied to a second-order elliptic boundary value problem in three dimensions. Based on a decomposition of the velocity space, these equations can be reduced to a discrete elliptic problem by eliminating the pressure through the use of substructures of the domain. The practicality of the reduction relies on a local basis, presented here, for the divergence-free subspace of the velocity space. We consider additive and multiplicative domain decomposition methods for solving the reduced elliptic problem, and their uniform convergence is established.
Using dynamic mode decomposition for real-time background/foreground separation in video
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven
The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
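The one-SVD-plus-one-solve pipeline can be sketched as follows (a minimal NumPy illustration on a synthetic one-dimensional "video", not the disclosed real-time implementation; the rank, tolerance, and frame data are hypothetical):

```python
import numpy as np

def dmd_separate(frames, rank, tol=1e-2):
    """Exact DMD split of frames (one column per frame) into a low-rank
    background (modes with eigenvalues near 1) and the sparse remainder."""
    x1, x2 = frames[:, :-1], frames[:, 1:]
    u, s, vt = np.linalg.svd(x1, full_matrices=False)      # the one SVD
    u, s, vt = u[:, :rank], s[:rank], vt[:rank]
    a_tilde = u.conj().T @ x2 @ (vt.conj().T / s)          # projected operator
    evals, w = np.linalg.eig(a_tilde)
    phi = x2 @ (vt.conj().T / s) @ w                       # exact DMD modes
    omega = np.log(evals.astype(complex))                  # per-frame frequencies
    bg = np.abs(omega) < tol                               # near-stationary modes
    b = np.linalg.lstsq(phi.astype(complex),               # the one linear solve
                        frames[:, 0].astype(complex), rcond=None)[0]
    dynamics = np.vander(evals[bg], frames.shape[1], increasing=True)
    background = ((phi[:, bg] * b[bg]) @ dynamics).real
    return background, frames - background

# synthetic "video": static background plus an exponentially decaying blob
bgvec = np.ones(20)
bgvec[5] = 3.0
blob = np.zeros(20)
blob[3] = 5.0
frames = np.stack([bgvec + blob * 0.5 ** k for k in range(10)], axis=1)
background, foreground = dmd_separate(frames, rank=2)
```

The background is reconstructed only from modes whose eigenvalues sit near 1 on the unit circle (frequency near zero, i.e. time-invariant content); everything else lands in the sparse foreground.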
A fast efficient implicit scheme for the gasdynamic equations using a matrix reduction technique
NASA Technical Reports Server (NTRS)
Barth, T. J.; Steger, J. L.
1985-01-01
An efficient implicit finite-difference algorithm for the gasdynamic equations utilizing matrix reduction techniques is presented. A significant reduction in arithmetic operations is achieved without loss of the stability characteristics or generality found in the Beam and Warming approximate factorization algorithm. Steady-state solutions to the conservative Euler equations in generalized coordinates are obtained for transonic flows and used to show that the method offers computational advantages over the conventional Beam and Warming scheme. Existing Beam and Warming codes can be retrofitted with minimal effort. The theoretical extension of the matrix reduction technique to the full Navier-Stokes equations in Cartesian coordinates is presented in detail. Linear stability, using a Fourier stability analysis, is demonstrated and discussed for the one-dimensional Euler equations.
NASA Astrophysics Data System (ADS)
Lee, Gibbeum; Cho, Yeunwoo
2018-01-01
A new semi-analytical approach is presented for solving the matrix eigenvalue problem, or the integral equation, in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of a direct numerical approach to this matrix eigenvalue problem, which may suffer from computational inaccuracy for big data, a pair of integral and differential equations is considered, related to the so-called prolate spheroidal wave functions (PSWF). First, the PSWF is expressed as a summation of a small number of analytical Legendre functions. After substituting them into the PSWF differential equation, a much smaller matrix eigenvalue problem is obtained than in the direct numerical K-L matrix eigenvalue problem. By solving this with minimal numerical effort, the PSWF and the associated eigenvalue of the PSWF differential equation are obtained. Then, the eigenvalue of the PSWF integral equation is analytically expressed in terms of the functional values of the PSWF and the eigenvalues obtained from the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues of the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data such as ordinary irregular waves. It is found that, for the same accuracy, the required memory size of the present method is smaller than that of the direct numerical K-L representation, and the computation time of the present method is shorter than that of the semi-analytical method based on sinusoidal functions.
A two-stage linear discriminant analysis via QR-decomposition.
Ye, Jieping; Li, Qi
2005-06-01
Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problems; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problems. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problems of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship among LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
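The key first stage, a QR decomposition of the small class-centroid matrix followed by LDA in the reduced space, can be sketched as follows (a simplified illustration of the idea; the ridge term and the synthetic two-class data are invented, and details differ from the paper's algorithm):

```python
import numpy as np

def lda_qr(x, y):
    """Two-stage LDA/QR sketch: stage 1 applies QR to the (d, k) centroid
    matrix; stage 2 solves the small LDA eigenproblem in that subspace."""
    classes = np.unique(y)
    k = len(classes)
    centroids = np.stack([x[y == c].mean(axis=0) for c in classes])  # (k, d)
    q, _ = np.linalg.qr(centroids.T)       # (d, k) orthonormal basis: stage 1
    z = x @ q                              # data reduced to k dimensions
    mean = z.mean(axis=0)
    sw = np.zeros((k, k))                  # within-class scatter
    sb = np.zeros((k, k))                  # between-class scatter
    for c in classes:
        zc = z[y == c]
        diff = zc - zc.mean(axis=0)
        sw += diff.T @ diff
        delta = zc.mean(axis=0) - mean
        sb += len(zc) * np.outer(delta, delta)
    # small eigenproblem; a tiny ridge guards against a singular S_w
    evals, evecs = np.linalg.eig(np.linalg.solve(sw + 1e-8 * np.eye(k), sb))
    order = np.argsort(evals.real)[::-1][: k - 1]
    return q @ evecs[:, order].real        # (d, k-1) discriminant directions

rng = np.random.default_rng(1)
x0 = rng.standard_normal((50, 5))
x1 = rng.standard_normal((50, 5)) + np.array([6.0, 0, 0, 0, 0])
x = np.vstack([x0, x1])
y = np.array([0] * 50 + [1] * 50)
w = lda_qr(x, y)                           # single direction for two classes
gap = abs(float((x1 @ w).mean() - (x0 @ w).mean()))
```

The scatter matrices here are only k-by-k, which is the source of the efficiency gain over SVD-based extensions that work with the full d-by-d total scatter.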
Constraint elimination in dynamical systems
NASA Technical Reports Server (NTRS)
Singh, R. P.; Likins, P. W.
1989-01-01
Large space structures (LSSs) and other dynamical systems of current interest are often extremely complex assemblies of rigid and flexible bodies subjected to kinematical constraints. A formulation is presented for the governing equations of constrained multibody systems via the application of singular value decomposition (SVD). The resulting equations of motion are shown to be of minimum dimension.
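The SVD-based elimination rests on computing an orthonormal basis of the constraint Jacobian's null space; admissible velocities are then parameterized by minimal coordinates in that basis, which is what reduces the equations of motion to minimum dimension. A minimal sketch (the single holonomic constraint is a hypothetical example):

```python
import numpy as np

def nullspace(jac, tol=1e-10):
    """Orthonormal basis of the null space of a constraint Jacobian via SVD."""
    _, s, vt = np.linalg.svd(jac)
    rank = int((s > tol).sum())
    return vt[rank:].T          # columns span {v : jac @ v = 0}

# hypothetical single constraint coupling three generalized speeds
jac = np.array([[1.0, 1.0, 1.0]])
basis = nullspace(jac)          # (3, 2): two independent speeds remain
```

Any admissible velocity is then basis @ eta for minimal coordinates eta, and projecting the dynamics onto the basis columns eliminates the constraint forces.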
Vectorial finite elements for solving the radiative transfer equation
NASA Astrophysics Data System (ADS)
Badri, M. A.; Jolivet, P.; Rousseau, B.; Le Corre, S.; Digonnet, H.; Favennec, Y.
2018-06-01
The discrete ordinate method coupled with the finite element method is often used for the spatio-angular discretization of the radiative transfer equation. In this paper we attempt to improve upon such a discretization technique. Instead of using standard finite elements, we reformulate the radiative transfer equation using vectorial finite elements. In comparison to standard finite elements, this reformulation yields faster timings for the linear system assemblies, as well as for the solution phase when using scattering media. The proposed vectorial finite element discretization for solving the radiative transfer equation is cross-validated against a benchmark problem available in the literature. In addition, we have used the method of manufactured solutions to verify the order of accuracy of our discretization technique within different absorbing, scattering, and emitting media. For solving large problems of radiation on parallel computers, the vectorial finite element method is parallelized using domain decomposition. The proposed domain decomposition method scales to a large number of processes, and its performance is unaffected by changes in the optical thickness of the medium. Our parallel solver is used to solve a large scale radiative transfer problem of the Kelvin-cell radiation.
Koopman decomposition of Burgers' equation: What can we learn?
NASA Astrophysics Data System (ADS)
Page, Jacob; Kerswell, Rich
2017-11-01
Burgers' equation is a well known 1D model of the Navier-Stokes equations and admits a selection of equilibria and travelling wave solutions. A series of Burgers' trajectories are examined with Dynamic Mode Decomposition (DMD) to probe the capability of the method to extract coherent structures from "run-down" simulations. The performance of the method depends critically on the choice of observable. We use the Cole-Hopf transformation to derive an observable which has linear, autonomous dynamics and for which the DMD modes overlap exactly with Koopman modes. This observable can accurately predict the flow evolution beyond the time window of the data used in the DMD, and in that sense outperforms other observables motivated by the nonlinearity in the governing equation. The linearizing observable also allows us to make informed decisions about often ambiguous choices in nonlinear problems, such as rank truncation and snapshot spacing. A number of rules of thumb for connecting DMD with the Koopman operator for nonlinear PDEs are distilled from the results. Related problems in low Reynolds number fluid turbulence are also discussed.
NASA Astrophysics Data System (ADS)
Liao, Haitao; Wu, Wenwang; Fang, Daining
2018-07-01
A coupled approach combining the reduced space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified, and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is accomplished, resulting in a simple optimization problem subject to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction in computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.
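The null-space elimination underlying the reduced-space SQP method can be illustrated on a linearly constrained quadratic model, a simplified stand-in for the condensed harmonic balance constraints; the function and parameter names below are ours, not the paper's:

```python
import numpy as np
from scipy.linalg import null_space, lstsq

def reduced_space_minimize(Hq, g, C, d):
    """Null-space (reduced-space) elimination of linear equality constraints
    for a quadratic model: min 0.5 x'Hx + g'x subject to Cx = d.
    Write x = x_p + Z y, with Z a basis of null(C), and solve the
    unconstrained reduced problem in y. (The paper applies the same idea
    with nonlinear harmonic-balance constraints inside SQP.)"""
    x_p = lstsq(C, d)[0]          # particular solution of Cx = d
    Z = null_space(C)             # tangent basis: C @ Z = 0
    Hr = Z.T @ Hq @ Z             # reduced Hessian
    gr = Z.T @ (g + Hq @ x_p)     # reduced gradient
    y = np.linalg.solve(Hr, -gr)
    return x_p + Z @ y
```

For a strictly convex quadratic the reduced problem is solved exactly in one step; in the paper's nonlinear setting this solve is repeated within the SQP iteration.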
Hössjer, Ola; Tyvand, Peder A; Miloh, Touvia
2016-02-01
The classical Kimura solution of the diffusion equation is investigated for a haploid random mating (Wright-Fisher) model, with one-way mutations and initial-value specified by the founder population. The validity of the transient diffusion solution is checked by exact Markov chain computations, using a Jordan decomposition of the transition matrix. The conclusion is that the one-way diffusion model mostly works well, although the rate of convergence depends on the initial allele frequency and the mutation rate. The diffusion approximation is poor for mutation rates so low that the non-fixation boundary is regular. When this happens we perturb the diffusion solution around the non-fixation boundary and obtain a more accurate approximation that takes quasi-fixation of the mutant allele into account. The main application is to quantify how fast a specific genetic variant of the infinite alleles model is lost. We also discuss extensions of the quasi-fixation approach to other models with small mutation rates. Copyright © 2015 Elsevier Inc. All rights reserved.
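The exact Markov chain computation used to check the diffusion solution can be sketched for the haploid Wright-Fisher model with one-way mutation; the function names and parameter values here are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.stats import binom

def wf_transition_matrix(N, u):
    """Haploid Wright-Fisher chain with one-way mutation A -> a at rate u.
    State i = number of A alleles; offspring counts are Binomial(N, p_i)
    with p_i = (i/N)*(1 - u). Returns the (N+1)x(N+1) row-stochastic
    transition matrix; state 0 (loss of A) is absorbing."""
    i = np.arange(N + 1)
    p = (i / N) * (1.0 - u)
    return binom.pmf(np.arange(N + 1)[None, :], N, p[:, None])

def loss_probability(N, u, i0, generations):
    """Probability that allele A has been lost after the given number of
    generations, starting from i0 copies."""
    P = wf_transition_matrix(N, u)
    dist = np.zeros(N + 1)
    dist[i0] = 1.0
    for _ in range(generations):
        dist = dist @ P
    return dist[0]
```

Iterating the distribution like this (or, as in the paper, using a Jordan decomposition of the transition matrix) gives the exact transient behavior against which the diffusion approximation can be compared.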
NASA Astrophysics Data System (ADS)
Marx, Alain; Lütjens, Hinrich
2017-03-01
A hybrid MPI/OpenMP parallel version of the XTOR-2F code [Lütjens and Luciani, J. Comput. Phys. 229 (2010) 8130] solving the two-fluid MHD equations in full tokamak geometry by means of an iterative Newton-Krylov matrix-free method has been developed. The present work shows that the code has been parallelized effectively despite the numerical profile of the problem solved by XTOR-2F, i.e., a discretization with pseudo-spectral representations in all angular directions, the stiffness of the two-fluid stability problem in tokamaks, and the use of a direct LU decomposition to invert the physical pre-conditioner at every Krylov iteration of the solver. The execution time of the parallelized version is an order of magnitude smaller than that of the sequential one for low-resolution cases, with an increasing speedup as the discretization mesh is refined. Moreover, it allows simulations to be performed at higher resolutions, previously unattainable because of memory limitations.
Phase retrieval in annulus sector domain by non-iterative methods
NASA Astrophysics Data System (ADS)
Wang, Xiao; Mao, Heng; Zhao, Da-zun
2008-03-01
Phase retrieval can be achieved by solving the intensity transport equation (ITE) under the paraxial approximation. In the case of uniform illumination, a Neumann boundary condition is involved, which makes the solving process more complicated. The primary mirror of a large-aperture telescope is usually segmented, and the shape of a segment is often like an annulus sector. Accordingly, it is necessary to analyze phase retrieval in the annulus sector domain. Two non-iterative methods are considered for recovering the phase. The matrix method is based on the decomposition of the solution into a series of orthogonalized polynomials, while the frequency filtering method depends on the inverse computation process of the ITE. Simulations show that both methods can eliminate the effect of the Neumann boundary condition, save a great deal of computation time, and recover the distorted phase well. The wavefront error (WFE) RMS can be less than 0.05 wavelength, even when some noise is added.
Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques
2012-09-01
The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes this approach difficult to characterize and evaluate. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang's EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD appear to perform poorly and are very time consuming. So in this paper, an extension of the PDE-based approach to the 2-D space is extensively described. This approach has been applied to both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results have been provided in the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
NASA Astrophysics Data System (ADS)
Hipp, J. R.; Encarnacao, A.; Ballard, S.; Young, C. J.; Phillips, W. S.; Begnaud, M. L.
2011-12-01
Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P-velocity model (SALSA3D) that provides superior first P travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we show a methodology for accomplishing this by exploiting the full model covariance matrix. Our model has on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix G (which includes Tikhonov regularization terms) is multiplied by its transpose to form GTG, which is written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GTG)-1 by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of GTG, which is subsequently inverted. Next, we employ OOC matrix multiply methods to calculate the model covariance matrix from (GTG)-1 and an assumed data covariance matrix. Given the model covariance matrix we solve for the travel-time covariance associated with arbitrary ray paths by integrating the model covariance along both ray paths. Setting the paths equal gives the variance for that path. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
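An in-core sketch of this covariance pipeline in NumPy/SciPy follows; the paper's actual contribution is performing these steps out-of-core on blocked sub-matrices for a model with roughly half a million nodes, and all names here are ours:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def model_covariance(G, data_cov):
    """Form C_m = (G'G)^-1 G' C_d G (G'G)^-1 via a Cholesky decomposition
    of G'G, where G already contains the regularization rows. (The paper
    does this out-of-core in sub-matrix blocks; this sketch is in-core.)"""
    GtG = G.T @ G
    chol = cho_factor(GtG)                      # Cholesky of G'G
    GtG_inv = cho_solve(chol, np.eye(GtG.shape[1]))
    return GtG_inv @ G.T @ data_cov @ G @ GtG_inv

def travel_time_covariance(C_m, path_a, path_b):
    """Covariance of two predicted travel times: contract the model
    covariance with both ray-path sensitivity vectors; with
    path_a == path_b this is the variance for that path."""
    return path_a @ C_m @ path_b
```

With an uncorrelated data covariance sigma^2 I, the model covariance reduces to sigma^2 (G'G)^-1, which gives a quick consistency check on the implementation.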
NASA Astrophysics Data System (ADS)
Hipp, J. R.; Ballard, S.; Begnaud, M. L.; Encarnacao, A. V.; Young, C. J.; Phillips, W. S.
2015-12-01
Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P- and S-velocity model (SALSA3D) that provides superior first P and first S travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from our latest tomographic model. Typical global 3D SALSA3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix G (which includes a prior model covariance constraint) is multiplied by its transpose to form GTG, which is written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GTG)-1 by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of GTG, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (GTG)-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel-time prediction uncertainty for that single path.
Response of a Rotating Propeller to Aerodynamic Excitation
NASA Technical Reports Server (NTRS)
Arnoldi, Walter E.
1949-01-01
The flexural vibration of a rotating propeller blade with clamped shank is analyzed with the object of presenting, in matrix form, equations for the elastic bending moments in forced vibration resulting from aerodynamic forces applied at a fixed multiple of rotational speed. Matrix equations are also derived which define the critical speeds and mode shapes for any excitation order and the relation between critical speed and blade angle. Reference is given to standard works on the numerical solution of matrix equations of the forms derived. The use of a segmented blade as an approximation to a continuous blade provides a simple means for obtaining the matrix solution from the integral equation of equilibrium, so that, in the numerical application of the method presented, the several matrix arrays of the basic physical characteristics of the propeller blade are of simple form, and their simplicity is preserved until, with the solution in sight, numerical manipulations well-known in matrix algebra yield the desired critical speeds and mode shapes from which the vibration at any operating condition may be synthesized. A close correspondence between the familiar Stodola method and the matrix method is pointed out, indicating that any features of novelty are characteristic not of the analytical procedure but only of the abbreviation, condensation, and efficient organization of the numerical procedure made possible by the use of classical matrix theory.
Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows, I: Basic Theory
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, H. C.
2003-01-01
The objective of this paper is to extend our recently developed highly parallelizable nonlinear stable high order schemes for complex multiscale hydrodynamic applications to the viscous MHD equations. These schemes employed multiresolution wavelets as adaptive numerical dissipation controls to limit the amount of and to aid the selection and/or blending of the appropriate types of dissipation to be used. The new scheme is formulated for both the conservative and non-conservative form of the MHD equations in curvilinear grids. The four advantages of the present approach over existing MHD schemes reported in the open literature are as follows. First, the scheme is constructed for long-time integrations of shock/turbulence/combustion MHD flows. Available schemes are too diffusive for long-time integrations and/or turbulence/combustion problems. Second, unlike existing schemes for the conservative MHD equations which suffer from ill-conditioned eigen-decompositions, the present scheme makes use of a well-conditioned eigen-decomposition obtained from a minor modification of the eigenvectors of the non-conservative MHD equations to solve the conservative form of the MHD equations. Third, this approach of using the non-conservative eigensystem when solving the conservative equations also works well in the context of standard shock-capturing schemes for the MHD equations. Fourth, a new approach to minimize the numerical error of the divergence-free magnetic condition for high order schemes is introduced. Numerical experiments with typical MHD model problems revealed the applicability of the newly developed schemes for the MHD equations.
Electrochemical and Infrared Absorption Spectroscopy Detection of SF6 Decomposition Products
Dong, Ming; Ren, Ming; Ye, Rixin
2017-01-01
Sulfur hexafluoride (SF6) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF6 decomposition and ultimately generates several types of decomposition products. These SF6 decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF6 decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF6 gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. An SF6 decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF6 gas decomposition and is verified to reliably and accurately detect the gas components and concentrations. PMID:29140268
Calibration methods influence quantitative material decomposition in photon-counting spectral CT
NASA Astrophysics Data System (ADS)
Curtis, Tyler E.; Roeder, Ryan K.
2017-03-01
Photon-counting detectors and nanoparticle contrast agents can potentially enable molecular imaging and material decomposition in computed tomography (CT). Material decomposition has been investigated using both simulated and acquired data sets. However, the effect of calibration methods on material decomposition has not been systematically investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on quantitative material decomposition. A commercially available photon-counting spectral micro-CT (MARS Bioimaging) was used to acquire images with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material basis matrix values were determined using multiple linear regression models and material decomposition was performed using a maximum a posteriori estimator. The accuracy of quantitative material decomposition was evaluated by the root mean squared error (RMSE), specificity, sensitivity, and area under the curve (AUC). An increased maximum concentration (range) in the calibration significantly improved RMSE, specificity and AUC. The effects of an increased number of concentrations in the calibration were not statistically significant for the conditions in this study. The overall results demonstrated that the accuracy of quantitative material decomposition in spectral CT is significantly influenced by calibration methods, which must therefore be carefully considered for the intended diagnostic imaging application.
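The calibration-then-decomposition workflow can be sketched with ordinary least squares. The paper fits the basis matrix by multiple linear regression and decomposes with a maximum a posteriori estimator; plain least squares is the simplest hypothetical stand-in, and all names below are ours:

```python
import numpy as np

def calibrate_basis(conc, signals):
    """Fit the material basis matrix M (bins x materials) by least squares
    from calibration measurements: signals ~= conc @ M.T, where conc is
    (n_samples x n_materials) known concentrations and signals is
    (n_samples x n_bins) measured energy-bin values."""
    Mt, *_ = np.linalg.lstsq(conc, signals, rcond=None)
    return Mt.T

def decompose(M, y):
    """Least-squares material decomposition of one measured bin vector y
    into material concentrations (a simple stand-in for the MAP
    estimator used in the paper)."""
    c, *_ = np.linalg.lstsq(M, y, rcond=None)
    return c
```

In this linear model the abstract's finding is intuitive: widening the concentration range of the calibration samples improves the conditioning of `conc` and hence the fitted basis matrix, while merely adding more points inside the same range helps less.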
A Chebyshev matrix method for spatial modes of the Orr-Sommerfeld equation
NASA Technical Reports Server (NTRS)
Danabasoglu, G.; Biringen, S.
1989-01-01
The Chebyshev matrix collocation method is applied to obtain the spatial modes of the Orr-Sommerfeld equation for Poiseuille flow and the Blasius boundary layer. The problem is linearized by the companion matrix technique for the semi-infinite domain using a mapping transformation. The method can be easily adapted to problems with different boundary conditions requiring different transformations.
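The core ingredient of such a method, the Chebyshev collocation differentiation matrix on Gauss-Lobatto points, can be constructed as follows. This is the standard recipe (e.g. Trefethen, "Spectral Methods in MATLAB"), not the authors' code:

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation differentiation matrix D and nodes x on the
    Gauss-Lobatto points x_j = cos(pi*j/N). D @ f(x) approximates f'(x)
    and is exact for polynomials of degree <= N."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T          # X[i, j] = x_i
    dX = X - X.T                          # dX[i, j] = x_i - x_j
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))           # diagonal: negative row sums
    return D, x
```

For an eigenvalue problem like Orr-Sommerfeld, powers of D are assembled into the discrete operator and boundary rows are replaced by the boundary conditions; the semi-infinite Blasius domain additionally requires the mapping transformation mentioned in the abstract.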
Thermal Decomposition Mechanism of Butyraldehyde
NASA Astrophysics Data System (ADS)
Hatten, Courtney D.; Warner, Brian; Wright, Emily; Kaskey, Kevin; McCunn, Laura R.
2013-06-01
The thermal decomposition of butyraldehyde, CH_3CH_2CH_2C(O)H, has been studied in a resistively heated SiC tubular reactor. Products of pyrolysis were identified via matrix-isolation FTIR spectroscopy and photoionization mass spectrometry in separate experiments. Carbon monoxide, ethene, acetylene, water and ethylketene were among the products detected. To unravel the mechanism of decomposition, pyrolysis of a partially deuterated sample of butyraldehyde was studied. Also, the concentration of butyraldehyde in the carrier gas was varied in experiments to determine the presence of bimolecular reactions. The results of these experiments can be compared to the dissociation pathways observed in similar aldehydes and are relevant to the processing of biomass, foods, and tobacco.
NASA Astrophysics Data System (ADS)
Dyrdin, V. V.; Smirnov, V. G.; Kim, T. L.; Manakov, A. Yu.; Fofanov, A. A.; Kartopolova, I. S.
2017-06-01
The physical processes occurring in the coal-natural gas system under gas pressure release were studied experimentally. The possibility of gas hydrates being present in the inner space of natural coal was shown; their decomposition leads to an increase in the amount of gas passing into the free state. The decomposition of gas hydrates can be caused either by an increase in the seam temperature or by a pressure decrease to below the gas hydrate equilibrium curve. The contribution of methane released during gas hydrate decomposition should be taken into account in the design of safe mining technologies for coal seams prone to gas-dynamic phenomena.
Matrix elements and duality for type 2 unitary representations of the Lie superalgebra gl(m|n)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werry, Jason L.; Gould, Mark D.; Isaac, Phillip S.
The characteristic identity formalism discussed in our recent articles is further utilized to derive matrix elements of type 2 unitary irreducible gl(m|n) modules. In particular, we give matrix element formulae for all gl(m|n) generators, including the non-elementary generators, together with their phases on finite dimensional type 2 unitary irreducible representations which include the contravariant tensor representations and an additional class of essentially typical representations. Remarkably, we find that the type 2 unitary matrix element equations coincide with the type 1 unitary matrix element equations for non-vanishing matrix elements up to a phase.
Isotropic matrix elements of the collision integral for the Boltzmann equation
NASA Astrophysics Data System (ADS)
Ender, I. A.; Bakaleinikov, L. A.; Flegontova, E. Yu.; Gerasimenko, A. B.
2017-09-01
We have proposed an algorithm for constructing matrix elements of the collision integral for the nonlinear Boltzmann equation isotropic in velocities. These matrix elements have been used to start the recurrent procedure for calculating matrix elements of the velocity-nonisotropic collision integral described in our previous publication. In addition, isotropic matrix elements are of independent interest for calculating isotropic relaxation in a number of physical kinetics problems. It has been shown that the coefficients of expansion of isotropic matrix elements in Ω integrals are connected by the recurrent relations that make it possible to construct the procedure of their sequential determination.
NASA Technical Reports Server (NTRS)
Kroll, R. I.; Clemmons, R. E.
1979-01-01
The equations of motion program L217 formulates the matrix coefficients for a set of second order linear differential equations that describe the motion of an airplane relative to its level equilibrium flight condition. Aerodynamic data from FLEXSTAB or Doublet Lattice (L216) programs can be used to derive the equations for quasi-steady or full unsteady aerodynamics. The data manipulation and the matrix coefficient formulation are described.
Estimated Satellite Cluster Elements in Near Circular Orbit
1988-12-01
cluster is investigated. The on-board estimator is the U-D covariance factorization filter with dynamics based on the Clohessy-Wiltshire equations. ... Appropriate values for the velocity vector vi can be found from the Clohessy-Wiltshire equations [9] (these equations will be explained in detail in the text). ... Also explained in this text is the state transition matrix, which was developed from the Clohessy-Wiltshire equations of motion [9: page 3].
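The Clohessy-Wiltshire dynamics referenced in these excerpts have a standard closed-form state transition matrix; the sketch below uses our own conventions (x radial, y along-track, z cross-track) and is not taken from the report:

```python
import numpy as np

def cw_stm(n, t):
    """Closed-form state transition matrix of the Clohessy-Wiltshire
    equations for the relative state [x, y, z, vx, vy, vz] about a
    circular reference orbit with mean motion n, so that
    state(t) = cw_stm(n, t) @ state(0)."""
    s, c = np.sin(n * t), np.cos(n * t)
    return np.array([
        [4 - 3*c,      0, 0,    s/n,         2*(1 - c)/n,      0],
        [6*(s - n*t),  1, 0,    2*(c - 1)/n, (4*s - 3*n*t)/n,  0],
        [0,            0, c,    0,           0,                s/n],
        [3*n*s,        0, 0,    c,           2*s,              0],
        [6*n*(c - 1),  0, 0,   -2*s,         4*c - 3,          0],
        [0,            0, -n*s, 0,           0,                c],
    ])
```

Because the equations are linear and time-invariant, the matrix satisfies the semigroup property Phi(t1) Phi(t2) = Phi(t1 + t2), which also serves as a convenient correctness check, and it is exactly the transition matrix a U-D factorization filter would propagate.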
Analysis of network clustering behavior of the Chinese stock market
NASA Astrophysics Data System (ADS)
Chen, Huan; Mai, Yong; Li, Sai-Ping
2014-11-01
Random Matrix Theory (RMT) and the decomposition of correlation matrix method are employed to analyze the spatial structure of stock interactions and collective behavior in the Shanghai and Shenzhen stock markets in China. The result shows that there exist prominent sector structures, with subsectors including the Real Estate (RE), Commercial Banks (CB), Pharmaceuticals (PH), Distillers & Vintners (DV) and Steel (ST) industries. Furthermore, the RE and CB subsectors are mostly anti-correlated. We further study the temporal behavior of the dataset and find that while the sector structures are relatively stable from 2007 through 2013, the correlation between the real estate and commercial bank stocks shows large variations. By employing the ensemble empirical mode decomposition (EEMD) method, we show that this anti-correlation behavior is closely related to the monetary and austerity policies of the Chinese government during the period of study.
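The RMT part of such an analysis can be sketched as follows: eigen-decompose the empirical correlation matrix and compare its spectrum to the Marchenko-Pastur noise band, since eigenpairs above the band carry the market and sector structure. This is a generic sketch, not the authors' pipeline:

```python
import numpy as np

def correlation_modes(returns):
    """Eigen-decompose the return correlation matrix. For pure noise,
    RMT predicts eigenvalues inside the Marchenko-Pastur band with upper
    edge (1 + sqrt(N/T))^2; eigenpairs above it reflect market-wide or
    sector co-movement. returns is (T times, N stocks)."""
    T, N = returns.shape
    z = (returns - returns.mean(0)) / returns.std(0)   # standardize
    C = (z.T @ z) / T                                  # correlation matrix
    w, V = np.linalg.eigh(C)
    lam_max = (1.0 + np.sqrt(N / T)) ** 2              # noise band edge
    return C, w[::-1], V[:, ::-1], lam_max             # descending order
```

The leading eigenvector (the "market mode") is typically removed before clustering the remaining significant eigenvectors into sectors, which is the decomposition-of-correlation-matrix step the abstract refers to.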
FaCSI: A block parallel preconditioner for fluid-structure interaction in hemodynamics
NASA Astrophysics Data System (ADS)
Deparis, Simone; Forti, Davide; Grandperrin, Gwenol; Quarteroni, Alfio
2016-12-01
Modeling Fluid-Structure Interaction (FSI) in the vascular system is mandatory to reliably compute mechanical indicators in vessels undergoing large deformations. In order to cope with the computational complexity of the coupled 3D FSI problem after discretizations in space and time, a parallel solution is often mandatory. In this paper we propose a new block parallel preconditioner for the coupled linearized FSI system obtained after space and time discretization. We name it FaCSI to indicate that it exploits the Factorized form of the linearized FSI matrix, the use of static Condensation to formally eliminate the interface degrees of freedom of the fluid equations, and the use of a SIMPLE preconditioner for saddle-point problems. FaCSI is built upon a block Gauss-Seidel factorization of the FSI Jacobian matrix and it uses ad-hoc preconditioners for each physical component of the coupled problem, namely the fluid, the structure and the geometry. In the fluid subproblem, after operating static condensation of the interface fluid variables, we use a SIMPLE preconditioner on the reduced fluid matrix. Moreover, to efficiently deal with a large number of processes, FaCSI exploits efficient single field preconditioners, e.g., based on domain decomposition or the multigrid method. We measure the parallel performances of FaCSI on a benchmark cylindrical geometry and on a problem of physiological interest, namely the blood flow through a patient-specific femoropopliteal bypass. We analyze the dependence of the number of linear solver iterations on the cores count (scalability of the preconditioner) and on the mesh size (optimality).
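The block Gauss-Seidel structure behind such a preconditioner can be illustrated on a small dense system. This sketch uses exact inner solves on each diagonal block, whereas FaCSI substitutes tailored per-field preconditioners (SIMPLE, domain decomposition, multigrid); the function and block sizes are ours:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def block_gauss_seidel_prec(A, sizes):
    """One lower block Gauss-Seidel sweep as a preconditioner: solve each
    diagonal block in turn, feeding already-updated unknowns into the
    later blocks (here with dense inner solves)."""
    offsets = np.cumsum([0] + list(sizes))
    blocks = [(offsets[i], offsets[i + 1]) for i in range(len(sizes))]

    def apply(r):
        z = np.zeros_like(r)
        for (i0, i1) in blocks:
            rhs = r[i0:i1] - A[i0:i1, :i0] @ z[:i0]   # updated coupling
            z[i0:i1] = np.linalg.solve(A[i0:i1, i0:i1], rhs)
        return z

    n = offsets[-1]
    return LinearOperator((n, n), matvec=apply, dtype=A.dtype)
```

Used inside a Krylov solver such as GMRES, the preconditioner trades a modest per-iteration cost for a much smaller iteration count, which is the scalability effect the paper measures for its fluid/structure/geometry blocks.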
Koopman Mode Decomposition Methods in Dynamic Stall: Reduced Order Modeling and Control
2015-11-10
the flow phenomena by separating them into individual modes. The technique of Proper Orthogonal Decomposition (POD), see [Holmes: 1998], is a popular ... sampled values h(k), k = 0, ..., 2M-1, of the exponential sum: 1. Solve the linear system for the coefficients of the Prony polynomial. 2. Compute all zeros z_j in D, j = 1, ..., M, of the Prony polynomial, i.e., calculate all eigenvalues of the associated companion matrix, and form f_j = log z_j for j = 1, ..., M, where log is the principal value of the complex logarithm.
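The two-step Prony procedure outlined in the excerpt can be sketched as follows; this is our implementation of the classical method, including the weight-recovery step the excerpt truncates:

```python
import numpy as np

def prony(h, M):
    """Classical Prony method: recover exponents f_j and weights c_j of
    h(k) = sum_j c_j * exp(f_j * k) from 2M samples h(0), ..., h(2M-1).
    Step 1 solves a Hankel system for the Prony polynomial coefficients;
    step 2 takes its roots (the companion-matrix eigenvalues) and sets
    f_j = log z_j."""
    h = np.asarray(h, dtype=complex)
    H = np.array([h[i:i + M] for i in range(M)])   # Hankel system matrix
    p = np.linalg.solve(H, -h[M:2 * M])            # a_0, ..., a_{M-1}
    # Prony polynomial z^M + a_{M-1} z^{M-1} + ... + a_0 (np.roots wants
    # highest degree first).
    z = np.roots(np.concatenate(([1.0], p[::-1])))
    f = np.log(z)
    # Recover weights from the Vandermonde system h(k) = sum_j c_j z_j^k.
    V = np.vander(z, 2 * M, increasing=True).T
    c, *_ = np.linalg.lstsq(V, h, rcond=None)
    return f, c
```

On noise-free data from an exact exponential sum the recovered z_j = exp(f_j) match the true ratios to machine precision; with noisy flow data, least-squares variants of both steps are normally preferred.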
A Study of Strong Stability of Distributed Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Cataltepe, Tayfun
1989-01-01
The strong stability of distributed systems is studied and the problem of characterizing strongly stable semigroups of operators associated with distributed systems is addressed. Main emphasis is on contractive systems. Three different approaches to the characterization of strongly stable contractive semigroups are developed. The first one is an operator theoretical approach. Using the theory of dilations, it is shown that every strongly stable contractive semigroup is related to the left shift semigroup on an L^2 space. Then, a decomposition for the state space which identifies strongly stable and unstable states is introduced. Based on this decomposition, conditions for a contractive semigroup to be strongly stable are obtained. Finally, extensions of Lyapunov's equation for distributed parameter systems are investigated. Sufficient conditions for weak and strong stability of uniformly bounded semigroups are obtained by relaxing the equivalent norm condition on the right hand side of the Lyapunov equation. These characterizations are then applied to the problem of feedback stabilization. First, it is shown via the state space decomposition that under certain conditions a contractive system (A,B) can be strongly stabilized by the feedback -B*. Then, application of the extensions of the Lyapunov equation results in sufficient conditions for weak, strong, and exponential stabilization of contractive systems by the feedback -B*. Finally, it is shown that for a contractive system dx/dt = Ax + Bu (where B is any bounded linear operator), there is a related linear quadratic regulator problem and a corresponding steady-state Riccati equation which always has a bounded nonnegative solution.
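A finite-dimensional analogue of the Lyapunov characterization is easy to demonstrate; the thesis works with infinite-dimensional semigroups, so this is only a toy illustration using SciPy's continuous Lyapunov solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# For a Hurwitz A (all eigenvalues in the open left half-plane), the
# Lyapunov equation A P + P A^T = -Q with Q > 0 has a unique positive
# definite solution P, certifying stability of dx/dt = A x.
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
Q = np.eye(2)
P = solve_continuous_lyapunov(A, -Q)   # solves A X + X A^T = -Q
```

In the distributed-parameter setting of the thesis, the analogous operator equation need not have a solution with these properties, which is why the text relaxes the right-hand-side condition to obtain only weak or strong (rather than exponential) stability.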
NASA Astrophysics Data System (ADS)
Ghoraani, Behnaz; Krishnan, Sridhar
2009-12-01
The number of people affected by speech problems is increasing as the modern world places growing demands on the human voice via mobile telephones, voice recognition software, and interpersonal verbal communications. In this paper, we propose a novel methodology for automatic pattern classification of pathological voices. The main contribution of this paper is the extraction of meaningful and unique features using an adaptive time-frequency distribution (TFD) and nonnegative matrix factorization (NMF). We construct the adaptive TFD as an effective signal analysis domain to dynamically track the nonstationarity in the speech and utilize NMF as a matrix decomposition (MD) technique to quantify the constructed TFD. The proposed method extracts meaningful and unique features from the joint TFD of the speech, and automatically identifies and measures the abnormality of the signal. Depending on the abnormality measure of each signal, we classify the signal into normal or pathological. The proposed method is applied to the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database, which consists of 161 pathological and 51 normal speakers, and an overall classification accuracy of 98.6% was achieved.
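The NMF step can be sketched with the classic Lee-Seung multiplicative updates; this is a generic Frobenius-norm NMF standing in for the decomposition the authors apply to the adaptive TFD matrix, and the rank and iteration count are illustrative:

```python
import numpy as np

def nmf(V, r, iters=3000, seed=0):
    """Lee-Seung multiplicative updates for V ~= W @ H with nonnegative
    factors, minimizing the Frobenius reconstruction error. For a TFD,
    columns of W act as spectral bases and rows of H as their temporal
    activations, which is what feature extraction builds on."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update bases
    return W, H
```

The multiplicative form keeps both factors nonnegative by construction, which is what makes the extracted time-frequency features physically interpretable.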
A new lumped-parameter model for flow in unsaturated dual-porosity media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmerman, Robert W.; Hadgu, Teklu; Bodvarsson, Gudmundur S.
A new lumped-parameter approach to simulating unsaturated flow processes in dual-porosity media such as fractured rocks or aggregated soils is presented. Fluid flow between the fracture network and the matrix blocks is described by a non-linear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. Unlike a Warren-Root-type equation, this equation is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into an existing unsaturated flow simulator, to serve as a source/sink term for fracture gridblocks. Flow processes are then simulated using only fracture gridblocks in the computational grid. This new lumped-parameter approach has been tested on two problems involving transient flow in fractured/porous media, and compared with simulations performed using explicit discretization of the matrix blocks. The new procedure seems to accurately simulate flow processes in unsaturated fractured rocks, and typically requires an order of magnitude less computational time than do simulations using fully-discretized matrix blocks.
Pfeifle, Mark; Ma, Yong-Tao; Jasper, Ahren W; Harding, Lawrence B; Hase, William L; Klippenstein, Stephen J
2018-05-07
Ozonolysis produces chemically activated carbonyl oxides (Criegee intermediates, CIs) that are either stabilized or decompose directly. This branching has an important impact on atmospheric chemistry. Prior theoretical studies have employed statistical models for energy partitioning to the CI arising from dissociation of the initially formed primary ozonide (POZ). Here, we used direct dynamics simulations to explore this partitioning for decomposition of c-C2H4O3, the POZ in ethylene ozonolysis. A priori estimates for the overall stabilization probability were then obtained by coupling the direct dynamics results with master equation simulations. Trajectories were initiated at the concerted cycloreversion transition state, as well as the second transition state of a stepwise dissociation pathway, both leading to a CI (H2COO) and formaldehyde (H2CO). The resulting CI energy distributions were incorporated in master equation simulations of CI decomposition to obtain channel-specific stabilized CI (sCI) yields. Master equation simulations of POZ formation and decomposition, based on new high-level electronic structure calculations, were used to predict yields for the different POZ decomposition channels. A non-negligible contribution of stepwise POZ dissociation was found, and new mechanistic aspects of this pathway were elucidated. By combining the trajectory-based channel-specific sCI yields with the channel branching fractions, an overall sCI yield of (48 ± 5)% was obtained. Non-statistical energy release was shown to measurably affect sCI formation, with statistical models predicting significantly lower overall sCI yields (∼30%). Within the range of experimental literature values (35%-54%), our trajectory-based calculations favor those clustered at the upper end of the spectrum.
MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion
NASA Astrophysics Data System (ADS)
Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong
This paper investigates the simulation of a gradient-based recurrent neural network for online solution of the matrix-inverse problem. Several important techniques are employed to simulate such a neural system. 1) The Kronecker product of matrices is introduced to transform a matrix differential equation (MDE) into a vector differential equation (VDE); finally, a standard ordinary differential equation (ODE) is obtained. 2) The MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and efficacy of the gradient-based neural network for online constant matrix inversion.
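The Kronecker-product reduction described in the abstract can be sketched in a few lines. This is a hedged illustration only: it uses SciPy's RK45 integrator in place of MATLAB's ode45, a linear activation, and a gain parameter `gamma` chosen for the example rather than taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def gnn_inverse(A, gamma=10.0, t_end=2.0):
    """Gradient neural dynamics dX/dt = -gamma * A.T @ (A @ X - I), rewritten
    as a vector ODE via the Kronecker product, since with column-major vec
    we have vec(A.T A X) = (I kron A.T A) vec(X); integrated with RK45,
    SciPy's analogue of MATLAB's ode45."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A.T @ A)      # I kron (A^T A)
    b = A.T.reshape(-1, order="F")       # vec(A^T)
    rhs = lambda t, x: -gamma * (M @ x - b)
    sol = solve_ivp(rhs, (0.0, t_end), np.zeros(n * n), rtol=1e-8, atol=1e-10)
    return sol.y[:, -1].reshape((n, n), order="F")
```

For a nonsingular constant matrix the state converges to the inverse, since the unique equilibrium of the dynamics is A X = I.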
SAR matrices: automated extraction of information-rich SAR tables from large compound data sets.
Wassermann, Anne Mai; Haebel, Peter; Weskamp, Nils; Bajorath, Jürgen
2012-07-23
We introduce the SAR matrix data structure that is designed to elucidate SAR patterns produced by groups of structurally related active compounds, which are extracted from large data sets. SAR matrices are systematically generated and sorted on the basis of SAR information content. Matrix generation is computationally efficient and enables processing of large compound sets. The matrix format is reminiscent of SAR tables, and SAR patterns revealed by different categories of matrices are easily interpretable. The structural organization underlying matrix formation is more flexible than standard R-group decomposition schemes. Hence, the resulting matrices capture SAR information in a comprehensive manner.
Gravitational lensing by eigenvalue distributions of random matrix models
NASA Astrophysics Data System (ADS)
Martínez Alonso, Luis; Medina, Elena
2018-05-01
We propose to use eigenvalue densities of unitary random matrix ensembles as mass distributions in gravitational lensing. The corresponding lens equations reduce to algebraic equations in the complex plane which can be treated analytically. We prove that these models can be applied to describe lensing by systems of edge-on galaxies. We illustrate our analysis with the Gaussian and the quartic unitary matrix ensembles.
Blumthaler, Ingrid; Oberst, Ulrich
2012-03-01
Control design is among the most important and difficult tasks of control engineering and has therefore been treated by many prominent researchers and in many textbooks, the systems being generally described by their transfer matrices or by Rosenbrock equations and more recently also as behaviors. Our approach to controller design uses, in addition to the ideas of our predecessors on coprime factorizations of transfer matrices and on the parametrization of stabilizing compensators, a new mathematical technique which enables simpler design and also new theorems in spite of the many outstanding results of the literature: (1) We use an injective cogenerator signal module ℱ over the polynomial algebra [Formula: see text] (F an infinite field), a saturated multiplicatively closed set T of stable polynomials and its quotient ring [Formula: see text] of stable rational functions. This enables the simultaneous treatment of continuous and discrete systems and of all notions of stability, called T-stability. We investigate stabilizing control design by output feedback of input/output (IO) behaviors and study the full feedback IO behavior, especially its autonomous part and not only its transfer matrix. (2) The new technique is characterized by the permanent application of the injective cogenerator quotient signal module [Formula: see text] and of quotient behaviors [Formula: see text] of [Formula: see text]-behaviors B. (3) For the control tasks of tracking, disturbance rejection, model matching, and decoupling, and for not necessarily proper plants, we derive necessary and sufficient conditions for the existence of proper stabilizing compensators with proper and stable closed-loop behaviors, parametrize all such compensators as IO behaviors and not only their transfer matrices, and give new algorithms for their construction. Moreover we solve the problem of pole placement or spectral assignability for the complete feedback behavior.
The properness of the full feedback behavior ensures the absence of impulsive solutions in the continuous case, and that of the compensator enables its realization by Kalman state space equations or elementary building blocks. We note that every behavior admits an IO decomposition with proper transfer matrix, but that most of these decompositions do not have this property, and therefore we do not assume the properness of the plant. (4) The new technique can also be applied to more general control interconnections according to Willems, in particular to two-parameter feedback compensators and to the recent tracking framework of Fiaz/Takaba/Trentelman. In contrast to these authors, however, we pay special attention to the properness of all constructed transfer matrices which requires more subtle algorithms.
Notes on implementation of Coulomb friction in coupled dynamical simulations
NASA Technical Reports Server (NTRS)
Vandervoort, R. J.; Singh, R. P.
1987-01-01
A coupled dynamical system is defined as an assembly of rigid/flexible bodies that may be coupled by kinematic connections. The interfaces between bodies are modeled using hinges having 0 to 6 degrees of freedom. The equations of motion are presented for a mechanical system of n flexible bodies in a topological tree configuration. The Lagrange form of the D'Alembert principle was employed to derive the equations. The equations of motion are augmented by the kinematic constraint equations. This augmentation is accomplished via the method of singular value decomposition.
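The SVD step used to augment the equations of motion with kinematic constraints can be illustrated with a minimal static sketch. The function names and the test case below are assumptions for illustration; the paper's formulation additionally carries velocity-dependent and flexible-body terms.

```python
import numpy as np

def nullspace_basis(B, tol=1e-12):
    """Orthonormal basis of null(B) via singular value decomposition; the
    columns span the independent motion directions allowed by the
    kinematic constraints B @ qdd = 0."""
    U, s, Vt = np.linalg.svd(B)
    rank = int(np.sum(s > tol * s.max()))
    return Vt[rank:].T

def reduce_dynamics(M, f, B):
    """Project M @ qdd = f onto the constraint null space: with qdd = N @ u,
    solve (N.T M N) u = N.T f, then recover qdd = N @ u."""
    N = nullspace_basis(B)
    u = np.linalg.solve(N.T @ M @ N, N.T @ f)
    return N @ u
```

Because the SVD yields an orthonormal null-space basis, the reduced mass matrix stays symmetric positive definite and the computed accelerations satisfy the constraints to machine precision.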
Tensor-GMRES method for large sparse systems of nonlinear equations
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1994-01-01
This paper introduces a tensor-Krylov method, the tensor-GMRES method, for large sparse systems of nonlinear equations. This method is a coupling of tensor model formation and solution techniques for nonlinear equations with Krylov subspace projection techniques for unsymmetric systems of linear equations. Traditional tensor methods for nonlinear equations are based on a quadratic model of the nonlinear function, a standard linear model augmented by a simple second order term. These methods are shown to be significantly more efficient than standard methods both on nonsingular problems and on problems where the Jacobian matrix at the solution is singular. A major disadvantage of the traditional tensor methods is that the solution of the tensor model requires the factorization of the Jacobian matrix, which may not be suitable for problems where the Jacobian matrix is large and has a 'bad' sparsity structure for an efficient factorization. We overcome this difficulty by forming and solving the tensor model using an extension of a Newton-GMRES scheme. Like traditional tensor methods, we show that the new tensor method has significant computational advantages over the analogous Newton counterpart. Consistent with Krylov subspace based methods, the new tensor method does not depend on the factorization of the Jacobian matrix. As a matter of fact, the Jacobian matrix is never needed explicitly.
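As context, the baseline Newton-GMRES scheme that the tensor method extends can be sketched as follows. This is a generic Jacobian-free variant using finite-difference directional derivatives; the second-order tensor term of the paper is omitted, and the test problem is an assumption for illustration.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_gmres(F, x0, tol=1e-10, max_newton=50, eps=1e-7):
    """Each Newton step solves J(x) s = -F(x) by GMRES, applying J only
    through finite-difference directional derivatives, so the Jacobian
    matrix is never formed or factorized explicitly."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_newton):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jv = lambda v: (F(x + eps * v) - Fx) / eps   # J(x) @ v, matrix-free
        J = LinearOperator((x.size, x.size), matvec=Jv)
        s, _ = gmres(J, -Fx)
        x = x + s
    return x
```

The matrix-free structure is exactly what the abstract highlights: the Krylov solver needs only Jacobian-vector products, never a factorization.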
Ogienko, Andrey G; Tkacz, Marek; Manakov, Andrey Yu; Lipkowski, Janusz
2007-11-08
Pressure-temperature (P-T) conditions of the decomposition reaction of the structure H high-pressure methane hydrate to the cubic structure I methane hydrate and fluid methane were studied with a piston-cylinder apparatus at room temperature. For the first time, volume changes accompanying this reaction were determined. With the use of the Clausius-Clapeyron equation the enthalpies of the decomposition reaction of the structure H high-pressure methane hydrate to the cubic structure I methane hydrate and fluid methane have been calculated.
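The Clausius-Clapeyron step mentioned in the abstract is a one-line computation from the measured boundary slope and volume change. The numerical values in the test below are hypothetical illustrative inputs, not the paper's data.

```python
def clausius_clapeyron_enthalpy(T, dP_dT, dV):
    """Enthalpy change of a univariant phase transition from the slope of
    its P-T boundary: dP/dT = dH / (T * dV)  =>  dH = T * dV * dP/dT.
    With T in K, dP/dT in Pa/K and dV in m^3/mol, dH is in J/mol."""
    return T * dV * dP_dT
```

This is why the volume change measured in the piston-cylinder experiment is the key ingredient: together with the slope of the decomposition line it fixes the reaction enthalpy.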
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manthe, Uwe, E-mail: uwe.manthe@uni-bielefeld.de; Ellerbrock, Roman, E-mail: roman.ellerbrock@uni-bielefeld.de
2016-05-28
A new approach for the quantum-state resolved analysis of polyatomic reactions is introduced. Based on the singular value decomposition of the S-matrix, energy-dependent natural reaction channels and natural reaction probabilities are defined. It is shown that the natural reaction probabilities are equal to the eigenvalues of the reaction probability operator [U. Manthe and W. H. Miller, J. Chem. Phys. 99, 3411 (1993)]. Consequently, the natural reaction channels can be interpreted as uniquely defined pathways through the transition state of the reaction. The analysis can efficiently be combined with reactive scattering calculations based on the propagation of thermal flux eigenstates. In contrast to a decomposition based straightforwardly on thermal flux eigenstates, it does not depend on the choice of the dividing surface separating reactants from products. The new approach is illustrated studying a prototypical example, the H + CH4 → H2 + CH3 reaction. The natural reaction probabilities and the contributions of the different vibrational states of the methyl product to the natural reaction channels are calculated and discussed. The relation between the thermal flux eigenstates and the natural reaction channels is studied in detail.
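The linear-algebra core of this analysis can be sketched with a toy reactant-to-product S-matrix block: the squared singular values coincide with the eigenvalues of the reaction probability operator, as the abstract states. The 2×2 matrix below is an illustrative stand-in, not a computed S-matrix.

```python
import numpy as np

def natural_reaction_probabilities(S_rp):
    """Natural reaction probabilities from the reactant-to-product block of
    the S-matrix: the squared singular values of S_rp, which equal the
    eigenvalues of the reaction probability operator S_rp^H @ S_rp.  The
    right singular vectors play the role of the natural reaction channels."""
    U, s, Vh = np.linalg.svd(S_rp)
    probs = s ** 2
    # consistency check against the probability-operator eigenvalues
    eigs = np.sort(np.linalg.eigvalsh(S_rp.conj().T @ S_rp))[::-1]
    assert np.allclose(probs, eigs)
    return probs, Vh.conj().T
```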
Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui
2017-03-27
Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme is capable of generating an adaptive dictionary whose atoms reveal the underlying structure of raw signals. Essentially, dictionary learning is employed as an adaptive feature extraction method regardless of any prior knowledge. Second, the singular value sequence of the learned dictionary matrix serves to extract the feature vector. Generally, since the vector is of high dimensionality, a simple and practical principal component analysis (PCA) is applied to reduce dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted for automatic identification and classification of fault patterns. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary learning-based matrix construction approach outperforms the mode decomposition-based methods in terms of capacity and adaptability for feature extraction.
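The SVD-feature / PCA / KNN stage of this pipeline can be sketched with scikit-learn. The dictionary-learning step is assumed to have been run elsewhere (any sparse-coding package could supply the dictionary matrices), and the synthetic matrices in the test are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def svd_feature(D, k=8):
    """Feature vector for one signal: the leading singular values of its
    learned dictionary matrix D."""
    s = np.linalg.svd(D, compute_uv=False)
    v = np.zeros(k)
    v[:min(k, s.size)] = s[:k]
    return v

def diagnose(train_dicts, train_labels, test_dicts, n_components=3):
    """SVD features -> PCA dimensionality reduction -> KNN classification."""
    Xtr = np.array([svd_feature(D) for D in train_dicts])
    Xte = np.array([svd_feature(D) for D in test_dicts])
    pca = PCA(n_components=n_components).fit(Xtr)
    knn = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(Xtr), train_labels)
    return knn.predict(pca.transform(Xte))
```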
Cluster-enriched Yang-Baxter equation from SUSY gauge theories
NASA Astrophysics Data System (ADS)
Yamazaki, Masahito
2018-04-01
We propose a new generalization of the Yang-Baxter equation, where the R-matrix depends on cluster y-variables in addition to the spectral parameters. We point out that we can construct solutions to this new equation from the recently found correspondence between Yang-Baxter equations and supersymmetric gauge theories. The S^2 partition function of a certain 2d N=(2,2) quiver gauge theory gives an R-matrix, whereas its FI parameters can be identified with the cluster y-variables.
Bian, Xihui; Li, Shujuan; Lin, Ligang; Tan, Xiaoyao; Fan, Qingjie; Li, Ming
2016-06-21
Accurate prediction of the model is fundamental to the successful analysis of complex samples. To utilize abundant information embedded over frequency and time domains, a novel regression model is presented for quantitative analysis of hydrocarbon contents in fuel oil samples. The proposed method, named high and low frequency unfolded PLSR (HLUPLSR), integrates empirical mode decomposition (EMD) and an unfolding strategy with partial least squares regression (PLSR). In the proposed method, the original signals are first decomposed into a finite number of intrinsic mode functions (IMFs) and a residue by EMD. Second, the leading high-frequency IMFs are summed into a high-frequency matrix, and the remaining IMFs and the residue are summed into a low-frequency matrix. Finally, the two matrices are unfolded into an extended matrix along the variable dimension, and the PLSR model is built between the extended matrix and the target values. Coupled with ultraviolet (UV) spectroscopy, HLUPLSR has been applied to determine hydrocarbon contents of light gas oil and diesel fuel samples. Compared with single PLSR and other signal processing techniques, the proposed method shows superior prediction ability and better model interpretability. Therefore, the HLUPLSR method provides a promising tool for quantitative analysis of complex samples. Copyright © 2016 Elsevier B.V. All rights reserved.
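The unfolding step of HLUPLSR can be sketched as follows, assuming the EMD decomposition (IMFs plus residue) has already been computed for each sample; the array layout is an assumption for illustration, not the paper's implementation.

```python
import numpy as np

def unfold_high_low(imfs_list, split):
    """For each sample, sum the first `split` IMFs into a high-frequency
    part and the remaining IMFs plus the residue into a low-frequency part,
    then concatenate the two along the variable axis.  Each element of
    imfs_list is an (n_imfs + 1, n_variables) array whose last row is the
    EMD residue."""
    rows = []
    for imfs in imfs_list:
        high = imfs[:split].sum(axis=0)
        low = imfs[split:].sum(axis=0)
        rows.append(np.concatenate([high, low]))
    return np.array(rows)
```

The extended matrix returned here would then be regressed against the target values, e.g. with `sklearn.cross_decomposition.PLSRegression`.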
Exact solution of some linear matrix equations using algebraic methods
NASA Technical Reports Server (NTRS)
Djaferis, T. E.; Mitter, S. K.
1977-01-01
A study is done of solution methods for linear matrix equations, including Lyapunov's equation, using methods of modern algebra. The emphasis is on the use of finite algebraic procedures which are easily implemented on a digital computer and which lead to an explicit solution to the problem. The action f_BA is introduced and a Basic Lemma is proven. The equation PA + BP = -C as well as the Lyapunov equation are analyzed. Algorithms are given for the solution of the Lyapunov equation, with comments on their arithmetic complexity. The equation P - A'PA = Q is studied and numerical examples are given.
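The equation PA + BP = -C is a Sylvester equation, and the "explicit finite procedure" point of view can be sketched by writing it as one linear system via the Kronecker product; the SciPy Bartels-Stewart solver is used here only as an independent cross-check, not as the paper's method.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def solve_pa_bp(A, B, C):
    """Solve P A + B P = -C for P.  With column-major vec,
    vec(P A) = (A.T kron I) vec(P) and vec(B P) = (I kron B) vec(P)."""
    n, m = B.shape[0], A.shape[0]
    K = np.kron(A.T, np.eye(n)) + np.kron(np.eye(m), B)
    P = np.linalg.solve(K, -C.reshape(-1, order="F")).reshape((n, m), order="F")
    # cross-check: solve_sylvester solves B X + X A = Q
    assert np.allclose(P, solve_sylvester(B, A, -C))
    return P
```

A unique solution exists whenever A and -B share no eigenvalue, which is exactly the invertibility condition on the Kronecker-sum matrix K.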
A review of the matrix-exponential formalism in radiative transfer
NASA Astrophysics Data System (ADS)
Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian
2017-07-01
This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
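The Taylor-series route for optically thin layers can be sketched directly; this is a generic truncated series, not the review's parameterization, compared against SciPy's `expm` (which itself uses Padé approximation with scaling and squaring).

```python
import numpy as np
from scipy.linalg import expm

def expm_taylor(A, order=10):
    """Truncated Taylor series exp(A) ~ sum_{k<=order} A^k / k!, accurate
    when the norm of A is small, i.e. for optically thin layers."""
    term = np.eye(A.shape[0])
    out = term.copy()
    for k in range(1, order + 1):
        term = term @ A / k
        out = out + term
    return out
```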
Direct Solve of Electrically Large Integral Equations for Problem Sizes to 1M Unknowns
NASA Technical Reports Server (NTRS)
Shaeffer, John
2008-01-01
Matrix methods for solving integral equations via direct-solve LU factorization are presently limited to weeks to months of very expensive supercomputer time for problem sizes of several hundred thousand unknowns. This report presents matrix LU factor solutions for electromagnetic scattering problems for problem sizes to one million unknowns with thousands of right hand sides that run in mere days on PC-level hardware. This EM solution is accomplished by utilizing the numerical low-rank nature of spatially blocked unknowns, using the Adaptive Cross Approximation to compress the rank-deficient blocks of the system Z matrix, the L and U factors, the right hand side forcing function, and the final current solution. This compressed matrix solution is applied to a frequency-domain EM solution of Maxwell's equations using a standard Method of Moments approach. Compressed matrix storage and operations counts lead to orders of magnitude reduction in memory and run time.
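A generic Adaptive Cross Approximation with partial pivoting can be sketched as below. This is a textbook variant for illustration, not the report's production implementation; its key property is that it builds a low-rank factorization of a block while touching only individual rows and columns.

```python
import numpy as np

def aca(get_row, get_col, m, n, tol=1e-10):
    """Build Z ~ U @ V from row/column samples of an m-by-n block Z.
    Each step adds one cross (pivot-normalized residual row and the
    matching residual column); near-zero residual rows are skipped."""
    U, V = [], []
    unused = list(range(1, m))
    i = 0
    while True:
        # residual of row i under the current approximation
        r = get_row(i) - sum((u[i] * v for u, v in zip(U, V)), np.zeros(n))
        j = int(np.argmax(np.abs(r)))
        if abs(r[j]) > tol:
            v = r / r[j]
            c = get_col(j) - sum((u * w[j] for u, w in zip(U, V)), np.zeros(m))
            U.append(c)
            V.append(v)
        if not unused or len(U) >= min(m, n):
            break
        # next pivot row: unused row with the largest entry in the newest column
        i = max(unused, key=lambda k: abs(U[-1][k])) if U else unused[0]
        unused.remove(i)
    return np.array(U).reshape(len(U), m).T, np.array(V).reshape(len(V), n)
```

For an exactly rank-deficient block the factorization terminates after `rank` crosses, which is the source of the memory and operation-count savings the report describes.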
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Petrongolo, M; Wang, T
Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing the spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown that the proposed method improves image uniformity and reduces the noise level without resolution loss.
In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.
NASA Astrophysics Data System (ADS)
Van de Put, Maarten L.; Sorée, Bart; Magnus, Wim
2017-12-01
The Wigner-Liouville equation is reformulated using a spectral decomposition of the classical force field instead of the potential energy. The latter is shown to simplify the Wigner-Liouville kernel both conceptually and numerically as the spectral force Wigner-Liouville equation avoids the numerical evaluation of the highly oscillatory Wigner kernel which is nonlocal in both position and momentum. The quantum mechanical evolution is instead governed by a term local in space and non-local in momentum, where the non-locality in momentum has only a limited range. An interpretation of the time evolution in terms of two processes is presented; a classical evolution under the influence of the averaged driving field, and a probability-preserving quantum-mechanical generation and annihilation term. Using the inherent stability and reduced complexity, a direct deterministic numerical implementation using Chebyshev and Fourier pseudo-spectral methods is detailed. For the purpose of illustration, we present results for the time-evolution of a one-dimensional resonant tunneling diode driven out of equilibrium.
Integral representations of solutions of the wave equation based on relativistic wavelets
NASA Astrophysics Data System (ADS)
Perel, Maria; Gorodnitskiy, Evgeny
2012-09-01
A representation of solutions of the wave equation with two spatial coordinates in terms of localized elementary ones is presented. Elementary solutions are constructed from four solutions with the help of transformations of the affine Poincaré group, i.e. with the help of translations, dilations in space and time and Lorentz transformations. The representation can be interpreted in terms of the initial-boundary value problem for the wave equation in a half-plane. It gives the solution as an integral representation of two types of solutions: propagating localized solutions running away from the boundary under different angles and packet-like surface waves running along the boundary and exponentially decreasing away from the boundary. Properties of elementary solutions are discussed. A numerical investigation of coefficients of the decomposition is carried out. An example of the decomposition of the field created by sources moving along a line with different speeds is considered, and the dependence of coefficients on speeds of sources is discussed.
NASA Astrophysics Data System (ADS)
Latyshev, A. V.; Gordeeva, N. M.
2017-09-01
We obtain an analytic solution of the boundary problem for the behavior (fluctuations) of an electron plasma with an arbitrary degree of degeneracy of the electron gas in the conductive layer in an external electric field. We use the kinetic Vlasov-Boltzmann equation with the Bhatnagar-Gross-Krook collision integral and the Maxwell equation for the electric field. We use the mirror boundary conditions for the reflections of electrons from the layer boundary. The boundary problem reduces to a one-dimensional problem with a single velocity. For this, we use the method of consecutive approximations, linearization of the equations with respect to the absolute distribution of the Fermi-Dirac electrons, and the conservation law for the number of particles. Separation of variables then helps reduce the problem equations to a characteristic system of equations. In the space of generalized functions, we find the eigensolutions of the initial system, which correspond to the continuous spectrum (Van Kampen mode). Solving the dispersion equation, we then find the eigensolutions corresponding to the adjoint and discrete spectra (Drude and Debye modes). We then construct the general solution of the boundary problem by decomposing it into the eigensolutions. The coefficients of the decomposition are given by the boundary conditions. This allows obtaining the decompositions of the distribution function and the electric field in explicit form.
Theory of biaxial graded-index optical fiber. M.S. Thesis
NASA Technical Reports Server (NTRS)
Kawalko, Stephen F.
1990-01-01
A biaxial graded-index fiber with a homogeneous cladding is studied. Two methods, wave equation and matrix differential equation, of formulating the problem and their respective solutions are discussed. For the wave equation formulation of the problem it is shown that for the case of a diagonal permittivity tensor the longitudinal electric and magnetic fields satisfy a pair of coupled second-order differential equations. Also, a generalized dispersion relation is derived in terms of the solutions for the longitudinal electric and magnetic fields. For the case of a step-index fiber, either isotropic or uniaxial, these differential equations can be solved exactly in terms of Bessel functions. For the cases of an isotropic graded-index and a uniaxial graded-index fiber, a solution using the Wentzel, Kramers and Brillouin (WKB) approximation technique is shown. Results for some particular permittivity profiles are presented. Also, the WKB solution is compared with the vector solution found by Kurtz and Streifer. For the matrix formulation it is shown that the tangential components of the electric and magnetic fields satisfy a system of four first-order differential equations which can be conveniently written in matrix form. For the special case of meridional modes, the system of equations splits into two systems of two equations. A general iterative technique, asymptotic partitioning of systems of equations, for solving systems of differential equations is presented. As a simple example, Bessel's differential equation is written in matrix form and is solved using this asymptotic technique. Low-order solutions for particular examples of a biaxial and uniaxial graded-index fiber are presented. Finally, numerical results obtained using the asymptotic technique are presented for particular examples of isotropic and uniaxial step-index fibers and isotropic, uniaxial and biaxial graded-index fibers.
Yi, Sun; Nelson, Patrick W; Ulsoy, A Galip
2007-04-01
In a turning process modeled using delay differential equations (DDEs), we investigate the stability of the regenerative machine tool chatter problem. An approach using the matrix Lambert W function for the analytical solution to systems of delay differential equations is applied to this problem and compared with the result obtained using a bifurcation analysis. The Lambert W function, known to be useful for solving scalar first-order DDEs, has recently been extended to a matrix Lambert W function approach to solve systems of DDEs. The essential advantages of the matrix Lambert W approach are not only the similarity to the concept of the state transition matrix in linear ordinary differential equations, enabling its use for general classes of linear delay differential equations, but also the observation that we need only the principal branch among an infinite number of roots to determine the stability of a system of DDEs. The bifurcation method combined with Sturm sequences provides an algorithm for determining the stability of DDEs without restrictive geometric analysis. With this approach, one can obtain the critical values of delay, which determine the stability of a system and hence the preferred operating spindle speed without chatter. We apply both the matrix Lambert W function and the bifurcation analysis approach to the problem of chatter stability in turning, and compare the results obtained to existing methods. The two new approaches show excellent accuracy and certain other advantages, when compared to traditional graphical, computational and approximate methods.
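For the scalar first-order DDE mentioned above, the Lambert W characterization can be sketched directly; this illustrates the scalar case only, not the paper's matrix extension, and the coefficient values in the test are illustrative.

```python
import numpy as np
from scipy.special import lambertw

def dde_rightmost_root(a, b, h):
    """For x'(t) = a*x(t) + b*x(t-h), the characteristic roots are
    s_k = a + W_k(b*h*exp(-a*h)) / h.  As the abstract notes, only the
    principal branch k = 0 is needed to decide stability."""
    return a + lambertw(b * h * np.exp(-a * h), k=0) / h

def is_stable(a, b, h):
    return dde_rightmost_root(a, b, h).real < 0
```

Substituting s back into s = a + b e^{-s h} verifies the branch formula, since w = (s - a) h satisfies w e^w = b h e^{-a h}.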
Anisotropic Damage Mechanics Modeling in Metal Matrix Composites
1993-05-15
conducted on a titanium aluminide SiC-reinforced metal matrix composite. Center-cracked plates with laminate layups of (0/90) and (±45) were tested... interfacial damage mechanisms such as debonding or delamination. Equations (2.14) and (2.15) represent the damage transformation equations for the stress... titanium aluminide SiC continuous-reinforced metal matrix composite. As a means of enforcing quality assurance, all manufacturing and cutting of the
NASA Astrophysics Data System (ADS)
Ender, I. A.; Bakaleinikov, L. A.; Flegontova, E. Yu.; Gerasimenko, A. B.
2017-08-01
We have proposed an algorithm for the sequential construction of nonisotropic matrix elements of the collision integral, which are required to solve the nonlinear Boltzmann equation using the moments method. The starting elements of the matrix are isotropic and assumed to be known. The algorithm can be used for an arbitrary law of interactions for any ratio of the masses of colliding particles.
NASA Astrophysics Data System (ADS)
Hong, Seok Bin; Ahn, Yong San; Jang, Joon Hyeok; Kim, Jin-Gyun; Goo, Nam Seo; Yu, Woong-Ryeol
2016-04-01
Shape memory polymer (SMP) is one of the smart polymers that exhibit a shape memory effect upon external stimuli. Reinforcements such as carbon fiber have been used for making carbon-fiber shape memory polymer composites (CF-SMPCs). This study investigated the possibility of designing self-deployable structures for harsh space conditions using CF-SMPCs and analyzed their shape memory behavior with a constitutive equation model. CF-SMPCs were prepared using woven carbon fabrics and a thermoset epoxy-based SMP to obtain their basic mechanical properties, including actuation in a harsh environment. The mechanical and shape memory properties of the SMP and CF-SMPCs were characterized using dynamic mechanical analysis (DMA) and a universal tensile machine (UTM) with an environmental chamber. The mechanical properties such as flexural strength and tensile strength of the SMP and CF-SMPCs were measured with simple tensile/bending tests, and the time-dependent shape memory behavior was characterized with a designed shape memory bending test. For mechanical analysis of the CF-SMPCs, a 3D constitutive equation of the SMP, which had been developed using multiplicative decomposition of the deformation gradient and shape memory strains, was used with material parameters determined from the CF-SMPCs. Carbon fibers in the composites reinforced the tensile and flexural strength of the SMP and acted as strong elastic springs in rheology-based equation models. The actuation behavior of the SMP matrix and the CF-SMPCs was then simulated for 3D shape memory bending cases. The fiber bundle properties were incorporated into a shell model for more precise analysis, to be used for predicting the deployment behavior of a self-deployable hinge structure.
NASA Astrophysics Data System (ADS)
Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue
2018-01-01
An efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete schemes and appropriate solving methods. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of storage saving for the system of linear equations and of flexibility for arbitrary directional sampling intervals. However, using a LU-decomposition-based direct solver to solve its resulting system of linear equations is very costly in both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BiCGSTAB iterative solver fit for the average-derivative optimal scheme. The choice of preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for the convergence. Furthermore, we find that for computation with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, where the significant reduction of computer memory and the improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that unequal directional sampling intervals will weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for a reasonable control of the directional sampling intervals in the discretization.
Fast non-overlapping Schwarz domain decomposition methods for solving the neutron diffusion equation
NASA Astrophysics Data System (ADS)
Jamelot, Erell; Ciarlet, Patrick
2013-05-01
Studying numerically the steady state of a nuclear reactor core is expensive in terms of memory storage and computational time. In order to address both requirements, one can use a domain decomposition method implemented on a parallel computer. We present here such a method for the mixed neutron diffusion equations, discretized with Raviart-Thomas-Nédélec finite elements. This method is based on the Schwarz iterative algorithm with Robin interface conditions to handle communications. We analyse this method from both the continuous and the discrete points of view, and we give some numerical results in a realistic, highly heterogeneous 3D configuration. Computations are carried out with the MINOS solver of the APOLLO3® neutronics code. APOLLO3 is a registered trademark in France.
Numerical solution of quadratic matrix equations for free vibration analysis of structures
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1975-01-01
This paper is concerned with the efficient and accurate solution of the eigenvalue problem represented by quadratic matrix equations. Such matrix forms are obtained in connection with the free vibration analysis of structures, discretized by finite 'dynamic' elements, resulting in frequency-dependent stiffness and inertia matrices. The paper presents a new numerical solution procedure for the quadratic matrix equations, based on a combined Sturm sequence and inverse iteration technique enabling economical and accurate determination of a few required eigenvalues and associated vectors. An alternative procedure based on simultaneous iteration is also described for the usual case in which only the first few modes are required. The employment of finite dynamic elements in conjunction with the presently developed eigenvalue routines results in a most significant economy in the dynamic analysis of structures.
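The Sturm-sequence/inverse-iteration procedure itself is not spelled out in the abstract. For reference, the standard way to reduce a quadratic eigenvalue problem (λ²M + λC + K)x = 0 to an ordinary one is companion linearization; a minimal sketch (with an illustrative undamped example, not the paper's structural matrices):

```python
import numpy as np

def quadratic_eigs(M, C, K):
    """Eigenpairs of (lam^2 M + lam C + K) x = 0 via companion linearization."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    # With z = [x, lam*x], the quadratic problem becomes L z = lam z
    L = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K,        -Minv @ C]])
    lam, Z = np.linalg.eig(L)
    X = Z[:n, :]                      # upper block recovers the quadratic eigenvectors
    return lam, X

# Undamped example: M = I, C = 0, K = diag(1, 4)  =>  lam^2 = -1, -4
M = np.eye(2)
C = np.zeros((2, 2))
K = np.diag([1.0, 4.0])
lam, X = quadratic_eigs(M, C, K)
print(np.sort(np.abs(lam)))           # magnitudes 1, 1, 2, 2
```

Dense linearization doubles the problem size, which is exactly why structural codes prefer Sturm-sequence and iteration methods that extract only the first few modes.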
Matrix approaches to assess terrestrial nitrogen scheme in CLM4.5
NASA Astrophysics Data System (ADS)
Du, Z.
2017-12-01
Terrestrial carbon (C) and nitrogen (N) cycles have commonly been represented by a series of balance equations that track their influxes into and effluxes out of individual pools in earth system models (ESMs). This representation matches our understanding of C and N cycle processes well but makes it difficult to track model behaviors. To overcome these challenges, we developed a matrix approach, which reorganizes the series of terrestrial C and N balance equations in CLM4.5 into two matrix equations based on the original representation of C and N cycle processes and mechanisms. The matrix approach consequently helps improve the comparability of models and data, evaluate the impacts of additional model components, facilitate benchmark analyses, model intercomparisons, and data-model fusion, and improve model predictive power.
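Matrix approaches of this kind typically cast the pool-by-pool balance equations in the form dX/dt = B·u + A ξ K X, where X is the pool vector, u the input flux, B its allocation, A the transfer matrix, K the turnover rates, and ξ an environmental scalar. A minimal sketch with made-up values (not the actual CLM4.5 operators), showing how the matrix form exposes the steady state directly:

```python
import numpy as np

# Illustrative 3-pool carbon system: dX/dt = B*u + A @ (xi*K) @ X
A = np.array([[-1.0, 0.0,  0.0],     # transfer matrix, -1 on the diagonal
              [ 0.5, -1.0, 0.0],     # half of pool-1 losses enter pool 2
              [ 0.1,  0.3, -1.0]])
K = np.diag([0.5, 0.1, 0.01])        # turnover rates (1/yr) of each pool
B = np.array([1.0, 0.0, 0.0])        # all external input enters pool 1
u, xi = 10.0, 1.0                    # input flux and environmental modifier

M = A @ (xi * K)
# Steady state from the matrix form: 0 = B*u + M X  =>  X = -M^{-1} B u
X_ss = -np.linalg.solve(M, B * u)

# A forward-Euler spin-up of the balance equations reaches the same state
X, dt = np.zeros(3), 0.1
for _ in range(20000):
    X = X + dt * (B * u + M @ X)
print(X_ss)
print(X)
```

The analytic steady state replaces the long spin-up, which is one of the practical benefits the abstract alludes to.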
Planck constant as spectral parameter in integrable systems and KZB equations
NASA Astrophysics Data System (ADS)
Levin, A.; Olshanetsky, M.; Zotov, A.
2014-10-01
We construct special rational gl(N) Knizhnik-Zamolodchikov-Bernard (KZB) equations with Ñ punctures by deformation of the corresponding quantum gl(N) rational R-matrix. They have two parameters. The limit of the first one brings the model to the ordinary rational KZ equation. The other is τ. At the level of classical mechanics, the deformation parameter τ allows us to extend the previously obtained modified Gaudin models to the modified Schlesinger systems. Next, we notice that the identities underlying generic (elliptic) KZB equations follow from some additional relations for the properly normalized R-matrices. The relations are noncommutative analogues of identities for (scalar) elliptic functions. The simplest one is the unitarity condition. The quadratic (in R-matrices) relations are generated by noncommutative Fay identities. In particular, one can derive the quantum Yang-Baxter equations from the Fay identities. The cubic relations provide identities for the KZB equations as well as quadratic relations for the classical r-matrices, which can be treated as halves of the classical Yang-Baxter equation. Finally, we discuss the R-matrix valued linear problems which provide gl(Ñ) CM models and Painlevé equations via the above-mentioned identities. The Planck constant of the quantum R-matrix plays the role of the spectral parameter. When the quantum gl(N) R-matrix is scalar (N = 1), the linear problem reproduces Krichever's ansatz for the Lax matrices with spectral parameter for the gl(Ñ) CM models. The linear problems for the quantum CM models generalize the KZ equations in the same way as the Lax pairs with spectral parameter generalize those without it.
Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun
2017-01-01
This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method. PMID:28448431
Image compression using singular value decomposition
NASA Astrophysics Data System (ADS)
Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.
2017-11-01
We often need to transmit and store images in many applications. The smaller the image, the lower the cost associated with transmission and storage, so we often need to apply data compression techniques to reduce the storage space consumed by an image. One approach is to apply singular value decomposition (SVD) to the image matrix. In this method, the SVD refactors the digital image into three matrices. The singular values are used to reconstruct the image, and at the end of this process the image is represented with a smaller set of values, reducing the storage space it requires. The goal here is to achieve image compression while preserving the important features that describe the original image. SVD can be applied to any arbitrary m × n matrix, square or rectangular, invertible or not. Compression ratio and mean square error are used as performance metrics.
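The procedure described above reduces to truncating the SVD at rank k and counting the stored values. A minimal sketch (using a random array as a stand-in for image data):

```python
import numpy as np

def svd_compress(img, k):
    """Rank-k approximation of a 2-D image array via truncated SVD."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    m, n = img.shape
    storage = k * (m + n + 1)          # k left vectors + k right vectors + k singular values
    ratio = (m * n) / storage          # original pixel count over stored values
    mse = np.mean((img - approx) ** 2)
    return approx, ratio, mse

rng = np.random.default_rng(1)
img = rng.random((64, 64))             # stand-in for a grayscale image
for k in (5, 20, 64):
    _, ratio, mse = svd_compress(img, k)
    print(k, round(ratio, 2), mse)
```

Keeping more singular values lowers the mean square error but also lowers the compression ratio; at k = min(m, n) the reconstruction is exact.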
Kinetics of the cellular decomposition of supersaturated solid solutions
NASA Astrophysics Data System (ADS)
Ivanov, M. A.; Naumuk, A. Yu.
2014-09-01
A consistent description is given of the kinetics of the cellular decomposition of supersaturated solid solutions with the development of a spatially periodic structure of lamellar (platelike) type, which consists of alternating phases of precipitates based on the impurity component and the depleted initial solid solution. One of the equations, which determines the relationship between the parameters that describe the process of decomposition, has been obtained from a comparison of two approaches to determining the rate of change in the free energy of the system. The other kinetic parameters can be described with the use of a variational method, namely, by the maximum velocity of motion of the decomposition boundary at a given temperature. It is shown that the mutual directions of growth of the lamellae of different phases are determined by the minimum value of the interphase surface energy. To determine the parameters of the decomposition, a simple thermodynamic model of states with a parabolic dependence of the free energy on the concentrations has been used. As a result, expressions have been derived that describe the decomposition rate, the interlamellar distance, and the concentration of impurities in the phase that remains after the decomposition. This concentration proves to be equal to the half-sum of the initial concentration and the equilibrium concentration corresponding to the decomposition temperature.
Direct Iterative Nonlinear Inversion by Multi-frequency T-matrix Completion
NASA Astrophysics Data System (ADS)
Jakobsen, M.; Wu, R. S.
2016-12-01
Researchers in the mathematical physics community have recently proposed a conceptually new method for solving nonlinear inverse scattering problems (like FWI) which is inspired by the theory of nonlocality of physical interactions. The conceptually new method, which may be referred to as the T-matrix completion method, is very interesting since it is not based on linearization at any stage. Also, there are no gradient vectors or (inverse) Hessian matrices to calculate. However, the convergence radius of this promising T-matrix completion method is seriously restricted by its use of single-frequency scattering data only. In this study, we have developed a modified version of the T-matrix completion method which we believe is more suitable for applications to nonlinear inverse scattering problems in (exploration) seismology, because it makes use of multi-frequency data. Essentially, we have simplified the single-frequency T-matrix completion method of Levinson and Markel and combined it with the standard sequential frequency inversion (multi-scale regularization) method. For each frequency, we first estimate the experimental T-matrix by using the Moore-Penrose pseudoinverse concept. Then this experimental T-matrix is used to initiate an iterative procedure for successive estimation of the scattering potential and the T-matrix, using the Lippmann-Schwinger equation for the nonlinear relation between these two quantities. The main physical requirement in the basic iterative cycle is that the T-matrix should be data-compatible and the scattering potential operator should be dominantly local, although a non-local scattering potential operator is allowed in the intermediate iterations. In our simplified T-matrix completion strategy, we ensure that the T-matrix updates are always data-compatible simply by adding a suitable correction term in the real-space coordinate representation.
The use of singular-value decomposition representations is not required in our formulation, since we have developed an efficient domain decomposition method. The results of several numerical experiments for the SEG/EAGE salt model illustrate the importance of using multi-frequency data when performing frequency-domain full waveform inversion in strongly scattering media via the new concept of T-matrix completion.
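The Moore-Penrose pseudoinverse step mentioned above has a characteristic property worth recalling: for an underdetermined data equation it returns the minimum-norm solution among all exact data fits. A generic illustration (random stand-in matrices, not the authors' actual scattering operators):

```python
import numpy as np

# Underdetermined "data equation" d = G m: more unknowns than data,
# as is typical when estimating an operator from limited scattering data.
rng = np.random.default_rng(2)
G = rng.standard_normal((20, 50))     # 20 data values, 50 model parameters
d = rng.standard_normal(20)

m = np.linalg.pinv(G) @ d             # Moore-Penrose solution

# It fits the data exactly (G has full row rank here) ...
print(np.linalg.norm(G @ m - d))

# ... and has the smallest norm among all exact fits: adding any
# null-space component keeps the data fit but increases the norm.
Vt = np.linalg.svd(G)[2]
m_other = m + Vt[-1]                  # Vt[-1] lies in the null space of G
print(np.linalg.norm(G @ m_other - d), np.linalg.norm(m) < np.linalg.norm(m_other))
```

This minimum-norm property is what makes the pseudoinverse a natural initializer before the nonlinear Lippmann-Schwinger iterations take over.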
An efficient implementation of a high-order filter for a cubed-sphere spectral element model
NASA Astrophysics Data System (ADS)
Kang, Hyun-Gyu; Cheong, Hyeong-Bin
2017-03-01
A parallel-scalable, isotropic, scale-selective spatial filter was developed for the cubed-sphere spectral element model on the sphere. The filter equation is a high-order elliptic (Helmholtz) equation based on the spherical Laplacian operator, which is transformed into cubed-sphere local coordinates. The Laplacian operator is discretized on the computational domain, i.e., on each cell, by the spectral element method with Gauss-Lobatto Lagrange interpolating polynomials (GLLIPs) as the orthogonal basis functions. On the global domain, the discrete filter equation yields a linear system represented by a highly sparse matrix. The density of this matrix increases quadratically (linearly) with the order of the GLLIP (order of the filter), and the linear system is solved in only O(Ng) operations, where Ng is the total number of grid points. The solution, obtained by a row reduction method, demonstrated the typical accuracy and convergence rate of the cubed-sphere spectral element method. To achieve computational efficiency on parallel computers, the linear system was treated by an inverse matrix method (a sparse matrix-vector multiplication). The density of the inverse matrix was lowered to only a few times that of the original sparse matrix without degrading the accuracy of the solution. For better computational efficiency, a local-domain high-order filter was introduced: the filter equation is applied to multiple cells, and then only the central cell is used to reconstruct the filtered field. The parallel efficiency of applying the inverse matrix method to the global- and local-domain filters was evaluated in terms of scalability on a distributed-memory parallel computer. The scale-selective performance of the filter was demonstrated on Earth topography. The usefulness of the filter as a hyper-viscosity for the vorticity equation was also demonstrated.
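On a periodic 1-D domain with a Fourier basis — a drastic simplification of the paper's cubed-sphere spectral-element setting — the same high-order Helmholtz-type filter reduces to dividing each Fourier coefficient by 1 + (k/kc)^(2p). A minimal sketch with illustrative cutoff and order (not the paper's operator):

```python
import numpy as np

def highorder_filter(u, p=4, kc=10.0):
    """Apply the scale-selective factor 1/(1 + (k/kc)^(2p)) in Fourier space."""
    k = np.fft.rfftfreq(u.size, d=1.0 / u.size)   # integer wavenumbers 0..n/2
    uhat = np.fft.rfft(u)
    uhat /= 1.0 + (k / kc) ** (2 * p)             # high-order low-pass response
    return np.fft.irfft(uhat, n=u.size)

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(2 * x) + np.sin(40 * x)    # large-scale plus small-scale component
uf = highorder_filter(u)

# Amplitude of a single Fourier mode k in a real signal
amp = lambda v, k: 2 * abs(np.fft.rfft(v)[k]) / n
print(amp(uf, 2), amp(uf, 40))        # k=2 nearly untouched, k=40 strongly damped
```

Raising the filter order p sharpens the transition between retained and removed scales, which is the scale-selectivity the abstract refers to.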
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Wei; Wang, Jin (State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun, China; College of Physics, Jilin University, Changchun, China)
We have established a general non-equilibrium thermodynamic formalism consistently applicable to both spatially homogeneous and, more importantly, spatially inhomogeneous systems, governed by the Langevin and Fokker-Planck stochastic dynamics with multiple state transition mechanisms, using the potential-flux landscape framework as a bridge connecting stochastic dynamics with non-equilibrium thermodynamics. A set of non-equilibrium thermodynamic equations, quantifying the relations of the non-equilibrium entropy, entropy flow, entropy production, and other thermodynamic quantities, together with their specific expressions, is constructed from a set of dynamical decomposition equations associated with the potential-flux landscape framework. The flux velocity plays a pivotal role on both the dynamic and thermodynamic levels. On the dynamic level, it represents a dynamic force breaking detailed balance, entailing the dynamical decomposition equations. On the thermodynamic level, it represents a thermodynamic force generating entropy production, manifested in the non-equilibrium thermodynamic equations. The Ornstein-Uhlenbeck process and more specific examples, the spatial stochastic neuronal model in particular, are studied to test and illustrate the general theory. This theoretical framework is particularly suitable to study the non-equilibrium (thermo)dynamics of spatially inhomogeneous systems abundant in nature. This paper is the second of a series.
Proper Orthogonal Decomposition in Optimal Control of Fluids
NASA Technical Reports Server (NTRS)
Ravindran, S. S.
1999-01-01
In this article, we present a reduced-order modeling approach suitable for active control of fluid dynamical systems, based on proper orthogonal decomposition (POD). The rationale behind reduced-order modeling is that numerical simulation of the Navier-Stokes equations is still too costly for the purpose of optimization and control of unsteady flows. We examine the possibility of obtaining reduced-order models that reduce the computational complexity associated with the Navier-Stokes equations while capturing the essential dynamics by using the POD. The POD allows the extraction of a certain optimal set of basis functions, perhaps few in number, from a computational or experimental database through an eigenvalue analysis. The solution is then obtained as a linear combination of this optimal set of basis functions by means of Galerkin projection. This makes it attractive for optimal control and estimation of systems governed by partial differential equations. Here we use it in the active control of fluid flows governed by the Navier-Stokes equations. We show that the resulting reduced-order model can be very efficient for the computations of optimization and control problems in unsteady flows. Finally, implementational issues and numerical experiments are presented for simulations and optimal control of fluid flow through channels.
Absolute continuity for operator valued completely positive maps on C∗-algebras
NASA Astrophysics Data System (ADS)
Gheondea, Aurelian; Kavruk, Ali Şamil
2009-02-01
Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.
Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei
2013-07-01
Independent component analysis (ICA) has been proven to be effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of the unmixing matrix, whose initial values are generated randomly. The randomness of the initialization thus leads to different ICA decomposition results, so a single decomposition is not usually reliable for fMRI data analysis. Under this circumstance, several methods for repeated decompositions with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although utilizing RDICA has achieved satisfying results in validating the performance of ICA decomposition, RDICA costs much computing time. To mitigate this problem, in this paper we propose a method, named ATGP-ICA, for fMRI data analysis. This method generates fixed initial values with an automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to demonstrate the effectiveness of the new method, and made a performance comparison of traditional one-time decomposition with ICA (ODICA), RDICA and ATGP-ICA. The proposed method not only eliminates the randomness of ICA decomposition, but also saves much computing time compared to RDICA. Furthermore, receiver operating characteristic (ROC) power analysis also indicated better signal reconstruction performance for ATGP-ICA than for RDICA.
Stability of cyanocobalamin in sugar-coated tablets.
Ohmori, Shinji; Kataoka, Masumi; Koyama, Hiroyoshi
2007-06-07
The purpose of this study was to clarify the stability of cyanocobalamin (VB(12)-CN) in sugar-coated tablets containing fursultiamine hydrochloride (TTFD-HCl), riboflavin (VB(2)), and pyridoxine hydrochloride (VB(6)), and to identify the factors affecting the stability of VB(12)-CN in these sugar-coated tablets. The stability of VB(12)-CN was investigated using high-performance liquid chromatography while decomposition was evaluated kinetically. The decomposition of VB(12)-CN in sugar-coated tablets with high equilibrium relative humidity (more than 60%) under closed conditions showed complex kinetics and followed an Avrami-Erofe'ev equation, which expresses a random nucleation (two-dimensional growth of nuclei) model. We showed that equilibrium relative humidity, the incorporation of VB(2) and VB(6), and sugar coating, are the main factors influencing decomposition and that these factors cause the complex decomposition kinetics.
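The Avrami-Erofe'ev model mentioned above describes sigmoidal decomposition kinetics, α(t) = 1 − exp(−(kt)^n). A minimal sketch with illustrative (made-up) rate constant and exponent, not the values fitted in the study:

```python
import numpy as np

def avrami_fraction(t, k, n):
    """Fraction decomposed under Avrami-Erofe'ev kinetics: a(t) = 1 - exp(-(k t)^n)."""
    return 1.0 - np.exp(-(k * t) ** n)

k, n = 0.05, 2.0                 # illustrative rate constant (1/day) and Avrami exponent
t = np.linspace(0.0, 60.0, 300)
alpha = avrami_fraction(t, k, n)

# Time to 50% decomposition by inverting the equation: (k t)^n = ln 2
t_half = (np.log(2.0) ** (1.0 / n)) / k
print(t_half, avrami_fraction(t_half, k, n))
```

An exponent n > 1 gives the slow-start, accelerating decomposition curve characteristic of nucleation-controlled processes, in contrast to simple first-order decay.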
Singular value decomposition utilizing parallel algorithms on graphical processors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotas, Charlotte W; Barhen, Jacob
2011-01-01
One of the current challenges in underwater acoustic array signal processing is the detection of quiet targets in the presence of noise. In order to enable robust detection, one of the key processing steps requires data and replica whitening. This, in turn, involves the eigen-decomposition of the sample spectral matrix, C_x = (1/K) ∑_{k=1}^{K} X(k) X^H(k), where X(k) denotes a single frequency snapshot with an element for each element of the array. By employing the singular value decomposition (SVD) method, the eigenvectors and eigenvalues can be determined directly from the data without computing the sample covariance matrix, reducing the computational requirements for a given level of accuracy (van Trees, Optimum Array Processing). (Recall that the SVD of a complex matrix A involves determining U, Σ, and V such that A = UΣV^H, where U and V are orthonormal and Σ is a positive, real, diagonal matrix containing the singular values of A. U and V are the eigenvectors of AA^H and A^H A, respectively, while the singular values are the square roots of the eigenvalues of AA^H.) Because it is desirable to be able to compute these quantities in real time, an efficient technique for computing the SVD is vital. In addition, emerging multicore processors like graphical processing units (GPUs) are bringing parallel processing capabilities to an ever increasing number of users. Since the computational tasks involved in array signal processing are well suited for parallelization, it is expected that these computations will be implemented using GPUs as soon as users have the necessary computational tools available to them. Thus, it is important to have an SVD algorithm that is suitable for these processors. This work explores the effectiveness of two different parallel SVD implementations on an NVIDIA Tesla C2050 GPU (14 multiprocessors, 32 cores per multiprocessor, 1.15 GHz clock speed).
The first algorithm is based on a two-step algorithm which bidiagonalizes the matrix using Householder transformations, and then diagonalizes the intermediate bidiagonal matrix through implicit QR shifts. This is similar to that implemented for real matrices by Lahabar and Narayanan ("Singular Value Decomposition on GPU using CUDA", IEEE International Parallel Distributed Processing Symposium 2009). The implementation is done in a hybrid manner, with the bidiagonalization stage done using the GPU while the diagonalization stage is done using the CPU, with the GPU used to update the U and V matrices. The second algorithm is based on a one-sided Jacobi scheme utilizing a sequence of pair-wise column orthogonalizations such that A is replaced by AV until the resulting matrix is sufficiently orthogonal (that is, equal to UΣ). V is obtained from the sequence of orthogonalizations, while Σ can be found from the square roots of the diagonal elements of A^H A and, once Σ is known, U can be found by column-scaling the resulting matrix. These implementations utilize CUDA Fortran and NVIDIA's CUBLAS library. The primary goal of this study is to quantify the comparative performance of these two techniques against themselves and other standard implementations (for example, MATLAB). Considering that there is significant overhead associated with transferring data to the GPU and with synchronization between the GPU and the host CPU, it is also important to understand when it is worthwhile to use the GPU in terms of the matrix size and number of concurrent SVDs to be calculated.
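The one-sided (Hestenes) Jacobi scheme described above can be sketched on the CPU in a few lines; this is a serial illustration of the algorithm for a small real matrix, not the authors' CUDA Fortran implementation:

```python
import numpy as np

def jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi SVD: orthogonalize columns pairwise so A -> U*Sigma, tracking V."""
    A = A.astype(float).copy()
    m, n = A.shape
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for i in range(n - 1):
            for j in range(i + 1, n):
                ai, aj = A[:, i], A[:, j]
                alpha, beta, gamma = ai @ ai, aj @ aj, ai @ aj
                if abs(gamma) > tol * np.sqrt(alpha * beta):
                    converged = False
                    # Rotation angle that zeroes the (i, j) column inner product
                    zeta = (beta - alpha) / (2.0 * gamma)
                    t = (1.0 if zeta >= 0.0 else -1.0) / (abs(zeta) + np.hypot(1.0, zeta))
                    c = 1.0 / np.hypot(1.0, t)
                    s = c * t
                    A[:, i], A[:, j] = c * ai - s * aj, s * ai + c * aj
                    V[:, i], V[:, j] = c * V[:, i] - s * V[:, j], s * V[:, i] + c * V[:, j]
        if converged:
            break
    sing = np.linalg.norm(A, axis=0)   # column norms are the singular values
    U = A / sing                       # column scaling recovers U
    return U, sing, V

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
U, S, V = jacobi_svd(A)
print(np.linalg.norm(A - U @ np.diag(S) @ V.T))
```

Each rotation touches only two columns, which is why the method parallelizes well across GPU threads: disjoint column pairs can be rotated concurrently.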
Multi-scale Methods in Quantum Field Theory
NASA Astrophysics Data System (ADS)
Polyzou, W. N.; Michlin, Tracie; Bulut, Fatih
2018-05-01
Daubechies wavelets are used to make an exact multi-scale decomposition of quantum fields. For reactions that involve finite energy and take place in a finite volume, the number of relevant quantum mechanical degrees of freedom is finite. The wavelet decomposition has natural resolution and volume truncations that can be used to isolate the relevant degrees of freedom. The application of flow equation methods to construct effective theories that decouple coarse and fine scale degrees of freedom is examined.
Calculating Relativistic Transition Matrix Elements for Hydrogenic Atoms Using Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Alexander, Steven; Coldwell, R. L.
2015-03-01
The nonrelativistic transition matrix elements for hydrogen atoms can be computed exactly and these expressions are given in a number of classic textbooks. The relativistic counterparts of these equations can also be computed exactly but these expressions have been described in only a few places in the literature. In part, this is because the relativistic equations lack the elegant simplicity of the nonrelativistic equations. In this poster I will describe how variational Monte Carlo methods can be used to calculate the energy and properties of relativistic hydrogen atoms and how the wavefunctions for these systems can be used to calculate transition matrix elements.
MagIC: Fluid dynamics in a spherical shell simulator
NASA Astrophysics Data System (ADS)
Wicht, J.; Gastine, T.; Barik, A.; Putigny, B.; Yadav, R.; Duarte, L.; Dintrans, B.
2017-09-01
MagIC simulates fluid dynamics in a spherical shell. It solves the Navier-Stokes equation including the Coriolis force, optionally coupled with an induction equation for magnetohydrodynamics (MHD), a temperature (or entropy) equation and an equation for chemical composition, under both the anelastic and the Boussinesq approximations. MagIC uses either Chebyshev polynomials or finite differences in the radial direction and spherical harmonic decomposition in the azimuthal and latitudinal directions. The time-stepping scheme relies on a semi-implicit Crank-Nicolson scheme for the linear terms of the MHD equations and an Adams-Bashforth scheme for the non-linear terms and the Coriolis force.
Bains, William; Xiao, Yao; Yu, Changyong
2015-01-01
The components of life must survive in a cell long enough to perform their function in that cell. Because the rate of attack by water increases with temperature, we can, in principle, predict a maximum temperature above which an active terrestrial metabolism cannot function by analysis of the decomposition rates of the components of life, and comparison of those rates with the metabolites’ minimum metabolic half-lives. The present study is a first step in this direction, providing an analytical framework and method, and analyzing the stability of 63 small molecule metabolites based on literature data. Assuming that attack by water follows a first order rate equation, we extracted decomposition rate constants from literature data and estimated their statistical reliability. The resulting rate equations were then used to give a measure of confidence in the half-life of the metabolite concerned at different temperatures. There is little reliable data on metabolite decomposition or hydrolysis rates in the literature, the data is mostly confined to a small number of classes of chemicals, and the data available are sometimes mutually contradictory because of varying reaction conditions. However, a preliminary analysis suggests that terrestrial biochemistry is limited to environments below ~150–180 °C. We comment briefly on why pressure is likely to have a small effect on this. PMID:25821932
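The first-order assumption above makes the half-life calculation elementary: from two concentration measurements one extracts the rate constant k and then t½ = ln 2 / k. A minimal sketch with made-up measurements (not data from the study):

```python
import numpy as np

# First-order attack by water: dC/dt = -k C  =>  C(t) = C0 * exp(-k t)
# Rate constant from two concentration measurements (illustrative numbers)
t1, c1 = 0.0, 1.00          # normalized concentration at time zero (hours)
t2, c2 = 10.0, 0.80         # 20% decomposed after 10 h
k = np.log(c1 / c2) / (t2 - t1)
half_life = np.log(2.0) / k

print(k, half_life)
# Sanity check: at one half-life the concentration is half the initial value
print(c1 * np.exp(-k * half_life))
```

Comparing such half-lives against the minimum metabolic half-life a molecule needs to function is the core of the paper's temperature-limit argument, with k itself increasing steeply with temperature.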
Electrochemical Test Method for Evaluating Long-Term Propellant-Material Compatibility
1978-12-01
matrix of test conditions is illustrated in Fig. 13. A statistically designed test matrix (Graeco-Latin Cube) could not be used because of passivation... years simulated time results in a final decomposition level of 0.753 mg/cm. The data was examined using statistical techniques to evaluate the relative... metals. The compatibility of all nine metals was evaluated in hydrazine containing water and chloride. The results of the statistical analysis
Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (3).
Murase, Kenya
2016-01-01
In this issue, simultaneous differential equations were introduced. These differential equations are often used in the field of medical physics. The methods for solving them were also introduced, including the Laplace transform and matrix methods. Some examples were presented in which the Laplace transform and matrix methods were applied to solving simultaneous differential equations derived from a three-compartment kinetic model for analyzing glucose metabolism in tissues, and from the Bloch equations describing the behavior of the macroscopic magnetization in magnetic resonance imaging. In the next (final) issue, partial differential equations and various methods for solving them will be introduced, together with some examples in medical physics.
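The matrix method referred to above solves a linear compartment system dx/dt = Ax by diagonalizing A, giving x(t) = V e^{Dt} V⁻¹ x(0). A minimal sketch using a two-compartment analogue with made-up rate constants (the article's example is a three-compartment glucose model):

```python
import numpy as np

# Two-compartment kinetic model dx/dt = A x, solved by the matrix method.
k12, k21, k10 = 0.3, 0.1, 0.2           # exchange and elimination rates (1/min), illustrative
A = np.array([[-(k12 + k10), k21],
              [k12,         -k21]])
x0 = np.array([1.0, 0.0])               # unit dose in compartment 1

def solve(t):
    """x(t) = V exp(D t) V^{-1} x(0) via eigendecomposition of A."""
    w, V = np.linalg.eig(A)
    return (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V) @ x0).real

# Cross-check against a fine explicit-Euler integration of the same system
x, dt = x0.copy(), 1e-4
for _ in range(int(5.0 / dt)):
    x = x + dt * (A @ x)
print(solve(5.0))
print(x)
```

The same closed form is what the Laplace transform route produces; the eigendecomposition simply organizes the partial-fraction step as linear algebra.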
Frainer, André; McKie, Brendan G; Malmqvist, Björn
2014-03-01
Despite ample experimental evidence indicating that biodiversity might be an important driver of ecosystem processes, its role in the functioning of real ecosystems remains unclear. In particular, the understanding of which aspects of biodiversity are most important for ecosystem functioning, their importance relative to other biotic and abiotic drivers, and the circumstances under which biodiversity is most likely to influence functioning in nature, is limited. We conducted a field study that focussed on a guild of insect detritivores in streams, in which we quantified variation in the process of leaf decomposition across two habitats (riffles and pools) and two seasons (autumn and spring). The study was conducted in six streams, and the same locations were sampled in the two seasons. With the aid of structural equation modelling, we assessed spatiotemporal variation in the roles of three key biotic drivers in this process: functional diversity, quantified based on a species trait matrix, consumer density and biomass. Our models also accounted for variability related to different litter resources, and other sources of biotic and abiotic variability among streams. All three of our focal biotic drivers influenced leaf decomposition, but none was important in all habitats and seasons. Functional diversity had contrasting effects on decomposition between habitats and seasons. A positive relationship was observed in pool habitats in spring, associated with high trait dispersion, whereas a negative relationship was observed in riffle habitats during autumn. Our results demonstrate that functional biodiversity can be as significant for functioning in natural ecosystems as other important biotic drivers. In particular, variation in the role of functional diversity between seasons highlights the importance of fluctuations in the relative abundances of traits for ecosystem process rates in real ecosystems.
Journal of Animal Ecology © 2013 British Ecological Society.
The Replicator Equation on Graphs
Ohtsuki, Hisashi; Nowak, Martin A.
2008-01-01
We study evolutionary games on graphs. Each player is represented by a vertex of the graph. The edges denote who meets whom. A player can use any one of n strategies. Players obtain a payoff from interaction with all their immediate neighbors. We consider three different update rules, called ‘birth-death’, ‘death-birth’ and ‘imitation’. A fourth update rule, ‘pairwise comparison’, is shown to be equivalent to birth-death updating in our model. We use pair-approximation to describe the evolutionary game dynamics on regular graphs of degree k. In the limit of weak selection, we can derive a differential equation which describes how the average frequency of each strategy on the graph changes over time. Remarkably, this equation is a replicator equation with a transformed payoff matrix. Therefore, moving a game from a well-mixed population (the complete graph) onto a regular graph simply results in a transformation of the payoff matrix. The new payoff matrix is the sum of the original payoff matrix plus another matrix, which describes the local competition of strategies. We discuss the application of our theory to four particular examples, the Prisoner’s Dilemma, the Snow-Drift game, a coordination game and the Rock-Scissors-Paper game. PMID:16860343
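The limiting dynamics described above reduce to a standard replicator equation with a transformed payoff matrix. As a minimal illustration of replicator dynamics themselves (the Prisoner's Dilemma payoff values below are invented for illustration, not taken from the paper), a forward-Euler integration can be sketched as:

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator equation x_i' = x_i * ((A x)_i - x.A.x)."""
    f = A @ x          # fitness of each strategy
    phi = x @ f        # population-average fitness
    return x + dt * x * (f - phi)

# Illustrative Prisoner's Dilemma payoffs (rows/columns: cooperate, defect).
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])
x = np.array([0.9, 0.1])       # start with 90% cooperators
for _ in range(5000):
    x = replicator_step(x, A)
print(x)                        # defection takes over in the well-mixed case
```

Per the abstract, moving the game onto a regular graph of degree k amounts to replacing A with the sum of A and a matrix describing local competition; the same integrator applies unchanged to the transformed matrix.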
The spectral applications of Beer-Lambert law for some biological and dosimetric materials
NASA Astrophysics Data System (ADS)
Içelli, Orhan; Yalçin, Zeynel; Karakaya, Vatan; Ilgaz, Işıl P.
2014-08-01
The aim of this study is to conduct quantitative and qualitative analyses of biological and dosimetric materials containing organic and inorganic components, and to carry out the determination using the Beer-Lambert law within the framework of spectral theory. The Beer-Lambert law can be written as a system of linear equations, which is solvable whenever the determinant of the coefficient matrix is non-zero. In spectral theory, the values for which the characteristic matrix of the linear system has zero determinant form the point spectrum.
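Concretely, treating absorbances measured at several wavelengths as a linear system in the unknown concentrations gives a small solvable problem whenever the absorptivity matrix has full rank. A hedged sketch in which `eps`, `l`, and `c_true` are all invented illustrative values, not data from the study:

```python
import numpy as np

# Beer-Lambert at several wavelengths: absorbance_i = sum_j eps[i, j] * l * c[j].
eps = np.array([[1500.0,  200.0],
                [ 300.0, 1100.0],
                [ 800.0,  650.0]])   # L/(mol cm), 3 wavelengths x 2 species
l = 1.0                              # path length, cm
c_true = np.array([2e-4, 5e-4])      # mol/L
absorbance = (eps * l) @ c_true      # synthesized "measurements"

# Overdetermined linear system: least-squares solve for the concentrations.
c_est, *_ = np.linalg.lstsq(eps * l, absorbance, rcond=None)
print(c_est)
```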
New insights into thermal decomposition of polycyclic aromatic hydrocarbon oxyradicals.
Liu, Peng; Lin, He; Yang, Yang; Shao, Can; Gu, Chen; Huang, Zhen
2014-12-04
Thermal decompositions of polycyclic aromatic hydrocarbon (PAH) oxyradicals on various surface sites, including five-membered ring, free-edge, zigzag, and armchair sites, have been systematically investigated using ab initio density functional theory at the B3LYP/6-311+G(d,p) level. The calculation based on Hückel theory indicates that PAHs (3H-cyclopenta[a]anthracene oxyradical) with oxyradicals on a five-membered ring site have high chemical reactivity. The rate coefficients of PAH oxyradical decomposition were evaluated by using Rice-Ramsperger-Kassel-Marcus theory and solving the master equations in the temperature range of 1500-2500 K and the pressure range of 0.1-10 atm. The kinetic calculations revealed that the rate coefficients of PAH oxyradical decomposition are temperature-, pressure-, and surface-site-dependent, and that an oxyradical on a five-membered ring is easier to decompose than one on a six-membered ring. Four-membered rings were found in the decomposition of the five-membered ring, and a new reaction channel of PAH evolution involving four-membered rings is recommended.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, Hong; Davidson, Ronald
2011-07-18
The Courant-Snyder (CS) theory and the Kapchinskij-Vladimirskij (KV) distribution for high-intensity beams in an uncoupled focusing lattice are generalized to the case of coupled transverse dynamics. The envelope function is generalized to an envelope matrix, and the envelope equation becomes a matrix envelope equation with matrix operations that are non-commutative. In an uncoupled lattice, the KV distribution function, first analyzed in 1959, is the only known exact solution of the nonlinear Vlasov-Maxwell equations for high-intensity beams including self-fields in a self-consistent manner. The KV solution is generalized to high-intensity beams in a coupled transverse lattice using the generalized CS invariant. This solution projects to a rotating, pulsating elliptical beam in transverse configuration space. The fully self-consistent solution reduces the nonlinear Vlasov-Maxwell equations to a nonlinear matrix ordinary differential equation for the envelope matrix, which determines the geometry of the pulsating and rotating beam ellipse. These results provide us with a new theoretical tool to investigate the dynamics of high-intensity beams in a coupled transverse lattice. A strongly coupled lattice, a so-called N-rolling lattice, is studied as an example. It is found that strong coupling does not deteriorate the beam quality. Instead, the coupling induces beam rotation, and reduces beam pulsation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin Hong; Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026; Davidson, Ronald C.
2011-05-15
The Courant-Snyder (CS) theory and the Kapchinskij-Vladimirskij (KV) distribution for high-intensity beams in an uncoupled focusing lattice are generalized to the case of coupled transverse dynamics. The envelope function is generalized to an envelope matrix, and the envelope equation becomes a matrix envelope equation with matrix operations that are noncommutative. In an uncoupled lattice, the KV distribution function, first analyzed in 1959, is the only known exact solution of the nonlinear Vlasov-Maxwell equations for high-intensity beams including self-fields in a self-consistent manner. The KV solution is generalized to high-intensity beams in a coupled transverse lattice using the generalized CS invariant. This solution projects to a rotating, pulsating elliptical beam in transverse configuration space. The fully self-consistent solution reduces the nonlinear Vlasov-Maxwell equations to a nonlinear matrix ordinary differential equation for the envelope matrix, which determines the geometry of the pulsating and rotating beam ellipse. These results provide us with a new theoretical tool to investigate the dynamics of high-intensity beams in a coupled transverse lattice. A strongly coupled lattice, a so-called N-rolling lattice, is studied as an example. It is found that strong coupling does not deteriorate the beam quality. Instead, the coupling induces beam rotation and reduces beam pulsation.
Lax representations for matrix short pulse equations
NASA Astrophysics Data System (ADS)
Popowicz, Z.
2017-10-01
The Lax representation for different matrix generalizations of Short Pulse Equations (SPEs) is considered. The four-dimensional Lax representations of four-component Matsuno, Feng, and Dimakis-Müller-Hoissen-Matsuno equations are obtained. The four-component Feng system is defined by generalization of the two-dimensional Lax representation to the four-component case. This system reduces to the original Feng equation, to the two-component Matsuno equation, or to the Yao-Zang equation. The three-component version of the Feng equation is presented. The four-component version of the Matsuno equation with its Lax representation is given. This equation reduces to the new two-component Feng system. The two-component Dimakis-Müller-Hoissen-Matsuno equations are generalized to the four-parameter family of the four-component SPE. The bi-Hamiltonian structure of this generalization, for special values of parameters, is defined. This four-component SPE in special cases reduces to the new two-component SPE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hindmarsh, A.C.; Sloan, L.J.; Dubois, P.F.
1978-12-01
This report supersedes the original version, dated June 1976. It describes four versions of a pair of subroutines for solving N x N systems of linear algebraic equations. In each case, the first routine, DEC, performs an LU decomposition of the matrix with partial pivoting, and the second, SOL, computes the solution vector by back-substitution. The first version is in Fortran IV, and is derived from routines DECOMP and SOLVE written by C.B. Moler. The second is a version for the CDC 7600 computer using STACKLIB. The third is a hand-coded (Compass) version for the 7600. The fourth is a vectorized version for the CDC STAR, renamed DECST and SOLST. Comparative tests on these routines are also described. The Compass version is faster than the others on the 7600 by factors of up to 5. The major revisions to the original report, and to the subroutines described, are an updated description of the availability of each version of DEC/SOL; correction of some errors in the Compass version, as altered so as to be compatible with FTN; and a new STAR version, which runs much faster than the earlier one. The standard Fortran version, the Fortran/STACKLIB version, and the object code generated from the Compass version and available in STACKLIB have not been changed.
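A minimal Python sketch of the DEC/SOL split described above, LU decomposition with partial pivoting followed by a separate substitution routine, assuming a nonsingular matrix. This mirrors the structure of the Fortran pair, not its code:

```python
import numpy as np

def dec(a):
    """LU decomposition with partial pivoting (in the spirit of DEC).
    Returns packed LU factors and the pivot index vector; assumes a is nonsingular."""
    a = a.astype(float)
    n = a.shape[0]
    ip = np.arange(n)
    for k in range(n - 1):
        m = k + np.argmax(np.abs(a[k:, k]))       # pivot row
        if m != k:
            a[[k, m]] = a[[m, k]]
            ip[[k, m]] = ip[[m, k]]
        a[k+1:, k] /= a[k, k]                     # multipliers stored in place
        a[k+1:, k+1:] -= np.outer(a[k+1:, k], a[k, k+1:])
    return a, ip

def sol(lu, ip, b):
    """Forward and back substitution (in the spirit of SOL)."""
    n = lu.shape[0]
    x = b[ip].astype(float)          # apply the row permutation
    for k in range(1, n):            # forward substitution with unit-lower L
        x[k] -= lu[k, :k] @ x[:k]
    for k in range(n - 1, -1, -1):   # back substitution with upper U
        x[k] = (x[k] - lu[k, k+1:] @ x[k+1:]) / lu[k, k]
    return x

# Usage: factor once, then solve (possibly for many right-hand sides).
A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])
b = np.array([1., 2., 3.])
lu, ip = dec(A)
x = sol(lu, ip, b)
print(np.allclose(A @ x, b))   # True
```

The decompose/solve split is exactly what makes the pair efficient for repeated right-hand sides: DEC's O(n^3) work is done once, each SOL call is O(n^2).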
A Flexible CUDA LU-based Solver for Small, Batched Linear Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Gawande, Nitin A.; Villa, Oreste
This chapter presents the implementation of a batched CUDA solver based on LU factorization for small linear systems. This solver may be used in applications such as reactive flow transport models, which apply the Newton-Raphson technique to linearize and iteratively solve the sets of nonlinear equations that represent the reactions for tens of thousands to millions of physical locations. The implementation exploits somewhat counterintuitive GPGPU programming techniques: it assigns the solution of a matrix (representing a system) to a single CUDA thread, does not exploit shared memory, and employs dynamic memory allocation on the GPUs. These techniques enable our implementation to simultaneously solve sets of systems with over 100 equations and to employ LU decomposition with complete pivoting, providing the higher numerical accuracy required by certain applications. Other currently available solutions for batched linear solvers are limited by size and only support partial pivoting, although they may be faster in certain conditions. We discuss the code of our implementation and present a comparison with the other implementations, discussing the various tradeoffs in terms of performance and flexibility. This work will enable developers who need batched linear solvers to choose whichever implementation is more appropriate to the features and the requirements of their applications, and even to implement dynamic switching approaches that can choose the best implementation depending on the input data.
NASA Astrophysics Data System (ADS)
Castro, Manuel J.; Gallardo, José M.; Marquina, Antonio
2017-10-01
We present recent advances in PVM (Polynomial Viscosity Matrix) methods based on internal approximations to the absolute value function, and compare them with Chebyshev-based PVM solvers. These solvers only require a bound on the maximum wave speed, so no spectral decomposition is needed. Another important feature of the proposed methods is that they are suitable to be written in Jacobian-free form, in which only evaluations of the physical flux are used. This is particularly interesting when considering systems for which the Jacobians involve complex expressions, e.g., the relativistic magnetohydrodynamics (RMHD) equations. On the other hand, the proposed Jacobian-free solvers have also been extended to the case of approximate DOT (Dumbser-Osher-Toro) methods, which can be regarded as simple and efficient approximations to the classical Osher-Solomon method, sharing most of its interesting features and being applicable to general hyperbolic systems. To test the properties of our schemes a number of numerical experiments involving the RMHD equations are presented, both in one and two dimensions. The obtained results are in good agreement with those found in the literature and show that our schemes are robust and accurate, running stably under a satisfactory time step restriction. It is worth emphasizing that, although this work focuses on RMHD, the proposed schemes are suitable to be applied to general hyperbolic systems.
Dynamic analysis and control of lightweight manipulators with flexible parallel link mechanisms
NASA Technical Reports Server (NTRS)
Lee, Jeh Won
1991-01-01
The flexible parallel link mechanism is designed for increased rigidity to resist buckling when it carries a heavy payload. Compared to a one-link flexible manipulator, a two-link flexible manipulator, especially the flexible parallel mechanism, has more complicated dynamic and control characteristics. The objective of this research is the theoretical analysis and experimental verification of the dynamics and control of a two-link flexible manipulator with a flexible parallel link mechanism. Nonlinear equations of motion of the lightweight manipulator are derived by the Lagrangian method in symbolic form to better understand the structure of the dynamic model. A manipulator with a flexible parallel link mechanism is a constrained dynamic system whose equations are sensitive to numerical integration error. This constrained system is solved using singular value decomposition of the constraint Jacobian matrix. The discrepancies between the analytical model and the experiment are explained using a simplified and a detailed finite element model. The step responses of the analytical model and the TREETOPS model match each other well. The nonlinear dynamics is studied using a sinusoidal excitation. The effect of actuator dynamics on a flexible robot was investigated, and is explained theoretically and experimentally using root loci and Bode plots. As a performance baseline for advanced control schemes, a simple decoupled feedback scheme is applied.
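The constraint-handling step can be illustrated in a few lines: the singular value decomposition of a constraint Jacobian yields a null-space basis of admissible velocities. The Jacobian below is invented for illustration (not from the manipulator model) and is assumed to have full row rank:

```python
import numpy as np

# Invented constraint Jacobian J (2 constraints x 4 generalized speeds);
# admissible velocities of the constrained mechanism satisfy J @ qdot = 0.
J = np.array([[1.0, -1.0,  0.0, 0.5],
              [0.0,  2.0, -1.0, 1.0]])

U, s, Vt = np.linalg.svd(J)
m = J.shape[0]
N = Vt[m:].T          # columns span the null space of J (assumes full row rank)

# Any qdot = N @ z satisfies the constraints to round-off.
z = np.array([0.3, -0.7])
qdot = N @ z
print(np.abs(J @ qdot).max())   # ~0
```

Projecting the equations of motion onto this basis removes the constraint forces, which is what makes the SVD route robust against the integration-error sensitivity mentioned above.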
General linear codes for fault-tolerant matrix operations on processor arrays
NASA Technical Reports Server (NTRS)
Nair, V. S. S.; Abraham, J. A.
1988-01-01
Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors. Numerical errors may also be misconstrued as errors due to physical faults in the system. In this paper, a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU-decomposition, with minimum numerical error. Encoding schemes are given for some of the example codes which fall under the general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
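The classic row/column checksum encoding behind such schemes can be sketched as follows (a generic algorithm-based fault tolerance example, not the specific linear codes proposed in the paper):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])

# Column-checksum A (extra row of column sums) times row-checksum B
# (extra column of row sums) yields a full-checksum product matrix.
Acol = np.vstack([A, A.sum(axis=0)])
Brow = np.hstack([B, B.sum(axis=1, keepdims=True)])
C = Acol @ Brow

# Consistency check: last row/column must equal sums of the data block.
data = C[:-1, :-1]
ok = (np.allclose(C[-1, :-1], data.sum(axis=0)) and
      np.allclose(C[:-1, -1], data.sum(axis=1)))

# A single transient fault in the data block breaks a checksum.
C[0, 1] += 5.0
detected = not np.allclose(C[-1, :-1], C[:-1, :-1].sum(axis=0))
print(ok, detected)
```

The roundoff concern raised in the abstract shows up here in the tolerance of the checksum comparison: too tight and numerical error looks like a fault, too loose and small faults escape.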
1,2-diketones promoted degradation of poly(epsilon-caprolactone)
NASA Astrophysics Data System (ADS)
Danko, Martin; Borska, Katarina; Ragab, Sherif Shaban; Janigova, Ivica; Mosnacek, Jaroslav
2012-07-01
Photochemical reactions of benzil and camphorquinone were used for modification of poly(ɛ-caprolactone) polymer films. The photochemistry of the dopants was followed by infrared spectroscopy; changes in the polymer chains of the matrix were followed by gel permeation chromatography. Benzoyl peroxide was efficiently photochemically generated from benzil in the solid polymer matrix in the presence of air. Subsequent decomposition of the benzoyl peroxide led to degradation of the matrix. Photochemical transformation of benzil in vacuum led to hydrogen abstraction from the polymer chains to a greater extent, which resulted in chain recombination and formation of gel. Photochemical transformation of camphorquinone to the corresponding camphoric peroxide was not observed; only a decrease in the molecular weight of the polymer matrix doped with camphorquinone was observed during irradiation.
Matrix approach to land carbon cycle modeling: A case study with the Community Land Model.
Huang, Yuanyuan; Lu, Xingjie; Shi, Zheng; Lawrence, David; Koven, Charles D; Xia, Jianyang; Du, Zhenggang; Kluzek, Erik; Luo, Yiqi
2018-03-01
The terrestrial carbon (C) cycle has been commonly represented by a series of C balance equations to track C influxes into and effluxes out of individual pools in earth system models (ESMs). This representation matches our understanding of C cycle processes well but makes it difficult to track model behaviors. It is also computationally expensive, limiting the ability to conduct comprehensive parametric sensitivity analyses. To overcome these challenges, we have developed a matrix approach, which reorganizes the C balance equations in the original ESM into one matrix equation without changing any modeled C cycle processes and mechanisms. We applied the matrix approach to the Community Land Model (CLM4.5) with vertically-resolved biogeochemistry. The matrix equation exactly reproduces litter and soil organic carbon (SOC) dynamics of the standard CLM4.5 across different spatial-temporal scales. The matrix approach enables effective diagnosis of system properties such as C residence time and attribution of global change impacts to relevant processes. We illustrated, for example, that the impacts of CO2 fertilization on litter and SOC dynamics can be easily decomposed into the relative contributions from C input, allocation of external C into different C pools, nitrogen regulation, altered soil environmental conditions, and vertical mixing along the soil profile. In addition, the matrix tool can accelerate model spin-up, permit thorough parametric sensitivity tests, enable pool-based data assimilation, and facilitate tracking and benchmarking of model behaviors. Overall, the matrix approach can make a broad range of future modeling activities more efficient and effective. © 2017 John Wiley & Sons Ltd.
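The reorganized form can be illustrated with a toy three-pool balance of the shape dX/dt = B*u + A*K*X, whose steady state follows from one linear solve. All pool structure, allocation fractions, and rate values below are invented for illustration, not CLM4.5 parameters:

```python
import numpy as np

# Toy matrix carbon balance dX/dt = B*u + A*K*X; A carries -1 on the
# diagonal for losses from each pool, off-diagonals are transfers.
u = 1.0                                # external C input
B = np.array([0.7, 0.3, 0.0])          # allocation of input to 3 pools
K = np.diag([2.0, 0.5, 0.05])          # turnover rates (1/yr)
A = np.array([[-1.0,  0.0,  0.0],
              [ 0.4, -1.0,  0.0],
              [ 0.1,  0.3, -1.0]])

# Steady state: 0 = B*u + A K X  =>  X = -(A K)^{-1} B u.
X_ss = -np.linalg.solve(A @ K, B * u)
print(X_ss)

# Diagnostic the matrix form makes cheap: total stock per unit input,
# i.e. an aggregate residence time.
print(X_ss.sum() / u)
```

Diagnostics like residence time, and spin-up by direct solve instead of long transient integration, fall out of exactly this kind of linear-algebraic reorganization.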
A general framework of noise suppression in material decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrongolo, Michael; Dong, Xue; Zhu, Lei, E-mail: leizhu@gatech.edu
Purpose: As a general problem of dual-energy CT (DECT), noise amplification in material decomposition severely reduces the signal-to-noise ratio on the decomposed images compared to that on the original CT images. In this work, the authors propose a general framework of noise suppression in material decomposition for DECT. The method is based on an iterative algorithm recently developed in their group for image-domain decomposition of DECT, with an extension to include nonlinear decomposition models. The generalized framework of iterative DECT decomposition enables beam-hardening correction with simultaneous noise suppression, which improves the clinical benefits of DECT. Methods: The authors propose to suppress noise on the decomposed images of DECT using convex optimization, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance–covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulas are derived to compute the variance–covariance matrix for decomposed images with general-form numerical or analytical decomposition. As a demonstration, the authors implement the proposed algorithm on phantom data using an empirical polynomial function of decomposition measured on a calibration scan. The polynomial coefficients are determined from the projection data acquired on a wedge phantom, and the signal decomposition is performed in the projection domain. Results: On the Catphan®600 phantom, the proposed noise suppression method reduces the average noise standard deviation of basis material images by one to two orders of magnitude, with a superior performance on spatial resolution as shown in comparisons of line-pair images and modulation transfer function measurements.
On the synthesized monoenergetic CT images, the noise standard deviation is reduced by a factor of 2–3. By using nonlinear decomposition on projections, the authors’ method effectively suppresses the streaking artifacts of beam hardening and obtains more uniform images than their previous approach based on a linear model. Similar performance of noise suppression is observed in the results of an anthropomorphic head phantom and a pediatric chest phantom generated by the proposed method. With beam-hardening correction enabled by their approach, the image spatial nonuniformity on the head phantom is reduced from around 10% on the original CT images to 4.9% on the synthesized monoenergetic CT image. On the pediatric chest phantom, their method suppresses image noise standard deviation by a factor of around 7.5, and compared with linear decomposition, it reduces the estimation error of electron densities from 33.3% to 8.6%. Conclusions: The authors propose a general framework of noise suppression in material decomposition for DECT. Phantom studies have shown the proposed method improves the image uniformity and the accuracy of electron density measurements by effective beam-hardening correction and reduces noise level without noticeable resolution loss.
Solid-state reaction kinetics of neodymium doped magnesium hydrogen phosphate system
NASA Astrophysics Data System (ADS)
Gupta, Rashmi; Slathia, Goldy; Bamzai, K. K.
2018-05-01
Neodymium doped magnesium hydrogen phosphate (NdMHP) crystals were grown by using the gel encapsulation technique. Structural characterization of the grown crystals has been carried out by single-crystal X-ray diffraction (XRD), which revealed that NdMHP crystals crystallize in the orthorhombic crystal system with space group Pbca. The kinetics of decomposition of the grown crystals has been studied by non-isothermal analysis. The estimation of decomposition temperatures and weight loss has been made from thermogravimetric/differential thermal analysis (TG/DTA) in conjunction with DSC studies. The various steps involved in the thermal decomposition of the material have been analysed using the Horowitz-Metzger, Coats-Redfern and Piloyan-Novikova equations for evaluating various kinetic parameters.
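Of the model-fitting equations mentioned, the Coats-Redfern method reduces (for first-order kinetics) to a straight-line fit of ln[-ln(1-α)/T²] against 1/T. A sketch on synthetic data generated directly from the linearized law, with assumed kinetic parameters rather than values from the NdMHP study:

```python
import numpy as np

# Coats-Redfern linearization for first-order decomposition:
#   ln[-ln(1 - alpha) / T^2] = ln(A*R / (beta*E)) - E / (R*T)
# y is synthesized from the linearized law with assumed parameters.
R = 8.314            # J/(mol K)
E_true = 120e3       # J/mol (assumed activation energy)
A_pre = 1e12         # pre-exponential factor, 1/s (assumed)
beta = 10.0 / 60.0   # heating rate of 10 K/min, in K/s

T = np.linspace(500.0, 650.0, 30)    # K
y = np.log(A_pre * R / (beta * E_true)) - E_true / (R * T)

# Slope of y vs 1/T is -E/R, so the fit recovers the activation energy.
slope, intercept = np.polyfit(1.0 / T, y, 1)
E_fit = -slope * R
print(E_fit / 1e3)   # ~120 kJ/mol
```

With real TG data, y would instead be computed from the measured conversion α(T), and the quality of the straight line is itself a check on the assumed reaction model g(α).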
An optimization approach for fitting canonical tensor decompositions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
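The ALS baseline the paper compares against can be written compactly for a third-order tensor. A hedged sketch (plain ALS with pseudoinverses, no normalization, line search, or convergence checks):

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product of B (J x r) and C (K x r)."""
    r = B.shape[1]
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, r)

def cp_als(T, rank, iters=200, seed=0):
    """Plain CANDECOMP/PARAFAC via alternating least squares (3-way only)."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T1 = T.reshape(I, -1)                     # mode-1 unfolding
    T2 = T.transpose(1, 0, 2).reshape(J, -1)  # mode-2 unfolding
    T3 = T.transpose(2, 0, 1).reshape(K, -1)  # mode-3 unfolding
    for _ in range(iters):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Recover an exactly rank-2 synthetic tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
err = (np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C))
       / np.linalg.norm(T))
print(err)
```

Each sweep solves three linear least-squares problems; the gradient-based methods the paper proposes update all factors simultaneously at comparable per-iteration cost.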
On the singular perturbations for fractional differential equation.
Atangana, Abdon
2014-01-01
The goal of this paper is to examine the possible extension of the singular perturbation differential equation to the concept of fractional order derivatives. To achieve this, we present a review of the concept of fractional calculus. We make use of the Laplace transform operator to derive exact solutions of singular perturbation fractional linear differential equations. We make use of three analytical methods to present exact and approximate solutions of the singular perturbation fractional, nonlinear, nonhomogeneous differential equation. These methods include the regular perturbation method, a new development of the variational iteration method, and the homotopy decomposition method.
Vortex breakdown simulation - A circumspect study of the steady, laminar, axisymmetric model
NASA Technical Reports Server (NTRS)
Salas, M. D.; Kuruvila, G.
1989-01-01
The incompressible axisymmetric steady Navier-Stokes equations are written using the streamfunction-vorticity formulation. The resulting equations are discretized using a second-order central-difference scheme. The discretized equations are linearized and then solved using an exact LU decomposition, Gaussian elimination, and Newton iteration. Solutions are presented for Reynolds numbers (based on vortex core radius) 100-1800 and swirl parameter 0.9-1.1. The effects of inflow boundary conditions, the location of farfield and outflow boundaries, and mesh refinement are examined. Finally, the stability of the steady solutions is investigated by solving the time-dependent equations.
The predictive power of singular value decomposition entropy for stock market dynamics
NASA Astrophysics Data System (ADS)
Caraiani, Petre
2014-01-01
We use a correlation-based approach to analyze financial data from the US stock market, both daily and monthly observations from the Dow Jones. We compute the entropy based on the singular value decomposition of the correlation matrix for the components of the Dow Jones Industrial Index. Based on a moving window, we derive time varying measures of entropy for both daily and monthly data. We find that the entropy has a predictive ability with respect to stock market dynamics as indicated by the Granger causality tests.
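The entropy measure can be sketched directly: take the singular values of a window's correlation matrix, normalize them to a distribution, and compute Shannon entropy. The return series below are synthetic, not Dow Jones data:

```python
import numpy as np

def svd_entropy(returns):
    """Entropy of the normalized singular-value spectrum of the
    correlation matrix (rows = observations, columns = assets)."""
    corr = np.corrcoef(returns, rowvar=False)
    s = np.linalg.svd(corr, compute_uv=False)
    p = s / s.sum()
    return float(-np.sum(p * np.log(p)))

# Synthetic return series: one uncorrelated panel, one with a common factor.
rng = np.random.default_rng(0)
T, n = 250, 10
uncorrelated = rng.standard_normal((T, n))
common = rng.standard_normal((T, 1))
correlated = 0.9 * common + 0.1 * rng.standard_normal((T, n))

e_flat = svd_entropy(uncorrelated)   # near log(10): spectrum nearly flat
e_corr = svd_entropy(correlated)     # lower: one common factor dominates
print(e_flat, e_corr)
```

A moving-window version simply applies `svd_entropy` to successive slices of the return panel, giving the time-varying series whose predictive content the paper tests with Granger causality.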
More on Chemical Reaction Balancing.
ERIC Educational Resources Information Center
Swinehart, D. F.
1985-01-01
A previous article stated that only the matrix method was powerful enough to balance a particular chemical equation. Shows how this equation can be balanced without using the matrix method. The approach taken involves writing partial mathematical reactions and redox half-reactions, and combining them to yield the final balanced reaction. (JN)
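For contrast, the matrix method the article refers to amounts to finding the null space of an element-by-species composition matrix. A sketch on an illustrative reaction (C2H6 + O2 -> CO2 + H2O; not the equation discussed in the article, which is not reproduced in the abstract):

```python
import numpy as np
from fractions import Fraction

# Element-by-species matrix for C2H6 + O2 -> CO2 + H2O (products negated).
M = np.array([[2.0, 0.0, -1.0,  0.0],   # C
              [6.0, 0.0,  0.0, -2.0],   # H
              [0.0, 2.0, -2.0, -1.0]])  # O

# Balanced coefficients span the (here one-dimensional) null space of M:
# take the right-singular vector for the zero singular value.
v = np.linalg.svd(M)[2][-1]
v = v / np.abs(v[np.abs(v) > 1e-10]).min()       # rescale smallest entry to 1
fracs = [Fraction(float(x)).limit_denominator(100) for x in v]
lcm = int(np.lcm.reduce([f.denominator for f in fracs]))
ints = [int(f * lcm) for f in fracs]
print(ints)   # [2, 7, 4, 6] up to an overall sign
```

When the null space has dimension greater than one, the reaction does not balance uniquely, which is exactly the situation where hand methods like combining redox half-reactions require chemical judgment.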
Improvements in sparse matrix operations of NASTRAN
NASA Technical Reports Server (NTRS)
Harano, S.
1980-01-01
A "nontransmit" packing routine was added to NASTRAN to allow matrix data to be referred to directly from the input/output buffer. Use of the packing routine permits various matrix-handling routines to reference the input/output buffer directly once data addresses have been received. The packing routine offers a buffer-by-buffer backspace feature for efficient backspacing in sequential access. Unlike conventional backspacing, which needs two backward record operations for a single read of one record (one column), this feature omits the overlapping of the READ operation and the backward record operation. It eliminates the necessity of writing, in the decomposition of a symmetric matrix, a portion of the matrix to its upper triangular matrix from the last to the first columns of the symmetric matrix, thus saving time in generating the upper triangular matrix. Only the lower triangular matrix must be written onto the secondary storage device, bringing a 10 to 30% reduction in use of the disk space of the storage device.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chvartatskyi, O. I., E-mail: alex.chvartatskyy@gmail.com; Sydorenko, Yu. M., E-mail: y-sydorenko@franko.lviv.ua
We introduce a new bidirectional generalization of the (2+1)-dimensional k-constrained Kadomtsev-Petviashvili (KP) hierarchy ((2+1)-BDk-cKPH). This new hierarchy generalizes the (2+1)-dimensional k-cKP hierarchy, and the (t_A, τ_B) and (γ_A, σ_B) matrix hierarchies. (2+1)-BDk-cKPH contains a new matrix (1+1)-k-constrained KP hierarchy. Some members of (2+1)-BDk-cKPH are also listed. In particular, it contains matrix generalizations of Davey-Stewartson (DS) systems, the (2+1)-dimensional modified Korteweg-de Vries equation and the Nizhnik equation. (2+1)-BDk-cKPH also includes new matrix (2+1)-dimensional generalizations of the Yajima-Oikawa and Melnikov systems. The Binary Darboux Transformation Dressing Method is also proposed for the construction of exact solutions of equations from (2+1)-BDk-cKPH. As an example, the exact form of multi-soliton solutions for a vector generalization of the DS system is given.
NASA Technical Reports Server (NTRS)
Morino, L.
1980-01-01
Recent developments of the Green's function method and the computer program SOUSSA (Steady, Oscillatory, and Unsteady Subsonic and Supersonic Aerodynamics) are reviewed and summarized. Applying the Green's function method to the fully unsteady (transient) potential equation yields an integro-differential-delay equation. With spatial discretization by the finite-element method, this equation is approximated by a set of differential-delay equations in time. Time solution by Laplace transform yields a matrix relating the velocity potential to the normal wash. Premultiplying and postmultiplying by the matrices relating generalized forces to the potential and the normal wash to the generalized coordinates one obtains the matrix of the generalized aerodynamic forces. The frequency and mode-shape dependence of this matrix makes the program SOUSSA useful for multiple frequency and repeated mode-shape evaluations.
An improved V-Lambda solution of the matrix Riccati equation
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Markley, F. Landis
1988-01-01
The authors present an improved algorithm for computing the V-Lambda solution of the matrix Riccati equation. The improvement, a reduction of the computational load, results from the orthogonality of the eigenvector matrix that has to be solved for. The orthogonality constraint reduces the number of independent parameters which define the matrix from n² to n(n-1)/2. The authors show how to specify the parameters, how to solve for them, and how to form from them the needed eigenvector matrix. In the search for suitable parameters, the analogy between the present problem and the problem of attitude determination is exploited, resulting in the choice of Rodrigues parameters.
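The parameter count n(n-1)/2 matches the degrees of freedom of a skew-symmetric matrix, and the Rodrigues/Cayley construction turns such parameters into an orthogonal matrix. A generic sketch of that construction, not the authors' specific algorithm:

```python
import numpy as np

def cayley(params, n):
    """Orthogonal matrix from n(n-1)/2 parameters via the Cayley transform
    Q = (I + S)^{-1}(I - S) of a skew-symmetric S built from the parameters."""
    S = np.zeros((n, n))
    S[np.triu_indices(n, k=1)] = params   # fill strict upper triangle
    S = S - S.T                           # make it skew-symmetric
    I = np.eye(n)
    return np.linalg.solve(I + S, I - S)

n = 4
rng = np.random.default_rng(0)
Q = cayley(rng.standard_normal(n * (n - 1) // 2), n)
print(np.allclose(Q.T @ Q, np.eye(n)))   # True
```

The transform reaches all orthogonal matrices without a -1 eigenvalue, which is the usual caveat of any minimal n(n-1)/2-parameter chart of the orthogonal group.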
Application of thermogravimetric studies for optimization of lithium hexafluorophosphate production
NASA Astrophysics Data System (ADS)
Smagin, A. A.; Matyukha, V. A.; Korobtsev, V. P.
Lithium hexafluorophosphate, isolated from anhydrous hydrogen fluoride solution by decanting and filtering, is an adduct of composition LiPF6·HF. The dynamics of HF removal from LiPF6 by thermal decomposition of LiPF6·HF was studied by thermogravimetric investigations. Based on the experimental data, the constants entering into equations of the form C = C0·exp(t·K0·exp(-E/RT)) were calculated, explaining the thermal decomposition processes of LiPF6·HF and LiPF6.
Euler and Navier-Stokes equations on the hyperbolic plane.
Khesin, Boris; Misiolek, Gerard
2012-11-06
We show that the nonuniqueness of the Leray-Hopf solutions of the Navier-Stokes equation on the hyperbolic plane H^2 observed by Chan and Czubak is a consequence of the Hodge decomposition. We show that this phenomenon does not occur on H^n whenever n ≥ 3. We also describe the corresponding general Hamiltonian framework of hydrodynamics on complete Riemannian manifolds, which includes the hyperbolic setting.
Sorokin, Sergey V
2011-03-01
Helical springs serve as vibration isolators in virtually any suspension system. Various exact and approximate methods may be employed to determine the eigenfrequencies of vibrations of these structural elements and their dynamic transfer functions. The method of boundary integral equations is a meaningful alternative to obtain exact solutions of problems of the time-harmonic dynamics of elastic springs in the framework of Bernoulli-Euler beam theory. In this paper, the derivations of the Green's matrix, of the Somigliana's identities, and of the boundary integral equations are presented. The vibrational power transmission in an infinitely long spring is analyzed by means of the Green's matrix. The eigenfrequencies and the dynamic transfer functions are found by solving the boundary integral equations. In the course of analysis, the essential features and advantages of the method of boundary integral equations are highlighted. The reported analytical results may be used to study the time-harmonic motion in any wave guide governed by a system of linear differential equations in a single spatial coordinate along its axis. © 2011 Acoustical Society of America
NASA Astrophysics Data System (ADS)
Keefe, Laurence
2016-11-01
Parabolized acoustic propagation in transversely inhomogeneous media is described by the operator update equation U(x, y, z + Δz) = e^{ik₀Δz(−1 + √(1 + Z))} U(x, y, z) for the evolution of the envelope of a wavetrain solution to the original Helmholtz equation. Here the operator Z = ∇_T²/k₀² + (n² − 1) involves the transverse Laplacian and the refractive index distribution. Standard expansion techniques (on the assumption ‖Z‖ ≪ 1) produce PDEs that approximate, to a greater or lesser extent, the full dispersion relation of the original Helmholtz equation, except that none of them describes evanescent/damped waves without special modifications to the expansion coefficients. Alternatively, a discretization of both the envelope and the operator converts the operator update equation into a matrix multiply, and existing theorems on matrix functions demonstrate that the complete (discrete) Helmholtz dispersion relation, including evanescent/damped waves, is preserved by this discretization. Propagation-constant/damping-rate contour comparisons for the operator equation and various approximations demonstrate this point, and how poorly the lowest-order, textbook, parabolized equation describes propagation in lined ducts.
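The discretized one-step propagator can be formed directly as a matrix function. The sketch below is a minimal 1-D illustration with assumed parameters (k0, Δz, grid size) and a homogeneous index n = 1; the principal matrix square root automatically turns modes with 1 + Z < 0 into damped, evanescent ones, which is the behavior the series expansions miss.

```python
import numpy as np
from scipy.linalg import expm, sqrtm

k0, dz, N = 1.0, 0.5, 8          # illustrative wavenumber, step size, grid size
# 1-D transverse Laplacian (Dirichlet ends, unit spacing); n = 1, so Z = Laplacian / k0^2
D2 = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
Z = D2 / k0**2

# One-step propagator e^{i k0 dz (sqrt(I + Z) - I)} as a matrix function
M = np.eye(N) + Z
P = expm(1j * k0 * dz * (sqrtm(M).astype(complex) - np.eye(N)))
```

Eigenmodes of I + Z with negative eigenvalues acquire an imaginary square root and decay exponentially with z, so the discrete propagator never amplifies the field.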
Sensitivity analysis for aeroacoustic and aeroelastic design of turbomachinery blades
NASA Technical Reports Server (NTRS)
Lorence, Christopher B.; Hall, Kenneth C.
1995-01-01
A new method for computing the effect that small changes in the airfoil shape and cascade geometry have on the aeroacoustic and aeroelastic behavior of turbomachinery cascades is presented. The nonlinear unsteady flow is assumed to be composed of a nonlinear steady flow plus a small perturbation unsteady flow that is harmonic in time. First, the full potential equation is used to describe the behavior of the nonlinear mean (steady) flow through a two-dimensional cascade. The small disturbance unsteady flow through the cascade is described by the linearized Euler equations. Using rapid distortion theory, the unsteady velocity is split into a rotational part that contains the vorticity and an irrotational part described by a scalar potential. The unsteady vorticity transport is described analytically in terms of the drift and stream functions computed from the steady flow. Hence, the solution of the linearized Euler equations may be reduced to a single inhomogeneous equation for the unsteady potential. The steady flow and small disturbance unsteady flow equations are discretized using bilinear quadrilateral isoparametric finite elements. The nonlinear mean flow solution and streamline computational grid are computed simultaneously using Newton iteration. At each step of the Newton iteration, LU decomposition is used to solve the resulting set of linear equations. The unsteady flow problem is linear, and is also solved using LU decomposition. Next, a sensitivity analysis is performed to determine the effect small changes in cascade and airfoil geometry have on the mean and unsteady flow fields. The sensitivity analysis makes use of the nominal steady and unsteady flow LU decompositions so that no additional matrices need to be factored. Hence, the present method is computationally very efficient. 
To demonstrate how the sensitivity analysis may be used to redesign cascades, a compressor is redesigned for improved aeroelastic stability and two different fan exit guide vanes are redesigned for reduced downstream radiated noise. In addition, a framework detailing how the two-dimensional version of the method may be used to redesign three-dimensional geometries is presented.
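The computational trick described above (factor once, then reuse the factorization for every sensitivity right-hand side) can be sketched with SciPy; the matrix and right-hand sides below are random stand-ins, not the flow solver's actual Jacobian.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned stand-in Jacobian
lu, piv = lu_factor(A)                            # factor once at the converged step

b_nominal = rng.standard_normal(n)
x_nominal = lu_solve((lu, piv), b_nominal)        # nominal solve

# Each geometry perturbation changes only the right-hand side, so the stored
# factorization is reused: many sensitivities for the cost of one factorization.
db = rng.standard_normal((n, 5))                  # 5 hypothetical design perturbations
dx = lu_solve((lu, piv), db)
```

This is why the sensitivity analysis adds almost no cost on top of the nominal solution: back-substitution is O(n²) per right-hand side versus O(n³) for a fresh factorization.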
Effects of Solute Concentrations on Kinetic Pathways in Ni-Al-Cr Alloys
NASA Technical Reports Server (NTRS)
Booth-Morrison, Christopher; Weninger, Jessica; Sudbrack, Chantal K.; Mao, Zugang; Seidman, David N.; Noebe, Ronald D.
2008-01-01
The kinetic pathways resulting from the formation of coherent gamma'-precipitates from the gamma-matrix are studied for two Ni-Al-Cr alloys with similar gamma'-precipitate volume fractions at 873 K. The details of the phase decompositions of Ni-7.5Al-8.5Cr at.% and Ni-5.2Al-14.2Cr at.% for aging times from 1/6 to 1024 h are investigated by atom-probe tomography, and are found to differ significantly from a mean-field description of coarsening. The morphologies of the gamma'-precipitates of the alloys are similar, though the degrees of gamma'-precipitate coagulation and coalescence differ. Quantification within the framework of classical nucleation theory reveals that differences in the chemical driving forces for phase decomposition result in differences in the nucleation behavior of the two alloys. The temporal evolution of the gamma'-precipitate average radii and the gamma-matrix supersaturations follow the predictions of classical coarsening models. The compositional trajectories of the gamma-matrix phases of the alloys are found to follow approximately the equilibrium tie-lines, while the trajectories of the gamma'-precipitates do not, resulting in significant differences in the partitioning ratios of the solute elements.
A master equation for strongly interacting dipoles
NASA Astrophysics Data System (ADS)
Stokes, Adam; Nazir, Ahsan
2018-04-01
We consider a pair of dipoles such as Rydberg atoms for which direct electrostatic dipole–dipole interactions may be significantly larger than the coupling to transverse radiation. We derive a master equation using the Coulomb gauge, which naturally enables us to include the inter-dipole Coulomb energy within the system Hamiltonian rather than the interaction. In contrast, the standard master equation for a two-dipole system, which depends entirely on well-known gauge-invariant S-matrix elements, is usually derived using the multipolar gauge, wherein there is no explicit inter-dipole Coulomb interaction. We show using a generalised arbitrary-gauge light-matter Hamiltonian that this master equation is obtained in other gauges only if the inter-dipole Coulomb interaction is kept within the interaction Hamiltonian rather than the unperturbed part as in our derivation. Thus, our master equation depends on different S-matrix elements, which give separation-dependent corrections to the standard matrix elements describing resonant energy transfer and collective decay. The two master equations coincide in the large separation limit where static couplings are negligible. We provide an application of our master equation by finding separation-dependent corrections to the natural emission spectrum of the two-dipole system.
In vitro dissolution kinetic study of theophylline from hydrophilic and hydrophobic matrices.
Maswadeh, Hamzah M; Semreen, Mohammad H; Abdulhalim, Abdulatif A
2006-01-01
Oral dosage forms containing 300 mg theophylline in matrix-type tablets were prepared by the direct compression method using two kinds of matrices: glyceryl behenate (hydrophobic) and (hydroxypropyl)methyl cellulose (hydrophilic). The in vitro release kinetics of these formulations were studied at pH 6.8 using the USP dissolution apparatus with the paddle assembly. The kinetics of the dissolution process were studied by analyzing the dissolution data using four kinetic equations: the zero-order equation, the first-order equation, the Higuchi square-root equation and the Hixson-Crowell cube-root law. The analysis of the dissolution kinetic data for the theophylline preparations in this study shows that release follows first-order kinetics and that the release process involves erosion/diffusion and an alteration in the surface area and diameter of the matrix system, as well as in the diffusion path length from the matrix drug load, during the dissolution process. This relation is best described by the combined use of the first-order equation and the Hixson-Crowell cube-root law.
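A minimal sketch of the kinetic fitting, using synthetic first-order release data (the rate constant and time grid are assumed for illustration): the first-order model is fit by linear regression on the log of the fraction remaining, and the Hixson-Crowell cube-root law on the cube root of the remaining amount.

```python
import numpy as np

# Synthetic first-order dissolution data: cumulative fraction released = 1 - exp(-k t)
t = np.linspace(0.5, 8.0, 16)          # hours (illustrative)
k_true = 0.35                          # 1/h, assumed
released = 1.0 - np.exp(-k_true * t)

# First-order model: ln(fraction remaining) = -k t, fit by linear least squares
slope, _ = np.polyfit(t, np.log(1.0 - released), 1)
k_first = -slope

# Hixson-Crowell cube-root law: W0^(1/3) - W(t)^(1/3) = kappa t (here W0 = 1)
kappa, _ = np.polyfit(t, 1.0 - (1.0 - released) ** (1.0 / 3.0), 1)
```

In a real study the same regressions would be run on the measured dissolution profiles and the models ranked by goodness of fit, as done in the paper.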
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, Edmond
Solving sparse problems is at the core of many DOE computational science applications. We focus on the challenge of developing sparse algorithms that can fully exploit the parallelism in extreme-scale computing systems, in particular systems with massive numbers of cores per node. Our approach is to express a sparse matrix factorization as a large number of bilinear constraint equations and then to solve these equations via an asynchronous iterative method. The unknowns in these equations are the matrix entries of the desired factorization.
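A dense toy version of the idea can be sketched in a few lines: the unknown entries of a unit-lower-triangular L and upper-triangular U must satisfy the bilinear constraints a_ij = (LU)_ij, and Jacobi-style sweeps (the synchronous analogue of the asynchronous iteration) drive them to a fixed point. The matrix, initial guess and sweep count are illustrative, not taken from the project.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = 4.0 * np.eye(n) + rng.uniform(-0.3, 0.3, (n, n))  # diagonally dominant test matrix

# Unknowns: entries of unit-lower-triangular L and upper-triangular U
L = np.tril(A, -1) / np.diag(A) + np.eye(n)   # simple initial guess
U = np.triu(A)

# Jacobi-style sweeps over the bilinear constraints a_ij = (L U)_ij;
# every entry update is independent, which is what enables asynchrony.
for _ in range(50):
    L_new, U_new = L.copy(), U.copy()
    for i in range(n):
        for j in range(n):
            if i > j:   # strictly lower entry of L
                L_new[i, j] = (A[i, j] - L[i, :j] @ U[:j, j]) / U[j, j]
            else:       # entry of U (L has a unit diagonal)
                U_new[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
    L, U = L_new, U_new
```

On a sparse pattern the same updates are restricted to the nonzero positions, giving an incomplete factorization; the point is that no entry update needs to wait for any other.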
Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.
Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong
2018-05-11
Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal into mode components. The mode matrix was partitioned into a number of submatrices, and the local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states show a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training iterations (14). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears.
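The partition-and-SVD step can be sketched as follows; the mode matrix and partition count are random stand-ins, not the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(2)
mode_matrix = rng.standard_normal((4, 1024))   # e.g. 4 VMD mode components x 1024 samples

def singular_value_vectors(modes, n_parts=8):
    """Partition the mode matrix column-wise and keep each submatrix's singular values."""
    parts = np.split(modes, n_parts, axis=1)           # 8 submatrices of shape (4, 128)
    return np.array([np.linalg.svd(p, compute_uv=False) for p in parts])

feature = singular_value_vectors(mode_matrix)   # shape (8, 4): one vector per submatrix
```

Stacking these per-submatrix vectors by location yields the singular value vector matrix that is fed to the CNN.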
Adaptive bearing estimation and tracking of multiple targets in a realistic passive sonar scenario
NASA Astrophysics Data System (ADS)
Rajagopal, R.; Challa, Subhash; Faruqi, Farhan A.; Rao, P. R.
1997-06-01
In a realistic passive sonar environment, the received signal consists of multipath arrivals from closely separated moving targets, and the signals are contaminated by spatially correlated noise. Differential MUSIC has been proposed to estimate the DOAs in such a scenario; it estimates the 'noise subspace' in order to estimate the DOAs. However, the noise subspace estimate has to be updated as new data become available. To reduce the computational cost, a new adaptive noise subspace estimation algorithm is proposed in this paper. The salient features of the proposed algorithm are: (1) noise subspace estimation is done by QR decomposition of the difference matrix formed from the data covariance matrix, so that, compared to standard eigendecomposition-based methods which require O(N³) computations, the proposed method requires only O(N²) computations; (2) the noise subspace is updated by updating the QR decomposition; (3) the proposed algorithm works in a realistic sonar environment. In the second part of the paper, the estimated bearing values are used to track multiple targets. To achieve this, the proposed extended Kalman filter, with a nonlinear system model and linear measurements, is applied. Computer simulation results are also presented to support the theory.
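The linear-algebra core, extracting a noise subspace from a rank-deficient difference matrix by QR rather than a full eigendecomposition, can be illustrated with a random stand-in matrix; this is a generic sketch, not the differential-MUSIC pipeline itself.

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 8, 2                              # 8 sensors, 2 sources (illustrative)
# Rank-d stand-in for the difference matrix formed from the data covariance
D = rng.standard_normal((N, d)) @ rng.standard_normal((d, N))

# QR factorization: the first d columns of Q span the column (signal) space,
# while the remaining columns span its orthogonal complement, the noise subspace.
Q, R = np.linalg.qr(D)
noise_basis = Q[:, d:]
```

When a rank-one data update arrives, the QR factors can be updated directly in O(N²) operations instead of recomputing an O(N³) eigendecomposition, which is the saving the abstract describes.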
Harmonic analysis of electric locomotive and traction power system based on wavelet singular entropy
NASA Astrophysics Data System (ADS)
Dun, Xiaohong
2018-05-01
With the rapid development of high-speed railway and heavy-haul transport, the locomotive and traction power system has become the main harmonic source in China's power grid. In response, the system's power quality issues need timely monitoring, assessment and governance. Wavelet singular entropy is an organic combination of the wavelet transform, singular value decomposition and information entropy theory, combining the unique advantages of the three in signal processing: the time-frequency localization of the wavelet transform, the extraction of the basic modal characteristics of the data by singular value decomposition, and the quantification of the feature data by information entropy. Based on the theory of singular value decomposition, the wavelet coefficient matrix obtained after the wavelet transform is decomposed into a series of singular values that reflect the basic characteristics of the original coefficient matrix. The statistical properties of information entropy are then used to analyze the uncertainty of the singular value set, giving a definite measure of the complexity of the original signal. Wavelet singular entropy therefore has good application prospects in fault detection, classification and protection. MATLAB simulation shows that wavelet singular entropy is effective for harmonic analysis of the locomotive and traction power system.
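The singular-entropy computation itself is compact. Below is a hedged sketch applied to a stand-in coefficient matrix; a real implementation would build the matrix from wavelet-transform coefficients (e.g. with PyWavelets) rather than the synthetic rows used here.

```python
import numpy as np

def singular_entropy(coeff_matrix):
    """Shannon entropy of the normalized singular value spectrum."""
    s = np.linalg.svd(coeff_matrix, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Stand-in for a wavelet coefficient matrix (rows = scales, columns = time):
t = np.linspace(0, 1, 256, endpoint=False)
clean = np.vstack([np.sin(2 * np.pi * 50 * t)] * 4)      # rank-1: low entropy
noisy = clean + 0.5 * np.random.default_rng(4).standard_normal(clean.shape)
```

A clean single-tone matrix concentrates its energy in one singular value (entropy near zero), while harmonic distortion and noise spread the spectrum and raise the entropy, which is what makes the index useful for harmonic monitoring.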
Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui
2017-01-01
Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme is capable of generating an adaptive dictionary whose atoms reveal the underlying structure of raw signals; essentially, dictionary learning is employed as an adaptive feature extraction method independent of any prior knowledge. Second, the singular value sequence of the learned dictionary matrix is used to extract the feature vector. Since the vector is of high dimensionality, a simple and practical principal component analysis (PCA) is applied to reduce dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted to identify and classify fault patterns automatically. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary learning-based matrix construction approach outperforms the mode decomposition-based methods in terms of capacity and adaptability for feature extraction.
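The back half of the pipeline, reducing singular-value feature vectors with PCA and classifying with a nearest-neighbour rule, can be sketched with NumPy alone; the two synthetic feature clusters below stand in for singular-value vectors of learned dictionaries under two fault conditions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic singular-value feature vectors for two fault classes (stand-ins)
class_a = rng.normal(loc=5.0, scale=0.3, size=(20, 10))
class_b = rng.normal(loc=2.0, scale=0.3, size=(20, 10))
X = np.vstack([class_a, class_b])
y = np.array([0] * 20 + [1] * 20)

# PCA via SVD of the centered data; keep 2 principal components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# 1-nearest-neighbour classification of a held-out class-a sample
query = rng.normal(loc=5.0, scale=0.3, size=10)
zq = (query - X.mean(axis=0)) @ Vt[:2].T
pred = y[np.argmin(np.linalg.norm(Z - zq, axis=1))]
```

In practice a small K (rather than K = 1) and a validation split would be used, as in the paper's experimental studies.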
Dhyani, Vaibhav; Kumar Awasthi, Mukesh; Wang, Quan; Kumar, Jitendra; Ren, Xiuna; Zhao, Junchao; Chen, Hongyu; Wang, Meijing; Bhaskar, Thallada; Zhang, Zengqiang
2018-03-01
In this work, the influence of composting on the thermal decomposition behavior and decomposition kinetics of pig manure-derived solid wastes was analyzed using thermogravimetry. Wheat straw, biochar, zeolite, and wood vinegar were added to pig manure during composting. The composting was done in 130 L PVC reactors with a 100 L effective volume for 50 days. The activation energy of pyrolysis of samples before and after composting was calculated using Friedman's method, while the pre-exponential factor was calculated using Kissinger's equation. It was observed that composting decreased the volatile content of all the samples. When added together to pig manure, the additives led to a reduction in the activation energy of decomposition, suggesting the presence of simpler compounds in the compost material in comparison with the complex feedstock. Copyright © 2017 Elsevier Ltd. All rights reserved.
Exact solution of some linear matrix equations using algebraic methods
NASA Technical Reports Server (NTRS)
Djaferis, T. E.; Mitter, S. K.
1979-01-01
Algebraic methods are used to construct the exact solution P of the linear matrix equation PA + BP = −C, where A, B, and C are matrices with real entries. The emphasis is on finite algebraic procedures which are easily implemented on a digital computer and which lead to an explicit solution to the problem. The paper is divided into six sections, which include the proof of the basic lemma, the Liapunov equation, and the computer implementation of the rational, integer and modular algorithms. Two numerical examples are given and the entire calculation process is depicted.
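For readers who want to reproduce the setting numerically: PA + BP = −C is a Sylvester equation, which SciPy solves directly. The matrices below are random and illustrative, with spectra shifted so that a unique solution is guaranteed (no eigenvalue of B equals the negative of an eigenvalue of A).

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # shift keeps spec(B) and -spec(A) disjoint
B = rng.standard_normal((4, 4)) + 4 * np.eye(4)
C = rng.standard_normal((4, 4))

# solve_sylvester(B, A, Q) solves B P + P A = Q; here Q = -C
P = solve_sylvester(B, A, -C)
```

This numerical route (Bartels-Stewart under the hood) complements the paper's exact finite algebraic procedures, which avoid floating-point arithmetic entirely.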
Modifying a numerical algorithm for solving the matrix equation X + AXᵀB = C
NASA Astrophysics Data System (ADS)
Vorontsov, Yu. O.
2013-06-01
Certain modifications are proposed for a numerical algorithm solving the matrix equation X + AXᵀB = C. By keeping the intermediate results in storage and repeatedly using them, it is possible to reduce the total complexity of the algorithm from O(n⁴) to O(n³) arithmetic operations.
NASA Technical Reports Server (NTRS)
Packard, A. K.; Sastry, S. S.
1986-01-01
A method of solving a class of linear matrix equations over various rings is proposed, using results from linear geometric control theory. An algorithm, successfully implemented, is presented, along with non-trivial numerical examples. Applications of the method to the algebraic control system design methodology are discussed.
Matrix Solution of Coupled Differential Equations and Looped Car Following Models
ERIC Educational Resources Information Center
McCartney, Mark
2008-01-01
A simple mathematical model of how vehicles follow each other along a looped stretch of road is described. The resulting coupled first-order differential equations are solved using appropriate matrix techniques and the physical significance of the model is discussed. A number of possible classroom exercises are suggested to help…
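A minimal version of such a looped model can be written down and solved with the matrix exponential; the coupling constant and initial speeds below are illustrative, not taken from the article. Each car relaxes toward the speed of the car ahead, and the loop closes the system into a circulant matrix.

```python
import numpy as np
from scipy.linalg import expm

# N cars on a loop; each adjusts speed toward the car ahead:
# dv_i/dt = c (v_{i-1} - v_i), indices mod N (illustrative linear model)
N, c = 5, 1.0
S = np.roll(np.eye(N), 1, axis=0)   # S v shifts speeds one car around the loop
M = c * (S - np.eye(N))

v0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # one fast car, four stationary
v_t = expm(M * 3.0) @ v0                    # state at t = 3 via the matrix exponential
```

Because the columns of M sum to zero, the total speed is conserved, and the circulant eigenvalues have non-positive real parts, so the speeds equalize around the loop over time.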
NASA Astrophysics Data System (ADS)
Meric, Ilker; Johansen, Geir A.; Holstad, Marie B.; Mattingly, John; Gardner, Robin P.
2012-05-01
Prompt gamma-ray neutron activation analysis (PGNAA) has been and still is one of the major methods of choice for the elemental analysis of various bulk samples. This is mostly due to the fact that PGNAA offers a rapid, non-destructive and on-line means of sample interrogation. The quantitative analysis of the prompt gamma-ray data could, on the other hand, be performed either through the single peak analysis or the so-called Monte Carlo library least-squares (MCLLS) approach, of which the latter has been shown to be more sensitive and more accurate than the former. The MCLLS approach is based on the assumption that the total prompt gamma-ray spectrum of any sample is a linear combination of the contributions from the individual constituents or libraries. This assumption leads to, through the minimization of the chi-square value, a set of linear equations which has to be solved to obtain the library multipliers, a process that involves the inversion of the covariance matrix. The least-squares solution may be extremely uncertain due to the ill-conditioning of the covariance matrix. The covariance matrix will become ill-conditioned whenever, in the subsequent calculations, two or more libraries are highly correlated. The ill-conditioning will also be unavoidable whenever the sample contains trace amounts of certain elements or elements with significantly low thermal neutron capture cross-sections. In this work, a new iterative approach, which can handle the ill-conditioning of the covariance matrix, is proposed and applied to a hydrocarbon multiphase flow problem in which the parameters of interest are the separate amounts of the oil, gas, water and salt phases. The results of the proposed method are also compared with the results obtained through the implementation of a well-known regularization method, the truncated singular value decomposition. Final calculations indicate that the proposed approach would be able to treat ill-conditioned cases appropriately.
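The truncated-SVD regularization used for comparison can be sketched in a few lines; the ill-conditioned matrix below is a synthetic stand-in for a library matrix containing two nearly identical (highly correlated) libraries.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Least-squares solution retaining only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

rng = np.random.default_rng(7)
ones = np.ones(50)
# Two nearly collinear columns make the normal equations ill-conditioned
A = np.column_stack([ones, ones + 1e-10 * rng.standard_normal(50),
                     rng.standard_normal(50)])
b = A @ np.array([1.0, 1.0, 2.0]) + 1e-3 * rng.standard_normal(50)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]   # polluted by the tiny singular value
x_tsvd = tsvd_solve(A, b, k=2)                   # discard the ill-conditioned direction
```

Discarding the smallest singular value sacrifices the ability to separate the two correlated libraries but stabilizes the remaining multipliers, which is the trade-off the truncated SVD makes.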
Reactive decomposition of low density PMDI foam subject to shock compression
NASA Astrophysics Data System (ADS)
Alexander, Scott; Reinhart, William; Brundage, Aaron; Peterson, David
Low density polymethylene diisocyanate (PMDI) foam with a density of 5.4 pounds per cubic foot (0.087 g/cc) was tested to determine the equation of state properties under shock compression over the pressure range of 0.58 - 3.4 GPa. This pressure range encompasses a region approximately 1.0-1.2 GPa within which the foam undergoes reactive decomposition resulting in significant volume expansion of approximately three times the volume prior to reaction. This volume expansion has a significant effect on the high pressure equation of state. Previous work on similar foam was conducted only up to the region where volume expansion occurs and extrapolation of that data to higher pressure results in a significant error. It is now clear that new models are required to account for the reactive decomposition of this class of foam. The results of plate impact tests will be presented and discussed including details of the unique challenges associated with shock compression of low density foams. Sandia National Labs is a multi-program lab managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corp., for the U.S. Dept. of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
Domain decomposition: A bridge between nature and parallel computers
NASA Technical Reports Server (NTRS)
Keyes, David E.
1992-01-01
Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.
ERIC Educational Resources Information Center
Blakley, G. R.
1982-01-01
Reviews mathematical techniques for solving systems of homogeneous linear equations and demonstrates that the algebraic method of balancing chemical equations is a matter of solving a system of homogeneous linear equations. FORTRAN programs applying this matrix method to chemical equation balancing are available from the author. (JN)
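The point of the review, that balancing reduces to finding the null space of a homogeneous linear system, is easy to demonstrate; the example below balances CH4 + O2 → CO2 + H2O, with species and element rows chosen purely for illustration.

```python
import numpy as np

# Columns: CH4, O2, CO2, H2O; rows: C, H, O. Products carry a negative sign,
# so a balanced equation is a null-space vector of M.
M = np.array([[1, 0, -1,  0],               # carbon
              [4, 0,  0, -2],               # hydrogen
              [0, 2, -2, -1]], dtype=float) # oxygen

_, _, Vt = np.linalg.svd(M)
x = Vt[-1]                        # basis of the one-dimensional null space
x = x * np.sign(x[0])             # fix the overall sign
coeffs = np.round(x / np.abs(x).min()).astype(int)   # smallest integer coefficients
```

This recovers the familiar balance CH4 + 2 O2 → CO2 + 2 H2O; the same composition-matrix setup works for any reaction whose null space is one-dimensional.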
NASA Astrophysics Data System (ADS)
Zhang, Fei; Huang, Weizhang; Li, Xianping; Zhang, Shicheng
2018-03-01
A moving mesh finite element method is studied for the numerical solution of a phase-field model for brittle fracture. The moving mesh partial differential equation approach is employed to dynamically track crack propagation. Meanwhile, the decomposition of the strain tensor into tensile and compressive components is essential for the success of the phase-field modeling of brittle fracture but results in a non-smooth elastic energy and stronger nonlinearity in the governing equation. This makes the governing equation much more difficult to solve and, in particular, Newton's iteration often fails to converge. Three regularization methods are proposed to smooth out the decomposition of the strain tensor. Numerical examples of fracture propagation under quasi-static load demonstrate that all of the methods can effectively improve the convergence of Newton's iteration for relatively small values of the regularization parameter but without compromising the accuracy of the numerical solution. They also show that the moving mesh finite element method is able to adaptively concentrate the mesh elements around propagating cracks and handle multiple and complex crack systems.
A low dimensional dynamical system for the wall layer
NASA Technical Reports Server (NTRS)
Aubry, N.; Keefe, L. R.
1987-01-01
Low dimensional dynamical systems which model a fully developed turbulent wall layer were derived. The model is based on the optimally fast convergent proper orthogonal decomposition, or Karhunen-Loeve expansion. This decomposition provides a set of eigenfunctions derived from the autocorrelation tensor at zero time lag. Via Galerkin projection, low dimensional sets of ordinary differential equations in time, for the coefficients of the expansion, were derived from the Navier-Stokes equations. The energy loss to the unresolved modes was modeled by an eddy viscosity representation, analogous to Heisenberg's spectral model. A set of eigenfunctions and eigenvalues was obtained from direct numerical simulation of a plane channel at a Reynolds number of 6600, based on the mean centerline velocity and the channel width, and compared with previous work by Herzog. Using the new eigenvalues and eigenfunctions, a new ten-dimensional set of ordinary differential equations was derived using five non-zero cross-stream Fourier modes with a periodic length of 377 wall units. The dynamical system was integrated for a range of the eddy viscosity parameter alpha. This work is encouraging.
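The decomposition step maps directly onto an SVD of a snapshot matrix. The sketch below builds synthetic snapshots from three planted "coherent structures" (all sizes and amplitudes are illustrative) and recovers their energy dominance, which is the property that makes a low dimensional Galerkin truncation viable.

```python
import numpy as np

rng = np.random.default_rng(8)
# Snapshot matrix: 200 "velocity field" snapshots of dimension 64,
# built from 3 orthonormal structures plus small noise (illustrative)
basis = np.linalg.qr(rng.standard_normal((64, 3)))[0]
snapshots = basis @ rng.standard_normal((3, 200)) * 5 \
            + 0.1 * rng.standard_normal((64, 200))

# POD modes = left singular vectors of the snapshot matrix;
# squared singular values are proportional to modal energies
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / (s**2).sum()
```

Truncating the expansion at the first few columns of U and projecting the governing equations onto them is the Galerkin step that yields the low dimensional ODE system.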
Second level semi-degenerate fields in W_3 Toda theory: matrix element and differential equation
NASA Astrophysics Data System (ADS)
Belavin, Vladimir; Cao, Xiangyu; Estienne, Benoit; Santachiara, Raoul
2017-03-01
In a recent study we considered W_3 Toda 4-point functions that involve matrix elements of a primary field with the highest-weight in the adjoint representation of sl_3 . We generalize this result by considering a semi-degenerate primary field, which has one null vector at level two. We obtain a sixth-order Fuchsian differential equation for the conformal blocks. We discuss the presence of multiplicities, the matrix elements and the fusion rules.
NASA Technical Reports Server (NTRS)
White, Jeffery A.; Baurle, Robert A.; Passe, Bradley J.; Spiegel, Seth C.; Nishikawa, Hiroaki
2017-01-01
The ability to solve the equations governing the hypersonic turbulent flow of a real gas on unstructured grids using a spatially-elliptic, 2nd-order accurate, cell-centered, finite-volume method has been recently implemented in the VULCAN-CFD code. This paper describes the key numerical methods and techniques that were found to be required to robustly obtain accurate solutions to hypersonic flows on non-hex-dominant unstructured grids. The methods and techniques described include: an augmented stencil, weighted linear least squares, cell-average gradient method; a robust multidimensional cell-average gradient-limiter process that is consistent with the augmented stencil of the cell-average gradient method; and a cell-face gradient method that contains a cell-skewness-sensitive damping term derived using hyperbolic diffusion based concepts. A data-parallel matrix-based symmetric Gauss-Seidel point-implicit scheme, used to solve the governing equations, is described and shown to be more robust and efficient than a matrix-free alternative. In addition, a y+ adaptive turbulent wall boundary condition methodology is presented. This boundary condition methodology is designed to automatically switch between a solve-to-the-wall and a wall-matching-function boundary condition based on the local y+ of the 1st cell center off the wall. The aforementioned methods and techniques are then applied to a series of hypersonic and supersonic turbulent flat plate unit tests to examine the efficiency, robustness and convergence behavior of the implicit scheme and to determine the ability of the solve-to-the-wall and y+ adaptive turbulent wall boundary conditions to reproduce the turbulent law-of-the-wall.
Finally, the thermally perfect, chemically frozen, Mach 7.8 turbulent flow of air through a scramjet flow-path is computed and compared with experimental data to demonstrate the robustness, accuracy and convergence behavior of the unstructured-grid solver for a realistic 3-D geometry on a non-hex-dominant grid.
NASA Astrophysics Data System (ADS)
Bigdeli, Abbas; Biglari-Abhari, Morteza; Salcic, Zoran; Tin Lai, Yat
2006-12-01
A new pipelined systolic array-based (PSA) architecture for matrix inversion is proposed. The PSA architecture is suitable for FPGA implementations as it efficiently uses the available resources of an FPGA. It is scalable for different matrix sizes and as such allows parameterisation that makes it suitable for customisation to application-specific needs. This new architecture has the advantage of [InlineEquation not available: see fulltext.] processing element complexity, compared to [InlineEquation not available: see fulltext.] in other systolic array structures, where the size of the input matrix is given by [InlineEquation not available: see fulltext.]. The use of the PSA architecture for a Kalman filter, which requires different structures for different numbers of states, is illustrated as an implementation example. The resulting precision error is analysed and shown to be negligible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Platts, J.A.; Abraham, M.H.
The partitioning of organic compounds between air and foliage and between water and foliage is of considerable environmental interest. The purpose of this work is to show that partitioning into the cuticular matrix of one particular species can be satisfactorily modeled by general equations the authors have previously developed and, hence, that the same general equations could be used to model partitioning into other plant materials of the same or different species. The general equations are linear free energy relationships that employ descriptors for polarity/polarizability, hydrogen bond acidity and basicity, dispersive effects, and volume. They have been applied to the partition of 62 very varied organic compounds between the cuticular matrix of the tomato fruit, Lycopersicon esculentum, and either air (MX_a) or water (MX_w). Values of log MX_a covering a range of 12.4 log units are correlated with a standard deviation of 0.232 log units, and values of log MX_w covering a range of 7.6 log units are correlated with an SD of 0.236 log units. Possibilities are discussed for the prediction of new air-plant cuticular matrix and water-plant cuticular matrix partition values on the basis of the equations developed.
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1975-01-01
The equations of motion for a system of coupled flexible bodies, rigid bodies, point masses, and symmetric wheels were derived. The equations were cast into a partitioned matrix form in which certain partitions became nontrivial when the effects of flexibility were treated. The equations are shown to contract to the coupled rigid body equations or expand to the coupled flexible body equations all within the same basic framework. Furthermore, the coefficient matrix always has the computationally desirable property of symmetry. Making use of the derived equations, a comparison was made between the equations which described a flexible body model and those which described a rigid body model of the same elastic appendage attached to an arbitrary coupled body system. From the comparison, equivalence relations were developed which defined how the two modeling approaches described identical dynamic effects.
Jasik-Slęzak, Jolanta; Slęzak-Prochazka, Izabella; Slęzak, Andrzej
2014-01-01
A system of network forms of Kedem-Katchalsky (K-K) equations for ternary non-electrolyte solutions is made of eight matrix equations containing Peusner's coefficients R(ij), L(ij), H(ij), W(ij), K(ij), N(ij), S(ij) or P(ij) (i, j ∈ {1, 2, 3}). The equations are the result of symmetric or hybrid transformation of the classic form of the K-K equations by means of Peusner's network thermodynamics (PNT). The aim was to calculate concentration dependences of the determinants of the matrices of Peusner's coefficients R(ij), L(ij), H(ij), W(ij), S(ij), N(ij), K(ij) and P(ij) (i, j ∈ {1, 2, 3}). The material used in the experiment was a hemodialysis Nephrophan membrane with specified transport properties (L(p), σ, Ω) in aqueous glucose and ethanol solutions. The method involved equations for the determinants of the coefficient matrices R(ij), L(ij), H(ij), W(ij), S(ij), N(ij), K(ij) or P(ij) (i, j ∈ {1, 2, 3}). The objective of the calculations was the dependence of the determinants of these matrices, under conditions of solution homogeneity, on the average concentration of one component of the solution in the membrane (C1) at a fixed value of the second component (C2). The method of calculating the determinants of the matrices of Peusner's coefficients R(ij), L(ij), H(ij), W(ij), S(ij), N(ij), K(ij) or P(ij) (i, j ∈ {1, 2, 3}) is a new tool that may be applicable in studies on membrane transport. Calculations showed that the coefficients are sensitive to the concentration and composition of the solutions separated by a polymeric membrane.
Clustering Tree-structured Data on Manifold
Lu, Na; Miao, Hongyu
2016-01-01
Tree-structured data usually contain both topological and geometrical information, and must be considered on a manifold instead of in Euclidean space for appropriate data parameterization and analysis. In this study, we propose a novel tree-structured data parameterization, called the Topology-Attribute matrix (T-A matrix), so that the data clustering task can be conducted on a matrix manifold. We incorporate the structure constraints embedded in the data into the non-negative matrix factorization method to determine meta-trees from the T-A matrix, and the signature vector of each single tree can then be extracted by meta-tree decomposition. The meta-tree space turns out to be a cone space, in which we explore the distance metric and implement the clustering algorithm based on concepts such as the Fréchet mean. Finally, the T-A matrix based clustering (TAMBAC) framework is evaluated and compared using both simulated data and real retinal images to illustrate its efficiency and accuracy. PMID:26660696
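The factorization step above relies on non-negative matrix factorization. As a hedged illustration (plain NMF via the classical Lee-Seung multiplicative updates, not the authors' structure-constrained variant), a minimal NumPy sketch on a hypothetical toy topology-attribute matrix:

```python
import numpy as np

def nmf(V, r, iters=500, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates: V ~= W @ H, W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update H, staying non-negative
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update W likewise
    return W, H

# Toy "topology-attribute"-style matrix: 6 trees described by 4 attributes
V = np.random.default_rng(1).random((4, 6))
W, H = nmf(V, r=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The columns of `W` play the role of the meta-trees in this simplified setting.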
1,2-diketones promoted degradation of poly(epsilon-caprolactone)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Danko, Martin; Borska, Katarina; Ragab, Sherif Shaban
2012-07-11
Photochemical reactions of benzil and camphorquinone were used for the modification of poly(ε-caprolactone) polymer films. The photochemistry of the dopants was followed by infrared spectroscopy; changes in the polymer chains of the matrix were followed by gel permeation chromatography. Benzoyl peroxide was efficiently photochemically generated from benzil in the solid polymer matrix in the presence of air. Subsequent decomposition of the benzoyl peroxide led to degradation of the matrix. Photochemical transformation of benzil in vacuum led to hydrogen abstraction from the polymer chains to a greater extent, which resulted in chain recombination and formation of a gel. Photochemical transformation of camphorquinone to the corresponding camphoric peroxide was not observed. Only a decrease of the molecular weight of the polymer matrix doped with camphorquinone was observed during the irradiation.
A discrete method for modal analysis of overhead line conductor bundles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Migdalovici, M.A.; Sireteanu, T.D.; Albrecht, A.A.
The paper presents a mathematical model and a semi-analytical procedure to calculate the vibration modes and eigenfrequencies of single or bundled conductors with spacers, which are needed for evaluation of the wind-induced vibration of conductors and for optimization of spacer-damper placement. The method consists of decomposing the conductors into modules and expanding the unknown displacements on each module in polynomial series. A complete system of polynomials is deduced for this from Legendre polynomials. For each module, either boundary conditions at the extremities of the module or continuity conditions between modules are considered, together with a number of projections of the module equilibrium equation onto the polynomials from the expansion series of the unknown displacement. The global system for the eigenmodes and eigenfrequencies has the matrix form A X + ω² M X = 0. The theoretical considerations are exemplified on one conductor and on a bundle of two conductors with spacers. From this, a method for forced-vibration calculation of single or bundled conductors is also presented.
Zheng, Xueying; Qin, Guoyou; Tu, Dongsheng
2017-05-30
Motivated by the analysis of quality of life data from a clinical trial on early breast cancer, we propose in this paper a generalized partially linear mean-covariance regression model for longitudinal proportional data, which are bounded in a closed interval. Cholesky decomposition of the covariance matrix for within-subject responses and generalized estimation equations are used to estimate unknown parameters and the nonlinear function in the model. Simulation studies are performed to evaluate the performance of the proposed estimation procedures. Our new model is also applied to analyze the data from the cancer clinical trial that motivated this research. In comparison with available models in the literature, the proposed model does not require specific parametric assumptions on the density function of the longitudinal responses and the probability function of the boundary values and can capture dynamic changes of time or other interested variables on both mean and covariance of the correlated proportional responses. Copyright © 2017 John Wiley & Sons, Ltd.
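Covariance modeling of this kind typically uses the modified Cholesky decomposition, which factors the within-subject covariance into autoregressive coefficients and innovation variances. A minimal NumPy sketch under that assumption (the AR(1)-style covariance below is illustrative, not the trial data):

```python
import numpy as np

def modified_cholesky(S):
    """Factor a covariance S as T S T' = D, with T unit lower-triangular.

    The below-diagonal entries of T are (minus) the autoregressive coefficients
    of each response on its predecessors; D holds the innovation variances."""
    L = np.linalg.cholesky(S)
    C = L / np.diag(L)              # unit lower-triangular factor of S
    T = np.linalg.inv(C)
    D = np.diag(np.diag(L) ** 2)
    return T, D

# Illustrative AR(1)-style covariance for 4 repeated measures (rho = 0.6)
rho = 0.6
S = rho ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
T, D = modified_cholesky(S)
```

In joint mean-covariance models the entries of `T` and `D` are themselves regressed on covariates rather than computed from a known `S`.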
Resolving Some Paradoxes in the Thermal Decomposition Mechanism of Acetaldehyde
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sivaramakrishnan, Raghu; Michael, Joe V.; Harding, Lawrence B.
2015-07-16
The mechanism for the thermal decomposition of acetaldehyde has been revisited with an analysis of literature kinetics experiments using theoretical kinetics. The present modeling study was motivated by recent observations, with very sensitive diagnostics, of some unexpected products in high temperature micro-tubular reactor experiments on the thermal decomposition of CH3CHO and its deuterated analogs, CH3CDO, CD3CHO, and CD3CDO. The observations of these products prompted the authors of these studies to suggest that the enol tautomer, CH2CHOH (vinyl alcohol), is a primary intermediate in the thermal decomposition of acetaldehyde. The present modeling efforts on acetaldehyde decomposition incorporate a master equation re-analysis of the CH3CHO potential energy surface (PES). The lowest energy process on this PES is an isomerization of CH3CHO to CH2CHOH. However, the subsequent product channels for CH2CHOH are substantially higher in energy, and the only unimolecular process that can be thermally accessed is a re-isomerization to CH3CHO. The incorporation of these new theoretical kinetics predictions into models for selected literature experiments on CH3CHO thermal decomposition confirms our earlier experiment- and theory-based conclusions that the dominant decomposition process in CH3CHO at high temperatures is C-C bond fission, with a minor contribution (~10-20%) from the roaming mechanism to form CH4 and CO. The present modeling efforts also incorporate a master-equation analysis of the H + CH2CHOH potential energy surface. This bimolecular reaction is the primary mechanism for removal of CH2CHOH, which can accumulate to minor amounts at high temperatures, T > 1000 K, in most lab-scale experiments that use large initial concentrations of CH3CHO.
Our modeling efforts indicate that the observations of ketene, water, and acetylene in the recent micro-tubular experiments are primarily due to bimolecular reactions of CH3CHO and CH2CHOH with H-atoms, and have no bearing on the unimolecular decomposition mechanism of CH3CHO. The present simulations also indicate that experiments using these micro-tubular reactors, when interpreted with the aid of high-level theoretical calculations and kinetics modeling, can offer insights into the chemistry of elusive intermediates in the high-temperature pyrolysis of organic molecules.
A new Newton-like method for solving nonlinear equations.
Saheya, B; Chen, Guo-Qing; Sui, Yun-Kang; Wu, Cai-Ying
2016-01-01
This paper presents an iterative scheme for solving nonlinear equations. We establish a new rational approximation model with linear numerator and denominator which generalizes the local linear model. We then apply the new approximation to nonlinear equations and propose an improved Newton's method to solve them. The new method revises the Jacobian matrix by a rank-one matrix at each iteration and attains quadratic convergence. The numerical performance and comparison show that the proposed method is efficient.
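The idea of revising the Jacobian by a rank-one matrix each iteration can be illustrated with the classical Broyden update, a close analogue of (but not identical to) the paper's scheme; the test system, starting point, and initial Jacobian below are hypothetical:

```python
import numpy as np

def broyden(F, x0, B0, tol=1e-10, maxit=100):
    """Quasi-Newton iteration revising the Jacobian estimate B by a
    rank-one matrix each step (Broyden's 'good' update)."""
    x = np.asarray(x0, dtype=float)
    B = np.asarray(B0, dtype=float).copy()
    for _ in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)             # Newton-like step
        x = x + s
        y = F(x) - Fx
        B += np.outer(y - B @ s, s) / (s @ s)   # rank-one revision
    return x

# Hypothetical test system: x0^2 + x1^2 = 2 and x0 = x1, root near (1, 1)
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 2.0, x[0] - x[1]])
B0 = np.array([[3.0, 1.6], [1.0, -1.0]])        # Jacobian at the starting point
root = broyden(F, [1.5, 0.8], B0)
```

The update costs only an outer product per step, which is the appeal of rank-one Jacobian revisions over recomputing the full Jacobian.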
Computer programs for the solution of systems of linear algebraic equations
NASA Technical Reports Server (NTRS)
Sequi, W. T.
1973-01-01
FORTRAN subprograms for the solution of systems of linear algebraic equations are described, listed, and evaluated in this report. Procedures considered are direct solution, iteration, and matrix inversion. Both in-core methods and those which utilize auxiliary data storage devices are considered. Some of the subroutines evaluated require the entire coefficient matrix to be in core, whereas others account for banding or sparseness of the system. General recommendations relative to equation solving are made, and on the basis of tests, specific subprograms are recommended.
Note on coefficient matrices from stochastic Galerkin methods for random diffusion equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Tao, E-mail: tzhou@lsec.cc.ac.c; Tang Tao, E-mail: ttang@hkbu.edu.h
2010-11-01
In a recent work by Xiu and Shen [D. Xiu, J. Shen, Efficient stochastic Galerkin methods for random diffusion equations, J. Comput. Phys. 228 (2009) 266-281], the Galerkin methods are used to solve stochastic diffusion equations in random media, where some properties for the coefficient matrix of the resulting system are provided. They also posed an open question on the properties of the coefficient matrix. In this work, we will provide some results related to the open question.
Numerical computation of linear instability of detonations
NASA Astrophysics Data System (ADS)
Kabanov, Dmitry; Kasimov, Aslan
2017-11-01
We propose a method to study linear stability of detonations by direct numerical computation. The linearized governing equations together with the shock-evolution equation are solved in the shock-attached frame using a high-resolution numerical algorithm. The computed results are processed by the Dynamic Mode Decomposition technique to generate dispersion relations. The method is applied to the reactive Euler equations with simple-depletion chemistry as well as more complex multistep chemistry. The results are compared with those known from normal-mode analysis. We acknowledge financial support from King Abdullah University of Science and Technology.
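A minimal sketch of the Dynamic Mode Decomposition step: recovering the eigenvalues of a best-fit linear map from snapshot pairs. The two-by-two map below is hypothetical (the paper applies DMD to linearized reactive Euler data):

```python
import numpy as np

def dmd_eigs(X, Y, r):
    """Exact DMD: eigenvalues of the best-fit linear map Y ~= A X,
    computed through a rank-r SVD of the snapshot matrix X."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
    A_tilde = U.conj().T @ Y @ V / s     # projected operator (columns divided by s)
    return np.linalg.eigvals(A_tilde)

# Snapshot pairs from a known linear map with eigenvalues 0.9 and 0.5
A = np.array([[0.9, 0.0], [0.3, 0.5]])
X = np.random.default_rng(0).random((2, 20))
Y = A @ X
lam = np.sort_complex(dmd_eigs(X, Y, r=2))
```

For time-resolved shock data, the complex logarithm of these eigenvalues divided by the sampling interval yields the growth rates and frequencies of the dispersion relation.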
Killing-Yano tensors in spaces admitting a hypersurface orthogonal Killing vector
NASA Astrophysics Data System (ADS)
Garfinkle, David; Glass, E. N.
2013-03-01
Methods are presented for finding Killing-Yano tensors, conformal Killing-Yano tensors, and conformal Killing vectors in spacetimes with a hypersurface orthogonal Killing vector. These methods are similar to a method developed by the authors for finding Killing tensors. In all cases one decomposes both the tensor and the equation it satisfies into pieces along the Killing vector and pieces orthogonal to the Killing vector. Solving the separate equations that result from this decomposition requires less computing than integrating the original equation. In each case, examples are given to illustrate the method.
FINITE ELEMENT MODEL FOR TIDAL AND RESIDUAL CIRCULATION.
Walters, Roy A.
1986-01-01
Harmonic decomposition is applied to the shallow water equations, thereby creating a system of equations for the amplitude of the various tidal constituents and for the residual motions. The resulting equations are elliptic in nature, are well posed and in practice are shown to be numerically well-behaved. There are a number of strategies for choosing elements: the two extremes are to use a few high-order elements with continuous derivatives, or to use a large number of simpler linear elements. In this paper simple linear elements are used and prove effective.
Ran, Shi-Ju
2016-05-01
In this work, a simple and fundamental numeric scheme dubbed ab initio optimization principle (AOP) is proposed for the ground states of translationally invariant strongly correlated quantum lattice models. The idea is to transform a nondeterministic-polynomial-hard ground-state simulation with infinite degrees of freedom into a single optimization problem of a local function with a finite number of physical and ancillary degrees of freedom. This work contributes mainly in the following aspects: (1) AOP provides a simple and efficient scheme to simulate the ground state by solving a local optimization problem. Its solution contains two kinds of boundary states, one of which plays the role of the entanglement bath that mimics the interactions between a supercell and the infinite environment, while the other gives the ground state in a tensor network (TN) form. (2) In the sense of TN, a novel decomposition named tensor ring decomposition (TRD) is proposed to implement AOP. Instead of following the contraction-truncation scheme used by many existing TN-based algorithms, TRD solves the contraction of a uniform TN in an opposite way by encoding the contraction in a set of self-consistent equations that automatically reconstruct the whole TN, making the simulation simple and unified. (3) AOP inherits and develops the ideas of different well-established methods, including the density matrix renormalization group (DMRG), infinite time-evolving block decimation (iTEBD), network contractor dynamics, density matrix embedding theory, etc., providing a unified perspective that was previously missing in this field. (4) AOP as well as TRD give novel implications for existing TN-based algorithms: a modified iTEBD is suggested, and the two-dimensional (2D) AOP is argued to be an intrinsic 2D extension of DMRG that is based on the infinite projected entangled pair state. This paper focuses on one-dimensional quantum models to present AOP.
The benchmark is given on a transverse Ising chain and the 2D classical Ising model, showing the remarkable efficiency and accuracy of AOP.
The Cauchy Two-Matrix Model, C-Toda Lattice and CKP Hierarchy
NASA Astrophysics Data System (ADS)
Li, Chunxia; Li, Shi-Hao
2018-06-01
This paper concerns the Cauchy two-matrix model and its corresponding integrable hierarchy, with the help of orthogonal polynomial theory and Toda-type equations. Starting from the symmetric reduction in Cauchy biorthogonal polynomials, we derive the Toda equation of CKP type (or the C-Toda lattice) as well as its Lax pair by introducing time flows. Then, matrix integral solutions to the C-Toda lattice are extended to give solutions to the CKP hierarchy, revealing that the time-dependent partition function of the Cauchy two-matrix model is nothing but the τ-function of the CKP hierarchy. Finally, the connection between the Cauchy two-matrix model and the Bures ensemble is established from the point of view of integrable systems.
Euler and Navier–Stokes equations on the hyperbolic plane
Khesin, Boris; Misiołek, Gerard
2012-01-01
We show that nonuniqueness of the Leray–Hopf solutions of the Navier–Stokes equation on the hyperbolic plane ℍ2 observed by Chan and Czubak is a consequence of the Hodge decomposition. We show that this phenomenon does not occur on ℍn whenever n ≥ 3. We also describe the corresponding general Hamiltonian framework of hydrodynamics on complete Riemannian manifolds, which includes the hyperbolic setting. PMID:23091015
Acoustic 3D modeling by the method of integral equations
NASA Astrophysics Data System (ADS)
Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.
2018-02-01
This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. Tolerable memory consumption and numerical complexity were achieved by applying an iterative solver, accompanied by an effective matrix-vector multiplication operation based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against the FD time-domain solution. It was demonstrated that the method can accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system of equations and to parallelize across multiple sources. Practical examples and efficiency tests are presented as well.
Thermal behaviour properties and corrosion resistance of organoclay/polyurethane film
NASA Astrophysics Data System (ADS)
Kurniawan, O.; Soegijono, B.
2018-03-01
An organoclay/polyurethane film composite was prepared by adding organoclay at different contents (1, 3, and 5 wt.%) to polyurethane as a matrix. TGA and DSC showed that the decomposition temperature shifted to a lower point as the organoclay content changed. FT-IR spectra showed chemical bonding of the organoclay and the polyurethane matrix, which means that bonding between filler and matrix occurred and the composite was stronger, although less bonding occurred in the composite with 5 wt.% organoclay. The corrosion resistance overall increased with increasing organoclay content. The composite with 5 wt.% organoclay had greater thermal stability and corrosion resistance, probably due to exfoliation of the organoclay.
Detection and identification of concealed weapons using matrix pencil
NASA Astrophysics Data System (ADS)
Adve, Raviraj S.; Thayaparan, Thayananthan
2011-06-01
The detection and identification of concealed weapons is an extremely hard problem due to the weak signature of the target buried within the much stronger signal from the human body. This paper furthers the automatic detection and identification of concealed weapons by proposing an effective approach to obtain the resonant frequencies in a measurement. The technique, based on Matrix Pencil, a scheme for model-based parameter estimation, also provides amplitude information, hence providing a level of confidence in the results. Of specific interest is the fact that Matrix Pencil is based on a singular value decomposition, making the scheme robust against noise.
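A minimal Matrix Pencil sketch for extracting resonant poles from a sampled signal; the noiseless two-pole test signal below is hypothetical, and the pseudoinverse step stands in for the SVD-truncated pencil used on noisy data:

```python
import numpy as np

def matrix_pencil_poles(y, L, M):
    """Matrix Pencil: recover the M signal poles of y[n] = sum_k a_k z_k**n
    from the eigenvalues of a shifted pair of Hankel data matrices."""
    N = len(y)
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])   # Hankel data matrix
    Y0, Y1 = Y[:, :-1], Y[:, 1:]
    lam = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
    # the M largest-magnitude eigenvalues are the poles; the rest are ~0
    return lam[np.argsort(-np.abs(lam))[:M]]

# Hypothetical two-pole "resonance" signal, noiseless
n = np.arange(30)
y = 2.0 * 0.9 ** n + 1.0 * 0.7 ** n
poles = np.sort(matrix_pencil_poles(y, L=6, M=2).real)
```

For complex poles, the angle of each `z_k` gives the resonant frequency and its magnitude gives the damping, with the associated amplitudes recoverable by a final least-squares fit.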
On the Singular Perturbations for Fractional Differential Equation
Atangana, Abdon
2014-01-01
The goal of this paper is to examine the possible extension of the singular perturbation differential equation to the concept of fractional-order derivatives. To achieve this, we present a review of the concept of fractional calculus. We make use of the Laplace transform operator to derive exact solutions of singular perturbation fractional linear differential equations. We employ three analytical methods to present exact and approximate solutions of the singular perturbation fractional, nonlinear, nonhomogeneous differential equation: the regular perturbation method, the new development of the variational iteration method, and the homotopy decomposition method. PMID:24683357
Derivation of stiffness matrix in constitutive modeling of magnetorheological elastomer
NASA Astrophysics Data System (ADS)
Leng, D.; Sun, L.; Sun, J.; Lin, Y.
2013-02-01
Magnetorheological elastomers (MREs) are a class of smart materials whose mechanical properties change instantly upon the application of a magnetic field. Based on the specially orthotropic, transversely isotropic stress-strain relationships and an effective permeability model, the stiffness matrix of the constitutive equations for deformable chain-like MREs is considered. To validate the shear modulus components in this stiffness matrix, magnetic-structural simulations with the finite element method (FEM) are presented. An acceptable agreement is illustrated between the analytical equations and the numerical simulations. For a specified magnetic field, sphere particle radius, distance between adjacent particles in chains, and volume fraction of ferrous particles, this constitutive equation is effective in engineering applications for estimating the elastic behaviour of chain-like MREs in an external magnetic field.
Study on Kinetic Mechanism of Bastnaesite Concentrates Decomposition Using Calcium Hydroxide
NASA Astrophysics Data System (ADS)
Cen, Peng; Wu, Wenyuan; Bian, Xue
2018-06-01
The thermal decomposition of bastnaesite concentrates using calcium hydroxide was studied. Calcium hydroxide can effectively inhibit the emission of fluorine during roasting by transforming it to calcium fluoride. The decomposition rate increased with increasing reaction temperature and amount of calcium hydroxide. The decomposition kinetics were investigated. The decomposition reaction was determined to be a heterogeneous gas-solid reaction, and it followed an unreacted shrinking core model. By means of the integrated rate equation method, the reaction was proven to be kinetically first order. Different reaction models were fitted to the experimental data to determine the rate-controlling process. The chemical reaction at the phase interface controlled the reaction rate at temperatures ranging from 673 K to 773 K (400 °C to 500 °C), with an apparent activation energy of 82.044 kJ·mol-1. From 773 K to 973 K (500 °C to 700 °C), diffusion through the solid product layer became the determining step, with a lower activation energy of 15.841 kJ·mol-1.
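The apparent activation energies follow from an Arrhenius fit of the rate constants against inverse temperature. A sketch of that fit, using synthetic rate constants generated with the low-temperature value of 82.044 kJ·mol-1 quoted above (the pre-exponential factor 1e7 is arbitrary, not from the paper):

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def activation_energy(T, k):
    """Apparent activation energy from the Arrhenius plot:
    ln k = ln A - Ea/(R T), fitted by least squares against 1/T."""
    slope, _ = np.polyfit(1.0 / np.asarray(T), np.log(k), 1)
    return -slope * R_GAS

# Synthetic rate constants with Ea = 82.044 kJ/mol; prefactor is hypothetical
Ea_true = 82044.0
T = np.array([673.0, 723.0, 773.0])
k = 1e7 * np.exp(-Ea_true / (R_GAS * T))
Ea_est = activation_energy(T, k)
```

A change of slope between the two temperature ranges, as reported above, signals the switch from interface-reaction control to diffusion control.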
Mueller matrix imaging and analysis of cancerous cells
NASA Astrophysics Data System (ADS)
Fernández, A.; Fernández-Luna, J. L.; Moreno, F.; Saiz, J. M.
2017-08-01
Imaging polarimetry is a focus of increasing interest in diagnostic medicine because of its non-invasive nature and its potential for recognizing abnormal tissues. However, handling polarimetric images is not an easy task, and different intermediate steps have been proposed to introduce physical parameters that may be helpful to interpret results. In this work, transmission Mueller matrices (MM) corresponding to cancer cell samples have been experimentally obtained, and three different transformations have been applied: MM-Polar Decomposition, MM-Transformation and MM-Differential Decomposition. Special attention has been paid to diattenuation as a sensitive parameter to identify apoptosis processes induced by cisplatin and etoposide.
Application of modified Martinez-Silva algorithm in determination of net cover
NASA Astrophysics Data System (ADS)
Stefanowicz, Łukasz; Grobelna, Iwona
2016-12-01
In the article we present modifications of the Martinez-Silva algorithm, which allows for the determination of place invariants (p-invariants) of a Petri net. Their generation time is important in the parallel decomposition of discrete systems described by Petri nets. The decomposition process is essential from the point of view of discrete system design, as it allows for the separation of smaller sequential parts. The proposed modifications of the Martinez-Silva method concern the net cover by p-invariants and are focused on two important issues: cyclic reduction of the invariant matrix and cyclic checking of the net cover.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jonasson, O.; Karimi, F.; Knezevic, I.
2016-08-01
We derive a Markovian master equation for the single-electron density matrix, applicable to quantum cascade lasers (QCLs). The equation conserves the positivity of the density matrix, includes off-diagonal elements (coherences) as well as in-plane dynamics, and accounts for electron scattering with phonons and impurities. We use the model to simulate a terahertz-frequency QCL, and compare the results with both experiment and simulation via nonequilibrium Green's functions (NEGF). We obtain very good agreement with both experiment and NEGF when the QCL is biased for optimal lasing. For the considered device, we show that the magnitude of coherences can be a significant fraction of the diagonal matrix elements, which demonstrates their importance when describing THz QCLs. We show that the in-plane energy distribution can deviate far from a heated Maxwellian distribution, which suggests that the assumption of thermalized subbands in simplified density-matrix models is inadequate. As a result, we also show that the current density and subband occupations relax towards their steady-state values on very different time scales.
Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform
Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart
2014-01-01
Compressed Sensing (CS) has been applied in dynamic Magnetic Resonance Imaging (MRI) to accelerate the data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components of successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and then various matrix/vector transforms are used to explore the image sparsity. Traditional methods typically sparsify the spatial and temporal information independently. In this work, we propose a novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the Higher-order Singular Value Decomposition (HOSVD) as a practical example. Applications presented in the three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploits the correlations within spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieved comparable reconstruction accuracy with the low-rank matrix recovery methods and outperformed the conventional sparse recovery methods. PMID:24901331
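A minimal HOSVD sketch via the SVD of each mode unfolding; the small random 3-way tensor is illustrative (the paper applies this to 3-D and 4-D MRI data):

```python
import numpy as np

def mode_apply(G, U, mode):
    """Multiply tensor G by matrix U along the given mode."""
    return np.moveaxis(np.tensordot(U, np.moveaxis(G, mode, 0), axes=1), 0, mode)

def hosvd(X):
    """Higher-order SVD: one factor matrix per mode (from the SVD of that
    mode's unfolding) plus the all-orthogonal core tensor."""
    Us = []
    for mode in range(X.ndim):
        unfolding = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        Us.append(U)
    G = X
    for mode, U in enumerate(Us):
        G = mode_apply(G, U.conj().T, mode)
    return G, Us

X = np.random.default_rng(3).random((3, 4, 5))
G, Us = hosvd(X)
Xr = G
for mode, U in enumerate(Us):    # applying the factors back reconstructs X
    Xr = mode_apply(Xr, U, mode)
```

Sparsity for CS comes from keeping only the large entries of the core tensor `G`, which jointly captures spatial and temporal correlations.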
Supercritical CO2/Co-solvents Extraction of Porogen and Surfactant to Obtain
NASA Astrophysics Data System (ADS)
Lubguban, Jorge
2005-03-01
A method of pore generation by supercritical CO2 (SCCO2)/co-solvents extraction for the preparation of nanoporous organosilicate thin films for ultralow dielectric constant materials is investigated. A nanohybrid film was prepared from poly(propylene glycol) (PPG) and poly(methylsilsesquioxane) (PMSSQ), whereby the PPG porogen is entrapped within the crosslinked PMSSQ matrix. Another set of thin films was produced by liquid crystal templating, whereby non-ionic (polyoxyethylene 10 stearyl ether) (Brij76) and ionic (cetyltrimethylammonium bromide) (CTAB) surfactants were used as sacrificial templates in a tetraethoxysilane (TEOS) and methyltrimethoxysilane (MTMS) based matrix. These two types of films were treated with SCCO2/co-solvents to remove the porogen and surfactant templates. As a comparison, porous structures generated by thermal decomposition were also evaluated. It is found that SCCO2/co-solvents treatment produced results closely comparable with thermal decomposition. The results were evident from Fourier Transform Infrared (FT-IR) spectroscopy and optical constants data obtained from variable angle spectroscopic ellipsometry (VASE).
Hong, Xia
2006-07-01
In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced, using the RBF neural network to represent the transformed system output. Initially a fixed and moderate-sized RBF model base is derived based on a rank-revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced that uses the Gauss-Newton algorithm to derive the required Box-Cox transformation, based on a maximum likelihood estimator. The main contribution of this letter is to exploit the special structure of the proposed RBF neural network for computational efficiency by utilizing the inverse of the matrix block decomposition lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.
Kong, Xiang-Zhen; Liu, Jin-Xing; Zheng, Chun-Hou; Hou, Mi-Xiao; Wang, Juan
2017-07-01
High dimensionality has become a typical feature of biomolecular data. In this paper, a novel dimension reduction method named p-norm singular value decomposition (PSVD) is proposed to seek a low-rank approximation matrix to the biomolecular data. To enhance the robustness to outliers, the Lp-norm is taken as the error function and the Schatten p-norm is used as the regularization function in the optimization model. To evaluate the performance of PSVD, the K-means clustering method is then employed for tumor clustering based on the low-rank approximation matrix. Extensive experiments are carried out on five gene expression data sets, including two benchmark data sets and three higher-dimensional data sets from The Cancer Genome Atlas. The experimental results demonstrate that the PSVD-based method outperforms many existing methods. In particular, it is experimentally shown that the proposed method is more efficient for processing higher-dimensional data, with good robustness, stability, and superior time performance.
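PSVD itself solves an Lp/Schatten p-norm regularized problem; as a hedged baseline, the ordinary rank-r SVD approximation it generalizes (the Eckart-Young solution for p = 2) can be sketched on a hypothetical expression-like matrix:

```python
import numpy as np

def low_rank(X, r):
    """Best rank-r approximation in Frobenius norm (Eckart-Young), via SVD."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vh[:r]

# "Expression-like" matrix with rank-8 structure (sizes are hypothetical)
rng = np.random.default_rng(4)
X = rng.random((50, 8)) @ rng.random((8, 30))
X2 = low_rank(X, 2)
err2 = np.linalg.norm(X - X2)
err8 = np.linalg.norm(X - low_rank(X, 8))   # rank 8 recovers X exactly
```

Clustering (e.g. K-means) then operates on the rows of the low-rank matrix instead of the raw high-dimensional data.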
NASA Astrophysics Data System (ADS)
Vyletel, G. M.; van Aken, D. C.; Allison, J. E.
1995-12-01
The 150 °C cyclic response of peak-aged and overaged 2219/TiC/15p and 2219 Al was examined using fully reversed plastic strain-controlled testing. The cyclic response of the peak-aged and overaged particle-reinforced materials showed extensive cyclic softening. This softening began at the commencement of cycling and continued until failure. At a plastic strain below 5 × 10-3, the unreinforced materials did not show evidence of cyclic softening until approximately 30 pct of the life was consumed. In addition, the degree of cyclic softening (Δσ) was significantly lower in the unreinforced microstructures. The cyclic softening in both reinforced and unreinforced materials was attributed to the decomposition of the θ' strengthening precipitates. The extent of the precipitate decomposition was much greater in the composite materials due to the increased levels of local plastic strain in the matrix caused by constrained deformation near the TiC particles.
Application of higher order SVD to vibration-based system identification and damage detection
NASA Astrophysics Data System (ADS)
Chao, Shu-Hsien; Loh, Chin-Hsiung; Weng, Jian-Huang
2012-04-01
Singular value decomposition (SVD) is a powerful linear algebra tool. It is widely used in many signal processing methods, such as principal component analysis (PCA), singular spectrum analysis (SSA), frequency domain decomposition (FDD), and subspace and stochastic subspace identification (SI and SSI). In each case, the data are arranged appropriately in matrix form and SVD is used to extract the features of the data set. In this study, three algorithms for signal processing and system identification are proposed: SSA, SSI-COV, and SSI-DATA. Based on the subspace and null-space extracted from the SVD of the data matrix, damage detection algorithms can be developed. The proposed algorithms are used to process shaking table test data from a 6-story steel frame. Features contained in the vibration data are extracted by the proposed method, and damage can then be detected from the test data of the frame structure through subspace-based and null-space-based damage indices.
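The SVD-of-a-data-matrix pattern underlying SSA can be made concrete with a short sketch (assumed parameters and test signal; not the authors' implementation): embed a noisy series in a Hankel trajectory matrix, keep the leading pair of singular triples, and diagonal-average back to a denoised series.

```python
import numpy as np

def ssa_reconstruct(x, L, r=2):
    """SSA: embed x in an L x K Hankel trajectory matrix, keep the r leading
    singular triples, and diagonal-average (Hankelize) back to a series."""
    N, K = len(x), len(x) - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]                      # rank-r part
    rec, cnt = np.zeros(N), np.zeros(N)
    for j in range(K):                                    # diagonal averaging
        rec[j:j + L] += Xr[:, j]
        cnt[j:j + L] += 1
    return rec / cnt

t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.default_rng(1).standard_normal(200)
denoised = ssa_reconstruct(noisy, L=40)   # r=2 captures the sine/cosine pair
```

A pure sinusoid occupies two singular triples of the trajectory matrix, which is why the sketch keeps r = 2 rather than a single component.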
Navarro, Pedro J; Fernández-Isla, Carlos; Alcover, Pedro María; Suardíaz, Juan
2016-07-27
This paper presents a robust method for defect detection in textures: entropy-based automatic selection of the wavelet decomposition level (EADL), based on a wavelet reconstruction scheme, for detecting defects in a wide variety of structural and statistical textures. Two main features are presented. The first is an original use of the normalized absolute function value (NABS), calculated from the wavelet coefficients at different decomposition levels, to identify textures where the defect can be isolated by eliminating the texture pattern at the first decomposition level. The second is the use of Shannon's entropy, calculated over the detail subimages, for automatic selection of the band for image reconstruction; unlike other techniques, such as those based on the co-occurrence matrix or on energy calculation, this yields a lower decomposition level, avoiding excessive degradation of the image and allowing more accurate defect segmentation. A metric analysis of the results of the proposed method with nine different thresholding algorithms determined that selecting the appropriate thresholding method is important for achieving optimum performance in defect detection. As a consequence, different thresholding algorithms are proposed depending on the type of texture.
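The entropy computation at the heart of the band-selection step can be sketched as follows. The Laplacian-shaped synthetic detail coefficients and the minimum-entropy selection rule are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def shannon_entropy(coeffs, bins=64):
    """Shannon entropy (bits) of the magnitude histogram of a detail subimage."""
    h, _ = np.histogram(np.abs(coeffs), bins=bins)
    p = h / h.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# hypothetical detail subimages at three decomposition levels
rng = np.random.default_rng(2)
levels = [rng.laplace(scale=s, size=(64, 64)) for s in (0.5, 1.0, 2.0)]
entropies = [shannon_entropy(d) for d in levels]
best = int(np.argmin(entropies))  # assumed rule: pick the most compact band
```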
Variational methods for direct/inverse problems of atmospheric dynamics and chemistry
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena
2013-04-01
We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. It is important that accurate matching of numerical schemes be provided across the chain of objects: direct/adjoint problems, sensitivity relations, and inverse problems, including assimilation of all available measurement data. To solve these problems we have developed a new, enhanced set of cost-effective algorithms. A matched description of the multi-scale processes is provided by a specific choice of the variational principle functionals for the whole set of integrated models. All functionals of the variational principle are then approximated in space and time by splitting and decomposition methods. This approach allows us to consider separately, for example, the space-time problems of atmospheric chemistry within decomposition schemes for the integral identity sum analogs of the variational principle at each time step and in each 3D finite volume. To enhance efficiency, the set of chemical reactions is divided into subsets related to the production and destruction operators. The idea of Euler's integrating factors is then applied within the local adjoint problem technique [1]-[3]. The analytical solutions of these adjoint problems play the role of integrating factors for the differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed into an equivalent system of integral equations. As a result, we avoid the construction and inversion of preconditioning operators containing the Jacobian matrices that arise in traditional implicit schemes for ODE solution. This is the main advantage of our schemes. At the same time step, but at different stages of the "global" splitting scheme, the system of atmospheric dynamics equations is solved.
For the convection-diffusion equations for all state functions in the integrated models, we have developed monotone and stable discrete-analytical numerical schemes [1]-[3] that conserve the positivity of the chemical substance concentrations and possess the energy and mass balance properties postulated in the general variational principle for integrated models. All algorithms for the solution of transport, diffusion, and transformation problems are direct (without iterations). The work is partially supported by Program No. 4 of the Presidium of RAS and Program No. 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187, and by Integration Projects No. 8 and No. 35 of SB RAS. Our studies are in line with the goals of COST Action ES1004. References: [1] Penenko V., Tsvetova E. Discrete-analytical methods for the implementation of variational principles in environmental applications // Journal of Computational and Applied Mathematics, 2009, v. 226, pp. 319-330. [2] Penenko A.V. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods // Numerical Analysis and Applications, 2012, v. 5, pp. 326-341. [3] Penenko V., Tsvetova E. Variational methods for constructing the monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013 (in press).
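The integrating-factor idea for a production-destruction equation dc/dt = P - D*c can be sketched numerically (constant P and D here for illustration; the paper applies the analytic analogue within local adjoint problems, and all values are hypothetical):

```python
import math

# dc/dt = P - D*c with constant P, D; exact solution for comparison:
# c(t) = P/D + (c0 - P/D)*exp(-D*t)
P, D, c0 = 2.0, 0.5, 0.0

def c_exact(t):
    return P / D + (c0 - P / D) * math.exp(-D * t)

def c_integrating_factor(t, n=1000):
    """Multiply by the integrating factor exp(D*t), which turns the ODE into
    d/dt [c*exp(D*t)] = P*exp(D*t); integrate the right side (trapezoid rule)
    and divide the factor back out."""
    h = t / n
    integral = 0.0
    for i in range(n):
        s0, s1 = i * h, (i + 1) * h
        integral += 0.5 * h * (P * math.exp(D * s0) + P * math.exp(D * s1))
    return math.exp(-D * t) * (c0 + integral)
```

Because the update multiplies a nonnegative initial value by positive exponentials and adds a nonnegative integral of the production term, positivity of the concentration is preserved automatically, which is the property emphasized above.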
NASA Astrophysics Data System (ADS)
Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.
2017-12-01
This paper describes the details of the parallel implementation of the SIMPLE algorithm for the numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. Iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors when the grid model is decomposed using fictitious cells. We discuss the specific features of the storage of distributed matrices and the implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of storing matrices reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of tuning the multigrid solver for systems of linear algebraic equations (SLAE) on the overall efficiency of the algorithm; the tuning involves the type of cycle used (V, W, or F), the number of iterations of the smoothing operator, and the number of cells used for coarsening. Two ways (direct and indirect) of evaluating the parallelization efficiency of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems, with the parallelization efficiency evaluated by both approaches. It is shown that the proposed parallel implementation enables efficient computation of such problems on a thousand processors. Based on the results obtained, general recommendations are made for the optimal tuning of the multigrid solver and for selecting the optimal number of cells per processor.
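The fictitious-cell exchange can be mimicked in a serial toy model: a hypothetical 1D Poisson problem (-u'' = 0, u(0) = 0, u(1) = 1) split across two "processors", each padded with one fictitious cell that receives the neighbor's boundary value before every Jacobi sweep. This is an illustration of the communication pattern only, not the authors' code, which targets unstructured 3D grids:

```python
def jacobi_sweep(u, f, h):
    """One Jacobi sweep for -u'' = f; the first and last entries of u
    (physical boundary or fictitious cell) are left untouched."""
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return new

def exchange_ghosts(left, right):
    """Copy each subdomain's boundary-adjacent owned value into the
    neighbor's fictitious cell (the interprocessor exchange)."""
    left[-1] = right[1]    # left ghost  <- first owned point of right
    right[0] = left[-2]    # right ghost <- last owned point of left

h = 0.1
left = [0.0] * 6            # owns global points 0..4, ghost for point 5
right = [0.0] * 6 + [1.0]   # ghost for point 4, owns global points 5..10

for _ in range(500):
    exchange_ghosts(left, right)
    left = jacobi_sweep(left, [0.0] * len(left), h)
    right = jacobi_sweep(right, [0.0] * len(right), h)
# both subdomains converge toward the linear profile u(x) = x
```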
Skrdla, Peter J; Robertson, Rebecca T
2005-06-02
Many solid-state reactions and phase transformations performed under isothermal conditions give rise to asymmetric, sigmoidally shaped conversion-time (x-t) profiles. The mathematical treatment of such curves, as well as their physical interpretation, is often challenging. In this work, the functional form of a Maxwell-Boltzmann (M-B) distribution is used to describe the distribution of activation energies for the reagent solids, which, when coupled with an integrated first-order rate expression, yields a novel semiempirical equation that may offer better success in the modeling of solid-state kinetics. In this approach, the Arrhenius equation is used to relate the distribution of activation energies to a corresponding distribution of rate constants for the individual molecules in the reagent solids. This distribution of molecular rate constants is then correlated to the (observable) reaction time in the derivation of the model equation. In addition to providing a versatile treatment of asymmetric, sigmoidal reaction curves, another key advantage of our equation over other models is that the start time of conversion is uniquely defined at t = 0. We demonstrate the ability of our simple, two-parameter equation to successfully model the experimental x-t data for the polymorphic transformation of a pharmaceutical compound under crystallization slurry (i.e., heterogeneous) conditions. Additionally, we use a modification of this equation to model the kinetics of a historically significant, homogeneous solid-state reaction: the thermal decomposition of AgMnO4 crystals. The broad applicability of our statistical (i.e., dispersive) kinetic approach makes it a potentially attractive alternative to existing models and approaches.
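The dispersive-kinetics idea can be illustrated numerically: average a first-order decay over an M-B-shaped distribution of activation energies, with the Arrhenius equation mapping each energy to a rate constant. This brute-force quadrature is a sketch of the underlying physics only; the paper derives a closed-form two-parameter equation, and all numerical values below are hypothetical:

```python
import math

R, T, A = 8.314, 298.0, 1e9   # gas constant (J/mol/K), temperature (K), prefactor (1/s)
E0 = 60e3                     # energy scale of the distribution (J/mol), assumed

def mb_weight(E):
    """Unnormalized Maxwell-Boltzmann-shaped density over activation energy."""
    return math.sqrt(E) * math.exp(-E / E0)

def conversion(t, Emin=40e3, Emax=90e3, n=400):
    """x(t) = 1 - <exp(-k(E)*t)>, averaged over the M-B weight by the
    trapezoid rule, with k(E) = A*exp(-E/(R*T)) from the Arrhenius equation."""
    h = (Emax - Emin) / n
    num = den = 0.0
    for i in range(n + 1):
        E = Emin + i * h
        w = mb_weight(E) * (0.5 if i in (0, n) else 1.0)
        k = A * math.exp(-E / (R * T))
        num += w * math.exp(-k * t)
        den += w
    return 1.0 - num / den
```

The spread of rate constants produced by the energy distribution is what generates the asymmetric sigmoidal x-t profile, and conversion starts exactly at t = 0, mirroring the property highlighted above.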
Three-dimensional empirical mode decomposition analysis apparatus, method and article of manufacture
NASA Technical Reports Server (NTRS)
Gloersen, Per (Inventor)
2004-01-01
An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena, such as time-varying polar ice flows. A representation of the 3D phenomenon is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying singular value decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPCs) are selected for empirical mode decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPCs. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.
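The Hilbert-transform and SVD stages of this pipeline can be sketched as below (the field data and dimensions are hypothetical, and the EMD sifting stage is omitted). SVD of the complex data matrix directly yields the same temporal principal components as an eigendecomposition of the time-based covariance matrix:

```python
import numpy as np

def analytic_signal(x):
    """Complex analytic signal of real input via FFT along the last axis
    (zero the negative frequencies, double the positive ones)."""
    N = x.shape[-1]
    Xf = np.fft.fft(x, axis=-1)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(Xf * h, axis=-1)

# hypothetical field: 30 spatial locations x 256 time samples
rng = np.random.default_rng(3)
t = np.arange(256) / 256.0
field = np.outer(rng.standard_normal(30), np.cos(2 * np.pi * 8 * t)) \
        + 0.1 * rng.standard_normal((30, 256))

z = analytic_signal(field)                        # complex representation
U, s, Vt = np.linalg.svd(z, full_matrices=False)  # separates space and time
cpcs = Vt[:3]                                     # first 3 complex principal components
```

In the patent's pipeline, each selected CPC would then be passed to empirical mode decomposition and the intrinsic modes filtered before reconstruction.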