Easy way to determine quantitative spatial resolution distribution for a general inverse problem
NASA Astrophysics Data System (ADS)
An, M.; Feng, M.
2013-12-01
Computing the spatial resolution of a solution is nontrivial and often more difficult than solving the inverse problem itself. Most geophysical studies, with the exception of tomographic studies, neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be indicated via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices can provide resolution length information, and the computation of the resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be determined directly via a simple one-parameter nonlinear inversion performed on a limited number of pairs of random synthetic models and their inverse solutions. The whole procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the degree of inverse skill used in the solution inversion. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.
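For context, the classical resolution-matrix idea the abstract contrasts with can be sketched in a few lines. This is a hedged toy illustration (random matrix as a stand-in for a real tomographic operator, a crude second-moment "resolution length"), not the statistical method of the paper:

```python
import numpy as np

# Hedged sketch: the model resolution matrix R = G^+ G for a linear
# inverse problem d = G m, using the Moore-Penrose pseudo-inverse.
# Perfect resolution would give R = I; the spread of each row of R
# indicates the local resolution length.
rng = np.random.default_rng(0)
G = rng.standard_normal((40, 60))   # underdetermined: 40 data, 60 model cells
R = np.linalg.pinv(G) @ G           # resolution matrix (60 x 60)

# A crude "resolution length" per model cell: second moment of |row|
idx = np.arange(60)
widths = [np.sqrt(np.sum(np.abs(R[i]) * (idx - i) ** 2) / np.sum(np.abs(R[i])))
          for i in range(60)]
print(round(float(np.mean(widths)), 2))
```

Because `R` is built from the pseudo-inverse, it is an orthogonal projector onto the row space of `G` (symmetric and idempotent), which is what makes its rows interpretable as averaging kernels.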
On the inversion of geodetic integrals defined over the sphere using 1-D FFT
NASA Astrophysics Data System (ADS)
García, R. V.; Alejo, C. A.
2005-08-01
An iterative method is presented which performs inversion of integrals defined over the sphere. The method is based on one-dimensional fast Fourier transform (1-D FFT) inversion and is implemented with the projected Landweber technique, which is used to solve constrained least-squares problems reducing the associated 1-D cyclic-convolution error. The results obtained are as precise as the direct matrix inversion approach, but with better computational efficiency. A case study uses the inversion of Hotine’s integral to obtain gravity disturbances from geoid undulations. Numerical convergence is also analyzed and comparisons with respect to the direct matrix inversion method using conjugate gradient (CG) iteration are presented. Like the CG method, the number of iterations needed to get the optimum (i.e., small) error decreases as the measurement noise increases. Nevertheless, for discrete data given over a whole parallel band, the method can be applied directly without implementing the projected Landweber method, since no cyclic convolution error exists.
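The projected Landweber technique mentioned above can be sketched generically. This is a hedged illustration with a random dense matrix standing in for the discretized integral operator and nonnegativity standing in for the paper's constraint set; the real method works with 1-D FFT convolutions rather than explicit matrices:

```python
import numpy as np

# Hedged sketch of the projected Landweber iteration for a constrained
# least-squares problem min ||A x - b||^2 with x >= 0.
# Update: x <- P( x + tau * A^T (b - A x) ), with tau < 2 / ||A||_2^2.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 30))
x_true = np.abs(rng.standard_normal(30))
b = A @ x_true                                 # consistent, noise-free data

tau = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(30)
for _ in range(2000):
    x = np.clip(x + tau * A.T @ (b - A @ x), 0.0, None)   # step + projection

print(float(np.linalg.norm(A @ x - b)))
```

With noisy data one would stop early (semi-convergence), which matches the abstract's observation that the optimal number of iterations decreases as measurement noise increases.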
Integration of Visual and Joint Information to Enable Linear Reaching Motions
NASA Astrophysics Data System (ADS)
Eberle, Henry; Nasuto, Slawomir J.; Hayashi, Yoshikatsu
2017-01-01
A new dynamics-driven control law was developed for a robot arm, based on a feedback control law that uses a linear transformation directly from work space to joint space. This was validated using a simulation of a two-joint planar robot arm, and an optimisation algorithm was used to find the optimum matrix for generating straight trajectories of the end-effector in the work space. We found that this linear matrix can be decomposed into a rotation matrix representing the orientation of the goal direction and the joint relation matrix (MJRM) representing the joint response to errors in the Cartesian work space. The decomposition of the linear matrix indicates the separation of path planning in terms of the direction of the reaching motion and the synergies of joint coordination. Once the MJRM is numerically obtained, feedforward planning of the reaching direction allows us to produce asymptotically stable, linear trajectories in the entire work space through rotational transformation, completely avoiding the use of inverse kinematics. Our dynamics-driven control law suggests an interesting framework for interpreting human reaching motion control, alternative to the dominant inverse-method-based explanations, that avoids the expensive computation of inverse kinematics and point-to-point control along desired trajectories.
NASA Technical Reports Server (NTRS)
An, S. H.; Yao, K.
1986-01-01
The lattice algorithm has been employed in numerous adaptive filtering applications such as speech analysis/synthesis, noise canceling, spectral analysis, and channel equalization. In this paper its application to adaptive-array processing is discussed. The advantages are a fast convergence rate and computational accuracy independent of the noise and interference conditions. The results produced by this technique are compared to those obtained by the direct matrix inverse method.
Decomposed direct matrix inversion for fast non-cartesian SENSE reconstructions.
Qian, Yongxian; Zhang, Zhenghui; Wang, Yi; Boada, Fernando E
2006-08-01
A new k-space direct matrix inversion (DMI) method is proposed here to accelerate non-Cartesian SENSE reconstructions. In this method a global k-space matrix equation is established on basic MRI principles, and the inverse of the global encoding matrix is found from a set of local matrix equations by taking advantage of the small extension of k-space coil maps. The DMI algorithm's efficiency is achieved by reloading the precalculated global inverse when the coil maps and trajectories remain unchanged, such as in dynamic studies. Phantom and human subject experiments were performed on a 1.5T scanner with a standard four-channel phased-array cardiac coil. Interleaved spiral trajectories were used to collect fully sampled and undersampled 3D raw data. The equivalence of the global k-space matrix equation to its image-space version was verified via conjugate gradient (CG) iterative algorithms on a 2x undersampled phantom and numerical-model data sets. When applied to the 2x undersampled phantom and human-subject raw data, the decomposed DMI method produced images with small errors (< or = 3.9%) relative to the reference images obtained from the fully sampled data, at a rate of 2 s per slice (excluding 4 min for precalculating the global inverse at an image size of 256 x 256). The DMI method may be useful for noise evaluations in parallel coil designs, dynamic MRI, and 3D sodium MRI with fixed coils and trajectories. Copyright 2006 Wiley-Liss, Inc.
Laplace-domain waveform modeling and inversion for the 3D acoustic-elastic coupled media
NASA Astrophysics Data System (ADS)
Shin, Jungkyun; Shin, Changsoo; Calandra, Henri
2016-06-01
Laplace-domain waveform inversion reconstructs long-wavelength subsurface models by using the zero-frequency component of damped seismic signals. Despite the computational advantages of Laplace-domain waveform inversion over conventional frequency-domain waveform inversion, an acoustic assumption and an iterative matrix solver have been used to invert 3D marine datasets to mitigate the intensive computing cost. In this study, we develop a Laplace-domain waveform modeling and inversion algorithm for 3D acoustic-elastic coupled media by using a parallel sparse direct solver library (MUltifrontal Massively Parallel Solver, MUMPS). We precisely simulate a real marine environment by coupling the 3D acoustic and elastic wave equations with the proper boundary condition at the fluid-solid interface. In addition, we can extract the elastic properties of the Earth below the sea bottom from the recorded acoustic pressure datasets. As a matrix solver, the parallel sparse direct solver is used to factorize the non-symmetric impedance matrix in a distributed memory architecture and rapidly solve the wave field for a number of shots by using the lower and upper matrix factors. Using both synthetic datasets and real datasets obtained by a 3D wide azimuth survey, the long-wavelength component of the P-wave and S-wave velocity models is reconstructed and the proposed modeling and inversion algorithm are verified. A cluster of 80 CPU cores is used for this study.
Visualization of x-ray computer tomography using computer-generated holography
NASA Astrophysics Data System (ADS)
Daibo, Masahiro; Tayama, Norio
1998-09-01
A theory for converting x-ray projection data directly into a hologram, combining computed tomography (CT) with the computer-generated hologram (CGH), is proposed. The purpose of this study is to provide the theory for an all-electronic, high-speed see-through 3D visualization system, for application to medical diagnosis and non-destructive testing. First, the CT is expressed using the pseudo-inverse matrix obtained by singular value decomposition. The CGH is expressed in matrix form. Next, the 'projection to hologram conversion' (PTHC) matrix is calculated by multiplying the phase matrix of the CGH with the pseudo-inverse matrix of the CT. Finally, the projection vector is converted directly to the hologram vector by multiplying the PTHC matrix with the projection vector. By incorporating holographic analog computation into CT reconstruction, the amount of calculation is drastically reduced. We demonstrate a CT cross section reconstituted by a He-Ne laser in 3D space from real x-ray projection data acquired with x-ray television equipment, using our direct conversion technique.
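The PTHC composition can be sketched with stand-in matrices. This is a hedged toy: random matrices replace the real Radon operator and CGH phase model, and the point is only that precomposing the two linear maps gives a single projection-to-hologram matrix:

```python
import numpy as np

# Hedged toy sketch of the 'projection to hologram conversion' (PTHC) idea:
# if CT reconstruction is x = A^+ p (A^+ from the SVD) and the CGH is a
# linear map h = Phi x, the composite M = Phi @ A^+ converts projection
# data to hologram values in one multiplication.
rng = np.random.default_rng(2)
A = rng.standard_normal((64, 16))    # projection operator (data <- image)
Phi = rng.standard_normal((32, 16))  # hologram operator (hologram <- image)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_pinv = Vt.T @ np.diag(1.0 / s) @ U.T    # pseudo-inverse from the SVD
M = Phi @ A_pinv                          # precomputed PTHC matrix

x = rng.standard_normal(16)
p = A @ x                                 # simulated projection data
h_direct = M @ p                          # direct projection-to-hologram
h_two_step = Phi @ (A_pinv @ p)           # reconstruct first, then CGH
print(np.allclose(h_direct, h_two_step))
```

Precomputing `M` once is what makes the conversion a single matrix-vector product per projection set, which is the computational point of the abstract.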
Matrix differentiation formulas
NASA Technical Reports Server (NTRS)
Usikov, D. A.; Tkhabisimov, D. K.
1983-01-01
A compact differentiation technique (without using indexes) is developed for scalar functions that depend on complex matrix arguments which are combined by operations of complex conjugation, transposition, addition, multiplication, matrix inversion and taking the direct product. The differentiation apparatus is developed in order to simplify the solution of extremum problems of scalar functions of matrix arguments.
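One of the simplest formulas such an index-free apparatus yields can be checked numerically. This hedged example verifies, for real matrices, the standard identity d tr(AX)/dX = A^T against finite differences (the paper itself treats the more general complex case):

```python
import numpy as np

# Numerical check of a standard matrix differentiation formula:
# for real matrices, d tr(A X) / dX = A^T. We compare the closed form
# against central finite differences of the scalar f(X) = tr(A X).
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
X = rng.standard_normal((4, 4))

def f(M):
    return np.trace(A @ M)

eps = 1e-6
grad_fd = np.zeros_like(X)
for i in range(4):
    for j in range(4):
        E = np.zeros_like(X)
        E[i, j] = eps
        grad_fd[i, j] = (f(X + E) - f(X - E)) / (2 * eps)

print(np.allclose(grad_fd, A.T, atol=1e-6))
```

Since f is linear in X, the central difference is exact up to rounding, so the agreement is essentially to machine precision.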
Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media
NASA Astrophysics Data System (ADS)
Jakobsen, Morten; Tveit, Svenn
2018-05-01
We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which efficient (regularized) solution methods already exist. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the Distorted Born Iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which can be an advantage when performing fast repeat modelling and time-lapse inversion. The T-matrix is also compatible with modern renormalization methods that can potentially help reduce the sensitivity of the CSEM inversion results to the starting model.
To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data associated with 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBIT method for inversion of CSEM data in anisotropic media is both accurate and efficient.
Adaptive Inverse Control for Rotorcraft Vibration Reduction
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
1985-01-01
This thesis extends the Least Mean Square (LMS) algorithm to solve the multiple-input, multiple-output problem of alleviating N/Rev (revolutions per minute by number of blades) helicopter fuselage vibration by means of adaptive inverse control. A frequency-domain locally linear model is used to represent the transfer matrix relating the higher harmonic pitch control inputs to the harmonic vibration outputs to be controlled. By using the inverse matrix as the controller gain matrix, an adaptive inverse regulator is formed to alleviate the N/Rev vibration. The stability and rate of convergence properties of the extended LMS algorithm are discussed. It is shown that the stability ranges for the elements of the stability gain matrix are directly related to the eigenvalues of the vibration signal information matrix for the learning phase, but not for the control phase. The overall conclusion is that the LMS adaptive inverse control method can form a robust vibration control system, but will require some tuning of the input sensor gains, the stability gain matrix, and the amount of control relaxation to be used. The learning curve of the controller during the learning phase is shown to be quantitatively close to that predicted by averaging the learning curves of the normal modes. For higher-order transfer matrices, a rough estimate of the inverse is needed to start the algorithm efficiently. The simulation results indicate that the factor which most influences LMS adaptive inverse control is the product of the control relaxation and the stability gain matrix. A small stability gain matrix makes the controller less sensitive to relaxation selection, and permits faster and more stable vibration reduction, than choosing the stability gain matrix large and the control relaxation term small.
It is shown that the best selection of the stability gain matrix elements and the amount of control relaxation is basically a compromise between slow, stable convergence and fast convergence with an increased possibility of unstable identification. In the simulation studies, the LMS adaptive inverse control algorithm is shown to be capable of adapting the inverse (controller) matrix to track changes in the flight conditions. The algorithm converges quickly for moderate disturbances, while taking longer for larger disturbances. Perfect knowledge of the inverse matrix is not required for good control of the N/Rev vibration. However, it is shown that measurement noise will prevent the LMS adaptive inverse control technique from controlling the vibration, unless the signal averaging method presented is incorporated into the algorithm.
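The core LMS idea behind adaptive inverse control can be sketched in a few lines. This is a hedged toy: a random, diagonally dominant 3x3 matrix stands in for the helicopter transfer matrix, a normalized step size plays the role of the stability gain, and no claim is made that this matches the thesis's exact update:

```python
import numpy as np

# Hedged sketch of LMS-style adaptive inverse identification: adapt a
# controller matrix W from input/output samples so that W approximates
# the inverse of a transfer matrix T.
rng = np.random.default_rng(4)
T = 3 * np.eye(3) + 0.5 * rng.standard_normal((3, 3))  # well-conditioned plant
W = np.zeros((3, 3))                                   # inverse estimate
mu = 0.5                                               # normalized step size

for _ in range(4000):
    x = rng.standard_normal(3)
    y = T @ x                           # observed plant response
    e = x - W @ y                       # error of the inverse model
    W += mu * np.outer(e, y) / (y @ y)  # normalized LMS update

print(np.allclose(W @ T, np.eye(3), atol=1e-3))
```

With noise-free samples the estimate converges to the true inverse; the abstract's point that measurement noise defeats plain LMS (without signal averaging) corresponds to adding noise to `y` in this loop.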
Direct Iterative Nonlinear Inversion by Multi-frequency T-matrix Completion
NASA Astrophysics Data System (ADS)
Jakobsen, M.; Wu, R. S.
2016-12-01
Researchers in the mathematical physics community have recently proposed a conceptually new method for solving nonlinear inverse scattering problems (like FWI) which is inspired by the theory of nonlocality of physical interactions. The conceptually new method, which may be referred to as the T-matrix completion method, is very interesting since it is not based on linearization at any stage. Also, there are no gradient vectors or (inverse) Hessian matrices to calculate. However, the convergence radius of this promising T-matrix completion method is seriously restricted by its use of single-frequency scattering data only. In this study, we have developed a modified version of the T-matrix completion method which we believe is more suitable for applications to nonlinear inverse scattering problems in (exploration) seismology, because it makes use of multi-frequency data. Essentially, we have simplified the single-frequency T-matrix completion method of Levinson and Markel and combined it with the standard sequential frequency inversion (multi-scale regularization) method. For each frequency, we first estimate the experimental T-matrix by using the Moore-Penrose pseudo-inverse concept. Then this experimental T-matrix is used to initiate an iterative procedure for successive estimation of the scattering potential and the T-matrix, using the Lippmann-Schwinger equation for the nonlinear relation between these two quantities. The main physical requirements in the basic iterative cycle are that the T-matrix should be data-compatible and the scattering potential operator should be dominantly local, although a non-local scattering potential operator is allowed in the intermediate iterations. In our simplified T-matrix completion strategy, we ensure that the T-matrix updates are always data-compatible simply by adding a suitable correction term in the real-space coordinate representation.
The use of singular-value decomposition representations is not required in our formulation, since we have developed an efficient domain decomposition method. The results of several numerical experiments for the SEG/EAGE salt model illustrate the importance of using multi-frequency data when performing frequency-domain full waveform inversion in strongly scattering media via the new concept of T-matrix completion.
A fast new algorithm for a robot neurocontroller using inverse QR decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, A.S.; Khemaissia, S.
2000-01-01
A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array, and its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by QR decomposition approaches.
2000-05-01
... a vector, ρ represents the set of voxel densities sorted into a vector, and A(ρ) represents a mapping of the voxel densities to ... the density vector in equation (4) suggests that solving for ρ by direct inversion is not possible, calling for an iterative technique beginning with ... the vector of measured spectra, and D is the diagonal matrix of the inverse of the variances. The diagonal matrix provides weighting terms, which ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, T; Dong, X; Petrongolo, M
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT.
The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.
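The covariance-weighted, smoothness-regularized least-squares idea described above can be sketched in 1-D. This is a hedged illustration only: the signal, noise covariance, and regularization weight are all invented stand-ins, and the real method operates on 2-D decomposed images within the decomposition loop:

```python
import numpy as np

# Hedged 1-D sketch: recover decomposed values x from noisy direct-inversion
# values y = x_true + correlated noise, by minimizing
#   (y - x)^T W (y - x) + lam * ||D x||^2,
# where W is the inverse noise covariance (the penalty weight) and D is a
# first-difference (smoothness) operator.
rng = np.random.default_rng(8)
n = 200
x_true = np.sin(np.linspace(0, 3 * np.pi, n))            # smooth ground truth
Cov = 0.04 * (np.eye(n) + 0.5 * np.eye(n, k=1) + 0.5 * np.eye(n, k=-1))
L = np.linalg.cholesky(Cov)
y = x_true + L @ rng.standard_normal(n)                  # correlated noise

W = np.linalg.inv(Cov)                                   # penalty weight
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]                 # first differences
lam = 5.0
x = np.linalg.solve(W + lam * D.T @ D, W @ y)            # closed-form minimizer

err_raw = float(np.linalg.norm(y - x_true))
err_reg = float(np.linalg.norm(x - x_true))
print(err_reg < err_raw)
```

Weighting the data term by the inverse covariance is what lets the estimator discount exactly the noise directions that matrix-inversion decomposition amplifies.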
NASA Astrophysics Data System (ADS)
Zhou, Xin
1990-03-01
For the direct-inverse scattering transform of the time-dependent Schrödinger equation, rigorous results are obtained based on an operator-triangular-factorization approach. By viewing the equation as a first-order operator equation, results similar to those for the first-order n x n matrix system are obtained. The nonlocal Riemann-Hilbert problem for inverse scattering is shown to have a solution.
NASA Astrophysics Data System (ADS)
Zhang, Xing; Carter, Emily A.
2018-01-01
We revisit the static response function-based Kohn-Sham (KS) inversion procedure for determining the KS effective potential that corresponds to a given target electron density within finite atomic orbital basis sets. Instead of expanding the potential in an auxiliary basis set, we directly update the potential in its matrix representation. Through numerical examples, we show that the reconstructed density rapidly converges to the target density. Preliminary results are presented to illustrate the possibility of obtaining a local potential in real space from the optimized potential in its matrix representation. We have further applied this matrix-based KS inversion approach to density functional embedding theory. A proof-of-concept study of a solvated proton transfer reaction demonstrates the method's promise.
Compton, L A; Johnson, W C
1986-05-15
Inverse circular dichroism (CD) spectra are presented for each of the five major secondary structures of proteins: alpha-helix, antiparallel and parallel beta-sheet, beta-turn, and other (random) structures. The fraction of each secondary structure in a protein is predicted by forming the dot product of the corresponding inverse CD spectrum, expressed as a vector, with the CD spectrum of the protein digitized in the same way. We show how this method is based on the construction of the generalized inverse from the singular value decomposition of a set of CD spectra corresponding to proteins whose secondary structures are known from X-ray crystallography. These inverse spectra compute secondary structure directly from protein CD spectra without resorting to least-squares fitting and standard matrix inversion techniques. In addition, spectra corresponding to the individual secondary structures, analogous to the CD spectra of synthetic polypeptides, are generated from the five most significant CD eigenvectors.
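The generalized-inverse construction can be illustrated with synthetic numbers. This hedged toy uses random stand-ins for the pure-structure spectra and reference fractions (not real CD data), and shows why a single dot product with the "inverse spectra" reproduces the fractions:

```python
import numpy as np

# Hedged toy: given reference CD spectra C_ref (wavelengths x proteins) and
# their known secondary-structure fractions F_ref (structures x proteins),
# the "inverse spectra" X = F_ref @ pinv(C_ref) predict fractions for a new
# protein by one matrix-vector product, with no per-protein fitting.
rng = np.random.default_rng(5)
basis = rng.standard_normal((41, 5))          # pure-structure spectra (toy)
F_ref = rng.dirichlet(np.ones(5), size=12).T  # fractions of 12 ref. proteins
C_ref = basis @ F_ref                         # their CD spectra

X = F_ref @ np.linalg.pinv(C_ref)             # inverse CD spectra (5 x 41)

f_new = rng.dirichlet(np.ones(5))             # unknown protein's fractions
c_new = basis @ f_new                         # its (noise-free) CD spectrum
print(np.allclose(X @ c_new, f_new, atol=1e-6))
```

In this noise-free linear model the pseudo-inverse makes `X` exactly the left inverse of the pure-structure basis, which is the abstract's point that prediction reduces to dot products.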
Angle-domain inverse scattering migration/inversion in isotropic media
NASA Astrophysics Data System (ADS)
Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan
2018-07-01
The classical seismic asymptotic inversion can be transformed into a problem of inversion of the generalized Radon transform (GRT). In such methods, the combined parameters are linearly related to the scattered wave-field by the Born approximation and recovered by applying an inverse GRT operator to the scattered wave-field data. A typical GRT-style true-amplitude inversion procedure contains an amplitude compensation process after the weighted migration, via division by an illumination-associated matrix whose elements are integrals over scattering angles. It is to some extent intuitive to perform the generalized linear inversion and the inversion of the GRT together in this process for direct inversion. However, such an operation is imprecise when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally avoids the external integral term related to the illumination that appears in the conventional case. We solve the linearized integral equation for combined parameters at different fixed scattering-angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop, which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. We then deal with the over-determined problem to solve for each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method can obtain more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate its effectiveness and practicability.
A new fast direct solver for the boundary element method
NASA Astrophysics Data System (ADS)
Huang, S.; Liu, Y. J.
2017-09-01
A new fast direct linear equation solver for the boundary element method (BEM) is presented in this paper. The idea of the new fast direct solver stems from the concept of the hierarchical off-diagonal low-rank matrix. The hierarchical off-diagonal low-rank matrix can be decomposed into the multiplication of several diagonal block matrices. The inverse of the hierarchical off-diagonal low-rank matrix can be calculated efficiently with the Sherman-Morrison-Woodbury formula. In this paper, a more general and efficient approach to approximate the coefficient matrix of the BEM with the hierarchical off-diagonal low-rank matrix is proposed. Compared to the current fast direct solver based on the hierarchical off-diagonal low-rank matrix, the proposed method is suitable for solving general 3-D boundary element models. Several numerical examples of 3-D potential problems with the total number of unknowns up to above 200,000 are presented. The results show that the new fast direct solver can be applied to solve large 3-D BEM models accurately and with better efficiency compared with the conventional BEM.
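The Sherman-Morrison-Woodbury identity that underpins such hierarchical solvers is easy to verify numerically. This hedged example uses a diagonal-plus-low-rank matrix as a small stand-in for an off-diagonal low-rank block structure:

```python
import numpy as np

# Check of the Sherman-Morrison-Woodbury identity:
# (A + U V^T)^{-1} = A^{-1} - A^{-1} U (I + V^T A^{-1} U)^{-1} V^T A^{-1}.
# Only a small k x k system is inverted besides the easy block A.
rng = np.random.default_rng(6)
n, k = 200, 5
A = np.diag(2.0 + rng.random(n))          # easy-to-invert part
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))

Ainv = np.diag(1.0 / np.diag(A))
core = np.linalg.inv(np.eye(k) + V.T @ Ainv @ U)    # only k x k
M_inv_smw = Ainv - Ainv @ U @ core @ V.T @ Ainv

M_inv_direct = np.linalg.inv(A + U @ V.T)
print(np.allclose(M_inv_smw, M_inv_direct, atol=1e-8))
```

The cost argument is visible here: the direct route inverts an n x n matrix, while the Woodbury route only inverts a k x k core, which is why low-rank off-diagonal blocks make a fast direct solver possible.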
Teaching Tip: When a Matrix and Its Inverse Are Stochastic
ERIC Educational Resources Information Center
Ding, J.; Rhee, N. H.
2013-01-01
A stochastic matrix is a square matrix with nonnegative entries and row sums 1. The simplest example is a permutation matrix, whose rows permute the rows of an identity matrix. A permutation matrix and its inverse are both stochastic. We prove the converse, that is, if a matrix and its inverse are both stochastic, then it is a permutation matrix.
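The teaching tip is easy to demonstrate numerically, one direction by example and the converse by a counterexample:

```python
import numpy as np

# A permutation matrix and its inverse (its transpose) are both stochastic,
# whereas a generic stochastic matrix has an inverse with negative entries.
P = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])          # permutation matrix
P_inv = np.linalg.inv(P)
print(np.allclose(P_inv, P.T))        # inverse of a permutation is its transpose
print((P_inv >= 0).all() and np.allclose(P_inv.sum(axis=1), 1.0))

S = np.array([[0.5, 0.5],
              [0.25, 0.75]])          # stochastic but not a permutation
S_inv = np.linalg.inv(S)              # equals [[3, -2], [-1, 2]]
print((S_inv >= 0).all())             # False: inverse is not stochastic
```

The counterexample `S` illustrates exactly the case the theorem rules out: its inverse has row sums 1 but negative entries, so it fails nonnegativity.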
NASA Astrophysics Data System (ADS)
Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.
2016-01-01
Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ~1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.
Approximation of reliability of direct genomic breeding values
USDA-ARS?s Scientific Manuscript database
Two methods to efficiently approximate theoretical genomic reliabilities are presented. The first method is based on the direct inverse of the left hand side (LHS) of mixed model equations. It uses the genomic relationship matrix for a small subset of individuals with the highest genomic relationshi...
NASA Astrophysics Data System (ADS)
Kordy, M. A.; Wannamaker, P. E.; Maris, V.; Cherkaev, E.; Hill, G. J.
2014-12-01
We have developed an algorithm for 3D simulation and inversion of magnetotelluric (MT) responses using deformable hexahedral finite elements that permits incorporation of topography. Direct solvers parallelized on symmetric multiprocessor (SMP), single-chassis workstations with large RAM are used for the forward solution, parameter Jacobians, and model update. The forward simulator, Jacobian calculations, and synthetic and real data inversions are presented. We use first-order edge elements to represent the secondary electric field (E), yielding accuracy O(h) for E and its curl (the magnetic field). For very low frequency or small material admittivity, the E-field requires divergence correction. Using Hodge decomposition, the correction may be applied after the forward solution is calculated; this allows accurate E-field solutions in dielectric air. The system matrix factorization is computed using the MUMPS library, which shows moderately good scalability through 12 processor cores but limited gains beyond that. The factored matrix is used to calculate the forward response as well as the Jacobians of the field and MT responses using the reciprocity theorem. Comparison with other codes demonstrates the accuracy of our forward calculations. We consider a popular conductive/resistive double-brick structure and several topographic models. In particular, the ability of finite elements to represent smooth topographic slopes permits accurate simulation of refraction of electromagnetic waves normal to the slopes at high frequencies. Run-time tests indicate that for meshes as large as 150x150x60 elements, the MT forward response and Jacobians can be calculated in ~2.5 hours per frequency. For inversion, we implemented a data-space Gauss-Newton method, which offers a reduction in memory requirements and a significant speedup of the parameter step versus the model-space approach. For dense matrix operations we use the tiling approach of the PLASMA library, which shows very good scalability.
In synthetic inversions we examine the importance of including topography in the inversion, and we test different regularization schemes using a weighted second norm of the model gradient, as well as inverting for a static distortion matrix following the Miensopust/Avdeeva approach. We also apply our algorithm to invert MT data collected at Mount St. Helens.
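The memory saving of the data-space Gauss-Newton step mentioned above rests on a standard "push-through" identity; the following is a generic sketch of that equivalence (random sizes and damping are made up, not the authors' implementation):

```python
import numpy as np

# For a linearized step with Jacobian J, residual r and damping lam, the
# model-space form solves an M x M system while the data-space form solves
# an N x N system -- a large saving when there are far fewer data (N) than
# model parameters (M).
rng = np.random.default_rng(0)
N, M = 20, 500                      # few data, many model parameters
J = rng.standard_normal((N, M))
r = rng.standard_normal(N)
lam = 1e-2

# Model-space step: (J^T J + lam I_M) dm = J^T r   (M x M solve)
dm_model = np.linalg.solve(J.T @ J + lam * np.eye(M), J.T @ r)

# Data-space step: dm = J^T (J J^T + lam I_N)^{-1} r   (N x N solve)
dm_data = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(N), r)

assert np.allclose(dm_model, dm_data)   # the two steps are identical
```

The identity (JᵀJ + λI)⁻¹Jᵀ = Jᵀ(JJᵀ + λI)⁻¹ holds for any J, which is why the data-space form gives the same update at a fraction of the cost when N ≪ M.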
Negre, Christian F. A; Mniszewski, Susan M.; Cawkwell, Marc Jon; ...
2016-06-06
We present a reduced complexity algorithm to compute the inverse overlap factors required to solve the generalized eigenvalue problem in a quantum-based molecular dynamics (MD) simulation. Our method is based on the recursive iterative refinement of an initial guess Z of the inverse of the overlap matrix S. The initial guess of Z is obtained beforehand either by using an approximate divide-and-conquer technique or dynamically, propagated within an extended Lagrangian dynamics from previous MD time steps. With this formulation, we achieve long-term stability and energy conservation even under incomplete, approximate iterative refinement of Z. Linear scaling performance is obtained using numerically thresholded sparse matrix algebra based on the ELLPACK-R sparse matrix data format, which also enables efficient shared memory parallelization. As we show in this article using self-consistent density functional based tight-binding MD, our approach is faster than conventional methods based on the direct diagonalization of the overlap matrix S for systems as small as a few hundred atoms, substantially accelerating quantum-based simulations even for molecular structures of intermediate size. For a 4,158-atom water-solvated polyalanine system we find an average speedup factor of 122 for the computation of Z in each MD step.
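Iterative refinement of an approximate inverse of the kind described above is commonly done with a Newton-Schulz iteration; the sketch below illustrates the idea on a dense SPD stand-in for an overlap matrix (matrix size and initial-guess formula are illustrative, not from the paper):

```python
import numpy as np

def newton_schulz_inverse(S, Z0, tol=1e-12, max_iter=100):
    """Refine Z toward S^{-1} via the quadratically convergent
    iteration Z <- Z (2I - S Z)."""
    Z = Z0.copy()
    I = np.eye(S.shape[0])
    for _ in range(max_iter):
        R = I - S @ Z                  # residual of the approximate inverse
        if np.linalg.norm(R) < tol:
            break
        Z = Z @ (2.0 * I - S @ Z)
    return Z

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
S = A @ A.T + 50 * np.eye(50)          # SPD, like an overlap matrix
# Classic safe start: Z0 = S^T / (||S||_1 ||S||_inf) guarantees convergence.
Z0 = S.T / (np.linalg.norm(S, 1) * np.linalg.norm(S, np.inf))
Z = newton_schulz_inverse(S, Z0)
assert np.allclose(Z @ S, np.eye(50), atol=1e-8)
```

In an MD setting the previous time step's Z serves as a much better starting guess than the generic Z0 used here, so only a few refinement sweeps are needed per step.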
NASA Astrophysics Data System (ADS)
Szabó, Norbert Péter
2018-03-01
An evolutionary inversion approach is suggested for the interpretation of nuclear and resistivity logs measured by direct-push tools in shallow unsaturated sediments. The efficiency of formation evaluation is improved by estimating simultaneously (1) the petrophysical properties that vary rapidly along a drill hole with depth and (2) the zone parameters that can be treated as constant, in one inversion procedure. In the workflow, the fractional volumes of water, air, matrix and clay are estimated in adjacent depths by linearized inversion, whereas the clay and matrix properties are updated using a float-encoded genetic meta-algorithm. The proposed inversion method provides an objective estimate of the zone parameters that appear in the tool response equations applied to solve the forward problem, which can significantly increase the reliability of the petrophysical model as opposed to setting these parameters arbitrarily. The global optimization meta-algorithm not only assures the best fit between the measured and calculated data but also gives a reliable solution, practically independent of the initial model, as laboratory data are unnecessary in the inversion procedure. The feasibility test uses engineering geophysical sounding logs observed in an unsaturated loessy-sandy formation in Hungary. The multi-borehole extension of the inversion technique is developed to determine the petrophysical properties and their estimation errors along a profile of drill holes. The genetic meta-algorithmic inversion method is recommended for hydrogeophysical logging applications of various kinds to automatically extract the volumetric ratios of rock and fluid constituents as well as the most important zone parameters in a reliable inversion procedure.
Characterizing the inverses of block tridiagonal, block Toeplitz matrices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boffi, Nicholas M.; Hill, Judith C.; Reuter, Matthew G.
2014-12-04
We consider the inversion of block tridiagonal, block Toeplitz matrices and comment on the behaviour of these inverses as one moves away from the diagonal. Using matrix Möbius transformations, we first present an O(1) representation (with respect to the number of block rows and block columns) for the inverse matrix and subsequently use this representation to characterize the inverse matrix. There are four symmetry-distinct cases where the blocks of the inverse matrix (i) decay to zero on both sides of the diagonal, (ii) oscillate on both sides, (iii) decay on one side and oscillate on the other and (iv) decay on one side and grow on the other. This characterization exposes the necessary conditions for the inverse matrix to be numerically banded and may also aid in the design of preconditioners and fast algorithms. Finally, we present numerical examples of these matrix types.
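Case (i), decay on both sides of the diagonal, can be checked numerically on the scalar analogue of the block problem; the matrix below is a made-up, diagonally dominant tridiagonal Toeplitz example:

```python
import numpy as np

n = 60
# Scalar tridiagonal Toeplitz matrix: 4 on the diagonal, -1 on both
# off-diagonals (a 1-D Laplacian-like stencil, diagonally dominant).
T = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Tinv = np.linalg.inv(T)

# Entries of the inverse decay geometrically away from the diagonal
# (rate 2 - sqrt(3) here), so the inverse is numerically banded.
row = np.abs(Tinv[n // 2])
assert row[n // 2 + 20] < 1e-8 * row[n // 2]
```

For this stencil the decay ratio is 2 − √3 ≈ 0.27 per step off the diagonal, which is why truncating the inverse to a modest band loses almost nothing, exactly the situation where banded preconditioners pay off.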
Masuda, Y; Misztal, I; Legarra, A; Tsuruta, S; Lourenco, D A L; Fragomeni, B O; Aguilar, I
2017-01-01
This paper evaluates an efficient implementation for multiplying the inverse of the numerator relationship matrix for genotyped animals by a vector. The computation is required for solving the mixed model equations in single-step genomic BLUP (ssGBLUP) with the preconditioned conjugate gradient (PCG) method. The inverse can be decomposed into sparse matrices that are blocks of the sparse inverse of a numerator relationship matrix including genotyped animals and their ancestors. The elements of this inverse were rapidly calculated with Henderson's rule and stored as sparse matrices in memory. The multiplication was implemented as a series of sparse matrix-vector multiplications. Diagonal elements of the inverse, which were required as preconditioners in PCG, were approximated with a Monte Carlo method using 1,000 samples. The efficient implementation was compared with explicit inversion using 3 data sets including about 15,000, 81,000, and 570,000 genotyped animals selected from populations with 213,000, 8.2 million, and 10.7 million pedigree animals, respectively. The explicit inversion required 1.8 GB, 49 GB, and 2,415 GB (estimated) of memory, respectively, and 42 s, 56 min, and 13.5 d (estimated), respectively, for the computations. The efficient implementation required <1 MB, 2.9 GB, and 2.3 GB of memory, respectively, and <1 s, 3 min, and 5 min, respectively, for setting up. Less than 1 s was required for the multiplication in each PCG iteration for all data sets. When the equations in ssGBLUP are solved with the PCG algorithm, this inverse is no longer a limiting factor in the computations.
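The general trick, never forming a dense inverse but representing "inverse times vector" through sparse factors, can be sketched as follows (a generic SPD sparse system, not the ssGBLUP matrices):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Build a large sparse SPD matrix K (tridiagonal stand-in).
n = 2000
main = 4.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
K = sp.diags([off, main, off], [-1, 0, 1], format="csc")

# Keep only a sparse factorization in memory; "K^{-1} v" becomes two
# cheap sparse triangular solves instead of a dense n x n inverse.
lu = spla.splu(K)
Kinv_op = spla.LinearOperator((n, n), matvec=lu.solve)

v = np.ones(n)
x_implicit = Kinv_op.matvec(v)              # sparse route: fast, tiny memory
x_dense = np.linalg.solve(K.toarray(), v)   # explicit dense route
assert np.allclose(x_implicit, x_dense)
```

Inside a PCG loop, `Kinv_op.matvec` would be called once per iteration, which is why keeping the factors sparse (rather than storing an explicit inverse) dominates both the memory and time comparisons reported in the abstract.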
Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique
NASA Astrophysics Data System (ADS)
Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi
2013-09-01
Based on direct exposure measurements from a flash radiographic image, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse-reconstruction technique orthogonal matching pursuit (OMP) is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this technique. The solver has three features: (1) AutoCAD is employed as a geometry preprocessor owing to its powerful graphics capabilities. (2) The forward projection matrix, rather than a Gauss matrix, is constructed by the visualization tool generator. (3) Fourier and Daubechies wavelet transforms are adopted to convert an underdetermined system into a well-posed system in the algorithm. Simulations are performed, and the numerical results for the pseudo-sine absorption, two-cube, and two-cylinder problems obtained with the compressive sensing-based solver agree well with reference values.
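A minimal orthogonal matching pursuit sketch illustrates the sparse-recovery step; the random sensing matrix below is a made-up stand-in for the forward projection matrix:

```python
import numpy as np

def omp(A, y, k):
    """Greedy OMP: pick the column most correlated with the residual,
    then least-squares fit y on the selected support."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((80, 120))
A /= np.linalg.norm(A, axis=0)               # unit-norm columns
x_true = np.zeros(120)
x_true[[5, 37, 90]] = [1.5, -2.0, 0.8]       # 3-sparse ground truth
y = A @ x_true                               # underdetermined measurements
x_hat = omp(A, y, k=3)
assert np.linalg.norm(y - A @ x_hat) < 1e-6  # exact fit on the true support
```

With 80 measurements of a 120-coefficient, 3-sparse signal the greedy search recovers the support and the coefficients exactly, which is the mechanism the solver relies on when measurements are scarce.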
Mode detection in turbofan inlets from near field sensor arrays.
Castres, Fabrice O; Joseph, Phillip F
2007-02-01
Knowledge of the modal content of the sound field radiated from a turbofan inlet is important for source characterization and for helping to determine noise generation mechanisms in the engine. An inverse technique for determining the mode amplitudes at the duct outlet is proposed using pressure measurements made in the near field. The radiated sound pressure from a duct is modeled by directivity patterns of cut-on modes in the near field using a model based on the Kirchhoff approximation for flanged ducts with no flow. The resulting system of equations is ill posed and it is shown that the presence of modes with eigenvalues close to a cutoff frequency results in a poorly conditioned directivity matrix. An analysis of the conditioning of this directivity matrix is carried out to assess the inversion robustness and accuracy. A physical interpretation of the singular value decomposition is given and allows us to understand the issues of ill conditioning as well as the detection performance of the radiated sound field by a given sensor array.
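The conditioning issue described above can be made concrete with a toy truncated-SVD inversion; the array size and singular-value spectrum below are made up, with two tiny singular values standing in for modes close to cut-on:

```python
import numpy as np

# Synthetic "directivity" problem: sensor pressures p = G a for mode
# amplitudes a; near-cut-on modes give G tiny singular values.
rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.standard_normal((30, 8)))
V, _ = np.linalg.qr(rng.standard_normal((8, 8)))
s = np.array([10.0, 5.0, 2.0, 1.0, 0.5, 0.1, 1e-6, 1e-8])
G = (U * s) @ V.T                            # 30 sensors, 8 duct modes

a_true = rng.standard_normal(8)
p = G @ a_true + 1e-4 * rng.standard_normal(30)   # noisy array pressures

# Truncated-SVD pseudoinverse: keep only well-conditioned directions.
Uh, sh, Vh = np.linalg.svd(G, full_matrices=False)
keep = sh > 1e-3 * sh[0]                     # discard near-cut-on directions
a_tsvd = (Vh[keep].T / sh[keep]) @ (Uh[:, keep].T @ p)
a_naive = np.linalg.pinv(G, rcond=1e-16) @ p # keep everything

assert sh[0] / sh[-1] > 1e8                  # severely ill conditioned
assert np.linalg.norm(a_tsvd - a_true) < np.linalg.norm(a_naive - a_true)
```

The naive inversion amplifies the measurement noise by the reciprocal of the smallest singular values, while truncation sacrifices the unresolvable mode directions to keep the rest stable, the same interpretation the SVD analysis in the abstract provides.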
Santos, Hugo M; Reboiro-Jato, Miguel; Glez-Peña, Daniel; Nunes-Miranda, J D; Fdez-Riverola, Florentino; Carvallo, R; Capelo, J L
2010-09-15
The decision peptide-driven tool implements a software application for assisting the user in a protocol for accurate protein quantification based on the following steps: (1) protein separation through gel electrophoresis; (2) in-gel protein digestion; (3) direct and inverse (18)O-labeling and (4) matrix assisted laser desorption ionization time of flight mass spectrometry, MALDI analysis. The DPD software compares the MALDI results of the direct and inverse (18)O-labeling experiments and quickly identifies those peptides with paralleled loses in different sets of a typical proteomic workflow. Those peptides are used for subsequent accurate protein quantification. The interpretation of the MALDI data from direct and inverse labeling experiments is time-consuming requiring a significant amount of time to do all comparisons manually. The DPD software shortens and simplifies the searching of the peptides that must be used for quantification from a week to just some minutes. To do so, it takes as input several MALDI spectra and aids the researcher in an automatic mode (i) to compare data from direct and inverse (18)O-labeling experiments, calculating the corresponding ratios to determine those peptides with paralleled losses throughout different sets of experiments; and (ii) allow to use those peptides as internal standards for subsequent accurate protein quantification using (18)O-labeling. In this work the DPD software is presented and explained with the quantification of protein carbonic anhydrase. Copyright (c) 2010 Elsevier B.V. All rights reserved.
A direct method for unfolding the resolution function from measurements of neutron induced reactions
NASA Astrophysics Data System (ADS)
Žugec, P.; Colonna, N.; Sabate-Gilarte, M.; Vlachoudis, V.; Massimi, C.; Lerendegui-Marco, J.; Stamatopoulos, A.; Bacak, M.; Warren, S. G.; n_TOF Collaboration
2017-12-01
The paper explores the numerical stability and the computational efficiency of a direct method for unfolding the resolution function from measurements of neutron induced reactions. A detailed resolution function formalism is laid out, followed by an overview of challenges present in a practical implementation of the method. A special matrix storage scheme is developed both to facilitate the memory management of the resolution function matrix and to increase the computational efficiency of the matrix multiplication and decomposition procedures. Due to its admirable computational properties, a Cholesky decomposition is at the heart of the unfolding procedure. With the smallest but necessary modification of the matrix to be decomposed, the method is successfully applied to systems of size 10^5 × 10^5. However, the amplification of the uncertainties during the direct inversion procedures limits the applicability of the method to high-precision measurements of neutron induced reactions.
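A Cholesky-based unfolding step of the kind described can be sketched on a toy smearing problem (Gaussian resolution kernel, sizes and the tiny diagonal shift are illustrative, not the n_TOF implementation):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Measured spectrum y is the true spectrum x smeared by a resolution
# matrix R: y = R x.  Unfold via the normal equations, with a small
# diagonal shift making the matrix strictly positive definite.
n = 300
i = np.arange(n)
R = np.exp(-0.5 * (i[:, None] - i[None, :]) ** 2)   # Gaussian smearing
R /= R.sum(axis=1, keepdims=True)                   # rows sum to one

x_true = np.sin(i / 15.0) ** 2 + 0.1                # positive test spectrum
y = R @ x_true

A = R.T @ R + 1e-10 * np.eye(n)     # "smallest necessary" PD modification
c = cho_factor(A)                   # Cholesky decomposition, O(n^3/3)
x_unfolded = cho_solve(c, R.T @ y)
assert np.allclose(x_unfolded, x_true, atol=1e-6)
```

With noise-free data the unfolding is essentially exact; in practice the same inversion amplifies measurement uncertainties in proportion to the conditioning of RᵀR, which is the limitation the abstract points out.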
Magnetotelluric inversion via reverse time migration algorithm of seismic data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ha, Taeyoung; Shin, Changsoo
2007-07-01
We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nédélec for Maxwell's equations, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.
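The key computational idea, a steepest-descent direction obtained from an adjoint ("backpropagation") product rather than an explicit Jacobian, can be sketched generically (a random linear forward operator stands in for the MT simulator; this is not the authors' code):

```python
import numpy as np

# Minimize 0.5 * ||d - G m||^2 by steepest descent; the gradient -G^T r is
# formed by one adjoint application per iteration, so the Jacobian matrix
# is never stored explicitly.
rng = np.random.default_rng(4)
G = rng.standard_normal((80, 40))       # stand-in for the forward operator

def forward(m):  return G @ m
def adjoint(r):  return G.T @ r         # adjoint product = backpropagation

m_true = rng.standard_normal(40)
d = forward(m_true)

m = np.zeros(40)
for _ in range(500):
    r = d - forward(m)                  # data residual
    if np.linalg.norm(r) < 1e-12:
        break
    g = -adjoint(r)                     # gradient of the misfit
    step = (g @ g) / (forward(g) @ forward(g))  # exact quadratic line search
    m -= step * g
assert np.linalg.norm(forward(m) - d) < 1e-6 * np.linalg.norm(d)
```

For a self-adjoint discretization (the Green's-function symmetry the abstract exploits), the adjoint application reuses the same solver as the forward problem, which is what makes the gradient essentially free once the forward response is computed.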
Comparing implementations of penalized weighted least-squares sinogram restoration.
Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick
2010-11-01
A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. 
For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors' previous penalized-likelihood implementation. Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes.
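The direct-versus-iterative comparison can be reproduced on a 1-D toy PWLS problem (signal, weights and penalty below are made up; a hand-written conjugate gradient keeps the sketch self-contained):

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-12, max_iter=2000):
    """Plain CG for an SPD system A x = b given as a matvec."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy PWLS: minimize (y - x)^T W (y - x) + beta ||D x||^2, whose minimizer
# solves the SPD system (W + beta D^T D) x = W y.
n = 200
rng = np.random.default_rng(5)
y = np.cumsum(rng.standard_normal(n)) * 0.1 + np.sin(np.arange(n) / 20)
w = 1.0 + rng.random(n)                       # statistical weights
D = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]   # first differences
beta = 5.0

A = np.diag(w) + beta * D.T @ D
b = w * y
x_direct = np.linalg.solve(A, b)              # closed-form route
x_cg = conjugate_gradient(lambda v: A @ v, b) # iterative route
assert np.allclose(x_direct, x_cg, atol=1e-8)
```

Both routes reach the same minimizer; the practical difference, as in the study, is that CG only ever needs matrix-vector products, which is what lets sparsity and preconditioning pay off on full-size sinograms.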
Analysis of modified SMI method for adaptive array weight control
NASA Technical Reports Server (NTRS)
Dilsavor, R. L.; Moses, R. L.
1989-01-01
An adaptive array is applied to the problem of receiving a desired signal in the presence of weak interference signals which need to be suppressed. A modification, suggested by Gupta, of the sample matrix inversion (SMI) algorithm controls the array weights. In the modified SMI algorithm, interference suppression is increased by subtracting a fraction F of the noise power from the diagonal elements of the estimated covariance matrix. Given the true covariance matrix and the desired signal direction, the modified algorithm is shown to maximize a well-defined, intuitive output power ratio criterion. Expressions are derived for the expected value and variance of the array weights and output powers as a function of the fraction F and the number of snapshots used in the covariance matrix estimate. These expressions are compared with computer simulation and good agreement is found. A trade-off is found to exist between the desired level of interference suppression and the number of snapshots required in order to achieve that level with some certainty. The removal of noise eigenvectors from the covariance matrix inverse is also discussed with respect to this application. Finally, the type and severity of errors which occur in the covariance matrix estimate are characterized through simulation.
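A toy version of the modified SMI idea can be simulated directly; the array geometry, signal powers and the rejection metric below are made up for illustration, not taken from the paper:

```python
import numpy as np

# 8-element uniform line array, one weak interferer plus unit white noise.
rng = np.random.default_rng(6)
n_el, n_snap = 8, 2000
steer = lambda theta: np.exp(1j * np.pi * np.arange(n_el) * np.sin(theta))
s, v = steer(0.0), steer(0.35)          # desired and interferer directions

noise = (rng.standard_normal((n_el, n_snap))
         + 1j * rng.standard_normal((n_el, n_snap))) / np.sqrt(2)
X = 0.5 * v[:, None] * rng.standard_normal(n_snap) + noise  # weak interferer
R_hat = X @ X.conj().T / n_snap         # sample covariance estimate

def weights(F, noise_pow=1.0):
    # Modified SMI: subtract a fraction F of the noise power from the
    # diagonal before inverting (F = 0 recovers conventional SMI).
    return np.linalg.solve(R_hat - F * noise_pow * np.eye(n_el), s)

def rejection(w):   # interferer gain relative to desired-signal gain
    return np.abs(np.vdot(w, v)) ** 2 / np.abs(np.vdot(w, s)) ** 2

w0, w_mod = weights(0.0), weights(0.5)
assert rejection(w_mod) < rejection(w0)  # deeper null on the weak interferer
```

Shrinking the noise floor on the diagonal makes the weak interferer's eigenvalue loom larger in the inverse, deepening its null; pushing F too close to 1 risks an indefinite matrix once covariance-estimation error is taken into account, which is the trade-off against snapshot count analyzed in the abstract.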
A space efficient flexible pivot selection approach to evaluate determinant and inverse of a matrix.
Jafree, Hafsa Athar; Imtiaz, Muhammad; Inayatullah, Syed; Khan, Fozia Hanif; Nizami, Tajuddin
2014-01-01
This paper presents new, simple approaches for evaluating the determinant and inverse of a matrix. The choice of pivot has been kept arbitrary, which reduces the error when solving an ill-conditioned system. Computation of the determinant has been made more efficient by avoiding unnecessary data storage and by reducing the order of the matrix at each iteration, while dictionary notation [1] has been incorporated for computing the matrix inverse, thereby saving unnecessary calculations. The algorithms are highly classroom-oriented and easy for students to use and implement. By taking advantage of the flexibility in pivot selection, one may easily avoid the emergence of fractions in most cases. Unlike the matrix inversion methods of [2] and [3], the presented algorithms obviate the use of permutations and inverse permutations.
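A sketch of the order-reducing, flexible-pivot determinant idea (my own minimal rendering, not the paper's exact procedure; the default rule picks the largest-magnitude entry, but any nonzero pivot rule may be supplied):

```python
import numpy as np

def det_flexible_pivot(A, pivot_rule=None):
    """Determinant by repeated pivot elimination: each step removes the
    pivot row and column, reducing the order by one."""
    A = np.array(A, dtype=float)
    det = 1.0
    while A.size:
        if pivot_rule is None:
            r, c = np.unravel_index(np.argmax(np.abs(A)), A.shape)
        else:
            r, c = pivot_rule(A)        # caller-chosen pivot position
        p = A[r, c]
        if p == 0.0:
            return 0.0                  # singular
        det *= p * (-1) ** (r + c)      # sign from moving pivot to the front
        rows = [i for i in range(A.shape[0]) if i != r]
        cols = [j for j in range(A.shape[1]) if j != c]
        # Schur-complement (rank-1) update on the remaining submatrix.
        A = (A[np.ix_(rows, cols)]
             - np.outer(A[rows, c], A[r, cols]) / p)
    return det

M = [[2.0, 1, 3], [4, 5, 6], [7, 8, 10]]
assert np.isclose(det_flexible_pivot(M), np.linalg.det(M))
```

Because the pivot position is arbitrary, a student can pick whichever entry avoids fractions (e.g., a ±1), which is the pedagogical point the abstract makes.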
Schreurs, Charlotte A; Algra, Annemijn M; Man, Sum-Che; Cannegieter, Suzanne C; van der Wall, Ernst E; Schalij, Martin J; Kors, Jan A; Swenne, Cees A
2010-01-01
The spatial QRS-T angle (SA), a predictor of sudden cardiac death, is a vectorcardiographic variable. Gold standard vectorcardiograms (VCGs) are recorded by using the Frank electrode positions. However, with the commonly available 12-lead ECG, VCGs must be synthesized by matrix multiplication (inverse Dower matrix/Kors matrix). Alternatively, Rautaharju proposed a method to calculate SA directly from the 12-lead ECG. Neither spatial angles computed by using the inverse Dower matrix (SA-D) nor by using the Kors matrix (SA-K) or by using Rautaharju's method (SA-R) have been validated with regard to the spatial angles as directly measured in the Frank VCG (SA-F). Our present study aimed to perform this essential validation. We analyzed SAs in 1220 simultaneously recorded 12-lead ECGs and VCGs, in all data, in SA-F-based tertiles, and after stratification according to pathology or sex. Linear regression of SA-K, SA-D, and SA-R on SA-F yielded offsets of 0.01 degree, 20.3 degrees, and 28.3 degrees and slopes of 0.96, 0.86, and 0.79, respectively. The bias of SA-K with respect to SA-F (mean +/- SD, -3.2 degrees +/- 13.9 degrees) was significantly (P < .001) smaller than the bias of both SA-D and SA-R with respect to SA-F (8.0 degrees +/- 18.6 degrees and 9.8 degrees +/- 24.6 degrees, respectively); tertile analysis showed a much more homogeneous behavior of the bias in SA-K than of both the bias in SA-D and in SA-R. In pathologic ECGs, there was no significant bias in SA-K; bias in men and women did not differ. SA-K resembled SA-F best. In general, when there is no specific reason either to synthesize VCGs with the inverse Dower matrix or to calculate the spatial QRS-T angle with Rautaharju's method, it seems prudent to use the Kors matrix. Copyright 2010 Elsevier Inc. All rights reserved.
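Once a VCG is available (recorded directly or synthesized from the 12-lead ECG by a matrix such as Kors'), the spatial angle itself is just the angle between the mean QRS and T vectors; the numbers below are made up for illustration:

```python
import numpy as np

def spatial_angle(qrs_xyz, t_xyz):
    """Angle (degrees) between the mean QRS and T spatial vectors."""
    cosang = np.dot(qrs_xyz, t_xyz) / (
        np.linalg.norm(qrs_xyz) * np.linalg.norm(t_xyz))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

qrs = np.array([0.8, -0.3, 0.2])   # hypothetical mean QRS vector (mV)
t = np.array([0.3, 0.1, 0.1])      # hypothetical mean T vector (mV)
sa = spatial_angle(qrs, t)
assert 0.0 <= sa <= 180.0          # SA is defined on [0, 180] degrees
```

The validation question in the study is therefore entirely about how faithfully the synthesis matrix reproduces the Frank-lead X, Y, Z signals, since the angle computation itself is identical in every method.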
Refractive index inversion based on Mueller matrix method
NASA Astrophysics Data System (ADS)
Fan, Huaxi; Wu, Wenyuan; Huang, Yanhua; Li, Zhaozhao
2016-03-01
Starting from the Stokes and Jones vector formalisms, the relation between the Mueller matrix elements and the refractive index was studied and simplified, and an expression for refractive index inversion was derived via the Mueller matrix approach. The Mueller matrix elements at different incidence angles were simulated from the specular-reflection expression in order to analyze the influence of the incidence angle and the refractive index, and the analysis was verified by measuring the Mueller matrix elements of a polished metal surface. The research shows that, under specular reflection, the result of the Mueller matrix inversion is consistent with experiment and can serve as a refractive index inversion method, providing a new approach for target detection and recognition.
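For a smooth dielectric (sign conventions vary across texts, and the angle and index below are made up), the normalized Mueller element m12/m11 of specular reflection reduces to (Rs − Rp)/(Rs + Rp), which depends only on the incidence angle and the refractive index, so the index can be recovered by 1-D root finding:

```python
import numpy as np
from scipy.optimize import brentq

def m12_over_m11(n, theta_i):
    """Normalized Mueller element of specular reflection off a dielectric,
    from the Fresnel intensity coefficients Rs and Rp."""
    ci = np.cos(theta_i)
    ct = np.sqrt(1.0 - (np.sin(theta_i) / n) ** 2)   # Snell's law
    rs = (ci - n * ct) / (ci + n * ct)
    rp = (n * ci - ct) / (n * ci + ct)
    Rs, Rp = rs ** 2, rp ** 2
    return (Rs - Rp) / (Rs + Rp)

theta_i = np.radians(20.0)            # kept well below Brewster's angle
n_true = 1.52                         # e.g. glass
measured = m12_over_m11(n_true, theta_i)

n_hat = brentq(lambda n: m12_over_m11(n, theta_i) - measured, 1.05, 3.0)
assert abs(n_hat - n_true) < 1e-8
```

Keeping the incidence angle away from the Brewster range matters: near Brewster incidence Rp vanishes, the ratio saturates at 1, and the inversion becomes ambiguous.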
A trade-off solution between model resolution and covariance in surface-wave inversion
Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.
2010-01-01
Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
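The trade-off is easy to see through the SVD filter factors of a damped least-squares solution m = (AᵀA + αI)⁻¹Aᵀd (toy matrix and damping values below are made up):

```python
import numpy as np

# Toy ill-posed operator: random matrix with geometrically decaying
# column scales, so its singular values span many orders of magnitude.
rng = np.random.default_rng(7)
A = rng.standard_normal((30, 20)) @ np.diag(np.logspace(0, -6, 20))
_, s, _ = np.linalg.svd(A, full_matrices=False)   # singular value "plot"

def resolution_and_variance(alpha):
    f = s ** 2 / (s ** 2 + alpha)    # SVD filter factors of the solution
    res = np.sum(f)                  # trace of the model resolution matrix
    var = np.sum(f ** 2 / s ** 2)    # trace of the unit covariance matrix
    return res, var

res_small, var_small = resolution_and_variance(1e-12)  # weak damping
res_big, var_big = resolution_and_variance(1e-2)       # strong damping
assert res_small > res_big    # weaker damping resolves more parameters ...
assert var_small > var_big    # ... at the price of much larger variance
```

Plotting `s` from large to small and choosing α near the first singular value that approaches zero, as the abstract prescribes, picks a point on this resolution/variance curve rather than letting either quantity alone dictate the assessment.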
Dynamic data integration and stochastic inversion of a confined aquifer
NASA Astrophysics Data System (ADS)
Wang, D.; Zhang, Y.; Irsa, J.; Huang, H.; Wang, L.
2013-12-01
Much work has been done in developing and applying inverse methods to aquifer modeling. The scope of this paper is to investigate the applicability of a new direct method for large inversion problems and to incorporate uncertainty measures in the inversion outcomes (Wang et al., 2013). The problem considered is a two-dimensional inverse model (50×50 grid) of steady-state flow for a heterogeneous ground truth model (500×500 grid) with two hydrofacies. From the ground truth model, a decreasing number of wells (12, 6, 3) was sampled for facies types, based on which experimental indicator histograms and directional variograms were computed. These parameters and models were used by Sequential Indicator Simulation to generate 100 realizations of hydrofacies patterns in a 100×100 (geostatistical) grid, which were conditioned to the facies measurements at wells. These realizations were smoothed with Simulated Annealing and coarsened to the 50×50 inverse grid before they were conditioned with the direct method to the dynamic data, i.e., observed heads and groundwater fluxes at the same sampled wells. A set of realizations of estimated hydraulic conductivities (Ks), flow fields, and boundary conditions was created, which centered on the 'true' solutions from solving the ground truth model. Both hydrofacies conductivities were computed with an estimation accuracy of ±10% (12 wells), ±20% (6 wells), and ±35% (3 wells) of the true values. For boundary condition estimation, the accuracy was within ±15% (12 wells), ±30% (6 wells), and ±50% (3 wells) of the true values. The inversion system of equations was solved with LSQR (Paige and Saunders, 1982), for which a coordinate transform and a matrix scaling preprocessor were used to improve the condition number (CN) of the coefficient matrix. However, when the inverse grid was refined to 100×100, Gaussian Noise Perturbation was used to limit the growth of the CN before the matrix solve.
To scale the inverse problem up (i.e., without smoothing and coarsening and therefore reducing the associated estimation uncertainty), a parallel LSQR solver was written and verified. For the 50×50 grid, the parallel solver sped up the serial solution time by 14× using 4 CPUs (research on parallel performance and scaling is ongoing). A sensitivity analysis was conducted to examine the relation between the observed data and the inversion outcomes, where measurement errors of increasing magnitudes (i.e., ±1, 2, 5, 10% of the total head variation and up to ±2% of the total flux variation) were imposed on the observed data. Inversion results were stable, but the accuracy of the K and boundary estimation degraded with increasing errors, as expected. In particular, the quality of the observed heads is critical to hydraulic head recovery, while the quality of the observed fluxes plays a dominant role in K estimation. References: Wang, D., Y. Zhang, J. Irsa, H. Huang, and L. Wang (2013), Data integration and stochastic inversion of a confined aquifer with high performance computing, Advances in Water Resources, in preparation. Paige, C. C., and M. A. Saunders (1982), LSQR: an algorithm for sparse linear equations and sparse least squares, ACM Transactions on Mathematical Software, 8(1), 43-71.
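The role of the scaling preprocessor ahead of LSQR can be sketched generically (random badly scaled system, not the aquifer matrices; unit-column scaling is one simple choice):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Badly scaled least-squares system: column norms span 8 orders of
# magnitude, so the raw condition number is enormous.
rng = np.random.default_rng(8)
G = rng.standard_normal((400, 100)) * np.logspace(0, -8, 100)
m_true = rng.standard_normal(100)
d = G @ m_true

# Column scaling: solve (G S) y = d with S chosen so G S has unit
# columns, then recover m = S y.  This drives the CN toward O(1).
scale = 1.0 / np.linalg.norm(G, axis=0)
y = lsqr(G * scale, d, atol=1e-14, btol=1e-14, iter_lim=50000)[0]
m_rec = scale * y                       # undo the scaling

assert np.linalg.norm(G @ m_rec - d) < 1e-8 * np.linalg.norm(d)
```

Because LSQR's convergence rate degrades with the condition number, the few extra flops spent on scaling typically repay themselves many times over in saved iterations.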
NASA Astrophysics Data System (ADS)
Müller, Silvia; Brockmann, Jan Martin; Schuh, Wolf-Dieter
2015-04-01
The ocean's dynamic topography as the difference between the sea surface and the geoid reflects many characteristics of the general ocean circulation. Consequently, it provides valuable information for evaluating or tuning ocean circulation models. The sea surface is directly observed by satellite radar altimetry while the geoid cannot be observed directly. The satellite-based gravity field determination requires different measurement principles (satellite-to-satellite tracking (e.g. GRACE), satellite-gravity-gradiometry (GOCE)). In addition, hydrographic measurements (salinity, temperature and pressure; near-surface velocities) provide information on the dynamic topography. The observation types have different representations and spatial as well as temporal resolutions. Therefore, the determination of the dynamic topography is not straightforward. Furthermore, the integration of the dynamic topography into ocean circulation models requires not only the dynamic topography itself but also its inverse covariance matrix on the ocean model grid. We developed a rigorous combination method in which the dynamic topography is parameterized in space as well as in time. The altimetric sea surface heights are expressed as a sum of geoid heights represented in terms of spherical harmonics and the dynamic topography parameterized by a finite element method which can be directly related to the particular ocean model grid. Besides the difficult task of combining altimetry data with a gravity field model, a major aspect is the consistent combination of satellite data and in-situ observations. The particular characteristics and the signal content of the different observations must be adequately considered requiring the introduction of auxiliary parameters. Within our model the individual observation groups are combined in terms of normal equations considering their full covariance information; i.e. 
a rigorous variance/covariance propagation from the original measurements to the final product is accomplished. In conclusion, the developed integrated approach allows for estimating the dynamic topography and its inverse covariance matrix on arbitrary grids in space and time. The inverse covariance matrix contains the appropriate weights for model-data misfits in least-squares ocean model inversions. The focus of this study is on the North Atlantic Ocean. We will present the conceptual design and dynamic topography estimates based on time variable data from seven satellite altimeter missions (Jason-1, Jason-2, Topex/Poseidon, Envisat, ERS-2, GFO, Cryosat2) in combination with the latest GOCE gravity field model and in-situ data from the Argo floats and near-surface drifting buoys.
Convergence of Chahine's nonlinear relaxation inversion method used for limb viewing remote sensing
NASA Technical Reports Server (NTRS)
Chu, W. P.
1985-01-01
The application of Chahine's (1970) inversion technique to remote sensing problems utilizing the limb viewing geometry is discussed. The problem considered here involves occultation-type measurements and limb radiance-type measurements from either spacecraft or balloon platforms. The kernel matrix of the inversion problem is either an upper or lower triangular matrix. It is demonstrated that the Chahine inversion technique always converges, provided the diagonal elements of the kernel matrix are nonzero.
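The relaxation scheme described above can be sketched in a few lines. The following Python toy is not code from the paper; the 3x3 kernel and all names are invented for illustration. It iterates Chahine's multiplicative update on a lower-triangular kernel with nonzero diagonal and converges to the true solution:

```python
import numpy as np

def chahine(K, y, x0, n_iter=200):
    """Chahine's relaxation: rescale each solution element by the
    ratio of the observed to the currently computed measurement."""
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        x *= y / (K @ x)
    return x

# Toy occultation-type problem: lower-triangular kernel, nonzero diagonal.
K = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.5, 1.0, 2.0]])
x_true = np.array([1.0, 2.0, 0.5])
y = K @ x_true                      # noise-free synthetic measurements
x = chahine(K, y, np.ones(3))       # converges element by element
```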
Iterative image-domain decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, Tianye; Dong, Xue; Petrongolo, Michael
2014-04-15
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom.
The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing the edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
Deconvolution using a neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehman, S.K.
1990-11-15
Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural-network (backpropagation) matrix inverse with LMS and pseudo-inverse solutions. This is largely an exercise in understanding how our neural network code works. 1 ref.
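The matrix framing of 1-D deconvolution is easy to reproduce. A minimal sketch follows; the kernel, sizes, and the use of a pseudo-inverse in place of the report's neural network are assumptions for illustration:

```python
import numpy as np

# Full 1-D convolution with kernel h written as a tall banded matrix H.
h = np.array([1.0, 0.6, 0.2])           # hypothetical blur kernel
n = 32
H = np.zeros((n + len(h) - 1, n))
for j in range(n):
    H[j:j + len(h), j] = h              # column j is a shifted copy of h

rng = np.random.default_rng(1)
x = rng.standard_normal(n)              # unknown signal
y = H @ x                               # observed (noise-free) convolution

x_pinv = np.linalg.pinv(H) @ y          # pseudo-inverse deconvolution
```

Since h[0] is nonzero, H has full column rank, so the noise-free signal is recovered exactly.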
Computer programs for the solution of systems of linear algebraic equations
NASA Technical Reports Server (NTRS)
Sequi, W. T.
1973-01-01
FORTRAN subprograms for the solution of systems of linear algebraic equations are described, listed, and evaluated in this report. Procedures considered are direct solution, iteration, and matrix inversion. Both in-core methods and those which utilize auxiliary data storage devices are considered. Some of the subroutines evaluated require the entire coefficient matrix to be in core, whereas others account for banding or sparseness of the system. General recommendations relative to equation solving are made, and on the basis of tests, specific subprograms are recommended.
[Orthogonal Vector Projection Algorithm for Spectral Unmixing].
Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li
2015-12-01
Spectral unmixing is an important part of hyperspectral technology and is essential for quantitative material analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication and matrix inversion or determinant computation, which are difficult to program and especially hard to realize in hardware; their computational cost also grows significantly as the number of endmembers increases. Here, building on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed based on the orthogonality principle, simplifying the process by avoiding matrix multiplication and inversion. It first computes, via the Gram-Schmidt process, a final orthogonal vector for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature: the unconstrained abundance is obtained directly by projecting the signature onto them and taking the ratio of the projected length to the orthogonal vector's length. Unlike the Orthogonal Subspace Projection and Least Squares Error algorithms, the method needs no matrix inversion, which is computationally costly and hard to implement in hardware; it completes the orthogonalization through repeated vector operations, making it well suited to both parallel computation and hardware implementation. The soundness of the algorithm is shown through its relationship to the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity is compared with theirs and shown to be the lowest of the three. Finally, experimental results on synthetic and real images provide further evidence of the method's effectiveness.
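The projection step reduces to a handful of vector operations. A minimal Python sketch follows; NumPy's QR stands in for the Gram-Schmidt orthogonalization for brevity, and the endmember matrix and abundances are synthetic:

```python
import numpy as np

def ovp_abundances(E, x):
    """Unconstrained abundances by orthogonal vector projection.
    E: (bands, endmembers) matrix; x: (bands,) pixel signature."""
    n = E.shape[1]
    a = np.empty(n)
    for k in range(n):
        # Component of endmember k orthogonal to all other endmembers.
        others = np.delete(E, k, axis=1)
        Q, _ = np.linalg.qr(others)
        p = E[:, k] - Q @ (Q.T @ E[:, k])
        # Ratio of the projected length to the orthogonal vector's length.
        a[k] = (p @ x) / (p @ E[:, k])
    return a

rng = np.random.default_rng(0)
E = rng.random((20, 4))                  # four synthetic endmember spectra
a_true = np.array([0.4, 0.3, 0.2, 0.1])
x = E @ a_true                           # noise-free mixed pixel
a = ovp_abundances(E, x)                 # recovers a_true
```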
A matrix-inversion method for gamma-source mapping from gamma-count data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adsley, Ian; Burgess, Claire; Bull, Richard K
In a previous paper it was proposed that a simple matrix inversion method could be used to extract source distributions from gamma-count maps, using simple models to calculate the response matrix. The method was tested using numerically generated count maps. In the present work a 100 kBq Co-60 source has been placed on a gridded surface and the count rate measured using a NaI scintillation detector. The resulting map of gamma counts was used as input to the matrix inversion procedure and the source position recovered. A multi-source array was simulated by superposition of several single-source count maps and the source distribution was again recovered using matrix inversion. The measurements were performed for several detector heights. The effects of uncertainties in source-detector distances on the matrix inversion method are also examined. The results from this work give confidence in the method's use in practical applications, such as the segregation of highly active objects amongst fuel-element debris. (authors)
Suhr, Anna Catharina; Vogeser, Michael; Grimm, Stefanie H
2016-05-30
For reliable quantitative analysis of endogenous analytes in complex biological samples by isotope dilution LC-MS/MS, the creation of appropriate calibrators is a challenge, since analyte-free authentic material is in general not available. Thus, surrogate matrices are often used to prepare calibrators and controls. However, currently employed validation protocols do not include specific experiments to verify the suitability of a surrogate matrix calibration for quantification of authentic matrix samples. The aim of the study was the development of a novel validation experiment to test whether surrogate matrix based calibrators enable correct quantification of authentic matrix samples. The key element of the novel validation experiment is the inversion of nonlabelled analytes and their stable isotope labelled (SIL) counterparts with respect to their functions, i.e. the SIL compound is the analyte and the nonlabelled substance is employed as internal standard. As a consequence, both surrogate and authentic matrix are analyte-free with regard to the SIL analytes, which allows a comparison of both matrices. We called this approach the Isotope Inversion Experiment. As a figure of merit we defined the accuracy of inverse quality controls in authentic matrix quantified by means of a surrogate matrix calibration curve. As a proof-of-concept application, a LC-MS/MS assay addressing six corticosteroids (cortisol, cortisone, corticosterone, 11-deoxycortisol, 11-deoxycorticosterone, and 17-OH-progesterone) was chosen. The integration of the Isotope Inversion Experiment in the validation protocol for the steroid assay was successfully realized. The accuracy results of the inverse quality controls were highly satisfactory. As a consequence, the suitability of a surrogate matrix calibration for quantification of the targeted steroids in human serum as authentic matrix could be successfully demonstrated.
The Isotope Inversion Experiment fills a gap in the validation process for LC-MS/MS assays quantifying endogenous analytes. We consider it a valuable and convenient tool to evaluate the correct quantification of authentic matrix samples based on a calibration curve in surrogate matrix.
Polymer sol-gel composite inverse opal structures.
Zhang, Xiaoran; Blanchard, G J
2015-03-25
We report on the formation of composite inverse opal structures where the matrix used to form the inverse opal contains both silica, formed using sol-gel chemistry, and poly(ethylene glycol), PEG. We find that the morphology of the inverse opal structure depends on both the amount of PEG incorporated into the matrix and its molecular weight. The extent of organization in the inverse opal structure, which is characterized by scanning electron microscopy and optical reflectance data, is mediated by the chemical bonding interactions between the silica and PEG constituents in the hybrid matrix. Both polymer chain terminus Si-O-C bonding and hydrogen bonding between the polymer backbone oxygens and silanol functionalities can contribute, with the polymer mediating the extent to which Si-O-Si bonds can form within the silica regions of the matrix due to hydrogen-bonding interactions.
Approximation of reliabilities for multiple-trait model with maternal effects.
Strabel, T; Misztal, I; Bertrand, J K
2001-04-01
Reliabilities for a multiple-trait maternal model were obtained by combining reliabilities obtained from single-trait models. Single-trait reliabilities were obtained using an approximation that supported models with additive and permanent environmental effects. For the direct effect, the maternal and permanent environmental variances were assigned to the residual. For the maternal effect, variance of the direct effect was assigned to the residual. Data included 10,550 birth weight, 11,819 weaning weight, and 3,617 postweaning gain records of Senepol cattle. Reliabilities were obtained by generalized inversion and by using single-trait and multiple-trait approximation methods. Some reliabilities obtained by inversion were negative because inbreeding was ignored in calculating the inverse of the relationship matrix. The multiple-trait approximation method reduced the bias of approximation when compared with the single-trait method. The correlations between reliabilities obtained by inversion and by multiple-trait procedures for the direct effect were 0.85 for birth weight, 0.94 for weaning weight, and 0.96 for postweaning gain. Correlations for maternal effects for birth weight and weaning weight were 0.96 to 0.98 for both approximations. Further improvements can be achieved by refining the single-trait procedures.
Fast polar decomposition of an arbitrary matrix
NASA Technical Reports Server (NTRS)
Higham, Nicholas J.; Schreiber, Robert S.
1988-01-01
The polar decomposition of an m x n matrix A of full rank, where m is greater than or equal to n, can be computed using a quadratically convergent algorithm. The algorithm is based on a Newton iteration involving a matrix inverse. With the use of a preliminary complete orthogonal decomposition the algorithm can be extended to arbitrary A. How to use the algorithm to compute the positive semi-definite square root of a Hermitian positive semi-definite matrix is described. A hybrid algorithm which adaptively switches from the matrix-inversion-based iteration to a matrix-multiplication-based iteration due to Kovarik, and to Bjorck and Bowie, is formulated. The decision when to switch is made using a condition estimator. This matrix-multiplication-rich algorithm is shown to be more efficient on machines for which matrix multiplication can be executed 1.5 times faster than matrix inversion.
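The basic Newton iteration is compact. A hedged sketch for the square nonsingular real case only; the paper's extension to rectangular A via a complete orthogonal decomposition and the hybrid switching scheme are not reproduced:

```python
import numpy as np

def polar_newton(A, tol=1e-12, max_iter=100):
    """Polar decomposition A = U H of a square nonsingular real A via the
    quadratically convergent Newton iteration X <- (X + X^{-T}) / 2."""
    X = A.copy()
    for _ in range(max_iter):
        X_new = 0.5 * (X + np.linalg.inv(X).T)
        if np.linalg.norm(X_new - X, 'fro') <= tol * np.linalg.norm(X, 'fro'):
            return X_new, X_new.T @ A
        X = X_new
    return X, X.T @ A                 # U, and H = U^T A (symmetric PSD)

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
U, H = polar_newton(A)
```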
Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.
Park, Jongin; Wi, Seok-Min; Lee, Jin S
2016-02-01
Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer which minimizes the power of the beamformed output while maintaining the response from the direction of interest constant. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L^3) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix to be a scalar matrix σI and we subsequently obtain the apodization weights and the beamformed output without computing the matrix inverse. To do that, a QR decomposition algorithm is used, which can be executed at low cost, and therefore the computational complexity is reduced to O(L^2). In addition, our approach is mathematically equivalent to the conventional MV beamformer, thereby showing equivalent performance. The simulation and experimental results support the validity of our approach.
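The inverse-free weight computation can be illustrated generically with a QR factorization of the snapshot matrix. This is a sketch of the general idea only, not the authors' exact σI transformation; the data and steering vector are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
L, N = 8, 64                          # subarray size, number of snapshots
a = np.ones(L)                        # steering vector (broadside, toy)
X = rng.standard_normal((L, N))       # synthetic snapshot matrix

# MV weights w = R^{-1} a / (a^T R^{-1} a) with R = X X^T / N.
# From the QR factorization X^T = Q T we get R = T^T T / N, so R^{-1} a
# follows from two triangular solves, with no explicit covariance inverse.
T = np.linalg.qr(X.T, mode='r')       # upper-triangular factor
u = np.linalg.solve(T, np.linalg.solve(T.T, a))
w_qr = u / (a @ u)                    # the 1/N factor cancels on normalizing

# Reference computation with the explicit covariance matrix.
Rinv_a = np.linalg.solve(X @ X.T / N, a)
w_direct = Rinv_a / (a @ Rinv_a)
```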
RAWS II: A MULTIPLE REGRESSION ANALYSIS PROGRAM,
This memorandum gives instructions for the use and operation of a revised version of RAWS, a multiple regression analysis program. The program...of preprocessed data, the directed retention of variables, listing of the matrix of the normal equations and its inverse, and the bypassing of the regression analysis to provide the input variable statistics only. (Author)
Assembly of large-area, highly ordered, crack-free inverse opal films
Hatton, Benjamin; Mishchenko, Lidiya; Davis, Stan; Sandhage, Kenneth H.; Aizenberg, Joanna
2010-01-01
Whereas considerable interest exists in self-assembly of well-ordered, porous “inverse opal” structures for optical, electronic, and (bio)chemical applications, uncontrolled defect formation has limited the scale-up and practicality of such approaches. Here we demonstrate a new method for assembling highly ordered, crack-free inverse opal films over a centimeter scale. Multilayered composite colloidal crystal films have been generated via evaporative deposition of polymeric colloidal spheres suspended within a hydrolyzed silicate sol-gel precursor solution. The coassembly of a sacrificial colloidal template with a matrix material avoids the need for liquid infiltration into the preassembled colloidal crystal and minimizes the associated cracking and inhomogeneities of the resulting inverse opal films. We discuss the underlying mechanisms that may account for the formation of large-area defect-free films, their unique preferential growth along the 〈110〉 direction and unusual fracture behavior. We demonstrate that this coassembly approach allows the fabrication of hierarchical structures not achievable by conventional methods, such as multilayered films and deposition onto patterned or curved surfaces. These robust SiO2 inverse opals can be transformed into various materials that retain the morphology and order of the original films, as exemplified by the reactive conversion into Si or TiO2 replicas. We show that colloidal coassembly is available for a range of organometallic sol-gel and polymer matrix precursors, and represents a simple, low-cost, scalable method for generating high-quality, chemically tailorable inverse opal films for a variety of applications. PMID:20484675
Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data.
Doneva, Mariya; Amthor, Thomas; Koken, Peter; Sommer, Karsten; Börnert, Peter
2017-09-01
An iterative reconstruction method for undersampled magnetic resonance fingerprinting data is presented. The method performs the reconstruction entirely in k-space and is related to low rank matrix completion methods. A low dimensional data subspace is estimated from a small number of k-space locations fully sampled in the temporal direction and used to reconstruct the missing k-space samples before MRF dictionary matching. Performing the iterations in k-space eliminates the need for applying a forward and an inverse Fourier transform in each iteration required in previously proposed iterative reconstruction methods for undersampled MRF data. A projection onto the low dimensional data subspace is performed as a matrix multiplication instead of a singular value thresholding typically used in low rank matrix completion, further reducing the computational complexity of the reconstruction. The method is theoretically described and validated in phantom and in-vivo experiments. The quality of the parameter maps can be significantly improved compared to direct matching on undersampled data.
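The subspace idea can be demonstrated on synthetic data. The toy below is non-iterative (per-row least squares in the estimated subspace) and only illustrates why a low-dimensional temporal subspace estimated from a few fully sampled k-space locations suffices; all sizes and the sampling fraction are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T, r, n = 50, 3, 200                       # time points, rank, k-space rows
V = np.linalg.qr(rng.standard_normal((T, r)))[0]   # true temporal subspace
D = rng.standard_normal((n, r)) @ V.T              # rank-r "MRF" data

# Estimate the subspace from a few rows fully sampled in time.
_, _, Vt = np.linalg.svd(D[:10], full_matrices=False)
V_est = Vt[:r].T

# Undersample the data and recover each row within the subspace.
mask = rng.random((n, T)) < 0.6
Y = D * mask
X = np.zeros_like(D)
for i in range(n):
    s = mask[i]
    coeff, *_ = np.linalg.lstsq(V_est[s], Y[i, s], rcond=None)
    X[i] = V_est @ coeff               # exact recovery in the noise-free case
```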
Hessian Schatten-norm regularization for linear inverse problems.
Lefkimmiatis, Stamatios; Ward, John Paul; Unser, Michael
2013-05-01
We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto lq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data.
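The vector-to-matrix projection link can be made concrete for the Schatten-1 (nuclear) norm: project the singular values onto an l1 ball and reconstruct. A hedged sketch; the general Schatten-q case and the primal-dual solver are not reproduced:

```python
import numpy as np

def project_l1(v, r):
    """Euclidean projection of a nonnegative vector v onto the l1 ball
    of radius r (standard sort-and-threshold algorithm)."""
    if v.sum() <= r:
        return v
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(u) + 1) > css - r)[0][-1]
    return np.maximum(v - (css[k] - r) / (k + 1), 0.0)

def project_schatten1(M, r):
    """Projection onto the Schatten-1 (nuclear) norm ball of radius r:
    SVD, project the singular values onto the l1 ball, reconstruct."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * project_l1(s, r)) @ Vt

rng = np.random.default_rng(4)
M = rng.standard_normal((2, 2))        # e.g. a per-pixel Hessian estimate
P = project_schatten1(M, 1.0)          # nuclear norm of P is at most 1
```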
Efficient 3D inversions using the Richards equation
NASA Astrophysics Data System (ADS)
Cockett, Rowan; Heagy, Lindsey J.; Haber, Eldad
2018-07-01
Fluid flow in the vadose zone is governed by the Richards equation; it is parameterized by hydraulic conductivity, which is a nonlinear function of pressure head. Investigations in the vadose zone typically require characterizing distributed hydraulic properties. Water content or pressure head data may include direct measurements made from boreholes. Increasingly, proxy measurements from hydrogeophysics are being used to supply more spatially and temporally dense data sets. Inferring hydraulic parameters from such datasets requires the ability to efficiently solve and optimize the nonlinear time domain Richards equation. This is particularly important as the number of parameters to be estimated in a vadose zone inversion continues to grow. In this paper, we describe an efficient technique to invert for distributed hydraulic properties in 1D, 2D, and 3D. Our technique does not store the Jacobian matrix, but rather computes its product with a vector. Existing literature for the Richards equation inversion explicitly calculates the sensitivity matrix using finite difference or automatic differentiation, however, for large scale problems these methods are constrained by computation and/or memory. Using an implicit sensitivity algorithm enables large scale inversion problems for any distributed hydraulic parameters in the Richards equation to become tractable on modest computational resources. We provide an open source implementation of our technique based on the SimPEG framework, and show it in practice for a 3D inversion of saturated hydraulic conductivity using water content data through time.
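The matrix-free idea is easy to demonstrate on a toy forward map. The sketch below uses a complex-step directional derivative in place of the paper's implicit sensitivity algorithm for the Richards equation, purely to show that the product J v never requires forming or storing J; the function and evaluation point are invented:

```python
import numpy as np

def f(m):
    """Toy nonlinear forward map standing in for a PDE simulation."""
    return np.array([m[0] * m[1], np.sin(m[0]) + m[1] ** 2, m[0] ** 3])

def jacvec(f, m, v, eps=1e-30):
    """J(m) v by the complex-step method: accurate to machine precision,
    one forward evaluation, no Jacobian storage."""
    return np.imag(f(m + 1j * eps * v)) / eps

m = np.array([0.7, 1.3])
v = np.array([1.0, -2.0])
Jv = jacvec(f, m, v)

# Dense Jacobian of f at m, for comparison only (what large-scale
# problems cannot afford to form).
J = np.array([[m[1], m[0]],
              [np.cos(m[0]), 2 * m[1]],
              [3 * m[0] ** 2, 0.0]])
```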
A robust method of computing finite difference coefficients based on Vandermonde matrix
NASA Astrophysics Data System (ADS)
Zhang, Yijie; Gao, Jinghuai; Peng, Jigen; Han, Weimin
2018-05-01
When the finite difference (FD) method is employed to simulate wave propagation, a high-order FD method is preferred in order to achieve better accuracy. However, if the order of the FD scheme is high enough, the coefficient matrix of the formula for calculating the finite difference coefficients is close to singular. In this case, when the FD coefficients are computed with MATLAB's matrix inverse operator, inaccuracies can result. To overcome this problem, we suggest an algorithm based on the Vandermonde matrix. After a suitable mathematical transformation, the coefficient matrix becomes a Vandermonde matrix, and the FD coefficients of the high-order FD method can then be computed by a dedicated Vandermonde algorithm, which avoids inverting the near-singular matrix. Dispersion analysis and numerical results for a homogeneous elastic model and a geophysical model of an oil and gas reservoir demonstrate that the Vandermonde-based algorithm has better accuracy than MATLAB's matrix inverse operator.
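The underlying Vandermonde system is standard. A minimal sketch; np.linalg.solve is used here for brevity, whereas the paper's point is to replace generic inversion with a specialized Vandermonde solver that remains robust at high orders:

```python
import numpy as np
from math import factorial

def fd_coefficients(offsets, deriv):
    """Finite-difference weights w for the deriv-th derivative from samples
    at grid offsets x_j (in units of the spacing h), obtained from the
    Vandermonde system sum_j w_j x_j^k = k! * delta(k, deriv)."""
    x = np.asarray(offsets, dtype=float)
    n = len(x)
    V = np.vander(x, n, increasing=True).T    # V[k, j] = x_j ** k
    b = np.zeros(n)
    b[deriv] = factorial(deriv)
    return np.linalg.solve(V, b)              # scale by 1 / h**deriv to use

w = fd_coefficients([-1, 0, 1], 2)            # central 2nd-derivative stencil
```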
Computing the Moore-Penrose Inverse of a Matrix with a Computer Algebra System
ERIC Educational Resources Information Center
Schmidt, Karsten
2008-01-01
In this paper "Derive" functions are provided for the computation of the Moore-Penrose inverse of a matrix, as well as for solving systems of linear equations by means of the Moore-Penrose inverse. Making it possible to compute the Moore-Penrose inverse easily with one of the most commonly used Computer Algebra Systems--and to have the blueprint…
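In the spirit of that blueprint, the same computation is straightforward in any environment with an SVD. A Python equivalent (the function name and rank-1 example are illustrative):

```python
import numpy as np

def moore_penrose(A, tol=1e-12):
    """Moore-Penrose inverse via the SVD: transpose the factors and
    invert only the nonzero singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > tol * s.max(), 1.0 / s, 0.0)
    return (Vt.T * s_inv) @ U.T

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])              # rank 1, so no ordinary inverse exists
P = moore_penrose(A)                    # satisfies the four Penrose conditions
```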
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cuenca, Jacques, E-mail: jcuenca@kth.se; Van der Kelen, Christophe; Göransson, Peter
2014-02-28
This paper proposes an inverse estimation method for the characterisation of the elastic and anelastic properties of the frame of anisotropic open-cell foams used for sound absorption. A model of viscoelasticity based on a fractional differential constitutive equation is used, leading to an augmented Hooke's law in the frequency domain, where the elastic and anelastic phenomena appear as distinctive terms in the stiffness matrix. The parameters of the model are nine orthotropic elastic moduli, three angles of orientation of the material principal directions and three parameters governing the anelastic frequency dependence. The inverse estimation consists in numerically fitting the model on a set of transfer functions extracted from a sample of material. The setup uses a seismic-mass measurement repeated in the three directions of space and is placed in a vacuum chamber in order to remove the air from the pores of the sample. The method allows to reconstruct the full frequency-dependent complex stiffness matrix of the frame of an anisotropic open-cell foam and in particular it provides the frequency of maximum energy dissipation by viscoelastic effects. The characterisation of a melamine foam sample is performed and the relation between the fractional-derivative model and other types of parameterisations of the augmented Hooke's law is discussed.
Pseudoinverse Decoding Process in Delay-Encoded Synthetic Transmit Aperture Imaging.
Gong, Ping; Kolios, Michael C; Xu, Yuan
2016-09-01
Recently, we proposed a new method to improve the signal-to-noise ratio of the prebeamformed radio-frequency data in synthetic transmit aperture (STA) imaging: the delay-encoded STA (DE-STA) imaging. In the decoding process of DE-STA, the equivalent STA data were obtained by directly inverting the coding matrix. This is usually regarded as an ill-posed problem, especially under high noise levels. Pseudoinverse (PI) is usually used instead for seeking a more stable inversion process. In this paper, we apply singular value decomposition to the coding matrix to conduct the PI. Our numerical studies demonstrate that the singular values of the coding matrix have a special distribution, i.e., all the values are the same except for the first and last ones. We compare the PI in two cases: complete PI (CPI), where all the singular values are kept, and truncated PI (TPI), where the last and smallest singular value is ignored. The PI (both CPI and TPI) DE-STA processes are tested against noise with both numerical simulations and experiments. The CPI and TPI can restore the signals stably, and the noise mainly affects the prebeamformed signals corresponding to the first transmit channel. The difference in the overall enveloped beamformed image qualities between the CPI and TPI is negligible. Thus, it demonstrates that DE-STA is a relatively stable encoding and decoding technique. Also, according to the special distribution of the singular values of the coding matrix, we propose a new efficient decoding formula that is based on the conjugate transpose of the coding matrix. We also compare the computational complexity of the direct inverse and the new formula.
Xia, J.; Miller, R.D.; Xu, Y.
2008-01-01
Inversion of multimode surface-wave data is of increasing interest in the near-surface geophysics community. For a given near-surface geophysical problem, it is essential to understand how well the data, calculated according to a layered-earth model, might match the observed data. A data-resolution matrix is a function of the data kernel (determined by a geophysical model and a priori information applied to the problem), not the data. A data-resolution matrix of high-frequency (>2 Hz) Rayleigh-wave phase velocities, therefore, offers a quantitative tool for designing field surveys and predicting the match between calculated and observed data. We employed a data-resolution matrix to select data that would be well predicted and found that there are advantages to incorporating higher modes in inversion. The resulting discussion using the data-resolution matrix provides insight into the process of inverting Rayleigh-wave phase velocities with higher-mode data to estimate S-wave velocity structure. The discussion also suggested that each near-surface geophysical target can only be resolved using Rayleigh-wave phase velocities within specific frequency ranges, and that higher-mode data are normally more accurately predicted than fundamental-mode data because of restrictions on the data kernel for the inversion system. We used synthetic and real-world examples to demonstrate that data selected with the data-resolution matrix can provide better inversion results, and to explain with the data-resolution matrix why incorporating higher-mode data in inversion can provide better results. We also calculated model-resolution matrices in these examples to show the potential of increasing model resolution with selected surface-wave data. © Birkhäuser 2008.
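Both resolution matrices are one line each once a generalized inverse is available. A toy sketch, with a random kernel standing in for the Rayleigh-wave data kernel:

```python
import numpy as np

rng = np.random.default_rng(5)
G = rng.standard_normal((12, 6))        # toy data kernel, overdetermined
G_g = np.linalg.pinv(G)                 # generalized inverse

N = G @ G_g                             # data-resolution matrix
R = G_g @ G                             # model-resolution matrix

# diag(N) ranks how well each datum is predicted by the inversion system;
# data with small diagonal entries are poorly explained by the model.
importance = np.diag(N)
```

For a full-column-rank kernel, R is the identity and N is a symmetric projector whose trace equals the number of model parameters.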
An ambiguity of information content and error in an ill-posed satellite inversion
NASA Astrophysics Data System (ADS)
Koner, Prabhat
According to Rodgers (2000, stochastic approach), the averaging kernel (AK) is the representative matrix for understanding the information content in a stochastic inversion. In the deterministic approach, the corresponding quantity is referred to as the model resolution matrix (MRM; Menke, 1989). Analysis of the AK/MRM can only give some understanding of how much regularization is imposed on the inverse problem. The trace of the AK/MRM matrix is the so-called degrees of freedom from signal (DFS; stochastic) or degrees of freedom in retrieval (DFR; deterministic). There is no physical/mathematical explanation in the literature of why the trace of this matrix is a valid way to calculate this quantity. We will present an ambiguity between information and error using a real-life problem of SST retrieval from GOES-13. The stochastic information-content calculation is based on a linear assumption. The validity of such mathematics in satellite inversion will be questioned, because the underlying problem involves nonlinear radiative transfer and ill-conditioned inverse problems. References: Menke, W., 1989: Geophysical Data Analysis: Discrete Inverse Theory. San Diego: Academic Press. Rodgers, C. D., 2000: Inverse Methods for Atmospheric Sounding: Theory and Practice. Singapore: World Scientific.
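A minimal sketch of the quantity under discussion, for a made-up linear retrieval: the averaging kernel A = (KᵀS⁻¹K + R)⁻¹KᵀS⁻¹K and its trace, the DFS. The Jacobian, noise covariance, and regularization below are illustrative assumptions, not the GOES-13 problem.

```python
import numpy as np

# Sketch with made-up numbers: for a linear retrieval with Jacobian K, inverse
# noise covariance S_inv, and regularization R, the averaging kernel is
# A = (K^T S^-1 K + R)^-1 K^T S^-1 K, and the DFS/DFR questioned above is
# simply trace(A).
rng = np.random.default_rng(2)
K = rng.standard_normal((5, 3))   # hypothetical Jacobian: 5 channels, 3 parameters
S_inv = np.eye(5)                 # unit noise covariance (assumed)
R = 0.1 * np.eye(3)               # Tikhonov-style regularization (assumed)

A = np.linalg.solve(K.T @ S_inv @ K + R, K.T @ S_inv @ K)
dfs = np.trace(A)                 # "degrees of freedom from signal"
```

Each eigenvalue of A lies in (0, 1), so the trace is bounded by the number of retrieved parameters; the abstract's point is that this bound alone says little about retrieval error.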
Arikan and Alamouti matrices based on fast block-wise inverse Jacket transform
NASA Astrophysics Data System (ADS)
Lee, Moon Ho; Khan, Md Hashem Ali; Kim, Kyeong Jin
2013-12-01
Recently, Lee and Hou (IEEE Signal Process Lett 13:461-464, 2006) proposed one-dimensional and two-dimensional fast algorithms for block-wise inverse Jacket transforms (BIJTs). Their BIJTs are not true inverse Jacket transforms from a mathematical point of view, because their inverses do not satisfy the usual condition, i.e., the product of a matrix with its inverse matrix is not equal to the identity matrix. Therefore, we mathematically propose a fast block-wise inverse Jacket transform of orders N = 2^k, 3^k, 5^k, and 6^k, where k is a positive integer. Based on the Kronecker product of the successive lower-order Jacket matrices and the basis matrix, fast algorithms for realizing these transforms are obtained. Due to the simple inverses and fast algorithms of the Arikan polar binary and Alamouti multiple-input multiple-output (MIMO) non-binary matrices obtained from BIJTs, they can be applied in areas such as permutation matrix design for the 3GPP physical layer for ultra mobile broadband, first-order q-ary Reed-Muller code design, diagonal channel design, diagonal subchannel decomposition for interference alignment, and Alamouti precoding design for 4G MIMO long-term evolution.
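The Kronecker-product construction above rests on the identity (A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹, which is why a transform built from small factors can be inverted block-wise. A minimal numeric check (illustrative factors, not the paper's actual Jacket matrices):

```python
import numpy as np

# Sketch of the structural fact behind fast block-wise inverse transforms:
# the inverse of a Kronecker product is the Kronecker product of the inverses,
# so a large transform assembled from small factors inverts factor by factor.
A = np.array([[1.0, 1.0], [1.0, -1.0]])   # 2x2 Hadamard-type factor (example)
B = np.array([[1.0, 2.0], [0.0, 1.0]])    # arbitrary small invertible factor

T = np.kron(A, B)                         # 4x4 composite transform
T_inv_fast = np.kron(np.linalg.inv(A), np.linalg.inv(B))

assert np.allclose(T_inv_fast @ T, np.eye(4))
```

Inverting the two 2 × 2 factors is far cheaper than inverting the assembled 2^k-order matrix directly, which is the source of the claimed speedups.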
Spatial operator factorization and inversion of the manipulator mass matrix
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo; Kreutz-Delgado, Kenneth
1992-01-01
This paper advances two linear operator factorizations of the manipulator mass matrix. Embedded in the factorizations are many of the techniques regarded as very efficient computational solutions to inverse and forward dynamics problems. The operator factorizations provide a high-level architectural understanding of the mass matrix and its inverse that is not visible in the detailed algorithms. They also lead to a new approach to the development of computer programs that organize complexity in robot dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pablant, N. A.; Bell, R. E.; Bitter, M.
2014-11-15
Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line-integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at the Large Helical Device. The XICS diagnostic records a 1D image of line-integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature, and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux-surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allows unique solutions to be found reliably. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition-avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.
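The role of constraints can be sketched on a toy chord-integration problem (the operator and profile below are assumptions, not the XICS geometry): a positivity bound on the sought profile is enforced with a bounded least-squares solver.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy sketch: a line-integrated signal d = L @ e is inverted for a local
# "emissivity" profile e with a positivity constraint, illustrating how a
# parameter-range constraint keeps an inversion physically relevant.
rng = np.random.default_rng(3)
n = 10
L = np.tril(np.ones((n, n)))                  # crude chord-integration operator (assumed)
e_true = np.exp(-np.linspace(0.0, 2.0, n)**2) # peaked synthetic profile
d = L @ e_true + 0.01 * rng.standard_normal(n)

res = least_squares(lambda e: L @ e - d,      # data misfit
                    x0=np.ones(n),
                    bounds=(0.0, np.inf))     # positivity constraint on e
e_hat = res.x
```

Without the bounds, noise can drive small profile values negative; the constraint removes those unphysical solutions, which is the spirit of the constrained scheme described above.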
A Fine-Grained Pipelined Implementation for Large-Scale Matrix Inversion on FPGA
NASA Astrophysics Data System (ADS)
Zhou, Jie; Dou, Yong; Zhao, Jianxun; Xia, Fei; Lei, Yuanwu; Tang, Yuxing
Large-scale matrix inversion plays an important role in many applications. However, to the best of our knowledge, there is no FPGA-based implementation. In this paper, we explore the possibility of accelerating large-scale matrix inversion on FPGA. To exploit the computational potential of the FPGA, we introduce a fine-grained parallel algorithm for matrix inversion. A scalable linear array of processing elements (PEs), which is the core component of the FPGA accelerator, is proposed to implement this algorithm. A total of 12 PEs can be integrated into an Altera StratixII EP2S130F1020C5 FPGA on our self-designed board. Experimental results show that a speedup factor of 2.6 and a maximum power-performance ratio of 41 can be achieved compared to a Pentium dual-core CPU with double SSE threads.
NASA Astrophysics Data System (ADS)
Pan, Xinpeng; Zhang, Guangzhi; Yin, Xingyao
2018-01-01
Seismic amplitude variation with offset and azimuth (AVOaz) inversion is well known as a popular and pragmatic tool for estimating fracture parameters. A single set of vertical fractures aligned along a preferred horizontal direction embedded in a horizontally layered medium can be considered an effective long-wavelength orthorhombic medium. Estimation of Thomsen's weak-anisotropy (WA) parameters and fracture weaknesses plays an important role in characterizing the orthorhombic anisotropy in a weakly anisotropic medium. Our goal is to demonstrate an orthorhombic anisotropic AVOaz inversion approach that describes the orthorhombic anisotropy using observable wide-azimuth seismic reflection data in a fractured reservoir under the assumption of orthorhombic symmetry. Combining Thomsen's WA theory and the linear-slip model, we first derive a perturbation in the stiffness matrix of a weakly anisotropic medium with orthorhombic symmetry under the assumption of small WA parameters and fracture weaknesses. Using the perturbation matrix and the scattering function, we then derive an expression for the linearized PP-wave reflection coefficient in terms of P- and S-wave moduli, density, Thomsen's WA parameters, and fracture weaknesses in such an orthorhombic medium, which avoids the complicated nonlinear relationship between the orthorhombic anisotropy and azimuthal seismic reflection data. Incorporating azimuthal seismic data and Bayesian inversion theory, the maximum a posteriori solutions for Thomsen's WA parameters and fracture weaknesses in a weakly anisotropic medium with orthorhombic symmetry are estimated under the constraints of a Cauchy a priori probability distribution and smooth initial models of the model parameters, using a nonlinear iteratively reweighted least-squares strategy to enhance the inversion resolution. Synthetic examples containing moderate noise demonstrate the feasibility of the derived orthorhombic anisotropic AVOaz inversion method, and real data illustrate the inversion stability of orthorhombic anisotropy in a fractured reservoir.
The incomplete inverse and its applications to the linear least squares problem
NASA Technical Reports Server (NTRS)
Morduch, G. E.
1977-01-01
A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It is proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least-squares solution. An answer is provided to the problem that occurs when the data residuals are too large and there are insufficient data to justify augmenting the model.
Transurethral Ultrasound Diffraction Tomography
2007-03-01
... the covariance matrix was derived. The covariance reduces to that of X-ray CT under the assumptions of a linear operator and real data. [5] ... the covariance matrix in linear X-ray computed tomography is a special case of the inverse scattering matrix derived in this paper. The matrix was ... is derived in Sec. IV, and its relation to that of linear X-ray computed tomography appears in Sec. V. In Sec. VI, the inverse scattering ...
M-matrices with prescribed elementary divisors
NASA Astrophysics Data System (ADS)
Soto, Ricardo L.; Díaz, Roberto C.; Salas, Mario; Rojo, Oscar
2017-09-01
A real matrix A is said to be an M-matrix if it is of the form A = αI − B, where B is a nonnegative matrix with Perron eigenvalue ρ(B), and α ≥ ρ(B). This paper provides sufficient conditions for the existence and construction of an M-matrix A with prescribed elementary divisors, which are the characteristic polynomials of the Jordan blocks of the Jordan canonical form of A. This inverse problem on M-matrices has not been treated until now. We solve the inverse elementary divisors problem for diagonalizable M-matrices and the symmetric generalized doubly stochastic inverse M-matrix problem for lists of real numbers and for lists of complex numbers of the form Λ = {λ₁, a ± bi, …, a ± bi}. The constructive nature of our results allows for the computation of a solution matrix. The paper also discusses an application of M-matrices to a capacity problem in wireless communications.
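A small numeric sketch of the definition above (example values): A = αI − B with B ≥ 0 and α > ρ(B) is an M-matrix, and such matrices have eigenvalues with positive real part and an entrywise-nonnegative inverse.

```python
import numpy as np

# Sketch of the M-matrix definition: B is nonnegative, alpha exceeds the
# Perron eigenvalue rho(B), and A = alpha*I - B.  The Neumann series
# A^-1 = sum_k B^k / alpha^(k+1) then shows the inverse is nonnegative.
B = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])            # nonnegative matrix (example values)
rho = max(abs(np.linalg.eigvals(B)))       # Perron eigenvalue rho(B)
alpha = rho + 0.5                          # alpha > rho(B)

A = alpha * np.eye(3) - B                  # an M-matrix by construction
assert np.all(np.linalg.inv(A) >= -1e-12)  # inverse-nonnegativity property
```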
NASA Astrophysics Data System (ADS)
Liu, Qimao
2018-02-01
This paper proposes the assumption that the fibre is an elastic material and the polymer matrix is a viscoelastic material, so that energy dissipation in the dynamic response process depends only on the polymer matrix. The damping force vectors in the frequency and time domains of FRP (fibre-reinforced polymer matrix) laminated composite plates are derived based on this assumption. The governing equations of FRP laminated composite plates are formulated in both the frequency and time domains. The direct inversion method and the direct time integration method for nonviscously damped systems are employed to solve the governing equations and obtain the dynamic responses in the frequency and time domains, respectively. The computational procedure is given in detail. Finally, dynamic responses (frequency responses with nonzero and zero initial conditions, free vibration, and forced vibrations with nonzero and zero initial conditions) of an FRP laminated composite plate are computed using the proposed methodology. The proposed methodology can easily be incorporated into commercial finite-element analysis software. The proposed assumption, based on the theory of material mechanics, needs to be verified experimentally in the future.
2.5D complex resistivity modeling and inversion using unstructured grids
NASA Astrophysics Data System (ADS)
Xu, Kaijun; Sun, Jie
2016-04-01
The complex resistivity characteristics of rocks and ores have long been recognized. The Cole-Cole model (CCM) is generally used to describe complex resistivity. It has been proved that the electrical anomaly of a geologic body can be quantitatively estimated from CCM parameters such as direct resistivity (ρ0), chargeability (m), time constant (τ), and frequency dependence (c). It is therefore very important to obtain the complex parameters of a geologic body. Because it is difficult to approximate complex structures and terrain using a traditional rectangular grid, we use an adaptive finite-element algorithm on unstructured grids for forward modeling of frequency-domain 2.5D complex resistivity and implement the conjugate gradient algorithm for the 2.5D complex resistivity inversion, in order to enhance the numerical accuracy and rationality of modeling and inversion. An adaptive finite-element method is applied to solve the 2.5D complex resistivity forward modeling of a horizontal electric dipole source. First, the CCM is introduced into Maxwell's equations to calculate the complex resistivity electromagnetic fields. Next, a pseudo-delta function is used to distribute the electric dipole source. The electromagnetic fields can then be expressed in terms of the primary fields caused by the layered structure and the secondary fields caused by anomalous conductivity inhomogeneities. Finally, we calculated the electromagnetic field response of complex geoelectric structures such as an anticline, a syncline, and a fault. The modeling results show that adaptive finite-element methods can automatically improve mesh generation and simulate complex geoelectric models using unstructured grids. The 2.5D complex resistivity inversion is implemented based on the conjugate gradient algorithm, which does not need to form the sensitivity matrix explicitly but directly computes the product of the sensitivity matrix, or its transpose, with a vector. In addition, the inversion target zones are discretized with fine grids and the background zones with coarse grids, which reduces the number of grid cells in the inversion and is very helpful for improving computational efficiency. The inversion results verify the validity and stability of the conjugate gradient inversion algorithm. The theoretical calculations indicate that modeling and inversion of 2.5D complex resistivity using unstructured grids are feasible. Unstructured grids improve the accuracy of modeling, but inversion with a large number of grid cells is extremely time-consuming, so parallel computation of the inversion is necessary. Acknowledgments: We thank the National Natural Science Foundation of China (41304094) for support.
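The matrix-free conjugate-gradient idea described above can be sketched generically: CG on the normal equations JᵀJm = Jᵀd only ever needs the products Jv and Jᵀv, never J stored as a dense sensitivity matrix. J below is a random stand-in, not a 2.5D complex-resistivity Jacobian.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Sketch of matrix-free CG: the normal-equations operator J^T J is exposed
# only through matrix-vector products, so the sensitivity matrix never needs
# to be formed or stored explicitly.
rng = np.random.default_rng(4)
J = rng.standard_normal((50, 20))   # stand-in "sensitivity" action
d = rng.standard_normal(50)         # stand-in data vector

JtJ = LinearOperator((20, 20), matvec=lambda v: J.T @ (J @ v))
m, info = cg(JtJ, J.T @ d)          # conjugate-gradient solve of J^T J m = J^T d
assert info == 0                    # 0 means CG converged
```

In a real inversion the two lambdas would be replaced by forward and adjoint modeling runs, which is exactly what makes the approach memory-efficient on fine unstructured grids.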
Vibrio cholerae VpsT Regulates Matrix Production and Motility by Directly Sensing Cyclic di-GMP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krasteva, P.; Fong, J; Shikuma, N
2010-01-01
Microorganisms can switch from a planktonic, free-swimming life-style to a sessile, colonial state, called a biofilm, which confers resistance to environmental stress. Conversion between the motile and biofilm life-styles has been attributed to increased levels of the prokaryotic second messenger cyclic di-guanosine monophosphate (c-di-GMP), yet the signaling mechanisms mediating such a global switch are poorly understood. Here we show that the transcriptional regulator VpsT from Vibrio cholerae directly senses c-di-GMP to inversely control extracellular matrix production and motility, which identifies VpsT as a master regulator for biofilm formation. Rather than being regulated by phosphorylation, VpsT undergoes a change in oligomerization on c-di-GMP binding.
Limited-memory fast gradient descent method for graph regularized nonnegative matrix factorization.
Guan, Naiyang; Wei, Lei; Luo, Zhigang; Tao, Dacheng
2013-01-01
Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X ∈ ℝ^(m×n) into the product of two lower-rank nonnegative factor matrices, i.e., W ∈ ℝ^(m×r) and H ∈ ℝ^(r×n) (r < min{m,n}), and aims to preserve the local geometric structure of the dataset by minimizing the squared Euclidean distance or Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from slow convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-sizes fast gradient descent (MFGD) method has been proposed for optimizing NMF which accelerates MUR by searching for the optimal step size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because 1) the high-dimensional Hessian matrix is dense and costs too much memory; and 2) the Hessian inverse operator and its multiplication with the gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the multiplication of the inverse Hessian and the gradient for searching the optimal step size in MFGD. The preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance for optimizing KL-divergence-based GNMF on two popular face image datasets, ORL and PIE, and two text corpora, Reuters and TDT2. The experimental results confirm the effectiveness of L-FGD by comparing it with the representative GNMF solvers.
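For reference, a minimal sketch of the MUR baseline that L-FGD accelerates, for the squared-Euclidean NMF objective ‖X − WH‖²_F (the graph-regularization term is omitted for brevity; sizes are arbitrary examples):

```python
import numpy as np

# Sketch of the multiplicative update rule (MUR) for plain NMF: each update is
# a rescaled negative-gradient step that keeps W and H nonnegative and never
# increases the Frobenius-norm objective.
rng = np.random.default_rng(5)
m, n, r = 20, 30, 4
X = rng.random((m, n))
W, H = rng.random((m, r)), rng.random((r, n))

def frob_err(X, W, H):
    return np.linalg.norm(X - W @ H)

err0 = frob_err(X, W, H)
for _ in range(50):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-12)   # elementwise rescaling of H
    W *= (X @ H.T) / (W @ H @ H.T + 1e-12)   # elementwise rescaling of W

assert frob_err(X, W, H) < err0              # objective decreased
```

The fixed rescaled step is exactly the "non-optimal step size" the abstract criticizes; MFGD and L-FGD replace it with a line search along the same direction.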
NASA Astrophysics Data System (ADS)
Prinari, Barbara; Demontis, Francesco; Li, Sitai; Horikis, Theodoros P.
2018-04-01
The inverse scattering transform (IST) with non-zero boundary conditions at infinity is developed for an m × m matrix nonlinear Schrödinger-type equation which, in the case m = 2, has been proposed as a model to describe hyperfine spin F = 1 spinor Bose-Einstein condensates with either repulsive interatomic interactions and anti-ferromagnetic spin-exchange interactions (self-defocusing case), or attractive interatomic interactions and ferromagnetic spin-exchange interactions (self-focusing case). The IST for this system was first presented by Ieda et al. (2007), using a different approach. In our formulation, both the direct and the inverse problems are posed in terms of a suitable uniformization variable which allows one to develop the IST on the standard complex plane, instead of a two-sheeted Riemann surface or the cut plane with discontinuities along the cuts. Analyticity of the scattering eigenfunctions and scattering data, symmetries, properties of the discrete spectrum, and asymptotics are derived. The inverse problem is posed as a Riemann-Hilbert problem for the eigenfunctions, and the reconstruction formula for the potential in terms of eigenfunctions and scattering data is provided. In addition, the general behavior of the soliton solutions is analyzed in detail in the 2 × 2 self-focusing case, including some special solutions not previously discussed in the literature.
MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion
NASA Astrophysics Data System (ADS)
Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong
This paper investigates the simulation of a gradient-based recurrent neural network for online solution of the matrix-inversion problem. Several important techniques are employed to simulate such a neural system. 1) The Kronecker product of matrices is introduced to transform the matrix differential equation (MDE) into a vector differential equation (VDE), so that a standard ordinary differential equation (ODE) is finally obtained. 2) The MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and efficacy of the gradient-based neural network for online constant matrix inversion.
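A hedged sketch of steps 1) and 2) above, with a linear activation function: the matrix ODE dX/dt = −γAᵀ(AX − I) is vectorized via the Kronecker identity vec(AᵀAX) = (I ⊗ AᵀA)vec(X) and handed to an ode45-style solver (SciPy's RK45 here); A is an arbitrary small example matrix.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Gradient-based neural network with linear activation: the state X(t) of
# dX/dt = -gamma * A.T @ (A @ X - I) converges to A^-1.  Column-stacked
# vectorization turns the MDE into a standard vector ODE.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
n, gamma = A.shape[0], 50.0
M = np.kron(np.eye(n), A.T @ A)             # (I kron A^T A): Kronecker-transformed system
b = (A.T @ np.eye(n)).flatten(order="F")    # vec(A^T), column-stacked

rhs = lambda t, x: -gamma * (M @ x - b)     # vectorized gradient dynamics
sol = solve_ivp(rhs, (0.0, 2.0), np.zeros(n * n), method="RK45")
X_final = sol.y[:, -1].reshape(n, n, order="F")
```

At the equilibrium Mx = b, the reshaped state satisfies AᵀAX = Aᵀ, i.e., X = A⁻¹, which is the online-inversion property the abstract analyzes.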
An efficient numerical technique for calculating thermal spreading resistance
NASA Technical Reports Server (NTRS)
Gale, E. H., Jr.
1977-01-01
An efficient numerical technique for solving the equations resulting from finite-difference analyses of fields governed by Poisson's equation is presented. The method is direct (noniterative) and the computer work required varies with the square of the order of the coefficient matrix. The computational work required varies with the cube of this order for standard inversion techniques, e.g., Gaussian elimination, Jordan, Doolittle, etc.
Improved characterisation of measurement errors in electrical resistivity tomography (ERT) surveys
NASA Astrophysics Data System (ADS)
Tso, C. H. M.; Binley, A. M.; Kuras, O.; Graham, J.
2016-12-01
Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe a statistical model of data errors before inversion. Wrongly prescribed error levels can lead to over- or under-fitting of data, yet commonly used models of measurement error are relatively simplistic. With heightened interest in uncertainty estimation across hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide more reliable estimates of uncertainty. We have analysed two time-lapse electrical resistivity tomography (ERT) datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24-h timeframe, while the other is a year-long cross-borehole survey at a UK nuclear site with over 50,000 daily measurements. Our study included the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and covariance analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used. This agrees with speculation in previous literature that ERT errors could be somewhat correlated. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model fits the observed measurement errors better and yields superior inversion and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the numbers of the four electrodes used to make each measurement. The new model can be readily applied to the diagonal data-weighting matrix commonly used in classical inversion methods, as well as to the data covariance matrix in the Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
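The baseline linear error model mentioned above can be sketched with synthetic numbers (the coefficients below are assumptions, not the surveys' values): reciprocal-error magnitudes are regressed against transfer resistance, and the fit feeds a diagonal data-weighting matrix.

```python
import numpy as np

# Sketch of a linear ERT error model |e| ~ a + b*|R|: synthetic reciprocal
# errors are fitted against transfer resistance, and the fitted envelope
# defines per-measurement weights.  The proposed model in the abstract would
# additionally group measurements by electrode numbers before fitting.
rng = np.random.default_rng(6)
R = np.abs(rng.normal(50.0, 20.0, 200))          # transfer resistances (synthetic)
e = 0.1 + 0.02 * R + rng.normal(0.0, 0.05, 200)  # synthetic error magnitudes

b, a = np.polyfit(R, e, 1)                       # slope b, intercept a
weights = 1.0 / (a + b * R) ** 2                 # entries of a diagonal weighting matrix
```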
Using Fisher Information Criteria for Chemical Sensor Selection via Convex Optimization Methods
2016-11-16
... determinant of the inverse Fisher information matrix, which is proportional to the global error volume. If a practitioner has a suitable ... design of statistical estimators (i.e., sensors), as their respective inverses act as lower bounds to the (co)variances of the subject estimator, a property ...
Neutron Multiplicity: LANL W Covariance Matrix for Curve Fitting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
2016-12-08
In neutron multiplicity counting one may fit a curve by minimizing an objective function, χ²_n. The objective function includes the inverse of an n × n matrix of covariances, W. The inverse of the W matrix has a closed-form solution. In addition, W⁻¹ is a tridiagonal matrix. The closed form and tridiagonal nature allow for a simpler expression of the objective function χ²_n. Minimization of this simpler expression will provide the optimal parameters for the fitted curve.
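A generic sketch of why a tridiagonal W⁻¹ simplifies the objective (the entries below are illustrative, not LANL's covariances): the quadratic form rᵀW⁻¹r collapses to an O(n) sum over the diagonal and off-diagonal.

```python
import numpy as np

# Sketch: for tridiagonal T = W^-1 with main diagonal d and first off-diagonal
# o, the objective r^T T r needs no matrix algebra at all.
def chi2_tridiagonal(r, d, o):
    """Quadratic form r^T T r for symmetric tridiagonal T (diagonal d, off-diagonal o)."""
    return float(np.sum(d * r * r) + 2.0 * np.sum(o * r[:-1] * r[1:]))

rng = np.random.default_rng(7)
n = 6
d = 2.0 * np.ones(n)          # example tridiagonal inverse-covariance entries
o = -1.0 * np.ones(n - 1)
r = rng.standard_normal(n)    # residual vector (fitted curve minus data)

# Dense reference for the same quadratic form:
T = np.diag(d) + np.diag(o, 1) + np.diag(o, -1)
assert np.isclose(chi2_tridiagonal(r, d, o), r @ T @ r)
```

This O(n) evaluation is what makes the "simpler expression of the objective function" cheap to minimize repeatedly during curve fitting.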
Computationally efficient modeling and simulation of large scale systems
NASA Technical Reports Server (NTRS)
Jain, Jitesh (Inventor); Cauley, Stephen F. (Inventor); Li, Hong (Inventor); Koh, Cheng-Kok (Inventor); Balakrishnan, Venkataramanan (Inventor)
2010-01-01
A method of simulating operation of a VLSI interconnect structure having capacitive and inductive coupling between nodes thereof. A matrix X and a matrix Y containing different combinations of passive circuit element values for the interconnect structure are obtained, where the element values for each matrix include inductance L and inverse capacitance P. An adjacency matrix A associated with the interconnect structure is obtained. Numerical integration is used to solve first and second equations, each including as a factor the product of the inverse matrix X⁻¹ and at least one other matrix, with the first equation including X⁻¹Y, X⁻¹A, and X⁻¹P, and the second equation including X⁻¹A and X⁻¹P.
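The repeated products X⁻¹Y, X⁻¹A, and X⁻¹P suggest the standard pattern sketched below (random stand-in matrices, not the patent's circuit values): factor X once and reuse the factorization, never forming X⁻¹ explicitly.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Sketch: one LU factorization of X serves every product X^-1 @ M needed by
# the integration loop, via back-substitution instead of explicit inversion.
rng = np.random.default_rng(8)
n = 5
X = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned stand-in
Y, A, P = (rng.standard_normal((n, n)) for _ in range(3))

lu = lu_factor(X)                                  # factor once
XiY, XiA, XiP = (lu_solve(lu, M) for M in (Y, A, P))  # reuse for each product

assert np.allclose(X @ XiY, Y)
```

Reusing the factorization amortizes the O(n³) cost across all the X⁻¹-times-matrix terms in both equations, which matters at VLSI problem sizes.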
Visco-elastic controlled-source full waveform inversion without surface waves
NASA Astrophysics Data System (ADS)
Paschke, Marco; Krause, Martin; Bleibinhaus, Florian
2016-04-01
We developed a frequency-domain visco-elastic full waveform inversion for onshore seismic experiments with topography. The forward modeling is based on a finite-difference time-domain algorithm by Robertsson that uses the image-method to ensure a stress-free condition at the surface. The time-domain data is Fourier-transformed at every point in the model space during the forward modeling for a given set of frequencies. The motivation for this approach is the reduced amount of memory when computing kernels, and the straightforward implementation of the multiscale approach. For the inversion, we calculate the Frechet derivative matrix explicitly, and we implement a Levenberg-Marquardt scheme that allows for computing the resolution matrix. To reduce the size of the Frechet derivative matrix, and to stabilize the inversion, an adapted inverse mesh is used. The node spacing is controlled by the velocity distribution and the chosen frequencies. To focus the inversion on body waves (P, P-coda, and S) we mute the surface waves from the data. Consistent spatiotemporal weighting factors are applied to the wavefields during the Fourier transform to obtain the corresponding kernels. We test our code with a synthetic study using the Marmousi model with arbitrary topography. This study also demonstrates the importance of topography and muting surface waves in controlled-source full waveform inversion.
Luan, Xiaoli; Chen, Qiang; Liu, Fei
2014-09-01
This article presents a new scheme to design a full-matrix controller for high-dimensional multivariable processes based on an equivalent transfer function (ETF). Differing from existing ETF methods, the proposed ETF is derived directly by exploiting the relationship between the equivalent closed-loop transfer function and the inverse of the open-loop transfer function. Based on the obtained ETF, the full-matrix controller is designed utilizing existing PI tuning rules. The new ETF model can more accurately represent the original processes. Furthermore, the full-matrix centralized controller design method proposed in this paper is applicable to high-dimensional multivariable systems with satisfactory performance. Comparison with other multivariable controllers shows that the designed ETF-based controller is superior with respect to design complexity and obtained performance. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
3-D Inversion of the MT EarthScope Data, Collected Over the East Central United States
NASA Astrophysics Data System (ADS)
Gribenko, A. V.; Zhdanov, M. S.
2017-12-01
The magnetotelluric (MT) data collected as a part of the EarthScope project provided a unique opportunity to study the conductivity structure of the deep interior of the North American continent. Besides the scientific value of the recovered subsurface models, the data also allowed inversion practitioners to test the robustness of their algorithms applied to regional long-period data. In this paper, we present the results of MT inversion of a subset of the second footprint of the MT data collection covering the East Central United States. Our inversion algorithm implements simultaneous inversion of the full MT impedance data both for the 3-D conductivity distribution and for the distortion matrix. The distortion matrix provides the means to account for the effect of the near-surface geoelectrical inhomogeneities on the MT data. The long-period data do not have the resolution for the small near-surface conductivity anomalies, which makes an application of the distortion matrix especially appropriate. The determined conductivity model of the region agrees well with the known geologic and tectonic features of the East Central United States. The conductivity anomalies recovered by our inversion indicate a possible presence of the hot spot track in the area.
Matrix Sturm-Liouville equation with a Bessel-type singularity on a finite interval
NASA Astrophysics Data System (ADS)
Bondarenko, Natalia
2017-03-01
The matrix Sturm-Liouville equation on a finite interval with a Bessel-type singularity in the end of the interval is studied. Special fundamental systems of solutions for this equation are constructed: analytic Bessel-type solutions with the prescribed behavior at the singular point and Birkhoff-type solutions with the known asymptotics for large values of the spectral parameter. The asymptotic formulas for Stokes multipliers, connecting these two fundamental systems of solutions, are derived. We also set boundary conditions and obtain asymptotic formulas for the spectral data (the eigenvalues and the weight matrices) of the boundary value problem. Our results will be useful in the theory of direct and inverse spectral problems.
NASA Astrophysics Data System (ADS)
Justino, Júlia
2017-06-01
Matrices with coefficients having uncertainties of type o(·) or O(·), called flexible matrices, are studied from the point of view of nonstandard analysis. Uncertainties of this kind are given in the form of so-called neutrices, for instance the set of all infinitesimals. Since flexible matrices have uncertainties in their coefficients, it is not possible to define the identity matrix in a unique way, and so the notion of spectral identity matrix arises. Not all nonsingular flexible matrices can be turned into a spectral identity matrix using the Gauss-Jordan elimination method, implying that not all nonsingular flexible matrices have an inverse matrix. Under certain conditions on the size of the uncertainties appearing in a nonsingular flexible matrix, a general theorem concerning the boundaries of its minors is presented, which guarantees the existence of the inverse of a nonsingular flexible matrix.
NASA Astrophysics Data System (ADS)
Mao, Deqing; Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu
2018-01-01
Doppler beam sharpening (DBS) is a critical technology for airborne radar ground mapping in the forward-squint region. In conventional DBS technology, the narrow-band Doppler filter groups formed by the fast Fourier transform (FFT) method suffer from low spectral resolution and high side lobe levels. The iterative adaptive approach (IAA), based on weighted least squares (WLS), is applied to DBS imaging, forming narrower Doppler filter groups than the FFT with lower side lobe levels. Regrettably, the IAA is iterative and requires matrix multiplication and inversion when forming the covariance matrix and its inverse, and when traversing the WLS estimate for each sampling point, resulting in notably high, cubic-time computational complexity. We propose a fast IAA (FIAA)-based super-resolution DBS imaging method that takes advantage of the rich matrix structures of classical narrow-band filtering. First, we formulate the covariance matrix via the FFT instead of the conventional matrix multiplication, based on the typical Fourier structure of the steering matrix. Then, by exploiting the Gohberg-Semencul representation, the inverse of the Toeplitz covariance matrix is computed by the celebrated Levinson-Durbin (LD) and Toeplitz-vector algorithms. Finally, the FFT and the fast Toeplitz-vector algorithm are further used to traverse the WLS estimates based on data-dependent trigonometric polynomials. The method uses the Hermitian structure of the echo autocorrelation matrix R to achieve a fast solution and its Toeplitz structure to realize a fast inversion. The proposed method enjoys lower computational complexity without performance loss compared with the conventional IAA-based super-resolution DBS imaging method. Results based on simulations and measured data verify the imaging performance and the operational efficiency.
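The structural trick, solving a Hermitian Toeplitz covariance system with a Levinson-type recursion instead of a dense inverse, can be illustrated with scipy, whose `solve_toeplitz` implements the Levinson-Durbin recursion (illustrative sizes, not the paper's DBS pipeline):

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

rng = np.random.default_rng(1)
n = 256

# First column of a well-conditioned symmetric Toeplitz covariance matrix
# (large diagonal guarantees nonsingularity).
c = np.r_[float(n), rng.standard_normal(n - 1)]
b = rng.standard_normal(n)

# O(n^2) Levinson-Durbin-based solve exploiting the Toeplitz structure ...
x_fast = solve_toeplitz((c, c), b)        # symmetric: row == column
# ... versus the O(n^3) dense solve on the explicitly formed matrix.
x_dense = np.linalg.solve(toeplitz(c), b)

assert np.allclose(x_fast, x_dense)
```

The Gohberg-Semencul representation used in the paper goes further, expressing the inverse itself through the Levinson recursion outputs so that subsequent products can also use fast (FFT-based) algorithms.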
Recurrent Neural Network for Computing the Drazin Inverse.
Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin
2015-11-01
This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. This recurrent neural network (RNN) is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, the computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for an implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for the Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN as well as its convergence toward the Drazin inverse are considered. In addition, illustrative examples and examples of application to the practical engineering problems are discussed to show the efficacy of the proposed neural network.
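For small matrices, the Drazin inverse that such a network computes can be cross-checked against a closed-form route through the Moore-Penrose pseudoinverse (a minimal numpy sketch, unrelated to the paper's RNN architecture):

```python
import numpy as np

def drazin_inverse(A, k):
    # Known identity: A^D = A^k (A^(2k+1))^+ A^k for any k >= index(A),
    # where ^+ denotes the Moore-Penrose pseudoinverse.
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

# Example with index 2: an invertible block plus a nilpotent block.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
k = 2
AD = drazin_inverse(A, k)

# The defining properties of the Drazin inverse:
assert np.allclose(AD @ A @ AD, AD)             # A^D A A^D = A^D
assert np.allclose(A @ AD, AD @ A)              # A^D commutes with A
assert np.allclose(np.linalg.matrix_power(A, k + 1) @ AD,
                   np.linalg.matrix_power(A, k))  # A^(k+1) A^D = A^k
```

This route needs the index k (or an upper bound on it) in advance, whereas the paper's RNN computes the Drazin inverse dynamically in real time.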
NASA Technical Reports Server (NTRS)
Williams, Robert L., II
1992-01-01
The forward position and velocity kinematics for the redundant eight-degree-of-freedom Advanced Research Manipulator 2 (ARM2) are presented. Inverse position and velocity kinematic solutions are also presented. The approach in this paper is to specify two of the unknowns and solve for the remaining six unknowns. Two unknowns can be specified with two restrictions. First, the elbow joint angle and rate cannot be specified because they are known from the end-effector position and velocity. Second, one unknown must be specified from the four-jointed wrist, and the second from joints that translate the wrist, elbow joint excluded. There are eight solutions to the inverse position problem. The inverse velocity solution is unique, assuming the Jacobian matrix is not singular. A discussion of singularities is based on specifying two joint rates and analyzing the reduced Jacobian matrix. When this matrix is singular, the generalized inverse may be used as an alternate solution. Computer simulations were developed to verify the equations. Examples demonstrate agreement between forward and inverse solutions.
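The pseudoinverse fallback mentioned for singular configurations can be sketched for a hypothetical planar two-link arm (illustrative link lengths and configuration, not the ARM2 kinematics):

```python
import numpy as np

# Hypothetical 2-link planar arm: link lengths in metres.
L1, L2 = 1.0, 0.8

def jacobian(q):
    # Velocity Jacobian of the end-effector position w.r.t. joint angles.
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

q = np.array([0.3, 0.9])        # nonsingular configuration
xdot = np.array([0.1, -0.2])    # desired end-effector velocity

# Inverse velocity solution: unique when J is nonsingular; near or at a
# singularity np.linalg.pinv returns the minimum-norm least-squares rates.
qdot = np.linalg.pinv(jacobian(q)) @ xdot

assert np.allclose(jacobian(q) @ qdot, xdot)
```

At a singular configuration the same call still returns joint rates, but they only satisfy the velocity equation in the least-squares sense, which is exactly the role of the generalized inverse in the paper.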
2012-08-01
An implication of the compactness of the Hessian is that, for small data noise and model error, the discrete Hessian can be approximated by a low-rank matrix; this in turn enables fast solution of an appropriately … The covariance of the posterior probability distribution is given by the inverse of the Hessian of the negative log-likelihood function. For Gaussian data noise and model error, this …
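A minimal numpy sketch of the low-rank idea (synthetic spectrum, not the report's PDE-constrained Hessian): when the eigenvalues of the data-misfit Hessian decay rapidly, a rank-r truncation combined with the Sherman-Morrison-Woodbury identity yields a cheap, accurate approximate inverse of the full (prior plus misfit) Hessian.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 200, 5

# Synthetic data-misfit Hessian with a rapidly decaying spectrum (mimicking
# a compact operator), plus an identity prior term.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = 10.0 ** -np.arange(n)          # fast eigenvalue decay
H_misfit = (U * eigs) @ U.T
H = np.eye(n) + H_misfit

# Low-rank approximation from the top-r eigenpairs of the misfit Hessian ...
w, V = np.linalg.eigh(H_misfit)
w, V = w[::-1][:r], V[:, ::-1][:, :r]
# ... and the Sherman-Morrison-Woodbury inverse of identity + low-rank:
H_inv_approx = np.eye(n) - (V * (w / (1.0 + w))) @ V.T

# Rank-5 approximation already inverts the full 200x200 Hessian accurately.
assert np.allclose(H_inv_approx @ H, np.eye(n), atol=1e-4)
```

The error of the approximate inverse is controlled by the first neglected eigenvalue, which is why small data noise and model error (fast decay) make the low-rank route effective.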
Spatiotemporal matrix image formation for programmable ultrasound scanners
NASA Astrophysics Data System (ADS)
Berthon, Beatrice; Morichau-Beauchant, Pierre; Porée, Jonathan; Garofalakis, Anikitos; Tavitian, Bertrand; Tanter, Mickael; Provost, Jean
2018-02-01
As programmable ultrasound scanners become more common in research laboratories, it is increasingly important to develop robust software-based image formation algorithms that can be obtained in a straightforward fashion for different types of probes and sequences with a small risk of error during implementation. In this work, we argue that as computational power keeps increasing, it is becoming practical to directly implement an approximation to the matrix operator linking reflector point targets to the corresponding radiofrequency signals via thoroughly validated and widely available simulation software. Once such a spatiotemporal forward-problem matrix is constructed, standard and thus highly optimized inversion procedures can be leveraged to achieve very high quality images in real time. Specifically, we show that spatiotemporal matrix image formation produces images of similar or enhanced quality when compared against standard delay-and-sum approaches in phantoms and in vivo, and show that this approach can be used to form images even when using non-conventional probe designs for which adapted image formation algorithms are not readily available.
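The approach, build the forward matrix once and then image with standard inversion machinery, can be sketched at toy scale (a random stand-in for the simulated forward matrix; in practice each column would come from a validated ultrasound simulator):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(3)
n_pix, n_samp = 64, 256

# Hypothetical forward matrix mapping pixel reflectivity to RF samples.
A = rng.standard_normal((n_samp, n_pix))

# Two point targets plus measurement noise.
x_true = np.zeros(n_pix)
x_true[[10, 41]] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(n_samp)

# Image formation = standard, highly optimized least-squares inversion.
x_hat = lsqr(A, y)[0]

assert np.argmax(np.abs(x_hat)) in (10, 41)
assert np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true) < 0.1
```

Because the inversion step is generic, swapping probes or sequences only changes how the columns of the forward matrix are simulated, not the imaging code.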
NASA Astrophysics Data System (ADS)
Tso, Chak-Hau Michael; Kuras, Oliver; Wilkinson, Paul B.; Uhlemann, Sebastian; Chambers, Jonathan E.; Meldrum, Philip I.; Graham, James; Sherlock, Emma F.; Binley, Andrew
2017-11-01
Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe or assume a statistical model of data errors before inversion. Wrongly prescribed errors can lead to over- or under-fitting of data; however, the derivation of models of data errors is often neglected. With the heightened interest in uncertainty estimation within hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide improved image appraisal. Here we focus on the role of measurement errors in electrical resistivity tomography (ERT). We have analysed two time-lapse ERT datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe; the other is a two-year-long cross-borehole survey at a UK nuclear site with 246 sets of over 50,000 measurements. Our study includes the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and correlation coefficient analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used, i.e. errors may not be uncorrelated as often assumed. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model explains the observed measurement errors better and shows superior inversion results and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the electrodes used to make the measurements. The new model can be readily applied to the diagonal data weighting matrix widely used in common inversion methods, as well as to the data covariance matrix in a Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
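The grouped linear error model can be sketched on synthetic reciprocal-error data (all numbers hypothetical, not the paper's field datasets): one intercept per electrode group plus a shared slope on transfer resistance, fitted by least squares.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500

# Synthetic reciprocal-error data: error magnitude grows linearly with
# transfer resistance R, and one hypothetical electrode group is noisier.
R = 10.0 ** rng.uniform(-1, 2, n)              # transfer resistances (ohm)
group = rng.integers(0, 4, n)                  # electrode group per measurement
offsets = np.array([0.01, 0.01, 0.01, 0.2])    # group 3 has a larger offset
err = offsets[group] + 0.02 * R + 0.005 * R * rng.standard_normal(n)

# Grouped linear error model |e| = a_g + b * R:
G = np.zeros((n, 5))
G[np.arange(n), group] = 1.0                   # per-group intercepts a_g
G[:, 4] = R                                    # shared slope b
coef = np.linalg.lstsq(G, err, rcond=None)[0]

assert abs(coef[4] - 0.02) < 0.01              # slope b recovered
assert coef[3] > 0.1 and max(coef[:3]) < 0.1   # noisy group identified
```

The fitted standard deviations can then populate the diagonal data weighting matrix (or a data covariance matrix in a Bayesian setting), with correlated errors confined to within-group blocks.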
NASA Astrophysics Data System (ADS)
Imamura, N.; Schultz, A.
2016-12-01
Recently, a full waveform time domain inverse solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is a computationally tractable direct waveform joint inversion that solves simultaneously for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from the use of a multitude of source illuminations, and the ability to operate in areas of high source-signal spatial complexity and non-stationarity. This goal would not be attainable with a pure time domain solution of the inverse problem. This is particularly true for MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across a large frequency bandwidth: the time step of the forward simulation must be finer than that required to represent the highest frequency, while the number of time steps must also cover the lowest frequency. This leads to a sensitivity matrix that is computationally burdensome to use when solving for a model update. We have implemented a code that addresses this situation through cascade decimation, which substantially reduces the size of the sensitivity matrix via a quasi-equivalent time domain decomposition. We also use a fictitious wave domain method to speed up the forward simulation in the time domain. By combining these refinements, we have developed a full waveform joint source field/earth conductivity inverse modeling method. We found that cascade decimation speeds computation of the sensitivity matrices dramatically, while keeping the solution close to that of the undecimated case.
For example, for a model discretized into 2.6×10^5 cells, we obtain model updates in less than 1 hour on a 4U rack-mounted workgroup Linux server, which is a practical computational time for the inverse problem.
NASA Astrophysics Data System (ADS)
Castro-González, N.; Vélez-Cerrada, J. Y.
2008-05-01
Given a bounded operator A on a Banach space X with Drazin inverse A^D and index r, we study the class of group invertible bounded operators B such that I + A^D(B − A) is invertible and … We show that, with respect to the decomposition …, they can be written as a matrix operator …, where B_1 and … are invertible. Several characterizations of the perturbed operators are established, extending matrix results. We analyze the perturbation of the Drazin inverse and provide explicit upper bounds of ||B^# − A^D|| and ||BB^# − A^D A||. We obtain a result on the continuity of the group inverse for operators on Banach spaces.
Inversion Of Jacobian Matrix For Robot Manipulators
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
Report discusses inversion of Jacobian matrix for class of six-degree-of-freedom arms with spherical wrist, i.e., with last three joint axes intersecting. Shows that, by taking advantage of simple geometry of such arms, closed-form solution of Q = J^(-1)X, which represents linear transformation from task space to joint space, is obtained efficiently. Presents solutions for PUMA arm, JPL/Stanford arm, and six-revolute-joint coplanar arm, along with all singular points. Main contribution of paper is to show that simple geometry of this type of arm can be exploited to perform inverse transformation without any need to compute Jacobian or its inverse explicitly. Implication of this computational efficiency: advanced task-space control schemes for spherical-wrist arms can be implemented more efficiently.
Recursive inverse factorization.
Rubensson, Emanuel H; Bock, Nicolas; Holmström, Erik; Niklasson, Anders M N
2008-03-14
A recursive algorithm for the inverse factorization S^(-1) = ZZ^* of Hermitian positive definite matrices S is proposed. The inverse factorization is based on iterative refinement [A.M.N. Niklasson, Phys. Rev. B 70, 193102 (2004)] combined with a recursive decomposition of S. As the computational kernel is matrix-matrix multiplication, the algorithm can be parallelized, and the computational effort increases linearly with system size for systems with sufficiently sparse matrices. Recent advances in network theory are used to find appropriate recursive decompositions. We show that optimization of the so-called network modularity results in an improved partitioning compared to other approaches, in particular when the recursive inverse factorization is applied to overlap matrices of irregularly structured three-dimensional molecules.
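A minimal dense sketch of the iterative-refinement core (the actual algorithm is recursive and exploits sparsity; the scaled-identity starting guess, second-order update, and iteration cap here are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
B = rng.standard_normal((n, n))
S = B @ B.T + n * np.eye(n)      # Hermitian positive definite test matrix

# Iteratively refine Z toward an inverse factor with Z Z^T = S^(-1):
# delta = I - Z^T S Z measures the factorization error, and the update
# Z <- Z (I + delta/2) drives it to zero (second-order refinement).
Z = np.eye(n) / np.sqrt(np.linalg.norm(S, 2))   # scaled starting guess
for _ in range(50):
    delta = np.eye(n) - Z.T @ S @ Z
    if np.linalg.norm(delta) < 1e-12:
        break
    Z = Z @ (np.eye(n) + 0.5 * delta)

assert np.allclose(Z @ Z.T, np.linalg.inv(S), atol=1e-8)
```

The only operation inside the loop is matrix-matrix multiplication, which is what makes the scheme parallelizable and, for sufficiently sparse matrices, linear scaling.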
The Equivalence between (AB)[dagger] = B[dagger]A[dagger] and Other Mixed-Type Reverse-Order Laws
ERIC Educational Resources Information Center
Tian, Yongge
2006-01-01
The standard reverse-order law for the Moore-Penrose inverse of a matrix product is (AB)[dagger] = B[dagger]A[dagger]. The purpose of this article is to give a set of equivalences of this reverse-order law and other mixed-type reverse-order laws for the Moore-Penrose inverse of matrix products.
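The law can be probed numerically with random matrices (an illustration, not the article's algebraic equivalences): the reverse-order law fails for generic factors but holds under the classical sufficient condition that A has full column rank and B has full row rank.

```python
import numpy as np

rng = np.random.default_rng(6)
pinv = np.linalg.pinv

# The reverse-order law (AB)^+ = B^+ A^+ fails for generic rectangular factors:
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2))
assert not np.allclose(pinv(A @ B), pinv(B) @ pinv(A))

# ... but it holds when A has full column rank and B has full row rank,
# one of the classical sufficient conditions:
A = rng.standard_normal((4, 2))   # full column rank (almost surely)
B = rng.standard_normal((2, 4))   # full row rank (almost surely)
assert np.allclose(pinv(A @ B), pinv(B) @ pinv(A))
```

In the full-rank case A^+A = I and BB^+ = I, from which all four Penrose conditions for B^+A^+ follow directly.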
Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel
ERIC Educational Resources Information Center
El-Gebeily, M.; Yushau, B.
2008-01-01
In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equations, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…
Building Generalized Inverses of Matrices Using Only Row and Column Operations
ERIC Educational Resources Information Center
Stuart, Jeffrey
2010-01-01
Most students complete their first and only course in linear algebra with the understanding that a real, square matrix "A" has an inverse if and only if "rref"("A"), the reduced row echelon form of "A", is the identity matrix I[subscript n]. That is, if they apply elementary row operations via the Gauss-Jordan algorithm to the partitioned matrix…
Aguilar, I; Misztal, I; Legarra, A; Tsuruta, S
2011-12-01
Genomic evaluations can be calculated using a unified procedure that combines phenotypic, pedigree and genomic information. Implementation of such a procedure requires the inverse of the relationship matrix based on pedigree and genomic relationships. The objective of this study was to investigate efficient computing options to create relationship matrices based on genomic markers and pedigree information, as well as their inverses. SNP marker information was simulated for a panel of 40 K SNPs, with the number of genotyped animals up to 30 000. Matrix multiplication in the computation of the genomic relationship matrix was performed by a simple 'do' loop, by two optimized versions of the loop, and by a specific matrix multiplication subroutine. Inversion was by a generalized inverse algorithm and by a LAPACK subroutine. With the most efficient choices and parallel processing, creation of matrices for 30 000 animals would take a few hours. Matrices required to implement a unified approach can be computed efficiently. Optimizations can be made either by modifying existing code or by using the efficient automatic optimizations provided by open source or third-party libraries. © 2011 Blackwell Verlag GmbH.
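A toy numpy sketch of a genomic relationship matrix and its inverse (VanRaden-style centring on simulated genotypes, far smaller than the study's 30 000 animals and 40 K SNPs; the blending fraction is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(7)
n_animals, n_snp = 100, 1000

# Simulated SNP genotypes coded 0/1/2, and the genomic relationship matrix
# G = Z Z' / (2 * sum p(1-p)) with Z the genotype matrix centred by 2p.
p = rng.uniform(0.05, 0.95, n_snp)             # allele frequencies
M = rng.binomial(2, p, size=(n_animals, n_snp)).astype(float)
Z = M - 2 * p
G = (Z @ Z.T) / (2 * np.sum(p * (1 - p)))

# Blend with a small fraction of the identity to guarantee invertibility,
# as is commonly done before inverting G for single-step evaluations.
Gw = 0.95 * G + 0.05 * np.eye(n_animals)
Ginv = np.linalg.inv(Gw)

assert np.allclose(Gw @ Ginv, np.eye(n_animals), atol=1e-8)
assert abs(np.mean(np.diag(G)) - 1.0) < 0.1    # diagonal near 1 by construction
```

The matrix products here are exactly the operations the study benchmarks: forming Z Z' dominates the cost, which is why loop optimization and tuned multiplication subroutines matter at the 30 000-animal scale.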
Video Bandwidth Compression System.
1980-08-01
Fragmentary table-of-contents excerpt describing the decoder hardware: a scaling function located between the inverse DPCM and inverse transform stages on the decoder matrix-multiplier chips; bit unpacker and inverse DPCM slave sync board; inverse DPCM loop boards; inverse transform board; composite video output board; display refresh memory (memory section; timing and control); bit unpacker and inverse DPCM; inverse transform processor.
Improvement of Mishchenko's T-matrix code for absorbing particles.
Moroz, Alexander
2005-06-10
The use of Gaussian elimination with backsubstitution for matrix inversion in scattering theories is discussed. Within the framework of the T-matrix method (the state-of-the-art code by Mishchenko is freely available at http://www.giss.nasa.gov/-crmim), it is shown that the domain of applicability of Mishchenko's FORTRAN 77 (F77) code can be substantially expanded in the direction of strongly absorbing particles where the current code fails to converge. Such an extension is especially important if the code is to be used in nanoplasmonic or nanophotonic applications involving metallic particles. At the same time, convergence can also be achieved for large nonabsorbing particles, in which case the non-Numerical Algorithms Group option of Mishchenko's code diverges. Computer F77 implementation of Mishchenko's code supplemented with Gaussian elimination with backsubstitution is freely available at http://www.wave-scattering.com.
Noniterative MAP reconstruction using sparse matrix representations.
Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J
2009-09-01
We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations, which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, compared with linear iterative reconstruction methods.
Bessel smoothing filter for spectral-element mesh
NASA Astrophysics Data System (ADS)
Trinh, P. T.; Brossier, R.; Métivier, L.; Virieux, J.; Wellington, P.
2017-06-01
Smoothing filters are extremely important tools in seismic imaging and inversion, such as for traveltime tomography, migration and waveform inversion. For efficiency, and as they can be used a number of times during inversion, it is important that these filters can easily incorporate prior information on the geological structure of the investigated medium, through variable coherent lengths and orientation. In this study, we promote the use of the Bessel filter to achieve these purposes. Instead of considering the direct application of the filter, we demonstrate that we can rely on the equation associated with its inverse filter, which amounts to the solution of an elliptic partial differential equation. This enhances the efficiency of the filter application, and also its flexibility. We apply this strategy within a spectral-element-based elastic full waveform inversion framework. Taking advantage of this formulation, we apply the Bessel filter by solving the associated partial differential equation directly on the spectral-element mesh through the standard weak formulation. This avoids cumbersome projection operators between the spectral-element mesh and a regular Cartesian grid, or expensive explicit windowed convolution on the finite-element mesh, which is often used for applying smoothing operators. The associated linear system is solved efficiently through a parallel conjugate gradient algorithm, in which the matrix vector product is factorized and highly optimized with vectorized computation. Significant scaling behaviour is obtained when comparing this strategy with the explicit convolution method. The theoretical numerical complexity of this approach increases linearly with the coherent length, whereas a sublinear relationship is observed practically. Numerical illustrations are provided here for schematic examples, and for a more realistic elastic full waveform inversion gradient smoothing on the SEAM II benchmark model. 
These examples illustrate the efficiency and flexibility of the proposed approach.
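The inverse-filter idea, applying the smoother by solving an elliptic PDE rather than by explicit windowed convolution, can be sketched in 1-D with a sparse direct solve standing in for the paper's conjugate-gradient solver on the spectral-element mesh (grid spacing and coherent length are illustrative):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# 1-D sketch: apply a Bessel-type smoother by solving its inverse operator
# (I - L^2 d^2/dx^2) u = f, instead of convolving with the smoothing kernel.
n, dx, L = 400, 10.0, 50.0          # grid points, spacing (m), coherent length (m)

# Sparse SPD system matrix for the inverse filter (Neumann ends).
lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)).tolil()
lap[0, 0] = -1.0                    # homogeneous Neumann boundary
lap[n - 1, n - 1] = -1.0
A = sp.eye(n) - (L / dx) ** 2 * lap.tocsr()

f = np.zeros(n)
f[n // 2] = 1.0                     # spiky "gradient" to be smoothed
u = spsolve(A.tocsc(), f)

assert u.max() == u[n // 2]                 # smoothing keeps the peak centred
assert np.isclose(u.sum(), f.sum())         # Neumann BC preserves the integral
assert u.min() >= -1e-15                    # Bessel kernel is non-negative
```

Variable coherent lengths and anisotropic orientations enter simply as spatially varying coefficients of the elliptic operator, which is the flexibility the abstract emphasizes.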
An iterative solver for the 3D Helmholtz equation
NASA Astrophysics Data System (ADS)
Belonosov, Mikhail; Dmitriev, Maxim; Kostin, Victor; Neklyudov, Dmitry; Tcheverda, Vladimir
2017-09-01
We develop a frequency-domain iterative solver for numerical simulation of acoustic waves in 3D heterogeneous media. It is based on the application of a unique preconditioner to the Helmholtz equation that ensures convergence for Krylov subspace iteration methods. Effective inversion of the preconditioner involves the Fast Fourier Transform (FFT) and numerical solution of a series of boundary value problems for ordinary differential equations. Matrix-by-vector multiplication for iterative inversion of the preconditioned matrix involves inversion of the preconditioner and pointwise multiplication of grid functions. Our solver has been verified by benchmarking against exact solutions and a time-domain solver.
Frequency-domain elastic full waveform inversion using encoded simultaneous sources
NASA Astrophysics Data System (ADS)
Jeong, W.; Son, W.; Pyun, S.; Min, D.
2011-12-01
Numerous studies have endeavored to develop robust full waveform inversion and migration algorithms. These processes require enormous computational cost because of the number of sources in a survey. To mitigate this problem, Romero (2000) proposed the phase encoding technique for prestack migration, and Krebs et al. (2009) proposed the encoded simultaneous-source inversion technique in the time domain. Ben-Hadj-Ali et al. (2011) demonstrated the robustness of frequency-domain full waveform inversion with simultaneous sources for noisy data, varying the source assembling. Although several studies on simultaneous-source inversion estimated P-wave velocity based on the acoustic wave equation, seismic migration and waveform inversion based on the elastic wave equations are required to obtain more reliable subsurface information. In this study, we propose a 2-D frequency-domain elastic full waveform inversion technique using phase encoding methods. In our algorithm, the random phase encoding method is employed to calculate the gradients of the elastic parameters, the source signature estimate and the diagonal entries of the approximate Hessian matrix. The crosstalk in the estimated source signature and in the diagonal entries of the approximate Hessian matrix is suppressed with iteration, as for the gradients. Our 2-D frequency-domain elastic waveform inversion algorithm is implemented using the back-propagation technique and the conjugate-gradient method. The source signature is estimated using the full Newton method. We compare the simultaneous-source inversion with conventional waveform inversion for synthetic data sets of the Marmousi-2 model. The inverted results obtained with simultaneous sources are comparable to those obtained with individual sources, and the source signature is successfully estimated by the simultaneous-source technique.
Comparing the inverted results using the pseudo Hessian matrix with previous inversion results provided by the approximate Hessian matrix, it is noted that the latter are better than the former for deeper parts of the model. This work was financially supported by the Brain Korea 21 project of Energy System Engineering, by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0006155), by the Energy Efficiency & Resources of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2010T100200133).
A trade-off between model resolution and variance with selected Rayleigh-wave data
Xia, J.; Miller, R.D.; Xu, Y.
2008-01-01
Inversion of multimode surface-wave data is of increasing interest in the near-surface geophysics community. For a given near-surface geophysical problem, it is essential to understand how well the data, calculated according to a layered-earth model, might match the observed data. A data-resolution matrix is a function of the data kernel (determined by a geophysical model and a priori information applied to the problem), not the data. A data-resolution matrix of high-frequency (≥2 Hz) Rayleigh-wave phase velocities, therefore, offers a quantitative tool for designing field surveys and predicting the match between calculated and observed data. First, we employed a data-resolution matrix to select data that would be well predicted and to explain the advantages of incorporating higher modes in inversion. The resulting discussion using the data-resolution matrix provides insight into the process of inverting Rayleigh-wave phase velocities with higher-mode data to estimate S-wave velocity structure. The discussion also suggested that each near-surface geophysical target can only be resolved using Rayleigh-wave phase velocities within specific frequency ranges, and that higher-mode data are normally more accurately predicted than fundamental-mode data because of restrictions on the data kernel for the inversion system. Second, we obtained an optimal damping vector in the vicinity of an inverted model by singular value decomposition of a trade-off function of model resolution and variance. At the end of the paper, we used a real-world example to demonstrate that data selected with the data-resolution matrix can provide better inversion results and to explain, with the data-resolution matrix, why incorporating higher-mode data in inversion can provide better results. We also calculated model-resolution matrices of these examples to show the potential of increasing model resolution with selected surface-wave data.
With the optimal damping vector, we can improve and assess an inverted model obtained by a damped least-squares method.
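The data-resolution matrix of a damped least-squares inverse can be formed directly from the kernel, with no observed data involved (illustrative random kernel and damping value):

```python
import numpy as np

rng = np.random.default_rng(8)
m_data, n_model = 30, 10

# Data-resolution matrix N = G G^-g for the damped least-squares inverse
# G^-g = (G^T G + eps^2 I)^-1 G^T: a function of the kernel G and damping
# eps only, not of any observed data.
G = rng.standard_normal((m_data, n_model))
eps = 0.1
G_inv = np.linalg.solve(G.T @ G + eps**2 * np.eye(n_model), G.T)
N = G @ G_inv

# Diagonal entries near 1 flag data the model can predict well; they
# cannot exceed 1, and their sum is bounded by the model dimension.
importance = np.diag(N)
assert importance.max() <= 1.0 + 1e-9
assert np.trace(N) <= n_model + 1e-9
```

Ranking the diagonal of N is a simple way to select well-predicted observations before inversion, which is the survey-design use highlighted in the abstract.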
Building generalized inverses of matrices using only row and column operations
NASA Astrophysics Data System (ADS)
Stuart, Jeffrey
2010-12-01
Most students complete their first and only course in linear algebra with the understanding that a real, square matrix A has an inverse if and only if rref(A), the reduced row echelon form of A, is the identity matrix I n . That is, if they apply elementary row operations via the Gauss-Jordan algorithm to the partitioned matrix [A | I n ] to obtain [rref(A) | P], then the matrix A is invertible exactly when rref(A) = I n , in which case, P = A -1. Many students must wonder what happens when A is not invertible, and what information P conveys in that case. That question is, however, seldom answered in a first course. We show that investigating that question emphasizes the close relationships between matrix multiplication, elementary row operations, linear systems, and the four fundamental spaces associated with a matrix. More important, answering that question provides an opportunity to show students how mathematicians extend results by relaxing hypotheses and then exploring the strengths and limitations of the resulting generalization, and how the first relaxation found is often not the best relaxation to be found. Along the way, we introduce students to the basic properties of generalized inverses. Finally, our approach should fit within the time and topic constraints of a first course in linear algebra.
NASA Astrophysics Data System (ADS)
Spicer, Graham L. C.; Azarin, Samira M.; Yi, Ji; Young, Scott T.; Ellis, Ronald; Bauer, Greta M.; Shea, Lonnie D.; Backman, Vadim
2016-10-01
In cancer biology, there has been a recent effort to understand tumor formation in the context of the tissue microenvironment. In particular, recent progress has explored the mechanisms behind how changes in the cell-extracellular matrix ensemble influence progression of the disease. The extensive use of in vitro tissue culture models in simulant matrix has proven effective at studying such interactions, but modalities for non-invasively quantifying aspects of these systems are scant. We present the novel application of an imaging technique, Inverse Spectroscopic Optical Coherence Tomography, for the non-destructive measurement of in vitro biological samples during matrix remodeling. Our findings indicate that the nanoscale-sensitive mass density correlation shape factor D of cancer cells increases in response to a more crosslinked matrix. We present a facile technique for the non-invasive, quantitative study of the micro- and nano-scale structure of the extracellular matrix and its host cells.
Lee, Kiju; Wang, Yunfeng; Chirikjian, Gregory S
2007-11-01
Over the past several decades a number of O(n) methods for forward and inverse dynamics computations have been developed in the multi-body dynamics and robotics literature. A method was developed in 1974 by Fixman for O(n) computation of the mass-matrix determinant for a serial polymer chain consisting of point masses. In other recent papers, we extended this method in order to compute the inverse of the mass matrix for serial chains consisting of point masses. In the present paper, we extend these ideas further and address the case of serial chains composed of rigid-bodies. This requires the use of relatively deep mathematics associated with the rotation group, SO(3), and the special Euclidean group, SE(3), and specifically, it requires that one differentiates functions of Lie-group-valued argument.
Limited-memory BFGS based least-squares pre-stack Kirchhoff depth migration
NASA Astrophysics Data System (ADS)
Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu
2015-08-01
Least-squares migration (LSM) is a linearized inversion technique for subsurface reflectivity estimation. Compared to conventional migration algorithms, it can improve spatial resolution significantly within a few iterations. There are three key steps in LSM: (1) calculate the data residuals between the observed data and data demigrated from the inverted reflectivity model; (2) migrate the data residuals to form the reflectivity gradient; and (3) update the reflectivity model using optimization methods. In order to obtain an accurate and high-resolution inversion result, a good estimate of the inverse Hessian matrix plays a crucial role. However, due to the large size of the Hessian matrix, computing its inverse is always a tough task. The limited-memory BFGS (L-BFGS) method can approximate the inverse Hessian using a limited amount of computer memory, maintaining only a history of the past m gradients (often m < 10). We combine the L-BFGS method with least-squares pre-stack Kirchhoff depth migration, and validate the introduced approach on the 2-D Marmousi synthetic data set and a 2-D marine data set. The results show that the introduced method effectively recovers the reflectivity model and converges faster than two comparison gradient methods. It may be significant for general complex subsurface imaging.
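As an illustration of the limited-memory idea, the sketch below applies the standard L-BFGS two-loop recursion (with a simple Armijo backtracking line search) to a toy linearized problem d = Gm. The matrix G, sizes, and step control are illustrative assumptions, not the authors' migration operator:

```python
import numpy as np

def lbfgs(f, grad, x0, m=5, iters=100):
    """Toy L-BFGS: approximates the action of the inverse Hessian using
    only the last m step/gradient-difference pairs (two-loop recursion)."""
    x = x0.copy()
    S, Y = [], []
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-12:
            break
        # Two-loop recursion: q ~= H^{-1} g built from the stored pairs.
        q = g.copy()
        alphas = []
        for s, y in zip(reversed(S), reversed(Y)):
            a = (s @ q) / (y @ s)
            q -= a * y
            alphas.append(a)
        if Y:
            q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])  # initial Hessian scaling
        for (s, y), a in zip(zip(S, Y), reversed(alphas)):
            b = (y @ q) / (y @ s)
            q += (a - b) * s
        # Armijo backtracking line search along the descent direction -q.
        t, fx, slope = 1.0, f(x), g @ q
        while f(x - t * q) > fx - 1e-4 * t * slope and t > 1e-12:
            t *= 0.5
        x_new = x - t * q
        g_new = grad(x_new)
        S.append(x_new - x)
        Y.append(g_new - g)
        if len(S) > m:            # keep only the last m pairs
            S.pop(0)
            Y.pop(0)
        x, g = x_new, g_new
    return x

# Toy linearized problem: recover a reflectivity-like model m from d = G m.
rng = np.random.default_rng(0)
G = rng.standard_normal((40, 20))
m_true = rng.standard_normal(20)
d = G @ m_true
f = lambda m: 0.5 * np.sum((G @ m - d) ** 2)
grad = lambda m: G.T @ (G @ m - d)
m_est = lbfgs(f, grad, np.zeros(20))
print(np.linalg.norm(m_est - m_true))
```

The memory cost is O(m n) vectors rather than the O(n^2) of an explicit (inverse) Hessian, which is the point of the limited-memory update.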
SMI adaptive antenna arrays for weak interfering signals. [Sample Matrix Inversion
NASA Technical Reports Server (NTRS)
Gupta, Inder J.
1986-01-01
The performance of adaptive antenna arrays in the presence of weak interfering signals (below thermal noise) is studied. It is shown that a conventional adaptive antenna array using the sample matrix inversion (SMI) algorithm is unable to suppress such interfering signals. To overcome this problem, the SMI algorithm is modified. In the modified algorithm, the covariance matrix is redefined such that the effect of thermal noise on the weights of the adaptive array is reduced. Thus, the weights are dictated by the relatively weak signals. It is shown that the modified algorithm provides the desired interference protection.
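The paper's exact covariance redefinition is not reproduced here; the sketch below shows one plausible variant, subtracting the (known, in this toy) thermal-noise floor from the covariance so that the weights respond to an interferer 20 dB below the noise. The array size, steering vectors, and epsilon loading are assumptions:

```python
import numpy as np

N = 8                                  # array elements
k = np.arange(N)
s = np.exp(2j * np.pi * 0.0 * k)       # desired-signal steering vector (broadside)
v = np.exp(2j * np.pi * 0.2 * k)       # steering vector of the weak interferer

sigma_n2 = 1.0                         # thermal-noise power
sigma_i2 = 0.01                        # interferer power, well below the noise
R = sigma_n2 * np.eye(N) + sigma_i2 * np.outer(v, v.conj())

# Conventional SMI weights w = R^{-1} s: noise dominates R, so the weak
# interferer barely influences the weights and is not nulled.
w_smi = np.linalg.solve(R, s)

# Modified covariance: subtract the noise floor so the weights are dictated
# by the weak signals; eps keeps the matrix invertible.
eps = 1e-6
R_mod = R - sigma_n2 * np.eye(N) + eps * np.eye(N)
w_mod = np.linalg.solve(R_mod, s)

# Normalized array response toward the interferer (smaller = deeper null).
null = lambda w: abs(np.vdot(w, v)) / (np.linalg.norm(w) * np.linalg.norm(v))
print(null(w_smi), null(w_mod))
```

With the toy numbers above, the conventional weights leave the interferer essentially unsuppressed while the modified weights place a deep null on it.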
NASA Astrophysics Data System (ADS)
O'Malley, D.; Le, E. B.; Vesselinov, V. V.
2015-12-01
We present a fast, scalable, and highly-implementable stochastic inverse method for characterization of aquifer heterogeneity. The method utilizes recent advances in randomized matrix algebra and exploits the structure of the Quasi-Linear Geostatistical Approach (QLGA), without requiring a structured grid like Fast-Fourier Transform (FFT) methods. The QLGA framework is a more stable version of Gauss-Newton iterates for a large number of unknown model parameters, and provides unbiased estimates. The methods are matrix-free and do not require derivatives or adjoints, and are thus ideal for complex models and black-box implementation. We also incorporate randomized least-squares solvers and data-reduction methods, which speed up computation and simulate missing data points. The new inverse methodology is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Inversion results based on a series of synthetic problems with steady-state and transient calibration data are presented.
Carta, D; Marras, C; Loche, D; Mountjoy, G; Ahmed, S I; Corrias, A
2013-02-07
The structural properties of zinc ferrite nanoparticles with spinel structure dispersed in a highly porous SiO(2) aerogel matrix were compared with a bulk zinc ferrite sample. In particular, the details of the cation distribution between the octahedral (B) and tetrahedral (A) sites of the spinel structure were determined using X-ray absorption spectroscopy. The analysis of both the X-ray absorption near edge structure and the extended X-ray absorption fine structure indicates that the degree of inversion of the zinc ferrite spinel structures varies with particle size. In particular, in the bulk microcrystalline sample, Zn(2+) ions are at the tetrahedral sites and trivalent Fe(3+) ions occupy octahedral sites (normal spinel). When particle size decreases, Zn(2+) ions are transferred to octahedral sites and the degree of inversion is found to increase as the nanoparticle size decreases. This is the first time that a variation of the degree of inversion with particle size is observed in ferrite nanoparticles grown within an aerogel matrix.
Synthetic Division and Matrix Factorization
ERIC Educational Resources Information Center
Barabe, Samuel; Dubeau, Franc
2007-01-01
Synthetic division is viewed as a change of basis for polynomials written under the Newton form. Then, the transition matrices obtained from a sequence of changes of basis are used to factorize the inverse of a bidiagonal matrix or a block bidiagonal matrix.
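Synthetic division itself is just Horner's scheme: the intermediate values of the recursion are the quotient coefficients and the final value is the remainder p(c). A minimal sketch (illustrative, not the authors' notation):

```python
def synthetic_division(coeffs, c):
    """Divide p(x) (coefficients listed highest degree first) by (x - c).
    Returns (quotient coefficients, remainder); the remainder equals p(c)."""
    vals = [coeffs[0]]
    for a in coeffs[1:]:
        vals.append(a + c * vals[-1])   # Horner recursion
    return vals[:-1], vals[-1]

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3), divided by (x - 2):
q, r = synthetic_division([1, -6, 11, -6], 2)
print(q, r)   # quotient x^2 - 4x + 3, remainder 0
```

Repeating the division at a sequence of points produces the transition matrices to the Newton form that the note uses to factorize bidiagonal inverses.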
On the Duality of Forward and Inverse Light Transport.
Chandraker, Manmohan; Bai, Jiamin; Ng, Tian-Tsong; Ramamoorthi, Ravi
2011-10-01
Inverse light transport seeks to undo global illumination effects, such as interreflections, that pervade images of most scenes. This paper presents the theoretical and computational foundations for inverse light transport as a dual of forward rendering. Mathematically, this duality is established through the existence of underlying Neumann series expansions. Physically, it can be shown that each term of our inverse series cancels an interreflection bounce, just as the forward series adds them. While the convergence properties of the forward series are well known, we show that the oscillatory convergence of the inverse series leads to more interesting conditions on material reflectance. Conceptually, the inverse problem requires the inversion of a large light transport matrix, which is impractical for realistic resolutions using standard techniques. A natural consequence of our theoretical framework is a suite of fast computational algorithms for light transport inversion--analogous to finite element radiosity, Monte Carlo and wavelet-based methods in forward rendering--that rely at most on matrix-vector multiplications. We demonstrate two practical applications, namely, separation of individual bounces of the light transport and fast projector radiometric compensation, to display images free of global illumination artifacts in real-world environments.
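The "matrix-vector products only" idea can be sketched with a Neumann-series solver: to invert I - F one never forms an inverse explicitly, only repeated applications of F, just as the forward series adds bounces term by term. Here F is a random contraction standing in for a light transport operator:

```python
import numpy as np

def neumann_solve(F, b, iters=200):
    """Solve (I - F) x = b using only matrix-vector products:
    x = b + F b + F^2 b + ..., valid when the spectral radius of F is < 1.
    Each added term plays the role of one more transport 'bounce'."""
    x = np.zeros_like(b)
    term = b.copy()
    for _ in range(iters):
        x += term
        term = F @ term
    return x

rng = np.random.default_rng(1)
F = rng.standard_normal((50, 50))
F *= 0.4 / np.linalg.norm(F, 2)        # force spectral radius below 1
b = rng.standard_normal(50)
x = neumann_solve(F, b)
print(np.linalg.norm((np.eye(50) - F) @ x - b))
```

For realistic resolutions the operator is never stored densely; only the ability to apply F to a vector is needed, which is what makes such inversion practical.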
An indirect approach to the extensive calculation of relationship coefficients
Colleau, Jean-Jacques
2002-01-01
A method was described for calculating population statistics on relationship coefficients without using corresponding individual data. It relied on the structure of the inverse of the numerator relationship matrix between individuals under investigation and ancestors. Computation times were observed on simulated populations and were compared to those incurred with a conventional direct approach. The indirect approach turned out to be very efficient for multiplying the relationship matrix corresponding to planned matings (full design) by any vector. Efficiency was generally still good or very good for calculating statistics on these simulated populations. An extreme implementation of the method is the calculation of inbreeding coefficients themselves. Relative performances of the indirect method were good except when many full-sibs during many generations existed in the population. PMID:12270102
Recursive flexible multibody system dynamics using spatial operators
NASA Technical Reports Server (NTRS)
Jain, A.; Rodriguez, G.
1992-01-01
This paper uses spatial operators to develop new spatially recursive dynamics algorithms for flexible multibody systems. The operator description of the dynamics is identical to that for rigid multibody systems. Assumed-mode models are used for the deformation of each individual body. The algorithms are based on two spatial operator factorizations of the system mass matrix. The first (Newton-Euler) factorization of the mass matrix leads to recursive algorithms for the inverse dynamics, mass matrix evaluation, and composite-body forward dynamics for the systems. The second (innovations) factorization of the mass matrix leads to an operator expression for the mass matrix inverse and to a recursive articulated-body forward dynamics algorithm. The primary focus is on serial chains, but extensions to general topologies are also described. A comparison of computational costs shows that the articulated-body forward dynamics algorithm is much more efficient than the composite-body algorithm for most flexible multibody systems.
Liu, Xiaoji; Qin, Xiaolan
2015-01-01
We investigate additive properties of the generalized Drazin inverse in a Banach algebra A. We find explicit expressions for the generalized Drazin inverse of the sum a + b, under new conditions on a, b ∈ A. As an application we give some new representations for the generalized Drazin inverse of an operator matrix. PMID:25729767
Refining mortality estimates in shark demographic analyses: a Bayesian inverse matrix approach.
Smart, Jonathan J; Punt, André E; White, William T; Simpfendorfer, Colin A
2018-01-18
Leslie matrix models are an important analysis tool in conservation biology that are applied to a diversity of taxa. The standard approach estimates the finite rate of population growth (λ) from a set of vital rates. In some instances, an estimate of λ is available, but the vital rates are poorly understood and can be solved for using an inverse matrix approach. However, these approaches are rarely attempted because they require information on the structure of age or stage classes. This study addressed this issue by using a combination of Monte Carlo simulations and the sample-importance-resampling (SIR) algorithm to solve the inverse matrix problem without data on population structure. This approach was applied to the grey reef shark (Carcharhinus amblyrhynchos) from the Great Barrier Reef (GBR) in Australia to determine the demography of this population. Additionally, these outputs were applied to another heavily fished population from Papua New Guinea (PNG) that requires estimates of λ for fisheries management. The SIR analysis determined that natural mortality (M) and total mortality (Z) based on indirect methods have previously been overestimated for C. amblyrhynchos, leading to an underestimated λ. The updated Z distributions determined using SIR provided λ estimates that matched an empirical λ for the GBR population and corrected obvious error in the demographic parameters for the PNG population. This approach provides an opportunity for the inverse matrix approach to be applied more broadly to situations where information on population structure is lacking. © 2018 by the Ecological Society of America.
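A minimal SIR sketch under assumed (hypothetical) vital rates: draw natural mortality M from a vague prior, weight each draw by how well the implied Leslie-matrix λ matches an "observed" λ, and resample. The age structure, fecundities, and tolerance below are illustrative, not the shark demography of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def leslie_lambda(M, fecundity=np.array([0.0, 1.0, 2.0, 2.0, 2.0])):
    """Dominant eigenvalue (lambda) of a toy Leslie matrix whose annual
    survival is exp(-M) for natural mortality M."""
    n = len(fecundity)
    A = np.zeros((n, n))
    A[0] = fecundity                       # first row: age-specific fecundity
    A[np.arange(1, n), np.arange(n - 1)] = np.exp(-M)   # subdiagonal: survival
    return np.max(np.abs(np.linalg.eigvals(A)))

lam_obs = leslie_lambda(0.4)               # 'observed' growth rate from M = 0.4

# Sample-importance-resampling: sample M from a vague prior, weight each
# draw by the match between its implied lambda and the observation, resample.
M_prior = rng.uniform(0.05, 1.0, 20000)
lam = np.array([leslie_lambda(M) for M in M_prior])
w = np.exp(-0.5 * ((lam - lam_obs) / 0.01) ** 2)
w /= w.sum()
M_post = rng.choice(M_prior, size=5000, p=w)
print(M_post.mean())
```

The resampled distribution of M concentrates around the value consistent with the observed λ, which is the sense in which the inverse matrix problem is solved without population-structure data.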
A Scalable O(N) Algorithm for Large-Scale Parallel First-Principles Molecular Dynamics Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osei-Kuffuor, Daniel; Fattebert, Jean-Luc
2014-01-01
Traditional algorithms for first-principles molecular dynamics (FPMD) simulations only gain a modest capability increase from current petascale computers, due to their O(N^3) complexity and their heavy use of global communications. To address this issue, we are developing a truly scalable O(N) complexity FPMD algorithm, based on density functional theory (DFT), which avoids global communications. The computational model uses a general nonorthogonal orbital formulation for the DFT energy functional, which requires knowledge of selected elements of the inverse of the associated overlap matrix. We present a scalable algorithm for approximately computing selected entries of the inverse of the overlap matrix, based on an approximate inverse technique, by inverting local blocks corresponding to principal submatrices of the global overlap matrix. The new FPMD algorithm exploits sparsity and uses nearest neighbor communication to provide a computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic orbitals are confined, and a cutoff beyond which the entries of the overlap matrix can be omitted when computing selected entries of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to O(100K) atoms on O(100K) processors, with a wall-clock time of O(1) minute per molecular dynamics time step.
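The local-block idea can be sketched as follows: approximate each diagonal entry of S^{-1} by inverting only a principal submatrix of S around that index. This is a serial toy (no parallelism or thresholding), valid when the inverse decays away from the diagonal, as for the diagonally dominant example below:

```python
import numpy as np

def selected_inverse_diag(S, radius=6):
    """Approximate diag(S^{-1}) by inverting, for each index i, only the
    local principal submatrix of S within 'radius' of i. Accurate when
    the entries of S^{-1} decay quickly away from the diagonal."""
    n = S.shape[0]
    d = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        block_inv = np.linalg.inv(S[lo:hi, lo:hi])
        d[i] = block_inv[i - lo, i - lo]
    return d

# Diagonally dominant overlap-like matrix: S^{-1} decays exponentially
# away from the diagonal, so small local blocks suffice.
n = 60
S = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
approx = selected_inverse_diag(S)
exact = np.diag(np.linalg.inv(S))
print(np.max(np.abs(approx - exact)))
```

Each entry costs an O(radius^3) local solve instead of an O(N^3) global one, which is what makes the approach O(N) overall.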
On the Construction of Involutory Rhotrices
ERIC Educational Resources Information Center
Usaini, S.
2012-01-01
An involutory matrix is a matrix that is its own inverse. Such matrices are of great importance in matrix theory and algebraic cryptography. In this note, we extend this involution to rhotrices and present their properties. We have also provided a method of constructing involutory rhotrices.
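For ordinary matrices the defining property A = A^{-1} is easy to verify; a Householder reflection is a standard involutory example (rhotrices themselves are not constructed here):

```python
import numpy as np

# A Householder reflection I - 2 v v^T / (v^T v) is involutory: applying
# it twice returns the identity, so the matrix is its own inverse.
v = np.array([1.0, 2.0, 3.0])
A = np.eye(3) - 2.0 * np.outer(v, v) / (v @ v)
print(np.allclose(A @ A, np.eye(3)))
```

This self-inverse property is what makes involutory matrices attractive in algebraic cryptography: encryption and decryption use the same matrix.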
NASA Astrophysics Data System (ADS)
Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre
2014-12-01
In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
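A minimum weighted-norm solution of an underdetermined system can be computed with the weighted generalized inverse; the sketch below checks the defining optimality property (W x orthogonal to the null space of G). The operator and weights are random stand-ins, not the renormalization weights of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.standard_normal((3, 8))        # underdetermined observation operator
d = rng.standard_normal(3)             # measurements
w = rng.uniform(0.5, 2.0, 8)
W = np.diag(w)                         # weight matrix (renormalization-style)

# Minimum weighted-norm solution via the weighted generalized inverse:
# x = W^{-1} G^T (G W^{-1} G^T)^{-1} d minimizes x^T W x subject to G x = d.
Winv = np.diag(1.0 / w)
x = Winv @ G.T @ np.linalg.solve(G @ Winv @ G.T, d)

# Optimality check: W x must be orthogonal to every null-space direction of G.
_, _, Vt = np.linalg.svd(G)
z = Vt[-1]                             # one null-space direction of G
print(np.linalg.norm(G @ x - d), abs(z @ W @ x))
```

Choosing W to satisfy the renormalization condition is what, per the paper, makes this generalized inverse behave well for localizing point sources.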
NASA Astrophysics Data System (ADS)
Jahandari, H.; Farquharson, C. G.
2017-11-01
Unstructured grids enable representing arbitrary structures more accurately and with fewer cells compared to regular structured grids. These grids also allow more efficient refinements compared to rectilinear meshes. In this study, tetrahedral grids are used for the inversion of magnetotelluric (MT) data, which allows for the direct inclusion of topography in the model, for constraining an inversion using a wireframe-based geological model and for local refinement at the observation stations. A minimum-structure method with an iterative model-space Gauss-Newton algorithm for optimization is used. An iterative solver is employed for solving the normal system of equations at each Gauss-Newton step and the sensitivity matrix-vector products that are required by this solver are calculated using pseudo-forward problems. This method alleviates the need to explicitly form the Hessian or Jacobian matrices, which significantly reduces the required computation memory. Forward problems are formulated using an edge-based finite-element approach and a sparse direct solver is used for the solutions. This solver allows saving and re-using the factorization of matrices for similar pseudo-forward problems within a Gauss-Newton iteration, which greatly reduces the computation time. Two examples are presented to show the capability of the algorithm: the first example uses a benchmark model while the second example represents a realistic geological setting with topography and a sulphide deposit. The data that are inverted are the full-tensor impedance and the magnetic transfer function vector. The inversions satisfactorily recovered the models and reproduced the data, which shows the effectiveness of unstructured grids for complex and realistic MT inversion scenarios. The first example is also used to demonstrate the computational efficiency of the presented model-space method by comparison with its data-space counterpart.
Negre, Christian F A; Mniszewski, Susan M; Cawkwell, Marc J; Bock, Nicolas; Wall, Michael E; Niklasson, Anders M N
2016-07-12
We present a reduced complexity algorithm to compute the inverse overlap factors required to solve the generalized eigenvalue problem in a quantum-based molecular dynamics (MD) simulation. Our method is based on the recursive, iterative refinement of an initial guess of Z (inverse square root of the overlap matrix S). The initial guess of Z is obtained beforehand by using either an approximate divide-and-conquer technique or dynamical methods, propagated within an extended Lagrangian dynamics from previous MD time steps. With this formulation, we achieve long-term stability and energy conservation even under the incomplete, approximate, iterative refinement of Z. Linear-scaling performance is obtained using numerically thresholded sparse matrix algebra based on the ELLPACK-R sparse matrix data format, which also enables efficient shared-memory parallelization. As we show in this article using self-consistent density-functional-based tight-binding MD, our approach is faster than conventional methods based on the diagonalization of overlap matrix S for systems as small as a few hundred atoms, substantially accelerating quantum-based simulations even for molecular structures of intermediate size. For a 4158-atom water-solvated polyalanine system, we find an average speedup factor of 122 for the computation of Z in each MD step.
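The paper refines a Z propagated from previous MD steps; the generic refinement it builds on can be sketched with the coupled Newton-Schulz iteration for S^{-1/2}, which uses only matrix multiplications and converges when ||I - S|| < 1 (the overlap-like matrix below is an assumption):

```python
import numpy as np

def inverse_sqrt(S, iters=30):
    """Coupled Newton-Schulz iteration: Y -> S^{1/2}, Z -> S^{-1/2}.
    Uses only matrix multiplications, so it vectorizes/parallelizes well;
    converges when ||I - S|| < 1 in the spectral norm."""
    n = S.shape[0]
    Y, Z = S.copy(), np.eye(n)
    for _ in range(iters):
        T = 0.5 * (3.0 * np.eye(n) - Z @ Y)
        Y, Z = Y @ T, T @ Z
    return Z

rng = np.random.default_rng(4)
B = rng.standard_normal((20, 20))
S = np.eye(20) + 0.02 * (B + B.T)      # overlap-like SPD matrix near identity
Z = inverse_sqrt(S)
print(np.linalg.norm(Z @ S @ Z - np.eye(20)))
```

Because the iteration is multiplication-only, a thresholded sparse-matrix implementation of the products (as in the article's ELLPACK-R formulation) gives linear-scaling cost; the specific initial-guess propagation of the paper is not reproduced here.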
Bordehore, Cesar; Fuentes, Verónica L; Segarra, Jose G; Acevedo, Melisa; Canepa, Antonio; Raventós, Josep
2015-01-01
Frequently, population ecology of marine organisms uses a descriptive approach in which their sizes and densities are plotted over time. This approach has limited usefulness for designing management strategies or modelling different scenarios. Population projection matrix models are among the most widely used tools in ecology. Unfortunately, for the majority of pelagic marine organisms, it is difficult to mark individuals and follow them over time to determine their vital rates and build a population projection matrix model. Nevertheless, it is possible to get time-series data to calculate size structure and densities of each size in order to determine the matrix parameters. This approach is known as a "demographic inverse problem" and is based on quadratic programming methods, but it has rarely been used on aquatic organisms. We used unpublished field data of a population of the cubomedusa Carybdea marsupialis to construct a population projection matrix model and compare two different management strategies to lower the population to values before year 2008, when there was no significant interaction with bathers. Those strategies were direct removal of medusae and reduction of prey. Our results showed that removal of jellyfish from all size classes was more effective than removing only juveniles or adults. When reducing prey, the highest efficiency in lowering the C. marsupialis population occurred when prey depletion affected prey of all medusae sizes. Our model fit the field data well and may serve to design an efficient management strategy or build hypothetical scenarios such as removal of individuals or reducing prey. This method is applicable to other marine or terrestrial species for which density and population structure over time are available.
Complete spatiotemporal characterization and optical transfer matrix inversion of a 420 mode fiber.
Carpenter, Joel; Eggleton, Benjamin J; Schröder, Jochen
2016-12-01
The ability to measure a scattering medium's optical transfer matrix, the mapping between any spatial input and output, has enabled applications such as imaging to be performed through media which would otherwise be opaque due to scattering. However, the scattering of light occurs not just in space, but also in time. We complete the characterization of scatter by extending optical transfer matrix methods into the time domain, allowing any spatiotemporal input state at one end to be mapped directly to its corresponding spatiotemporal output state. We have measured the optical transfer function of a multimode fiber in its entirety; it consists of 420 modes in/out at 32768 wavelengths, the most detailed complete characterization of multimode waveguide light propagation to date, to the best of our knowledge. We then demonstrate the ability to generate any spatial/polarization state at the output of the fiber at any wavelength, as well as predict the temporal response of any spatial/polarization input state.
A Fast Estimation Algorithm for Two-Dimensional Gravity Data (GEOFAST),
1979-11-15
...to a wide class of problems (Refs. 9 and 17). The major inhibitor to the widespread application of optimal gravity data processing is the severe... extends directly to two dimensions. Define the n1n2 x n1n2 diagonal window matrix W as the Kronecker product of two one-dimensional windows, W = W1 ⊗ W2 (B...). Inversion of Separable Matrices: Consider the linear system y = T x (B.3-1), where T is block Toeplitz of dimension n1n2 x n1n2. Its frequency domain...
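The separable (Kronecker) structure mentioned in the fragment above is what makes fast 2-D processing possible: W = W1 ⊗ W2 can be applied to a stacked grid without ever forming the large matrix, via the standard identity sketched below:

```python
import numpy as np

# Kronecker mat-vec identity (row-major stacking):
# (W1 (x) W2) vec(X) = vec(W1 X W2^T),
# so the n1*n2 x n1*n2 product reduces to two small matrix products.
rng = np.random.default_rng(5)
n1, n2 = 6, 7
W1 = rng.standard_normal((n1, n1))
W2 = rng.standard_normal((n2, n2))
X = rng.standard_normal((n1, n2))

direct = np.kron(W1, W2) @ X.ravel()     # explicit large matrix (for checking)
fast = (W1 @ X @ W2.T).ravel()           # separable application, never forms kron
print(np.allclose(direct, fast))
```

The same separability is what lets a 2-D Toeplitz (convolutional) operator be handled with 1-D transforms along each dimension.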
Atomic configurations at InAs partial dislocation cores associated with Z-shape faulted dipoles.
Li, Luying; Gan, Zhaofeng; McCartney, Martha R; Liang, Hanshuang; Yu, Hongbin; Gao, Yihua; Wang, Jianbo; Smith, David J
2013-11-15
The atomic arrangements of two types of InAs dislocation cores associated with a Z-shape faulted dipole are observed directly by aberration-corrected high-angle annular-dark-field imaging. Single unpaired columns of different atoms in a matrix of dumbbells are clearly resolved, with observable variations of bond lengths due to the excess Coulomb force from bare ions at the dislocation core. The corresponding geometric phase analysis confirms that the dislocation cores serve as origins of strain-field inversion while the stacking faults maintain the existing strain status.
Inverse modeling of the terrestrial carbon flux in China with flux covariance among inverted regions
NASA Astrophysics Data System (ADS)
Wang, H.; Jiang, F.; Chen, J. M.; Ju, W.; Wang, H.
2011-12-01
Quantitative understanding of the role of ocean and terrestrial biosphere in the global carbon cycle, their response and feedback to climate change is required for the future projection of the global climate. China has the largest amount of anthropogenic CO2 emission, diverse terrestrial ecosystems and an unprecedented rate of urbanization. Thus information on spatial and temporal distributions of the terrestrial carbon flux in China is of great importance in understanding the global carbon cycle. We developed a nested inversion with focus in China. Based on Transcom 22 regions for the globe, we divide China and its neighboring countries into 17 regions, making 39 regions in total for the globe. A Bayesian synthesis inversion is made to estimate the terrestrial carbon flux based on GlobalView CO2 data. In the inversion, GEOS-Chem is used as the transport model to develop the transport matrix. A terrestrial ecosystem model named BEPS is used to produce the prior surface flux to constrain the inversion. However, the sparseness of available observation stations in Asia poses a challenge to the inversion for the 17 small regions. To obtain additional constraint on the inversion, a prior flux covariance matrix is constructed using the BEPS model through analyzing the correlation in the net carbon flux among regions under variable climate conditions. The use of the covariance among different regions in the inversion effectively extends the information content of CO2 observations to more regions. The carbon flux over the 39 land and ocean regions are inverted for the period from 2004 to 2009. In order to investigate the impact of introducing the covariance matrix with non-zero off-diagonal values to the inversion, the inverted terrestrial carbon flux over China is evaluated against ChinaFlux eddy-covariance observations after applying an upscaling methodology.
3D CSEM inversion based on goal-oriented adaptive finite element method
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.
2016-12-01
We present a parallel 3D frequency domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt the dual-grid approach where the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent from the fine meshing generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multiple scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the map of conductivity and sensitivity kernel between the forward and inverse mesh is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability is obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library.
We show the scalability, reliability and the potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled source EM data generated for a complex 3D offshore model with significant seafloor topography.
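The Gauss-Newton core of such schemes can be sketched on a toy nonlinear forward model (Occam regularization, finite elements, and adjoint sensitivities are all omitted; the exponential model and starting point are illustrative assumptions):

```python
import numpy as np

def gauss_newton(residual, jacobian, m0, iters=20):
    """Plain Gauss-Newton: at each step solve the normal equations
    (J^T J) dm = -J^T r and update the model. The regularized variants
    used in practice add a stabilizing term to J^T J."""
    m = m0.copy()
    for _ in range(iters):
        r, J = residual(m), jacobian(m)
        m += np.linalg.solve(J.T @ J, -J.T @ r)
    return m

# Toy nonlinear forward model: d_i = a * exp(b * x_i).
x = np.linspace(0.0, 1.0, 30)
a_true, b_true = 1.0, 0.5
d_obs = a_true * np.exp(b_true * x)
residual = lambda m: m[0] * np.exp(m[1] * x) - d_obs
jacobian = lambda m: np.column_stack([np.exp(m[1] * x),
                                      m[0] * x * np.exp(m[1] * x)])
m = gauss_newton(residual, jacobian, np.array([0.8, 0.3]))
print(m)
```

In large-scale codes the Jacobian is never formed row by row like this; its action is computed via adjoint (pseudo-forward) solves, and the dense linear algebra of the normal equations is what ScaLAPACK-style routines parallelize.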
Stochastic Gabor reflectivity and acoustic impedance inversion
NASA Astrophysics Data System (ADS)
Hariri Naghadeh, Diako; Morley, Christopher Keith; Ferguson, Angus John
2018-02-01
To delineate subsurface lithology and estimate petrophysical properties of a reservoir, it is possible to use acoustic impedance (AI), which is the result of seismic inversion. To convert amplitude to AI, it is vital to remove wavelet effects from the seismic signal in order to obtain a reflection series, and subsequently to transform those reflections to AI. To carry out seismic inversion correctly it is important not to assume that the seismic signal is stationary; however, all stationary deconvolution methods are designed under that assumption. To increase temporal resolution and interpretation ability, amplitude compensation and phase correction are inevitable; these are pitfalls of stationary reflectivity inversion. Although stationary reflectivity inversion methods try to estimate the reflectivity series, because of the incorrect assumptions their estimates will not be correct, though they may be useful. Converting those reflection series to AI and merging them with the low-frequency initial model can then help. The aim of this study was to apply non-stationary deconvolution to eliminate time-variant wavelet effects from the signal and to convert the estimated reflection series to absolute AI by obtaining a bias from well logs. To carry out this aim, stochastic Gabor inversion in the time domain was used. The Gabor transform provided the signal's time-frequency analysis and estimated wavelet properties from different windows. Dealing with different time windows gave the ability to create a time-variant kernel matrix, which was used to remove the matrix effects from the seismic data. The result is a reflection series that does not follow the stationary assumption. The subsequent step was to convert those reflections to AI using well information. Synthetic and real data sets were used to demonstrate the ability of the introduced method. The results highlight that the time cost of the inversion is negligible compared to general Gabor inversion in the frequency domain.
Also, obtaining a bias from well logs helps the method estimate reliable AI. To assess the effect of random noise on deterministic and stochastic inversion results, a stationary noisy trace with a signal-to-noise ratio of 2 was used. The results highlight the inability of deterministic inversion to deal with a noisy data set even when using a large number of regularization parameters. Also, despite the low signal level, stochastic Gabor inversion not only correctly estimates the wavelet's properties but, because of the bias from well logs, also yields an inversion result very close to the real AI. Comparing deterministic and introduced inversion results on a real data set shows that the low-resolution results of deterministic inversion, especially in the deeper parts of the seismic sections, create significant reliability problems for seismic prospects; this pitfall is solved completely by stochastic Gabor inversion. The estimated AI using Gabor inversion in the time domain is also much better and faster than general Gabor inversion in the frequency domain, owing to the extra number of windows required to analyze the time-frequency information and the amount of temporal increment between windows. In contrast, stochastic Gabor inversion can estimate trustworthy physical properties close to the real characteristics. Applied to a real data set, the method could detect the direction of volcanic intrusion and delineate the lithology distribution along the fan. Comparing the inversion results highlights the efficiency of stochastic Gabor inversion in delineating lateral lithology changes because of the improved frequency content and zero phasing of the final inversion volume.
On Max-Plus Algebra and Its Application on Image Steganography
Santoso, Kiswara Agung; Fatmawati; Suprajitno, Herry
2018-01-01
We propose a new steganography method to hide an image inside another image using matrix multiplication operations in max-plus algebra. This is especially interesting because the matrices generally used to encode or disguise information have inverses, whereas matrix multiplication in max-plus algebra is not invertible. An advantage of this method is that the image that can be hidden in the cover image is larger than in the previous method. The proposed method has been tested on many secret images, and the results are satisfactory: the method offers a high level of strength and security and can be used on various operating systems. PMID:29887761
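The non-invertibility that the hiding scheme relies on is easy to see from the definition of the max-plus product. Below is a minimal sketch of the operation only; the paper's actual encoding and decoding procedure is not reproduced here.

```python
import numpy as np

def maxplus_mul(A, B):
    """Max-plus 'product': (A (*) B)[i, j] = max_k (A[i, k] + B[k, j]).
    The max operation discards information, so the product generally cannot
    be undone -- the non-invertibility the hiding scheme relies on."""
    m, n = A.shape[0], B.shape[1]
    C = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            C[i, j] = np.max(A[i, :] + B[:, j])
    return C

A = np.array([[0.0, 3.0],
              [2.0, 1.0]])
B = np.array([[1.0, 0.0],
              [4.0, 2.0]])
C = maxplus_mul(A, B)   # [[7., 5.], [5., 3.]]
```

Note that different pairs (A, B) can yield the same C, so there is no max-plus "division" that recovers one factor from the product and the other factor.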
Wei, Jianing; Bouman, Charles A; Allebach, Jan P
2014-05-01
Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
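A direct (unaccelerated) space-varying convolution, the baseline cost that matrix source coding is designed to beat, can be sketched as follows in 1-D. The position-dependent Gaussian blur is a hypothetical example of an impulse response with slow spatial variation, not the stray-light kernel from the paper.

```python
import numpy as np

def space_varying_convolve_1d(x, kernel_at):
    """Direct space-varying convolution: kernel_at(i) returns the impulse
    response applied at output sample i. Cost is O(N * K) per signal, the
    expense that matrix source coding is designed to reduce."""
    n = len(x)
    y = np.zeros(n)
    for i in range(n):
        h = kernel_at(i)
        k2 = len(h) // 2
        for k, hk in enumerate(h):
            j = i + k - k2
            if 0 <= j < n:
                y[i] += hk * x[j]
    return y

def kernel_at(i):
    """Hypothetical blur whose width grows slowly across the signal."""
    sigma = 1.0 + 0.02 * i
    t = np.arange(-5, 6)
    h = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    return h / h.sum()

x = np.zeros(100)
x[50] = 1.0                                   # impulse probe
y = space_varying_convolve_1d(x, kernel_at)   # local blur around sample 50
```

Because the kernel depends on position, the single FFT trick for space-invariant convolution does not apply; each output sample sees its own impulse response.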
NASA Astrophysics Data System (ADS)
Nuber, André; Manukyan, Edgar; Maurer, Hansruedi
2014-05-01
Conventional methods of interpreting seismic data rely on filtering and processing limited portions of the recorded wavefield. Typically, either reflections, refractions or surface waves are considered in isolation. Particularly in near-surface engineering and environmental investigations (depths less than, say 100 m), these wave types often overlap in time and are difficult to separate. Full waveform inversion is a technique that seeks to exploit and interpret the full information content of the seismic records without the need for separating events first; it yields models of the subsurface at sub-wavelength resolution. We use a finite element modelling code to solve the 2D elastic isotropic wave equation in the frequency domain. This code is part of a Gauss-Newton inversion scheme which we employ to invert for the P- and S-wave velocities as well as for density in the subsurface. For shallow surface data the use of an elastic forward solver is essential because surface waves often dominate the seismograms. This leads to high sensitivities (partial derivatives contained in the Jacobian matrix of the Gauss-Newton inversion scheme) and thus large model updates close to the surface. Reflections from deeper structures may also include useful information, but the large sensitivities of the surface waves often preclude this information from being fully exploited. We have developed two methods that balance the sensitivity distributions and thus may help resolve the deeper structures. The first method includes equilibrating the columns of the Jacobian matrix prior to every inversion step by multiplying them with individual scaling factors. This is expected to also balance the model updates throughout the entire subsurface model. It can be shown that this procedure is mathematically equivalent to balancing the regularization weights of the individual model parameters. A proper choice of the scaling factors required to balance the Jacobian matrix is critical. 
We decided to normalise the columns of the Jacobian based on their absolute column sum, but defining an upper threshold for the scaling factors. This avoids particularly small and therefore insignificant sensitivities being over-boosted, which would produce unstable results. The second method proposed includes adjusting the inversion cell size with depth. Multiple cells of the forward modelling grid are merged to form larger inversion cells (typical ratios between forward and inversion cells are in the order of 1:100). The irregular inversion grid is adapted to the expected resolution power of full waveform inversion. Besides stabilizing the inversion, this approach also reduces the number of model parameters to be recovered. Consequently, the computational costs and the memory consumption are reduced significantly. This is particularly critical when Gauss-Newton type inversion schemes are employed. Extensive tests with synthetic data demonstrated that both methods stabilise the inversion and improve the inversion results. The two methods have some redundancy, which can be seen when both are applied simultaneously, that is, when scaling of the Jacobian matrix is applied to an irregular inversion grid. The calculated scaling factors are quite balanced and span a much smaller range than in the case of a regular inversion grid.
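The first balancing method can be sketched as follows. The absolute-column-sum normalisation follows the text; the capping rule and the factor of 100 are illustrative assumptions, not values from the study.

```python
import numpy as np

def scale_columns(J, max_boost=100.0):
    """Equilibrate the columns of a Jacobian by their absolute column sums,
    with an upper threshold on the scaling factors so that near-zero
    sensitivities are not over-boosted (the threshold rule is illustrative)."""
    s = np.abs(J).sum(axis=0)
    s = np.where(s > 0.0, s, 1.0)            # guard against all-zero columns
    f = 1.0 / s
    f = np.minimum(f, max_boost * f.min())   # cap the relative boost
    return J * f, f

# two model parameters: one with strong (shallow) and one with weak (deep) sensitivity
J = np.array([[10.0, 0.5],
              [20.0, 1.0]])
Js, f = scale_columns(J)   # both columns now have unit absolute column sum
```

Without the cap, a parameter with vanishing sensitivity would receive an enormous scaling factor and destabilise the model update, which is the over-boosting the text warns against.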
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiffmann, Florian; VandeVondele, Joost, E-mail: Joost.VandeVondele@mat.ethz.ch
2015-06-28
We present an improved preconditioning scheme for electronic structure calculations based on the orbital transformation method. First, a preconditioner is developed which includes information from the full Kohn-Sham matrix but avoids computationally demanding diagonalisation steps in its construction. This reduces the computational cost of its construction, eliminating a bottleneck in large scale simulations, while maintaining rapid convergence. In addition, a modified form of Hotelling’s iterative inversion is introduced to replace the exact inversion of the preconditioner matrix. This method is highly effective during molecular dynamics (MD), as the solution obtained in earlier MD steps is a suitable initial guess. Filtering small elements during sparse matrix multiplication leads to linear scaling inversion, while retaining robustness, already for relatively small systems. For system sizes ranging from a few hundred to a few thousand atoms, which are typical for many practical applications, the improvements to the algorithm lead to a 2-5 fold speedup per MD step.
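Hotelling's iteration itself is compact. The dense NumPy sketch below omits the sparse-matrix filtering that gives the paper its linear scaling, and uses a generic safe initial guess instead of the previous MD step's solution.

```python
import numpy as np

def hotelling_inverse(A, X0, n_iter=20):
    """Hotelling (Newton-Schulz) iteration X <- X(2I - AX): converges
    quadratically to A^(-1) whenever the spectral radius of (I - A X0) is < 1."""
    I = np.eye(A.shape[0])
    X = X0
    for _ in range(n_iter):
        X = X @ (2.0 * I - A @ X)
    return X

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
# generic safe initial guess; in MD the previous step's inverse would be used instead
X0 = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
X = hotelling_inverse(A, X0)   # X is now A^(-1) to machine precision
```

The iteration uses only matrix multiplications, which is why filtering small elements in sparse products is enough to make the whole inversion linear scaling.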
Optimal aperture synthesis radar imaging
NASA Astrophysics Data System (ADS)
Hysell, D. L.; Chau, J. L.
2006-03-01
Aperture synthesis radar imaging has been used to investigate coherent backscatter from ionospheric plasma irregularities at Jicamarca and elsewhere for several years. Phenomena of interest include equatorial spread F, 150-km echoes, the equatorial electrojet, range-spread meteor trails, and mesospheric echoes. The sought-after images are related to spaced-receiver data mathematically through an integral transform, but direct inversion is generally impractical or suboptimal. We instead turn to statistical inverse theory, endeavoring to utilize fully all available information in the data inversion. The imaging algorithm used at Jicamarca is based on an implementation of the MaxEnt method developed for radio astronomy. Its strategy is to limit the space of candidate images to those that are positive definite, consistent with data to the degree required by experimental confidence limits; smooth (in some sense); and most representative of the class of possible solutions. The algorithm was improved recently by (1) incorporating the antenna radiation pattern in the prior probability and (2) estimating and including the full error covariance matrix in the constraints. The revised algorithm is evaluated using new 28-baseline electrojet data from Jicamarca.
NASA Astrophysics Data System (ADS)
Scheunert, M.; Ullmann, A.; Afanasjew, M.; Börner, R.-U.; Siemon, B.; Spitzer, K.
2016-06-01
We present an inversion concept for helicopter-borne frequency-domain electromagnetic (HEM) data capable of reconstructing 3-D conductivity structures in the subsurface. Standard interpretation procedures often involve laterally constrained stitched 1-D inversion techniques to create pseudo-3-D models that are largely representative for smoothly varying conductivity distributions in the subsurface. Pronounced lateral conductivity changes may, however, produce significant artifacts that can lead to serious misinterpretation. Still, 3-D inversions of entire survey data sets are numerically very expensive. Our approach is therefore based on a cut-&-paste strategy whereupon the full 3-D inversion needs to be applied only to those parts of the survey where the 1-D inversion actually fails. The introduced 3-D Gauss-Newton inversion scheme exploits information given by a state-of-the-art (laterally constrained) 1-D inversion. For a typical HEM measurement, an explicit representation of the Jacobian matrix is inevitable which is caused by the unique transmitter-receiver relation. We introduce tensor quantities which facilitate the matrix assembly of the forward operator as well as the efficient calculation of the Jacobian. The finite difference forward operator incorporates the displacement currents because they may seriously affect the electromagnetic response at frequencies above 100. Finally, we deliver the proof of concept for the inversion using a synthetic data set with a noise level of up to 5%.
NASA Astrophysics Data System (ADS)
Ha, Sanghyun; Park, Junshin; You, Donghyun
2018-01-01
Utility of the computational power of Graphics Processing Units (GPUs) is elaborated for solutions of incompressible Navier-Stokes equations which are integrated using a semi-implicit fractional-step method. The Alternating Direction Implicit (ADI) and the Fourier-transform-based direct solution methods used in the semi-implicit fractional-step method take advantage of multiple tridiagonal matrices whose inversion is known as the major bottleneck for acceleration on a typical multi-core machine. A novel implementation of the semi-implicit fractional-step method designed for GPU acceleration of the incompressible Navier-Stokes equations is presented. Aspects of the programming model of Compute Unified Device Architecture (CUDA) which are critical to the bandwidth-bound nature of the present method are discussed in detail. A data layout for efficient use of CUDA libraries is proposed for acceleration of tridiagonal matrix inversion and fast Fourier transform. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while the Navier-Stokes equations are computed on a GPU. Performance of the present method using CUDA is assessed by comparing the speed of solving three tridiagonal matrices using ADI with the speed of solving one heptadiagonal matrix using a conjugate gradient method. An overall speedup of 20 times is achieved using a Tesla K40 GPU in comparison with a single-core Xeon E5-2660 v3 CPU in simulations of turbulent boundary-layer flow over a flat plate conducted on over 134 million grid points. An enhanced speedup of 48 times is reached for the same problem using a Tesla P100 GPU.
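The per-line tridiagonal solves that ADI batches onto the GPU are, on a CPU, the classical O(N) Thomas algorithm. A serial NumPy sketch for one grid line follows (the GPU implementation solves millions of such systems concurrently; the -1/2/-1 system below is just an illustrative diffusion-like stencil).

```python
import numpy as np

def thomas(a, b, c, d):
    """Thomas algorithm: O(n) solve of a tridiagonal system with sub-diagonal a
    (a[0] unused), diagonal b, and super-diagonal c (c[-1] unused)."""
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):            # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1-D diffusion-like system (-1, 2, -1), the shape ADI produces per grid line
n = 50
a, b, c = -np.ones(n), 2.0 * np.ones(n), -np.ones(n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
x_true = np.sin(np.linspace(0.0, 3.0, n))
x = thomas(a, b, c, A @ x_true)
```

The two sequential sweeps are inherently serial per line, which is why GPU speedup comes from solving many independent lines at once rather than from parallelising a single solve.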
Reconstruction of structural damage based on reflection intensity spectra of fiber Bragg gratings
NASA Astrophysics Data System (ADS)
Huang, Guojun; Wei, Changben; Chen, Shiyuan; Yang, Guowei
2014-12-01
We present an approach for structural damage reconstruction based on the reflection intensity spectra of fiber Bragg gratings (FBGs). Our approach incorporates the finite element method, transfer matrix (T-matrix), and genetic algorithm to solve the inverse photo-elastic problem of damage reconstruction, i.e. to identify the location, size, and shape of a defect. By introducing a parameterized characterization of the damage information, the inverse photo-elastic problem is reduced to an optimization problem, and a relevant computational scheme was developed. The scheme iteratively searches for the solution to the corresponding direct photo-elastic problem until the simulated and measured (or target) reflection intensity spectra of the FBGs near the defect coincide within a prescribed error. Proof-of-concept validations of our approach were performed numerically and experimentally using both holed and cracked plate samples as typical cases of plane-stress problems. The damage identifiability was simulated by changing the deployment of the FBG sensors, including the total number of sensors and their distance to the defect. Both the numerical and experimental results demonstrate that our approach is effective and promising. It provides us with a photo-elastic method for developing a remote, automatic damage-imaging technique that substantially improves damage identification for structural health monitoring.
NASA Astrophysics Data System (ADS)
Sourbier, F.; Operto, S.; Virieux, J.
2006-12-01
We present a distributed-memory parallel algorithm for 2D visco-acoustic full-waveform inversion of wide-angle seismic data. Our code is written in Fortran 90 and uses MPI for parallelism. The algorithm was applied to a real wide-angle data set recorded by 100 OBSs with a 1-km spacing in the eastern Nankai trough (Japan) to image the deep structure of the subduction zone. Full-waveform inversion is applied sequentially to discrete frequencies, proceeding from the low to the high frequencies. The inverse problem is solved with a classic gradient method. Full-waveform modeling is performed with a frequency-domain finite-difference method. In the frequency domain, solving the wave equation requires the resolution of a large unsymmetric system of linear equations. We use the massively parallel direct solver MUMPS (http://www.enseeiht.fr/irit/apo/MUMPS) for distributed-memory computers to solve this system. The MUMPS solver is based on a multifrontal method for the parallel factorization. The MUMPS algorithm is subdivided into 3 main steps: first, a symbolic analysis step that performs re-ordering of the matrix coefficients to minimize the fill-in of the matrix during the subsequent factorization, together with an estimation of the assembly tree of the matrix. Second, the factorization is performed with dynamic scheduling to accommodate numerical pivoting and provides the LU factors distributed over all the processors. Third, the resolution is performed for multiple sources. To compute the gradient of the cost function, 2 simulations per shot are required (one to compute the forward wavefield and one to back-propagate the residuals). The multi-source resolutions can be performed in parallel with MUMPS. In the end, each processor stores in core a sub-domain of all the solutions. These distributed solutions can be exploited to compute the gradient of the cost function in parallel.
Since the gradient of the cost function is a weighted stack of the shot and residual solutions of MUMPS, each processor computes the corresponding sub-domain of the gradient. In the end, the gradient is centralized on the master processor using a collective communication. The gradient is scaled by the diagonal elements of the Hessian matrix. This scaling is computed only once per frequency, before the first iteration of the inversion. Estimation of the diagonal terms of the Hessian requires performing one simulation per non-redundant shot and receiver position. The same strategy as the one used for the gradient is used to compute the diagonal Hessian in parallel. This algorithm was applied to a dense wide-angle data set recorded by 100 OBSs in the eastern Nankai trough, offshore Japan. Thirteen frequencies ranging from 3 to 15 Hz were inverted. Twenty iterations per frequency were computed, leading to 260 tomographic velocity models of increasing resolution. The velocity model dimensions are 105 km x 25 km, corresponding to a finite-difference grid of 4201 x 1001 points with a 25-m grid interval. The number of shots was 1005 and the number of inverted OBS gathers was 93. The inversion requires 20 days on 6 32-bit dual-processor nodes with 4 GB of RAM per node when only the LU factorization is performed in parallel. Preliminary estimates of the time required to perform the inversion with the fully parallelized code are 6 and 4 days using 20 and 50 processors, respectively.
ERIC Educational Resources Information Center
Adachi, Kohei
2009-01-01
In component analysis solutions, post-multiplying a component score matrix by a nonsingular matrix can be compensated by applying its inverse to the corresponding loading matrix. To eliminate this indeterminacy on nonsingular transformation, we propose Joint Procrustes Analysis (JPA) in which component score and loading matrices are simultaneously…
NASA Astrophysics Data System (ADS)
Xiao, X.; Cohan, D. S.
2009-12-01
Substantial uncertainties in current emission inventories were detected by the Texas Air Quality Study 2006 (TexAQS 2006) intensive field program. These emission uncertainties have caused large inaccuracies in model simulations of air quality and its responses to management strategies. To improve the quantitative understanding of the temporal, spatial, and categorical distributions of primary pollutant emissions by utilizing the corresponding measurements collected during TexAQS 2006, we implemented both a recursive Kalman filter and a batch matrix-inversion 4-D data assimilation (FDDA) method in an iterative inverse modeling framework based on the CMAQ-DDM model. Equipped with the decoupled direct method, CMAQ-DDM enables simultaneous calculation of the sensitivity coefficients of pollutant concentrations to emissions to be used in the inversions. Primary pollutant concentrations measured by multiple platforms (TCEQ ground-based, NOAA WP-3D aircraft and Ronald H. Brown vessel, and UH Moody Tower) during TexAQS 2006 have been integrated for use in the inverse modeling. First, pseudo-data analyses were conducted to assess the two methods, taking a coarse-spatial-resolution emission inventory as a test case. Model base-case concentrations of isoprene and ozone at arbitrarily selected ground grid cells were perturbed to generate pseudo measurements with different assumed Gaussian uncertainties expressed by 1-sigma standard deviations. Single-species inversions were conducted with both methods for isoprene and NOx surface emissions from eight states in the Southeastern United States by using the pseudo measurements of isoprene and ozone, respectively. Utilization of ozone pseudo data to invert for NOx emissions serves only the purpose of method assessment.
Both the Kalman filter and FDDA methods show good performance in tuning arbitrarily shifted a priori emissions to the base case “true” values within 3-4 iterations even for the nonlinear responses of ozone to NOx emissions. While the Kalman filter has better performance under the situation of very large observational uncertainties, the batch matrix FDDA method is better suited for incorporating temporally and spatially irregular data such as those measured by NOAA aircraft and ship. After validating the methods with the pseudo data, the inverse technique is applied to improve emission estimates of NOx from different source sectors and regions in the Houston metropolitan area by using NOx measurements during TexAQS 2006. EPA NEI2005-based and Texas-specified Emission Inventories for 2006 are used as the a priori emission estimates before optimization. The inversion results will be presented and discussed. Future work will conduct inverse modeling for additional species, and then perform a multi-species inversion for emissions consistency and reconciliation with secondary pollutants such as ozone.
Recursive inversion of externally defined linear systems
NASA Technical Reports Server (NTRS)
Bach, Ralph E., Jr.; Baram, Yoram
1988-01-01
The approximate inversion of an internally unknown linear system, given by its impulse response sequence, by an inverse system having a finite impulse response, is considered. The recursive least squares procedure is shown to have an exact initialization, based on the triangular Toeplitz structure of the matrix involved. The proposed approach also suggests solutions to the problems of system identification and compensation.
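The underlying problem, finding a finite-impulse-response inverse of a system known only through its impulse response, can be posed as a least-squares fit with a triangular Toeplitz convolution matrix. The sketch below sets up and solves that system directly with a generic solver; the paper's actual contribution, an exact initialization of the recursive least-squares procedure exploiting the Toeplitz structure, is not reproduced here.

```python
import numpy as np

def fir_inverse(h, n_taps):
    """Least-squares FIR inverse of impulse response h: find g minimising
    ||conv(h, g) - delta||^2 via the triangular Toeplitz convolution matrix."""
    n_fit = len(h) + n_taps - 1
    H = np.zeros((n_fit, n_taps))
    for j in range(n_taps):          # column j = h shifted down by j samples
        H[j:j + len(h), j] = h
    delta = np.zeros(n_fit)
    delta[0] = 1.0
    g, *_ = np.linalg.lstsq(H, delta, rcond=None)
    return g

h = np.array([1.0, 0.5])      # minimum-phase example; exact inverse is (-0.5)**k
g = fir_inverse(h, n_taps=16)
```

For this minimum-phase example the truncated exact inverse decays geometrically, so a short FIR filter already approximates it to high accuracy.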
NASA Astrophysics Data System (ADS)
Orbaek, Alvin W.; Barron, Andrew R.
2013-03-01
Comparison of AFM and SEM images of single walled carbon nanotubes (SWNTs) grown within a dielectric matrix reveals subterranean nanotubes that are present within the matrix and as such can be charge-screened by the dielectric. Under adequate imaging conditions for the SWNT/silica sample, the intensity of isolated nanotubes is found to be inversely proportional to the instrument dwell time (i.e., shorter dwell times make SWNT intensities brighter). The threshold dwell time required for isolated tubes to be visible was found to be 10 μs; moreover, the change in intensity was found to be nanotube specific, i.e., different SWNTs respond differently at different dwell times. The results indicate that care should be taken when attempting to quantify number density and length distributions of SWNTs on or within a dielectric matrix. Electronic supplementary information (ESI) available: Plots of SEM for cross-over points, raw SEM images used for Fig. 5 and Fig. 6, SEM image of scattering centre, and SEM images with various scan directions at 10 μs dwell time. 
See DOI: 10.1039/c3nr00142c
Geostatistical regularization operators for geophysical inverse problems on irregular meshes
NASA Astrophysics Data System (ADS)
Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. OA
2018-05-01
Irregular meshes allow complicated subsurface structures to be included in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial, and different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are defined using only the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows information about geological structures to be incorporated. We propose an approach to calculating geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D synthetic surface electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results than the anisotropic smoothness constraints.
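The core construction, a regularization operator obtained from the eigendecomposition of a prior covariance matrix, can be sketched as follows. The exponential correlation model, the 1-D cell coordinates, and the choice W = C^(-1/2) are illustrative assumptions; the paper's actual operator construction may differ in detail.

```python
import numpy as np

def geostat_operator(coords, corr_len=5.0, var=1.0, eps=1e-10):
    """Regularization operator from the eigendecomposition of an exponential
    covariance model C_ij = var * exp(-d_ij / corr_len); W = C^(-1/2)
    penalises roughness relative to the prior covariance (illustrative choice)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = var * np.exp(-d / corr_len)
    w, V = np.linalg.eigh(C)          # C is symmetric positive definite
    W = V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T
    return C, W

# cell centres along a 1-D profile (stand-in for irregular mesh cell centres)
coords = 2.0 * np.arange(30.0)[:, None]
C, W = geostat_operator(coords)
```

Because the covariance couples every pair of cells within the correlation length, the operator W involves a larger neighbourhood than a nearest-neighbour smoothness stencil, which is exactly the property the abstract describes.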
NASA Astrophysics Data System (ADS)
Jiang, Jinghui; Zhou, Han; Ding, Jian; Zhang, Fan; Fan, Tongxiang; Zhang, Di
2015-08-01
A bio-template approach was employed to construct an inverse V-type TiO2-based photocatalyst with AgBr well distributed in a TiO2 matrix, using dead Troides helena wings with inverse V-type scales as the template. A cross-linked titanium precursor with a homogeneous hydrolysis rate, good fluidity, and low viscosity was employed to facilitate a perfect duplication of the template and the dispersion of AgBr, based on appropriate pretreatment of the template with alkali and acid. The as-synthesized inverse V-type TiO2/AgBr can be turned into inverse V-type TiO2/Ag0 through AgBr photolysis during photocatalysis, achieving in situ deposition of Ag0 in the TiO2 matrix and thereby avoiding deformation of the surface microstructure inherited from the template. The results showed that the cooperation of the perfect inverse V-type structure and the well distributed TiO2/Ag0 microstructures can efficiently boost photosynthetic water oxidation compared to non-inverse V-type TiO2/Ag0 and to TiO2/Ag0 prepared without the template. The anti-reflection function of the inverse V-type structure and the plasmonic effect of Ag0 may account for the enhanced photon capture and efficient photoelectric conversion.
NASA Astrophysics Data System (ADS)
Bigdeli, Abbas; Biglari-Abhari, Morteza; Salcic, Zoran; Tin Lai, Yat
2006-12-01
A new pipelined systolic array-based (PSA) architecture for matrix inversion is proposed. The PSA architecture is suitable for FPGA implementations as it efficiently uses the available resources of an FPGA. It is scalable for different matrix sizes and allows parameterisation that makes it suitable for customisation to application-specific needs. This new architecture has the advantage of [InlineEquation not available: see fulltext.] processing element complexity, compared to the [InlineEquation not available: see fulltext.] of other systolic array structures, where the size of the input matrix is given by [InlineEquation not available: see fulltext.]. The use of the PSA architecture for a Kalman filter, which requires different structures for different numbers of states, is illustrated as an implementation example. The resulting precision error is analysed and shown to be negligible.
NASA Technical Reports Server (NTRS)
Melbourne, William G.
1986-01-01
In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the color noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
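The estimation step being discussed is generalized least squares: when the differenced measurement errors are colored, the weighting matrix must be the inverse of the full (non-diagonal) error covariance rather than the customary diagonal one. A minimal sketch with a hypothetical differencing-induced covariance (the operator D and all dimensions are illustrative, not the GPS double-differencing geometry):

```python
import numpy as np

def weighted_lsq(A, y, C):
    """Generalized least squares with full error covariance C:
    x = (A^T C^-1 A)^-1 A^T C^-1 y."""
    W = np.linalg.inv(C)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 3))
x_true = np.array([1.0, -2.0, 0.5])

# hypothetical differencing operator: white noise w becomes colored noise D @ w,
# so the differenced errors have the non-diagonal covariance C = D D^T
D = np.eye(40) - 0.5 * np.diag(np.ones(39), 1)
C = D @ D.T
y = A @ x_true + D @ (0.01 * rng.standard_normal(40))
x = weighted_lsq(A, y, C)
```

Using the identity matrix in place of C here would still give an unbiased estimate, but a statistically less efficient one; that efficiency loss is what the fully redundant algorithm avoids while keeping a diagonal weighting matrix in certain cases.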
A Strassen-Newton algorithm for high-speed parallelizable matrix inversion
NASA Technical Reports Server (NTRS)
Bailey, David H.; Ferguson, Helaman R. P.
1988-01-01
Techniques are described for computing matrix inverses by algorithms that are highly suited to massively parallel computation. The techniques are based on an algorithm suggested by Strassen (1969). Variations of this scheme use matrix Newton iterations and other methods to improve the numerical stability while at the same time preserving a very high level of parallelism. One-processor Cray-2 implementations of these schemes range from one that is up to 55 percent faster than a conventional library routine to one that is slower than a library routine but achieves excellent numerical stability. The problem of computing the solution to a single set of linear equations is discussed, and it is shown that this problem can also be solved efficiently using these techniques.
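Strassen-style matrix inversion recurses on the Schur complement of the leading block. A dense NumPy sketch follows, without pivoting and without the stabilizing Newton iterations discussed in the paper; power-of-two sizes are assumed for simplicity.

```python
import numpy as np

def strassen_inverse(A):
    """Recursive blockwise inversion via the Schur complement (after Strassen, 1969).
    No pivoting: every leading block must be invertible, which is the stability
    limitation that the matrix Newton iteration variants address."""
    n = A.shape[0]
    if n == 1:
        return np.array([[1.0 / A[0, 0]]])
    m = n // 2                               # power-of-two sizes assumed
    A11, A12 = A[:m, :m], A[:m, m:]
    A21, A22 = A[m:, :m], A[m:, m:]
    R1 = strassen_inverse(A11)               # A11^-1
    R2 = A21 @ R1
    R3 = R1 @ A12
    R6 = strassen_inverse(A21 @ R3 - A22)    # -(Schur complement)^-1
    C12 = R3 @ R6
    C21 = R6 @ R2
    C11 = R1 - R3 @ C21
    return np.block([[C11, C12],
                     [C21, -R6]])

rng = np.random.default_rng(3)
B = rng.standard_normal((8, 8))
A = B @ B.T + 8.0 * np.eye(8)                # SPD, so every leading block is invertible
Ainv = strassen_inverse(A)
```

All the work is in matrix multiplications of half-size blocks, which is what makes the scheme highly parallelizable and lets Strassen's fast multiplication lower the asymptotic cost.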
A physiologically motivated sparse, compact, and smooth (SCS) approach to EEG source localization.
Cao, Cheng; Akalin Acar, Zeynep; Kreutz-Delgado, Kenneth; Makeig, Scott
2012-01-01
Here, we introduce a novel approach to the EEG inverse problem based on the assumption that the principal cortical sources of multi-channel EEG recordings are spatially sparse, compact, and smooth (SCS). To enforce these characteristics in solutions to the EEG inverse problem, we propose a correlation-variance model that factors a cortical source-space covariance matrix into the product of a pre-given correlation coefficient matrix and the square root of a diagonal variance matrix learned from the data under a Bayesian learning framework. We tested the SCS method using simulated EEG data with various SNRs and applied it to a real ECoG data set. We compare the results of SCS to those of an established SBL algorithm.
An efficient implementation of a high-order filter for a cubed-sphere spectral element model
NASA Astrophysics Data System (ADS)
Kang, Hyun-Gyu; Cheong, Hyeong-Bin
2017-03-01
A parallel-scalable, isotropic, scale-selective spatial filter was developed for the cubed-sphere spectral element model on the sphere. The filter equation is a high-order elliptic (Helmholtz) equation based on the spherical Laplacian operator, which is transformed into cubed-sphere local coordinates. The Laplacian operator is discretized on the computational domain, i.e., on each cell, by the spectral element method with Gauss-Lobatto Lagrange interpolating polynomials (GLLIPs) as the orthogonal basis functions. On the global domain, the discrete filter equation yields a linear system represented by a highly sparse matrix. The density of this matrix increases quadratically (linearly) with the order of the GLLIP (order of the filter), and the linear system is solved in only O(Ng) operations, where Ng is the total number of grid points. The solution, obtained by a row-reduction method, demonstrated the typical accuracy and convergence rate of the cubed-sphere spectral element method. To achieve computational efficiency on parallel computers, the linear system was treated by an inverse matrix method (a sparse matrix-vector multiplication). The density of the inverse matrix was lowered to only a few times that of the original sparse matrix without degrading the accuracy of the solution. For better computational efficiency, a local-domain high-order filter was introduced: the filter equation is applied to multiple cells, and then only the central cell is used to reconstruct the filtered field. The parallel efficiency of applying the inverse matrix method to the global- and local-domain filters was evaluated by the scalability on a distributed-memory parallel computer. The scale-selective performance of the filter was demonstrated on the Earth's topography. The usefulness of the filter as a hyper-viscosity for the vorticity equation was also demonstrated.
NASA Astrophysics Data System (ADS)
Gourdji, S.; Yadav, V.; Karion, A.; Mueller, K. L.; Kort, E. A.; Conley, S.; Ryerson, T. B.; Nehrkorn, T.
2017-12-01
The ability of atmospheric inverse models to detect, spatially locate and quantify emissions from large point sources in urban domains needs improvement before inversions can be used reliably as carbon monitoring tools. In this study, we use the Aliso Canyon natural gas leak from October 2015 to February 2016 (near Los Angeles, CA) as a natural tracer experiment to assess inversion quality, by comparison with published estimates of leak rates calculated using a mass-balance approach (Conley et al., 2016). Fourteen dedicated flights were flown in horizontal transects downwind, throughout the duration of the leak, to sample CH4 mole fractions and collect meteorological information for use in the mass-balance estimates. The same CH4 observational data were then used here in geostatistical inverse models with no prior assumptions about the leak location or emission rate, and with flux sensitivity matrices generated using the WRF-STILT atmospheric transport model. Transport model errors were assessed by comparing WRF-STILT wind speeds, wind directions and planetary boundary layer (PBL) heights to those observed on the plane; the impact of these errors on the inversions, and the optimal inversion setup for reducing their influence, were also explored. WRF-STILT provides a reasonable simulation of true atmospheric conditions on most flight dates, given the complex terrain and known difficulties in simulating atmospheric transport under such conditions. Moreover, even large (>120°) errors in wind direction were found to be tolerable in terms of spatially locating the leak within a 5-km radius of the actual site. Errors in the WRF-STILT wind speed (>50%) and PBL height have more negative impacts on the inversions, with overestimated wind speeds (typically corresponding to underestimated PBL heights) resulting in overestimated leak rates, and vice versa.
Coarser data averaging intervals and the use of observed wind speed errors in the model-data mismatch covariance matrix are shown to help reduce the influence of transport model errors, by averaging out compensating errors and de-weighting the influence of problematic observations. This study helps to enable the integration of aircraft measurements with other tower-based data in larger inverse models that can reliably detect, locate and quantify point source emissions in urban areas.
Kinematics of an in-parallel actuated manipulator based on the Stewart platform mechanism
NASA Technical Reports Server (NTRS)
Williams, Robert L., II
1992-01-01
This paper presents kinematic equations and solutions for an in-parallel actuated robotic mechanism based on Stewart's platform. These equations are required for inverse position and resolved rate (inverse velocity) platform control. NASA LaRC has a Vehicle Emulator System (VES) platform designed by MIT which is based on Stewart's platform. The inverse position solution is straightforward and computationally inexpensive. Given the desired position and orientation of the moving platform with respect to the base, the lengths of the prismatic leg actuators are calculated. The forward position solution is more complicated and theoretically has 16 solutions. The position and orientation of the moving platform with respect to the base are calculated given the leg actuator lengths. Two methods are pursued in this paper to solve this problem. The resolved rate (inverse velocity) solution is derived. Given the desired Cartesian velocity of the end-effector, the required leg actuator rates are calculated. The Newton-Raphson Jacobian matrix resulting from the second forward position kinematics solution is a modified inverse Jacobian matrix. Examples and simulations are given for the VES.
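The inverse position solution described above reduces to one vector norm per leg. A minimal sketch, with an invented hexagonal geometry standing in for the actual VES attachment points:

```python
import numpy as np

# Inverse position solution for a Stewart-platform-type mechanism: given the
# platform pose (position p, rotation R), each prismatic leg length is
# |p + R @ a_i - b_i|, with b_i / a_i the attachment points in the base /
# platform frames. The geometry below is illustrative, not the VES's.
def leg_lengths(p, R, base_pts, plat_pts):
    return np.linalg.norm(p + plat_pts @ R.T - base_pts, axis=1)

ang = np.deg2rad(np.arange(6) * 60.0)
base_pts = np.c_[2.0 * np.cos(ang), 2.0 * np.sin(ang), np.zeros(6)]   # base radius 2
plat_pts = np.c_[1.0 * np.cos(ang), 1.0 * np.sin(ang), np.zeros(6)]   # platform radius 1

p = np.array([0.1, -0.2, 1.5])               # desired platform position
c, s = np.cos(0.1), np.sin(0.1)              # small yaw rotation
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
L = leg_lengths(p, R, base_pts, plat_pts)    # six prismatic actuator lengths
```

As the abstract notes, this direction is computationally trivial; it is the forward problem (pose from leg lengths) that admits up to 16 solutions and requires iteration.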
Laterally constrained inversion for CSAMT data interpretation
NASA Astrophysics Data System (ADS)
Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun
2015-10-01
Laterally constrained inversion (LCI) has been successfully applied to the inversion of DC resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. A weighting matrix is applied to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this improves the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than the traditional single-station inversion; for the noisy data, the true model is recovered even at a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. We then re-invert two CSAMT datasets, collected respectively in a watershed and in a coal mine area in Northern China, and compare our results with those from previous inversions. The comparison for the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global search algorithm (simulated annealing, SA) for the watershed shows that, although both methods deliver very similar results, the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey also identify a conductive water-bearing zone that was not revealed by the previous inversions. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
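The sensitivity-balancing step can be sketched as a column scaling of the Jacobian; the Jacobian, scale factors, and damping below are invented for illustration and are not the paper's implementation.

```python
import numpy as np

# Illustrative sketch of Jacobian preconditioning: scale each column so all
# model parameters contribute comparably, solve the damped normal equations in
# the scaled variables, then map the update back to the original units.
rng = np.random.default_rng(1)
scales = np.array([1e3, 1.0, 1e-3, 1.0, 1e2, 1.0, 1e-2, 1.0])
J = rng.normal(size=(50, 8)) * scales          # badly balanced sensitivities
r = rng.normal(size=50)                        # data residual vector

w = 1.0 / np.linalg.norm(J, axis=0)            # per-parameter weights
Jw = J * w                                     # column-scaled (preconditioned) Jacobian
dm_w = np.linalg.solve(Jw.T @ Jw + 1e-6 * np.eye(8), Jw.T @ r)
dm = w * dm_w                                  # model update in original units
```

The scaled Jacobian has a condition number orders of magnitude smaller than the raw one, which is the mechanism behind the improved convergence the abstract reports.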
NASA Astrophysics Data System (ADS)
Klein, Ole; Cirpka, Olaf A.; Bastian, Peter; Ippisch, Olaf
2017-04-01
In the geostatistical inverse problem of subsurface hydrology, continuous hydraulic parameter fields, in most cases hydraulic conductivity, are estimated from measurements of dependent variables, such as hydraulic heads, under the assumption that the parameter fields are autocorrelated random space functions. Upon discretization, the continuous fields become large parameter vectors with O(10^4-10^7) elements. While cokriging-like inversion methods have been shown to be efficient for highly resolved parameter fields when the number of measurements is small, they require the calculation of the sensitivity of each measurement with respect to all parameters, which may become prohibitive with large sets of measured data such as those arising from transient groundwater flow. We present a Preconditioned Conjugate Gradient method for the geostatistical inverse problem, in which a single adjoint equation needs to be solved to obtain the gradient of the objective function. Using the autocovariance matrix of the parameters as preconditioning matrix, expensive multiplications with its inverse can be avoided, and the number of iterations is significantly reduced. We use a randomized spectral decomposition of the posterior covariance matrix of the parameters to perform a linearized uncertainty quantification of the parameter estimate. The feasibility of the method is tested by virtual examples of head observations in steady-state and transient groundwater flow. These synthetic tests demonstrate that transient data can reduce both parameter uncertainty and time spent conducting experiments, while the presented methods are able to handle the resulting large number of measurements.
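The key point, that preconditioning with the covariance itself requires only multiplications with Q and never a solve, can be shown on a small stand-in problem; the sizes, kernel, and measurement operator below are invented, and the dense Q inverse appears only to build the tiny demo system, not in the preconditioner.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Minimal PCG sketch: the normal-equation matrix is A = Q^{-1} + H^T R^{-1} H.
# With the prior covariance Q as preconditioner, the preconditioning step is a
# cheap multiplication with Q -- no solve with Q is ever needed.
n, m = 200, 15
x = np.linspace(0.0, 1.0, n)
Q = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)      # exponential covariance
rng = np.random.default_rng(2)
H = rng.normal(size=(m, n)) / np.sqrt(n)                # measurement operator
A = np.linalg.inv(Q) + 100.0 * H.T @ H                  # R = 0.01 * I (demo only)
b = 100.0 * H.T @ rng.normal(size=m)

n_iter = []
M = LinearOperator((n, n), matvec=lambda v: Q @ v)      # preconditioner = mult. by Q
sol, info = cg(A, b, M=M, maxiter=500, callback=lambda xk: n_iter.append(1))
```

Because M A = I + (a rank-m term), PCG converges in roughly m + 1 iterations here, illustrating the "significantly reduced number of iterations" claim.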
Accuracy limitations of hyperbolic multilateration systems
DOT National Transportation Integrated Search
1973-03-22
The report is an analysis of the accuracy limitations of hyperbolic multilateration systems. A central result is a demonstration that the inverse of the covariance matrix for positional errors corresponds to the moment of inertia matrix of a simple m...
Analysis of harmonic spline gravity models for Venus and Mars
NASA Technical Reports Server (NTRS)
Bowin, Carl
1986-01-01
Methodology utilizing harmonic splines for determining the true gravity field from Line-Of-Sight (LOS) acceleration data from planetary spacecraft missions was tested. As is well known, the LOS data incorporate errors in the zero reference level that appear to be inherent in the processing procedure used to obtain the LOS vectors. The proposed method offers a solution to this problem. The harmonic spline program was converted from the VAX 11/780 to the Ridge 32C computer. A problem with the matrix inversion routine was solved, improving inversion of the data matrices used in the Optimum Estimate program for global Earth studies. The problem of obtaining a successful matrix inversion for a single rev supplemented by data for the two adjacent revs still remains.
NASA Technical Reports Server (NTRS)
Puliafito, E.; Bevilacqua, R.; Olivero, J.; Degenhardt, W.
1992-01-01
The formal retrieval error analysis of Rodgers (1990) allows the quantitative determination of such retrieval properties as measurement error sensitivity, resolution, and inversion bias. This technique was applied to five numerical inversion techniques and two nonlinear iterative techniques used for the retrieval of middle atmospheric constituent concentrations from limb-scanning millimeter-wave spectroscopic measurements. It is found that the iterative methods have better vertical resolution, but are slightly more sensitive to measurement error than constrained matrix methods. The iterative methods converge to the exact solution, whereas two of the matrix methods under consideration have an explicit constraint, the sensitivity of the solution to the a priori profile. Tradeoffs of these retrieval characteristics are presented.
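The trade-off the abstract describes can be made concrete with the standard diagnostics of the Rodgers formalism; the forward model, grids, and covariances below are invented for illustration.

```python
import numpy as np

# Sketch of Rodgers-style retrieval diagnostics: for a constrained linear
# retrieval x_hat = G y, the averaging kernel A = G K characterizes vertical
# resolution and diag(G Se G^T) the measurement-error sensitivity.
n = 40
z = np.linspace(0.0, 80.0, n)                  # altitude grid (km), hypothetical
K = np.exp(-0.5 * ((z[None, :] - z[:, None]) / 8.0) ** 2)   # smooth weighting functions
Se_inv = np.eye(n) / 0.01                      # inverse measurement-error covariance
R = 0.1 * np.eye(n)                            # explicit (Tikhonov-type) constraint

G = np.linalg.solve(K.T @ Se_inv @ K + R, K.T @ Se_inv)     # retrieval operator
A = G @ K                                      # averaging kernel (resolution matrix)
err_sens = np.sqrt(0.01 * np.diag(G @ G.T))    # propagated measurement error
```

Strengthening the constraint reduces the error sensitivity but pushes the averaging kernel further from the identity, i.e., it degrades resolution, which is exactly the trade-off between the iterative and constrained matrix methods discussed above.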
Mathematical Problems in Imaging in Random Media
2015-01-15
of matrix Γ in [1], in the context of intensity-based imaging of remote sources in random waveguides. That work is a direct application of the results … for which j we have z > ε^{-2} S_j), and filters them out. It images by time-reversing the received wave, weighting the modes based on their being coherent … (transport-based inversion), so we regularize to obtain (|ξ̂_1|^2/β_1, …, |ξ̂_N|^2/β_N)^T ≈ Σ_{j=1}^{J} e^{|Λ_j| Z} (u_j^T B Q^{-1} M) u_j, (31) for J chosen so that
Recursive inversion of externally defined linear systems by FIR filters
NASA Technical Reports Server (NTRS)
Bach, Ralph E., Jr.; Baram, Yoram
1989-01-01
The approximate inversion of an internally unknown linear system, given by its impulse response sequence, by an inverse system having a finite impulse response, is considered. The recursive least-squares procedure is shown to have an exact initialization, based on the triangular Toeplitz structure of the matrix involved. The proposed approach also suggests solutions to the problem of system identification and compensation.
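The triangular Toeplitz structure the abstract exploits can be demonstrated directly: the first N taps of the inverse FIR filter solve T(h) g = e0, where T(h) is the lower-triangular Toeplitz matrix of the impulse response. The impulse response below is invented, and `solve_toeplitz` stands in for the paper's exactly-initialized recursion.

```python
import numpy as np
from scipy.linalg import solve_toeplitz  # exploits the Toeplitz structure

# The first N taps of the exact inverse of a system with impulse response h
# solve the lower-triangular Toeplitz system T(h) g = e0, i.e.
# (h * g)[0:N] = (1, 0, ..., 0).  Illustrative system, not the paper's.
h = np.array([1.0, 0.5, 0.25, 0.125])         # impulse response, h[0] != 0
N = 32
col = np.zeros(N); col[: len(h)] = h          # first column of T(h)
row = np.zeros(N); row[0] = h[0]              # first row (lower triangular)
e0 = np.zeros(N); e0[0] = 1.0
g = solve_toeplitz((col, row), e0)            # FIR approximation of the inverse

delta = np.convolve(h, g)[:N]                 # close to a unit impulse by construction
```

Because the system is triangular, the solution can equivalently be built tap by tap with the recursion g[0] = 1/h[0], g[k] = -(Σ h[j] g[k-j])/h[0], which is the exact-initialization property noted in the abstract.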
Kouri, Donald J [Houston, TX; Vijay, Amrendra [Houston, TX; Zhang, Haiyan [Houston, TX; Zhang, Jingfeng [Houston, TX; Hoffman, David K [Ames, IA
2007-05-01
A method and system for solving the inverse acoustic scattering problem using an iterative approach that takes account of half-off-shell transition matrix element (near-field) information. The Volterra inverse series correctly predicts the first two moments of the interaction, while the Fredholm inverse series is correct only for the first moment; the Volterra approach also provides a method for exactly obtaining interactions that can be written as a sum of delta functions.
NASA Astrophysics Data System (ADS)
Shi, X.; Utada, H.; Jiaying, W.
2009-12-01
The vector finite-element method combined with divergence corrections based on the magnetic field H, referred to as the VFEH++ method, is developed to simulate the magnetotelluric (MT) responses of 3-D conductivity models. The advantages of the new VFEH++ method are the use of edge elements, which eliminates vector parasites, and the divergence corrections, which explicitly guarantee the divergence-free conditions in the whole modeling domain. 3-D MT topographic responses are modeled using the new VFEH++ method and compared with those calculated by other numerical methods. The results show that MT responses can be modeled with high accuracy using the VFEH++ method. The VFEH++ algorithm is also employed for 3-D MT data inversion incorporating topography. The 3-D MT inverse problem is formulated as a minimization problem for the regularized misfit function. To avoid the huge memory requirement and the very long time needed to compute the Jacobian sensitivity matrix for the Gauss-Newton method, we employ the conjugate gradient (CG) approach to solve the inversion equation. In each iteration of the CG algorithm, the main computational cost is the product of the Jacobian sensitivity matrix with a model vector x, or of its transpose with a data vector y, each of which can be transformed into a pseudo-forward modeling run. This avoids explicitly computing and storing the full Jacobian matrix, leading to considerable savings in the memory required by the inversion program on a PC. The performance of the CG algorithm is illustrated by several typical 3-D models with flat and topographic surfaces. The results show that the VFEH++ and CG algorithms can be effectively employed for 3-D MT field data inversion.
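The matrix-free idea, needing only the two products J x and J^T y rather than J itself, can be sketched with a CGLS loop; the dense Jacobian below is an invented stand-in for the two pseudo-forward modeling runs.

```python
import numpy as np

# Matrix-free CGLS on J dm = r using only the products J @ x and J.T @ y,
# each of which the paper evaluates via a pseudo-forward modeling run
# instead of forming J.  Here a dense J stands in for those black boxes.
rng = np.random.default_rng(3)
J = rng.normal(size=(120, 60))
r = rng.normal(size=120)
jac_vec = lambda x: J @ x                      # stand-in for one pseudo-forward run
jacT_vec = lambda y: J.T @ y                   # stand-in for the adjoint run

def cgls(jac_vec, jacT_vec, r, n, iters=200, tol=1e-10):
    x = np.zeros(n)
    s = r.copy()                               # data-space residual
    g = jacT_vec(s)                            # gradient J^T s
    p = g.copy()
    gg = g @ g
    for _ in range(iters):
        q = jac_vec(p)
        alpha = gg / (q @ q)
        x += alpha * p
        s -= alpha * q
        g = jacT_vec(s)
        gg_new = g @ g
        if np.sqrt(gg_new) < tol:
            break
        p = g + (gg_new / gg) * p
        gg = gg_new
    return x

dm = cgls(jac_vec, jacT_vec, r, 60)            # least-squares model update
```

Each iteration costs two "forward-like" evaluations, so the full Jacobian is never computed or stored, which is the memory saving the abstract claims.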
NASA Technical Reports Server (NTRS)
Alfano, Robert R. (Inventor); Cai, Wei (Inventor)
2007-01-01
A reconstruction technique for reducing computation burden in the 3D image processes, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to thereby provide high-speed inverse computations. The inverse algorithm uses a hybrid transfer to provide fast Fourier inversion for data of multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.
Efficient Storage Scheme of Covariance Matrix during Inverse Modeling
NASA Astrophysics Data System (ADS)
Mao, D.; Yeh, T. J.
2013-12-01
During stochastic inverse modeling, the covariance matrix of geostatistics-based methods carries the information about the geologic structure. Its update during iterations reflects the decrease of uncertainty as observed data are incorporated. For large-scale problems, its storage and update consume too much memory and too many computational resources. In this study, we propose a new efficient scheme for storage and update. The Compressed Sparse Column (CSC) format is utilized to store the covariance matrix, and users can choose how many entries to store based on correlation scales, since entries beyond several correlation scales are usually not very informative for inverse modeling. After every iteration, only the diagonal terms of the covariance matrix are updated exactly. The off-diagonal terms are calculated and updated from shortened correlation scales with a pre-assigned exponential model. The correlation scales are shortened by a coefficient, e.g. 0.95, every iteration to represent the decrease of uncertainty. There is no universal coefficient for all problems, and users are encouraged to try several values. The new scheme is first tested on 1D examples, and the estimated results and uncertainty are compared with those of the traditional full-storage method. Finally, a large-scale numerical model is used to validate the new scheme.
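A minimal 1D sketch of the scheme, with invented grid, variance, and cutoff: only entries within a few correlation scales are stored in CSC format, and the off-diagonals are rebuilt from an exponential model whose scale shrinks each iteration.

```python
import numpy as np
import scipy.sparse as sp

# Illustrative sketch of the storage scheme: keep only covariance entries
# within `cutoff` correlation scales, and regenerate the off-diagonals from an
# exponential model with a correlation scale shrunk every iteration.
n, dx, cutoff = 500, 1.0, 3.0                  # grid size, spacing, cutoff in scales
x = np.arange(n) * dx

def sparse_exp_cov(var, scale):
    rows, cols, vals = [], [], []
    half = int(cutoff * scale / dx)            # band half-width in grid points
    for i in range(n):
        j = np.arange(max(0, i - half), min(n, i + half + 1))
        rows.extend([i] * len(j))
        cols.extend(j.tolist())
        vals.extend((var * np.exp(-np.abs(x[i] - x[j]) / scale)).tolist())
    return sp.csc_matrix((vals, (rows, cols)), shape=(n, n))

scale = 20.0
Q = sparse_exp_cov(1.0, scale)
nnz0 = Q.nnz
for _ in range(5):                             # each iteration of the inversion:
    scale *= 0.95                              # shrink the scale (coefficient from the abstract)
    Q = sparse_exp_cov(Q.diagonal().mean(), scale)
```

As the scale shrinks, the stored band tightens, so memory use falls with the modeled decrease of uncertainty.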
Denoised Wigner distribution deconvolution via low-rank matrix completion
Lee, Justin; Barbastathis, George
2016-08-23
Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object's phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise.
2D Seismic Imaging of Elastic Parameters by Frequency Domain Full Waveform Inversion
NASA Astrophysics Data System (ADS)
Brossier, R.; Virieux, J.; Operto, S.
2008-12-01
Thanks to recent advances in parallel computing, full waveform inversion is today a tractable seismic imaging method for reconstructing physical parameters of the Earth's interior at different scales, ranging from the near-surface to the deep crust. We present a massively parallel 2D frequency-domain full-waveform algorithm for imaging visco-elastic media from multi-component seismic data. The forward problem (i.e., the solution of the frequency-domain 2D P-SV elastodynamic equations) is based on a low-order Discontinuous Galerkin (DG) method (P0 and/or P1 interpolations). Thanks to triangular unstructured meshes, the DG method allows accurate modeling of both body waves and surface waves in the presence of complex topography, for a discretization of 10 to 15 cells per shear wavelength. The frequency-domain DG system is solved efficiently for multiple sources with the parallel direct solver MUMPS. The local inversion procedure (i.e., minimization of the residuals between observed and computed data) is based on the adjoint-state method, which allows efficient computation of the gradient of the objective function. Applying the inversion hierarchically from low frequencies to higher ones defines a multiresolution imaging strategy that helps convergence towards the global minimum. In place of the expensive Newton algorithm, the combined use of the diagonal terms of the approximate Hessian matrix and of quasi-Newton optimization algorithms (Conjugate Gradient, L-BFGS, ...) improves the convergence of the iterative inversion. The distribution of forward-problem solutions over processors, driven by a mesh partitioning performed with METIS, allows most of the inversion to run in parallel. We present the main features of the parallel modeling/inversion algorithm, assess its scalability, and illustrate its performance with realistic synthetic case studies.
Shi, Yingzhong; Chung, Fu-Lai; Wang, Shitong
2015-09-01
Recently, a time-adaptive support vector machine (TA-SVM) was proposed for handling nonstationary datasets. While attractive performance has been reported, and the new classifier is distinctive in simultaneously solving several SVM subclassifiers locally and globally by using an elegant SVM formulation in an alternative kernel space, the coupling of subclassifiers requires the computation of a matrix inversion, resulting in a high computational burden in large nonstationary dataset applications. To overcome this shortcoming, an improved TA-SVM (ITA-SVM) is proposed using a common vector shared by all the SVM subclassifiers involved. ITA-SVM not only keeps an SVM formulation, but also avoids the computation of matrix inversion. We can thus realize its fast version, the improved time-adaptive core vector machine (ITA-CVM), for large nonstationary datasets by using the CVM technique. ITA-CVM has the merit of asymptotically linear time complexity for large nonstationary datasets and inherits the advantages of TA-SVM. The effectiveness of the proposed classifiers ITA-SVM and ITA-CVM is experimentally confirmed.
Inverse free steering law for small satellite attitude control and power tracking with VSCMGs
NASA Astrophysics Data System (ADS)
Malik, M. S. I.; Asghar, Sajjad
2014-01-01
Recent developments in integrated power and attitude control systems (IPACS) for small satellites have opened a new dimension for more complex and demanding space missions. This paper presents a new inverse-free steering approach for integrated power and attitude control using variable-speed single-gimbal control moment gyroscopes (VSCMGs). The proposed inverse-free steering law computes the VSCMG steering commands (gimbal rates and wheel accelerations) such that the error signal (the difference between command and output) in the feedback loop is driven to zero. An H∞ norm optimization approach is employed to synthesize the static matrix elements of the steering law for a static state of the VSCMG; these matrix elements are then suitably made dynamic for adaptation. To improve the performance of the proposed steering law while passing through a singular state of the CMG cluster (no torque output), the matrix elements of the steering law are suitably modified. This steering law is therefore capable of escaping internal singularities and using the full momentum capacity of the CMG cluster. Finally, two numerical examples for a satellite in low Earth orbit are simulated to test the proposed steering law.
NASA Technical Reports Server (NTRS)
Fijany, Amir; Djouani, Karim; Fried, George; Pontnau, Jean
1997-01-01
In this paper a new factorization technique for computing the inverse of the mass matrix and the operational-space mass matrix, as arising in the implementation of the operational space control scheme, is presented.
Plantet, C; Meimon, S; Conan, J-M; Fusco, T
2015-11-02
Exoplanet direct imaging with large ground-based telescopes requires eXtreme Adaptive Optics, which couples high-order adaptive optics and coronagraphy. A key element of such systems is the high-order wavefront sensor. We study here several high-order wavefront sensing approaches and, more precisely, compare their sensitivity to noise. Three techniques are considered: the classical Shack-Hartmann sensor, the pyramid sensor and the recently proposed LIFTed Shack-Hartmann sensor. They are compared in a unified framework based on precise diffractive models and on the Fisher information matrix, which conveys the information present in the data whatever the estimation method. The diagonal elements of the inverse of the Fisher information matrix, which we use as a figure of merit, are similar to noise-propagation coefficients. With these diagonal elements, the so-called "Fisher coefficients", we show that the LIFTed Shack-Hartmann and pyramid sensors outperform the classical Shack-Hartmann sensor. In the photon-noise regime, the LIFTed Shack-Hartmann and modulated pyramid sensors obtain similar overall noise propagation. The LIFTed Shack-Hartmann sensor, however, provides attractive noise properties on high orders.
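The figure of merit can be computed in a few lines for a linear Gaussian model; the sensor response, sizes, and noise level below are invented, not the paper's diffractive models.

```python
import numpy as np

# Illustrative linear-sensor sketch: for data d = H a + noise with covariance
# sigma^2 I, the Fisher matrix is F = H^T H / sigma^2, and diag(F^{-1}) -- the
# "Fisher coefficients" -- lower-bound the variance of any unbiased estimator
# of the modes a (Cramer-Rao), independently of the estimation method.
rng = np.random.default_rng(4)
n_pix, n_modes, sigma = 400, 12, 0.05
H = rng.normal(size=(n_pix, n_modes)) / np.sqrt(n_pix)  # hypothetical sensor response

F = H.T @ H / sigma ** 2                       # Fisher information matrix
fisher_coeffs = np.diag(np.linalg.inv(F))      # per-mode noise-propagation coefficients
```

For this Gaussian linear model the least-squares estimator attains the bound, so the Fisher coefficients coincide with the usual noise-propagation coefficients, which motivates their use as a sensor-agnostic figure of merit.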
Quark and lepton mixing as manifestations of violated mirror symmetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyatlov, I. T., E-mail: dyatlov@thd.pnpi.spb.ru
2015-06-15
The existence of heavy mirror analogs of ordinary fermions would provide deeper insight into the gedanken paradox appearing in the Standard Model upon direct parity violation, consisting in a physical distinguishability of left- and right-hand coordinate frames. Arguments are presented in support of the statement that such mirror states may also be involved in the formation of the observed properties of the system of Standard Model quarks and leptons, that is, their mass spectra and their weak-mixing matrices: (i) If mirror generations are involved, the quark mixing matrix assumes the experimentally observed form. It is determined by the constraints imposed by weak SU(2) symmetry and by the quark-mass hierarchy. (ii) Under the same conditions and with the involvement of mirror particles, the lepton mixing matrix (neutrino mixing) may become drastically different from its quark analog, the Cabibbo-Kobayashi-Maskawa matrix; that is, it may acquire properties suggested by experimental data. This character of mixing is also indicative of an inverse mass spectrum of Standard Model neutrinos and of their Dirac (not Majorana) nature.
Accelerated Training for Large Feedforward Neural Networks
NASA Technical Reports Server (NTRS)
Stepniewski, Slawomir W.; Jorgensen, Charles C.
1998-01-01
In this paper we introduce a new training algorithm, the scaled variable metric (SVM) method. Our approach attempts to increase the convergence rate of the modified variable metric method. It is combined with the RBackprop algorithm, which computes the product of the matrix of second derivatives (the Hessian) with an arbitrary vector. The RBackprop method allows us to avoid computationally expensive direct line searches. In addition, it can be utilized in a new, 'predictive' updating technique for the inverse Hessian approximation. We have used directional slope testing to adjust the step size and found that this strategy works exceptionally well in conjunction with the RBackprop algorithm. Some supplementary but nevertheless important enhancements to the basic training scheme, such as an improved setting of the scaling factor for the variable metric update and a computationally more efficient procedure for updating the inverse Hessian approximation, are presented as well. We conclude by comparing the SVM method with four first- and second-order optimization algorithms, including a very effective implementation of the Levenberg-Marquardt method. Our tests indicate promising computational speed gains for the new training technique, particularly for large feedforward networks, i.e., for problems where the training process may be the most laborious.
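The core trick, forming a Hessian-vector product without ever building the Hessian, can be sketched on a toy quadratic loss; the loss, data, and finite-difference scheme below are illustrative stand-ins (Pearlmutter's R-operator, which RBackprop implements, computes the same product analytically).

```python
import numpy as np

# Sketch of the idea behind RBackprop: form H @ v without building H.
# Here a centered difference of gradients is used on a least-squares loss.
def loss_grad(w, X, y):
    return X.T @ (X @ w - y)                   # gradient of 0.5 * ||X w - y||^2

def hessian_vec(w, v, X, y, eps=1e-5):
    gp = loss_grad(w + eps * v, X, y)
    gm = loss_grad(w - eps * v, X, y)
    return (gp - gm) / (2.0 * eps)             # costs only two gradient evaluations

rng = np.random.default_rng(5)
X = rng.normal(size=(30, 6))
y = rng.normal(size=30)
w = rng.normal(size=6)
v = rng.normal(size=6)
Hv = hessian_vec(w, v, X, y)                   # Hessian-vector product, no Hessian
```

Products like Hv are what let the training scheme replace direct line searches with cheap curvature information along the search direction.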
3D frequency-domain finite-difference modeling of acoustic wave propagation
NASA Astrophysics Data System (ADS)
Operto, S.; Virieux, J.
2006-12-01
We present a 3D frequency-domain finite-difference method for acoustic wave propagation modeling. This method is developed as a tool to perform 3D frequency-domain full-waveform inversion of wide-angle seismic data. For wide-angle data, frequency-domain full-waveform inversion can be applied to only a few discrete frequencies to develop a reliable velocity model. Frequency-domain finite-difference (FD) modeling of wave propagation requires the solution of a huge sparse system of linear equations. If this system can be solved with a direct method, solutions for multiple sources can be computed efficiently once the underlying matrix has been factorized. The drawback of the direct method is the memory requirement resulting from the fill-in of the matrix during factorization. We assess in this study whether representative problems can be addressed in 3D geometry with such an approach. We start from the velocity-stress formulation of the 3D acoustic wave equation. The spatial derivatives are discretized with a second-order accurate staggered-grid stencil on different coordinate systems, such that the axes span as many directions as possible. Once the discrete equations are developed on each coordinate system, the particle-velocity fields are eliminated from the first-order hyperbolic system (following the so-called parsimonious staggered-grid method), leading to second-order elliptic wave equations in pressure. The second-order wave equations discretized on each coordinate system are combined linearly to mitigate numerical anisotropy. Secondly, grid dispersion is minimized by replacing the mass term at the collocation point by its weighted average over all the grid points of the stencil. Use of the second-order accurate staggered-grid stencil reduces the bandwidth of the matrix to be factorized. The final stencil incorporates 27 points. The absorbing conditions are PML.
The system is solved using the parallel direct solver MUMPS, developed for distributed-memory computers. The MUMPS solver is based on a multifrontal method for LU factorization. We used the METIS algorithm to re-order the matrix coefficients before factorization. Four grid points per minimum wavelength are used for discretization. We applied our algorithm to the 3D SEG/EAGE synthetic onshore OVERTHRUST model of dimensions 20 x 20 x 4.65 km, with velocities ranging between 2 and 6 km/s. We performed the simulations using 192 processors with 2 GB of RAM per processor, for the 5 Hz, 7 Hz and 10 Hz frequencies in fractions of the OVERTHRUST model. The grid intervals were 100 m, 75 m and 50 m, respectively, and the grid dimensions 207x207x53, 275x218x71 and 409x109x102, corresponding to 100, 80 and 25 percent of the model. The factorization times were 20 min, 108 min and 163 min, and the solution times 3.8, 9.3 and 10.3 s per source, respectively. The total memory used during factorization was 143, 384 and 449 GB, respectively. One notes the huge memory requirement of the factorization and, at the same time, the efficiency of the direct method in computing solutions for a large number of sources, which highlights the respective drawback and merit of the frequency-domain approach with respect to its time-domain counterpart. These results show that 3D acoustic frequency-domain wave propagation modeling can be performed at low frequencies using a direct solver on large clusters of PCs. This forward modeling algorithm may be used in the future as a tool to image the first kilometers of the crust by frequency-domain full-waveform inversion. For larger problems, we will use the out-of-core memory during factorization implemented by the authors of MUMPS.
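The factorize-once, solve-many pattern behind these timings can be shown on a small 2-D Helmholtz stand-in; the grid, frequency, velocity, and complex damping (a crude substitute for PML absorption) are all illustrative, with SciPy's SuperLU standing in for MUMPS.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D Helmholtz stand-in for the 3-D MUMPS workflow: the impedance matrix is
# factorized once, then every additional source costs only cheap triangular
# solves.  The complex frequency is a crude stand-in for absorbing boundaries.
nx = 60
n = nx * nx
h = 50.0                                       # grid spacing (m)
omega = 2.0 * np.pi * 5.0 * (1.0 + 0.05j)      # 5 Hz with slight damping
c = 3000.0                                     # velocity (m/s)
I1 = sp.identity(nx)
L1 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(nx, nx)) / h ** 2
lap = sp.kron(I1, L1) + sp.kron(L1, I1)        # 5-point 2-D Laplacian
A = (lap + (omega / c) ** 2 * sp.identity(n)).tocsc()

lu = spla.splu(A)                              # expensive LU factorization, done once
sources = [n // 2, n // 2 + nx, 3 * nx + 7]    # three shot positions
fields = []
for s in sources:
    b = np.zeros(n, dtype=complex)
    b[s] = 1.0
    fields.append(lu.solve(b))                 # cheap per-source solve
```

The factorization dominates the cost (minutes at scale, per the figures above), while each extra source adds only seconds, which is exactly why the direct approach pays off for surveys with many shots.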
Convergence to equilibrium under a random Hamiltonian.
Brandão, Fernando G S L; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K; Mozrzymas, Marek
2012-09-01
We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.
S-Matrix to potential inversion of low-energy α-12C phase shifts
NASA Astrophysics Data System (ADS)
Cooper, S. G.; Mackintosh, R. S.
1990-10-01
The IP S-matrix to potential inversion procedure is applied to phase shifts for selected partial waves over a range of energies below the inelastic threshold for α-12C scattering. The phase shifts were determined by Plaga et al. Potentials found by Buck and Rubio to fit the low-energy alpha cluster resonances need only an increased attraction in the surface to accurately reproduce the phase-shift behaviour. Substantial differences between the potentials for odd and even partial waves are necessary. The surface tail of the potential is postulated to be a threshold effect.
Inverse Scattering and Local Observable Algebras in Integrable Quantum Field Theories
NASA Astrophysics Data System (ADS)
Alazzawi, Sabina; Lechner, Gandalf
2017-09-01
We present a solution method for the inverse scattering problem for integrable two-dimensional relativistic quantum field theories, specified in terms of a given massive single particle spectrum and a factorizing S-matrix. An arbitrary number of massive particles transforming under an arbitrary compact global gauge group is allowed, thereby generalizing previous constructions of scalar theories. The two-particle S-matrix S is assumed to be an analytic solution of the Yang-Baxter equation with standard properties, including unitarity, TCP invariance, and crossing symmetry. Using methods from operator algebras and complex analysis, we identify sufficient criteria on S that imply the solution of the inverse scattering problem. These conditions are shown to be satisfied in particular by so-called diagonal S-matrices, but presumably also in other cases such as the O( N)-invariant nonlinear {σ}-models.
Deflation as a method of variance reduction for estimating the trace of a matrix inverse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas
2017-04-06
Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large-scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP the variance is reduced by a factor of over 150 compared to MC. For this, we pre-computed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific algebraic multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.
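The baseline Hutchinson estimator and the effect of deflating the smallest part of the spectrum can be sketched on a small synthetic symmetric matrix (a dense NumPy stand-in; the Lattice QCD operator, Hierarchical Probing, and PRIMME are far beyond this illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Hypothetical ill-conditioned symmetric test matrix (not the QCD operator):
# eigenvalues from 0.01 to 10, so tr(A^-1) is dominated by the smallest modes.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
vals = np.linspace(0.01, 10.0, n)
A = (Q * vals) @ Q.T
Ainv = np.linalg.inv(A)

def hutchinson(B, samples):
    """Monte Carlo trace estimate: tr(B) ~ mean of z^T B z for Rademacher z."""
    est = 0.0
    for _ in range(samples):
        z = rng.choice([-1.0, 1.0], size=B.shape[0])
        est += z @ B @ z
    return est / samples

# Deflation: handle the k smallest eigenpairs exactly, sample only the rest.
k = 20
w, V = np.linalg.eigh(A)
Vk = V[:, :k]                    # near-null space of A
exact = np.sum(1.0 / w[:k])      # its trace contribution, computed exactly
P = np.eye(n) - Vk @ Vk.T        # project the sampled part away from it
deflated = exact + hutchinson(P @ Ainv @ P, 100)
```

Removing the dominant (and most variable) contribution from the stochastic average is the variance-reduction mechanism the abstract analyzes.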
Zhang, Ren; Lee, Bongjoon; Bockstaller, Michael R; Douglas, Jack F; Stafford, Christopher M; Kumar, Sanat K; Raghavan, Dharmaraj; Karim, Alamgir
The controlled organization of nanoparticle (NP) constituents into superstructures of well-defined shape, composition and connectivity represents a continuing challenge in the development of novel hybrid materials for many technological applications. We show that the phase separation of polymer-tethered nanoparticles immersed in a chemically different polymer matrix provides an effective and scalable method for fabricating defined submicron-sized amorphous NP domains in melt polymer thin films. We investigate this phenomenon with a view towards understanding and controlling the phase separation process through directed nanoparticle assembly. In particular, we consider isothermally annealed thin films of polystyrene-grafted gold nanoparticles (AuPS) dispersed in a poly(methyl methacrylate) (PMMA) matrix. Morphology transitions classic to binary polymer blend phase separation, from discrete AuPS domains to bicontinuous to inverse domain structures with increasing nanoparticle composition, are observed, yet the kinetics of the AuPS/PMMA blend system exhibits unique features compared to the parent PS/PMMA homopolymer blend. We further illustrate how to pattern-align the phase-separated AuPS nanoparticle domain shape, size and location through the imposition of a simple and novel external symmetry-breaking perturbation via soft lithography. Specifically, submicron-sized topographically patterned elastomer confinement is introduced to direct the nanoparticles into kinetically controlled long-range ordered domains, having a dense yet well-dispersed distribution of non-crystallizing nanoparticles. The simplicity, versatility and roll-to-roll adaptability of this novel method for controlled nanoparticle assembly should make it useful in creating desirable patterned nanoparticle domains for a variety of functional materials and applications.
Inverse solutions for electrical impedance tomography based on conjugate gradients methods
NASA Astrophysics Data System (ADS)
Wang, M.
2002-01-01
A multistep inverse solution for two-dimensional electric field distribution is developed to deal with the nonlinear inverse problem of electric field distribution in relation to its boundary condition and the problem of divergence due to errors introduced by the ill-conditioned sensitivity matrix and the noise produced by electrode modelling and instruments. This solution is based on a normalized linear approximation method where the change in mutual impedance is derived from the sensitivity theorem and a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
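The abstract's point that limiting the number of CG iterations controls errors from the ill-conditioned sensitivity matrix can be sketched on a synthetic linear system (random matrix and data, not an EIT model; the multistep nonlinear loop is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ill-conditioned sensitivity matrix S and noisy data d, standing
# in for the linearized mutual-impedance relation of the abstract.
m, n = 60, 40
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -6, n)            # rapidly decaying singular values
S = U[:, :n] @ np.diag(s) @ V.T
x_true = V[:, :5] @ rng.standard_normal(5)   # "true" model in well-resolved modes
d = S @ x_true + 1e-3 * rng.standard_normal(m)

def cg_normal(S, d, iters):
    """CG on the normal equations S^T S x = S^T d; early stopping regularizes."""
    x = np.zeros(S.shape[1])
    r = S.T @ d
    p = r.copy()
    for _ in range(iters):
        if r @ r < 1e-32:
            break
        q = S.T @ (S @ p)
        a = (r @ r) / (p @ q)
        x = x + a * p
        r_new = r - a * q
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

x_few = cg_normal(S, d, 5)     # truncated: noise-dominated modes stay damped
x_many = cg_normal(S, d, 200)  # over-iterated: small singular values amplify noise
```

Truncating the iteration acts as an implicit filter on the small singular values, which is the mechanism behind limiting the iteration count in the multistep solution.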
A Generalized Method of Image Analysis from an Intercorrelation Matrix which May Be Singular.
ERIC Educational Resources Information Center
Yanai, Haruo; Mukherjee, Bishwa Nath
1987-01-01
This generalized image analysis method is applicable to singular and non-singular correlation matrices (CMs). Using the orthogonal projector and a weaker generalized inverse matrix, image and anti-image covariance matrices can be derived from a singular CM. (SLD)
Shape control of structures with semi-definite stiffness matrices for adaptive wings
NASA Astrophysics Data System (ADS)
Austin, Fred; Van Nostrand, William C.; Rossi, Michael J.
1993-09-01
Maintaining an optimum wing cross-section during transonic cruise can dramatically reduce the shock-induced drag and can result in significant fuel savings and increased range. Our adaptive-wing concept employs actuators as truss elements of active ribs to reshape the wing cross-section by deforming the structure. In our previous work, to derive the shape-control-system gain matrix, we developed a procedure that requires the inverse of the stiffness matrix of the structure without the actuators. However, this method cannot be applied to designs where the actuators are required structural elements, since the stiffness matrices are singular when the actuators are removed. Consequently, a new method was developed, in which the order of the problem is reduced and only the inverse of a small nonsingular partition of the stiffness matrix is required to obtain the desired gain matrix. The procedure was experimentally validated by achieving desired shapes of a physical model of an aircraft-wing rib. The theory and test results are presented.
A novel artificial neural network method for biomedical prediction based on matrix pseudo-inversion.
Cai, Binghuang; Jiang, Xia
2014-04-01
Biomedical prediction based on clinical and genome-wide data has become increasingly important in disease diagnosis and classification. To solve the prediction problem in an effective manner for the improvement of clinical care, we develop a novel Artificial Neural Network (ANN) method based on Matrix Pseudo-Inversion (MPI) for use in biomedical applications. The MPI-ANN is constructed as a three-layer (i.e., input, hidden, and output layers) feed-forward neural network, and the weights connecting the hidden and output layers are directly determined based on MPI without a lengthy learning iteration. The LASSO (Least Absolute Shrinkage and Selection Operator) method is also presented for comparative purposes. Single Nucleotide Polymorphism (SNP) simulated data and real breast cancer data are employed to validate the performance of the MPI-ANN method via 5-fold cross validation. Experimental results demonstrate the efficacy of the developed MPI-ANN for disease classification and prediction, in view of the significantly superior accuracy (i.e., the rate of correct predictions), as compared with LASSO. The results based on the real breast cancer data also show that the MPI-ANN has better performance than other machine learning methods (including support vector machine (SVM), logistic regression (LR), and an iterative ANN). In addition, experiments demonstrate that our MPI-ANN could be used for bio-marker selection as well. Copyright © 2013 Elsevier Inc. All rights reserved.
Sparse Regression as a Sparse Eigenvalue Problem
NASA Technical Reports Server (NTRS)
Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai
2008-01-01
We extend the l0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly efficient technique for direct eigenvalue computation using partitioned matrix inverses, which leads to dramatic ×10³ speed-ups over standard eigenvalue decomposition. This increased efficiency mitigates the O(n⁴) scaling behaviour that up to now has limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward-elimination stage, which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) generalizes Natarajan's algorithm [9], also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient. By including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate and more flexible in terms of choice of regularization.
Reflection Matrix Method for Controlling Light After Reflection From a Diffuse Scattering Surface
2016-12-22
... reflective inverse diffusion, which was a proof-of-concept experiment that used phase modulation to shape the wavefront of a laser, causing it to refocus ... after reflection from a rough surface. By refocusing the light, reflective inverse diffusion has the potential to eliminate the complex radiometric model ... photography. However, the initial reflective inverse diffusion experiments provided no mathematical background and were conducted under the premise that the ...
NASA Astrophysics Data System (ADS)
Zhang, H.; Fang, H.; Yao, H.; Maceira, M.; van der Hilst, R. D.
2014-12-01
Recently, Zhang et al. (2014, Pure and Applied Geophysics) have developed a joint inversion code incorporating body-wave arrival times and surface-wave dispersion data. The joint inversion code was based on the regional-scale version of the double-difference tomography algorithm tomoDD. The surface-wave inversion part uses the propagator matrix solver in the algorithm DISPER80 (Saito, 1988) for forward calculation of dispersion curves from layered velocity models and the related sensitivities. The application of the joint inversion code to the SAFOD site in central California shows that the fault structure is better imaged in the new model, which is able to fit both the body-wave and surface-wave observations adequately. Here we present a new joint inversion method that solves the model in the wavelet domain constrained by sparsity regularization. Compared to the previous method, it has the following advantages: (1) The method is both data- and model-adaptive. The velocity model can be represented by different wavelet coefficients at different scales, and these coefficients are generally sparse. By constraining the model wavelet coefficients to be sparse, the inversion in the wavelet domain can inherently adapt to the data distribution, so that the model has higher spatial resolution in zones of good data coverage. Fang and Zhang (2014, Geophysical Journal International) have shown the superior performance of the wavelet-based double-difference seismic tomography method compared to the conventional method. (2) For the surface-wave inversion, the joint inversion code takes advantage of the recent development of direct inversion of surface-wave dispersion data for 3-D variations of shear-wave velocity, without the intermediate step of phase or group velocity maps (Fang et al., 2014, Geophysical Journal International). A fast marching method is used to compute, at each period, surface-wave traveltimes and ray paths between sources and receivers.
We will test the new joint inversion code at the SAFOD site to compare its performance over the previous code. We will also select another fault zone such as the San Jacinto Fault Zone to better image its structure.
NASA Astrophysics Data System (ADS)
Saputro, Dewi Retno Sari; Widyaningsih, Purnami
2017-08-01
In general, the parameter estimation of the GWOLR model uses the maximum likelihood method, but this constructs a system of nonlinear equations, making it difficult to find the solution. Therefore, an approximate solution is needed. There are two popular numerical methods: Newton's method and the Quasi-Newton (QN) method. Newton's method requires considerable computation time since it involves the Jacobian matrix (derivative). The QN method overcomes this drawback of Newton's method by replacing the derivative computation with a direct function evaluation. The QN method uses a Hessian matrix approximation based on the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is a QN method which shares the DFP formula's property of maintaining a positive definite Hessian matrix. The BFGS method requires large memory in executing the program, so another algorithm is needed to decrease memory usage, namely limited-memory BFGS (LBFGS). The purpose of this research is to compute the efficiency of the LBFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. In reference to the research findings, we found that the BFGS and LBFGS methods have arithmetic operation counts of O(n²) and O(nm), respectively.
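The single inverse-Hessian update that both BFGS and LBFGS build on can be sketched on a toy quadratic objective (illustrative only; this is not the GWOLR likelihood, and no line search or iteration loop is shown):

```python
import numpy as np

# Toy quadratic f(x) = 0.5 x^T A x - b^T x, so grad f(x) = A x - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: A @ x - b

x0 = np.zeros(2)
x1 = np.array([0.2, -0.1])       # an arbitrary trial step
s = x1 - x0                      # step taken
yv = grad(x1) - grad(x0)         # change in gradient along the step

# BFGS update of the inverse-Hessian approximation H. Only gradient
# differences are needed; no explicit Hessian or derivative matrix.
H = np.eye(2)
rho = 1.0 / (yv @ s)
I = np.eye(2)
H = (I - rho * np.outer(s, yv)) @ H @ (I - rho * np.outer(yv, s)) \
    + rho * np.outer(s, s)
```

After the update, H satisfies the secant equation H y = s exactly, and LBFGS obtains the same product H g from a short history of (s, y) pairs instead of storing H, which is the memory saving the abstract quantifies.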
NASA Astrophysics Data System (ADS)
Joulidehsar, Farshad; Moradzadeh, Ali; Doulati Ardejani, Faramarz
2018-06-01
The joint interpretation of two sets of geophysical data related to the same source is an appropriate method for decreasing the non-uniqueness of the resulting models during the inversion process. Among the available methods, an approach based on a cross-gradient constraint that combines the two datasets is efficient. This method, however, is time-consuming for 3D inversion and cannot provide an exact assessment of the situation and extension of the anomaly of interest. In this paper, the first aim is to speed up the required calculations by substituting singular value decomposition with the least-squares QR (LSQR) method to solve the large-scale kernel matrix of the 3D inversion more rapidly. Furthermore, to improve the accuracy of the resulting models, a combination of a depth-weighting matrix and a compactness constraint, with automatic selection of the covariance of initial parameters, is used in the proposed inversion algorithm. This algorithm was developed in the Matlab environment and first implemented on synthetic data. The 3D joint inversion of synthetic gravity and magnetic data shows a noticeable improvement in the results and increases the efficiency of the algorithm for large-scale problems. Additionally, a real gravity and magnetic dataset of the Jalalabad mine in southeastern Iran was tested. The results obtained by the improved joint 3D inversion with the cross-gradient and compactness constraints showed a mineralised zone in the depth interval of about 110-300 m, which is in good agreement with the available drilling data. This is a further confirmation of the accuracy and progress of the improved inversion algorithm.
Kinematic control of robot with degenerate wrist
NASA Technical Reports Server (NTRS)
Barker, L. K.; Moore, M. C.
1984-01-01
Kinematic resolved rate equations allow an operator with visual feedback to dynamically control a robot hand. When the robot wrist is degenerate, the computed joint angle rates exceed operational limits, and unwanted hand movements can result. The generalized matrix inverse solution can also produce unwanted responses. A method is introduced to control the robot hand in the region of the degenerate robot wrist. The method uses a coordinated movement of the first and third joints of the robot wrist to locate the second wrist joint axis for movement of the robot hand in the commanded direction. The method does not entail infinite joint angle rates.
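The abstract's observation that a plain generalized (pseudo-) inverse produces excessive joint rates near the degenerate wrist can be illustrated numerically. The damped least-squares variant below is a standard alternative shown only for contrast, not the paper's coordinated-joint method (toy 2-DOF Jacobian, made-up numbers):

```python
import numpy as np

def joint_rates(J, v, damping=0.0):
    """Map commanded hand velocity v to joint rates.
    damping=0 gives the pseudoinverse-like solution; damping>0 gives
    damped least squares J^T (J J^T + lambda^2 I)^{-1} v, which bounds
    the rates near a singular (degenerate) configuration."""
    JJt = J @ J.T
    return J.T @ np.linalg.solve(JJt + damping**2 * np.eye(JJt.shape[0]), v)

# Near-singular Jacobian: second row almost dependent on the first,
# mimicking a wrist close to its degenerate configuration.
J = np.array([[1.0, 0.0],
              [1.0, 1e-4]])
v = np.array([0.0, 1.0])          # commanded hand velocity

raw = joint_rates(J, v)           # generalized inverse: huge joint rates
damped = joint_rates(J, v, 0.1)   # damped: bounded rates, small task error
```

The blow-up of `raw` is exactly the "computed joint angle rates exceed operational limits" behaviour the paper's coordinated wrist-joint method is designed to avoid.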
NASA Astrophysics Data System (ADS)
Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi
2017-10-01
When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments are conducted in which different magnitudes of noise are imposed and different densities of observation are assumed, and the results indicated that the ASC was superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method is highlighted as the best for selecting the regularization parameter compared with other methods, such as generalized cross-validation or the mean squared error criterion method. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
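The classic Laplacian smoothness constraint (LSC) that the proposed ASC refines can be sketched as Tikhonov regularization on a toy 1-D slip model (synthetic kernel and data; the ASC itself and the Helmert variance component estimation of the regularization parameter are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Underdetermined toy problem: 30 observations, 50 slip parameters.
n = 50
G = rng.standard_normal((30, n))                     # synthetic kernel
m_true = np.exp(-((np.arange(n) - 25) ** 2) / 30.0)  # smooth slip pulse
d = G @ m_true + 0.01 * rng.standard_normal(30)

# 1-D discrete Laplacian as the regularization matrix R of the abstract.
L = np.diag(-2.0 * np.ones(n)) \
    + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

def invert(alpha):
    """Minimize ||G m - d||^2 + alpha^2 ||L m||^2 (normal equations)."""
    A = G.T @ G + alpha**2 * (L.T @ L)
    return np.linalg.solve(A, G.T @ d)

m_reg = invert(1.0)     # smoothness-constrained solution
m_raw = invert(1e-3)    # nearly unregularized: rough, noise-dominated
```

Increasing the regularization weight trades data misfit for model smoothness, which is the tuning problem the Helmert variance component estimation addresses in the paper.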
Computing Generalized Matrix Inverse on Spiking Neural Substrate.
Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen
2018-01-01
Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines.
Factor Analysis by Generalized Least Squares.
ERIC Educational Resources Information Center
Joreskog, Karl G.; Goldberger, Arthur S.
Aitkin's generalized least squares (GLS) principle, with the inverse of the observed variance-covariance matrix as a weight matrix, is applied to estimate the factor analysis model in the exploratory (unrestricted) case. It is shown that the GLS estimates are scale free and asymptotically efficient. The estimates are computed by a rapidly…
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Coca, Daniel
2018-01-01
The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
Nie, Xiaokai; Coca, Daniel
2018-01-01
The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
Guidance of Autonomous Aerospace Vehicles for Vertical Soft Landing using Nonlinear Control Theory
2015-08-11
... Measured and Kalman filter estimate of the roll attitude of the quad ... and faster Hartley et al. [2013]. With the availability of small, light, high-fidelity sensors (Inertial Measurement Units, IMU) and processors on board ... is a product of the inverse of the rotation matrix and the inertia matrix for the quad frame, since both matrices are invertible at all times except when roll ...
Yang, C L; Wei, H Y; Adler, A; Soleimani, M
2013-06-01
Electrical impedance tomography (EIT) is a fast and cost-effective technique that provides a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large-scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurement data can produce a large Jacobian matrix, which causes difficulties in storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining the image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, the zero elements are eliminated, which results in a saving of memory. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian combined with block-wise CG enables the large-scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on the reconstruction results.
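The sparse-matrix reduction step can be sketched as follows (synthetic Jacobian with many small entries; the 1% threshold fraction is an assumption for illustration, not the paper's choice):

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(4)

# Synthetic Jacobian whose entries decay over many orders of magnitude,
# mimicking the many near-zero sensitivities of a 3D EIT Jacobian.
J = rng.standard_normal((400, 1000)) * np.exp(-8 * rng.random((400, 1000)))

# Threshold tiny entries to zero and store the result in sparse CSR format,
# so the zeros are simply not stored.
threshold = 0.01 * np.abs(J).max()
J_sparse = sp.csr_matrix(np.where(np.abs(J) >= threshold, J, 0.0))

dense_bytes = J.nbytes
sparse_bytes = (J_sparse.data.nbytes + J_sparse.indices.nbytes
                + J_sparse.indptr.nbytes)

# The thresholded matrix still approximates the original matvec well.
x = rng.standard_normal(1000)
err = np.linalg.norm(J @ x - J_sparse @ x) / np.linalg.norm(J @ x)
```

The memory saving and the small matvec error are the two quantities the paper balances when choosing the threshold.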
Fast Minimum Variance Beamforming Based on Legendre Polynomials.
Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae
2016-09-01
Currently, minimum variance beamforming (MV) is actively investigated as a method that can improve the performance of an ultrasound beamformer, in terms of the lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity since the inverse spatial covariance matrix must be calculated. Some noteworthy methods among various attempts to solve this problem include beam space adaptive beamforming methods and the fast MV method based on principal component analysis, which are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix and the dimension of the covariance matrix is reduced by approximating the matrix only with important components of the matrix, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those methods when the dimensionality of the covariance matrices is reduced to the same dimension.
Viscoelastic material inversion using Sierra-SD and ROL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walsh, Timothy; Aquino, Wilkins; Ridzal, Denis
2014-11-01
In this report we derive frequency-domain methods for inverse characterization of the constitutive parameters of viscoelastic materials. The inverse problem is cast in a PDE-constrained optimization framework with efficient computation of gradients and Hessian vector products through matrix free operations. The abstract optimization operators for first and second derivatives are derived from first principles. Various methods from the Rapid Optimization Library (ROL) are tested on the viscoelastic inversion problem. The methods described herein are applied to compute the viscoelastic bulk and shear moduli of a foam block model, which was recently used in experimental testing for viscoelastic property characterization.
Strategies for efficient resolution analysis in full-waveform inversion
NASA Astrophysics Data System (ADS)
Fichtner, A.; van Leeuwen, T.; Trampert, J.
2016-12-01
Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and to compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for few selected wave numbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
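The random-probing idea can be sketched by estimating the diagonal of a Hessian that is only accessible through Hessian-vector products (dense toy matrix here; in full-waveform inversion each product costs adjoint simulations, which is why only a handful of probes are affordable):

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in symmetric "Hessian", accessed only via matrix-vector products.
n = 200
B = rng.standard_normal((n, n))
H = B @ B.T / n
hess_apply = lambda m: H @ m          # the only access we assume

# Probe with uncorrelated random test models: for Rademacher z,
# E[z * (H z)] = diag(H), a crude proxy for point-spread-function volume.
K = 2000
est = np.zeros(n)
for _ in range(K):
    z = rng.choice([-1.0, 1.0], size=n)
    est += z * hess_apply(z)
est /= K
```

Auto-correlating the Hessian-probe products, as in the abstract, generalizes this from the diagonal to off-diagonal structure of the point-spread functions; K is huge here only because the toy matvec is cheap.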
NASA Astrophysics Data System (ADS)
Mustać, Marija; Tkalčić, Hrvoje; Burky, Alexander L.
2018-01-01
Moment tensor (MT) inversion studies of events in The Geysers geothermal field have mostly focused on microseismicity and found a large number of earthquakes with significant non-double-couple (non-DC) seismic radiation. Here we concentrate on the largest events in the area in recent years using a hierarchical Bayesian MT inversion. Initially, we show that the non-DC components of the MT can be reliably retrieved using regional waveform data from a small number of stations. Subsequently, we present results for a number of events and show that accounting for noise correlations can lead to retrieval of a lower isotropic (ISO) component and significantly different focal mechanisms. We compute the Bayesian evidence to compare solutions obtained with different assumptions about the noise covariance matrix. Although a diagonal covariance matrix produces a better waveform fit, inversions that account for noise correlations via an empirically estimated noise covariance matrix account for interdependencies of data errors and are preferred from a Bayesian point of view. This implies that improper treatment of data noise in waveform inversions can result in fitting the noise and misinterpreting the non-DC components. Finally, one of the analyzed events is characterized as predominantly DC, while the others still have significant non-DC components, probably as a result of crack opening, which is a reasonable hypothesis given the geological setting of The Geysers geothermal field.
The Inverse of Banded Matrices
2013-01-01
Generalizing a method of Mallik (1999) [5], this paper gives the LU factorization and the inverse of the matrix Br,n (if it exists), defined by its indexed entries for i ≤ r, 1 ≤ j ≤ r, with the remaining un-indexed entries all zeros. Cited works include banded matrices and applications to piecewise cubic approximation, J. Comput. Appl. Math. 8 (4) (1982) 285-288, and [5] R.K. Mallik, The inverse of a lower ...
Dynamic Forms. Part 1: Functions
NASA Technical Reports Server (NTRS)
Meyer, George; Smith, G. Allan
1993-01-01
The formalism of dynamic forms is developed as a means for organizing and systematizing the design of control systems. The formalism allows the designer to easily compute derivatives, to various orders, of the large composite functions that occur in flight-control design. Such functions involve many function-of-a-function calls that may be nested to many levels. The component functions may be multiaxis and nonlinear, and they may include rotation transformations. A dynamic form is defined as a variable together with its time derivatives up to some fixed but arbitrary order. The variable may be a scalar, a vector, a matrix, a direction cosine matrix, Euler angles, or Euler parameters. Algorithms for standard elementary functions and operations on scalar dynamic forms are developed first. Then vector and matrix operations, and transformations between parameterizations of rotations, are developed at the next level of the hierarchy. Commonly occurring algorithms in control-system design, including inversion of pure feedback systems, are developed at the third level. A large-angle, three-axis attitude servo and other examples are included to illustrate the effectiveness of the developed formalism. All algorithms were implemented in FORTRAN code. Practical experience shows that the proposed formalism may significantly improve the productivity of the design and coding process.
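The scalar dynamic-form algebra can be sketched (in Python rather than the original FORTRAN) as truncated Taylor/jet arithmetic, with the product given by the Leibniz rule; the class name and example values are illustrative, not the paper's implementation.

```python
from math import comb

class DynamicForm:
    """A scalar variable with its time derivatives up to a fixed order:
    d[k] holds d^k x / dt^k evaluated at the current time."""
    def __init__(self, derivs):
        self.d = list(derivs)

    def __add__(self, other):
        return DynamicForm(a + b for a, b in zip(self.d, other.d))

    def __mul__(self, other):
        # Leibniz rule: (xy)^(k) = sum_j C(k, j) x^(j) y^(k-j)
        n = len(self.d)
        return DynamicForm(
            sum(comb(k, j) * self.d[j] * other.d[k - j] for j in range(k + 1))
            for k in range(n)
        )

# x(t) = t evaluated at t = 2, carrying derivatives to second order: (2, 1, 0).
x = DynamicForm([2.0, 1.0, 0.0])
y = x * x  # x^2 = t^2 at t = 2: value 4, first derivative 4, second derivative 2
print(y.d)  # [4.0, 4.0, 2.0]
```

Composite functions built from such objects automatically carry correct time derivatives through every level of nesting, which is the point of the formalism.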
Poisson image reconstruction with Hessian Schatten-norm regularization.
Lefkimmiatis, Stamatios; Unser, Michael
2013-11-01
Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an ℓ_p vector norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
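The ℓp-Schatten prox link can be illustrated for the p = 1 case (the nuclear norm), where the matrix proximal map reduces to soft-thresholding of the singular values; the matrix size and threshold below are arbitrary assumptions.

```python
import numpy as np

def prox_l1(x, t):
    """Proximal map of t * ||.||_1 on vectors: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_schatten1(X, t):
    """Proximal map of t * ||.||_S1 (nuclear norm): apply the vector
    l1 prox to the singular values and rebuild the matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(prox_l1(s, t)) @ Vt

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4))
Y = prox_schatten1(X, 0.5)
# The singular values of Y are the soft-thresholded singular values of X.
print(np.linalg.svd(Y, compute_uv=False))
```

The same pattern, with a different scalar prox applied to the singular values, covers other orders p, which is the link the paper exploits inside its ADMM variants.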
Li, Haichen; Yaron, David J
2016-11-08
A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods choose the combination coefficients by minimizing the norm of the corresponding linear combination of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
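The DIIS step that LCIIS generalizes can be sketched as a small constrained least-squares problem for the mixing coefficients; the toy error vectors below stand in for flattened Fock-density commutators and are illustrative assumptions.

```python
import numpy as np

def diis_coefficients(errors):
    """Solve min ||sum_i c_i e_i||^2 subject to sum_i c_i = 1 via the
    standard Lagrange-multiplier linear system (Pulay's B-matrix)."""
    m = len(errors)
    B = np.empty((m + 1, m + 1))
    B[:m, :m] = [[float(np.dot(ei, ej)) for ej in errors] for ei in errors]
    B[m, :m] = B[:m, m] = -1.0
    B[m, m] = 0.0
    rhs = np.zeros(m + 1)
    rhs[m] = -1.0
    return np.linalg.solve(B, rhs)[:m]

# Two orthogonal error vectors of equal size: by symmetry, equal weights.
c = diis_coefficients([np.array([1.0, 0.0]), np.array([0.0, 1.0])])
print(c)  # [0.5, 0.5]
```

LCIIS replaces this quadratic objective with the Frobenius norm of the commutator of the mixed density and Fock matrices, which is quartic in the coefficients and hence needs the iterative Newton treatment described in the abstract.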
Inverse consistent non-rigid image registration based on robust point set matching
2014-01-01
Background Robust point matching (RPM) has been extensively used in non-rigid image registration to robustly register two sets of image points. However, except at the control points, RPM cannot estimate a consistent correspondence between two images, because RPM is a unidirectional matching approach. It is therefore important to improve image registration based on RPM. Methods In our work, a consistent image registration approach based on point-set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of estimating only the forward transformation between the source and target point sets, as in state-of-the-art RPM algorithms, the forward and backward transformations between the two point sets are estimated concurrently in our algorithm. Inverse-consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between the two point sets are estimated from both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into the point matching in order to improve the matching. Results Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse-consistency errors of our algorithm are smaller than those of RPM. In particular, the topology of the transformations is preserved well by our algorithm for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM and maintain a downward trend as a whole, which demonstrates the convergence of our algorithm. The registration errors for image registration are also evaluated.
Again, our algorithm achieves lower registration errors for the same number of iterations. The determinant of the Jacobian matrix of the deformation field is used to analyse the smoothness of the forward and backward transformations. The forward and backward transformations estimated by our algorithm are smooth for small deformations. For registration of lung slices and individual brain slices, large or small determinants of the Jacobian matrix of the deformation fields are observed. Conclusions The results indicate the improvement of the proposed algorithm in bi-directional image registration and the decrease of the inverse-consistency errors of the forward and reverse transformations between two images. PMID:25559889
Towards "Inverse" Character Tables? A One-Step Method for Decomposing Reducible Representations
ERIC Educational Resources Information Center
Piquemal, J.-Y.; Losno, R.; Ancian, B.
2009-01-01
In the framework of group theory, a new procedure is described for a one-step automated reduction of reducible representations. The matrix inversion tool, provided by standard spreadsheet software, is applied to the central part of the character table that contains the characters of the irreducible representation. This method is not restricted to…
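The one-step decomposition can be sketched outside a spreadsheet as a linear solve against the character table; the C2v table below is standard, while the sample reducible representation is an illustrative assumption.

```python
import numpy as np

# C2v character table (classes E, C2, sigma_v, sigma_v'); rows are irreps.
char_table = np.array([
    [1,  1,  1,  1],   # A1
    [1,  1, -1, -1],   # A2
    [1, -1,  1, -1],   # B1
    [1, -1, -1,  1],   # B2
], dtype=float)

# A reducible representation constructed as 2*A1 + B2:
chi_red = np.array([3.0, 1.0, 1.0, 3.0])

# One-step reduction: the multiplicities n solve char_table.T @ n = chi_red,
# i.e. n is obtained by "inverting" the character table.
n = np.linalg.solve(char_table.T, chi_red)
print(n)  # multiplicities of A1, A2, B1, B2
```

The character table of any finite group is square (irreps vs. classes) and invertible, so the same solve works beyond this abelian example; for groups with unequal class sizes it reproduces what the usual reduction formula gives.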
ERIC Educational Resources Information Center
Richardson, Peter; Thomas, Steven
2013-01-01
Pay compression and inversion are significant problems for many organizations and are often severe in schools of business in particular. At the same time, there is more insistence on showing accountability and paying employees based on performance. The authors explain and show a detailed example of how to use a Compensation Equity/ Performance…
NASA Astrophysics Data System (ADS)
Itahashi, S.; Yumimoto, K.; Uno, I.; Kim, S.
2012-12-01
Air quality studies based on chemical transport models have provided many important results and advanced our knowledge of air pollution phenomena; however, discrepancies between modeled results and observational data remain an important issue to overcome. One such issue is the over-prediction of summertime tropospheric ozone in remote areas of Japan. This problem has been pointed out in model intercomparison studies at both the regional scale (e.g., MICS-Asia) and the global scale (e.g., TF-HTAP). Several possible reasons can be listed: (i) the modeled reproducibility of the penetration of clean oceanic air masses, (ii) correct estimation of anthropogenic NOx/VOC emissions over East Asia, and (iii) the chemical reaction scheme used in the model simulation. In this study, we attempt an inverse estimation of some important chemical reaction constants by combining DDM (decoupled direct method) sensitivity analysis with a modeled Green's function approach. The decoupled direct method is an efficient and accurate way of performing sensitivity analysis on model inputs; it calculates sensitivity coefficients representing the responsiveness of atmospheric chemical concentrations to perturbations in a model input or parameter. The inverse solutions with the Green's functions are given by a linear least-squares method but are still robust against nonlinearities. To construct the response matrix (i.e., the Green's functions), we can directly use the results of the DDM sensitivity analysis. The chemical reaction constants that have relatively large uncertainties are determined with constraints from observed ozone concentration data over remote areas of Japan. Our inverse estimation demonstrated an underestimation of the rate constant for HNO3 production (NO2 + OH + M → HNO3 + M) in the SAPRC99 chemical scheme, indicating a +29.0% increment to this reaction.
This estimate agrees well with the corresponding values in CB4 and CB5, and also with the SAPRC07 estimate. For the NO2 photolysis rate, a 49.4% reduction was found. This result indicates that the effect of heavy aerosol loading on photolysis rates must be incorporated in numerical studies.
NASA Astrophysics Data System (ADS)
Xu, Guo-Ming; Ni, Si-Dao
1998-11-01
The `auxiliary' symmetry properties of the system matrix (symmetry with respect to the trailing diagonal) for a general anisotropic dissipative medium and the special form for a monoclinic medium are revealed by rearranging the motion-stress vector. The propagator matrix of a single-layer general anisotropic dissipative medium is also shown to have auxiliary symmetry. For the multilayered case, a relatively simple matrix method is utilized to obtain the inverse of the propagator matrix. Further, Woodhouse's inverse of the propagator matrix for a transversely isotropic medium is extended in a clearer form to handle the monoclinic symmetric medium. The properties of a periodic layer system are studied through its system matrix Aly , which is computed from the propagator matrix P. The matrix Aly is then compared with Aeq , the system matrix for the long-wavelength equivalent medium of the periodic isotropic layers. Then we can find how the periodic layered medium departs from its long-wavelength equivalent medium when the wavelength decreases. In our numerical example, the results show that, when λ/D decreases to 6-8, the components of the two matrices will depart from each other. The component ratio of these two matrices increases to its maximum (more than 15 in our numerical test) when λ/D is reduced to 2.3, and then oscillates with λ/D when it is further reduced. The eigenvalues of the system matrix Aly show that the velocities of P and S waves decrease when λ/D is reduced from 6-8 and reach their minimum values when λ/D is reduced to 2.3 and then oscillate afterwards. We compute the time shifts between the peaks of the transmitted waves and the incident waves. The resulting velocity curves show a similar variation to those computed from the eigenvalues of the system matrix Aly , but on a smaller scale. This can be explained by the spectrum width of the incident waves.
On the cross-stream spectral method for the Orr-Sommerfeld equation
NASA Technical Reports Server (NTRS)
Zorumski, William E.; Hodge, Steven L.
1993-01-01
Cross-stream modes are defined as solutions to the Orr-Sommerfeld equation which propagate normal to the flow direction. These modes are utilized as a basis for a Hilbert space to approximate the spectrum of the Orr-Sommerfeld equation for plane Poiseuille flow. The cross-stream basis leads to a standard eigenvalue problem for the frequencies of Poiseuille flow instability waves. The coefficient matrix in the eigenvalue problem is shown to be the sum of a real matrix and a negative-imaginary diagonal matrix which represents the frequencies of the cross-stream modes. The real coefficient matrix is shown to approach a Toeplitz matrix when the row and column indices are large. The Toeplitz matrix is diagonally dominant, and the diagonal elements vary inversely in magnitude with diagonal position. The Poiseuille flow eigenvalues are shown to lie within Gersgorin disks with radii bounded by the product of the average flow speed and the axial wavenumber. It is shown that the eigenvalues approach the Gersgorin disk centers when the mode index is large, so that the method may be used to compute spectra with an essentially unlimited number of elements. When the mode index is large, the real part of the eigenvalue is the product of the axial wavenumber and the average flow speed, and the imaginary part of the eigenvalue is identical to the corresponding cross-stream mode frequency. The cross-stream method is numerically well-conditioned in comparison to Chebyshev-based methods, providing equivalent accuracy for small mode indices and superior accuracy for large indices.
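The Gersgorin-disk bound invoked above is easy to demonstrate on a small diagonally dominant matrix; the matrix here is an arbitrary stand-in for the cross-stream coefficient matrix, not the actual operator.

```python
import numpy as np

rng = np.random.default_rng(2)
# Diagonally dominant test matrix: strong diagonal, small off-diagonal noise.
A = np.diag([10.0, 20.0, 30.0, 40.0]) + 0.5 * rng.standard_normal((4, 4))

centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)

eigs = np.linalg.eigvals(A)
# Gersgorin: every eigenvalue lies in the union of the disks
# { z : |z - A_ii| <= sum_{j != i} |A_ij| }.
in_union = [any(abs(ev - c) <= r for c, r in zip(centers, radii))
            for ev in eigs]
print(all(in_union))  # True
```

When the disks are small relative to the spacing of the diagonal entries, as for the large-index cross-stream modes, each eigenvalue is pinned near its disk center, which is exactly how the paper localizes the spectrum.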
Quantitative framework for preferential flow initiation and partitioning
Nimmo, John R.
2016-01-01
A model for preferential flow in macropores is based on the short-range spatial distribution of soil matrix infiltrability. It uses elementary areas at two different scales. One is the traditional representative elementary area (REA), which includes sufficient heterogeneity to typify larger areas, as for measuring field-scale infiltrability. The other, called an elementary matrix area (EMA), is smaller, but large enough to represent the local infiltrability of soil matrix material between macropores. When water is applied to the land surface, each EMA absorbs water up to the rate of its matrix infiltrability. Excess water flows into a macropore, becoming preferential flow. The land surface then can be represented by a mesoscale (EMA-scale) distribution of matrix infiltrabilities. Total preferential flow at a given depth is the sum of contributions from all EMAs. In applying the model, one case study with multi-year field measurements of both preferential and diffuse fluxes at a specific depth was used to obtain parameter values by inverse calculation. The results quantify the preferential-diffuse partition of flow from individual storms that differed in rainfall amount, intensity, antecedent soil water, and other factors. Another case study provided measured values of matrix infiltrability to estimate parameter values for comparison and illustrative predictions. These examples give a self-consistent picture from the combination of parameter values, directions of sensitivities, and magnitudes of differences caused by different variables. One major practical use of this model is to calculate the dependence of preferential flow on climate-related factors, such as varying soil wetness and rainfall intensity.
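The EMA partitioning rule can be sketched directly; the infiltrability values and application rate below are hypothetical numbers, not data from the case studies.

```python
import numpy as np

def partition_flow(rate, ema_infiltrabilities):
    """Each elementary matrix area (EMA) absorbs water up to its matrix
    infiltrability; the excess becomes preferential (macropore) flow.
    All quantities share the same flux units (e.g. mm/h)."""
    inf = np.asarray(ema_infiltrabilities, dtype=float)
    diffuse = np.minimum(rate, inf)
    preferential = np.maximum(rate - inf, 0.0)
    return diffuse.mean(), preferential.mean()

# Hypothetical EMA-scale infiltrability distribution and storm intensity (mm/h).
d, p = partition_flow(6.0, [2.0, 5.0, 8.0, 20.0])
print(d, p)  # 4.75 1.25 -- the two parts sum to the applied rate
```

Because the partition depends on where the application rate falls within the EMA-scale infiltrability distribution, the same model naturally produces storm-to-storm differences in the preferential fraction.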
Heuett, William J; Beard, Daniel A; Qian, Hong
2008-05-15
Several approaches, including metabolic control analysis (MCA), flux balance analysis (FBA), correlation metric construction (CMC), and biochemical circuit theory (BCT), have been developed for the quantitative analysis of complex biochemical networks. Here, we present a comprehensive theory of linear analysis for nonequilibrium steady-state (NESS) biochemical reaction networks that unites these disparate approaches in a common mathematical framework and thermodynamic basis. In this theory a number of relationships between key matrices are introduced: the matrix A obtained in the standard, linear-dynamic-stability analysis of the steady-state can be decomposed as A = SR^T, where R and S are directly related to the elasticity-coefficient matrix for the fluxes and chemical potentials in MCA, respectively; the control-coefficients for the fluxes and chemical potentials can be written in terms of R^T B S and S^T B S respectively, where the matrix B is the inverse of A; the matrix S is precisely the stoichiometric matrix in FBA; and the matrix e^{At} plays a central role in CMC. One key finding that emerges from this analysis is that the well-known summation theorems in MCA take different forms depending on whether metabolic steady-state is maintained by flux injection or concentration clamping. We demonstrate that if rate-limiting steps exist in a biochemical pathway, they are the steps with smallest biochemical conductances and largest flux control-coefficients. We hypothesize that biochemical networks for cellular signaling have a different strategy for minimizing energy waste and being efficient than do biochemical networks for biosynthesis. We also discuss the intimate relationship between MCA and biochemical systems analysis (BSA).
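A small consequence of the decomposition A = SR^T can be checked numerically: the control-coefficient-like matrix C = R^T B S is idempotent, because C C = R^T B (S R^T) B S = R^T B S. The random matrices below are toy stand-ins for the stoichiometric and elasticity matrices, not a real network.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy square matrices standing in for S (stoichiometric) and R (elasticity-
# related); A = S R^T is then invertible with probability 1.
S = rng.standard_normal((4, 4))
R = rng.standard_normal((4, 4))
A = S @ R.T
B = np.linalg.inv(A)

# Idempotence of C = R^T B S follows purely from the factorization A = S R^T.
C = R.T @ B @ S
print(np.allclose(C @ C, C))  # True
```

Projection (idempotence) properties of this kind are what underlie the MCA summation theorems mentioned in the abstract.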
Overcoming Challenges in Kinetic Modeling of Magnetized Plasmas and Vacuum Electronic Devices
NASA Astrophysics Data System (ADS)
Omelchenko, Yuri; Na, Dong-Yeop; Teixeira, Fernando
2017-10-01
We transform the state of the art of plasma modeling by taking advantage of novel computational techniques for fast and robust integration of multiscale hybrid (full particle ions, fluid electrons, no displacement current) and full-PIC models. These models are implemented in the 3D HYPERS and axisymmetric full-PIC CONPIC codes. HYPERS is a massively parallel, asynchronous code. The HYPERS solver does not step fields and particles synchronously in time but instead executes local variable updates (events) at their self-adaptive rates while preserving fundamental conservation laws. The charge-conserving CONPIC code has a matrix-free explicit finite-element (FE) solver based on a sparse-approximate-inverse (SPAI) algorithm. This explicit solver approximates the inverse FE system matrix ("mass" matrix) using successive sparsity-pattern orders of the original matrix. It does not reduce the set of Maxwell's equations to a vector-wave (curl-curl) equation of second order but instead utilizes the standard coupled first-order Maxwell's system. We discuss the ability of our codes to accurately and efficiently account for multiscale physical phenomena in 3D magnetized space and laboratory plasmas and axisymmetric vacuum electronic devices.
Mathematical investigation of one-way transform matrix options.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooper, James Arlin
2006-01-01
One-way transforms have been used in weapon systems processors since the mid- to late-1970s in order to help recognize insertion of correct pre-arm information while maintaining abnormal-environment safety. Level-One, Level-Two, and Level-Three transforms have been designed. The Level-One and Level-Two transforms have been implemented in weapon systems, and both of these transforms are equivalent to matrix multiplication applied to the inserted information. The Level-Two transform, utilizing a 6 x 6 matrix, provided the basis for the "System 2" interface definition for Unique-Signal digital communication between aircraft and attached weapons. The investigation described in this report was carried out to find out if there were other size matrices that would be equivalent to the 6 x 6 Level-Two matrix. One reason for the investigation was to find out whether or not other dimensions were possible, and if so, to derive implementation options. Another important reason was to more fully explore the potential for inadvertent inversion. The results were that additional implementation methods were discovered, but no inversion weaknesses were revealed.
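The report does not state the arithmetic the transform matrices use; as an illustrative sketch only, assuming binary matrices over GF(2), whether a candidate matrix is invertible (and hence at risk of "inadvertent inversion") reduces to a rank computation by Gaussian elimination modulo 2.

```python
def rank_gf2(mat):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination on
    bit-packed rows; a square matrix is invertible over GF(2) iff its
    rank equals its dimension."""
    rows = [int("".join(map(str, r)), 2) for r in mat]
    rank = 0
    for col in reversed(range(len(mat[0]))):
        pivot = next((i for i, r in enumerate(rows) if (r >> col) & 1), None)
        if pivot is None:
            continue
        piv = rows.pop(pivot)
        rows = [r ^ piv if (r >> col) & 1 else r for r in rows]
        rank += 1
    return rank

# A full-rank (invertible) example and a rank-deficient one with a repeated row.
print(rank_gf2([[1, 0, 0], [0, 1, 1], [0, 0, 1]]))  # 3
print(rank_gf2([[1, 0, 1], [1, 0, 1], [0, 1, 0]]))  # 2
```

Screening candidate matrices of other dimensions with a test like this is one hypothetical way to rule out trivially reversible choices.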
Computationally Efficient Modeling and Simulation of Large Scale Systems
NASA Technical Reports Server (NTRS)
Jain, Jitesh (Inventor); Koh, Cheng-Kok (Inventor); Balakrishnan, Vankataramanan (Inventor); Cauley, Stephen F (Inventor); Li, Hong (Inventor)
2014-01-01
A system for simulating operation of a VLSI interconnect structure having capacitive and inductive coupling between nodes thereof, including a processor, and a memory, the processor configured to perform obtaining a matrix X and a matrix Y containing different combinations of passive circuit element values for the interconnect structure, the element values for each matrix including inductance L and inverse capacitance P, obtaining an adjacency matrix A associated with the interconnect structure, storing the matrices X, Y, and A in the memory, and performing numerical integration to solve first and second equations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Druskin, V.; Lee, Ping; Knizhnerman, L.
There is now growing interest in using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems obtained after discretization of partial differential equations by the method of lines. When the action of the matrix inverse is relatively inexpensive to compute, it is sometimes attractive to solve the ODE using extended Krylov subspaces, generated by the actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.
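The extended-subspace idea can be sketched on a small dense problem: build span{A^{-k}v, ..., v, ..., A^k v}, project A onto it, and evaluate the matrix function on the small projected matrix. The test matrix, its condition number, the choice f(x) = x^{-1/2}, and the subspace size are all illustrative assumptions; in practice A is sparse and A^{-1}v comes from a linear solve.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
# Symmetric positive-definite test matrix with condition number 100.
D = np.diag(np.linspace(0.1, 10.0, n))
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ D @ Q.T
Ainv = Q @ np.diag(1.0 / np.diag(D)) @ Q.T
v = rng.standard_normal(n)

# Extended Krylov basis from positive and negative powers applied to v.
k = 10
vecs, w = [v], v
for _ in range(k):
    w = A @ w
    vecs.append(w)
w = v
for _ in range(k):
    w = Ainv @ w
    vecs.append(w)
V, _ = np.linalg.qr(np.column_stack(vecs))

# Approximate f(A)v ~ V f(V^T A V) V^T v, here with f(x) = x^{-1/2}.
T = V.T @ A @ V
lam, U = np.linalg.eigh(T)
approx = V @ (U @ ((U.T @ (V.T @ v)) / np.sqrt(lam)))
exact = Q @ ((Q.T @ v) / np.sqrt(np.diag(D)))
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(rel_err)  # small: the extended space resolves both ends of the spectrum
```

Including negative powers is what makes the space effective for functions, like the inverse square root, that vary sharply near the small end of the spectrum.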
Tabuchi, Mari; Seo, Makoto; Inoue, Takayuki; Ikeda, Takeshi; Kogure, Akinori; Inoue, Ikuo; Katayama, Shigehiro; Matsunaga, Toshiyuki; Hara, Akira; Komoda, Tsugikazu
2011-02-01
The increasing number of patients with metabolic syndrome is a critical global problem. In this study, we describe a novel geometrical electrophoretic separation method using a bioformulated-fiber matrix to analyze high-density lipoprotein (HDL) particles. HDL particles are generally considered to be a beneficial component of the cholesterol fraction. Conventional electrophoresis is widely used but is not necessarily suitable for analyzing HDL particles. Furthermore, a higher HDL density is generally believed to correlate with a smaller particle size. Here, we use a novel geometrical separation technique incorporating recently developed nanotechnology (Nata de Coco) to contradict this belief. A dyslipidemia patient given a 1-month treatment of fenofibrate showed an inverse relationship between HDL density and size. Direct microscopic observation and morphological observation of fractionated HDL particles confirmed a lack of relationship between particle density and size. This new technique may improve diagnostic accuracy and medical treatment for lipid related diseases.
NASA Astrophysics Data System (ADS)
Wei, Yimin; Wu, Hebing
2001-12-01
In this paper, the perturbation and subproper splittings for the generalized inverse A^(2)_{T,S}, the unique matrix X such that XAX = X, R(X) = T and N(X) = S, are considered. We present lower and upper bounds for the perturbation of A^(2)_{T,S}. Convergence of subproper splittings for computing the special solution A^(2)_{T,S} b of the restricted rectangular linear system Ax = b, x ∈ T, is studied. We develop a characterization of the solution A^(2)_{T,S} b, and thereby give a unified treatment of related problems considered in the literature by Ben-Israel, Berman, Hanke, Neumann, Plemmons, etc.
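The defining {2}-inverse property XAX = X can be checked numerically for the best-known member of the A^(2)_{T,S} family, the Moore-Penrose inverse; the matrix sizes below are arbitrary, and NumPy's pinv stands in for the general construction.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 3))   # full column rank (almost surely)
X = np.linalg.pinv(A)             # Moore-Penrose inverse: one A^(2)_{T,S}

# Defining property of a {2}-inverse: X A X = X.
print(np.allclose(X @ A @ X, X))  # True

# For the restricted system A x = b with x constrained to T = range(X),
# x = X b is the least-squares solution, so the normal equations hold.
b = rng.standard_normal(5)
x = X @ b
print(np.allclose(A.T @ (A @ x - b), 0))  # True
```

Different choices of the subspaces T and S recover other familiar generalized inverses (group inverse, Drazin inverse, weighted Moore-Penrose inverse) from the same two defining conditions.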
W-phase estimation of first-order rupture distribution for megathrust earthquakes
NASA Astrophysics Data System (ADS)
Benavente, Roberto; Cummins, Phil; Dettmer, Jan
2014-05-01
Estimating the rupture pattern for large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainty can be crucial for meaningful estimation, it is often ignored. In this work we develop a finite-fault inversion for megathrust earthquakes which rapidly generates good first-order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude, and has a long-period character, the W phase is regularly used to estimate point-source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of three-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating the matrix based on the residual errors of consecutive inversions. Then, a covariance matrix for the parameters is computed using a Bayesian approach.
The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of their slip distributions. Also, reliable solutions are generally obtained with data in a 30-minute window following the origin time, suggesting that a real-time system could obtain solutions in less than one hour following the origin time.
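The regularization strategy described above can be sketched on a toy linear problem: Tikhonov damping with the damping weight chosen by grid search so that the residual matches the expected noise norm (the discrepancy principle). The operator G, noise level, and grid are illustrative assumptions, not the W-phase Green's functions.

```python
import numpy as np

rng = np.random.default_rng(6)
G = rng.standard_normal((40, 20))        # stand-in for the forward operator
m_true = rng.standard_normal(20)
sigma = 0.1
d = G @ m_true + sigma * rng.standard_normal(40)

def tikhonov(alpha):
    """Damped least squares: minimize ||G m - d||^2 + alpha^2 ||m||^2."""
    return np.linalg.solve(G.T @ G + alpha**2 * np.eye(G.shape[1]), G.T @ d)

# Discrepancy principle by grid search: keep the largest alpha (strongest
# regularization) whose residual still matches the expected noise norm.
target = sigma * np.sqrt(len(d))
best_alpha, best_m = None, None
for alpha in np.logspace(-4, 2, 200):
    m = tikhonov(alpha)
    if np.linalg.norm(G @ m - d) <= target:
        best_alpha, best_m = alpha, m

print(best_alpha, np.linalg.norm(best_m - m_true) / np.linalg.norm(m_true))
```

Because the residual grows monotonically with the damping weight, the grid search is cheap and robust, which is what makes this style of automated regularization suitable for rapid-response inversion.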
Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set
NASA Astrophysics Data System (ADS)
Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.
2017-12-01
In recent years, Full Waveform Inversion (FWI) has been among the most researched techniques in seismic data processing. It uses the residuals between observed and modeled data as an objective function; the final subsurface velocity model is then generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by P-velocity; in elastic media, however, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. Further, elastic media can also be defined by the Lamé constants and density, or by the impedances (PI, SI); consequently, research is being carried out to ascertain the optimal parameters. With advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, a staggered-grid finite-difference method was applied to simulate the OBS survey. In the inversion, the l2-norm was set as the objective function. Further, the accurate computation of the gradient direction was performed using the back-propagation technique, and its scaling was done using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define the inversion in elastic media.
Therefore, it is important to ascertain the parameterization that gives the most accurate inversion result with the OBS data set. In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the two final FWI results. This research was supported by the Basic Research Project (17-3312) of the Korea Institute of Geoscience and Mineral Resources (KIGAM), funded by the Ministry of Science, ICT and Future Planning of Korea.
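The two elastic parameterizations compared here are related by the standard isotropic identities μ = ρVs² and λ = ρ(Vp² − 2Vs²); a minimal sketch of the conversion (function names and sample values are assumptions, not the study's code):

```python
def lame_from_velocities(vp, vs, rho):
    """Convert (Vp, Vs, rho) to (lambda, mu, rho) for an isotropic elastic medium."""
    mu = rho * vs**2
    lam = rho * (vp**2 - 2.0 * vs**2)
    return lam, mu, rho

def velocities_from_lame(lam, mu, rho):
    """Inverse mapping, useful for comparing results from the two parameterizations."""
    vs = (mu / rho) ** 0.5
    vp = ((lam + 2.0 * mu) / rho) ** 0.5
    return vp, vs, rho

# Representative crustal-sediment values: Vp = 3000 m/s, Vs = 1500 m/s, rho = 2200 kg/m^3.
lam, mu, rho = lame_from_velocities(3000.0, 1500.0, 2200.0)
print(velocities_from_lame(lam, mu, rho))  # (3000.0, 1500.0, 2200.0)
```

Although the mappings are exact, the FWI gradients with respect to (λ, μ, ρ) and (Vp, Vs, ρ) differ, which is why the choice of parameterization changes the inversion trajectory and the final models.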
NASA Astrophysics Data System (ADS)
Ojo, A. O.; Xie, Jun; Olorunfemi, M. O.
2018-01-01
To reduce ambiguity related to nonlinearities in the resistivity model-data relationships, an efficient direct-search scheme employing the Neighbourhood Algorithm (NA) was implemented to solve the 1-D resistivity problem. In addition to finding a range of best-fit models that are more likely to be global minima, this method investigates the entire multi-dimensional model space and provides additional information: the posterior model covariance matrix, marginal probability density functions, and an ensemble of acceptable models. This provides new insight into how well the model parameters are constrained and makes it possible to assess trade-offs between them, thus avoiding some common interpretation pitfalls. The efficacy of the newly developed program is tested by inverting both synthetic (noisy and noise-free) data and field data from other authors employing different inversion methods, so as to provide a good basis for comparative performance. In all cases, the inverted model parameters were in good agreement with the true model parameters and with those recovered by other methods, and they correlate remarkably well with the available borehole litho-log and known geology for the field dataset. The NA method has proven to be useful when a good starting model is not available, and the reduced number of unknowns in the 1-D resistivity inverse problem makes it an attractive alternative to linearized methods. Hence, it is concluded that the newly developed program offers an excellent complementary tool for the global inversion of the layered resistivity structure.
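The direct-search flavour of the approach can be sketched in a simplified form: true NA resamples uniformly inside the Voronoi cells of the current best models, while the sketch below resamples with shrinking Gaussians around them, and a toy quadratic misfit stands in for the 1-D resistivity forward problem. All parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def misfit(m):
    """Toy stand-in for the 1-D resistivity forward problem plus data misfit."""
    target = np.array([2.0, -1.0, 0.5])
    return float(np.sum((m - target) ** 2))

# Simplified NA-style loop: rank models, keep the best nr, resample around them.
ns, nr, ndim = 50, 5, 3
models = rng.uniform(-5, 5, size=(ns, ndim))
for it in range(30):
    order = np.argsort([misfit(m) for m in models])
    best = models[order[:nr]]
    scale = 2.0 * 0.8 ** it   # shrinking neighbourhood (Gaussian, not Voronoi)
    models = np.vstack([b + scale * rng.standard_normal((ns // nr, ndim))
                        for b in best])

ensemble_best = min(models, key=misfit)
print(ensemble_best, misfit(ensemble_best))
```

The key property shared with NA is that the search concentrates sampling in promising regions while retaining an ensemble of acceptable models rather than a single best-fit point.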
Robotic Compliant Motion Control for Aircraft Refueling Applications
1988-12-01
J. DUVALL 29 SEP 88 C-26 SUBROUTINE IMPCONST(CONST,MINV, BMAT ) Abstract: This subroutine calculates the 25 constants used by the Fortran subroutine...mass with center of gravity along the joint 6 axis. The desired mass and the damping ( BMAT ) matrices are assumed to be diagonal. Joint angles 4,5...constants. MINV -- A 2x2 matrix containing the elements of the inverse desired mass matrix (diagonal). BMAT -- A 2x2 matrix of damping coefficients (diagonal
Computing Generalized Matrix Inverse on Spiking Neural Substrate
Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen
2018-01-01
Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines. PMID:29593483
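The effect of precision-limited weights can be illustrated with a naive fixed-point quantizer. This is purely illustrative: TrueNorth's actual weight representation and the paper's normalization and quantization scheme are not modeled here, and the matrix sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(M, bits=4):
    # Snap matrix entries to a symmetric b-bit fixed-point grid, mimicking
    # a precision-limited substrate (an assumption for illustration only).
    scale = np.abs(M).max()
    levels = 2 ** (bits - 1) - 1
    return np.round(M / scale * levels) / levels * scale

A = rng.normal(size=(6, 3))
b = rng.normal(size=6)

# Least-squares solutions via the Moore-Penrose inverse, full precision
# versus quantized system matrix.
x_full = np.linalg.pinv(A) @ b
x_quant = np.linalg.pinv(quantize(A)) @ b
diff = np.linalg.norm(x_full - x_quant)
print(diff)
```

The paper's contribution is precisely to bound this kind of discrepancy analytically so the quantized solver remains provably correct.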
Liao, C M
1997-01-01
A quantification analysis for evaluation of gaseous pollutant volatilization as a result of mass transfer from stored swine manure is presented from the viewpoint of residence time distribution. The method is based on evaluating the moments of concentration vs. time curves of both air and gaseous pollutants. The concept of moments of concentration histories is applicable to characterize the dispersal of the supplied air or gaseous pollutant in a ventilated system. The mean age or residence time of airflow can be calculated from an inverse system state matrix [B]-1 of a linear dynamic equation describing the dynamics of gaseous pollutant in a ventilated airspace. The sum of the elements in an arbitrary row i of matrix [B]-1 is equal to the mean age of airflow in airspace i. The mean age of gaseous pollutant in airspace i can be obtained from the area under the concentration profile divided by the equilibrium concentration reading in that space caused by gaseous pollutant sources. Matrix [B]-1 can also be represented in terms of the inverse local airflow rate matrix ([W]-1), transition probability matrix ([P]), and air volume matrix ([V]) as [B]-1 = [W]-1[P][V]. Finally, the mean age of airflow in a ventilated airspace can be interpreted through the physical characteristics of matrices [W] and [P]. The concepts are also applied in practice to a typical pig unit.
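The row-sum relation can be checked numerically on a hypothetical two-zone system. The state matrix values below are invented for illustration (units 1/h for dC/dt = [B]C + supply), not taken from the paper.

```python
import numpy as np

# Made-up two-zone ventilated airspace: off-diagonal terms are exchange
# rates between zones, diagonal terms the total removal rates (1/h).
B = np.array([[-10.0,   2.0],
              [  3.0,  -8.0]])
B_inv = np.linalg.inv(B)

# Abstract's relation: mean age of airflow in zone i = sum of row i of
# [B]^-1; magnitudes are taken because the sign convention of B varies.
mean_age = np.abs(B_inv.sum(axis=1))   # hours
print(mean_age)
```

With stronger ventilation (larger diagonal rates) the row sums shrink, matching the intuition that air is replaced faster.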
Information matrix estimation procedures for cognitive diagnostic models.
Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei
2018-03-06
Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
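A minimal sketch of the two estimators, on a deliberately misspecified toy model rather than a CDM: the model, data, and sample size below are assumptions for illustration, and only the estimator shapes (inverse observed information vs. sandwich) match the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
# Misspecified working model: fit N(mu, 1) to data whose true variance is 4.
x = rng.normal(loc=2.0, scale=2.0, size=2000)
mu_hat = x.mean()                        # MLE of mu under the working model

scores = x - mu_hat                      # per-observation score d/dmu log f
info_obs = float(len(x))                 # observed information (Hessian = -1 each)
info_xp = np.sum(scores ** 2)            # cross-product information

var_obs = 1.0 / info_obs                 # inverse observed information
var_sandwich = info_xp / info_obs ** 2   # sandwich A^-1 B A^-1
print(var_obs, var_sandwich)
```

Under correct specification the two estimates agree asymptotically; here the sandwich estimate is roughly four times larger, reflecting exactly the robustness to misspecification the simulation study reports.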
An Analytical State Transition Matrix for Orbits Perturbed by an Oblate Spheroid
NASA Technical Reports Server (NTRS)
Mueller, A. C.
1977-01-01
An analytical state transition matrix and its inverse, which include the short-period and secular effects of the second zonal harmonic, were developed from the nonsingular PS satellite theory. The fact that the independent variable in the PS theory is not time is in no respect disadvantageous, since any explicit analytical solution must be expressed in the true or eccentric anomaly. This is shown to be the case for the simple conic matrix. The PS theory allows for a concise, accurate, and algorithmically simple state transition matrix. The improvement over the conic matrix ranges from 2 to 4 digits of accuracy.
Wavelet-like bases for thin-wire integral equations in electromagnetics
NASA Astrophysics Data System (ADS)
Francomano, E.; Tortorici, A.; Toscano, E.; Ala, G.; Viola, F.
2005-03-01
In this paper, wavelets are used in solving, by the method of moments, a modified version of the thin-wire electric field integral equation in the frequency domain. The time-domain electromagnetic quantities are obtained by using the inverse discrete fast Fourier transform. The retarded scalar electric and vector magnetic potentials are employed in order to obtain the integral formulation. The discretized model generated by applying the direct method of moments via a point-matching procedure results in a linear system with a dense matrix, which has to be solved for each frequency of the Fourier spectrum of the time-domain impressed source. Therefore, an orthogonal wavelet-like basis transform is used to sparsify the moment matrix. In particular, dyadic and M-band wavelet transforms have been adopted, generating different sparse matrix structures. This leads to an efficient solution of the resulting sparse matrix equation. Moreover, a wavelet preconditioner is used to accelerate the convergence rate of the iterative solver employed. These numerical features are used in analyzing the transient behavior of a lightning protection system. In particular, the focus is on the transient performance, during operation, of the earth termination system of a lightning protection system or of the earth electrode of an electric power substation. The numerical results, obtained for a complex structure, are discussed and the features of the method are underlined.
Iterative computation of generalized inverses, with an application to CMG steering laws
NASA Technical Reports Server (NTRS)
Steincamp, J. W.
1971-01-01
A cubically convergent iterative method for computing the generalized inverse of an arbitrary M × N matrix A is developed, and a FORTRAN subroutine implementing the method for real matrices on a CDC 3200 is given, with a numerical example to illustrate accuracy. Application to a redundant single-gimbal CMG assembly steering law is discussed.
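A cubically convergent scheme of this type can be sketched with a third-order hyperpower iteration. The abstract does not give the exact recurrence, so this standard cubic variant is an assumption, not the paper's algorithm.

```python
import numpy as np

def pinv_hyperpower3(A, iters=40):
    """Third-order hyperpower iteration for the Moore-Penrose inverse:
        X <- X (3I - A X (3I - A X))
    The scaled-transpose start X0 = A^T / (||A||_1 ||A||_inf) guarantees
    convergence for any matrix A."""
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        AX = A @ X
        X = X @ (3 * I - AX @ (3 * I - AX))
    return X

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # 3x2, full column rank
X = pinv_hyperpower3(A)
print(np.allclose(X, np.linalg.pinv(A)))
```

Cubic convergence means the number of correct digits roughly triples each iteration once the error is small, which is why few iterations suffice.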
Sparse Matrix Motivated Reconstruction of Far-Field Radiation Patterns
2015-03-01
method for base-station antenna radiation patterns. IEEE Antennas Propagation Magazine. 2001;43(2):132. 4. Vasiliadis TG, Dimitriou D, Sergiadis JD...algorithm based on sparse representations of radiation patterns using the inverse Discrete Fourier Transform (DFT) and the inverse Discrete Cosine...patterns using a Model-Based Parameter Estimation (MBPE) technique that reduces the computational time required to model radiation patterns. Another
Fourier transformation microwave spectroscopy of the methyl glycolate-H2O complex
NASA Astrophysics Data System (ADS)
Fujitake, Masaharu; Tanaka, Toshihiro; Ohashi, Nobukimi
2018-01-01
The rotational spectrum of one conformer of the methyl glycolate-H2O complex has been measured by means of a pulsed-jet Fourier transform microwave spectrometer. The observed a- and b-type transitions exhibit doublet splittings due to the internal rotation of the methyl group. On the other hand, most of the c-type transitions exhibit quartet splittings arising from the methyl internal rotation and the inversion motion between two equivalent conformations. The spectrum was analyzed using parameterized expressions of the Hamiltonian matrix elements derived by applying the tunneling matrix formalism. Based on the results of ab initio calculation, the observed methyl glycolate-H2O complex was assigned to the most stable conformer of the insertion complex, in which a non-planar seven-membered ring structure is formed by the intermolecular hydrogen bonds between the methyl glycolate and H2O subunits. The inversion motion observed in the c-type transitions is therefore a kind of ring-inversion motion between two equivalent conformations. Conformational flexibility, corresponding to the ring-inversion between two equivalent conformations and to the isomerization between two possible conformers of the insertion complex, was investigated with the help of the ab initio calculation.
Haider, Mansoor A.; Guilak, Farshid
2009-01-01
Articular cartilage exhibits viscoelasticity in response to mechanical loading that is well described using biphasic or poroelastic continuum models. To date, boundary element methods (BEMs) have not been employed in modeling biphasic tissue mechanics. A three dimensional direct poroelastic BEM, formulated in the Laplace transform domain, is applied to modeling stress relaxation in cartilage. Macroscopic stress relaxation of a poroelastic cylinder in uni-axial confined compression is simulated and validated against a theoretical solution. Microscopic cell deformation due to poroelastic stress relaxation is also modeled. An extended Laplace inversion method is employed to accurately represent mechanical responses in the time domain. PMID:19851478
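Numerical inversion back to the time domain can be illustrated with the classic Gaver-Stehfest method, a simpler cousin of the extended Laplace inversion method the abstract refers to; the test transform F(s) = 1/(s+1), whose time-domain image is exp(-t), is chosen only for checking.

```python
import numpy as np
from math import factorial

def stehfest(F, t, N=12):
    # Gaver-Stehfest: f(t) ~ (ln 2 / t) * sum_k V_k * F(k ln 2 / t),
    # with combinatorial weights V_k (N even; N=12 is a common choice).
    ln2t = np.log(2.0) / t
    total = 0.0
    for k in range(1, N + 1):
        v = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            v += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        v *= (-1) ** (k + N // 2)
        total += v * F(k * ln2t)
    return ln2t * total

t = 1.5
err = abs(stehfest(lambda s: 1.0 / (s + 1.0), t) - np.exp(-t))
print(err)
```

Stehfest only needs real-valued samples of F(s), which makes it attractive when the transform-domain solution (here, the BEM solution) is expensive to evaluate.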
Fast Low-Rank Bayesian Matrix Completion With Hierarchical Gaussian Prior Models
NASA Astrophysics Data System (ADS)
Yang, Linxiao; Fang, Jun; Duan, Huiping; Li, Hongbin; Zeng, Bing
2018-06-01
The problem of low-rank matrix completion is considered in this paper. To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inverse operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods.
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Kreutz, K.
1988-01-01
This report advances a linear operator approach for analyzing the dynamics of systems of joint-connected rigid bodies. It is established that the mass matrix M for such a system can be factored as M = (I + HφL)D(I + HφL)^T. This yields an immediate inversion M^-1 = (I - HψL)^T D^-1 (I - HψL), where H and φ are given by known link geometric parameters, and L, ψ and D are obtained recursively by a spatial discrete-step Kalman filter and by the corresponding Riccati equation associated with this filter. The factors (I + HφL) and (I - HψL) are lower triangular matrices which are inverses of each other, and D is a diagonal matrix. This factorization and inversion of the mass matrix leads to recursive algorithms for forward dynamics based on spatially recursive filtering and smoothing. The primary motivation for advancing the operator approach is to provide a better means to formulate, analyze and understand spatial recursions in multibody dynamics. This is achieved because the linear operator notation allows manipulation of the equations of motion within a very high-level analytical framework (a spatial operator algebra) that is easy to understand and use. Detailed lower-level recursive algorithms can readily be obtained for inspection from the expressions involving spatial operators. The report consists of two main sections. In Part 1, the problem of serial chain manipulators is analyzed and solved. Extensions to a closed-chain system formed by multiple manipulators moving a common task object are contained in Part 2. To retain ease of exposition, only these two types of multibody systems are considered. However, the same methods can easily be applied to arbitrary multibody systems formed by a collection of joint-connected rigid bodies.
Principal Component Geostatistical Approach for large-dimensional inverse problems
Kitanidis, P K; Lee, J
2014-01-01
The quasi-linear geostatistical approach is for weakly nonlinear underdetermined inverse problems, such as hydraulic tomography and electrical resistivity tomography. It provides best estimates as well as measures for uncertainty quantification. However, in its textbook implementation, the approach involves iterations to reach an optimum and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for the determination of the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, are high. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m²n, though there are methods to reduce it. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free with respect to the Jacobian and improves the scalability of the geostatistical inverse problem. For each iteration, it is required to perform K runs of the forward problem, where K is not just much smaller than m but can be smaller than n. The computational and storage cost of the inverse procedure scales roughly linearly with m instead of m² as in the textbook approach. For problems of very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best. PMID:25558113
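The matrix-free ingredient can be sketched as a finite-difference Jacobian-vector product, which costs one extra forward run per product instead of one run per unknown. The forward map below is invented for illustration; it stands in for a hydraulic tomography or resistivity solver.

```python
import numpy as np

def fwd(m):
    # Made-up nonlinear observation function, 2 unknowns -> 3 observations.
    return np.array([m[0] ** 2 + m[1],
                     np.sin(m[0]) + m[1] ** 2,
                     m[0] * m[1]])

def jvp(m, v, eps=1e-6):
    # Directional derivative J v by forward differencing the forward model:
    # the Jacobian J is never formed or stored.
    return (fwd(m + eps * v) - fwd(m)) / eps

m = np.array([1.0, 2.0])
v = np.array([0.5, -0.3])

# Analytic Jacobian of fwd, used only to check the matrix-free product.
J = np.array([[2 * m[0], 1.0],
              [np.cos(m[0]), 2 * m[1]],
              [m[1], m[0]]])
print(np.allclose(jvp(m, v), J @ v, atol=1e-4))
```

In the geostatistical setting, each Gauss-Newton iteration needs only K such directed forward runs, which is the source of the near-linear scaling in m.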
NASA Astrophysics Data System (ADS)
Kumenko, A. I.; Kostyukov, V. N.; Kuz'minykh, N. Yu.; Timin, A. V.; Boichenko, S. N.
2017-09-01
Examples are presented of applying the method developed for the earlier proposed concept of a system for monitoring the technical condition of a turbounit. Methods for solving the inverse problem, i.e., calculating the misalignments of supports from measurements of the positions of rotor pins in the bearing bores during operation of a turbounit, are demonstrated. The results of determining the static responses of supports under operational misalignments are presented. Simulation and calculation of support misalignments are carried out for the three-bearing "high-pressure rotor-middle-pressure rotor" (HPR-MPR) system of a 250 MW turbounit and for the 14-support shafting of a 1000 MW turbounit. The calculated coefficients of the shafting stiffness matrix and the testing of the inverse-problem solution methods by modeling are presented. The inverse problem is solved with high accuracy when the shafting stiffness matrix used for determining the corrective centerings of rotors of multi-support shafting is inverted. The stiffness matrix can be recommended for analyzing the influence of displacements of one or several supports on the changes in the support responses of turbounit shafting during adjustment after assembly or repair. It is proposed to use the considered methods of misalignment evaluation in systems that monitor changes in the mutual position of supports and in the centering of rotors by half-couplings of turbounits, especially for seismically dangerous regions and for regions with increased foundation settlement due to soil watering.
Singularity and Nonnormality in the Classification of Compositional Data
Bohling, Geoffrey C.; Davis, J.C.; Olea, R.A.; Harff, Jan
1998-01-01
Geologists may want to classify compositional data and express the classification as a map. Regionalized classification is a tool that can be used for this purpose, but it incorporates discriminant analysis, which requires the computation and inversion of a covariance matrix. Covariance matrices of compositional data will always be singular (noninvertible) because of the unit-sum constraint. Fortunately, discriminant analyses can be calculated using a pseudo-inverse of the singular covariance matrix; this is done automatically by some statistical packages such as SAS. Granulometric data from the Darss Sill region of the Baltic Sea is used to explore how the pseudo-inversion procedure influences discriminant analysis results, comparing the algorithm used by SAS to the more conventional Moore-Penrose algorithm. Logratio transforms have been recommended to overcome problems associated with the analysis of compositional data, including singularity. A regionalized classification of the Darss Sill data after logratio transformation differs only slightly from one based on raw granulometric data, suggesting that closure problems do not severely influence the regionalized classification of compositional data.
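The singularity and the pseudo-inverse workaround are easy to reproduce on made-up compositional data (the four 3-part compositions below are invented for illustration):

```python
import numpy as np

# Made-up 3-part compositions; each row sums to 1 (the closure constraint).
comp = np.array([[0.6, 0.3, 0.1],
                 [0.5, 0.3, 0.2],
                 [0.7, 0.2, 0.1],
                 [0.4, 0.4, 0.2]])

C = np.cov(comp, rowvar=False)
print(np.linalg.matrix_rank(C))        # rank 2 < 3: singular by closure

# A Moore-Penrose pseudo-inverse still supports Mahalanobis-type
# discriminant computations where a true inverse does not exist.
C_pinv = np.linalg.pinv(C)

# Centered logratio (clr) transform, the recommended alternative; its rows
# sum to zero, moving the constraint rather than removing it.
clr = np.log(comp) - np.log(comp).mean(axis=1, keepdims=True)
print(np.allclose(clr.sum(axis=1), 0.0))
```

Note that the clr covariance is also singular (zero row sums), which is why additive or isometric logratio variants are often preferred in practice.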
An Efficient Spectral Method for Ordinary Differential Equations with Rational Function Coefficients
NASA Technical Reports Server (NTRS)
Coutsias, Evangelos A.; Torres, David; Hagstrom, Thomas
1994-01-01
We present some relations that allow the efficient approximate inversion of linear differential operators with rational function coefficients. We employ expansions in terms of a large class of orthogonal polynomial families, including all the classical orthogonal polynomials. These families obey a simple three-term recurrence relation for differentiation, which implies that on an appropriately restricted domain the differentiation operator has a unique banded inverse. The inverse is an integration operator for the family, and it is simply the tridiagonal coefficient matrix for the recurrence. Since in these families convolution operators (i.e. matrix representations of multiplication by a function) are banded for polynomials, we are able to obtain a banded representation for linear differential operators with rational coefficients. This leads to a method of solution of initial or boundary value problems that, besides having an operation count that scales linearly with the order of truncation N, is computationally well conditioned. Among the applications considered is the use of rational maps for the resolution of sharp interior layers.
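For the Chebyshev family, one member of the class discussed, the banded (tridiagonal) integration operator can be written down directly and checked against a library routine. The coefficient relation used, b_k = (c_{k-1} a_{k-1} - a_{k+1}) / (2k) with c_0 = 2 and c_k = 1 otherwise, is the standard Chebyshev recurrence, stated here as background rather than quoted from the paper.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_integration_matrix(N):
    # Tridiagonal matrix B mapping Chebyshev coefficients of f to those of
    # an antiderivative of f (row 0, the integration constant, is left 0).
    B = np.zeros((N + 1, N + 1))
    for k in range(1, N + 1):
        B[k, k - 1] = (2.0 if k - 1 == 0 else 1.0) / (2 * k)
        if k + 1 <= N:
            B[k, k + 1] = -1.0 / (2 * k)
    return B

N = 8
a = np.zeros(N + 1); a[3] = 1.0           # f = T_3
b = cheb_integration_matrix(N) @ a        # coefficients of an antiderivative

# Compare with numpy's Chebyshev integration, ignoring the constant term.
ref = C.chebint(a)[:N + 1]
print(np.allclose(b[1:], ref[1:len(b)]))
```

Because this operator is banded, composing it with banded multiplication operators keeps the whole discretized problem sparse, which is the source of the O(N) operation count the abstract claims.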
NASA Astrophysics Data System (ADS)
Wu, Sheng-Jhih; Chu, Moody T.
2017-08-01
An inverse eigenvalue problem usually entails two constraints, one conditioned upon the spectrum and the other on the structure. This paper investigates the problem where triple constraints of eigenvalues, singular values, and diagonal entries are imposed simultaneously. An approach combining an eclectic mix of skills from differential geometry, optimization theory, and analytic gradient flow is employed to prove the solvability of such a problem. The result generalizes the classical Mirsky, Sing-Thompson, and Weyl-Horn theorems concerning the respective majorization relationships between any two of the arrays of main diagonal entries, eigenvalues, and singular values. The existence theory fills a gap in the classical matrix theory. The problem might find applications in wireless communication and quantum information science. The technique employed can be implemented as a first-step numerical method for constructing the matrix. With slight modification, the approach might be used to explore similar types of inverse problems where the prescribed entries are at general locations.
Cuenca, Jacques; Göransson, Peter
2012-08-01
This paper presents a method for simultaneously identifying both the elastic and anelastic properties of the porous frame of anisotropic open-cell foams. The approach is based on an inverse estimation procedure of the complex stiffness matrix of the frame by performing a model fit of a set of transfer functions of a sample of material subjected to compression excitation in vacuo. The material elastic properties are assumed to have orthotropic symmetry and the anelastic properties are described using a fractional-derivative model within the framework of an augmented Hooke's law. The inverse estimation problem is formulated as a numerical optimization procedure and solved using the globally convergent method of moving asymptotes. To show the feasibility of the approach a numerically generated target material is used here as a benchmark. It is shown that the method provides the full frequency-dependent orthotropic complex stiffness matrix within a reasonable degree of accuracy.
NASA Technical Reports Server (NTRS)
Boulet, C.; Ma, Q.
2016-01-01
Line mixing effects have been calculated in the ν1 parallel band of self-broadened NH3. The theoretical approach is an extension of a semi-classical model to symmetric-top molecules with inversion symmetry developed in the companion paper [Q. Ma and C. Boulet, J. Chem. Phys. 144, 224303 (2016)]. This model takes into account line coupling effects and hence enables the calculation of the entire relaxation matrix. A detailed analysis of the various coupling mechanisms is carried out for Q and R inversion doublets. The model has been applied to the calculation of the shape of the Q branch and of some R manifolds for which an obvious signature of line mixing effects has been experimentally demonstrated. Comparisons with measurements show that the present formalism leads to an accurate prediction of the available experimental line shapes. Discrepancies between the experimental and theoretical sets of first order mixing parameters are discussed as well as some extensions of both theory and experiment.
NASA Astrophysics Data System (ADS)
Ogiso, M.
2017-12-01
Heterogeneous attenuation structure is important not only for understanding earth structure and seismotectonics, but also for ground motion prediction. Attenuation of ground motion in the high-frequency range is often characterized by the distribution of intrinsic and scattering attenuation parameters (intrinsic Q and the scattering coefficient). From the viewpoint of ground motion prediction, both intrinsic and scattering attenuation affect the maximum amplitude of ground motion, while scattering attenuation also affects the duration of ground motion. Hence, estimating both attenuation parameters will improve ground motion prediction. In this study, we try to estimate both parameters in southwestern Japan in a tomographic manner. We conduct envelope fitting of seismic coda, since the coda is sensitive to both intrinsic attenuation and scattering coefficients. Recently, Takeuchi (2016) successfully calculated differential envelopes for fluctuations in these parameters. We adopted his equations to calculate partial derivatives with respect to these parameters, since we did not need to assume a homogeneous velocity structure. The matrix for inversion of the structural parameters would be too large to invert in a straightforward manner. Hence, we adopted the ART-type Bayesian Reconstruction Method (Hirahara, 1998) to project the envelope differences onto the structural parameters iteratively. We conducted a checkerboard reconstruction test, assuming a checkerboard pattern with 0.4 degree intervals in the horizontal direction and 20 km intervals in depth. The reconstructed structures reproduced the assumed pattern well in the shallower part but not in the deeper part. Since the inversion kernel has large sensitivity around sources and stations, resolution in the deeper part would be limited by the sparse distribution of earthquakes. To apply the inversion method described above to actual waveforms, we have to correct for the effects of the source and site amplification terms. We consider these issues in estimating the actual intrinsic and scattering structures of the target region. Acknowledgment: We used the waveforms of Hi-net, NIED. This study was supported by the Earthquake Research Institute of the University of Tokyo cooperative research program.
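An ART-type iteration of the kind mentioned can be sketched as a Kaczmarz row-action solver: each data row's residual is projected onto the model one row at a time, so no large matrix is ever inverted. The sensitivity kernel and data below are synthetic, and the Bayesian weighting of Hirahara (1998) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 10))          # stand-in sensitivity kernel
x_true = rng.normal(size=10)           # stand-in structural parameters
d = A @ x_true                         # synthetic envelope differences

x = np.zeros(10)
for sweep in range(200):
    for i in range(A.shape[0]):
        # Project the i-th residual onto the model along row i.
        r = d[i] - A[i] @ x
        x += r * A[i] / (A[i] @ A[i])
print(np.allclose(x, x_true, atol=1e-6))
```

Because each update touches one row at a time, the method streams through arbitrarily large kernels with O(number of unknowns) memory, which is the point of using it for tomographic matrices too large to invert directly.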
A Generic Guidance and Control Structure for Six-Degree-of-Freedom Conceptual Aircraft Design
NASA Technical Reports Server (NTRS)
Cotting, M. Christopher; Cox, Timothy H.
2005-01-01
A control system framework is presented for both real-time and batch six-degree-of-freedom simulation. This framework allows stabilization and control with multiple command options, from body rate control to waypoint guidance. Also, pilot commands can be used to operate the simulation in a pilot-in-the-loop environment. This control system framework is created by using direct vehicle state feedback with nonlinear dynamic inversion. A direct control allocation scheme is used to command aircraft effectors. Online B-matrix estimation is used in the control allocation algorithm for maximum algorithm flexibility. Primary uses for this framework include conceptual design and early preliminary design of aircraft, where vehicle models change rapidly and a knowledge of vehicle six-degree-of-freedom performance is required. A simulated airbreathing hypersonic vehicle and a simulated high performance fighter are controlled to demonstrate the flexibility and utility of the control system.
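A minimal sketch of pseudo-inverse control allocation with an estimated B matrix follows; the 3x4 effectiveness values are invented for illustration, and the paper's direct allocation scheme may differ in detail.

```python
import numpy as np

# Estimated control-effectiveness matrix: rows map four effector
# deflections to body-axis roll/pitch/yaw moments (values are made up).
B = np.array([[ 1.0, -1.0,  0.2,  0.0],   # roll effectiveness
              [ 0.3,  0.3, -1.0,  0.0],   # pitch effectiveness
              [ 0.1, -0.1,  0.0,  1.0]])  # yaw effectiveness

moments_cmd = np.array([0.5, -0.2, 0.1])  # desired [roll, pitch, yaw]

# Least-norm effector commands achieving the commanded moments; because B
# is re-estimated online, only B changes when the vehicle model changes.
delta = np.linalg.pinv(B) @ moments_cmd
print(np.allclose(B @ delta, moments_cmd))
```

The appeal for conceptual design is exactly this decoupling: swapping in a new vehicle only requires a new B estimate, not a redesigned allocator.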
NASA Astrophysics Data System (ADS)
Zhou, Bing; Greenhalgh, S. A.
2011-10-01
2.5-D modeling and inversion techniques are much closer to reality than the simple and traditional 2-D seismic wave modeling and inversion. The sensitivity kernels required in full waveform seismic tomographic inversion are the Fréchet derivatives of the displacement vector with respect to the independent anisotropic model parameters of the subsurface. They give the sensitivity of the seismograms to changes in the model parameters. This paper applies two methods, called `the perturbation method' and `the matrix method', to derive the sensitivity kernels for 2.5-D seismic waveform inversion. We show that the two methods yield the same explicit expressions for the Fréchet derivatives using a constant-block model parameterization, and are available for both the line-source (2-D) and the point-source (2.5-D) cases. The method involves two Green's function vectors and their gradients, as well as the derivatives of the elastic modulus tensor with respect to the independent model parameters. The two Green's function vectors are the responses of the displacement vector to the two directed unit vectors located at the source and geophone positions, respectively; they can be generally obtained by numerical methods. The gradients of the Green's function vectors may be approximated in the same manner as the differential computations in the forward modeling. The derivatives of the elastic modulus tensor with respect to the independent model parameters can be obtained analytically, dependent on the class of medium anisotropy. Explicit expressions are given for two special cases—isotropic and tilted transversely isotropic (TTI) media. Numerical examples are given for the latter case, which involves five independent elastic moduli (or Thomsen parameters) plus one angle defining the symmetry axis.
Condition Number Regularized Covariance Estimation*
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2012-01-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
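The core idea, bounding the condition number by restricting the eigenvalues to an interval, can be sketched as follows. The paper chooses the interval endpoint by maximum likelihood; the naive choice of tau from the eigenvalue mean below is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)
# "Large p small n": n = 15 samples of a p = 30 dimensional vector, so the
# sample covariance is singular (rank at most n - 1).
X = rng.normal(size=(15, 30))
S = np.cov(X, rowvar=False)
w, V = np.linalg.eigh(S)

kappa = 50.0                           # target condition-number bound
tau = max(w.mean() / kappa, 1e-12)     # naive floor (assumption, not the
                                       # paper's ML choice of tau)
w_reg = np.clip(w, tau, kappa * tau)   # squeeze spectrum into [tau, k*tau]
S_reg = V @ np.diag(w_reg) @ V.T       # well-conditioned estimate

cond = w_reg.max() / w_reg.min()
print(cond <= kappa)
```

Clipping the spectrum at both ends is the Steinian-shrinkage flavor the abstract mentions: small (here zero) eigenvalues are pulled up and large ones pulled down.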
Reconfigurable Control with Neural Network Augmentation for a Modified F-15 Aircraft
NASA Technical Reports Server (NTRS)
Burken, John J.
2007-01-01
This paper describes the performance of a simplified dynamic inversion controller with neural network supplementation. This 6 DOF (Degree-of-Freedom) simulation study focuses on the results with and without adaptation of neural networks, using a simulation of the NASA modified F-15, which has canards. One area of interest is the performance under a simulated surface failure while attempting to minimize the inertial cross coupling effect of a [B] matrix failure (a control derivative anomaly associated with a jammed or missing control surface). Another area of interest is simulated aerodynamic failures ([A] matrix), such as a canard failure. The controller uses explicit models to produce desired angular rate commands. The dynamic inversion calculates the necessary surface commands to achieve the desired rates. The simplified dynamic inversion uses approximate short period and roll axis dynamics. Initial results indicated that the transient response for a [B] matrix failure using a Neural Network (NN) improved the control behavior when compared to not using a neural network for a given failure. However, further evaluation showed the controller performance was comparable, with objectionable cross coupling effects (after changes were made to the controller). This paper describes the methods employed to reduce the cross coupling effect and maintain adequate tracking errors. The [A] matrix failure results show that control of the aircraft without adaptation is more difficult (less damped) than with active neural networks. Simulation results show that neural network augmentation of the controller improves performance in terms of tracking error and cross coupling reduction for aerodynamic-type failures.
ZnFe2O4 nanoparticles dispersed in a highly porous silica aerogel matrix: a magnetic study.
Bullita, S; Casu, A; Casula, M F; Concas, G; Congiu, F; Corrias, A; Falqui, A; Loche, D; Marras, C
2014-03-14
We report the detailed structural characterization and magnetic investigation of nanocrystalline zinc ferrite nanoparticles supported on a porous silica aerogel matrix, which differ in size (in the range 4-11 nm) and in inversion degree (from 0.4 to 0.2), as compared to bulk zinc ferrite, which has a normal spinel structure. The samples were investigated by zero-field-cooling-field-cooling and thermo-remanent DC magnetization measurements, AC magnetization investigation and Mössbauer spectroscopy. The nanocomposites are superparamagnetic at room temperature; the temperature of the superparamagnetic transition in the samples decreases with increasing particle size, and it is therefore mainly determined by the inversion degree rather than by the particle size, which alone would have the opposite effect on the blocking temperature. The contribution of particle interaction to the magnetic behavior of the nanocomposites decreases significantly in the sample with the largest particle size. The values of the anisotropy constant give evidence that the anisotropy constant decreases upon increasing the particle size of the samples. All these results clearly indicate that, even when dispersed at low concentration in a non-magnetic, highly porous and insulating matrix, the zinc ferrite nanoparticles show a magnetic behavior similar to that displayed when they are unsupported or dispersed in a similar but denser matrix, and with higher loading. The effective anisotropy measured for our samples appears to be systematically higher than that measured for supported zinc ferrite nanoparticles of similar size, indicating that this effect probably occurs as a consequence of the high inversion degree.
Inverse eigenproblem for R-symmetric matrices and their approximation
NASA Astrophysics Data System (ADS)
Yuan, Yongxin
2009-11-01
Let R ∈ ℝ^(n×n) be a nontrivial involution, i.e., R = R⁻¹ ≠ ±Iₙ. We say that G ∈ ℂ^(n×n) is R-symmetric if RGR = G. The set of all R-symmetric matrices is denoted by 𝒮. In this paper, we first give the solvability condition for the following inverse eigenproblem (IEP): given a set of vectors {x₁, …, xₘ} in ℂⁿ and a set of complex numbers {λ₁, …, λₘ}, find a matrix A ∈ 𝒮 such that {λᵢ} and {xᵢ} are, respectively, eigenvalues and eigenvectors of A. We then consider the following approximation problem: given an n×n matrix Ã, find Â ∈ S_E such that ‖Ã − Â‖ = min over A ∈ S_E of ‖Ã − A‖, where S_E is the solution set of the IEP and ‖·‖ is the Frobenius norm. We provide an explicit formula for the best approximation solution by means of the canonical correlation decomposition.
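The defining property RGR = G is easy to verify numerically. The small sketch below (with an illustrative choice of involution R, a signed identity) projects an arbitrary matrix onto the R-symmetric set via G ↦ (G + RGR)/2 and checks the property; the notation follows the reconstructed abstract, not the paper's exact symbols.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# A nontrivial involution R = R^{-1} != ±I_n: here a diagonal sign matrix.
R = np.diag([1.0, -1.0, 1.0, -1.0])
assert np.allclose(R @ R, np.eye(n))

# Project an arbitrary G onto the R-symmetric subspace: P(G) = (G + RGR)/2.
G = rng.standard_normal((n, n))
G_sym = 0.5 * (G + R @ G @ R)

# R G_sym R = G_sym holds for the projected matrix.
print(np.allclose(R @ G_sym @ R, G_sym))
```

Since R(G + RGR)R = RGR + G, the projection always lands in the R-symmetric set, which is why the check succeeds for any starting G.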
Real time evolution at finite temperatures with operator space matrix product states
NASA Astrophysics Data System (ADS)
Pižorn, Iztok; Eisler, Viktor; Andergassen, Sabine; Troyer, Matthias
2014-07-01
We propose a method to simulate the real time evolution of one-dimensional quantum many-body systems at finite temperature by expressing both the density matrices and the observables as matrix product states. This allows the calculation of expectation values and correlation functions as scalar products in operator space. The simulations of the density matrices in inverse temperature and of the local operators in the Heisenberg picture are independent, and result in a grid of expectation values for all intermediate temperatures and times. Simulations can be performed using real arithmetic, with only polynomial growth of computational resources in inverse temperature and time for integrable systems. The method is illustrated for the XXZ model and the single impurity Anderson model.
NASA Astrophysics Data System (ADS)
Schmoldt, Jan-Philipp; Jones, Alan G.
2013-12-01
The key result of this study is the development of a novel inversion approach for cases of orthogonal, or close to orthogonal, geoelectric strike directions at different depth ranges, for example, crustal and mantle depths. Oblique geoelectric strike directions are a well-known issue in commonly employed isotropic 2-D inversion of MT data. Whereas recovery of upper (crustal) structures can, in most cases, be achieved in a straightforward manner, deriving lower (mantle) structures is more challenging with isotropic 2-D inversion in the case of an overlying region (crust) with different geoelectric strike direction. Thus, investigators may resort to computationally expensive and more limited 3-D inversion in order to derive the electric resistivity distribution at mantle depths. In the novel approaches presented in this paper, electric anisotropy is used to image 2-D structures in one depth range, whereas the other region is modelled with an isotropic 1-D or 2-D approach, as a result significantly reducing computational costs of the inversion in comparison with 3-D inversion. The 1- and 2-D versions of the novel approach were tested using a synthetic 3-D subsurface model with orthogonal strike directions at crust and mantle depths and their performance was compared to results of isotropic 2-D inversion. Structures at crustal depths were reasonably well recovered by all inversion approaches, whereas recovery of mantle structures varied significantly between the different approaches. Isotropic 2-D inversion models, despite decomposition of the electric impedance tensor and using a wide range of inversion parameters, exhibited severe artefacts thereby confirming the requirement of either an enhanced or a higher dimensionality inversion approach. 
With the anisotropic 1-D inversion approach, mantle structures of the synthetic model were recovered reasonably well with anisotropy values parallel to the mantle strike direction (in this study anisotropy was assigned to the mantle region), indicating applicability of the novel approach for basic subsurface cases. For the more complex subsurface cases, however, the anisotropic 1-D inversion approach is likely to yield implausible models of the electric resistivity distribution due to inapplicability of the 1-D approximation. Owing to the higher number of degrees of freedom, the anisotropic 2-D inversion approach can cope with more complex subsurface cases and is the recommended tool for real data sets recorded in regions with orthogonal geoelectric strike directions.
NASA Astrophysics Data System (ADS)
Siegel, Z.; Siegel, Edward Carl-Ludwig
2011-03-01
RANDOMNESS of Numbers cognitive-semantics DEFINITION VIA Cognition QUERY: WHAT???, NOT HOW?) VS. computer-``science" mindLESS number-crunching (Harrel-Sipser-...) algorithmics Goldreich "PSEUDO-randomness"[Not.AMS(02)] mea-culpa is ONLY via MAXWELL-BOLTZMANN CLASSICAL-STATISTICS(NOT FDQS!!!) "hot-plasma" REPULSION VERSUS Newcomb(1881)-Weyl(1914;1916)-Benford(1938) "NeWBe" logarithmic-law digit-CLUMPING/ CLUSTERING NON-Randomness simple Siegel[AMS Joint.Mtg.(02)-Abs. # 973-60-124] algebraic-inversion to THE QUANTUM and ONLY BEQS preferentially SEQUENTIALLY lower-DIGITS CLUMPING/CLUSTERING with d = 0 BEC, is ONLY VIA Siegel-Baez FUZZYICS=CATEGORYICS (SON OF TRIZ)/"Category-Semantics"(C-S), latter intersection/union of Lawvere(1964)-Siegel(1964)] category-theory (matrix: MORPHISMS V FUNCTORS) "+" cognitive-semantics'' (matrix: ANTONYMS V SYNONYMS) yields Siegel-Baez FUZZYICS=CATEGORYICS/C-S tabular list-format matrix truth-table analytics: MBCS RANDOMNESS TRUTH/EMET!!!
Neutrino and CP-even Higgs boson masses in a nonuniversal U(1)′ extension
NASA Astrophysics Data System (ADS)
Mantilla, S. F.; Martinez, R.; Ochoa, F.
2017-05-01
We propose a new anomaly-free and family nonuniversal U(1)′ extension of the standard model with the addition of two scalar singlets and a new scalar doublet. The quark sector is extended by adding three exotic quark singlets, while the lepton sector includes two exotic charged lepton singlets, three right-handed neutrinos, and three sterile Majorana leptons to obtain the fermionic mass spectrum of the standard model. The lepton sector also reproduces the elements of the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix and the squared-mass differences data from neutrino oscillation experiments. Also, analytical relations for the PMNS matrix are derived via the inverse seesaw mechanism, and numerical predictions of the parameters in both the normal and inverse ordering schemes for the masses of the phenomenological neutrinos are obtained. We employed a simple seesawlike method to obtain analytical mass eigenstates of the CP-even 3×3 mass matrix of the scalar sector.
Precision estimate for Odin-OSIRIS limb scatter retrievals
NASA Astrophysics Data System (ADS)
Bourassa, A. E.; McLinden, C. A.; Bathgate, A. F.; Elash, B. J.; Degenstein, D. A.
2012-02-01
The limb scatter measurements made by the Optical Spectrograph and Infrared Imaging System (OSIRIS) instrument on the Odin spacecraft are used to routinely produce vertically resolved trace gas and aerosol extinction profiles. Version 5 of the ozone and stratospheric aerosol extinction retrievals, which are available for download, are performed using a multiplicative algebraic reconstruction technique (MART). The MART inversion is a type of relaxation method, and as such the covariance of the retrieved state is estimated numerically, which, if done directly, is a computationally heavy task. Here we provide a methodology for the derivation of a numerical estimate of the covariance matrix for the retrieved state using the MART inversion that is sufficiently efficient to perform for each OSIRIS measurement. The resulting precision is compared with the variability in a large set of pairs of OSIRIS measurements that are close in time and space in the tropical stratosphere where the natural atmospheric variability is weak. These results are found to be highly consistent and thus provide confidence in the numerical estimate of the precision in the retrieved profiles.
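The MART relaxation scheme mentioned in the abstract can be sketched in a few lines. The version below is a generic, illustrative MART for a small nonnegative linear system, not the OSIRIS retrieval code; the row normalization and relaxation parameter `lam` are common choices, assumed here for the sketch.

```python
import numpy as np

def mart(A, y, n_iter=200, lam=1.0):
    """Multiplicative algebraic reconstruction technique (MART).
    Starting from a positive guess, each row i multiplicatively rescales
    the state by (y_i / (A x)_i) raised to lam * A_ij / max_j A_ij,
    keeping the solution positive throughout."""
    m, n = A.shape
    x = np.ones(n)
    for _ in range(n_iter):
        for i in range(m):
            Ax_i = A[i] @ x
            if Ax_i > 0 and y[i] > 0:
                x *= (y[i] / Ax_i) ** (lam * A[i] / A[i].max())
    return x

rng = np.random.default_rng(2)
A = rng.random((30, 10))          # nonnegative forward operator
x_true = rng.random(10) + 0.5     # positive true state
y = A @ x_true                    # consistent, noise-free data
x_rec = mart(A, y)
print(np.linalg.norm(A @ x_rec - y))
```

Because MART is iterative rather than an explicit matrix inverse, the retrieval covariance is not available in closed form, which is exactly why the paper develops a numerical estimate of the precision.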
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Boian S.; Vesselinov, Velimir V.; Stanev, Valentin
The ShiftNMFk1.2 code, or as we call it, GreenNMFk, represents a hybrid algorithm combining unsupervised adaptive machine learning and the Green's function inverse method. GreenNMFk allows efficient, high-performance de-mixing and feature extraction of a multitude of nonnegative signals that change their shape while propagating through the medium. The signals are mixed and recorded by a network of uncorrelated sensors. The code couples Non-negative Matrix Factorization (NMF) with the inverse-analysis Green's functions method. GreenNMFk synergistically performs decomposition of the recorded mixtures, finds the number of the unknown sources, and uses the Green's function of the governing partial differential equation to identify the unknown sources and their characteristics. GreenNMFk can be applied directly to any problem controlled by a known parabolic partial differential equation where mixtures of an unknown number of sources are measured at multiple locations. The full GreenNMFk method is the subject of LANL U.S. patent application S133364.000 (August 2017). The ShiftNMFk 1.2 version here is a toy version of this method that can work with a limited number of unknown sources (4 or fewer).
Gas Hydrate Estimation Using Rock Physics Modeling and Seismic Inversion
NASA Astrophysics Data System (ADS)
Dai, J.; Dutta, N.; Xu, H.
2006-05-01
We conducted a theoretical study of the effects of gas hydrate saturation on the acoustic properties (P- and S-wave velocities, and bulk density) of host rocks, using wireline log data from the Mallik wells in the Mackenzie Delta in Northern Canada. We evaluated a number of gas hydrate rock physics models that correspond to different rock textures. Our study shows that, among the existing rock physics models, the one that treats gas hydrate as part of the solid matrix best fits the measured data. This model was also tested on gas hydrate hole 995B of ODP Leg 164 at Blake Ridge, where it showed an adequate match. Based on this understanding of gas hydrate rock models and the properties of shallow sediments, we define a procedure that quantifies gas hydrate using rock physics modeling and seismic inversion. The method allows us to estimate gas hydrate directly from seismic information only. This paper shows examples of gas hydrate quantification from both a 1D profile and a 3D volume in the deepwater Gulf of Mexico.
Heuett, William J; Beard, Daniel A; Qian, Hong
2008-01-01
Background Several approaches, including metabolic control analysis (MCA), flux balance analysis (FBA), correlation metric construction (CMC), and biochemical circuit theory (BCT), have been developed for the quantitative analysis of complex biochemical networks. Here, we present a comprehensive theory of linear analysis for nonequilibrium steady-state (NESS) biochemical reaction networks that unites these disparate approaches in a common mathematical framework and thermodynamic basis. Results In this theory a number of relationships between key matrices are introduced: the matrix A obtained in the standard, linear-dynamic-stability analysis of the steady-state can be decomposed as A = SR^T, where R and S are directly related to the elasticity-coefficient matrix for the fluxes and chemical potentials in MCA, respectively; the control-coefficients for the fluxes and chemical potentials can be written in terms of R^T BS and S^T BS respectively, where matrix B is the inverse of A; the matrix S is precisely the stoichiometric matrix in FBA; and the matrix e^(At) plays a central role in CMC. Conclusion One key finding that emerges from this analysis is that the well-known summation theorems in MCA take different forms depending on whether metabolic steady-state is maintained by flux injection or concentration clamping. We demonstrate that if rate-limiting steps exist in a biochemical pathway, they are the steps with smallest biochemical conductances and largest flux control-coefficients. We hypothesize that biochemical networks for cellular signaling have a different strategy for minimizing energy waste and being efficient than do biochemical networks for biosynthesis. We also discuss the intimate relationship between MCA and biochemical systems analysis (BSA). PMID:18482450
NASA Astrophysics Data System (ADS)
Kaporin, I. E.
2012-02-01
In order to precondition a sparse symmetric positive definite matrix, its approximate inverse is examined, which is represented as the product of two sparse mutually adjoint triangular matrices. In this way, the solution of the corresponding system of linear algebraic equations (SLAE) by applying the preconditioned conjugate gradient method (CGM) is reduced to performing only elementary vector operations and calculating sparse matrix-vector products. A method for constructing the above preconditioner is described and analyzed. The triangular factor has a fixed sparsity pattern and is optimal in the sense that the preconditioned matrix has a minimum K-condition number. The use of polynomial preconditioning based on Chebyshev polynomials makes it possible to considerably reduce the amount of scalar product operations (at the cost of an insignificant increase in the total number of arithmetic operations). The possibility of an efficient massively parallel implementation of the resulting method for solving SLAEs is discussed. For a sequential version of this method, the results obtained by solving 56 test problems from the Florida sparse matrix collection (which are large-scale and ill-conditioned) are presented. These results show that the method is highly reliable and has low computational costs.
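The preconditioned CG iteration described above can be sketched with SciPy. The block below uses the simplest possible approximate inverse (the diagonal Jacobi preconditioner) as a stand-in for the paper's optimized sparse triangular factors; the test matrix is a 1-D Laplacian, chosen only for illustration.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# SPD test matrix: the 1-D Laplacian (tridiagonal, ill-conditioned for large n).
n = 200
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Approximate inverse preconditioner M ~ A^{-1}: here the Jacobi choice
# (reciprocal diagonal), a crude stand-in for a factorized sparse
# approximate inverse G G^T that minimizes the K-condition number.
Minv = LinearOperator((n, n), matvec=lambda v: v / A.diagonal())

# Preconditioned conjugate gradients: only matrix-vector products and
# elementary vector operations, as the abstract emphasizes.
x, info = cg(A, b, M=Minv, maxiter=5000)
print(info, np.linalg.norm(A @ x - b))
```

A better approximate inverse reduces the iteration count; the code structure (CG plus a `LinearOperator` applying the preconditioner) stays the same.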
Actomyosin tension as a determinant of metastatic cancer mechanical tropism
NASA Astrophysics Data System (ADS)
McGrail, Daniel J.; Kieu, Quang Minh N.; Iandoli, Jason A.; Dawson, Michelle R.
2015-04-01
Despite major advances in the characterization of molecular regulators of cancer growth and metastasis, patient survival rates have largely stagnated. Recent studies have shown that mechanical cues from the extracellular matrix can drive the transition to a malignant phenotype. Moreover, it is also known that the metastatic process, which results in over 90% of cancer-related deaths, is governed by intracellular mechanical forces. To better understand these processes, we identified metastatic tumor cells originating from different locations which undergo inverse responses to altered matrix elasticity: MDA-MB-231 breast cancer cells that prefer rigid matrices and SKOV-3 ovarian cancer cells that prefer compliant matrices as characterized by parameters such as tumor cell proliferation, chemoresistance, and migration. Transcriptomic analysis revealed higher expression of genes associated with cytoskeletal tension and contractility in cells that prefer stiff environments, both when comparing MDA-MB-231 to SKOV-3 cells as well as when comparing bone-metastatic to lung-metastatic MDA-MB-231 subclones. Using small molecule inhibitors, we found that blocking the activity of these pathways mitigated rigidity-dependent behavior in both cell lines. Probing the physical forces exerted by cells on the underlying substrates revealed that though force magnitude may not directly correlate with functional outcomes, other parameters such as force polarization do correlate directly with cell motility. Finally, this biophysical analysis demonstrates that intrinsic levels of cell contractility determine the matrix rigidity for maximal cell function, possibly influencing tissue sites for metastatic cancer cell engraftment during dissemination. By increasing our understanding of the physical interactions of cancer cells with their microenvironment, these studies may help develop novel therapeutic strategies.
Coupled near-field and far-field exposure assessment framework for chemicals in consumer products.
Fantke, Peter; Ernstoff, Alexi S; Huang, Lei; Csiszar, Susan A; Jolliet, Olivier
2016-09-01
Humans can be exposed to chemicals in consumer products through product use and environmental emissions over the product life cycle. Exposure pathways are often complex, where chemicals can transfer directly from products to humans during use or exchange between various indoor and outdoor compartments until sub-fractions reach humans. To consistently evaluate exposure pathways along product life cycles, a flexible mass balance-based assessment framework is presented structuring multimedia chemical transfers in a matrix of direct inter-compartmental transfer fractions. By matrix inversion, we quantify cumulative multimedia transfer fractions and exposure pathway-specific product intake fractions defined as chemical mass taken in by humans per unit mass of chemical in a product. Combining product intake fractions with chemical mass in the product yields intake estimates for use in life cycle impact assessment and chemical alternatives assessment, or daily intake doses for use in risk-based assessment and high-throughput screening. Two illustrative examples of chemicals used in personal care products and flooring materials demonstrate how this matrix-based framework offers a consistent and efficient way to rapidly compare exposure pathways for adult and child users and for the general population. This framework constitutes a user-friendly approach to develop, compare and interpret multiple human exposure scenarios in a coupled system of near-field ('user' environment), far-field and human intake compartments, and helps understand the contribution of individual pathways to overall human exposure in various product application contexts to inform decisions in different science-policy fields for which exposure quantification is relevant.
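The matrix-inversion step can be made concrete with a tiny hypothetical example. If T[i, j] holds the direct transfer fraction from compartment j to compartment i, then summing all multi-step pathways gives the cumulative transfer fractions (I − T)⁻¹ = I + T + T² + …. The three compartments and the numbers below are invented for illustration, not from the paper.

```python
import numpy as np

# Hypothetical direct transfer fractions T[i, j]: fraction of the chemical
# in compartment j transferred directly to compartment i (column sums <= 1;
# the remainder degrades or is otherwise lost). Compartments:
# 0 = product, 1 = indoor air, 2 = human intake (absorbing).
T = np.array([
    [0.0, 0.0, 0.0],
    [0.3, 0.0, 0.0],   # product -> indoor air
    [0.1, 0.4, 0.0],   # direct uptake during use; inhalation from indoor air
])

# Cumulative transfer fractions: I + T + T^2 + ... = (I - T)^{-1}.
cumulative = np.linalg.inv(np.eye(3) - T)

# Product intake fraction: cumulative chemical mass reaching humans
# per unit mass of chemical in the product.
print(cumulative[2, 0])   # 0.1 + 0.3 * 0.4 = 0.22
```

Multiplying this product intake fraction by the chemical mass in the product gives the intake estimate the framework feeds into life cycle impact or risk-based assessment.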
Gole, James L; Ozdemir, Serdar
2010-08-23
A concept, complementary to that of hard and soft acid-base interactions (HSAB-dominant chemisorption) and consistent with dominant physisorption to a semiconductor interface, is presented. We create a matrix of sensitivities and interactions with several basic gases. The concept, based on the reversible interaction of hard-acid surfaces with soft bases, hard-base surfaces with soft acids, or vice versa, corresponds 1) to the inverse of the HSAB concept and 2) to the selection of a combination of semiconductor interface and analyte materials, which can be used to direct a physisorbed vs chemisorbed interaction. The technology, implemented on nanopore-coated porous silicon micropores, results in the coupling of acid-base chemistry with the depletion or enhancement of majority carriers in an extrinsic semiconductor. Using the inverse-HSAB (IHSAB) concept, significant and predictable changes in interface sensitivity for a variety of gases can be implemented. Nanostructured metal oxide particle depositions provide selectivity and complement a highly efficient electrical contact to a porous silicon nanopore-covered microporous interface. The application of small quantities (much less than a monolayer) of nanostructured metals, metal oxides, and catalysts that focus the physisorptive and chemisorptive interactions of the interface can be made to create a range of notably higher sensitivities for reversible physisorption. This is exemplified by an approach to reversible, sensitive, and selective interface responses. Nanostructured metal oxides developed from electroless gold (AuₓO), tin (SnO₂), copper (CuₓO), and nickel (NiO) depositions, nanoalumina, and nanotitania are used to demonstrate the IHSAB concept and provide for the detection of gases, including NH₃, PH₃, CO, NO, and H₂S, in an array-based format to the sub-ppm level.
Migration of scattered teleseismic body waves
NASA Astrophysics Data System (ADS)
Bostock, M. G.; Rondenay, S.
1999-06-01
The retrieval of near-receiver mantle structure from scattered waves associated with teleseismic P and S and recorded on three-component, linear seismic arrays is considered in the context of inverse scattering theory. A Ray + Born formulation is proposed which admits linearization of the forward problem and economy in the computation of the elastic wave Green's function. The high-frequency approximation further simplifies the problem by enabling (1) the use of an earth-flattened, 1-D reference model, (2) a reduction in computations to 2-D through the assumption of 2.5-D experimental geometry, and (3) band-diagonalization of the Hessian matrix in the inverse formulation. The final expressions are in a form reminiscent of the classical diffraction stack of seismic migration. Implementation of this procedure demands an accurate estimate of the scattered wave contribution to the impulse response, and thus requires the removal of both the reference wavefield and the source time signature from the raw record sections. An approximate separation of direct and scattered waves is achieved through application of the inverse free-surface transfer operator to individual station records and a Karhunen-Loeve transform to the resulting record sections. This procedure takes the full displacement field to a wave vector space wherein the first principal component of the incident wave-type section is identified with the direct wave and is used as an estimate of the source time function. The scattered displacement field is reconstituted from the remaining principal components using the forward free-surface transfer operator, and may be reduced to a scattering impulse response upon deconvolution of the source estimate. An example employing pseudo-spectral synthetic seismograms demonstrates an application of the methodology.
NASA Astrophysics Data System (ADS)
Klees, R.; Slobbe, D. C.; Farahani, H. H.
2018-03-01
The posed question arises for instance in regional gravity field modelling using weighted least-squares techniques, if the gravity field functionals are synthesised from the spherical harmonic coefficients of a satellite-only global gravity model (GGM) and are used as one of the noisy datasets. The associated noise covariance matrix appeared to be extremely ill-conditioned, with a singular value spectrum that decayed gradually to zero without any noticeable gap. We analysed three methods to deal with the ill-conditioned noise covariance matrix: Tikhonov regularisation of the noise covariance matrix in combination with the standard formula for the weighted least-squares estimator, a formula for the weighted least-squares estimator which does not involve the inverse noise covariance matrix, and an estimator based on Rao's unified theory of least-squares. Our analysis was based on a numerical experiment involving a set of height anomalies synthesised from the GGM GOCO05s, which is provided with a full noise covariance matrix. We showed that the three estimators perform similarly, provided that the two regularisation parameters each method involves were chosen properly. As standard regularisation parameter choice rules do not apply here, we suggested a new parameter choice rule, and demonstrated its performance. Using this rule, we found that the differences between the three least-squares estimates were within noise. For the standard formulation of the weighted least-squares estimator with regularised noise covariance matrix, this required an exceptionally strong regularisation, much larger than one would expect from the condition number of the noise covariance matrix. The preferred method is the inversion-free formulation of the weighted least-squares estimator, because of its simplicity with respect to the choice of the two regularisation parameters.
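The first of the three estimators (Tikhonov-regularised noise covariance inside the standard weighted least-squares formula) can be sketched on synthetic data. Everything below, including the synthetic decaying singular value spectrum and the choice of α, is an illustrative toy, not the GOCO05s experiment.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 100, 5
A = rng.standard_normal((m, n))       # design matrix
x_true = rng.standard_normal(n)

# Ill-conditioned noise covariance: eigenvalues decaying smoothly to ~0,
# mimicking a spectrum with no noticeable gap.
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
s = 10.0 ** np.linspace(0, -14, m)
C = U @ np.diag(s) @ U.T
noise = U @ (np.sqrt(s) * rng.standard_normal(m))
y = A @ x_true + noise

# Tikhonov regularisation of the noise covariance, then the standard
# weighted least-squares estimator x = (A^T W A)^{-1} A^T W y.
alpha = 1e-6                          # illustrative regularisation parameter
W = np.linalg.inv(C + alpha * np.eye(m))
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
print(np.linalg.norm(x_hat - x_true))
```

The inversion-free formulation preferred in the abstract avoids forming W = (C + αI)⁻¹ altogether, which matters when C is too ill-conditioned for a stable inverse.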
2D data-space cross-gradient joint inversion of MT, gravity and magnetic data
NASA Astrophysics Data System (ADS)
Pak, Yong-Chol; Li, Tonglin; Kim, Gang-Sop
2017-08-01
We have developed a data-space multiple cross-gradient joint inversion algorithm, validated it through synthetic tests, and applied it to magnetotelluric (MT), gravity and magnetic datasets acquired along a 95 km profile in the Benxi-Ji'an area of northeastern China. To begin, we discuss a generalized cross-gradient joint inversion for multiple datasets and model parameter sets, and formulate it in data space. The Lagrange multiplier required for the structural coupling in the data-space method is determined using an iterative solver, to avoid calculation of the inverse matrix when solving the large system of equations. Next, using model-space and data-space methods, we inverted the synthetic data and field data. Based on our results, the joint inversion in data space not only delineates geological bodies more clearly than separate inversion, but also yields results nearly equal to those of the model-space method while consuming much less memory.
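The structural coupling term in cross-gradient joint inversion is the pointwise cross product of the two model gradients, which vanishes wherever the models' structures align. A minimal 2-D sketch (generic, not the paper's implementation):

```python
import numpy as np

def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    """Cross-gradient t = (grad m1) x (grad m2) for 2-D models (the only
    nonzero component). t = 0 where the two gradients are parallel,
    i.e. where the models share structure."""
    g1a, g1b = np.gradient(m1, dx, dz)
    g2a, g2b = np.gradient(m2, dx, dz)
    return g1a * g2b - g1b * g2a

# Two models sharing identical structure (one a monotonic function of the
# other) have zero cross-gradient everywhere.
x, z = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
m1 = np.exp(-(x - 0.5) ** 2 - (z - 0.5) ** 2)
m2 = 2.0 * m1 + 3.0          # structurally identical "second property"
t = cross_gradient(m1, m2)
print(np.max(np.abs(t)))
```

Driving this quantity toward zero during joint inversion is what forces the resistivity, density and susceptibility models to share boundaries without dictating the petrophysical relationship between them.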
NASA Astrophysics Data System (ADS)
Friedrich, Oliver; Eifler, Tim
2018-01-01
Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A+B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >10^5 simulations to reach a similar precision. We extend our analysis to a DES multiprobe case finding a similar performance.
Arkudas, Andreas; Pryymachuk, Galyna; Hoereth, Tobias; Beier, Justus P; Polykandriotis, Elias; Bleiziffer, Oliver; Gulle, Heinz; Horch, Raymund E; Kneser, Ulrich
2012-07-01
In this study, different fibrin sealants with varying concentrations of the fibrin components were evaluated in terms of matrix degradation and vascularization in the arteriovenous loop (AVL) model of the rat. An AVL was placed in a Teflon isolation chamber filled with 500 μl fibrin gel. The matrix was composed of commercially available fibrin gels, namely Beriplast (Behring GmbH, Marburg, Germany) (group A), Evicel (Omrix Biopharmaceuticals S.A., Somerville, New Jersey, USA) (group B), and Tisseel VH S/D (Baxter, Vienna, Austria) with a thrombin concentration of 4 IU/ml and a fibrinogen concentration of 80 mg/ml [Tisseel S F80 (Baxter), group C] or a fibrinogen concentration of 20 mg/ml [Tisseel S F20 (Baxter), group D]. After 2 and 4 weeks, five constructs per group and time point were investigated using micro-computed tomography and histological and morphometrical analysis techniques. The aprotinin, factor XIII and thrombin concentrations did not affect the degree of clot degradation. An inverse relationship was found between fibrin matrix degradation and the sprouting of blood vessels. By reducing the fibrinogen concentration in group D, a significantly decreased construct weight and an increased generation of vascularized connective tissue were detected. Fibrinogen, as the major matrix component, had a significant impact on the matrix properties. Altering fibrin gel properties might optimize the formation of blood vessels.
Bayesian estimation of a source term of radiation release with approximately known nuclide ratios
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek
2016-04-01
We are concerned with the estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. Gamma dose rate measurements do not provide direct information on the source term composition. However, the physical properties of the respective nuclides (deposition properties, decay half-life) can be exploited when uncertain information on nuclide ratios is available, e.g. from a known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned, and regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to the uncertainty in the ratios, the diagonal elements of the covariance matrix are considered unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since exact inference of the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for the estimation of all model parameters. The performance of the method is studied on a simulated 6-hour power plant release in which 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with a method that assumes unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach.
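A much simplified stand-in for the estimation problem y = Mx can be sketched as a regularized, positivity-constrained least-squares solve; the actual method is a Variational Bayes inference with a ratio-informed prior, which is not reproduced here. The SRS matrix, dimensions and noise level are hypothetical:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_obs, n_src = 40, 12      # measurements, source-term elements (hypothetical sizes)

M = rng.uniform(0.0, 1.0, (n_obs, n_src))           # stand-in SRS matrix
x_true = np.zeros(n_src)
x_true[3:6] = [2.0, 5.0, 3.0]                       # release active in a few bins
y = M @ x_true + 0.01 * rng.standard_normal(n_obs)  # noisy dose-rate data

# Tikhonov regularization with a positivity constraint, written as an
# augmented non-negative least-squares problem:
#   min ||M x - y||^2 + lam ||x||^2,  subject to x >= 0.
lam = 1e-2
M_aug = np.vstack([M, np.sqrt(lam) * np.eye(n_src)])
y_aug = np.concatenate([y, np.zeros(n_src)])
x_hat, _ = nnls(M_aug, y_aug)
```

In the full Bayesian treatment the scalar `lam` is replaced by a per-element prior covariance shaped by the approximately known nuclide ratios, with its diagonal estimated from the data.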
This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
NASA Astrophysics Data System (ADS)
Ma, Sangback
In this paper we compare various parallel preconditioners such as Point-SSOR (Symmetric Successive OverRelaxation), ILU(0) (Incomplete LU) in the Wavefront ordering, ILU(0) in the Multi-color ordering, Multi-Color Block SOR (Successive OverRelaxation), SPAI (SParse Approximate Inverse), and pARMS (Parallel Algebraic Recursive Multilevel Solver) for solving large sparse linear systems arising from two-dimensional PDEs (Partial Differential Equations) on structured grids. Point-SSOR is well known, and ILU(0) is one of the most popular preconditioners, but it is inherently serial. ILU(0) in the Wavefront ordering maximizes the parallelism in the natural order, but the lengths of the wavefronts are often nonuniform. ILU(0) in the Multi-color ordering is a simple way of achieving parallelism of order N, where N is the order of the matrix, but its convergence rate often deteriorates compared to that of the natural ordering. We have chosen the Multi-Color Block SOR preconditioner combined with a direct sparse matrix solver, since for the Laplacian matrix the SOR method is known to have a nondeteriorating rate of convergence when used with the Multi-Color ordering. By using the block version we expect to minimize interprocessor communications. SPAI computes the sparse approximate inverse directly by the least squares method. Finally, ARMS is a preconditioner recursively exploiting the concept of independent sets, and pARMS is the parallel version of ARMS. Experiments were conducted for Finite Difference and Finite Element discretizations of five two-dimensional PDEs with mesh sizes up to a million on an IBM p595 machine with distributed memory. Our matrices are real positive, i.e., the real parts of their eigenvalues are positive. We have used GMRES(m) as our outer iterative method, so that the convergence of GMRES(m) for our test matrices is mathematically guaranteed. Interprocessor communications were done using MPI (Message Passing Interface) primitives.
The results show that, in general, ILU(0) in the Multi-Color ordering and ILU(0) in the Wavefront ordering outperform the other methods, but for symmetric and nearly symmetric 5-point matrices Multi-Color Block SOR gives the best performance, except in a few cases with a small number of processors.
An order (n) algorithm for the dynamics simulation of robotic systems
NASA Technical Reports Server (NTRS)
Chun, H. M.; Turner, J. D.; Frisch, Harold P.
1989-01-01
The formulation of an Order (n) algorithm for DISCOS (Dynamics Interaction Simulation of Controls and Structures), an industry-standard software package for simulation and analysis of flexible multibody systems, is presented. For systems involving many bodies, the new Order (n) version of DISCOS is much faster than the current version. Results of the experimental validation of the dynamics software are also presented. The experiment was carried out on a seven-joint robot arm at NASA's Goddard Space Flight Center. The algorithm used in the current version of DISCOS requires the inverse of a matrix whose dimension is equal to the number of constraints in the system. Generally, the number of constraints in a system is roughly proportional to the number of bodies in the system, and matrix inversion requires O(p^3) operations, where p is the dimension of the matrix. The current version of DISCOS is therefore considered an Order (n^3) algorithm. In contrast, the Order (n) algorithm requires the inversion of matrices which are small, and the number of matrices to be inverted increases only linearly with the number of bodies. The newly developed Order (n) DISCOS is currently capable of handling chain and tree topologies as well as multiple closed loops. Continuing development will extend the capability of the software to deal with typical robotics applications such as pick-and-place, multi-arm hand-off and surface sliding.
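The operation-count argument can be sketched with a back-of-envelope comparison, assuming (hypothetically) a fixed number of constraints per body, so that one global p x p inversion is replaced by n small fixed-size inversions:

```python
import numpy as np  # not strictly needed; kept for consistency with the sketches above

def cost_dense(n_bodies, c):
    """~p**3 operations for inverting one global constraint matrix, p = c * n_bodies."""
    p = c * n_bodies
    return float(p ** 3)

def cost_order_n(n_bodies, c):
    """n_bodies inversions of small c x c matrices: linear growth in n_bodies."""
    return float(n_bodies * c ** 3)

# With c = 6 constraints per body (an illustrative figure), the speed
# advantage of the Order (n) formulation grows like n_bodies**2.
ratios = [cost_dense(n, 6) / cost_order_n(n, 6) for n in (10, 100, 1000)]
```

The ratio is exactly n_bodies**2 in this model, which is why the gain is dramatic for systems with many bodies.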
Time-resolved diffusion tomographic 2D and 3D imaging in highly scattering turbid media
NASA Technical Reports Server (NTRS)
Alfano, Robert R. (Inventor); Cai, Wei (Inventor); Liu, Feng (Inventor); Lax, Melvin (Inventor); Das, Bidyut B. (Inventor)
1999-01-01
A method for imaging objects in highly scattering turbid media. According to one embodiment of the invention, the method involves using a plurality of intersecting source/detectors sets and time-resolving equipment to generate a plurality of time-resolved intensity curves for the diffusive component of light emergent from the medium. For each of the curves, the intensities at a plurality of times are then inputted into the following inverse reconstruction algorithm to form an image of the medium: ##EQU1## wherein W is a matrix relating output at source and detector positions r.sub.s and r.sub.d, at time t, to position r, .LAMBDA. is a regularization matrix, chosen for convenience to be diagonal, but selected in a way related to the ratio of the noise,
Time-resolved diffusion tomographic 2D and 3D imaging in highly scattering turbid media
NASA Technical Reports Server (NTRS)
Alfano, Robert R. (Inventor); Cai, Wei (Inventor); Gayen, Swapan K. (Inventor)
2000-01-01
A method for imaging objects in highly scattering turbid media. According to one embodiment of the invention, the method involves using a plurality of intersecting source/detectors sets and time-resolving equipment to generate a plurality of time-resolved intensity curves for the diffusive component of light emergent from the medium. For each of the curves, the intensities at a plurality of times are then inputted into the following inverse reconstruction algorithm to form an image of the medium: wherein W is a matrix relating output at source and detector positions r.sub.s and r.sub.d, at time t, to position r, .LAMBDA. is a regularization matrix, chosen for convenience to be diagonal, but selected in a way related to the ratio of the noise,
Optimization of computations for adjoint field and Jacobian needed in 3D CSEM inversion
NASA Astrophysics Data System (ADS)
Dehiya, Rahul; Singh, Arun; Gupta, Pravin K.; Israil, M.
2017-01-01
We present the features and results of a newly developed code, based on the Gauss-Newton optimization technique, for solving the three-dimensional Controlled-Source Electromagnetic inverse problem. In this code, special emphasis has been put on representing the operations by block matrices for the conjugate gradient iteration. We show how, in the computation of the Jacobian, the matrix formed by differentiation of the system matrix can be made independent of frequency to optimize the operations at the conjugate gradient step. Coarse-level parallel computing, using the OpenMP framework, is used primarily for its simplicity of implementation and because shared-memory multi-core machines are accessible to almost anyone. We demonstrate how the coarseness of the modeling grid relative to the source (computational receiver) spacing can be exploited for efficient computing, without compromising the quality of the inverted model, by reducing the number of adjoint calls. It is also demonstrated that the adjoint field can even be computed on a grid coarser than the modeling grid without affecting the inversion outcome. These observations were reconfirmed using an experiment design in which the deviation of the source from a straight tow line is considered. Finally, a real field data inversion experiment is presented to demonstrate the robustness of the code.
Panchapagesan, Sankaran; Alwan, Abeer
2011-01-01
In this paper, a quantitative study of acoustic-to-articulatory inversion for vowel speech sounds by analysis-by-synthesis using the Maeda articulatory model is performed. For chain matrix calculation of vocal tract (VT) acoustics, the chain matrix derivatives with respect to area function are calculated and used in a quasi-Newton method for optimizing articulatory trajectories. The cost function includes a distance measure between natural and synthesized first three formants, and parameter regularization and continuity terms. Calibration of the Maeda model to two speakers, one male and one female, from the University of Wisconsin x-ray microbeam (XRMB) database, using a cost function, is discussed. Model adaptation includes scaling the overall VT and the pharyngeal region and modifying the outer VT outline using measured palate and pharyngeal traces. The inversion optimization is initialized by a fast search of an articulatory codebook, which was pruned using XRMB data to improve inversion results. Good agreement between estimated midsagittal VT outlines and measured XRMB tongue pellet positions was achieved for several vowels and diphthongs for the male speaker, with average pellet-VT outline distances around 0.15 cm, smooth articulatory trajectories, and less than 1% average error in the first three formants. PMID:21476670
Parallel solution of the symmetric tridiagonal eigenproblem. Research report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessup, E.R.
1989-10-01
This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory Multiple Instruction, Multiple Data multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effect of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
Parallel solution of the symmetric tridiagonal eigenproblem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessup, E.R.
1989-01-01
This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed memory MIMD multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hyper-cube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effects of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
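Inverse iteration itself is compact enough to sketch; a minimal dense NumPy version with a random starting vector (the situation whose statistics the thesis analyses) might look like the following. The tridiagonal test matrix and the shift are illustrative choices:

```python
import numpy as np

def inverse_iteration(T, shift, n_iter=20):
    """Eigenvector of symmetric T for the eigenvalue nearest `shift`,
    via repeated solves with the shifted matrix and renormalization."""
    n = T.shape[0]
    rng = np.random.default_rng(0)
    v = rng.standard_normal(n)        # starting vector with random components
    A = T - shift * np.eye(n)
    for _ in range(n_iter):
        v = np.linalg.solve(A, v)     # amplifies the component nearest the shift
        v /= np.linalg.norm(v)
    return v

# Symmetric tridiagonal test matrix diag(-1, 2, -1), whose eigenvalues are
# known in closed form: 2 - 2*cos(k*pi/(n+1)), k = 1..n.
n = 50
T = (np.diag(2.0 * np.ones(n))
     + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
k = 3
lam = 2 - 2 * np.cos(k * np.pi / (n + 1))
v = inverse_iteration(T, lam + 1e-6)  # shift slightly off the exact eigenvalue
resid = np.linalg.norm(T @ v - lam * v)
```

For a tridiagonal matrix each solve costs O(n) with a specialized factorization, which is what makes bisection plus inverse iteration so fast and so easy to parallelize across eigenvalues.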
Zhu, Lin; Dai, Zhenxue; Gong, Huili; ...
2015-06-12
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
A gradient based algorithm to solve inverse plane bimodular problems of identification
NASA Astrophysics Data System (ADS)
Ran, Chunjiang; Yang, Haitian; Zhang, Guoqing
2018-02-01
This paper presents a gradient-based algorithm to solve inverse plane bimodular problems of identifying constitutive parameters, including tensile/compressive moduli and tensile/compressive Poisson's ratios. For the forward bimodular problem, an FE tangent stiffness matrix is derived, facilitating the implementation of gradient-based algorithms; for the inverse bimodular problem of identification, a two-level sensitivity-analysis-based strategy is proposed. Numerical verification in terms of accuracy and efficiency is provided, and the impacts of the initial guess, the number of measurement points, regional inhomogeneity, and noisy data on the identification are taken into account.
Effects of the oceans on polar motion: Extended investigations
NASA Technical Reports Server (NTRS)
Dickman, Steven R.
1986-01-01
A method was found for expressing the tide current velocities in terms of the tide height (with all variables expanded in spherical harmonics). All time equations were then combined into a single, nondifferential matrix equation involving only the unknown tide height. The pole tide was constrained so that no tidewater flows across continental boundaries. The constraint was derived for the case of turbulent oceans; with the tide velocities expressed in terms of the tide height. The two matrix equations were combined. Simple matrix inversion then yielded the constrained solution. Programs to construct and invert the matrix equations were written. Preliminary results were obtained and are discussed.
NASA Astrophysics Data System (ADS)
Lawrence, Chris C.; Febbraro, Michael; Flaska, Marek; Pozzi, Sara A.; Becchetti, F. D.
2016-08-01
Verification of future warhead-dismantlement treaties will require detection of certain warhead attributes without the disclosure of sensitive design information, and this presents an unusual measurement challenge. Neutron spectroscopy—commonly eschewed as an ill-posed inverse problem—may hold special advantages for warhead verification by virtue of its insensitivity to certain neutron-source parameters like plutonium isotopics. In this article, we investigate the usefulness of unfolded neutron spectra obtained from organic-scintillator data for verifying a particular treaty-relevant warhead attribute: the presence of high-explosive and neutron-reflecting materials. Toward this end, several improvements on current unfolding capabilities are demonstrated: deuterated detectors are shown to have a better-conditioned response matrix than standard hydrogen-based scintillators; a novel data-discretization scheme is proposed which removes important detector nonlinearities; and a technique is described for re-parameterizing the unfolding problem in order to constrain the parameter space of solutions sought, sidestepping the inverse problem altogether. These improvements are demonstrated with trial measurements and verified using accelerator-based time-of-flight calculation of reference spectra. Then, a demonstration is presented in which the elemental compositions of low-Z neutron-attenuating materials are estimated to within 10%. These techniques could have direct application in verifying the presence of high-explosive materials in a neutron-emitting test item, as well as in other treaty-verification challenges.
A fast reconstruction algorithm for fluorescence optical diffusion tomography based on preiteration.
Song, Xiaolei; Xiong, Xiaoyun; Bai, Jing
2007-01-01
Fluorescence optical diffusion tomography in the near-infrared (NIR) bandwidth is considered to be one of the most promising ways to perform noninvasive molecular-based imaging. Many reconstruction approaches utilize iterative methods for data inversion. However, they are time-consuming and far from meeting the demands of real-time imaging. In this work, a fast preiteration algorithm based on the generalized inverse matrix is proposed. This method needs only one step of matrix-vector multiplication online, by pushing the iteration process offline. In the preiteration process, a second-order iterative format is employed to exponentially accelerate the convergence. Simulations based on an analytical diffusion model show that the distribution of fluorescent yield can be well estimated by this algorithm and that the reconstruction speed is remarkably increased.
Atmospheric particulate analysis using angular light scattering
NASA Technical Reports Server (NTRS)
Hansen, M. Z.
1980-01-01
Using the light scattering matrix elements measured by a polar nephelometer, a procedure for estimating the characteristics of atmospheric particulates was developed. A theoretical library data set of scattering matrices derived from Mie theory was tabulated for a range of values of the size parameter and refractive index typical of atmospheric particles. Integration over the size parameter yielded the scattering matrix elements for a variety of hypothesized particulate size distributions. A least squares curve fitting technique was used to find a best fit from the library data for the experimental measurements. This was used as a first guess for a nonlinear iterative inversion of the size distributions. A real index of 1.50 and an imaginary index of -0.005 are representative of the smoothed inversion results for the near ground level atmospheric aerosol in Tucson.
Divergence and Necessary Conditions for Extremums
NASA Technical Reports Server (NTRS)
Quirein, J. A.
1973-01-01
The problem considered is that of finding a dimension-reducing transformation matrix B that maximizes the divergence in the reduced dimension for multi-class cases. A comparatively simple expression for the gradient of the average divergence with respect to B is developed. The developed expression for the gradient contains no eigenvectors or eigenvalues; moreover, all matrix inversions necessary to evaluate the gradient are available from computing the average divergence.
GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems
NASA Astrophysics Data System (ADS)
Goossens, Bart; Luong, Hiêp; Philips, Wilfried
2017-08-01
Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.
Inverting pump-probe spectroscopy for state tomography of excitonic systems.
Hoyer, Stephan; Whaley, K Birgitta
2013-04-28
We propose a two-step protocol for inverting ultrafast spectroscopy experiments on a molecular aggregate to extract the time-evolution of the excited state density matrix. The first step is a deconvolution of the experimental signal to determine a pump-dependent response function. The second step inverts this response function to obtain the quantum state of the system, given a model for how the system evolves following the probe interaction. We demonstrate this inversion analytically and numerically for a dimer model system, and evaluate the feasibility of scaling it to larger molecular aggregates such as photosynthetic protein-pigment complexes. Our scheme provides a direct alternative to the approach of determining all Hamiltonian parameters and then simulating excited state dynamics.
Are Low-order Covariance Estimates Useful in Error Analyses?
NASA Astrophysics Data System (ADS)
Baker, D. F.; Schimel, D.
2005-12-01
Atmospheric trace gas inversions, using modeled atmospheric transport to infer surface sources and sinks from measured concentrations, are most commonly done using least-squares techniques that return not only an estimate of the state (the surface fluxes) but also the covariance matrix describing the uncertainty in that estimate. Besides allowing one to place error bars around the estimate, the covariance matrix may be used in simulation studies to learn what uncertainties would be expected from various hypothetical observing strategies. This error analysis capability is routinely used in designing instrumentation, measurement campaigns, and satellite observing strategies. For example, Rayner, et al (2002) examined the ability of satellite-based column-integrated CO2 measurements to constrain monthly-average CO2 fluxes for about 100 emission regions using this approach. Exact solutions for both state vector and covariance matrix become computationally infeasible, however, when the surface fluxes are solved at finer resolution (e.g., daily in time, under 500 km in space). It is precisely at these finer scales, however, that one would hope to be able to estimate fluxes using high-density satellite measurements. Non-exact estimation methods such as variational data assimilation or the ensemble Kalman filter could be used, but they achieve their computational savings by obtaining an only approximate state estimate and a low-order approximation of the true covariance. One would like to be able to use this covariance matrix to do the same sort of error analyses as are done with the full-rank covariance, but is it correct to do so? 
Here we compare uncertainties and `information content' derived from full-rank covariance matrices obtained from a direct, batch least squares inversion to those from the incomplete-rank covariance matrices given by a variational data assimilation approach solved with a variable metric minimization technique (the Broyden-Fletcher-Goldfarb-Shanno algorithm). Two cases are examined: a toy problem in which CO2 fluxes for 3 latitude bands are estimated for only 2 time steps per year, and the monthly fluxes for 22 regions across 1988-2003 solved for in the TransCom3 interannual flux inversion of Baker, et al (2005). The usefulness of the uncertainty estimates will be assessed as a function of the number of minimization steps used in the variational approach; this will help determine whether they will also be useful in the high-resolution cases that we would most like to apply the non-exact methods to. Baker, D.F., et al., TransCom3 inversion intercomparison: Impact of transport model errors on the interannual variability of regional CO2 fluxes, 1988-2003, Glob. Biogeochem. Cycles, doi:10.1029/2004GB002439, 2005, in press. Rayner, P.J., R.M. Law, D.M. O'Brien, T.M. Butler, and A.C. Dilley, Global observations of the carbon budget, 3, Initial assessment of the impact of satellite orbit, scan geometry, and cloud on measuring CO2 from space, J. Geophys. Res., 107(D21), 4557, doi:10.1029/2001JD000618, 2002.
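The contrast between a full-rank covariance and the low-order inverse-Hessian approximation built up by a variable-metric method can be illustrated on a toy quadratic cost, where the exact posterior covariance is simply the inverse Hessian. SciPy's BFGS implementation exposes its accumulated approximation as `hess_inv`; the dimensions and cost function below are illustrative and much smaller than any real flux inversion:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 8
L = rng.standard_normal((n, n))
H = L @ L.T + n * np.eye(n)     # Hessian of a quadratic "inversion" cost
x_opt = rng.standard_normal(n)  # true state (cost minimizer)

def cost(x):
    d = x - x_opt
    return 0.5 * d @ H @ d

def grad(x):
    return H @ (x - x_opt)

res = minimize(cost, np.zeros(n), jac=grad, method="BFGS")

cov_exact = np.linalg.inv(H)     # full-rank covariance from the batch solution
cov_bfgs = res.hess_inv          # approximation accumulated by BFGS updates
rel_err = np.linalg.norm(cov_bfgs - cov_exact) / np.linalg.norm(cov_exact)
```

How small `rel_err` is after a limited number of minimization steps is precisely the question the abstract raises: the state estimate converges faster than the covariance approximation does.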
Generating probabilistic Boolean networks from a prescribed transition probability matrix.
Ching, W-K; Chen, X; Tsing, N-K
2009-11-01
Probabilistic Boolean networks (PBNs) have received much attention in modeling genetic regulatory networks. A PBN can be regarded as a Markov chain process and is characterised by a transition probability matrix. In this study, the authors propose efficient algorithms for constructing a PBN when its transition probability matrix is given. The complexities of the algorithms are also analysed. This is an interesting inverse problem in network inference using steady-state data. The problem is important as most microarray data sets are assumed to be obtained from sampling the steady-state.
Quantum Support Vector Machine for Big Data Classification
NASA Astrophysics Data System (ADS)
Rebentrost, Patrick; Mohseni, Masoud; Lloyd, Seth
2014-09-01
Supervised machine learning is the classification of new data based on already classified training examples. In this work, we show that the support vector machine, an optimized binary classifier, can be implemented on a quantum computer, with complexity logarithmic in the size of the vectors and the number of training examples. In cases where classical sampling algorithms require polynomial time, an exponential speedup is obtained. At the core of this quantum big data algorithm is a nonsparse matrix exponentiation technique for efficiently performing a matrix inversion of the training data inner-product (kernel) matrix.
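The classical counterpart of the quantum algorithm's core step is the least-squares SVM, where training reduces to a single linear solve involving the (regularized) kernel inner-product matrix; it is this matrix inversion that the quantum routine performs in logarithmic time. A toy classical sketch, with a linear kernel and made-up two-blob data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40
# Toy training set: two Gaussian blobs with labels -1 and +1.
X = np.vstack([rng.normal(-1.0, 0.5, (n // 2, 2)),
               rng.normal(+1.0, 0.5, (n // 2, 2))])
y = np.concatenate([-np.ones(n // 2), np.ones(n // 2)])

gamma = 10.0
K = X @ X.T                      # kernel (inner-product) matrix

# Least-squares SVM training: one linear solve with the regularized kernel
# matrix -- the step the quantum algorithm accelerates.
A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
              [np.ones((n, 1)),  K + np.eye(n) / gamma]])
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

def predict(x_new):
    return np.sign(alpha @ (X @ x_new) + b)
```

Classically this solve costs polynomial time in n; the quantum speed-up comes from exponentiating the nonsparse kernel matrix to apply its inverse on a quantum state.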
The shifting zoom: new possibilities for inverse scattering on electrically large domains
NASA Astrophysics Data System (ADS)
Persico, Raffaele; Ludeno, Giovanni; Soldovieri, Francesco; De Coster, Alberic; Lambot, Sebastien
2017-04-01
Inverse scattering is a subject of great interest in diagnostic problems, which are in turn of interest for many applications, such as the investigation of cultural heritage, the characterization of foundations or buried services, the identification of unexploded ordnance, and so on [1-4]. In particular, GPR data are usually focused by means of migration algorithms, essentially based on a linear approximation of the scattering phenomenon. Migration algorithms are popular because they are computationally efficient and require neither the inversion of a matrix nor the calculation of the elements of a matrix. In fact, they are essentially based on the adjoint of the linearised scattering operator, which in the end allows the inversion formula to be written as a suitably weighted integral of the data [5]. In particular, this makes a migration algorithm more suitable than a linear microwave tomography inversion algorithm for the reconstruction of an electrically large investigation domain. However, this computational challenge can be overcome by making use of investigation domains joined side by side, as proposed e.g. in ref. [3]. This makes it possible to apply a microwave tomography algorithm even to large investigation domains. However, the joining side by side of sequential investigation domains introduces a problem of limited (and asymmetric) maximum view angle for targets occurring close to the edges between two adjacent domains, or possibly crossing these edges. The shifting zoom is a method that overcomes this difficulty by means of overlapped investigation and observation domains [6-7]. It requires more sequential inversions than adjacent investigation domains do, but the extra time actually required is minimal because the matrix to be inverted, as well as its singular value decomposition, is calculated once and for all: what is repeated more times is only a fast matrix-vector multiplication. References [1] M. Pieraccini, L. Noferini, D. Mecatti, C. 
Atzeni, R. Persico, F. Soldovieri, Advanced Processing Techniques for Step-frequency Continuous-Wave Penetrating Radar: the Case Study of "Palazzo Vecchio" Walls (Firenze, Italy), Research on Nondestructive Evaluation, vol. 17, pp. 71-83, 2006. [2] N. Masini, R. Persico, E. Rizzo, A. Calia, M. T. Giannotta, G. Quarta, A. Pagliuca, "Integrated Techniques for Analysis and Monitoring of Historical Monuments: the case of S.Giovanni al Sepolcro in Brindisi (Southern Italy)." Near Surface Geophysics, vol. 8 (5), pp. 423-432, 2010. [3] E. Pettinelli, A. Di Matteo, E. Mattei, L. Crocco, F. Soldovieri, J. D. Redman, and A. P. Annan, "GPR response from buried pipes: Measurement on field site and tomographic reconstructions", IEEE Transactions on Geoscience and Remote Sensing, vol. 47, n. 8, 2639-2645, Aug. 2009. [4] O. Lopera, E. C. Slob, N. Milisavljevic and S. Lambot, "Filtering soil surface and antenna effects from GPR data to enhance landmine detection", IEEE Transactions on Geoscience and Remote Sensing, vol. 45, n. 3, pp.707-717, 2007. [5] R. Persico, "Introduction to Ground Penetrating Radar: Inverse Scattering and Data Processing". Wiley, 2014. [6] R. Persico, J. Sala, "The problem of the investigation domain subdivision in 2D linear inversions for large scale GPR data", IEEE Geoscience and Remote Sensing Letters, vol. 11, n. 7, pp. 1215-1219, doi 10.1109/LGRS.2013.2290008, July 2014. [7] R. Persico, F. Soldovieri, S. Lambot, Shifting zoom in 2D linear inversions performed on GPR data gathered along an electrically large investigation domain, Proc. 16th International Conference on Ground Penetrating Radar GPR2016, Honk-Kong, June 13-16, 2016
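The "invert once, reuse everywhere" economy of the shifting zoom can be sketched as follows; the matrix, data, and truncation threshold below are illustrative stand-ins, not the actual scattering operator of refs. [6-7]:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((120, 80))      # stand-in for the linearised scattering matrix

# Factor once and for all: the SVD is the expensive step.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
keep = s > 1e-3 * s[0]                  # truncated-SVD regularisation (assumed threshold)
Us, ss, Vts = U[:, keep], s[keep], Vt[keep]

def invert(data):
    """Fast per-domain step: only matrix-vector products, no new factorisation."""
    return Vts.T @ ((Us.T @ data) / ss)

# Each shifted (overlapped) domain reuses the same factorisation on its own data slice.
solutions = [invert(rng.standard_normal(120)) for _ in range(5)]
```

Because every shifted domain sees the same discretised operator, only the cheap `invert` call is repeated.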
NASA Technical Reports Server (NTRS)
Bloxham, Jeremy
1987-01-01
The method of stochastic inversion is extended to the simultaneous inversion of both main field and secular variation. In the present method, the time dependency is represented by an expansion in Legendre polynomials, resulting in a simple diagonal form for the a priori covariance matrix. The efficient preconditioned Broyden-Fletcher-Goldfarb-Shanno algorithm is used to solve the large system of equations resulting from expansion of the field spatially to spherical harmonic degree 14 and temporally to degree 8. Application of the method to observatory data spanning the 1900-1980 period results in a data fit of better than 30 nT, while providing temporally and spatially smoothly varying models of the magnetic field at the core-mantle boundary.
NASA Astrophysics Data System (ADS)
Boughariou, Jihene; Zouch, Wassim; Slima, Mohamed Ben; Kammoun, Ines; Hamida, Ahmed Ben
2015-11-01
Electroencephalography (EEG) and magnetic resonance imaging (MRI) are noninvasive neuroimaging modalities. They are widely used and can be complementary. Fusing these modalities may enhance emerging research fields that target a better exploration of brain activities. Such research has attracted various scientific investigators, especially toward providing a convivial and helpful advanced clinical-aid tool enabling better neurological explorations. Our present research is set in the context of EEG inverse problem resolution and investigates an advanced estimation methodology for the localization of cerebral activity. Our focus is therefore on the integration of temporal priors into the low-resolution brain electromagnetic tomography (LORETA) formalism to solve the inverse problem in EEG. The main idea behind our proposed method is the integration of a temporal projection matrix within the LORETA weighting matrix. A hyperparameter governs this temporal integration, and its importance becomes obvious when a regularized, smooth solution is obtained. Our experimental results clearly confirm the impact of the optimization procedure adopted for the temporal regularization parameter, compared with the LORETA method.
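The authors' exact weighting construction is not reproduced here; the sketch below shows only a generic weighted minimum-norm (LORETA-style) estimate with a regularization hyperparameter λ, using random stand-ins for the lead field L and the weighting matrix W:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources = 32, 200
L = rng.standard_normal((n_sensors, n_sources))   # stand-in lead-field matrix
W = np.diag(1.0 + rng.random(n_sources))          # stand-in source weighting matrix

def weighted_min_norm(v, lam):
    """J = W^-1 L^T (L W^-1 L^T + lam*I)^-1 v, the weighted minimum-norm estimate."""
    WinvLt = np.linalg.solve(W, L.T)
    K = L @ WinvLt + lam * np.eye(n_sensors)
    return WinvLt @ np.linalg.solve(K, v)

v = rng.standard_normal(n_sensors)                # one time sample of EEG data
J = weighted_min_norm(v, lam=1e-2)
```

A smaller λ fits the measured data more closely; the hyperparameter trades data fit against the smoothness of the source estimate.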
Learning the inverse kinetics of an octopus-like manipulator in three-dimensional space.
Giorelli, M; Renda, F; Calisti, M; Arienti, A; Ferri, G; Laschi, C
2015-05-13
This work addresses the inverse kinematics problem of a bioinspired octopus-like manipulator moving in three-dimensional space. The bioinspired manipulator has a conical soft structure that confers the ability to twirl around objects as a real octopus arm does. Despite the simple design, the soft conical-shape manipulator driven by cables is described by nonlinear differential equations, which are difficult to solve analytically. Since exact solutions of the equations are not available, the Jacobian matrix cannot be calculated analytically and the classical iterative methods cannot be used. To overcome the intrinsic problems of methods based on the Jacobian matrix, this paper proposes a neural network to learn the inverse kinematics of a soft octopus-like manipulator driven by cables. After the learning phase, a feed-forward neural network is able to represent the relation between manipulator tip positions and forces applied to the cables. Experimental results show that a desired tip position can be achieved in a short time, since heavy computations are avoided, with an accuracy of 8% relative average error with respect to the total arm length.
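As an illustration only (a rigid two-link planar arm and a plain numpy one-hidden-layer MLP, not the authors' soft cable-driven arm or their network), the learn-the-inverse-map idea amounts to sampling the forward model and regressing backwards:

```python
import numpy as np

rng = np.random.default_rng(2)

def forward(theta):
    """Toy two-link planar arm (l1 = l2 = 1): joint angles -> tip position."""
    return np.stack([np.cos(theta[:, 0]) + np.cos(theta[:, 0] + theta[:, 1]),
                     np.sin(theta[:, 0]) + np.sin(theta[:, 0] + theta[:, 1])], axis=1)

# Training pairs: sample configurations, run the forward model once.
theta = rng.uniform(0.2, 1.2, size=(2000, 2))
tips = forward(theta)

# One-hidden-layer MLP trained by plain gradient descent (tip -> angles).
W1 = rng.standard_normal((2, 32)) * 0.5; b1 = np.zeros(32)
W2 = rng.standard_normal((32, 2)) * 0.5; b2 = np.zeros(2)
lr = 0.05
losses = []
for _ in range(3000):
    h = np.tanh(tips @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - theta
    losses.append(float((err ** 2).mean()))
    # Backpropagation through the single hidden layer.
    gW2 = h.T @ err / len(tips); gb2 = err.mean(0)
    gh = (err @ W2.T) * (1 - h ** 2)
    gW1 = tips.T @ gh / len(tips); gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
```

After training, evaluating the network for a desired tip position is a handful of matrix products, which is why the learned inverse avoids heavy online computation.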
Hypothesis testing for band size detection of high-dimensional banded precision matrices.
An, Baiguo; Guo, Jianhua; Liu, Yufeng
2014-06-01
Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is the determination of the band size of the precision matrix. In practice, cross-validation is commonly used; however, we show that cross-validation not only is computationally intensive but can also be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.
Two-component quantum Hall effects in topological flat bands
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, Tian-Sheng; Zhu, Wei; Sheng, D. N.
2017-03-27
Here in this paper, we study quantum Hall states for two-component particles (hardcore bosons and fermions) loaded in topological lattice models. By tuning the interplay of interspecies and intraspecies interactions, we demonstrate that two-component fractional quantum Hall states emerge at certain fractional filling factors ν = 1/2 for fermions (ν = 2/3 for bosons) in the lowest Chern band, classified by features of the ground states including the unique Chern number matrix (inverse of the K matrix), the fractional charge and spin pumpings, and two parallel propagating edge modes. Moreover, we also apply our strategy to two-component fermions at integer filling factor ν = 2, where a possible topological Néel antiferromagnetic phase has recently been under intense debate. For the typical π-flux checkerboard lattice, by tuning the onsite Hubbard repulsion, we establish a first-order phase transition directly from a two-component fermionic ν = 2 quantum Hall state at weak interaction to a topologically trivial antiferromagnetic insulator at strong interaction, and therefore exclude the possibility of an intermediate topological phase for our system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pieper, Andreas; Kreutzer, Moritz; Alvermann, Andreas, E-mail: alvermann@physik.uni-greifswald.de
2016-11-15
We study Chebyshev filter diagonalization as a tool for the computation of many interior eigenvalues of very large sparse symmetric matrices. In this technique the subspace projection onto the target space of wanted eigenvectors is approximated with filter polynomials obtained from Chebyshev expansions of window functions. After the discussion of the conceptual foundations of Chebyshev filter diagonalization we analyze the impact of the choice of the damping kernel, search space size, and filter polynomial degree on the computational accuracy and effort, before we describe the necessary steps towards a parallel high-performance implementation. Because Chebyshev filter diagonalization avoids the need for matrix inversion it can deal with matrices and problem sizes that are presently not accessible with rational function methods based on direct or iterative linear solvers. To demonstrate the potential of Chebyshev filter diagonalization for large-scale problems of this kind we include as an example the computation of the 10^2 innermost eigenpairs of a topological insulator matrix with dimension 10^9 derived from quantum physics applications.
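The core of the technique can be sketched as follows: approximate the indicator function of a spectral window with a Jackson-damped Chebyshev expansion and apply it through matrix-vector products only. The diagonal stand-in matrix, window, and degree below are illustrative choices, not values from the paper:

```python
import numpy as np

# Stand-in symmetric matrix with spectrum already scaled into [-1, 1].
eigs = np.linspace(-1.0, 1.0, 201)
H = np.diag(eigs)
a, b = -0.2, 0.2                      # target window of interior eigenvalues
N = 200                               # filter polynomial degree

# Chebyshev coefficients of the indicator function of [a, b].
ta, tb = np.arccos(a), np.arccos(b)
k = np.arange(1, N + 1)
c = np.empty(N + 1)
c[0] = (ta - tb) / np.pi
c[1:] = 2.0 * (np.sin(k * ta) - np.sin(k * tb)) / (k * np.pi)

# Jackson damping suppresses Gibbs oscillations at the window edges.
g = ((N + 1 - k) * np.cos(np.pi * k / (N + 1))
     + np.sin(np.pi * k / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)

def apply_filter(v):
    """p(H) v via the three-term Chebyshev recurrence: only mat-vecs, no inversion."""
    t_prev, t_curr = v, H @ v
    out = c[0] * t_prev + c[1] * g[0] * t_curr
    for j in range(2, N + 1):
        t_prev, t_curr = t_curr, 2.0 * (H @ t_curr) - t_prev
        out += c[j] * g[j - 1] * t_curr
    return out

w = apply_filter(np.ones(len(eigs)))  # ~1 inside [a, b], ~0 well outside
```

Components of a vector belonging to eigenvalues inside the window survive the filter; the rest are damped, which is what makes the filtered subspace iteration converge to the interior eigenpairs.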
The point-spread function measure of resolution for the 3-D electrical resistivity experiment
NASA Astrophysics Data System (ADS)
Oldenborger, Greg A.; Routh, Partha S.
2009-02-01
The solution appraisal component of the inverse problem involves investigation of the relationship between our estimated model and the actual model. However, full appraisal is difficult for large 3-D problems such as electrical resistivity tomography (ERT). We tackle the appraisal problem for 3-D ERT via the point-spread functions (PSFs) of the linearized resolution matrix. The PSFs represent the impulse response of the inverse solution and quantify our parameter-specific resolving capability. We implement an iterative least-squares solution of the PSF for the ERT experiment, using on-the-fly calculation of the sensitivity via an adjoint integral equation with stored Green's functions and subgrid reduction. For a synthetic example, analysis of individual PSFs demonstrates the truly 3-D character of the resolution. The PSFs for the ERT experiment are Gaussian-like in shape, with directional asymmetry and significant off-diagonal features. Computation of attributes representative of the blurring and localization of the PSF reveals significant spatial dependence of the resolution, with some correlation to the electrode infrastructure. Application to a time-lapse ground-water monitoring experiment demonstrates the utility of the PSF for assessing feature discrimination, predicting artefacts and identifying model dependence of resolution. For a judicious selection of model parameters, we analyse the PSFs and their attributes to quantify the case-specific localized resolving capability and its variability over regions of interest. We observe approximate interborehole resolving capability of less than 1-1.5 m in the vertical direction and less than 1-2.5 m in the horizontal direction. Resolving capability deteriorates significantly outside the electrode infrastructure.
NASA Astrophysics Data System (ADS)
Babaee, Hessam; Choi, Minseok; Sapsis, Themistoklis P.; Karniadakis, George Em
2017-09-01
We develop a new robust methodology for the stochastic Navier-Stokes equations based on the dynamically-orthogonal (DO) and bi-orthogonal (BO) methods [1-3]. Both approaches are variants of a generalized Karhunen-Loève (KL) expansion in which both the stochastic coefficients and the spatial basis evolve according to system dynamics, hence capturing the low-dimensional structure of the solution. The DO and BO formulations are mathematically equivalent [3], but they exhibit computationally complementary properties. Specifically, the BO formulation may fail due to crossing of the eigenvalues of the covariance matrix, while both BO and DO become unstable when the covariance matrix has a high condition number or zero eigenvalues. To this end, we combine the two methods into a robust hybrid framework, and in addition we employ a pseudo-inverse technique to invert the covariance matrix. The robustness of the proposed method stems from addressing the following issues in the DO/BO formulation: (i) eigenvalue crossing: we resolve the issue of eigenvalue crossing in the BO formulation by switching to DO near an eigenvalue crossing, using the equivalence theorem, and switching back to BO when the distance between eigenvalues is larger than a threshold value; (ii) ill-conditioned covariance matrix: we utilize a pseudo-inverse strategy to invert the covariance matrix; (iii) adaptivity: we utilize an adaptive strategy to add/remove modes to resolve the covariance matrix up to a threshold value. In particular, we introduce a soft-threshold criterion to allow the system to adapt to a newly added/removed mode and therefore avoid repetitive and unnecessary mode addition/removal. When the total variance approaches zero, we show that the DO/BO formulation becomes equivalent to the evolution equation of the Optimally Time-Dependent modes [4].
We demonstrate the capability of the proposed methodology with several numerical examples, namely (i) stochastic Burgers equation: we analyze the performance of the method in the presence of eigenvalue crossing and zero eigenvalues; (ii) stochastic Kovasznay flow: we examine the method in the presence of a singular covariance matrix; and (iii) we examine the adaptivity of the method for an incompressible flow over a cylinder where for large stochastic forcing thirteen DO/BO modes are active.
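A generic eigendecomposition-based pseudo-inverse of the kind invoked above, which survives exact zero eigenvalues, can be sketched as follows (the covariance matrix and threshold are stand-ins, not the paper's operator):

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in covariance matrix that is exactly rank-deficient (zero eigenvalues),
# the situation that breaks a plain inverse in the DO/BO evolution equations.
A = rng.standard_normal((8, 5))
C = A @ A.T                                   # 8x8, rank 5, symmetric PSD

def pseudo_inverse(C, rtol=1e-10):
    """Invert only eigenvalues above a relative threshold; zero out the rest."""
    lam, Q = np.linalg.eigh(C)
    inv_lam = np.zeros_like(lam)
    mask = lam > rtol * lam.max()
    inv_lam[mask] = 1.0 / lam[mask]
    return (Q * inv_lam) @ Q.T

Cp = pseudo_inverse(C)
```

The threshold plays the same role as the mode-addition/removal criterion: directions with (near-)zero variance are simply excluded from the inversion instead of amplifying noise.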
Thieke, Christian; Nill, Simeon; Oelfke, Uwe; Bortfeld, Thomas
2002-05-01
In inverse planning for intensity-modulated radiotherapy, the dose calculation is a crucial element limiting both the maximum achievable plan quality and the speed of the optimization process. One way to integrate accurate dose calculation algorithms into inverse planning is to precalculate the dose contribution of each beam element to each voxel for unit fluence. These precalculated values are stored in a big dose calculation matrix. The dose calculation during the iterative optimization process then consists merely of matrix look-up and multiplication with the actual fluence values. However, because the dose calculation matrix can become very large, this ansatz requires a lot of computer memory and is still very time consuming, making it impractical for clinical routine without further modifications. In this work we present a new method to significantly reduce the number of entries in the dose calculation matrix. The method utilizes the fact that a photon pencil beam has a rapid radial dose falloff and consists mostly of very small dose values. In this low-dose part of the pencil beam, the dose contribution to a voxel is only integrated into the dose calculation matrix with a certain probability. Normalization with the reciprocal of this probability preserves the total energy, even though many matrix elements are omitted. Three probability distributions were tested to find the most accurate one for a given memory size. The sampling method is compared with the use of a fully filled matrix and with the well-known method of simply cutting off the pencil beam at a certain lateral distance. A clinical example of a head and neck case is presented. It turns out that a sampled dose calculation matrix with only 1/3 of the entries of the fully filled matrix does not sacrifice the quality of the resulting plans, whereas the cutoff method results in a suboptimal treatment plan.
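The probabilistic sparsification with reciprocal-probability normalization can be sketched on a single pencil-beam profile; the exponential falloff and the probability model below are assumptions for illustration, not the paper's tested distributions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in pencil-beam dose column: rapid radial falloff, mostly tiny values.
r = np.linspace(0.0, 5.0, 10000)
dose = np.exp(-2.0 * r)

# Keep each low-dose entry with probability p, and divide kept values by p so
# the stored matrix stays unbiased: E[stored] = p * (d/p) = d, preserving energy.
p = np.clip(dose / dose.max(), 0.05, 1.0)      # assumed probability model
keep = rng.random(dose.size) < p
sampled = np.where(keep, dose / p, 0.0)

print(keep.mean())                # fraction of matrix entries actually stored
```

High-dose entries (p = 1) are always kept exactly, while the long low-dose tail is thinned out; the 1/p rescaling keeps the expected total dose unchanged.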
Deghosting based on the transmission matrix method
NASA Astrophysics Data System (ADS)
Wang, Benfeng; Wu, Ru-Shan; Chen, Xiaohong
2017-12-01
As seismic exploration and subsequent exploitation advance, marine acquisition systems with towed streamers have become an important means of seismic data acquisition. However, the air-water reflective interface generates surface-related multiples, including ghosts, which affect the accuracy and performance of subsequent seismic data processing algorithms. Thus, we derive a deghosting method from a new perspective, i.e. using the transmission matrix (T-matrix) method instead of the inverse scattering series. The T-matrix-based deghosting algorithm includes all scattering effects and converges absolutely. Initially, the effectiveness of the proposed method is demonstrated using synthetic data obtained from a designed layered model, and its noise-resistant property is illustrated using noisy synthetic data contaminated by random noise. Numerical examples on complicated data from the open SMAART Pluto model and on field marine data further demonstrate the validity and flexibility of the proposed method. After deghosting, low-frequency components are recovered reasonably and spurious high-frequency components are attenuated; the recovered low-frequency components will be useful for subsequent full waveform inversion. The proposed deghosting method is currently suitable for two-dimensional towed-streamer cases with accurate constant depth information; its extension to variable-depth streamers in three-dimensional cases will be studied in the future.
Gianola, Daniel; Fariello, Maria I.; Naya, Hugo; Schön, Chris-Carolin
2016-01-01
Standard genome-wide association studies (GWAS) scan for relationships between each of p molecular markers and a continuously distributed target trait. Typically, a marker-based matrix of genomic similarities among individuals (G) is constructed, to account more properly for the covariance structure in the linear regression model used. We show that the generalized least-squares estimator of the regression of phenotype on one or on m markers is invariant with respect to whether or not the marker(s) tested is(are) used for building G, provided variance components are unaffected by exclusion of such marker(s) from G. The result is arrived at by using a matrix expression such that one can obtain the many inverses of genomic relationship, or of phenotypic covariance, matrices that stem from removing the markers tested as fixed, while carrying out only a single inversion. When eigenvectors of the genomic relationship matrix are used as regressors with fixed regression coefficients, e.g., to account for population stratification, their removal from G does matter. Removal of eigenvectors from G can have a noticeable effect on estimates of genomic and residual variances, so caution is needed. Concepts were illustrated using genomic data on 599 wheat inbred lines, with grain yield as target trait, and on close to 200 Arabidopsis thaliana accessions. PMID:27520956
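The paper's single-inversion device is more general, but its rank-one special case is the classical Sherman-Morrison identity: the inverse after removing one marker's contribution follows from the stored inverse by matrix-vector products. A sketch with random stand-ins for the genotype matrix:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
M = rng.standard_normal((n, 50))        # stand-in centered genotype matrix (50 markers)
G = M @ M.T / 50 + 0.1 * np.eye(n)      # stand-in genomic relationship matrix
G_inv = np.linalg.inv(G)                # the single expensive inversion

def downdate_inverse(G_inv, u):
    """Sherman-Morrison: inverse of (G - u u^T) from the stored inverse of G."""
    Gu = G_inv @ u
    return G_inv + np.outer(Gu, Gu) / (1.0 - u @ Gu)

# Removing marker 0 from G subtracts a rank-one term u u^T (u = scaled genotype
# column); the updated inverse needs no new factorization.
u = M[:, 0] / np.sqrt(50)
G_minus_inv = downdate_inverse(G_inv, u)
```

Testing each of p markers therefore costs p cheap updates instead of p full inversions, which is the computational point of the invariance result.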
Zhou, Xiaolong; Wang, Xina; Feng, Xi; Zhang, Kun; Peng, Xiaoniu; Wang, Hanbin; Liu, Chunlei; Han, Yibo; Wang, Hao; Li, Quan
2017-07-12
Carbon dots (C dots, size < 10 nm) have been conventionally decorated onto semiconductor matrices for photocatalytic H2 evolution, but the efficiency is largely limited by the low loading ratio of the C dots on the photocatalyst. Here, we propose an inverse structure of Cd0.5Zn0.5S quantum dots (QDs) loaded onto an onion-like carbon (OLC) matrix for noble-metal-free photocatalytic H2 evolution. Cd0.5Zn0.5S QDs (6.9 nm) were uniformly distributed on an OLC (30 nm) matrix with both upconverted and downconverted photoluminescence properties. Such an inverse structure allows the full optimization of the QD/OLC interfaces for effective energy transfer and charge separation, both of which contribute to efficient H2 generation. An optimized H2 generation rate of 2018 μmol/h/g (under the irradiation of visible light) and 58.6 μmol/h/g (under the irradiation of 550-900 nm light) was achieved in the Cd0.5Zn0.5S/OLC composite samples. The present work shows that using the OLC matrix in such an inverse construction is a promising strategy for noble-metal-free solar hydrogen production.
Fabrication of cell-benign inverse opal hydrogels for three-dimensional cell culture.
Im, Pilseon; Ji, Dong Hwan; Kim, Min Kyung; Kim, Jaeyun
2017-05-15
Inverse opal hydrogels (IOHs) for cell culture were fabricated and optimized using calcium-crosslinked alginate microbeads as sacrificial template and gelatin as a matrix. In contrast to traditional three-dimensional (3D) scaffolds, the gelatin IOHs allowed the utilization of both the macropore surface and inner matrix for cell co-culture. In order to remove templates efficiently for the construction of 3D interconnected macropores and to maintain high cell viability during the template removal process using EDTA solution, various factors in fabrication, including alginate viscosity, alginate concentration, alginate microbeads size, crosslinking calcium concentration, and gelatin network density were investigated. Low viscosity alginate, lower crosslinking calcium ion concentration, and lower concentration of alginate and gelatin were found to obtain high viability of cells encapsulated in the gelatin matrix after removal of the alginate template by EDTA treatment by allowing rapid dissociation and diffusion of alginate polymers. Based on the optimized fabrication conditions, gelatin IOHs showed good potential as a cell co-culture system, applicable to tissue engineering and cancer research.
Spectral Calculation of ICRF Wave Propagation and Heating in 2-D Using Massively Parallel Computers
NASA Astrophysics Data System (ADS)
Jaeger, E. F.; D'Azevedo, E.; Berry, L. A.; Carter, M. D.; Batchelor, D. B.
2000-10-01
Spectral calculations of ICRF wave propagation in plasmas have the natural advantage that they require no assumption regarding the smallness of the ion Larmor radius ρ relative to the wavelength λ. Results are therefore applicable to all orders in k⊥ρ, where k⊥ = 2π/λ. But because all modes in the spectral representation are coupled, the solution requires inversion of a large dense matrix. In contrast, finite difference algorithms involve only matrices that are sparse and banded. Thus, spectral calculations of wave propagation and heating in tokamak plasmas have so far been limited to 1-D. In this paper, we extend the spectral method to 2-D by taking advantage of new matrix inversion techniques that utilize massively parallel computers. By spreading the dense matrix over 576 processors on the ORNL IBM RS/6000 SP supercomputer, we are able to solve up to 120,000 coupled complex equations, requiring 230 GBytes of memory and achieving over 500 Gflops/sec. Initial results for ASDEX and NSTX will be presented using up to 200 modes in both the radial and vertical dimensions.
A fast object-oriented Matlab implementation of the Reproducing Kernel Particle Method
NASA Astrophysics Data System (ADS)
Barbieri, Ettore; Meo, Michele
2012-05-01
Novel numerical methods, known as Meshless Methods or Meshfree Methods and, in a wider perspective, Partition of Unity Methods, promise to overcome most of the disadvantages of traditional finite element techniques. The absence of a mesh makes meshfree methods very attractive for problems involving large deformations, moving boundaries and crack propagation. However, meshfree methods still have significant limitations that prevent their acceptance among researchers and engineers, namely their computational cost. This paper presents an in-depth analysis of computational techniques to speed up the computation of the shape functions in the Reproducing Kernel Particle Method and Moving Least Squares, with particular focus on their bottlenecks: the neighbour search, the inversion of the moment matrix, and the assembly of the stiffness matrix. The paper presents numerous computational solutions aimed at a considerable reduction of the computational times: the use of kd-trees for the neighbour search, sparse indexing of the nodes-points connectivity and, most importantly, the explicit and vectorized inversion of the moment matrix without using loops and numerical routines.
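The neighbour-search bottleneck mentioned above is typically handled with a kd-tree; a minimal sketch using SciPy (the node cloud and support radius are stand-ins, and the paper's own implementation is in Matlab, not Python):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
nodes = rng.random((5000, 2))            # meshfree node cloud
points = rng.random((1000, 2))           # evaluation (quadrature) points
radius = 0.05                            # assumed kernel support radius

# kd-tree: each radius query is logarithmic-time instead of scanning all nodes.
tree = cKDTree(nodes)
neighbors = tree.query_ball_point(points, r=radius)

# Brute-force check for one evaluation point.
d = np.linalg.norm(nodes - points[0], axis=1)
brute = set(np.nonzero(d <= radius)[0])
```

Only the nodes returned for each point enter that point's moment matrix, which is what keeps the shape-function assembly tractable.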
Parallel halftoning technique using dot diffusion optimization
NASA Astrophysics Data System (ADS)
Molina-Garcia, Javier; Ponomaryov, Volodymyr I.; Reyes-Reyes, Rogelio; Cruz-Ramos, Clara
2017-05-01
In this paper, a novel approach for halftone images obtained by the Dot Diffusion (DD) method is proposed and implemented. The designed technique is based on an optimization of the so-called class matrix used in the DD algorithm; it consists of generating new versions of the class matrix that have no baron and near-baron entries, in order to minimize inconsistencies during the distribution of the error. The proposed class matrices have different properties, each designed for one of two applications: applications where inverse halftoning is necessary, and applications where it is not required. The proposed method has been implemented on a GPU (NVIDIA GeForce GTX 750 Ti) and on multicore processors (AMD FX(tm)-6300 Six-Core Processor and Intel Core i5-4200U), using CUDA and OpenCV on a Linux PC. Experimental results have shown that the novel framework generates good-quality halftone images and inverse halftone images. The simulation results using parallel architectures demonstrate the efficiency of the novel technique when implemented for real-time processing.
Direct discriminant locality preserving projection with Hammerstein polynomial expansion.
Chen, Xi; Zhang, Jiashu; Li, Defang
2012-12-01
Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, we can optimize the objective function of DLPP in reproducing kernel Hilbert space to form a kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers from the following problems: 1) a large computational burden; 2) no explicit mapping functions, which results in an additional computational burden when projecting a new sample into the low-dimensional subspace; and 3) an inability to obtain the optimal discriminant vectors that best optimize the objective of DLPP. To overcome the weaknesses of KDLPP, in this paper, a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP) is proposed. The proposed HPDDLPP directly implements the objective of DLPP in high-dimensional second-order Hammerstein polynomial space without matrix inversion, and extracts the optimal discriminant vectors for DLPP without a large computational burden. Compared with some other related classical methods, experimental results for face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.
Research on allocation efficiency of the daisy chain allocation algorithm
NASA Astrophysics Data System (ADS)
Shi, Jingping; Zhang, Weiguo
2013-03-01
With the improvement of aircraft performance in reliability, maneuverability and survivability, the number of control effectors has increased considerably. How to distribute the three-axis moments among the control surfaces reasonably becomes an important problem. The daisy chain method is simple and easy to implement in the design of the allocation system, but it cannot solve the allocation problem for the entire attainable moment subset. For the lateral-directional allocation problem, the allocation efficiency of the daisy chain can be directly measured by the area of its subset of attainable moments. Because of the non-linear allocation characteristic, the subset of attainable moments of the daisy-chain method is a complex non-convex polygon, which is difficult to compute directly. By analyzing the two-dimensional allocation problem with a "micro-element" idea, a numerical algorithm is proposed to compute the area of the non-convex polygon. In order to improve the allocation efficiency of the algorithm, a genetic algorithm with the allocation efficiency chosen as the fitness function is proposed to find the best pseudo-inverse matrix.
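The abstract does not detail the "micro-element" algorithm, so the sketch below pairs a generic micro-element (cell-counting) area estimate for a non-convex polygon with the exact shoelace formula as a check; the polygon is a made-up stand-in for an attainable-moment set:

```python
import numpy as np

# A simple non-convex polygon (vertices in order), standing in for an
# attainable-moment subset; its exact area is 8.
poly = np.array([(0, 0), (4, 0), (4, 3), (2, 1), (0, 3)], dtype=float)

def shoelace(p):
    """Exact area of a simple (possibly non-convex) polygon."""
    x, y = p[:, 0], p[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def micro_element(p, h=0.01):
    """Micro-element estimate: count h*h cells whose centre lies inside."""
    xs = np.arange(p[:, 0].min() + h / 2, p[:, 0].max(), h)
    ys = np.arange(p[:, 1].min() + h / 2, p[:, 1].max(), h)
    X, Y = np.meshgrid(xs, ys)
    inside = np.zeros(X.shape, dtype=bool)
    n = len(p)
    with np.errstate(divide="ignore", invalid="ignore"):
        for i in range(n):                 # even-odd ray-casting test per edge
            x1, y1 = p[i]
            x2, y2 = p[(i + 1) % n]
            cond = (y1 > Y) != (y2 > Y)
            xint = (x2 - x1) * (Y - y1) / (y2 - y1) + x1
            inside ^= cond & (X < xint)
    return inside.sum() * h * h
```

The micro-element count converges to the true area as the cell size h shrinks, and it works unchanged for non-convex shapes where simple geometric decompositions fail.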
A Hybrid Seismic Inversion Method for V_P/V_S Ratio and Its Application to Gas Identification
NASA Astrophysics Data System (ADS)
Guo, Qiang; Zhang, Hongbing; Han, Feilong; Xiao, Wei; Shang, Zuoping
2018-03-01
The ratio of compressional wave velocity to shear wave velocity (V_P/V_S ratio) has established itself as one of the most important parameters in identifying gas reservoirs. However, considering that the seismic inversion process is highly non-linear and the geological conditions encountered may be complex, a direct estimation of the V_P/V_S ratio from pre-stack seismic data remains a challenging task. In this paper, we propose a hybrid seismic inversion method to estimate the V_P/V_S ratio directly. In this method, post- and pre-stack inversions are combined, in which the pre-stack inversion for the V_P/V_S ratio is driven by the post-stack inversion results (i.e., V_P and density). In particular, the V_P/V_S ratio is considered as a model parameter and is directly inverted from the pre-stack inversion based on the exact Zoeppritz equation. Moreover, an anisotropic Markov random field is employed in order to regularise the inversion process as well as to take care of geological structure (boundary) information. Aided by the proposed hybrid inversion strategy, the directional weighting coefficients incorporated in the anisotropic Markov random field neighbourhoods are quantitatively calculated by the anisotropic diffusion method. The synthetic test demonstrates the effectiveness of the proposed inversion method. In particular, given the low quality of the pre-stack data and the high heterogeneity of the target layers in the field data, the proposed inversion method reveals a detailed model of the V_P/V_S ratio that can successfully identify the gas-bearing zones.
Simulating reservoir lithologies by an actively conditioned Markov chain model
NASA Astrophysics Data System (ADS)
Feng, Runhai; Luthi, Stefan M.; Gisolf, Dries
2018-06-01
The coupled Markov chain model can be used to simulate reservoir lithologies between wells, by conditioning them on the observed data in the cored wells. However, with this method only the state at the same depth as the current cell is used for conditioning, which may be a problem if the geological layers are dipping. This causes the simulated lithological layers to be broken or to become discontinuous across the reservoir. In order to address this problem, an actively conditioned process is proposed here, in which a tolerance angle is predefined. The states contained in the region constrained by the tolerance angle are employed for conditioning in the horizontal chain first, after which a coupling concept with the vertical chain is implemented. In order to use the same horizontal transition matrix for different future states, the tolerance angle has to be small, which restricts the method to reservoirs without complex structures caused by depositional processes or tectonic deformations. Directional artefacts in the modeling process are avoided through a careful choice of the simulation path. The tolerance angle and dipping direction of the strata can be obtained from a correlation between wells, or from seismic data, which are available in most hydrocarbon reservoirs, either by interpretation or by inversion; the latter can also assist the construction of the horizontal probability matrix.
Young inversion with multiple linked QTLs under selection in a hybrid zone.
Lee, Cheng-Ruei; Wang, Baosheng; Mojica, Julius P; Mandáková, Terezie; Prasad, Kasavajhala V S K; Goicoechea, Jose Luis; Perera, Nadeesha; Hellsten, Uffe; Hundley, Hope N; Johnson, Jenifer; Grimwood, Jane; Barry, Kerrie; Fairclough, Stephen; Jenkins, Jerry W; Yu, Yeisoo; Kudrna, Dave; Zhang, Jianwei; Talag, Jayson; Golser, Wolfgang; Ghattas, Kathryn; Schranz, M Eric; Wing, Rod; Lysak, Martin A; Schmutz, Jeremy; Rokhsar, Daniel S; Mitchell-Olds, Thomas
2017-04-03
Fixed chromosomal inversions can reduce gene flow and promote speciation in two ways: by suppressing recombination and by carrying locally favoured alleles at multiple loci. However, it is unknown whether favoured mutations slowly accumulate on older inversions or if young inversions spread because they capture pre-existing adaptive quantitative trait loci (QTLs). By genetic mapping, chromosome painting and genome sequencing, we have identified a major inversion controlling ecologically important traits in Boechera stricta. The inversion arose since the last glaciation and subsequently reached local high frequency in a hybrid speciation zone. Furthermore, the inversion shows signs of positive directional selection. To test whether the inversion could have captured existing, linked QTLs, we crossed standard, collinear haplotypes from the hybrid zone and found multiple linked phenology QTLs within the inversion region. These findings provide the first direct evidence that linked, locally adapted QTLs may be captured by young inversions during incipient speciation.
Recursive partitioned inversion of large (1500 x 1500) symmetric matrices
NASA Technical Reports Server (NTRS)
Putney, B. H.; Brownd, J. E.; Gomez, R. A.
1976-01-01
A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations arising in geodetic problems. The algorithm was incorporated into a computer program called SOLVE. The SOLVE program has been used to obtain the solutions published as the Goddard earth models.
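The partitioned-inversion idea can be sketched with the standard 2x2 block formula for a symmetric positive definite matrix. This is an illustrative numpy sketch, not the SOLVE implementation; applying the same formula recursively to the diagonal blocks yields the recursive algorithm:

```python
import numpy as np

def partitioned_inverse(A, k):
    """Invert a symmetric positive definite matrix via a 2x2 block partition.
    Only the k x k block and its (n-k) x (n-k) Schur complement are inverted,
    so each piece can be held in core separately (the idea behind recursive
    partitioning; recursing on A11 and S reduces memory further)."""
    A11, A12 = A[:k, :k], A[:k, k:]
    A21, A22 = A[k:, :k], A[k:, k:]
    A11_inv = np.linalg.inv(A11)
    S = A22 - A21 @ A11_inv @ A12          # Schur complement (also SPD)
    S_inv = np.linalg.inv(S)
    B12 = -A11_inv @ A12 @ S_inv
    B11 = A11_inv + A11_inv @ A12 @ S_inv @ A21 @ A11_inv
    return np.block([[B11, B12], [B12.T, S_inv]])

# Example on a random SPD matrix
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 6))
A = X @ X.T + 6 * np.eye(6)
err = np.abs(partitioned_inverse(A, 3) @ A - np.eye(6)).max()
```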
Redundant interferometric calibration as a complex optimization problem
NASA Astrophysics Data System (ADS)
Grobler, T. L.; Bernardi, G.; Kenyon, J. S.; Parsons, A. R.; Smirnov, O. M.
2018-05-01
Observations of the redshifted 21 cm line from the epoch of reionization have recently motivated the construction of low-frequency radio arrays with highly redundant configurations. These configurations provide an alternative calibration strategy - `redundant calibration' - and boost sensitivity on specific spatial scales. In this paper, we formulate calibration of redundant interferometric arrays as a complex optimization problem. We solve this optimization problem via the Levenberg-Marquardt algorithm. This calibration approach is more robust to initial conditions than current algorithms and, by leveraging an approximate matrix inversion, allows for further optimization and an efficient implementation (`redundant STEFCAL'). We also investigate the preconditioned conjugate gradient method as an alternative to the approximate matrix inverse, but find that its computational performance is not competitive with that of `redundant STEFCAL'. The efficient implementation of this new algorithm is made publicly available.
NASA Astrophysics Data System (ADS)
Schumacher, F.; Friederich, W.
2015-12-01
We present the modularized software package ASKI which is a flexible and extendable toolbox for seismic full waveform inversion (FWI) as well as sensitivity or resolution analysis operating on the sensitivity matrix. It utilizes established wave propagation codes for solving the forward problem and offers an alternative to the monolithic, inflexible and hard-to-modify codes that have typically been written for solving inverse problems. It is available under the GPL at www.rub.de/aski. The Gauss-Newton FWI method for 3D-heterogeneous elastic earth models is based on waveform sensitivity kernels and can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. The kernels are derived in the frequency domain from Born scattering theory as the Fréchet derivatives of linearized full waveform data functionals, quantifying the influence of elastic earth model parameters on the particular waveform data values. As an important innovation, we keep two independent spatial descriptions of the earth model - one for solving the forward problem and one representing the inverted model updates. Thereby we account for the independent spatial-resolution needs of the forward and inverse problems, respectively. Due to pre-integration of the kernels over the (in general much coarser) inversion grid, storage requirements for the sensitivity kernels are dramatically reduced. ASKI can be flexibly extended to other forward codes by providing it with specific interface routines that contain knowledge about forward code-specific file formats and auxiliary information provided by the new forward code. In order to sustain flexibility, the ASKI tools must communicate via file output/input, thus large storage capacities need to be accessible in a convenient way.
Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion.
Das, Anup; Sampson, Aaron L.; Lainscsek, Claudia; Muller, Lyle; Lin, Wutu; Doyle, John C.; Cash, Sydney S.; Halgren, Eric; Sejnowski, Terrence J.
2017-01-01
The correlation method from brain imaging has been used to estimate functional connectivity in the human brain. However, brain regions might show very high correlation even when the two regions are not directly connected due to the strong interaction of the two regions with common input from a third region. One previously proposed solution to this problem is to use a sparse regularized inverse covariance matrix or precision matrix (SRPM) assuming that the connectivity structure is sparse. This method yields partial correlations to measure strong direct interactions between pairs of regions while simultaneously removing the influence of the rest of the regions, thus identifying regions that are conditionally independent. To test our methods, we first demonstrated conditions under which the SRPM method could indeed find the true physical connection between a pair of nodes for a spring-mass example and an RC circuit example. The recovery of the connectivity structure using the SRPM method can be explained by energy models using the Boltzmann distribution. We then demonstrated the application of the SRPM method for estimating brain connectivity during stage 2 sleep spindles from human electrocorticography (ECoG) recordings using an 8 × 8 electrode array. The ECoG recordings that we analyzed were from a 32-year-old male patient with long-standing pharmaco-resistant left temporal lobe complex partial epilepsy. Sleep spindles were automatically detected using delay differential analysis and then analyzed with SRPM and the Louvain method for community detection. We found spatially localized brain networks within and between neighboring cortical areas during spindles, in contrast to the case when sleep spindles were not present. PMID:28095202
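The link between the precision matrix and partial correlations that underlies the SRPM approach can be sketched as follows. For illustration, the sparse regularized estimate is replaced by a plain matrix inverse of a small analytic covariance for an assumed "common input" network:

```python
import numpy as np

def partial_correlations(C):
    """Partial correlation matrix from a covariance matrix C:
    pc_ij = -P_ij / sqrt(P_ii * P_jj), where P = C^{-1} is the precision matrix.
    A zero off-diagonal entry of P means the two variables are conditionally
    independent given all the others (for Gaussian data)."""
    P = np.linalg.inv(C)
    d = np.sqrt(np.diag(P))
    pc = -P / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

# Toy common-input network: x1 = z + e1, x2 = z + e2, x3 = z,
# with z, e1, e2 independent unit-variance sources. x1 and x2 are
# marginally correlated only through their common input x3.
C = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 1.0]])
marginal = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])   # 0.5: looks "connected"
pc = partial_correlations(C)                       # pc[0, 1] is 0: no direct link
```

The high marginal correlation between nodes 1 and 2 vanishes in the partial correlation, while the genuine direct links to the common input survive.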
3D Magnetization Vector Inversion of Magnetic Data: Improving and Comparing Methods
NASA Astrophysics Data System (ADS)
Liu, Shuang; Hu, Xiangyun; Zhang, Henglei; Geng, Meixia; Zuo, Boxin
2017-12-01
Magnetization vector inversion is an useful approach to invert for magnetic anomaly in the presence of significant remanent magnetization and self-demagnetization. However, magnetizations are usually obtained in many different directions under the influences of geophysical non-uniqueness. We propose an iteration algorithm of magnetization vector inversion (M-IDI) that one couple of magnetization direction is iteratively computed after the magnetization intensity is recovered from the magnitude magnetic anomaly. And we compare it with previous methods of (1) three orthogonal components inversion of total magnetization vector at Cartesian framework (MMM), (2) intensity, inclination and declination inversion at spherical framework (MID), (3) directly recovering the magnetization inclination and declination (M-IDCG) and (4) estimating the magnetization direction using correlation method (M-IDC) at the sequential inversion frameworks. The synthetic examples indicate that MMM returns multiply magnetization directions and MID results are strongly dependent on initial model and parameter weights. M-IDI computes faster than M-IDC and achieves a constant magnetization direction compared with M-IDCG. Additional priori information constraints can improve the results of MMM, MID and M-IDCG. Obtaining one magnetization direction, M-IDC and M-IDI are suitable for single and isolated anomaly. Finally, M-IDI and M-IDC are used to invert and interpret the magnetic anomaly of the Galinge iron-ore deposit (NW China) and the results are verified by information from drillholes and physical properties measurements of ore and rock samples. Magnetization vector inversion provides a comprehensive way to evaluate and investigate the remanent magnetization and self-demagnetization.
Matrix of moments of the Legendre polynomials and its application to problems of electrostatics
NASA Astrophysics Data System (ADS)
Savchenko, A. O.
2017-01-01
In this work, properties of the matrix of moments of the Legendre polynomials are presented and proven. In particular, the explicit form of the elements of the matrix inverse to the matrix of moments is found and theorems of the linear combination and orthogonality are proven. On the basis of these properties, the total charge and the dipole moment of a conducting ball in a nonuniform electric field, the charge distribution over the surface of the conducting ball, its multipole moments, and the force acting on a conducting ball situated on the axis of a nonuniform axisymmetric electric field are determined. All assertions are formulated in theorems, the proofs of which are based on the properties of the matrix of moments of the Legendre polynomials.
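A numerical sketch of the matrix of moments, assuming the elements are M[k, n] = integral over [-1, 1] of x^k P_n(x) dx (the paper's exact normalization may differ). Orthogonality of P_n to all lower powers makes the matrix lower triangular, so its inverse is cheap and also lower triangular:

```python
import numpy as np
from numpy.polynomial import legendre

def moment_matrix(N):
    """M[k, n] = integral_{-1}^{1} x^k P_n(x) dx for k, n = 0..N-1."""
    M = np.zeros((N, N))
    for n in range(N):
        # monomial coefficients of the Legendre polynomial P_n
        c = legendre.leg2poly([0.0] * n + [1.0])
        for k in range(N):
            # integral of x^(k+j) over [-1, 1] is 2/(k+j+1) if k+j is even, else 0
            M[k, n] = sum(c[j] * 2.0 / (k + j + 1)
                          for j in range(len(c)) if (k + j) % 2 == 0)
    return M

M = moment_matrix(5)
# x^k is a combination of P_0..P_k, hence orthogonal to P_n for n > k,
# so M[k, n] = 0 above the diagonal: M is lower triangular and invertible.
M_inv = np.linalg.inv(M)
```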
Near constant-time optimal piecewise LDR to HDR inverse tone mapping
NASA Astrophysics Data System (ADS)
Chen, Qian; Su, Guan-Ming; Yin, Peng
2015-02-01
In backward compatible HDR image/video compression, a common approach is to reconstruct HDR from the compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd-order polynomial gives better mapping accuracy than a single high-order polynomial or a 2-piecewise linear mapping, but it is also the most time-consuming method, because finding the optimal pivot point that splits the LDR range into 2 pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd-order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least-squares solution, each entry in the intermediate matrix can be written as the sum of some basic terms, which can be pre-calculated into look-up tables. Since solving the matrix reduces to looking up values in tables, computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot point search to find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while running about 60 times faster than the traditional exhaustive search in 2-piecewise 2nd-order polynomial inverse tone mapping with a continuity constraint.
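The look-up-table idea can be sketched as follows: histogram-weighted prefix sums of x^m and y*x^m make the normal equations of any codeword segment an O(1) table difference, so every candidate pivot can be scored cheaply. A minimal sketch with synthetic data (the continuity constraint of the paper is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.integers(0, 256, size=5000)                            # LDR codewords
y = 0.002 * x**2 + 0.5 * x + rng.normal(0, 1.0, size=x.size)   # "HDR" targets

# Tables: per-codeword sums of x^m and y*x^m, then prefix sums, so the
# normal-equation entries of any codeword range are two table look-ups.
B = 256
Sx = np.zeros((5, B))                         # sums of x^0..x^4 per codeword
Sy = np.zeros((3, B))                         # sums of y*x^0..y*x^2 per codeword
for m in range(5):
    np.add.at(Sx[m], x, np.asarray(x, float) ** m)
for m in range(3):
    np.add.at(Sy[m], x, y * np.asarray(x, float) ** m)
Sx, Sy = Sx.cumsum(axis=1), Sy.cumsum(axis=1)

def fit_segment(lo, hi):
    """Least-squares 2nd-order fit over codewords [lo, hi) from the tables."""
    gx = Sx[:, hi - 1] - (Sx[:, lo - 1] if lo > 0 else 0)
    gy = Sy[:, hi - 1] - (Sy[:, lo - 1] if lo > 0 else 0)
    A = np.array([[gx[0], gx[1], gx[2]],
                  [gx[1], gx[2], gx[3]],
                  [gx[2], gx[3], gx[4]]])
    coef = np.linalg.solve(A, gy)             # [a, b, c] of a + b*x + c*x^2
    # Segment SSE equals sum(y^2) - coef @ gy; the sum(y^2) part is the same
    # for every pivot, so -coef @ gy alone ranks the candidates.
    return coef, -coef @ gy

# Exhaustive pivot search is now cheap: each candidate costs O(1) table work.
best = min(range(8, 248),
           key=lambda p: fit_segment(0, p)[1] + fit_segment(p, 256)[1])
```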
NASA Astrophysics Data System (ADS)
Kuo, Chih-Hao
Efficient and accurate modeling of electromagnetic scattering from layered rough surfaces with buried objects finds applications ranging from detection of landmines to remote sensing of subsurface soil moisture. The formulation of a hybrid numerical/analytical solution to electromagnetic scattering from layered rough surfaces is first presented in this dissertation. The solution to scattering from each rough interface is sought independently based on the extended boundary condition method (EBCM), where the scattered fields of each rough interface are expressed as a summation of plane waves and then cast into reflection/transmission matrices. To account for interactions between multiple rough boundaries, the scattering matrix method (SMM) is applied to recursively cascade reflection and transmission matrices of each rough interface and obtain the composite reflection matrix from the overall scattering medium. The validation of this method against the Method of Moments (MoM) and Small Perturbation Method (SPM) is addressed and the numerical results which investigate the potential of low frequency radar systems in estimating deep soil moisture are presented. Computational efficiency of the proposed method is also discussed. In order to demonstrate the capability of this method in modeling coherent multiple scattering phenomena, the proposed method has been employed to analyze backscattering enhancement and satellite peaks due to surface plasmon waves from layered rough surfaces. Numerical results which show the appearance of enhanced backscattered peaks and satellite peaks are presented. Following the development of the EBCM/SMM technique, a technique which incorporates a buried object in layered rough surfaces by employing the T-matrix method and the cylindrical-to-spatial harmonics transformation is proposed. Validation and numerical results are provided. 
Finally, a multi-frequency polarimetric inversion algorithm for the retrieval of subsurface soil properties using VHF/UHF-band radar measurements is devised. The top soil dielectric constant is first determined using an L-band inversion algorithm. For the retrieval of subsurface properties, a time-domain inversion technique is employed together with a parameter optimization for the pulse shape of time-delay echoes from VHF/UHF-band radar observations. Numerical studies investigating the accuracy of the proposed inversion technique in the presence of errors are addressed.
NASA Technical Reports Server (NTRS)
Morgera, S. D.; Cooper, D. B.
1976-01-01
The experimental observation that a surprisingly small sample size vis-a-vis dimension is needed to achieve good signal-to-interference ratio (SIR) performance with an adaptive predetection filter is explained. The adaptive filter requires estimates, obtained by a recursive stochastic algorithm, of the inverse of the filter input data covariance matrix. The SIR performance as a function of sample size is compared for the situations where the covariance matrix estimates are of unstructured (generalized) form and of structured (finite Toeplitz) form; the latter case is consistent with weak stationarity of the input data stochastic process.
Mesh-matrix analysis method for electromagnetic launchers
NASA Technical Reports Server (NTRS)
Elliott, David G.
1989-01-01
The mesh-matrix method is a procedure for calculating the current distribution in the conductors of electromagnetic launchers with coil or flat-plate geometry. Once the current distribution is known the launcher performance can be calculated. The method divides the conductors into parallel current paths, or meshes, and finds the current in each mesh by matrix inversion. The author presents procedures for writing equations for the current and voltage relations for a few meshes to serve as a pattern for writing the computer code. An available subroutine package provides routines for field and flux coefficients and equation solution.
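The mesh-current formulation can be illustrated on a toy DC circuit. In a launcher the matrix would be complex-valued with inductive coupling terms among many meshes, but the structure (a mesh impedance matrix solved by inversion) is the same:

```python
import numpy as np

# Hypothetical 2-mesh resistive circuit: a source V drives mesh 1;
# R2 is shared between mesh 1 and mesh 2, giving the mutual term -R2.
V, R1, R2, R3 = 10.0, 2.0, 4.0, 6.0
Z = np.array([[R1 + R2, -R2],
              [-R2, R2 + R3]])      # mesh impedance matrix
v = np.array([V, 0.0])              # source vector (one entry per mesh)
I = np.linalg.solve(Z, v)           # mesh currents by matrix inversion
# Current through the shared branch R2 is the difference of mesh currents:
i_shared = I[0] - I[1]
```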
An experimental SMI adaptive antenna array simulator for weak interfering signals
NASA Technical Reports Server (NTRS)
Dilsavor, Ronald S.; Gupta, Inder J.
1991-01-01
An experimental sample matrix inversion (SMI) adaptive antenna array for suppressing weak interfering signals is described. The experimental adaptive array uses a modified SMI algorithm to increase the interference suppression. In the modified SMI algorithm, the sample covariance matrix is redefined to reduce the effect of thermal noise on the weights of an adaptive array. This is accomplished by subtracting a fraction of the smallest eigenvalue of the original covariance matrix from its diagonal entries. The test results obtained using the experimental system are compared with theoretical results. The two show a good agreement.
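The modified-SMI weight computation can be sketched as follows, assuming the usual SMI weight rule w = R^-1 s. The fraction alpha of the smallest eigenvalue subtracted from the diagonal is a free parameter here, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 4, 200                                   # array elements, snapshots

# Snapshots: a weak interferer from one direction plus thermal noise
a_i = np.exp(1j * np.pi * np.arange(N) * np.sin(0.5))   # interferer steering vector
X = (0.3 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))[:, None] * a_i
     + (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2))

R_hat = X.conj().T @ X / K                       # sample covariance matrix
s = np.ones(N) / np.sqrt(N)                      # desired-signal steering vector

def smi_weights(R, s, alpha=0.0):
    """SMI weights w = R_mod^{-1} s, where the modified covariance
    R_mod = R - alpha * lambda_min * I subtracts a fraction of the smallest
    eigenvalue (an estimate of the noise floor) from the diagonal, which is
    intended to deepen the null on weak interference. alpha < 1 keeps R_mod
    positive definite."""
    lam_min = np.linalg.eigvalsh(R)[0]           # eigenvalues in ascending order
    R_mod = R - alpha * lam_min * np.eye(R.shape[0])
    return np.linalg.solve(R_mod, s)

w_std = smi_weights(R_hat, s)                    # conventional SMI
w_mod = smi_weights(R_hat, s, alpha=0.9)         # modified SMI
```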
Users manual for the Variable dimension Automatic Synthesis Program (VASP)
NASA Technical Reports Server (NTRS)
White, J. S.; Lee, H. Q.
1971-01-01
A dictionary and some example problems for the Variable dimension Automatic Synthesis Program (VASP) are presented. The dictionary contains a description of each subroutine and instructions on its use. The example problems give the user a better perspective on the use of VASP for solving problems in modern control theory. These example problems include dynamic response, optimal control gain, solution of the sampled-data matrix Riccati equation, matrix decomposition, and the pseudo-inverse of a matrix. Listings of all subroutines are also included. The VASP program has been adapted to run in conversational mode on the Ames 360/67 computer.
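One of the listed example problems, the pseudo-inverse of a matrix, can be illustrated in a few lines (a modern numpy sketch, not the original Fortran subroutine):

```python
import numpy as np

# Least-squares solution of an overdetermined system via the
# Moore-Penrose pseudo-inverse (one of the VASP example problems).
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])          # 3 equations, 2 unknowns
b = np.array([1.0, 2.0, 3.0])
A_pinv = np.linalg.pinv(A)          # computed internally from the SVD of A
x = A_pinv @ b                      # minimum-norm least-squares solution
```

For this consistent system the pseudo-inverse recovers the exact solution; for inconsistent data it returns the least-squares fit, and `A @ A_pinv @ A == A` holds by the Moore-Penrose conditions.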
Random matrix theory and portfolio optimization in Moroccan stock exchange
NASA Astrophysics Data System (ADS)
El Alaoui, Marwane
2015-09-01
In this work, we use random matrix theory to analyze the eigenvalues of the correlation matrix and to detect the presence of pertinent information using the Marčenko-Pastur distribution. To this end, we study the cross-correlations among stocks of the Casablanca Stock Exchange. Moreover, we clean the correlation matrix of noisy elements to see whether the gap between predicted risk and realized risk is reduced. We also analyze the distributions of eigenvector components and their degrees of deviation by computing the inverse participation ratio. This analysis is a way to understand the correlation structure among stocks of the Casablanca Stock Exchange portfolio.
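The two diagnostics mentioned, the Marčenko-Pastur band and the inverse participation ratio, can be sketched on pure-noise returns (synthetic data, not Casablanca Stock Exchange data):

```python
import numpy as np

rng = np.random.default_rng(4)
T, N = 2000, 50                                # observations, stocks (Q = T/N = 40)

returns = rng.standard_normal((T, N))          # pure-noise "returns"
C = np.corrcoef(returns, rowvar=False)         # empirical correlation matrix
eigval, eigvec = np.linalg.eigh(C)

# Marchenko-Pastur support for a random correlation matrix with Q = T/N:
Q = T / N
lam_minus = (1 - np.sqrt(1 / Q)) ** 2
lam_plus = (1 + np.sqrt(1 / Q)) ** 2
# Eigenvalues outside [lam_minus, lam_plus] would carry pertinent information;
# for pure noise essentially all of them fall inside the band.

# Inverse participation ratio: about 1/N for extended (noise) eigenvectors,
# approaching 1 for an eigenvector localized on a single stock.
ipr = (eigvec ** 4).sum(axis=0)
```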
Diagonal dominance for the multivariable Nyquist array using function minimization
NASA Technical Reports Server (NTRS)
Leininger, G. G.
1977-01-01
A new technique for the design of multivariable control systems using the multivariable Nyquist array method was developed. A conjugate direction function minimization algorithm is utilized to achieve a diagonal dominant condition over the extended frequency range of the control system. The minimization is performed on the ratio of the moduli of the off-diagonal terms to the moduli of the diagonal terms of either the inverse or direct open loop transfer function matrix. Several new feedback design concepts were also developed, including: (1) dominance control parameters for each control loop; (2) compensator normalization to evaluate open loop conditions for alternative design configurations; and (3) an interaction index to determine the degree and type of system interaction when all feedback loops are closed simultaneously. This new design capability was implemented on an IBM 360/75 in a batch mode but can be easily adapted to an interactive computer facility. The method was applied to the Pratt and Whitney F100 turbofan engine.
Variability simulations with a steady, linearized primitive equations model
NASA Technical Reports Server (NTRS)
Kinter, J. L., III; Nigam, S.
1985-01-01
Solutions of the steady primitive equations on a sphere, linearized about a zonally symmetric basic state, are computed for the purpose of simulating monthly mean variability in the troposphere. The basic states are observed winter monthly mean, zonal means of zonal and meridional velocities, temperatures and surface pressures computed from the 15 year NMC time series. A least squares fit to a series of Legendre polynomials is used to compute the basic states between 20 H and the equator, and the hemispheres are assumed symmetric. The model is spectral in the zonal direction, and centered differences are employed in the meridional and vertical directions. Since the model is steady and linear, the solution is obtained by inversion of a block penta-diagonal matrix. The model simulates the climatology of the GFDL nine-level spectral general circulation model quite closely, particularly in middle latitudes above the boundary layer. This experiment is an extension of that simulation to examine the variability of the steady, linear solution.
Phase composition, texture, and anisotropy of the properties of Al-Cu-Li-Mg alloy sheets
NASA Astrophysics Data System (ADS)
Betsofen, S. Ya.; Antipov, V. V.; Serebrennikova, N. Yu.; Dolgova, M. I.; Kabanova, Yu. A.
2017-10-01
The formation of the anisotropy of the mechanical properties, the texture, and the phase composition of thin-sheet Al-4.3Cu-1.4Li-0.4Mg and Al-1.8Li-1.8Cu-0.9Mg alloys has been studied by X-ray diffraction and tensile tests. Various types of anisotropy of the strength properties of the alloys have been revealed: normal anisotropy (strength in the longitudinal direction is higher than that in the transverse direction) in the Al-4.3Cu-1.4Li-0.4Mg alloy and inverse anisotropy in the Al-1.8Li-1.8Cu-0.9Mg alloy. It is shown that the anisotropy of the strength properties depends not only on the texture of the solid solution, but also on the content and the texture of the δ' (Al3Li) and T1 (Al2CuLi) phases and their coherency and compatibility of deformation with the matrix.
Bayesian source term determination with unknown covariance of measurements
NASA Astrophysics Data System (ADS)
Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav
2017-04-01
Determination of a source term of release of a hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimation of the source term in the conventional linear inverse problem, y = Mx, where the relationship between the vector of observations y is described using the source-receptor-sensitivity (SRS) matrix M and the unknown source term x. Since the system is typically ill-conditioned, the problem is recast as the optimization problem min_x (y - Mx)^T R^(-1) (y - Mx) + x^T B^(-1) x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. There are different types of regularization arising for different choices of the matrices R and B; for example, Tikhonov regularization takes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as unknown R and B. We assume prior on x to be a Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix of the likelihood R is also unknown. We consider two potential choices of the structure of the matrix R. First is the diagonal matrix and the second is a locally correlated structure using information on topology of the measuring network. Since the inference of the model is intractable, iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated on an application of the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
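For fixed R and B the optimization problem has a familiar closed form, sketched below. This omits the variational Bayes iteration that the contribution actually uses to infer R and B, and the SRS matrix here is a random stand-in:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 40, 12                                  # observations, source-term elements

M = np.abs(rng.standard_normal((m, n)))        # stand-in for an SRS matrix
x_true = np.zeros(n); x_true[3] = 5.0          # a localized release
y = M @ x_true + 0.1 * rng.standard_normal(m)  # noisy observations

# Minimizer of (y - Mx)^T R^(-1) (y - Mx) + x^T B^(-1) x in closed form:
#   x_hat = (M^T R^(-1) M + B^(-1))^(-1) M^T R^(-1) y
R_inv = np.eye(m) / 0.1**2                     # diagonal measurement precision
B_inv = np.eye(n) / 10.0                       # Tikhonov-style prior precision
x_hat = np.linalg.solve(M.T @ R_inv @ M + B_inv, M.T @ R_inv @ y)
```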
ALARA: The next link in a chain of activation codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, P.P.H.; Henderson, D.L.
1996-12-31
The Adaptive Laplace and Analytic Radioactivity Analysis [ALARA] code has been developed as the next link in the chain of DKR radioactivity codes. Its methods address the criticisms of DKR while retaining its best features. While DKR ignored loops in the transmutation/decay scheme to preserve the exactness of the mathematical solution, ALARA incorporates new computational approaches without jeopardizing the most important features of DKR's physical modelling and mathematical methods. The physical model uses 'straightened-loop, linear chains' to achieve the same accuracy in the loop solutions as is demanded in the rest of the scheme. In cases where a chain has no loops, the exact DKR solution is used. Otherwise, ALARA adaptively chooses between a direct Laplace inversion technique and a Laplace expansion inversion technique to optimize the accuracy and speed of the solution. All of these methods result in matrix solutions which allow the fastest and most accurate solution of exact pulsing histories. Since the entire history is solved for each chain as it is created, ALARA achieves the optimum combination of high accuracy, high speed and low memory usage. 8 refs., 2 figs.
Mathematical design of a novel input/instruction device using a moving acoustic emitter
NASA Astrophysics Data System (ADS)
Wang, Xianchao; Guo, Yukun; Li, Jingzhi; Liu, Hongyu
2017-10-01
This paper is concerned with the mathematical design of a novel input/instruction device using a moving emitter. The emitter acts as a point source and can be installed on a digital pen or worn on the finger of the human being who desires to interact/communicate with the computer. The input/instruction can be recognized by identifying the moving trajectory of the emitter performed by the human being from the collected wave field data. The identification process is modelled as an inverse source problem where one intends to identify the trajectory of a moving point source. There are several salient features of our study which distinguish our result from the existing ones in the literature. First, the point source is moving in an inhomogeneous background medium, which models the human body. Second, the dynamical wave field data are collected in a limited aperture. Third, the reconstruction method is independent of the background medium, and it is totally direct without any matrix inversion. Hence, it is efficient and robust with respect to the measurement noise. Both theoretical justifications and computational experiments are presented to verify our novel findings.
NASA Astrophysics Data System (ADS)
Ahn, Chi Young; Jeon, Kiwan; Park, Won-Kwang
2015-06-01
This study analyzes the well-known MUltiple SIgnal Classification (MUSIC) algorithm for identifying the unknown support of a thin penetrable electromagnetic inhomogeneity from scattered field data collected in the so-called multi-static response matrix in limited-view inverse scattering problems. The mathematical theory of MUSIC has been partially established, e.g., in the full-view problem, for an unknown target of dielectric contrast or a perfectly conducting crack with the Dirichlet boundary condition (Transverse Magnetic, TM, polarization). Hence, we perform further research to analyze the MUSIC-type imaging functional and to explain some well-known but theoretically unexplained phenomena. For this purpose, we establish a relationship between the MUSIC imaging functional and an infinite series of Bessel functions of integer order of the first kind. This relationship is based on a rigorous asymptotic expansion formula for a thin inhomogeneity with a smooth supporting curve. Various numerical simulation results are presented in order to support the identified structure of MUSIC. Although a priori information on the target is needed, we suggest a minimal condition on the range of incident and observation directions for applying MUSIC in the limited-view problem.
Calculating broad neutron resonances in a cut-off Woods-Saxon potential
NASA Astrophysics Data System (ADS)
Baran, Á.; Noszály, Cs.; Salamon, P.; Vertse, T.
2015-07-01
In a cut-off Woods-Saxon (CWS) potential with realistic depth, S-matrix poles lying far from the imaginary wave-number axis form a sequence in which the distances between consecutive resonances are inversely proportional to the cut-off radius, an unphysical parameter. Other poles lying closer to the imaginary wave-number axis may have trajectories with irregular shapes as the depth of the potential increases. Poles lying close together repel each other, and their repulsion is responsible for the changes in the directions of the corresponding trajectories. The repulsion may cause certain resonances to become antibound and then resonances again when they collide on the imaginary axis. The interaction is extremely sensitive to the cut-off radius value, which is an apparent handicap of the CWS potential.
Application of kernel method in fluorescence molecular tomography
NASA Astrophysics Data System (ADS)
Zhao, Yue; Baikejiang, Reheman; Li, Changqing
2017-02-01
Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem. Anatomical guidance can make FMT reconstruction more efficient and robust. We have developed a kernel method to introduce anatomical guidance into FMT robustly and easily. The kernel method comes from machine learning for pattern analysis and is an efficient way to represent anatomical features. For the finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. Then the fluorophore concentration at each node is represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we obtain a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process. We convert the FMT reconstruction problem into a kernel coefficient reconstruction problem. The desired fluorophore concentration at each node can then be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance is obtained directly from the anatomical image and is included in the forward modeling. One of the advantages is that we do not need to segment the anatomical image into targets and background.
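The kernelized forward model can be sketched as follows, with a hypothetical one-dimensional anatomical feature standing in for the micro-CT image and a least-squares solve standing in for the iterative reconstruction:

```python
import numpy as np

rng = np.random.default_rng(6)
n_nodes, n_meas = 60, 30

# Anatomical feature per node (e.g., a micro-CT intensity), used to build
# a Gaussian kernel matrix K: nodes with similar anatomy are coupled.
feat = np.concatenate([np.zeros(30), np.ones(30)])   # two "tissue types"
K = np.exp(-(feat[:, None] - feat[None, :]) ** 2 / 0.5)

A = rng.random((n_meas, n_nodes))          # stand-in sensitivity matrix
x_true = np.concatenate([np.zeros(30), np.full(30, 2.0)])  # target in tissue 2
y = A @ x_true                             # simulated measurements

# Kernelized forward model: y = (A K) alpha, with concentration x = K alpha.
AK = A @ K                                 # new system matrix
alpha, *_ = np.linalg.lstsq(AK, y, rcond=None)   # kernel coefficients
x_rec = K @ alpha                          # recovered concentration per node
```

Because the kernel couples anatomically similar nodes, the reconstruction is effectively regularized toward images consistent with the anatomy, without segmenting the image into targets and background.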
NASA Technical Reports Server (NTRS)
Mach, D. M.; Koshak, W. J.
2007-01-01
A matrix calibration procedure has been developed that uniquely relates the electric fields measured at the aircraft with the external vector electric field and net aircraft charge. The calibration method can be generalized to any reasonable combination of electric field measurements and aircraft. A calibration matrix is determined for each aircraft that represents the individual instrument responses to the external electric field. The aircraft geometry and configuration of field mills (FMs) uniquely define the matrix. The matrix can then be inverted to determine the external electric field and net aircraft charge from the FM outputs. A distinct advantage of the method is that if one or more FMs need to be eliminated or deemphasized (e.g., due to a malfunction), it is a simple matter to reinvert the matrix without the malfunctioning FMs. To demonstrate the calibration technique, data are presented from several aircraft programs (ER-2, DC-8, Altus, and Citation).
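The inversion and re-inversion steps can be sketched with a hypothetical calibration matrix. The least-squares inverse recovers the state as long as the remaining mills still over-determine the three field components plus charge:

```python
import numpy as np

rng = np.random.default_rng(7)
n_mills = 6

# Hypothetical calibration matrix: each row maps (Ex, Ey, Ez, q) to one
# field mill output; in practice it is determined per aircraft.
C = rng.standard_normal((n_mills, 4))

state_true = np.array([10.0, -3.0, 25.0, 1.5])   # external field + net charge
m = C @ state_true                               # noise-free mill outputs

# Recover the field and charge with the pseudo-inverse of the calibration matrix
state = np.linalg.pinv(C) @ m

# If one mill malfunctions, simply delete its row and re-invert:
keep = [0, 1, 2, 4, 5]                           # drop mill 3
state_5 = np.linalg.pinv(C[keep]) @ m[keep]
```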
NASA Technical Reports Server (NTRS)
Jewell, Jeffrey B.; Raymond, C.; Smrekar, S.; Millbury, C.
2004-01-01
This viewgraph presentation reviews a Bayesian approach to the inversion of gravity and magnetic data with specific application to the Ismenius Area of Mars. Many inverse problems encountered in geophysics and planetary science are well known to be non-unique (e.g., inversion of gravity data for the density structure of a body). In hopes of reducing the non-uniqueness of solutions, there has been interest in the joint analysis of data. An example is the joint inversion of gravity and magnetic data, with the assumption that the same physical anomalies generate both the observed magnetic and gravitational anomalies. In this talk, we formulate the joint analysis of different types of data in a Bayesian framework and apply the formalism to the inference of the density and remanent magnetization structure for a local region in the Ismenius area of Mars. The Bayesian approach allows prior information or constraints on the solutions to be incorporated in the inversion, with the "best" solutions those whose forward predictions most closely match the data while remaining consistent with the assumed constraints. The application of this framework to the inversion of gravity and magnetic data on Mars reveals two typical challenges: the forward predictions of the data have a linear dependence on some of the quantities of interest, and a non-linear dependence on others (termed the "linear" and "non-linear" variables, respectively). For observations with Gaussian noise, a Bayesian approach to inversion for the "linear" variables reduces to a linear filtering problem, with an explicitly computable "error" matrix. However, for models whose forward predictions have non-linear dependencies, inference is no longer given by such a simple linear problem, and moreover, the uncertainty in the solution is no longer completely specified by a computable "error matrix".
It is therefore important to develop methods for sampling from the full Bayesian posterior to provide a complete and statistically consistent picture of model uncertainty, and what has been learned from observations. We will discuss advanced numerical techniques, including Monte Carlo Markov
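For the linear variables under Gaussian noise, the Bayesian update is an explicit linear filter with a computable error matrix; a minimal sketch with a toy forward operator and Gaussian prior (all sizes and covariances below are assumptions for illustration, not the Mars model):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))      # toy linear forward operator
x_true = rng.normal(size=5)
sigma = 0.1
d = A @ x_true + sigma * rng.normal(size=20)

Cn_inv = np.eye(20) / sigma**2    # inverse noise covariance
Cp_inv = np.eye(5) * 1e-4         # weak Gaussian prior precision

# Posterior precision, and the explicitly computable "error" matrix
precision = A.T @ Cn_inv @ A + Cp_inv
error_matrix = np.linalg.inv(precision)

# Linear-filtering solution: the posterior mean
x_map = error_matrix @ (A.T @ Cn_inv @ d)
```

The error matrix is the posterior covariance, so for linear variables the full uncertainty is characterized without any sampling; the sampling machinery discussed above is needed only for the non-linear variables.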
Masuda, Y; Misztal, I; Tsuruta, S; Legarra, A; Aguilar, I; Lourenco, D A L; Fragomeni, B O; Lawlor, T J
2016-03-01
The objectives of this study were to develop and evaluate an efficient implementation for computing the inverse of the genomic relationship matrix with a recursion algorithm, called the algorithm for proven and young (APY), in single-step genomic BLUP. We validated genomic predictions for young bulls with more than 500,000 genotyped animals for final score in US Holsteins. Phenotypic data included 11,626,576 final scores on 7,093,380 US Holstein cows, and genotypes were available for 569,404 animals. Daughter deviations for young bulls with no classified daughters in 2009, but at least 30 classified daughters in 2014, were computed using all the phenotypic data. Genomic predictions for the same bulls were calculated with single-step genomic BLUP using phenotypes up to 2009. We calculated the inverse of the genomic relationship matrix, G_APY^(-1), based on a direct inversion of the genomic relationship matrix for a small subset of genotyped animals (core animals) and extended that information to noncore animals by recursion. We tested several sets of core animals including 9,406 bulls with at least 1 classified daughter; 9,406 bulls and 1,052 classified dams of bulls; 9,406 bulls and 7,422 classified cows; and random samples of 5,000 to 30,000 animals. Validation reliability was assessed by the coefficient of determination from regression of daughter deviation on genomic predictions for the predicted young bulls. The reliabilities were 0.39 with 5,000 randomly chosen core animals, 0.45 with the 9,406 bulls and 7,422 cows as core animals, and 0.44 with the remaining sets. With phenotypes truncated in 2009 and the preconditioned conjugate gradient to solve the mixed model equations, the number of rounds to convergence for core animals defined by bulls was 1,343; defined by bulls and cows, 2,066; and defined by 10,000 random animals, at most 1,629. With complete phenotype data, the number of rounds decreased to 858, 1,299, and at most 1,092, respectively.
Setting up G_APY^(-1) for 569,404 genotyped animals with 10,000 core animals took 1.3 h and 57 GB of memory. The validation reliability with APY reaches a plateau when the number of core animals is at least 10,000. Predictions with APY show little difference in reliability among definitions of core animals. Single-step genomic BLUP with APY is applicable to millions of genotyped animals. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
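The core/noncore recursion behind G_APY^(-1) can be sketched on a toy symmetric positive-definite matrix. With a single noncore animal the diagonal approximation of the noncore block is exact, so the sketch recovers the true inverse; the matrix and partition sizes are illustrative only, not the published implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_core, n_non = 5, 1
X = rng.normal(size=(n_core + n_non, 8))
G = X @ X.T + 0.01 * np.eye(n_core + n_non)    # toy SPD "genomic" matrix

c = slice(0, n_core)
n = slice(n_core, n_core + n_non)
Gcc_inv = np.linalg.inv(G[c, c])               # direct inversion of core block
Pnc = G[n, c] @ Gcc_inv                        # recursion coefficients
# Diagonal variances for noncore animals (APY keeps only the diagonal)
Mnn = np.diag(np.diag(G[n, n] - Pnc @ G[c, n]))
Mnn_inv = np.linalg.inv(Mnn)

# Assemble the APY inverse blockwise
N = n_core + n_non
Gapy_inv = np.zeros((N, N))
Gapy_inv[c, c] = Gcc_inv + Pnc.T @ Mnn_inv @ Pnc
Gapy_inv[c, n] = -Pnc.T @ Mnn_inv
Gapy_inv[n, c] = -Mnn_inv @ Pnc
Gapy_inv[n, n] = Mnn_inv
```

The payoff at scale is that only the core block (e.g., 10,000 animals) is inverted directly, while the noncore part costs a diagonal inverse plus sparse recursion terms.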
NASA Astrophysics Data System (ADS)
Zhao, G.; Liu, J.; Chen, B.; Guo, R.; Chen, L.
2017-12-01
Forward modeling of gravitational fields at large scale requires considering the curvature of the Earth and evaluating Newton's volume integral in spherical coordinates. To obtain fast and accurate gravitational effects for subsurface structures, the subsurface mass distribution is usually discretized into small spherical prisms (called tesseroids), whose gravity fields are generally calculated numerically. One of the commonly used numerical methods is 3D Gauss-Legendre quadrature (GLQ). However, traditional GLQ integration suffers from low computational efficiency and relatively poor accuracy when the observation surface is close to the source region. We developed a fast and high-accuracy 3D GLQ integration based on the equivalence of kernel matrices, adaptive discretization, and parallelization using OpenMP. The kernel-matrix equivalence strategy increases efficiency and reduces memory consumption by calculating and storing the repeated elements of each kernel matrix only once. The adaptive discretization strategy is used to improve accuracy. Numerical investigations show that the execution time of the proposed method is reduced by two orders of magnitude compared with the traditional method without these optimizations. High-accuracy results are also guaranteed no matter how close the computation points are to the source region. In addition, the algorithm dramatically reduces the memory requirement, by a factor of N compared with the traditional method, where N is the number of discretizations of the source region in the longitudinal direction. This makes large-scale gravity forward modeling and inversion with a fine discretization possible.
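A minimal 3D GLQ integrator is sketched below over a Cartesian box rather than a true tesseroid in spherical coordinates, and without the kernel-equivalence or adaptive-discretization optimizations; the observation point and node count are toy choices.

```python
import numpy as np

def glq_3d(f, bounds, n=4):
    """Integrate f(x, y, z) over a box with 3D Gauss-Legendre quadrature."""
    (xa, xb), (ya, yb), (za, zb) = bounds
    t, w = np.polynomial.legendre.leggauss(n)      # nodes/weights on [-1, 1]
    # Map nodes to each interval
    x = 0.5 * (xb - xa) * t + 0.5 * (xb + xa)
    y = 0.5 * (yb - ya) * t + 0.5 * (yb + ya)
    z = 0.5 * (zb - za) * t + 0.5 * (zb + za)
    J = 0.125 * (xb - xa) * (yb - ya) * (zb - za)  # Jacobian of the mapping
    X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
    W = w[:, None, None] * w[None, :, None] * w[None, None, :]
    return J * np.sum(W * f(X, Y, Z))

# Newton-kernel-like integrand 1/r, with a far observation point so the
# integrand is smooth (the near-field case is where adaptivity is needed)
obs = np.array([10.0, 0.0, 0.0])
val = glq_3d(lambda x, y, z: 1.0 / np.sqrt((x - obs[0])**2 + y**2 + z**2),
             [(0, 1), (0, 1), (0, 1)], n=6)
```

When the observation point approaches the box, the 1/r kernel varies sharply across it and a fixed node count loses accuracy, which is exactly the regime the adaptive discretization addresses.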
Recursive inverse kinematics for robot arms via Kalman filtering and Bryson-Frazier smoothing
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Scheid, R. E., Jr.
1987-01-01
This paper applies linear filtering and smoothing theory to solve recursively the inverse kinematics problem for serial multilink manipulators. This problem is to find a set of joint angles that achieve a prescribed tip position and/or orientation. A widely applicable numerical search solution is presented. The approach finds the minimum of a generalized distance between the desired and the actual manipulator tip position and/or orientation. Both a first-order steepest-descent gradient search and a second-order Newton-Raphson search are developed. The optimal relaxation factor required for the steepest descent method is computed recursively using an outward/inward procedure similar to those used typically for recursive inverse dynamics calculations. The second-order search requires evaluation of a gradient and an approximate Hessian. A Gauss-Markov approach is used to approximate the Hessian matrix in terms of products of first-order derivatives. This matrix is inverted recursively using a two-stage process of inward Kalman filtering followed by outward smoothing. This two-stage process is analogous to that recently developed by the author to solve by means of spatial filtering and smoothing the forward dynamics problem for serial manipulators.
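The Gauss-Markov approximation of the Hessian by products of first-order derivatives (H ~ J^T J) and the second-order search can be illustrated on a planar two-link arm; this toy stand-in omits the paper's recursive filtered inversion, and the link lengths, target, and initial guess are assumptions.

```python
import numpy as np

L1, L2 = 1.0, 1.0   # link lengths (illustrative two-link planar arm)

def tip(q):
    """Forward kinematics: tip position for joint angles q."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

target = np.array([1.2, 0.8])
q = np.array([0.0, 1.2])          # initial guess near the elbow-up solution
for _ in range(20):
    r = tip(q) - target            # residual of the generalized distance
    J = jacobian(q)
    H = J.T @ J                    # Gauss-Markov (first-order) Hessian approx.
    q = q - np.linalg.solve(H + 1e-9 * np.eye(2), J.T @ r)
```

The small diagonal shift guards against singular configurations; the paper's contribution is computing the equivalent of the H-inverse step recursively by Kalman filtering and smoothing instead of a dense solve.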
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.; Kuvshinov, Alexey V.
2016-05-01
This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied the state-of-the-art stochastic optimization algorithm called Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem by using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
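The generalized Gaussian likelihood that unifies arbitrary residual norms can be written compactly; shape p = 2 recovers the Gaussian and p = 1 the Laplacian (L1) case. The function follows the standard generalized-Gaussian density, not a formula specific to this paper.

```python
import numpy as np
from math import gamma, log

def gen_gaussian_loglik(residuals, scale, p):
    """Log-likelihood of i.i.d. generalized-Gaussian residuals.

    Density: p / (2 * scale * Gamma(1/p)) * exp(-(|r| / scale)**p),
    so the misfit term is simply the p-norm of the scaled residuals.
    """
    n = residuals.size
    norm_const = n * (log(p) - log(2 * scale) - log(gamma(1.0 / p)))
    return norm_const - np.sum((np.abs(residuals) / scale) ** p)
```

Swapping p changes only the misfit exponent, which is what lets one likelihood formulation drive both L2-style and sparsity-promoting inversions.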
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lin; Dai, Zhenxue; Gong, Huili
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
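The analytical transition-probability model can be sketched as a continuous-lag Markov chain: a rate matrix built from mean lengths and volumetric proportions, exponentiated to give the transition probability matrix T(h) at lag h. Distributing the off-diagonal rates by the proportions of the other facies is a common simplifying assumption, not the fitted multi-zone model of the study, and the three-facies numbers are toy values.

```python
import numpy as np
from scipy.linalg import expm

# Toy three-facies model: mean lengths (m) and volumetric proportions
mean_len = np.array([5.0, 2.0, 3.0])
prop = np.array([0.5, 0.2, 0.3])

# Continuous-lag Markov chain rate matrix R: diagonal -1/mean_len, and
# off-diagonal rates distributed by the proportions of the other facies
K = 3
R = np.zeros((K, K))
for k in range(K):
    R[k, k] = -1.0 / mean_len[k]
    others = [j for j in range(K) if j != k]
    w = prop[others] / prop[others].sum()
    R[k, others] = w / mean_len[k]

T = lambda h: expm(R * h)   # transition probability matrix at lag h
```

Each row of T(h) is a probability distribution over facies at distance h given the facies at the origin, which is exactly what the indicator simulation consumes.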
Efficient hierarchical trans-dimensional Bayesian inversion of magnetotelluric data
NASA Astrophysics Data System (ADS)
Xiang, Enming; Guo, Rongwen; Dosso, Stan E.; Liu, Jianxin; Dong, Hao; Ren, Zhengyong
2018-06-01
This paper develops an efficient hierarchical trans-dimensional (trans-D) Bayesian algorithm to invert magnetotelluric (MT) data for subsurface geoelectrical structure, with unknown geophysical model parameterization (the number of conductivity-layer interfaces) and data-error models parameterized by an auto-regressive (AR) process to account for potential error correlations. The reversible-jump Markov-chain Monte Carlo algorithm, which adds/removes interfaces and AR parameters in birth/death steps, is applied to sample the trans-D posterior probability density for model parameterization, model parameters, error variance and AR parameters, accounting for the uncertainties of model dimension and data-error statistics in the uncertainty estimates of the conductivity profile. To provide efficient sampling over the multiple subspaces of different dimensions, advanced proposal schemes are applied. Parameter perturbations are carried out in principal-component space, defined by eigen-decomposition of the unit-lag model covariance matrix, to minimize the effect of inter-parameter correlations and provide effective perturbation directions and length scales. Parameters of new layers in birth steps are proposed from the prior, instead of focused distributions centred at existing values, to improve birth acceptance rates. Parallel tempering, based on a series of parallel interacting Markov chains with successively relaxed likelihoods, is applied to improve chain mixing over model dimensions. The trans-D inversion is applied in a simulation study to examine the resolution of model structure according to the data information content. The inversion is also applied to a measured MT data set from south-central Australia.
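The principal-component proposal scheme can be sketched as follows; the sample covariance below is a stand-in for the unit-lag model covariance matrix accumulated along the chain, and the anisotropic scaling of the toy samples is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy stand-in for models accumulated along the chain, with strongly
# unequal parameter scales (so inter-parameter structure matters)
samples = rng.normal(size=(500, 4)) @ np.diag([3.0, 1.0, 0.5, 0.1])
C = np.cov(samples, rowvar=False)

# Eigen-decomposition of the model covariance defines the PC space
evals, evecs = np.linalg.eigh(C)

def perturb(m, step=1.0):
    """Perturb model m along one random principal direction, with the
    eigenvalue setting an effective length scale for that direction."""
    i = rng.integers(len(evals))
    return m + step * np.sqrt(evals[i]) * rng.normal() * evecs[:, i]
```

Because each proposal moves along an eigenvector with a matched step size, correlated parameters are perturbed jointly instead of one coordinate at a time, which is what raises acceptance rates.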
Variational methods for direct/inverse problems of atmospheric dynamics and chemistry
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena
2013-04-01
We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. It is important that accurate matching of numerical schemes be provided along the chain of objects: direct/adjoint problems - sensitivity relations - inverse problems, including assimilation of all available measurement data. To solve these problems we have developed a new enhanced set of cost-effective algorithms. The matched description of the multi-scale processes is provided by a specific choice of the variational principle functionals for the whole set of integrated models. All functionals of the variational principle are then approximated in space and time by splitting and decomposition methods. This approach allows us, for example, to consider the space-time problems of atmospheric chemistry separately, within decomposition schemes for the integral identity sum analogs of the variational principle at each time step and in each 3D finite volume. To enhance efficiency, the set of chemical reactions is divided into subsets associated with the production and destruction operators. The idea of Euler's integrating factors is then applied within the framework of the local adjoint problem technique [1]-[3]. The analytical solutions of these adjoint problems play the role of integrating factors for the differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed into an equivalent system of integral equations. As a result we avoid the construction and inversion of preconditioning operators containing the Jacobian matrices which arise in traditional implicit schemes for ODE solution. This is the main advantage of our schemes. At the same time step, but at different stages of the "global" splitting scheme, the system of atmospheric dynamics equations is solved.
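For a scalar production-destruction balance dc/dt = P - L*c, the integrating-factor update is exact and needs no Jacobian construction or inversion; a toy stiff example is sketched below (P, L, and the step size are illustrative, and the real models apply this per species within the splitting scheme).

```python
import numpy as np

def integrating_factor_step(c, P, L, dt):
    """Exact one-step update for dc/dt = P - L*c via the integrating
    factor exp(L*t); stable for any dt when L > 0."""
    e = np.exp(-L * dt)
    return c * e + (P / L) * (1.0 - e)

# Relaxation toward the equilibrium P/L = 2.0, with steps far larger
# than an explicit scheme could take for stiff destruction rates
c = 10.0
for _ in range(50):
    c = integrating_factor_step(c, P=4.0, L=2.0, dt=0.5)
```

Since the update reproduces the analytic solution c(t) = P/L + (c0 - P/L) * exp(-L*t) at each step, positivity of the concentration is preserved automatically whenever P >= 0 and c0 >= 0.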
For the convection-diffusion equations for all state functions in the integrated models we have developed monotone and stable discrete-analytical numerical schemes [1]-[3] conserving the positivity of the chemical substance concentrations and possessing the energy- and mass-balance properties postulated in the general variational principle for integrated models. All algorithms for the solution of transport, diffusion and transformation problems are direct (without iterations). The work is partially supported by Program No. 4 of the Presidium of RAS and Program No. 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187, and by Integration projects No. 8 and 35 of SD RAS. Our studies are in line with the goals of COST Action ES1004. References: Penenko V., Tsvetova E. Discrete-analytical methods for the implementation of variational principles in environmental applications // Journal of Computational and Applied Mathematics, 2009, v. 226, 319-330. Penenko A.V. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods // Numerical Analysis and Applications, 2012, v. 5, pp. 326-341. Penenko V., Tsvetova E. Variational methods for constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013 (in press).
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua
2016-07-01
On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform (DFT) measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on a DFT measurement matrix is deduced theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix, and the reconstruction process and reconstruction error are analyzed. On this basis, simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the DFT matrix is the same as that of the random measurement matrix, the PSNRs of the images reconstructed by the FGI and PGI algorithms are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, while the PSNRs for the PGI and CGI algorithms decrease sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter, realizing reconstruction denoising with a higher denoising capability than the CGI algorithm. The FGI algorithm can thus improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
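The pseudo-inverse reconstruction with a DFT measurement matrix can be sketched in one dimension; with full sampling the pseudo-inverse coincides with the exact inverse, so the toy object below is recovered exactly (the object and sizes are illustrative, not the simulated images of the paper).

```python
import numpy as np

n = 16
obj = np.zeros(n)
obj[4:8] = 1.0                              # toy 1-D "object"

# DFT measurement matrix (full sampling, so inversion is exact)
k = np.arange(n)
Phi = np.exp(-2j * np.pi * np.outer(k, k) / n)

y = Phi @ obj                               # bucket measurements
recon = (np.linalg.pinv(Phi) @ y).real      # pseudo-inverse reconstruction
```

Undersampling corresponds to keeping only some rows of Phi, in which case the pseudo-inverse gives the minimum-norm consistent image and the reconstruction error behavior discussed above appears.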
Polynomial compensation, inversion, and approximation of discrete time linear systems
NASA Technical Reports Server (NTRS)
Baram, Yoram
1987-01-01
The least-squares transformation of a discrete-time multivariable linear system into a desired one by convolving the first with a polynomial system yields optimal polynomial solutions to the problems of system compensation, inversion, and approximation. The polynomial coefficients are obtained from the solution to a so-called normal linear matrix equation, whose coefficients are shown to be the weighting patterns of certain linear systems. These, in turn, can be used in the recursive solution of the normal equation.
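A least-squares polynomial compensator is obtained from exactly such a normal linear matrix equation built on the system's weighting pattern; a minimal sketch with an illustrative impulse response and a delta as the desired response, so the compensator approximates an inverse (the recursive solution of the paper is not reproduced).

```python
import numpy as np
from scipy.linalg import toeplitz

# Weighting pattern (impulse response) of the given system, and the
# desired response after compensation (values are illustrative)
h = np.array([1.0, 0.5, 0.25])
d = np.zeros(6)
d[0] = 1.0                         # desired: a delta, i.e. an inverse

m = 4                              # order of the polynomial compensator
# Convolution matrix: (H @ g)[k] = sum_j h[k - j] * g[j]
col = np.concatenate([h, np.zeros(m - 1)])
H = toeplitz(col, np.zeros(m))

# Normal (linear matrix) equation H^T H g = H^T d
g = np.linalg.solve(H.T @ H, H.T @ d)
```

Replacing d with the impulse response of a desired model instead of a delta turns the same normal equation into the compensation or approximation problem.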
Spatial operator approach to flexible multibody system dynamics and control
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1991-01-01
The inverse and forward dynamics problems for flexible multibody systems were solved using the techniques of spatially recursive Kalman filtering and smoothing. These algorithms are easily developed using a set of identities associated with mass matrix factorization and inversion. These identities are easily derived using the spatial operator algebra developed by the author. Current work is aimed at computational experiments with the described algorithms and at modelling for control design of limber manipulator systems. It is also aimed at handling and manipulation of flexible objects.
NASA Astrophysics Data System (ADS)
Lonchakov, A. T.
2011-04-01
A negative paramagnetic contribution to the dynamic elastic moduli is identified in A^(II)B^(VI):3d wide band-gap compounds for the first time. It appears as a paramagnetic elastic, or, briefly, paraelastic, susceptibility. These compounds are found to have a linear temperature dependence for the inverse paraelastic susceptibility. This is explained by a contribution from the diagonal matrix elements of the orbit-lattice interaction operators in the energy of the spin-orbital states of the 3d-ion as a function of applied stress (by analogy with the Curie contribution to the magnetic susceptibility). The inverse paraelastic susceptibility of A^(II)B^(VI) crystals containing non-Kramers 3d-ions is found to deviate from linearity with decreasing temperature and reaches saturation. This effect is explained by a contribution from nondiagonal matrix elements (analogous to the well known van Vleck contribution to the magnetic susceptibility of paramagnets).
A fast time-difference inverse solver for 3D EIT with application to lung imaging.
Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut
2016-08-01
A class of sparse optimization techniques that require only matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention in the past decade for dealing with large-scale inverse problems. This study tailors the application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) method to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce, since the large number of degrees of freedom of the problem greatly increases storage space and reconstruction time. This study shows the great potential of GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
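The matrix-vector-product-only structure can be illustrated with a soft-thresholding proximal-gradient sketch in the same spirit as GPSR; note this is the simpler ISTA iteration, not the GPSR algorithm itself, and the problem sizes, sparsity pattern, step, and regularization weight are toy choices.

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.normal(size=(40, 100)) / np.sqrt(40)    # toy sensitivity matrix
x_true = np.zeros(100)
x_true[[5, 30, 70]] = [1.0, -0.5, 0.8]          # sparse conductivity change
b = A @ x_true                                   # time-difference data

matvec = lambda v: A @ v        # in a large 3D EIT problem these would be
rmatvec = lambda v: A.T @ v     # operators, never explicitly stored matrices

lam, step = 0.01, 0.1
x = np.zeros(100)
for _ in range(500):
    grad = rmatvec(matvec(x) - b)                             # only matvecs
    x = x - step * grad
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
```

Because each iteration touches the forward operator only through matvec/rmatvec, memory scales with the number of voxels rather than the size of a stored Jacobian.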
Bhanot, Gyan [Princeton, NJ; Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY
2009-09-08
Class network routing is implemented in a network such as a computer network comprising a plurality of parallel compute processors at nodes thereof. Class network routing allows a compute processor to broadcast a message to a range (one or more) of other compute processors in the computer network, such as processors in a column or a row. Normally this type of operation requires a separate message to be sent to each processor. With class network routing pursuant to the invention, a single message is sufficient, which generally reduces the total number of messages in the network as well as the latency to do a broadcast. Class network routing is also applied to dense matrix inversion algorithms on distributed memory parallel supercomputers with hardware class function (multicast) capability. This is achieved by exploiting the fact that the communication patterns of dense matrix inversion can be served by hardware class functions, which results in faster execution times.
Reconstructing Images in Astrophysics, an Inverse Problem Point of View
NASA Astrophysics Data System (ADS)
Theys, Céline; Aime, Claude
2016-04-01
After a short introduction, a first section provides a brief tutorial to the physics of image formation and its detection in the presence of noises. The rest of the chapter focuses on the resolution of the inverse problem
Wieland, D C F; Krywka, C; Mick, E; Willumeit-Römer, R; Bader, R; Kluess, D
2015-10-01
In the present paper we have investigated the impact of electrostimulation on microstructural parameters of the major constituents of bone, hydroxyapatite and collagen. Therapeutic approaches exhibit an improved healing rate under electric fields. However, the underlying mechanism is not fully understood so far. In this context, one possible effect which could be responsible is the inverse piezoelectric effect at bone structures. Therefore, we have carried out scanning X-ray microdiffraction experiments, i.e. we recorded X-ray diffraction data with micrometer resolution using synchrotron radiation from trabecular bone samples, in order to investigate how the bone matrix reacts to an applied electric field. Different samples were investigated, in which the orientation of the collagen matrix differed with respect to the applied electric field. Our experiments aimed to determine whether the inverse piezoelectric effect could have a significant impact on the improved bone regeneration owing to electrostimulative therapy. Our data suggest that strain is in fact induced in bone by the collagen matrix via the inverse piezoelectric effect, which occurs in the presence of an adequately oriented electric field. The magnitude of the underlying strain is in a range where bone cells are able to detect it. In our study we report on the piezoelectric effect in bone, which was already discovered and explored on the macro scale in the 1950s. Clinical approaches successfully utilize electrostimulation to enhance bone healing, but the exact mechanisms taking place are still a matter of debate. We have measured the stress distribution with micron resolution in trabecular bone to determine the piezoelectrically induced stress. Our results show that the magnitude of the induced stress is large enough to be sensed by cells and could therefore be a trigger for bone remodeling and growth. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Method of multivariate spectral analysis
Keenan, Michael R.; Kotula, Paul G.
2004-01-06
A method of determining the properties of a sample from measured spectral data collected from the sample by performing a multivariate spectral analysis. The method can include: generating a two-dimensional matrix A containing measured spectral data; providing a weighted spectral data matrix D by performing a weighting operation on matrix A; factoring D into the product of two matrices, C and S^T, by performing a constrained alternating least-squares analysis of D = CS^T, where C is a concentration intensity matrix and S is a spectral shapes matrix; unweighting C and S by applying the inverse of the weighting used previously; and determining the properties of the sample by inspecting C and S. This method can be used to analyze X-ray spectral data generated by operating a Scanning Electron Microscope (SEM) with an attached Energy Dispersive Spectrometer (EDS).
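The weighting / alternating-least-squares / unweighting pipeline can be sketched on synthetic rank-2 data; the per-channel weighting and the clipping used to enforce non-negativity are simplifications of the patented procedure, and all sizes are toy choices.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic spectral data: 2 components, 50 pixels, 30 channels
C_true = np.abs(rng.normal(size=(50, 2)))
S_true = np.abs(rng.normal(size=(30, 2)))
A = C_true @ S_true.T

w = 1.0 / np.sqrt(A.mean(axis=0) + 1.0)    # toy per-channel weighting
D = A * w                                   # weighted data matrix

# Constrained (non-negative) alternating least squares for D ~ C @ S.T
C = np.abs(rng.normal(size=(50, 2)))
for _ in range(200):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)

S_un = S / w[:, None]                       # unweight the spectral shapes
```

After unweighting, the columns of S_un play the role of component spectra and the columns of C the corresponding concentration maps, which is what one would inspect to identify the sample's constituents.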
NASA Astrophysics Data System (ADS)
Mu, Tingkui; Bao, Donghao; Zhang, Chunmin; Chen, Zeyu; Song, Jionghui
2018-07-01
During the calibration of the system matrix of a Stokes polarimeter using reference polarization states (RPSs) and the pseudo-inverse estimation method, the measured intensities are usually corrupted by signal-independent additive Gaussian noise or signal-dependent Poisson shot noise, which degrades the precision of the estimated system matrix. In this paper, we present a paradigm for selecting RPSs to improve the precision of the estimated system matrix in the presence of both types of noise. The analytical solution for the precision of the system matrix estimated with the RPSs is derived. Experimental measurements from a general Stokes polarimeter show that an accurate system matrix is estimated with the optimal RPSs, which are generated using two rotating quarter-wave plates. The advantage of using optimal RPSs is a reduction in measurement time with high calibration precision.
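Pseudo-inverse estimation of the system matrix from RPSs can be sketched as follows; the four RPSs below merely span Stokes space and are not the optimized states derived in the paper, and the system matrix is a random toy, so noiseless recovery is exact.

```python
import numpy as np

rng = np.random.default_rng(5)
W_true = rng.normal(size=(4, 4))            # unknown system matrix (toy)

# Reference polarization states as columns (they must span Stokes space);
# these illustrative RPSs are not the paper's optimized set
S_ref = np.column_stack([[1, 1, 0, 0], [1, -1, 0, 0],
                         [1, 0, 1, 0], [1, 0, 0, 1]]).astype(float)

I_meas = W_true @ S_ref                     # noiseless calibration intensities
W_est = I_meas @ np.linalg.pinv(S_ref)      # pseudo-inverse estimation
```

With noisy intensities the same pseudo-inverse propagates the noise into W_est, and the conditioning of S_ref controls the amplification, which is precisely why the choice of RPSs matters.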
Angle dependence in slow photon photocatalysis using TiO2 inverse opals
NASA Astrophysics Data System (ADS)
Curti, Mariano; Zvitco, Gonzalo; Grela, María Alejandra; Mendive, Cecilia B.
2018-03-01
The slow photon effect was studied by means of the photocatalytic degradation of stearic acid over TiO2 inverse opals. The comparison of the degradation rates over inverse opals with those obtained over disordered structures at different irradiation angles showed that the irradiation at the blue edge of the stopband leads to the activation of the effect, evidenced by an improvement factor of 1.8 ± 0.6 in the reaction rate for irradiation at 40°. The rigorous coupled-wave analysis (RCWA) method was employed to confirm the source of the enhancement; simulated spectra showed an enhancement in the absorption of the TiO2 matrix that composes the inverse opal at a 40° irradiation angle, owing to an appropriate position of the stopband in relation to the absorption onset of TiO2.
A Hybrid Algorithm for Non-negative Matrix Factorization Based on Symmetric Information Divergence
Devarajan, Karthik; Ebrahimi, Nader; Soofi, Ehsan
2017-01-01
The objective of this paper is to provide a hybrid algorithm for non-negative matrix factorization based on a symmetric version of Kullback-Leibler divergence, known as intrinsic information. The convergence of the proposed algorithm is shown for several members of the exponential family such as the Gaussian, Poisson, gamma and inverse Gaussian models. The speed of this algorithm is examined and its usefulness is illustrated through some applied problems. PMID:28868206
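The paper's hybrid algorithm for the symmetric divergence is not reproduced here; as a reference point, the following is a sketch of the classical multiplicative updates for the one-sided Kullback-Leibler divergence, which decrease the divergence monotonically (data, rank, and iteration count are toy choices).

```python
import numpy as np

rng = np.random.default_rng(10)
V = np.abs(rng.normal(size=(20, 30))) + 0.1   # non-negative data matrix
r = 4
W = np.abs(rng.normal(size=(20, r)))
H = np.abs(rng.normal(size=(r, 30)))

def kl_div(V, WH):
    """One-sided (generalized) KL divergence D(V || WH)."""
    return np.sum(V * np.log(V / WH) - V + WH)

kl0 = kl_div(V, W @ H)
for _ in range(100):
    WH = W @ H
    H *= (W.T @ (V / WH)) / W.sum(axis=0)[:, None]   # multiplicative update
    WH = W @ H
    W *= ((V / WH) @ H.T) / H.sum(axis=1)[None, :]
kl1 = kl_div(V, W @ H)
```

Multiplicative updates preserve non-negativity automatically because each factor is rescaled by a ratio of non-negative quantities.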
Load cell having strain gauges of arbitrary location
Spletzer, Barry [Albuquerque, NM
2007-03-13
A load cell utilizes a plurality of strain gauges mounted upon the load cell body such that there are six independent load-strain relations. Load is determined by applying the inverse of a load-strain sensitivity matrix to a measured strain vector. The sensitivity matrix is determined by performing a multivariate regression technique on a set of known loads correlated to the resulting strains. Temperature compensation is achieved by configuring the strain gauges as co-located orthogonal pairs.
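The calibration-then-inversion procedure can be sketched with a toy 6-by-6 sensitivity matrix and noiseless calibration loads; a real calibration would regress noisy strain data and the multivariate regression would average out measurement error.

```python
import numpy as np

rng = np.random.default_rng(6)
S = rng.normal(size=(6, 6))         # load-strain sensitivity matrix (toy)

# Calibration: regress known loads against the resulting strains
loads = rng.normal(size=(6, 50))    # 50 known calibration load vectors
strains = S @ loads                 # measured strain vectors (noiseless here)
# Multivariate least-squares estimate of S from strains ~ S @ loads
S_est = strains @ loads.T @ np.linalg.inv(loads @ loads.T)

# Operation: recover an unknown load from a measured strain vector
load_true = rng.normal(size=6)
load_est = np.linalg.inv(S_est) @ (S @ load_true)
```

The six independent load-strain relations are what make S square and invertible; with more than six gauges the inverse would be replaced by a pseudo-inverse.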
Geoelectric Characterization of Thermal Water Aquifers Using 2.5D Inversion of VES Measurements
NASA Astrophysics Data System (ADS)
Gyulai, Á.; Szűcs, P.; Turai, E.; Baracza, M. K.; Fejes, Z.
2017-03-01
This paper presents a short theoretical summary of the series expansion-based 2.5D combined geoelectric weighted inversion (CGWI) method and highlights how the number of unknowns can be advantageously decreased owing to the simultaneous character of this inversion. 2.5D CGWI is an approximate inversion method for the determination of 3D structures, which uses the joint 2D forward modeling of dip- and strike-direction data. In the inversion procedure, Steiner's most frequent value method is applied to the automatic separation of dip- and strike-direction data and outliers. The workflow of the inversion and its practical application are presented in the study. For conventional vertical electrical sounding (VES) measurements, this method can determine the parameters of complex structures more accurately than single inversion. Field data show that the developed 2.5D CGWI can determine the optimal location for drilling an exploratory thermal water prospecting well. The novelty of this research is that the measured VES data in the dip and strike directions are jointly inverted by the 2.5D CGWI method.
Gianola, Daniel; Fariello, Maria I; Naya, Hugo; Schön, Chris-Carolin
2016-10-13
Standard genome-wide association studies (GWAS) scan for relationships between each of p molecular markers and a continuously distributed target trait. Typically, a marker-based matrix of genomic similarities among individuals (G) is constructed, to account more properly for the covariance structure in the linear regression model used. We show that the generalized least-squares estimator of the regression of phenotype on one or on m markers is invariant with respect to whether or not the marker(s) tested is (are) used for building G, provided variance components are unaffected by exclusion of such marker(s) from G. The result is arrived at by using a matrix expression such that one can find many inverses of genomic relationship, or of phenotypic covariance matrices, stemming from removing markers tested as fixed, while carrying out a single inversion. When eigenvectors of the genomic relationship matrix are used as regressors with fixed regression coefficients, e.g., to account for population stratification, their removal from G does matter. Removal of eigenvectors from G can have a noticeable effect on estimates of genomic and residual variances, so caution is needed. Concepts were illustrated using genomic data on 599 wheat inbred lines, with grain yield as target trait, and on close to 200 Arabidopsis thaliana accessions. Copyright © 2016 Gianola et al.
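The generalized least-squares estimator at the heart of the result has a compact form; the covariance V below is a toy stand-in for a genomic-relationship-based phenotypic covariance, and the marker matrix and effect sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 100, 3
X = rng.normal(size=(n, p))                  # tested marker genotypes
V = np.eye(n) + 0.5 * np.ones((n, n)) / n    # toy phenotypic covariance
y = X @ np.array([0.2, 0.0, -0.1]) + rng.normal(size=n)

# Generalized least-squares estimate of the marker effects:
# beta = (X' V^-1 X)^-1 X' V^-1 y
Vi = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
```

The invariance result concerns how V (built from G) changes when tested markers are excluded from G: the paper's matrix expression yields all the required V inverses from a single inversion rather than one per tested marker set.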
Design of Robust Adaptive Unbalance Response Controllers for Rotors with Magnetic Bearings
NASA Technical Reports Server (NTRS)
Knospe, Carl R.; Tamer, Samir M.; Fedigan, Stephen J.
1996-01-01
Experimental results have recently demonstrated that an adaptive open loop control strategy can be highly effective in the suppression of unbalance induced vibration on rotors supported in active magnetic bearings. This algorithm, however, relies upon a predetermined gain matrix. Typically, this matrix is determined by an optimal control formulation resulting in the choice of the pseudo-inverse of the nominal influence coefficient matrix as the gain matrix. This solution may result in problems with stability and performance robustness since the estimated influence coefficient matrix is not equal to the actual influence coefficient matrix. Recently, analysis tools have been developed to examine the robustness of this control algorithm with respect to structured uncertainty. Herein, these tools are extended to produce a design procedure for determining the adaptive law's gain matrix. The resulting control algorithm has a guaranteed convergence rate and steady state performance in spite of the uncertainty in the rotor system. Several examples are presented which demonstrate the effectiveness of this approach and its advantages over the standard optimal control formulation.
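The pseudo-inverse gain and the adaptive open-loop update can be sketched with complex synchronous vibration phasors; here the estimated influence coefficient matrix is taken equal to the true one, which is exactly the idealization whose robustness the paper analyzes, and all matrix values are toy choices.

```python
import numpy as np

rng = np.random.default_rng(9)
# Influence coefficient matrix: 4 sensors, 2 control planes (complex phasors)
T = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))
x0 = rng.normal(size=4) + 1j * rng.normal(size=4)   # open-loop unbalance vibration

T_est = T                       # nominal model (assumed exact in this sketch)
G = -np.linalg.pinv(T_est)      # pseudo-inverse gain matrix

u = np.zeros(2, dtype=complex)
for _ in range(5):
    x = x0 + T @ u              # measured synchronous vibration phasors
    u = u + G @ x               # adaptive open-loop update
```

With an exact model the iteration converges to the least-squares-optimal correction in one step; model error in T_est is what degrades the convergence rate and can destabilize the loop, motivating the robust gain design above.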
A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography
NASA Astrophysics Data System (ADS)
Sun, S.; Chen, C.; WANG, H.; Wang, Q.
2014-12-01
The self-constrained inversion method of potential fields uses a priori information self-extracted from the potential field data. Differing from external a priori information, the self-extracted information consists of parameters derived exclusively from analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography requires neither a priori information nor large inversion-matrix operations. Moreover, its results can describe sources clearly and in their entirety, even when their distribution is complex and irregular. We therefore use a priori information extracted from probability tomography results to constrain the inversion for physical properties. Magnetic anomaly data are taken as an example in this work. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ along their respective directions, and this characteristic is also present in their probability tomography results. We therefore combine the probability tomography results of ∂ΔΤ/∂x, ∂ΔΤ/∂y and ∂ΔΤ/∂z into a new result from which a priori information is extracted, and then incorporate this information into the model objective function as spatial weighting functions to invert for the final magnetic susceptibility. Synthetic magnetic examples inverted with and without a priori information extracted from the probability tomography results show that the former are more concentrated and resolve the source-body edges more sharply. The method is finally applied to field-measured ΔΤ data from an iron mine in China and performs well. References: Paoletti, V., Ialongo, S., Florio, G., Fedi, M. & Cella, F., 2013.
Self-constrained inversion of potential fields, Geophys. J. Int. This research is supported by the Fundamental Research Funds for the Institute of Geophysical and Geochemical Exploration, Chinese Academy of Geological Sciences (Grant Nos. WHS201210 and WHS201211).
Zhang, Huaguang; Feng, Tao; Yang, Guang-Hong; Liang, Hongjing
2015-07-01
In this paper, the inverse optimal approach is employed to design distributed consensus protocols that guarantee consensus and global optimality with respect to some quadratic performance indexes for identical linear systems on a directed graph. The inverse optimal theory is developed by introducing the notion of partial stability. As a result, the necessary and sufficient conditions for inverse optimality are proposed. By means of the developed inverse optimal theory, the necessary and sufficient conditions are established for globally optimal cooperative control problems on directed graphs. Basic optimal cooperative design procedures are given based on asymptotic properties of the resulting optimal distributed consensus protocols, and the multiagent systems can reach desired consensus performance (convergence rate and damping rate) asymptotically. Finally, two examples are given to illustrate the effectiveness of the proposed methods.
NASA Astrophysics Data System (ADS)
Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric
2013-04-01
Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio have caught the attention of several research groups. Indeed, the diversity and variability of methane sources induce high uncertainty on the present and future evolution of the CH4 budget. With the increase of measurement data available to constrain inversions (satellite data, high-frequency surface and tall-tower observations, FTIR spectrometry, ...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling convert directly into flux changes when perfect transport is assumed in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated within a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed with the PYVAR system. Only the CTM (and the meteorological drivers that drive it) used to create the pseudo-observations varies among inversions. Consequently, the comparison of the nine inverted methane flux estimates obtained for 2005 gives a good order-of-magnitude estimate of the impact of transport and modelling errors on the fluxes estimated with current and future networks.
It is shown that transport and modelling errors lead to a discrepancy of 27 TgCH4 per year at the global scale, representing 5% of the total methane emissions for 2005. At the continental scale, transport and modelling errors have larger impacts in proportion to the area of the regions, ranging from 36 TgCH4 in North America to 7 TgCH4 in Boreal Eurasia, with percentages ranging from 23% to 48%. Thus, the contribution of transport and modelling errors to the mismatch between measurements and simulated methane concentrations is large considering the present questions on the methane budget. Moreover, diagnostics of the error statistics included in our inversions have been computed. They show that the errors contained in the measurement-error covariance matrix are under-estimated in current inversions, suggesting that transport and modelling errors should be represented more properly in future inversions.
A direct-inverse method for transonic and separated flows about airfoils
NASA Technical Reports Server (NTRS)
Carlson, K. D.
1985-01-01
A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flow field about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.
A direct-inverse method for transonic and separated flows about airfoils
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1990-01-01
A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flow field about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.
Suspension parameter estimation in the frequency domain using a matrix inversion approach
NASA Astrophysics Data System (ADS)
Thite, A. N.; Banvidi, S.; Ibicek, T.; Bennett, L.
2011-12-01
The dynamic lumped-parameter models used to optimise the ride and handling of a vehicle require base values of the suspension parameters. These parameters are generally identified experimentally. The accuracy of the identified parameters can depend on the measurement noise and the validity of the model used. Existing publications on suspension parameter identification are generally based on the time domain and use a limited number of degrees of freedom. Further, the data used are either from a simulated 'experiment' or from a laboratory test on an idealised quarter- or half-car model. In this paper, a method is developed in the frequency domain which effectively accounts for the measurement noise. Additional dynamic constraining equations are incorporated, and the proposed formulation results in a matrix inversion approach. The nonlinearities in damping are, however, estimated using a time-domain approach. Full-scale 4-post rig test data of a vehicle are used. The variations in the results are discussed using the modal resonant behaviour. Further, a method is implemented to show how the results can be improved when the inverted matrix is ill-conditioned. The case study shows good agreement between the estimates based on the proposed frequency-domain approach and measurable physical parameters.
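A common remedy when the inverted matrix is ill-conditioned, in the spirit of the improvement mentioned above, is a truncated-SVD pseudo-inverse. This sketch uses a hypothetical 2x2 system, not the paper's suspension model, to contrast naive inversion with truncation of small singular values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ill-conditioned system: two nearly parallel columns.
H = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-8]])
p_true = np.array([2.0, -1.0])
f = H @ p_true + 1e-4 * rng.standard_normal(2)   # noisy measurements

# Naive inversion amplifies the noise enormously.
p_naive = np.linalg.solve(H, f)

# Truncated-SVD pseudo-inverse: discard singular values below a tolerance.
U, s, Vt = np.linalg.svd(H)
tol = 1e-6 * s[0]
s_inv = np.where(s > tol, 1.0 / s, 0.0)
p_tsvd = Vt.T @ (s_inv * (U.T @ f))

print(np.linalg.norm(p_naive), np.linalg.norm(p_tsvd))
```

The truncated solution trades a small bias for a drastic reduction in noise amplification, which is the usual rationale for regularizing an ill-conditioned inversion.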
Ice Cores Dating With a New Inverse Method Taking Account of the Flow Modeling Errors
NASA Astrophysics Data System (ADS)
Lemieux-Dudon, B.; Parrenin, F.; Blayo, E.
2007-12-01
Deep ice cores extracted from Antarctica or Greenland record a wide range of past climatic events. In order to contribute to the understanding of the Quaternary climate system, the calculation of an accurate depth-age relationship is a crucial point. Up to now, ice chronologies for deep ice cores estimated with inverse approaches have been based on quite simplified ice-flow models that fail to reproduce flow irregularities and consequently fail to respect the full set of available age markers. We describe in this paper a new inverse method that takes the model uncertainty into account in order to circumvent the restrictions linked to the use of simplified flow models. This method uses first guesses of two physical flow quantities, the ice thinning function and the accumulation rate, and then identifies correction functions for both. We highlight two major benefits of this new method: first, the ability to respect a large set of observations and, as a consequence, the feasibility of estimating a synchronized common ice chronology for several cores at the same time. This inverse approach relies on a Bayesian framework. To respect the positivity constraint on the correction functions sought, we assume lognormal probability distributions for the background errors and also for one particular set of the observation errors. We test this new inversion method on three cores simultaneously (the two EPICA cores, DC and DML, and the Vostok core) and assimilate more than 150 observations (e.g., age markers, stratigraphic links, ...). We analyze the sensitivity of the solution with respect to the background information, especially the prior error covariance matrix. Confidence intervals based on the posterior covariance matrix calculation are estimated on the correction functions and, for the first time, on the overall output chronologies.
Sparsistency and Rates of Convergence in Large Covariance Matrix Estimation.
Lam, Clifford; Fan, Jianqing
2009-01-01
This paper studies the sparsistency and rates of convergence for estimating sparse covariance and precision matrices based on penalized likelihood with nonconvex penalty functions. Here, sparsistency refers to the property that all parameters that are zero are actually estimated as zero with probability tending to one. Depending on the application, the sparsity prior may concern the covariance matrix, its inverse or its Cholesky decomposition. We study these three sparsity exploration problems under a unified framework with a general penalty function. We show that the rates of convergence for these problems under the Frobenius norm are of order (s_n log p_n/n)^(1/2), where s_n is the number of nonzero elements, p_n is the size of the covariance matrix and n is the sample size. This explicitly spells out that the contribution of high dimensionality is merely a logarithmic factor. The conditions on the rate with which the tuning parameter λ_n goes to 0 are made explicit and compared under different penalties. As a result, for the L_1 penalty, to guarantee sparsistency and the optimal rate of convergence, the number of nonzero elements should be small: s_n' = O(p_n) at most, among O(p_n^2) parameters, for estimating a sparse covariance or correlation matrix, a sparse precision or inverse correlation matrix or a sparse Cholesky factor, where s_n' is the number of nonzero off-diagonal entries. On the other hand, using the SCAD or hard-thresholding penalty functions, there is no such restriction.
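To make the sparsity idea concrete, the sketch below applies the closely related soft-thresholding estimator to a sample covariance matrix, with the threshold set at the λ_n ~ (log p_n/n)^(1/2) scale discussed above. This is an illustrative thresholding estimator, not the paper's penalized-likelihood estimator; the true covariance and dimensions are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse true covariance: tridiagonal (hypothetical example).
p, n = 10, 200
Sigma = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = np.cov(X, rowvar=False)        # sample covariance (dense, no exact zeros)

# Soft-threshold the off-diagonal entries with lambda ~ sqrt(log p / n).
lam = np.sqrt(np.log(p) / n)
off = S - np.diag(np.diag(S))
S_hat = np.diag(np.diag(S)) + np.sign(off) * np.maximum(np.abs(off) - lam, 0.0)

# Most entries that are truly zero are now estimated as exactly zero.
print(int(np.sum(S_hat == 0)), "exact zeros out of", p * p, "entries")
```

The exact zeros produced by thresholding are the finite-sample face of "sparsistency": zero parameters estimated as exactly zero, while the raw sample covariance has none.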
NASA Astrophysics Data System (ADS)
Clarke, A. P.; Vannucchi, P.; Ougier-Simonin, A.; Morgan, J. P.
2017-12-01
Subduction zone interface layers are often conceived to be heterogeneous, polyrheological zones analogous to exhumed mélanges. Mélanges typically contain mechanically strong blocks within a weaker matrix. However, our geomechanical study of the Osa Mélange, SW Costa Rica, shows that this mélange contains blocks of altered basalt which are now weaker in friction than their surrounding indurated volcanoclastic matrix. Triaxial deformation experiments were conducted on samples of both the altered basalt blocks and the indurated volcanoclastic matrix at confining pressures of 60 and 120 MPa. These revealed that the volcanoclastic matrix has a strength 7.5 times that of the altered basalt at 60 MPa and 4 times at 120 MPa, with the altered basalt experiencing multi-stage failure. The inverted strength relationship between weaker blocks and stronger matrix evolved during subduction of the mélange unit through dewatering, compaction and diagenesis of the matrix, and through cataclastic brecciation and hydrothermal alteration of the basalt blocks. During the evolution of this material, the matrix progressively indurated until its plastic yield stress became greater than the brittle yield stress of the blocks. At this point, the typical rheological relationship found within mélanges inverts, and mélange blocks can fail seismically as the weakest links along the subduction plate interface. The Osa Mélange is currently in the forearc of the erosive Middle America Trench and is being incorporated into the subduction zone interface at the updip limit of seismogenesis. The presence of altered basalt blocks acting as weak inclusions within this rock unit weakens the mélange as a whole rock mass. Seismic fractures can nucleate at or within these weak inclusions, and the size of a block may limit the size of the initial microseismic rock failure. However, when fractures are able to bridge across the matrix between blocks, significantly larger rupture areas may be possible.
While this mechanism is a promising candidate for the updip limit of the unusually shallow seismogenic zone beneath Osa, it remains to be seen whether analogous evolutionary strength-inversions control the updip limit of other subduction seismogenic zones.
Kim, Hye-Na; Yoo, Haemin; Moon, Jun Hyuk
2013-05-21
We demonstrated the preparation of graphene-embedded 3D inverse opal electrodes for use in DSSCs. The graphene was incorporated locally into the top layers of the inverse opal structures and was embedded into the TiO2 matrix via post-treatment of the TiO2 precursors. DSSCs comprising the bare and 1-5 wt% graphene-incorporated TiO2 inverse opal electrodes were compared. We observed that the local arrangement of graphene sheets effectively enhanced electron transport without significantly reducing light harvesting by the dye molecules. A high efficiency of 7.5% was achieved in DSSCs prepared with the 3 wt% graphene-incorporated TiO2 inverse opal electrodes, constituting a 50% increase over the efficiencies of DSSCs prepared without graphene. The increase in efficiency was mainly attributed to an increase in J_SC, as determined by the photovoltaic parameters and the electrochemical impedance spectroscopy analysis.
The trust-region self-consistent field method in Kohn-Sham density-functional theory.
Thøgersen, Lea; Olsen, Jeppe; Köhn, Andreas; Jørgensen, Poul; Sałek, Paweł; Helgaker, Trygve
2005-08-15
The trust-region self-consistent field (TRSCF) method is extended to the optimization of the Kohn-Sham energy. In the TRSCF method, both the Roothaan-Hall step and the density-subspace minimization step are replaced by trust-region optimizations of local approximations to the Kohn-Sham energy, leading to a controlled, monotonic convergence towards the optimized energy. Previously the TRSCF method has been developed for optimization of the Hartree-Fock energy, which is a simple quadratic function in the density matrix. However, since the Kohn-Sham energy is a nonquadratic function of the density matrix, the local energy functions must be generalized for use with the Kohn-Sham model. Such a generalization, which contains the Hartree-Fock model as a special case, is presented here. For comparison, a rederivation of the popular direct inversion in the iterative subspace (DIIS) algorithm is performed, demonstrating that the DIIS method may be viewed as a quasi-Newton method, explaining its fast local convergence. In the global region the convergence behavior of DIIS is less predictable. The related energy DIIS technique is also discussed and shown to be inappropriate for the optimization of the Kohn-Sham energy.
On regularizing the MCTDH equations of motion
NASA Astrophysics Data System (ADS)
Meyer, Hans-Dieter; Wang, Haobin
2018-03-01
The Multiconfiguration Time-Dependent Hartree (MCTDH) approach leads to equations of motion (EOM) which become singular when there are unoccupied so-called single-particle functions (SPFs). Starting from a Hartree product, all SPFs, except the first one, are unoccupied initially. To solve the MCTDH-EOMs numerically, one therefore has to remove the singularity by a regularization procedure. Usually the inverse of a density matrix is regularized. Here we argue and show that regularizing the coefficient tensor, which in turn regularizes the density matrix as well, leads to an improved performance of the EOMs. The initially unoccupied SPFs are rotated faster into their "correct direction" in Hilbert space and the final results are less sensitive to the choice of the value of the regularization parameter. For a particular example (a spin-boson system studied with a transformed Hamiltonian), we could even show that only with the new regularization scheme could one obtain correct results. Finally, in Appendix A, a new integration scheme for the MCTDH-EOMs developed by Lubich and co-workers is discussed. It is argued that this scheme does not solve the problem of the unoccupied natural orbitals because this scheme ignores the latter and does not propagate them at all.
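The density-matrix regularization that the abstract says is "usually" applied can be sketched as follows. This shows the commonly cited scheme ρ_reg = ρ + ε·exp(−ρ/ε) applied to the eigenvalues (natural populations); it is the baseline the paper improves on, not the paper's new coefficient-tensor scheme, and the populations and ε below are illustrative.

```python
import numpy as np

eps = 1e-8                               # regularization parameter (illustrative)
rho = np.diag([0.9, 0.1, 0.0])           # natural populations; one SPF unoccupied

# Regularize the eigenvalues: w -> w + eps * exp(-w / eps). Occupied
# populations are essentially unchanged; zero populations are lifted to ~eps.
w, V = np.linalg.eigh(rho)
w_reg = w + eps * np.exp(-w / eps)
rho_reg_inv = (V * (1.0 / w_reg)) @ V.T  # now a well-defined inverse

print(np.linalg.cond(rho_reg_inv))
```

Without the regularization the inverse of ρ simply does not exist here, which is the singularity of the MCTDH equations of motion that the paper discusses.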
Tuning Fractures With Dynamic Data
NASA Astrophysics Data System (ADS)
Yao, Mengbi; Chang, Haibin; Li, Xiang; Zhang, Dongxiao
2018-02-01
Flow in fractured porous media is crucial for production of oil/gas reservoirs and exploitation of geothermal energy. Flow behaviors in such media are mainly dictated by the distribution of fractures. Measuring and inferring the distribution of fractures is subject to large uncertainty, which, in turn, leads to great uncertainty in the prediction of flow behaviors. Inverse modeling with dynamic data may help constrain fracture distributions, thus reducing the uncertainty of flow prediction. However, inverse modeling for flow in fractured reservoirs is challenging, owing to the discrete and non-Gaussian distribution of fractures, as well as the strong nonlinearity in the relationship between flow responses and model parameters. In this work, building upon a series of recent advances, an inverse modeling approach is proposed to efficiently update the flow model to match the dynamic data while retaining geological realism in the distribution of fractures. In the approach, the Hough-transform method is employed to parameterize non-Gaussian fracture fields with continuous parameter fields, thus providing desirable properties required by many inverse modeling methods. In addition, a recently developed forward simulation method, the embedded discrete fracture method (EDFM), is utilized to model the fractures. The EDFM maintains computational efficiency while preserving the ability to capture the geometrical details of fractures, because the matrix is discretized on a structured grid while the fractures, handled as planes, are inserted into the matrix grid cells. The combination of the Hough representation of fractures with the EDFM makes it possible to tune the fractures (by updating their existence, location, orientation, length, and other properties) without requiring either unstructured grids or regridding during updating.
Such a treatment is amenable to numerous inverse modeling approaches, such as the iterative inverse modeling method employed in this study, which is capable of dealing with strongly nonlinear problems. A series of numerical case studies with increasing complexity are set up to examine the performance of the proposed approach.
Inverse opal carbons for counter electrode of dye-sensitized solar cells.
Kang, Da-Young; Lee, Youngshin; Cho, Chang-Yeol; Moon, Jun Hyuk
2012-05-01
We investigated the fabrication of inverse opal carbon counter electrodes for DSSCs using a colloidal templating method. Specifically, bare inverse opal carbon (IOC), mesopore-incorporated inverse opal carbon (mIOC), and graphitized inverse opal carbon (gIOC) were synthesized and stably dispersed in ethanol solution for spray coating onto an FTO substrate. The thickness of the electrode was controlled by the number of coatings, and the average relative thickness was evaluated by measuring the transmittance spectrum. The effect of the counter electrode thickness on the photovoltaic performance of the DSSCs was investigated and analyzed via the interfacial charge transfer resistance (R_CT) under EIS measurement. The effects of the surface area and conductivity of the inverse opal were also investigated, considering the increase in surface area due to the mesopores in the inverse opal carbon and the conductivity gained by graphitization of the carbon matrix. The results showed that the FF, and thereby the efficiency, of the DSSCs increased as the electrode thickness increased. Consequently, a larger FF and thereby greater efficiency were achieved for mIOC and gIOC compared to IOC, which was attributed to their lower R_CT. Finally, compared to a conventional Pt counter electrode, the inverse opal-based carbons showed comparable efficiency upon application to DSSCs.
The attitude inversion method of geostationary satellites based on unscented particle filter
NASA Astrophysics Data System (ADS)
Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao
2018-04-01
The attitude information of geostationary satellites is difficult to obtain since, in space object surveillance, they appear as non-resolved images on ground-based observation equipment. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The UPF-based inversion algorithm is proposed to address the strong nonlinearity of the photometric-data inversion for satellite attitude, and it combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The method improves particle selection by using the UKF idea to redesign the importance density function. Moreover, it uses the RMS-UKF to partially correct the prediction covariance matrix, which improves on the limited applicability of UKF-based attitude inversion and on the particle degradation and depletion of PF-based attitude inversion. This paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability and applicability of the method are verified by simulation and scaling experiments. The results show that the proposed method effectively solves the particle degradation and depletion problem of PF-based attitude inversion, as well as the unsuitability of the UKF for strongly nonlinear attitude inversion. Moreover, its inversion accuracy is clearly superior to that of the UKF and PF, and even in cases with large attitude errors it can invert the attitude with few particles and high precision.
An improved Newton iteration for the generalized inverse of a matrix, with applications
NASA Technical Reports Server (NTRS)
Pan, Victor; Schreiber, Robert
1990-01-01
The purpose here is to clarify and illustrate the potential for the use of variants of Newton's method in solving problems of practical interest on highly parallel computers. The authors show how to accelerate the method substantially and how to modify it successfully to cope with ill-conditioned matrices. The authors conclude that Newton's method can be of value for some interesting computations, especially in parallel and other computing environments in which matrix products are especially easy to work with.
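The basic Newton iteration for a matrix inverse referred to above is X_{k+1} = X_k(2I − A X_k), which converges quadratically once ||I − A X_0|| < 1. A minimal sketch with the standard safe starting guess X_0 = A^T/(||A||_1 ||A||_∞) follows; the paper's accelerated and ill-conditioning-robust variants are not reproduced here, and the test matrix is an arbitrary well-conditioned example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Newton's iteration for the matrix inverse: X_{k+1} = X_k (2I - A X_k).
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # well-conditioned example

# Safe start X_0 = A^T / (||A||_1 * ||A||_inf) guarantees convergence
# for any nonsingular A (the eigenvalues of A X_0 then lie in (0, 1]).
X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
I = np.eye(5)
for _ in range(20):
    X = X @ (2 * I - A @ X)       # one Newton step: two matrix products

print(np.linalg.norm(X @ A - I))  # residual shrinks quadratically
```

Each step costs only two matrix products, which is why the abstract highlights environments where matrix products are cheap, such as parallel machines.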
Aspects of the inverse problem for the Toda chain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozlowski, K. K., E-mail: karol.kozlowski@u-bourgogne.fr
We generalize Babelon's approach to equations in dual variables so as to be able to treat new types of operators which we build out of the sub-constituents of the model's monodromy matrix. Further, we also apply Sklyanin's recent monodromy matrix identities so as to obtain equations in dual variables for yet other operators. The schemes discussed in this paper appear to be universal and thus, in principle, applicable to many models solvable through the quantum separation of variables.
A stochastic evolution model for residue Insertion-Deletion Independent from Substitution.
Lèbre, Sophie; Michel, Christian J
2010-12-01
We develop here a new class of stochastic models of gene evolution based on residue Insertion-Deletion Independent from Substitution (IDIS). In contrast to existing evolution models, insertions and deletions are modeled here by a concept from population dynamics; they are therefore not only independent from each other, but also independent from the substitution process. After a separate stochastic analysis of the substitution and insertion-deletion processes, we obtain a matrix differential equation combining the two, which defines the IDIS model. By deriving a general solution, we give an analytical expression for the residue occurrence probability at evolution time t as a function of a substitution rate matrix, an insertion rate vector, a deletion rate and an initial residue probability vector. Various mathematical properties of the IDIS model in relation to time t are derived: time scale, time step, time inversion and sequence length. Particular expressions of the nucleotide occurrence probability at time t are given for classical substitution rate matrices in various biological contexts: equal insertion rates, insertion-deletion only, and substitution only. All these expressions can be directly used for biological evolutionary applications. The IDIS model shows strongly different stochastic behavior from the classical substitution-only model when compared on a gene dataset. Indeed, by considering the three processes of residue insertion, deletion and substitution independently from each other, it allows a more realistic representation of gene evolution and opens new directions and applications in this research field. Copyright © 2010 Elsevier Ltd. All rights reserved.
Schreiber, Roberto; Paim, Layde R; de Rossi, Guilherme; Matos-Souza, José R; Costa E Silva, Anselmo de A; Souza, Cristiane M; Borges, Mariane; Azevedo, Eliza R; Alonso, Karina C; Gorla, José I; Cliquet, Alberto; Nadruz, Wilson
2014-11-01
Subjects with spinal cord injury (SCI) exhibit impaired left ventricular (LV) diastolic function, which has been reported to be attenuated by regular physical activity. This study investigated the relationship between circulating matrix metalloproteinases (MMPs) and tissue inhibitors of MMPs (TIMPs) and echocardiographic parameters in SCI subjects, and the role of physical activity in this regard. Forty-two men with SCI [19 sedentary (S-SCI) and 23 physically active (PA-SCI)] were evaluated by clinical, anthropometric, laboratory, and echocardiographic analysis. Plasma pro-MMP-2, MMP-2, MMP-8, pro-MMP-9, MMP-9, TIMP-1 and TIMP-2 levels were determined by enzyme-linked immunosorbent assay and zymography. PA-SCI subjects presented lower pro-MMP-2 and pro-MMP-2/TIMP-2 levels and improved markers of LV diastolic function (lower E/Em and higher Em and E/A values) than S-SCI ones. Bivariate analysis showed that pro-MMP-2 correlated inversely with Em and directly with E/Em, while MMP-9 correlated directly with LV mass index and LV end-diastolic diameter in the whole sample. Following multiple regression analysis, pro-MMP-2, but not physical activity, remained associated with Em, while MMP-9 was associated with LV mass index in the whole sample. These findings suggest differing roles for MMPs in LV structure and function regulation and an interaction among pro-MMP-2, diastolic function and physical activity in SCI subjects. Copyright © 2014 Elsevier B.V. All rights reserved.
Matrix Rigidity Activates Wnt Signaling through Down-regulation of Dickkopf-1 Protein*
Barbolina, Maria V.; Liu, Yiuying; Gurler, Hilal; Kim, Mijung; Kajdacsy-Balla, Andre A.; Rooper, Lisa; Shepard, Jaclyn; Weiss, Michael; Shea, Lonnie D.; Penzes, Peter; Ravosa, Matthew J.; Stack, M. Sharon
2013-01-01
Cells respond to changes in the physical properties of the extracellular matrix with altered behavior and gene expression, highlighting the important role of the microenvironment in the regulation of cell function. In the current study, culture of epithelial ovarian cancer cells on three-dimensional collagen I gels led to a dramatic down-regulation of the Wnt signaling inhibitor dickkopf-1 with a concomitant increase in nuclear β-catenin and enhanced β-catenin/Tcf/Lef transcriptional activity. Increased three-dimensional collagen gel invasion was accompanied by transcriptional up-regulation of the membrane-tethered collagenase membrane type 1 matrix metalloproteinase, and an inverse relationship between dickkopf-1 and membrane type 1 matrix metalloproteinase was observed in human epithelial ovarian cancer specimens. Similar results were obtained in other tissue-invasive cells such as vascular endothelial cells, suggesting a novel mechanism for functional coupling of matrix adhesion with Wnt signaling. PMID:23152495
Matrix rigidity activates Wnt signaling through down-regulation of Dickkopf-1 protein.
Barbolina, Maria V; Liu, Yiuying; Gurler, Hilal; Kim, Mijung; Kajdacsy-Balla, Andre A; Rooper, Lisa; Shepard, Jaclyn; Weiss, Michael; Shea, Lonnie D; Penzes, Peter; Ravosa, Matthew J; Stack, M Sharon
2013-01-04
Cells respond to changes in the physical properties of the extracellular matrix with altered behavior and gene expression, highlighting the important role of the microenvironment in the regulation of cell function. In the current study, culture of epithelial ovarian cancer cells on three-dimensional collagen I gels led to a dramatic down-regulation of the Wnt signaling inhibitor dickkopf-1 with a concomitant increase in nuclear β-catenin and enhanced β-catenin/Tcf/Lef transcriptional activity. Increased three-dimensional collagen gel invasion was accompanied by transcriptional up-regulation of the membrane-tethered collagenase membrane type 1 matrix metalloproteinase, and an inverse relationship between dickkopf-1 and membrane type 1 matrix metalloproteinase was observed in human epithelial ovarian cancer specimens. Similar results were obtained in other tissue-invasive cells such as vascular endothelial cells, suggesting a novel mechanism for functional coupling of matrix adhesion with Wnt signaling.
Apparatus and system for multivariate spectral analysis
Keenan, Michael R.; Kotula, Paul G.
2003-06-24
An apparatus and system for determining the properties of a sample from measured spectral data collected from the sample by performing a method of multivariate spectral analysis. The method can include: generating a two-dimensional matrix A containing measured spectral data; providing a weighted spectral data matrix D by performing a weighting operation on matrix A; factoring D into the product of two matrices, C and S^T, by performing a constrained alternating least-squares analysis of D = CS^T, where C is a concentration intensity matrix and S is a spectral shapes matrix; unweighting C and S by applying the inverse of the weighting used previously; and determining the properties of the sample by inspecting C and S. This method can be used by a spectrum analyzer to process X-ray spectral data generated by a spectral analysis system that can include a Scanning Electron Microscope (SEM) with an Energy Dispersive Detector and Pulse Height Analyzer.
Numerical solution of quadratic matrix equations for free vibration analysis of structures
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1975-01-01
This paper is concerned with the efficient and accurate solution of the eigenvalue problem represented by quadratic matrix equations. Such matrix forms are obtained in connection with the free vibration analysis of structures, discretized by finite 'dynamic' elements, resulting in frequency-dependent stiffness and inertia matrices. The paper presents a new numerical solution procedure of the quadratic matrix equations, based on a combined Sturm sequence and inverse iteration technique enabling economical and accurate determination of a few required eigenvalues and associated vectors. An alternative procedure based on a simultaneous iteration procedure is also described when only the first few modes are the usual requirement. The employment of finite dynamic elements in conjunction with the presently developed eigenvalue routines results in a most significant economy in the dynamic analysis of structures.
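For context, the quadratic eigenproblem the paper targets can also be solved by companion linearization to a standard eigenproblem. The sketch below is that textbook alternative, not the paper's Sturm-sequence/inverse-iteration scheme; `quadratic_eigs` is an illustrative name:

```python
import numpy as np

def quadratic_eigs(M, C, K):
    """Eigenvalues of (lam^2*M + lam*C + K) x = 0 via companion
    linearization: the second-order system M q'' + C q' + K q = 0 is
    rewritten as a first-order system whose plain eigenvalues are the
    quadratic eigenvalues."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ C]])
    return np.linalg.eigvals(A)
```

Linearization doubles the problem size, which is exactly why structure-exploiting schemes such as the one in the paper are attractive when only a few modes are needed.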
NASA Astrophysics Data System (ADS)
Voronina, Tatyana; Romanenko, Alexey; Loskutov, Artem
2017-04-01
The key point in state-of-the-art tsunami forecasting is constructing a reliable tsunami source. In this study, we present an application of an original numerical inversion technique to modeling the tsunami source of the 16 September 2015 Chile tsunami. The problem of recovering a tsunami source from remote measurements of the incoming wave at deep-water tsunameters is considered as an inverse problem of mathematical physics in the class of ill-posed problems. The approach is based on the least-squares and truncated singular value decomposition techniques. Tsunami wave propagation is considered within the scope of linear shallow-water theory. As in the inverse seismic problem, the numerical solutions obtained by mathematical methods become unstable due to the presence of noise in real data. The method of r-solutions makes it possible to avoid instability in the solution of the ill-posed problem under study. This method is attractive from the computational point of view since the main effort is required only once, for calculating the matrix whose columns consist of computed waveforms for each harmonic taken as a source (the unknown tsunami source is represented as a truncated series of spatial harmonics in the source area). Furthermore, by analyzing the singular spectrum of the matrix obtained in the course of the numerical calculations, one can estimate in advance how well a given observational system will constrain the inversion, which allows a more effective disposition of the tsunameters to be proposed from precomputations. In other words, the results obtained allow finding a way to improve the inversion by selecting the most informative set of available recording stations. The case study of the 6 February 2013 Solomon Islands tsunami highlights the critical role of the arrangement of deep-water tsunameters in the inversion results. Application of the proposed methodology to the 16 September 2015 Chile tsunami successfully produced a tsunami source model.
The function recovered by the method proposed can find practical applications both as an initial condition for various optimization approaches and for computer calculation of the tsunami wave propagation.
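The truncated-SVD step at the heart of the r-solution idea can be sketched directly. The function below (`tsvd_solve` is an illustrative name) keeps only the r largest singular values, discarding the noise-amplifying small-singular-value directions:

```python
import numpy as np

def tsvd_solve(A, b, r):
    """Least-squares solution of A x = b using only the r largest
    singular values of A (truncated SVD / r-solution sketch)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Project b onto the leading r left singular vectors, rescale, map back.
    return Vt[:r].T @ ((U[:, :r].T @ b) / s[:r])
```

In the tsunami application, the columns of A would be precomputed waveforms for each spatial harmonic, and inspecting s before choosing r is what allows the informativeness of a station set to be assessed in advance.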
Søndergaard, Anders Aspegren; Shepperson, Benjamin; Stapelfeldt, Henrik
2017-07-07
We present an efficient, noise-robust method based on Fourier analysis for reconstructing the three-dimensional measure of the alignment degree, ⟨cos²θ⟩, directly from its two-dimensional counterpart, ⟨cos²θ_2D⟩. The method applies to nonadiabatic alignment of linear molecules induced by a linearly polarized, nonresonant laser pulse. Our theoretical analysis shows that the Fourier transform of the time-dependent ⟨cos²θ_2D⟩ trace over one molecular rotational period contains additional frequency components compared to the Fourier transform of ⟨cos²θ⟩. These additional frequency components can be identified and removed from the Fourier spectrum of ⟨cos²θ_2D⟩. By rescaling the remaining frequency components, the Fourier spectrum of ⟨cos²θ⟩ is obtained and, finally, ⟨cos²θ⟩ is reconstructed through inverse Fourier transformation. The method allows the reconstruction of the ⟨cos²θ⟩ trace from a measured ⟨cos²θ_2D⟩ trace, which is the typical observable of many experiments, and thereby provides direct comparison to calculated ⟨cos²θ⟩ traces, which is the commonly used alignment metric in theoretical descriptions. We illustrate our method by applying it to the measurement of nonadiabatic alignment of I₂ molecules. In addition, we present an efficient algorithm for calculating the matrix elements of cos²θ_2D and any other observable in the symmetric top basis. These matrix elements are required in the rescaling step, and they allow for highly efficient numerical calculation of ⟨cos²θ_2D⟩ and ⟨cos²θ⟩ in general.
NASA Technical Reports Server (NTRS)
1979-01-01
The quasi-one-dimensional flow program was modified in two ways. The Runge-Kutta subroutine was replaced with a subroutine using a modified divided-difference form of the Adams PECE method, and the matrix inversion routine was replaced with a pseudo-inverse routine. Calculations were run using both the original and modified programs. Comparison of the calculations showed that the original Runge-Kutta routine could not detect the singularity near the throat and was integrating across it. The modified version was able to detect the singularity and therefore gave more valid calculations.
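The motivation for swapping plain inversion for a pseudo-inverse can be shown with a toy nearly singular system (this is an illustration of the general numerical point, not the original program):

```python
import numpy as np

# A nearly rank-deficient system: plain inv() amplifies roundoff wildly,
# while pinv() truncates the tiny singular value (via rcond) and returns
# the stable minimum-norm solution.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-13]])
b = np.array([2.0, 2.0])
x = np.linalg.pinv(A, rcond=1e-8) @ b   # approximately [1.0, 1.0]
```

Here the second singular value of A is ~5e-14; `rcond=1e-8` discards it, so the solve behaves like a rank-1 least-squares fit instead of dividing by a near-zero number.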
Solving Large-Scale Inverse Magnetostatic Problems using the Adjoint Method
Bruckner, Florian; Abert, Claas; Wautischer, Gregor; Huber, Christian; Vogler, Christoph; Hinze, Michael; Suess, Dieter
2017-01-01
An efficient algorithm for the reconstruction of the magnetization state within magnetic components is presented. The occurring inverse magnetostatic problem is solved by means of an adjoint approach, based on the Fredkin-Koehler method for the solution of the forward problem. Due to the use of hybrid FEM-BEM coupling combined with matrix compression techniques the resulting algorithm is well suited for large-scale problems. Furthermore the reconstruction of the magnetization state within a permanent magnet as well as an optimal design application are demonstrated. PMID:28098851
The solubility parameter for biomedical polymers-Application of inverse gas chromatography.
Adamska, K; Voelkel, A; Berlińska, A
2016-08-05
The solubility parameter seems to be a useful tool for the thermodynamic characterisation of different materials. The solubility parameter concept can be used to predict sufficient miscibility or solubility between a solvent and a polymer, as well as between components of a co-polymer matrix in composite biomaterials. The values of the solubility parameter were determined for polycaprolactone (PCL), polylactic acid (PLA) and polyethylene glycol (PEG) by using different procedures and experimental data collected by means of inverse gas chromatography. Copyright © 2016 Elsevier B.V. All rights reserved.
General Matrix Inversion for the Calibration of Electric Field Sensor Arrays on Aircraft Platforms
NASA Technical Reports Server (NTRS)
Mach, D. M.; Koshak, W. J.
2006-01-01
We have developed a matrix calibration procedure that uniquely relates the electric fields measured at the aircraft with the external vector electric field and net aircraft charge. Our calibration method is being used with all of our aircraft/electric field sensing combinations and can be generalized to any reasonable combination of electric field measurements and aircraft. We determine a calibration matrix that represents the individual instrument responses to the external electric field. The aircraft geometry and configuration of field mills (FMs) uniquely define the matrix. The matrix can then be inverted to determine the external electric field and net aircraft charge from the FM outputs. A distinct advantage of the method is that if one or more FMs need to be eliminated or de-emphasized (for example, due to a malfunction), it is a simple matter to reinvert the matrix without the malfunctioning FMs. To demonstrate our calibration technique, we present data from several of our aircraft programs (ER-2, DC-8, Altus, Citation).
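The row-dropping property described above is easy to demonstrate with a toy least-squares version of the calibration. The response matrix K below is random and the unknown vector (Ex, Ey, Ez, q) is made up for illustration; a real K is fixed by the aircraft geometry and mill placement:

```python
import numpy as np

# 6 field mills respond linearly to 4 unknowns (Ex, Ey, Ez, net charge q).
rng = np.random.default_rng(1)
K = rng.standard_normal((6, 4))           # hypothetical response matrix
x_true = np.array([1.0, -2.0, 0.5, 3.0])  # hypothetical Ex, Ey, Ez, q
v = K @ x_true                            # simulated mill outputs

# Invert by least squares; a malfunctioning mill is handled by simply
# dropping its row and re-solving (re-inverting the reduced matrix).
x_all = np.linalg.lstsq(K, v, rcond=None)[0]
x_drop = np.linalg.lstsq(K[1:], v[1:], rcond=None)[0]  # mill 0 removed
```

As long as the remaining rows still span the four unknowns, both solves recover the same external field and charge.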
Cheng, Yih-Chun; Tsai, Pei-Yun; Huang, Ming-Hao
2016-05-19
Low-complexity compressed sensing (CS) techniques for monitoring electrocardiogram (ECG) signals in wireless body sensor networks (WBSN) are presented. The prior probability of ECG sparsity in the wavelet domain is first exploited. Then, a variable orthogonal multi-matching pursuit (vOMMP) algorithm consisting of two phases is proposed. In the first phase, an orthogonal matching pursuit (OMP) algorithm is adopted to effectively augment the support set with reliable indices, and in the second phase, orthogonal multi-matching pursuit (OMMP) is employed to rescue the missing indices. The reconstruction performance is thus enhanced with the prior information and the vOMMP algorithm. Furthermore, the computation-intensive pseudo-inverse operation is simplified by a matrix-inversion-free (MIF) technique based on QR decomposition. The vOMMP-MIF CS decoder is then implemented in 90 nm CMOS technology. The QR decomposition is accomplished by two systolic arrays working in parallel. The implementation supports three settings for obtaining 40, 44, and 48 coefficients in the sparse vector. From the measurement results, the power consumption is 11.7 mW at 0.9 V and 12 MHz. Compared to prior chip implementations, our design shows good hardware efficiency and is suitable for low-energy applications.
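The OMP building block used in the first phase can be sketched compactly. This is plain textbook OMP (greedy column selection plus least-squares re-fit), not the full vOMMP decoder or its MIF hardware mapping:

```python
import numpy as np

def omp(Phi, y, k):
    """Basic orthogonal matching pursuit: greedily add the dictionary
    column most correlated with the residual, then re-fit all chosen
    coefficients by least squares on the current support."""
    residual, support = y.astype(float), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x
```

The least-squares re-fit is where the pseudo-inverse cost arises; the paper's QR-based MIF technique replaces exactly that step in hardware.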
NASA Astrophysics Data System (ADS)
Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory
2017-04-01
The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to other inversion techniques based on the principles of regularization, Bayesian inference, minimum norm, maximum entropy on the mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to the other techniques, with a practical choice of a priori information and error statistics, while eliminating the need for additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice for the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made by using real measurements from a continuous point release conducted in the Fusion Field Trials, Dugway Proving Ground, Utah.
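The common estimator shared by the compared techniques can be written in one line. This is the generic weighted minimum-norm form, where B plays the role of a background/weight covariance and R the measurement covariance; the renormalization method amounts to a particular deterministic choice of these matrices. The function name is illustrative, not from the paper:

```python
import numpy as np

def weighted_min_norm(G, y, B, R):
    """Weighted minimum-norm source estimate x = B G^T (G B G^T + R)^-1 y
    for an underdetermined linear model y = G x. Choosing B and R
    recovers Tikhonov/Bayesian-style estimators as special cases."""
    return B @ G.T @ np.linalg.solve(G @ B @ G.T + R, y)
```

With B = I and R → 0 this reduces to the classic minimum-norm solution, which is a useful sanity check.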
NASA Astrophysics Data System (ADS)
Fischer, P.; Jardani, A.; Lecoq, N.
2018-02-01
In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of the discrete network of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to simulate water flows numerically in a model, and a deterministic optimization algorithm to invert a set of observed piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces piloted by a set of parameters (matrix transmissivity, and the geometry and equivalent transmissivity of the conduits) that are considered unknown. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties, until it converges to a global network geometry in a solution model able to reproduce the set of data. An uncertainty analysis of this result can be performed from the maps of posterior uncertainties on the network geometry or on the property values. This method has been successfully tested on three different theoretical and simplified study cases with hydraulic response data generated from hypothetical karstic models with increasing complexity of the network geometry and of the matrix heterogeneity.
Lowry, David B.; Willis, John H.
2010-01-01
The role of chromosomal inversions in adaptation and speciation is controversial. Historically, inversions were thought to contribute to these processes either by directly causing hybrid sterility or by facilitating the maintenance of co-adapted gene complexes. Because inversions suppress recombination when heterozygous, a recently proposed local adaptation mechanism predicts that they will spread if they capture alleles at multiple loci involved in divergent adaptation to contrasting environments. Many empirical studies have found inversion polymorphisms linked to putatively adaptive phenotypes or distributed along environmental clines. However, direct involvement of an inversion in local adaptation and consequent ecological reproductive isolation has not to our knowledge been demonstrated in nature. In this study, we discovered that a chromosomal inversion polymorphism is geographically widespread, and we tested the extent to which it contributes to adaptation and reproductive isolation under natural field conditions. Replicated crosses between the prezygotically reproductively isolated annual and perennial ecotypes of the yellow monkeyflower, Mimulus guttatus, revealed that alternative chromosomal inversion arrangements are associated with life-history divergence over thousands of kilometers across North America. The inversion polymorphism affected adaptive flowering time divergence and other morphological traits in all replicated crosses between four pairs of annual and perennial populations. To determine if the inversion contributes to adaptation and reproductive isolation in natural populations, we conducted a novel reciprocal transplant experiment involving outbred lines, where alternative arrangements of the inversion were reciprocally introgressed into the genetic backgrounds of each ecotype.
Our results demonstrate for the first time in nature the contribution of an inversion to adaptation, an annual/perennial life-history shift, and multiple reproductive isolating barriers. These results are consistent with the local adaptation mechanism being responsible for the distribution of the two inversion arrangements across the geographic range of M. guttatus and that locally adaptive inversion effects contribute directly to reproductive isolation. Such a mechanism may be partially responsible for the observation that closely related species often differ by multiple chromosomal rearrangements. PMID:20927411
NASA Technical Reports Server (NTRS)
Fijany, Amir
1993-01-01
In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of interbody force which leads to a new factorization of the mass matrix (M). Specifically, it is shown that a factorization of the inverse of the mass matrix in the form of the Schur complement is derived as M⁻¹ = C − B*A⁻¹B, wherein matrices C, A, and B are block tridiagonal matrices. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M⁻¹. For the closed-chain systems, similar factorizations and O(n) algorithms for computation of the Operational Space Mass Matrix Λ and its inverse Λ⁻¹ are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized and that lead to both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.
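The Schur-complement identity underlying such factorizations can be checked numerically on a generic symmetric block matrix (random blocks here, not the dynamics matrices of the paper): the lower-right block of the inverse equals the inverse of the Schur complement of the upper-left block.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Symmetric positive definite diagonal blocks, arbitrary coupling block.
A = rng.standard_normal((n, n)); A = A @ A.T + 10 * np.eye(n)
D = rng.standard_normal((n, n)); D = D @ D.T + 10 * np.eye(n)
B = rng.standard_normal((n, n))
M = np.block([[A, B], [B.T, D]])

S = D - B.T @ np.linalg.inv(A) @ B   # Schur complement of A in M
block_22 = np.linalg.inv(M)[n:, n:]  # lower-right block of M^{-1}
```

The check `block_22 == inv(S)` is the scalar fact behind expressing M⁻¹ through block-tridiagonal factors without ever forming M⁻¹ densely.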
Investigation of the Capability of Compact Polarimetric SAR Interferometry to Estimate Forest Height
NASA Astrophysics Data System (ADS)
Zhang, Hong; Xie, Lei; Wang, Chao; Chen, Jiehong
2013-08-01
The main objective of this paper is to investigate the capability of compact polarimetric SAR interferometry (C-PolInSAR) for forest height estimation. For this, the pseudo fully polarimetric interferometric (F-PolInSAR) covariance matrix is first reconstructed; then the three-stage inversion algorithm, the hybrid algorithm, and the MUSIC and Capon algorithms are applied to both the C-PolInSAR covariance matrix and the pseudo F-PolInSAR covariance matrix. The availability of forest height estimation is demonstrated using L-band data generated by the simulator PolSARProSim and X-band airborne data acquired by the East China Research Institute of Electronic Engineering, China Electronics Technology Group Corporation.
Quantum algorithm for support matrix machines
NASA Astrophysics Data System (ADS)
Duan, Bojia; Yuan, Jiabin; Liu, Ying; Li, Dan
2017-09-01
We propose a quantum algorithm for support matrix machines (SMMs) that efficiently addresses an image classification problem by introducing a least-squares reformulation. This algorithm consists of two core subroutines: a quantum matrix inversion (Harrow-Hassidim-Lloyd, HHL) algorithm and a quantum singular value thresholding (QSVT) algorithm. The two algorithms can be implemented on a universal quantum computer with complexity O[log(npq)] and O[log(pq)], respectively, where n is the number of training data and pq is the size of the feature space. By iterating the algorithms, we can find the parameters for the SMM classification model. Our analysis shows that both the HHL and QSVT algorithms achieve an exponential speedup over their classical counterparts.
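For reference, the classical operation that the QSVT subroutine accelerates is singular value thresholding, which is short enough to state exactly (this is the standard classical definition, not the quantum circuit):

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: soft-threshold the singular values
    of A by tau, i.e. shrink each toward zero and clip at zero."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Iterating this operator (together with a least-squares solve, classically a matrix inversion) is what the SMM training loop alternates between.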
Inverse MDS: Inferring Dissimilarity Structure from Multiple Item Arrangements
Kriegeskorte, Nikolaus; Mur, Marieke
2012-01-01
The pairwise dissimilarities of a set of items can be intuitively visualized by a 2D arrangement of the items, in which the distances reflect the dissimilarities. Such an arrangement can be obtained by multidimensional scaling (MDS). We propose a method for the inverse process: inferring the pairwise dissimilarities from multiple 2D arrangements of items. Perceptual dissimilarities are classically measured using pairwise dissimilarity judgments. However, alternative methods including free sorting and 2D arrangements have previously been proposed. The present proposal is novel (a) in that the dissimilarity matrix is estimated by “inverse MDS” based on multiple arrangements of item subsets, and (b) in that the subsets are designed by an adaptive algorithm that aims to provide optimal evidence for the dissimilarity estimates. The subject arranges the items (represented as icons on a computer screen) by means of mouse drag-and-drop operations. The multi-arrangement method can be construed as a generalization of simpler methods: It reduces to pairwise dissimilarity judgments if each arrangement contains only two items, and to free sorting if the items are categorically arranged into discrete piles. Multi-arrangement combines the advantages of these methods. It is efficient (because the subject communicates many dissimilarity judgments with each mouse drag), psychologically attractive (because dissimilarities are judged in context), and can characterize continuous high-dimensional dissimilarity structures. We present two procedures for estimating the dissimilarity matrix: a simple weighted-aligned-average of the partial dissimilarity matrices and a computationally intensive algorithm, which estimates the dissimilarity matrix by iteratively minimizing the error of MDS-predictions of the subject’s arrangements. The Matlab code for interactive arrangement and dissimilarity estimation is available from the authors upon request. PMID:22848204
NASA Astrophysics Data System (ADS)
Bohm, Mirjam; Haberland, Christian; Asch, Günter
2013-04-01
We use local earthquake data observed by the amphibious, temporary seismic MERAMEX array to derive spatial variations of seismic attenuation (Qp) in the crust and upper mantle beneath Central Java. The path-averaged attenuation values (t∗) of a high-quality subset of 84 local earthquakes were calculated by a spectral inversion technique. These 1929 t∗ values, inverted by a least-squares tomographic inversion, yield the 3D distribution of the specific attenuation (Qp). Analysis of the model resolution matrix and synthetic recovery tests were used to assess the confidence of the Qp model. We notice a prominent zone of increased attenuation beneath and north of the modern volcanic arc at depths down to 15 km. Most of this anomaly seems to be related to the Eocene-Miocene Kendeng Basin (mainly in the eastern part of the study area). Enhanced attenuation is also found in the upper crust in the direct vicinity of recent volcanoes, pointing towards zones of partial melt, the presence of fluids, and increased temperatures in the middle to upper crust. The middle and lower crust seem not to be associated with strong heating and the presence of melts throughout the arc. Enhanced attenuation above the subducting slab beneath the marine forearc seems to be due to the presence of fluids.
A combined direct/inverse three-dimensional transonic wing design method for vector computers
NASA Technical Reports Server (NTRS)
Weed, R. A.; Carlson, L. A.; Anderson, W. K.
1984-01-01
A three-dimensional transonic-wing design algorithm for vector computers is developed, and the results of sample computations are presented graphically. The method incorporates the direct/inverse scheme of Carlson (1975), a Cartesian grid system with boundary conditions applied at a mean plane, and a potential-flow solver based on the conservative form of the full potential equation and using the ZEBRA II vectorizable solution algorithm of South et al. (1980). The accuracy and consistency of the method with regard to direct and inverse analysis and trailing-edge closure are verified in the test computations.
Hybrid Adaptive Flight Control with Model Inversion Adaptation
NASA Technical Reports Server (NTRS)
Nguyen, Nhan
2011-01-01
This study investigates a hybrid adaptive flight control method as a design possibility for a flight control system that can enable an effective adaptation strategy to deal with off-nominal flight conditions. The hybrid adaptive control blends both direct and indirect adaptive control in a model inversion flight control architecture. The blending of both direct and indirect adaptive control provides a much more flexible and effective adaptive flight control architecture than either direct or indirect adaptive control alone. The indirect adaptive control is used to update the model inversion controller by an on-line parameter estimation of uncertain plant dynamics based on two methods. The first parameter estimation method is an indirect adaptive law based on Lyapunov theory, and the second method is a recursive least-squares indirect adaptive law. The model inversion controller is therefore made to adapt to changes in the plant dynamics due to uncertainty. As a result, the modeling error is reduced, which directly leads to a decrease in the tracking error. In conjunction with the indirect adaptive control that updates the model inversion controller, a direct adaptive control is implemented as an augmented command to further reduce any residual tracking error that is not entirely eliminated by the indirect adaptive control.
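The second indirect adaptive law mentioned above, recursive least squares, has a compact standard form. The sketch below is the generic RLS update (regressor `phi`, measurement `y`, forgetting factor `lam`), not the paper's specific flight-control implementation:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares step: update the parameter estimate
    theta and covariance-like matrix P from a new regressor/measurement
    pair. With lam < 1, old data is gradually forgotten."""
    k = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + k * (y - phi @ theta)     # correct by prediction error
    P = (P - np.outer(k, phi @ P)) / lam      # shrink uncertainty
    return theta, P
```

In a model-inversion controller, `theta` would hold the uncertain plant parameters, and each control cycle would feed one regressor/measurement pair through this update.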
A systematic linear space approach to solving partially described inverse eigenvalue problems
NASA Astrophysics Data System (ADS)
Hu, Sau-Lon James; Li, Haujun
2008-06-01
Most applications of the inverse eigenvalue problem (IEP), which concerns the reconstruction of a matrix from prescribed spectral data, are associated with special classes of structured matrices. Solving the IEP requires one to satisfy both the spectral constraint and the structural constraint. If the spectral constraint consists of only one or a few prescribed eigenpairs, this kind of inverse problem has been referred to as the partially described inverse eigenvalue problem (PDIEP). This paper develops an efficient, general and systematic approach to solve the PDIEP. Basically, the approach, applicable to various structured matrices, converts the PDIEP into an ordinary inverse problem that is formulated as a set of simultaneous linear equations. The singular value decomposition method is applied when solving the simultaneous linear equations for the model parameters. Because of the conversion to an ordinary inverse problem, other constraints associated with the model parameters can be easily incorporated into the solution procedure. The detailed derivation and numerical examples implementing the newly developed approach for symmetric Toeplitz and quadratic pencil (including mass, damping and stiffness matrices of a linear dynamic system) PDIEPs are presented. Excellent numerical results for both kinds of problem are achieved in situations with either unique or infinitely many solutions.
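The "convert to simultaneous linear equations" idea can be made concrete for the symmetric Toeplitz case: T v = λv is linear in the unknown first row t of T, so one eigenpair yields a linear system for t. This sketch (illustrative function name, SVD-based least squares via `lstsq`) is a minimal instance of the approach, not the paper's full procedure:

```python
import numpy as np

def symmetric_toeplitz_from_eigenpair(lam, v):
    """Find a symmetric Toeplitz T with T v = lam * v. Row i of the
    eigen-equation is sum_j t[|i-j|] v[j] = lam*v[i], linear in the
    unknown first row t, so build the coefficient matrix G and solve."""
    v = np.asarray(v, float)
    n = len(v)
    G = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            G[i, abs(i - j)] += v[j]
    t, *_ = np.linalg.lstsq(G, lam * v, rcond=None)
    return np.array([[t[abs(i - j)] for j in range(n)] for i in range(n)])
```

The recovered T need not equal the matrix the eigenpair came from (the problem can have infinitely many solutions), but it must satisfy the prescribed eigen-equation and the Toeplitz structure.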
NASA Astrophysics Data System (ADS)
Ke, Rihuan; Ng, Michael K.; Sun, Hai-Wei
2015-12-01
In this paper, we study the block lower triangular Toeplitz-like system with tri-diagonal blocks which arises from the time-fractional partial differential equation. Existing fast numerical solvers (e.g., the fast approximate inversion method) cannot handle such a linear system, as the main diagonal blocks are different. The main contribution of this paper is to propose a fast direct method for solving this linear system, and to illustrate that the proposed method is much faster than the classical block forward substitution method. Our idea is based on a divide-and-conquer strategy together with fast Fourier transforms for calculating Toeplitz matrix-vector multiplications. The method requires O(MN log² M) arithmetic operations, where M is the number of blocks (the number of time steps) in the system and N is the size (number of spatial grid points) of each block. Numerical examples from the finite difference discretization of time-fractional partial differential equations are given to demonstrate the efficiency of the proposed method.
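The FFT kernel the method relies on is the fast Toeplitz matrix-vector product via circulant embedding. The sketch below shows that kernel in isolation (scalar, not block, Toeplitz; illustrative function name):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """O(N log N) product of a Toeplitz matrix (first column c, first
    row r, with c[0] == r[0]) with vector x: embed the Toeplitz matrix
    in a 2N x 2N circulant, whose matvec diagonalizes under the FFT."""
    n = len(x)
    v = np.concatenate([c, [0.0], r[:0:-1]])  # circulant first column
    y = np.fft.ifft(np.fft.fft(v) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real
```

Doing each matvec in O(N log N) instead of O(N²), inside a divide-and-conquer recursion over the M time-step blocks, is what produces the overall O(MN log² M) cost quoted above.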
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischer, Nadine; Prestel, S.; Ritzmann, M.
We present the first public implementation of antenna-based QCD initial- and final-state showers. The shower kernels are 2→3 antenna functions, which capture not only the collinear dynamics but also the leading soft (coherent) singularities of QCD matrix elements. We define the evolution measure to be inversely proportional to the leading poles; hence gluon emissions are evolved in a p⊥ measure inversely proportional to the eikonal, while processes that only contain a single pole (e.g., g → qq̄) are evolved in virtuality. Non-ordered emissions are allowed, suppressed by an additional power of 1/Q². Recoils and kinematics are governed by exact on-shell 2→3 phase-space factorisations. This first implementation is limited to massless QCD partons and colourless resonances. Tree-level matrix-element corrections are included for QCD up to O(α_s⁴) (4 jets), and for Drell-Yan and Higgs production up to O(α_s³) (V/H + 3 jets). Finally, the resulting algorithm has been made publicly available in Vincia 2.0.
A fast multigrid-based electromagnetic eigensolver for curved metal boundaries on the Yee mesh
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauer, Carl A., E-mail: carl.bauer@colorado.edu; Werner, Gregory R.; Cary, John R.
For embedded boundary electromagnetics using the Dey–Mittra (Dey and Mittra, 1997) [1] algorithm, a special grad–div matrix constructed in this work allows use of multigrid methods for efficient inversion of Maxwell's curl–curl matrix. Efficient curl–curl inversions are demonstrated within a shift-and-invert Krylov-subspace eigensolver (open-sourced at https://github.com/bauerca/maxwell) on the spherical cavity and the 9-cell TESLA superconducting accelerator cavity. The accuracy of the Dey–Mittra algorithm is also examined: frequencies converge with second-order error, and surface fields are found to converge with nearly second-order error. In agreement with previous work (Nieter et al., 2009) [2], neglecting some boundary-cut cell faces (as is required in the time domain for numerical stability) reduces frequency convergence to first-order and surface-field convergence to zeroth-order (i.e. surface fields do not converge). Additionally and importantly, neglecting faces can reduce accuracy by an order of magnitude at low resolutions.
Removal of Stationary Sinusoidal Noise from Random Vibration Signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian; Cap, Jerome S.
In random vibration environments, sinusoidal line noise may appear in the vibration signal and can affect analysis of the resulting data. We studied two methods which remove stationary sine tones from random noise: a matrix inversion algorithm and a chirp-z transform algorithm. In addition, we developed new methods to determine the frequency of the tonal noise. The results show that both of the removal methods can eliminate sine tones in prefabricated random vibration data when the sine-to-random ratio is at least 0.25. For smaller ratios down to 0.02, only the matrix inversion technique can remove the tones, but the metrics to evaluate its effectiveness also degrade. We also found that using fast Fourier transforms best identified the tonal noise, and determined that band-pass-filtering the signals prior to the process improved sine removal. When applied to actual vibration test data, the methods were not as effective at removing harmonic tones, which we believe to be a result of mixed-phase sinusoidal noise.
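A minimal instance of the matrix-inversion approach is a least-squares fit of sine/cosine quadratures at a known tone frequency (in practice the frequency would first be identified from an FFT peak, as described above). The function name is illustrative, not from the report:

```python
import numpy as np

def remove_tone(x, fs, f0):
    """Remove a stationary sinusoid at known frequency f0 (Hz) from
    signal x sampled at fs (Hz) by least-squares fitting its sine and
    cosine quadratures, then subtracting the fitted tone."""
    t = np.arange(len(x)) / fs
    A = np.column_stack([np.sin(2 * np.pi * f0 * t),
                         np.cos(2 * np.pi * f0 * t)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return x - A @ coef
```

Because the fit solves a 2-column normal-equation system, this scales to multiple tones by simply adding quadrature columns per frequency, which is where a genuine matrix inversion enters.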
Decorin modulates matrix mineralization in vitro
NASA Technical Reports Server (NTRS)
Mochida, Yoshiyuki; Duarte, Wagner R.; Tanzawa, Hideki; Paschalis, Eleftherios P.; Yamauchi, Mitsuo
2003-01-01
Decorin (DCN), a member of the small leucine-rich proteoglycans, is known to modulate collagen fibrillogenesis. In order to investigate the potential roles of DCN in collagen matrix mineralization, several stable osteoblastic cell clones expressing higher (sense-DCN, S-DCN) and lower (antisense-DCN, As-DCN) levels of DCN were generated, and the mineralized nodules formed by these clones were characterized. In comparison with control cells, the onset of mineralization by S-DCN clones was significantly delayed, whereas in As-DCN clones it was markedly accelerated and the number of mineralized nodules was significantly increased. The timing of mineralization was inversely correlated with the level of DCN synthesis. In these clones, the patterns of cell proliferation and differentiation appeared unaffected. These results suggest that DCN may act as an inhibitor of collagen matrix mineralization, thus modulating the timing of matrix mineralization.
Multi-Maneuver Clohessy-Wiltshire Targeting
NASA Technical Reports Server (NTRS)
Dannemiller, David P.
2011-01-01
Orbital rendezvous involves execution of a sequence of maneuvers by a chaser vehicle to bring the chaser to a desired state relative to a target vehicle while meeting intermediate and final relative constraints. Intermediate and final relative constraints are necessary to meet a multitude of requirements, such as controlling the approach direction, ensuring the relative position is adequate for operation of space-to-space communication systems and relative sensors, providing fail-safe trajectory features, and providing contingency hold points. The effect of maneuvers on constraints is often coupled, so the maneuvers must be solved for as a set. For example, maneuvers that affect orbital energy change both the chaser's height and downrange position relative to the target vehicle. Rendezvous designers use experience and rules-of-thumb to design a sequence of maneuvers and constraints. A non-iterative method is presented for targeting a rendezvous scenario that includes a sequence of maneuvers and relative constraints. This method is referred to as Multi-Maneuver Clohessy-Wiltshire Targeting (MM_CW_TGT). When a single maneuver is targeted to a single relative position, the classic CW targeting solution is obtained. The MM_CW_TGT method involves manipulation of the CW state transition matrix to form a linear system. As a starting point for forming the algorithm, the effects of a series of impulsive maneuvers on the state are derived. Simple and moderately complex examples are used to demonstrate the pattern of the resulting linear system. The general form of the pattern results in an algorithm for formation of the linear system. The resulting linear system relates the effect of maneuver components and initial conditions on relative constraints specified by the rendezvous designer. Solution of the linear system includes the straightforward inverse of a square matrix.
Inversion of the square matrix is assured if the designer poses a controllable scenario, that is, a scenario in which the constraints can be met by the sequence of maneuvers. Matrices in the linear system depend on the designer's selection of maneuvers and constraints, but they are independent of the chaser's initial conditions. For scenarios where the sequence of maneuvers and constraints is fixed, the linear system can be formed and the square matrix inverted prior to real-time operations. Example solutions are presented for several rendezvous scenarios to illustrate the utility of the method. The MM_CW_TGT method has been used during the preliminary design of rendezvous scenarios and is expected to be useful for iterative methods in the generation of an initial guess and corrections.
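The single-maneuver special case (classic CW targeting) can be sketched as follows. This is an illustrative in-plane implementation, not the MM_CW_TGT code; the mean motion and relative offsets are hypothetical values.

```python
import numpy as np

# In-plane Clohessy-Wiltshire state-transition blocks for mean motion n and
# transfer time t: r(t) = Phi_rr r0 + Phi_rv v0 (x radial, y along-track).
def cw_blocks(n, t):
    s, c = np.sin(n * t), np.cos(n * t)
    phi_rr = np.array([[4 - 3 * c, 0.0],
                       [6 * (s - n * t), 1.0]])
    phi_rv = np.array([[s / n, 2 * (1 - c) / n],
                       [2 * (c - 1) / n, (4 * s - 3 * n * t) / n]])
    return phi_rr, phi_rv

# Classic single-maneuver targeting: solve a 2x2 system for the impulsive dv
# that carries the chaser from (r0, v0) to r_target after time t.
def cw_target(r0, v0, r_target, n, t):
    phi_rr, phi_rv = cw_blocks(n, t)
    v_req = np.linalg.solve(phi_rv, r_target - phi_rr @ r0)
    return v_req - v0

n = 0.00113                        # rad/s, roughly an ISS-like mean motion
r0 = np.array([1000.0, -2000.0])   # m, radial / along-track offsets
v0 = np.zeros(2)
dv = cw_target(r0, v0, np.zeros(2), n, t=1800.0)
phi_rr, phi_rv = cw_blocks(n, 1800.0)
r_final = phi_rr @ r0 + phi_rv @ (v0 + dv)  # should reach the origin
```

The multi-maneuver method stacks blocks like these for every maneuver/constraint pair into one larger square system.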
NASA Astrophysics Data System (ADS)
Pinty, B.; Clerici, M.; Andredakis, I.; Kaminski, T.; Taberner, M.; Verstraete, M. M.; Gobron, N.; Plummer, S.; Widlowski, J.-L.
2011-05-01
The two-stream model parameters and associated uncertainties retrieved by inversion against MODIS broadband visible and near-infrared white sky surface albedos were discussed in a companion paper. The present paper concentrates on the partitioning of the solar radiation fluxes delivered by the Joint Research Centre Two-stream Inversion Package (JRC-TIP). The estimation of the various flux fractions related to the vegetation and the background layers separately capitalizes on the probability density functions of the model parameters discussed in the companion paper. The propagation of uncertainties from the observations to the model parameters is achieved via the Hessian of the cost function and yields a covariance matrix of posterior parameter uncertainties. This matrix is propagated to the radiation fluxes via the model's Jacobian matrix of first derivatives. Results exhibit a rather good spatiotemporal consistency given that the prior values on the model parameters are not specified as a function of land cover type and/or vegetation phenological states. A specific investigation based on a scenario imposing stringent conditions of leaf absorbing and scattering properties highlights the impact of such constraints that are, as a matter of fact, currently adopted in vegetation index approaches. Special attention is also given to snow-covered and snow-contaminated areas since these regions encompass significant reflectance changes that strongly affect land surface processes. A definite asset of the JRC-TIP lies in its capability to control and ultimately relax a number of assumptions that are often implicit in traditional approaches. These features greatly help us understand the discrepancies between the different data sets of land surface properties and fluxes that are currently available. Through a series of selected examples, the inverse procedure implemented in the JRC-TIP is shown to be robust, reliable, and compliant with large-scale processing requirements. 
Furthermore, this package ensures the physical consistency between the set of observations, the two-stream model parameters, and radiation fluxes. It also documents the retrieval of associated uncertainties.
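The covariance propagation described above can be sketched generically (not JRC-TIP code): the posterior parameter covariance is the inverse Hessian of the cost function, pushed through the flux model's Jacobian as C_flux = J C_p Jᵀ. The Hessian and Jacobian values below are hypothetical.

```python
import numpy as np

# Propagate posterior parameter uncertainty to a derived flux quantity.
def propagate_covariance(hessian, jacobian):
    c_param = np.linalg.inv(hessian)       # posterior parameter covariance
    return jacobian @ c_param @ jacobian.T

H = np.array([[4.0, 1.0], [1.0, 3.0]])     # toy cost-function Hessian
J = np.array([[0.5, -0.2]])                # toy Jacobian of one flux (1x2)
c_flux = propagate_covariance(H, J)        # 1x1 flux variance
```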
Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods
NASA Astrophysics Data System (ADS)
Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.
2011-12-01
Geophysical and hydrogeological inverse problems often involve a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem at hand. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential to mitigate the curse of dimensionality by reducing the total number of unknowns while still describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performance of three different but closely related model reduction approaches: (1) PCA with geometric sampling (referred to as 'Method 1'), (2) PCA with MCMC sampling (referred to as 'Method 2'), and (3) PCA with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generate synthetic data with added noise and invert them under two situations: (1) the noisy data and the covariance matrix for the PCA analysis are consistent (the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient.
In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. In the biased case, however, only Method 3 correctly estimates all the unknown parameters; Methods 1 and 2 both provide incorrect values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix used for the PCA analysis is inconsistent with the true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.
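The PCA model-reduction idea can be sketched on a toy 1D prior (illustrative only, not the study's setup; the Gaussian covariance and its correlation length are assumptions): a spatially correlated prior covariance is compressed to its leading eigenvectors, so an m-dimensional model is parameterized by k ≪ m coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 200
x = np.arange(m)
# Gaussian-type prior covariance with a 20-cell correlation length.
cov = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 20.0 ** 2))
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]             # sort modes by variance
eigval, eigvec = eigval[order], eigvec[:, order]
k = 10                                       # retained principal components
basis = eigvec[:, :k] * np.sqrt(eigval[:k])
# Draw a random model from the prior and reconstruct it from k coefficients.
model = eigvec @ (np.sqrt(np.maximum(eigval, 0)) * rng.normal(size=m))
coeffs = (eigvec[:, :k].T @ model) / np.sqrt(eigval[:k])
reconstruction = basis @ coeffs
```

An inversion then samples (geometrically or by MCMC) over the k coefficients instead of the m grid values, which is where the dimensionality reduction comes from.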
Comparative study of inversion methods of three-dimensional NMR and sensitivity to fluids
NASA Astrophysics Data System (ADS)
Tan, Maojin; Wang, Peng; Mao, Keyu
2014-04-01
Three-dimensional nuclear magnetic resonance (3D NMR) logging can simultaneously measure transverse relaxation time (T2), longitudinal relaxation time (T1), and diffusion coefficient (D). These parameters can be used to distinguish fluids in porous reservoirs. For 3D NMR logging, the relaxation mechanism and mathematical model, a Fredholm equation, are introduced, and the inversion methods, including Singular Value Decomposition (SVD), Butler-Reeds-Dawson (BRD), and Global Inversion (GI), are studied in detail. In a simulation test, a multi-echo CPMG activation sequence is designed first, echo trains of ideal fluid models are synthesized, an inversion algorithm is then applied to these synthetic echo trains, and finally the T2-T1-D map is built. Furthermore, the SVD, BRD, and GI methods are each applied to the same fluid model, and their computing speed and inversion accuracy are compared and analyzed. When the optimal inversion method and matrix dimension are used, the inversion results are in good agreement with the assumed fluid model, which indicates that 3D NMR inversion is applicable to fluid typing of oil and gas reservoirs. Additionally, forward modeling and inversion tests are performed on oil-water and gas-water models, and the sensitivity to the fluids under different magnetic field gradients is examined in detail. The effect of magnetic field gradient on fluid typing in 3D NMR logging is studied and the optimal magnetic gradient is chosen.
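The SVD inversion of the Fredholm problem can be sketched on a toy 1D T2 analogue (illustrative, not the authors' 3D code; the kernel, T2 grid, and noise level are hypothetical): echoes E(t) = Σⱼ exp(−t/T2ⱼ) fⱼ are inverted with a truncated SVD to stabilize the ill-conditioned kernel.

```python
import numpy as np

t = np.linspace(0.001, 1.0, 200)             # echo times, s
T2 = np.logspace(-3, 0, 40)                  # relaxation-time bins, s
K = np.exp(-t[:, None] / T2[None, :])        # Fredholm (multi-exponential) kernel
f_true = np.exp(-0.5 * ((np.log10(T2) + 1.0) / 0.2) ** 2)  # one T2 peak
rng = np.random.default_rng(0)
echo = K @ f_true + rng.normal(scale=1e-3, size=t.size)
# Truncated SVD: drop singular values below 1% of the largest.
U, s, Vt = np.linalg.svd(K, full_matrices=False)
rank = int(np.sum(s > 1e-2 * s[0]))
f_est = Vt[:rank].T @ ((U[:, :rank].T @ echo) / s[:rank])
```

The 3D case replaces the single exponential kernel by a separable T2-T1-D kernel, but the truncation idea carries over.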
Using the in-line component for fixed-wing EM 1D inversion
NASA Astrophysics Data System (ADS)
Smiarowski, Adam
2015-09-01
Numerous authors have discussed the utility of multicomponent measurements. Generally speaking, for a vertically oriented dipole source, the measured vertical component couples to horizontal planar bodies while the horizontal in-line component couples best to vertical planar targets. For layered-earth cases, helicopter EM systems have little or no in-line component response, and as a result much of the in-line signal is due to receiver coil rotation and appears as noise. In contrast, the in-line component of a fixed-wing airborne electromagnetic (AEM) system with a large transmitter-receiver offset can be substantial, exceeding the vertical component in conductive areas. This paper compares the in-line and vertical responses of a fixed-wing AEM system using a half-space model and calculates sensitivity functions. The a posteriori inversion model parameter uncertainty matrix is calculated for a bathymetry model (conductive layer over a more resistive half-space) for two inversion cases: use of the vertical component alone is compared to joint inversion of the vertical and in-line components. The joint inversion is able to better resolve model parameters. An example is then provided using field data from a bathymetry survey to compare the joint inversion to the vertical-component-only inversion. For each inversion set, the difference between the inverted water depth and ship-measured bathymetry is calculated. The result is in general agreement with that expected from the a posteriori inversion model parameter uncertainty calculation.
NASA Astrophysics Data System (ADS)
Zhang, Wenxu; Peng, Bin; Han, Fangbin; Wang, Qiuru; Soh, Wee Tee; Ong, Chong Kim; Zhang, Wanli
2016-03-01
We develop a method for universally resolving the important issue of separating the inverse spin Hall effect (ISHE) from the spin rectification effect (SRE) signal. The method is based on the observation that the two effects depend differently on the spin injection direction: the ISHE is an odd function of the spin injection direction, while the SRE is independent of it. Thus, inverting the spin injection direction changes the sign of the ISHE voltage signal, while the SRE voltage remains unchanged. The method applies generally to separating the different voltage contributions without fitting them to special line shapes. This fast and simple method can be used over a wide frequency range and offers flexibility in sample preparation.
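The separation principle can be sketched directly (illustrative; the Lorentzian line shapes below are hypothetical and serve only to generate test voltages): measuring the voltage for both injection directions lets the odd ISHE part and the direction-independent SRE part be split by a half difference and half sum.

```python
import numpy as np

# Split measured voltages into odd (ISHE) and even (SRE) parts with respect
# to inversion of the spin injection direction.
def separate(v_plus, v_minus):
    ishe = 0.5 * (v_plus - v_minus)   # changes sign under inversion
    sre = 0.5 * (v_plus + v_minus)    # unchanged under inversion
    return ishe, sre

field = np.linspace(-50, 50, 101)                  # mT, hypothetical sweep
true_ishe = 2.0 / (1 + ((field - 5) / 10) ** 2)    # symmetric line, uV
true_sre = 1.5 * ((field - 5) / 10) / (1 + ((field - 5) / 10) ** 2)
ishe, sre = separate(true_ishe + true_sre, -true_ishe + true_sre)
```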
Intersection of Three Planes Revisited--An Algebraic Approach
ERIC Educational Resources Information Center
Trenkler, Götz; Trenkler, Dietrich
2017-01-01
Given three planes in space, a complete characterization of their intersection is provided. Special attention is paid to the case when the intersection set does not consist of one point only. Besides the vector cross product, the generalized inverse of a matrix is used extensively as a tool.
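The point-intersection case, and the fallback to a generalized inverse, can be sketched as follows (an illustrative implementation, not the article's derivation): three planes nᵢ·x = dᵢ intersect in a single point exactly when the matrix of normals is nonsingular; otherwise a generalized inverse characterizes the line or plane of intersection.

```python
import numpy as np

def planes_intersection(normals, offsets):
    N = np.asarray(normals, dtype=float)
    d = np.asarray(offsets, dtype=float)
    if np.linalg.matrix_rank(N) == 3:
        return np.linalg.solve(N, d)   # unique intersection point
    # Minimum-norm point on the intersection set (when it is consistent),
    # via the Moore-Penrose generalized inverse.
    return np.linalg.pinv(N) @ d

# x = 1, y = 2, z = 3 as three axis-aligned planes.
p = planes_intersection(np.eye(3), [1.0, 2.0, 3.0])
```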
Iterative combination of national phenotype, genotype, pedigree, and foreign information
USDA-ARS?s Scientific Manuscript database
Single step methods can combine all sources of information into accurate rankings for animals with and without genotypes. Equations that require inverting the genomic relationship matrix G work well with limited numbers of animals, but equivalent models without inversion are needed as numbers increa...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harms, Joseph; Wang, Tonghe; Petrongolo, Michael
Purpose: Dual-energy CT (DECT) expands applications of CT imaging in its capability to decompose CT images into material images. However, decomposition via direct matrix inversion leads to large noise amplification and limits quantitative use of DECT. Their group has previously developed a noise suppression algorithm via penalized weighted least-square optimization with edge-preservation regularization (PWLS-EPR). In this paper, the authors improve method performance using the same framework of penalized weighted least-square optimization but with similarity-based regularization (PWLS-SBR), which substantially enhances the quality of decomposed images by retaining a more uniform noise power spectrum (NPS). Methods: The design of PWLS-SBR is based on the fact that averaging pixels of similar materials gives a low-noise image. For each pixel, the authors calculate the similarity to other pixels in its neighborhood by comparing CT values. Using an empirical Gaussian model, the authors assign high/low similarity value to one neighboring pixel if its CT value is close/far to the CT value of the pixel of interest. These similarity values are organized in matrix form, such that multiplication of the similarity matrix to the image vector reduces image noise. The similarity matrices are calculated on both high- and low-energy CT images and averaged. In PWLS-SBR, the authors include a regularization term to minimize the L-2 norm of the difference between the images without and with noise suppression via similarity matrix multiplication. By using all pixel information of the initial CT images rather than just those lying on or near edges, PWLS-SBR is superior to the previously developed PWLS-EPR, as supported by comparison studies on phantoms and a head-and-neck patient.
Results: On the line-pair slice of the Catphan©600 phantom, PWLS-SBR outperforms PWLS-EPR and retains spatial resolution of 8 lp/cm, comparable to the original CT images, even at 90% reduction in noise standard deviation (STD). Similar performance on spatial resolution is observed on an anthropomorphic head phantom. In addition, results of PWLS-SBR show substantially improved image quality due to preservation of image NPS. On the Catphan©600 phantom, NPS using PWLS-SBR has a correlation of 93% with that via direct matrix inversion, while the correlation drops to −52% for PWLS-EPR. Electron density measurement studies indicate high accuracy of PWLS-SBR. On seven different materials, the measured electron densities calculated from the decomposed material images using PWLS-SBR have a root-mean-square error (RMSE) of 1.20%, while the results of PWLS-EPR have a RMSE of 2.21%. In the study on a head-and-neck patient, PWLS-SBR is shown to reduce noise STD by a factor of 3 on material images with image qualities comparable to CT images, whereas fine structures are lost in the PWLS-EPR result. Additionally, PWLS-SBR better preserves low contrast on the tissue image. Conclusions: The authors propose improvements to the regularization term of an optimization framework which performs iterative image-domain decomposition for DECT with noise suppression. The regularization term avoids calculation of image gradient and is based on pixel similarity. The proposed method not only achieves a high decomposition accuracy, but also improves over the previous algorithm on NPS as well as spatial resolution.
Harms, Joseph; Wang, Tonghe; Petrongolo, Michael; Niu, Tianye; Zhu, Lei
2016-01-01
Purpose: Dual-energy CT (DECT) expands applications of CT imaging in its capability to decompose CT images into material images. However, decomposition via direct matrix inversion leads to large noise amplification and limits quantitative use of DECT. Their group has previously developed a noise suppression algorithm via penalized weighted least-square optimization with edge-preservation regularization (PWLS-EPR). In this paper, the authors improve method performance using the same framework of penalized weighted least-square optimization but with similarity-based regularization (PWLS-SBR), which substantially enhances the quality of decomposed images by retaining a more uniform noise power spectrum (NPS). Methods: The design of PWLS-SBR is based on the fact that averaging pixels of similar materials gives a low-noise image. For each pixel, the authors calculate the similarity to other pixels in its neighborhood by comparing CT values. Using an empirical Gaussian model, the authors assign high/low similarity value to one neighboring pixel if its CT value is close/far to the CT value of the pixel of interest. These similarity values are organized in matrix form, such that multiplication of the similarity matrix to the image vector reduces image noise. The similarity matrices are calculated on both high- and low-energy CT images and averaged. In PWLS-SBR, the authors include a regularization term to minimize the L-2 norm of the difference between the images without and with noise suppression via similarity matrix multiplication. By using all pixel information of the initial CT images rather than just those lying on or near edges, PWLS-SBR is superior to the previously developed PWLS-EPR, as supported by comparison studies on phantoms and a head-and-neck patient. 
Results: On the line-pair slice of the Catphan©600 phantom, PWLS-SBR outperforms PWLS-EPR and retains spatial resolution of 8 lp/cm, comparable to the original CT images, even at 90% reduction in noise standard deviation (STD). Similar performance on spatial resolution is observed on an anthropomorphic head phantom. In addition, results of PWLS-SBR show substantially improved image quality due to preservation of image NPS. On the Catphan©600 phantom, NPS using PWLS-SBR has a correlation of 93% with that via direct matrix inversion, while the correlation drops to −52% for PWLS-EPR. Electron density measurement studies indicate high accuracy of PWLS-SBR. On seven different materials, the measured electron densities calculated from the decomposed material images using PWLS-SBR have a root-mean-square error (RMSE) of 1.20%, while the results of PWLS-EPR have a RMSE of 2.21%. In the study on a head-and-neck patient, PWLS-SBR is shown to reduce noise STD by a factor of 3 on material images with image qualities comparable to CT images, whereas fine structures are lost in the PWLS-EPR result. Additionally, PWLS-SBR better preserves low contrast on the tissue image. Conclusions: The authors propose improvements to the regularization term of an optimization framework which performs iterative image-domain decomposition for DECT with noise suppression. The regularization term avoids calculation of image gradient and is based on pixel similarity. The proposed method not only achieves a high decomposition accuracy, but also improves over the previous algorithm on NPS as well as spatial resolution. PMID:27147376
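The noise amplification caused by direct matrix inversion can be demonstrated with a toy per-pixel two-material model (illustrative; the attenuation coefficients and noise level are hypothetical, and this is not the authors' algorithm): because the 2×2 decomposition matrix is nearly singular, its inverse magnifies CT noise in the material images.

```python
import numpy as np

# Per-pixel model: [mu_high; mu_low] = A [x_bone; x_water].
A = np.array([[0.30, 0.19],    # hypothetical attenuation at high kVp
              [0.55, 0.22]])   # hypothetical attenuation at low kVp
A_inv = np.linalg.inv(A)

rng = np.random.default_rng(0)
true = np.array([0.2, 0.8])    # bone/water fractions at one pixel
mu = A @ true
# Many noisy realizations of the dual-energy measurement at this pixel.
noisy = mu + rng.normal(scale=0.005, size=(10000, 2))
decomposed = noisy @ A_inv.T   # direct matrix-inversion decomposition
# Ratio of material-image noise to CT-image noise.
amplification = decomposed.std(axis=0).mean() / 0.005
```

The decomposition is unbiased but noisy, which is precisely what regularized schemes such as PWLS-SBR are designed to suppress.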
NASA Astrophysics Data System (ADS)
Prakash, Sai Sivasankaran
2001-11-01
Time-sectioning cryogenic scanning electron microscopy (cryo-SEM) is a unique method of visualizing how the microstructure of liquid coatings evolves during processing. Time-sectioning means rapidly freezing (nearly) identical specimens at successively later stages of the process; doing this requires that coating and drying be well controlled in the dry phase inversion process, and likewise solvent exchange in the wet phase inversion process. With control, frozen specimens are fractured, etched by limited sublimation, sputter-coated, and imaged at temperatures of ca. −175°C. The coatings examined were of cellulose acetate, of high and low molecular weights, and polysulfone in mixed solvents and nonsolvents: acetone and water with cellulose acetate undergoing dry phase inversion; and tetrahydrofuran, dimethylacetamide, ethanol with polysulfone undergoing dry-wet phase inversion. All coatings, cast on silicon substrates, were initially homogeneous. The initial compositions of the high and low molecular weight cellulose acetate ternary solutions were "off-critical" and "near-critical", respectively, connoting their proximities to the critical or plait point of the phase diagram. The initial composition of the polysulfone quaternary solution was located near the binodal of the pseudo-ternary phase diagram. It appeared that as the higher molecular weight cellulose acetate coating dries, it nucleates and grows polymer-poor droplets that coalesce into a bicontinuous structure underlying a thin, dense skin. Bicontinuity of structure was verified by stereomicroscopy of the dry sample. The lower molecular weight cellulose acetate coating phase-separates, seemingly spinodally, directly into a bicontinuous structure whose polymer-rich network, stressed by frustrated in-plane shrinkage, ruptures far beneath the skin in some locales to form macrovoids.
When, after partial drying, the polysulfone coating was immersed in a bath of water, a nonsolvent, it appeared to swell in thickness as it phase-separates. A dense skin, thinner than a micron, appeared to overlie a two-phase substructure that is punctuated with pear-shaped macrovoids. At early immersion times, this substructure is visibly bicontinuous or open-celled near the bath-side, and dispersion-like (droplets dispersed in a polymeric matrix) or closed-celled near the substrate-side. Moreover, in the bicontinuous regions, length-scales of the individual phases seem to increase across the coating thickness from the bath-side to the substrate-side. After prolonged immersion, the substructure, excluding the macrovoids, is entirely bicontinuous. The bicontinuity presumably results from a combination of spinodal decomposition and nucleation and growth plus coalescence. Quite strikingly, macrovoids are present exclusively in regions where phases are bicontinuous, and are absent where droplets are dispersed in the polymeric matrix. Evidence suggests that macrovoids result from an instability caused by a progressive rupture of polymer-rich links deeper and deeper beneath the skin, aggravated by stress localization in the rupturing network and a buildup of pressure in the polymer-poor phase (the pore space), as suspected by Grobe and Meyer in 1959.
Inversion of airborne tensor VLF data using integral equations
NASA Astrophysics Data System (ADS)
Kamm, Jochen; Pedersen, Laust B.
2014-08-01
The Geological Survey of Sweden has been collecting airborne tensor very low frequency data (VLF) over several decades, covering large parts of the country. The data has been an invaluable source of information for identifying conductive structures that can among other things be related to water-filled fault zones, wet sediments that fill valleys or ore mineralizations. Because the method only uses two differently polarized plane waves of very similar frequency, vertical resolution is low and interpretation is in most cases limited to maps that are directly derived from the data. Occasionally, 2-D inversion is carried out along selected profiles. In this paper, we present for the first time a 3-D inversion for tensor VLF data in order to further increase the usefulness of the data set. The inversion is performed using a non-linear conjugate gradient scheme (Polak-Ribière) with an inexact line-search. The gradient is obtained by an algebraic adjoint method that requires one additional forward calculation involving the adjoint system matrix. The forward modelling is based on integral equations with an analytic formulation of the half-space Green's tensor. It avoids typically required Hankel transforms and is particularly amenable to singularity removal prior to the numerical integration over the volume elements. The system is solved iteratively, thus avoiding construction and storage of the dense system matrix. By using fast 3-D Fourier transforms on nested grids, subsequently farther away interactions are represented with less detail and therefore with less computational effort, enabling us to bridge the gap between the relatively short wavelengths of the fields (tens of metres) and the large model dimensions (several square kilometres). We find that the approximation of the fields can be off by several per cent, yet the transfer functions in the air are practically unaffected. 
We verify our code using synthetic calculations from well-established 2-D methods, and trade modelling accuracy off against computational effort in order to keep the inversion feasible in both respects. Our compromise is to limit the permissible resistivity to not fall below 100 Ωm to maintain computational domains as large as 10 × 10 km2 and computation times on the order of a few hours on standard PCs. We investigate the effect of possible local violations of these limits. Even though the conductivity magnitude can then not be recovered correctly, we do not observe any structural artefacts related to this in our tests. We invert a data set from northern Sweden, where we find an excellent agreement of known geological features, such as contacts or fault zones, with elongated conductive structures, while high resistivity is encountered in probably less disturbed geology, often related to topographic highs, which have survived predominantly glacial erosion processes. As expected from synthetic studies, the resolution is laterally high, but vertically limited down to the top of conductive structures.
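The optimization machinery named above, a Polak-Ribière nonlinear conjugate gradient with an inexact line search, can be sketched generically (illustrative, not the paper's integral-equation code; the toy quadratic misfit is an assumption):

```python
import numpy as np

# Polak-Ribiere+ nonlinear conjugate gradient with Armijo backtracking
# (an inexact line search).
def ncg_pr(f, grad, x0, iters=200, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        alpha, fx, slope = 1.0, f(x), g @ d
        # Shrink the step until sufficient decrease is achieved.
        while f(x + alpha * d) > fx + 1e-4 * alpha * slope and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        # Polak-Ribiere+ beta; max(0, .) restarts to steepest descent.
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x      # toy quadratic misfit
x_min = ncg_pr(f, lambda x: A @ x - b, np.zeros(2))
```

In the paper the gradient evaluation is the expensive part, obtained by one adjoint forward calculation per iteration.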
An ultra-wideband microwave tomography system: preliminary results.
Gilmore, Colin; Mojabi, Puyan; Zakaria, Amer; Ostadrahimi, Majid; Kaye, Cam; Noghanian, Sima; Shafai, Lotfollah; Pistorius, Stephen; LoVetri, Joe
2009-01-01
We describe a 2D wide-band multi-frequency microwave imaging system intended for biomedical imaging. The system is capable of collecting data from 2-10 GHz, with 24 antenna elements connected to a vector network analyzer via a 2 × 24 port matrix switch. Using two different nonlinear reconstruction schemes, the Multiplicative-Regularized Contrast Source Inversion method and an enhanced version of the Distorted Born Iterative Method, we show preliminary imaging results from dielectric phantoms for which data were collected from 3-6 GHz. The early inversion results show that the system is capable of quantitatively reconstructing dielectric objects.
Decision Support Tools for Munitions Response Performance Prediction and Risk Assessment
2013-01-01
with G, the forward modeling matrix, implicitly dependent on target location. The least squares model estimate is then given by m̂ = (GᵀG)⁻¹Gᵀd_obs = G†d_obs (6), with G† = (GᵀG)⁻¹Gᵀ (7) denoting the pseudo-inverse. When inverting observed field data for a sensor with tri-axial transmit and receive coils ... ities can be expressed as cov(L̂) = β G†(r) cov(d) (G†(r))ᵀ βᵀ = β G†(r) G_eq α cov(L) αᵀ G_eqᵀ (G†(r))ᵀ βᵀ (53), where the pseudo-inverse is G† = (GᵀG)⁻¹Gᵀ
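The estimate and pseudo-inverse in Eqs. (6) and (7) can be checked numerically with a generic sketch (the random forward matrix, true model, and noise level are hypothetical): the normal-equation solution coincides with applying the Moore-Penrose pseudo-inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(20, 3))          # overdetermined forward modeling matrix
m_true = np.array([1.0, -2.0, 0.5])
d_obs = G @ m_true + rng.normal(scale=0.01, size=20)

# Eq. (6): normal-equation least-squares estimate (G^T G)^(-1) G^T d_obs.
m_normal = np.linalg.solve(G.T @ G, G.T @ d_obs)
# Eq. (7): the same estimate via the pseudo-inverse G+.
m_pinv = np.linalg.pinv(G) @ d_obs
```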
[Treatment of surface burns with proteolytic enzymes: mathematic description of lysis kinetics].
Domogatskaia, A S; Domogatskiĭ, S P; Ruuge, E K
2003-01-01
The lysis of necrotic tissue by a proteolytic enzyme applied to the surface of a burn wound was studied. A mathematical model was proposed, which describes changes in the thickness of necrotic tissue as a function of the proteolytic activity of the enzyme. The model takes into account the inward-directed diffusion of the enzyme, the counterflow of interstitial fluid (exudates) containing specific inhibitors, and the extracellular matrix proteolysis. It was shown in terms of the quasi-stationary approach that the thickness of the necrotic tissue layer decreases exponentially with time; i.e., the lysis slows down as the thickness of the necrotic tissue layer decreases. The dependence of the characteristic time of this decrease on enzyme concentration was obtained. It was shown that, at high enzyme concentrations (more than 5 mg/ml), the entire time of lysis (after the establishment of quasi-stationary equilibrium) is inversely proportional to the concentration of the enzyme.
Vectorial laws of refraction and reflection using the cross product and dot product.
Tkaczyk, Eric R
2012-03-01
We demonstrate that published vectorial laws of reflection and refraction of light based solely on the cross product do not, in general, uniquely determine the direction of the reflected and refracted waves without additional information. This is because the cross product does not have a unique inverse operation, which is explained in this Letter in linear algebra terms. However, a vector is in fact uniquely determined if both the cross product (vector product) and dot product (scalar product) with a known vector are specified, which can be written as a single equation with a left-invertible matrix. It is thus possible to amend the vectorial laws of reflection and refraction to incorporate both the cross and dot products for a complete specification with unique solution. This enables highly efficient, unambiguous computation of reflected and refracted wave vectors from the incident wave and surface normal. © 2012 Optical Society of America
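The unique recovery from combined cross and dot products can be sketched as follows (an illustrative implementation of the Letter's idea; the normal and incident vector are hypothetical): stacking the cross-product matrix [n]ₓ with the row nᵀ gives a left-invertible 4×3 matrix, so v is uniquely determined by n×v together with n·v.

```python
import numpy as np

# Skew-symmetric matrix such that skew(n) @ v == np.cross(n, v).
def skew(n):
    return np.array([[0.0, -n[2], n[1]],
                     [n[2], 0.0, -n[0]],
                     [-n[1], n[0], 0.0]])

# Recover v from c = n x v and s = n . v via the left-invertible 4x3 system.
def recover(n, cross, dot):
    M = np.vstack([skew(n), n[None, :]])      # rank 3 for any nonzero n
    rhs = np.concatenate([cross, [dot]])
    v, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return v

n = np.array([0.0, 0.0, 1.0])                 # surface normal
v = np.array([0.6, 0.0, -0.8])                # incident unit wave vector
v_rec = recover(n, np.cross(n, v), np.dot(n, v))
```

The cross product alone leaves the component of v along n undetermined; the dot-product row supplies exactly that missing component.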
NASA Technical Reports Server (NTRS)
Siwakosit, W.; Hess, R. A.; Bacon, Bart (Technical Monitor); Burken, John (Technical Monitor)
2000-01-01
A multi-input, multi-output reconfigurable flight control system design utilizing a robust controller and an adaptive filter is presented. The robust control design consists of a reduced-order, linear dynamic inversion controller with an outer-loop compensation matrix derived from Quantitative Feedback Theory (QFT). A principal feature of the scheme is placement of the adaptive filter in series with the QFT compensator, thus exploiting the inherent robustness of the nominal flight control system in the presence of plant uncertainties. An example of the scheme is presented in a pilot-in-the-loop computer simulation using a simplified model of the lateral-directional dynamics of the NASA F18 High Angle of Attack Research Vehicle (HARV) that included nonlinear anti-windup logic and actuator limitations. Prediction of handling qualities and pilot-induced oscillation tendencies in the presence of these nonlinearities is included in the example.
Organo-erbium systems for optical amplification at telecommunications wavelengths.
Ye, H Q; Li, Z; Peng, Y; Wang, C C; Li, T Y; Zheng, Y X; Sapelkin, A; Adamopoulos, G; Hernández, I; Wyatt, P B; Gillin, W P
2014-04-01
Modern telecommunications rely on the transmission and manipulation of optical signals. Optical amplification plays a vital part in this technology, as all components in a real telecommunications system produce some loss. The two main issues with present amplifiers, which rely on erbium ions in a glass matrix, are the difficulty of integration onto a single substrate and the need for high pump power densities to produce gain. Here we show a potential organic optical amplifier material that demonstrates population inversion when pumped from above using low-power visible light. This system is integrated into an organic light-emitting diode, demonstrating that electrical pumping can be achieved. This opens the possibility of direct electrically driven optical amplifiers and optical circuits. Our results provide an alternative approach to producing low-cost integrated optics that is compatible with existing silicon photonics and a different route to an effective integrated optics technology.
Full-wave multiscale anisotropy tomography in Southern California
NASA Astrophysics Data System (ADS)
Lin, Yu-Pin; Zhao, Li; Hung, Shu-Huei
2014-12-01
Understanding the spatial variation of anisotropy in the upper mantle is important for characterizing the lithospheric deformation and mantle flow dynamics. In this study, we apply a full-wave approach to image the upper-mantle anisotropy in Southern California using 5954 SKS splitting data. Three-dimensional sensitivity kernels combined with a wavelet-based model parameterization are adopted in a multiscale inversion. Spatial resolution lengths are estimated based on a statistical resolution matrix approach, showing a finest resolution length of ~25 km in regions with densely distributed stations. The anisotropic model displays structural fabric in relation to surface geologic features such as the Salton Trough, the Transverse Ranges, and the San Andreas Fault. The depth variation of anisotropy does not suggest a lithosphere-asthenosphere decoupling. At long wavelengths, the fast directions of anisotropy are aligned with the absolute plate motion inside the Pacific and North American plates.
Stable Estimation of a Covariance Matrix Guided by Nuclear Norm Penalties
Chi, Eric C.; Lange, Kenneth
2014-01-01
Estimation of a covariance matrix or its inverse plays a central role in many statistical methods. For these methods to work reliably, estimated matrices must not only be invertible but also well-conditioned. The current paper introduces a novel prior to ensure a well-conditioned maximum a posteriori (MAP) covariance estimate. The prior shrinks the sample covariance estimator towards a stable target and leads to a MAP estimator that is consistent and asymptotically efficient. Thus, the MAP estimator gracefully transitions towards the sample covariance matrix as the number of samples grows relative to the number of covariates. The utility of the MAP estimator is demonstrated in two standard applications – discriminant analysis and EM clustering – in this sampling regime. PMID:25143662
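The shrinkage idea behind such estimators can be illustrated with a simple linear combination of the sample covariance and a scaled-identity target. This is a generic sketch of shrinkage toward a stable target, not the paper's nuclear-norm-penalized MAP estimator; the shrinkage weight `alpha` is a hypothetical parameter.

```python
import numpy as np

def shrink_covariance(X, alpha=0.2):
    """Shrink the sample covariance of X (n samples x p variables) toward a
    scaled identity so the result is invertible and well-conditioned even
    when n < p. A stand-in for the MAP estimator described in the abstract."""
    S = np.cov(X, rowvar=False)
    mu = np.trace(S) / S.shape[0]          # average eigenvalue: scale of the target
    return (1.0 - alpha) * S + alpha * mu * np.eye(S.shape[0])

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))          # fewer samples than variables: S is singular
Sigma = shrink_covariance(X)
```

With 20 samples of 50 variables the raw sample covariance is singular, while the shrunk estimate is positive definite and has a finite condition number.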
Incorporation of causative quantitative trait nucleotides in single-step GBLUP.
Fragomeni, Breno O; Lourenco, Daniela A L; Masuda, Yutaka; Legarra, Andres; Misztal, Ignacy
2017-07-26
Much effort is put into identifying causative quantitative trait nucleotides (QTN) in animal breeding, empowered by the availability of dense single nucleotide polymorphism (SNP) information. Genomic selection using traditional SNP information is easily implemented for any number of genotyped individuals using single-step genomic best linear unbiased predictor (ssGBLUP) with the algorithm for proven and young (APY). Our aim was to investigate whether ssGBLUP is useful for genomic prediction when some or all QTN are known. Simulations included 180,000 animals across 11 generations. Phenotypes were available for all animals in generations 6 to 10. Genotypes for 60,000 SNPs across 10 chromosomes were available for 29,000 individuals. The genetic variance was fully accounted for by 100 or 1000 biallelic QTN. Raw genomic relationship matrices (GRM) were computed from (a) unweighted SNPs, (b) unweighted SNPs and causative QTN, (c) SNPs and causative QTN weighted with results obtained with genome-wide association studies, (d) unweighted SNPs and causative QTN with simulated weights, (e) only unweighted causative QTN, (f-h) as in (b-d) but using only the top 10% causative QTN, and (i) using only causative QTN with simulated weight. Predictions were computed by pedigree-based BLUP (PBLUP) and ssGBLUP. Raw GRM were blended with 1 or 5% of the numerator relationship matrix, or 1% of the identity matrix. Inverses of GRM were obtained directly or with APY. Accuracy of breeding values for 5000 genotyped animals in the last generation with PBLUP was 0.32, and for ssGBLUP it increased to 0.49 with an unweighted GRM, 0.53 after adding unweighted QTN, 0.63 when QTN weights were estimated, and 0.89 when QTN weights were based on true effects known from the simulation. When the GRM was constructed from causative QTN only, accuracy was 0.95 and 0.99 with blending at 5 and 1%, respectively. Accuracies simulating 1000 QTN were generally lower, with a similar trend. 
Accuracies using the APY inverse were equal or higher than those with a regular inverse. Single-step GBLUP can account for causative QTN via a weighted GRM. Accuracy gains are maximum when variances of causative QTN are known and blending is at 1%.
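The weighted genomic relationship matrices and the blending step above can be sketched in a few lines. This follows the common VanRaden-style GRM construction used in ssGBLUP; the toy genotype data, the weight handling, and the 1% identity blending value are illustrative assumptions, not the study's exact configuration.

```python
import numpy as np

def grm(M, weights=None):
    """VanRaden-style genomic relationship matrix from a genotype matrix M
    (individuals x markers, coded 0/1/2), with optional per-marker weights
    of the kind used for the weighted GRMs in the abstract."""
    p = M.mean(axis=0) / 2.0                      # observed allele frequencies
    Z = M - 2.0 * p                               # centered genotypes
    w = np.ones(M.shape[1]) if weights is None else np.asarray(weights)
    denom = 2.0 * np.sum(w * p * (1.0 - p))
    return (Z * w) @ Z.T / denom

def blend(G, A, beta=0.01):
    """Blend the raw GRM with a fraction beta of another relationship matrix
    (numerator relationship matrix or identity) so that it is invertible,
    as in the 1% / 5% blending described in the abstract."""
    return (1.0 - beta) * G + beta * A

rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(8, 200)).astype(float)  # 8 animals, 200 markers (toy)
G = blend(grm(M), np.eye(8), beta=0.01)
```

Blending guarantees a positive-definite matrix whose inverse (direct or via APY) exists.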
Round-off errors in cutting plane algorithms based on the revised simplex procedure
NASA Technical Reports Server (NTRS)
Moore, J. E.
1973-01-01
This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverses of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for mitigating these errors are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting, or improving, the approximate inverse of a matrix. The results indicate that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10^-12 is reasonable.
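The two procedures can be sketched together. Newton–Schulz iteration is a classical technique for improving an approximate inverse and stands in here for the report's unspecified reinversion method; the tolerance value and test matrix are illustrative.

```python
import numpy as np

def reinvert(A, X, tol=1e-13):
    """One Newton-Schulz refinement step X <- X(2I - AX) for an approximate
    inverse X of A (a classical reinversion technique, applied once per
    computed inverse as the report suggests), followed by rounding
    near-zero entries to zero with a small tolerance factor."""
    X = X @ (2.0 * np.eye(A.shape[0]) - A @ X)
    X[np.abs(X) < tol] = 0.0
    return X

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6)) + 6.0 * np.eye(6)            # well-conditioned test matrix
X0 = np.linalg.inv(A) + 1e-4 * rng.standard_normal((6, 6))   # perturbed approximate inverse
X1 = reinvert(A, X0)
```

One refinement step roughly squares the residual norm ||AX - I||, which is why a single application per inverse already suppresses most of the accumulated error.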
Analysis of Raman lasing without inversion
NASA Astrophysics Data System (ADS)
Sheldon, Paul Martin
1999-12-01
Properties of lasing without inversion were studied analytically and numerically using the Maple computer algebra system. Gain for a probe electromagnetic field without population inversion in detuned three-level atomic schemes has been found. Matter density matrix dynamics and coherence are explored using Pauli matrices in 2-level systems and Gell-Mann matrices in 3-level systems. It is shown that extreme inversion produces no coherence and hence no lasing. A unitary transformation from the strict field-matter Hamiltonian to an effective two-photon Raman Hamiltonian for multilevel systems has been derived. Feynman diagrams inherent in the derivation show interesting physics. An additional picture change was achieved and showed that cw gain is possible. Properties of a Raman-like laser based on injection of 3-level coherently driven Λ-type atoms, whose Hamiltonian contains the Raman Hamiltonian and a microwave coupling of the two bottom states, have been studied in the limits of small and large photon numbers in the drive field. Another picture change removed the microwave coupler to all orders and simplified the analysis. New possibilities of inversionless generation were found.
NASA Astrophysics Data System (ADS)
Schanz, Martin; Ye, Wenjing; Xiao, Jinyou
2016-04-01
Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations, but a finer mesh, than the convolution quadrature method to obtain the same level of accuracy. If fast methods such as the fast multipole method are additionally used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies necessary for the calculation, which improves the conditioning of the system matrix.
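The exponential-window discrete Fourier inversion can be sketched as follows: sample the Laplace transform along a line shifted into the right half-plane by a damping parameter sigma, apply an inverse FFT, and undo the damping. The transform F(s) = 1/(s + 1), the window length, and sigma are assumed test values, not the paper's BEM setting.

```python
import numpy as np

# Known test transform: F(s) = 1/(s + 1), so the exact time signal is f(t) = exp(-t).
F = lambda s: 1.0 / (s + 1.0)

N, T = 4096, 40.96                  # number of samples and time window (assumed values)
sigma = 0.2                         # exponential-window damping parameter (assumed)
t = np.arange(N) * (T / N)
omega = 2.0 * np.pi * np.fft.fftfreq(N, d=T / N)   # sampled frequencies on the line
f = np.exp(sigma * t) * (N / T) * np.fft.ifft(F(sigma + 1j * omega)).real
```

The damping factor exp(-sigma*t) suppresses the wrap-around (periodization) error of the discrete transform; multiplying by exp(sigma*t) afterwards recovers the undamped signal.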
Joint inversion of acoustic and resistivity data for the estimation of gas hydrate concentration
Lee, Myung W.
2002-01-01
Downhole log measurements, such as acoustic or electrical resistivity logs, are frequently used to estimate in situ gas hydrate concentrations in the pore space of sedimentary rocks. Usually the gas hydrate concentration is estimated separately from each log measurement. However, the measurements are related to each other through the gas hydrate concentration, so the concentration can be estimated by jointly inverting the available logs. Because the magnitudes of acoustic slowness and resistivity values differ by more than an order of magnitude, a least-squares method weighted by the inverse of the observed values is attempted. Estimating the resistivity of connate water and the gas hydrate concentration simultaneously is problematic, because the resistivity of connate water is independent of the acoustics. To overcome this problem, a coupling constant is introduced into the Jacobian matrix. When different logs are used to estimate gas hydrate concentration, a joint inversion of the different measurements is preferable to averaging the individual inversion results.
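The inverse-of-observation weighting can be sketched with a linear toy problem: scaling each equation by 1/observation makes data that differ by orders of magnitude contribute comparably to the misfit. The forward matrix and model are synthetic stand-ins, not the acoustic/resistivity rock physics.

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.uniform(0.5, 1.5, size=(40, 3))   # synthetic linear forward operator
m_true = np.array([2.0, 0.5, 1.0])        # model (e.g. hydrate concentration terms)
d = G @ m_true                            # noise-free synthetic data, all positive
d[:20] *= 1000.0                          # first "log type" is ~3 orders larger
G[:20] *= 1000.0                          # (resistivity-like vs slowness-like scales)

W = 1.0 / d                               # weight each row by the inverse observation
m_est, *_ = np.linalg.lstsq(W[:, None] * G, W * d, rcond=None)
```

Without the weighting, the large-magnitude rows would dominate the least-squares objective and the small-magnitude log would be effectively ignored.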
Regenerator filled with a matrix of polycrystalline iron whiskers
NASA Astrophysics Data System (ADS)
Eder, F. X.; Appel, H.
1982-08-01
In thermal regenerators, several parameters were optimized: the convection coefficient, the surface of the heat-accumulating matrix, the matrix density and heat capacity, and the frequency of cycle inversions. The variation of heat capacity with working temperature was also computed. Polycrystalline iron whiskers prove a good compromise as a matrix for heat regenerators at working temperatures ranging from 300 to 80 K. They were compared with wire mesh screens and microspheres of bronze and stainless steel. For these structures and materials, thermal conductivity, pressure drop, heat transfer, and yield were calculated and related to the experimental values. As heat-transport gases, helium, argon, and dry nitrogen were applied at pressures up to 20 bar. The experimental and theoretical studies result in a set of formulas for calculating the pressure drop, heat capacity, and heat transfer rate of a given thermal regenerator as a function of mass flow. It is shown that the efficiency of a whisker matrix depends strongly on gas pressure and composition. Iron whiskers make a good matrix, with heat capacities of kW/cu cm per K, but their relatively high pressure drop may, at low pressures, be a limitation. A regenerator expansion machine is described.
Teaching Linear Algebra: Proceeding More Efficiently by Staying Comfortably within Z
ERIC Educational Resources Information Center
Beaver, Scott
2015-01-01
For efficiency in a linear algebra course the instructor may wish to avoid the undue arithmetical distractions of rational arithmetic. In this paper we explore how to write fraction-free problems of various types including elimination, matrix inverses, orthogonality, and the (non-normalizing) Gram-Schmidt process.
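One concrete way to "stay comfortably within Z" in elimination is the Bareiss fraction-free algorithm, in which every intermediate entry is an exact integer and the final pivot is the determinant. This sketch illustrates the idea; it is a standard algorithm, not necessarily the paper's classroom procedure.

```python
def bareiss_det(M):
    """Fraction-free (Bareiss) Gaussian elimination on an integer matrix:
    all intermediate entries remain exact integers, and the last pivot
    (with the row-swap sign) equals the determinant."""
    A = [list(row) for row in M]
    n, sign, prev = len(A), 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:                      # pivot: swap in a row with a nonzero entry
            for r in range(k + 1, n):
                if A[r][k] != 0:
                    A[k], A[r], sign = A[r], A[k], -sign
                    break
            else:
                return 0                      # whole column is zero: singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # The division by the previous pivot is exact: Bareiss
                # guarantees divisibility, so // introduces no fractions.
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[n - 1][n - 1]
```

Because no rational arithmetic ever appears, students can check every intermediate step by hand with integer arithmetic only.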
The effect of the H-1 scaling factors τ and ω on the structure of H in the single-step procedure.
Martini, Johannes W R; Schrauf, Matias F; Garcia-Baccino, Carolina A; Pimentel, Eduardo C G; Munilla, Sebastian; Rogberg-Muñoz, Andres; Cantet, Rodolfo J C; Reimer, Christian; Gao, Ning; Wimmer, Valentin; Simianer, Henner
2018-04-13
The single-step covariance matrix H combines the pedigree-based relationship matrix [Formula: see text] with the more accurate information on realized relatedness of genotyped individuals represented by the genomic relationship matrix [Formula: see text]. In particular, to improve convergence behavior of iterative approaches and to reduce inflation, two weights [Formula: see text] and [Formula: see text] have been introduced in the definition of [Formula: see text], which blend the inverse of a part of [Formula: see text] with the inverse of [Formula: see text]. Since the definition of this blending is based on the equation describing [Formula: see text], its impact on the structure of [Formula: see text] is not obvious. In a joint discussion, we considered the question of the shape of [Formula: see text] for non-trivial [Formula: see text] and [Formula: see text]. Here, we present the general matrix [Formula: see text] as a function of these parameters and discuss its structure and properties. Moreover, we screen for optimal values of [Formula: see text] and [Formula: see text] with respect to predictive ability, inflation and iterations up to convergence on a well investigated, publicly available wheat data set. Our results may help the reader to develop a better understanding for the effects of changes of [Formula: see text] and [Formula: see text] on the covariance model. In particular, we give theoretical arguments that as a general tendency, inflation will be reduced by increasing [Formula: see text] or by decreasing [Formula: see text].
NASA Technical Reports Server (NTRS)
Ma, Q.; Boulet, C.; Tipping, R. H.
2017-01-01
Line shape parameters, including the half-widths and the off-diagonal elements of the relaxation matrix, have been calculated for self-broadened NH3 lines in the perpendicular v4 band. As in the pure rotational and the parallel v1 bands, the small inversion splitting in this band causes a complete failure of the isolated line approximation. As a result, one has to use formalisms that do not rely on this approximation. However, due to differences between parallel and perpendicular bands of NH3, the applicability of the formalism used in our previous studies of the v1 band and other parallel bands must be carefully verified. We have found that, as long as the potential models only contain components with K1 = K2 = 0, whose matrix elements require the selection rule Δk = 0, the formalism is applicable to the v4 band with some minor adjustments. Based on both theoretical considerations and results from numerical calculations, the non-diagonality of the relaxation matrices in all the PP, RP, PQ, RQ, PR, and RR branches is discussed. Theoretically calculated self-broadened half-widths are compared with measurements and the values listed in HITRAN 2012. With respect to line coupling effects, we have compared our calculated intra-doublet off-diagonal elements of the relaxation matrix with reliable measurements carried out in the PP branch, where the spectral environment is favorable. The agreement is rather good, since our results reproduce well the observed k and j dependences of these elements, thus validating our formalism.
Fine-granularity inference and estimations to network traffic for SDN.
Jiang, Dingde; Huo, Liuwei; Li, Ya
2018-01-01
An end-to-end network traffic matrix is significantly helpful for network management and for Software Defined Networks (SDN). However, inferring and estimating the end-to-end network traffic matrix is a challenging problem, and attaining the traffic matrix in high-speed networks for SDN is a prohibitive challenge. This paper investigates how to estimate and recover the end-to-end network traffic matrix in fine time granularity from sampled traffic traces, which is a hard inverse problem. Different from previous methods, fractal interpolation is used to reconstruct the finer-granularity network traffic. Then, the cubic spline interpolation method is used to obtain smooth reconstruction values. To attain an accurate end-to-end network traffic matrix in fine time granularity, we perform a weighted-geometric-average process on the two interpolation results obtained. The simulation results show that our approaches are feasible and effective.
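The final combination step can be sketched as follows. Here a piecewise-linear interpolant and a smooth synthetic series stand in for the fractal and cubic-spline reconstructions; the weight `w` and all series are hypothetical.

```python
import numpy as np

def weighted_geometric_average(x1, x2, w=0.5):
    """Element-wise weighted geometric mean of two positive traffic series."""
    return x1**w * x2**(1.0 - w)

t_coarse = np.arange(0.0, 10.0, 2.0)            # coarse (sampled) time points
traffic = 5.0 + np.sin(t_coarse)                # sampled traffic volumes (positive)
t_fine = np.arange(0.0, 8.0, 0.5)               # fine-granularity time grid

recon1 = np.interp(t_fine, t_coarse, traffic)   # stand-in for the fractal reconstruction
recon2 = 5.0 + np.sin(t_fine)                   # stand-in for the spline reconstruction
combined = weighted_geometric_average(recon1, recon2, w=0.4)
```

For positive series the geometric mean always lies between the two reconstructions, so the combination cannot overshoot either estimate.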
Hollaus, K; Magele, C; Merwa, R; Scharfetter, H
2004-02-01
Magnetic induction tomography of biological tissue is used to reconstruct the changes in the complex conductivity distribution by measuring the perturbation of an alternating primary magnetic field. To facilitate the sensitivity analysis and the solution of the inverse problem, a fast calculation of the sensitivity matrix, i.e. the Jacobian matrix, which maps the changes of the conductivity distribution onto the changes of the voltage induced in a receiver coil, is needed. The use of finite differences to determine the entries of the sensitivity matrix does not represent a feasible solution because of the high computational cost of the basic eddy current problem. Therefore, the reciprocity theorem was exploited. The basic eddy current problem was simulated by the finite element method using symmetric tetrahedral edge elements of second order. To test the method, various simulations were carried out and discussed.
An inverse method was developed to integrate satellite observations of atmospheric pollutant column concentrations and direct sensitivities predicted by a regional air quality model in order to discern biases in the emissions of the pollutant precursors.
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.
2015-07-01
This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, the meshes for the forward and inverse problems were decoupled. For the calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gain, EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems based on the linearized model resolution matrix was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to fill the gap between initial and true model complexities and resolve emerging 3-D structures better, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit dependency on an initial model guess.
Additionally, it is demonstrated that the adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemisphere object with sufficient resolution starting from a coarse discretization and refining mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.
NASA Astrophysics Data System (ADS)
Huang, Chieh-Szu; Chang, Ming-Chuan; Huang, Cheng-Liang; Lin, Shih-kang
2016-12-01
Thin-film electroluminescent devices are promising solid-state lighting devices. Red light-emitting phosphor is the key component to be integrated with the well-established blue light-emitting diode chips for simulating natural sunlight. However, environmentally hazardous rare-earth (RE) dopants, e.g. Eu2+ and Ce2+, are commonly used for red-emitting phosphors. Mg2TiO4 inverse spinel has been reported as a promising matrix material for "RE-free" red light luminescent material. In this paper, Mg2TiO4 inverse spinel is investigated using both experimental and theoretical approaches. The Mg2TiO4 thin films were deposited on Si (100) substrates using either spin-coating with the sol-gel process or radio frequency sputtering, and annealed at various temperatures ranging from 600°C to 900°C. The crystallinity, microstructures, and photoluminescent properties of the Mg2TiO4 thin films were characterized. In addition, an atomistic model of the Mg2TiO4 inverse spinel was constructed, and the electronic band structure of Mg2TiO4 was calculated based on density functional theory. Essential physical and optoelectronic properties of the Mg2TiO4 luminance material as well as its optimal thin-film processing conditions are comprehensively reported.
Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.
Han, Lei; Zhang, Yu; Zhang, Tong
2016-08-01
The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ 1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications including climate and financial analysis, and another is that such an assumption can reduce the computational complexity of computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets including thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem when achieving comparable log-likelihood on test data.
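The computational payoff of the low-rank-plus-diagonal assumption can be illustrated with the Woodbury identity: inverting Omega = diag(d) + V Vᵀ costs O(p r²) instead of O(p³). This is a generic sketch of that property, not the COP algorithm itself; the sizes are toy values.

```python
import numpy as np

def lowrank_plus_diag_inverse(d, V):
    """Invert Omega = diag(d) + V V^T via the Woodbury identity.
    d is the positive diagonal (length p); V is p x r with r << p.
    Only a small r x r system is solved, instead of a p x p one."""
    Dinv_V = V / d[:, None]                            # D^{-1} V
    core = np.eye(V.shape[1]) + V.T @ Dinv_V           # small r x r core matrix
    return np.diag(1.0 / d) - Dinv_V @ np.linalg.solve(core, Dinv_V.T)

rng = np.random.default_rng(4)
p, r = 30, 3
d = rng.uniform(1.0, 2.0, p)
V = rng.standard_normal((p, r))
Sigma = lowrank_plus_diag_inverse(d, V)                # covariance from its inverse
```

The same identity is what makes a structured inverse-covariance estimate cheap to convert into a covariance (and back) at scale.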
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
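The sketching idea can be demonstrated on a linear least-squares toy problem: multiplying a tall observation system by a short random matrix S reduces the problem to the sketch dimension. A plain Gaussian sketch is used here for illustration; the paper's sketching construction and the geostatistical machinery around it may differ.

```python
import numpy as np

rng = np.random.default_rng(5)
n_obs, n_par, k = 100_000, 5, 50            # many observations, few parameters, small sketch
G = rng.standard_normal((n_obs, n_par))     # synthetic forward operator
m_true = rng.standard_normal(n_par)
d = G @ m_true                              # noise-free synthetic observations

S = rng.standard_normal((k, n_obs)) / np.sqrt(k)   # random "sketching" matrix
m_est, *_ = np.linalg.lstsq(S @ G, S @ d, rcond=None)   # solve at dimension k, not n_obs
```

The solve now involves a k × n_par system, so its cost scales with the information content (here, 5 parameters) rather than the 10^5 observations.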
Variance and covariance estimates for weaning weight of Senepol cattle.
Wright, D W; Johnson, Z B; Brown, C J; Wildeus, S
1991-10-01
Variance and covariance components were estimated for weaning weight from Senepol field data for use in the reduced animal model for a maternally influenced trait. The 4,634 weaning records were used to evaluate 113 sires and 1,406 dams on the island of St. Croix. Estimates of direct additive genetic variance (σ²A), maternal additive genetic variance (σ²M), covariance between direct and maternal additive genetic effects (σAM), permanent maternal environmental variance (σ²PE), and residual variance (σ²ε) were calculated by equating variances estimated from a sire-dam model and a sire-maternal grandsire model, with and without the inverse of the numerator relationship matrix (A⁻¹), to their expectations. Estimates were σ²A, 139.05 and 138.14 kg²; σ²M, 307.04 and 288.90 kg²; σAM, -117.57 and -103.76 kg²; σ²PE, -258.35 and -243.40 kg²; and σ²ε, 588.18 and 577.72 kg² with and without A⁻¹, respectively. Heritability estimates for direct additive effects (h²A) were .211 and .210 with and without A⁻¹, respectively. Heritability estimates for maternal additive effects (h²M) were .47 and .44 with and without A⁻¹, respectively. Correlations between direct and maternal effects (rAM) were -.57 and -.52 with and without A⁻¹, respectively.
Electrostatic point charge fitting as an inverse problem: Revealing the underlying ill-conditioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanov, Maxim V.; Talipov, Marat R.; Timerghazin, Qadir K., E-mail: qadir.timerghazin@marquette.edu
2015-10-07
The atom-centered point charge (PC) model of molecular electrostatics—a major workhorse of atomistic biomolecular simulations—is usually parameterized by least-squares (LS) fitting of the point charge values to a reference electrostatic potential, a procedure that suffers from numerical instabilities due to the ill-conditioned nature of the LS problem. To reveal the origins of this ill-conditioning, we start with a general treatment of the point charge fitting problem as an inverse problem and construct an analytical model with the point charges spherically arranged according to Lebedev quadrature, which is naturally suited for the inverse electrostatic problem. This analytical model is contrasted with the atom-centered point-charge model, which can be viewed as an irregular quadrature poorly suited for the problem. This analysis shows that the numerical problems of point charge fitting are due to the decay of the curvatures corresponding to the eigenvectors of the LS sum Hessian matrix. In part, this ill-conditioning is intrinsic to the problem and is related to the decreasing electrostatic contribution of the higher multipole moments, which, in the case of the Lebedev grid model, are directly associated with the Hessian eigenvectors. For the atom-centered model, this association breaks down beyond the first few eigenvectors related to the high-curvature monopole and dipole terms; this leads to an even wider spread of the Hessian curvature values. Using these insights, it is possible to alleviate the ill-conditioning of the LS point-charge fitting without introducing external restraints and/or constraints. Also, as the analytical Lebedev grid PC model proposed here can reproduce multipole moments up to a given rank, it may provide a promising alternative to including explicit multipole terms in a force field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klawikowski, S; Christian, J; Schott, D
Purpose: Pilot study developing a CT-texture based model for early assessment of treatment response during the delivery of chemoradiation therapy (CRT) for pancreatic cancer. Methods: Daily CT data acquired for 24 pancreatic head cancer patients using CT-on-rails, during the routine CT-guided CRT delivery with a radiation dose of 50.4 Gy in 28 fractions, were analyzed. The pancreas head was contoured on each daily CT. Texture analysis was performed within the pancreas head contour using a research tool (IBEX). Over 1300 texture metrics including: grey level co-occurrence, run-length, histogram, neighborhood intensity difference, and geometrical shape features were calculated for each dailymore » CT. Metric-trend information was established by finding the best fit of either a linear, quadratic, or exponential function for each metric value verses accumulated dose. Thus all the daily CT texture information was consolidated into a best-fit trend type for a given patient and texture metric. Linear correlation was performed between the patient histological response vector (good, medium, poor) and all combinations of 23 patient subgroups (statistical jackknife) determining which metrics were most correlated to response and repeatedly reliable across most patients. Control correlations against CT scanner, reconstruction kernel, and gated/nongated CT images were also calculated. Euclidean distance measure was used to group/sort patient vectors based on the data of these trend-response metrics. Results: We found four specific trend-metrics (Gray Level Coocurence Matrix311-1InverseDiffMomentNorm, Gray Level Coocurence Matrix311-1InverseDiffNorm, Gray Level Coocurence Matrix311-1 Homogeneity2, and Intensity Direct Local StdMean) that were highly correlated with patient response and repeatedly reliable. Our four trend-metric model successfully ordered our pilot response dataset (p=0.00070). 
We found no significant correlation with our control parameters: gating (p=0.7717), scanner (p=0.9741), and kernel (p=0.8586). Conclusion: We have successfully created a CT-texture based early treatment response prediction model using the CTs acquired during the delivery of chemoradiation therapy for pancreatic cancer. Future testing is required to validate the model with more patient data.
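The grey-level co-occurrence homogeneity features named in the results can be illustrated with a minimal sketch. This is not the IBEX implementation; the image, pixel offset, and grey-level count are illustrative assumptions:

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Grey-level co-occurrence matrix for one pixel offset, normalized to sum to 1."""
    P = np.zeros((levels, levels))
    dr, dc = offset
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[image[r, c], image[r2, c2]] += 1
    return P / P.sum()

def homogeneity(P):
    """Inverse-difference homogeneity: sum of P(i,j) / (1 + |i - j|)."""
    i, j = np.indices(P.shape)
    return float((P / (1.0 + np.abs(i - j))).sum())

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [1, 1, 2]])
h = homogeneity(glcm(img, levels=3))
print(round(h, 4))  # 0.75
```

A dose-response trend would then be obtained by fitting such a feature, computed on each daily CT, against accumulated dose.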
Bispectral Inversion: The Construction of a Time Series from Its Bispectrum
1988-04-13
…take the inverse transform. Since the goal is to compute a time series given its bispectrum, it would also be nice to stay entirely in the frequency domain and be able to go directly from the bispectrum to the Fourier transform of the time series without the need to inverse transform … the picture. The approximations arise from representing the bicovariance, which is the inverse transform of a continuous function, by the inverse discrete …
NASA Astrophysics Data System (ADS)
Zhang, Hongqin; Tian, Xiangjun
2018-04-01
Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of directly decomposing the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decompositions at low resolution. This procedure is followed by a 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by taking the Kronecker product of the 1-D decompositions. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation in the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
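The core saving of the alternating-directions idea, that eigendecompositions of small 1-D correlation matrices combine via Kronecker products into a decomposition of the full separable correlation matrix, can be sketched in NumPy. This is a minimal two-direction illustration with an assumed Gaussian correlation function; the paper's low-resolution/spline-interpolation step is omitted:

```python
import numpy as np

def corr_1d(n, L):
    """1-D Gaussian correlation matrix with length scale L (unit grid spacing)."""
    x = np.arange(n)
    return np.exp(-0.5 * ((x[:, None] - x[None, :]) / L) ** 2)

# 1-D correlation matrices in two (for brevity) coordinate directions.
Cx, Cy = corr_1d(8, 2.0), corr_1d(6, 1.5)

# Decompose each small 1-D matrix instead of the full 48 x 48 matrix.
lx, Qx = np.linalg.eigh(Cx)
ly, Qy = np.linalg.eigh(Cy)

# Kronecker products of the 1-D factors reconstruct the full decomposition:
# (Qx kron Qy) diag(lx kron ly) (Qx kron Qy)^T == Cx kron Cy.
C_full = np.kron(Cx, Cy)
Q = np.kron(Qx, Qy)
lam = np.kron(lx, ly)
C_rebuilt = Q @ np.diag(lam) @ Q.T

print(np.allclose(C_full, C_rebuilt))  # True
```

Only matrices of size 8 and 6 are decomposed here, never the 48 x 48 product, which is where the cost saving comes from in high dimensions.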
The Effect of Image Apodization on Global Mode Parameters and Rotational Inversions
NASA Astrophysics Data System (ADS)
Larson, Tim; Schou, Jesper
2016-10-01
It has long been known that certain systematic errors in the global mode analysis of data from both MDI and HMI depend on how the input images were apodized. Recently it has come to light, while investigating a six-month periodicity in f-mode frequencies, that mode coverage is highest when B0 is maximal. Recalling that the leakage matrix is calculated in the approximation that B0=0, it comes as a surprise that more modes are fitted when the leakage matrix is most incorrect. It is now believed that the six-month oscillation is primarily related to what portion of the solar surface is visible. Other systematic errors that depend on the part of the disk used include high-latitude anomalies in the rotation rate and a prominent feature in the normalized residuals of odd a-coefficients. Although the most likely cause of all these errors is errors in the leakage matrix, extensive recalculation of the leaks has not made any difference. Thus we conjecture that another effect may be at play, such as errors in the noise model or one that has to do with the alignment of the apodization with the spherical harmonics. In this poster we explore how differently shaped apodizations affect the results of inversions for internal rotation, for both maximal and minimal absolute values of B0.
NASA Astrophysics Data System (ADS)
Wang, Jun; Meng, Xiaohong; Li, Fang
2017-11-01
Generalized inversion is one of the important steps in the quantitative interpretation of gravity data. With an appropriate algorithm and parameters, it gives a view of the subsurface which characterizes different geological bodies. However, generalized inversion of gravity data is time consuming due to the large number of data points and model cells adopted. Incorporating various types of prior information as constraints worsens this situation. In the work discussed in this paper, a method for fast nonlinear generalized inversion of gravity data is proposed. The fast multipole method is employed for forward modelling. The inversion objective function is established with a weighted data misfit function along with a model objective function. The total objective function is solved by a data-space algorithm. Moreover, a depth weighting factor is used to improve the depth resolution of the result, and bound constraints are incorporated by a transfer function to limit the model parameters to a reliable range. The matrix inversion is accomplished by a preconditioned conjugate gradient method. With the above algorithm, equivalent density vectors can be obtained, and interpolation is performed to obtain the final density model on the fine mesh in the model domain. Testing on synthetic gravity data demonstrated that the proposed method is faster than conventional generalized inversion algorithms at producing an acceptable solution for the gravity inversion problem. The newly developed inversion method was also applied for inversion of the gravity data collected over Sichuan basin, southwest China. The established density structure in this study helps in understanding the crustal structure of Sichuan basin and provides a reference for further oil and gas exploration in this area.
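A minimal sketch of the regularized normal-equations solve with depth weighting and conjugate gradients described above. The forward matrix, depth-weighting exponent, and regularization parameter are illustrative assumptions, not the authors' values, and the fast multipole forward modelling is replaced by a dense stand-in matrix:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=1000):
    """Plain conjugate gradients for a symmetric positive-definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
n_data, n_cells = 60, 40
G = rng.normal(size=(n_data, n_cells))   # stand-in forward (sensitivity) matrix
d = rng.normal(size=n_data)              # stand-in observed gravity data
z = np.linspace(1.0, 10.0, n_cells)      # cell depths
w = (z + 1.0) ** (-1.5)                  # depth weighting; exponent is hypothetical
lam = 0.1                                # regularization trade-off parameter

# Normal equations of min ||G m - d||^2 + lam * ||diag(w) m||^2.
A = G.T @ G + lam * np.diag(w ** 2)
m = conjugate_gradient(A, G.T @ d)
print(np.allclose(A @ m, G.T @ d, atol=1e-6))  # True
```

The depth weighting penalizes deep cells less, counteracting the natural decay of gravity sensitivity with depth and so improving depth resolution.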
Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.
2008-05-01
The estimation of the area source pollutant strength is a relevant issue for the atmospheric environment. This characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: the multi-layer perceptron. The connection weights of the neural network are computed by the delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization approach, whose objective function is given by the square difference between the measured pollutant concentrations and the mathematical model, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. The second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p.
354-359.
Solvability of the electrocardiology inverse problem for a moving dipole.
Tolkachev, V; Bershadsky, B; Nemirko, A
1993-01-01
New formulations of the direct and inverse problems for the moving dipole are offered. It has been suggested to limit the study to a small area on the chest surface. This lowers the role of the medium inhomogeneity. When formulating the direct problem, irregular components are considered. The algorithm for simultaneous determination of the dipole and regular noise parameters is described and analytically investigated. It is shown that temporal overdetermination of the equations yields a unique solution of the inverse problem for the four leads.
Total-variation based velocity inversion with Bregmanized operator splitting algorithm
NASA Astrophysics Data System (ADS)
Zand, Toktam; Gholami, Ali
2018-04-01
Many problems in applied geophysics can be formulated as a linear inverse problem. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed to solve them and generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem we use blockiness as prior information about the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
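The proximal forward-backward splitting that BOS builds on can be sketched as follows. For brevity the sketch uses the closed-form l1 proximal operator (soft thresholding) in place of the TV proximal step, so it solves a sparsity-regularized analogue rather than the paper's TV problem; the matrix, signal, and regularization parameter are illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward_l1(A, b, lam, n_iter=2000):
    """Proximal forward-backward iteration for min 0.5||Ax - b||^2 + lam*||x||_1.
    Note no matrix inversion is needed: only products with A and A^T."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                   # forward (gradient) step
        x = soft_threshold(x - grad / L, lam / L)  # backward (proximal) step
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 80)) / np.sqrt(40)
x_true = np.zeros(80)
x_true[[5, 25, 60]] = [1.0, -1.0, 0.5]
b = A @ x_true
x_hat = forward_backward_l1(A, b, lam=0.05)
print(sorted(int(i) for i in np.argsort(np.abs(x_hat))[-3:]))
```

In BOS proper, this inner splitting is wrapped in an outer Bregman iteration that adds the residual back to the data, which removes the systematic bias of the penalty and enables the discrepancy-based stopping rule.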
NASA Astrophysics Data System (ADS)
Sun, J.; Li, Y.
2017-12-01
Magnetic data contain important information about subsurface rocks that were magnetized over geological history, which provides an important avenue to the study of crustal heterogeneities associated with magmatic and hydrothermal activities. Interpretation of magnetic data has been widely used in mineral exploration, basement characterization and large-scale crustal studies for several decades. However, interpreting magnetic data has often been complicated by the presence of remanent magnetizations with unknown magnetization directions. Researchers have developed different methods to deal with the challenges posed by remanence. We have developed a new and effective approach to inverting magnetic data for magnetization vector distributions characterized by region-wise consistency in the magnetization directions. This approach combines the classical Tikhonov inversion scheme with the fuzzy C-means clustering algorithm, and constrains the estimated magnetization vectors to a specified small number of possible directions while fitting the observed magnetic data to within the noise level. Our magnetization vector inversion recovers both the magnitudes and the directions of the magnetizations in the subsurface. Magnetization directions reflect the unique geological or hydrothermal processes experienced by each geological unit, and therefore can potentially be used for the purpose of differentiating various geological units. We have developed a practically convenient and effective way of assessing the uncertainty associated with the inverted magnetization directions (Figure 1), and investigated how geological differentiation results might be affected (Figure 2). The algorithm and procedures we have developed for magnetization vector inversion and uncertainty analysis open up new possibilities for extracting useful information from magnetic data affected by remanence.
We will use a field data example from exploration of an iron-oxide-copper-gold (IOCG) deposit in Brazil to illustrate how to solve the inverse problem, assess uncertainty, and perform geology differentiation in practice. We will also discuss the potential applications of this new method to large scale crustal studies.
NASA Technical Reports Server (NTRS)
Dulikravich, George S. (Editor)
1991-01-01
Papers from the Third International Conference on Inverse Design Concepts and Optimization in Engineering Sciences (ICIDES) are presented. The papers discuss current research in the general field of inverse, semi-inverse, and direct design and optimization in engineering sciences. The rapid growth of this relatively new field is due to the availability of faster and larger computing machines.
Cho, Jungyeon
2011-05-13
Electron magnetohydrodynamics (EMHD) provides a fluidlike description of small-scale magnetized plasmas. An EMHD wave propagates along magnetic field lines. The direction of propagation can be either parallel or antiparallel to the magnetic field lines. We numerically study propagation of three-dimensional (3D) EMHD wave packets moving in one direction. We obtain two major results. (1) Unlike its magnetohydrodynamic (MHD) counterpart, an EMHD wave packet is dispersive. Because of this, EMHD wave packets traveling in one direction create opposite-traveling wave packets via self-interaction and cascade energy to smaller scales. (2) EMHD wave packets traveling in one direction clearly exhibit inverse energy cascade. We find that the latter is due to conservation of magnetic helicity. We compare inverse energy cascade in 3D EMHD turbulence and two-dimensional (2D) hydrodynamic turbulence.
On Gammelgaard's Formula for a Star Product with Separation of Variables
NASA Astrophysics Data System (ADS)
Karabegov, Alexander
2013-08-01
We show that Gammelgaard's formula expressing a star product with separation of variables on a pseudo-Kähler manifold in terms of directed graphs without cycles is equivalent to an inversion formula for an operator on a formal Fock space. We prove this inversion formula directly and thus offer an alternative approach to Gammelgaard's formula which gives more insight into the question why the directed graphs in his formula have no cycles.
NASA Astrophysics Data System (ADS)
Galaleldin, S.; Mannan, H. A.; Mukhtar, H.
2017-12-01
In this study, mixed matrix membranes comprising polyethersulfone as the bulk polymer phase and titanium dioxide (TiO2) nanoparticles as the inorganic discontinuous phase were prepared for CO2/CH4 separation. Membranes were synthesized at filler loadings of 0, 5, 10 and 15 wt% via the dry phase inversion method. The morphology, chemical bonding and thermal characteristics of the membranes were scrutinized using different techniques, namely Field Emission Scanning Electron Microscopy (FESEM), Fourier Transform InfraRed (FTIR) spectroscopy and Thermogravimetric Analysis (TGA), respectively. The membranes' gas separation performance was evaluated for CO2 and CH4 gases at 4 bar feed pressure. The highest separation performance was achieved by the mixed matrix membrane (MMM) at 5 wt% loading of TiO2.
Secret Message Decryption: Group Consulting Projects Using Matrices and Linear Programming
ERIC Educational Resources Information Center
Gurski, Katharine F.
2009-01-01
We describe two short group projects for finite mathematics students that incorporate matrices and linear programming into fictional consulting requests presented as a letter to the students. The students are required to use mathematics to decrypt secret messages in one project involving matrix multiplication and inversion. The second project…
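The combination of matrix multiplication and modular matrix inversion in the first project can be illustrated with a Hill-cipher-style sketch. The actual project materials are not given in the abstract; the key matrix and message here are textbook examples, not the authors' assignment:

```python
import numpy as np

A_ORD = ord('A')

def inv_mod26(K):
    """Inverse of a 2x2 integer matrix modulo 26 (det must be coprime to 26)."""
    a, b, c, d = K[0, 0], K[0, 1], K[1, 0], K[1, 1]
    det = (a * d - b * c) % 26
    det_inv = pow(int(det), -1, 26)        # modular inverse of the determinant
    adj = np.array([[d, -b], [-c, a]])     # adjugate matrix
    return (det_inv * adj) % 26

def apply_key(text, K):
    """Encrypt (or, with the inverse key, decrypt) 2-letter blocks of A-Z text."""
    nums = [ord(ch) - A_ORD for ch in text]
    out = []
    for i in range(0, len(nums), 2):
        block = np.array(nums[i:i + 2])
        out.extend((K @ block) % 26)
    return ''.join(chr(int(n) + A_ORD) for n in out)

K = np.array([[3, 3], [2, 5]])             # classic textbook key
cipher = apply_key('HELP', K)
print(cipher)                              # HIAT
print(apply_key(cipher, inv_mod26(K)))     # HELP
```

Students decrypt by computing the key's inverse modulo 26 and multiplying it against each ciphertext block.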
Calibration of remotely sensed proportion or area estimates for misclassification error
Raymond L. Czaplewski; Glenn P. Catts
1992-01-01
Classifications of remotely sensed data contain misclassification errors that bias areal estimates. Monte Carlo techniques were used to compare two statistical methods that correct or calibrate remotely sensed areal estimates for misclassification bias using reference data from an error matrix. The inverse calibration estimator was consistently superior to the...
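The inverse calibration estimator mentioned above amounts to premultiplying the observed class proportions by the inverse of the error (confusion) matrix; a minimal sketch with an assumed two-class error matrix:

```python
import numpy as np

# Hypothetical error matrix: column j gives the probability that a pixel whose
# true class is j is assigned to each observed class by the classifier.
M = np.array([[0.9, 0.2],
              [0.1, 0.8]])

p_true = np.array([0.3, 0.7])      # true areal proportions (unknown in practice)
p_obs = M @ p_true                 # what the classification reports: [0.41, 0.59]

# Inverse calibration: solve M p = p_obs, i.e. premultiply by M^-1.
p_cal = np.linalg.solve(M, p_obs)
print(np.round(p_cal, 6))          # [0.3 0.7]
```

With reference data, M is estimated from a sample error matrix, so the calibrated proportions carry sampling variance in addition to removing the misclassification bias.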
Inversion of high frequency surface waves with fundamental and higher modes
Xia, J.; Miller, R.D.; Park, C.B.; Tian, G.
2003-01-01
The phase velocity of Rayleigh waves of a layered earth model is a function of frequency and four groups of earth parameters: compressional (P)-wave velocity, shear (S)-wave velocity, density, and thickness of layers. For the fundamental mode of Rayleigh waves, analysis of the Jacobian matrix for high frequencies (2-40 Hz) provides a measure of dispersion curve sensitivity to earth model parameters. S-wave velocities are the dominant influence among the four earth model parameters. This is true for higher modes of high frequency Rayleigh waves as well. Our numerical modeling by analysis of the Jacobian matrix supports at least two quite exciting higher mode properties. First, for fundamental and higher mode Rayleigh wave data with the same wavelength, higher modes can "see" deeper than the fundamental mode. Second, higher mode data can increase the resolution of the inverted S-wave velocities. Real world examples show that the inversion process can be stabilized and the resolution of the S-wave velocity model can be improved when simultaneously inverting the fundamental and higher mode data. © 2002 Elsevier Science B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lanen, Theo A.; Watt, David W.
1995-10-01
Singular value decomposition has served as a diagnostic tool in optical computed tomography through its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectra of their weight matrices. The number of significant singular values in the singular value spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis involves variation of the following five parameters characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. The effects of noise in the views, the coding accuracy of the weight matrix, and the accuracy of the singular value decomposition procedure itself are also assessed.
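Counting significant singular values as a condition measure can be sketched on a toy geometry whose rank is known analytically: two orthogonal parallel-ray views (row sums and column sums) of an n x n pixel grid with square-pulse basis functions have rank 2n - 1, because the two views share the total-sum constraint. The geometry below is illustrative, not one from the paper:

```python
import numpy as np

def weight_matrix_two_views(n):
    """Weight matrix of a toy tomographic geometry on an n x n pixel grid:
    one view of horizontal ray sums and one view of vertical ray sums,
    with square-pulse (pixel indicator) basis functions."""
    W = np.zeros((2 * n, n * n))
    for i in range(n):
        W[i, i * n:(i + 1) * n] = 1.0   # ray along row i
        W[n + i, i::n] = 1.0            # ray along column i
    return W

n = 4
W = weight_matrix_two_views(n)
s = np.linalg.svd(W, compute_uv=False)
significant = int(np.sum(s > 1e-10 * s[0]))
print(significant)  # 7, i.e. 2n - 1: one constraint is shared between the views
```

Adding views, rays, or changing the basis function alters this spectrum, which is exactly the comparison the abstract carries out across geometries.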
Fischer, Nadine; Prestel, S.; Ritzmann, M.; ...
2016-10-28
We present the first public implementation of antenna-based QCD initial- and final-state showers. The shower kernels are 2→3 antenna functions, which capture not only the collinear dynamics but also the leading soft (coherent) singularities of QCD matrix elements. We define the evolution measure to be inversely proportional to the leading poles, hence gluon emissions are evolved in a p⊥ measure inversely proportional to the eikonal, while processes that only contain a single pole (e.g., g → qq¯) are evolved in virtuality. Non-ordered emissions are allowed, suppressed by an additional power of 1/Q^2. Recoils and kinematics are governed by exact on-shell 2 → 3 phase-space factorisations. This first implementation is limited to massless QCD partons and colourless resonances. Tree-level matrix-element corrections are included for QCD up to O(αs^4) (4 jets), and for Drell-Yan and Higgs production up to O(αs^3) (V/H + 3 jets). Finally, the resulting algorithm has been made publicly available in Vincia 2.0.
Effects of multiple scattering and surface albedo on the photochemistry of the troposphere
NASA Technical Reports Server (NTRS)
Augustsson, T. R.; Tiwari, S. N.
1981-01-01
The effect of the treatment of incoming solar radiation on the photochemistry of the troposphere is discussed. A one-dimensional photochemical model of the troposphere containing species of the nitrogen, oxygen, carbon, hydrogen, and sulfur families was developed. The vertical flux is simulated by use of parameterized eddy diffusion coefficients. The photochemical model is coupled to a radiative transfer model that calculates the radiation field due to the incoming solar radiation, which initiates much of the photochemistry of the troposphere. Vertical profiles of tropospheric species computed with the Leighton approximation were compared with those from the radiative transfer, matrix inversion model. The radiative transfer code includes the effects of multiple scattering due to molecules and aerosols, pure absorption, and surface albedo on the transfer of incoming solar radiation. It is indicated that significant differences exist for several key photolysis frequencies and species number density profiles between the Leighton approximation and the profiles generated with the radiative transfer, matrix inversion technique. Most species show enhanced vertical profiles when the more realistic treatment of the incoming solar radiation field is included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Yoojin; Doughty, Christine
Input and output files used for fault characterization through numerical simulation using iTOUGH2. The synthetic data for the push period are generated by running a forward simulation (input parameters are provided in iTOUGH2 Brady GF6 Input Parameters.txt [InvExt6i.txt]). In general, the permeability of the fault gouge, damage zone, and matrix are assumed to be unknown. The input and output files are for the inversion scenario where only pressure transients are available at the monitoring well located 200 m above the injection well and only the fault gouge permeability is estimated. The input files are named InvExt6i, INPUT.tpl, FOFT.ins, CO2TAB, and the output files are InvExt6i.out, pest.fof, and pest.sav (names below are display names). The table graphic in the data files below summarizes the inversion results, and indicates the fault gouge permeability can be estimated even if imperfect guesses are used for matrix and damage zone permeabilities, and permeability anisotropy is not taken into account.
A Comparison of Weights Matrices on Computation of Dengue Spatial Autocorrelation
NASA Astrophysics Data System (ADS)
Suryowati, K.; Bekti, R. D.; Faradila, A.
2018-04-01
Spatial autocorrelation is one of the spatial analysis methods used to identify patterns of relationship or correlation between locations. This method is very important for obtaining information on the dispersal pattern characteristics of a region and the linkages between locations. In this study, it is applied to the incidence of Dengue Hemorrhagic Fever (DHF) in 17 sub-districts in Sleman, Daerah Istimewa Yogyakarta Province. The links among locations are indicated by a spatial weight matrix, which describes the neighbouring structure and reflects the spatial influence. According to the spatial data, the weighting matrix can be divided into two types: the point type (distance) and the neighbourhood area type (contiguity). Selection of the weighting function is one determinant of the results of the spatial analysis. This study uses queen contiguity weights based on first-order neighbours, queen contiguity weights based on second-order neighbours, and inverse distance weights. Queen contiguity first order and inverse distance weights show that there is significant spatial autocorrelation in DHF, but queen contiguity second order does not. Queen contiguity first and second order produce 68 and 86 neighbour pairs, respectively.
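A minimal sketch of a first-order queen contiguity weight matrix and the Moran's I spatial autocorrelation statistic on a regular grid. The 3 x 3 grid and test patterns are illustrative, not the Sleman sub-district data:

```python
import numpy as np

def queen_weights(nrows, ncols):
    """Binary first-order queen contiguity matrix for a regular grid
    (edge- and corner-adjacent cells are neighbours)."""
    n = nrows * ncols
    W = np.zeros((n, n))
    for r in range(nrows):
        for c in range(ncols):
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if (dr, dc) != (0, 0) and 0 <= r + dr < nrows and 0 <= c + dc < ncols:
                        W[r * ncols + c, (r + dr) * ncols + (c + dc)] = 1.0
    return W

def morans_i(x, W):
    """Moran's I: (n / sum(W)) * sum_ij w_ij z_i z_j / sum_i z_i^2, z = x - mean(x)."""
    z = x - x.mean()
    return len(x) / W.sum() * (z @ W @ z) / (z @ z)

W = queen_weights(3, 3)
smooth = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2], dtype=float)   # north-south gradient
checker = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1], dtype=float)  # alternating pattern
print(morans_i(smooth, W) > 0, morans_i(checker, W) < 0)  # True True
```

Smoothly varying values give positive I (neighbours are alike), while the alternating pattern gives negative I, which is the behaviour the weight-matrix choice is meant to detect or miss.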
Many Masses on One Stroke: Economic Computation of Quark Propagators
NASA Astrophysics Data System (ADS)
Frommer, Andreas; Nöckel, Bertold; Güsken, Stephan; Lippert, Thomas; Schilling, Klaus
The computational effort in the calculation of Wilson fermion quark propagators in Lattice Quantum Chromodynamics can be considerably reduced by exploiting the Wilson fermion matrix structure in inversion algorithms based on the non-symmetric Lanczos process. We consider two such methods: QMR (quasi-minimal residual) and BCG (biconjugate gradients). Based on the decomposition M/κ = 1/κ − D of the Wilson mass matrix, using QMR, one can carry out inversions on a whole trajectory of masses simultaneously, merely at the computational expense of a single propagator computation. In other words, one has to compute the propagator corresponding to the lightest mass only, while all the heavier masses are given for free, at the price of extra storage. Moreover, the symmetry γ5M = M†γ5 can be used to cut the computational effort in QMR and BCG by a factor of two. We show that both methods then become competitive, in the critical regime of small quark masses, with BiCGStab, and significantly better than the standard MR method with optimal relaxation factor and CG as applied to the normal equations.
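The "many masses at one stroke" saving rests on the fact that a single Krylov basis serves every shift 1/κ, since shifting the matrix only shifts its projected tridiagonal representation. A minimal symmetric-Lanczos sketch of this idea (the actual Wilson matrix is non-symmetric and handled with QMR; the random symmetric D and the κ values here are stand-ins):

```python
import numpy as np

def lanczos(D, b, m):
    """Lanczos with full reorthogonalization: projects D onto the Krylov space of b,
    returning the orthonormal basis V and tridiagonal T = V^T D V."""
    n = len(b)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = D @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # full reorthogonalization
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return V, T

rng = np.random.default_rng(2)
n = 30
D = rng.normal(size=(n, n))
D = (D + D.T) / (2 * np.sqrt(n))   # symmetric stand-in hopping matrix
b = rng.normal(size=n)             # source vector

V, T = lanczos(D, b, n)
e1 = np.zeros(n)
e1[0] = np.linalg.norm(b)          # V^T b in the Lanczos basis

# One Lanczos run, a whole trajectory of masses: solve (1/kappa - D) x = b
# for each kappa using only the small shifted tridiagonal system.
residuals = []
for kappa in (0.12, 0.125, 0.13):
    y = np.linalg.solve(np.eye(n) / kappa - T, e1)
    x = V @ y
    residuals.append(np.linalg.norm(b - (x / kappa - D @ x)))
print(max(residuals) < 1e-8)  # True
```

Only the expensive matrix-vector products with D are shared; each extra mass costs a cheap small solve plus the storage for its solution vector, mirroring the abstract's "extra storage" trade-off.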
Semianalytical solutions for transport in aquifer and fractured clay matrix system
NASA Astrophysics Data System (ADS)
Huang, Junqi; Goltz, Mark N.
2015-09-01
A three-dimensional mathematical model that describes transport of a contaminant in a horizontal aquifer with simultaneous diffusion into a fractured clay formation is proposed. A group of semianalytical solutions is derived based on specific initial and boundary conditions as well as various source functions. The analytical model solutions are evaluated by numerical inverse Laplace transformation and analytical inverse Fourier transformation. The model solutions can be used to study fate and transport in a three-dimensional spatial domain in which a nonaqueous phase liquid exists as a pool atop a fractured low-permeability clay layer. The nonaqueous phase liquid gradually dissolves into the groundwater flowing past the pool, while simultaneously diffusing into the fractured clay formation below the aquifer. Mass transfer of the contaminant into the clay formation is demonstrated to be significantly enhanced by the existence of the fractures, even though the volume of the fractures is relatively small compared to the volume of the clay matrix. The model solution is a useful tool in assessing contaminant attenuation processes in a confined aquifer underlain by a fractured clay formation.
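The numerical inverse Laplace transformation invoked above can be sketched with the Gaver-Stehfest method, one common scheme for this task (the abstract does not state which algorithm the authors used):

```python
from math import factorial, log, exp

def stehfest_coefficients(N):
    """Gaver-Stehfest weights V_i for an even number of terms N."""
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * factorial(2 * k)
                  / (factorial(N // 2 - k) * factorial(k) * factorial(k - 1)
                     * factorial(i - k) * factorial(2 * k - i)))
        V.append((-1) ** (i + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) on the real axis:
    f(t) ~ (ln 2 / t) * sum_i V_i F(i ln 2 / t)."""
    a = log(2.0) / t
    V = stehfest_coefficients(N)
    return a * sum(V[i - 1] * F(i * a) for i in range(1, N + 1))

# Check against a transform pair with a known inverse: 1/(s+1) <-> exp(-t).
f1 = invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0)
print(abs(f1 - exp(-1.0)) < 1e-4)  # True
```

The method only requires evaluating F at real s, which suits semianalytical solutions derived in the Laplace domain, though it can lose accuracy for oscillatory or discontinuous f(t).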
The effects of core-reflected waves on finite fault inversions with teleseismic body wave data
NASA Astrophysics Data System (ADS)
Qian, Yunyi; Ni, Sidao; Wei, Shengji; Almeida, Rafael; Zhang, Han
2017-11-01
Teleseismic body waves are essential for imaging rupture processes of large earthquakes. Earthquake source parameters are usually characterized by waveform analyses such as finite fault inversions using only turning (direct) P and SH waves without considering the reflected phases from the core-mantle boundary (CMB). However, core-reflected waves such as ScS usually have amplitudes comparable to direct S waves due to the total reflection from the CMB and might interfere with the S waves used for inversion, especially at large epicentral distances for long duration earthquakes. In order to understand how core-reflected waves affect teleseismic body wave inversion results, we develop a procedure named Multitel3 to compute Green's functions that contain turning waves (direct P, pP, sP, direct S, sS and reverberations in the crust) and core-reflected waves (PcP, pPcP, sPcP, ScS, sScS and associated reflected phases from the CMB). This ray-based method can efficiently generate synthetic seismograms for turning and core-reflected waves independently, with the flexibility to take into account the 3-D Earth structure effect on the timing between these phases. The performance of this approach is assessed through a series of numerical inversion tests on synthetic waveforms of the 2008 Mw7.9 Wenchuan earthquake and the 2015 Mw7.8 Nepal earthquake. We also compare this improved method with the turning-wave only inversions and explore the stability of the new procedure when there are uncertainties in a priori information (such as fault geometry and epicentre location) or arrival time of core-reflected phases. Finally, a finite fault inversion of the 2005 Mw8.7 Nias-Simeulue earthquake is carried out using the improved Green's functions. Using enhanced Green's functions yields better inversion results as expected. 
While the finite source inversion with conventional P and SH waves is able to recover large-scale characteristics of the earthquake source, by adding PcP and ScS phases, the inverted slip model and moment rate function better match previous results incorporating field observations, geodetic and seismic data.
Methods to control phase inversions and enhance mass transfer in liquid-liquid dispersions
Tsouris, Constantinos; Dong, Junhang
2002-01-01
The present invention is directed to the effects of applied electric fields on liquid-liquid dispersions. In general, the present invention is directed to the control of phase inversions in liquid-liquid dispersions. Because of polarization and deformation effects, coalescence of aqueous drops is facilitated by the application of electric fields. As a result, with an increase in the applied voltage, the ambivalence region is narrowed and shifted toward higher volume fractions of the dispersed phase. This permits the invention to be used to ensure that the aqueous phase remains continuous, even at a high volume fraction of the organic phase. Additionally, the volume fraction of the organic phase may be increased without causing phase inversion, and may be used to correct a phase inversion which has already occurred. Finally, the invention may be used to enhance mass transfer rates from one phase to another through the use of phase inversions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morisato, A.; Shen, H.C.; Toy, L.G.
1996-12-31
Permeation properties of phase-separated blends prepared from glassy poly(1-trimethylsilyl-1-propyne) (PTMSP) and poly(1-phenyl-1-propyne) (PPP) were determined as a function of blend composition with pure hydrogen, nitrogen, oxygen, carbon dioxide, and butane. Blend permeabilities decrease significantly with increasing PPP concentration and suggest the occurrence of a phase inversion at low PPP content (5 to 20 wt%). Based on TEM analysis, high-aspect-ratio (extended) PPP ellipsoidal dispersions are found in a PTMSP matrix, indicating that the phase inversion is closely related to dispersed-phase connectivity in the blends.
NASA Astrophysics Data System (ADS)
Mushtak, V. C.; Williams, E.
2010-12-01
The spatial-temporal behavior of world-wide lightning activity can be effectively used as an indicator of various geophysical processes, with global climate change of special interest among them. Since it has been reliably established that lightning activity constitutes a major source of the natural electromagnetic background in the Schumann resonance (SR) frequency range (5 to 40 Hz), SR measurements provide a continuous flow of information about this globally distributed source, thus forming an informative basis for monitoring its behavior via an inversion of observations into the source’s properties. For such an inversion procedure to be effective, a series of prerequisites must be met when planning and realizing it: (a) a proper choice of observable parameters to be used in the inversion; (b) a proper choice of a forward propagation model accurate enough to take into consideration the major propagation effects occurring between source and observer; (c) a proper choice of a method for inverting the sensitivity matrix. While prerequisite (a) is quite naturally fulfilled by considering the SR resonance characteristics (modal frequencies, intensities, and quality factors), compliance with prerequisites (b) and (c) has benefited greatly from earlier seminal work on geophysical inversion by T.R. Madden. Since it has been found that the electrodynamic non-uniformities of the Earth-ionosphere waveguide, primarily the day/night asymmetry, play an essential role in low-frequency propagation, use has been made of the theory of the two-dimensional telegraph equation (TDTE; Kirillov, 2002), developed on the basis of the innovative suggestion by Madden and Thompson (1965) to treat the waveguide, both physically and mathematically, by analogy with a two-dimensional transmission line.
Because of the iterative nature of the inversion procedure and the complicated, non-analytical character of the propagation theory, a special fast-running TDTE forward algorithm has been developed for the numerous repeated calculations of the sensitivity matrix. The theory for the inverse boundary value problem from Madden (1972) allows one not only to correctly invert the sensitivity matrix, especially when the latter is ill-defined, but also to determine a priori the optimal observational design. The workability of the developed approaches and techniques is illustrated by estimating and processing observations from a network of SR stations located in Europe (Sopron, Hungary; Belsk, Poland), Asia (Shillong, India; Moshiri, Japan), North America (Rhode Island, USA), and Antarctica (Syowa). The spatial dynamics of the major lightning “chimneys” determined via the inversion procedure was found to be in good agreement with general geophysical knowledge even when only the modal frequencies were used. The incorporation of modal intensities greatly improves the agreement, while the Q-factors have been found to be of lesser informative value. These preliminary results form a promising basis for achieving the ultimate objective of this study. The authors are deeply grateful to all the participants of the project who have generously, and on a gratis basis, invested their time and effort into preparing and providing the SR data.
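The record does not reproduce Madden's (1972) inversion scheme itself. As a generic illustration of why an ill-defined sensitivity matrix needs special care, the following sketch (matrices, singular spectrum and noise level all hypothetical) contrasts a naive inverse, which amplifies noise along weak singular directions, with a truncated-SVD pseudo-inverse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ill-defined "sensitivity matrix": two near-zero singular
# values make a naive inverse blow up the measurement noise.
U, _ = np.linalg.qr(rng.normal(size=(6, 6)))
V, _ = np.linalg.qr(rng.normal(size=(6, 6)))
s = np.array([1.0, 0.5, 0.2, 0.1, 1e-6, 1e-8])   # near-singular spectrum
A = U @ np.diag(s) @ V.T

# True model with only small components along the weak directions.
x_true = V @ np.array([1.0, 1.0, 1.0, 1.0, 0.1, 0.1])
d = A @ x_true + 1e-4 * rng.normal(size=6)        # noisy "observations"

def tsvd_solve(A, d, rcond=1e-3):
    """Pseudo-inverse solve, discarding singular values below rcond * s_max."""
    Us, ss, Vts = np.linalg.svd(A)
    keep = ss > rcond * ss[0]
    return Vts[keep].T @ ((Us[:, keep].T @ d) / ss[keep])

x_naive = np.linalg.solve(A, d)   # noise / 1e-8 along the weakest direction
x_tsvd = tsvd_solve(A, d)         # stable, slightly biased estimate

err_naive = np.linalg.norm(x_naive - x_true)
err_tsvd = np.linalg.norm(x_tsvd - x_true)
```

The truncation trades a small bias (the discarded components of the model) for stability against noise, which is the usual motivation for regularized inversion of ill-defined matrices.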
L1-2 minimization for exact and stable seismic attenuation compensation
NASA Astrophysics Data System (ADS)
Wang, Yufeng; Ma, Xiong; Zhou, Hui; Chen, Yangkang
2018-06-01
Frequency-dependent amplitude absorption and phase velocity dispersion, linked by the causality-imposed Kramers-Kronig relations, inevitably degrade the quality of seismic data. Seismic attenuation compensation is an important processing approach for enhancing signal resolution and fidelity; it can be performed on either pre-stack or post-stack data to mitigate the amplitude absorption and phase dispersion effects resulting from the intrinsic anelasticity of subsurface media. Inversion-based compensation with an L1 norm constraint, motivated by the sparsity of the reflectivity series, enjoys better stability than traditional inverse Q filtering. However, constrained L1 minimization, serving as the convex relaxation of the literal L0 sparsity count, may not give the sparsest solution when the kernel matrix is severely ill-conditioned. Recently, non-convex metrics for compressed sensing have attracted considerable research interest. In this paper, we propose a nearly unbiased approximation of the vector sparsity, denoted L1-2 minimization, for exact and stable seismic attenuation compensation. The non-convex L1-2 penalty function can be decomposed into two convex subproblems via the difference-of-convex algorithm, and each subproblem can be solved efficiently by the alternating direction method of multipliers. The superior performance of the proposed compensation scheme based on the L1-2 metric over the conventional L1 penalty is further demonstrated by both synthetic and field examples.
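The abstract's scheme combines a difference-of-convex (DC) decomposition with ADMM. The sketch below keeps the DC outer loop but, for brevity, solves each convex subproblem with plain proximal-gradient (ISTA) iterations instead of ADMM, on a hypothetical toy spike-recovery problem (a stand-in for a reflectivity series; not the authors' compensation operator):

```python
import numpy as np

def ista(A, b, lam, shift, x0, step, n_iter=500):
    """Proximal gradient (ISTA) for 0.5||Ax-b||^2 + lam*||x||_1 - shift.x."""
    x = x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) - shift
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

def l1_minus_l2(A, b, lam, outer=10):
    """Difference-of-convex iterations for the L1-2 penalty: at each outer
    step the concave part -lam*||x||_2 is linearized at the current iterate."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(outer):
        nx = np.linalg.norm(x)
        g = lam * x / nx if nx > 0 else np.zeros_like(x)  # subgradient of lam*||x||_2
        x = ista(A, b, lam, g, x, step)
    return x

# Toy sparse-spike recovery: 3 spikes, 40 measurements, 80 unknowns.
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 80)) / np.sqrt(40)
x_true = np.zeros(80)
x_true[[5, 30, 61]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = l1_minus_l2(A, b, lam=0.05)
```

Subtracting the L2 term reduces the shrinkage bias of the plain L1 penalty, which is the "nearly unbiased" property the abstract refers to.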
Dealing with non-unique and non-monotonic response in particle sizing instruments
NASA Astrophysics Data System (ADS)
Rosenberg, Phil
2017-04-01
A number of instruments used as de facto standards for measuring particle size distributions are actually incapable of uniquely determining the size of an individual particle, owing to non-unique or non-monotonic response functions. Optical particle counters have non-monotonic response due to oscillations in the Mie response curves, especially for large aerosol and small cloud droplets. Scanning mobility particle sizers respond identically to two particles whose ratios of particle size to particle charge are approximately the same. Images of two differently sized cloud or precipitation particles taken by an optical array probe can have similar dimensions or shadowed area depending upon where they are in the imaging plane. A number of methods exist to deal with these issues, including assuming that positive and negative errors cancel, smoothing response curves, integrating regions in measurement space before conversion to size space, and matrix inversion. Matrix inversion (also called kernel inversion) has the advantage that it determines the size distribution which best matches the observations, given specific information about the instrument (a matrix specifying the probability that a particle of a given size will be measured in a given instrument size bin). In this way it maximises use of the information in the measurements. However, this technique can be confused by poor counting statistics, which can cause erroneous results and negative concentrations. Also, an effective method for propagating uncertainties is yet to be published or routinely implemented. Here we present a new alternative which overcomes these issues.
We use Bayesian methods to determine the probability that a given size distribution is correct given a set of instrument data, and then use Markov chain Monte Carlo methods to sample this many-dimensional probability distribution function to determine its expectation and (co)variances, hence providing a best guess and an uncertainty for the size distribution. This uncertainty includes contributions from the non-unique response curve and counting statistics, and can propagate calibration uncertainties.
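A minimal sketch of the Bayesian/MCMC idea, assuming a hypothetical 3-bin instrument with a known smearing kernel and Poisson counting statistics (the actual instrument kernels and sampler are not specified in the record):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical instrument kernel: rows are instrument bins, columns true
# size bins; each column sums to 1 (every particle lands somewhere).
K = np.array([[0.7, 0.2, 0.0],
              [0.3, 0.6, 0.3],
              [0.0, 0.2, 0.7]])
n_true = np.array([200.0, 50.0, 120.0])   # true counts per size bin
m_obs = rng.poisson(K @ n_true)           # observed instrument counts

def log_post(n):
    """Poisson log-likelihood with a flat positivity prior."""
    if np.any(n <= 0):
        return -np.inf
    lam = K @ n
    return np.sum(m_obs * np.log(lam) - lam)

# Random-walk Metropolis over the size distribution.
n = m_obs.astype(float) + 1.0             # crude starting point
lp = log_post(n)
samples = []
for it in range(20000):
    prop = n + rng.normal(scale=5.0, size=3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        n, lp = prop, lp_prop
    if it >= 5000:                         # discard burn-in
        samples.append(n.copy())
samples = np.array(samples)
post_mean = samples.mean(axis=0)          # best guess
post_std = samples.std(axis=0)            # per-bin uncertainty
```

The posterior spread automatically reflects both the smearing of the kernel and the counting statistics, and negative concentrations are excluded by the prior rather than appearing as inversion artifacts.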
Inverse medium scattering from periodic structures with fixed-direction incoming waves
NASA Astrophysics Data System (ADS)
Gibson, Peter; Hu, Guanghui; Zhao, Yue
2018-07-01
This paper is concerned with inverse time-harmonic acoustic and electromagnetic scattering from an infinite biperiodic medium (diffraction grating) in three dimensions. In the acoustic case, we prove that the near-field data of fixed-direction incident plane waves at multiple frequencies uniquely determine a refractive index function which depends on two variables. An analogous uniqueness result holds for the time-harmonic Maxwell system if the inhomogeneity is periodic in one direction and remains invariant along the other two directions. Uniqueness in recovering (non-periodic) compactly supported contrast functions is also presented.
From the Rendering Equation to Stratified Light Transport Inversion
2010-12-09
...iteratively. These approaches relate closely to the radiosity method for diffuse global illumination in forward rendering (Hanrahan et al., 1991; Gortler et...) ...currently simply use sparse matrices to represent T; we are also interested in exploring connections with hierarchical and wavelet radiosity as in... Seidel iterative methods used in radiosity. 2.4 Inverse Light Transport. Previous work on inverse rendering has considered inversion of the direct...
Well-conditioned fractional collocation methods using fractional Birkhoff interpolation basis
NASA Astrophysics Data System (ADS)
Jiao, Yujian; Wang, Li-Lian; Huang, Can
2016-01-01
The purpose of this paper is twofold. Firstly, we provide explicit and compact formulas for computing both Caputo and (modified) Riemann-Liouville (RL) fractional pseudospectral differentiation matrices (F-PSDMs) of any order at general Jacobi-Gauss-Lobatto (JGL) points. We show that in the Caputo case, it suffices to compute the F-PSDM of order μ ∈ (0 , 1) to obtain that of any order k + μ with integer k ≥ 0, while in the modified RL case, it is only necessary to evaluate a fractional integral matrix of order μ ∈ (0 , 1). Secondly, we introduce suitable fractional JGL Birkhoff interpolation problems leading to new interpolation polynomial basis functions with remarkable properties: (i) the matrix generated from the new basis yields the exact inverse of the F-PSDM at "interior" JGL points; (ii) the matrix of the highest fractional derivative in a collocation scheme under the new basis is diagonal; and (iii) the resulting linear system is well-conditioned in the Caputo case, while in the modified RL case, the eigenvalues of the coefficient matrix are highly concentrated. In both cases, the linear systems of the collocation schemes using the new basis can be solved by an iterative solver within a few iterations. Notably, the inverse can be computed in a very stable manner, so this offers optimal preconditioners for usual fractional collocation methods for fractional differential equations (FDEs). It is also noteworthy that the choice of certain special JGL points with parameters related to the order of the equations can ease the implementation. We highlight that the use of Bateman's fractional integral formulas and fast transforms between Jacobi polynomials with different parameters is essential for our algorithm development.
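The Birkhoff-basis construction itself is not reproduced here; the following generic sketch only illustrates the stated effect, namely that a cheap (approximate) inverse used as a preconditioner lets a simple iterative solver converge in a few iterations where the unpreconditioned iteration stalls. The matrix is a hypothetical ill-conditioned stand-in, not an actual F-PSDM:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ill-conditioned collocation-style system (condition ~ 1e4).
n = 50
A = np.diag(np.linspace(1.0, 1e4, n)) + 0.01 * rng.normal(size=(n, n))
b = rng.normal(size=n)

def richardson(M, rhs, tol=1e-8, max_iter=20000):
    """Fixed-point iteration x <- x + w*(rhs - M x); returns iteration count."""
    w = 1.0 / np.linalg.norm(M, 2)
    x = np.zeros_like(rhs)
    for k in range(max_iter):
        r = rhs - M @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(rhs):
            return x, k
        x = x + w * r
    return x, max_iter

# Preconditioner: inverse of the cheaply invertible diagonal part, playing
# the role of the exact inverse supplied by the Birkhoff basis.
P = np.diag(1.0 / np.diag(A))
x_plain, it_plain = richardson(A, b)          # stalls: hits the iteration cap
x_prec, it_prec = richardson(P @ A, P @ b)    # converges in a few iterations
```

With the preconditioner the eigenvalues of P @ A cluster near 1, which is the mechanism behind "solved by an iterative solver within a few iterations" in the abstract.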
NASA Astrophysics Data System (ADS)
Rietbroek, R.; Uebbing, B.; Lück, C.; Kusche, J.
2017-12-01
Ocean mass content (OMC) change due to the melting of the ice sheets in Greenland and Antarctica, melting of glaciers, and changes in terrestrial hydrology is a major contributor to present-day sea level rise. Since 2002, the GRACE satellite mission has served as a valuable tool for directly measuring variations in OMC. As GRACE has almost reached the end of its lifetime, efforts are being made to utilize the Swarm mission for the recovery of low-degree time-variable gravity fields, to bridge a possible gap until the GRACE-FO mission and to fill periods where GRACE data are not available. To this end we compute Swarm monthly normal equations and spherical harmonics that are found to be competitive with other solutions. In addition to directly measuring the OMC, combining GRACE gravity data with altimetry data in a global inversion approach allows the total sea level change to be separated into individual mass-driven and steric contributions. However, published estimates of OMC from the direct and inverse methods differ not only depending on the time window, but are also influenced by numerous post-processing choices. Here, we will look into sources of such differences between the direct and inverse approaches and evaluate the capabilities of Swarm to derive OMC. Deriving time series of OMC requires several processing steps: choosing a GRACE (and altimetry) product, data coverage, masks and filters to be applied in either the spatial or spectral domain, and corrections related to spatial leakage, GIA and geocenter motion. In this study, we compare and quantify the effects of the different processing choices of the direct and inverse methods. Our preliminary results point to the GIA correction as the major source of difference between the two approaches.
Direct Measurement of the Density Matrix of a Quantum System
NASA Astrophysics Data System (ADS)
Thekkadath, G. S.; Giner, L.; Chalich, Y.; Horton, M. J.; Banker, J.; Lundeen, J. S.
2016-09-01
One drawback of conventional quantum state tomography is that it does not readily provide access to single density matrix elements since it requires a global reconstruction. Here, we experimentally demonstrate a scheme that can be used to directly measure individual density matrix elements of general quantum states. The scheme relies on measuring a sequence of three observables, each complementary to the last. The first two measurements are made weak to minimize the disturbance they cause to the state, while the final measurement is strong. We perform this joint measurement on polarized photons in pure and mixed states to directly measure their density matrix. The weak measurements are achieved using two walk-off crystals, each inducing a polarization-dependent spatial shift that couples the spatial and polarization degrees of freedom of the photons. This direct measurement method provides an operational meaning to the density matrix and promises to be especially useful for large dimensional states.
NASA Astrophysics Data System (ADS)
Puzyrev, Vladimir; Torres-Verdín, Carlos; Calo, Victor
2018-05-01
The interpretation of resistivity measurements acquired in high-angle and horizontal wells is a critical technical problem in formation evaluation. We develop an efficient parallel 3-D inversion method to estimate the spatial distribution of electrical resistivity in the neighbourhood of a well from deep directional electromagnetic induction measurements. The methodology places no restriction on the spatial distribution of the electrical resistivity around arbitrary well trajectories. The fast forward modelling of triaxial induction measurements performed with multiple transmitter-receiver configurations employs a parallel direct solver. The inversion uses a preconditioned gradient-based method whose accuracy is improved using the Wolfe conditions to estimate optimal step lengths at each iteration. The large transmitter-receiver offsets used in the latest generation of commercial directional resistivity tools improve the depth of investigation to over 30 m from the wellbore. Several challenging synthetic examples confirm the feasibility of full 3-D inversion-based interpretations for these distances, hence enabling the integration of resistivity measurements with seismic amplitude data to improve the forecast of petrophysical and fluid properties. Employing parallel direct solvers for the triaxial induction problems allows for large reductions in computational effort, thereby opening the possibility of inverting multiposition 3-D data in practical CPU times.
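A minimal sketch of a gradient-based update with a Wolfe-condition line search, applied to a hypothetical anisotropic quadratic misfit (the paper's 3-D electromagnetic forward model is of course far more involved, and the bisection search below is a common textbook scheme, not necessarily the authors'):

```python
import numpy as np

def wolfe_step(f, grad, x, p, c1=0.1, c2=0.9, max_iter=50):
    """Bisection line search for a step length satisfying the weak Wolfe
    conditions (sufficient decrease + curvature) along descent direction p."""
    lo, hi, a = 0.0, np.inf, 1.0
    f0, g0 = f(x), grad(x) @ p
    for _ in range(max_iter):
        if f(x + a * p) > f0 + c1 * a * g0:     # Armijo fails: shrink
            hi = a
            a = 0.5 * (lo + hi)
        elif grad(x + a * p) @ p < c2 * g0:     # curvature fails: grow
            lo = a
            a = 2.0 * a if hi == np.inf else 0.5 * (lo + hi)
        else:
            return a
    return a

# Toy "inversion" misfit: quadratic with anisotropic curvature.
H = np.diag([1.0, 25.0])
f = lambda x: 0.5 * x @ H @ x
grad = lambda x: H @ x

x = np.array([3.0, 1.0])
for _ in range(500):
    p = -grad(x)                                # steepest-descent direction
    a = wolfe_step(f, grad, x, p)
    x = x + a * p
```

The Wolfe conditions guarantee both sufficient misfit decrease and a step long enough to make progress, which stabilizes gradient-based inversion without requiring an exact 1-D minimization at each iteration.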
3D tensor-based blind multispectral image decomposition for tumor demarcation
NASA Astrophysics Data System (ADS)
Kopriva, Ivica; Peršin, Antun
2010-03-01
Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is the identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution is the clustering-based estimation of the number of materials present in the image as well as of the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. The tensor representation of the multi-spectral image preserves its local spatial structure, which is lost in the vectorization process when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. The superior performance of tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescent images of a skin tumor (basal cell carcinoma).
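The recovery step described above, 3-mode multiplication of the image tensor with the inverse of the spectral-profile matrix, can be sketched as follows, with hypothetical spectral profiles and a synthetic 8×8 three-band image:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic image: I x J pixels, M materials, observed in M spectral bands.
# Columns of A are hypothetical spectral profiles of the materials.
I, J, M = 8, 8, 3
A = np.array([[0.9, 0.1, 0.2],
              [0.1, 0.8, 0.3],
              [0.0, 0.1, 0.5]])
S_true = rng.uniform(size=(I, J, M))         # spatial distributions

# Forward model: each pixel's spectrum mixes the material profiles
# (3-mode product of the spatial tensor with A).
X = np.einsum('km,ijm->ijk', A, S_true)

# Recovery: 3-mode multiplication of the image tensor with inv(A).
S_rec = np.einsum('mk,ijk->ijm', np.linalg.inv(A), X)
```

Because the mode product acts pixel-wise on the spectral dimension, the spatial arrangement of the tensor is untouched, which is the locality-preservation point made in the abstract.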
Efficient least angle regression for identification of linear-in-the-parameters models
Beach, Thomas H.; Rezgui, Yacine
2017-01-01
Least angle regression, as a promising model selection method, differentiates itself from conventional stepwise and stagewise methods in that it is neither too greedy nor too slow. It is closely related to L1 norm optimization, which achieves low prediction variance by sacrificing some model bias in order to enhance model generalization capability. In this paper, we propose an efficient least angle regression algorithm for model selection for a large class of linear-in-the-parameters models, with the purpose of accelerating the model selection process. The entire algorithm works in a completely recursive manner, where the correlations between model terms and residuals, the evolving directions and other pertinent variables are derived explicitly and updated successively at every subset selection step. The model coefficients are only computed when the algorithm finishes, so direct matrix inversion is avoided. A detailed computational complexity analysis indicates that the proposed algorithm possesses significant computational efficiency compared with the original approach, in which the well-known efficient Cholesky decomposition is used to solve least angle regression. Three artificial and real-world examples are employed to demonstrate the effectiveness, efficiency and numerical stability of the proposed algorithm. PMID:28293140
TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION
Allen, Genevera I.; Tibshirani, Robert
2015-01-01
Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility. PMID:26877823
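A much-simplified sketch of EM-type imputation under a plain multivariate normal (no mean restriction or transposable covariance structure, and the conditional-covariance correction of full EM is omitted), on synthetic correlated data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic correlated data with ~15% of entries missing at random.
n, p = 200, 4
L = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.8, 0.6, 0.0, 0.0],
              [0.5, 0.2, 0.8, 0.0],
              [0.3, 0.3, 0.3, 0.8]])
X_full = rng.normal(size=(n, p)) @ L.T + np.array([1.0, -1.0, 0.5, 0.0])
mask = rng.random(size=(n, p)) < 0.15          # True = missing
X = np.where(mask, np.nan, X_full)

def em_impute(X, n_iter=50, ridge=1e-3):
    """Alternate conditional-mean imputation with mean/covariance
    re-estimation (a simplified, unpenalized EM-style sketch)."""
    miss = np.isnan(X)
    Z = np.where(miss, np.nanmean(X, axis=0), X)   # start from column means
    for _ in range(n_iter):
        mu = Z.mean(axis=0)
        S = np.cov(Z, rowvar=False) + ridge * np.eye(X.shape[1])
        for i in range(X.shape[0]):
            m = miss[i]
            if m.any():
                o = ~m
                # Conditional mean of missing given observed entries.
                coef = S[np.ix_(m, o)] @ np.linalg.inv(S[np.ix_(o, o)])
                Z[i, m] = mu[m] + coef @ (Z[i, o] - mu[o])
    return Z

Z = em_impute(X)
col_mean = np.nanmean(X, axis=0)
rmse_em = np.sqrt(np.mean((Z[mask] - X_full[mask]) ** 2))
rmse_mean = np.sqrt(np.mean((np.broadcast_to(col_mean, X.shape)[mask]
                             - X_full[mask]) ** 2))
```

Because the columns are correlated, conditioning on the observed entries beats marginal mean imputation; the transposable models in the paper extend this idea to exploit row covariance as well.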
Le Châtelier reciprocal relations and the mechanical analog
NASA Astrophysics Data System (ADS)
Gilmore, Robert
1983-08-01
Le Châtelier's principle is discussed carefully in terms of two sets of simple thermodynamic examples. The principle is then formulated quantitatively for general thermodynamic systems. The formulation is in terms of a perturbation-response matrix, the Le Châtelier matrix [L]. Le Châtelier's principle is contained in the diagonal elements of this matrix, all of which exceed one. These matrix elements describe the response of a system to a perturbation of either its extensive or intensive variables. These response ratios are inverses of each other. The Le Châtelier matrix is symmetric, so that a new set of thermodynamic reciprocal relations is derived. This quantitative formulation is illustrated by a single simple example which includes the original examples and shows the reciprocities among them. The assumptions underlying this new quantitative formulation of Le Châtelier's principle are general and applicable to a wide variety of nonthermodynamic systems. Le Châtelier's principle is formulated quantitatively for mechanical systems in static equilibrium, and mechanical examples of this formulation are given.
Development of a spectroscopic Mueller matrix imaging ellipsometer for nanostructure metrology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xiuguo; Du, Weichao; Yuan, Kui
2016-05-15
In this paper, we describe the development of a spectroscopic Mueller matrix imaging ellipsometer (MMIE), which combines the great power of Mueller matrix ellipsometry with the high spatial resolution of optical microscopy. A dual rotating-compensator configuration is adopted to collect the full 4 × 4 imaging Mueller matrix in a single measurement. The light wavelengths are scanned in the range of 400–700 nm by a monochromator. The instrument has measurement accuracy and precision better than 0.01 for all the Mueller matrix elements in both the whole image and the whole spectral range. The instrument was then applied to the measurement of nanostructures combined with an inverse diffraction problem solving technique. The experiment performed on a photoresist grating sample has demonstrated the great potential of MMIE for accurate grating reconstruction from spectral data collected by a single pixel of the camera and for efficient quantification of the geometrical profile of the grating structure over a large area with pixel resolution. It is expected that MMIE will be a powerful tool for nanostructure metrology in future high-volume nanomanufacturing.
Pan, Yijie; Wang, Yongtian; Liu, Juan; Li, Xin; Jia, Jia
2014-03-01
Previous research [Appl. Opt. 52, A290 (2013)] has revealed that Fourier analysis of three-dimensional affine transformation theory can be used to improve the computation speed of the traditional polygon-based method. In this paper, we continue this research and propose an improved fully analytical polygon-based method built upon that theory. Vertex vectors of the primitive and arbitrary triangles and the pseudo-inverse matrix are used to obtain an affine transformation matrix representing the spatial relationship between the two triangles. With this relationship and the primitive spectrum, we analytically obtain the spectrum of the arbitrary triangle. This algorithm discards low-level angular-dependent computations. In order to add diffusive reflection to each arbitrary surface, we also propose a whole-matrix computation approach that takes advantage of the affine transformation matrix and uses matrix multiplication to calculate the shifting parameters of similar sub-polygons. The proposed method improves hologram computation speed over the conventional fully analytical approach. Optical experimental results demonstrate that the proposed method can effectively reconstruct three-dimensional scenes.
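The pseudo-inverse construction of the affine matrix relating two triangles can be sketched as follows, using hypothetical 2-D homogeneous vertex coordinates (the paper works with 3-D triangles, but the relation targ = M @ prim is the same):

```python
import numpy as np

# Triangles as 3x3 matrices of homogeneous vertex vectors: rows are the
# x-coordinates, y-coordinates, and a row of ones; columns are vertices.
prim = np.array([[0.0, 1.0, 0.0],      # primitive triangle
                 [0.0, 0.0, 1.0],
                 [1.0, 1.0, 1.0]])
targ = np.array([[2.0, 3.5, 1.0],      # hypothetical arbitrary triangle
                 [1.0, 1.5, 4.0],
                 [1.0, 1.0, 1.0]])

# Affine matrix M mapping the primitive onto the target, via the
# pseudo-inverse of the primitive's vertex matrix.
M = targ @ np.linalg.pinv(prim)
```

Given M, the spectrum of the arbitrary triangle follows from the primitive spectrum by the affine theorem of the Fourier transform, which is the speed-up the abstract describes.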
Random matrix theory and fund of funds portfolio optimisation
NASA Astrophysics Data System (ADS)
Conlon, T.; Ruskin, H. J.; Crane, M.
2007-08-01
The proprietary nature of hedge fund investing means that it is common practice for managers to release minimal information about their returns. The construction of a fund of hedge funds portfolio requires a correlation matrix, which often has to be estimated from a relatively small sample of monthly returns data, and this induces noise. In this paper, random matrix theory (RMT) is applied to a cross-correlation matrix C constructed from hedge fund returns data. The analysis reveals a number of eigenvalues that deviate from the spectrum suggested by RMT. The components of the deviating eigenvectors are found to correspond to distinct groups of strategies applied by hedge fund managers. The inverse participation ratio is used to quantify the number of components that participate in each eigenvector. Finally, the correlation matrix is cleaned by separating the noisy part of C from the non-noisy part. This technique is found to greatly reduce the difference between the predicted and realised risk of a portfolio, leading to an improved risk profile for a fund of hedge funds.
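A minimal sketch of the RMT cleaning step, on synthetic returns with one planted common factor: eigenvalues above the Marchenko-Pastur upper edge are treated as signal, while the noise bulk is flattened to its average (one common cleaning recipe; the paper's exact procedure may differ):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic returns: N funds over T months, with one common factor so
# that a single eigenvalue deviates from the random-matrix spectrum.
N, T = 50, 200
common = rng.normal(size=T)
R = 0.3 * common[None, :] + rng.normal(size=(N, T))
R = (R - R.mean(axis=1, keepdims=True)) / R.std(axis=1, keepdims=True)
C = R @ R.T / T                               # sample correlation matrix

# Marchenko-Pastur upper edge for a pure-noise correlation matrix.
q = N / T
lam_max = (1 + np.sqrt(q)) ** 2

w, V = np.linalg.eigh(C)
signal = w > lam_max                          # deviating eigenvalues
noise_var = w[~signal].mean()

# Cleaned matrix: keep deviating modes, flatten the noise bulk
# (the average preserves the trace of C).
w_clean = np.where(signal, w, noise_var)
C_clean = V @ np.diag(w_clean) @ V.T
n_deviating = int(signal.sum())
```

The deviating eigenvectors carry the strategy-group structure mentioned in the abstract, while the flattened bulk removes the spurious correlations responsible for underestimated portfolio risk.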
Laplace Transform Based Radiative Transfer Studies
NASA Astrophysics Data System (ADS)
Hu, Y.; Lin, B.; Ng, T.; Yang, P.; Wiscombe, W.; Herath, J.; Duffy, D.
2006-12-01
Multiple scattering is the major uncertainty in the data analysis of space-based lidar measurements. Until now, accurate quantitative lidar data analysis has been limited to very thin objects dominated by single scattering, where photons from the laser beam scatter only once with particles in the atmosphere before reaching the receiver and a simple linear relationship between physical property and lidar signal exists. In reality, multiple scattering is always a factor in space-based lidar measurement, and it dominates space-based lidar returns from clouds, dust aerosols, vegetation canopy and phytoplankton. While multiple-scattering returns are clear signals, the lack of a fast-enough lidar multiple-scattering computation tool forces us to treat them as unwanted "noise" and to use simple multiple-scattering correction schemes to remove them. Such treatments waste the multiple-scattering signals and may cause orders-of-magnitude errors in retrieved physical properties. Thus the lack of fast and accurate time-dependent radiative transfer tools significantly limits lidar remote sensing capabilities. Analyzing lidar multiple-scattering signals requires fast and accurate time-dependent radiative transfer computations. Currently, multiple scattering is modeled with Monte Carlo simulations, which take minutes to hours, are too slow for interactive satellite data analysis, and can only be used to support system/algorithm design and error assessment. We present an innovative physics approach to solve the time-dependent radiative transfer problem. The technique utilizes FPGA-based reconfigurable computing hardware. The approach is as follows. 1. Physics solution: perform a Laplace transform on the time and spatial dimensions and a Fourier transform on the viewing azimuth dimension, converting the solution of the radiative transfer differential equation into a fast matrix inversion problem.
The majority of the radiative transfer computation goes to matrix inversion processes, FFT and inverse Laplace transforms. 2. Hardware solutions: Perform the well-defined matrix inversion, FFT and Laplace transforms on highly parallel, reconfigurable computing hardware. This physics-based computational tool leads to accurate quantitative analysis of space-based lidar signals and improves data quality of current lidar mission such as CALIPSO. This presentation will introduce the basic idea of this approach, preliminary results based on SRC's FPGA-based Mapstation, and how we may apply it to CALIPSO data analysis.
NASA Astrophysics Data System (ADS)
Bui-Thanh, T.; Girolami, M.
2014-11-01
We consider the Riemann manifold Hamiltonian Monte Carlo (RMHMC) method for solving statistical inverse problems governed by partial differential equations (PDEs). The Bayesian framework is employed to cast the inverse problem into the task of statistical inference, whose solution is the posterior distribution in an infinite dimensional parameter space conditioned on observation data and a Gaussian prior measure. We discretize both the likelihood and the prior using the H1-conforming finite element method together with a matrix transfer technique. The power of the RMHMC method is that it exploits the geometric structure induced by the PDE constraints of the underlying inverse problem. Consequently, each RMHMC posterior sample is almost uncorrelated/independent from the others, providing statistically efficient Markov chain simulation. However, this statistical efficiency comes at a computational cost. This motivates us to consider computationally more efficient strategies for RMHMC. At the heart of our construction is the fact that for Gaussian error structures the Fisher information matrix coincides with the Gauss-Newton Hessian. We exploit this fact in considering a computationally simplified RMHMC method combining state-of-the-art adjoint techniques and the superiority of the RMHMC method. Specifically, we first form the Gauss-Newton Hessian at the maximum a posteriori point and then use it as a fixed constant metric tensor throughout the RMHMC simulation. This eliminates the need for the computationally costly differential geometric Christoffel symbols, which in turn greatly reduces computational effort, at a corresponding loss of sampling efficiency. We further reduce the cost of forming the Fisher information matrix by using a low-rank approximation via a randomized singular value decomposition technique. This is efficient since only a small number of Hessian-vector products are required.
The Hessian-vector product in turn requires only two extra PDE solves using the adjoint technique. Various numerical results up to 1025 parameters are presented to demonstrate the ability of the RMHMC method in exploring the geometric structure of the problem to propose (almost) uncorrelated/independent samples that are far away from each other, and yet the acceptance rate is almost unity. The results also suggest that for the PDE models considered the proposed fixed metric RMHMC can attain almost as high a quality performance as the original RMHMC, i.e. generating (almost) uncorrelated/independent samples, while being two orders of magnitude less computationally expensive.
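A toy sketch of HMC with a fixed metric, on a hypothetical 2-D Gaussian posterior whose precision matrix stands in for the Gauss-Newton Hessian at the MAP point (no PDEs or adjoints here; just the constant-mass-matrix idea):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical anisotropic Gaussian posterior: precision H plays the role
# of the Gauss-Newton Hessian evaluated at the MAP point.
H = np.diag([1.0, 100.0])
neg_log_post = lambda x: 0.5 * x @ H @ x
grad = lambda x: H @ x

M = H                                     # fixed metric = mass matrix
M_chol = np.linalg.cholesky(M)
M_inv = np.linalg.inv(M)

def hmc_step(x, eps=0.5, n_leap=10):
    """One leapfrog HMC transition with constant metric M."""
    p = M_chol @ rng.normal(size=x.size)  # momentum ~ N(0, M)
    x_new, p_new = x.copy(), p.copy()
    p_new = p_new - 0.5 * eps * grad(x_new)
    for _ in range(n_leap):
        x_new = x_new + eps * (M_inv @ p_new)
        p_new = p_new - eps * grad(x_new)
    p_new = p_new + 0.5 * eps * grad(x_new)   # undo the extra half kick
    h0 = neg_log_post(x) + 0.5 * p @ M_inv @ p
    h1 = neg_log_post(x_new) + 0.5 * p_new @ M_inv @ p_new
    return (x_new, True) if np.log(rng.random()) < h0 - h1 else (x, False)

x = np.array([1.0, 0.1])
accepted, samples = 0, []
for _ in range(2000):
    x, ok = hmc_step(x)
    accepted += ok
    samples.append(x.copy())
samples = np.array(samples[500:])         # discard burn-in
accept_rate = accepted / 2000
```

Because the metric matches the posterior curvature, the preconditioned dynamics is nearly isotropic, so proposals travel far and the acceptance rate stays close to unity, which mirrors the behaviour reported for the fixed-metric RMHMC.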
Tondon, Abhishek; Kaunas, Roland
2014-01-01
Cell structure depends on both matrix strain and stiffness, but their interactive effects are poorly understood. We investigated the interactive roles of matrix properties and stretching patterns on cell structure by uniaxially stretching U2OS cells expressing GFP-actin on silicone rubber sheets supporting either a surface-adsorbed coating or thick hydrogel of type-I collagen. Cells and their actin stress fibers oriented perpendicular to the direction of cyclic stretch on collagen-coated sheets, but oriented parallel to the stretch direction on collagen gels. There was significant alignment parallel to the direction of a steady increase in stretch for cells on collagen gels, while cells on collagen-coated sheets did not align in any direction. The extent of alignment was dependent on both strain rate and duration. Stretch-induced alignment on collagen gels was blocked by the myosin light-chain kinase inhibitor ML7, but not by the Rho-kinase inhibitor Y27632. We propose that active orientation of the actin cytoskeleton perpendicular and parallel to direction of stretch on stiff and soft substrates, respectively, are responses that tend to maintain intracellular tension at an optimal level. Further, our results indicate that cells can align along directions of matrix stress without collagen fibril alignment, indicating that matrix stress can directly regulate cell morphology.