Sample records for linear inverse solution

  1. Application of a stochastic inverse to the geophysical inverse problem

    NASA Technical Reports Server (NTRS)

    Jordan, T. H.; Minster, J. B.

    1972-01-01

    The inverse problem for gross earth data can be reduced to an underdetermined linear system of integral equations of the first kind. A theory is discussed for computing particular solutions to this linear system based on the stochastic inverse theory presented by Franklin. The stochastic inverse is derived and related to the generalized inverse of Penrose and Moore. A Backus-Gilbert type tradeoff curve is constructed for the problem of estimating the solution to the linear system in the presence of noise. It is shown that the stochastic inverse represents an optimal point on this tradeoff curve. A useful form of the solution autocorrelation operator as a member of a one-parameter family of smoothing operators is derived.
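
    The abstract above describes a one-parameter family of damped solutions traced out along a tradeoff curve; the following is only a minimal numerical sketch of that idea under assumptions of our own (a generic underdetermined system G m = d with Gaussian noise), not the authors' formulation. The damping parameter alpha plays the role of the smoothing parameter.

      import numpy as np

      # Hypothetical underdetermined linear system G m = d (more unknowns than data).
      rng = np.random.default_rng(0)
      G = rng.normal(size=(20, 50))
      m_true = rng.normal(size=50)
      d = G @ m_true + 0.05 * rng.normal(size=20)

      # One-parameter family of damped minimum-norm (stochastic-inverse-like) solutions:
      #   m(alpha) = G^T (G G^T + alpha I)^-1 d.   Larger alpha smooths more.
      for alpha in (1e-4, 1e-2, 1.0):
          m_est = G.T @ np.linalg.solve(G @ G.T + alpha * np.eye(G.shape[0]), d)
          misfit = np.linalg.norm(G @ m_est - d)   # data misfit (one axis of the tradeoff)
          size = np.linalg.norm(m_est)             # solution norm (the other axis)
          print(f"alpha={alpha:g}  misfit={misfit:.3f}  model norm={size:.3f}")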

  2. Polynomial compensation, inversion, and approximation of discrete time linear systems

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1987-01-01

    The least-squares transformation of a discrete-time multivariable linear system into a desired one by convolving the first with a polynomial system yields optimal polynomial solutions to the problems of system compensation, inversion, and approximation. The polynomial coefficients are obtained from the solution to a so-called normal linear matrix equation, whose coefficients are shown to be the weighting patterns of certain linear systems. These, in turn, can be used in the recursive solution of the normal equation.
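
    As a toy illustration only (the paper treats the general multivariable case and derives the normal matrix equation analytically), the sketch below fits FIR polynomial coefficients by least squares so that convolution with a given scalar impulse response approximates a desired delayed impulse; all signals and lengths here are invented for the example.

      import numpy as np

      # Hypothetical scalar impulse response h[n] and a desired overall response
      # (a pure 3-sample delay), so the fitted polynomial acts as an approximate inverse.
      h = np.array([1.0, 0.6, 0.25, 0.1])
      n_coef, n_out = 8, 20
      desired = np.zeros(n_out)
      desired[3] = 1.0

      # Convolution matrix H so that (H c)[n] = sum_k h[n - k] c[k] for an FIR polynomial c.
      H = np.zeros((n_out, n_coef))
      for k in range(n_coef):
          H[k:k + len(h), k] = h

      # Least-squares (normal-equation) solution for the polynomial coefficients.
      c, *_ = np.linalg.lstsq(H, desired, rcond=None)
      print("compensator coefficients:", np.round(c, 3))
      print("h * c:", np.round(np.convolve(h, c), 3))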

  3. Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.

    2010-12-01

    Almost all geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdf’s) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized, leading to efficiently solved (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically and incorporating any available prior information using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully-probabilistic global tomography model of the Earth’s crust and mantle, and second inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.

  4. Mixed linear-non-linear inversion of crustal deformation data: Bayesian inference of model, weighting and regularization parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun'ichi; Johnson, Kaj M.

    2010-06-01

    We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.

  5. FAST TRACK PAPER: Non-iterative multiple-attenuation methods: linear inverse solutions to non-linear inverse problems - II. BMG approximation

    NASA Astrophysics Data System (ADS)

    Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing

    2004-12-01

    The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution or F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate. These assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data. We showed in that paper that a linear and non-iterative solution to the problem of attenuating free-surface multiples which is as accurate as iterative non-linear solutions can be constructed for OBS data. We here present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.

  6. Inverse solutions for electrical impedance tomography based on conjugate gradients methods

    NASA Astrophysics Data System (ADS)

    Wang, M.

    2002-01-01

    A multistep inverse solution for two-dimensional electric field distribution is developed to deal with the nonlinear inverse problem of electric field distribution in relation to its boundary condition and the problem of divergence due to errors introduced by the ill-conditioned sensitivity matrix and the noise produced by electrode modelling and instruments. This solution is based on a normalized linear approximation method where the change in mutual impedance is derived from the sensitivity theorem and a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
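
    The following sketch, under invented assumptions (a generic ill-conditioned sensitivity matrix rather than a real EIT Jacobian), illustrates the point made above that limiting the number of conjugate-gradient iterations at each linearized step acts as a control on the amplification of errors by the ill-conditioned system.

      import numpy as np
      from scipy.sparse.linalg import cg

      rng = np.random.default_rng(1)
      # Hypothetical ill-conditioned sensitivity matrix S (a stand-in for the EIT Jacobian).
      n = 200
      Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
      S = (Q * np.logspace(0, -4, n)) @ Q.T        # rapidly decaying singular values
      x_true = np.sin(np.linspace(0, 3 * np.pi, n))
      b = S @ x_true + 1e-2 * rng.normal(size=n)

      # Conjugate gradients on the normal equations S^T S x = S^T b; stopping after a
      # few iterations limits the artificial errors amplified by the ill-conditioning.
      N, rhs = S.T @ S, S.T @ b
      for maxiter in (5, 200):
          x_est, _ = cg(N, rhs, maxiter=maxiter)
          print(f"{maxiter:4d} CG iterations, reconstruction error "
                f"{np.linalg.norm(x_est - x_true):.3f}")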

  7. Bayesian Approach to the Joint Inversion of Gravity and Magnetic Data, with Application to the Ismenius Area of Mars

    NASA Technical Reports Server (NTRS)

    Jewell, Jeffrey B.; Raymond, C.; Smrekar, S.; Millbury, C.

    2004-01-01

    This viewgraph presentation reviews a Bayesian approach to the inversion of gravity and magnetic data with specific application to the Ismenius Area of Mars. Many inverse problems encountered in geophysics and planetary science are well known to be non-unique (e.g. inversion of gravity for the density structure of a body). In hopes of reducing the non-uniqueness of solutions, there has been interest in the joint analysis of data. An example is the joint inversion of gravity and magnetic data, with the assumption that the same physical anomalies generate both the observed magnetic and gravitational anomalies. In this talk, we formulate the joint analysis of different types of data in a Bayesian framework and apply the formalism to the inference of the density and remanent magnetization structure for a local region in the Ismenius area of Mars. The Bayesian approach allows prior information or constraints on the solutions to be incorporated in the inversion, with the "best" solutions those whose forward predictions most closely match the data while remaining consistent with assumed constraints. The application of this framework to the inversion of gravity and magnetic data on Mars reveals two typical challenges: the forward predictions of the data have a linear dependence on some of the quantities of interest, and non-linear dependence on others (termed the "linear" and "non-linear" variables, respectively). For observations with Gaussian noise, a Bayesian approach to inversion for "linear" variables reduces to a linear filtering problem, with an explicitly computable "error" matrix. However, for models whose forward predictions have non-linear dependencies, inference is no longer given by such a simple linear problem, and moreover, the uncertainty in the solution is no longer completely specified by a computable "error matrix". It is therefore important to develop methods for sampling from the full Bayesian posterior to provide a complete and statistically consistent picture of model uncertainty, and what has been learned from observations. We will discuss advanced numerical techniques, including Monte Carlo Markov Chain sampling.
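
    For the "linear" variables mentioned above, the reduction to a linear filtering problem with an explicitly computable error matrix looks, in a generic Gaussian setting (invented operator and sizes, not the Mars gravity/magnetics model), roughly like this sketch:

      import numpy as np

      rng = np.random.default_rng(2)
      A = rng.normal(size=(30, 10))                # hypothetical linear forward operator
      m_true = rng.normal(size=10)
      sigma_d, sigma_m = 0.1, 1.0                  # noise and prior standard deviations
      d = A @ m_true + sigma_d * rng.normal(size=30)

      # Gaussian posterior for the "linear" variables:
      #   covariance C = (A^T A / sigma_d^2 + I / sigma_m^2)^-1,  mean = C A^T d / sigma_d^2
      C = np.linalg.inv(A.T @ A / sigma_d**2 + np.eye(10) / sigma_m**2)
      m_mean = C @ A.T @ d / sigma_d**2
      print("posterior mean:", np.round(m_mean, 2))
      print("posterior std :", np.round(np.sqrt(np.diag(C)), 3))  # the explicit "error" matrix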

  8. Recursive inversion of externally defined linear systems

    NASA Technical Reports Server (NTRS)

    Bach, Ralph E., Jr.; Baram, Yoram

    1988-01-01

    The approximate inversion of an internally unknown linear system, given by its impulse response sequence, by an inverse system having a finite impulse response, is considered. The recursive least squares procedure is shown to have an exact initialization, based on the triangular Toeplitz structure of the matrix involved. The proposed approach also suggests solutions to the problems of system identification and compensation.
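
    A minimal sketch of the structural point made above, in an assumed scalar case: because the convolution matrix built from the impulse response is lower-triangular Toeplitz, an exact finite-impulse-response inverse of given length follows from a forward (triangular) solve, with no general matrix inversion needed.

      import numpy as np
      from scipy.linalg import solve_triangular

      # Hypothetical impulse response of the externally defined (internally unknown) system.
      h = np.array([1.0, -0.4, 0.15, 0.05])
      n = 12                                       # length of the FIR inverse to be built

      # Lower-triangular Toeplitz convolution matrix H, so H g = e_0 makes h * g ~ impulse.
      H = np.zeros((n, n))
      for k in range(n):
          H[k:k + len(h), k] = h[:n - k]
      e0 = np.zeros(n)
      e0[0] = 1.0

      g = solve_triangular(H, e0, lower=True)      # exact triangular (forward) solve
      print("FIR inverse:", np.round(g, 4))
      print("h * g      :", np.round(np.convolve(h, g)[:n], 4))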

  9. A systematic linear space approach to solving partially described inverse eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Hu, Sau-Lon James; Li, Haujun

    2008-06-01

    Most applications of the inverse eigenvalue problem (IEP), which concerns the reconstruction of a matrix from prescribed spectral data, are associated with special classes of structured matrices. Solving the IEP requires one to satisfy both the spectral constraint and the structural constraint. If the spectral constraint consists of only one or few prescribed eigenpairs, this kind of inverse problem has been referred to as the partially described inverse eigenvalue problem (PDIEP). This paper develops an efficient, general and systematic approach to solve the PDIEP. Basically, the approach, applicable to various structured matrices, converts the PDIEP into an ordinary inverse problem that is formulated as a set of simultaneous linear equations. While solving simultaneous linear equations for model parameters, the singular value decomposition method is applied. Because of the conversion to an ordinary inverse problem, other constraints associated with the model parameters can be easily incorporated into the solution procedure. The detailed derivation and numerical examples to implement the newly developed approach to symmetric Toeplitz and quadratic pencil (including mass, damping and stiffness matrices of a linear dynamic system) PDIEPs are presented. Excellent numerical results for both kinds of problem are achieved under the situations that have either unique or infinitely many solutions.

  10. Recursive inversion of externally defined linear systems by FIR filters

    NASA Technical Reports Server (NTRS)

    Bach, Ralph E., Jr.; Baram, Yoram

    1989-01-01

    The approximate inversion of an internally unknown linear system, given by its impulse response sequence, by an inverse system having a finite impulse response, is considered. The recursive least-squares procedure is shown to have an exact initialization, based on the triangular Toeplitz structure of the matrix involved. The proposed approach also suggests solutions to the problem of system identification and compensation.

  11. Fractional Gaussian model in global optimization

    NASA Astrophysics Data System (ADS)

    Dimri, V. P.; Srivastava, R. P.

    2009-12-01

    The Earth system is inherently non-linear, and it can be characterized well if we incorporate non-linearity in the formulation and solution of the problem. The general tool often used for characterization of the earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion by linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a posterior Gaussian probability distribution. It is now well established that most of the physical properties of the earth follow a power law (fractal distribution). Thus, the selection of an initial model based on a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method has been demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function which uses the mean, variance and Hurst coefficient of the model space to draw the initial model. Further, this initial model is used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion method.

  12. Inverse scattering method and soliton double solution family for the general symplectic gravity model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao Yajun

    A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the method straightforward and effective to apply. As an application, a concrete family of soliton double solutions for the considered theory is obtained.

  13. Solution of underdetermined systems of equations with gridded a priori constraints.

    PubMed

    Stiros, Stathis C; Saltogianni, Vasso

    2014-01-01

    The TOPINV, Topological Inversion algorithm (or TGS, Topological Grid Search), initially developed for the inversion of highly non-linear redundant systems of equations, can solve a wide range of underdetermined systems of non-linear equations. This approach is a generalization of a previous conclusion that this algorithm can be used for the solution of certain integer ambiguity problems in geodesy. The overall approach is based on additional (a priori) information for the unknown variables. In the past, such information was used either to linearize equations around approximate solutions, or to expand systems of observation equations solved on the basis of generalized inverses. In the proposed algorithm, the a priori additional information is used in a third way, as topological constraints on the n unknown variables, leading to a grid in R^n containing an approximation of the real solution. The TOPINV algorithm does not focus on point-solutions, but exploits the structural and topological constraints in each system of underdetermined equations in order to identify an optimal closed space in R^n containing the real solution. The centre of gravity of the grid points defining this space corresponds to global, minimum-norm solutions. The rationale and validity of the overall approach are demonstrated on the basis of examples and case studies, including fault modelling, in comparison with SVD solutions and true (reference) values, in an accuracy-oriented approach.
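
    The following is only a schematic sketch of the grid-search-and-centroid idea described above, with an invented toy system of two equations in three unknowns; the published TOPINV algorithm is considerably more elaborate.

      import itertools
      import numpy as np

      # Hypothetical underdetermined system: two non-linear equations in three unknowns.
      def equations(x, y, z):
          return np.array([x**2 + y + z - 1.0, x * y - z])

      # A priori bounds on each unknown define the grid in R^3.
      axes = [np.linspace(-2.0, 2.0, 41)] * 3
      tol = 0.05

      accepted = [p for p in itertools.product(*axes)
                  if np.all(np.abs(equations(*p)) < tol)]    # set (topological) constraint
      solution = np.mean(accepted, axis=0)                   # centre of gravity of the set
      print(len(accepted), "grid points accepted; solution estimate:", np.round(solution, 3))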

  14. Three-dimensional inversion of multisource array electromagnetic data

    NASA Astrophysics Data System (ADS)

    Tartaras, Efthimios

    Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.

  15. Inverse dynamics of a 3 degree of freedom spatial flexible manipulator

    NASA Technical Reports Server (NTRS)

    Bayo, Eduardo; Serna, M.

    1989-01-01

    A technique is presented for solving the inverse dynamics and kinematics of a 3-degree-of-freedom spatial flexible manipulator. The proposed method finds the joint torques necessary to produce a specified end-effector motion. Since the inverse dynamic problem in elastic manipulators is closely coupled to the inverse kinematic problem, the solution of the first also renders the displacements and rotations at any point of the manipulator, including the joints. Furthermore, the formulation is complete in the sense that it includes all the nonlinear terms due to the large rotation of the links. The Timoshenko beam theory is used to model the elastic characteristics, and the resulting equations of motion are discretized using the finite element method. An iterative solution scheme is proposed that relies on local linearization of the problem. The solution of each linearization is carried out in the frequency domain. The performance and capabilities of this technique are tested through simulation analysis. Results show the potential use of this method for the smooth motion control of space telerobots.

  16. ANNIT - An Efficient Inversion Algorithm based on Prediction Principles

    NASA Astrophysics Data System (ADS)

    Růžek, B.; Kolář, P.

    2009-04-01

    The solution of inverse problems is a central task in geophysics. The amount of data is continuously increasing, methods of modelling are being improved, and computing facilities are making great technical progress. The development of new and efficient algorithms and computer codes for both forward and inverse modelling therefore remains relevant. ANNIT contributes to this stream, since it is a tool for the efficient solution of a set of non-linear equations. Typical geophysical problems are based on a parametric approach. The system is characterized by a vector of parameters p; the response of the system is characterized by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex: the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and generally it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D, M, respectively, are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G does exist; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted by using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made by using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, (c) linear prediction (also known as "Kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models. Archive models are re-used in a suitable way and thus the number of forward evaluations is minimized. ANNIT is now implemented both in MATLAB and SCILAB. Numerical tests show good performance of the algorithm. Both versions and documentation are available on the Internet and anybody can download them. The goal of this presentation is to offer the algorithm and computer codes to anybody interested in the solution of inverse problems.

  17. Computationally efficient control allocation

    NASA Technical Reports Server (NTRS)

    Durham, Wayne (Inventor)

    2001-01-01

    A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
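
    For comparison with the description above, here is a sketch of the pseudoinverse (minimum-norm) allocation that the method is benchmarked against, using an invented 3-objective, 5-effector effectiveness matrix; it is not the patented facet-searching method itself.

      import numpy as np

      # Hypothetical control-effectiveness matrix B (3 objectives, 5 redundant effectors)
      # and desired moment vector; the allocation must respect the effector limits.
      B = np.array([[1.0, 0.5, -0.3, 0.0,  0.8],
                    [0.0, 1.0,  0.4, 0.9, -0.2],
                    [0.3, 0.0,  1.0, 0.5,  0.6]])
      m_des = np.array([0.7, -0.4, 0.9])
      u_min, u_max = -1.0, 1.0

      # Minimum-norm (pseudoinverse) allocation, clipped to the effector limits.
      u = np.linalg.pinv(B) @ m_des
      u_clipped = np.clip(u, u_min, u_max)
      print("allocation:", np.round(u_clipped, 3))
      print("achieved vs desired moment:", np.round(B @ u_clipped, 3), m_des)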

  18. Fast inverse scattering solutions using the distorted Born iterative method and the multilevel fast multipole algorithm

    PubMed Central

    Hesford, Andrew J.; Chew, Weng C.

    2010-01-01

    The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438

  19. Inverse kinematics of a dual linear actuator pitch/roll heliostat

    NASA Astrophysics Data System (ADS)

    Freeman, Joshua; Shankar, Balakrishnan; Sundaram, Ganesh

    2017-06-01

    This work presents a simple, computationally efficient inverse kinematics solution for a pitch/roll heliostat using two linear actuators. The heliostat design and kinematics have been developed, modeled and tested using computer simulation software. A physical heliostat prototype was fabricated to validate the theoretical computations and data. Pitch/roll heliostats have numerous advantages including reduced cost potential and reduced space requirements, with a primary disadvantage being the significantly more complicated kinematics, which are solved here. Novel methods are applied to simplify the inverse kinematics problem which could be applied to other similar problems.

  20. A new frequency domain analytical solution of a cascade of diffusive channels for flood routing

    NASA Astrophysics Data System (ADS)

    Cimorelli, Luigi; Cozzolino, Luca; Della Morte, Renata; Pianese, Domenico; Singh, Vijay P.

    2015-04-01

    Simplified flood propagation models are often employed in practical applications for hydraulic and hydrologic analyses. In this paper, we present a new numerical method for the solution of the Linear Parabolic Approximation (LPA) of the De Saint Venant equations (DSVEs), accounting for the space variation of model parameters and the imposition of appropriate downstream boundary conditions. The new model is based on the analytical solution of a cascade of linear diffusive channels in the Laplace Transform domain. The time domain solutions are obtained using a Fourier series approximation of the Laplace Inversion formula. The new Inverse Laplace Transform Diffusive Flood Routing model (ILTDFR) can be used as a building block for the construction of real-time flood forecasting models or in optimization models, because it is unconditionally stable and allows fast and fairly precise computation.

  1. A novel post-processing scheme for two-dimensional electrical impedance tomography based on artificial neural networks

    PubMed Central

    2017-01-01

    Objective Electrical Impedance Tomography (EIT) is a powerful non-invasive technique for imaging applications. The goal is to estimate the electrical properties of living tissues by measuring the potential at the boundary of the domain. Being safe with respect to patient health, non-invasive, and having no known hazards, EIT is an attractive and promising technology. However, it suffers from a particular technical difficulty, which consists of solving a nonlinear inverse problem in real time. Several nonlinear approaches have been proposed as a replacement for the linear solver, but in practice very few are capable of stable, high-quality, and real-time EIT imaging because of their very low robustness to errors and inaccurate modeling, or because they require considerable computational effort. Methods In this paper, a post-processing technique based on an artificial neural network (ANN) is proposed to obtain a nonlinear solution to the inverse problem, starting from a linear solution. While common reconstruction methods based on ANNs estimate the solution directly from the measured data, the method proposed here enhances the solution obtained from a linear solver. Conclusion Applying a linear reconstruction algorithm before applying an ANN reduces the effects of noise and modeling errors. Hence, this approach significantly reduces the error associated with solving 2D inverse problems using machine-learning-based algorithms. Significance This work presents radical enhancements in the stability of nonlinear methods for biomedical EIT applications. PMID:29206856

  2. Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization

    NASA Astrophysics Data System (ADS)

    Yamagishi, Masao; Yamada, Isao

    2017-04-01

    Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, there has not yet been reported any nonexpansive operator that yields an update free from the inversions of linear operators in cases where it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.

  3. Determination of unknown coefficient in a non-linear elliptic problem related to the elastoplastic torsion of a bar

    NASA Astrophysics Data System (ADS)

    Hasanov, Alemdar; Erdem, Arzu

    2008-08-01

    The inverse problem of determining the unknown coefficient of the non-linear differential equation of torsional creep is studied. The unknown coefficient g = g(ξ^2) depends on the gradient ξ := |∇u| of the solution u(x), x ∈ Ω ⊂ R^n, of the direct problem. It is proved that this gradient is bounded in the C-norm. This permits one to choose the natural class of admissible coefficients for the considered inverse problem. The continuity in the norm of the Sobolev space H^1(Ω) of the solution u(x; g) of the direct problem with respect to the unknown coefficient g = g(ξ^2) is obtained in the following sense: ||u(x; g) - u(x; g_m)||_1 → 0 when g_m(η) → g(η) pointwise as m → ∞. Based on these results, the existence of a quasi-solution of the inverse problem in the considered class of admissible coefficients is obtained. Numerical examples related to the determination of the unknown coefficient are presented.

  4. Solution Methods for 3D Tomographic Inversion Using A Highly Non-Linear Ray Tracer

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Ballard, S.; Young, C. J.; Chang, M.

    2008-12-01

    To develop 3D velocity models to improve nuclear explosion monitoring capability, we have developed a 3D tomographic modeling system that traces rays using an implementation of the Um and Thurber ray pseudo-bending approach, with full enforcement of Snell's Law in 3D at the major discontinuities. Due to the highly non-linear nature of the ray tracer, however, we are forced to substantially damp the inversion in order to converge on a reasonable model. Unfortunately the amount of damping is not known a priori and can significantly extend the number of calls of the computationally expensive ray-tracer and the least squares matrix solver. If the damping term is too small the solution step-size produces either an unrealistic model velocity change or places the solution in or near a local minimum from which extrication is nearly impossible. If the damping term is too large, convergence can be very slow or premature convergence can occur. Standard approaches involve running inversions with a suite of damping parameters to find the best model. A better solution methodology is to take advantage of existing non-linear solution techniques such as Levenberg-Marquardt (LM) or quasi-Newton iterative solvers. In particular, the LM algorithm was specifically designed to find the minimum of a multi-variate function that is expressed as the sum of squares of non-linear real-valued functions. It has become a standard technique for solving non-linear least squares problems, and is widely adopted in a broad spectrum of disciplines, including the geosciences. At each iteration, the LM approach dynamically varies the level of damping to optimize convergence. When the current estimate of the solution is far from the ultimate solution, LM behaves as a steepest descent method, but transitions to Gauss-Newton behavior, with near quadratic convergence, as the estimate approaches the final solution. We show typical linear solution techniques and how they can lead to local minima if the damping is set too low. We also describe the LM technique and show how it automatically determines the appropriate damping factor as it iteratively converges on the best solution. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
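
    A compact sketch of the Levenberg-Marquardt damping adaptation described above, applied to an invented exponential-fitting problem rather than to the travel-time tomography system; the accept/reject rule on the damping term is the essential ingredient.

      import numpy as np

      # Toy non-linear least-squares problem: fit y = a * exp(b * t) to noisy data.
      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 2.0, 40)
      y = 2.0 * np.exp(-1.3 * t) + 0.02 * rng.normal(size=t.size)

      def residual(p):
          return p[0] * np.exp(p[1] * t) - y

      def jacobian(p):
          return np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])

      p, lam = np.array([1.0, -0.5]), 1e-2         # initial model and damping term
      for _ in range(50):
          r, J = residual(p), jacobian(p)
          step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
          if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
              p, lam = p + step, lam / 3           # accept: reduce damping (Gauss-Newton-like)
          else:
              lam *= 3                             # reject: increase damping (steepest-descent-like)
      print("estimated (a, b):", np.round(p, 3))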

  5. Mixed linear-nonlinear fault slip inversion: Bayesian inference of model, weighting, and smoothing parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, J.; Johnson, K. M.

    2009-12-01

    Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is not strong a priori information on the geometry of the fault that produced the earthquake and the data is not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through combination of both analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress drop.

  6. Probability density of spatially distributed soil moisture inferred from crosshole georadar traveltime measurements

    NASA Astrophysics Data System (ADS)

    Linde, N.; Vrugt, J. A.

    2009-04-01

    Geophysical models are increasingly used in hydrological simulations and inversions, where they are typically treated as an artificial data source with known uncorrelated "data errors". The model appraisal problem in classical deterministic linear and non-linear inversion approaches based on linearization is often addressed by calculating model resolution and model covariance matrices. These measures offer only a limited potential to assign a more appropriate "data covariance matrix" for future hydrological applications, simply because the regularization operators used to construct a stable inverse solution bear a strong imprint on such estimates and because the non-linearity of the geophysical inverse problem is not explored. We present a parallelized Markov Chain Monte Carlo (MCMC) scheme to efficiently derive the posterior spatially distributed radar slowness and water content between boreholes given first-arrival traveltimes. This method is called DiffeRential Evolution Adaptive Metropolis (DREAM_ZS) with snooker updater and sampling from past states. Our inverse scheme does not impose any smoothness on the final solution, and uses uniform prior ranges of the parameters. The posterior distribution of radar slowness is converted into spatially distributed soil moisture values using a petrophysical relationship. To benchmark the performance of DREAM_ZS, we first apply our inverse method to a synthetic two-dimensional infiltration experiment using 9421 traveltimes contaminated with Gaussian errors and 80 different model parameters, corresponding to a model discretization of 0.3 m × 0.3 m. After this, the method is applied to field data acquired in the vadose zone during snowmelt. This work demonstrates that fully non-linear stochastic inversion can be applied with few limiting assumptions to a range of common two-dimensional tomographic geophysical problems. The main advantage of DREAM_ZS is that it provides a full view of the posterior distribution of spatially distributed soil moisture, which is key to appropriately treat geophysical parameter uncertainty and infer hydrologic models.

  7. A note on convergence of solutions of total variation regularized linear inverse problems

    NASA Astrophysics Data System (ADS)

    Iglesias, José A.; Mercier, Gwenael; Scherzer, Otmar

    2018-05-01

    In a recent paper by Chambolle et al (2017 Inverse Problems 33 015002) it was proven that if the subgradient of the total variation at the noise-free data is not empty, the level-sets of the total variation denoised solutions converge to the level-sets of the noise-free data with respect to the Hausdorff distance. The condition on the subgradient corresponds to the source condition introduced by Burger and Osher (2007 Multiscale Model. Simul. 6 365–95), who proved convergence rates results with respect to the Bregman distance under this condition. We generalize the result of Chambolle et al to total variation regularization of general linear inverse problems under such a source condition. As particular applications we present denoising in bounded and unbounded, convex and non-convex domains, deblurring and inversion of the circular Radon transform. In all these examples the convergence result applies. Moreover, we illustrate the convergence behavior through numerical examples.
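
    A small sketch, with an invented piecewise-constant signal and a smoothed TV term, of the kind of total variation regularized denoising whose level-set convergence is analysed above; it illustrates the problem class, not the paper's proofs.

      import numpy as np
      from scipy.optimize import minimize

      # Piecewise-constant signal plus noise; TV regularization should recover flat levels
      # separated by a sharp jump (level sets close to those of the noise-free data).
      rng = np.random.default_rng(4)
      f_clean = np.where(np.arange(100) < 50, 1.0, 3.0)
      f = f_clean + 0.3 * rng.normal(size=100)
      lam, eps = 2.0, 1e-6                         # regularization weight, TV smoothing

      def objective(u):
          tv = np.sum(np.sqrt(np.diff(u) ** 2 + eps))   # smoothed total variation
          return 0.5 * np.sum((u - f) ** 2) + lam * tv

      u_est = minimize(objective, f, method="L-BFGS-B").x
      print("recovered levels:", u_est[:50].mean().round(2), u_est[50:].mean().round(2))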

  8. The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method

    NASA Astrophysics Data System (ADS)

    Voronina, T. A.; Romanenko, A. A.

    2016-12-01

    Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. This methodology is based on the inversion of remote measurements of water-level data. The wave propagation is considered within the scope of linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least-squares inversion using the truncated Singular Value Decomposition method. As a result of the numerical process, an r-solution is obtained. The method proposed allows one to control the instability of the numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. Applying this methodology to reconstruct the initial waveform of the 2013 Solomon Islands tsunami confirms the conclusions drawn for synthetic data and a model tsunami source: the inversion result depends strongly on the noisiness of the data and on the azimuthal and temporal coverage of the recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of the available recording stations used in the inversion process.
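
    The truncated-SVD regularization underlying the r-solution can be sketched as follows, assuming a generic smoothing-kernel operator in place of the shallow-water propagation operator used in the paper; the truncation level r controls the instability noted above.

      import numpy as np

      rng = np.random.default_rng(5)
      # Hypothetical ill-posed linear system A x = b: a smoothing kernel stands in for
      # the mapping from initial waveform to recorded water-level data.
      i = np.arange(80)[:, None]
      j = np.arange(60)[None, :]
      A = np.exp(-0.5 * ((i * 60.0 / 80.0 - j) / 3.0) ** 2)
      x_true = np.exp(-0.5 * ((np.arange(60) - 30.0) / 5.0) ** 2)
      b = A @ x_true + 1e-4 * rng.normal(size=80)

      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      for r in (5, 20, 60):
          # Keep only the r largest singular values: the "r-solution".
          x_r = Vt[:r].T @ ((U[:, :r].T @ b) / s[:r])
          print(f"r={r:2d}  reconstruction error {np.linalg.norm(x_r - x_true):.3f}")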

  9. Scarp degraded by linear diffusion: inverse solution for age.

    USGS Publications Warehouse

    Andrews, D.J.; Hanks, T.C.

    1985-01-01

    Under the assumption that landforms unaffected by drainage channels are degraded according to the linear diffusion equation, a procedure is developed to invert a scarp profile to find its 'diffusion age'. The inverse procedure applied to synthetic data yields the following rules of thumb. Evidence of initial scarp shape has been lost when apparent age reaches twice its initial value. A scarp that appears to have been formed by one event may have been formed by two with an interval between them as large as apparent age. The simplicity of scarp profile measurement and this inversion makes profile analysis attractive. -from Authors
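
    A sketch of the forward relation implied above and a least-squares recovery of the diffusion age, with invented numbers: under linear diffusion an initially vertical scarp of half-height a has elevation a*erf(x / (2*sqrt(kt))), so the diffusion age kt can be fitted directly to a measured profile.

      import numpy as np
      from scipy.special import erf
      from scipy.optimize import curve_fit

      def scarp_profile(x, a, kt):
          # Elevation of an initially vertical scarp of half-height a after diffusion age kt.
          return a * erf(x / (2.0 * np.sqrt(kt)))

      # Synthetic "measured" profile with a true diffusion age of 12 m^2.
      rng = np.random.default_rng(6)
      x = np.linspace(-30.0, 30.0, 61)
      z_obs = scarp_profile(x, 1.5, 12.0) + 0.05 * rng.normal(size=x.size)

      (a_est, kt_est), _ = curve_fit(scarp_profile, x, z_obs, p0=(1.0, 5.0),
                                     bounds=([0.0, 1e-3], [np.inf, np.inf]))
      print(f"half-height {a_est:.2f} m, diffusion age kt = {kt_est:.1f} m^2")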

  10. Integrated Analytic and Linearized Inverse Kinematics for Precise Full Body Interactions

    NASA Astrophysics Data System (ADS)

    Boulic, Ronan; Raunhardt, Daniel

    Despite the large success of games grounded in movement-based interactions, the current state of full-body motion capture technology still prevents the exploitation of precise interactions with complex environments. This paper focuses on ensuring a precise spatial correspondence between the user and the avatar. We build upon our past effort in human postural control with a Prioritized Inverse Kinematics framework. One of its key advantages is that it eases the dynamic combination of postural and collision avoidance constraints. However, its reliance on a linearized approximation of the problem makes it vulnerable to the well-known full-extension singularity of the limbs. In such a context the tracking performance is reduced and/or less believable intermediate postural solutions are produced. We address this issue by introducing a new type of analytic constraint that smoothly integrates within the prioritized Inverse Kinematics framework. The paper first recalls the background of full-body 3D interactions and the advantages and drawbacks of the linearized IK solution. Then the Flexion-EXTension constraint (FLEXT in short) is introduced for the partial position control of limb-like articulated structures. Comparative results illustrate the interest of this new type of integrated analytical and linearized IK control.

  11. Steady induction effects in geomagnetism. Part 1B: Geomagnetic estimation of steady surficial core motions: A non-linear inverse problem

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1993-01-01

    The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are either complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.

  12. Inversion Of Jacobian Matrix For Robot Manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Report discusses inversion of the Jacobian matrix for a class of six-degree-of-freedom arms with a spherical wrist, i.e., with the last three joints intersecting. Shows that, by taking advantage of the simple geometry of such arms, the closed-form solution of Q = J^-1 X, which represents the linear transformation from task space to joint space, is obtained efficiently. Presents solutions for the PUMA arm, the JPL/Stanford arm, and a six-revolute-joint coplanar arm, along with all singular points. The main contribution of the paper is to show that the simple geometry of this type of arm can be exploited to perform the inverse transformation without any need to compute the Jacobian or its inverse explicitly. The implication of this computational efficiency is that advanced task-space control schemes for spherical-wrist arms can be implemented more efficiently.

  13. Easy way to determine quantitative spatial resolution distribution for a general inverse problem

    NASA Astrophysics Data System (ADS)

    An, M.; Feng, M.

    2013-12-01

    Computing the spatial resolution of a solution is nontrivial and often more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic studies, almost uniformly neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be given indicatively via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operation. However, not all resolution matrices can provide resolution length information, and the computation of the resolution matrix is often a difficult problem for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be directly determined via a simple one-parameter nonlinear inversion performed on limited pairs of random synthetic models and their inverse solutions. The whole procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the inversion technique used to obtain the solution. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.

  14. The incomplete inverse and its applications to the linear least squares problem

    NASA Technical Reports Server (NTRS)

    Morduch, G. E.

    1977-01-01

    A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It is proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least-squares solution. An answer is provided to the problem that occurs when the data residuals are too large and when insufficient data are available to justify augmenting the model.

  15. Computer programs for the solution of systems of linear algebraic equations

    NASA Technical Reports Server (NTRS)

    Sequi, W. T.

    1973-01-01

    FORTRAN subprograms for the solution of systems of linear algebraic equations are described, listed, and evaluated in this report. Procedures considered are direct solution, iteration, and matrix inversion. Both in-core methods and those which utilize auxiliary data storage devices are considered. Some of the subroutines evaluated require the entire coefficient matrix to be in core, whereas others account for banding or sparseness of the system. General recommendations relative to equation solving are made, and on the basis of tests, specific subprograms are recommended.
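
    By way of illustration of the two families of procedures evaluated (direct solution and iteration), the sketch below solves the same banded test system both ways; modern library routines stand in here for the FORTRAN subprograms described in the report.

      import numpy as np
      from scipy.linalg import solve_banded
      from scipy.sparse import diags
      from scipy.sparse.linalg import cg

      # Tridiagonal (banded) test system A x = b.
      n = 1000
      main = 4.0 * np.ones(n)
      off = -1.0 * np.ones(n - 1)
      b = np.ones(n)

      # Direct solution: banded LU factorization.
      ab = np.zeros((3, n))
      ab[0, 1:] = off        # superdiagonal
      ab[1] = main           # main diagonal
      ab[2, :-1] = off       # subdiagonal
      x_direct = solve_banded((1, 1), ab, b)

      # Iterative solution: conjugate gradients on the sparse matrix.
      A = diags([off, main, off], [-1, 0, 1], format="csr")
      x_iter, info = cg(A, b)
      print("max |direct - iterative|:", np.abs(x_direct - x_iter).max())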

  16. Approximate non-linear multiparameter inversion for multicomponent single and double P-wave scattering in isotropic elastic media

    NASA Astrophysics Data System (ADS)

    Ouyang, Wei; Mao, Weijian

    2018-03-01

    An asymptotic quadratic true-amplitude inversion method for isotropic elastic P waves is proposed to invert medium parameters. The multicomponent P-wave scattered wavefield is computed based on a forward relationship using second-order Born approximation and corresponding high-frequency ray theoretical methods. Within the local double scattering mechanism, the P-wave transmission factors are elaborately calculated, which results in the radiation pattern for P-waves scattering being a quadratic combination of the density and Lamé's moduli perturbation parameters. We further express the elastic P-wave scattered wavefield in a form of generalized Radon transform (GRT). After introducing classical backprojection operators, we obtain an approximate solution of the inverse problem by solving a quadratic non-linear system. Numerical tests with synthetic data computed by finite-differences scheme demonstrate that our quadratic inversion can accurately invert perturbation parameters for strong perturbations, compared with the P-wave single-scattering linear inversion method. Although our inversion strategy here is only syncretized with P-wave scattering, it can be extended to invert multicomponent elastic data containing both P-wave and S-wave information.

  17. Support Minimized Inversion of Acoustic and Elastic Wave Scattering

    NASA Astrophysics Data System (ADS)

    Safaeinili, Ali

    Inversion of limited data is common in many areas of NDE such as X-ray Computed Tomography (CT), Ultrasonic and eddy current flaw characterization and imaging. In many applications, it is common to have a bias toward a solution with minimum (L^2)^2 norm without any physical justification. When it is a priori known that objects are compact as, say, with cracks and voids, by choosing "Minimum Support" functional instead of the minimum (L^2)^2 norm, an image can be obtained that is equally in agreement with the available data, while it is more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency in imaging process. The results indicate that by using minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of the pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without the knowledge of the zero-of-time. The main drawback to this type of inversion is its computer intensiveness. In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of theoretical formulation of the scattering process for better computation efficiency, and (3) development of better methods for guiding the non-linear inversion. (Abstract shortened by UMI.).

  18. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. A parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.

  19. A comparative study of minimum norm inverse methods for MEG imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leahy, R.M.; Mosher, J.C.; Phillips, J.W.

    1996-07-01

    The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
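
    As a rough sketch of this class of methods (with an invented lead field, and a FOCUSS-like reweighting standing in for the iteratively reweighted method mentioned, not any specific algorithm from the paper), Tikhonov-regularized weighted minimum-norm estimates can be written in a few lines:

      import numpy as np

      rng = np.random.default_rng(7)
      L = rng.normal(size=(64, 300))               # hypothetical lead field: 64 sensors, 300 sources
      j_true = np.zeros(300)
      j_true[100:105] = 1.0                        # focal source patch
      b = L @ j_true + 0.05 * rng.normal(size=64)
      lam = 1.0                                    # Tikhonov regularization parameter

      # Iteratively reweighted minimum norm: each pass solves a weighted, regularized
      # minimum-norm problem, with weights taken from the previous solution.
      w = np.ones(300)
      for _ in range(5):
          Lw = L * w                               # column-weighted lead field
          j = w * (Lw.T @ np.linalg.solve(Lw @ Lw.T + lam**2 * np.eye(64), b))
          w = np.sqrt(np.abs(j)) + 1e-6            # re-weight toward focal solutions
      print("fraction of solution energy in the true patch:",
            round(np.linalg.norm(j[100:105]) / np.linalg.norm(j), 3))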

  20. Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media

    NASA Astrophysics Data System (ADS)

    Jakobsen, Morten; Tveit, Svenn

    2018-05-01

    We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, which is based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which there already exist efficient (regularized) solution methods. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the Distorted Born Iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which could be an advantage when performing fast-repeat modelling and time-lapse inversion. The T-matrix is also compatible with the use of modern renormalization methods that can potentially help us to reduce the sensitivity of the CSEM inversion results to the starting model. To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data associated with 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBIT method for inversion of CSEM data in anisotropic media is both accurate and efficient.

  1. Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow

    NASA Astrophysics Data System (ADS)

    Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar

    2014-09-01

    We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially-varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. Adjoint (linearized) Stokes equations, which are characterized by a 4th order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton's method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially-varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.

  2. Calculation of earthquake rupture histories using a hybrid global search algorithm: Application to the 1992 Landers, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Liu, P.

    1996-01-01

    A method is presented for the simultaneous calculation of slip amplitudes and rupture times for a finite fault using a hybrid global search algorithm. The method we use combines simulated annealing with the downhill simplex method to produce a more efficient search algorithm than either of the two constituent parts. This formulation has advantages over traditional iterative or linearized approaches to the problem because it is able to escape local minima in its search through model space for the global optimum. We apply this global search method to the calculation of the rupture history for the Landers, California, earthquake. The rupture is modeled using three separate finite-fault planes to represent the three main fault segments that failed during this earthquake. Both the slip amplitude and the time of slip are calculated for a grid work of subfaults. The data used consist of digital, teleseismic P and SH body waves. Long-period, broadband, and short-period records are utilized to obtain a wideband characterization of the source. The results of the global search inversion are compared with a more traditional linear-least-squares inversion for only slip amplitudes. We use a multi-time-window linear analysis to relax the constraints on rupture time and rise time in the least-squares inversion. Both inversions produce similar slip distributions, although the linear-least-squares solution has a 10% larger moment (7.3 × 10^26 dyne-cm compared with 6.6 × 10^26 dyne-cm). Both inversions fit the data equally well and point out the importance of (1) using a parameterization with sufficient spatial and temporal flexibility to encompass likely complexities in the rupture process, (2) including suitable physically based constraints on the inversion to reduce instabilities in the solution, and (3) focusing on those robust rupture characteristics that rise above the details of the parameterization and data set.
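
    SciPy's basinhopping with a Nelder-Mead (downhill simplex) local minimizer is an off-the-shelf analogue of the hybrid global search described above; the sketch below uses a stand-in multimodal misfit function, not the authors' annealing/simplex code or waveform misfit.

```python
# Hedged analogue of a hybrid global search: stochastic global steps
# (basinhopping) combined with downhill-simplex (Nelder-Mead) local refinement.
# The objective is a made-up multimodal function standing in for a waveform
# misfit over slip amplitude x[0] and rupture time x[1].
import numpy as np
from scipy.optimize import basinhopping

def misfit(x):
    # Toy objective with several local minima.
    return (x[0] - 1.5)**2 + (x[1] - 0.7)**2 + 0.3 * np.sin(8 * x[0])**2

result = basinhopping(misfit, x0=np.zeros(2), niter=200,
                      minimizer_kwargs={"method": "Nelder-Mead"})
print("global optimum estimate:", result.x)
```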

  3. A trade-off solution between model resolution and covariance in surface-wave inversion

    USGS Publications Warehouse

    Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.

    2010-01-01

    Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
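
    The singular-value-based choice of a regularization parameter and the resulting resolution and covariance matrices can be sketched as follows; the matrix A is synthetic, and the threshold used to approximate "the first singular value that approaches zero" is an assumption.

```python
# Hedged sketch: pick a regularization parameter from the singular values and
# form damped resolution and unit covariance matrices. A is synthetic, not a
# surface-wave Jacobian; the 1e-3 cutoff is an illustrative choice.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(25, 15)) @ np.diag(np.logspace(0, -6, 15))  # ill-conditioned
U, s, Vt = np.linalg.svd(A, full_matrices=False)

mu = s[np.argmax(s / s[0] < 1e-3)]            # "first singular value near zero"
f = s**2 / (s**2 + mu**2)                     # damped (Tikhonov) filter factors

R = (Vt.T * f) @ Vt                           # model resolution matrix
C = (Vt.T * (f / s)**2) @ Vt                  # unit model covariance matrix
print("trace(R), ~number of resolved parameters:", np.trace(R))
print("largest diagonal covariance (error-bar proxy):", C.diagonal().max())
```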

  4. Numerical solution of inverse scattering for near-field optics.

    PubMed

    Bao, Gang; Li, Peijun

    2007-06-01

    A novel regularized recursive linearization method is developed for a two-dimensional inverse medium scattering problem that arises in near-field optics, which reconstructs the scatterer of an inhomogeneous medium located on a substrate from data accessible through photon scanning tunneling microscopy experiments. Based on multiple frequency scattering data, the method starts from the Born approximation corresponding to weak scattering at a low frequency, and each update is obtained by continuation on the wavenumber from solutions of one forward problem and one adjoint problem of the Helmholtz equation.

  5. Astrophysical masers - Inverse methods, precision, resolution and uniqueness

    NASA Astrophysics Data System (ADS)

    Lerche, I.

    1986-07-01

    The paper provides exact analytic solutions to the two-level, steady-state, maser problem in parametric form, with the emergent intensities expressed in terms of the incident intensities and with the maser length also given in terms of an integral over the intensities. It is shown that some assumption must be made on the emergent intensity on the nonobservable side of the astrophysical maser in order to obtain any inversion of the equations. The incident intensities can then be expressed in terms of the emergent, observable, flux. It is also shown that the inversion is nonunique unless a homogeneous linear integral equation has only a null solution. Constraints imposed by knowledge of the physical length of the maser are felt in a nonlinear manner by the parametric variable and do not appear to provide any substantive additional information to reduce the degree of nonuniqueness of the inverse solutions. It is concluded that the questions of precision, resolution and uniqueness for solutions to astrophysical maser problems will remain more of an emotional art than a logical science for some time to come.

  6. Forward and inverse solutions for Risley prism based on the Denavit-Hartenberg methodology

    NASA Astrophysics Data System (ADS)

    Beltran-Gonzalez, A.; Garcia-Torales, G.; Strojnik, M.; Flores, J. L.; Garcia-Luna, J. L.

    2017-08-01

    In this work, forward and inverse solutions for a two-element Risley prism for beam pointing and scanning systems are developed. A more efficient and faster algorithm is proposed by drawing an analogy between the Risley prism system and a robotic system with two degrees of freedom. This system of equations controls each Risley prism individually as a planar manipulator arm with two links. In order to evaluate the algorithm we implement it in a pointing system. We perform popular routines such as linear, spiral and loop traces. Using the forward and inverse solutions for the two-element Risley prism it is also possible to point at coordinates specified by the user, provided they are within the pointer's work area. Experimental results are shown as a validation of our proposal.

  7. The general linear inverse problem - Implication of surface waves and free oscillations for earth structure.

    NASA Technical Reports Server (NTRS)

    Wiggins, R. A.

    1972-01-01

    The discrete general linear inverse problem reduces to a set of m equations in n unknowns. There is generally no unique solution, but we can find k linear combinations of parameters for which restraints are determined. The parameter combinations are given by the eigenvectors of the coefficient matrix. The number k is determined by the ratio of the standard deviations of the observations to the allowable standard deviations in the resulting solution. Various linear combinations of the eigenvectors can be used to determine parameter resolution and information distribution among the observations. Thus we can determine where information comes from among the observations and exactly how it constrains the set of possible models. The application of such analyses to surface-wave and free-oscillation observations indicates that (1) phase, group, and amplitude observations for any particular mode provide basically the same type of information about the model; (2) observations of overtones can enhance the resolution considerably; and (3) the degree of resolution has generally been overestimated for many model determinations made from surface waves.
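
    A compact numerical analogue of this eigenvector analysis, using a truncated SVD of a synthetic coefficient matrix, is sketched below; the noise-to-solution standard-deviation ratio used to choose k is an assumed value.

```python
# Hedged illustration: a truncated SVD gives k resolvable parameter
# combinations, a model resolution matrix, and an "information density"
# matrix showing how observations are used. The m x n system is synthetic.
import numpy as np

rng = np.random.default_rng(3)
Gmat = rng.normal(size=(12, 8)) @ np.diag(np.logspace(0, -5, 8))
U, s, Vt = np.linalg.svd(Gmat, full_matrices=False)

sigma_data, sigma_model = 0.01, 1.0             # assumed standard deviations
k = int(np.sum(s > sigma_data / sigma_model))   # resolvable combinations

Vk, Uk = Vt[:k].T, U[:, :k]
R = Vk @ Vk.T     # model resolution: how parameter combinations are smeared
N = Uk @ Uk.T     # information density: where information comes from
print("k resolvable combinations:", k)
print("information carried by each observation:", np.round(N.diagonal(), 2))
```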

  8. An approximation theory for the identification of linear thermoelastic systems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.; Su, Chien-Hua Frank

    1990-01-01

    An abstract approximation framework and convergence theory for the identification of thermoelastic systems is developed. Starting from an abstract operator formulation consisting of a coupled second order hyperbolic equation of elasticity and first order parabolic equation for heat conduction, well-posedness is established using linear semigroup theory in Hilbert space, and a class of parameter estimation problems is then defined involving mild solutions. The approximation framework is based upon generic Galerkin approximation of the mild solutions, and convergence of solutions of the resulting sequence of approximating finite dimensional parameter identification problems to a solution of the original infinite dimensional inverse problem is established using approximation results for operator semigroups. An example involving the basic equations of one dimensional linear thermoelasticity and a linear spline based scheme are discussed. Numerical results indicate how the approach might be used in a study of damping mechanisms in flexible structures.

  9. Spatial operator factorization and inversion of the manipulator mass matrix

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo; Kreutz-Delgado, Kenneth

    1992-01-01

    This paper advances two linear operator factorizations of the manipulator mass matrix. Embedded in the factorizations are many of the techniques that are regarded as very efficient computational solutions to inverse and forward dynamics problems. The operator factorizations provide a high-level architectural understanding of the mass matrix and its inverse, which is not visible in the detailed algorithms. They also lead to a new approach to the development of computer programs that organize complexity in robot dynamics.

  10. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data balancing normalization weights for the joint inversion of two or more data sets encourages the inversion to fit each data type equally well. A synthetic joint inversion of marine CSEM and MT data illustrates the algorithm's performance and parallel scaling on up to 480 processing cores. CSEM inversion of data from the Middle America Trench offshore Nicaragua demonstrates a real world application. The source code and MATLAB interface tools are freely available at http://mare2dem.ucsd.edu.

  11. Reconstructing source terms from atmospheric concentration measurements: Optimality analysis of an inversion technique

    NASA Astrophysics Data System (ADS)

    Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre

    2014-12-01

    In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
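
    The discrete minimum weighted-norm estimate mentioned above can be sketched with a generalized-inverse formula; the footprint matrix H and the weight field w below are invented stand-ins and do not satisfy the paper's renormalization condition exactly.

```python
# Hedged sketch: minimum weighted-norm solution via a generalized inverse,
# in the spirit of the discrete renormalized estimate. H, w, and the source
# are illustrative placeholders, not the wind-tunnel configuration.
import numpy as np

rng = np.random.default_rng(4)
n_obs, n_cells = 10, 200
H = np.abs(rng.normal(size=(n_obs, n_cells)))   # toy adjoint/footprint functions
x_true = np.zeros(n_cells); x_true[57] = 5.0    # single point source
mu = H @ x_true                                  # measurements (noise-free here)

w = H.sum(axis=0)            # a simple positive weight field (assumption; the
w /= w.sum()                 # renormalization condition fixes w in the paper)
Winv = np.diag(1.0 / w)

# Minimum weighted-norm solution: x = W^-1 H^T (H W^-1 H^T)^-1 mu
x_hat = Winv @ H.T @ np.linalg.solve(H @ Winv @ H.T, mu)
print("cell with largest estimated source strength:", int(np.argmax(x_hat)))
```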

  12. Nonlinear Waves and Inverse Scattering

    DTIC Science & Technology

    1990-09-18

    to be published Proceedings: conference Chaos in Australia (February 1990). 5. On the Kadomtsev-Petviashvili Equation and Associated Constraints by ... Scattering Transform (IST). IST is a method which allows one to solve nonlinear wave equations by solving certain related direct and inverse scattering problems. We use these results to find solutions to nonlinear wave equations much like one uses Fourier analysis for linear problems. Moreover the

  13. Complex transition metal hydrides: linear correlation of countercation electronegativity versus T-D bond lengths.

    PubMed

    Humphries, T D; Sheppard, D A; Buckley, C E

    2015-06-30

    For homoleptic 18-electron complex hydrides, an inverse linear correlation has been established between the T-deuterium bond length (T = Fe, Co, Ni) and the average electronegativity of the metal countercations. This relationship can be further employed towards aiding structural solutions and predicting physical properties of novel complex transition metal hydrides.

  14. 3-D linear inversion of gravity data: method and application to Basse-Terre volcanic island, Guadeloupe, Lesser Antilles

    NASA Astrophysics Data System (ADS)

    Barnoud, Anne; Coutant, Olivier; Bouligand, Claire; Gunawan, Hendra; Deroussi, Sébastien

    2016-04-01

    We use a Bayesian formalism combined with a grid node discretization for the linear inversion of gravimetric data in terms of 3-D density distribution. The forward modelling and the inversion method are derived from seismological inversion techniques in order to facilitate joint inversion or interpretation of density and seismic velocity models. The Bayesian formulation introduces covariance matrices on model parameters to regularize the ill-posed problem and reduce the non-uniqueness of the solution. This formalism favours smooth solutions and allows us to specify a spatial correlation length and to perform inversions at multiple scales. We also extract resolution parameters from the resolution matrix to discuss how well our density models are resolved. This method is applied to the inversion of data from the volcanic island of Basse-Terre in Guadeloupe, Lesser Antilles. A series of synthetic tests are performed to investigate advantages and limitations of the methodology in this context. This study results in the first 3-D density models of the island of Basse-Terre for which we identify: (i) a southward decrease of densities parallel to the migration of volcanic activity within the island, (ii) three dense anomalies beneath Petite Plaine Valley, Beaugendre Valley and the Grande-Découverte-Carmichaël-Soufrière Complex that may reflect the trace of former major volcanic feeding systems, (iii) shallow low-density anomalies in the southern part of Basse-Terre, especially around La Soufrière active volcano, Piton de Bouillante edifice and along the western coast, reflecting the presence of hydrothermal systems and fractured and altered rocks.
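
    A minimal sketch of the Bayesian linear inversion formulas with a correlation-length-based model covariance is given below, in the spirit of the formalism described above; the 1-D geometry, kernel and hyperparameters are invented for illustration.

```python
# Hedged sketch: Bayesian linear inversion with an exponential model covariance
# built from a spatial correlation length. The gravity kernel G, data, and all
# hyperparameters are synthetic stand-ins, not the Basse-Terre setup.
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0.0, 10.0, 60)                 # toy 1-D density grid (km)
G = rng.normal(size=(20, x.size)) / x.size     # stand-in gravity kernel
d_obs = rng.normal(size=20)                    # stand-in gravity anomalies

sigma_d, sigma_m, L = 0.05, 100.0, 2.0         # data error, model sd, correlation length
Cd = sigma_d**2 * np.eye(20)
Cm = sigma_m**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / L)  # smooth prior

m_prior = np.zeros(x.size)
K = G @ Cm @ G.T + Cd
m_post = m_prior + Cm @ G.T @ np.linalg.solve(K, d_obs - G @ m_prior)
Cm_post = Cm - Cm @ G.T @ np.linalg.solve(K, G @ Cm)
R = np.eye(x.size) - Cm_post @ np.linalg.inv(Cm)   # resolution-matrix proxy
print("posterior standard deviation at first node:", np.sqrt(Cm_post[0, 0]))
print("trace of resolution matrix:", np.trace(R))
```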

  15. Solving geosteering inverse problems by stochastic Hybrid Monte Carlo method

    DOE PAGES

    Shen, Qiuyang; Wu, Xuqing; Chen, Jiefu; ...

    2017-11-20

    Inverse problems arise in almost all fields of science where real-world parameters are extracted from a set of measured data. The geosteering inversion plays an essential role in the accurate prediction of oncoming strata as well as a reliable guidance to adjust the borehole position on the fly to reach one or more geological targets. This mathematical problem is not easy to solve, as it requires finding an optimum solution in a large solution space, especially when the problem is non-linear and non-convex. Nowadays, a new generation of logging-while-drilling (LWD) tools has emerged on the market. The so-called azimuthal resistivity LWD tools have azimuthal sensitivity and a large depth of investigation. Hence, the associated inverse problems become much more difficult since the earth model to be inverted will have more detailed structures. The conventional deterministic methods are incapable of solving such a complicated inverse problem, as they suffer from local minimum traps. Alternatively, stochastic optimizations are in general better at finding global optimal solutions and handling uncertainty quantification. In this article, we investigate the Hybrid Monte Carlo (HMC) based statistical inversion approach and suggest that HMC based inference is more efficient in dealing with the increased complexity and uncertainty faced by the geosteering problems.

  16. Graph-cut based discrete-valued image reconstruction.

    PubMed

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.

  17. The Form of the Solutions of the Linear Integro-Differential Equations of Subsonic Aeroelasticity.

    DTIC Science & Technology

    1979-09-01

    coefficients w_v(0) are given in Table 3; it follows that, for T > 0 and (E - K_v^2) non-singular, the inverse transform of M^(-1) has the form, using (B-1) ... degree of freedom system by expanding M^(-1) in the form of equation (35), obtaining its inverse transform using the results of Appendix A and hence ... obtaining the inverse transform of M^(-1). The two-dimensional case, when the characteristic equation has a zero root, is not as simple. Assuming all

  18. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and model parameters are often numerous, conventional methods for solving the inverse modeling problem can be computationally expensive. We have developed a new, computationally-efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by using these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. By comparing with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
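
    The core idea, projecting the Levenberg-Marquardt normal equations onto a Krylov subspace that is built once and then recycled across damping parameters, can be sketched as follows; the Jacobian and residual are synthetic, and this is not the Julia/MADS implementation.

```python
# Hedged sketch: project the LM normal equations onto a Krylov subspace built
# once from (J^T J, J^T r), then reuse that subspace for every damping value.
# J and r are random stand-ins for a Jacobian and residual vector.
import numpy as np

rng = np.random.default_rng(6)
n_obs, n_par, k = 500, 2000, 30
J = rng.normal(size=(n_obs, n_par)) / np.sqrt(n_obs)   # toy Jacobian
r = rng.normal(size=n_obs)                              # toy residual

# Orthonormal Krylov basis Q for K_k(J^T J, J^T r), with full re-orthogonalization.
g = J.T @ r
Q = np.zeros((n_par, k))
Q[:, 0] = g / np.linalg.norm(g)
for j in range(1, k):
    v = J.T @ (J @ Q[:, j - 1])
    v -= Q[:, :j] @ (Q[:, :j].T @ v)     # orthogonalize against previous vectors
    Q[:, j] = v / np.linalg.norm(v)

# Small projected operators, computed once and recycled for all damping values.
JQ = J @ Q                      # n_obs x k
H_small = JQ.T @ JQ             # projected Gauss-Newton Hessian (k x k)
g_small = JQ.T @ r              # projected gradient

for lam in (1e-2, 1e-1, 1.0, 10.0):      # LM damping sweep reuses the subspace
    y = np.linalg.solve(H_small + lam * np.eye(k), g_small)
    dx = Q @ y                           # parameter update in the full space
    print(f"lambda={lam:6.2f}  |dx|={np.linalg.norm(dx):.4f}")
```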

  19. An Efficient Spectral Method for Ordinary Differential Equations with Rational Function Coefficients

    NASA Technical Reports Server (NTRS)

    Coutsias, Evangelos A.; Torres, David; Hagstrom, Thomas

    1994-01-01

    We present some relations that allow the efficient approximate inversion of linear differential operators with rational function coefficients. We employ expansions in terms of a large class of orthogonal polynomial families, including all the classical orthogonal polynomials. These families obey a simple three-term recurrence relation for differentiation, which implies that on an appropriately restricted domain the differentiation operator has a unique banded inverse. The inverse is an integration operator for the family, and it is simply the tridiagonal coefficient matrix for the recurrence. Since in these families convolution operators (i.e. matrix representations of multiplication by a function) are banded for polynomials, we are able to obtain a banded representation for linear differential operators with rational coefficients. This leads to a method of solution of initial or boundary value problems that, besides having an operation count that scales linearly with the order of truncation N, is computationally well conditioned. Among the applications considered is the use of rational maps for the resolution of sharp interior layers.
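
    For one classical family (Chebyshev), the banded structure of the integration operator can be checked directly with NumPy; the sketch below builds the coefficient-space integration matrix column by column.

```python
# Hedged illustration of the banded-integration idea for Chebyshev polynomials:
# the matrix mapping the coefficients of p to the coefficients of its
# antiderivative is tridiagonal apart from the constant (first) row, which is
# fixed by the lower integration bound.
import numpy as np
from numpy.polynomial import chebyshev as C

N = 8
S = np.zeros((N + 1, N))
for j in range(N):
    e = np.zeros(N); e[j] = 1.0          # coefficients of T_j
    S[:, j] = C.chebint(e, lbnd=0.0)     # coefficients of its antiderivative
print(np.round(S, 3))                    # nonzeros cluster near the diagonal
```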

  20. Guaranteed estimation of solutions to Helmholtz transmission problems with uncertain data from their indirect noisy observations

    NASA Astrophysics Data System (ADS)

    Podlipenko, Yu. K.; Shestopalov, Yu. V.

    2017-09-01

    We investigate the guaranteed estimation problem of linear functionals from solutions to transmission problems for the Helmholtz equation with inexact data. The right-hand sides of equations entering the statements of transmission problems and the statistical characteristics of observation errors are supposed to be unknown and belonging to certain sets. It is shown that the optimal linear mean square estimates of the above mentioned functionals and estimation errors are expressed via solutions to the systems of transmission problems of the special type. The results and techniques can be applied in the analysis and estimation of solution to forward and inverse electromagnetic and acoustic problems with uncertain data that arise in mathematical models of the wave diffraction on transparent bodies.

  1. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.

  2. Use of Linear Prediction Uncertainty Analysis to Guide Conditioning of Models Simulating Surface-Water/Groundwater Interactions

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; White, J.; Doherty, J.

    2011-12-01

    Linear prediction uncertainty analysis in a Bayesian framework was applied to guide the conditioning of an integrated surface water/groundwater model that will be used to predict the effects of groundwater withdrawals on surface-water and groundwater flows. Linear prediction uncertainty analysis is an effective approach for identifying (1) raw and processed data most effective for model conditioning prior to inversion, (2) specific observations and periods of time critically sensitive to specific predictions, and (3) additional observation data that would reduce model uncertainty relative to specific predictions. We present results for a two-dimensional groundwater model of a 2,186 km2 area of the Biscayne aquifer in south Florida implicitly coupled to a surface-water routing model of the actively managed canal system. The model domain includes 5 municipal well fields withdrawing more than 1 Mm3/day and 17 operable surface-water control structures that control freshwater releases from the Everglades and freshwater discharges to Biscayne Bay. More than 10 years of daily observation data from 35 groundwater wells and 24 surface water gages are available to condition model parameters. A dense parameterization was used to fully characterize the contribution of the inversion null space to predictive uncertainty and included bias-correction parameters. This approach allows better resolution of the boundary between the inversion null space and solution space. Bias-correction parameters (e.g., rainfall, potential evapotranspiration, and structure flow multipliers) absorb information that is present in structural noise that may otherwise contaminate the estimation of more physically-based model parameters. This allows greater precision in predictions that are entirely solution-space dependent, and reduces the propensity for bias in predictions that are not. Results show that application of this analysis is an effective means of identifying those surface-water and groundwater data, both raw and processed, that minimize predictive uncertainty, while simultaneously identifying the maximum solution-space dimensionality of the inverse problem supported by the data.

  3. Bayesian inversion of refraction seismic traveltime data

    NASA Astrophysics Data System (ADS)

    Ryberg, T.; Haberland, Ch

    2018-03-01

    We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when using the far-offset observations, are known for having experimental geometries which are very poor, highly ill-posed and far from ideal. As a consequence, the structural resolution quickly degrades with depth. Conventional inversion techniques, based on regularization, potentially suffer from the choice of appropriate inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and only local model space exploration. McMC techniques are used for exhaustive sampling of the model space without the need for prior knowledge (or assumptions) of inversion parameters, resulting in a large number of models fitting the observations. Statistical analysis of these models allows us to derive an average (reference) solution and its standard deviation, thus providing uncertainty estimates of the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not allow us to derive a reference solution and error map by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution in the posterior values due to poor ray coverage, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey from Northern Namibia and compared to conventional tomography. An inversion test for a synthetic data set from a known model is also presented.

  4. Colloidal inverse bicontinuous cubic membranes of block copolymers with tunable surface functional groups

    NASA Astrophysics Data System (ADS)

    La, Yunju; Park, Chiyoung; Shin, Tae Joo; Joo, Sang Hoon; Kang, Sebyung; Kim, Kyoung Taek

    2014-06-01

    Analogous to the complex membranes found in cellular organelles, such as the endoplasmic reticulum, the inverse cubic mesophases of lipids and their colloidal forms (cubosomes) possess internal networks of water channels arranged in crystalline order, which provide a unique nanospace for membrane-protein crystallization and guest encapsulation. Polymeric analogues of cubosomes formed by the direct self-assembly of block copolymers in solution could provide new polymeric mesoporous materials with a three-dimensionally organized internal maze of large water channels. Here we report the self-assembly of amphiphilic dendritic-linear block copolymers into polymer cubosomes in aqueous solution. The presence of precisely defined bulky dendritic blocks drives the block copolymers to form spontaneously highly curved bilayers in aqueous solution. This results in the formation of colloidal inverse bicontinuous cubic mesophases. The internal networks of water channels provide a high surface area with tunable surface functional groups that can serve as anchoring points for large guests such as proteins and enzymes.

  5. A multi-frequency iterative imaging method for discontinuous inverse medium problem

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Feng, Lixin

    2018-06-01

    The inverse medium problem with a discontinuous refractive index is a challenging kind of inverse problem. We employ the primal-dual theory and a fast solution of integral equations, and propose a new iterative imaging method. The selection criterion for the regularization parameter is given by the method of generalized cross-validation. Based on multi-frequency measurements of the scattered field, a recursive linearization algorithm is presented with respect to the frequency from low to high. We also discuss the initial guess selection strategy using semi-analytical approaches. Numerical experiments are presented to show the effectiveness of the proposed method.

  6. Predicting the Underwater Sound of Moderate and Heavy Rainfall from Laboratory Measurements of Radiation from Single Large Raindrops

    DTIC Science & Technology

    1992-03-01

    Elementary Linear Algebra with Applications, pp. 301-323, John Wiley and Sons Inc., 1987. Atlas, D., and Ulbrich, C. E. W., "The Physical Basis for ... vector d. In this case, the linear system is said to be inconsistent (Anton and Rorres, 1987). In contrast, for an underdetermined system (where the ... ocean acoustical tomography and seismology. In simplest terms, the general linear inverse problem consists of finding the desired solution to a set of m

  7. Extended resolvent and inverse scattering with an application to KPI

    NASA Astrophysics Data System (ADS)

    Boiti, M.; Pempinelli, F.; Pogrebkov, A. K.; Prinari, B.

    2003-08-01

    We present in detail an extended resolvent approach for investigating linear problems associated to 2+1 dimensional integrable equations. Our presentation is based as an example on the nonstationary Schrödinger equation with potential being a perturbation of the one-soliton potential by means of a decaying two-dimensional function. Modification of the inverse scattering theory as well as properties of the Jost solutions and spectral data as follows from the resolvent approach are given.

  8. Classifying bilinear differential equations by linear superposition principle

    NASA Astrophysics Data System (ADS)

    Zhang, Lijun; Khalique, Chaudry Masood; Ma, Wen-Xiu

    2016-09-01

    In this paper, we investigate the linear superposition principle of exponential traveling waves to construct a sub-class of N-wave solutions of Hirota bilinear equations. A necessary and sufficient condition for Hirota bilinear equations possessing this specific sub-class of N-wave solutions is presented. We apply this result to find N-wave solutions to the (2+1)-dimensional KP equation, a (3+1)-dimensional generalized Kadomtsev-Petviashvili (KP) equation, a (3+1)-dimensional generalized BKP equation and the (2+1)-dimensional BKP equation. The inverse question, i.e., constructing Hirota Bilinear equations possessing N-wave solutions, is considered and a refined 3-step algorithm is proposed. As examples, we construct two very general kinds of Hirota bilinear equations of order 4 possessing N-wave solutions among which one satisfies dispersion relation and another does not satisfy dispersion relation.

  9. Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1991-01-01

    We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric and shifted Hermitian linear systems, respectively. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
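
    As a small illustration of applying a Krylov solver to a complex symmetric (non-Hermitian) system, the sketch below builds a 1-D Helmholtz-like matrix with an absorbing term and solves it with SciPy's gmres; gmres is used only as a generic stand-in Krylov method, not the QMR variants studied in the thesis, and the discretization parameters are invented.

```python
# Hedged example: a complex symmetric (non-Hermitian) tridiagonal system from a
# 1-D Helmholtz-like discretization with a complex (absorbing) shift, solved
# with a generic Krylov method available in SciPy.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

n = 200
h, kwav = 1.0 / n, 20.0
main = (-2.0 / h**2 + kwav**2 + 1j * kwav) * np.ones(n)       # complex diagonal
off = (1.0 / h**2) * np.ones(n - 1, dtype=complex)
A = sp.diags([off, main, off], [-1, 0, 1], format="csr")      # symmetric, not Hermitian

b = np.zeros(n, dtype=complex)
b[n // 2] = 1.0                                               # point source
x, info = gmres(A, b)
print("gmres info:", info, " residual norm:", np.linalg.norm(A @ x - b))
```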

  10. Reversed inverse regression for the univariate linear calibration and its statistical properties derived using a new methodology

    NASA Astrophysics Data System (ADS)

    Kang, Pilsang; Koo, Changhoi; Roh, Hokyu

    2017-11-01

    Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.

  11. LQS_INVERSION v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiss, Chester J

    FORTRAN90 codes for inversion of electrostatic geophysical data in terms of three subsurface parameters in a single-well, oilfield environment: the linear charge density of the steel well casing (L), the point charge associated with an induced fracture filled with a conductive contrast agent (Q) and the location of said fracture (s). Theory is described in detail in Weiss et al. (Geophysics, 2016). Inversion strategy is to loop over candidate fracture locations, and at each one minimize the squared Cartesian norm of the data misfit to arrive at L and Q. Solution method is to construct the 2x2 linear system of normal equations and compute L and Q algebraically. Practical Application: Oilfield environments where observed electrostatic geophysical data can reasonably be assumed to follow a simple L-Q-s model. This may include hydrofracking operations, as postulated in Weiss et al. (2016), but no field validation examples have so far been provided.
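
    The described strategy, a loop over candidate fracture locations with a 2x2 least-squares solve for L and Q at each, can be sketched as follows; the basis-response functions are hypothetical stand-ins and not the electrostatic kernels of Weiss et al. (2016).

```python
# Hedged sketch of the L-Q-s inversion strategy: for each candidate fracture
# depth s, solve 2x2 normal equations for the casing line-charge density L and
# the fracture point charge Q, then keep the best-fitting s. The response
# functions g_casing and g_frac are hypothetical, NOT the published kernels.
import numpy as np

rng = np.random.default_rng(7)
z_obs = np.linspace(0.0, 100.0, 40)                  # hypothetical observation depths

def g_casing(z):                                     # response per unit L (assumed form)
    return 1.0 / (1.0 + 0.02 * z)

def g_frac(z, s):                                    # response per unit Q at depth s
    return 1.0 / np.sqrt(1.0 + (z - s)**2)

d_obs = (2.0 * g_casing(z_obs) + 0.5 * g_frac(z_obs, 63.0)
         + 0.01 * rng.normal(size=z_obs.size))

best = None
for s in np.linspace(10.0, 90.0, 81):                # candidate fracture depths
    A = np.column_stack([g_casing(z_obs), g_frac(z_obs, s)])
    L_hat, Q_hat = np.linalg.solve(A.T @ A, A.T @ d_obs)   # 2x2 normal equations
    misfit = np.linalg.norm(d_obs - A @ np.array([L_hat, Q_hat]))**2
    if best is None or misfit < best[0]:
        best = (misfit, s, L_hat, Q_hat)

print("best (misfit, s, L, Q):", np.round(best, 3))
```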

  12. Force sensing using 3D displacement measurements in linear elastic bodies

    NASA Astrophysics Data System (ADS)

    Feng, Xinzeng; Hui, Chung-Yuen

    2016-07-01

    In cell traction microscopy, the mechanical forces exerted by a cell on its environment are usually determined from experimentally measured displacements by solving an inverse problem in elasticity. In this paper, an innovative numerical method is proposed which finds the "optimal" traction for the inverse problem. When sufficient regularization is applied, we demonstrate that the proposed method significantly improves on the widely used approach based on Green's functions. Motivated by real cell experiments, the equilibrium condition of a slowly migrating cell is imposed as a set of equality constraints on the unknown traction. Our validation benchmarks demonstrate that the numerical solution to the constrained inverse problem recovers the actual traction well when the optimal regularization parameter is used. The proposed method can thus be applied to study general force sensing problems, which utilize displacement measurements to sense inaccessible forces in linear elastic bodies with a priori constraints.
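
    One way to read the constrained, regularized inverse problem is as a KKT system, sketched below with a toy forward operator and a single force-balance constraint; this is an assumption-laden illustration, not the paper's finite-element method.

```python
# Hedged sketch: Tikhonov-regularized traction recovery with an equality
# constraint (net force balance), solved via a KKT system. G is a random
# stand-in for the displacement-from-traction operator, not an elastic
# Green's function.
import numpy as np

rng = np.random.default_rng(8)
n_disp, n_trac = 120, 60
G = rng.normal(size=(n_disp, n_trac))            # toy map: traction -> displacement
f_true = rng.normal(size=n_trac)
f_true -= f_true.mean()                          # make the true traction balanced
u_obs = G @ f_true + 0.05 * rng.normal(size=n_disp)

lam = 1e-1                                        # regularization parameter
A_eq = np.ones((1, n_trac))                       # equilibrium: sum of tractions = 0

# KKT system for  min ||G f - u||^2 + lam ||f||^2  subject to  A_eq f = 0
H = G.T @ G + lam * np.eye(n_trac)
KKT = np.block([[H, A_eq.T], [A_eq, np.zeros((1, 1))]])
rhs = np.concatenate([G.T @ u_obs, np.zeros(1)])
f_hat = np.linalg.solve(KKT, rhs)[:n_trac]
print("constraint residual:", float(abs(A_eq @ f_hat)))
print("relative recovery error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```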

  13. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm= d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.

  14. SYNTHESIS OF NOVEL ALL-DIELECTRIC GRATING FILTERS USING GENETIC ALGORITHMS

    NASA Technical Reports Server (NTRS)

    Zuffada, Cinzia; Cwik, Tom; Ditchman, Christopher

    1997-01-01

    We are concerned with the design of inhomogeneous, all-dielectric (lossless) periodic structures which act as filters. Dielectric filters made as stacks of inhomogeneous gratings and layers of materials are being used in optical technology, but are not common at microwave frequencies. The problem is then finding the periodic cell's geometric configuration and permittivity values which correspond to a specified reflectivity/transmittivity response as a function of frequency/illumination angle. This type of design can be thought of as an inverse-source problem, since it entails finding a distribution of sources which produce fields (or quantities derived from them) of given characteristics. Electromagnetic sources (electric and magnetic current densities) in a volume are related to the outside fields by a well-known linear integral equation. Additionally, the sources are related to the fields inside the volume by a constitutive equation involving the material properties. Then, the relationship linking the fields outside the source region to those inside is non-linear, in terms of material properties such as permittivity, permeability and conductivity. The solution of the non-linear inverse problem is cast here as a combination of two linear steps, by explicitly introducing the electromagnetic sources in the computational volume as a set of unknowns in addition to the material unknowns. This allows us to solve for material parameters and related electric fields in the source volume which are consistent with Maxwell's equations. Solutions are obtained iteratively by decoupling the two steps. First, we invert for the permittivity only in the minimization of a cost function and second, given the materials, we find the corresponding electric fields through direct solution of the integral equation in the source volume. The sources thus computed are used to generate the far fields and the synthesized filter response. The cost function is obtained by calculating the deviation between the synthesized value of reflectivity/transmittivity and the desired one. Solution geometries for the periodic cell are sought as gratings (ensembles of columns of different heights and widths), or combinations of homogeneous layers of different dielectric materials and gratings. Hence the explicit unknowns of the inversion step are the material permittivities and the relative boundaries separating homogeneous parcels of the periodic cell.

  15. On the perturbation and subproper splittings for the generalized inverse A_{T,S}^(2) of a rectangular matrix A

    NASA Astrophysics Data System (ADS)

    Wei, Yimin; Wu, Hebing

    2001-12-01

    In this paper, the perturbation and subproper splittings for the generalized inverse A_{T,S}^(2), the unique matrix X such that XAX = X, R(X) = T and N(X) = S, are considered. We present lower and upper bounds for the perturbation of A_{T,S}^(2). Convergence of subproper splittings for computing the special solution A_{T,S}^(2) b of the restricted rectangular linear system Ax = b, x ∈ T, is studied. For the solution A_{T,S}^(2) b we develop a characterization. Therefore, we give a unified treatment of the related problems considered in the literature by Ben-Israel, Berman, Hanke, Neumann, Plemmons, etc.

  16. Stability and uncertainty of finite-fault slip inversions: Application to the 2004 Parkfield, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Liu, P.; Mendoza, C.; Ji, C.; Larson, K.M.

    2007-01-01

    The 2004 Parkfield, California, earthquake is used to investigate stability and uncertainty aspects of the finite-fault slip inversion problem with different a priori model assumptions. We utilize records from 54 strong ground motion stations and 13 continuous, 1-Hz sampled, geodetic instruments. Two inversion procedures are compared: a linear least-squares subfault-based methodology and a nonlinear global search algorithm. These two methods encompass a wide range of the different approaches that have been used to solve the finite-fault slip inversion problem. For the Parkfield earthquake and the inversion of velocity or displacement waveforms, near-surface related site response (top 100 m, frequencies above 1 Hz) is shown to not significantly affect the solution. Results are also insensitive to selection of slip rate functions with similar duration and to subfault size if proper stabilizing constraints are used. The linear and nonlinear formulations yield consistent results when the same limitations in model parameters are in place and the same inversion norm is used. However, the solution is sensitive to the choice of inversion norm, the bounds on model parameters, such as rake and rupture velocity, and the size of the model fault plane. The geodetic data set for Parkfield gives a slip distribution different from that of the strong-motion data, which may be due to the spatial limitation of the geodetic stations and the bandlimited nature of the strong-motion data. Cross validation and the bootstrap method are used to set limits on the upper bound for rupture velocity and to derive mean slip models and standard deviations in model parameters. This analysis shows that slip on the northwestern half of the Parkfield rupture plane from the inversion of strong-motion data is model dependent and has a greater uncertainty than slip near the hypocenter.

  17. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  18. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  19. Variability simulations with a steady, linearized primitive equations model

    NASA Technical Reports Server (NTRS)

    Kinter, J. L., III; Nigam, S.

    1985-01-01

    Solutions of the steady, primitive equations on a sphere, linearized about a zonally symmetric basic state, are computed for the purpose of simulating monthly mean variability in the troposphere. The basic states are observed winter monthly-mean zonal means of zonal and meridional velocities, temperatures, and surface pressures computed from the 15-year NMC time series. A least squares fit to a series of Legendre polynomials is used to compute the basic states between 20 H and the equator, and the hemispheres are assumed symmetric. The model is spectral in the zonal direction, and centered differences are employed in the meridional and vertical directions. Since the model is steady and linear, the solution is obtained by inversion of a block, penta-diagonal matrix. The model simulates the climatology of the GFDL nine-level, spectral general circulation model quite closely, particularly in middle latitudes above the boundary layer. This experiment is an extension of that simulation to examine variability of the steady, linear solution.

  20. A comparison of lidar inversion methods for cirrus applications

    NASA Technical Reports Server (NTRS)

    Elouragini, Salem; Flamant, Pierre H.

    1992-01-01

    Several methods for inverting the lidar equation are suggested as means to derive the cirrus optical properties (beta backscatter, alpha extinction coefficients, and delta optical depth) at one wavelength. The lidar equation can be inverted in a linear or logarithmic form; either solution assumes a linear relationship beta = kappa(alpha), where kappa is the lidar ratio. A number of problems prevent us from calculating alpha (or beta) with good accuracy. Some of these are as follows: (1) the multiple scattering effect (most authors neglect it); (2) an absolute calibration of the lidar system (difficult and sometimes not possible); (3) lack of accuracy on the lidar ratio kappa (taken as constant, but in fact it varies with range and cloud species); and (4) the determination of the boundary condition for the logarithmic solution, which depends on the signal-to-noise ratio (SNR) at cloud top. An inversion in linear form needs an absolute calibration of the system. In practice one uses molecular backscattering below the cloud to calibrate the system. This calibration is not permanently valid because the lower-atmosphere turbidity is variable. For the logarithmic solution, a reference extinction coefficient (alpha(sub f)) at cloud top is required. Several methods to determine alpha(sub f) were suggested. We tested these methods at low SNR. This led us to propose two new methods referenced as S1 and S2.

  1. Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace.

    PubMed

    Zhang, Cheng; Lai, Chun-Liang; Pettitt, B Montgomery

    The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool to produce free energy differences with minimal errors. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid and an aqueous protein solution.
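
    The direct inversion in the iterative subspace (DIIS) idea is generic: keep a short history of iterates and residuals, and extrapolate a new iterate by solving a small constrained least-squares problem. Below is a minimal sketch for an arbitrary fixed-point map g; it is illustrative only and not the WHAM/MBAR-specific implementation described in the paper.

```python
import numpy as np

def diis_fixed_point(g, x0, max_hist=5, n_iter=100, tol=1e-10):
    """Accelerate the fixed-point iteration x <- g(x) with DIIS extrapolation."""
    xs, rs = [], []                       # history of images g(x_i) and residuals
    x = x0.copy()
    for _ in range(n_iter):
        gx = g(x)
        r = gx - x                        # residual of the fixed-point map
        xs.append(gx)
        rs.append(r)
        if len(xs) > max_hist:            # keep only a short history
            xs.pop(0); rs.pop(0)
        m = len(rs)
        # Solve for coefficients c minimizing ||sum_i c_i r_i||
        # subject to sum_i c_i = 1 (Lagrange-multiplier formulation).
        B = np.zeros((m + 1, m + 1))
        B[:m, :m] = np.array([[np.dot(ri, rj) for rj in rs] for ri in rs])
        B[m, :m] = B[:m, m] = 1.0
        rhs = np.zeros(m + 1); rhs[m] = 1.0
        c = np.linalg.lstsq(B, rhs, rcond=None)[0][:m]
        x = sum(ci * xi for ci, xi in zip(c, xs))   # extrapolated iterate
        if np.linalg.norm(r) < tol:
            break
    return x

# Toy usage: solve x = cos(x), whose fixed point is about 0.739.
x_star = diis_fixed_point(np.cos, np.array([1.0]))
```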

  2. Computational Methods for Sparse Solution of Linear Inverse Problems

    DTIC Science & Technology

    2009-03-01

    this approach is that the algorithms take advantage of fast matrix-vector multiplications. An implementation is available as pdco and SolveBP in the... M. A. Saunders, "PDCO: primal-dual interior-point method for convex objectives," Systems Optimization Laboratory, Stanford University, Tech. Rep.

  3. Recursive partitioned inversion of large (1500 x 1500) symmetric matrices

    NASA Technical Reports Server (NTRS)

    Putney, B. H.; Brownd, J. E.; Gomez, R. A.

    1976-01-01

    A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
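
    Partitioned inversion of a symmetric positive definite matrix rests on the Schur-complement identity, which can be applied recursively so that only block-sized pieces are handled at any one time. The NumPy sketch below illustrates that recursion; it is not the original SOLVE program.

```python
import numpy as np

def partitioned_inverse(A, block=256):
    """Invert a symmetric positive definite matrix by recursive 2x2 block
    partitioning, using the Schur complement of the leading block."""
    n = A.shape[0]
    if n <= block:
        return np.linalg.inv(A)          # base case: small dense inverse
    k = n // 2
    A11, A12 = A[:k, :k], A[:k, k:]
    A21, A22 = A[k:, :k], A[k:, k:]
    A11_inv = partitioned_inverse(A11, block)
    S = A22 - A21 @ A11_inv @ A12        # Schur complement of A11
    S_inv = partitioned_inverse(S, block)
    top_right = -A11_inv @ A12 @ S_inv
    top_left = A11_inv + A11_inv @ A12 @ S_inv @ A21 @ A11_inv
    return np.block([[top_left, top_right],
                     [top_right.T, S_inv]])

# Usage check against the dense inverse for a random SPD matrix.
M = np.random.default_rng(0).standard_normal((500, 500))
M = M @ M.T + 500.0 * np.eye(500)
assert np.allclose(partitioned_inverse(M), np.linalg.inv(M))
```

    For symmetric positive definite matrices the same recursion underlies blocked Cholesky-based inversion, which is usually the preferable route in practice.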

  4. Analytical and numerical analysis of inverse optimization problems: conditions of uniqueness and computational methods

    PubMed Central

    Zatsiorsky, Vladimir M.

    2011-01-01

    One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907

  5. Preconditioned alternating direction method of multipliers for inverse problems with constraints

    NASA Astrophysics Data System (ADS)

    Jiao, Yuling; Jin, Qinian; Lu, Xiliang; Wang, Weijie

    2017-02-01

    We propose a preconditioned alternating direction method of multipliers (ADMM) to solve linear inverse problems in Hilbert spaces with constraints, where the feature of the sought solution under a linear transformation is captured by a possibly non-smooth convex function. During each iteration step, our method avoids solving large linear systems by choosing a suitable preconditioning operator. In case the data is given exactly, we prove the convergence of our preconditioned ADMM without assuming the existence of a Lagrange multiplier. In case the data is corrupted by noise, we propose a stopping rule using information on noise level and show that our preconditioned ADMM is a regularization method; we also propose a heuristic rule when the information on noise level is unavailable or unreliable and give its detailed analysis. Numerical examples are presented to test the performance of the proposed method.
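
    For orientation, a plain (unpreconditioned) ADMM iteration for a finite-dimensional analogue, min_x 0.5*||Ax - b||^2 + mu*||Lx||_1, shows the splitting structure; the paper's preconditioned variant additionally avoids the inner linear solve, which this sketch does not reproduce.

```python
import numpy as np

def admm_l1_analysis(A, b, L, mu, rho=1.0, n_iter=200):
    """Generic ADMM sketch for min_x 0.5*||A x - b||^2 + mu*||L x||_1."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(L.shape[0])
    u = np.zeros(L.shape[0])                  # scaled dual variable
    M = A.T @ A + rho * L.T @ L               # normal-equation matrix for the x-update
    for _ in range(n_iter):
        x = np.linalg.solve(M, A.T @ b + rho * L.T @ (z - u))
        v = L @ x + u
        z = np.sign(v) * np.maximum(np.abs(v) - mu / rho, 0.0)  # soft thresholding
        u += L @ x - z                        # dual ascent
    return x

# Toy usage: piecewise-constant signal, A = identity, L = first differences.
rng = np.random.default_rng(0)
n = 60
b = np.concatenate([np.zeros(30), np.ones(30)]) + 0.1 * rng.standard_normal(n)
D = np.diff(np.eye(n), axis=0)                # (n-1) x n first-difference operator
x_hat = admm_l1_analysis(np.eye(n), b, D, mu=0.5)
```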

  6. FWT2D: A massively parallel program for frequency-domain full-waveform tomography of wide-aperture seismic data—Part 1: Algorithm

    NASA Astrophysics Data System (ADS)

    Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves

    2009-03-01

    This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the resolution of a large sparse system of linear equations which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, this latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper. The parallel efficiency and the scalability of the code will also be quantified.

  7. Exact finite element method analysis of viscoelastic tapered structures to transient loads

    NASA Technical Reports Server (NTRS)

    Spyrakos, Constantine Chris

    1987-01-01

    A general method is presented for determining the dynamic torsional/axial response of linear structures composed of either tapered bars or shafts to transient excitations. The method consists of formulating and solving the dynamic problem in the Laplace transform domain by the finite element method and obtaining the response by a numerical inversion of the transformed solution. The derivation of the torsional and axial stiffness matrices is based on the exact solution of the transformed governing equation of motion, and it consequently leads to the exact solution of the problem. The solution permits treatment of the most practical cases of linear tapered bars and shafts, and employs modeling of structures with only one element per member which reduces the number of degrees of freedom involved. The effects of external viscous or internal viscoelastic damping are also taken into account.
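
    Numerical inversion of a Laplace-domain solution back to the time domain can be carried out in several ways; the Gaver-Stehfest algorithm below is one common choice and is shown purely as an illustration of that step (the abstract does not state which inversion scheme the paper uses).

```python
import math
import numpy as np

def stehfest_coefficients(N=12):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    V = np.zeros(N)
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V[k - 1] = (-1) ** (k + N // 2) * s
    return V

def laplace_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) via Gaver-Stehfest."""
    V = stehfest_coefficients(N)
    ln2_t = math.log(2.0) / t
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

# Example: F(s) = 1/(s + 1) has the inverse f(t) = exp(-t).
print(laplace_invert(lambda s: 1.0 / (s + 1.0), 1.0))   # approximately 0.3679
```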

  8. An efficient implementation of a high-order filter for a cubed-sphere spectral element model

    NASA Astrophysics Data System (ADS)

    Kang, Hyun-Gyu; Cheong, Hyeong-Bin

    2017-03-01

    A parallel-scalable, isotropic, scale-selective spatial filter was developed for the cubed-sphere spectral element model on the sphere. The filter equation is a high-order elliptic (Helmholtz) equation based on the spherical Laplacian operator, which is transformed into cubed-sphere local coordinates. The Laplacian operator is discretized on the computational domain, i.e., on each cell, by the spectral element method with Gauss-Lobatto Lagrange interpolating polynomials (GLLIPs) as the orthogonal basis functions. On the global domain, the discrete filter equation yielded a linear system represented by a highly sparse matrix. The density of this matrix increases quadratically (linearly) with the order of GLLIP (order of the filter), and the linear system is solved in only O(Ng) operations, where Ng is the total number of grid points. The solution, obtained by a row reduction method, demonstrated the typical accuracy and convergence rate of the cubed-sphere spectral element method. To achieve computational efficiency on parallel computers, the linear system was treated by an inverse matrix method (a sparse matrix-vector multiplication). The density of the inverse matrix was lowered to only a few times that of the original sparse matrix without degrading the accuracy of the solution. For better computational efficiency, a local-domain high-order filter was introduced: the filter equation is applied to multiple cells, and then only the central cell is used to reconstruct the filtered field. The parallel efficiency of applying the inverse matrix method to the global- and local-domain filter was evaluated by the scalability on a distributed-memory parallel computer. The scale-selective performance of the filter was demonstrated on Earth topography. The usefulness of the filter as a hyper-viscosity for the vorticity equation was also demonstrated.

  9. Solving ill-posed inverse problems using iterative deep neural networks

    NASA Astrophysics Data System (ADS)

    Adler, Jonas; Öktem, Ozan

    2017-12-01

    We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularisation theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularising functional. The method results in a gradient-like iterative scheme, where the ‘gradient’ component is learned using a convolutional network that includes the gradients of the data discrepancy and regulariser as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Shepp-Logan phantom and a head CT. The outcome is compared against filtered backprojection and total variation reconstruction, and the proposed method provides a 5.4 dB PSNR improvement over the total variation reconstruction while being significantly faster, giving reconstructions of 512 × 512 pixel images in about 0.4 s using a single graphics processing unit (GPU).

  10. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    NASA Astrophysics Data System (ADS)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as a linear inverse problem. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed to solve them and generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the arranged problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: (1) velocity inversion from (synthetic) seismic data based on the Born approximation, and (2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.

  11. Inverse scattering transform for the nonlocal nonlinear Schrödinger equation with nonzero boundary conditions

    NASA Astrophysics Data System (ADS)

    Ablowitz, Mark J.; Luo, Xu-Dan; Musslimani, Ziad H.

    2018-01-01

    In 2013, a new nonlocal symmetry reduction of the well-known AKNS (an integrable system of partial differential equations, introduced by and named after Mark J. Ablowitz, David J. Kaup, and Alan C. Newell et al. (1974)) scattering problem was found. It was shown to give rise to a new nonlocal PT symmetric and integrable Hamiltonian nonlinear Schrödinger (NLS) equation. Subsequently, the inverse scattering transform was constructed for the case of rapidly decaying initial data and a family of spatially localized, time periodic one-soliton solutions was found. In this paper, the inverse scattering transform for the nonlocal NLS equation with nonzero boundary conditions at infinity is presented in four different cases when the data at infinity have constant amplitudes. The direct and inverse scattering problems are analyzed. Specifically, the direct problem is formulated, the analytic properties of the eigenfunctions and scattering data and their symmetries are obtained. The inverse scattering problem, which arises from a novel nonlocal system, is developed via a left-right Riemann-Hilbert problem in terms of a suitable uniformization variable and the time dependence of the scattering data is obtained. This leads to a method to linearize/solve the Cauchy problem. Pure soliton solutions are discussed, and explicit 1-soliton solution and two 2-soliton solutions are provided for three of the four different cases corresponding to two different signs of nonlinearity and two different values of the phase difference between plus and minus infinity. In another case, there are no solitons.

  12. 3D CSEM inversion based on goal-oriented adaptive finite element method

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Key, K.

    2016-12-01

    We present a parallel 3D frequency domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt the dual-grid approach where the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent from the fine meshing generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multiple scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the map of conductivity and sensitivity kernel between the forward and inverse mesh is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and the potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled source EM data generated for a complex 3D offshore model with significant seafloor topography.

  13. A posteriori error estimates in voice source recovery

    NASA Astrophysics Data System (ADS)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model that relates these quantities is used for the solution. A variational method for solving the inverse problem of voice source recovery for a new parametric class of sources, namely piecewise-linear sources (PWL-sources), is proposed. Also, a technique for a posteriori numerical error estimation of the obtained solutions is presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problems for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments for speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds of the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that a posteriori error estimates can be used as a criterion of the quality of the obtained voice source pulses in application to speaker recognition.

  14. Scanning electron microscope fine tuning using four-bar piezoelectric actuated mechanism

    NASA Astrophysics Data System (ADS)

    Hatamleh, Khaled S.; Khasawneh, Qais A.; Al-Ghasem, Adnan; Jaradat, Mohammad A.; Sawaqed, Laith; Al-Shabi, Mohammad

    2018-01-01

    Scanning electron microscopes are extensively used for accurate micro/nano image exploration. Several strategies have been proposed to fine tune those microscopes in the past few years. This work presents a new fine tuning strategy for a scanning electron microscope sample table using four-bar piezoelectric actuated mechanisms. The paper presents an algorithm to find all possible inverse kinematics solutions of the proposed mechanism. In addition, another algorithm is presented to search for the optimal inverse kinematic solution. Both algorithms are used simultaneously by means of a simulation study to fine tune a scanning electron microscope sample table through a pre-specified circular or linear path of motion. Results of the study show that the proposed algorithms were able to minimize the power required to drive the piezoelectric actuated mechanism by a ratio of 97.5% for all simulated paths of motion when compared to a general non-optimized solution.

  15. Harmony: EEG/MEG Linear Inverse Source Reconstruction in the Anatomical Basis of Spherical Harmonics

    PubMed Central

    Petrov, Yury

    2012-01-01

    EEG/MEG source localization based on a “distributed solution” is severely underdetermined, because the number of sources is much larger than the number of measurements. In particular, this makes the solution strongly affected by sensor noise. A new way to constrain the problem is presented. By using the anatomical basis of spherical harmonics (or spherical splines) instead of single dipoles the dimensionality of the inverse solution is greatly reduced without sacrificing the quality of the data fit. The smoothness of the resulting solution reduces the surface bias and scatter of the sources (incoherency) compared to the popular minimum-norm algorithms where a single-dipole basis is used (MNE, depth-weighted MNE, dSPM, sLORETA, LORETA, IBF) and allows the effect of sensor noise to be reduced efficiently. This approach, termed Harmony, performed well when applied to experimental data (two exemplars of early evoked potentials) and showed better localization precision and solution coherence than the other tested algorithms when applied to realistically simulated data. PMID:23071497
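
    The dimensionality reduction amounts to expanding the source distribution in a small basis and solving a regularized least-squares problem for the basis coefficients. Below is a minimal linear-algebra sketch with stand-in matrices (a hypothetical leadfield A, basis B, and sensor data y); it illustrates the idea only and is not the published Harmony code.

```python
import numpy as np

def reduced_basis_inverse(A, B, y, lam=1e-2):
    """Estimate sources s = B @ c, where the coefficients c solve the
    Tikhonov-regularized least-squares problem min ||A B c - y||^2 + lam*||c||^2."""
    G = A @ B                                    # sensors x basis functions
    c = np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ y)
    return B @ c                                 # map coefficients back to source space

# Example with random stand-ins for the leadfield and the harmonic basis.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 5000))              # 64 sensors, 5000 source dipoles
B = rng.standard_normal((5000, 100))             # 100 basis functions
y = A @ (B @ rng.standard_normal(100))           # synthetic sensor data
s = reduced_basis_inverse(A, B, y)
```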

  16. Adjoint Sensitivity Analysis of Orbital Mechanics: Application to Computations of Observables' Partials with Respect to Harmonics of the Planetary Gravity Fields

    NASA Technical Reports Server (NTRS)

    Ustinov, Eugene A.; Sunseri, Richard F.

    2005-01-01

    An approach is presented to the inversion of gravity fields based on the evaluation of partials of observables with respect to gravity harmonics using the solution of the adjoint problem of orbital dynamics of the spacecraft. The corresponding adjoint operator is derived directly from the linear operator of the linearized forward problem of orbital dynamics. The resulting adjoint problem is similar to the forward problem and can be solved by the same methods. For a given highest degree N of desired gravity harmonics, this method involves integration of N adjoint solutions as compared to integration of N^2 partials of the forward solution with respect to gravity harmonics in the conventional approach. Thus, for higher resolution gravity models, this approach becomes increasingly more effective in terms of computer resources as compared to the approach based on the solution of the forward problem of orbital dynamics.

  17. Joint inversion of regional and teleseismic earthquake waveforms

    NASA Astrophysics Data System (ADS)

    Baker, Mark R.; Doser, Diane I.

    1988-03-01

    A least squares joint inversion technique for regional and teleseismic waveforms is presented. The mean square error between seismograms and synthetics is minimized using true amplitudes. Matching true amplitudes in modeling requires meaningful estimates of modeling uncertainties and of seismogram signal-to-noise ratios. This also permits calculating linearized uncertainties on the solution based on accuracy and resolution. We use a priori estimates of earthquake parameters to stabilize unresolved parameters, and for comparison with a posteriori uncertainties. We verify the technique on synthetic data, and on the 1983 Borah Peak, Idaho (M = 7.3), earthquake. We demonstrate the inversion on the August 1954 Rainbow Mountain, Nevada (M = 6.8), earthquake and find parameters consistent with previous studies.

  18. Analysis of Operating Principles with S-system Models

    PubMed Central

    Lee, Yun; Chen, Po-Wei; Voit, Eberhard O.

    2011-01-01

    Operating principles address general questions regarding the response dynamics of biological systems as we observe or hypothesize them, in comparison to a priori equally valid alternatives. In analogy to design principles, the question arises: Why are some operating strategies encountered more frequently than others and in what sense might they be superior? It is at this point impossible to study operating principles in complete generality, but the work here discusses the important situation where a biological system must shift operation from its normal steady state to a new steady state. This situation is quite common and includes many stress responses. We present two distinct methods for determining different solutions to this task of achieving a new target steady state. Both methods utilize the property of S-system models within Biochemical Systems Theory (BST) that steady states can be explicitly represented as systems of linear algebraic equations. The first method uses matrix inversion, a pseudo-inverse, or regression to characterize the entire admissible solution space. Operations on the basis of the solution space permit modest alterations of the transients toward the target steady state. The second method uses standard or mixed integer linear programming to determine admissible solutions that satisfy criteria of functional effectiveness, which are specified beforehand. As an illustration, we use both methods to characterize alternative response patterns of yeast subjected to heat stress, and compare them with observations from the literature. PMID:21377479
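
    For the first method, an underdetermined steady-state condition M v = b has the admissible solution space "particular solution plus null space of M". A short generic sketch with made-up matrices follows (illustration only, not the yeast heat-stress model itself).

```python
import numpy as np
from scipy.linalg import null_space

# Underdetermined steady-state constraints: 2 equations, 4 unknown fluxes.
M = np.array([[1.0, -1.0,  0.0,  0.0],
              [0.0,  1.0, -1.0, -1.0]])
b = np.array([1.0, 0.0])

v_particular = np.linalg.pinv(M) @ b      # minimum-norm particular solution
N = null_space(M)                         # basis for admissible adjustments

# Every admissible steady state is v_particular + N @ alpha for any alpha;
# alpha can then be chosen to keep fluxes non-negative or to optimize a criterion.
alpha = np.array([1.0, 0.5])
v = v_particular + N @ alpha
assert np.allclose(M @ v, b)
```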

  19. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pablant, N. A.; Bell, R. E.; Bitter, M.

    2014-11-15

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at the Large Helical Device. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature, and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allows unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.

  20. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE PAGES

    Pablant, N. A.; Bell, R. E.; Bitter, M.; ...

    2014-08-08

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at LHD. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allows unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.

  1. Probabilistic dual heuristic programming-based adaptive critic

    NASA Astrophysics Data System (ADS)

    Herzallah, Randa

    2010-02-01

    Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct from current approaches, the proposed probabilistic DHP AC method takes uncertainties of the forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. A full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.

  2. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    NASA Astrophysics Data System (ADS)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posterior probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them, and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are initially inverted for using the least squares method without positivity constraints and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied to the slip parameters using the Monte Carlo Inversion (MCI) technique and with all parameters obtained from step one as the initial solution. Then the slip artifacts are eliminated from the slip models in the third-step MAP inversion with the fault geometry parameters fixed. We first used a designed model with a 45 degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of the method in earthquake studies and a number of its advantages over other methods. The details will be reported at the meeting.

  3. A three-step maximum a posteriori probability method for InSAR data inversion of coseismic rupture with application to the 14 April 2010 Mw 6.9 Yushu, China, earthquake

    NASA Astrophysics Data System (ADS)

    Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei

    2013-08-01

    We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second-step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Then slip artifacts are eliminated from the slip models in the third step using the same procedure as the second step, with fixed fault geometry parameters. We first design a fault model with a 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China earthquake. Our preferred slip model is composed of three segments, with most of the slip occurring within 15 km depth, and the maximum slip reaches 1.38 m at the surface. The seismic moment released is estimated to be 2.32e+19 Nm, consistent with the seismic estimate of 2.50e+19 Nm.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diwaker, E-mail: diwakerphysics@gmail.com; Chakraborty, Aniruddha

    The Smoluchowski equation with a time-dependent sink term is solved exactly. In this method, knowing the probability distribution P(0, s) at the origin allows the probability distribution P(x, s) at all positions to be derived. Exact solutions of the Smoluchowski equation are also provided in different cases where the sink term has linear, constant, inverse, and exponential variation in time.

  5. Image processing and reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chartrand, Rick

    2012-06-15

    This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.

  6. Studying the Transient Thermal Contact Conductance Between the Exhaust Valve and Its Seat Using the Inverse Method

    NASA Astrophysics Data System (ADS)

    Nezhad, Mohsen Motahari; Shojaeefard, Mohammad Hassan; Shahraki, Saeid

    2016-02-01

    In this study, the experiments aimed at thermally analyzing the exhaust valve in an air-cooled internal combustion engine and at estimating the thermal contact conductance in fixed and periodic contacts. Due to the nature of internal combustion engines, the duration of contact between the valve and its seat is very short, and much time is needed to reach the quasi-steady state in the periodic contact between the exhaust valve and its seat. Using the methods of linear extrapolation and the inverse solution, the surface contact temperatures and the fixed and periodic thermal contact conductances were calculated. The results of the linear extrapolation and inverse methods have similar trends and, based on the error analysis, they are accurate enough to estimate the thermal contact conductance. Moreover, based on the error analysis, a linear extrapolation method using the inverse ratio is preferred. The effects of pressure, contact frequency, heat flux, and cooling air speed on the thermal contact conductance have been investigated. The results show that increasing the contact pressure increases the thermal contact conductance substantially. In addition, increasing the engine speed decreases the thermal contact conductance. On the other hand, boosting the air speed increases the thermal contact conductance, and raising the heat flux reduces it. The average calculated error equals 12.9%.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechant, Lawrence J.

    Wave packet analysis provides a connection between linear small disturbance theory and subsequent nonlinear turbulent spot flow behavior. The traditional association between linear stability analysis and nonlinear wave form is developed via the method of stationary phase, whereby asymptotic (simplified) mean flow solutions are used to estimate dispersion behavior and stationary phase approximations are used to invert the associated Fourier transform. The resulting process typically requires inversions of nonlinear algebraic equations that are best performed numerically, which partially mitigates the value of the approximation as compared to more complete approaches, e.g. DNS or linear/nonlinear adjoint methods. To obtain a simpler, closed-form analytical result, the complete packet solution is modeled via approximate amplitude (linear convected kinematic wave initial value problem) and local sinusoidal (wave equation) expressions. Significantly, the initial value for the kinematic wave transport expression follows from a separable variable coefficient approximation to the linearized pressure fluctuation Poisson expression. The resulting amplitude solution, while approximate in nature, nonetheless appears to mimic many of the global features, e.g. transitional flow intermittency and pressure fluctuation magnitude behavior. A low wave number wave packet model also recovers meaningful auto-correlation and low frequency spectral behaviors.

  8. Inverse square law isothermal property in relativistic charged static distributions

    NASA Astrophysics Data System (ADS)

    Hansraj, Sudan; Qwabe, Nkululeko

    2017-12-01

    We analyze the impact of the inverse square law fall-off of the energy density in a charged isotropic spherically symmetric fluid. Initially, we impose a linear barotropic equation of state p = αρ, but this leads to an intractable differential equation. Next, we consider the neutral isothermal metric of Saslaw et al. [Phys. Rev. D 13, 471 (1996)] in an electric field, and the usual inverse square law of energy density and pressure results, thus preserving the equation of state. Additionally, we discard a linear equation of state and endeavor to find new classes of solutions with the inverse square law fall-off of density. Certain prescribed forms of the spatial and temporal gravitational potentials result in new exact solutions. An interesting result that emerges is that while isothermal fluid spheres are unbounded in the neutral case, this is not so when charge is involved. Indeed, it was found that barotropic equations of state exist and that hypersurfaces of vanishing pressure exist, establishing a boundary in practically all models. One model was studied in depth and found to satisfy other elementary requirements for physical admissibility such as a subluminal sound speed as well as gravitational surface redshifts smaller than 2. The Buchdahl [Acta Phys. Pol. B 10, 673 (1965)], Böhmer and Harko [Gen. Relat. Gravit. 39, 757 (2007)] and Andréasson [Commun. Math. Phys. 198, 507 (2009)] mass-radius bounds were also found to be satisfied. Graphical plots utilizing constants selected from the boundary conditions established that the model displayed characteristics consistent with physically viable models.

  9. Travelling wave solutions of the homogeneous one-dimensional FREFLO model

    NASA Astrophysics Data System (ADS)

    Huang, B.; Hong, J. Y.; Jing, G. Q.; Niu, W.; Fang, L.

    2018-01-01

    Presently there are quite few analytical studies of traffic flows due to the non-linearity of the governing equations. In the present paper we introduce travelling wave solutions for the homogeneous one-dimensional FREFLO model, which are expressed in the form of series and describe the process in which vehicles/pedestrians move with a negative velocity and decelerate until rest, then accelerate in the opposite direction to positive velocities. This method is expected to be extended to more complex situations in the future.

  10. Estimation on nonlinear damping in second order distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Reich, Simeon; Rosen, I. G.

    1989-01-01

    An approximation and convergence theory for the identification of nonlinear damping in abstract wave equations is developed. It is assumed that the unknown dissipation mechanism to be identified can be described by a maximal monotone operator acting on the generalized velocity. The stiffness is assumed to be linear and symmetric. Functional analytic techniques are used to establish that solutions to a sequence of finite dimensional (Galerkin) approximating identification problems in some sense approximate a solution to the original infinite dimensional inverse problem.

  11. Unsteady Solution of Non-Linear Differential Equations Using Walsh Function Series

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2015-01-01

    Walsh functions form an orthonormal basis set consisting of square waves. The discontinuous nature of square waves makes the system well suited for representing functions with discontinuities. The product of any two Walsh functions is another Walsh function - a feature that can radically change an algorithm for solving non-linear partial differential equations (PDEs). The solution algorithm of non-linear differential equations using Walsh function series is unique in that integrals and derivatives may be computed using simple matrix multiplication of series representations of functions. Solutions to PDEs are derived as functions of wave component amplitude. Three sample problems are presented to illustrate the Walsh function series approach to solving unsteady PDEs. These include an advection equation, a Burgers equation, and a Riemann problem. The sample problems demonstrate the use of the Walsh function solution algorithms, exploiting Fast Walsh Transforms in multi-dimensions (O(Nlog(N))). Details of a Fast Walsh Reciprocal, defined here for the first time, enable inversion of a Walsh Symmetric Matrix in O(Nlog(N)) operations. Walsh functions have been derived using a fractal recursion algorithm and these fractal patterns are observed in the progression of pairs of wave number amplitudes in the solutions. These patterns are most easily observed in a remapping defined as a fractal fingerprint (FFP). A prolongation of existing solutions to the next highest order exploits these patterns. The algorithms presented here are considered a work in progress that provide new alternatives and new insights into the solution of non-linear PDEs.
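
    The fast transform exploits the same butterfly structure as the FFT. A minimal in-place fast Walsh-Hadamard transform in natural (Hadamard) ordering is sketched below for illustration; the sequency-ordered Walsh functions discussed in the paper differ only by a reordering of the outputs, and this is not the authors' solver.

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform of a length-2^k array,
    natural (Hadamard) ordering, O(N log N) operations."""
    a = np.asarray(a, dtype=float).copy()
    n = len(a)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):      # butterfly: sum and difference
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
X = fwht(x)
assert np.allclose(fwht(X) / len(x), x)    # the transform is its own inverse up to 1/N
```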

  12. The genetic algorithm: A robust method for stress inversion

    NASA Astrophysics Data System (ADS)

    Thakur, Prithvi; Srivastava, Deepak C.; Gupta, Pravin K.

    2017-01-01

    The stress inversion of geological or geophysical observations is a nonlinear problem. In most existing methods, it is solved by linearization, under certain assumptions. These linear algorithms not only oversimplify the problem but also are vulnerable to entrapment of the solution in a local optimum. We propose the use of a nonlinear heuristic technique, the genetic algorithm, which searches for the global optimum without making any linearizing assumption or simplification. The algorithm mimics the natural evolutionary processes of selection, crossover and mutation and minimizes a composite misfit function in searching for the global optimum, the fittest stress tensor. The validity and efficacy of the algorithm are demonstrated by a series of tests on synthetic and natural fault-slip observations in different tectonic settings and also in situations where the observations are noisy. It is shown that the genetic algorithm is superior to other commonly practised methods, in particular, in those tectonic settings where none of the principal stresses is directed vertically and/or the given data set is noisy.
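
    The essential loop (selection, crossover, and mutation over a population of candidate solutions) is easy to sketch. The version below is a generic real-coded genetic algorithm minimizing an arbitrary misfit function; the encoding, operators, and composite misfit used by the authors are not reproduced here.

```python
import numpy as np

def genetic_minimize(misfit, ndim, pop_size=60, n_gen=200,
                     mut_sigma=0.1, rng=np.random.default_rng(0)):
    """Generic real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, ndim))
    for _ in range(n_gen):
        fit = np.array([misfit(p) for p in pop])
        children = []
        for _ in range(pop_size):
            # Tournament selection of two parents (lower misfit wins).
            i, j, k, l = rng.integers(pop_size, size=4)
            pa = pop[i] if fit[i] < fit[j] else pop[j]
            pb = pop[k] if fit[k] < fit[l] else pop[l]
            w = rng.uniform(0.0, 1.0, ndim)             # blend crossover
            child = w * pa + (1.0 - w) * pb
            child += rng.normal(0.0, mut_sigma, ndim)   # Gaussian mutation
            children.append(child)
        children = np.array(children)
        children[0] = pop[np.argmin(fit)]               # elitism: keep the best
        pop = children
    fit = np.array([misfit(p) for p in pop])
    return pop[np.argmin(fit)]

# Toy usage: recover a 3-parameter vector from a quadratic misfit.
target = np.array([0.3, -0.7, 0.5])
best = genetic_minimize(lambda p: np.sum((p - target) ** 2), ndim=3)
```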

  13. Analysis of Monte Carlo accelerated iterative methods for sparse linear systems

    DOE PAGES

    Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...

    2017-03-05

    Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
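
    The stationary scheme that such hybrid methods accelerate is the preconditioned Richardson iteration associated with a splitting of the coefficient matrix. A plain deterministic sketch with a Jacobi splitting is shown below; the Monte Carlo acceleration studied in the paper is not included.

```python
import numpy as np

def preconditioned_richardson(A, b, M_inv, n_iter=500, tol=1e-10):
    """Stationary iteration x <- x + M_inv @ (b - A @ x) for a splitting A = M - N."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        x = x + M_inv @ r
    return x

# Example with a diagonally dominant matrix and a Jacobi preconditioner.
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
M_inv = np.diag(1.0 / np.diag(A))        # Jacobi splitting: M = diag(A)
x = preconditioned_richardson(A, b, M_inv)
assert np.allclose(A @ x, b, atol=1e-8)
```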

  14. Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.

    2008-05-01

    The estimation of area-source pollutant strength is a relevant issue for the atmospheric environment. This constitutes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed by the delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization approach, whose objective function is given by the square difference between the measured pollutant concentration and the mathematical model, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E. F. P. da Luz, H. F. de Campos Velho, J. C. Becceneri, D. R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol. 1, p. 354-359.
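
    The delta rule adjusts each weight in proportion to the prediction error times the corresponding input. A minimal single-layer (Widrow-Hoff/LMS) version is sketched below purely to illustrate that learning step; it is not the multi-layer perceptron used for the source-strength inversion, and the data are made up.

```python
import numpy as np

def delta_rule_train(X, y, lr=0.01, n_epochs=200, rng=np.random.default_rng(0)):
    """Single-layer linear unit trained with the delta (Widrow-Hoff/LMS) rule."""
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(n_epochs):
        for xi, yi in zip(X, y):
            err = yi - (w @ xi + b)       # prediction error
            w += lr * err * xi            # delta rule: dw proportional to error * input
            b += lr * err
    return w, b

# Toy usage: learn a linear map from concentration features to source strength.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 6))
y = X @ np.array([0.5, -1.0, 2.0, 0.0, 0.3, 1.5]) + 0.2
w, b = delta_rule_train(X, y)
```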

  15. Trans-dimensional Bayesian inversion of airborne electromagnetic data for 2D conductivity profiles

    NASA Astrophysics Data System (ADS)

    Hawkins, Rhys; Brodie, Ross C.; Sambridge, Malcolm

    2018-02-01

    This paper presents the application of a novel trans-dimensional sampling approach to a time domain airborne electromagnetic (AEM) inverse problem to solve for plausible conductivities of the subsurface. Geophysical inverse field problems, such as time domain AEM, are well known to have a large degree of non-uniqueness. Common least-squares optimisation approaches fail to take this into account and provide a single solution with linearised estimates of uncertainty that can result in overly optimistic appraisal of the conductivity of the subsurface. In this new non-linear approach, the spatial complexity of a 2D profile is controlled directly by the data. By examining an ensemble of proposed conductivity profiles it accommodates non-uniqueness and provides more robust estimates of uncertainties.

  16. ALARA: The next link in a chain of activation codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, P.P.H.; Henderson, D.L.

    1996-12-31

    The Adaptive Laplace and Analytic Radioactivity Analysis [ALARA] code has been developed as the next link in the chain of DKR radioactivity codes. Its methods address the criticisms of DKR while retaining its best features. While DKR ignored loops in the transmutation/decay scheme to preserve the exactness of the mathematical solution, ALARA incorporates new computational approaches without jeopardizing the most important features of DKR's physical modelling and mathematical methods. The physical model uses 'straightened-loop, linear chains' to achieve the same accuracy in the loop solutions as is demanded in the rest of the scheme. In cases where a chain has no loops, the exact DKR solution is used. Otherwise, ALARA adaptively chooses between a direct Laplace inversion technique and a Laplace expansion inversion technique to optimize the accuracy and speed of the solution. All of these methods result in matrix solutions which allow the fastest and most accurate solution of exact pulsing histories. Since the entire history is solved for each chain as it is created, ALARA achieves the optimum combination of high accuracy, high speed and low memory usage. 8 refs., 2 figs.

  17. Is 3D true non linear traveltime tomography reasonable?

    NASA Astrophysics Data System (ADS)

    Herrero, A.; Virieux, J.

    2003-04-01

    The data sets requiring 3D analysis tools in the context of seismic exploration (both onshore and offshore experiments) or natural seismicity (microseismicity surveys or post-event measurements) are more and more numerous. Classical linearized tomographies, and also earthquake localisation codes, need an accurate 3D background velocity model. However, if the medium is complex and a priori information is not available, a 1D analysis is not able to provide an adequate background velocity image. Moreover, the design of the acquisition layouts is often intrinsically 3D and makes even 2D approaches difficult, especially in natural seismicity cases. Thus, the solution relies on the use of a true 3D non-linear approach, which allows the model space to be explored and an optimal velocity image to be identified. The problem then becomes practical, and its feasibility depends on the available computing resources (memory and time). In this presentation, we show that tackling a 3D traveltime tomography problem with an extensive non-linear approach, combining fast traveltime estimators based on level-set methods with optimisation techniques such as a multiscale strategy, is feasible. Moreover, because managing inhomogeneous inversion parameters is easier in a non-linear approach, we describe how to perform a joint non-linear inversion for the seismic velocities and the source locations.

  18. A Glimpse in the Third Dimension for Electrical Resistivity Profiles

    NASA Astrophysics Data System (ADS)

    Robbins, A. R.; Plattner, A.

    2017-12-01

    We present an electrode layout strategy designed to enhance the popular two-dimensional electrical resistivity profile. Offsetting electrodes from the traditional linear layout and using 3-D inversion software allows for mapping the three-dimensional electrical resistivity close to the profile plane. We established a series of synthetic tests using simulated data generated from chosen resistivity distributions with a three-dimensional target feature. All inversions and simulations were conducted using freely-available ERT software, BERT and E4D. Synthetic results demonstrate the effectiveness of the offset electrode approach, whereas the linear layout failed to resolve the three-dimensional character of our subsurface feature. A field survey using trench backfill as a known resistivity contrast confirmed our synthetic tests. As we show, 3-D inversions of linear layouts for starting models without previously known structure are futile ventures because they generate symmetric resistivity solutions with respect to the profile plane. This is a consequence of the layout's inherent symmetrical sensitivity patterns. An offset electrode layout is not subject to the same limitation, as the collective measurements do not share a common sensitivity symmetry. For practitioners, this approach presents a low-cost improvement of a traditional geophysical method which is simple to use yet may provide critical information about the three dimensional structure of the subsurface close to the profile.

  19. Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model

    NASA Astrophysics Data System (ADS)

    Mejer Hansen, Thomas

    2017-04-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo based sampling methods. In principle, both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. However, in practice these methods can be associated with huge computational costs that limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a computationally complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of cross hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.
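
    A minimal sketch of the surrogate idea, not the author's code: a scikit-learn MLPRegressor (an assumed stand-in for the trained network) is fit to pairs of model parameters and forward responses, and its modeling error is quantified from held-out samples so it can later be folded into the inversion. The arrays models and travel_times are hypothetical placeholders for prior samples run through the full solver.

        # Train a fast neural-network surrogate of an expensive traveltime
        # forward model and quantify its modeling error (illustrative only).
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        models = rng.normal(size=(2000, 50))                 # placeholder prior model samples
        travel_times = models @ rng.normal(size=(50, 30))    # placeholder "full physics" responses

        X_train, X_val, y_train, y_val = train_test_split(
            models, travel_times, test_size=0.2, random_state=0)

        net = MLPRegressor(hidden_layer_sizes=(100, 100), max_iter=2000,
                           random_state=0).fit(X_train, y_train)

        # Summarize the surrogate's modeling error per datum; during Monte Carlo
        # inversion this variance is added to the data-noise variance.
        resid = y_val - net.predict(X_val)
        sigma_model = resid.std(axis=0)
        print("per-datum surrogate error std:", sigma_model[:5])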

  20. Estimating permeability from quasi-static deformation: Temporal variations and arrival time inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasco, D.W.; Ferretti, Alessandro; Novali, Fabrizio

    2008-05-01

    Transient pressure variations within a reservoir can be treated as a propagating front and analyzed using an asymptotic formulation. From this perspective one can define a pressure 'arrival time' and formulate solutions along trajectories, in the manner of ray theory. We combine this methodology and a technique for mapping overburden deformation into reservoir volume change as a means to estimate reservoir flow properties, such as permeability. Given the entire 'travel time' or phase field, obtained from the deformation data, we can construct the trajectories directly, thereby linearizing the inverse problem. A numerical study indicates that, using this approach, we can infer large-scale variations in flow properties. In an application to Interferometric Synthetic Aperture Radar (InSAR) observations associated with a CO2 injection at the Krechba field, Algeria, we image pressure propagation to the northwest. An inversion for flow properties indicates a linear trend of high permeability. The high permeability correlates with a northwest-trending fault on the flank of the anticline which defines the field.

  1. Hessian Schatten-norm regularization for linear inverse problems.

    PubMed

    Lefkimmiatis, Stamatios; Ward, John Paul; Unser, Michael

    2013-05-01

    We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto lq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data.
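
    The key computational primitive mentioned, projecting a matrix onto a Schatten norm ball through a vector projection of its singular values, can be sketched for the Schatten-1 (nuclear norm) case as follows; this is a generic illustration, not the authors' implementation.

        # Project a matrix onto a Schatten-1 (nuclear-norm) ball of radius r by
        # projecting its singular values onto the l1 ball of the same radius.
        import numpy as np

        def project_l1_ball(v, r):
            """Euclidean projection of a nonnegative vector onto {x >= 0, sum(x) <= r}."""
            if v.sum() <= r:
                return v
            u = np.sort(v)[::-1]
            css = np.cumsum(u)
            k = np.nonzero(u * np.arange(1, len(u) + 1) > (css - r))[0][-1]
            tau = (css[k] - r) / (k + 1.0)
            return np.maximum(v - tau, 0.0)

        def project_schatten1_ball(A, r):
            """Project matrix A onto {X : sum of singular values of X <= r}."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            return U @ np.diag(project_l1_ball(s, r)) @ Vt

        A = np.random.default_rng(0).normal(size=(2, 2))       # e.g. a per-pixel Hessian
        A_proj = project_schatten1_ball(A, r=1.0)
        print(np.linalg.svd(A_proj, compute_uv=False).sum())   # <= 1.0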

  2. Estimation for the Linear Model With Uncertain Covariance Matrices

    NASA Astrophysics Data System (ADS)

    Zachariah, Dave; Shariati, Nafiseh; Bengtsson, Mats; Jansson, Magnus; Chatterjee, Saikat

    2014-03-01

    We derive a maximum a posteriori estimator for the linear observation model, where the signal and noise covariance matrices are both uncertain. The uncertainties are treated probabilistically by modeling the covariance matrices with prior inverse-Wishart distributions. The nonconvex problem of jointly estimating the signal of interest and the covariance matrices is tackled by a computationally efficient fixed-point iteration as well as an approximate variational Bayes solution. The statistical performance of estimators is compared numerically to state-of-the-art estimators from the literature and shown to perform favorably.

  3. Interior point techniques for LP and NLP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evtushenko, Y.

    By using a surjective mapping the initial constrained optimization problem is transformed to a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After inverse transformation to the initial space we obtain a family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case, after some simplification, we obtain Dikin's algorithm, the affine scaling algorithm and a generalized primal-dual interior point linear programming algorithm.
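
    For illustration, the affine scaling (Dikin-type) iteration mentioned above can be written in a few lines for a standard-form linear program; the sketch below is generic and assumes a strictly feasible starting point is available.

        # Affine-scaling iteration for  min c^T x  s.t.  A x = b, x >= 0,
        # starting from a strictly feasible interior point x0.
        import numpy as np

        def affine_scaling(A, b, c, x0, alpha=0.5, iters=200):
            assert np.allclose(A @ x0, b), "x0 must satisfy A x0 = b"
            x = x0.astype(float).copy()
            for _ in range(iters):
                X2 = np.diag(x ** 2)                               # scaling by the current iterate
                y = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)      # dual estimate
                dx = -X2 @ (c - A.T @ y)                           # feasible descent direction
                if np.all(dx >= -1e-12):                           # no decreasing component: stop
                    break
                step = alpha * (-x[dx < 0] / dx[dx < 0]).min()     # stay strictly positive
                x = x + step * dx
            return x

        # Tiny example: minimize -x1 - 2*x2 subject to x1 + x2 + s = 4, all variables >= 0.
        A = np.array([[1.0, 1.0, 1.0]])
        b = np.array([4.0])
        c = np.array([-1.0, -2.0, 0.0])
        x0 = np.array([1.0, 1.0, 2.0])                             # strictly feasible (sums to 4)
        print(affine_scaling(A, b, c, x0))                         # approaches (0, 4, 0)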

  4. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
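
    To make the workflow concrete, the toy sampler below folds a quantified surrogate modeling error into the data variance of a Metropolis random-walk sampler; the linear operators G and G_fast are placeholders for the full physics and the trained network, and the whole example is a schematic rather than the authors' algorithm.

        # Toy Metropolis sampler in which a fast, imperfect surrogate forward
        # model is used and its quantified modeling error is added to the
        # data-noise variance in the likelihood.
        import numpy as np

        rng = np.random.default_rng(1)
        n_m, n_d = 20, 40
        G = rng.normal(size=(n_d, n_m))                   # stand-in "true" physics
        G_fast = G + 0.01 * rng.normal(size=G.shape)      # stand-in trained network

        m_true = rng.normal(size=n_m)
        sigma_d, sigma_model = 0.05, 0.02                 # measurement and surrogate error std
        d_obs = G @ m_true + sigma_d * rng.normal(size=n_d)
        var_tot = sigma_d ** 2 + sigma_model ** 2         # combined variance in the likelihood

        def log_post(m):
            r = d_obs - G_fast @ m
            return -0.5 * np.sum(r ** 2) / var_tot - 0.5 * np.sum(m ** 2)   # Gaussian prior

        m = np.zeros(n_m)
        lp = log_post(m)
        samples = []
        for it in range(20000):
            prop = m + 0.05 * rng.normal(size=n_m)        # random-walk proposal
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                m, lp = prop, lp_prop
            if it % 10 == 0:
                samples.append(m.copy())

        post_mean = np.mean(samples, axis=0)
        print("posterior-mean RMS error:", np.sqrt(np.mean((post_mean - m_true) ** 2)))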

  5. A Conditionally Integrable Bi-confluent Heun Potential Involving Inverse Square Root and Centrifugal Barrier Terms

    NASA Astrophysics Data System (ADS)

    Ishkhanyan, Tigran A.; Krainov, Vladimir P.; Ishkhanyan, Artur M.

    2018-05-01

    We present a conditionally integrable potential, belonging to the bi-confluent Heun class, for which the Schrödinger equation is solved in terms of the confluent hypergeometric functions. The potential involves an attractive inverse square root term x^(-1/2) with arbitrary strength and a repulsive centrifugal barrier core x^(-2) with the strength fixed to a constant. This is a potential well defined on the half-axis. Each of the fundamental solutions composing the general solution of the Schrödinger equation is written as an irreducible linear combination, with non-constant coefficients, of two confluent hypergeometric functions. We present the explicit solution in terms of the non-integer order Hermite functions of scaled and shifted argument and discuss the bound states supported by the potential. We derive the exact equation for the energy spectrum and approximate that by a highly accurate transcendental equation involving trigonometric functions. Finally, we construct an accurate approximation for the bound-state energy levels.

  6. Reconstruction of electrical impedance tomography (EIT) images based on the expectation maximum (EM) method.

    PubMed

    Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi

    2012-11-01

    Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body using electrical contact measurements. The image reconstruction for EIT is an inverse problem, which is both non-linear and ill-posed. The traditional regularization method cannot avoid introducing negative values in the solution. The negativity of the solution produces artifacts in reconstructed images in the presence of noise. A statistical method, namely, the expectation maximization (EM) method, is used to solve the inverse problem for EIT in this paper. The mathematical model of EIT is transformed to a non-negatively constrained likelihood minimization problem. The solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses the strategies of choosing parameters. Simulation and experimental results indicate that reconstructed images of higher quality can be obtained by the EM method, compared with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing.
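
    The practical difference between an unconstrained regularized solve and a non-negatively constrained one can be illustrated with a simple projected-gradient iteration; this is a generic stand-in for the gradient projection-reduced Newton solver described, using a random matrix in place of the EIT sensitivity.

        # Projected-gradient solver for a nonnegativity-constrained,
        # Tikhonov-regularized least-squares problem.
        import numpy as np

        def projected_gradient_nonneg(G, d, lam=1e-2, iters=500):
            """Minimize 0.5*||G m - d||^2 + 0.5*lam*||m||^2 subject to m >= 0."""
            m = np.zeros(G.shape[1])
            step = 1.0 / (np.linalg.norm(G, 2) ** 2 + lam)      # 1 / Lipschitz constant
            for _ in range(iters):
                grad = G.T @ (G @ m - d) + lam * m
                m = np.maximum(m - step * grad, 0.0)            # gradient step, then project
            return m

        rng = np.random.default_rng(0)
        G = rng.normal(size=(60, 30))                           # stand-in sensitivity matrix
        m_true = np.maximum(rng.normal(size=30), 0.0)           # nonnegative "conductivity" pattern
        d = G @ m_true + 0.01 * rng.normal(size=60)
        m_hat = projected_gradient_nonneg(G, d)
        print("minimum of the solution:", m_hat.min())          # never negative by construction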

  7. Application of genetic algorithms to focal mechanism determination

    NASA Astrophysics Data System (ADS)

    Kobayashi, Reiji; Nakanishi, Ichiro

    1994-04-01

    Genetic algorithms are a new class of methods for global optimization. They resemble Monte Carlo techniques, but search for solutions more efficiently than uniform Monte Carlo sampling. In the field of geophysics, genetic algorithms have recently been used to solve some non-linear inverse problems (e.g., earthquake location, waveform inversion, migration velocity estimation). We present an application of genetic algorithms to focal mechanism determination from first-motion polarities of P-waves and apply our method to two recent large events, the Kushiro-oki earthquake of January 15, 1993 and the SW Hokkaido (Japan Sea) earthquake of July 12, 1993. An initial solution and the curvature information of the objective function that gradient methods need are not required in our approach. Moreover, globally optimal solutions can be obtained efficiently. Calculation of polarities based on double-couple models is the most time-consuming part of the source mechanism determination. The amount of calculation required by the method designed in this study is much less than that of previous grid search methods.

  8. Regularized two-step brain activity reconstruction from spatiotemporal EEG data

    NASA Astrophysics Data System (ADS)

    Alecu, Teodor I.; Voloshynovskiy, Sviatoslav; Pun, Thierry

    2004-10-01

    We are aiming at using EEG source localization in the framework of a Brain Computer Interface project. We propose here a new reconstruction procedure, targeting source (or equivalently mental task) differentiation. EEG data can be thought of as a collection of time continuous streams from sparse locations. The measured electric potential on one electrode is the result of the superposition of synchronized synaptic activity from sources in all the brain volume. Consequently, the EEG inverse problem is a highly underdetermined (and ill-posed) problem. Moreover, each source contribution is linear with respect to its amplitude but non-linear with respect to its localization and orientation. In order to overcome these drawbacks we propose a novel two-step inversion procedure. The solution is based on a double scale division of the solution space. The first step uses a coarse discretization and has the sole purpose of globally identifying the active regions, via a sparse approximation algorithm. The second step is applied only on the retained regions and makes use of a fine discretization of the space, aiming at detailing the brain activity. The local configuration of sources is recovered using an iterative stochastic estimator with adaptive joint minimum energy and directional consistency constraints.

  9. SFDBSI_GLS v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poppeliers, Christian

    Matlab code for inversion of frequency-domain, electrostatic geophysical data in terms of scalar scattering amplitudes in the subsurface. The data are assumed to be the difference between two measurements: electric field measurements prior to the injection of an electrically conductive proppant, and the electric field measurements after proppant injection. The proppant is injected into the subsurface via a well, and its purpose is to prop open fractures created by hydraulic fracturing. In both cases the illuminating electric field is assumed to be a vertically incident plane wave. The inversion strategy is to solve a linear system of equations, where each equation defines the amplitude of a candidate scattering volume. The model space is defined by M potential scattering locations, and the frequency-domain data (of which there are k frequencies) are recorded on N receivers. The solution thus solves a kN x M system of linear equations for M scalar amplitudes within the user-defined solution space. Practical Application: Oilfield environments where observed electrostatic geophysical data can reasonably be assumed to be scattered by subsurface proppant volumes. No field validation examples have so far been provided.
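
    The stated inversion strategy, stacking the modeled response of every candidate scattering location for all frequencies and receivers into a kN x M matrix and solving for the M scalar amplitudes, can be schematized as below; the per-scatterer responses are random placeholders rather than the electrostatic kernel used by the code.

        # Build and solve the kN x M linear system for M scalar scattering amplitudes.
        import numpy as np

        rng = np.random.default_rng(0)
        k, N, M = 8, 12, 25
        # response[i, j, m]: field at receiver j and frequency i for unit amplitude at location m
        response = rng.normal(size=(k, N, M)) + 1j * rng.normal(size=(k, N, M))

        A = response.reshape(k * N, M)                      # kN x M system matrix
        amp_true = np.zeros(M)
        amp_true[[3, 17]] = 1.0                             # two "proppant" scattering volumes
        data = A @ amp_true + 0.01 * rng.normal(size=k * N)

        amp, *_ = np.linalg.lstsq(A, data, rcond=None)      # least-squares amplitudes
        print(np.round(np.abs(amp), 2))                     # peaks at the two true locations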

  10. Multicomponent pre-stack seismic waveform inversion in transversely isotropic media using a non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Padhi, Amit; Mallick, Subhashis

    2014-03-01

    Inversion of band- and offset-limited single component (P wave) seismic data does not provide robust estimates of subsurface elastic parameters and density. Multicomponent seismic data can, in principle, circumvent this limitation but adds to the complexity of the inversion algorithm because it requires simultaneous optimization of multiple objective functions, one for each data component. In seismology, these multiple objectives are typically handled by constructing a single objective given as a weighted sum of the objectives of individual data components and sometimes with additional regularization terms reflecting their interdependence; which is then followed by a single objective optimization. Multi-objective problems, inclusive of the multicomponent seismic inversion are however non-linear. They have non-unique solutions, known as the Pareto-optimal solutions. Therefore, casting such problems as a single objective optimization provides one out of the entire set of the Pareto-optimal solutions, which in turn, may be biased by the choice of the weights. To handle multiple objectives, it is thus appropriate to treat the objective as a vector and simultaneously optimize each of its components so that the entire Pareto-optimal set of solutions could be estimated. This paper proposes such a novel multi-objective methodology using a non-dominated sorting genetic algorithm for waveform inversion of multicomponent seismic data. The applicability of the method is demonstrated using synthetic data generated from multilayer models based on a real well log. We document that the proposed method can reliably extract subsurface elastic parameters and density from multicomponent seismic data both when the subsurface is considered isotropic and transversely isotropic with a vertical symmetry axis. We also compute approximate uncertainty values in the derived parameters. Although we restrict our inversion applications to horizontally stratified models, we outline a practical procedure of extending the method to approximately include local dips for each source-receiver offset pair. Finally, the applicability of the proposed method is not just limited to seismic inversion but it could be used to invert different data types not only requiring multiple objectives but also multiple physics to describe them.

  11. Modeling the 16 September 2015 Chile tsunami source with the inversion of deep-ocean tsunami records by means of the r-solution method

    NASA Astrophysics Data System (ADS)

    Voronina, Tatyana; Romanenko, Alexey; Loskutov, Artem

    2017-04-01

    A key point in state-of-the-art tsunami forecasting is constructing a reliable tsunami source. In this study, we present an application of an original numerical inversion technique to modeling the source of the 16 September 2015 Chile tsunami. The problem of recovering a tsunami source from remote measurements of the incoming wave at deep-water tsunameters is considered as an inverse problem of mathematical physics in the class of ill-posed problems. The approach is based on the least squares and the truncated singular value decomposition techniques. The tsunami wave propagation is considered within the scope of the linear shallow-water theory. As in the inverse seismic problem, the numerical solutions obtained by mathematical methods become unstable due to the presence of noise in real data. The method of r-solutions makes it possible to avoid instability in the solution to the ill-posed problem under study. This method is attractive from the computational point of view since the main effort is required only once, for calculating the matrix whose columns consist of computed waveforms for each harmonic taken as a source (an unknown tsunami source is represented as part of a spatial-harmonics series in the source area). Furthermore, by analyzing the singular spectrum of the matrix obtained in the course of the numerical calculations, one can assess in advance how well a given observational system will constrain the inversion, which allows a more effective disposition of the tsunameters to be proposed with the help of precomputations. In other words, the results obtained allow finding a way to improve the inversion by selecting the most informative set of available recording stations. The case study of the 6 February 2013 Solomon Islands tsunami highlights the critical role of the arrangement of deep-water tsunameters in obtaining the inversion results. Implementation of the proposed methodology for the 16 September 2015 Chile tsunami has successfully produced a tsunami source model. The function recovered by the proposed method can find practical applications both as an initial condition for various optimization approaches and for computer calculation of the tsunami wave propagation.
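
    The regularizing idea behind the r-solution, retaining only the leading part of the singular spectrum so that noise is not amplified by the small singular values, can be sketched generically with a truncated SVD; this is an illustration of the principle, not the published implementation.

        # Truncated-SVD solution of an ill-conditioned linear inverse problem.
        import numpy as np

        def tsvd_solve(G, d, r):
            U, s, Vt = np.linalg.svd(G, full_matrices=False)
            s_inv = np.zeros_like(s)
            s_inv[:r] = 1.0 / s[:r]                    # keep only the r largest singular values
            return Vt.T @ (s_inv * (U.T @ d))

        rng = np.random.default_rng(0)
        G = rng.normal(size=(80, 40)) @ np.diag(np.exp(-0.3 * np.arange(40)))   # ill-conditioned
        m_true = rng.normal(size=40)
        d = G @ m_true + 0.01 * rng.normal(size=80)

        for r in (5, 15, 40):
            err = np.linalg.norm(tsvd_solve(G, d, r) - m_true)
            print(f"rank {r:2d}: model error {err:.2f}")       # too large r amplifies the noise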

  12. LS-APC v1.0: a tuning-free method for the linear inverse problem and its application to source-term determination

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas

    2016-11-01

    Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
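
    The contrast between manually tuned uncertainties and uncertainties estimated from the data can be illustrated, in a far simpler setting than the LS-APC variational Bayes scheme, by choosing the noise and source-term prior variances that maximize the Gaussian marginal likelihood; M here is a random stand-in for the SRS matrix.

        # Pick the "tuning parameters" (noise variance s2, prior variance t2) of a
        # linear Gaussian model d = M x + noise by maximizing the marginal likelihood,
        # instead of setting them by hand (simplified illustration, not LS-APC).
        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(0)
        n_d, n_x = 40, 15
        M = rng.normal(size=(n_d, n_x))                    # stand-in SRS matrix
        x_true = np.maximum(rng.normal(size=n_x), 0.0)
        d = M @ x_true + 0.1 * rng.normal(size=n_d)

        def log_evidence(s2, t2):
            cov = s2 * np.eye(n_d) + t2 * (M @ M.T)        # marginal covariance of the data
            return multivariate_normal(mean=np.zeros(n_d), cov=cov).logpdf(d)

        grid = np.logspace(-3, 1, 30)
        s2, t2 = max(((a, b) for a in grid for b in grid), key=lambda p: log_evidence(*p))
        x_post = np.linalg.solve(M.T @ M / s2 + np.eye(n_x) / t2, M.T @ d / s2)   # posterior mean
        print("selected noise variance %.3g, prior variance %.3g" % (s2, t2))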

  13. Estimation of biological parameters of marine organisms using linear and nonlinear acoustic scattering model-based inversion methods.

    PubMed

    Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H

    2016-05-01

    The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.

  14. Inverse problem of HIV cell dynamics using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    González, J. A.; Guzmán, F. S.

    2017-01-01

    In order to describe the cell dynamics of T-cells in a patient infected with HIV, we use a flavour of Perelson's model. This is a non-linear system of Ordinary Differential Equations that describes the evolution of healthy, latently infected and infected T-cell concentrations and the free viral cells. Different parameters in the equations give different dynamics. Assuming the concentrations of these types of cells are known for a particular patient, the inverse problem consists of estimating the parameters in the model. We solve this inverse problem using a Genetic Algorithm (GA) that minimizes the error between the solutions of the model and the data from the patient. These errors depend on the parameters of the GA, such as the mutation rate and the population size, although a detailed analysis of this dependence will be described elsewhere.
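
    A minimal version of the described workflow might look as follows, with a reduced three-equation viral dynamics model standing in for the full Perelson-type system and SciPy's differential evolution standing in for the genetic algorithm; all parameter values and data are synthetic.

        # Fit ODE parameters of a reduced viral-dynamics model (healthy T cells,
        # infected cells, free virus) to noisy synthetic "patient" data with an
        # evolutionary optimizer.
        import numpy as np
        from scipy.integrate import odeint
        from scipy.optimize import differential_evolution

        def rhs(y, t, beta, delta, p, c):
            T, I, V = y
            dT = 10.0 - 0.01 * T - beta * T * V     # supply, natural death, infection
            dI = beta * T * V - delta * I           # infected-cell dynamics
            dV = p * I - c * V                      # viral production and clearance
            return [dT, dI, dV]

        t = np.linspace(0.0, 50.0, 100)
        true = (2e-4, 0.4, 30.0, 3.0)
        data = odeint(rhs, [1000.0, 0.0, 1e-3], t, args=true)
        data += 0.02 * np.abs(data) * np.random.default_rng(0).normal(size=data.shape)

        def misfit(params):
            sim = odeint(rhs, [1000.0, 0.0, 1e-3], t, args=tuple(params))
            if not np.all(np.isfinite(sim)):
                return 1e12                         # penalize parameter sets that blow up
            return np.sum(((sim - data) / (np.abs(data) + 1.0)) ** 2)

        bounds = [(1e-5, 1e-3), (0.05, 1.0), (5.0, 100.0), (0.5, 10.0)]
        result = differential_evolution(misfit, bounds, seed=0, maxiter=40)
        print("recovered parameters:", result.x)    # compare with `true`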

  15. Application of quasi-distributions for solving inverse problems of neutron and γ-ray transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pogosbekyan, L.R.; Lysov, D.A.

    The considered inverse problems deal with the calculation of unknown values of nuclear installations by means of known (goal) functionals of neutron/γ-ray distributions. Examples of these problems are the calculation of the automatic control rod positions as a function of neutron sensor readings, or the calculation of experimentally corrected values of cross-sections, isotope concentrations and fuel enrichment via the measured functionals. The authors have developed a new method to solve the inverse problem. It finds the flux density as a quasi-solution of the particle-conservation linear system adjoined to the equalities for the functionals. The method is more effective compared to the one based on classical perturbation theory. It is suitable for vectorization and it can be used successfully in optimization codes.

  16. Inversion method based on stochastic optimization for particle sizing.

    PubMed

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.

  17. a Novel Discrete Optimal Transport Method for Bayesian Inverse Problems

    NASA Astrophysics Data System (ADS)

    Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.

    2017-12-01

    We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.

  18. TOPEX/POSEIDON tides estimated using a global inverse model

    NASA Technical Reports Server (NTRS)

    Egbert, Gary D.; Bennett, Andrew F.; Foreman, Michael G. G.

    1994-01-01

    Altimetric data from the TOPEX/POSEIDON mission will be used for studies of global ocean circulation and marine geophysics. However, it is first necessary to remove the ocean tides, which are aliased in the raw data. The tides are constrained by two distinct types of information: the hydrodynamic equations which the tidal fields of elevations and velocities must satisfy, and direct observational data from tide gauges and satellite altimetry. Here we develop and apply a generalized inverse method, which allows us to combine rationally all of this information into global tidal fields best fitting both the data and the dynamics, in a least squares sense. The resulting inverse solution is a sum of the direct solution to the astronomically forced Laplace tidal equations and a linear combination of the representers for the data functionals. The representer functions (one for each datum) are determined by the dynamical equations, and by our prior estimates of the statistics of errors in these equations. Our major task is a direct numerical calculation of these representers. This task is computationally intensive, but well suited to massively parallel processing. By calculating the representers we reduce the full (infinite dimensional) problem to a relatively low-dimensional problem at the outset, allowing full control over the conditioning and hence the stability of the inverse solution. With the representers calculated we can easily update our model as additional TOPEX/POSEIDON data become available. As an initial illustration we invert harmonic constants from a set of 80 open-ocean tide gauges. We then present a practical scheme for direct inversion of TOPEX/POSEIDON crossover data. We apply this method to 38 cycles of geophysical data record (GDR) data, computing preliminary global estimates of the four principal tidal constituents, M2, S2, K1 and O1. The inverse solution yields tidal fields which are simultaneously smoother, and in better agreement with altimetric and ground truth data, than previously proposed tidal models. Relative to the 'default' tidal corrections provided with the TOPEX/POSEIDON GDR, the inverse solution reduces crossover difference variances significantly (approximately 20-30%), even though only a small number of free parameters (approximately 1000) are actually fit to the crossover data.

  19. Recovery of time-dependent volatility in option pricing model

    NASA Astrophysics Data System (ADS)

    Deng, Zui-Cha; Hon, Y. C.; Isakov, V.

    2016-11-01

    In this paper we investigate an inverse problem of determining the time-dependent volatility from observed market prices of options with different strikes. Due to the non linearity and sparsity of observations, an analytical solution to the problem is generally not available. Numerical approximation is also difficult to obtain using most of the existing numerical algorithms. Based on our recent theoretical results, we apply the linearisation technique to convert the problem into an inverse source problem from which recovery of the unknown volatility function can be achieved. Two kinds of strategies, namely, the integral equation method and the Landweber iterations, are adopted to obtain the stable numerical solution to the inverse problem. Both theoretical analysis and numerical examples confirm that the proposed approaches are effective. The work described in this paper was partially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region (Project No. CityU 101112) and grants from the NNSF of China (Nos. 11261029, 11461039), and NSF grants DMS 10-08902 and 15-14886 and by Emylou Keith and Betty Dutcher Distinguished Professorship at the Wichita State University (USA).
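
    Of the two strategies mentioned, the Landweber iteration is the simpler to sketch; below is a generic discrete version for a linear(ized) problem d = G m, where early stopping plays the role of regularization (illustrative only, with a random matrix in place of the linearised operator).

        # Landweber iteration m <- m + w * G^T (d - G m) with early stopping.
        import numpy as np

        def landweber(G, d, n_iter=200):
            w = 1.0 / np.linalg.norm(G, 2) ** 2    # relaxation below 2/||G||^2 ensures convergence
            m = np.zeros(G.shape[1])
            for _ in range(n_iter):
                m = m + w * G.T @ (d - G @ m)
            return m

        rng = np.random.default_rng(0)
        G = rng.normal(size=(50, 30))
        m_true = rng.normal(size=30)
        d = G @ m_true + 0.05 * rng.normal(size=50)
        print("model error:", np.linalg.norm(landweber(G, d) - m_true))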

  20. Bayesian prestack seismic inversion with a self-adaptive Huber-Markov random-field edge protection scheme

    NASA Astrophysics Data System (ADS)

    Tian, Yu-Kun; Zhou, Hui; Chen, Han-Ming; Zou, Ya-Ming; Guan, Shou-Jun

    2013-12-01

    Seismic inversion is a highly ill-posed problem, due to many factors such as the limited seismic frequency bandwidth and inappropriate forward modeling. To obtain a unique solution, some smoothing constraints, e.g., Tikhonov regularization, are usually applied. The Tikhonov method can maintain a globally smooth solution, but it blurs structural edges. In this paper we use a Huber-Markov random-field edge-protection method in the procedure of inverting three parameters: P-velocity, S-velocity and density. The method avoids blurring structural edges and resists noise. For the parameter to be inverted, the Huber-Markov random field constructs a neighborhood system, which further acts as the vertical and lateral constraints. We use a quadratic Huber edge penalty function within a layer to suppress noise and a linear one across edges to avoid a fuzzy result. The effectiveness of our method is demonstrated by inverting synthetic data without and with noise. The relationship between the adopted constraints and the inversion results is analyzed as well.
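
    The switch between quadratic smoothing inside a layer and a linear penalty across an edge rests on the Huber function; a minimal version of the penalty and its gradient on neighbouring-sample differences of a 1-D model is sketched below (a generic illustration, not the full self-adaptive scheme).

        # Huber-Markov-style roughness penalty: quadratic for small differences
        # between neighbouring samples, linear for large ones (edges are not blurred).
        import numpy as np

        def huber_penalty(m, delta):
            d = np.diff(m)
            small = np.abs(d) <= delta
            return np.sum(np.where(small, 0.5 * d ** 2, delta * (np.abs(d) - 0.5 * delta)))

        def huber_gradient(m, delta):
            g = np.clip(np.diff(m), -delta, delta)     # derivative of the Huber function
            grad = np.zeros_like(m)
            grad[:-1] -= g                             # distribute back to the two samples
            grad[1:] += g
            return grad

        model = np.concatenate([np.full(20, 2.0), np.full(20, 3.5)])   # two "layers" with an edge
        print(huber_penalty(model, delta=0.1))         # the edge is penalized linearly, not quadratically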

  1. Nonlinear system guidance in the presence of transmission zero dynamics

    NASA Technical Reports Server (NTRS)

    Meyer, G.; Hunt, L. R.; Su, R.

    1995-01-01

    An iterative procedure is proposed for computing the commanded state trajectories and controls that guide a possibly multiaxis, time-varying, nonlinear system with transmission zero dynamics through a given arbitrary sequence of control points. The procedure is initialized by the system inverse with the transmission zero effects nulled out. Then the 'steady state' solution of the perturbation model with the transmission zero dynamics intact is computed and used to correct the initial zero-free solution. Both time domain and frequency domain methods are presented for computing the steady state solutions of the possibly nonminimum phase transmission zero dynamics. The procedure is illustrated by means of linear and nonlinear examples.

  2. 3D linear inversion of magnetic susceptibility data acquired by frequency domain EMI

    NASA Astrophysics Data System (ADS)

    Thiesson, J.; Tabbagh, A.; Simon, F.-X.; Dabas, M.

    2017-01-01

    Low induction number EMI instruments are able to simultaneously measure a soil's apparent magnetic susceptibility and electrical conductivity. This family of dual measurement instruments is highly useful for the analysis of soils and archeological sites. However, the electromagnetic properties of soils are found to vary over considerably different ranges: whereas their electrical conductivity varies from ≤ 0.1 to ≥ 100 mS/m, their relative magnetic permeability remains within a very small range, between 1.0001 and 1.01 SI. Consequently, although apparent conductivity measurements need to be inverted using non-linear processes, the variations of the apparent magnetic susceptibility can be approximated through the use of linear processes, as in the case of the magnetic prospection technique. Our proposed 3D inversion algorithm starts from apparent susceptibility data sets, acquired using different instruments over a given area. A reference vertical profile is defined by considering the mode of the vertical distributions of both the electrical resistivity and of the magnetic susceptibility. At each point of the mapped area, the reference vertical profile response is subtracted to obtain the apparent susceptibility variation dataset. A 2D horizontal Fourier transform is applied to these variation datasets and to the dipole (impulse) response of each instrument, a (vertical) 1D inversion is performed at each point in the spectral domain, and finally the resulting dataset is inverse transformed to restore the apparent 3D susceptibility variations. It has been shown that when applied to synthetic results, this method is able to correct the apparent deformations of a buried object resulting from the geometry of the instrument, and to restore reliable quantitative susceptibility contrasts. It also allows the thin layer solution, similar to that used in magnetic prospection, to be implemented. When applied to field data it initially delivers a level of contrast comparable to that obtained with a non-linear 3D inversion. Over four different sites, this method is able to produce, following an acceptably short computation time, realistic values for the lateral and vertical variations in susceptibility, which are significantly different to those given by a point-by-point 1D inversion.
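
    The described workflow (subtract a reference response, transform the variation maps to the 2D spatial-frequency domain, run a small damped 1D inversion at each wavenumber, and transform back) can be schematized as follows; the per-wavenumber kernel K is an arbitrary placeholder, not the actual instrument dipole response.

        # Schematic FFT-domain layered inversion of apparent-susceptibility variation maps.
        import numpy as np

        n_instruments, n_layers, nx, ny = 4, 3, 64, 64
        rng = np.random.default_rng(0)
        data_maps = rng.normal(size=(n_instruments, nx, ny))       # variation map per instrument

        kx = np.fft.fftfreq(nx)[:, None]
        ky = np.fft.fftfreq(ny)[None, :]
        wavenumber = np.sqrt(kx ** 2 + ky ** 2)
        depths = np.array([0.2, 0.6, 1.2])
        # Placeholder transfer function (instrument, layer, kx, ky), decaying with depth and wavenumber.
        K = np.exp(-2 * np.pi * wavenumber[None, None, :, :] * depths[None, :, None, None]) \
            * (1.0 + np.arange(n_instruments))[:, None, None, None]

        D = np.fft.fft2(data_maps, axes=(1, 2))                    # spectra of the data maps
        model_spec = np.zeros((n_layers, nx, ny), dtype=complex)
        lam = 1e-2                                                 # damping of the 1-D solves
        for i in range(nx):
            for j in range(ny):
                A = K[:, :, i, j]                                  # n_instruments x n_layers kernel
                model_spec[:, i, j] = np.linalg.solve(
                    A.conj().T @ A + lam * np.eye(n_layers), A.conj().T @ D[:, i, j])

        model = np.real(np.fft.ifft2(model_spec, axes=(1, 2)))     # 3-D susceptibility variations
        print(model.shape)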

  3. Bounding solutions of geometrically nonlinear viscoelastic problems

    NASA Technical Reports Server (NTRS)

    Stubstad, J. M.; Simitses, G. J.

    1985-01-01

    Integral transform techniques, such as the Laplace transform, provide simple and direct methods for solving viscoelastic problems formulated within a context of linear material response and using linear measures for deformation. Application of the transform operator reduces the governing linear integro-differential equations to a set of algebraic relations between the transforms of the unknown functions, the viscoelastic operators, and the initial and boundary conditions. Inversion either directly or through the use of the appropriate convolution theorem, provides the time domain response once the unknown functions have been expressed in terms of sums, products or ratios of known transforms. When exact inversion is not possible approximate techniques may provide accurate results. The overall problem becomes substantially more complex when nonlinear effects must be included. Situations where a linear material constitutive law can still be productively employed but where the magnitude of the resulting time dependent deformations warrants the use of a nonlinear kinematic analysis are considered. The governing equations will be nonlinear integro-differential equations for this class of problems. Thus traditional as well as approximate techniques, such as cited above, cannot be employed since the transform of a nonlinear function is not explicitly expressible.

  4. Bounding solutions of geometrically nonlinear viscoelastic problems

    NASA Technical Reports Server (NTRS)

    Stubstad, J. M.; Simitses, G. J.

    1986-01-01

    Integral transform techniques, such as the Laplace transform, provide simple and direct methods for solving viscoelastic problems formulated within a context of linear material response and using linear measures for deformation. Application of the transform operator reduces the governing linear integro-differential equations to a set of algebraic relations between the transforms of the unknown functions, the viscoelastic operators, and the initial and boundary conditions. Inversion either directly or through the use of the appropriate convolution theorem, provides the time domain response once the unknown functions have been expressed in terms of sums, products or ratios of known transforms. When exact inversion is not possible approximate techniques may provide accurate results. The overall problem becomes substantially more complex when nonlinear effects must be included. Situations where a linear material constitutive law can still be productively employed but where the magnitude of the resulting time dependent deformations warrants the use of a nonlinear kinematic analysis are considered. The governing equations will be nonlinear integro-differential equations for this class of problems. Thus traditional as well as approximate techniques, such as cited above, cannot be employed since the transform of a nonlinear function is not explicitly expressible.

  5. Approximate nonlinear multiparameter inversion for multicomponent single and double P-wave scattering in isotropic elastic media

    NASA Astrophysics Data System (ADS)

    Ouyang, Wei; Mao, Weijian

    2018-07-01

    An asymptotic quadratic true-amplitude inversion method for isotropic elastic P waves is proposed to invert medium parameters. The multicomponent P-wave scattered wavefield is computed based on a forward relationship using second-order Born approximation and corresponding high-frequency ray theoretical methods. Within the local double scattering mechanism, the P-wave transmission factors are elaborately calculated, which results in the radiation pattern for P-wave scattering being a quadratic combination of the density and Lamé's moduli perturbation parameters. We further express the elastic P-wave scattered wavefield in a form of generalized Radon transform. After introducing classical backprojection operators, we obtain an approximate solution of the inverse problem by solving a quadratic nonlinear system. Numerical tests with synthetic data computed by finite-differences scheme demonstrate that our quadratic inversion can accurately invert perturbation parameters for strong perturbations, compared with the P-wave single-scattering linear inversion method. Although our inversion strategy here is only syncretized with P-wave scattering, it can be extended to invert multicomponent elastic data containing both P- and S-wave information.

  6. Error analysis in inverse scatterometry. I. Modeling.

    PubMed

    Al-Assaad, Rayan M; Byrne, Dale M

    2007-02-01

    Scatterometry is an optical technique that has been studied and tested in recent years in semiconductor fabrication metrology for critical dimensions. Previous work presented an iterative linearized method to retrieve surface-relief profile parameters from reflectance measurements upon diffraction. With the iterative linear solution model in this work, rigorous models are developed to represent the random and deterministic or offset errors in scatterometric measurements. The propagation of different types of error from the measurement data to the profile parameter estimates is then presented. The improvement in solution accuracies is then demonstrated with theoretical and experimental data by adjusting for the offset errors. In a companion paper (in process) an improved optimization method is presented to account for unknown offset errors in the measurements based on the offset error model.

  7. Mathematics of Computed Tomography

    NASA Astrophysics Data System (ADS)

    Hawkins, William Grant

    A review of the applications of the Radon transform is presented, with emphasis on emission computed tomography and transmission computed tomography. The theory of the 2D and 3D Radon transforms, and the effects of attenuation for emission computed tomography, are presented. The algebraic iterative methods, their importance and limitations, are reviewed. Analytic solutions of the 2D problem are reviewed: the convolution and frequency-filtering methods based on linear shift-invariant theory, and the solution of the circular harmonic decomposition by integral transform theory. The relations between the invisible kernels, the inverse circular harmonic transform, and the consistency conditions are demonstrated. The discussion and review are extended to the 3D problem: convolution, frequency filtering, spherical harmonic transform solutions, and consistency conditions. The Cormack algorithm based on reconstruction with Zernike polynomials is reviewed. An analogous algorithm and set of reconstruction polynomials is developed for the spherical harmonic transform. The relations between the consistency conditions, boundary conditions and orthogonal basis functions for the 2D projection harmonics are delineated and extended to the 3D case. The equivalence of the inverse circular harmonic transform, the inverse Radon transform, and the inverse Cormack transform is presented. The use of the number of nodes of a projection harmonic as a filter is discussed. Numerical methods for the efficient implementation of angular harmonic algorithms based on orthogonal functions and stable recursion are presented. A lower bound for the signal-to-noise ratio of the Cormack algorithm is derived.
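
    The 2D frequency-filtering (filtered backprojection) inversion reviewed above is available directly in standard libraries; assuming scikit-image is installed, a minimal round trip on a test phantom looks like this.

        # Forward 2-D Radon transform and filtered-backprojection inversion.
        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon, rescale

        image = rescale(shepp_logan_phantom(), 0.25)       # small phantom for speed
        theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)

        sinogram = radon(image, theta=theta)               # projections (Radon transform)
        reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")   # FBP

        print("RMS reconstruction error:", np.sqrt(np.mean((reconstruction - image) ** 2)))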

  8. A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Zuo, B.; Hu, X.; Li, H.

    2011-12-01

    A model-enhancement technique is proposed to enhance the geophysical inversion model edges and details without introducing any additional information. Firstly, the theoretical correctness of the proposed geophysical inversion model-enhancement technique is discussed. An inversion MRM (model resolution matrix) convolution approximating the PSF (Point Spread Function) is designed to demonstrate the correctness of the deconvolution model enhancement method. Then, a total-variation regularization blind deconvolution geophysical inversion model-enhancement algorithm is proposed. In previous research, Oldenburg et al. demonstrate the connection between the PSF and the geophysical inverse solution. Alumbaugh et al. propose that more information could be provided by the PSF if we return to the idea of it behaving as an averaging or low-pass filter. We consider the PSF as a low-pass filter to enhance the inversion model, based on the theory of the PSF convolution approximation. Both 1D linear and 2D magnetotelluric inversion examples are used to analyze the validity of the theory and the algorithm. To prove the proposed PSF convolution approximation theory, the 1D linear inversion problem is considered. It shows that the relative error of the convolution approximation is only 0.15%. The 2D synthetic model enhancement experiment is presented. After the deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, and the enhancement result is closer to the actual model than the original inversion model according to a numerical statistical analysis. Moreover, the artifacts in the inversion model are suppressed. The overall precision of the model increases by 75%. All of the experiments show that the structural details and the numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficient between the enhanced inversion model and the actual model is shown in Fig. 1. The figure illustrates that more information and detailed structure of the actual model are recovered through the proposed enhancement algorithm. Using the proposed enhancement method can help us gain clearer insight into the results of the inversions and make better-informed decisions.

  9. Application of the Discrete Regularization Method to the Inverse of the Chord Vibration Equation

    NASA Astrophysics Data System (ADS)

    Wang, Linjun; Han, Xu; Wei, Zhouchao

    The inverse problem of determining the initial condition in the boundary value problem for the chord vibration equation is ill-posed. First, we transform it into a Fredholm integral equation. Second, we discretize it by the trapezoidal rule and obtain a severely ill-conditioned linear system, which is sensitive to disturbances in the data: a tiny error in the right-hand-side data causes large oscillations in the solution, and good results cannot be obtained by the traditional method. In this paper, we solve this problem by the Tikhonov regularization method, and the numerical simulations demonstrate that this method is feasible and effective.
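
    The pipeline described (first-kind Fredholm discretization with the trapezoidal rule, followed by a regularized solve) can be illustrated compactly; a generic Gaussian smoothing kernel stands in here for the chord-vibration kernel.

        # Discretize  integral K(s,t) f(t) dt = g(s)  with the trapezoidal rule and
        # compare the naive solve (unstable under data noise) with Tikhonov regularization.
        import numpy as np

        n = 100
        t = np.linspace(0.0, 1.0, n)
        w = np.full(n, t[1] - t[0])
        w[[0, -1]] *= 0.5                                      # trapezoidal quadrature weights
        K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.05 ** 2))
        A = K * w[None, :]                                     # discretized integral operator

        f_true = np.sin(2 * np.pi * t) + 0.5 * np.sin(5 * np.pi * t)
        g = A @ f_true
        g_noisy = g + 1e-4 * np.random.default_rng(0).normal(size=n)   # tiny data error

        f_naive = np.linalg.solve(A, g_noisy)                  # severely amplified oscillations
        lam = 1e-6
        f_tikh = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ g_noisy)

        print("naive error   :", np.linalg.norm(f_naive - f_true))
        print("Tikhonov error:", np.linalg.norm(f_tikh - f_true))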

  10. Minimum relative entropy, Bayes and Kapur

    NASA Astrophysics Data System (ADS)

    Woodbury, Allan D.

    2011-04-01

    The focus of this paper is to illustrate important philosophies on inversion and the similarities and differences between Bayesian and minimum relative entropy (MRE) methods. The development of each approach is illustrated through the general discrete linear inverse problem. MRE differs from both Bayes and classical statistical methods in that knowledge of moments is used as 'data' rather than sample values. MRE, like Bayes, presumes knowledge of a prior probability distribution and produces the posterior pdf itself. MRE attempts to produce this pdf based on the information provided by new moments. It will use moments of the prior distribution only if new data on these moments are not available. It is important to note that MRE makes a strong statement that the imposed constraints are exact and complete. In this way, MRE is maximally uncommitted with respect to unknown information. In general, since input data are known only to within a certain accuracy, it is important that any inversion method should allow for errors in the measured data. The MRE approach can accommodate such uncertainty and, in new work described here, previous results are modified to include a Gaussian prior. A variety of MRE solutions are reproduced under a number of assumed moments, including second-order central moments. Various solutions of Jacobs & van der Geest were repeated and clarified. Menke's weighted minimum length solution was shown to have a basis in information theory, and the classic least-squares estimate is shown as a solution to MRE under the conditions of more data than unknowns and where we utilize the observed data and their associated noise. An example inverse problem involving a gravity survey over a layered and faulted zone is shown. In all cases the inverse results match the actual density profile quite closely, at least in the upper portions of the profile. The similarity to the Bayesian results is a reflection of the fact that the MRE posterior pdf and its mean are constrained not by d = Gm but by its first moment E(d = Gm), a weakened form of the constraints. If there is no error in the data then one should expect complete agreement between Bayes and MRE, and this is what is shown. Similar results are shown when second-moment data are available (e.g. posterior covariance equal to zero). But dissimilar results are noted when we attempt to derive a Bayesian-like result from MRE. In the various examples given in this paper, the problems look similar but are, in the final analysis, not equal. The methods of attack are different and so are the results, even though we have used the linear inverse problem as a common template.

  11. Azimuthal Seismic Amplitude Variation with Offset and Azimuth Inversion in Weakly Anisotropic Media with Orthorhombic Symmetry

    NASA Astrophysics Data System (ADS)

    Pan, Xinpeng; Zhang, Guangzhi; Yin, Xingyao

    2018-01-01

    Seismic amplitude variation with offset and azimuth (AVOaz) inversion is well known as a popular and pragmatic tool utilized to estimate fracture parameters. A single set of vertical fractures aligned along a preferred horizontal direction embedded in a horizontally layered medium can be considered as an effective long-wavelength orthorhombic medium. Estimation of Thomsen's weak-anisotropy (WA) parameters and fracture weaknesses plays an important role in characterizing the orthorhombic anisotropy in a weakly anisotropic medium. Our goal is to demonstrate an orthorhombic anisotropic AVOaz inversion approach to describe the orthorhombic anisotropy utilizing the observable wide-azimuth seismic reflection data in a fractured reservoir with the assumption of orthorhombic symmetry. Combining Thomsen's WA theory and linear-slip model, we first derive a perturbation in stiffness matrix of a weakly anisotropic medium with orthorhombic symmetry under the assumption of small WA parameters and fracture weaknesses. Using the perturbation matrix and scattering function, we then derive an expression for linearized PP-wave reflection coefficient in terms of P- and S-wave moduli, density, Thomsen's WA parameters, and fracture weaknesses in such an orthorhombic medium, which avoids the complicated nonlinear relationship between the orthorhombic anisotropy and azimuthal seismic reflection data. Incorporating azimuthal seismic data and Bayesian inversion theory, the maximum a posteriori solutions of Thomsen's WA parameters and fracture weaknesses in a weakly anisotropic medium with orthorhombic symmetry are reasonably estimated with the constraints of Cauchy a priori probability distribution and smooth initial models of model parameters to enhance the inversion resolution and the nonlinear iteratively reweighted least squares strategy. The synthetic examples containing a moderate noise demonstrate the feasibility of the derived orthorhombic anisotropic AVOaz inversion method, and the real data illustrate the inversion stabilities of orthorhombic anisotropy in a fractured reservoir.

  12. Next Generation Robots for STEM Education andResearch at Huston Tillotson University

    DTIC Science & Technology

    2017-11-10

    Excerpt fragments from the lab instructions reference ROS launch files in the mtb_lab6_feedback_linearization package: gravity compensation is started with roslaunch mtb_lab6_feedback_linearization gravity_compensation.launch, and Part B (Gravity Inversion) is started with roslaunch mtb_lab6_feedback_linearization gravity_inversion.launch after the system's natural dynamics have been understood; gravity inversion is just one of the exercises.

  13. Exact solutions for the source-excited cylindrical electromagnetic waves in a nonlinear nondispersive medium.

    PubMed

    Es'kin, V A; Kudrin, A V; Petrov, E Yu

    2011-06-01

    The behavior of electromagnetic fields in nonlinear media has been a topical problem since the discovery of materials with a nonlinearity of electromagnetic properties. The problem of finding exact solutions for the source-excited nonlinear waves in curvilinear coordinates has been regarded as unsolvable for a long time. In this work, we present the first solution of this type for a cylindrically symmetric field excited by a pulsed current filament in a nondispersive medium that is simultaneously inhomogeneous and nonlinear. Assuming that the medium has a power-law permittivity profile in the linear regime and lacks a center of inversion, we derive an exact solution for the electromagnetic field excited by a current filament in such a medium and discuss the properties of this solution.

  14. Using informative priors in facies inversion: The case of C-ISR method

    NASA Astrophysics Data System (ADS)

    Valakas, G.; Modis, K.

    2016-08-01

    Inverse problems involving the characterization of hydraulic properties of groundwater flow systems by conditioning on observations of the state variables are mathematically ill-posed because they have multiple solutions and are sensitive to small changes in the data. In the framework of McMC methods for nonlinear optimization and under an iterative spatial resampling transition kernel, we present an algorithm for narrowing the prior and thus producing improved proposal realizations. To achieve this goal, we cosimulate the facies distribution conditionally to facies observations and normal scores transformed hydrologic response measurements, assuming a linear coregionalization model. The approach works by creating an importance sampling effect that steers the process to selected areas of the prior. The effectiveness of our approach is demonstrated by an example application on a synthetic underdetermined inverse problem in aquifer characterization.

  15. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

    Estimating the rupture pattern for large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first-order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude and a long-period character, the W phase is regularly used to estimate point-source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of three-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating the matrix based on the residual errors of consecutive inversions. Then, a covariance matrix for the parameters is computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of their slip distributions. Also, reliable solutions are generally obtained with data in a 30-minute window following the origin time, suggesting that a real-time system could obtain solutions in less than one hour following the origin time.
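
    The regularization-parameter choice by the discrepancy principle reduces to a simple search: scan candidate Tikhonov parameters and keep the one whose residual norm matches the expected noise level. The sketch below uses a random matrix in place of the multiple-time-window kernel.

        # Choose the Tikhonov parameter by the discrepancy principle (grid search).
        import numpy as np

        rng = np.random.default_rng(0)
        G = rng.normal(size=(120, 60))                    # stand-in forward kernel
        m_true = rng.normal(size=60)
        sigma = 0.1
        d = G @ m_true + sigma * rng.normal(size=120)
        target = sigma * np.sqrt(G.shape[0])              # expected residual norm

        def tikhonov(lam):
            return np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ d)

        lams = np.logspace(-4, 2, 50)
        residuals = np.array([np.linalg.norm(d - G @ tikhonov(l)) for l in lams])
        lam_star = lams[np.argmin(np.abs(residuals - target))]   # discrepancy principle
        print("selected regularization parameter:", lam_star)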

  16. Analytically based forward and inverse models of fluvial landscape evolution during temporally continuous climatic and tectonic variations

    NASA Astrophysics Data System (ADS)

    Goren, Liran; Petit, Carole

    2017-04-01

    Fluvial channels respond to changing tectonic and climatic conditions by adjusting their patterns of erosion and relief. It is therefore expected that by examining these patterns, we can infer the tectonic and climatic conditions that shaped the channels. However, the potential interference between climatic and tectonic signals complicates this inference. Within the framework of the stream power model that describes the incision rate of mountainous bedrock rivers, climate variability has two effects: it influences the erosive power of the river, causing local slope change, and it changes the fluvial response time that controls the rate at which tectonically and climatically induced slope breaks are communicated upstream. Because of this dual role, the fluvial response time during continuous climate change has so far been elusive, which hinders our understanding of environmental signal propagation and preservation in the fluvial topography. An analytic solution of the stream power model during general tectonic and climatic histories gives rise to a new definition of the fluvial response time. The analytic solution offers accurate predictions for landscape evolution that are hard to achieve with classical numerical schemes and thus can be used to validate and evaluate the accuracy of numerical landscape evolution models. The analytic solution, together with the new definition of the fluvial response time, allows either the tectonic history or the climatic history to be inferred from river long profiles by using simple linear inversion schemes. An analytic study of landscape evolution during periodic climate change reveals that climatic oscillations whose periods (10-100 kyr, such as Milankovitch cycles) are short with respect to the response time are not expected to leave significant fingerprints in the upstream reaches of fluvial channels. Linear inversion schemes are applied to the Tinee river tributaries in the southern French Alps, where tributary long profiles are used to recover the incision rate history of the Tinee main trunk. Inversion results show periodic, high incision rate pulses, which are correlated with interglacial episodes. Similar incision rate histories are recovered for the past 100 kyr when assuming constant climatic conditions or periodic climatic oscillations, in agreement with theoretical predictions.
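
    A minimal sketch of the kind of linear inversion referred to here, under the simplifying assumptions of an n = 1 stream power model with spatially uniform, time-invariant effective erodibility K_eff: the elevation of a point with integrated drainage-area coordinate chi is the integral of the incision-rate history over that point's response time, which makes the forward problem linear in a block-wise rate history. Variable names and the non-negativity constraint are illustrative choices, not the authors' implementation.

      import numpy as np
      from scipy.optimize import nnls

      def build_kernel(chi, t_edges, K_eff):
          """Each row maps a block-wise incision-rate history U(t) to channel elevation at a
          point with chi coordinate chi[i] (n = 1 stream power). Under these assumptions the
          response time of the point is tau = chi / K_eff and elevation is the integral of U
          over [0, tau], so entry (i, j) is the overlap of [0, tau_i] with time bin j."""
          tau = np.asarray(chi) / K_eff
          A = np.zeros((len(tau), len(t_edges) - 1))
          for j in range(len(t_edges) - 1):
              t0, t1 = t_edges[j], t_edges[j + 1]
              A[:, j] = np.clip(np.minimum(tau, t1) - t0, 0.0, None)
          return A

      # usage sketch: z ~ A @ U, solved with non-negativity to stabilise the inversion;
      # chi and z come from tributary long profiles, t_edges defines the time bins of U
      # A = build_kernel(chi, t_edges, K_eff); U, _ = nnls(A, z)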

  17. Evaluation of the site effect with Heuristic Methods

    NASA Astrophysics Data System (ADS)

    Torres, N. N.; Ortiz-Aleman, C.

    2017-12-01

    The seismic site response in an area depends mainly on the local geological and topographical conditions. Estimation of variations in ground motion can contribute significantly to seismic hazard assessment, in order to reduce human and economic losses. Site response estimation can be posed as a parameterized inversion problem that allows source and path effects to be separated. The generalized inversion (Field and Jacob, 1995) represents one of the alternative methods to estimate the local seismic response, and it involves solving a strongly non-linear multiparametric problem. In this work, the local seismic response was estimated using global optimization methods (Genetic Algorithms and Simulated Annealing), which allowed us to increase the range of explored solutions in a nonlinear search as compared to other conventional linear methods. Using VEOX Network velocity records collected from August 2007 to March 2009, we estimate the source, path and site parameters corresponding to the S-wave amplitude spectra of the velocity seismograms. We can establish that the parameters resulting from this simultaneous inversion show excellent agreement, both in terms of the fit between observed and calculated spectra and when compared with previous work by several authors.

  18. Vortex breakdown simulation

    NASA Technical Reports Server (NTRS)

    Hafez, M.; Ahmad, J.; Kuruvila, G.; Salas, M. D.

    1987-01-01

    In this paper, steady, axisymmetric, inviscid and viscous (laminar) swirling flows representing vortex breakdown phenomena are simulated using a stream function-vorticity-circulation formulation and two numerical methods. The first is based on an inverse iteration, where a norm of the solution is prescribed and the swirling parameter is calculated as part of the output. The second is based on direct Newton iterations, where the linearized equations for all the unknowns are solved simultaneously by an efficient banded Gaussian elimination procedure. Several numerical solutions for inviscid and viscous flows are presented, followed by a discussion of the results. Some improvements on previous work have been achieved: first-order upwind differences are replaced by second-order schemes, the line relaxation procedure (with a linear convergence rate) is replaced by Newton iterations (which converge quadratically), and the Reynolds numbers are extended from 200 up to 1000.

  19. Atmospheric, Cloud, and Surface Parameters Retrieved from Satellite Ultra-spectral Infrared Sounder Measurements

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Liu, Xu; Larar, Allen M.; Smith, William L.; Yang, Ping; Schluessel, Peter; Strow, Larrabee

    2007-01-01

    An advanced retrieval algorithm, built around a fast radiative transfer model that applies to cloudy atmospheres, is used for atmospheric profile and cloud parameter retrieval. This physical inversion scheme has been developed to handle cloudy as well as cloud-free radiances observed with ultraspectral infrared sounders and to simultaneously retrieve surface, atmospheric thermodynamic, and cloud microphysical parameters. A one-dimensional (1-d) variational multivariable inversion solution is used to improve an iterative background state defined by an eigenvector-regression retrieval. The solution is iterated in order to account for non-linearity in the 1-d variational solution. This retrieval algorithm is applied to the Infrared Atmospheric Sounding Interferometer (IASI) on the MetOp satellite launched on October 19, 2006. IASI possesses an ultra-spectral resolution of 0.25 cm(exp -1) and a spectral coverage from 645 to 2760 cm(exp -1). Preliminary retrievals of atmospheric soundings, surface properties, and cloud optical/microphysical properties from the IASI measurements are obtained and presented.
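
    The iterated one-dimensional variational step can be written generically as a Gauss-Newton update about the current state; the sketch below uses placeholder forward-model and Jacobian callables and is not the operational IASI retrieval code.

      import numpy as np

      def onedvar_iterate(x_b, B, y, R, forward, jacobian, n_iter=10):
          """Iterated 1-d variational update: re-linearize the radiative transfer model about
          the current state at every step to account for non-linearity (Gauss-Newton form).
          x_b, B: background state and its covariance; y, R: observations and noise covariance."""
          x = x_b.copy()
          B_inv, R_inv = np.linalg.inv(B), np.linalg.inv(R)
          for _ in range(n_iter):
              H = jacobian(x)                        # linearized forward model about x
              rhs = H.T @ R_inv @ (y - forward(x) + H @ (x - x_b))
              x = x_b + np.linalg.solve(H.T @ R_inv @ H + B_inv, rhs)
          return x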

  20. On the gravitational field of static and stationary axial symmetric bodies with multi-polar structure

    NASA Astrophysics Data System (ADS)

    Letelier, Patricio S.

    1999-04-01

    We give a physical interpretation to the multi-polar Erez-Rosen-Quevedo solution of the Einstein equations in terms of bars. We find that each multi-pole corresponds to the Newtonian potential of a bar with linear density proportional to a Legendre polynomial. We use this fact to find an integral representation of the associated metric function. These integral representations are used in the context of the inverse scattering method to find solutions associated with one or more rotating bodies, each with its own multi-polar structure.

  1. Hyperscaling violating black hole solutions and magneto-thermoelectric DC conductivities in holography

    NASA Astrophysics Data System (ADS)

    Ge, Xian-Hui; Tian, Yu; Wu, Shang-Yu; Wu, Shao-Feng

    2017-08-01

    We derive new black hole solutions in Einstein-Maxwell-axion-dilaton theory with a hyperscaling violation exponent. We then examine the corresponding anomalous transport exhibited by cuprate strange metals in the normal phase of high-temperature superconductors via gauge-gravity duality. Resistivity linear in temperature and an inverse Hall angle quadratic in temperature can be achieved. In the high-temperature regime, the heat conductivity and the Hall Lorenz ratio are proportional to the temperature. The Nernst signal first increases as the temperature rises, but then decreases with increasing temperature in the high-temperature regime.

  2. Deformation of giant vesicles in AC electric fields —Dependence of the prolate-to-oblate transition frequency on vesicle radius

    NASA Astrophysics Data System (ADS)

    Antonova, K.; Vitkova, V.; Mitov, M. D.

    2010-02-01

    The electrodeformation of giant vesicles is studied as a function of their radii and the frequency of the applied AC field. At low frequency the shape is prolate, at sufficiently high frequency it is oblate, and at some frequency, fc, the shape changes from prolate to oblate. A linear dependence of the prolate-to-oblate transition inverse frequency, 1/fc, on the vesicle radius is found. The nature of this phenomenon does not change when either the solution conductivity, σ, or the type of fluid enclosed by the lipid membrane (water, sucrose or glucose aqueous solution) is varied. When σ increases, the value of fc increases while the slope of the line 1/fc(r) decreases. For vesicles in symmetrical conditions (the same conductivity of the inner and the outer solution), a linear dependence between σ and the critical frequency, fc, is obtained for conductivities up to σ=114 μS/cm. For vesicles with sizes below a certain minimum radius, depending on the solution conductivity, no shape transition could be observed.

  3. Crystal and NMR Structures of a Peptidomimetic β-Turn That Provides Facile Synthesis of 13-Membered Cyclic Tetrapeptides.

    PubMed

    Cameron, Alan J; Squire, Christopher J; Edwards, Patrick J B; Harjes, Elena; Sarojini, Vijayalekshmi

    2017-12-14

    Herein we report the unique conformations adopted by linear and cyclic tetrapeptides (CTPs) containing 2-aminobenzoic acid (2-Abz) in solution and as single crystals. The crystal structure of the linear tetrapeptide H2N-d-Leu-d-Phe-2-Abz-d-Ala-COOH (1) reveals a novel planar peptidomimetic β-turn stabilized by three hydrogen bonds and is in agreement with its NMR structure in solution. While CTPs are often synthetically inaccessible or cyclize in poor yield, both 1 and its N-Me-d-Phe analogue (2) adopt pseudo-cyclic frameworks enabling near quantitative conversion to the corresponding CTPs 3 and 4. The crystal structure of the N-methylated peptide (4) is the first reported for a CTP containing 2-Abz and reveals a distinctly planar 13-membered ring, which is also evident in solution. The N-methylation of d-Phe results in a peptide bond inversion compared to the conformation of 3 in solution.

  4. Comparison of seismic waveform inversion results for the rupture history of a finite fault: application to the 1986 North Palm Springs, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.

    1989-01-01

    The July 8, 1986, North Palm Springs earthquake is used as a basis for comparison of several different approaches to the solution for the rupture history of a finite fault. The inversion of different waveform data is considered: both teleseismic P waveforms and local strong ground motion records. Linear parametrizations for slip amplitude are compared with nonlinear parametrizations for both slip amplitude and rupture time. Inversions using both synthetic and empirical Green's functions are considered. In general, accurate Green's functions are more readily calculable for the teleseismic problem, where simple ray theory and flat-layered velocity structures are usually sufficient. However, uncertainties in the variation of t* with frequency are the main limit on the resolution of teleseismic inversions. A set of empirical Green's functions that are well recorded at teleseismic distances could avoid the uncertainties in attenuation. In the inversion of strong motion data, the accurate calculation of propagation path effects other than attenuation is the limiting factor in the resolution of source parameters.

  5. Three-dimensional inverse problem of geometrical optics: a mathematical comparison between Fermat's principle and the eikonal equation.

    PubMed

    Borghero, Francesco; Demontis, Francesco

    2016-09-01

    In the framework of geometrical optics, we consider the following inverse problem: given a two-parameter family of curves (congruence) (i.e., f(x,y,z)=c1,g(x,y,z)=c2), construct the refractive-index distribution function n=n(x,y,z) of a 3D continuous transparent inhomogeneous isotropic medium, allowing for the creation of the given congruence as a family of monochromatic light rays. We solve this problem by following two different procedures: 1. By applying Fermat's principle, we establish a system of two first-order linear nonhomogeneous PDEs in the unique unknown function n=n(x,y,z) relating the assigned congruence of rays with all possible refractive-index profiles compatible with this family. Moreover, we furnish analytical proof that the family of rays must be a normal congruence. 2. By applying the eikonal equation, we establish a second system of two first-order linear homogeneous PDEs whose solutions give the equation S(x,y,z)=const. of the geometric wavefronts and, consequently, all pertinent refractive-index distribution functions n=n(x,y,z). Finally, we make a comparison between the two procedures described above, discussing appropriate examples having exact solutions.

  6. Dissipative particle dynamics: Effects of thermostating schemes on nano-colloid electrophoresis

    NASA Astrophysics Data System (ADS)

    Hassanzadeh Afrouzi, Hamid; Moshfegh, Abouzar; Farhadi, Mousa; Sedighi, Kurosh

    2018-05-01

    A novel fully explicit approach using the dissipative particle dynamics (DPD) method is introduced in the present study to model the electrophoretic transport of nano-colloids in an electrolyte solution. A Slater-type charge smearing function included in the 3D Ewald summation method is employed to treat electrostatic interactions. The performance of various thermostats in controlling the system temperature is assessed, and the dynamic response of the colloidal electrophoretic mobility is studied under practical ranges of external electric field (0.072 < E < 0.361 V/nm), covering the linear to non-linear response regime, and ionic salt concentration (0.049 < SC < 0.69 M), covering weak to strong Debye screening of the colloid. System temperature and electrophoretic mobility both show a direct relationship with electric field and an inverse relationship with colloidal repulsion, while with salt concentration they show direct and inverse trends, respectively, under the various thermostats. The Nosé-Hoover-Lowe-Andersen and Lowe-Andersen thermostats are found to function more effectively under high electric fields (E > 0.145 V/nm) while thermal equilibrium is maintained. Reasonable agreement is achieved by benchmarking the system radial distribution function against available EW3D modelling, as well as by comparing the reduced mobility against the conventional Smoluchowski and Hückel theories and the numerical solution of the Poisson-Boltzmann equation.

  7. Parameterizations for ensemble Kalman inversion

    NASA Astrophysics Data System (ADS)

    Chada, Neil K.; Iglesias, Marco A.; Roininen, Lassi; Stuart, Andrew M.

    2018-05-01

    The use of ensemble methods to solve inverse problems is attractive because it is a derivative-free methodology which is also well-adapted to parallelization. In its basic iterative form the method produces an ensemble of solutions which lie in the linear span of the initial ensemble. Choice of the parameterization of the unknown field is thus a key component of the success of the method. We demonstrate how both geometric ideas and hierarchical ideas can be used to design effective parameterizations for a number of applied inverse problems arising in electrical impedance tomography, groundwater flow and source inversion. In particular we show how geometric ideas, including the level set method, can be used to reconstruct piecewise continuous fields, and we show how hierarchical methods can be used to learn key parameters in continuous fields, such as length-scales, resulting in improved reconstructions. Geometric and hierarchical ideas are combined in the level set method to find piecewise constant reconstructions with interfaces of unknown topology.
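
    For reference, a single update of the basic (perturbed-observation) ensemble Kalman inversion iteration can be sketched as follows; the forward map, data vector and noise covariance are placeholders.

      import numpy as np

      def eki_update(ensemble, forward, y, Gamma, rng=None):
          """One iteration of basic ensemble Kalman inversion.

          ensemble: (J, n) array of parameter vectors; forward maps a vector to data space.
          Updates stay in the linear span of the initial ensemble, which is why the
          parameterization of the unknown field matters so much."""
          rng = np.random.default_rng(rng)
          J = ensemble.shape[0]
          G = np.array([forward(m) for m in ensemble])           # (J, d) forward evaluations
          m_mean, g_mean = ensemble.mean(axis=0), G.mean(axis=0)
          C_mg = (ensemble - m_mean).T @ (G - g_mean) / J        # parameter-data covariance
          C_gg = (G - g_mean).T @ (G - g_mean) / J               # data-data covariance
          K = C_mg @ np.linalg.inv(C_gg + Gamma)                 # Kalman-type gain
          y_pert = y + rng.multivariate_normal(np.zeros(len(y)), Gamma, size=J)
          return ensemble + (y_pert - G) @ K.T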

  8. On the Use of Nonlinear Regularization in Inverse Methods for the Solar Tachocline Profile Determination

    NASA Astrophysics Data System (ADS)

    Corbard, T.; Berthomieu, G.; Provost, J.; Blanc-Feraud, L.

    Inferring the solar rotation from observed frequency splittings represents an ill-posed problem in the sense of Hadamard, and the traditional approach used to override this difficulty consists in regularizing the problem by adding some a priori information on the global smoothness of the solution, defined as the norm of its first or second derivative. Nevertheless, inversions of rotational splittings (e.g. Corbard et al., 1998; Schou et al., 1998) have shown that the surface layers and the so-called solar tachocline (Spiegel & Zahn 1992) at the base of the convection zone are regions in which high radial gradients of the rotation rate occur. Therefore, the global smoothness a priori, which tends to smooth out every high gradient in the solution, may not be appropriate for the study of a zone like the tachocline, which is of particular interest for the study of solar dynamics (e.g. Elliot 1997). In order to infer the fine structure of such regions with high gradients by inverting helioseismic data, we have to find a way to preserve these zones in the inversion process. Setting a better-adapted constraint on the solution leads to non-linear regularization methods that are in current use for edge-preserving regularization in computed imaging (e.g. Blanc-Feraud et al. 1995). In this work, we investigate their use in the helioseismic context of rotational inversions.

  9. The quasi-optimality criterion in the linear functional strategy

    NASA Astrophysics Data System (ADS)

    Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey

    2018-07-01

    The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications that take into account the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules, taking into account the smoothness of the solution and the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and a stochastic setup and verify that for mildly ill-posed problems and Gaussian noise these conditions are satisfied almost surely, whereas in the severely ill-posed case and in a similar setup the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
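
    A minimal sketch of the heuristic quasi-optimality rule on a geometric grid of regularization parameters, optionally applied to the value of a linear functional of the regularized solution (the linear functional strategy); solve and functional are placeholder callables, not the paper's notation.

      import numpy as np

      def quasi_optimality(alphas, solve, functional=None):
          """Pick the parameter minimizing the difference between regularized solutions
          (or functional values) at consecutive grid points, scanned from large to small."""
          alphas = np.sort(np.asarray(alphas))[::-1]             # largest to smallest
          xs = [solve(a) for a in alphas]
          if functional is not None:
              xs = [functional(x) for x in xs]
          diffs = [np.linalg.norm(np.atleast_1d(xs[k + 1] - xs[k])) for k in range(len(xs) - 1)]
          return alphas[int(np.argmin(diffs))]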

  10. Bessel smoothing filter for spectral-element mesh

    NASA Astrophysics Data System (ADS)

    Trinh, P. T.; Brossier, R.; Métivier, L.; Virieux, J.; Wellington, P.

    2017-06-01

    Smoothing filters are extremely important tools in seismic imaging and inversion, such as for traveltime tomography, migration and waveform inversion. For efficiency, and as they can be used a number of times during inversion, it is important that these filters can easily incorporate prior information on the geological structure of the investigated medium, through variable coherent lengths and orientations. In this study, we promote the use of the Bessel filter to achieve these purposes. Instead of considering the direct application of the filter, we demonstrate that we can rely on the equation associated with its inverse filter, which amounts to the solution of an elliptic partial differential equation. This enhances the efficiency of the filter application, and also its flexibility. We apply this strategy within a spectral-element-based elastic full waveform inversion framework. Taking advantage of this formulation, we apply the Bessel filter by solving the associated partial differential equation directly on the spectral-element mesh through the standard weak formulation. This avoids cumbersome projection operators between the spectral-element mesh and a regular Cartesian grid, or expensive explicit windowed convolution on the finite-element mesh, which is often used for applying smoothing operators. The associated linear system is solved efficiently through a parallel conjugate gradient algorithm, in which the matrix-vector product is factorized and highly optimized with vectorized computation. A significant improvement in scaling is obtained when comparing this strategy with the explicit convolution method. The theoretical numerical complexity of this approach increases linearly with the coherent length, whereas a sublinear relationship is observed in practice. Numerical illustrations are provided here for schematic examples, and for a more realistic elastic full waveform inversion gradient smoothing on the SEAM II benchmark model. These examples illustrate well the efficiency and flexibility of the approach proposed.
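
    A regular-grid analogue of the idea (solving the elliptic equation associated with the inverse filter rather than convolving explicitly) can be sketched with sparse finite differences and conjugate gradients; this is an illustrative stand-in, not the spectral-element weak-form implementation described in the abstract.

      import numpy as np
      from scipy.sparse import identity, kron, diags
      from scipy.sparse.linalg import cg

      def bessel_smooth_2d(field, lx, lz, dx, dz):
          """Apply an anisotropic Bessel-type smoother by solving the elliptic problem
          (I - lx^2 d2/dx2 - lz^2 d2/dz2) u = field, i.e. the inverse filter, instead of
          performing an explicit windowed convolution."""
          nz, nx = field.shape
          d2x = diags([1, -2, 1], [-1, 0, 1], shape=(nx, nx)) / dx ** 2
          d2z = diags([1, -2, 1], [-1, 0, 1], shape=(nz, nz)) / dz ** 2
          lap = kron(identity(nz), d2x) * lx ** 2 + kron(d2z, identity(nx)) * lz ** 2
          A = identity(nx * nz) - lap                    # symmetric positive definite
          u, info = cg(A, field.ravel())
          assert info == 0, "conjugate gradient did not converge"
          return u.reshape(nz, nx)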

  11. Efficient Inversion of Mult-frequency and Multi-Source Electromagnetic Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gary D. Egbert

    2007-03-22

    The project covered by this report focused on development of efficient but robust non-linear inversion algorithms for electromagnetic induction data, in particular for data collected with multiple receivers and multiple transmitters, a situation extremely common in geophysical EM subsurface imaging methods. A key observation is that for such multi-transmitter problems each step in commonly used linearized iterative limited memory search schemes such as conjugate gradients (CG) requires solution of forward and adjoint EM problems for each of the N frequencies or sources, essentially generating data sensitivities for an N dimensional data-subspace. These multiple sensitivities allow a good approximation to the full Jacobian of the data mapping to be built up in many fewer search steps than would be required by application of textbook optimization methods, which take no account of the multiplicity of forward problems that must be solved for each search step. We have applied this idea to develop a hybrid inversion scheme that combines features of the iterative limited memory type methods with a Newton-type approach using a partial calculation of the Jacobian. Initial tests on 2D problems show that the new approach produces results essentially identical to a Newton type Occam minimum structure inversion, while running more rapidly than an iterative (fixed regularization parameter) CG style inversion. Memory requirements, while greater than for something like CG, are modest enough that even in 3D the scheme should allow 3D inverse problems to be solved on a common desktop PC, at least for modest (~ 100 sites, 15-20 frequencies) data sets. A secondary focus of the research has been development of a modular system for EM inversion, using an object oriented approach. This system has proven useful for more rapid prototyping of inversion algorithms, in particular allowing initial development and testing to be conducted with two-dimensional example problems, before approaching more computationally cumbersome three-dimensional problems.

  12. Assigning uncertainties in the inversion of NMR relaxation data.

    PubMed

    Parker, Robert L; Song, Yi-Qiao

    2005-06-01

    Recovering the relaxation-time density function (or distribution) from NMR decay records requires inverting a Laplace transform based on noisy data, an ill-posed inverse problem. An important objective in the face of the consequent ambiguity in the solutions is to establish what reliable information is contained in the measurements. To this end we describe how upper and lower bounds on linear functionals of the density function, and ratios of linear functionals, can be calculated using optimization theory. These bounded quantities cover most of those commonly used in geophysical NMR, such as porosity, T(2) log-mean, and bound fluid volume fraction, and include averages over any finite interval of the density function itself. In the theory presented, statistical considerations enter to account for the presence of significant noise in the signal, but not in a prior characterization of density models. Our characterization of the uncertainties is conservative and informative; it will have wide application in geophysical NMR and elsewhere.
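
    A simplified stand-in for the bounding idea: with a non-negative density and an elementwise (rather than quadratic) misfit constraint, upper and lower bounds on a linear functional of the density become a pair of linear programs. The kernel, data, functional weights and tolerance below are placeholders, and the constraint form differs from the optimization theory used in the paper.

      import numpy as np
      from scipy.optimize import linprog

      def functional_bounds(K, d, c, tol):
          """Lower and upper bounds on the linear functional c.f of a non-negative
          relaxation-time density f, subject to elementwise data fit |K f - d| <= tol."""
          c = np.asarray(c, dtype=float)
          A_ub = np.vstack([K, -K])
          b_ub = np.concatenate([d + tol, -(d - tol)])
          lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))    # minimize c.f
          hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))   # maximize c.f
          return lo.fun, -hi.fun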

  13. Informativeness of Wind Data in Linear Madden-Julian Oscillation Prediction

    DTIC Science & Technology

    2016-08-15

    Linear inverse models (LIMs) are used to explore the predictability and information content of the Madden–Julian Oscillation (MJO). Hindcast skill for ... mostly at the largest scales, adds 1–2 days of skill. Keywords: linear inverse modeling; Madden–Julian Oscillation; sub-seasonal prediction ... that may reflect on the MJO's incompletely understood dynamics. Cavanaugh et al. (2014, hereafter C14) explored the skill of linear inverse ...

  14. Highly Accurate Analytical Approximate Solution to a Nonlinear Pseudo-Oscillator

    NASA Astrophysics Data System (ADS)

    Wu, Baisheng; Liu, Weijia; Lim, C. W.

    2017-07-01

    A second-order Newton method is presented to construct analytical approximate solutions to a nonlinear pseudo-oscillator in which the restoring force is inversely proportional to the dependent variable. The nonlinear equation is first expressed in a specific form, and it is then solved in two steps, a predictor and a corrector step. In each step, the harmonic balance method is used in an appropriate manner to obtain a set of linear algebraic equations. With only one simple second-order Newton iteration step, a short, explicit, and highly accurate analytical approximate solution can be derived. The approximate solutions are valid for all amplitudes of the pseudo-oscillator. Furthermore, the method incorporates second-order Taylor expansion in a natural way and converges significantly faster.

  15. The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions

    NASA Astrophysics Data System (ADS)

    Loaiciga, Hugo A.; Mariño, Miguel A.

    1987-01-01

    The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared between themselves and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors as compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and do hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases, are also discussed. A simulation study is used to evaluate the methods developed in this study.

  16. Kinematic inversion of the 2008 Mw7 Iwate-Miyagi (Japan) earthquake by two independent methods: Sensitivity and resolution analysis

    NASA Astrophysics Data System (ADS)

    Gallovic, Frantisek; Cirella, Antonella; Plicka, Vladimir; Piatanesi, Alessio

    2013-04-01

    On 14 June 2008, UTC 23:43, the border of Iwate and Miyagi prefectures was hit by an Mw7 reverse-fault type crustal earthquake. The event is known to have produced the largest ground acceleration observed to date (~4g), which was recorded at station IWTH25. We analyze observed strong motion data with the objective of imaging the event rupture process and the associated uncertainties. Two different slip inversion approaches are used, the difference between the two methods lying only in the parameterization of the source model. To minimize mismodeling of the propagation effects, we use a crustal model obtained by full waveform inversion of aftershock records in the frequency range 0.05-0.3 Hz. In the first method, based on a linear formulation, the parameters are represented by samples of slip velocity functions along the (finely discretized) fault in a time window spanning the whole rupture duration. Such a source description is very general, with no prior constraint on the nucleation point, rupture velocity, or shape of the velocity function. Thus the inversion can resolve very general (unexpected) features of the rupture evolution, such as multiple rupturing, rupture-propagation reversals, etc. On the other hand, due to the relatively large number of model parameters, the inversion result is highly non-unique, with the possibility of obtaining a biased solution. The second method is a non-linear global inversion technique, where each point on the fault can slip only once, following a prescribed functional form of the source time function. We invert simultaneously for peak slip velocity, slip angle, rise time and rupture time by allowing a given range of variability for each kinematic model parameter. For this reason, unlike the linear inversion approach, the rupture process needs a smaller number of parameters to be retrieved, and is more constrained, with proper control on the allowed range of parameter values. In order to test the resolution and reliability of the retrieved models, we present a thorough analysis of the performance of the two inversion approaches. In fact, depending on the inversion strategy and the intrinsic 'non-uniqueness' of the inverse problem, the final slip maps and distributions of rupture onset times are generally different, sometimes even incompatible with each other. Great emphasis is devoted to the uncertainty estimates of both techniques. Thus we do not compare only the best-fitting models, but also their 'compatibility' in terms of the uncertainty limits.

  17. Complete Sets of Radiating and Nonradiating Parts of a Source and Their Fields with Applications in Inverse Scattering Limited-Angle Problems

    PubMed Central

    Louis, A. K.

    2006-01-01

    Many algorithms applied in inverse scattering problems use source-field systems instead of the direct computation of the unknown scatterer. It is well known that the resulting source problem does not have a unique solution, since certain parts of the source totally vanish outside of the reconstruction area. This paper provides for the two-dimensional case special sets of functions, which include all radiating and all nonradiating parts of the source. These sets are used to solve an acoustic inverse problem in two steps. The problem under discussion consists of determining an inhomogeneous obstacle supported in a part of a disc, from data, known for a subset of a two-dimensional circle. In a first step, the radiating parts are computed by solving a linear problem. The second step is nonlinear and consists of determining the nonradiating parts. PMID:23165060

  18. A matched-peak inversion approach for ocean acoustic travel-time tomography

    PubMed

    Skarsoulis

    2000-03-01

    A new approach for the inversion of travel-time data is proposed, based on the matching between model arrivals and observed peaks. Using the linearized model relations between sound-speed and arrival-time perturbations about a set of background states, arrival times and associated errors are calculated on a fine grid of model states discretizing the sound-speed parameter space. Each model state can explain (identify) a number of observed peaks in a particular reception lying within the uncertainty intervals of the corresponding predicted arrival times. The model states that explain the maximum number of observed peaks are considered as the more likely parametric descriptions of the reception; these model states can be described in terms of mean values and variances providing a statistical answer (matched-peak solution) to the inversion problem. A basic feature of the matched-peak inversion approach is that each reception can be treated independently, i.e., no constraints are posed from previous-reception identification or inversion results. Accordingly, there is no need for initialization of the inversion procedure and, furthermore, discontinuous travel-time data can be treated. The matched-peak inversion method is demonstrated by application to 9-month-long travel-time data from the Thetis-2 tomography experiment in the western Mediterranean sea.
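
    The peak-matching step can be sketched as a simple counting loop over a grid of candidate model states; predict_arrivals and predict_errors are hypothetical callables standing in for the linearized background-state model relations described above.

      import numpy as np

      def matched_peak_solution(model_grid, predict_arrivals, predict_errors, observed_peaks):
          """For each candidate model state, count how many observed peak times fall inside
          the uncertainty interval of some predicted arrival; the states explaining the most
          peaks form the matched-peak solution (returned as indices into the grid)."""
          counts = np.zeros(len(model_grid), dtype=int)
          for i, m in enumerate(model_grid):
              arrivals, errors = predict_arrivals(m), predict_errors(m)
              for peak in observed_peaks:
                  if np.any(np.abs(arrivals - peak) <= errors):
                      counts[i] += 1
          best = np.flatnonzero(counts == counts.max())
          return best, counts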

  19. Iterative methods for mixed finite element equations

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.

    1985-01-01

    Iterative strategies for the solution of the indefinite systems of equations arising from the mixed finite element method are investigated in this paper, with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived, which is then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented. These are: constant metric iterations, which do not involve updating the preconditioner, and variable metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance with application to linear and nonlinear model problems.

  20. Resolvent approach for two-dimensional scattering problems. Application to the nonstationary Schrödinger problem and the KPI equation

    NASA Astrophysics Data System (ADS)

    Boiti, M.; Pempinelli, F.; Pogrebkov, A. K.; Polivanov, M. C.

    1992-11-01

    The resolvent operator of the linear problem is determined as the full Green function continued in the complex domain in two variables. An analog of the known Hilbert identity is derived. We demonstrate the role of this identity in the study of two-dimensional scattering. Considering the nonstationary Schrödinger equation as an example, we show that all types of solutions of the linear problems, as well as spectral data known in the literature, are given as specific values of this unique function — the resolvent function. A new form of the inverse problem is formulated.

  1. Variable-permittivity linear inverse problem for the H(sub z)-polarized case

    NASA Technical Reports Server (NTRS)

    Moghaddam, M.; Chew, W. C.

    1993-01-01

    The H(sub z)-polarized inverse problem has rarely been studied before due to the complicated way in which the unknown permittivity appears in the wave equation. This problem is equivalent to the acoustic inverse problem with variable density. We have recently reported the solution to the nonlinear variable-permittivity H(sub z)-polarized inverse problem using the Born iterative method. Here, the linear inverse problem is solved for permittivity (epsilon) and permeability (mu) using a different approach which is an extension of the basic ideas of diffraction tomography (DT). The key to solving this problem is to utilize frequency diversity to obtain the required independent measurements. The receivers are assumed to be in the far field of the object, and plane wave incidence is also assumed. It is assumed that the scatterer is weak, so that the Born approximation can be used to arrive at a relationship between the measured pressure field and two terms related to the spatial Fourier transform of the two unknowns, epsilon and mu. The term involving permeability corresponds to monopole scattering and that for permittivity to dipole scattering. Measurements at several frequencies are used and a least squares problem is solved to reconstruct epsilon and mu. It is observed that the low spatial frequencies in the spectra of epsilon and mu produce inaccuracies in the results. Hence, a regularization method is devised to remove this problem. Several results are shown. Low contrast objects for which the above analysis holds are used to show that good reconstructions are obtained for both permittivity and permeability after regularization is applied.

  2. Focal mechanism determination of induced micro-earthquakes in reservoir by non linear inversion of amplitudes

    NASA Astrophysics Data System (ADS)

    Godano, M.; Regnier, M.; Deschamps, A.; Bardainne, T.

    2009-04-01

    In recent years, the feasibility of CO2 storage in geological reservoirs has been carefully investigated. Monitoring the seismicity (natural or induced by gas injection) in the reservoir area is crucial for safety. The locations of seismic events provide an image of the active structures, which can be potential leakage paths. The focal mechanism is another important seismic attribute, providing direct information about rock fracturing and indirect information about the state of stress in the reservoir. We address the problem of focal mechanism determination for micro-earthquakes induced in reservoirs, with a potential application to CO2 storage sites. We developed a non-linear inversion method based on the amplitudes of direct P, SV and SH waves. To solve the inverse problem, we implemented our own simulated annealing algorithm. Our method determines the fault plane solution (strike, dip and rake of the fault plane) under a double-couple source assumption and, more generally, the full moment tensor under a non-purely-shear source assumption. We also sought to quantify the uncertainty associated with the obtained focal mechanisms, identifying three sources of uncertainty: the first is related to the convergence of the inversion, the second to the amplitude picking error caused by the noise level, and the third to the event location uncertainty. We performed a series of tests on synthetic data generated in a reservoir configuration in order to validate our inversion method.
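
    A generic simulated-annealing search over (strike, dip, rake) of the kind described can be sketched as follows; the misfit callable, step sizes and cooling schedule are illustrative assumptions, not the authors' algorithm.

      import numpy as np

      def anneal_focal_mechanism(misfit, n_iter=20000, t0=1.0, cooling=0.9995, rng=None):
          """Simulated annealing over (strike, dip, rake) in degrees for a double-couple
          source; misfit compares predicted and observed P/SV/SH amplitudes."""
          rng = np.random.default_rng(rng)
          x = np.array([rng.uniform(0, 360), rng.uniform(0, 90), rng.uniform(-180, 180)])
          fx, T = misfit(x), t0
          best, fbest = x.copy(), fx
          for _ in range(n_iter):
              y = x + rng.normal(scale=[10.0, 5.0, 10.0])          # random perturbation
              y[0] %= 360.0                                         # strike wraps around
              y[1] = np.clip(y[1], 0.0, 90.0)                       # dip stays bounded
              y[2] = (y[2] + 180.0) % 360.0 - 180.0                 # rake wraps around
              fy = misfit(y)
              if fy < fx or rng.uniform() < np.exp((fx - fy) / T):  # Metropolis acceptance
                  x, fx = y, fy
                  if fx < fbest:
                      best, fbest = x.copy(), fx
              T *= cooling
          return best, fbest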

  3. Reconstructing Images in Astrophysics, an Inverse Problem Point of View

    NASA Astrophysics Data System (ADS)

    Theys, Céline; Aime, Claude

    2016-04-01

    After a short introduction, a first section provides a brief tutorial on the physics of image formation and its detection in the presence of noise. The rest of the chapter focuses on the resolution of the inverse problem. In the general form, the observed image is given by a Fredholm integral containing the object and the response of the instrument. Its inversion is formulated using linear algebra. The discretized object and image of size N × N are stored in vectors x and y of length N^2. They are related to one another by the linear relation y = H x, where H is a matrix of size N^2 × N^2 that contains the elements of the instrument response. This matrix presents particular properties for a shift-invariant point spread function, for which the Fredholm integral reduces to a convolution relation. The presence of noise complicates the resolution of the problem. It is shown that minimum variance unbiased solutions fail to give good results because H is badly conditioned, leading to the need for a regularized solution. The relative strength of regularization versus fidelity to the data is discussed and briefly illustrated on an example using L-curves. The origins and construction of iterative algorithms are explained, and illustrations are given for the algorithms ISRA, for Gaussian additive noise, and Richardson-Lucy, for a pure photodetected image (Poisson statistics). In this latter case, the way the algorithm modifies the spatial frequencies of the reconstructed image is illustrated for a diluted array of apertures in space. Throughout the chapter, the inverse problem is formulated in matrix form for the general case of the Fredholm integral, while numerical illustrations are limited to the deconvolution case, allowing the use of discrete Fourier transforms, because of computer limitations.
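
    For the Poisson-noise case mentioned here, the Richardson-Lucy iteration for a shift-invariant point spread function can be sketched in a few lines; this is a standard textbook form, not the chapter's own code.

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(image, psf, n_iter=50):
          """Richardson-Lucy iteration for a photon-noise-limited (Poisson) image,
          assuming a shift-invariant point spread function so the Fredholm integral
          reduces to a convolution."""
          estimate = np.full_like(image, image.mean(), dtype=float)
          psf_flip = psf[::-1, ::-1]                      # adjoint of the blur operator
          for _ in range(n_iter):
              blurred = fftconvolve(estimate, psf, mode="same")
              ratio = image / np.maximum(blurred, 1e-12)  # avoid division by zero
              estimate *= fftconvolve(ratio, psf_flip, mode="same")
          return estimate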

  4. On the solution of the generalized wave and generalized sine-Gordon equations

    NASA Technical Reports Server (NTRS)

    Ablowitz, M. J.; Beals, R.; Tenenblat, K.

    1986-01-01

    The generalized wave equation and generalized sine-Gordon equations are known to be natural multidimensional differential geometric generalizations of the classical two-dimensional versions. In this paper, a system of linear differential equations is associated with these equations, and it is shown how the direct and inverse problems can be solved for appropriately decaying data on suitable lines. An initial-boundary value problem is solved for these equations.

  5. Towards the Early Detection of Breast Cancer in Young Women

    DTIC Science & Technology

    2005-10-01

    Shiina, T. and Tranquart, F. Progress in Freehand Elastography of the Breast. IEICE Transactions on Information and Systems, E85D(1):5-14, 2002. ... solution of the non-linear inverse elasticity problem ... Liew, H. L. and Pinsky, P. M. Recovery of shear modulus in elastography using an adjoint method.

  6. An approximation theory for the identification of nonlinear distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Reich, Simeon; Rosen, I. G.

    1988-01-01

    An abstract approximation framework for the identification of nonlinear distributed parameter systems is developed. Inverse problems for nonlinear systems governed by strongly maximal monotone operators (satisfying a mild continuous dependence condition with respect to the unknown parameters to be identified) are treated. Convergence of Galerkin approximations and the corresponding solutions of finite dimensional approximating identification problems to a solution of the original infinite dimensional identification problem is demonstrated using the theory of nonlinear evolution systems and a nonlinear analog of the Trotter-Kato approximation result for semigroups of bounded linear operators. The nonlinear theory developed here is shown to subsume an existing linear theory as a special case. It is also shown to be applicable to a broad class of nonlinear elliptic operators and the corresponding nonlinear parabolic partial differential equations to which they lead. An application of the theory to a quasilinear model for heat conduction or mass transfer is discussed.

  7. Solution of Linearized Drift Kinetic Equations in Neoclassical Transport Theory by the Method of Matched Asymptotic Expansions

    NASA Astrophysics Data System (ADS)

    Wong, S. K.; Chan, V. S.; Hinton, F. L.

    2001-10-01

    The classic solution of the linearized drift kinetic equations in neoclassical transport theory for large-aspect-ratio tokamak flux-surfaces relies on the variational principle and the choice of "localized" distribution functions as trial functions (M.N. Rosenbluth, et al., Phys. Fluids 15 (1972) 116). Somewhat unclear in this approach are the nature and the origin of the "localization" and whether the results obtained represent the exact leading terms in an asymptotic expansion in the inverse aspect ratio. Using the method of matched asymptotic expansions, we were able to derive the leading approximations to the distribution functions and demonstrated the asymptotic exactness of the existing results. The method is also applied to the calculation of angular momentum transport (M.N. Rosenbluth, et al., Plasma Phys. and Contr. Nucl. Fusion Research, 1970, Vol. 1 (IAEA, Vienna, 1971) p. 495) and the current driven by electron cyclotron waves.

  8. Hypergeometric solutions to some nonhomogeneous equations of fractional order

    NASA Astrophysics Data System (ADS)

    Olivares, Jorge; Martin, Pablo; Maass, Fernando

    2017-12-01

    In this paper a study is performed of the solution of the linear non-homogeneous fractional differential equation of order alpha whose right-hand side equals I_0(x), where I_0(x) is the modified Bessel function of order zero, with initial condition f(0)=0 and 0 < alpha < 1. The Caputo definition of the fractional derivative is considered. Fractional derivatives have become important in physical and chemical phenomena such as visco-elasticity and visco-plasticity, anomalous diffusion and electric circuits. In particular, in this work the values alpha=1/2, 1/4 and 3/4 are explicitly considered. In these cases the Laplace transform is applied, and then the inverse Laplace transform leads to the solutions of the differential equation, which become hypergeometric functions.

  9. Cubic spline anchored grid pattern algorithm for high-resolution detection of subsurface cavities by the IR-CAT method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kassab, A.J.; Pollard, J.E.

    An algorithm is presented for the high-resolution detection of irregular-shaped subsurface cavities within irregular-shaped bodies by the IR-CAT method. The theoretical basis of the algorithm is rooted in the solution of an inverse geometric steady-state heat conduction problem. A Cauchy boundary condition is prescribed at the exposed surface, and the inverse geometric heat conduction problem is formulated by specifying the thermal condition at the inner cavity walls, whose unknown geometries are to be detected. The location of the inner cavities is initially estimated, and the domain boundaries are discretized. Linear boundary elements are used in conjunction with cubic splines for high resolution of the cavity walls. An anchored grid pattern (AGP) is established to constrain the cubic spline knots that control the inner cavity geometry to evolve along the AGP at each iterative step. A residual is defined measuring the difference between imposed and computed boundary conditions. A Newton-Raphson method with a Broyden update is used to automate the detection of inner cavity walls. During the iterative procedure, the movement of the inner cavity walls is restricted to physically realistic intermediate solutions. Numerical simulation demonstrates the superior resolution of the cubic spline AGP algorithm over the linear spline-based AGP in the detection of an irregular-shaped cavity. Numerical simulation is also used to test the sensitivity of the linear and cubic spline AGP algorithms by simulating bias and random error in measured surface temperature. The proposed AGP algorithm is shown to satisfactorily detect cavities with these simulated data.

  10. Analytic reconstruction of magnetic resonance imaging signal obtained from a periodic encoding field.

    PubMed

    Rybicki, F J; Hrovat, M I; Patz, S

    2000-09-01

    We have proposed a two-dimensional PERiodic-Linear (PERL) magnetic encoding field geometry B(x,y) = g_y y cos(q_x x) and a magnetic resonance imaging pulse sequence which incorporates two fields to image a two-dimensional spin density: a standard linear gradient in the x dimension, and the PERL field. Because of its periodicity, the PERL field produces a signal where the phase of the two dimensions is functionally different. The x dimension is encoded linearly, but the y dimension appears as the argument of a sinusoidal phase term. Thus, the time-domain signal and image spin density are not related by a two-dimensional Fourier transform. They are related by a one-dimensional Fourier transform in the x dimension and a new Bessel function integral transform (the PERL transform) in the y dimension. The inverse of the PERL transform provides a reconstruction algorithm for the y dimension of the spin density from the signal space. To date, the inverse transform has been computed numerically by a Bessel function expansion over its basis functions. This numerical solution used a finite sum to approximate an infinite summation and thus introduced a truncation error. This work analytically determines the basis functions for the PERL transform and incorporates them into the reconstruction algorithm. The improved algorithm is demonstrated by (1) direct comparison between the numerically and analytically computed basis functions, and (2) reconstruction of a known spin density. The new solution for the basis functions also provides a proof of the system function for the PERL transform under specific conditions.

  11. Flexible polyelectrolyte chain in a strong electrolyte solution: Insight into equilibrium properties and force-extension behavior from mesoscale simulation

    NASA Astrophysics Data System (ADS)

    Malekzadeh Moghani, Mahdy; Khomami, Bamin

    2016-01-01

    Macromolecules with ionizable groups are ubiquitous in biological and synthetic systems. Due to the complex interaction between chain and electrostatic decorrelation lengths, both equilibrium properties and micro-mechanical response of dilute solutions of polyelectrolytes (PEs) are more complex than their neutral counterparts. In this work, the bead-rod micromechanical description of a chain is used to perform high-fidelity Brownian dynamics simulations of dilute PE solutions to ascertain the self-similar equilibrium behavior of PE chains with various linear charge densities, the scaling of the Kuhn step length (lE) with salt concentration cs, and the force-extension behavior of the PE chain. In accord with earlier theoretical predictions, our results indicate that for a chain with n Kuhn segments, lE ~ cs^(-0.5) as the linear charge density approaches 1/n. Moreover, the constant force ensemble simulation results accurately predict the initial non-linear force-extension region of the PE chain recently measured via single-chain experiments. Finally, inspired by Cohen's extraction of Warner's force law from the inverse Langevin force law, a novel numerical scheme is developed to extract a new elastic force law for real chains from our discrete set of force-extension data, similar to a Padé expansion, which accurately depicts the initial non-linear region where the total Kuhn length is less than the thermal screening length.

  12. Flexible polyelectrolyte chain in a strong electrolyte solution: Insight into equilibrium properties and force-extension behavior from mesoscale simulation.

    PubMed

    Malekzadeh Moghani, Mahdy; Khomami, Bamin

    2016-01-14

    Macromolecules with ionizable groups are ubiquitous in biological and synthetic systems. Due to the complex interaction between chain and electrostatic decorrelation lengths, both equilibrium properties and micro-mechanical response of dilute solutions of polyelectrolytes (PEs) are more complex than their neutral counterparts. In this work, the bead-rod micromechanical description of a chain is used to perform high-fidelity Brownian dynamics simulations of dilute PE solutions to ascertain the self-similar equilibrium behavior of PE chains with various linear charge densities, the scaling of the Kuhn step length (lE) with salt concentration cs, and the force-extension behavior of the PE chain. In accord with earlier theoretical predictions, our results indicate that for a chain with n Kuhn segments, lE ∼ cs^(-0.5) as the linear charge density approaches 1/n. Moreover, the constant force ensemble simulation results accurately predict the initial non-linear force-extension region of the PE chain recently measured via single-chain experiments. Finally, inspired by Cohen's extraction of Warner's force law from the inverse Langevin force law, a novel numerical scheme is developed to extract a new elastic force law for real chains from our discrete set of force-extension data, similar to a Padé expansion, which accurately depicts the initial non-linear region where the total Kuhn length is less than the thermal screening length.

  13. Imaging the complex geometry of a magma reservoir using FEM-based linear inverse modeling of InSAR data: application to Rabaul Caldera, Papua New Guinea

    NASA Astrophysics Data System (ADS)

    Ronchin, Erika; Masterlark, Timothy; Dawson, John; Saunders, Steve; Martí Molist, Joan

    2017-06-01

    We test an innovative inversion scheme using Green's functions from an array of pressure sources embedded in finite-element method (FEM) models to image, without assuming an a-priori geometry, the composite and complex shape of a volcano deformation source. We invert interferometric synthetic aperture radar (InSAR) data to estimate the pressurization and shape of the magma reservoir of Rabaul caldera, Papua New Guinea. The results image the extended shallow magmatic system responsible for a broad and long-term subsidence of the caldera between 2007 February and 2010 December. Elastic FEM solutions are integrated into the regularized linear inversion of InSAR data of volcano surface displacements in order to obtain a 3-D image of the source of deformation. The Green's function matrix is constructed from a library of forward line-of-sight displacement solutions for a grid of cubic elementary deformation sources. Each source is sequentially generated by removing the corresponding cubic elements from a common meshed domain and simulating the injection of a fluid mass flux into the cavity, which results in a pressurization and volumetric change of the fluid-filled cavity. The use of a single mesh for the generation of all FEM models avoids the computationally expensive process of non-linear inversion and remeshing a variable geometry domain. Without assuming an a-priori source geometry other than the configuration of the 3-D grid that generates the library of Green's functions, the geodetic data dictate the geometry of the magma reservoir as a 3-D distribution of pressure (or flux of magma) within the source array. The inversion of InSAR data of Rabaul caldera shows a distribution of interconnected sources forming an amorphous, shallow magmatic system elongated under two opposite sides of the caldera. The marginal areas at the sides of the imaged magmatic system are the possible feeding reservoirs of the ongoing Tavurvur volcano eruption of andesitic products on the east side and of the past Vulcan volcano eruptions of more evolved materials on the west side. The interconnection and spatial distributions of sources correspond to the petrography of the volcanic products described in the literature and to the dynamics of the single and twin eruptions that characterize the caldera. The ability to image the complex geometry of deformation sources in both space and time can improve our ability to monitor active volcanoes, widen our understanding of the dynamics of active volcanic systems and improve the predictions of eruptions.
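
    The linear-algebra core of such a scheme can be sketched generically: assemble the Green's function matrix column by column from precomputed unit-pressurization FEM runs, then solve a regularized least-squares problem for the pressure change of every elementary source. fem_los_response, the smoothing operator L and the damping parameter lam are placeholders, not the authors' implementation.

      import numpy as np

      def assemble_greens_matrix(sources, fem_los_response):
          """Each column is the line-of-sight surface displacement predicted by the FEM for a
          unit pressurization of one elementary cubic source; fem_los_response stands in for
          the (expensive, precomputed) forward FEM run."""
          return np.column_stack([fem_los_response(src) for src in sources])

      def invert_pressures(G, d_insar, lam, L):
          """Regularized linear inversion d_insar ~ G p for the pressure change of every
          elementary source, written as an augmented least-squares (Tikhonov) system."""
          A = np.vstack([G, lam * L])
          b = np.concatenate([d_insar, np.zeros(L.shape[0])])
          p, *_ = np.linalg.lstsq(A, b, rcond=None)
          return p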

  14. Assessing non-uniqueness: An algebraic approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasco, Don W.

    Geophysical inverse problems are endowed with a rich mathematical structure. When discretized, most differential and integral equations of interest are algebraic (polynomial) in form. Techniques from algebraic geometry and computational algebra provide a means to address questions of existence and uniqueness for both linear and non-linear inverse problems. In a sense, the methods extend ideas which have proven fruitful in treating linear inverse problems.

  15. Bayesian linearized amplitude-versus-frequency inversion for quality factor and its application

    NASA Astrophysics Data System (ADS)

    Yang, Xinchao; Teng, Long; Li, Jingnan; Cheng, Jiubing

    2018-06-01

    We propose a straightforward attenuation inversion method by utilizing the amplitude-versus-frequency (AVF) characteristics of seismic data. A new linearized approximation equation of the angle- and frequency-dependent reflectivity in viscoelastic media is derived. We then use the presented equation to implement the Bayesian linear AVF inversion. The inversion result includes not only P-wave and S-wave velocities and densities, but also P-wave and S-wave quality factors. Synthetic tests show that the AVF inversion surpasses the AVA inversion for quality factor estimation. However, a higher signal-to-noise ratio (SNR) of the data is necessary for the AVF inversion. To show its feasibility, we apply both the new Bayesian AVF inversion and conventional AVA inversion to tight-gas reservoir data from the Sichuan Basin in China. Considering the SNR of the field data, a combination of AVF inversion for attenuation parameters and AVA inversion for elastic parameters is recommended. The result reveals that attenuation estimations could serve as a useful complement in combination with the AVA inversion results for the detection of tight gas reservoirs.
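
    For any such linearized reflectivity relation d = G m + e with Gaussian noise and prior, the Bayesian estimate has a closed form. The sketch below shows that generic posterior computation in Python; the operator G, the covariances Cd and Cm, and all names are illustrative assumptions, not the authors' AVF implementation.

    import numpy as np

    def bayesian_linear_inversion(G, d, m_prior, Cd, Cm):
        """Posterior mean/covariance for d = G m + e, e ~ N(0, Cd), m ~ N(m_prior, Cm)."""
        Cd_inv = np.linalg.inv(Cd)
        C_post = np.linalg.inv(G.T @ Cd_inv @ G + np.linalg.inv(Cm))   # posterior covariance
        m_post = m_prior + C_post @ G.T @ Cd_inv @ (d - G @ m_prior)   # posterior mean
        return m_post, C_post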

  16. Gravimetric control of active volcanic processes

    NASA Astrophysics Data System (ADS)

    Saltogianni, Vasso; Stiros, Stathis

    2017-04-01

    Volcanic activity includes phases of magma chamber inflation and deflation, produced by movement of magma and/or hydrothermal processes. Such effects usually leave their imprint as deformation of the ground surface, which can be recorded by GNSS and other methods, and they can be modeled as elastic deformation processes, with deformation produced by volcanic masses of finite dimensions such as spheres, ellipsoids and parallelepipeds. Such volumes are modeled on the basis of inversion (non-linear, numerical solution) of systems of equations relating the unknown dimensions and location of magma sources to observations, currently mostly GNSS and InSAR data. Inversion techniques depend on the misfit between model predictions and observations, but because the systems of equations are highly non-linear, and because the adopted models for the geometry of magma sources are simple, non-unique solutions can be derived, constrained by local extrema. Assessment of derived magma models can be provided by independent observations and models, such as micro-seismicity distribution and changes in geophysical parameters. In the simplest case magmatic intrusions can be modeled as spheres with diameters of at least a few tens of meters at a depth of a few kilometers; hence they are expected to have a gravimetric signature at permanent recording stations on the ground surface, while larger intrusions may also leave an imprint in sensors in orbit around the Earth or along precisely defined air paths. Identification of such gravimetric signals and separation of the "true" signal from measurement and ambient noise requires fine forward modeling of the wider area based on realistic simulation of the ambient gravimetric field, and then modeling of its possible distortion by magmatic anomalies. Such results are useful for removing ambiguities in inverse modeling of ground deformation, and also for detecting magmatic anomalies offshore.
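
    As a rough order-of-magnitude check of whether a buried magma body of the sizes quoted above would be visible to a surface gravimeter, the textbook point-mass (buried sphere) formula suffices. The sketch below is that standard formula only, not the fine forward modeling advocated in the abstract; all parameter values are illustrative.

    import numpy as np

    G_CONST = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def sphere_gravity_anomaly(x, y, depth, radius, delta_rho):
        """Vertical gravity anomaly (m/s^2) of a buried sphere at horizontal offset (x, y).

        The sphere is treated as a point mass at depth `depth` with density contrast
        `delta_rho` (kg/m^3), the first-order signature a permanent gravimeter would
        need to resolve from measurement and ambient noise.
        """
        dM = 4.0 / 3.0 * np.pi * radius**3 * delta_rho   # excess mass of the intrusion
        r2 = x**2 + y**2 + depth**2
        return G_CONST * dM * depth / r2**1.5            # vertical component g_z

    # A 200 m radius body, 3 km deep, +300 kg/m^3 contrast, observed directly above
    # (conversion 1e8 gives microGal), purely illustrative numbers:
    print(sphere_gravity_anomaly(0.0, 0.0, 3000.0, 200.0, 300.0) * 1e8, "microGal")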

  17. Incomplete Sparse Approximate Inverses for Parallel Preconditioning

    DOE PAGES

    Anzt, Hartwig; Huckle, Thomas K.; Bräckle, Jürgen; ...

    2017-10-28

    In this study, we propose a new preconditioning method that can be seen as a generalization of block-Jacobi methods, or as a simplification of the sparse approximate inverse (SAI) preconditioners. The “Incomplete Sparse Approximate Inverses” (ISAI) is in particular efficient in the solution of sparse triangular linear systems of equations. Those arise, for example, in the context of incomplete factorization preconditioning. ISAI preconditioners can be generated via an algorithm providing fine-grained parallelism, which makes them attractive for hardware with a high concurrency level. Finally, in a study covering a large number of matrices, we identify the ISAI preconditioner as an attractive alternative to exact triangular solves in the context of incomplete factorization preconditioning.
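
    The general sparse-approximate-inverse idea that ISAI simplifies can be stated in a few lines: each column of the approximate inverse is fitted independently on a prescribed sparsity pattern, which is why the construction exposes fine-grained parallelism. The sketch below is the generic least-squares SAI variant (the ISAI of the paper replaces these fits by smaller exact solves); names and the dense storage are illustrative, not the authors' algorithm.

    import numpy as np

    def sparse_approximate_inverse(A, pattern):
        """Column-wise approximate inverse M ~ A^{-1} restricted to a sparsity pattern.

        A       : (n, n) array (dense here for illustration; real codes use sparse storage)
        pattern : list of index arrays; pattern[k] are the rows allowed to be nonzero
                  in column k of M.
        Each column solves min ||A[:, J] m_J - e_k||_2 independently, so all columns
        can be computed in parallel.
        """
        n = A.shape[0]
        M = np.zeros((n, n))
        for k in range(n):
            J = np.asarray(pattern[k])
            e_k = np.zeros(n)
            e_k[k] = 1.0
            m_J, *_ = np.linalg.lstsq(A[:, J], e_k, rcond=None)
            M[J, k] = m_J
        return M

    # A common default pattern is the sparsity pattern of A itself:
    # pattern = [np.nonzero(A[:, k])[0] for k in range(A.shape[1])]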

  18. A Strassen-Newton algorithm for high-speed parallelizable matrix inversion

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Ferguson, Helaman R. P.

    1988-01-01

    Techniques are described for computing matrix inverses by algorithms that are highly suited to massively parallel computation. The techniques are based on an algorithm suggested by Strassen (1969). Variations of this scheme use matrix Newton iterations and other methods to improve the numerical stability while at the same time preserving a very high level of parallelism. One-processor Cray-2 implementations of these schemes range from one that is up to 55 percent faster than a conventional library routine to one that is slower than a library routine but achieves excellent numerical stability. The problem of computing the solution to a single set of linear equations is discussed, and it is shown that this problem can also be solved efficiently using these techniques.
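
    The matrix Newton iteration referred to here (often called Newton-Schulz) is short enough to state directly: every step consists of two matrix multiplications, which is what makes it compatible with Strassen-type multiplication and massive parallelism. A minimal dense sketch with a classical safe starting guess follows; it is illustrative only and ignores the blocking and stability refinements discussed in the paper.

    import numpy as np

    def newton_inverse(A, iters=30):
        """Newton(-Schulz) iteration X_{k+1} = X_k (2 I - A X_k) converging to A^{-1}.

        Converges quadratically once ||I - A X_0|| < 1; the classical safe start is
        X_0 = A^T / (||A||_1 ||A||_inf).  Each step is two matrix products.
        """
        n = A.shape[0]
        X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
        I = np.eye(n)
        for _ in range(iters):
            X = X @ (2.0 * I - A @ X)
        return X

    # Quick check on a well-conditioned test matrix (illustrative only):
    A = np.random.rand(50, 50) + 50.0 * np.eye(50)
    print(np.linalg.norm(newton_inverse(A) @ A - np.eye(50)))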

  19. On the use of the Reciprocity Gap Functional in inverse scattering with near-field data: An application to mammography

    NASA Astrophysics Data System (ADS)

    Delbary, Fabrice; Aramini, Riccardo; Bozza, Giovanni; Brignone, Massimo; Piana, Michele

    2008-11-01

    Microwave tomography is a non-invasive approach to the early diagnosis of breast cancer. However, the problem of visualizing tumors from diffracted microwaves is a difficult nonlinear ill-posed inverse scattering problem. We propose a qualitative approach to the solution of such a problem, whereby the shape and location of cancerous tissues can be detected by means of a combination of the Reciprocity Gap Functional method and the Linear Sampling method. We validate this approach against synthetic near-fields produced by a finite element method for boundary integral equations, where the breast is mimicked by the axial view of two nested cylinders, the external one representing the skin and the internal one representing the fat tissue.

  20. Multi-Maneuver Clohessy-Wiltshire Targeting

    NASA Technical Reports Server (NTRS)

    Dannemiller, David P.

    2011-01-01

    Orbital rendezvous involves execution of a sequence of maneuvers by a chaser vehicle to bring the chaser to a desired state relative to a target vehicle while meeting intermediate and final relative constraints. Intermediate and final relative constraints are necessary to meet a multitude of requirements such as to control approach direction, ensure relative position is adequate for operation of space-to-space communication systems and relative sensors, provide fail-safe trajectory features, and provide contingency hold points. The effect of maneuvers on constraints is often coupled, so the maneuvers must be solved for as a set. For example, maneuvers that affect orbital energy change both the chaser's height and downrange position relative to the target vehicle. Rendezvous designers use experience and rules-of-thumb to design a sequence of maneuvers and constraints. A non-iterative method is presented for targeting a rendezvous scenario that includes a sequence of maneuvers and relative constraints. This method is referred to as Multi-Maneuver Clohessy-Wiltshire Targeting (MM_CW_TGT). When a single maneuver is targeted to a single relative position, the classic CW targeting solution is obtained. The MM_CW_TGT method involves manipulation of the CW state transition matrix to form a linear system. As a starting point for forming the algorithm, the effects of a series of impulsive maneuvers on the state are derived. Simple and moderately complex examples are used to demonstrate the pattern of the resulting linear system. The general form of the pattern results in an algorithm for formation of the linear system. The resulting linear system relates the effect of maneuver components and initial conditions on relative constraints specified by the rendezvous designer. Solution of the linear system involves the straightforward inversion of a square matrix. Inversion of the square matrix is assured if the designer poses a controllable scenario - a scenario where the constraints can be met by the sequence of maneuvers. Matrices in the linear system are dependent on selection of maneuvers and constraints by the designer, but the matrices are independent of the chaser's initial conditions. For scenarios where the sequence of maneuvers and constraints are fixed, the linear system can be formed and the square matrix inverted prior to real-time operations. Example solutions are presented for several rendezvous scenarios to illustrate the utility of the method. The MM_CW_TGT method has been used during the preliminary design of rendezvous scenarios and is expected to be useful for iterative methods in the generation of an initial guess and corrections.
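
    The single-maneuver, single-constraint building block that the multi-maneuver formulation generalizes is classic Clohessy-Wiltshire targeting: propagate the relative state with the CW state transition matrix and solve a small linear system for the burn. A minimal sketch follows, assuming a circular target orbit and a radial/along-track/cross-track frame; names and the example numbers are illustrative, and this is not the MM_CW_TGT algorithm itself.

    import numpy as np

    def cw_stm(n, t):
        """Position rows of the Clohessy-Wiltshire state transition matrix."""
        s, c = np.sin(n * t), np.cos(n * t)
        Prr = np.array([[4 - 3 * c,       0.0, 0.0],
                        [6 * (s - n * t), 1.0, 0.0],
                        [0.0,             0.0, c]])
        Prv = np.array([[s / n,           2 * (1 - c) / n,         0.0],
                        [2 * (c - 1) / n, (4 * s - 3 * n * t) / n, 0.0],
                        [0.0,             0.0,                     s / n]])
        return Prr, Prv

    def cw_target(r0, v0_minus, r_target, n, t):
        """Impulsive delta-v at t=0 so the chaser reaches r_target after time t."""
        Prr, Prv = cw_stm(n, t)
        v0_plus = np.linalg.solve(Prv, r_target - Prr @ r0)   # required departure velocity
        return v0_plus - v0_minus

    # Quarter-orbit transfer in a near-ISS orbit (mean motion ~0.00113 rad/s), illustrative:
    # dv = cw_target(np.array([0.0, -1000.0, 0.0]), np.zeros(3),
    #                np.array([0.0, -100.0, 0.0]), 0.00113, 0.25 * 2 * np.pi / 0.00113)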

  1. On certain families of rational functions arising in dynamics

    NASA Technical Reports Server (NTRS)

    Byrnes, C. I.

    1979-01-01

    It is noted that linear systems, depending on parameters, can occur in diverse situations including families of rational solutions to the Korteweg-de Vries equation or to the finite Toda lattice. The inverse scattering method used by Moser (1975) to obtain canonical coordinates for the finite homogeneous Toda lattice can be used for the synthesis of RC networks. It is concluded that the multivariable RC setting is ideal for the analysis of the periodic Toda lattice.

  2. Determination of elastic moduli from measured acoustic velocities.

    PubMed

    Brown, J Michael

    2018-06-01

    Methods are evaluated in solution of the inverse problem associated with determination of elastic moduli for crystals of arbitrary symmetry from elastic wave velocities measured in many crystallographic directions. A package of MATLAB functions provides a robust and flexible environment for analysis of ultrasonic, Brillouin, or Impulsive Stimulated Light Scattering datasets. Three inverse algorithms are considered: the gradient-based methods of Levenberg-Marquardt and Backus-Gilbert, and a non-gradient-based (Nelder-Mead) simplex approach. Several data types are considered: body wave velocities alone, surface wave velocities plus a side constraint on X-ray-diffraction-based axes compressibilities, or joint body and surface wave velocities. The numerical algorithms are validated through comparisons with prior published results and through analysis of synthetic datasets. Although all approaches succeed in finding low-misfit solutions, the Levenberg-Marquardt method consistently demonstrates effectiveness and computational efficiency. However, linearized gradient-based methods, when applied to a strongly non-linear problem, may not adequately converge to the global minimum. The simplex method, while slower, is less susceptible to being trapped in local misfit minima. A "multi-start" strategy (initiate searches from more than one initial guess) provides better assurance that global minima have been located. Numerical estimates of parameter uncertainties based on Monte Carlo simulations are compared to formal uncertainties based on covariance calculations. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Inverse scattering transform for the KPI equation on the background of a one-line soliton

    NASA Astrophysics Data System (ADS)

    Fokas, A. S.; Pogrebkov, A. K.

    2003-03-01

    We study the initial value problem of the Kadomtsev-Petviashvili I (KPI) equation with initial data u(x1,x2,0) = u1(x1)+u2(x1,x2), where u1(x1) is the one-soliton solution of the Korteweg-de Vries equation evaluated at zero time and u2(x1,x2) decays sufficiently rapidly on the (x1,x2)-plane. This involves the analysis of the nonstationary Schrödinger equation (with time replaced by x2) with potential u(x1,x2,0). We introduce an appropriate sectionally analytic eigenfunction in the complex k-plane where k is the spectral parameter. This eigenfunction has the novelty that in addition to the usual jump across the real k-axis, it also has a jump across a segment of the imaginary k-axis. We show that this eigenfunction can be reconstructed through a linear integral equation uniquely defined in terms of appropriate scattering data. In turn, these scattering data are uniquely constructed in terms of u1(x1) and u2(x1,x2). This result implies that the solution of the KPI equation can be obtained through the above linear integral equation where the scattering data have a simple t-dependence.

  4. Microwave tomography for GPR data processing in archaeology and cultural heritages diagnostics

    NASA Astrophysics Data System (ADS)

    Soldovieri, F.

    2009-04-01

    Ground penetrating radar (GPR) is one of the most practical and user-friendly instruments for detecting buried remains and performing diagnostics of archaeological structures with the aim of revealing hidden features (defects, voids, constructive typology, etc.). The GPR technique allows measurements to be performed over large areas very quickly thanks to portable instrumentation. Despite the widespread exploitation of GPR as a data acquisition system, many difficulties arise in processing GPR data so as to obtain images that are reliable and easily interpretable by end-users. This difficulty is exacerbated when no a priori information is available, as arises, for example, in the case of historical heritage structures for which knowledge of the constructive techniques and materials may be completely missing. A possible answer to the above difficulties resides in the development and exploitation of microwave tomography algorithms [1, 2], based on more refined electromagnetic scattering models than the ones usually adopted in the classic radar approach. By exploiting the microwave tomographic approach, it is possible to obtain accurate and reliable "images" of the investigated structure in order to detect, localize and possibly determine the extent and geometrical features of the embedded objects. In this framework, the adoption of simplified models of the electromagnetic scattering is very convenient for practical and theoretical reasons. First, linear inversion algorithms are numerically efficient, allowing domains that are large in terms of the probing wavelength to be investigated in quasi real time, also in the 3D case, by adopting schemes based on the combination of 2D reconstructions [3]. In addition, the solution approaches are very robust against uncertainties in the parameters of the measurement configuration and in the investigated scenario. From a theoretical point of view, the linear models offer further advantages: the absence of false solutions (an issue that arises in non-linear inverse problems); the exploitation of well-known regularization tools for achieving a stable solution of the problem; and the possibility of analyzing the reconstruction performance of the algorithm once the measurement configuration and the properties of the host medium are known. Here, we present the main features and the reconstruction results of a linear inversion algorithm based on the Born approximation in realistic applications in archaeology and cultural heritage diagnostics. The Born model is useful when penetrable objects are under investigation. As is well known, the Born approximation is used to solve the forward problem, that is, the determination of the scattered field from a known object under the hypothesis of a weak scatterer, i.e. an object whose dielectric permittivity is only slightly different from that of the host medium and whose extent is small in terms of the probing wavelength. For the inverse scattering problem, by contrast, the above hypotheses can be relaxed at the cost of renouncing a "quantitative reconstruction" of the object. In fact, as already shown by results in realistic conditions [4, 5], the adoption of a Born-model inversion scheme allows the object to be detected, localized and its geometry determined even when it is not a weak scatterer. [1] R. Persico, R. Bernini, F. 
Soldovieri, "The role of the measurement configuration in inverse scattering from buried objects under the Born approximation", IEEE Trans. Antennas and Propagation, vol. 53, no. 6, pp. 1875-1887, June 2005. [2] F. Soldovieri, J. Hugenschmidt, R. Persico and G. Leone, "A linear inverse scattering algorithm for realistic GPR applications", Near Surface Geophysics, vol. 5, no. 1, pp. 29-42, February 2007. [3] R. Solimene, F. Soldovieri, G. Prisco, R. Pierri, "Three-Dimensional Microwave Tomography by a 2-D Slice-Based Reconstruction Algorithm", IEEE Geoscience and Remote Sensing Letters, vol. 4, no. 4, pp. 556-560, Oct. 2007. [4] L. Orlando, F. Soldovieri, "Two different approaches for georadar data processing: a case study in archaeological prospecting", Journal of Applied Geophysics, vol. 64, pp. 1-13, March 2008. [5] F. Soldovieri, M. Bavusi, L. Crocco, S. Piscitelli, A. Giocoli, F. Vallianatos, S. Pantellis, A. Sarris, "A comparison between two GPR data processing techniques for fracture detection and characterization", Proc. of 70th EAGE Conference & Exhibition, Rome, Italy, 9-12 June 2008.

  5. FOREWORD: Tackling inverse problems in a Banach space environment: from theory to applications Tackling inverse problems in a Banach space environment: from theory to applications

    NASA Astrophysics Data System (ADS)

    Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara

    2012-10-01

    Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high-quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise. 
A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov-type and other regularization methods in Banach spaces is provided by variational-inequality assumptions that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actual sparsity-promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumptions on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically and highly relevant issue. 
It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically and highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.

  6. Double power series method for approximating cosmological perturbations

    NASA Astrophysics Data System (ADS)

    Wren, Andrew J.; Malik, Karim A.

    2017-04-01

    We introduce a double power series method for finding approximate analytical solutions for systems of differential equations commonly found in cosmological perturbation theory. The method was set out, in a noncosmological context, by Feshchenko, Shkil' and Nikolenko (FSN) in 1966, and is applicable to cases where perturbations are on subhorizon scales. The FSN method is essentially an extension of the well known Wentzel-Kramers-Brillouin (WKB) method for finding approximate analytical solutions for ordinary differential equations. The FSN method we use is applicable well beyond perturbation theory to solve systems of ordinary differential equations, linear in the derivatives, that also depend on a small parameter, which here we take to be related to the inverse wave-number. We use the FSN method to find new approximate oscillating solutions in linear order cosmological perturbation theory for a flat radiation-matter universe. Together with this model's well-known growing and decaying Mészáros solutions, these oscillating modes provide a complete set of subhorizon approximations for the metric potential, radiation and matter perturbations. Comparison with numerical solutions of the perturbation equations shows that our approximations can be made accurate to within a typical error of 1%, or better. We also set out a heuristic method for error estimation. A Mathematica notebook which implements the double power series method is made available online.

  7. A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications

    NASA Astrophysics Data System (ADS)

    Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.

    2018-04-01

    Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Application of smoothness-constrained regularized inversion to limited measurements may fail to detect resistivity anomalies and sharp interfaces separating hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative for reconstructing image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least-squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through a block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior point method. Applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least-squares method in recovering the model parameters with much fewer data, while preserving the sharp resistivity fronts separating geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS resulted in a more efficient (an increase in R2 from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and faster-converging (run time decreased by about 25%) solution.
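
    The objective being solved is the familiar l1-regularized least-squares problem. The paper solves the equivalent quadratic program with a primal-dual interior point method; the sketch below instead uses the simpler iterative soft-thresholding (ISTA) scheme to illustrate the same objective, with G understood as the forward operator already composed with the sparsifying (e.g. DCT) transform. All names are illustrative assumptions.

    import numpy as np

    def ista(G, d, lam, n_iter=500):
        """Minimize 0.5 ||G m - d||^2 + lam ||m||_1 by iterative soft thresholding.

        m holds coefficients in the sparsifying basis; G maps them to predicted data.
        """
        L = np.linalg.norm(G, 2) ** 2                    # Lipschitz constant of the gradient
        m = np.zeros(G.shape[1])
        for _ in range(n_iter):
            z = m - G.T @ (G @ m - d) / L                # gradient step on the misfit
            m = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold (prox of l1)
        return m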

  8. Real-time solution of linear computational problems using databases of parametric reduced-order models with arbitrary underlying meshes

    NASA Astrophysics Data System (ADS)

    Amsallem, David; Tezaur, Radek; Farhat, Charbel

    2016-12-01

    A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.

  9. The numerical study and comparison of radial basis functions in applications of the dual reciprocity boundary element method to convection-diffusion problems

    NASA Astrophysics Data System (ADS)

    Chanthawara, Krittidej; Kaennakham, Sayan; Toutip, Wattana

    2016-02-01

    The methodology of the Dual Reciprocity Boundary Element Method (DRBEM) is applied to convection-diffusion problems, and investigating its performance is the first objective of this work. Seven types of Radial Basis Functions (RBFs): Linear, Thin-plate Spline, Cubic, Compactly Supported, Inverse Multiquadric, Quadratic, and that proposed by [12], were closely investigated in order to numerically compare their effectiveness, drawbacks, etc., and this is taken as our second objective. A sufficient number of simulations were performed, covering as many aspects as possible. Validated against both exact solutions and other numerical works, the final results strongly imply that the Thin-plate Spline and Linear types of RBF are superior to the others in terms of both solution quality and CPU time spent, while the Inverse Multiquadric yields comparatively poor results. It is also found that DRBEM can perform relatively well at a moderate level of convective force and, as anticipated, becomes unstable when the problem becomes more convection-dominated, as normally found in all classical mesh-dependent methods.

  10. Inverse Modelling Problems in Linear Algebra Undergraduate Courses

    ERIC Educational Resources Information Center

    Martinez-Luaces, Victor E.

    2013-01-01

    This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…

  11. Expansion of a cold non-neutral plasma slab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karimov, A. R.; Department of Electrophysical Facilities, National Research Nuclear University MEPhI, Kashirskoye shosse 31, Moscow 115409; Yu, M. Y., E-mail: myyu@zju.edu.cn

    2014-12-15

    Expansion of the ion and electron fronts of a cold non-neutral plasma slab with a quasi-neutral core bounded by layers containing only ions is investigated analytically and exact solutions are obtained. It is found that on average, the plasma expansion time scales linearly with the initial inverse ion plasma frequency as well as the degree of charge imbalance, and no expansion occurs if the cold plasma slab is stationary and overall neutral. However, in both cases, there can exist prominent oscillations on the electron front.

  12. Robust Multiple Linear Regression.

    DTIC Science & Technology

    1982-12-01

    difficulty, but it might have more solutions corresponding to local minima. Influence Function of M-Estimates: The influence function describes the effect ... distribution function. In the case of M-estimates the influence function was found to be proportional to ψ(x, T(F)) ..., where the inverse of any distribution function F is defined in the usual way as F⁻¹(s) = inf{x : F(x) > s}, 0 < s < 1. Influence Function of L-Estimates: In a

  13. Scattering transform for nonstationary Schroedinger equation with bidimensionally perturbed N-soliton potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boiti, M.; Pempinelli, F.; Pogrebkov, A. K.

    2006-12-15

    In the framework of the extended resolvent approach the direct and inverse scattering problems for the nonstationary Schroedinger equation with a potential being a perturbation of the N-soliton potential by means of a generic bidimensional smooth function decaying at large spaces are introduced and investigated. The initial value problem of the Kadomtsev-Petviashvili I equation for a solution describing N wave solitons on a generic smooth decaying background is then linearized, giving the time evolution of the spectral data.

  14. Bloch equation and atom-field entanglement scenario in three-level systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Surajit; Nath, Mihir Ranjan; Dey, Tushar Kanti

    2011-09-23

    We study the exact solution of the lambda, vee and cascade types of three-level system, with a distinct Hamiltonian for each configuration expressed in the SU(3) basis. The semiclassical models are solved by solving the respective Bloch equations, and the existence of distinct non-linear constants, which differ between configurations, is discussed. Apart from proposing a qutrit wave function, the atom-field entanglement is studied for the quantized three-level systems using the Phoenix-Knight formalism, and the corresponding population inversions are compared.

  15. Time-domain induced polarization - an analysis of Cole-Cole parameter resolution and correlation using Markov Chain Monte Carlo inversion

    NASA Astrophysics Data System (ADS)

    Madsen, Line Meldgaard; Fiandaca, Gianluca; Auken, Esben; Christiansen, Anders Vest

    2017-12-01

    The application of time-domain induced polarization (TDIP) is increasing with advances in acquisition techniques, data processing and spectral inversion schemes. An inversion of TDIP data for the spectral Cole-Cole parameters is a non-linear problem, but by applying a 1-D Markov Chain Monte Carlo (MCMC) inversion algorithm, a full non-linear uncertainty analysis of the parameters and the parameter correlations can be accessed. This is essential to understand to what degree the spectral Cole-Cole parameters can be resolved from TDIP data. MCMC inversions of synthetic TDIP data, which show bell-shaped probability distributions with a single maximum, show that the Cole-Cole parameters can be resolved from TDIP data if an acquisition range above two decades in time is applied. Linear correlations between the Cole-Cole parameters are observed and by decreasing the acquisitions ranges, the correlations increase and become non-linear. It is further investigated how waveform and parameter values influence the resolution of the Cole-Cole parameters. A limiting factor is the value of the frequency exponent, C. As C decreases, the resolution of all the Cole-Cole parameters decreases and the results become increasingly non-linear. While the values of the time constant, τ, must be in the acquisition range to resolve the parameters well, the choice between a 50 per cent and a 100 per cent duty cycle for the current injection does not have an influence on the parameter resolution. The limits of resolution and linearity are also studied in a comparison between the MCMC and a linearized gradient-based inversion approach. The two methods are consistent for resolved models, but the linearized approach tends to underestimate the uncertainties for poorly resolved parameters due to the corresponding non-linear features. Finally, an MCMC inversion of 1-D field data verifies that spectral Cole-Cole parameters can also be resolved from TD field measurements.
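
    The sampling machinery behind such an analysis is a random-walk Metropolis scheme over the Cole-Cole parameters. The sketch below illustrates that machinery with a frequency-domain Cole-Cole complex-resistivity stand-in as the forward model (the paper works with full time-domain decays and a 1-D layered forward response); parameter bounds, priors, starting values and all names are illustrative assumptions.

    import numpy as np

    def cole_cole(freq, rho0, m, tau, c):
        """Frequency-domain Cole-Cole complex resistivity (stand-in forward model)."""
        iwt = (2j * np.pi * freq * tau) ** c
        return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + iwt)))

    def metropolis(d_obs, freq, sigma, n_steps=20000, step=0.02):
        """Random-walk Metropolis over log-parameters [rho0, m, tau, c].

        Priors and physical bounds (e.g. 0 < m < 1, 0 < c <= 1) are omitted for
        brevity; a real implementation enforces them in the acceptance step.
        """
        rng = np.random.default_rng(0)
        theta = np.log(np.array([100.0, 0.1, 0.01, 0.5]))     # illustrative starting model
        def loglike(th):
            r = np.abs(cole_cole(freq, *np.exp(th)) - d_obs) / sigma
            return -0.5 * np.sum(r ** 2)
        ll = loglike(theta)
        chain = np.empty((n_steps, 4))
        for i in range(n_steps):
            prop = theta + step * rng.standard_normal(4)
            ll_prop = loglike(prop)
            if np.log(rng.random()) < ll_prop - ll:            # Metropolis accept/reject
                theta, ll = prop, ll_prop
            chain[i] = np.exp(theta)
        return chain                                           # posterior samples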

  16. Strongly nonlinear composite dielectrics: A perturbation method for finding the potential field and bulk effective properties

    NASA Astrophysics Data System (ADS)

    Blumenfeld, Raphael; Bergman, David J.

    1991-10-01

    A class of strongly nonlinear composite dielectrics is studied. We develop a general method to reduce the scalar-potential-field problem to the solution of a set of linear Poisson-type equations in rescaled coordinates. The method is applicable to a large variety of nonlinear materials. For a power-law relation between the displacement and the electric fields, it is used to solve explicitly for the value of the bulk effective dielectric constant ɛe to second order in the fluctuations of its local value. A similar procedure for the vector potential, whose curl is the displacement field, yields a quantity analogous to the inverse dielectric constant in linear dielectrics. The bulk effective dielectric constant is given by a set of linear integral expressions in the rescaled coordinates and exact bounds for it are derived.

  17. A Non-linear Geodetic Data Inversion Using ABIC for Slip Distribution on a Fault With an Unknown Dip Angle

    NASA Astrophysics Data System (ADS)

    Fukahata, Y.; Wright, T. J.

    2006-12-01

    We developed a method of geodetic data inversion for slip distribution on a fault with an unknown dip angle. When fault geometry is unknown, the problem of geodetic data inversion is non-linear. A common strategy for obtaining slip distribution is to first determine the fault geometry by minimizing the square misfit under the assumption of a uniform slip on a rectangular fault, and then apply the usual linear inversion technique to estimate a slip distribution on the determined fault. It is not guaranteed, however, that the fault determined under the assumption of a uniform slip gives the best fault geometry for a spatially variable slip distribution. In addition, in obtaining a uniform slip fault model, we have to simultaneously determine the values of the nine mutually dependent parameters, which is a highly non-linear, complicated process. Although the inverse problem is non-linear for cases with unknown fault geometries, the non-linearity of the problems is actually weak when we can assume the fault surface to be flat. In particular, when a clear fault trace is observed on the Earth's surface after an earthquake, we can precisely estimate the strike and the location of the fault. In this case only the dip angle has large ambiguity. In geodetic data inversion we usually need to introduce smoothness constraints in order to compromise reciprocal requirements for model resolution and estimation errors in a natural way. Strictly speaking, the inverse problem with smoothness constraints is also non-linear, even if the fault geometry is known. The non-linearity has been resolved by introducing Akaike's Bayesian Information Criterion (ABIC), with which the optimal value of the relative weight of observed data to smoothness constraints is objectively determined. In this study, using ABIC in determining the optimal dip angle, we resolved the non-linearity of the inverse problem. We applied the method to the InSAR data of the 1995 Dinar, Turkey earthquake and obtained a much shallower dip angle than before.

  18. Effects of two-temperature parameter and thermal nonlocal parameter on transient responses of a half-space subjected to ramp-type heating

    NASA Astrophysics Data System (ADS)

    Xue, Zhang-Na; Yu, Ya-Jun; Tian, Xiao-Geng

    2017-07-01

    Based upon coupled thermoelasticity and the Green and Lindsay theory, new governing equations of two-temperature thermoelastic theory with a thermal nonlocal parameter are formulated. To more realistically model thermal loading of a half-space surface, a linear temperature ramping function is adopted. Laplace transform techniques are used to obtain the general analytical solutions in the Laplace domain, and inverse Laplace transforms based on Fourier expansion techniques are numerically implemented to obtain the solutions in the time domain. Specific attention is paid to the effects of the thermal nonlocal parameter, ramping time, and two-temperature parameter on the distributions of temperature, displacement and stress.

  19. Joint inversion of shear wave travel time residuals and geoid and depth anomalies for long-wavelength variations in upper mantle temperature and composition along the Mid-Atlantic Ridge

    NASA Technical Reports Server (NTRS)

    Sheehan, Anne F.; Solomon, Sean C.

    1991-01-01

    Measurements were carried out for SS-S differential travel time residuals for nearly 500 paths crossing the northern Mid-Atlantic Ridge, assuming that the residuals are dominated by contributions from the upper mantle near the surface bounce point of the reflected phase SS. Results indicate that the SS-S travel time residuals decrease linearly with the square root of age, to an age of 80-100 Ma, in general agreement with the plate cooling model. A joint inversion of travel time residuals and geoid and bathymetric anomalies was formulated for lateral variations in upper mantle temperature and composition. The preferred inversion solutions were found to have variations in upper mantle temperature along the Mid-Atlantic Ridge of about 100 K. It was calculated that, for a constant bulk composition, such a temperature variation would produce about a 7-km variation in crustal thickness, larger than is generally observed.

  20. Efficient preconditioning of the electronic structure problem in large scale ab initio molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiffmann, Florian; VandeVondele, Joost, E-mail: Joost.VandeVondele@mat.ethz.ch

    2015-06-28

    We present an improved preconditioning scheme for electronic structure calculations based on the orbital transformation method. First, a preconditioner is developed which includes information from the full Kohn-Sham matrix but avoids computationally demanding diagonalisation steps in its construction. This reduces the computational cost of its construction, eliminating a bottleneck in large scale simulations, while maintaining rapid convergence. In addition, a modified form of Hotelling’s iterative inversion is introduced to replace the exact inversion of the preconditioner matrix. This method is highly effective during molecular dynamics (MD), as the solution obtained in earlier MD steps is a suitable initial guess. Filtering small elements during sparse matrix multiplication leads to linear scaling inversion, while retaining robustness, already for relatively small systems. For system sizes ranging from a few hundred to a few thousand atoms, which are typical for many practical applications, the improvements to the algorithm lead to a 2-5 fold speedup per MD step.

  1. Simultaneous source and attenuation reconstruction in SPECT using ballistic and single scattering data

    NASA Astrophysics Data System (ADS)

    Courdurier, M.; Monard, F.; Osses, A.; Romero, F.

    2015-09-01

    In medical single-photon emission computed tomography (SPECT) imaging, we seek to simultaneously obtain the internal radioactive sources and the attenuation map using not only ballistic measurements but also first-order scattering measurements and assuming a very specific scattering regime. The problem is modeled using the radiative transfer equation by means of an explicit non-linear operator that gives the ballistic and scattering measurements as a function of the radioactive source and attenuation distributions. First, by differentiating this non-linear operator we obtain a linearized inverse problem. Then, under regularity hypotheses for the source distribution and attenuation map and considering small attenuations, we rigorously prove that the linear operator is invertible and we compute its inverse explicitly. This allows us to prove local uniqueness for the non-linear inverse problem. Finally, using the previous inversion result for the linear operator, we propose a new type of iterative algorithm for simultaneous source and attenuation recovery for SPECT based on the Neumann series and a Newton-Raphson algorithm.

  2. Estimation of splitting functions from Earth's normal mode spectra using the neighbourhood algorithm

    NASA Astrophysics Data System (ADS)

    Pachhai, Surya; Tkalčić, Hrvoje; Masters, Guy

    2016-01-01

    The inverse problem for Earth structure from normal mode data is strongly non-linear and can be inherently non-unique. Traditionally, the inversion is linearized by taking partial derivatives of the complex spectra with respect to the model parameters (i.e. structure coefficients), and solved in an iterative fashion. This method requires that the earthquake source model is known. However, the release of energy in large earthquakes used for the analysis of Earth's normal modes is not simple. A point source approximation is often inadequate, and a more complete account of energy release at the source is required. In addition, many earthquakes are required for the solution to be insensitive to the initial constraints and regularization. In contrast to an iterative approach, the autoregressive linear inversion technique conveniently avoids the need for earthquake source parameters, but it also requires a number of events to achieve full convergence when a single event does not excite all singlets well. To build on previous improvements, we develop a technique to estimate structure coefficients (and consequently, the splitting functions) using a derivative-free parameter search, known as the neighbourhood algorithm (NA). We implement an efficient forward method derived using the autoregression of receiver strips, and this allows us to search over a multiplicity of structure coefficients in a relatively short time. After demonstrating the feasibility of using NA in synthetic cases, we apply it to observations of the inner core sensitive mode 13S2. The splitting function of this mode is dominated by spherical harmonic degree 2 axisymmetric structure and is consistent with the results obtained from the autoregressive linear inversion. The sensitivity analysis of multiple events confirms the importance of the Bolivia, 1994 earthquake. When this event is used in the analysis, as few as two events are sufficient to constrain the splitting functions of the 13S2 mode. Apart from not requiring knowledge of the earthquake source, the newly developed technique provides an approximate uncertainty measure of the structure coefficients and allows us to control the type of structure solved for, for example to establish if elastic structure is sufficient.

  3. THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)

    EPA Science Inventory

    This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...

  4. Linearized inversion of multiple scattering seismic energy

    NASA Astrophysics Data System (ADS)

    Aldawood, Ali; Hoteit, Ibrahim; Zuberi, Mohammad

    2014-05-01

    Internal multiples deteriorate the quality of the migrated image obtained conventionally by imaging single scattering energy. So, imaging seismic data with the single-scattering assumption does not locate multiple-bounce events in their actual subsurface positions. However, imaging internal multiples properly has the potential to enhance the migrated image because they illuminate zones in the subsurface that are poorly illuminated by single scattering energy, such as nearly vertical faults. Standard migration of these multiples provides subsurface reflectivity distributions with low spatial resolution and migration artifacts due to the limited recording aperture, coarse source and receiver sampling, and the band-limited nature of the source wavelet. The resultant image obtained by the adjoint operator is a smoothed depiction of the true subsurface reflectivity model and is heavily masked by migration artifacts and the source wavelet fingerprint that needs to be properly deconvolved. Hence, we proposed a linearized least-squares inversion scheme to mitigate the effect of the migration artifacts, enhance the spatial resolution, and provide more accurate amplitude information when imaging internal multiples. The proposed algorithm uses the least-squares image based on the single-scattering assumption as a constraint to invert for the part of the image that is illuminated by internal scattering energy. Then, we posed the problem of imaging double-scattering energy as a least-squares minimization problem that requires solving the normal equations of the following form: G^T G v = G^T d, (1) where G is a linearized forward modeling operator that predicts double-scattered seismic data and G^T is the corresponding adjoint operator that images double-scattered seismic data. Gradient-based optimization algorithms solve this linear system. Hence, we used a quasi-Newton optimization technique to find the least-squares minimizer. In this approach, an estimate of the Hessian matrix that contains curvature information is modified at every iteration by a low-rank update based on gradient changes at every step. At each iteration, the data residual is imaged using G^T to determine the model update. Application of the linearized inversion to synthetic data to image a vertical fault plane demonstrates the effectiveness of this methodology to properly delineate the vertical fault plane and gives better amplitude information than the standard migrated image obtained using the adjoint operator that takes into account internal multiples. Thus, least-squares imaging of multiple scattering enhances the spatial resolution of the events illuminated by internal scattering energy. It also deconvolves the source signature and helps remove the fingerprint of the acquisition geometry. The final image is obtained by the superposition of the least-squares solution based on the single-scattering assumption and the least-squares solution based on the double-scattering assumption.
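
    Stripped of the seismic specifics, the computational core is the minimization of the quadratic misfit whose normal equations are given in (1), with G available only through its forward and adjoint actions. A minimal matrix-free sketch using a limited-memory quasi-Newton solver follows; forward, adjoint and all names are placeholders and assumptions, not the authors' code.

    import numpy as np
    from scipy.optimize import minimize

    def lsq_image(forward, adjoint, d, v0):
        """Minimize 0.5 ||G v - d||^2 with L-BFGS, G given only through its action.

        forward(v) : predicts double-scattered data from an image update v   (G v)
        adjoint(r) : migrates a data residual back into image space          (G^T r)
        """
        def cost_and_grad(v):
            r = forward(v) - d
            return 0.5 * float(r @ r), adjoint(r)   # misfit and its gradient G^T (G v - d)
        result = minimize(cost_and_grad, v0, jac=True, method="L-BFGS-B")
        return result.x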

  5. Sparseness- and continuity-constrained seismic imaging

    NASA Astrophysics Data System (ADS)

    Herrmann, Felix J.

    2005-04-01

    Non-linear solution strategies to the least-squares seismic inverse-scattering problem with sparseness and continuity constraints are proposed. Our approach is designed to (i) deal with substantial amounts of additive noise (SNR < 0 dB); (ii) use the sparseness and locality (both in position and angle) of directional basis functions (such as curvelets and contourlets) on the model: the reflectivity; and (iii) exploit the near invariance of these basis functions under the normal operator, i.e., the scattering-followed-by-imaging operator. Signal-to-noise ratio and the continuity along the imaged reflectors are significantly enhanced by formulating the solution of the seismic inverse problem in terms of an optimization problem. During the optimization, sparseness on the basis and continuity along the reflectors are imposed by jointly minimizing the l1- and anisotropic diffusion/total-variation norms on the coefficients and reflectivity, respectively. [Joint work with Peyman P. Moghaddam was carried out as part of the SINBAD project, with financial support secured through ITF (the Industry Technology Facilitator) from the following organizations: BG Group, BP, ExxonMobil, and SHELL. Additional funding came from the NSERC Discovery Grants 22R81254.]

  6. Numerical Procedures for Inlet/Diffuser/Nozzle Flows

    NASA Technical Reports Server (NTRS)

    Rubin, Stanley G.

    1998-01-01

    Two primitive variable, pressure based, flux-split, RNS/NS solution procedures for viscous flows are presented. Both methods are uniformly valid across the full Mach number range, i.e., from the incompressible limit to high supersonic speeds. The first method is an 'optimized' version of a previously developed global pressure relaxation RNS procedure. Considerable reduction in the number of relatively expensive matrix inversions, and thereby in the computational time, has been achieved with this procedure. CPU times are reduced by a factor of 15 for predominantly elliptic flows (incompressible and low subsonic). The second method is a time-marching, 'linearized' convection RNS/NS procedure. The key to the efficiency of this procedure is the reduction to a single LU inversion at the inflow cross-plane. The remainder of the algorithm simply requires back-substitution with this LU and the corresponding residual vector at any cross-plane location. This method is not time-consistent, but has a convective-type CFL stability limitation. Both formulations are robust and provide accurate solutions for a variety of internal viscous flows, as presented herein.

  7. A Semianalytical Model for Pumping Tests in Finite Heterogeneous Confined Aquifers With Arbitrarily Shaped Boundary

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Dai, Cheng; Xue, Liang

    2018-04-01

    This study presents a Laplace-transform-based boundary element method to model groundwater flow in a heterogeneous confined finite aquifer with arbitrarily shaped boundaries. The boundary condition can be of Dirichlet, Neumann or Robin type. The derived solution is analytical since it is obtained through the Green's function method within the domain. However, numerical approximation is required on the boundaries, which essentially renders it a semi-analytical solution. The proposed method can provide a general framework to derive solutions for zoned heterogeneous confined aquifers with arbitrarily shaped boundaries. The requirement of the boundary element method presented here is that the Green's function must exist for the specific PDE. In this study, the linear equations for the two-zone and three-zone confined aquifers with arbitrarily shaped boundaries are established in Laplace space, and the solution can be obtained by using any linear solver. The Stehfest inversion algorithm can be used to transform it back into the time domain to obtain the transient solution. The presented solution is validated in the two-zone cases by reducing the arbitrarily shaped boundaries to circular ones and comparing it with the solution in Lin et al. (2016, https://doi.org/10.1016/j.jhydrol.2016.07.028). The effect of boundary shape and well location on dimensionless drawdown in two-zone aquifers is investigated. Finally, the drawdown distribution in three-zone aquifers with arbitrarily shaped boundaries is analyzed for constant-rate tests (CRT), as is the flow rate distribution for constant-head tests (CHT).
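
    The Stehfest algorithm mentioned above for transforming the Laplace-domain solution back to the time domain is compact enough to sketch in full. The version below is the standard Gaver-Stehfest formula with a small self-check; the choice N = 12 and the test transform are illustrative assumptions.

    import numpy as np
    from math import factorial

    def stehfest_coefficients(N):
        """Gaver-Stehfest weights V_k (N must be even)."""
        V = np.zeros(N)
        for k in range(1, N + 1):
            s = 0.0
            for j in range((k + 1) // 2, min(k, N // 2) + 1):
                s += (j ** (N // 2) * factorial(2 * j)
                      / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                         * factorial(k - j) * factorial(2 * j - k)))
            V[k - 1] = (-1) ** (k + N // 2) * s
        return V

    def stehfest_invert(F, t, N=12):
        """Approximate f(t) from its Laplace transform F(s), for t > 0."""
        V = stehfest_coefficients(N)
        ln2_t = np.log(2.0) / t
        return ln2_t * sum(V[k] * F((k + 1) * ln2_t) for k in range(N))

    # Sanity check: F(s) = 1/(s+1) corresponds to f(t) = exp(-t)
    print(stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0), np.exp(-1.0))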

  8. Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel

    ERIC Educational Resources Information Center

    El-Gebeily, M.; Yushau, B.

    2008-01-01

    In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equations, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…
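
    For comparison outside the spreadsheet, the same three tasks can be checked in a few lines of Python; this is only an illustrative analogue of the Excel workflow described in the note.

    ```python
    # Python analogue of the three tasks discussed in the note (illustrative only).
    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([9.0, 8.0])

    x = np.linalg.solve(A, b)      # linear system A x = b
    A_inv = np.linalg.inv(A)       # matrix inversion

    # Small LP: maximize x + y  s.t.  3x + y <= 9,  x + 2y <= 8,  x, y >= 0
    res = linprog(c=[-1.0, -1.0], A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
    print(x, res.x)
    ```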

  9. Porosity Estimation By Artificial Neural Networks Inversion . Application to Algerian South Field

    NASA Astrophysics Data System (ADS)

    Eladj, Said; Aliouane, Leila; Ouadfeul, Sid-Ali

    2017-04-01

    One of the main current challenges for geophysicists is the discovery and study of stratigraphic traps; this is a difficult task that requires very fine analysis of the seismic data. Seismic data inversion allows lithological and stratigraphic information to be obtained for reservoir characterization. However, when solving the inverse problem we encounter difficulties such as non-existence and non-uniqueness of the solution, in addition to the instability of the processing algorithm. Therefore, uncertainties in the data and the non-linearity of the relationship between the data and the parameters must be taken seriously. In this case, artificial intelligence techniques such as Artificial Neural Networks (ANN) are used to resolve this ambiguity; this can be done by integrating different physical property data, which requires supervised learning methods. In this work, we invert the 3D seismic cube for acoustic impedance using the colored inversion method; then, introducing the resulting acoustic impedance volume as the input of a model-based inversion allows the porosity volume to be calculated using a Multilayer Perceptron Artificial Neural Network. Application to an Algerian South hydrocarbon field clearly demonstrates the power of the proposed processing technique to predict porosity from seismic data; the obtained results can be used for reserve estimation, permeability prediction, recovery factor estimation and reservoir monitoring. Keywords: Artificial Neural Networks, inversion, non-uniqueness, nonlinear, 3D porosity volume, reservoir characterization.
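
    As a rough illustration of the porosity step, the sketch below trains a small multilayer perceptron to map acoustic impedance to porosity with scikit-learn; the synthetic training pairs and the network size are placeholders, not the configuration used in the study.

    ```python
    # Minimal MLP regression sketch: acoustic impedance -> porosity.
    # Synthetic training pairs stand in for well-log data; network size is arbitrary.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)
    impedance = rng.uniform(4000, 12000, size=(500, 1))          # synthetic impedance samples
    porosity = 0.45 - 3.0e-5 * impedance[:, 0] + 0.01 * rng.standard_normal(500)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 16),
                                       max_iter=2000, random_state=0))
    model.fit(impedance, porosity)

    # Apply to an "inverted impedance volume" (here a flat array of trace samples).
    inverted_impedance = rng.uniform(4000, 12000, size=(1000, 1))
    predicted_porosity = model.predict(inverted_impedance)
    ```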

  10. Effective one-dimensional approach to the source reconstruction problem of three-dimensional inverse optoacoustics

    NASA Astrophysics Data System (ADS)

    Stritzel, J.; Melchert, O.; Wollweber, M.; Roth, B.

    2017-09-01

    The direct problem of optoacoustic signal generation in biological media consists of solving an inhomogeneous three-dimensional (3D) wave equation for an initial acoustic stress profile. In contrast, the more defiant inverse problem requires the reconstruction of the initial stress profile from a proper set of observed signals. In this article, we consider an effectively 1D approach, based on the assumption of a Gaussian transverse irradiation source profile and plane acoustic waves, in which the effects of acoustic diffraction are described in terms of a linear integral equation. The respective inverse problem along the beam axis can be cast into a Volterra integral equation of the second kind for which we explore here efficient numerical schemes in order to reconstruct initial stress profiles from observed signals, constituting a methodical progress of computational aspects of optoacoustics. In this regard, we explore the validity as well as the limits of the inversion scheme via numerical experiments, with parameters geared toward actual optoacoustic problem instances. The considered inversion input consists of synthetic data, obtained in terms of the effectively 1D approach, and, more generally, a solution of the 3D optoacoustic wave equation. Finally, we also analyze the effect of noise and different detector-to-sample distances on the optoacoustic signal and the reconstructed pressure profiles.
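
    The computational core of such a scheme is the numerical solution of a Volterra integral equation of the second kind; a standard trapezoidal marching scheme for the generic equation x(t) = g(t) + ∫₀ᵗ K(t,s) x(s) ds is sketched below, with a toy kernel and source term in place of the optoacoustic ones.

    ```python
    # Trapezoidal scheme for a Volterra integral equation of the second kind:
    #   x(t) = g(t) + int_0^t K(t, s) x(s) ds
    # Generic sketch; K and g below are toy placeholders, not the optoacoustic kernel.
    import numpy as np

    def solve_volterra2(K, g, t):
        """March forward in time, solving for x[i] at each grid point."""
        n = len(t)
        h = t[1] - t[0]
        x = np.zeros(n)
        x[0] = g(t[0])
        for i in range(1, n):
            acc = 0.5 * K(t[i], t[0]) * x[0]
            acc += sum(K(t[i], t[j]) * x[j] for j in range(1, i))
            rhs = g(t[i]) + h * acc
            x[i] = rhs / (1.0 - 0.5 * h * K(t[i], t[i]))
        return x

    # Toy check: x(t) = 1 + int_0^t x(s) ds has the solution x(t) = exp(t).
    t = np.linspace(0.0, 1.0, 201)
    x = solve_volterra2(lambda ti, s: 1.0, lambda ti: 1.0, t)
    print(x[-1])   # ~ e = 2.718...
    ```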

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donovan, Ellen M., E-mail: ellen.donovan@icr.ac.u; Ciurlionis, Laura; Fairfoul, Jamie

    Purpose: To establish planning solutions for a concomitant three-level radiation dose distribution to the breast using linear accelerator- or tomotherapy-based intensity-modulated radiotherapy (IMRT), for the U.K. Intensity Modulated and Partial Organ (IMPORT) High trial. Methods and Materials: Computed tomography data sets for 9 patients undergoing breast conservation surgery with implanted tumor bed gold markers were used to prepare three-level dose distributions encompassing the whole breast (36 Gy), partial breast (40 Gy), and tumor bed boost (48 or 53 Gy) treated concomitantly in 15 fractions within 3 weeks. Forward and inverse planned IMRT and tomotherapy were investigated as solutions. A standard electron field was compared with a photon field arrangement encompassing the tumor bed boost volume. The out-of-field doses were measured for all methods. Results: Dose-volume constraints of volume >90% receiving 32.4 Gy and volume >95% receiving 50.4 Gy for the whole breast and tumor bed were achieved. The constraint of volume >90% receiving 36 Gy for the partial breast was fulfilled in the inverse IMRT and tomotherapy plans and in 7 of 9 cases of a forward planned IMRT distribution. An electron boost to the tumor bed was inadequate in 8 of 9 cases. The IMRT methods delivered a greater whole body dose than the standard breast tangents. The contralateral lung volume receiving >2.5 Gy was increased in the inverse IMRT and tomotherapy plans, although it did not exceed the constraint. Conclusion: We have demonstrated a set of widely applicable solutions that fulfilled the stringent clinical trial requirements for the delivery of a concomitant three-level dose distribution to the breast.

  12. An efficient inverse radiotherapy planning method for VMAT using quadratic programming optimization.

    PubMed

    Hoegele, W; Loeschel, R; Merkle, N; Zygmanski, P

    2012-01-01

    The purpose of this study is to investigate the feasibility of an inverse planning optimization approach for the Volumetric Modulated Arc Therapy (VMAT) based on quadratic programming and the projection method. The performance of this method is evaluated against a reference commercial planning system (eclipse(TM) for rapidarc(TM)) for clinically relevant cases. The inverse problem is posed in terms of a linear combination of basis functions representing arclet dose contributions and their respective linear coefficients as degrees of freedom. MLC motion is decomposed into basic motion patterns in an intuitive manner leading to a system of equations with a relatively small number of equations and unknowns. These equations are solved using quadratic programming under certain limiting physical conditions for the solution, such as the avoidance of negative dose during optimization and Monitor Unit reduction. The modeling by the projection method assures a unique treatment plan with beneficial properties, such as the explicit relation between organ weightings and the final dose distribution. Clinical cases studied include prostate and spine treatments. The optimized plans are evaluated by comparing isodose lines, DVH profiles for target and normal organs, and Monitor Units to those obtained by the clinical treatment planning system eclipse(TM). The resulting dose distributions for a prostate (with rectum and bladder as organs at risk), and for a spine case (with kidneys, liver, lung and heart as organs at risk) are presented. Overall, the results indicate that similar plan qualities for quadratic programming (QP) and rapidarc(TM) could be achieved at significantly more efficient computational and planning effort using QP. Additionally, results for the quasimodo phantom [Bohsung et al., "IMRT treatment planning: A comparative inter-system and inter-centre planning exercise of the estro quasimodo group," Radiother. Oncol. 76(3), 354-361 (2005)] are presented as an example for an extreme concave case. Quadratic programming is an alternative approach for inverse planning which generates clinically satisfying plans in comparison to the clinical system and constitutes an efficient optimization process characterized by uniqueness and reproducibility of the solution.
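
    A stripped-down version of the underlying optimization is a non-negative least-squares fit of arclet weights to a prescribed dose; the sketch below uses scipy's nnls as a stand-in for the full quadratic program, and the dose-deposition matrix, prescription and organ weightings are synthetic placeholders.

    ```python
    # Minimal non-negative least-squares sketch of the arclet-weight problem:
    #   minimize ||D w - p||_2^2  subject to  w >= 0
    # D maps arclet weights to voxel dose; D and the prescription p are toy placeholders,
    # and the organ weightings of the full QP formulation are omitted.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(2)
    n_voxels, n_arclets = 300, 40
    D = rng.uniform(0.0, 1.0, size=(n_voxels, n_arclets))   # dose-deposition matrix
    p = rng.uniform(0.5, 1.0, size=n_voxels)                # prescribed voxel doses

    w, residual = nnls(D, p)     # non-negative arclet weights and misfit norm
    dose = D @ w                 # resulting dose distribution
    print(residual, w.min())
    ```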

  13. Dynamic Compression of the Signal in a Charge Sensitive Amplifier: From Concept to Design

    NASA Astrophysics Data System (ADS)

    Manghisoni, Massimo; Comotti, Daniele; Gaioni, Luigi; Ratti, Lodovico; Re, Valerio

    2015-10-01

    This work is concerned with the design of a low-noise Charge Sensitive Amplifier featuring a dynamic signal compression based on the non-linear features of an inversion-mode MOS capacitor. These features make the device suitable for applications where a non-linear characteristic of the front-end is required, such as in imaging instrumentation for free electron laser experiments. The aim of the paper is to discuss a methodology for the proper design of the feedback network enabling the dynamic signal compression. Starting from this compression solution, the design of a low-noise Charge Sensitive Amplifier is also discussed. The study has been carried out by referring to a 65 nm CMOS technology.

  14. Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis.

    PubMed

    Abbasi, Mahdi

    2014-01-01

    The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems, including the inverse conductivity problem considered in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the second suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. Then, the orthogonal properties of their products dissolve the integral operator of the D-bar equation and result in a discrete convolution equation. That is, the new moment method leads to the equation's solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure to be solved using the fast Fourier transform. This allows the order of computational complexity to be reduced to as low as O(N^2 log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR.
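
    To illustrate why a discrete convolution structure is attractive, the sketch below solves a toy one-dimensional periodic convolution equation by damped spectral division with the FFT; it is a generic stand-in, not the sinc-basis moment method itself.

    ```python
    # Sketch: solving a (periodic) discrete convolution equation h * x = y with the FFT.
    # Toy 1-D stand-in for the convolution structure obtained from the sinc-basis
    # moment method; a small damping term guards against division by tiny spectra.
    import numpy as np

    def fft_deconvolve(h, y, eps=1e-6):
        H = np.fft.fft(h)
        Y = np.fft.fft(y)
        X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)   # damped spectral division
        return np.real(np.fft.ifft(X))

    n = 128
    x_true = np.zeros(n); x_true[30:50] = 1.0
    h = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
    h /= h.sum()
    y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x_true)))   # circular convolution
    x_rec = fft_deconvolve(h, y)
    ```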

  15. Anisotropic k-essence cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chimento, Luis P.; Forte, Monica

    We investigate a Bianchi type-I cosmology with k-essence and find the set of models which dissipate the initial anisotropy. There are cosmological models with extended tachyon fields and k-essence having a constant barotropic index. We obtain the conditions leading to a regular bounce of the average geometry and the residual anisotropy on the bounce. For constant potential, we develop purely kinetic k-essence models which are dust dominated in their early stages, dissipate the initial anisotropy, and end in a stable de Sitter accelerated expansion scenario. We show that linear k-field and polynomial kinetic function models evolve asymptotically to Friedmann-Robertson-Walker cosmologies. The linear case is compatible with an asymptotic potential interpolating between V_l ∝ φ^(-γ_l) in the shear-dominated regime and V_l ∝ φ^(-2) at late times. In the polynomial case, the general solution contains cosmological models with an oscillatory average geometry. For linear k-essence, we find the general solution in the Bianchi type-I cosmology when the k-field is driven by an inverse square potential. This model shares the same geometry as a quintessence field driven by an exponential potential.

  16. Bukhvostov-Lipatov model and quantum-classical duality

    NASA Astrophysics Data System (ADS)

    Bazhanov, Vladimir V.; Lukyanov, Sergei L.; Runov, Boris A.

    2018-02-01

    The Bukhvostov-Lipatov model is an exactly soluble model of two interacting Dirac fermions in 1 + 1 dimensions. The model describes weakly interacting instantons and anti-instantons in the O (3) non-linear sigma model. In our previous work [arxiv:arXiv:1607.04839] we have proposed an exact formula for the vacuum energy of the Bukhvostov-Lipatov model in terms of special solutions of the classical sinh-Gordon equation, which can be viewed as an example of a remarkable duality between integrable quantum field theories and integrable classical field theories in two dimensions. Here we present a complete derivation of this duality based on the classical inverse scattering transform method, traditional Bethe ansatz techniques and analytic theory of ordinary differential equations. In particular, we show that the Bethe ansatz equations defining the vacuum state of the quantum theory also define connection coefficients of an auxiliary linear problem for the classical sinh-Gordon equation. Moreover, we also present details of the derivation of the non-linear integral equations determining the vacuum energy and other spectral characteristics of the model in the case when the vacuum state is filled by 2-string solutions of the Bethe ansatz equations.

  17. Simultaneous elastic parameter inversion in 2-D/3-D TTI medium combined later arrival times

    NASA Astrophysics Data System (ADS)

    Bai, Chao-ying; Wang, Tao; Yang, Shang-bei; Li, Xing-wang; Huang, Guo-jiao

    2016-04-01

    Traditional traveltime inversion for anisotropic media is, in general, based on a "weak anisotropy" assumption, which simplifies both the forward part (ray tracing is performed only once) and the inversion part (a linear inversion solver is possible). But for some real applications, a general (both "weak" and "strong") anisotropic medium should be considered. In such cases, one has to develop a ray tracing algorithm able to handle the general (including "strong") anisotropic medium and also to design a non-linear inversion solver for the subsequent tomography. Meanwhile, it is instructive to investigate how much the tomographic resolution can be improved by introducing the later arrivals. With this motivation, we combined our newly developed ray tracing algorithm for general anisotropic media (a multistage irregular shortest-path method) with a non-linear inversion solver (a damped minimum-norm, constrained least-squares problem solved with a conjugate gradient approach) to formulate a non-linear traveltime inversion procedure for anisotropic media. This anisotropic traveltime inversion procedure is able to incorporate the later (reflected) arrival times. Both 2-D/3-D synthetic inversion experiments and comparison tests show that (1) the proposed anisotropic traveltime inversion scheme is able to recover high-contrast anomalies and (2) it is possible to improve the tomographic resolution by introducing the later (reflected) arrivals, though not as much as in the isotropic case, because the sensitivities (derivatives) of the different velocities (qP, qSV and qSH) with respect to the different elastic parameters are not the same and also depend on the inclination angle.

  18. Acoustic Impedance Inversion of Seismic Data Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Eladj, Said; Djarfour, Noureddine; Ferahtia, Djalal; Ouadfeul, Sid-Ali

    2013-04-01

    The inversion of seismic data can be used to constrain estimates of the Earth's acoustic impedance structure. This kind of problem is usually non-linear and high-dimensional, with a complex search space which may be riddled with many local minima, and it results in irregular objective functions. We investigate here the performance and application of a genetic algorithm in the inversion of seismic data. The proposed algorithm has the advantage of being easily implemented without getting stuck in local minima. The effects of population size, elitism strategy, uniform cross-over and a low mutation rate are examined. The optimum solution parameters and performance were decided as a function of the testing error convergence with respect to the generation number. To calculate the fitness function, we used the L2 norm of the sample-to-sample difference between the reference and the inverted trace. The cross-over probability is 0.9-0.95 and mutation has been tested at a probability of 0.01. The application of such a genetic algorithm to synthetic data shows that the inversion of the acoustic impedance section was efficient. Keywords: Seismic, inversion, acoustic impedance, genetic algorithm, fitness functions, cross-over, mutation.
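
    A minimal real-coded genetic algorithm with elitism, uniform cross-over and a low mutation rate, in the spirit of the abstract, is sketched below; the forward model is a toy linear operator rather than a seismic convolution, and all numerical settings are illustrative.

    ```python
    # Minimal real-coded genetic algorithm sketch for trace-misfit minimization.
    # The "forward model" here is a toy linear operator, not a seismic convolution.
    import numpy as np

    rng = np.random.default_rng(3)
    n_params, pop_size, n_gen = 20, 60, 200
    G = rng.standard_normal((50, n_params))          # toy forward operator
    m_true = rng.uniform(2000, 6000, n_params)       # "acoustic impedance" model
    d_obs = G @ m_true                               # reference trace

    def fitness(m):
        return np.linalg.norm(G @ m - d_obs)         # L2 sample-to-sample misfit

    pop = rng.uniform(2000, 6000, (pop_size, n_params))
    for _ in range(n_gen):
        order = np.argsort([fitness(m) for m in pop])
        pop = pop[order]
        new_pop = [pop[0].copy()]                    # elitism: keep the best model
        while len(new_pop) < pop_size:
            i, j = rng.integers(0, pop_size // 2, 2) # parents from the better half
            mask = rng.random(n_params) < 0.5        # uniform crossover
            child = np.where(mask, pop[i], pop[j])
            mutate = rng.random(n_params) < 0.01     # low mutation probability
            child = np.where(mutate, rng.uniform(2000, 6000, n_params), child)
            new_pop.append(child)
        pop = np.array(new_pop)

    best = min(pop, key=fitness)                     # best model after the last generation
    ```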

  19. Three-dimensional forward modeling and inversion of marine CSEM data in anisotropic conductivity structures

    NASA Astrophysics Data System (ADS)

    Han, B.; Li, Y.

    2016-12-01

    We present a three-dimensional (3D) forward and inverse modeling code for marine controlled-source electromagnetic (CSEM) surveys in anisotropic media. The forward solution is based on a primary/secondary field approach, in which secondary fields are solved using a staggered finite-volume (FV) method and primary fields are solved analytically for 1D isotropic background models. It is shown that it is rather straightforward to extend the isotropic 3D FV algorithm to a triaxial anisotropic one, although additional coefficients are required to account for the full tensor conductivity. To solve the linear system resulting from the FV discretization of Maxwell's equations, both iterative Krylov solvers (e.g. BiCGSTAB) and direct solvers (e.g. MUMPS) have been implemented, making the code flexible for different computing platforms and different problems. For iterative solutions, the linear system in terms of electromagnetic potentials (A-Phi) is used to precondition the original linear system, transforming the discretized curl-curl equations into discretized Laplace-like equations, so that much more favorable numerical properties are obtained. Numerical experiments suggest that this A-Phi preconditioner can dramatically improve the convergence rate of an iterative solver and that high accuracy can be achieved without divergence correction, even for low frequencies. To efficiently calculate the sensitivities, i.e. the derivatives of the CSEM data with respect to the tensor conductivity, the adjoint method is employed. For inverse modeling, triaxial anisotropy is taken into account. Since the number of model parameters to be resolved for triaxial anisotropic media is twice or three times that of isotropic media, the data-space version of the Gauss-Newton (GN) minimization method is preferred due to its lower computational cost compared with the traditional model-space GN method. We demonstrate the effectiveness of the code with synthetic examples.

  20. Predicting a future lifetime through Box-Cox transformation.

    PubMed

    Yang, Z

    1999-09-01

    In predicting a future lifetime based on a sample of past lifetimes, the Box-Cox transformation method provides a simple and unified procedure that is shown in this article to meet or often outperform the corresponding frequentist solution in terms of coverage probability and average length of prediction intervals. Kullback-Leibler information and a second-order asymptotic expansion are used to justify the Box-Cox procedure. Extensive Monte Carlo simulations are also performed to evaluate the small-sample behavior of the procedure. Certain popular lifetime distributions, such as the Weibull, inverse Gaussian and Birnbaum-Saunders, serve as illustrative examples. One important advantage of the Box-Cox procedure lies in its easy extension to linear model predictions where exact frequentist solutions are often not available.
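
    The basic mechanics of the procedure can be sketched in a few lines: transform the past lifetimes, build a normal-theory prediction interval on the transformed scale, and map the endpoints back with the inverse transformation. The sketch below is only this textbook recipe, not the second-order-corrected procedure of the article.

    ```python
    # Sketch of a Box-Cox prediction interval for a future lifetime (illustrative only).
    import numpy as np
    from scipy.stats import boxcox, t
    from scipy.special import inv_boxcox

    rng = np.random.default_rng(4)
    lifetimes = rng.weibull(1.5, size=30) * 100.0        # synthetic past lifetimes

    z, lam = boxcox(lifetimes)                           # MLE of the transformation parameter
    n = len(z)
    m, s = z.mean(), z.std(ddof=1)
    half_width = t.ppf(0.975, n - 1) * s * np.sqrt(1 + 1 / n)
    lower, upper = m - half_width, m + half_width        # 95% interval on the transformed scale

    print(inv_boxcox(lower, lam), inv_boxcox(upper, lam))  # back-transformed interval
    ```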

  1. Quintessence from virtual dark matter

    NASA Astrophysics Data System (ADS)

    Damdinsuren, Battsetseg; Sim, Jonghyun; Lee, Tae Hoon

    2017-09-01

    Considering a theory of Brans-Dicke gravity with general couplings of Higgs-like bosons, including a non-renormalizable term, we derive the low-energy effective action in a Universe at a temperature much lower than the Higgs-like boson mass. The necessary equations, comprising gravitational field equations and an effective potential of the Brans-Dicke scalar field, are obtained; they are induced through virtual interactions of the Higgs-like heavy field in the late-time Universe. We find a de Sitter cosmological solution with an inverse power law effective potential of the scalar field and discuss the possibility that the late-time acceleration of our Universe can be naturally explained by means of this solution. We also investigate the stability properties of the quintessence model by using a linear approximation.

  2. Sparse-grid, reduced-basis Bayesian inversion: Nonaffine-parametric nonlinear equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Peng, E-mail: peng@ices.utexas.edu; Schwab, Christoph, E-mail: christoph.schwab@sam.math.ethz.ch

    2016-07-01

    We extend the reduced basis (RB) accelerated Bayesian inversion methods for affine-parametric, linear operator equations which are considered in [16,17] to non-affine, nonlinear parametric operator equations. We generalize the analysis of sparsity of parametric forward solution maps in [20] and of Bayesian inversion in [48,49] to the fully discrete setting, including Petrov–Galerkin high-fidelity (“HiFi”) discretization of the forward maps. We develop adaptive, stochastic collocation based reduction methods for the efficient computation of reduced bases on the parametric solution manifold. The nonaffinity and nonlinearity with respect to (w.r.t.) the distributed, uncertain parameters and the unknown solution is collocated; specifically, by the so-called Empirical Interpolation Method (EIM). For the corresponding Bayesian inversion problems, computational efficiency is enhanced in two ways: first, expectations w.r.t. the posterior are computed by adaptive quadratures with dimension-independent convergence rates proposed in [49]; the present work generalizes [49] to account for the impact of the PG discretization in the forward maps on the convergence rates of the Quantities of Interest (QoI for short). Second, we propose to perform the Bayesian estimation only w.r.t. a parsimonious, RB approximation of the posterior density. Based on the approximation results in [49], the infinite-dimensional parametric, deterministic forward map and operator admit N-term RB and EIM approximations which converge at rates which depend only on the sparsity of the parametric forward map. In several numerical experiments, the proposed algorithms exhibit dimension-independent convergence rates which equal, at least, the currently known rate estimates for N-term approximation. We propose to accelerate Bayesian estimation by first offline construction of reduced basis surrogates of the Bayesian posterior density. The parsimonious surrogates can then be employed for online data assimilation and for Bayesian estimation. They also open a perspective for optimal experimental design.

  3. Iterative algorithms for a non-linear inverse problem in atmospheric lidar

    NASA Astrophysics Data System (ADS)

    Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto

    2017-08-01

    We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can improve substantially the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms out-perform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
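
    As an illustration of a KKT-derived, non-negativity-preserving iteration for Poisson data, the sketch below uses the classical multiplicative ML-EM (Richardson-Lucy) update for a plain linear forward model y ~ Poisson(Ax); the lidar problem instead involves the exponential of a linear operator, so this is not the authors' algorithm, only the general idea.

    ```python
    # Multiplicative ML-EM (Richardson-Lucy) sketch for Poisson data with a
    # non-negativity constraint. Illustrative stand-in: the lidar forward model of
    # the abstract couples data and unknown through exp(linear operator) and is not
    # reproduced here.
    import numpy as np

    def mlem(A, y, n_iter=500):
        x = np.ones(A.shape[1])                  # strictly positive starting point
        norm = A.T @ np.ones(A.shape[0])         # sensitivity term A^T 1
        for _ in range(n_iter):
            ratio = y / np.maximum(A @ x, 1e-12)
            x *= (A.T @ ratio) / norm            # multiplicative KKT/EM update
        return x

    rng = np.random.default_rng(5)
    A = rng.uniform(0.0, 1.0, size=(200, 50))
    x_true = np.maximum(rng.standard_normal(50) + 1.0, 0.0)
    y = rng.poisson(A @ x_true)                  # Poisson-distributed data
    x_hat = mlem(A, y)
    ```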

  4. Prolongation structures of nonlinear evolution equations

    NASA Technical Reports Server (NTRS)

    Wahlquist, H. D.; Estabrook, F. B.

    1975-01-01

    A technique is developed for systematically deriving a 'prolongation structure' - a set of interrelated potentials and pseudopotentials - for nonlinear partial differential equations in two independent variables. When this is applied to the Korteweg-de Vries equation, a new infinite set of conserved quantities is obtained. Known solution techniques are shown to result from the discovery of such a structure: related partial differential equations for the potential functions, linear 'inverse scattering' equations for auxiliary functions, Backlund transformations. Generalizations of these techniques will result from the use of irreducible matrix representations of the prolongation structure.

  5. A genetic meta-algorithm-assisted inversion approach: hydrogeological study for the determination of volumetric rock properties and matrix and fluid parameters in unsaturated formations

    NASA Astrophysics Data System (ADS)

    Szabó, Norbert Péter

    2018-03-01

    An evolutionary inversion approach is suggested for the interpretation of nuclear and resistivity logs measured by direct-push tools in shallow unsaturated sediments. The efficiency of formation evaluation is improved by estimating simultaneously (1) the petrophysical properties that vary rapidly along a drill hole with depth and (2) the zone parameters that can be treated as constant, in one inversion procedure. In the workflow, the fractional volumes of water, air, matrix and clay are estimated in adjacent depths by linearized inversion, whereas the clay and matrix properties are updated using a float-encoded genetic meta-algorithm. The proposed inversion method provides an objective estimate of the zone parameters that appear in the tool response equations applied to solve the forward problem, which can significantly increase the reliability of the petrophysical model as opposed to setting these parameters arbitrarily. The global optimization meta-algorithm not only assures the best fit between the measured and calculated data but also gives a reliable solution, practically independent of the initial model, as laboratory data are unnecessary in the inversion procedure. The feasibility test uses engineering geophysical sounding logs observed in an unsaturated loessy-sandy formation in Hungary. The multi-borehole extension of the inversion technique is developed to determine the petrophysical properties and their estimation errors along a profile of drill holes. The genetic meta-algorithmic inversion method is recommended for hydrogeophysical logging applications of various kinds to automatically extract the volumetric ratios of rock and fluid constituents as well as the most important zone parameters in a reliable inversion procedure.

  6. Reassessment of the source of the 1976 Friuli, NE Italy, earthquake sequence from the joint inversion of high-precision levelling and triangulation data

    NASA Astrophysics Data System (ADS)

    Cheloni, D.; D'Agostino, N.; D'Anastasio, E.; Selvaggi, G.

    2012-08-01

    In this study, we revisit the mechanism of the 1976 Friuli (NE Italy) earthquake sequence (main shocks Mw 6.4, 5.9 and 6.0). We present a new source model that simultaneously fits all the available geodetic measurements of the observed deformation. We integrate triangulation measurements, which have never been previously used in the source modelling of this sequence, with high-precision levelling that covers the epicentral area. We adopt a mixed linear/non-linear optimization scheme, in which we iteratively search for the best-fitting solution by performing several linear slip inversions while varying fault location using a grid search method. Our preferred solution consists of a shallow north-dipping fault plane with assumed azimuth of 282° and accommodating a reverse dextral slip of about 1 m. The estimated geodetic moment is 6.6 × 1018 Nm (Mw 6.5), in agreement with seismological estimates. Yet, our preferred model shows that the geodetic solution is consistent with the activation of a single fault system during the entire sequence, the surface expression of which could be associated with the Buia blind thrust, supporting the hypothesis that the main activity of the Eastern Alps occurs close to the relief margin, as observed in other mountain belts. The retrieved slip pattern consists of a main coseismic patch located 3-5 km depth, in good agreement with the distribution of the main shocks. Additional slip is required in the shallower portions of the fault to reproduce the local uplift observed in the region characterized by Quaternary active folding. We tentatively interpret this patch as postseismic deformation (afterslip) occurring at the edge of the main coseismic patch. Finally, our rupture plane spatially correlates with the area of the locked fault determined from interseismic measurements, supporting the hypothesis that interseismic slip on the creeping dislocation causes strain to accumulate on the shallow (above ˜10 km depth) locked section. Assuming that all the long-term accommodation between Adria and Eurasia is seismically released, a time span of 500-700 years of strain-accumulating plate motion would result in a 1976-like earthquake.

  7. Estimating uncertainty of Full Waveform Inversion with Ensemble-based methods

    NASA Astrophysics Data System (ADS)

    Thurin, J.; Brossier, R.; Métivier, L.

    2017-12-01

    Uncertainty estimation is one key feature of tomographic applications for robust interpretation. However, this information is often missing in the frame of large-scale linearized inversions, and only the results at convergence are shown, despite the ill-posed nature of the problem. This issue is common in the Full Waveform Inversion (FWI) community. While a few methodologies have already been proposed in the literature, standard FWI workflows do not yet include any systematic uncertainty quantification method, but often try to assess the result's quality through cross-comparison with other seismic results or with other geophysical data. With the development of large seismic networks/surveys, the increase in computational power and the more and more systematic application of FWI, it is crucial to tackle this problem and to propose robust and affordable workflows, in order to address the uncertainty quantification problem faced for near-surface targets, crustal exploration, as well as regional and global scales. In this work (Thurin et al., 2017a,b), we propose an approach which takes advantage of the Ensemble Transform Kalman Filter (ETKF) proposed by Bishop et al. (2001), in order to estimate a low-rank approximation of the posterior covariance matrix of the FWI problem, allowing us to evaluate some uncertainty information of the solution. Instead of solving the FWI problem through a Bayesian inversion with the ETKF, we chose to combine a conventional FWI, based on local optimization, with the ETKF strategy. This scheme combines the efficiency of local optimization for solving large-scale inverse problems and makes sampling of the local solution space possible thanks to its embarrassingly parallel property. References: Bishop, C. H., Etherton, B. J. and Majumdar, S. J., 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly Weather Review, 129(3), 420-436. Thurin, J., Brossier, R. and Métivier, L., 2017a. Ensemble-Based Uncertainty Estimation in Full Waveform Inversion. 79th EAGE Conference and Exhibition 2017 (12-15 June, 2017). Thurin, J., Brossier, R. and Métivier, L., 2017b. An Ensemble-Transform Kalman Filter - Full Waveform Inversion scheme for Uncertainty estimation. SEG Technical Program Expanded Abstracts 2012.

  8. Nonlinear Waves and Inverse Scattering

    DTIC Science & Technology

    1989-01-01

    transform provides a linearization.' Well-known systems include the Kadomtsev-Petviashvili, Davey-Stewartson and Self-Dual Yang-Mills equations. The d...which employs inverse scattering theory in order to linearize the given nonlinear equation. I.S.T. has led to new developments in both fields: inverse...scattering and nonlinear wave equations. Listed below are some of the problems studied and a short description of results. - Multidimensional

  9. Fracture characterization by hybrid enumerative search and Gauss-Newton least-squares inversion methods

    NASA Astrophysics Data System (ADS)

    Alkharji, Mohammed N.

    Most fracture characterization methods provide a general description of the fracture parameters as part of the reservoir parameters; the fracture interaction and geometry within the reservoir are given less attention. T-matrix and Linear Slip effective medium fracture models are implemented to invert the elastic tensor for the parameters and geometries of the fractures within the reservoir. The fracture inverse problem is an ill-posed, overdetermined, underconstrained, rank-deficient system of equations. Least-squares inverse methods are used to solve the problem. A good starting initial model for the parameters is a key factor in the reliability of the inversion. Most methods assume that the starting parameters are close to the solution to avoid inaccurate local-minimum solutions. Prior knowledge of the fracture parameters and their geometry is not available. We develop a hybrid, enumerative and Gauss-Newton method that estimates the fracture parameters and geometry from the elastic tensor with no prior knowledge of the initial parameter values. The fracture parameters are separated into two groups. The first group contains the fracture parameters with no prior information, and the second group contains the parameters with known prior information. Different models are generated from the first-group parameters by sampling the solution space over a predefined range of possible solutions for each parameter. Each model generated by the first group is fixed and used as a starting model to invert for the second group of parameters using the Gauss-Newton method. The least-squares residual between the observed elastic tensor and the estimated elastic tensor is calculated for each model. The model parameters that yield the smallest least-squares residual correspond to the correct fracture reservoir parameters and geometry. Two synthetic examples of fractured reservoirs with oil and gas saturations were inverted with no prior information about the fracture properties. The results showed that the hybrid algorithm successfully predicted the fracture parametrization, geometry, and the fluid content within the modeled reservoir. The method was also applied to an elastic tensor extracted from the Weyburn field in Saskatchewan, Canada. The solution suggested no presence of fractures but only a VTI system caused by the shale layering in the targeted reservoir; this interpretation is supported by other Weyburn field data.

  10. Iterative updating of model error for Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Calvetti, Daniela; Dunlop, Matthew; Somersalo, Erkki; Stuart, Andrew

    2018-02-01

    In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only limited full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations when the data is finite dimensional. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large particle limit under a simplifying assumption. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.

  11. Guidance of Nonlinear Nonminimum-Phase Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh

    1996-01-01

    The research work has advanced the inversion-based guidance theory for: systems with non-hyperbolic internal dynamics; systems with parameter jumps; and systems where a redesign of the output trajectory is desired. A technique to achieve output tracking for nonminimum phase linear systems with non-hyperbolic and near non-hyperbolic internal dynamics was developed. This approach integrated stable inversion techniques, that achieve exact-tracking, with approximation techniques, that modify the internal dynamics to achieve desirable performance. Such modification of the internal dynamics was used (a) to remove non-hyperbolicity which is an obstruction to applying stable inversion techniques and (b) to reduce large preactuation times needed to apply stable inversion for near non-hyperbolic cases. The method was applied to an example helicopter hover control problem with near non-hyperbolic internal dynamics for illustrating the trade-off between exact tracking and reduction of preactuation time. Future work will extend these results to guidance of nonlinear non-hyperbolic systems. The exact output tracking problem for systems with parameter jumps was considered. Necessary and sufficient conditions were derived for the elimination of switching-introduced output transient. While previous works had studied this problem by developing a regulator that maintains exact tracking through parameter jumps (switches), such techniques are, however, only applicable to minimum-phase systems. In contrast, our approach is also applicable to nonminimum-phase systems and leads to bounded but possibly non-causal solutions. In addition, for the case when the reference trajectories are generated by an exosystem, we developed an exact-tracking controller which could be written in a feedback form. As in standard regulator theory, we also obtained a linear map from the states of the exosystem to the desired system state, which was defined via a matrix differential equation.

  12. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods

    PubMed Central

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-01-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, a numerical optimization method for analyzing LR-NMR data by including non-negativity constraints and L1 regularization and by applying a convex optimization solver PDCO, a primal-dual interior method for convex objectives, that allows general linear constraints to be treated as linear operators is presented. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452

  13. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods.

    PubMed

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-05-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, a numerical optimization method for analyzing LR-NMR data by including non-negativity constraints and L1 regularization and by applying a convex optimization solver PDCO, a primal-dual interior method for convex objectives, that allows general linear constraints to be treated as linear operators is presented. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72-88, 2013.
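
    Conceptually, the optimization combines a least-squares data fit with non-negativity and an L1 penalty; a compact proximal-gradient stand-in (not the PDCO solver used in the article) with a synthetic exponential-decay kernel is sketched below.

    ```python
    # Proximal-gradient sketch of L1-regularized, non-negative inversion of a
    # Laplace-type kernel (a lightweight stand-in for PDCO; kernel, grids and the
    # regularization weight below are synthetic).
    import numpy as np

    def nonneg_l1_inversion(K, y, lam, n_iter=2000):
        """minimize 0.5*||K f - y||^2 + lam*sum(f)  subject to  f >= 0."""
        L = np.linalg.norm(K, 2) ** 2
        f = np.zeros(K.shape[1])
        for _ in range(n_iter):
            grad = K.T @ (K @ f - y)
            f = np.maximum(f - (grad + lam) / L, 0.0)   # prox of lam*||.||_1 with f >= 0
        return f

    # Synthetic relaxation decay: y(t) = sum_j f_j * exp(-t / T2_j) + noise.
    t = np.linspace(0.001, 1.0, 200)
    T2 = np.logspace(-3, 0, 100)
    K = np.exp(-np.outer(t, 1.0 / T2))
    f_true = np.zeros(100); f_true[[30, 70]] = [1.0, 0.6]
    rng = np.random.default_rng(6)
    y = K @ f_true + 0.01 * rng.standard_normal(len(t))
    f_hat = nonneg_l1_inversion(K, y, lam=0.02)
    ```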

  14. Semi-analytical solution of flow to a well in an unconfined-fractured aquifer system separated by an aquitard

    NASA Astrophysics Data System (ADS)

    Sedghi, Mohammad M.; Samani, Nozar; Barry, D. A.

    2018-04-01

    Semi-analytical solutions are presented for flow to a well in an extensive homogeneous and anisotropic unconfined-fractured aquifer system separated by an aquitard. The pumping well is of infinitesimal radius and screened in either the overlying unconfined aquifer or the underlying fractured aquifer. An existing linearization method was used to determine the watertable drainage. The solution was obtained via Laplace and Hankel transforms, with results calculated by numerical inversion. The main findings are presented in the form of non-dimensional drawdown-time curves, as well as scaled sensitivity-dimensionless time curves. The new solution permits determination of the influence of fractures, matrix blocks and watertable drainage parameters on the aquifer drawdown. The effect of the aquitard on the drawdown response of the overlying unconfined aquifer and the underlying fractured aquifer was also explored. The results permit estimation of the unconfined and fractured aquifer hydraulic parameters via type-curve matching or coupling of the solution with a parameter estimation code. The solution can also be used to determine aquifer hydraulic properties from an optimal pumping test set up and duration.

  15. Lax Integrability and the Peakon Problem for the Modified Camassa-Holm Equation

    NASA Astrophysics Data System (ADS)

    Chang, Xiangke; Szmigielski, Jacek

    2018-02-01

    Peakons are special weak solutions of a class of nonlinear partial differential equations modelling non-linear phenomena such as the breakdown of regularity and the onset of shocks. We show that the natural concept of weak solutions in the case of the modified Camassa-Holm equation studied in this paper is dictated by the distributional compatibility of its Lax pair and, as a result, it differs from the one proposed and used in the literature based on the concept of weak solutions used for equations of the Burgers type. Subsequently, we give a complete construction of peakon solutions satisfying the modified Camassa-Holm equation in the sense of distributions; our approach is based on solving certain inverse boundary value problem, the solution of which hinges on a combination of classical techniques of analysis involving Stieltjes' continued fractions and multi-point Padé approximations. We propose sufficient conditions needed to ensure the global existence of peakon solutions and analyze the large time asymptotic behaviour whose special features include a formation of pairs of peakons that share asymptotic speeds, as well as Toda-like sorting property.

  16. A problem with inverse time for a singularly perturbed integro-differential equation with diagonal degeneration of the kernel of high order

    NASA Astrophysics Data System (ADS)

    Bobodzhanov, A. A.; Safonov, V. F.

    2016-04-01

    We consider an algorithm for constructing asymptotic solutions regularized in the sense of Lomov (see [1], [2]). We show that such problems can be reduced to integro-differential equations with inverse time. But in contrast to known papers devoted to this topic (see, for example, [3]), in this paper we study a fundamentally new case, which is characterized by the absence, in the differential part, of a linear operator that isolates, in the asymptotics of the solution, constituents described by boundary functions, and by the fact that the integral operator has a kernel with diagonal degeneration of high order. Furthermore, the spectrum of the regularization operator A(t) (see below) may contain purely imaginary eigenvalues, which causes difficulties in the application of the methods of construction of asymptotic solutions proposed in the monograph [3]. Based on an analysis of the principal term of the asymptotics, we isolate a class of inhomogeneities and initial data for which the exact solution of the original problem tends to the limit solution (as ε → +0) on the entire time interval under consideration, also including a boundary-layer zone (that is, we solve the so-called initialization problem). The paper is of a theoretical nature and is designed to lead to a greater understanding of the problems in the theory of singular perturbations. There may be applications in various applied areas where models described by integro-differential equations are used (for example, in elasticity theory, the theory of electrical circuits, and so on).

  17. Three-dimensional unsteady lifting surface theory in the subsonic range

    NASA Technical Reports Server (NTRS)

    Kuessner, H. G.

    1985-01-01

    The methods of unsteady lifting surface theory are surveyed. The linearized Euler equations are simplified by means of a Galileo-Lorentz transformation and a Laplace transformation, so that time and the compressibility of the fluid are reduced to two constants. The solutions to this simplified problem are represented as integrals with a differential kernel; this results in tolerance conditions that any exact solution must satisfy. It is shown that none of the existing three-dimensional lifting surface theories in the subsonic range satisfies these conditions. An oscillating elliptic lifting surface which satisfies the tolerance conditions is calculated through the use of Lamé functions. Numerical examples are calculated for the limiting cases of infinitely stretched elliptic lifting surfaces and of circular lifting surfaces. From the harmonic solutions, arbitrary temporal changes of the downwash are calculated through the use of an inverse Laplace transformation.

  18. Anthropic versus cosmological solutions to the coincidence problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barreira, A.; Avelino, P. P.; Departamento de Fisica da Faculdade de Ciencias da Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto

    2011-05-15

    In this paper, we investigate possible solutions to the coincidence problem in flat phantom dark-energy models with a constant dark-energy equation of state and quintessence models with a linear scalar field potential. These models are representative of a broader class of cosmological scenarios in which the universe has a finite lifetime. We show that, in the absence of anthropic constraints, including a prior probability for the models inversely proportional to the total lifetime of the universe excludes models very close to the Λ cold dark matter model. This relates a cosmological solution to the coincidence problem with a dynamical dark-energy component having an equation-of-state parameter not too close to -1 at the present time. We further show that anthropic constraints, if they are sufficiently stringent, may solve the coincidence problem without the need for dynamical dark energy.

  19. On the theory of thermometric titration.

    PubMed

    Piloyan, G O; Dolinina, Y V

    1974-09-01

    The general equation defining the change in solution temperature ΔT during a thermometric titration is ΔT = T - T₀ = -AV/(1 + BV), where A and B are constants, V is the volume of titrant used to produce temperature T, and T₀ is the initial temperature. There is a linear relation between the inverse values of ΔT and V: 1/ΔT = -a/V - b, where a = 1/A and b = B/A, both a and b being constants. A linear relation between ΔT and V is usually a special case of this general relation, and is valid only over a narrow range of V. Graphs of 1/ΔT vs. 1/V are more suitable for practical calculations than the usual graphs of ΔT vs. V.

  20. Kinematics of an in-parallel actuated manipulator based on the Stewart platform mechanism

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II

    1992-01-01

    This paper presents kinematic equations and solutions for an in-parallel actuated robotic mechanism based on Stewart's platform. These equations are required for inverse position and resolved rate (inverse velocity) platform control. NASA LaRC has a Vehicle Emulator System (VES) platform designed by MIT which is based on Stewart's platform. The inverse position solution is straight-forward and computationally inexpensive. Given the desired position and orientation of the moving platform with respect to the base, the lengths of the prismatic leg actuators are calculated. The forward position solution is more complicated and theoretically has 16 solutions. The position and orientation of the moving platform with respect to the base is calculated given the leg actuator lengths. Two methods are pursued in this paper to solve this problem. The resolved rate (inverse velocity) solution is derived. Given the desired Cartesian velocity of the end-effector, the required leg actuator rates are calculated. The Newton-Raphson Jacobian matrix resulting from the second forward position kinematics solution is a modified inverse Jacobian matrix. Examples and simulations are given for the VES.
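
    The inverse position solution described above reduces to a few lines of vector algebra: each leg length is the distance between its base attachment point and the platform attachment point transformed by the desired pose. A generic sketch follows; the attachment-point coordinates are arbitrary placeholders, not the NASA LaRC VES geometry.

    ```python
    # Inverse position sketch for a Stewart platform: given the pose (R, t) of the
    # moving platform, each leg length is the distance between a base attachment
    # point and the corresponding transformed platform attachment point.
    import numpy as np

    def rotation_rpy(roll, pitch, yaw):
        """Rotation matrix from roll-pitch-yaw angles (radians)."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        return Rz @ Ry @ Rx

    def leg_lengths(base_pts, plat_pts, R, t):
        """Lengths of the six prismatic legs for platform pose (R, t)."""
        return np.linalg.norm((plat_pts @ R.T + t) - base_pts, axis=1)

    # Arbitrary hexagonal attachment points (base and platform frames).
    ang_b = np.deg2rad([0, 60, 120, 180, 240, 300])
    ang_p = ang_b + np.deg2rad(30)
    base_pts = np.c_[2.0 * np.cos(ang_b), 2.0 * np.sin(ang_b), np.zeros(6)]
    plat_pts = np.c_[1.0 * np.cos(ang_p), 1.0 * np.sin(ang_p), np.zeros(6)]

    R = rotation_rpy(0.05, -0.02, 0.10)
    t = np.array([0.0, 0.0, 1.5])
    print(leg_lengths(base_pts, plat_pts, R, t))
    ```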

  1. The novel high-performance 3-D MT inverse solver

    NASA Astrophysics Data System (ADS)

    Kruglyakov, Mikhail; Geraskin, Alexey; Kuvshinov, Alexey

    2016-04-01

    We present a novel, robust, scalable, and fast 3-D magnetotelluric (MT) inverse solver. The solver is written in a multi-language paradigm to make it as efficient, readable and maintainable as possible. Separation-of-concerns and single-responsibility concepts run through the implementation of the solver. As a forward modelling engine, a modern scalable solver, extrEMe, based on the contracting integral equation approach, is used. An iterative gradient-type (quasi-Newton) optimization scheme is invoked to search for the (regularized) inverse problem solution, and an adjoint source approach is used to calculate the gradient of the misfit efficiently. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT response, and supports massive parallelization. Moreover, different parallelization strategies implemented in the code allow optimal usage of the available computational resources for a given problem statement. To parameterize the inverse domain, so-called mask parameterization is implemented, which means that one can merge any subset of forward-modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on platforms ranging from modern laptops to the HPC system Piz Daint (the 6th-ranked supercomputer in the world), demonstrate practically linear scalability of the code up to thousands of nodes.

  2. Novel Dry-Type Glucose Sensor Based on a Metal-Oxide-Semiconductor Capacitor Structure with Horseradish Peroxidase + Glucose Oxidase Catalyzing Layer

    NASA Astrophysics Data System (ADS)

    Lin, Jing-Jenn; Wu, You-Lin; Hsu, Po-Yen

    2007-10-01

    In this paper, we present a novel dry-type glucose sensor based on a metal-oxide-semiconductor capacitor (MOSC) structure using SiO2 as the gate dielectric, in conjunction with a horseradish peroxidase (HRP) + glucose oxidase (GOD) catalyzing layer. The tested glucose solution was dropped directly onto a window opened in the SiO2 layer, with the HRP + GOD catalyzing layer coated on top of the gate dielectric. From the capacitance-voltage (C-V) characteristics of the sensor, we found that the glucose solution can induce an inversion layer at the silicon surface, causing a gate leakage current to flow along the SiO2 surface. The change in gate current, ΔI, before and after the drop of glucose solution exhibits a near-linear relationship with increasing glucose concentration. The ΔI sensitivity is about 1.76 nA cm⁻² M⁻¹, and the current is quite stable 20 min after the glucose solution is dropped.

  3. The influence of pH on biotite dissolution and alteration kinetics at low temperature

    USGS Publications Warehouse

    Acker, James G.; Bricker, O.P.

    1992-01-01

    Biotite dissolution rates in acidic solutions were determined in fluidized-bed reactors and flow-through columns. Biotite dissolution rates increased inversely as a linear function of pH in the pH range 3-7, where the rate order n = -0.34. Biotite dissolved incongruently over this pH range, with preferential release of magnesium and iron from the octahedral layer. Release of tetrahedral silicon was much greater at pH 3 than at higher pH. Iron release was significantly enhanced by low-pH conditions. Solution compositions from a continuous-exposure flow-through column of biotite indicated that biotite dissolves incongruently at pH 4, consistent with alteration to a vermiculite-type product. Solution compositions from a second intermittent-flow column exhibited elevated cation release rates upon the initiation of each exposure to solution. The presence of strong oxidizing agents, the mineral surface area, and sample preparation methodology also influenced the dissolution or alteration kinetics of biotite. © 1992.

  4. Applications of He's semi-inverse method, ITEM and GGM to the Davey-Stewartson equation

    NASA Astrophysics Data System (ADS)

    Zinati, Reza Farshbaf; Manafian, Jalil

    2017-04-01

    We investigate the Davey-Stewartson (DS) equation. Travelling wave solutions are found. In this paper, we demonstrate the effectiveness of the analytical methods, namely He's semi-inverse variational principle method (SIVPM), the improved tan(φ/2)-expansion method (ITEM) and the generalized G'/G-expansion method (GGM), for seeking further exact solutions of the DS equation. These methods are direct, concise and simple to implement compared with other existing methods. Exact solutions of four types have been obtained. The results demonstrate that the aforementioned methods are more efficient than the Ansatz method applied by Mirzazadeh (2015). Abundant exact travelling wave solutions, including solitons and kink, periodic and rational solutions, have been found by the improved tan(φ/2)-expansion and generalized G'/G-expansion methods. By He's semi-inverse variational principle we have obtained dark and bright soliton wave solutions. The obtained semi-inverse variational principle also has profound implications for physical understanding. These solutions might play an important role in engineering and physics. Moreover, using Matlab, some graphical simulations were performed to visualize the behavior of these solutions.

  5. The role of iron species on the turbidity of oxidized phenol solutions in a photo-Fenton system.

    PubMed

    Villota, Natalia; Camarero, Luis M; Lomas, Jose M; Perez-Arce, Jonatan

    2015-01-01

    This work aims at establishing the contribution of the iron species to the turbidity of phenol solutions oxidized with photo-Fenton technology. During oxidation, turbidity increases linearly with time until reaching a maximum value, according to a formation rate that shows a second-order dependence on the catalyst concentration. Next, the decrease in turbidity follows second-order kinetics, with a kinetic constant inversely proportional to the iron dosage raised to the power 0.7. The concentration of iron species is analysed at the point of maximum turbidity as a function of the total amount of iron. For dosages FeT=0-15.0 mg/L, the majority iron species was found to be ferrous ion, whose concentration increases linearly with the dosage of total iron. This result may indicate that photo-reaction of ferric ion occurs, leading to regeneration of ferrous ion. The results obtained with initial dosages FeT=15.0 and 25.0 mg/L suggest that the ferrous ion concentration decreases while the ferric ion concentration increases in a complementary manner. This fact could be explained as a regeneration cycle of the iron species. The observed turbidity is generated by the iron added as a catalyst and the organic matter present in the system. At the point of maximum turbidity, the concentration of ferrous ions is inversely proportional to the concentration of phenol and its dihydroxylated intermediates.

  6. A theoretical formulation of the electrophysiological inverse problem on the sphere

    NASA Astrophysics Data System (ADS)

    Riera, Jorge J.; Valdés, Pedro A.; Tanabe, Kunio; Kawashima, Ryuta

    2006-04-01

    The construction of three-dimensional images of the primary current density (PCD) produced by neuronal activity is a problem of great current interest in the neuroimaging community, though it was initially formulated in the 1970s. Even now there are enthusiastic debates about the authenticity of most of the inverse solutions proposed in the literature, among which low resolution electrical tomography (LORETA) is a focus of attention. However, in our opinion, the capabilities and limitations of the electro- and magnetoencephalographic techniques to determine PCD configurations have not been extensively explored from a theoretical framework, even for simple volume conductor models of the head. In this paper, the electrophysiological inverse problem for the spherical head model is cast in terms of the reproducing kernel Hilbert space (RKHS) formalism, which allows us to identify the null spaces of the implicated linear integral operators and also to define their representers. The PCD are described in terms of a continuous basis for the RKHS, which explicitly separates the harmonic and non-harmonic components. The RKHS concept permits us to bring LORETA into the scope of the general smoothing splines theory. A particular way of calculating the general smoothing splines is illustrated, avoiding a premature brute-force discretization. The Bayes information criterion is used to handle dissimilarities in the signal/noise ratios and physical dimensions of the measurement modalities, which could affect the estimation of the amount of smoothness required for that class of inverse solution to be well specified. In order to validate the proposed method, we have estimated the 3D spherical smoothing splines from two data sets: electric potentials obtained from a skull phantom and magnetic fields recorded from subjects performing a human face recognition experiment.

  7. Quantifying uncertainties of seismic Bayesian inversion of Northern Great Plains

    NASA Astrophysics Data System (ADS)

    Gao, C.; Lekic, V.

    2017-12-01

    Elastic waves excited by earthquakes are the fundamental observations of seismological studies. Seismologists measure information such as travel time, amplitude, and polarization to infer the properties of the earthquake source, seismic wave propagation, and subsurface structure. Across numerous applications, seismic imaging has been able to take advantage of complementary seismic observables to constrain profiles and lateral variations of Earth's elastic properties. Moreover, seismic imaging plays a unique role in multidisciplinary studies of geoscience by providing direct constraints on the unreachable interior of the Earth. Accurate quantification of the uncertainties of inferences made from seismic observations is of paramount importance for interpreting seismic images and testing geological hypotheses. However, such quantification remains challenging and subjective due to the non-linearity and non-uniqueness of the geophysical inverse problem. In this project, we apply a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm for a transdimensional Bayesian inversion of continental lithosphere structure. Such an inversion allows us to quantify the uncertainties of the results by inverting for an ensemble solution. It also yields an adaptive parameterization that enables simultaneous inversion of different elastic properties without imposing strong prior information on the relationship between them. We present retrieved profiles of shear velocity (Vs) and radial anisotropy in the Northern Great Plains using measurements from USArray stations. We use both seismic surface wave dispersion and receiver function data because of their complementary constraints on lithosphere structure. Furthermore, we analyze the uncertainties of both individual and joint inversion of those two data types to quantify the benefit of joint inversion. As an application, we infer the variation of Moho depths and crustal layering across the northern Great Plains.

  8. Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations

    NASA Astrophysics Data System (ADS)

    Zhi, Longxiao; Gu, Hanming

    2018-03-01

    The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though the approximations are concise and convenient to use, they have certain limitations: they are valid only when the difference in elastic parameters between the upper and lower media is small and the incident angle is small, and the inversion of density is not stable. Therefore, we develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the PP-wave reflection coefficient, and in constructing the objective function for inversion we use a Taylor series expansion to linearize the inversion problem. Through the joint AVO inversion of seismic data from the baseline and monitor surveys, we can obtain the P-wave velocity, S-wave velocity and density in the baseline survey and their time-lapse changes simultaneously. We can also estimate the oil saturation change from the inversion results. Compared with time-lapse difference inversion, the joint inversion does not require certain assumptions and can estimate more parameters simultaneously, so it has better applicability. Meanwhile, by using the generalized linear method, the inversion is easily implemented and its computational cost is low. We use a theoretical model to generate synthetic seismic records to test the method and analyze the influence of random noise; the results demonstrate the validity and noise robustness of our method. We also apply the inversion to actual field data and demonstrate the feasibility of our method in real situations.
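
    To illustrate the generalized linear (Gauss-Newton) idea behind the inversion described above, the sketch below linearizes a forward model with a first-order Taylor expansion and iterates least-squares updates. The forward function, parameter values and noise level are placeholders chosen for the example; the paper's actual forward model is the exact Zoeppritz reflection coefficient, which is not reproduced here.

      import numpy as np

      def gauss_newton(forward, m0, d_obs, n_iter=10, eps=1e-6):
          # Generalized-linear iteration: linearize d ~ f(m) + J dm about the
          # current model and apply the least-squares update repeatedly.
          m = np.asarray(m0, dtype=float)
          for _ in range(n_iter):
              r = d_obs - forward(m)                        # data residual
              J = np.empty((d_obs.size, m.size))
              for j in range(m.size):                       # finite-difference Jacobian
                  dm = np.zeros_like(m)
                  dm[j] = eps
                  J[:, j] = (forward(m + dm) - forward(m)) / eps
              m = m + np.linalg.lstsq(J, r, rcond=None)[0]  # linearized model update
          return m

      # Toy forward model standing in for the exact Zoeppritz computation.
      angles = np.deg2rad(np.arange(0.0, 40.0, 5.0))
      def forward(m):
          a, b, c = m
          return a + b * np.sin(angles) ** 2 / (1.0 - c * np.sin(angles) ** 2)

      m_true = np.array([0.08, -0.12, 0.30])
      d_obs = forward(m_true) + 1e-4 * np.random.default_rng(7).standard_normal(angles.size)
      m_est = gauss_newton(forward, m0=np.zeros(3), d_obs=d_obs)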

  9. Bayesian inversion of the global present-day GIA signal uncertainty from RSL data

    NASA Astrophysics Data System (ADS)

    Caron, Lambert; Ivins, Erik R.; Adhikari, Surendra; Larour, Eric

    2017-04-01

    Various geophysical signals measured in the study of present-day climate change (such as changes in the Earth's gravitational potential, ocean altimetry or GPS data) include a secular Glacial Isostatic Adjustment (GIA) contribution that has to be corrected for. Yet, one of the major challenges that Glacial Isostatic Adjustment modelling is currently struggling with is to accurately determine the uncertainty of the predicted present-day GIA signal. This is especially true at the global scale, where coupling between ice history and mantle rheology greatly contributes to the non-uniqueness of the solutions. Here we propose to use more than 11000 paleo sea level records to constrain a set of GIA Bayesian inversions and thoroughly explore the parameter space. We include two linearly relaxing models to represent the mantle rheology and couple them with a scalable ice history model in order to better assess the non-uniqueness of the solutions. From the resulting estimates of the probability density function, we then extract maps of the uncertainty affecting the present-day vertical land motion and geoid due to GIA at the global scale, together with the associated expectation of the signal.

  10. A microwave tomography strategy for structural monitoring

    NASA Astrophysics Data System (ADS)

    Catapano, I.; Crocco, L.; Isernia, T.

    2009-04-01

    The capability of electromagnetic waves to penetrate optically dense regions can be conveniently exploited to provide highly informative images of the internal status of man-made structures in a non-destructive and minimally invasive way. In this framework, as an alternative to the widely adopted radar techniques, Microwave Tomography approaches are worth considering. As a matter of fact, they may accurately reconstruct the permittivity and conductivity distributions of a given region from the knowledge of a set of incident fields and measurements of the corresponding scattered fields. As far as cultural heritage conservation is concerned, this allows not only detection of the anomalies which can possibly damage the integrity and the stability of the structure, but also characterization of their morphology and electrical features, which is useful information for properly planning repair actions. However, since a non-linear and ill-posed inverse scattering problem has to be solved, proper regularization strategies and sophisticated data processing tools have to be adopted to assure the reliability of the results. To pursue this aim, in the last years much attention has been focused on the advantages introduced by diversity in data acquisition (multi-frequency/static/view data) [1,2] as well as on the analysis of the factors affecting the solution of an inverse scattering problem [3]. Moreover, it has been shown in [4] how the degree of non-linearity of the relationship between the scattered field and the electromagnetic parameters of the targets can be changed by properly choosing the mathematical model adopted to formulate the scattering problem. Exploiting the above results, in this work we propose an imaging procedure in which the inverse scattering problem is formulated as an optimization problem: the mathematical relationship between data and unknowns is expressed by means of a convenient integral equation model and the sought solution is defined as the global minimum of a cost functional. In particular, a local minimization scheme is exploited together with a pre-processing step devoted to a preliminary assessment of the location and shape of the anomalies. The effectiveness of the proposed strategy has been preliminarily assessed by means of numerical examples concerning the diagnostics of masonry structures, which will be shown at the Conference. [1] O. M. Bucci, L. Crocco, T. Isernia, and V. Pascazio, Subsurface inverse scattering problems: Quantifying, qualifying and achieving the available information, IEEE Trans. Geosci. Remote Sens., 39(5), 2527-2538, 2001. [2] R. Persico, R. Bernini, and F. Soldovieri, "The role of the measurement configuration in inverse scattering from buried objects under the distorted Born approximation," IEEE Trans. Antennas Propag., vol. 53, no. 6, pp. 1875-1887, Jun. 2005. [3] I. Catapano, L. Crocco, M. D'Urso, T. Isernia, "On the Effect of Support Estimation and of a New Model in 2-D Inverse Scattering Problems," IEEE Trans. Antennas Propagat., vol. 55, no. 6, pp. 1895-1899, 2007. [4] M. D'Urso, I. Catapano, L. Crocco and T. Isernia, Effective solution of 3D scattering problems via series expansions: applicability and a new hybrid scheme, IEEE Trans. Geosci. Remote Sens., vol. 45, no. 3, pp. 639-648, 2007.

  11. Dispersive estimates for rational symbols and local well-posedness of the nonzero energy NV equation. II

    NASA Astrophysics Data System (ADS)

    Kazeykina, Anna; Muñoz, Claudio

    2018-04-01

    We continue our study on the Cauchy problem for the two-dimensional Novikov-Veselov (NV) equation, integrable via the inverse scattering transform for the two dimensional Schrödinger operator at a fixed energy parameter. This work is concerned with the more involved case of a positive energy parameter. For the solution of the linearized equation we derive smoothing and Strichartz estimates by combining new estimates for two different frequency regimes, extending our previous results for the negative energy case [18]. The low frequency regime, which our previous result was not able to treat, is studied in detail. At non-low frequencies we also derive improved smoothing estimates with gain of almost one derivative. Then we combine the linear estimates with a Fourier decomposition method and Xs,b spaces to obtain local well-posedness of NV at positive energy in Hs, s > 1/2. Our result implies, in particular, that at least for s > 1/2, NV does not change its behavior from semilinear to quasilinear as energy changes sign, in contrast to the closely related Kadomtsev-Petviashvili equations. As a complement to our LWP results, we also provide some new explicit solutions of NV at zero energy, generalizations of the lumps solutions, which exhibit new and nonstandard long time behavior. In particular, these solutions blow up in infinite time in L2.

  12. An analytic approach for the study of pulsar spindown

    NASA Astrophysics Data System (ADS)

    Chishtie, F. A.; Zhang, Xiyang; Valluri, S. R.

    2018-07-01

    In this work we develop an analytic approach to study pulsar spindown. We use the monopolar spindown model by Alvarez and Carramiñana (2004 Astron. Astrophys. 414 651–8), which assumes an inverse linear law of magnetic field decay of the pulsar, to extract an all-order formula for the spindown parameters using the Taylor series representation of Jaranowski et al (1998 Phys. Rev. D 58 6300). We further extend the analytic model to incorporate the quadrupole term that accounts for the emission of gravitational radiation, and obtain expressions for the period P and frequency f in terms of transcendental equations. We derive the analytic solution for pulsar frequency spindown in the absence of glitches. We examine the different cases that arise in the analysis of the roots in the solution of the non-linear differential equation for pulsar period evolution. We provide expressions for the spin-down parameters and find that the spindown values are in reasonable agreement with observations. A detection of gravitational waves from pulsars will be the next landmark in the field of multi-messenger gravitational wave astronomy.

  13. Combining experimental techniques with non-linear numerical models to assess the sorption of pesticides on soils

    NASA Astrophysics Data System (ADS)

    Magga, Zoi; Tzovolou, Dimitra N.; Theodoropoulou, Maria A.; Tsakiroglou, Christos D.

    2012-03-01

    The risk assessment of groundwater pollution by pesticides may be based on pesticide sorption and biodegradation kinetic parameters estimated by inverse modeling of datasets from either batch or continuous-flow soil column experiments. In the present work, a chemical non-equilibrium and non-linear 2-site sorption model is incorporated into solute transport models to invert the datasets of batch and soil column experiments, and to estimate the kinetic sorption parameters for two pesticides: N-phosphonomethyl glycine (glyphosate) and 2,4-dichlorophenoxy-acetic acid (2,4-D). When coupling the 2-site sorption model with the 2-region transport model, the soil column datasets enable us to estimate, in addition to the kinetic sorption parameters, the mass-transfer coefficients associated with solute diffusion between mobile and immobile regions. In order to improve the reliability of the models and the kinetic parameter values, a stepwise strategy is required that combines batch and continuous-flow tests with adequate true-to-the-mechanism analytical or numerical models, and decouples the kinetics of purely reactive sorption steps from physical mass-transfer processes.

  14. Analytical Solutions for the Surface States of Bi1-xSbx (0 ≤ x ≲ 0.1)

    NASA Astrophysics Data System (ADS)

    Fuseya, Yuki; Fukuyama, Hidetoshi

    2018-04-01

    Analytical solutions for the surface state (SS) of an extended Wolff Hamiltonian, which is a common Hamiltonian for strongly spin-orbit coupled systems, are obtained both for semi-infinite and finite-thickness boundary conditions. For the semi-infinite system, there are two types of SS solutions: (I-a) linearly crossing SSs in the direct bulk band gap, and (I-b) SSs with linear dispersions entering the bulk conduction or valence bands away from the band edge. For the finite-thickness system, a gap opens in the SS of solution I-a. Numerical solutions for the SS are also obtained based on the tight-binding model of Liu and Allen [Phys. Rev. B 52, 1566 (1995)] for Bi1-xSbx (0 ≤ x ≤ 0.1). A perfect correspondence between the analytic and numerical solutions is obtained around the M̄ point, including their thickness dependence. This is the first time that the character of a numerically obtained SS has been identified with the help of analytical solutions. The size of the gap for the I-a SS can be larger than that of the bulk band gap even for "thick" films ( ≲ 200 bilayers ≃ 80 nm) of pure bismuth. Consequently, in such a film of Bi1-xSbx, there is no apparent change in the SSs through the band inversion at x ≃ 0.04, even though the nature of the SS changes from solution I-a to I-b. Based on our theoretical results, the experimental results on the SS of Bi1-xSbx (0 ≤ x ≲ 0.1) are discussed.

  15. Multidimensional deconvolution of optical microscope and ultrasound imaging using adaptive least-mean-square (LMS) inverse filtering

    NASA Astrophysics Data System (ADS)

    Sapia, Mark Angelo

    2000-11-01

    Three-dimensional microscope images typically suffer from reduced resolution due to the effects of convolution, optical aberrations and out-of-focus blurring. Two-dimensional ultrasound images are also degraded by convolutional blurring and various sources of noise; speckle noise is a major problem in ultrasound images. In microscopy and ultrasound, various methods of digital filtering have been used to improve image quality. Several methods of deconvolution filtering have been used to improve resolution by reversing the convolutional effects, many of which are based on regularization techniques and non-linear constraints. The technique discussed here is a unique linear filter for deconvolving 3D fluorescence microscopy or 2D ultrasound images. The approach is to solve for the filter completely in the spatial domain using an adaptive algorithm that converges to an optimum solution for de-blurring and resolution improvement. There are two key advantages of using an adaptive solution: (1) it efficiently solves for the filter coefficients by taking into account all sources of noise and degraded resolution at the same time, and (2) it achieves near-perfect convergence to the ideal linear deconvolution filter. This linear adaptive technique has other advantages, such as avoiding artifacts of frequency-domain transformations and concurrently adapting to suppress noise. Ultimately, this approach results in better signal-to-noise characteristics with virtually no edge-ringing. Many researchers have not adopted linear techniques because of poor convergence, noise instability and negative-valued data in the results. The methods presented here overcome many of these well-documented disadvantages and provide results that clearly out-perform other linear methods and may also out-perform regularization and constrained algorithms. In particular, the adaptive solution is most responsible for overcoming the poor performance associated with linear techniques. This linear adaptive approach to deconvolution is demonstrated with results of restoring blurred phantoms for both microscopy and ultrasound and restoring 3D microscope images of biological cells and 2D ultrasound images of human subjects (courtesy of General Electric and Diasonics, Inc.).
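
    A minimal one-dimensional sketch of the adaptive least-mean-square (LMS) idea described above, assuming a known reference (desired) signal is available for training; the blur kernel, filter length, step size and delay are illustrative choices, not values from the dissertation.

      import numpy as np

      def lms_inverse_filter(blurred, desired, n_taps=21, mu=0.02, n_epochs=50):
          # Adapt an FIR filter w so that filtering `blurred` approximates `desired`
          # delayed by half the filter length (the classic LMS deconvolution setup).
          w = np.zeros(n_taps)
          delay = n_taps // 2
          for _ in range(n_epochs):
              for n in range(n_taps, len(blurred)):
                  x = blurred[n - n_taps:n][::-1]     # most recent samples first
                  y = np.dot(w, x)                    # current filter output
                  e = desired[n - delay] - y          # instantaneous error
                  w += mu * e * x                     # LMS coefficient update
          return w

      # Toy demonstration: a sparse spike train blurred by a short Gaussian kernel.
      rng = np.random.default_rng(0)
      truth = np.zeros(400)
      truth[rng.integers(0, 400, 12)] = 1.0
      kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
      kernel /= kernel.sum()
      blurred = np.convolve(truth, kernel, mode="same") + 0.01 * rng.standard_normal(400)
      w = lms_inverse_filter(blurred, truth)
      restored = np.convolve(blurred, w, mode="same")  # deconvolved (restored) signal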

  16. Zinc oxide inverse opal enzymatic biosensor

    NASA Astrophysics Data System (ADS)

    You, Xueqiu; Pikul, James H.; King, William P.; Pak, James J.

    2013-06-01

    We report ZnO inverse opal- and nanowire (NW)-based enzymatic glucose biosensors with extended linear detection ranges. The ZnO inverse opal sensors have a 0.01-18 mM linear detection range, which is 2.5 times greater than that of the ZnO NW sensors and 1.5 times greater than that of other reported ZnO sensors. This larger range results from the reduced glucose diffusivity through the inverse opal geometry. The ZnO inverse opal sensors have an average sensitivity of 22.5 μA/(mM cm2), which diminished by 10% after 35 days; they are thus more stable than the ZnO NW sensors, whose sensitivity decreased by 10% after 7 days.

  17. R-Function Relationships for Application in the Fractional Calculus

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.; Hartley, Tom T.

    2000-01-01

    The F-function, and its generalization the R-function, are of fundamental importance in the fractional calculus. It has been shown that the solution of the fundamental linear fractional differential equation may be expressed in terms of these functions. These functions serve as generalizations of the exponential function in the solution of fractional differential equations. Because of this central role in the fractional calculus, this paper explores various intrarelationships of the R-function, which will be useful in further analysis. Relationships of the R-function to the common exponential function, e^t, and its fractional derivatives are shown. From the relationships developed, some important approximations are observed. Further, the inverse relationships of the exponential function, e^t, in terms of the R-function are developed. Also, some approximations for the R-function are developed.

  18. The boundary element method applied to 3D magneto-electro-elastic dynamic problems

    NASA Astrophysics Data System (ADS)

    Igumnov, L. A.; Markov, I. P.; Kuznetsov, Iu A.

    2017-11-01

    Due to their coupling properties, magneto-electro-elastic materials possess a wide range of applications. They exhibit generally anisotropic behaviour. Three-dimensional transient analyses of magneto-electro-elastic solids can hardly be found in the literature. A 3D direct boundary element formulation based on the weakly-singular boundary integral equations in the Laplace domain is presented in this work for solving dynamic linear magneto-electro-elastic problems. Integral expressions of the three-dimensional fundamental solutions are employed. Spatial discretization is based on a collocation method with mixed boundary elements. The convolution quadrature method is used as a numerical inverse Laplace transform scheme to obtain time domain solutions. Numerical examples are provided to illustrate the capability of the proposed approach to treat highly dynamic problems.

  19. R-function relationships for application in the fractional calculus.

    PubMed

    Lorenzo, Carl F; Hartley, Tom T

    2008-01-01

    The F-function, and its generalization the R-function, are of fundamental importance in the fractional calculus. It has been shown that the solution of the fundamental linear fractional differential equation may be expressed in terms of these functions. These functions serve as generalizations of the exponential function in the solution of fractional differential equations. Because of this central role in the fractional calculus, this paper explores various intrarelationships of the R-function, which will be useful in further analysis. Relationships of the R-function to the common exponential function, e^t, and its fractional derivatives are shown. From the relationships developed, some important approximations are observed. Further, the inverse relationships of the exponential function, e^t, in terms of the R-function are developed. Also, some approximations for the R-function are developed.

  20. Influence of Non-linear Radiation Heat Flux on Rotating Maxwell Fluid over a Deformable Surface: A Numerical Study

    NASA Astrophysics Data System (ADS)

    Mustafa, M.; Mushtaq, A.; Hayat, T.; Alsaedi, A.

    2018-04-01

    A mathematical model for Maxwell fluid flow in a rotating frame induced by an isothermal stretching wall is explored numerically. Scale-analysis-based boundary layer approximations are applied to simplify the conservation relations, which are later converted to similar forms via appropriate substitutions. A numerical approach is utilized to derive similarity solutions for a broad range of Deborah number. The results predict that the velocity distributions are inversely proportional to the stress relaxation time. This outcome is different from that observed for the elastic parameter of second grade fluid. Unlike the non-rotating frame, the solution curves are oscillatory decaying functions of the similarity variable. As the angular velocity enlarges, temperature rises and a significant drop in the heat transfer coefficient occurs. We note that the wall slope of temperature has an asymptotically decaying profile against the wall-to-ambient ratio parameter. From the qualitative viewpoint, the temperature ratio parameter and the radiation parameter have similar effects on the thermal boundary layer. Furthermore, the radiation parameter has a definite role in improving the cooling process of the stretching boundary. A comparative study of the current numerical computations and those from existing studies is also presented in a limiting case. To our knowledge, the phenomenon of non-linear radiation in rotating viscoelastic flow due to a linearly stretched plate is modeled here for the first time.

  1. The 2-D magnetotelluric inverse problem solved with optimization

    NASA Astrophysics Data System (ADS)

    van Beusekom, Ashley E.; Parker, Robert L.; Bank, Randolph E.; Gill, Philip E.; Constable, Steven

    2011-02-01

    The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on using PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell's equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert penalizing size and roughness giving solutions that are similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.

  2. Thermal Diagnostics with the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory: A Validated Method for Differential Emission Measure Inversions

    NASA Astrophysics Data System (ADS)

    Cheung, Mark C. M.; Boerner, P.; Schrijver, C. J.; Testa, P.; Chen, F.; Peter, H.; Malanushenko, A.

    2015-07-01

    We present a new method for performing differential emission measure (DEM) inversions on narrow-band EUV images from the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory. The method yields positive definite DEM solutions by solving a linear program. This method has been validated against a diverse set of thermal models of varying complexity and realism. These include (1) idealized Gaussian DEM distributions, (2) 3D models of NOAA Active Region 11158 comprising quasi-steady loop atmospheres in a nonlinear force-free field, and (3) thermodynamic models from a fully compressible, 3D MHD simulation of active region (AR) corona formation following magnetic flux emergence. We then present results from the application of the method to AIA observations of Active Region 11158, comparing the region's thermal structure on two successive solar rotations. Additionally, we show how the DEM inversion method can be adapted to simultaneously invert AIA and Hinode X-ray Telescope data, and how supplementing AIA data with the latter improves the inversion result. The speed of the method allows for routine production of DEM maps, thus facilitating science studies that require tracking of the thermal structure of the solar corona in time and space.
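
    A minimal sketch of the linear-program step described above, assuming a known temperature response matrix and a per-channel misfit tolerance; the response matrix, tolerances and toy DEM below are synthetic stand-ins, and the actual AIA method uses a basis-function parameterization that is omitted here.

      import numpy as np
      from scipy.optimize import linprog

      def dem_linprog(response, counts, tol):
          # Positive DEM by linear programming: minimize sum(x) subject to
          # |R x - y| <= tol and x >= 0, so the solution is non-negative by construction.
          n_ch, n_T = response.shape
          A_ub = np.vstack([response, -response])
          b_ub = np.concatenate([counts + tol, -(counts - tol)])
          res = linprog(c=np.ones(n_T), A_ub=A_ub, b_ub=b_ub,
                        bounds=[(0, None)] * n_T, method="highs")
          return res.x if res.success else None

      # Toy example with a made-up 6-channel, 20-temperature-bin response matrix.
      rng = np.random.default_rng(1)
      R = np.abs(rng.normal(size=(6, 20)))
      dem_true = np.exp(-0.5 * ((np.arange(20) - 8) / 3.0) ** 2)
      y = R @ dem_true
      dem = dem_linprog(R, y, tol=0.05 * y)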

  3. THERMAL DIAGNOSTICS WITH THE ATMOSPHERIC IMAGING ASSEMBLY ON BOARD THE SOLAR DYNAMICS OBSERVATORY: A VALIDATED METHOD FOR DIFFERENTIAL EMISSION MEASURE INVERSIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Mark C. M.; Boerner, P.; Schrijver, C. J.

    We present a new method for performing differential emission measure (DEM) inversions on narrow-band EUV images from the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory. The method yields positive definite DEM solutions by solving a linear program. This method has been validated against a diverse set of thermal models of varying complexity and realism. These include (1) idealized Gaussian DEM distributions, (2) 3D models of NOAA Active Region 11158 comprising quasi-steady loop atmospheres in a nonlinear force-free field, and (3) thermodynamic models from a fully compressible, 3D MHD simulation of active region (AR) corona formation following magnetic flux emergence. We then present results from the application of the method to AIA observations of Active Region 11158, comparing the region's thermal structure on two successive solar rotations. Additionally, we show how the DEM inversion method can be adapted to simultaneously invert AIA and Hinode X-ray Telescope data, and how supplementing AIA data with the latter improves the inversion result. The speed of the method allows for routine production of DEM maps, thus facilitating science studies that require tracking of the thermal structure of the solar corona in time and space.

  4. Full waveform time domain solutions for source and induced magnetotelluric and controlled-source electromagnetic fields using quasi-equivalent time domain decomposition and GPU parallelization

    NASA Astrophysics Data System (ADS)

    Imamura, N.; Schultz, A.

    2015-12-01

    Recently, a full waveform time domain solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations of non-zero wavenumber, the ability to operate in areas of high levels of source signal spatial complexity and non-stationarity, etc. This goal would not be attainable if one were to adopt the finite difference time-domain (FDTD) approach for the forward problem. This is particularly true for the case of MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across the large frequency bandwidth: for an FDTD simulation, the time step must be fine enough to resolve the highest frequency, while the number of time steps must also cover the lowest frequency. This leads to a linear system that is computationally burdensome to solve. We have implemented a code that addresses this situation through the use of a fictitious wave domain method and GPUs to speed up the computation. We also substantially reduce the size of the linear systems by applying concepts from successive cascade decimation, through quasi-equivalent time domain decomposition. By combining these refinements, we have made good progress toward implementing the core of a full waveform joint source field/earth conductivity inverse modeling method. We found that the use of a previous-generation CPU/GPU combination speeds the computations by an order of magnitude over a parallel CPU-only approach; in part, this arises from the use of the quasi-equivalent time domain decomposition, which shrinks the size of the linear system dramatically.

  5. Phenanthrene and 2,2',5,5'-PCB sorption by several soils from methanol-water solutions: the effect of weathering and solute structure.

    PubMed

    Hyun, Seunghun; Kim, Minhee; Baek, Kitae; Lee, Linda S

    2010-01-01

    Sorption of phenanthrene and 2,2',5,5'-polychlorinated biphenyl (PCB52) by five differently weathered soils was measured in water and at low methanol volume fractions (f(c)0.5) as a function of the apparent solution pH (pH(app)). Two weathered oxisols (A2 and DRC), moderately weathered alfisols (Toronto) and two young soils (K5 and Webster) were used. The K(m) (linear sorption coefficient) values, which decrease log-linearly with f(c), were interpreted using a cosolvency sorption model. For phenanthrene sorption at the natural pH, the empirical constant (alpha) ranged between 0.95 and 1.14, and was in the order of oxisols (A2 and DRC)
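
    A short sketch of how the cosolvency sorption model mentioned above is typically applied: log10(Km) is regressed against the cosolvent fraction fc and the slope is interpreted as -alpha*sigma. All numerical values below (the Km data and the cosolvency power sigma) are hypothetical and only illustrate the fitting step.

      import numpy as np

      # Hypothetical data: linear sorption coefficients K_m measured at several
      # methanol volume fractions f_c (values are illustrative, not from the paper).
      f_c = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
      K_m = np.array([120.0, 70.0, 41.0, 24.0, 14.0])      # L/kg

      # Cosolvency sorption model: log10(K_m) = log10(K_w) - alpha * sigma * f_c,
      # i.e. log10(K_m) decreases log-linearly with the cosolvent fraction.
      slope, intercept = np.polyfit(f_c, np.log10(K_m), 1)
      K_w = 10.0 ** intercept                # sorption coefficient in pure water
      sigma = 2.3                            # assumed cosolvency power of the solute
      alpha = -slope / sigma                 # empirical constant of the model
      print(f"K_w = {K_w:.1f} L/kg, alpha = {alpha:.2f}")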

  6. The Baker-Akhiezer Function and Factorization of the Chebotarev-Khrapkov Matrix

    NASA Astrophysics Data System (ADS)

    Antipov, Yuri A.

    2014-10-01

    A new technique is proposed for the solution of the Riemann-Hilbert problem with the Chebotarev-Khrapkov matrix coefficient G(t) = α1(t)I + α2(t)Q(t), where α1(t), α2(t) ∈ H(L), I = diag{1, 1}, and Q(t) is a 2×2 zero-trace polynomial matrix. This problem has numerous applications in elasticity and diffraction theory. The main feature of the method is the removal of essential singularities of the solution to the associated homogeneous scalar Riemann-Hilbert problem on the hyperelliptic surface of an algebraic function by means of the Baker-Akhiezer function. The consequent application of this function for the derivation of the general solution to the vector Riemann-Hilbert problem requires finding the ρ zeros of the Baker-Akhiezer function (ρ is the genus of the surface). These zeros are recovered through the solution to the associated Jacobi problem of inversion of abelian integrals or, equivalently, the determination of the zeros of the associated degree-ρ polynomial and the solution of a certain linear algebraic system of ρ equations.

  7. A new, double-inversion mechanism of the F- + CH3Cl SN2 reaction in aqueous solution.

    PubMed

    Liu, Peng; Wang, Dunyou; Xu, Yulong

    2016-11-23

    Atomic-level, bimolecular nucleophilic substitution reaction mechanisms have been studied mostly in the gas phase, but the gas-phase results cannot be expected to reliably describe condensed-phase chemistry. As a novel, double-inversion mechanism has just been found for the F- + CH3Cl SN2 reaction in the gas phase [Nat. Commun., 2015, 6, 5972], here, using multi-level quantum mechanics methods combined with the molecular mechanics method, we discovered a new, double-inversion mechanism for this reaction in aqueous solution. However, the structures of the stationary points along the reaction path show significant differences from those in the gas phase due to the strong influence of solvent and solute interactions, especially due to the hydrogen bonds formed between the solute and the solvent. More importantly, the relationship between the two double-inversion transition states is not clear in the gas phase, but here we revealed a novel intermediate complex serving as a "connecting link" between the two transition states of the abstraction-induced inversion and the Walden-inversion mechanisms. A detailed reaction path was constructed to show the atomic-level evolution of this novel double reaction mechanism in aqueous solution. The potentials of mean force were calculated and the obtained Walden-inversion barrier height agrees well with the available experimental value.

  8. Errors in Tsunami Source Estimation from Tide Gauges

    NASA Astrophysics Data System (ADS)

    Arcas, D.

    2012-12-01

    Linearity of tsunami waves in deep water can be assessed by comparing the flow speed u to the wave propagation speed √(gh). In real tsunami scenarios this evaluation becomes impractical due to the absence of observational data on tsunami flow velocities in shallow water. Consequently, the extent of validity of the linear regime in the ocean is unclear. Linearity is the fundamental assumption behind tsunami source inversion processes based on linear combinations of unit propagation runs from a deep-water propagation database (Gica et al., 2008). The primary tsunami elevation data for such inversions are usually provided by National Oceanic and Atmospheric Administration (NOAA) deep-water tsunami detection systems known as DART. The use of tide gauge data for such inversions is more controversial due to the uncertainty of wave linearity at the depth of the tide gauge site. This study demonstrates the inaccuracies incurred in source estimation when tide gauge data are used in conjunction with a linear combination procedure for tsunami source estimation.
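
    The linear-combination step referred to above can be sketched as a (non-negative) least-squares fit of precomputed unit-source waveforms to a recorded time series. The waveforms and weights below are synthetic placeholders; an operational inversion would use the deep-water propagation database and DART records.

      import numpy as np
      from scipy.optimize import nnls

      def invert_source(unit_waveforms, observed):
          # unit_waveforms: (n_samples, n_unit_sources), one column per unit source
          # observed:       (n_samples,) recorded waveform
          # Solve min ||G a - d|| with a >= 0: the linear-combination step that
          # presumes the recorded waves behave linearly.
          weights, residual_norm = nnls(unit_waveforms, observed)
          return weights, residual_norm

      # Toy check: two "unit sources" and a record built from them plus noise.
      t = np.linspace(0.0, 6.0, 300)
      G = np.column_stack([np.sin(t) * np.exp(-0.3 * t),
                           np.sin(2.0 * t) * np.exp(-0.2 * t)])
      d = G @ np.array([1.5, 0.7]) + 0.02 * np.random.default_rng(2).standard_normal(t.size)
      a, res = invert_source(G, d)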

  9. The effect of delays on filament oscillations and stability

    NASA Astrophysics Data System (ADS)

    van den Oord, G. H. J.; Schutgens, N. A. J.; Kuperus, M.

    1998-11-01

    We discuss the linear response of a filament to perturbations, taking the finite communication time between the filament and the photosphere into account. The finite communication time introduces delays in the system. Recently Schutgens (1997ab) investigated the solutions of the delay equation for vertical perturbations. In this paper we expand his analysis by also considering horizontal and coupled oscillations; the latter occur in asymmetric coronal fields. We also discuss the effect of Alfven wave emission on filament oscillations and show that wave emission is important for stabilizing filaments. We introduce a fairly straightforward method to study the solutions of delay equations as a function of the filament-photosphere communication time. A solution can be described by a linear combination of damped harmonic oscillations, each characterized by a frequency, a damping/growth time and, accordingly, a quality factor. As a secondary result of our analysis we show that, within the context of line current models, Kippenhahn/Schlüter-type filament equilibria can never be stable in the horizontal and the vertical direction at the same time, but we also demonstrate that Kuperus/Raadu-type equilibria can account for either an inverse or a normal polarity signature. The diagnostic value of our analysis for determining, e.g., the filament current from observations of oscillating filaments is discussed.

  10. A constrained regularization method for inverting data represented by linear algebraic or integral equations

    NASA Astrophysics Data System (ADS)

    Provencher, Stephen W.

    1982-09-01

    CONTIN is a portable Fortran IV package for inverting noisy linear operator equations. These problems occur in the analysis of data from a wide variety of experiments. They are generally ill-posed problems, which means that errors in an unregularized inversion are unbounded. Instead, CONTIN seeks the optimal solution by incorporating parsimony and any statistical prior knowledge into the regularizor, and absolute prior knowledge into equality and inequality constraints. This can greatly increase the resolution and accuracy of the solution. CONTIN is very flexible, consisting of a core of about 50 subprograms plus 13 small "USER" subprograms, which the user can easily modify to specify special-purpose constraints, regularizors, operator equations, simulations, statistical weighting, etc. Special collections of USER subprograms are available for photon correlation spectroscopy, multicomponent spectra, and Fourier-Bessel, Fourier and Laplace transforms. Numerically stable algorithms are used throughout CONTIN. A fairly precise definition of information content in terms of degrees of freedom is given. The regularization parameter can be chosen automatically on the basis of an F-test and confidence region. The interpretation of the latter and of error estimates based on the covariance matrix of the constrained regularized solution are discussed. The strategies, methods and options in CONTIN are outlined. The program itself is described in the following paper.
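
    A minimal sketch in the spirit of the constrained regularization described above: a smoothness regularizor plus a non-negativity constraint applied to a noisy Laplace-type kernel. CONTIN's equality constraints, statistical weighting and automatic F-test choice of the regularization parameter are omitted, and the kernel and noise level are illustrative.

      import numpy as np
      from scipy.optimize import nnls

      def regularized_nonneg_inversion(A, b, lam):
          # Minimize ||A x - b||^2 + lam^2 ||D x||^2 subject to x >= 0, where D is a
          # second-difference operator acting as a smoothness/parsimony regularizor.
          n = A.shape[1]
          D = np.diff(np.eye(n), n=2, axis=0)            # (n-2, n) second differences
          A_aug = np.vstack([A, lam * D])
          b_aug = np.concatenate([b, np.zeros(n - 2)])
          x, _ = nnls(A_aug, b_aug)
          return x

      # Toy Laplace-transform kernel: b(t_i) = sum_j exp(-t_i * s_j) x(s_j).
      t = np.linspace(0.05, 5.0, 60)
      s = np.linspace(0.1, 10.0, 80)
      A = np.exp(-np.outer(t, s))
      x_true = np.exp(-0.5 * ((s - 3.0) / 0.5) ** 2)
      b = A @ x_true + 1e-3 * np.random.default_rng(3).standard_normal(t.size)
      x_est = regularized_nonneg_inversion(A, b, lam=1e-2)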

  11. Solutions to inverse plume in a crosswind problem using a predictor-corrector method

    NASA Astrophysics Data System (ADS)

    Vanderveer, Joseph; Jaluria, Yogesh

    2013-11-01

    An investigation of minimalist solutions to the inverse convection problem of a plume in a crosswind has led to the development of a predictor-corrector method. The inverse problem is to predict the strength and location of the plume with respect to a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, allowing the generation of two inverse interpolation functions. These functions in turn are utilized by the predictor step to acquire the plume strength. Finally, the same interpolation functions, with corrections from the plume strength, are used to solve for the plume location. Through optimization of the relative location of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After the optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.

  12. Constitutive error based parameter estimation technique for plate structures using free vibration signatures

    NASA Astrophysics Data System (ADS)

    Guchhait, Shyamal; Banerjee, Biswanath

    2018-04-01

    In this paper, a variant of the constitutive equation error based material parameter estimation procedure for linear elastic plates is developed from partially measured free vibration signatures. It has been reported in many research articles that mode shape curvatures are much more sensitive than the mode shapes themselves for localizing inhomogeneity. Following this idea, an identification procedure is framed as an optimization problem in which the proposed cost function measures the error in the constitutive relation due to incompatible curvature/strain and moment/stress fields. Unlike the standard constitutive equation error based procedure, wherein the solution of a coupled system is unavoidable in each iteration, we generate these incompatible fields via two linear solves. A simple, yet effective, penalty based approach is followed to incorporate measured data. The penalization parameter not only helps in incorporating corrupted measurement data weakly but also acts as a regularizer against the ill-posedness of the inverse problem. Explicit linear update formulas are then developed for anisotropic linear elastic material. Numerical examples are provided to show the applicability of the proposed technique. Finally, an experimental validation is also provided.

  13. Solution of the symmetric eigenproblem AX=lambda BX by delayed division

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.; Bains, N. J. C.

    1986-01-01

    Delayed division is an iterative method for solving the linear eigenvalue problem AX = lambda BX for a limited number of small eigenvalues and their corresponding eigenvectors. The distinctive feature of the method is the reduction of the problem to an approximate triangular form by systematically dropping quadratic terms in the eigenvalue lambda. The report describes the pivoting strategy in the reduction and the method for preserving symmetry in submatrices at each reduction step. Along with the approximate triangular reduction, the report extends some techniques used in the method of inverse subspace iteration. Examples are included for problems of varying complexity.

  14. Periodic Pulay method for robust and efficient convergence acceleration of self-consistent field iterations

    DOE PAGES

    Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.

    2016-01-21

    Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, and linear mixing is performed on all other iterations. Lastly, we demonstrate through numerical tests on a wide variety of materials systems in the framework of density functional theory that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
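
    A minimal sketch of the periodic Pulay idea described above, applied to a generic fixed-point problem x = g(x): linear mixing on most iterations, with a Pulay (DIIS) extrapolation over the stored residual history every few iterations. The mixing parameter, period and history length are illustrative, and a real electronic-structure code would mix densities or potentials rather than this toy map.

      import numpy as np

      def periodic_pulay(g, x0, alpha=0.2, period=3, history=5, tol=1e-10, maxiter=200):
          # Solve x = g(x) by linear mixing, with a Pulay (DIIS) extrapolation
          # applied every `period`-th iteration over the stored history.
          x = np.asarray(x0, dtype=float)
          xs, fs = [], []
          for k in range(1, maxiter + 1):
              f = g(x) - x                           # residual of the fixed-point map
              xs.append(x.copy()); fs.append(f.copy())
              if len(xs) > history:
                  xs.pop(0); fs.pop(0)
              if np.linalg.norm(f) < tol:
                  return x, k
              if k % period == 0 and len(fs) >= 2:
                  # Pulay step: coefficients c minimise ||sum_i c_i f_i||, sum_i c_i = 1.
                  F = np.array(fs).T                 # (n, m) residual history
                  m = F.shape[1]
                  B = np.zeros((m + 1, m + 1))
                  B[:m, :m] = F.T @ F
                  B[m, :m] = B[:m, m] = 1.0
                  rhs = np.zeros(m + 1); rhs[m] = 1.0
                  c = np.linalg.solve(B, rhs)[:m]
                  x = np.array(xs).T @ c + alpha * (F @ c)
              else:
                  x = x + alpha * f                  # plain linear mixing
          return x, maxiter

      # Toy self-consistency problem: a mildly nonlinear elementwise map.
      g = lambda x: 0.5 * np.cos(x) + 0.1 * x
      x_star, n_iter = periodic_pulay(g, x0=np.zeros(4))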

  15. Hybrid dual-Fourier tomographic algorithm for fast three-dimensional optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

    A reconstruction technique for reducing the computational burden of 3D image processing, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to thereby provide high-speed inverse computations. The inverse algorithm uses a hybrid transfer to provide fast Fourier inversion for data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.

  16. Kinematic equations for control of the redundant eight-degree-of-freedom advanced research manipulator 2

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II

    1992-01-01

    The forward position and velocity kinematics for the redundant eight-degree-of-freedom Advanced Research Manipulator 2 (ARM2) are presented. Inverse position and velocity kinematic solutions are also presented. The approach in this paper is to specify two of the unknowns and solve for the remaining six unknowns. Two unknowns can be specified with two restrictions. First, the elbow joint angle and rate cannot be specified because they are known from the end-effector position and velocity. Second, one unknown must be specified from the four-jointed wrist, and the second from joints that translate the wrist, elbow joint excluded. There are eight solutions to the inverse position problem. The inverse velocity solution is unique, assuming the Jacobian matrix is not singular. A discussion of singularities is based on specifying two joint rates and analyzing the reduced Jacobian matrix. When this matrix is singular, the generalized inverse may be used as an alternate solution. Computer simulations were developed to verify the equations. Examples demonstrate agreement between forward and inverse solutions.
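
    The "specify two, solve for six" strategy described above can be sketched as follows: the two specified joint-rate columns of the Jacobian are moved to the right-hand side and the remaining 6x6 reduced Jacobian is solved, falling back to a generalized (pseudo) inverse when it is singular. The Jacobian below is a random placeholder, not the actual ARM2 kinematics.

      import numpy as np

      def solve_joint_rates(J, xdot, specified):
          # J: (6, 8) manipulator Jacobian; xdot: (6,) desired end-effector twist;
          # specified: dict {joint_index: joint_rate} fixing two of the eight rates.
          fixed = sorted(specified)
          free = [j for j in range(J.shape[1]) if j not in fixed]
          rhs = xdot - J[:, fixed] @ np.array([specified[j] for j in fixed])
          J_red = J[:, free]                              # reduced 6x6 Jacobian
          if np.linalg.matrix_rank(J_red) < 6:
              # Singular reduced Jacobian: use a generalized (pseudo) inverse instead.
              qdot_free = np.linalg.pinv(J_red) @ rhs
          else:
              qdot_free = np.linalg.solve(J_red, rhs)
          qdot = np.zeros(J.shape[1])
          qdot[free] = qdot_free
          for j, rate in specified.items():
              qdot[j] = rate
          return qdot

      # Toy example with a random (hypothetical) Jacobian.
      rng = np.random.default_rng(4)
      J = rng.normal(size=(6, 8))
      qdot = solve_joint_rates(J, xdot=np.ones(6), specified={4: 0.0, 7: 0.1})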

  17. A complete analytical solution for the inverse instantaneous kinematics of a spherical-revolute-spherical (7R) redundant manipulator

    NASA Technical Reports Server (NTRS)

    Podhorodeski, R. P.; Fenton, R. G.; Goldenberg, A. A.

    1989-01-01

    Using a method based upon resolving joint velocities using reciprocal screw quantities, compact analytical expressions are generated for the inverse solution of the joint rates of a seven revolute (spherical-revolute-spherical) manipulator. The method uses a sequential decomposition of screw coordinates to identify reciprocal screw quantities used in the resolution of a particular joint rate solution, and also to identify a Jacobian null-space basis used for the direct solution of optimal joint rates. The results of the screw decomposition are used to study special configurations of the manipulator, generating expressions for the inverse velocity solution for all non-singular configurations of the manipulator, and identifying singular configurations and their characteristics. Two functions are therefore served: a new general method for the solution of the inverse velocity problem is presented; and complete analytical expressions are derived for the resolution of the joint rates of a seven degree of freedom manipulator useful for telerobotic and industrial robotic application.

  18. Reverse Flood Routing with the Lag-and-Route Storage Model

    NASA Astrophysics Data System (ADS)

    Mazi, K.; Koussis, A. D.

    2010-09-01

    This work presents a method for reverse routing of flood waves in open channels, which is an inverse problem of the signal identification type. Inflow determination from outflow measurements is useful in hydrologic forensics and in optimal reservoir control, but has been seldom studied. Such problems are ill posed and their solution is sensitive to small perturbations present in the data, or to any related uncertainty. Therefore the major difficulty in solving this inverse problem consists in controlling the amplification of errors that inevitably befall flow measurements, from which the inflow signal is to be determined. The lag-and-route model offers a convenient framework for reverse routing, because not only is formal deconvolution not required, but also reverse routing is through a single linear reservoir. In addition, this inversion degenerates to calculating the intermediate inflow (prior to the lag step) simply as the sum of the outflow and of its time derivative multiplied by the reservoir’s time constant. The remaining time shifting (lag) of the intermediate, reversed flow presents no complications, as pure translation causes no error amplification. Note that reverse routing with the inverted Muskingum scheme (Koussis et al., submitted to the 12th Plinius Conference) fails when that scheme is specialised to the Kalinin-Miljukov model (linear reservoirs in series). The principal functioning of the reverse routing procedure was verified first with perfect field data (outflow hydrograph generated by forward routing of a known inflow hydrograph). The field data were then seeded with random error. To smooth the oscillations caused by the imperfect (measured) outflow data, we applied a multipoint Savitzky-Golay low-pass filter. The combination of reverse routing and filtering achieved an effective recovery of the inflow signal extremely efficiently. Specifically, we compared the reverse routing results of the inverted lag-and-route model and of the inverted Kalinin-Miljukov model. The latter applies the lag-and-route model’s single-reservoir inversion scheme sequentially to its cascade of linear reservoirs, the number of which is related to the stream's hydromorphology. For this purpose, we used the example of Bruen & Dooge (2007), who back-routed flow hydrographs in a 100-km long prismatic channel using a scheme for the reverse solution of the St. Venant equations of flood wave motion. The lag-and-route reverse routing model recovered the inflow hydrograph with comparable accuracy to that of the multi-reservoir, inverted Kalinin-Miljukov model, both performing as well as the box-scheme for reverse routing with the St. Venant equations. In conclusion, the success in the regaining of the inflow signal by the devised single-reservoir reverse routing procedure, with multipoint low-pass filtering, can be attributed to its simple computational structure that endows it with remarkable robustness and exceptional efficiency.
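
    A minimal sketch of the single-reservoir reverse routing with low-pass filtering described above: the noisy outflow is smoothed with a Savitzky-Golay filter, the intermediate inflow is recovered as O + K dO/dt, and the lag is undone by a pure time shift. The reservoir constant, lag, filter window and synthetic hydrograph below are illustrative values only.

      import numpy as np
      from scipy.signal import savgol_filter

      def reverse_lag_and_route(outflow, dt, K, lag_steps, window=11, order=3):
          # Smooth the noisy record, form the intermediate inflow O + K dO/dt,
          # then shift back in time by the lag (pure translation).
          smooth = savgol_filter(outflow, window_length=window, polyorder=order)
          dOdt = np.gradient(smooth, dt)
          intermediate = smooth + K * dOdt
          inflow = np.roll(intermediate, -lag_steps)   # undo the lag
          inflow[-lag_steps:] = intermediate[-1]       # trailing values are padding
          return inflow

      # Synthetic test: route a known inflow forward, add noise, then invert it.
      dt, K, lag_steps = 0.5, 6.0, 8                   # hours, hours, samples (illustrative)
      t = np.arange(0, 120, dt)
      inflow_true = 5.0 + 40.0 * np.exp(-((t - 30.0) / 10.0) ** 2)
      outflow = np.empty_like(t); outflow[0] = inflow_true[0]
      for i in range(1, t.size):                       # forward model: lagged linear reservoir
          upstream = inflow_true[max(i - lag_steps, 0)]
          outflow[i] = outflow[i - 1] + dt / K * (upstream - outflow[i - 1])
      noisy = outflow + 0.3 * np.random.default_rng(5).standard_normal(t.size)
      inflow_est = reverse_lag_and_route(noisy, dt, K, lag_steps)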

  19. Inverse kinematic solution for near-simple robots and its application to robot calibration

    NASA Technical Reports Server (NTRS)

    Hayati, Samad A.; Roston, Gerald P.

    1986-01-01

    This paper provides an inverse kinematic solution for a class of robot manipulators called near-simple manipulators. The kinematics of these manipulators differ from those of simple-robots by small parameter variations. Although most robots are by design simple, in practice, due to manufacturing tolerances, every robot is near-simple. The method in this paper gives an approximate inverse kinematics solution for real time applications based on the nominal solution for these robots. The validity of the results are tested both by a simulation study and by applying the algorithm to a PUMA robot.

  20. Deformed coset models from gauged WZW actions

    NASA Astrophysics Data System (ADS)

    Park, Q.-Han

    1994-06-01

    A general Lagrangian formulation of integrably deformed G/H-coset models is given. We consider the G/H-coset model in terms of the gauged Wess-Zumino-Witten action and obtain an integrable deformation by adding a potential energy term Tr(g T g^{-1} T̄), where the algebra elements T, T̄ belong to the center of the algebra h associated with the subgroup H. We show that the classical equation of motion of the deformed coset model can be identified with the integrability condition of certain linear equations which makes the use of the inverse scattering method possible. Using the linear equation, we give a systematic way to construct infinitely many conserved currents as well as soliton solutions. In the case of the parafermionic SU(2)/U(1)-coset model, we derive n-solitons and conserved currents explicitly.

  1. Pareto joint inversion of 2D magnetotelluric and gravity data

    NASA Astrophysics Data System (ADS)

    Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek

    2015-04-01

    In this contribution, the first results of the "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" project are described. At this stage of the development, a Pareto joint inversion scheme for 2D MT and gravity data was used. Additionally, seismic data were provided to set constraints for the inversion. A Sharp Boundary Interface (SBI) approach and a model description with a set of polygons were used to limit the dimensionality of the solution space. The main engine was based on modified Particle Swarm Optimization (PSO). This algorithm was adapted to handle two or more target functions at once. An additional algorithm was used to eliminate non-realistic solution proposals. Because PSO is a method of stochastic global optimization, it requires many proposals to be evaluated to find a single Pareto solution and then compose a Pareto front; to optimize this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. There are many advantages of the proposed approach to joint inversion problems. First of all, the Pareto scheme eliminates the cumbersome rescaling of the target functions, which can strongly affect the final solution. Secondly, the whole set of solutions is created in one optimization run, providing a choice of the final solution. This choice can be based on qualitative data that are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the problem of dimensionality, but also makes constraining the solution easier. At this stage of the work, the decision was made to test the approach using MT and gravity data, because this combination is often used in practice. It is important to mention that the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data. The presented results were obtained for synthetic models imitating real geological conditions, where the interesting density distributions are relatively shallow and resistivity changes are related to deeper parts. These conditions are well suited for joint inversion of MT and gravity data. In the next stage of development, further code optimization and extensive tests on real data will be carried out. The presented work was supported by the Polish National Centre for Research and Development under contract number POIG.01.04.00-12-279/13
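
    The Pareto-front step described above can be sketched as the selection of non-dominated proposals from the set evaluated by the swarm. The two misfit vectors below are random placeholders standing in for the MT and gravity target functions.

      import numpy as np

      def pareto_front(misfit_mt, misfit_grav):
          # Keep proposal i unless some other proposal is at least as good for both
          # target functions and strictly better for at least one of them.
          f = np.column_stack([misfit_mt, misfit_grav])
          keep = []
          for i in range(f.shape[0]):
              dominated = np.any(np.all(f <= f[i], axis=1) & np.any(f < f[i], axis=1))
              if not dominated:
                  keep.append(i)
          return np.array(keep)

      # Toy cloud of evaluated proposals (stand-ins for PSO particles).
      rng = np.random.default_rng(6)
      m_mt, m_grav = rng.uniform(0.0, 1.0, 200), rng.uniform(0.0, 1.0, 200)
      front = pareto_front(m_mt, m_grav)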

  2. Fault-slip inversions: Their importance in terms of strain, heterogeneity, and kinematics of brittle deformation

    NASA Astrophysics Data System (ADS)

    Riller, U.; Clark, M. D.; Daxberger, H.; Doman, D.; Lenauer, I.; Plath, S.; Santimano, T.

    2017-08-01

    Heterogeneous deformation is intrinsic in natural deformation, but often underestimated in the analysis and interpretation of mesoscopic brittle shear faults. Based on the analysis of 11,222 faults from two distinct tectonic settings, the Central Andes in Argentina and the Sudbury area in Canada, interpolation of principal strain directions and scaled analogue modelling, we revisit controversial issues of fault-slip inversions, collectively adhering to heterogeneous deformation. These issues include the significance of inversion solutions in terms of (1) strain or paleo-stress; (2) displacement, notably plate convergence; (3) local versus far-field deformation; (4) strain perturbations and (5) spacing between stations of fault-slip data acquisition. Furthermore, we highlight the value of inversions for identifying the kinematics of master fault zones in the absence of displaced geological markers. A key result of our assessment is that fault-slip inversions relate to local strain, not paleo-stress, and thus can aid in inferring the kinematics of master faults. Moreover, strain perturbations caused by mechanical anomalies of the deforming upper crust significantly influence local principal strain directions. Thus, differently oriented principal strain axes inferred from fault-slip inversions in a given region may not point to regional deformation caused by successive and distinct deformation regimes. This outcome calls into question the common practice of separating heterogeneous fault-slip data sets into apparently homogeneous subsets. Finally, the fact that displacement vectors and principal strains are rarely co-linear defies the use of brittle fault data as a proxy for estimating directions of plate-scale motions.

  3. Transurethral Ultrasound Diffraction Tomography

    DTIC Science & Technology

    2007-03-01

    ...the covariance matrix was derived. The covariance reduced to that of the X-ray CT under the assumptions of a linear operator and real data. [5] ... the covariance matrix in linear X-ray computed tomography is a special case of the inverse scattering matrix derived in this paper. The matrix ... is derived in Sec. IV, and its relation to that of linear X-ray computed tomography appears in Sec. V. In Sec. VI, the inverse scattering ...

  4. Sources of unbounded priority inversions in real-time systems and a comparative study of possible solutions

    NASA Technical Reports Server (NTRS)

    Davari, Sadegh; Sha, Lui

    1992-01-01

    In the design of real-time systems, tasks are often assigned priorities. Preemptive, priority-driven schedulers are used to schedule tasks to meet the timing requirements. Priority inversion is the term used to describe the situation when a higher priority task's execution is delayed by lower priority tasks. Priority inversion can occur when there is contention for resources among tasks of different priorities. The duration of priority inversion could be long enough to cause tasks to miss their deadlines. Priority inversion cannot be completely eliminated. However, it is important to identify sources of priority inversion and minimize its duration. In this paper, a comprehensive review of the problem of, and solutions to, unbounded priority inversion is presented.
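
    The effect can be illustrated with a toy discrete-time scheduling sketch, shown below. The three-task workload, priorities, and step lengths are invented for the example and are not from the paper; the point is only that without priority inheritance the high-priority task is delayed by unrelated medium-priority work, while with inheritance it waits only for the low-priority task's critical section.

```python
# Toy discrete-time scheduler (hypothetical task set): three tasks share one lock.
# Priorities: H(3) > M(2) > L(1). Steps marked True run inside the critical section.

def simulate(inherit):
    tasks = {
        "L": dict(rel=0, prio=1, steps=[True, True, False]),
        "M": dict(rel=1, prio=2, steps=[False] * 6),
        "H": dict(rel=2, prio=3, steps=[False, True, True, False]),
    }
    done = {k: 0 for k in tasks}             # completed unit steps per task
    finish, lock_owner, t = {}, None, 0
    while len(finish) < len(tasks):
        ready = [k for k in tasks if tasks[k]["rel"] <= t and k not in finish]
        # a ready task is runnable unless its next step needs the lock held by another task
        runnable = [k for k in ready
                    if not (tasks[k]["steps"][done[k]] and lock_owner not in (None, k))]

        def eff_prio(k):
            p = tasks[k]["prio"]
            if inherit and k == lock_owner:
                # the lock holder inherits the priority of any task blocked on the lock
                blocked = [tasks[j]["prio"] for j in ready
                           if j != k and tasks[j]["steps"][done[j]]]
                p = max([p] + blocked)
            return p

        if runnable:
            k = max(runnable, key=eff_prio)  # dispatch highest effective priority
            if tasks[k]["steps"][done[k]]:
                lock_owner = k               # enter (or stay in) the critical section
            done[k] += 1
            if done[k] == len(tasks[k]["steps"]):
                finish[k] = t + 1
            if lock_owner == k and (k in finish or not tasks[k]["steps"][done[k]]):
                lock_owner = None            # critical section finished
        t += 1
    return finish                            # completion time of each task

for flag in (False, True):
    print("priority inheritance" if flag else "no inheritance     ", simulate(flag))
```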

  5. Coupled Hydrogeophysical Inversion and Hydrogeological Data Fusion

    NASA Astrophysics Data System (ADS)

    Cirpka, O. A.; Schwede, R. L.; Li, W.

    2012-12-01

    Tomographic geophysical monitoring methods give the opportunity to observe hydrogeological tests at higher spatial resolution than is possible with classical hydraulic monitoring tools. This has been demonstrated in a substantial number of studies in which electrical resistivity tomography (ERT) has been used to monitor salt-tracer experiments. It is now accepted that inversion of such data sets requires a fully coupled framework, explicitly accounting for the hydraulic processes (groundwater flow and solute transport), the relationship between solute and geophysical properties (a petrophysical relationship such as Archie's law), and the governing equations of the geophysical surveying techniques (e.g., the Poisson equation) as a consistent coupled system. These data sets can be amended with data from other, more direct, hydrogeological tests to infer the distribution of hydraulic aquifer parameters. In the inversion framework, meaningful condensation of data not only contributes to inversion efficiency but also increases the stability of the inversion. In particular, transient concentration data themselves only weakly depend on hydraulic conductivity, and model improvement using gradient-based methods is only possible when a substantial agreement between measurements and model output already exists. The latter also holds when concentrations are monitored by ERT. Tracer arrival times, by contrast, show high sensitivity and a more monotonic dependence on hydraulic conductivity than concentrations themselves. Thus, even without using temporal-moment generating equations, inverting travel times rather than concentrations or related geoelectrical signals themselves is advantageous. We have applied this approach to concentrations measured directly or via ERT, and to heat-tracer data. We present a consistent inversion framework including temporal moments of concentrations, geoelectrical signals obtained during salt-tracer tests, drawdown data from hydraulic tomography and flowmeter measurements to identify mainly the hydraulic-conductivity distribution. By stating the inversion as a geostatistical conditioning problem, we obtain parameter sets together with their correlated uncertainty. While we have applied the quasi-linear geostatistical approach as the inverse kernel, other methods, such as ensemble Kalman methods, may suit the same purpose, particularly when many data points are to be included. In order to identify 3-D fields, discretized by about 50 million grid points, we use the high-performance-computing framework DUNE to solve the involved partial differential equations on midrange computer clusters. We have quantified the worth of different data types in these inference problems. In practical applications, the constitutive relationships between geophysical, thermal, and hydraulic properties can pose a problem, requiring additional inversion. However, poorly constrained transient boundary conditions may call inversion efforts on larger (e.g., regional) scales even more into question. We envision that future hydrogeophysical inversion efforts will target boundary conditions, such as groundwater recharge rates, in conjunction with, or instead of, aquifer parameters. In this way, the distinction between data assimilation and parameter estimation will gradually vanish.

  6. Quantification of Uncertainty in Full-Waveform Moment Tensor Inversion for Regional Seismicity

    NASA Astrophysics Data System (ADS)

    Jian, P.; Hung, S.; Tseng, T.

    2013-12-01

    Routinely and instantaneously determined moment tensor solutions deliver basic information for investigating the faulting nature of earthquakes and regional tectonic structure. The accuracy of full-waveform moment tensor inversion mostly relies on the azimuthal coverage of stations, data quality and the previously known earth structure (i.e., impulse responses or Green's functions). However, intrinsically imperfect station distribution, noise-contaminated waveform records and uncertain earth structure can often result in large deviations of the retrieved source parameters from the true ones, which prohibits the use of routinely reported earthquake catalogs for further structural and tectonic inferences. Duputel et al. (2012) first systematically addressed the significance of statistical uncertainty estimation in earthquake source inversion and demonstrated that the data covariance matrix, if prescribed properly to account for data dependence and uncertainty due to incomplete and erroneous data and hypocenter mislocation, can not only be mapped onto the uncertainty estimate of the resulting source parameters, but also aids in obtaining more stable and reliable results. Over the past decade, BATS (Broadband Array in Taiwan for Seismology) has steadily devoted itself to building up a database of good-quality centroid moment tensor (CMT) solutions for moderate to large magnitude earthquakes that occurred in the Taiwan area. Because of the lack of uncertainty quantification and reliability analysis, it remains controversial to use the reported CMT catalog directly for further investigation of regional tectonics, near-source strong ground motions, and seismic hazard assessment. In this study, we develop a statistical procedure to make quantitative and reliable estimates of uncertainty in regional full-waveform CMT inversion. A linearized inversion scheme adopting efficient estimation of the covariance matrices associated with oversampled noisy waveform data and errors of biased centroid positions is implemented and inspected for improving source parameter determination of regional seismicity in Taiwan. Synthetic inversion tests demonstrate that the resolved moment tensors better match the hypothetical CMT solutions, and tend to suppress unreal non-double-couple components and reduce the trade-off between focal mechanism and centroid depth if individual signal-to-noise ratios and correlation lengths for 3-component seismograms at each station and mislocation uncertainties are properly taken into account. We further test the capability of our scheme in retrieving robust CMT information for mid-sized (Mw~3.5) and offshore earthquakes in Taiwan, which offers immediate and broad applications in detailed modelling of the regional stress field and deformation pattern and mapping of subsurface velocity structures.
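
    As a rough illustration of how a data covariance matrix propagates into parameter uncertainty in a linearized inversion, the sketch below computes a weighted least-squares (Gauss-Markov) estimate with correlated synthetic noise. The operator G, the covariance model, and the six-parameter source vector are stand-ins for the example, not the authors' waveform implementation.

```python
import numpy as np

# Generic sketch of a linearized inversion with a data covariance matrix C_d
# (Gauss-Markov estimate). G, d, and C_d are synthetic stand-ins.

rng = np.random.default_rng(1)
n_data, n_par = 200, 6                     # e.g., 6 independent moment-tensor elements
G = rng.standard_normal((n_data, n_par))   # linearized forward operator (Green's functions)
m_true = rng.standard_normal(n_par)

# Correlated data noise: exponential correlation along the data index, a crude
# stand-in for a correlation length along a seismogram.
i = np.arange(n_data)
C_d = 0.05 * np.exp(-np.abs(i[:, None] - i[None, :]) / 5.0)
d = G @ m_true + rng.multivariate_normal(np.zeros(n_data), C_d)

# Weighted least squares: m = (G^T C_d^-1 G)^-1 G^T C_d^-1 d
Cd_inv = np.linalg.inv(C_d)
C_m = np.linalg.inv(G.T @ Cd_inv @ G)      # posterior parameter covariance
m_hat = C_m @ (G.T @ Cd_inv @ d)

print("estimate:", np.round(m_hat, 3))
print("1-sigma uncertainties:", np.round(np.sqrt(np.diag(C_m)), 3))
```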

  7. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
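
    A minimal consensus-style augmented Lagrangian sketch of the decomposition idea is given below: two component objectives with separate copies of the model are minimized individually, and multiplier updates steer the copies toward a common model. The quadratic misfits and random operators are stand-ins for the body-wave and surface-wave subproblems, not the authors' seismic operators.

```python
import numpy as np

# Consensus augmented Lagrangian sketch: minimize f1(m) + f2(m) by solving two
# component problems with copies m1, m2 constrained to be equal to a shared model z.

rng = np.random.default_rng(2)
n = 5
A1, A2 = rng.standard_normal((20, n)), rng.standard_normal((30, n))
m_true = rng.standard_normal(n)
d1, d2 = A1 @ m_true, A2 @ m_true

rho = 1.0                          # augmented Lagrangian penalty parameter
z = np.zeros(n)                    # consensus (shared) model
u1, u2 = np.zeros(n), np.zeros(n)  # scaled Lagrange multipliers

for _ in range(100):
    # each component problem: min ||A m - d||^2 + (rho/2) ||m - z + u||^2
    m1 = np.linalg.solve(A1.T @ A1 + 0.5 * rho * np.eye(n),
                         A1.T @ d1 + 0.5 * rho * (z - u1))
    m2 = np.linalg.solve(A2.T @ A2 + 0.5 * rho * np.eye(n),
                         A2.T @ d2 + 0.5 * rho * (z - u2))
    z = 0.5 * (m1 + u1 + m2 + u2)           # consensus update
    u1 = u1 + (m1 - z)                      # multiplier updates steer m1, m2 together
    u2 = u2 + (m2 - z)

print("consensus model:", np.round(z, 3))
print("true model     :", np.round(m_true, 3))
```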

  8. Elastic robot control - Nonlinear inversion and linear stabilization

    NASA Technical Reports Server (NTRS)

    Singh, S. N.; Schy, A. A.

    1986-01-01

    An approach to the control of elastic robot systems for space applications using inversion, servocompensation, and feedback stabilization is presented. For simplicity, a robot arm (PUMA type) with three rotational joints is considered. The third link is assumed to be elastic. Using an inversion algorithm, a nonlinear decoupling control law u(d) is derived such that in the closed-loop system independent control of joint angles by the three joint torquers is accomplished. For the stabilization of elastic oscillations, a linear feedback torquer control law u(s) is obtained applying linear quadratic optimization to the linearized arm model augmented with a servocompensator about the terminal state. Simulation results show that in spite of uncertainties in the payload and vehicle angular velocity, good joint angle control and damping of elastic oscillations are obtained with the torquer control law u = u(d) + u(s).
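
    For the stabilization step, a hedged sketch of computing a linear-quadratic regulator gain for a generic linearized model with SciPy is shown below; the matrices are illustrative placeholders, not the paper's elastic-arm model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Sketch of the linear-quadratic stabilization step for a generic linearized model
# x_dot = A x + B u (a stand-in for the linearized arm model). Feedback is u_s = -K x.

A = np.array([[0.0, 1.0],
              [2.0, -0.1]])          # example unstable linearization
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])             # state weighting
R = np.array([[0.1]])                # control weighting

P = solve_continuous_are(A, B, Q, R) # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)      # optimal gain K = R^-1 B^T P

print("LQR gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))  # negative real parts
```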

  9. Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations

    NASA Astrophysics Data System (ADS)

    Zhi, L.; Gu, H.

    2017-12-01

    The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though the approximate expression is concise and convenient to use, it has certain limitations. For example, its application condition is that the difference in elastic parameters between the upper medium and lower medium is small and the incident angle is small. In addition, the inversion of density is not stable. Therefore, we develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the reflection coefficient of the PP wave. In the construction of the objective function for inversion, we use a Taylor expansion to linearize the inversion problem. Through the joint AVO inversion of seismic data in the baseline survey and monitor survey, we can obtain the P-wave velocity, S-wave velocity, and density in the baseline survey and their time-lapse changes simultaneously. We can also estimate the oil saturation change according to the inversion results. Compared with the time-lapse difference inversion, the joint inversion has better applicability. It requires fewer assumptions and can estimate more parameters simultaneously. Meanwhile, by using the generalized linear method, the inversion is easily realized and its computational cost is small. We use the Marmousi model to generate synthetic seismic records to test and analyze the influence of random noise. Without noise, all estimation results are relatively accurate. With increasing noise, the P-wave velocity change and oil saturation change are stable and less affected by noise. The S-wave velocity change is most affected by noise. Finally, we apply the method to actual field data from time-lapse seismic prospecting, and the results demonstrate its applicability and feasibility in real situations.
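
    The generalized linear (Gauss-Newton) iteration described here can be sketched generically: Taylor-expand a nonlinear forward model about the current parameters and solve the linearized system repeatedly. In the sketch below the forward function is an arbitrary smooth stand-in, not the exact Zoeppritz PP reflectivity, and the Jacobian is taken by finite differences.

```python
import numpy as np

# Generic Gauss-Newton sketch of a "generalized linear" inversion: the nonlinear
# forward model g(m) is linearized about the current model and updated iteratively.
# The toy forward function is NOT the Zoeppritz equations.

def forward(m, x):
    a, b = m
    return a * np.exp(-b * x) + a * b * x**2      # arbitrary smooth nonlinear model

def jacobian(m, x, eps=1e-6):
    # finite-difference Jacobian dg/dm
    J = np.zeros((len(x), len(m)))
    for j in range(len(m)):
        dm = np.zeros_like(m); dm[j] = eps
        J[:, j] = (forward(m + dm, x) - forward(m - dm, x)) / (2 * eps)
    return J

x = np.linspace(0.0, 1.0, 50)                     # e.g., offsets / incidence angles
m_true = np.array([1.5, 0.8])
rng = np.random.default_rng(3)
d_obs = forward(m_true, x) + 0.01 * rng.standard_normal(x.size)

m = np.array([1.0, 0.5])                          # starting model
for _ in range(10):
    r = d_obs - forward(m, x)                     # residual
    J = jacobian(m, x)
    dm, *_ = np.linalg.lstsq(J, r, rcond=None)    # linearized (Gauss-Newton) update
    m = m + dm

print("recovered parameters:", m)
```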

  10. Angle-domain inverse scattering migration/inversion in isotropic media

    NASA Astrophysics Data System (ADS)

    Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan

    2018-07-01

    The classical seismic asymptotic inversion can be transformed into a problem of inversion of the generalized Radon transform (GRT). In such methods, the combined parameters are linearly attached to the scattered wave-field by the Born approximation and recovered by applying an inverse GRT operator to the scattered wave-field data. A typical GRT-style true-amplitude inversion procedure contains an amplitude compensation process after the weighted migration, via division by an illumination-associated matrix whose elements are integrals over scattering angles. To some extent it is intuitive to perform the generalized linear inversion and the inversion of the GRT together through this process for direct inversion. However, it is imprecise to carry out such an operation when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally removes the external integral term related to the illumination in the conventional case. We solve the linearized integral equation for combined parameters at different fixed scattering-angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. Then we deal with the over-determined problem to solve for each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method can obtain more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate the effectiveness and practicability of the method.

  11. Sealing glass-ceramics with near-linear thermal strain, Part II: Sequence of crystallization and phase stability

    DOE PAGES

    Rodriguez, Mark A.; Griego, James J. M.; Dai, Steve

    2016-08-22

    The sequence of crystallization in a recrystallizable lithium silicate sealing glass-ceramic (Li2O–SiO2–Al2O3–K2O–B2O3–P2O5–ZnO) was analyzed by in situ high-temperature X-ray diffraction (HTXRD). Glass-ceramic specimens were subjected to a two-stage heat-treatment schedule, including rapid cooling from the sealing temperature to a first hold temperature of 650°C, followed by heating to a second hold temperature of 810°C. Notable growth and saturation of quartz was observed at 650°C (first hold). Cristobalite crystallized at the second hold temperature of 810°C, growing from the residual glass rather than converting from the quartz. The coexistence of quartz and cristobalite resulted in a glass-ceramic having a near-linear thermal strain, as opposed to the highly nonlinear glass-ceramic in which cristobalite is the dominant silica crystalline phase. HTXRD was also performed to analyze the inversion and phase stability of the two types of fully crystallized glass-ceramics. While the inversion in cristobalite resembles the character of a first-order displacive phase transformation, i.e., step changes in lattice parameters and thermal hysteresis in the transition temperature, the inversion in quartz appears more diffuse and occurs over a much broader temperature range. Furthermore, the transition behavior of the quartz crystals embedded in the glass-ceramics has been attributed to localized tensile stresses on the quartz and possible solid-solution effects.

  12. New Additions to the Toolkit for Forward/Inverse Problems in Electrocardiography within the SCIRun Problem Solving Environment.

    PubMed

    Coll-Font, Jaume; Burton, Brett M; Tate, Jess D; Erem, Burak; Swenson, Darrel J; Wang, Dafang; Brooks, Dana H; van Dam, Peter; Macleod, Rob S

    2014-09-01

    Cardiac electrical imaging often requires the examination of different forward and inverse problem formulations based on mathematical and numerical approximations of the underlying source and the intervening volume conductor that can generate the associated voltages on the surface of the body. If the goal is to recover the source on the heart from body surface potentials, the solution strategy must include numerical techniques that can incorporate appropriate constraints and recover useful solutions, even though the problem is badly posed. Creating complete software solutions to such problems is a daunting undertaking. In order to make such tools more accessible to a broad array of researchers, the Center for Integrative Biomedical Computing (CIBC) has made an ECG forward/inverse toolkit available within the open source SCIRun system. Here we report on three new methods added to the inverse suite of the toolkit. These new algorithms, namely a Total Variation method, a non-decreasing TMP inverse and a spline-based inverse, consist of two inverse methods that take advantage of the temporal structure of the heart potentials and one that leverages the spatial characteristics of the transmembrane potentials. These three methods further expand the possibilities of researchers in cardiology to explore and compare solutions to their particular imaging problem.
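
    As a hedged illustration of how a regularization constraint makes such a badly posed surface-to-source mapping solvable, the sketch below applies the simplest (zero-order Tikhonov) member of the family of constrained solvers; the lead-field matrix is random and this is not the toolkit's Total Variation, TMP, or spline method.

```python
import numpy as np

# Minimal zero-order Tikhonov sketch. A is a stand-in forward (lead-field) matrix
# mapping heart-surface sources to body-surface potentials; it is random here,
# not a real volume-conductor model.

rng = np.random.default_rng(4)
n_body, n_heart = 120, 300                         # underdetermined, ill-posed
A = rng.standard_normal((n_body, n_heart))
x_true = np.zeros(n_heart); x_true[100:140] = 1.0  # a patch of source activity
b = A @ x_true + 0.05 * rng.standard_normal(n_body)

lam = 1.0                                          # regularization weight
# x = argmin ||A x - b||^2 + lam ||x||^2  =>  (A^T A + lam I) x = A^T b
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_heart), A.T @ b)

print("residual norm:", np.linalg.norm(A @ x_hat - b))
print("solution norm:", np.linalg.norm(x_hat))
```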

  13. Hydrodynamic Modeling of Free Surface Interactions and Implications for P and Rg Waves Recorded on the Source Physics Experiments

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Rougier, E.; Knight, E.; Yang, X.; Patton, H. J.

    2013-12-01

    A goal of the Source Physics Experiments (SPE) is to develop explosion source models expanding monitoring capabilities beyond empirical methods. The SPE project combines field experimentation with numerical modelling. The models take into account non-linear processes occurring from the first moment of the explosion as well as complex linear propagation effects of signals reaching far-field recording stations. The hydrodynamic code CASH is used for modelling high-strain rate, non-linear response occurring in the material near the source. Our development efforts focused on incorporating in-situ stress and fracture processes. CASH simulates the material response from the near-source, strong shock zone out to the small-strain and ultimately the elastic regime where a linear code can take over. We developed an interface with the Spectral Element Method code, SPECFEM3D, that is an efficient implementation on parallel computers of a high-order finite element method. SPECFEM3D allows accurate modelling of wave propagation to remote monitoring distance at low cost. We will present CASH-SPECFEM3D results for SPE1, which was a chemical detonation of about 85 kg of TNT at 55 m depth in a granitic geologic unit. Spallation was observed for SPE1. Keeping yield fixed we vary the depth of the source systematically and compute synthetic seismograms to distances where the P and Rg waves are separated, so that analysis can be performed without concern about interference effects due to overlapping energy. We study the time and frequency characteristics of P and Rg waves and analyse them in regard to the impact of free-surface interactions and rock damage resulting from those interactions. We also perform traditional CMT inversions as well as advanced CMT inversions, developed at LANL to take into account the damage. This will allow us to assess the effect of spallation on CMT solutions as well as to validate our inversion procedure. Further work will aim to validate the developed models with the data recorded on SPEs. This long-term goal requires taking into account the 3D structure and thus a comprehensive characterization of the site.

  14. Electromagnetic inverse scattering

    NASA Technical Reports Server (NTRS)

    Bojarski, N. N.

    1972-01-01

    A three-dimensional electromagnetic inverse scattering identity, based on the physical optics approximation, is developed for the monostatic scattered far field cross section of perfect conductors. Uniqueness of this inverse identity is proven. This identity requires complete scattering information for all frequencies and aspect angles. A nonsingular integral equation is developed for the arbitrary case of incomplete frequency and/or aspect angle scattering information. A general closed-form solution to this integral equation is developed, which yields the shape of the scatterer from such incomplete information. A specific practical radar solution is presented. The resolution of this solution is developed, yielding short-pulse target resolution radar system parameter equations. The special cases of two- and one-dimensional inverse scattering and the special case of a priori knowledge of scatterer symmetry are treated in some detail. The merits of this solution over the conventional radar imaging technique are discussed.

  15. Parallelized Three-Dimensional Resistivity Inversion Using Finite Elements And Adjoint State Methods

    NASA Astrophysics Data System (ADS)

    Schaa, Ralf; Gross, Lutz; Du Plessis, Jaco

    2015-04-01

    The resistivity method is one of the oldest geophysical exploration methods; it employs one pair of electrodes to inject current into the ground and one or more pairs of electrodes to measure the electrical potential difference. The potential difference is a non-linear function of the subsurface resistivity distribution described by an elliptic partial differential equation (PDE) of the Poisson type. Inversion of measured potentials solves for the subsurface resistivity represented by PDE coefficients. With increasing advances in multichannel resistivity acquisition systems (systems with more than 60 channels and full waveform recording are now emerging), inversion software requires efficient storage and solver algorithms. We developed the finite element solver Escript, which provides a user-friendly programming environment in Python to solve large-scale PDE-based problems (see https://launchpad.net/escript-finley). Using finite elements, highly irregularly shaped geology and topography can readily be taken into account. For the 3D resistivity problem, we have implemented the secondary potential approach, where the PDE is decomposed into a primary potential caused by the source current and a secondary potential caused by changes in subsurface resistivity. The primary potential is calculated analytically, and the boundary value problem for the secondary potential is solved using nodal finite elements. This approach removes the singularity caused by the source currents and provides more accurate 3D resistivity models. To solve the inversion problem we apply a 'first optimize then discretize' approach using a quasi-Newton scheme in the form of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method (see Gross & Kemp 2013). The evaluation of the cost function requires the solution of the secondary potential PDE for each source current and the solution of the corresponding adjoint-state PDE for the cost function gradients with respect to the subsurface resistivity. The Hessian of the regularization term is used as a preconditioner, which requires an additional PDE solution in each iteration step. As it turns out, the relevant PDEs are naturally formulated in the finite element framework. Using the domain decomposition method provided in Escript, the inversion scheme has been parallelized for distributed memory computers with multi-core shared memory nodes. We show numerical examples from simple layered models to complex 3D models and compare with the results from other methods. The inversion scheme is furthermore tested on a field data example to characterise localised freshwater discharge in a coastal environment. References: L. Gross and C. Kemp (2013) Large Scale Joint Inversion of Geophysical Data using the Finite Element Method in escript. ASEG Extended Abstracts 2013, http://dx.doi.org/10.1071/ASEG2013ab306
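
    The optimization outer loop can be sketched in a few lines: a cost function (data misfit plus regularization) and its gradient handed to an L-BFGS routine. In the sketch below the forward operator is a dense random matrix standing in for the PDE-based secondary-potential solve, and the gradient is computed directly rather than through adjoint states.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the optimization outer loop: misfit + regularization with its gradient
# minimized by L-BFGS. G is a stand-in for the PDE-based forward solve.

rng = np.random.default_rng(5)
n_data, n_model = 80, 200
G = rng.standard_normal((n_data, n_model))
m_true = rng.standard_normal(n_model)
d_obs = G @ m_true + 0.01 * rng.standard_normal(n_data)
alpha = 0.1                                      # regularization weight

def cost_and_grad(m):
    r = G @ m - d_obs
    cost = 0.5 * r @ r + 0.5 * alpha * m @ m
    grad = G.T @ r + alpha * m
    return cost, grad

res = minimize(cost_and_grad, np.zeros(n_model), jac=True, method="L-BFGS-B",
               options={"maxiter": 200})
print("converged:", res.success, " final cost:", res.fun)
```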

  16. Probabilistic estimation of splitting coefficients of normal modes of the Earth, and their uncertainties, using an autoregressive technique

    NASA Astrophysics Data System (ADS)

    Pachhai, S.; Masters, G.; Laske, G.

    2017-12-01

    Earth's normal-mode spectra are crucial to studying the long-wavelength structure of the Earth. Such observations have been used extensively to estimate "splitting coefficients" which, in turn, can be used to determine the three-dimensional velocity and density structure. Most past studies apply a non-linear iterative inversion to estimate the splitting coefficients, which requires that the earthquake source be known. However, it is challenging to know the source details, particularly for the large events used in normal-mode analyses. Additionally, the final solution of the non-linear inversion can depend on the choice of damping parameter and starting model. To circumvent the need to know the source, a two-step linear inversion has been developed and successfully applied to many mantle- and core-sensitive modes. The first step takes combinations of the data from a single event to produce spectra known as "receiver strips". The autoregressive nature of the receiver strips can then be used to estimate the structure coefficients without the need to know the source. Based on this approach, we recently employed a neighborhood algorithm to measure the splitting coefficients for an isolated inner-core-sensitive mode (13S2). This approach explores the parameter space efficiently without any need of regularization and finds the structure coefficients which best fit the observed strips. Here, we implement a Bayesian approach to data collected for earthquakes from early 2000 onward. This approach combines the data (through the likelihood) and prior information to provide rigorous parameter values and their uncertainties for both isolated and coupled modes. The likelihood function is derived from the inferred errors of the receiver strips, which allows us to retrieve proper uncertainties. Finally, we apply model selection criteria that balance the trade-offs between fit (likelihood) and model complexity to investigate the degree and type of structure (elastic and anelastic) required to explain the data.

  17. Travel time tomography with local image regularization by sparsity constrained dictionary learning

    NASA Astrophysics Data System (ADS)

    Bianco, M.; Gerstoft, P.

    2017-12-01

    We propose a regularization approach for 2D seismic travel time tomography which models small rectangular groups of slowness pixels, within an overall or 'global' slowness image, as sparse linear combinations of atoms from a dictionary. The groups of slowness pixels are referred to as patches and a dictionary corresponds to a collection of functions or 'atoms' describing the slowness in each patch. These functions could, for example, be wavelets. The patch regularization is incorporated into the global slowness image. The global image models the broad features, while the local patch images incorporate prior information from the dictionary. Further, high resolution slowness within patches is permitted if the travel times from the global estimates support it. The proposed approach is formulated as an algorithm, which is repeated until convergence is achieved: 1) From travel times, find the global slowness image with a minimum energy constraint on the pixel variance relative to a reference. 2) Find the patch-level solutions to fit the global estimate as a sparse linear combination of dictionary atoms. 3) Update the reference as the weighted average of the patch-level solutions. This approach relies on the redundancy of the patches in the seismic image. Redundancy means that the patches are repetitions of a finite number of patterns, which are described by the dictionary atoms. Redundancy in the earth's structure was demonstrated in previous works in seismics where dictionaries of wavelet functions regularized inversion. We further exploit redundancy of the patches by using dictionary learning algorithms, a form of unsupervised machine learning, to estimate optimal dictionaries from the data in parallel with the inversion. We demonstrate our approach on densely, but irregularly sampled synthetic seismic images.
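
    The patch-level step (sparse coding over a learned dictionary) can be sketched with scikit-learn as below; the synthetic image is a stand-in for a tomographic slowness map, and the patch size, number of atoms, and sparsity level are arbitrary choices for the example rather than values from the paper.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

# Sketch of the patch-level step: learn a dictionary from patches of a synthetic
# "slowness" image and re-express each patch as a sparse combination of atoms.

rng = np.random.default_rng(6)
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
image = np.sin(6 * x) + 0.5 * np.cos(9 * y)          # smooth stand-in slowness field

patches = extract_patches_2d(image, (8, 8), max_patches=500, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)                   # remove patch means

dico = MiniBatchDictionaryLearning(n_components=32, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=3, random_state=0)
codes = dico.fit(X).transform(X)                     # sparse codes, 3 atoms per patch
recon = codes @ dico.components_

err = np.linalg.norm(recon - X) / np.linalg.norm(X)
print("relative patch reconstruction error:", round(err, 3))
```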

  18. Deformation Time-Series of the Lost-Hills Oil Field using a Multi-Baseline Interferometric SAR Inversion Algorithm with Finite Difference Smoothing Constraints

    NASA Astrophysics Data System (ADS)

    Werner, C. L.; Wegmüller, U.; Strozzi, T.

    2012-12-01

    The Lost-Hills oil field located in Kern County, California ranks sixth in total remaining reserves in California. Hundreds of densely packed wells characterize the field, with one well every 5000 to 20000 square meters. Subsidence due to oil extraction can be greater than 10 cm/year and is highly variable both in space and time. The RADARSAT-1 SAR satellite collected data over this area with a 24-day repeat during a 2-year period spanning 2002-2004. Relatively high interferometric correlation makes this an excellent region for development and testing of deformation time-series inversion algorithms. Errors in deformation time series derived from a stack of differential interferograms are primarily due to errors in the digital terrain model, interferometric baselines, variability in tropospheric delay, thermal noise and phase unwrapping errors. Particularly challenging is the separation of non-linear deformation from variations in tropospheric delay and phase unwrapping errors. In our algorithm a subset of interferometric pairs is selected from a set of N radar acquisitions based on criteria of connectivity, time interval, and perpendicular baseline. When possible, the subset consists of temporally connected interferograms; otherwise the different groups of interferograms are selected to overlap in time. The maximum time interval is constrained to be less than a threshold value to minimize phase gradients due to deformation as well as to minimize temporal decorrelation. Large baselines are also avoided to minimize the consequence of DEM errors on the interferometric phase. Based on an extension of the SVD-based inversion described by Lee et al. (USGS Professional Paper 1769), Schmidt and Burgmann (JGR, 2003), and the earlier work of Berardino (TGRS, 2002), our algorithm combines estimation of the DEM height error with a set of finite difference smoothing constraints. A set of linear equations are formulated for each spatial point that are functions of the deformation velocities during the time intervals spanned by the interferograms and a DEM height correction. The sensitivity of the phase to the height correction depends on the length of the perpendicular baseline of each interferogram. This design matrix is augmented with a set of additional weighted constraints on the acceleration that penalize rapid velocity variations. The weighting factor γ can be varied from 0 (no smoothing) to large values (> 10) that yield an essentially linear time-series solution. The factor can be tuned to take into account a priori knowledge of the deformation non-linearity. The difference between the time-series solution and the unconstrained time-series can be interpreted as due to a combination of tropospheric path delay and baseline error. Spatial smoothing of the residual phase leads to an improved atmospheric model that can be fed back into the model and iterated. Our analysis shows non-linear deformation related to changes in the oil extraction as well as local height corrections improving on the low resolution 3 arc-sec SRTM DEM.
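
    A toy version of the augmented linear system is sketched below: interval velocities are related to interferogram phases through a design matrix, and weighted finite-difference rows penalize rapid velocity changes. The acquisition times, pair list, and deformation history are invented, and the DEM-error column of the full algorithm is omitted.

```python
import numpy as np

# Toy augmented time-series inversion (no DEM-error term): unknown interval velocities
# are constrained by interferogram phases plus weighted acceleration-penalty rows.

t = np.array([0, 24, 48, 72, 96, 120], float)        # acquisition times (days)
dt = np.diff(t)                                       # interval lengths
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4), (4, 5), (3, 5)]

v_true = np.array([0.1, 0.3, 0.5, 0.3, 0.1])          # mm/day per interval (non-linear)
def accum(i, j, v):                                   # deformation accumulated i -> j
    return np.sum(v[i:j] * dt[i:j])

rng = np.random.default_rng(7)
d = np.array([accum(i, j, v_true) for i, j in pairs]) + 0.2 * rng.standard_normal(len(pairs))

# design matrix: one row per interferogram, one column per interval velocity
A = np.zeros((len(pairs), len(dt)))
for r, (i, j) in enumerate(pairs):
    A[r, i:j] = dt[i:j]

# smoothing constraints: gamma * (v_{k+1} - v_k) ~ 0 penalizes acceleration
gamma = 1.0
S = gamma * (np.eye(len(dt) - 1, len(dt), k=1) - np.eye(len(dt) - 1, len(dt)))

A_aug = np.vstack([A, S])
d_aug = np.concatenate([d, np.zeros(len(dt) - 1)])
v_hat, *_ = np.linalg.lstsq(A_aug, d_aug, rcond=None)

print("true velocities     :", v_true)
print("estimated velocities:", np.round(v_hat, 2))
```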

  19. A full potential inverse method based on a density linearization scheme for wing design

    NASA Technical Reports Server (NTRS)

    Shankar, V.

    1982-01-01

    A mixed analysis-inverse procedure based on the full potential equation in conservation form was developed to recontour a given base wing to produce a prescribed pressure distribution, using a density linearization scheme in applying the pressure boundary condition in terms of the velocity potential. The FL030 finite volume analysis code was modified to include the inverse option. The new surface shape information, associated with the modified pressure boundary condition, is calculated at a constant span station based on a mass flux integration. The inverse method is shown to recover the original shape when the analysis pressure is not altered. Inverse calculations for weakening of a strong shock system and for a laminar flow control (LFC) pressure distribution are presented. Two methods for a trailing edge closure model are proposed for further study.

  20. Recursive inverse kinematics for robot arms via Kalman filtering and Bryson-Frazier smoothing

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Scheid, R. E., Jr.

    1987-01-01

    This paper applies linear filtering and smoothing theory to solve recursively the inverse kinematics problem for serial multilink manipulators. This problem is to find a set of joint angles that achieve a prescribed tip position and/or orientation. A widely applicable numerical search solution is presented. The approach finds the minimum of a generalized distance between the desired and the actual manipulator tip position and/or orientation. Both a first-order steepest-descent gradient search and a second-order Newton-Raphson search are developed. The optimal relaxation factor required for the steepest descent method is computed recursively using an outward/inward procedure similar to those used typically for recursive inverse dynamics calculations. The second-order search requires evaluation of a gradient and an approximate Hessian. A Gauss-Markov approach is used to approximate the Hessian matrix in terms of products of first-order derivatives. This matrix is inverted recursively using a two-stage process of inward Kalman filtering followed by outward smoothing. This two-stage process is analogous to that recently developed by the author to solve by means of spatial filtering and smoothing the forward dynamics problem for serial manipulators.
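
    The recursive filtering and smoothing machinery itself is not reproduced here, but the second-order search it accelerates can be sketched compactly: a Gauss-Newton iteration for a planar two-link arm in which the Hessian is approximated by products of first derivatives (J^T J). Link lengths, target, and starting angles are arbitrary example values.

```python
import numpy as np

# Compact Gauss-Newton sketch: minimize the distance between the desired and actual
# tip position of a planar two-link arm, approximating the Hessian by J^T J.
# This illustrates the optimization idea only, not the recursive filter/smoother.

L1, L2 = 1.0, 0.8                                  # link lengths (arbitrary)

def tip(q):
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jac(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

target = np.array([1.2, 0.9])                      # desired tip position
q = np.array([0.3, 0.5])                           # initial joint angles
for _ in range(20):
    e = target - tip(q)                            # tip position error
    J = jac(q)
    dq = np.linalg.solve(J.T @ J + 1e-8 * np.eye(2), J.T @ e)   # Gauss-Newton step
    q = q + dq

print("joint angles:", np.round(q, 4), " tip:", np.round(tip(q), 4))
```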

  1. Data fitting and image fine-tuning approach to solve the inverse problem in fluorescence molecular imaging

    NASA Astrophysics Data System (ADS)

    Gorpas, Dimitris; Politopoulos, Kostas; Yova, Dido; Andersson-Engels, Stefan

    2008-02-01

    One of the most challenging problems in medical imaging is to "see" a tumour embedded into tissue, which is a turbid medium, by using fluorescent probes for tumour labeling. This problem, despite the efforts made during the last years, has not been fully solved yet, due to the non-linear nature of the inverse problem and the convergence failures of many optimization techniques. This paper describes a robust solution of the inverse problem, based on data fitting and image fine-tuning techniques. As a forward solver, the coupled radiative transfer equation and diffusion approximation model is proposed and solved via a finite element method, enhanced with adaptive multi-grids for faster and more accurate convergence. A database is constructed by application of the forward model on virtual tumours with known geometry, and thus fluorophore distribution, embedded into simulated tissues. The fitting procedure produces the best match between the real and virtual data, and thus provides the initial estimation of the fluorophore distribution. Using this information, the coupled radiative transfer equation and diffusion approximation model has the required initial values for computationally reasonable and successful convergence during the image fine-tuning application.

  2. The inverse problems of wing panel manufacture processes

    NASA Astrophysics Data System (ADS)

    Oleinikov, A. I.; Bormotin, K. S.

    2013-12-01

    It is shown that inverse problems of steady-state creep bending of plates in both the geometrically linear and nonlinear formulations can be represented in a variational formulation. Steady-state values of the obtained functionals corresponding to the solutions of the problems of inelastic deformation and springback are determined by applying a finite element procedure to the functionals. Optimal laws of creep deformation are formulated using the criterion of minimizing damage in the functionals of the inverse problems. The formulated problems are reduced to problems solved by the finite element method using MSC.Marc software. Currently, forming of light metals poses tremendous challenges due to their low ductility at room temperature and their unusual deformation characteristics in hot-cold working: strong asymmetry between tensile and compressive behavior, and a very pronounced anisotropy. We used constitutive models of steady-state creep for initially transversely isotropic structural materials in which the kind of stress state has an influence. The paper gives the basics of the developed computer-aided system of design, modeling, and electronic simulation targeting the processes of manufacture of wing integral panels. The modeling results can be used to calculate the die tooling, determine the panel processibility, and control panel rejection in the course of forming.

  3. The neural network approximation method for solving multidimensional nonlinear inverse problems of geophysics

    NASA Astrophysics Data System (ADS)

    Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.

    2017-07-01

    The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on the neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on using the calculated values of the continuity modulus of the inverse operator and its modifications determining the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail given the specified degree of ambiguity, with the total number of sought medium parameters on the order of n × 10^3. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The work of the method is illustrated by the example of the three-dimensional (3D) inversion of synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.
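
    The core idea, training a network on forward-modeled pairs so that it maps data back to medium parameters, can be illustrated with a toy example as below; the analytic two-parameter forward model is a stand-in for AMTS modelling, and the network size and training set are arbitrary.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy illustration of neural-network approximation of an inverse operator: generate
# (model, data) pairs with a forward simulator, then train a network to map data back
# to model parameters. The analytic forward model is only a stand-in.

rng = np.random.default_rng(8)

def forward(m):
    # m[:, 0] ~ amplitude-like parameter, m[:, 1] ~ decay-like parameter
    freqs = np.logspace(-1, 2, 20)                  # 20 synthetic sounding values
    return m[:, :1] * np.exp(-m[:, 1:2] * freqs) + 0.1 * np.log10(freqs)

m_train = rng.uniform([1.0, 0.1], [10.0, 2.0], size=(2000, 2))   # sampled model space
d_train = forward(m_train)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(d_train, m_train)                           # approximate the inverse operator

m_test = np.array([[5.0, 1.0]])
d_test = forward(m_test)
print("true model:", m_test[0], " network estimate (rough):", net.predict(d_test)[0])
```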

  4. Multi-level quantum mechanics theories and molecular mechanics study of the double-inversion mechanism of the F- + CH3I reaction in aqueous solution.

    PubMed

    Liu, Peng; Zhang, Jingxue; Wang, Dunyou

    2017-06-07

    A double-inversion mechanism of the F- + CH3I reaction was discovered in aqueous solution using combined multi-level quantum mechanics theories and molecular mechanics. The stationary points along the reaction path show very different structures to the ones in the gas phase due to the interactions between the solvent and solute, especially strong hydrogen bonds. An intermediate complex, a minimum on the potential of mean force, was found to serve as a connecting link between the abstraction-induced inversion transition state and the Walden-inversion transition state. The potentials of mean force were calculated at both the DFT/MM and CCSD(T)/MM levels of theory. Our calculated free energy barrier of the abstraction-induced inversion is 69.5 kcal mol⁻¹ at the CCSD(T)/MM level of theory, which agrees with the value of 72.9 kcal mol⁻¹ calculated using the Born solvation model and gas-phase data; and our calculated free energy barrier of the Walden inversion is 24.2 kcal mol⁻¹, which agrees very well with the experimental value of 25.2 kcal mol⁻¹ in aqueous solution. The calculations show that the aqueous solution makes significant contributions to the potentials of mean force and exerts a big impact on the molecular-level evolution along the reaction pathway.

  5. Dirac electron in a chiral space-time crystal created by counterpropagating circularly polarized plane electromagnetic waves

    NASA Astrophysics Data System (ADS)

    Borzdov, G. N.

    2017-10-01

    The family of solutions to the Dirac equation for an electron moving in an electromagnetic lattice with the chiral structure created by counterpropagating circularly polarized plane electromagnetic waves is obtained. At any nonzero quasimomentum, the dispersion equation has two solutions which specify bispinor wave functions describing electron states with different energies and mean values of momentum and spin operators. The inversion of the quasimomentum results in two other linearly independent solutions. These four basic wave functions are uniquely defined by eight complex scalar functions (structural functions), which serve as convenient building blocks of the relations describing the electron properties. These properties are illustrated in graphical form over a wide range of quasimomenta. The superpositions of two basic wave functions describing different spin states and corresponding to (i) the same quasimomentum (unidirectional electron states with the spin precession) and (ii) the two equal-in-magnitude but oppositely directed quasimomenta (bidirectional electron states) are also treated.

  6. Effects of adaptive refinement on the inverse EEG solution

    NASA Astrophysics Data System (ADS)

    Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.

    1995-10-01

    One of the fundamental problems in electroencephalography can be characterized by an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically, the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of the electric and potential fields within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.

  7. Numerical method for the solution of large systems of differential equations of the boundary layer type

    NASA Technical Reports Server (NTRS)

    Green, M. J.; Nachtsheim, P. R.

    1972-01-01

    A numerical method for the solution of large systems of nonlinear differential equations of the boundary-layer type is described. The method is a modification of the technique for satisfying asymptotic boundary conditions. The present method employs inverse interpolation instead of the Newton method to adjust the initial conditions of the related initial-value problem. This eliminates the so-called perturbation equations. The elimination of the perturbation equations not only reduces the user's preliminary work in the application of the method, but also reduces the number of time-consuming initial-value problems to be numerically solved at each iteration. For further ease of application, the solution of the overdetermined system for the unknown initial conditions is obtained automatically by applying Golub's linear least-squares algorithm. The relative ease of application of the proposed numerical method increases directly as the order of the differential-equation system increases. Hence, the method is especially attractive for the solution of large-order systems. After the method is described, it is applied to a fifth-order problem from boundary-layer theory.
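
    A toy shooting sketch in the spirit of the method is shown below: the related initial-value problem is integrated, the asymptotic boundary condition is evaluated, and the unknown initial condition is adjusted by inverse (secant) interpolation rather than a Newton step. The Blasius boundary-layer equation is used as a small stand-in for the larger systems treated in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Shooting sketch for the Blasius equation f''' + 0.5 f f'' = 0 with f(0) = f'(0) = 0
# and the asymptotic condition f'(inf) -> 1; the unknown initial condition is f''(0).

def blasius_rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def residual(s, eta_max=10.0):
    # miss distance of the asymptotic condition for shooting value s = f''(0)
    sol = solve_ivp(blasius_rhs, (0.0, eta_max), [0.0, 0.0, s], rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

# inverse (secant) interpolation on the miss distance instead of a Newton step
s0, s1 = 0.1, 0.5
r0, r1 = residual(s0), residual(s1)
for _ in range(20):
    s2 = s1 - r1 * (s1 - s0) / (r1 - r0)
    s0, r0, s1, r1 = s1, r1, s2, residual(s2)
    if abs(r1) < 1e-10:
        break

print("wall shear parameter f''(0) ≈", round(s1, 5))   # reference value ~0.33206
```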

  8. Simultaneous analysis of large INTEGRAL/SPI1 datasets: Optimizing the computation of the solution and its variance using sparse matrix algorithms

    NASA Astrophysics Data System (ADS)

    Bouchet, L.; Amestoy, P.; Buttari, A.; Rouet, F.-H.; Chauvin, M.

    2013-02-01

    Nowadays, analyzing and reducing the ever larger astronomical datasets is becoming a crucial challenge, especially for long cumulated observation times. The INTEGRAL/SPI X/γ-ray spectrometer is an instrument for which it is essential to process many exposures at the same time in order to increase the low signal-to-noise ratio of the weakest sources. In this context, the conventional methods for data reduction are inefficient and sometimes not feasible at all. Processing several years of data simultaneously requires computing not only the solution of a large system of equations, but also the associated uncertainties. We aim at reducing the computation time and the memory usage. Since the SPI transfer function is sparse, we have used some popular methods for the solution of large sparse linear systems; we briefly review these methods. We use the Multifrontal Massively Parallel Solver (MUMPS) to compute the solution of the system of equations. We also need to compute the variance of the solution, which amounts to computing selected entries of the inverse of the sparse matrix corresponding to our linear system. This can be achieved through one of the latest features of the MUMPS software that has been partly motivated by this work. In this paper we provide a brief presentation of this feature and evaluate its effectiveness on astrophysical problems requiring the processing of large datasets simultaneously, such as the study of the entire emission of the Galaxy. We used these algorithms to solve the large sparse systems arising from SPI data processing and to obtain both their solutions and the associated variances. In conclusion, thanks to these newly developed tools, processing large datasets arising from SPI is now feasible with both a reasonable execution time and a low memory usage.
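
    The two quantities discussed, the solution of a sparse system and the variances given by diagonal entries of its inverse, can be illustrated at toy scale with SciPy as below. MUMPS computes selected inverse entries directly from its factors; the brute-force unit-vector solves used here are only feasible for small matrices and are not the MUMPS feature.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Small-scale illustration: solve a sparse linear system and obtain variance-like
# quantities, i.e. selected diagonal entries of the matrix inverse, one solve per entry.

n = 200
diags = [-np.ones(n - 1), 4.0 * np.ones(n), -np.ones(n - 1)]
A = sp.diags(diags, offsets=[-1, 0, 1], format="csc")   # sparse, diagonally dominant test matrix
b = np.ones(n)

lu = splu(A)                       # sparse LU factorization
x = lu.solve(b)                    # solution of A x = b

wanted = [0, 50, 100, 199]         # requested entries of diag(A^-1)
diag_inv = []
for i in wanted:
    e = np.zeros(n); e[i] = 1.0
    diag_inv.append(lu.solve(e)[i])

print("solution norm:", np.linalg.norm(x))
print("selected diag(A^-1):", np.round(diag_inv, 4))
```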

  9. Exact Solutions of Coupled Multispecies Linear Reaction–Diffusion Equations on a Uniformly Growing Domain

    PubMed Central

    Simpson, Matthew J.; Sharp, Jesse A.; Morrow, Liam C.; Baker, Ruth E.

    2015-01-01

    Embryonic development involves diffusion and proliferation of cells, as well as diffusion and reaction of molecules, within growing tissues. Mathematical models of these processes often involve reaction–diffusion equations on growing domains that have been primarily studied using approximate numerical solutions. Recently, we have shown how to obtain an exact solution to a single, uncoupled, linear reaction–diffusion equation on a growing domain, 0 < x < L(t), where L(t) is the domain length. The present work is an extension of our previous study, and we illustrate how to solve a system of coupled reaction–diffusion equations on a growing domain. This system of equations can be used to study the spatial and temporal distributions of different generations of cells within a population that diffuses and proliferates within a growing tissue. The exact solution is obtained by applying an uncoupling transformation, and the uncoupled equations are solved separately before applying the inverse uncoupling transformation to give the coupled solution. We present several example calculations to illustrate different types of behaviour. The first example calculation corresponds to a situation where the initially–confined population diffuses sufficiently slowly that it is unable to reach the moving boundary at x = L(t). In contrast, the second example calculation corresponds to a situation where the initially–confined population is able to overcome the domain growth and reach the moving boundary at x = L(t). In its basic format, the uncoupling transformation at first appears to be restricted to deal only with the case where each generation of cells has a distinct proliferation rate. However, we also demonstrate how the uncoupling transformation can be used when each generation has the same proliferation rate by evaluating the exact solutions as an appropriate limit. PMID:26407013

  10. Exact Solutions of Coupled Multispecies Linear Reaction-Diffusion Equations on a Uniformly Growing Domain.

    PubMed

    Simpson, Matthew J; Sharp, Jesse A; Morrow, Liam C; Baker, Ruth E

    2015-01-01

    Embryonic development involves diffusion and proliferation of cells, as well as diffusion and reaction of molecules, within growing tissues. Mathematical models of these processes often involve reaction-diffusion equations on growing domains that have been primarily studied using approximate numerical solutions. Recently, we have shown how to obtain an exact solution to a single, uncoupled, linear reaction-diffusion equation on a growing domain, 0 < x < L(t), where L(t) is the domain length. The present work is an extension of our previous study, and we illustrate how to solve a system of coupled reaction-diffusion equations on a growing domain. This system of equations can be used to study the spatial and temporal distributions of different generations of cells within a population that diffuses and proliferates within a growing tissue. The exact solution is obtained by applying an uncoupling transformation, and the uncoupled equations are solved separately before applying the inverse uncoupling transformation to give the coupled solution. We present several example calculations to illustrate different types of behaviour. The first example calculation corresponds to a situation where the initially-confined population diffuses sufficiently slowly that it is unable to reach the moving boundary at x = L(t). In contrast, the second example calculation corresponds to a situation where the initially-confined population is able to overcome the domain growth and reach the moving boundary at x = L(t). In its basic format, the uncoupling transformation at first appears to be restricted to deal only with the case where each generation of cells has a distinct proliferation rate. However, we also demonstrate how the uncoupling transformation can be used when each generation has the same proliferation rate by evaluating the exact solutions as an appropriate limit.

  11. Thermal activation parameters of plastic flow reveal deformation mechanisms in the CrMnFeCoNi high-entropy alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laplanche, Guillaume; Bonneville, J.; Varvenne, C.

    To reveal the operating mechanisms of plastic deformation in an FCC high-entropy alloy, the activation volumes in CrMnFeCoNi have been measured as a function of plastic strain and temperature between 77 K and 423 K using repeated load relaxation experiments. At the yield stress, σy, the activation volume varies from ~60 b³ at 77 K to ~360 b³ at 293 K and scales inversely with yield stress. With increasing plastic strain, the activation volume decreases and the trends follow the Cottrell-Stokes law, according to which the inverse activation volume should increase linearly with σ - σy (Haasen plot). This is consistent with the notion that hardening due to an increase in the density of forest dislocations is naturally associated with a decrease in the activation volume because the spacing between dislocations decreases. The values and trends in activation volume agree with theoretical predictions that treat the HEA as a high-concentration solid-solution-strengthened alloy. Lastly, these results demonstrate that this HEA deforms by the mechanisms typical of solute strengthening in FCC alloys, and thus indicate that the high compositional/structural complexity does not introduce any new intrinsic deformation mechanisms.

  12. Thermal activation parameters of plastic flow reveal deformation mechanisms in the CrMnFeCoNi high-entropy alloy

    DOE PAGES

    Laplanche, Guillaume; Bonneville, J.; Varvenne, C.; ...

    2017-10-06

    To reveal the operating mechanisms of plastic deformation in an FCC high-entropy alloy, the activation volumes in CrMnFeCoNi have been measured as a function of plastic strain and temperature between 77 K and 423 K using repeated load relaxation experiments. At the yield stress, σy, the activation volume varies from ~60 b³ at 77 K to ~360 b³ at 293 K and scales inversely with yield stress. With increasing plastic strain, the activation volume decreases and the trends follow the Cottrell-Stokes law, according to which the inverse activation volume should increase linearly with σ - σy (Haasen plot). This is consistent with the notion that hardening due to an increase in the density of forest dislocations is naturally associated with a decrease in the activation volume because the spacing between dislocations decreases. The values and trends in activation volume agree with theoretical predictions that treat the HEA as a high-concentration solid-solution-strengthened alloy. Lastly, these results demonstrate that this HEA deforms by the mechanisms typical of solute strengthening in FCC alloys, and thus indicate that the high compositional/structural complexity does not introduce any new intrinsic deformation mechanisms.

  13. Using a derivative-free optimization method for multiple solutions of inverse transport problems

    DOE PAGES

    Armstrong, Jerawan C.; Favorite, Jeffrey A.

    2016-01-14

    Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.

  14. Near constant-time optimal piecewise LDR to HDR inverse tone mapping

    NASA Astrophysics Data System (ADS)

    Chen, Qian; Su, Guan-Ming; Yin, Peng

    2015-02-01

    In a backward compatible HDR image/video compression, it is a general approach to reconstruct HDR from compressed LDR as a prediction to the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd order polynomial has better mapping accuracy than a 1-piece high order or 2-piecewise linear mapping, but it is also the most time-consuming method because finding the optimal pivot point that splits the LDR range into 2 pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least squares solution, each entry in the intermediate matrix can be written as the sum of some basic terms, which can be pre-calculated into look-up tables. Since solving the matrix becomes looking up values in tables, computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot point search to find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while being about 60 times faster than the traditional exhaustive search in 2-piecewise 2nd order polynomial inverse tone mapping with a continuity constraint.
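    As an illustration of the look-up-table idea described in this abstract, the following minimal Python sketch accumulates per-codeword sums of x^k and x^k·y once, turns them into prefix sums so the normal-equation entries for any pivot segment are O(1), and then loops over candidate pivots solving two small least-squares systems each. Function names and parameters are ours, and the continuity constraint mentioned in the abstract is omitted for brevity.
```python
import numpy as np

def fit_segment(S):
    """Solve the 3x3 normal equations for one quadratic segment.

    S holds the segment sums n, sx, sx2, sx3, sx4, sy, sxy, sx2y
    (x = LDR codeword, y = HDR value). Returns (coeffs, partial SSE),
    where the partial SSE omits the pivot-independent sum(y^2) term.
    """
    A = np.array([[S['n'],   S['sx'],  S['sx2']],
                  [S['sx'],  S['sx2'], S['sx3']],
                  [S['sx2'], S['sx3'], S['sx4']]])
    b = np.array([S['sy'], S['sxy'], S['sx2y']])
    c = np.linalg.lstsq(A, b, rcond=None)[0]
    return c, -c @ b                      # SSE = sum(y^2) - c.b

def best_pivot(ldr, hdr, n_codes=256):
    """Exhaustive pivot search made cheap by prefix-summed tables.

    ldr: integer codewords (0..n_codes-1); hdr: HDR values per pixel.
    """
    keys = ['n', 'sx', 'sx2', 'sx3', 'sx4', 'sy', 'sxy', 'sx2y']
    x = ldr.astype(float); y = hdr.astype(float)
    vals = [np.ones_like(x), x, x**2, x**3, x**4, y, x*y, x*x*y]
    tab = {k: np.zeros(n_codes) for k in keys}
    for k, v in zip(keys, vals):
        np.add.at(tab[k], ldr, v)         # per-codeword accumulation
    cum = {k: np.concatenate([[0.0], np.cumsum(t)]) for k, t in tab.items()}
    seg = lambda lo, hi: {k: cum[k][hi] - cum[k][lo] for k in keys}

    best = (np.inf, None, None, None)
    for p in range(2, n_codes - 1):       # candidate pivot codeword
        cl, el = fit_segment(seg(0, p))
        cr, er = fit_segment(seg(p, n_codes))
        if el + er < best[0]:
            best = (el + er, p, cl, cr)
    return best                           # (partial SSE, pivot, left coeffs, right coeffs)
```
    Because the per-pivot work is a pair of 3x3 solves, the cost of the search is essentially independent of the number of pixels, matching the "near constant time" claim.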

  15. Determination of Nerve Fiber Diameter Distribution From Compound Action Potential: A Continuous Approach.

    PubMed

    Un, M Kerem; Kaghazchi, Hamed

    2018-01-01

    When a signal is initiated in the nerve, it is transmitted along each nerve fiber via an action potential (called the single fiber action potential (SFAP)) which travels with a velocity that is related to the diameter of the fiber. The additive superposition of SFAPs constitutes the compound action potential (CAP) of the nerve. The fiber diameter distribution (FDD) in the nerve can be computed from the CAP data by solving an inverse problem. This is usually achieved by dividing the fibers into a finite number of diameter groups and solving a corresponding linear system to optimize the FDD. However, the number of fibers in a nerve can sometimes run into the thousands, and it is possible to assume a continuous distribution for the fiber diameters, which leads to a gradient optimization problem. In this paper, we have evaluated this continuous approach to the solution of the inverse problem. We have utilized an analytical function for the SFAP and an assumed polynomial form for the FDD. The inverse problem involves the optimization of the polynomial coefficients to obtain the best estimate for the FDD. We have observed that an eighth order polynomial for the FDD can capture both unimodal and bimodal fiber distributions present in vivo, even in the case of noisy CAP data. The assumed FDD distribution regularizes the ill-conditioned inverse problem and produces good results.
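    Under the reading that the CAP is a superposition of SFAPs weighted by the FDD, a polynomial FDD makes the forward model linear in the polynomial coefficients, so they can be fit by ordinary least squares. The sketch below assumes a user-supplied SFAP model sfap(t, D) (hypothetical here) and does not include the constraints or regularization choices the authors may have used.
```python
import numpy as np

def fit_fdd_polynomial(t, cap, sfap, diam, order=8):
    """Fit polynomial FDD coefficients a_k from a measured CAP.

    Model: cap(t) ~ sum_k a_k * integral over D of D**k * sfap(t, D) dD,
    which is linear in a_k. `sfap(t, D)` is a single-fibre action potential
    model (assumed); `diam` is a grid of fibre diameters.
    """
    dD = np.gradient(diam)                          # quadrature weights
    S = np.array([sfap(t, D) for D in diam])        # shape (n_D, n_t)
    B = np.column_stack([(S.T * (diam**k * dD)).sum(axis=1)
                         for k in range(order + 1)])  # basis responses
    coeffs, *_ = np.linalg.lstsq(B, cap, rcond=None)
    fdd = np.polynomial.polynomial.polyval(diam, coeffs)
    return coeffs, np.clip(fdd, 0.0, None)          # FDD should be non-negative
```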

  16. Optimal experimental designs for estimating Henry's law constants via the method of phase ratio variation.

    PubMed

    Kapelner, Adam; Krieger, Abba; Blanford, William J

    2016-10-14

    When measuring Henry's law constants (k_H) using the phase ratio variation (PRV) method via headspace gas chromatography (GC), the value of k_H of the compound under investigation is calculated from the ratio of the slope to the intercept of a linear regression of the inverse GC response versus the ratio of gas to liquid volumes of a series of vials drawn from the same parent solution. Thus, an experimenter collects measurements consisting of the independent variable (the gas/liquid volume ratio) and dependent variable (the inverse GC peak area). A review of the literature found that the common design is a simple uniform spacing of liquid volumes. We present an optimal experimental design which estimates k_H with minimum error and provides multiple means for building confidence intervals for such estimates. We illustrate performance improvements of our design with an example measuring the k_H for naphthalene in aqueous solution as well as simulations on previous studies. Our designs are most applicable after a trial run defines the linear GC response and the linear phase ratio to GC^-1 region (where the PRV method is suitable), after which a practitioner can collect measurements in bulk. The designs can be easily computed using our open source software optDesignSlopeInt, an R package on CRAN. Copyright © 2016 Elsevier B.V. All rights reserved.
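    The core estimator described in this abstract (slope over intercept of the regression of inverse response on phase ratio) can be sketched in a few lines. This is a minimal illustration of the PRV estimator itself, not of the authors' optimal design, and the variable names are ours.
```python
import numpy as np

def henry_constant_prv(beta, peak_area):
    """Estimate k_H by the phase ratio variation (PRV) method.

    beta: gas/liquid volume ratios of the vials.
    peak_area: corresponding GC responses.
    The inverse response 1/A is regressed linearly on beta and k_H is taken
    as the ratio of the regression slope to its intercept.
    """
    y = 1.0 / np.asarray(peak_area, dtype=float)
    slope, intercept = np.polyfit(np.asarray(beta, dtype=float), y, 1)
    return slope / intercept
```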

  17. Theory of end-labeled free-solution electrophoresis: is the end effect important?

    PubMed

    Chubynsky, Mykyta V; Slater, Gary W

    2014-03-01

    In the theory of free-solution electrophoresis of a polyelectrolyte (such as DNA) conjugated with a "drag-tag," the conjugate is divided into segments of equal hydrodynamic friction and its electrophoretic mobility is calculated as a weighted average of the mobilities of the individual segments. If all the weights are assumed equal, then for an electrically neutral drag-tag, the elution time t is predicted to depend linearly on the inverse DNA length 1/M. While it is well known that the equal-weights assumption is approximate and in reality the weights increase toward the ends, this "end effect" has been assumed to be small, since in experiments the t(1/M) dependence seems to be nearly perfectly linear. We challenge this assumption by pointing out that some experimental linear fits do not extrapolate to the free (i.e. untagged) DNA elution time in the limit 1/M→0, indicating nonlinearity outside the fitting range. We show that a theory for a flexible polymer taking the end effect into account produces a nonlinear curve that, however, can be fitted with a straight line over a limited range of 1/M typical of experiments, but with a "wrong" intercept, which explains the experimental results without additional assumptions. We also study the influence of the flexibilities of the charged and neutral parts. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Frequency-domain full-waveform inversion with non-linear descent directions

    NASA Astrophysics Data System (ADS)

    Geng, Yu; Pan, Wenyong; Innanen, Kristopher A.

    2018-05-01

    Full-waveform inversion (FWI) is a highly non-linear inverse problem, normally solved iteratively, with each iteration involving an update constructed through linear operations on the residuals. Incorporating a flexible degree of non-linearity within each update may have important consequences for convergence rates, determination of low model wavenumbers and discrimination of parameters. We examine one approach for doing so, wherein higher order scattering terms are included within the sensitivity kernel during the construction of the descent direction, adjusting it away from that of the standard Gauss-Newton approach. These scattering terms are naturally admitted when we construct the sensitivity kernel by varying not the current but the to-be-updated model at each iteration. Linear and/or non-linear inverse scattering methodologies allow these additional sensitivity contributions to be computed from the current data residuals within any given update. We show that in the presence of pre-critical reflection data, the error in a second-order non-linear update to a background of s0 is, in our scheme, proportional to at most (Δs/s0)^3 in the actual parameter jump Δs causing the reflection. In contrast, the error in a standard Gauss-Newton FWI update is proportional to (Δs/s0)^2. For numerical implementation of more complex cases, we introduce a non-linear frequency-domain scheme, with an inner and an outer loop. A perturbation is determined from the data residuals within the inner loop, and a descent direction based on the resulting non-linear sensitivity kernel is computed in the outer loop. We examine the response of this non-linear FWI using acoustic single-parameter synthetics derived from the Marmousi model. The inverted results vary depending on data frequency ranges and initial models, but we conclude that the non-linear FWI has the capability to generate high-resolution model estimates in both shallow and deep regions, and to converge rapidly, relative to a benchmark FWI approach involving the standard gradient.

  19. Regolith thermal property inversion in the LUNAR-A heat-flow experiment

    NASA Astrophysics Data System (ADS)

    Hagermann, A.; Tanaka, S.; Yoshida, S.; Fujimura, A.; Mizutani, H.

    2001-11-01

    In 2003, two penetrators of the LUNAR-A mission of ISAS will investigate the internal structure of the Moon by conducting seismic and heat-flow experiments. Heat flow is the product of the thermal gradient ∂T/∂z and the thermal conductivity λ of the lunar regolith. For measuring the thermal conductivity (or diffusivity), each penetrator will carry five thermal property sensors, consisting of small disc heaters. The thermal response Ts(t) of the heater itself to the constant known power supply of approx. 50 mW serves as the data for the subsequent interpretation. Horai et al. (1991) found a forward analytical solution to the problem of determining the thermal inertia λρc of the regolith for constant thermal properties and a simplified geometry. In the inversion, the problem of deriving the unknown thermal properties of a medium from known heat sources and temperatures is an Identification Heat Conduction Problem (IDHCP), an ill-posed inverse problem. Assuming that thermal conductivity λ and heat capacity ρc are linear functions of temperature (which is reasonable in most cases), one can apply a Kirchhoff transformation to linearize the heat conduction equation, which minimizes computing time. Then the error functional, i.e. the difference between the measured temperature response of the heater and the predicted temperature response, can be minimized, thus solving for the thermal diffusivity κ = λ/(ρc), which completes the set of parameters needed for a detailed description of the thermal properties of the lunar regolith. Results of model calculations will be presented, in which synthetic data and calibration data are used to invert the unknown thermal diffusivity of the medium by means of a modified Newton method. Due to the ill-posedness of the problem, the number of parameters to be solved for should be limited. As the model calculations reveal, a homogeneous regolith allows for a fast and accurate inversion.

  20. A numerical analysis for non-linear radiation in MHD flow around a cylindrical surface with chemically reactive species

    NASA Astrophysics Data System (ADS)

    Khan, Junaid Ahmad; Mustafa, M.

    2018-03-01

    Boundary layer flow around a stretchable rough cylinder is modeled by taking into account boundary slip and transverse magnetic field effects. The main concern is to resolve the heat/mass transfer problem considering non-linear radiative heat transfer and temperature/concentration jump aspects. Using a conventional similarity approach, the equations of motion and heat transfer are converted into a boundary value problem whose solution is computed by the shooting method for a broad range of slip coefficients. The proposed numerical scheme appears to improve as the strengths of the magnetic field and slip coefficients are enhanced. Axial velocity and temperature are considerably influenced by a parameter M which is inversely proportional to the radius of the cylinder. A significant change in the temperature profile is depicted for a growing wall-to-ambient temperature ratio. Relevant physical quantities such as wall shear stress, local Nusselt number and local Sherwood number are elucidated in detail.
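    The shooting method named in this abstract can be illustrated generically: integrate the similarity ODE system from the wall with a guessed initial slope and adjust the guess until the far-field boundary condition is met. The sketch below is our own minimal example of that machinery (with a toy ODE as a check), not the authors' MHD/slip-flow system.
```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def shoot(rhs, y0_known, guess_lo, guess_hi, target, eta_inf=10.0):
    """Generic shooting method for a two-point boundary value problem.

    rhs(eta, y): first-order ODE system from the similarity reduction.
    y0_known: list of known initial values; the unknown initial slope s is
    appended and tuned until y[0](eta_inf) matches `target`.
    """
    def residual(s):
        sol = solve_ivp(rhs, (0.0, eta_inf), y0_known + [s],
                        rtol=1e-8, atol=1e-10)
        return sol.y[0, -1] - target
    s_star = brentq(residual, guess_lo, guess_hi)   # bracketed root find
    return solve_ivp(rhs, (0.0, eta_inf), y0_known + [s_star], dense_output=True)

# Toy check: f'' = -f with f(0) = 0 and f(10) = sin(10); recovered slope is 1.
rhs = lambda eta, y: [y[1], -y[0]]
sol = shoot(rhs, [0.0], -5.0, 5.0, np.sin(10.0))
```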

  1. An arbitrary-order staggered time integrator for the linear acoustic wave equation

    NASA Astrophysics Data System (ADS)

    Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo

    2018-02-01

    We suggest a staggered time integrator whose order of accuracy can be arbitrarily extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed, based on an error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost several times, but also allows us to flexibly set the modelling parameters such as the time step length, grid interval and P-wave speed. It is demonstrated that the proposed method can almost eliminate temporal dispersion errors during long-term simulations regardless of the heterogeneity of the media and the time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in practical use for imaging algorithms or inverse problems.
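    For context, the basic second-order staggered (leapfrog) time stepping that such arbitrary-order integrators generalize can be sketched in 1D as below. This is a generic illustration, not the authors' scheme; the source parameters are hypothetical and the boundaries are simply left rigid.
```python
import numpy as np

def acoustic_1d_staggered(nx=400, nt=1000, dx=5.0, dt=1e-3, c=2000.0, rho=1.0):
    """Second-order staggered-grid time stepping for the 1D acoustic wave equation.

    Pressure p lives on integer grid points and time levels; particle velocity v
    on half-shifted points and half time levels. Stability needs c*dt/dx <= 1.
    """
    p = np.zeros(nx)
    v = np.zeros(nx - 1)
    kappa = rho * c**2                   # bulk modulus
    src, f0, t0 = nx // 2, 25.0, 0.04    # hypothetical Ricker source settings
    for it in range(nt):
        # velocity update at t + dt/2
        v -= (dt / (rho * dx)) * (p[1:] - p[:-1])
        # pressure update at t + dt (interior points only)
        p[1:-1] -= (dt * kappa / dx) * (v[1:] - v[:-1])
        # Ricker wavelet source injection
        arg = (np.pi * f0 * (it * dt - t0))**2
        p[src] += (1.0 - 2.0 * arg) * np.exp(-arg)
    return p
```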

  2. Electron dynamics inside a vacuum tube diode through linear differential equations

    NASA Astrophysics Data System (ADS)

    González, Gabriel; González Orozco, Fco. Javier

    2014-04-01

    In this paper we analyze the motion of charged particles in a vacuum tube diode by solving linear differential equations. Our analysis is based on expressing the volume charge density as a function of the current density and coordinates only, i.e. ρ=ρ(J,z), while in the usual scheme the volume charge density is expressed as a function of the current density and the electrostatic potential, i.e. ρ=ρ(J,V). We show that, in the case of slowly varying charge density, the space-charge-limited current is reduced by up to 50%. Our approach gives the well-known behavior of the classical current density proportional to the three-halves power of the bias potential and inversely proportional to the square of the gap distance between electrodes, and does not require the solution of the nonlinear differential equation normally associated with the Child-Langmuir formulation.
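    The classical scaling recalled in this abstract is the Child-Langmuir law; a small snippet makes it concrete. Note that this gives only the standard space-charge-limited current density, not the authors' reduced value for slowly varying charge density.
```python
from scipy.constants import epsilon_0, elementary_charge as e, electron_mass as m_e

def child_langmuir_current_density(V, d):
    """Classical Child-Langmuir space-charge-limited current density (SI units).

    J = (4/9) * eps0 * sqrt(2e/m_e) * V**1.5 / d**2: proportional to the
    three-halves power of the bias potential V and inversely proportional
    to the square of the electrode gap d.
    """
    return (4.0 / 9.0) * epsilon_0 * (2.0 * e / m_e) ** 0.5 * V ** 1.5 / d ** 2

# Example: child_langmuir_current_density(100.0, 1e-3) for a 100 V bias and 1 mm gap.
```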

  3. Constraints to solve parallelogram grid problems in 2D non separable linear canonical transform

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Healy, John J.; Muniraj, Inbarasan; Cui, Xiao-Guang; Malallah, Ra'ed; Ryle, James P.; Sheridan, John T.

    2017-05-01

    The 2D non-separable linear canonical transform (2D-NS-LCT) can model a wide range of paraxial optical systems. Digital algorithms to evaluate the 2D-NS-LCT are important in modeling light field propagation and are also of interest in many digital signal processing applications. In [Zhao 14] we reported that a given 2D input image with a rectangular shape/boundary, in general, results in a parallelogram output sampling grid (generally in affine rather than Cartesian coordinates), thus limiting further calculations, e.g. the inverse transform. One possible solution is to use interpolation techniques; however, this reduces the speed and accuracy of the numerical approximations. To alleviate this problem, in this paper, some constraints are derived under which the output samples are located in Cartesian coordinates. Therefore, no interpolation operation is required and the calculation error can be largely eliminated.

  4. Photonic band gap structure simulator

    DOEpatents

    Chen, Chiping; Shapiro, Michael A.; Smirnova, Evgenya I.; Temkin, Richard J.; Sirigiri, Jagadishwar R.

    2006-10-03

    A system and method for designing photonic band gap structures. The system and method provide a user with the capability to produce a model of a two-dimensional array of conductors corresponding to a unit cell. The model involves a linear equation. Boundary conditions representative of conditions at the boundary of the unit cell are applied to a solution of the Helmholtz equation defined for the unit cell. The linear equation can be approximated by a Hermitian matrix. An eigenvalue of the Helmholtz equation is calculated. One computation approach involves calculating finite differences. The model can include a symmetry element, such as a center of inversion, a rotation axis, and a mirror plane. A graphical user interface is provided for the user's convenience. A display is provided to display to a user the calculated eigenvalue, corresponding to a photonic energy level in the Brillouin zone of the unit cell.
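    The finite-difference eigenvalue computation mentioned in this record can be sketched on a square unit cell: discretize the negative Laplacian as a sparse Hermitian matrix and solve for its lowest eigenvalues. This sketch uses simple Dirichlet boundaries as a stand-in; the simulator described in the patent applies the unit-cell boundary conditions of the photonic lattice instead, and all names here are ours.
```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import eigsh

def unit_cell_modes(n=40, a=1.0, n_modes=4):
    """Lowest Helmholtz eigenvalues -Laplacian u = k^2 u on an n x n grid.

    The 5-point Laplacian is real and symmetric (hence Hermitian); Dirichlet
    boundaries are a simplification of the unit-cell conditions.
    """
    h = a / (n + 1)
    D2 = diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
               [-1, 0, 1]) / h**2                        # 1D -d^2/dx^2
    L = kron(identity(n), D2) + kron(D2, identity(n))    # 2D -Laplacian
    k2, modes = eigsh(L.tocsc(), k=n_modes, sigma=0.0, which='LM')
    return k2, modes

# For the square cell, k2 approximates (pi/a)^2 * (p^2 + q^2), p, q = 1, 2, ...
```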

  5. A fractional Fourier transform analysis of the scattering of ultrasonic waves.

    PubMed

    Tant, Katherine M M; Mulholland, Anthony J; Langer, Matthias; Gachagan, Anthony

    2015-03-08

    Many safety critical structures, such as those found in nuclear plants, oil pipelines and in the aerospace industry, rely on key components that are constructed from heterogeneous materials. Ultrasonic non-destructive testing (NDT) uses high-frequency mechanical waves to inspect these parts, ensuring they operate reliably without compromising their integrity. It is possible to employ mathematical models to develop a deeper understanding of the acquired ultrasonic data and enhance defect imaging algorithms. In this paper, a model for the scattering of ultrasonic waves by a crack is derived in the time-frequency domain. The fractional Fourier transform (FrFT) is applied to an inhomogeneous wave equation where the forcing function is prescribed as a linear chirp, modulated by a Gaussian envelope. The homogeneous solution is found via the Born approximation which encapsulates information regarding the flaw geometry. The inhomogeneous solution is obtained via the inverse Fourier transform of a Gaussian-windowed linear chirp excitation. It is observed that, although the scattering profile of the flaw does not change, it is amplified. Thus, the theory demonstrates the enhanced signal-to-noise ratio permitted by the use of coded excitation, as well as establishing a time-frequency domain framework to assist in flaw identification and classification.

  6. Earthquake mechanisms from linear-programming inversion of seismic-wave amplitude ratios

    USGS Publications Warehouse

    Julian, B.R.; Foulger, G.R.

    1996-01-01

    The amplitudes of radiated seismic waves contain far more information about earthquake source mechanisms than do first-motion polarities, but amplitudes are severely distorted by the effects of heterogeneity in the Earth. This distortion can be reduced greatly by using the ratios of amplitudes of appropriately chosen seismic phases, rather than simple amplitudes, but existing methods for inverting amplitude ratios are severely nonlinear and require computationally intensive searching methods to ensure that solutions are globally optimal. Searching methods are particularly costly if general (moment tensor) mechanisms are allowed. Efficient linear-programming methods, which do not suffer from these problems, have previously been applied to inverting polarities and wave amplitudes. We extend these methods to amplitude ratios, in which formulation an inequality constraint on an amplitude ratio takes the same mathematical form as a polarity observation. Three-component digital data for an earthquake at the Hengill-Grensdalur geothermal area in southwestern Iceland illustrate the power of the method. Polarities of P, SH, and SV waves, unusually well distributed on the focal sphere, cannot distinguish between diverse mechanisms, including a double couple. Amplitude ratios, on the other hand, clearly rule out the double-couple solution and require a large explosive isotropic component.
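    To show the flavour of a linear-programming mechanism inversion with polarity-style inequality constraints, the sketch below finds a moment tensor whose predicted amplitudes satisfy sign constraints with the largest common margin. This is an illustrative formulation under our own assumptions (matrix G of excitation coefficients, unit-bounded parameters), not the authors' exact objective function.
```python
import numpy as np
from scipy.optimize import linprog

def fit_mechanism_lp(G, signs):
    """Moment-tensor-style LP feasibility with polarity/ratio inequality constraints.

    G: (n_obs, n_par) rows g_i so the predicted amplitude is g_i . m.
    signs: +1/-1 per observation; each constraint is sign_i * (g_i . m) >= t.
    We maximize the common margin t subject to |m_k| <= 1.
    """
    n_obs, n_par = G.shape
    c = np.zeros(n_par + 1)
    c[-1] = -1.0                                            # maximize t
    A_ub = np.hstack([-signs[:, None] * G, np.ones((n_obs, 1))])  # t - s_i g_i.m <= 0
    b_ub = np.zeros(n_obs)
    bounds = [(-1.0, 1.0)] * n_par + [(0.0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method='highs')
    return res.x[:n_par], res.x[-1]                         # mechanism, achieved margin
```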

  7. Determination and analysis of non-linear index profiles in electron-beam-deposited MgO-Al2O3-ZrO2 ternary composite thin-film optical coatings

    NASA Astrophysics Data System (ADS)

    Sahoo, N. K.; Thakur, S.; Senthilkumar, M.; Das, N. C.

    2005-02-01

    Thickness-dependent index non-linearity in thin films has been a thought-provoking as well as intriguing topic in the field of optical coatings. The characterization and analysis of such inhomogeneous index profiles pose several degrees of challenge to thin-film researchers, depending upon the availability of relevant experimental and process-monitoring-related information. In the present work, a variety of novel experimental non-linear index profiles have been observed in thin films of MgO-Al2O3-ZrO2 ternary composites in solid solution under various electron-beam deposition parameters. Analysis and derivation of these non-linear spectral index profiles have been carried out by an inverse-synthesis approach using a real-time optical monitoring signal and post-deposition transmittance and reflection spectra. Most of the non-linear index functions are observed to fit polynomial equations of order seven or eight very well. In this paper, the application of such a non-linear index function has also been demonstrated in designing electric-field-optimized high-damage-threshold multilayer coatings such as normal- and oblique-incidence edge filters and a broadband beam splitter for p-polarized light. Such designs can also advantageously maintain the microstructural stability of the multilayer structure due to the low stress factor of the non-linear ternary composite layers.

  8. Probabilistic numerical methods for PDE-constrained Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Cockayne, Jon; Oates, Chris; Sullivan, Tim; Girolami, Mark

    2017-06-01

    This paper develops meshless methods for probabilistically describing discretisation error in the numerical solution of partial differential equations. This construction enables the solution of Bayesian inverse problems while accounting for the impact of the discretisation of the forward problem. In particular, this drives statistical inferences to be more conservative in the presence of significant solver error. Theoretical results are presented describing rates of convergence for the posteriors in both the forward and inverse problems. This method is tested on a challenging inverse problem with a nonlinear forward model.

  9. An inverse problem strategy based on forward model evaluations: Gradient-based optimization without adjoint solves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    2016-07-01

    This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.

  10. Deforming black hole and cosmological solutions by quasiperiodic and/or pattern forming structures in modified and Einstein gravity

    NASA Astrophysics Data System (ADS)

    Bubuianu, Laurenţiu; Vacaru, Sergiu I.

    2018-05-01

    We elaborate on the anholonomic frame deformation method, AFDM, for constructing exact solutions with quasiperiodic structure in modified gravity theories, MGTs, and general relativity, GR. Such solutions are described by generic off-diagonal metrics, nonlinear and linear connections and (effective) matter sources with coefficients depending on all spacetime coordinates via corresponding classes of generation and integration functions and (effective) matter sources. Effective free energy functionals and nonlinear evolution equations are studied for generating off-diagonal quasiperiodic deformations of black hole and/or homogeneous cosmological metrics. The physical data for such functionals are set by different values of constants and prescribed symmetries defining quasiperiodic structures at cosmological scales, or for astrophysical objects in nontrivial gravitational backgrounds, in forms similar to those encountered in condensed matter physics. It is shown how quasiperiodic structures determined by general nonlinear, or additive, functionals for generating functions and (effective) sources may transform black-hole-like configurations into cosmological metrics and vice versa. We speculate on possible implications of quasiperiodic solutions in dark energy and dark matter physics. Finally, it is concluded that geometric methods for constructing exact solutions constitute an important alternative tool to numerical relativity for investigating nonlinear effects in astrophysics and cosmology.

  11. An investigation on a two-dimensional problem of Mode-I crack in a thermoelastic medium

    NASA Astrophysics Data System (ADS)

    Kant, Shashi; Gupta, Manushi; Shivay, Om Namha; Mukhopadhyay, Santwana

    2018-04-01

    In this work, we consider a two-dimensional dynamical problem of an infinite space with a finite linear Mode-I crack and employ a recently proposed heat conduction model: an exact heat conduction model with a single delay term. The thermoelastic medium is taken to be homogeneous and isotropic. The boundary of the crack is subjected to prescribed temperature and stress distributions. The Fourier and Laplace transform techniques are used to solve the problem. Mathematical modeling of the present problem reduces the solution of the problem to the solution of a system of four dual integral equations. The solution of these equations is equivalent to the solution of a Fredholm integral equation of the first kind, which has been solved by using the regularization method. The inverse Laplace transform is carried out by using the Bellman method, and we obtain the numerical solution for all the physical field variables in the physical domain. Results are shown graphically, and we highlight the effects of the presence of the crack on the behavior of thermoelastic interactions inside the medium in the present context; the results are compared with those of type-III thermoelasticity.

  12. A unifying model for elongational flow of polymer melts and solutions based on the interchain tube pressure concept

    NASA Astrophysics Data System (ADS)

    Wagner, Manfred Hermann; Rolón-Garrido, Víctor Hugo

    2015-04-01

    An extended interchain tube pressure model for polymer melts and concentrated solutions is presented, based on the idea that the pressures exerted by a polymer chain on the walls of an anisotropic confinement are anisotropic (M. Doi and S. F. Edwards, The Theory of Polymer Dynamics, Oxford University Press, New York, 1986). In a tube model with variable tube diameter, chain stretch and tube diameter reduction are related, and at deformation rates larger than the inverse Rouse time τR, the chain is stretched and its confining tube becomes increasingly anisotropic. Tube diameter reduction leads to an interchain pressure in the lateral direction of the tube, which is proportional to the 3rd power of stretch (G. Marrucci and G. Ianniruberto. Macromolecules 37, 3934-3942, 2004). In the extended interchain tube pressure (EIP) model, it is assumed that chain stretch is balanced by interchain tube pressure in the lateral direction, and by a spring force in the longitudinal direction of the tube, which is linear in stretch. The scaling relations established for the relaxation modulus of concentrated solutions of polystyrene in oligomeric styrene (M. H. Wagner, Rheol. Acta 53, 765-777, 2014, M. H. Wagner, J. Non-Newtonian Fluid Mech. http://dx.doi.org/10.1016/j.jnnfm.2014.09.017, 2014) are applied to the solutions of polystyrene (PS) in diethyl phthalate (DEP) investigated by Bhattacharjee et al. (P. K. Bhattacharjee et al., Macromolecules 35, 10131-10148, 2002) and Acharya et al. (M. V. Acharya et al. AIP Conference Proceedings 1027, 391-393, 2008). The scaling relies on the difference ΔTg between the glass-transition temperatures of the melt and the glass-transition temperatures of the solutions. ΔTg can be inferred from the reported zero-shear viscosities, and the BSW spectra of the solutions are obtained from the BSW spectrum of the reference melt with good accuracy. Predictions of the EIP model are compared to the steady-state elongational viscosity data of PS/DEP solutions. Except for a possible influence of solvent quality, linear and nonlinear viscoelasticity of entangled polystyrene solutions can thus be obtained from the linear-viscoelastic characteristics of a reference polymer melt and the shift of the glass transition temperature between melt and solution.

  13. ScaffoldScaffolder: solving contig orientation via bidirected to directed graph reduction.

    PubMed

    Bodily, Paul M; Fujimoto, M Stanley; Snell, Quinn; Ventura, Dan; Clement, Mark J

    2016-01-01

    The contig orientation problem, which we formally define as the MAX-DIR problem, has at times been addressed cursorily and at times using various heuristics. In setting forth a linear-time reduction from the MAX-CUT problem to the MAX-DIR problem, we prove the latter is NP-complete. We compare the relative performance of a novel greedy approach with several other heuristic solutions. Our results suggest that our greedy heuristic algorithm not only works well but also outperforms the other algorithms due to the nature of scaffold graphs. Our results also demonstrate a novel method for identifying inverted repeats and inversion variants, both of which contradict the basic single-orientation assumption. Such inversions have previously been noted as being difficult to detect and are directly involved in the genetic mechanisms of several diseases. http://bioresearch.byu.edu/scaffoldscaffolder. paulmbodily@gmail.com Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  14. Towards a new technique to construct a 3D shear-wave velocity model based on converted waves

    NASA Astrophysics Data System (ADS)

    Hetényi, G.; Colavitti, L.

    2017-12-01

    A 3D model is essential in all branches of solid Earth sciences because geological structures can be heterogeneous and change significantly in their lateral dimension. The main target of this research is to build a crustal S-wave velocity structure in 3D. The currently popular methodologies to construct 3D shear-wave velocity models are Ambient Noise Tomography (ANT) and Local Earthquake Tomography (LET). Here we propose a new technique to map Earth discontinuities and velocities at depth based on the analysis of receiver functions. The 3D model is obtained by simultaneously inverting P-to-S converted waveforms recorded at a dense array. The individual velocity models corresponding to each trace are extracted from the 3D initial model along ray paths that are calculated using the shooting method, and the velocity model is updated during the inversion. We consider a spherical approximation of ray propagation using a global velocity model (iasp91, Kennett and Engdahl, 1991) for the teleseismic part, while we adopt Cartesian coordinates and a local velocity model for the crust. During the inversion process we work with a multi-layer crustal model for shear-wave velocity, with a flexible mesh for the depth of the interfaces. The RF inversion represents a complex problem because the amplitude and the arrival time of different phases depend in a non-linear way on the depth of interfaces and the characteristics of the velocity structure. The solution we envisage to manage the inversion problem is the stochastic Neighbourhood Algorithm (NA, Sambridge, 1999), whose goal is to find an ensemble of models that sample the good data-fitting regions of a multidimensional parameter space. Depending on the studied area, this method can accommodate possible independent and complementary geophysical data (gravity, active seismics, LET, ANT, etc.), helping to reduce the non-linearity of the inversion. Our first focus of application is the Central Alps, where a 20-year long dataset of high-quality teleseismic events recorded at 81 stations is available, and we have a high-resolution P-wave velocity model available (Diehl et al., 2009). We plan to extend the 3D shear-wave velocity inversion method to the entire Alpine domain in the frame of the AlpArray project, and apply it to other areas with a dense network of broadband seismometers.

  15. Kinematics and design of a class of parallel manipulators

    NASA Astrophysics Data System (ADS)

    Hertz, Roger Barry

    1998-12-01

    This dissertation is concerned with the kinematic analysis and design of a class of three degree-of-freedom, spatial parallel manipulators. The class of manipulators is characterized by two platforms, between which are three legs, each possessing a succession of revolute, spherical, and revolute joints. The class is termed the "revolute-spherical-revolute" class of parallel manipulators. Two members of this class are examined. The first mechanism is a double-octahedral variable-geometry truss, and the second is termed a double tripod. The history of the mechanisms is explored: the variable-geometry truss dates back to 1984, while predecessors of the double tripod mechanism date back to 1869. This work centers on the displacement analysis of these three-degree-of-freedom mechanisms. Two types of problem are solved: the forward displacement analysis (forward kinematics) and the inverse displacement analysis (inverse kinematics). The kinematic model of the class of mechanism is general in nature. A classification scheme for the revolute-spherical-revolute class of mechanism is introduced, which uses dominant geometric features to group designs into 8 different sub-classes. The forward kinematics problem is discussed: given a set of independently controllable input variables, solve for the relative position and orientation between the two platforms. For the variable-geometry truss, the controllable input variables are assumed to be the linear (prismatic) joints. For the double tripod, the controllable input variables are the three revolute joints adjacent to the base (proximal) platform. Multiple solutions are presented to the forward kinematics problem, indicating that there are many different positions (assemblies) that the manipulator can assume with equivalent inputs. For the double tripod these solutions can be expressed as a 16th degree polynomial in one unknown, while for the variable-geometry truss there exist two 16th degree polynomials, giving rise to 256 solutions. For special cases of the double tripod, the forward kinematics problem is shown to have a closed-form solution. Numerical examples are presented for the solution to the forward kinematics. A double tripod is presented that admits 16 unique and real forward kinematics solutions. Another example for a variable-geometry truss is given that possesses 64 real solutions: 8 for each 16th order polynomial. The inverse kinematics problem is also discussed: given the relative position of the hand (end-effector), which is rigidly attached to one platform, solve for the independently controlled joint variables. Iterative solutions are proposed for both the variable-geometry truss and the double tripod. For special cases of both mechanisms, closed-form solutions are given. The practical problems of designing, building, and controlling a double-tripod manipulator are addressed. The resulting manipulator is a first-of-its-kind prototype of a tapered (asymmetric) double-tripod manipulator. Real-time forward and inverse kinematics algorithms on an industrial robot controller are presented. The resulting performance of the prototype is impressive, since it was able to achieve a maximum tool-tip speed of 4064 mm/s, a maximum acceleration of 5 g, and a cycle time of 1.2 seconds for a typical pick-and-place pattern.

  16. [Baseline correction of spectrum for the inversion of chlorophyll-a concentration in the turbidity water].

    PubMed

    Wei, Yu-Chun; Wang, Guo-Xiang; Cheng, Chun-Mei; Zhang, Jing; Sun, Xiao-Peng

    2012-09-01

    Suspended particulate material is the main factor affecting remote sensing inversion of the chlorophyll-a concentration (Chla) in turbid water. According to the optical properties of suspended material in water, the present paper proposes a linear baseline correction method to weaken the suspended particle contribution in the spectrum above the turbid water surface. The linear baseline is defined as the line connecting the reflectance at 450 and 750 nm, and baseline correction consists of subtracting this baseline from the spectrum reflectance. Analysis of in situ field data from Meiliangwan, Taihu Lake, in April 2011 and March 2010 shows that linear baseline correction of the spectrum can improve the inversion precision of Chla and produce better model diagnostics. For the March 2010 data, the RMSE of the band ratio model built from the original spectrum is 4.11 mg·m^-3, while that built from the baseline-corrected spectrum is 3.58 mg·m^-3. Meanwhile, the residual distribution and homoscedasticity of the model built from the baseline-corrected spectrum are clearly improved. The model RMSE for April 2011 shows a similar result. The authors suggest using linear baseline correction as the spectrum processing method to improve Chla inversion accuracy in turbid water without algal bloom.
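    The baseline correction defined in this abstract is simple enough to state directly in code: subtract the straight line joining the reflectance values at 450 nm and 750 nm. The function below is a minimal sketch following that definition; interpolation is used in case the wavelength grid does not contain the anchor wavelengths exactly.
```python
import numpy as np

def linear_baseline_correction(wavelength, reflectance, lo=450.0, hi=750.0):
    """Subtract the straight line connecting the reflectance at 450 and 750 nm."""
    r_lo = np.interp(lo, wavelength, reflectance)
    r_hi = np.interp(hi, wavelength, reflectance)
    baseline = r_lo + (wavelength - lo) * (r_hi - r_lo) / (hi - lo)
    return reflectance - baseline
```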

  17. Pumping Test Determination of Unsaturated Aquifer Properties

    NASA Astrophysics Data System (ADS)

    Mishra, P. K.; Neuman, S. P.

    2008-12-01

    Tartakovsky and Neuman [2007] presented a new analytical solution for flow to a partially penetrating well pumping at a constant rate from a compressible unconfined aquifer, considering the unsaturated zone. In their solution, three-dimensional, axially symmetric unsaturated flow is described by a linearized version of Richards' equation in which both hydraulic conductivity and water content vary exponentially with incremental capillary pressure head relative to its air entry value, the latter defining the interface between the saturated and unsaturated zones. Both exponential functions are characterized by a common exponent k having the dimension of inverse length, or equivalently a dimensionless exponent k_d = kb, where b is the initial saturated thickness. The authors used their solution to analyze drawdown data from a pumping test conducted by Moench et al. [2001] in a Glacial Outwash Deposit at Cape Cod, Massachusetts. Their analysis yielded estimates of horizontal and vertical saturated hydraulic conductivities, specific storage, specific yield and k. Recognizing that hydraulic conductivity and water content seldom vary identically with incremental capillary pressure head, as assumed by Tartakovsky and Neuman [2007], we note that k is at best an effective rather than a directly measurable soil parameter. We therefore ask to what extent interpretation of a pumping test based on the Tartakovsky-Neuman solution allows estimating unsaturated aquifer parameters as described by more common constitutive water retention and relative hydraulic conductivity models such as those of Brooks and Corey [1964] or van Genuchten [1980] and Mualem [1976a]. We address this question by showing how k may be used to estimate the capillary air entry pressure head and the parameters of such constitutive models directly, without a need for inverse unsaturated numerical simulations of the kind described by Moench [2003]. To assess the validity of such direct estimates we use maximum likelihood-based model selection criteria to compare the abilities of numerical models based on the STOMP code to reproduce observed drawdowns during the test when saturated and unsaturated aquifer parameters are estimated either in the above manner or by means of the inverse code PEST.

  18. Robust inverse kinematics using damped least squares with dynamic weighting

    NASA Technical Reports Server (NTRS)

    Schinstock, D. E.; Faddis, T. N.; Greenway, R. B.

    1994-01-01

    This paper presents a general method for calculating the inverse kinematics with singularity and joint limit robustness for both redundant and non-redundant serial-link manipulators. A damped least squares inverse of the Jacobian is used with dynamic weighting matrices in approximating the solution. This reduces specific joint differential vectors. The algorithm gives an exact solution away from the singularities and joint limits, and an approximate solution at or near the singularities and/or joint limits. The procedure is implemented here for a six d.o.f. teleoperator, and a well-behaved slave manipulator resulted under teleoperational control.
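    The damped least squares inverse of the Jacobian named in this abstract has a standard closed form; a minimal sketch with an optional joint weighting matrix is given below. The weighting here is static and purely illustrative (the paper's contribution is how the weights are varied dynamically near joint limits), and the names are ours.
```python
import numpy as np

def dls_joint_update(J, dx, damping=0.05, W=None):
    """Damped least-squares inverse kinematics step.

    Solves the weighted damped problem, giving
    dq = W^-1 J^T (J W^-1 J^T + damping^2 I)^-1 dx,
    which reduces to the standard DLS pseudo-inverse when W = I.
    """
    m, n = J.shape
    if W is None:
        W = np.eye(n)                       # joint-space weighting (static here)
    Winv = np.linalg.inv(W)
    JW = J @ Winv
    dq = Winv @ J.T @ np.linalg.solve(JW @ J.T + damping**2 * np.eye(m), dx)
    return dq
```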

  19. Inverse opal photonic crystal of chalcogenide glass by solution processing.

    PubMed

    Kohoutek, Tomas; Orava, Jiri; Sawada, Tsutomu; Fudouzi, Hiroshi

    2011-01-15

    Chalcogenide opal and inverse opal photonic crystals were successfully fabricated by a low-cost, low-temperature solution-based process which is well developed in polymer film processing. Highly ordered silica colloidal crystal films were successfully infilled with a nano-colloidal solution of the high-refractive-index As(30)S(70) chalcogenide glass using the spin-coating method. The silica/As-S opal film was etched in HF acid to dissolve the silica opal template and fabricate the inverse opal As-S photonic crystal. Both the infilled silica/As-S opal film (Δn ~ 0.84 near λ=770 nm) and the inverse opal As-S photonic structure (Δn ~ 1.26 near λ=660 nm) had significantly enhanced reflectivity values and wider photonic bandgaps in comparison with the silica opal film template (Δn ~ 0.434 near λ=600 nm). The key aspects of opal film preparation by spin-coating of a nano-colloidal chalcogenide glass solution are discussed. The solution-fabricated "inorganic polymer" opal and inverse opal structures exceed the photonic properties of silica or any organic polymer opal film. The fabricated photonic structures are proposed for designing novel flexible colloidal crystal laser devices, photonic waveguides and chemical sensors. Copyright © 2010 Elsevier Inc. All rights reserved.

  20. The inverse problem of refraction travel times, part II: Quantifying refraction nonuniqueness using a three-layer model

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.

    2005-01-01

    This paper is the second of a set of two papers in which we study the inverse refraction problem. The first paper, "Types of Geophysical Nonuniqueness through Minimization," studies and classifies the types of nonuniqueness that exist when solving inverse problems depending on the participation of a priori information required to obtain reliable solutions of inverse geophysical problems. In view of the classification developed, in this paper we study the type of nonuniqueness associated with the inverse refraction problem. An approach for obtaining a realistic solution to the inverse refraction problem is offered in a third paper that is in preparation. The nonuniqueness of the inverse refraction problem is examined by using a simple three-layer model. Like many other inverse geophysical problems, the inverse refraction problem does not have a unique solution. Conventionally, nonuniqueness is considered to be a result of insufficient data and/or error in the data, for any fixed number of model parameters. This study illustrates that even for overdetermined and error free data, nonlinear inverse refraction problems exhibit exact-data nonuniqueness, which further complicates the problem of nonuniqueness. By evaluating the nonuniqueness of the inverse refraction problem, this paper targets the improvement of refraction inversion algorithms, and as a result, the achievement of more realistic solutions. The nonuniqueness of the inverse refraction problem is examined initially by using a simple three-layer model. The observations and conclusions of the three-layer model nonuniqueness study are used to evaluate the nonuniqueness of more complicated n-layer models and multi-parameter cell models such as in refraction tomography. For any fixed number of model parameters, the inverse refraction problem exhibits continuous ranges of exact-data nonuniqueness. Such an unfavorable type of nonuniqueness can be uniquely solved only by providing abundant a priori information. Insufficient a priori information during the inversion is the reason why refraction methods often may not produce desired results or even fail. This work also demonstrates that the application of the smoothing constraints, typical when solving ill-posed inverse problems, has a dual and contradictory role when applied to the ill-posed inverse problem of refraction travel times. This observation indicates that smoothing constraints may play such a two-fold role when applied to other inverse problems. Other factors that contribute to inverse-refraction-problem nonuniqueness are also considered, including indeterminacy, statistical data-error distribution, numerical error and instability, finite data, and model parameters. © Birkhäuser Verlag, Basel, 2005.

  1. Physically-Retrieving Cloud and Thermodynamic Parameters from Ultraspectral IR Measurements

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Smith, William L., Sr.; Liu, Xu; Larar, Allen M.; Mango, Stephen A.; Huang, Hung-Lung

    2007-01-01

    A physical inversion scheme has been developed, dealing with cloudy as well as cloud-free radiance observed with ultraspectral infrared sounders, to simultaneously retrieve surface, atmospheric thermodynamic, and cloud microphysical parameters. A fast radiative transfer model, which applies to the clouded atmosphere, is used for atmospheric profile and cloud parameter retrieval. A one-dimensional (1-d) variational multi-variable inversion solution is used to improve an iterative background state defined by an eigenvector-regression-retrieval. The solution is iterated in order to account for non-linearity in the 1-d variational solution. It is shown that relatively accurate temperature and moisture retrievals can be achieved below optically thin clouds. For optically thick clouds, accurate temperature and moisture profiles down to cloud top level are obtained. For both optically thin and thick cloud situations, the cloud top height can be retrieved with relatively high accuracy (i.e., error < 1 km). NPOESS Airborne Sounder Testbed Interferometer (NAST-I) retrievals from the Atlantic-THORPEX Regional Campaign are compared with coincident observations obtained from dropsondes and the nadir-pointing Cloud Physics Lidar (CPL). This work was motivated by the need to obtain solutions for atmospheric soundings from infrared radiances observed for every individual field of view, regardless of cloud cover, from future ultraspectral geostationary satellite sounding instruments, such as the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) and the Hyperspectral Environmental Suite (HES). However, this retrieval approach can also be applied to the ultraspectral sounding instruments to fly on Polar satellites, such as the Infrared Atmospheric Sounding Interferometer (IASI) on the European MetOp satellite, the Cross-track Infrared Sounder (CrIS) on the NPOESS Preparatory Project and the following NPOESS series of satellites.

  2. The inverse electroencephalography pipeline

    NASA Astrophysics Data System (ADS)

    Weinstein, David Michael

    The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.

  3. Parana Basin Structure from Multi-Objective Inversion of Surface Wave and Receiver Function by Competent Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    An, M.; Assumpcao, M.

    2003-12-01

    The joint inversion of receiver function and surface wave data is an effective way to diminish the influence of the strong tradeoff among parameters and the different sensitivities to the model parameters in their respective inversions, but the inversion problem becomes more complex. Multi-objective problems can be much more complicated than single-objective inversions in model selection and optimization. If several objectives are involved and conflicting, models can be ordered only partially. In this case, Pareto-optimal preference should be used to select solutions. On the other hand, an inversion that retrieves only a few optimal solutions cannot deal properly with the strong tradeoff between parameters, the uncertainties in the observations, the geophysical complexities and even the incompetency of the inversion technique. The effective way is to retrieve the geophysical information statistically from many acceptable solutions, which requires more competent global algorithms. Competent genetic algorithms recently proposed are far superior to the conventional genetic algorithm and can solve hard problems quickly, reliably and accurately. In this work we used one of the competent genetic algorithms, the Bayesian Optimization Algorithm, as the main inversion procedure. This algorithm uses Bayesian networks to draw out inherited information and can use Pareto-optimal preference in the inversion. With this algorithm, the lithospheric structure of the Paraná Basin is inverted to fit both the observations of inter-station surface wave dispersion and receiver functions.

  4. A reversible-jump Markov chain Monte Carlo algorithm for 1D inversion of magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Mandolesi, Eric; Ogaya, Xenia; Campanyà, Joan; Piana Agostinetti, Nicola

    2018-04-01

    This paper presents a new computer code developed to solve the 1D magnetotelluric (MT) inverse problem using a Bayesian trans-dimensional Markov chain Monte Carlo algorithm. MT data are sensitive to the depth-distribution of rock electric conductivity (or its reciprocal, resistivity). The solution provided is a probability distribution - the so-called posterior probability distribution (PPD) for the conductivity at depth, together with the PPD of the interface depths. The PPD is sampled via a reversible-jump Markov chain Monte Carlo (rjMcMC) algorithm, using a modified Metropolis-Hastings (MH) rule to accept or discard candidate models along the chains. As the optimal parameterization for the inversion process is generally unknown, a trans-dimensional approach is used to allow the dataset itself to indicate the most probable number of parameters needed to sample the PPD. The algorithm is tested against two simulated datasets and a set of MT data acquired in the Clare Basin (County Clare, Ireland). For the simulated datasets the correct number of conductive layers at depth and the associated electrical conductivity values are retrieved, together with reasonable estimates of the uncertainties on the investigated parameters. Results from the inversion of field measurements are compared with results obtained using a deterministic method and with well-log data from a nearby borehole. The PPD is in good agreement with the well-log data, showing as a main structure a highly conductive layer associated with the Clare Shale formation. In this study, we demonstrate that our new code goes beyond algorithms developed using a linear inversion scheme, as it can be used: (1) to bypass the subjective choices in the 1D parameterization, i.e. the number of horizontal layers, and (2) to estimate realistic uncertainties on the retrieved parameters. The algorithm is implemented using a simple MPI approach, where independent chains run on isolated CPUs, to take full advantage of parallel computer architectures. In the case of a large number of data, a master/slave approach can be used, where the master CPU samples the parameter space and the slave CPUs compute forward solutions.
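    The Metropolis-Hastings acceptance rule at the heart of such samplers can be sketched for a fixed-dimension move as below. The `log_post` function stands in for the user's 1D MT forward solver plus prior (an assumption on our part); the trans-dimensional birth/death moves of the paper additionally require the reversible-jump proposal and prior terms, which this sketch omits.
```python
import numpy as np

def mh_step(model, log_post, proposal_scale, rng):
    """One fixed-dimension Metropolis-Hastings step on, e.g., log10 resistivities.

    model: current parameter vector; log_post(model): log posterior density
    (up to a constant); proposal_scale: std of the Gaussian random-walk proposal.
    Returns the (possibly unchanged) model and an acceptance flag.
    """
    candidate = model + proposal_scale * rng.standard_normal(model.shape)
    log_alpha = log_post(candidate) - log_post(model)   # symmetric proposal
    if np.log(rng.uniform()) < log_alpha:
        return candidate, True
    return model, False

# Usage: rng = np.random.default_rng(0); model, ok = mh_step(model, log_post, 0.1, rng)
```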

  5. Hybrid Weighted Minimum Norm Method A new method based LORETA to solve EEG inverse problem.

    PubMed

    Song, C; Zhuang, T; Wu, Q

    2005-01-01

    This paper presents a new method to solve the EEG inverse problem. It is based on the following physiological characteristics of neural electrical activity sources: first, neighboring neurons tend to be active synchronously; second, the distribution of the source space is sparse; third, the activity of the sources is highly localized. We take this prior knowledge as a prerequisite for developing the EEG inverse solution, without assuming other characteristics of the inverse solution, in order to realize the most common form of 3D EEG reconstruction map. The proposed algorithm takes advantage of LORETA's low-resolution method, which emphasizes localization, and FOCUSS's high-resolution method, which emphasizes separability. The method remains within the framework of the weighted minimum norm method. The key step is to construct a weighting matrix that draws on the existing smoothness operator, a competition mechanism and a learning algorithm. The basic procedure is to obtain an initial estimate of the solution, construct a new estimate using information from the initial solution, and repeat this process until the solutions from the last two estimation steps remain unchanged.
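    The weighted minimum norm estimate that this abstract builds on has a standard closed form; a minimal sketch is given below, with a one-line comment indicating a FOCUSS-style reweighting pass of the kind the iterative procedure above describes. The function names, regularization parameter, and reweighting rule are ours, shown only to make the framework concrete.
```python
import numpy as np

def weighted_minimum_norm(L, d, W, reg=1e-2):
    """Weighted minimum-norm estimate of source amplitudes.

    L: lead field (n_sensors x n_sources), d: measurements, W: source weighting.
    Minimizes ||W^(1/2) j||^2 subject to L j ~ d, giving
    j = W^-1 L^T (L W^-1 L^T + reg I)^-1 d.
    """
    Winv = np.linalg.inv(W)
    G = L @ Winv @ L.T + reg * np.eye(L.shape[0])
    return Winv @ L.T @ np.linalg.solve(G, d)

# FOCUSS-like reweighting pass (illustrative): W_new = np.diag(1.0 / (np.abs(j) + 1e-12))
```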

  6. Relative sensitivity of depth discrimination for ankle inversion and plantar flexion movements.

    PubMed

    Black, Georgia; Waddington, Gordon; Adams, Roger

    2014-02-01

    25 participants (20 women, 5 men) were tested for sensitivity in discrimination between sets of six movements centered on 8 degrees, 11 degrees, and 14 degrees, and separated by 0.3 degrees. Both inversion and plantar flexion movements were tested. Discrimination of the extent of inversion movement was observed to decline linearly with increasing depth; however, for plantar flexion, the discrimination function for movement extent was found to be non-linear. The relatively better discrimination of plantar flexion movements than inversion movements at around 11 degrees from horizontal is interpreted as an effect arising from differential amounts of practice through use, because this position is associated with the plantar flexion movement made in normal walking. The fact that plantar flexion movements are discriminated better than inversion at one region but not others argues against accounts of superior proprioceptive sensitivity for plantar flexion compared to inversion that are based on general properties of plantar flexion such as the number of muscle fibres on stretch.

  7. Linear Approximation to Optimal Control Allocation for Rocket Nozzles with Elliptical Constraints

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.; Wall, John W.

    2011-01-01

    In this paper we present a straightforward technique for assessing and realizing the maximum control moment effectiveness for a launch vehicle with multiple constrained rocket nozzles, where elliptical deflection limits in gimbal axes are expressed as an ensemble of independent quadratic constraints. A direct method of determining an approximating ellipsoid that inscribes the set of attainable angular accelerations is derived. In the case of a parameterized linear generalized inverse, the geometry of the attainable set is computationally expensive to obtain but can be approximated to a high degree of accuracy with the proposed method. A linear inverse can then be optimized to maximize the volume of the true attainable set by maximizing the volume of the approximating ellipsoid. The use of a linear inverse does not preclude the use of linear methods for stability analysis and control design, preferred in practice for assessing the stability characteristics of the inertial and servoelastic coupling appearing in large boosters. The present techniques are demonstrated via application to the control allocation scheme for a concept heavy-lift launch vehicle.
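
    The following small Python sketch (an illustration under invented numbers, not the paper's algorithm) shows the basic allocation step: a pseudoinverse-based linear generalized inverse maps a commanded angular acceleration to gimbal deflections, which are then uniformly scaled so that each nozzle remains inside its elliptical deflection limit.

      # Pseudoinverse allocation followed by scaling into per-nozzle deflection ellipses.
      import numpy as np

      B = np.array([[ 1.0, 0.2,  0.9, -0.1],    # maps 4 gimbal angles (2 nozzles x 2 axes)
                    [ 0.1, 1.1, -0.2,  1.0],    # to 3 body-axis angular accelerations
                    [ 0.8, 0.3, -0.7,  0.4]])
      a_p, a_y = np.radians(6.0), np.radians(4.0)     # elliptical limits per nozzle

      acc_cmd = np.array([0.05, -0.02, 0.03])         # commanded angular acceleration
      delta = np.linalg.pinv(B) @ acc_cmd             # minimum-norm (pseudoinverse) allocation

      # Uniformly scale the command so each nozzle satisfies
      # (dp/a_p)^2 + (dy/a_y)^2 <= 1.
      scale = 1.0
      for i in range(0, delta.size, 2):
          r = (delta[i] / a_p) ** 2 + (delta[i + 1] / a_y) ** 2
          scale = min(scale, 1.0 / np.sqrt(r)) if r > 1.0 else scale
      delta_limited = scale * delta
      print(delta, delta_limited)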

  8. Modern Workflow Full Waveform Inversion Applied to North America and the Northern Atlantic

    NASA Astrophysics Data System (ADS)

    Krischer, Lion; Fichtner, Andreas; Igel, Heiner

    2015-04-01

    We present the current state of a new seismic tomography model obtained using full waveform inversion of the crustal and upper mantle structure beneath North America and the Northern Atlantic, including the westernmost part of Europe. Parts of the eastern portion of the initial model consist of previous models by Fichtner et al. (2013) and Rickers et al. (2013). The final results of this study will contribute to the 'Comprehensive Earth Model' being developed by the Computational Seismology group at ETH Zurich. Significant challenges include the size of the domain, the uneven event and station coverage, and the strong east-west alignment of seismic ray paths across the North Atlantic. We use as much data as feasible, resulting in several thousand recordings per event depending on the receivers deployed at the earthquakes' origin times. To manage such projects in a reproducible and collaborative manner, we, as tomographers, should abandon ad-hoc scripts and one-time programs, and adopt sustainable and reusable solutions. Therefore we developed the LArge-scale Seismic Inversion Framework (LASIF - http://lasif.net), an open-source toolbox for managing seismic data in the context of non-linear iterative inversions that greatly reduces the time to research. Information on the applied processing, modelling, iterative model updating, what happened during each iteration, and so on is systematically archived. This results in a provenance record of the final model which in the end significantly enhances the reproducibility of iterative inversions. Additionally, tools for automated data download across different data centers, window selection, misfit measurements, parallel data processing, and input file generation for various forward solvers are provided.

  9. Capillary break-up, gelation and extensional rheology of hydrophobically modified cellulose ethers

    NASA Astrophysics Data System (ADS)

    Sharma, Vivek; Haward, Simon; Pessinet, Olivia; Soderlund, Asa; Threlfall-Holmes, Phil; McKinley, Gareth

    2012-02-01

    Cellulose derivatives containing associating hydrophobic groups along their hydrophilic polysaccharide backbone are used extensively in formulations for inks, water-borne paints, food, nasal sprays, cosmetics, insecticides, fertilizers and bio-assays to control the rheology and processing behavior of multi-component dispersions. These complex dispersions are processed and used over a broad range of shear and extensional rates. The presence of hydrophobic stickers influences the linear and nonlinear rheology of cellulose ether solutions. In this talk, we systematically contrast the difference in the shear and extensional rheology of a cellulose ether, ethyl-hydroxyethyl-cellulose (EHEC), and its hydrophobically-modified analog (HMEHEC) using microfluidic shear rheometry at deformation rates up to 10^6 inverse seconds, cross-slot flow extensional rheometry, and capillary break-up during jetting as a rheometric technique. Additionally, we provide a constitutive model based on fractional calculus to describe the physical gelation in HMEHEC solutions.

  10. Improved L-BFGS diagonal preconditioners for a large-scale 4D-Var inversion system: application to CO2 flux constraints and analysis error calculation

    NASA Astrophysics Data System (ADS)

    Bousserez, Nicolas; Henze, Daven; Bowman, Kevin; Liu, Junjie; Jones, Dylan; Keller, Martin; Deng, Feng

    2013-04-01

    This work presents improved analysis error estimates for 4D-Var systems. From operational NWP models to top-down constraints on trace gas emissions, many of today's data assimilation and inversion systems in atmospheric science rely on variational approaches. This success is due to both the mathematical clarity of these formulations and the availability of computationally efficient minimization algorithms. However, unlike Kalman Filter-based algorithms, these methods do not provide an estimate of the analysis or forecast error covariance matrices, these error statistics being propagated only implicitly by the system. From both a practical (cycling assimilation) and scientific perspective, assessing uncertainties in the solution of the variational problem is critical. For large-scale linear systems, deterministic or randomization approaches can be considered based on the equivalence between the inverse Hessian of the cost function and the covariance matrix of analysis error. For perfectly quadratic systems, like incremental 4D-Var, Lanczos/Conjugate-Gradient algorithms have proven to be most efficient in generating low-rank approximations of the Hessian matrix during the minimization. For weakly non-linear systems though, the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS), a quasi-Newton descent algorithm, is usually considered the best method for the minimization. Suitable for large-scale optimization, this method allows one to generate an approximation to the inverse Hessian using the latest m vector/gradient pairs generated during the minimization, m depending upon the available core memory. At each iteration, an initial low-rank approximation to the inverse Hessian has to be provided, which is called preconditioning. The ability of the preconditioner to retain useful information from previous iterations largely determines the efficiency of the algorithm. Here we assess the performance of different preconditioners to estimate the inverse Hessian of a large-scale 4D-Var system. The impact of using the diagonal preconditioners proposed by Gilbert and Le Maréchal (1989) instead of the usual Oren-Spedicato scalar will be first presented. We will also introduce new hybrid methods that combine randomization estimates of the analysis error variance with L-BFGS diagonal updates to improve the inverse Hessian approximation. Results from these new algorithms will be evaluated against standard large ensemble Monte-Carlo simulations. The methods explored here are applied to the problem of inferring global atmospheric CO2 fluxes using remote sensing observations, and are intended to be integrated with the future NASA Carbon Monitoring System.
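
    For readers unfamiliar with where the diagonal preconditioner enters, the sketch below implements the standard L-BFGS two-loop recursion in Python; the diagonal array H0_diag plays the role of the initial inverse-Hessian approximation discussed in the record, and the small demo values are arbitrary.

      # L-BFGS two-loop recursion: s_list/y_list hold the latest m pairs of
      # parameter and gradient differences; H0_diag is the diagonal initial
      # inverse-Hessian approximation (the "preconditioner").
      import numpy as np

      def lbfgs_direction(grad, s_list, y_list, H0_diag):
          q = grad.astype(float).copy()
          alphas = []
          for s, y in zip(reversed(s_list), reversed(y_list)):   # newest to oldest
              rho = 1.0 / (y @ s)
              a = rho * (s @ q)
              q -= a * y
              alphas.append((rho, a))
          r = H0_diag * q                        # apply the diagonal preconditioner
          for (rho, a), (s, y) in zip(reversed(alphas), zip(s_list, y_list)):
              b = rho * (y @ r)
              r += (a - b) * s                   # oldest to newest
          return -r                              # quasi-Newton descent direction

      # With no stored pairs the direction reduces to -H0 * gradient.
      g = np.array([2.0, -3.0, 5.0])
      print(lbfgs_direction(g, [], [], H0_diag=np.full(3, 0.5)))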

  11. Isotropic-resolution linear-array-based photoacoustic computed tomography through inverse Radon transform

    NASA Astrophysics Data System (ADS)

    Li, Guo; Xia, Jun; Li, Lei; Wang, Lidai; Wang, Lihong V.

    2015-03-01

    Linear transducer arrays are readily available for ultrasonic detection in photoacoustic computed tomography. They offer low cost, hand-held convenience, and conventional ultrasonic imaging. However, the elevational resolution of linear transducer arrays, which is usually determined by the weak focus of the cylindrical acoustic lens, is about one order of magnitude worse than the in-plane axial and lateral spatial resolutions. Therefore, conventional linear scanning along the elevational direction cannot provide high-quality three-dimensional photoacoustic images due to the anisotropic spatial resolutions. Here we propose an innovative method to achieve isotropic resolutions for three-dimensional photoacoustic images through combined linear and rotational scanning. In each scan step, we first elevationally scan the linear transducer array, and then rotate the linear transducer array along its center in small steps, and scan again until 180 degrees have been covered. To reconstruct isotropic three-dimensional images from the multiple-directional scanning dataset, we use the standard inverse Radon transform originating from X-ray CT. We acquired a three-dimensional microsphere phantom image through the inverse Radon transform method and compared it with a single-elevational-scan three-dimensional image. The comparison shows that our method improves the elevational resolution by up to one order of magnitude, approaching the in-plane lateral-direction resolution. In vivo rat images were also acquired.
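
    The reconstruction step named in the record, the standard inverse Radon transform, is available in common libraries; the short Python sketch below (using scikit-image, an assumption made for this example, and a toy phantom rather than photoacoustic projections) shows forward projection over 180 degrees followed by filtered back-projection.

      # Project a toy phantom over 180 degrees and reconstruct it with the
      # inverse Radon transform (filtered back-projection).
      import numpy as np
      from skimage.transform import radon, iradon

      phantom = np.zeros((128, 128))
      phantom[40:50, 60:70] = 1.0                      # toy absorber

      theta = np.linspace(0.0, 180.0, 180, endpoint=False)
      sinogram = radon(phantom, theta=theta)           # multi-angle projections
      reconstruction = iradon(sinogram, theta=theta)   # inverse Radon transform
      print(float(np.abs(reconstruction - phantom).mean()))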

  12. Performance evaluation of the inverse dynamics method for optimal spacecraft reorientation

    NASA Astrophysics Data System (ADS)

    Ventura, Jacopo; Romano, Marcello; Walter, Ulrich

    2015-05-01

    This paper investigates the application of the inverse dynamics in the virtual domain method to Euler angles, quaternions, and modified Rodrigues parameters for rapid optimal attitude trajectory generation for spacecraft reorientation maneuvers. The impact of the virtual domain and attitude representation is numerically investigated for both minimum time and minimum energy problems. Owing to the nature of the inverse dynamics method, it yields sub-optimal solutions for minimum time problems. Furthermore, the virtual domain improves the optimality of the solution, but at the cost of more computational time. The attitude representation also affects solution quality and computational speed. For minimum energy problems, the optimal solution can be obtained without the virtual domain with any considered attitude representation.

  13. Neural-Based Compensation of Nonlinearities in an Airplane Longitudinal Model with Dynamic-Inversion Control

    PubMed Central

    Li, YuHui; Jin, FeiTeng

    2017-01-01

    The inversion design approach is a very useful tool for achieving decoupling control of complex multiple-input multiple-output nonlinear systems, such as airplane and spacecraft models. In this work, the flight control law is proposed using the neural-based inversion design method associated with nonlinear compensation for a general longitudinal model of the airplane. First, the nonlinear mathematical model is converted to an equivalent linear model based on feedback linearization theory. Then, the flight control law integrated with this inversion model is developed to stabilize the nonlinear system and relieve the coupling effect. Afterwards, the inversion control combined with the neural network and the nonlinear portion is presented to improve the transient performance and attenuate the uncertain effects of both external disturbances and model errors. Finally, the simulation results demonstrate the effectiveness of this controller. PMID:29410680

  14. Peeling linear inversion of upper mantle velocity structure with receiver functions

    NASA Astrophysics Data System (ADS)

    Shen, Xuzhang; Zhou, Huilan

    2012-02-01

    A peeling linear inversion method is presented to study the upper mantle (from the Moho to 800 km depth) velocity structure with receiver functions. The influence of errors in the crustal and upper mantle velocity ratio on the inversion results is analyzed, and three valid measures are taken to reduce it. The method is tested with the IASP91 and PREM models, and the upper mantle structures beneath the stations GTA, LZH, and AXX in northwestern China are then inverted. The results indicate that this inversion method is able to quantify upper mantle discontinuities, except for those between 3h_M (where h_M denotes the depth of the Moho) and 5h_M, owing to the interference of multiples from the Moho. Smoothing is used to suppress possible false discontinuities caused by the multiples and to ensure the stability of the inversion results, but detailed information in the depth range between 3h_M and 5h_M is sacrificed.

  15. Incorporation of diet information derived from Bayesian stable isotope mixing models into mass-balanced marine ecosystem models: A case study from the Marennes-Oleron Estuary, France

    EPA Science Inventory

    We investigated the use of output from Bayesian stable isotope mixing models as constraints for a linear inverse food web model of a temperate intertidal seagrass system in the Marennes-Oléron Bay, France. Linear inverse modeling (LIM) is a technique that estimates a complete net...

  16. Minimum mean squared error (MSE) adjustment and the optimal Tykhonov-Phillips regularization parameter via reproducing best invariant quadratic uniformly unbiased estimates (repro-BIQUUE)

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard

    2008-02-01

    In a linear Gauss-Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov-Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.

  17. Automated rapid finite fault inversion for megathrust earthquakes: Application to the Maule (2010), Iquique (2014) and Illapel (2015) great earthquakes

    NASA Astrophysics Data System (ADS)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2016-04-01

    Rapid estimation of the spatial and temporal rupture characteristics of large megathrust earthquakes by finite fault inversion is important for disaster mitigation. For example, estimates of the spatio-temporal evolution of rupture can be used to evaluate population exposure to tsunami waves and ground shaking soon after the event by providing more accurate predictions than possible with point source approximations. In addition, rapid inversion results can reveal seismic source complexity to guide additional, more detailed subsequent studies. This work develops a method to rapidly estimate the slip distribution of megathrust events while reducing subjective parameter choices by automation. The method is simple yet robust, and we show that it provides excellent preliminary rupture models as soon as 30 minutes after origin time for three great earthquakes in the South American subduction zone. This may change slightly for other regions depending on seismic station coverage, but the method can be applied to any subduction region. The inversion is based on W-phase data since these are rapidly and widely available and of low amplitude, which avoids clipping at close stations for large events. In addition, prior knowledge of the slab geometry (e.g. SLAB 1.0) is applied, and rapid W-phase point source information (time delay and centroid location) is used to constrain the fault geometry and extent. Since the linearization by the multiple time window (MTW) parametrization requires regularization, objective smoothing is achieved by the discrepancy principle in two fully automated steps: first, the residuals are estimated assuming unknown noise levels, and second, a subsequent solution is sought that fits the data to the noise level. The MTW scheme is applied with positivity constraints and a solution is obtained by an efficient non-negative least squares solver. Systematic application of the algorithm to the Maule (2010), Iquique (2014) and Illapel (2015) events illustrates that rapid finite fault inversion with teleseismic data is feasible and provides meaningful results. The results for the three events show excellent data fits and are consistent with other solutions, showing most of the slip occurring close to the trench for the Maule and Illapel events and some deeper slip for the Iquique event. Importantly, the Illapel source model predicts tsunami waveforms in close agreement with the observed waveforms. Finally, we develop a new Bayesian approach to approximate uncertainties as part of the rapid inversion scheme with positivity constraints. Uncertainties are estimated by approximating the posterior distribution as a multivariate log-normal distribution. While solving for the posterior adds some additional computational cost, we illustrate that uncertainty estimation is important for meaningful interpretation of finite fault models.
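
    Two ingredients named above, a non-negative least-squares solve and a smoothing weight chosen by the discrepancy principle, can be combined in a few lines; the Python sketch below is schematic (synthetic matrix, data and noise level), not the authors' W-phase code.

      # Non-negative least squares with a first-difference smoothing term whose
      # weight is tuned by bisection until the data misfit matches the noise level.
      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(2)
      G = rng.normal(size=(60, 20))
      m_true = np.maximum(rng.normal(size=20), 0.0)    # non-negative "slip"
      noise = 0.05 * rng.normal(size=60)
      d = G @ m_true + noise
      target_misfit = np.linalg.norm(noise)            # assumed known noise level

      Dmat = np.diff(np.eye(20), axis=0)               # smoothing operator

      def solve(lam):
          A = np.vstack([G, lam * Dmat])
          b = np.concatenate([d, np.zeros(Dmat.shape[0])])
          m, _ = nnls(A, b)
          return m, np.linalg.norm(G @ m - d)

      lo, hi = 1e-4, 1e2                               # bracket for the smoothing weight
      for _ in range(40):
          lam = np.sqrt(lo * hi)
          m, misfit = solve(lam)
          lo, hi = (lam, hi) if misfit < target_misfit else (lo, lam)
      print(lam, misfit, target_misfit)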

  18. Double Fourier Series Solution of Poisson’s Equation on a Sphere.

    DTIC Science & Technology

    1980-10-29

    algebraic systems, the solution of these systems, and the inverse transform of the solution in Fourier space back to physical space. [The remainder of this snippet is a partially garbled operation-count listing: multiply each count in steps (2) through (5) by K; inverse transform u_m, j = 1, ..., J - 1, to obtain u_k; set u(P) = u_0(P); roughly K(J - 1) log_2 K operations.]

  19. An equivalent unbalance identification method for the balancing of nonlinear squeeze-film damped rotordynamic systems

    NASA Astrophysics Data System (ADS)

    Torres Cedillo, Sergio G.; Bonello, Philip

    2016-01-01

    The high pressure (HP) rotor in an aero-engine assembly cannot be accessed under operational conditions because of the restricted space for instrumentation and high temperatures. This motivates the development of a non-invasive inverse problem approach for unbalance identification and balancing, requiring prior knowledge of the structure. Most such methods in the literature necessitate linear bearing models, making them unsuitable for aero-engine applications which use nonlinear squeeze-film damper (SFD) bearings. A previously proposed inverse method for nonlinear rotating systems was highly limited in its application (e.g. assumed circular centered SFD orbits). The methodology proposed in this paper overcomes such limitations. It uses the Receptance Harmonic Balance Method (RHBM) to generate the backward operator using measurements of the vibration at the engine casing, provided there is at least one linear connection between rotor and casing, apart from the nonlinear connections. A least-squares solution yields the equivalent unbalance distribution in prescribed planes of the rotor, which is consequently used to balance it. The method is validated on distinct rotordynamic systems using simulated casing vibration readings. The method is shown to provide effective balancing under hitherto unconsidered practical conditions. The repeatability of the method, as well as its robustness to noise, model uncertainty and balancing errors, are satisfactorily demonstrated and the limitations of the process discussed.

  20. Linear sampling method applied to non destructive testing of an elastic waveguide: theory, numerics and experiments

    NASA Astrophysics Data System (ADS)

    Baronian, Vahan; Bourgeois, Laurent; Chapuis, Bastien; Recoquillay, Arnaud

    2018-07-01

    This paper presents an application of the linear sampling method to ultrasonic non destructive testing of an elastic waveguide. In particular, the NDT context implies that both the solicitations and the measurements are located on the surface of the waveguide and are given in the time domain. Our strategy consists in using a modal formulation of the linear sampling method at multiple frequencies, such modal formulation being justified theoretically in Bourgeois et al (2011 Inverse Problems 27 055001) for rigid obstacles and in Bourgeois and Lunéville (2013 Inverse Problems 29 025017) for cracks. Our strategy requires the inversion of some emission and reception matrices which deserve some special attention due to potential ill-conditioning. The feasibility of our method is proved with the help of artificial data as well as real data.

  1. Source term identification in atmospheric modelling via sparse optimization

    NASA Astrophysics Data System (ADS)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

    Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, a distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this field is well developed, with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of identifying the time profile of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large number of zeros, giving rise to the concept of sparsity. In the paper, we summarize several optimization techniques which are used for finding sparse solutions and propose their modifications to handle selected constraints such as nonnegativity constraints and simple linear constraints, for example the minimal or maximal amount of total release. These techniques range from successive convex approximations to the solution of one nonconvex problem. On simple examples, we explain these techniques and compare them in terms of implementation simplicity, approximation capability and convergence properties. Finally, these methods will be applied to the European Tracer Experiment (ETEX) data and the results will be compared with state-of-the-art techniques such as regularized least squares or the Bayesian approach. The obtained results show the surprisingly good performance of these techniques. This research is supported by the EEA/Norwegian Financial Mechanism under project 7F14287 STRADI.
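
    A toy Python sketch of the constrained sparse recovery idea is given below: a proximal-gradient (iterative soft-thresholding) step combined with projection onto the non-negative orthant. The source-receptor matrix, noise and penalty weight are invented for the illustration and do not correspond to the ETEX application.

      # Sparse, non-negative source recovery: gradient step on the data misfit,
      # then soft-thresholding merged with projection onto x >= 0.
      import numpy as np

      rng = np.random.default_rng(3)
      n_obs, n_times = 40, 200
      A = np.abs(rng.normal(size=(n_obs, n_times)))    # toy source-receptor matrix
      x_true = np.zeros(n_times)
      x_true[[50, 51, 120]] = [3.0, 2.0, 5.0]          # short release episodes
      y = A @ x_true + 0.01 * rng.normal(size=n_obs)

      lam = 0.1                                        # sparsity weight
      step = 1.0 / np.linalg.norm(A, 2) ** 2           # gradient step size
      x = np.zeros(n_times)
      for _ in range(2000):
          grad = A.T @ (A @ x - y)
          x = np.maximum(x - step * (grad + lam), 0.0) # soft threshold + non-negativity
      print(np.flatnonzero(x > 0.1))                   # recovered release times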

  2. Real-Time Wing-Vortex and Pressure Distribution Estimation on Wings Via Displacements and Strains in Unsteady and Transitional Flight Conditions

    DTIC Science & Technology

    2016-09-07

    approach in co-simulation with fluid-dynamics solvers is used. An original variational formulation is developed for the inverse problem of...by the inverse solution meshing. The same approach is used to map the structural and fluid interface kinematics and loads during the fluid-structure...co-simulation. The inverse analysis is verified by reconstructing the deformed solution obtained with a corresponding direct formulation, based on

  3. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.

    2018-06-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov-chain-Monte-Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution; or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
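
    The projection step can be illustrated with a simplified Python sketch: given a dictionary of stored model-error realizations, an orthogonal basis is built from the K nearest entries and the part of the residual lying in that basis is removed. The dictionary and residual below are random stand-ins, and for brevity the neighbours are selected by distance to the residual rather than in model space.

      # K-nearest-neighbour, projection-based removal of a model-error component.
      import numpy as np

      rng = np.random.default_rng(4)
      n_data, n_dict, K = 50, 200, 8
      dictionary = rng.normal(size=(n_dict, n_data))   # stored model-error realizations
      residual = rng.normal(size=n_data)               # current data residual

      # K nearest dictionary entries (Euclidean distance to the current residual).
      dists = np.linalg.norm(dictionary - residual, axis=1)
      neighbors = dictionary[np.argsort(dists)[:K]]

      # Orthogonal basis of the neighbour span, then orthogonal projection.
      Q, _ = np.linalg.qr(neighbors.T)                 # n_data x K orthonormal basis
      residual_clean = residual - Q @ (Q.T @ residual) # remove model-error component
      print(np.linalg.norm(residual), np.linalg.norm(residual_clean))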

  4. Gravity field models from kinematic orbits of CHAMP, GRACE and GOCE satellites

    NASA Astrophysics Data System (ADS)

    Bezděk, Aleš; Sebera, Josef; Klokočník, Jaroslav; Kostelecký, Jan

    2014-02-01

    The aim of our work is to generate Earth's gravity field models from GPS positions of low Earth orbiters. Our inversion method is based on Newton's second law, which relates the observed acceleration of the satellite with forces acting on it. The observed acceleration is obtained as numerical second derivative of kinematic positions. Observation equations are formulated using the gradient of the spherical harmonic expansion of the geopotential. Other forces are either modelled (lunisolar perturbations, tides) or provided by onboard measurements (nongravitational perturbations). From this linear regression model the geopotential harmonic coefficients are obtained. To this basic scheme of the acceleration approach we added some original elements, which may be useful in other inversion techniques as well. We tried to develop simple, straightforward and still statistically correct model of observations. (i) The model is linear in the harmonic coefficients, no a priori gravity field model is needed, no regularization is applied. (ii) We use the generalized least squares to successfully mitigate the strong amplification of noise due to numerical second derivative. (iii) The number of other fitted parameters is very small, in fact we use only daily biases, thus we can monitor their behaviour. (iv) GPS positions have correlated errors. The sample autocorrelation function and especially the partial autocorrelation function indicate suitability of an autoregressive model to represent the correlation structure. The decorrelation of residuals improved the accuracy of harmonic coefficients by a factor of 2-3. (v) We found it better to compute separate solutions in the three local reference frame directions than to compute them together at the same time; having obtained separate solutions for along-track, cross-track and radial components, we combine them using the normal matrices. Relative contribution of the along-track component to the combined solution is 50 percent on average. (vi) The computations were performed on an ordinary PC up to maximum degree and order 120. We applied the presented method to orbits of CHAMP and GRACE spanning seven years (2003-2009) and to two months of GOCE (Nov/Dec 2009). The obtained long-term static gravity field models are of similar or better quality compared to other published solutions. We also tried to extract the time-variable gravity signal from CHAMP and GRACE orbits. The acquired average annual signal shows clearly the continental areas with important and known hydrological variations.
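
    Point (iv) above, the treatment of correlated observation errors, can be sketched in Python by whitening both sides of the regression with an AR(1) filter before an ordinary least-squares solve, which is one simple way to realize a generalized least-squares fit; the design matrix, AR(1) coefficient and noise level are synthetic stand-ins.

      # Generalized least squares via AR(1) whitening of correlated noise.
      import numpy as np

      rng = np.random.default_rng(5)
      n, p, phi = 500, 5, 0.8                     # phi: AR(1) coefficient of the noise
      A = rng.normal(size=(n, p))
      x_true = rng.normal(size=p)
      e = np.zeros(n)
      for t in range(1, n):                       # correlated observation noise
          e[t] = phi * e[t - 1] + rng.normal(scale=0.1)
      y = A @ x_true + e

      def whiten(v):                              # apply the AR(1) inverse filter
          w = v.copy()
          w[1:] = v[1:] - phi * v[:-1]
          return w

      Aw = np.apply_along_axis(whiten, 0, A)
      yw = whiten(y)
      x_gls, *_ = np.linalg.lstsq(Aw, yw, rcond=None)
      x_ols, *_ = np.linalg.lstsq(A, y, rcond=None)
      print(np.abs(x_gls - x_true).max(), np.abs(x_ols - x_true).max())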

  5. A Neural Network Aero Design System for Advanced Turbo-Engines

    NASA Technical Reports Server (NTRS)

    Sanz, Jose M.

    1999-01-01

    An inverse design method calculates the blade shape that produces a prescribed input pressure distribution. By controlling this input pressure distribution the aerodynamic design objectives can easily be met. Because of the intrinsic relationship between the pressure distribution and the airfoil's physical properties, a neural network can be trained to choose the optimal pressure distribution that would meet a set of physical requirements. Neural network systems have been attempted in the context of direct design methods. From properties ascribed to a set of blades, the neural network is trained to infer the properties of an 'interpolated' blade shape. The problem is that, especially in transonic regimes where we deal with intrinsically nonlinear and ill-posed problems, small perturbations of the blade shape can produce very large variations of the flow parameters. It is very unlikely that, under these circumstances, a neural network will be able to find the proper solution. The unique situation in the present method is that the neural network can be trained to extract the required input pressure distribution from a database of pressure distributions, while the inverse method will still compute the exact blade shape that corresponds to this 'interpolated' input pressure distribution. In other words, the interpolation process is transferred to a smoother problem, namely, finding what pressure distribution would produce the required flow conditions; once this is done, the inverse method computes the exact solution for this problem. The use of a neural network is, in this context, highly related to the use of proper optimization techniques. The optimization is used essentially as an automation procedure to force the input pressure distributions to achieve the required aero and structural design parameters. A multilayered feed-forward network with back-propagation is used to train the system for pattern association and classification.

  6. Computing the Moore-Penrose Inverse of a Matrix with a Computer Algebra System

    ERIC Educational Resources Information Center

    Schmidt, Karsten

    2008-01-01

    In this paper "Derive" functions are provided for the computation of the Moore-Penrose inverse of a matrix, as well as for solving systems of linear equations by means of the Moore-Penrose inverse. Making it possible to compute the Moore-Penrose inverse easily with one of the most commonly used Computer Algebra Systems--and to have the blueprint…
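
    The record's computations are done in Derive; as a language-neutral illustration, the same Moore-Penrose inverse and the associated least-squares solution of a linear system can be obtained in a few lines of Python with NumPy.

      # Moore-Penrose inverse and the least-squares solution it yields.
      import numpy as np

      A = np.array([[1.0, 2.0],
                    [2.0, 4.0],
                    [3.0, 5.0]])               # tall, nearly rank-deficient matrix
      b = np.array([1.0, 2.0, 2.0])

      A_pinv = np.linalg.pinv(A)               # Moore-Penrose inverse
      x = A_pinv @ b                           # least-squares solution of Ax = b
      print(A_pinv)
      print(x, np.linalg.norm(A @ x - b))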

  7. Including geological information in the inverse problem of palaeothermal reconstruction

    NASA Astrophysics Data System (ADS)

    Trautner, S.; Nielsen, S. B.

    2003-04-01

    A reliable reconstruction of sediment thermal history is of central importance to the assessment of hydrocarbon potential and the understanding of basin evolution. However, only rarely do sedimentation history and borehole data in the form of present-day temperatures and vitrinite reflectance constrain the past thermal evolution to a useful level of accuracy (Gallagher and Sambridge, 1992; Nielsen, 1998; Trautner and Nielsen, 2003). This is reflected in the inverse solutions to the problem of determining heat flow history from borehole data: the recent heat flow is constrained by data while older values are governed by the chosen a priori heat flow. In this paper we reduce this problem by including geological information in the inverse problem. Through a careful analysis of geological and geophysical data the timing of the tectonic processes, which may influence heat flow, can be inferred. The heat flow history is then parameterised to allow for the temporal variations characteristic of the different tectonic events. The inversion scheme applies a Markov chain Monte Carlo (MCMC) approach (Nielsen and Gallagher, 1999; Ferrero and Gallagher, 2002), which efficiently explores the model space and furthermore samples the posterior probability distribution of the model. The technique is demonstrated on wells in the northern North Sea with emphasis on the stretching event in the Late Jurassic. The wells are characterised by maximum sediment temperature at the present day, which is the worst case for resolution of the past thermal history because vitrinite reflectance is determined mainly by the maximum temperature. Including geological information significantly improves the thermal resolution. Ferrero, C. and Gallagher, K., 2002. Stochastic thermal history modelling. 1. Constraining heat flow histories and their uncertainty. Marine and Petroleum Geology, 19, 633-648. Gallagher, K. and Sambridge, M., 1992. The resolution of past heat flow in sedimentary basins from non-linear inversion of geochemical data: the smoothest model approach, with synthetic examples. Geophysical Journal International, 109, 78-95. Nielsen, S.B., 1998. Inversion and sensitivity analysis in basin modelling. Geoscience 98, Keele University, UK, Abstract Volume, 56. Nielsen, S.B. and Gallagher, K., 1999. Efficient sampling of 3-D basin modelling scenarios. Extended Abstracts Volume, 1999 AAPG International Conference & Exhibition, Birmingham, England, September 12-15, 1999, p. 369-372. Trautner, S. and Nielsen, S.B., 2003. 2-D inverse thermal modelling in the Norwegian shelf using Fast Approximate Forward (FAF) solutions. In R. Marzi and S. Duppenbecker (Eds.), Multi-Dimensional Basin Modeling, AAPG, in press.

  8. An efficient approach for inverse kinematics and redundancy resolution scheme of hyper-redundant manipulators

    NASA Astrophysics Data System (ADS)

    Chembuly, V. V. M. J. Satish; Voruganti, Hari Kumar

    2018-04-01

    Hyper-redundant manipulators have a larger number of degrees of freedom (DOF) than required to perform a given task. The additional DOF provide the flexibility to work in highly cluttered environments and in constrained workspaces. Inverse kinematics (IK) of hyper-redundant manipulators is complicated by the large number of DOF, and these manipulators have multiple IK solutions. The redundancy gives a choice of selecting the best solution out of multiple solutions based on certain criteria such as obstacle avoidance, singularity avoidance, joint limit avoidance and joint torque minimization. This paper focuses on the IK solution and redundancy resolution of hyper-redundant manipulators using a classical optimization approach. Joint positions are computed by optimizing various criteria for serial hyper-redundant manipulators while traversing different paths in the workspace. Several cases are addressed using this scheme to obtain the inverse kinematic solution while optimizing criteria such as obstacle avoidance and joint limit avoidance.
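
    A minimal Python sketch of the optimization-based redundancy resolution idea follows: a planar N-link arm reaches a target end-effector position (equality constraint) while minimizing a joint-limit-avoidance criterion. The link lengths, joint limits, target and solver choice are invented for the example, not taken from the paper.

      # Redundancy resolution by constrained optimization for a planar N-link arm.
      import numpy as np
      from scipy.optimize import minimize

      n_links, link_len = 8, 1.0
      lims = np.radians(90.0)                        # symmetric joint limits
      target = np.array([4.0, 3.0])                  # desired end-effector position

      def fkine(q):                                  # planar forward kinematics
          angles = np.cumsum(q)
          return np.array([link_len * np.cos(angles).sum(),
                           link_len * np.sin(angles).sum()])

      def joint_limit_cost(q):                       # stay close to mid-range (0 here)
          return np.sum((q / lims) ** 2)

      cons = {"type": "eq", "fun": lambda q: fkine(q) - target}
      q0 = np.full(n_links, 0.1)                     # initial joint guess
      res = minimize(joint_limit_cost, q0, constraints=[cons],
                     bounds=[(-lims, lims)] * n_links, method="SLSQP")
      print(res.success, fkine(res.x))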

  9. The point-spread function measure of resolution for the 3-D electrical resistivity experiment

    NASA Astrophysics Data System (ADS)

    Oldenborger, Greg A.; Routh, Partha S.

    2009-02-01

    The solution appraisal component of the inverse problem involves investigation of the relationship between our estimated model and the actual model. However, full appraisal is difficult for large 3-D problems such as electrical resistivity tomography (ERT). We tackle the appraisal problem for 3-D ERT via the point-spread functions (PSFs) of the linearized resolution matrix. The PSFs represent the impulse response of the inverse solution and quantify our parameter-specific resolving capability. We implement an iterative least-squares solution of the PSF for the ERT experiment, using on-the-fly calculation of the sensitivity via an adjoint integral equation with stored Green's functions and subgrid reduction. For a synthetic example, analysis of individual PSFs demonstrates the truly 3-D character of the resolution. The PSFs for the ERT experiment are Gaussian-like in shape, with directional asymmetry and significant off-diagonal features. Computation of attributes representative of the blurring and localization of the PSF reveal significant spatial dependence of the resolution with some correlation to the electrode infrastructure. Application to a time-lapse ground-water monitoring experiment demonstrates the utility of the PSF for assessing feature discrimination, predicting artefacts and identifying model dependence of resolution. For a judicious selection of model parameters, we analyse the PSFs and their attributes to quantify the case-specific localized resolving capability and its variability over regions of interest. We observe approximate interborehole resolving capability of less than 1-1.5m in the vertical direction and less than 1-2.5m in the horizontal direction. Resolving capability deteriorates significantly outside the electrode infrastructure.

  10. Measurement Uncertainty of Dew-Point Temperature in a Two-Pressure Humidity Generator

    NASA Astrophysics Data System (ADS)

    Martins, L. Lages; Ribeiro, A. Silva; Alves e Sousa, J.; Forbes, Alistair B.

    2012-09-01

    This article describes the measurement uncertainty evaluation of the dew-point temperature when using a two-pressure humidity generator as a reference standard. The estimation of the dew-point temperature involves the solution of a non-linear equation for which iterative solution techniques, such as the Newton-Raphson method, are required. Previous studies have already been carried out using the GUM method and the Monte Carlo method but have not discussed the impact of the approximate numerical method used to provide the temperature estimation. One of the aims of this article is to take this approximation into account. Following the guidelines presented in the GUM Supplement 1, two alternative approaches can be developed: the forward measurement uncertainty propagation by the Monte Carlo method when using the Newton-Raphson numerical procedure; and the inverse measurement uncertainty propagation by Bayesian inference, based on prior available information regarding the usual dispersion of values obtained by the calibration process. The measurement uncertainties obtained using these two methods can be compared with previous results. Other relevant issues concerning this research are the broad application to measurements that require hygrometric conditions obtained from two-pressure humidity generators and, also, the ability to provide a solution that can be applied to similar iterative models. The research also studied the factors influencing both the use of the Monte Carlo method (such as the seed value and the convergence parameter) and the inverse uncertainty propagation using Bayesian inference (such as the pre-assigned tolerance, prior estimate, and standard deviation) in terms of their accuracy and adequacy.
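
    The forward Monte Carlo propagation described above can be sketched generically in Python: each draw of the input quantities is pushed through a Newton-Raphson solve of a non-linear equation, and the spread of the resulting roots gives the standard uncertainty. The equation below is a placeholder, not the two-pressure generator model.

      # Monte Carlo propagation through a Newton-Raphson root solve.
      import numpy as np

      rng = np.random.default_rng(10)

      def f(t, a, b):                      # placeholder non-linear equation in t
          return np.exp(t / a) - b

      def dfdt(t, a, b):
          return np.exp(t / a) / a

      def newton(a, b, t0=1.0, tol=1e-12):
          t = t0
          for _ in range(50):
              dt = f(t, a, b) / dfdt(t, a, b)
              t -= dt
              if abs(dt) < tol:
                  break
          return t

      # Draw the uncertain inputs and propagate each draw through the solver.
      a_draws = rng.normal(10.0, 0.05, 20_000)
      b_draws = rng.normal(2.0, 0.01, 20_000)
      roots = np.array([newton(a, b) for a, b in zip(a_draws, b_draws)])
      print(roots.mean(), roots.std())     # estimate and standard uncertainty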

  11. Dynamic data integration and stochastic inversion of a confined aquifer

    NASA Astrophysics Data System (ADS)

    Wang, D.; Zhang, Y.; Irsa, J.; Huang, H.; Wang, L.

    2013-12-01

    Much work has been done in developing and applying inverse methods to aquifer modeling. The scope of this paper is to investigate the applicability of a new direct method for large inversion problems and to incorporate uncertainty measures in the inversion outcomes (Wang et al., 2013). The problem considered is a two-dimensional inverse model (50×50 grid) of steady-state flow for a heterogeneous ground truth model (500×500 grid) with two hydrofacies. From the ground truth model, decreasing numbers of wells (12, 6, 3) were sampled for facies types, based on which experimental indicator histograms and directional variograms were computed. These parameters and models were used by Sequential Indicator Simulation to generate 100 realizations of hydrofacies patterns on a 100×100 (geostatistical) grid, which were conditioned to the facies measurements at the wells. These realizations were smoothed with Simulated Annealing and coarsened to the 50×50 inverse grid, before they were conditioned with the direct method to the dynamic data, i.e., observed heads and groundwater fluxes at the same sampled wells. A set of realizations of estimated hydraulic conductivities (Ks), flow fields, and boundary conditions were created, which centered on the 'true' solutions from solving the ground truth model. Both hydrofacies conductivities were computed with an estimation accuracy within ±10% (12 wells), ±20% (6 wells), and ±35% (3 wells) of the true values. For boundary condition estimation, the accuracy was within ±15% (12 wells), ±30% (6 wells), and ±50% (3 wells) of the true values. The inversion system of equations was solved with LSQR (Paige and Saunders, 1982), for which a coordinate transform and a matrix scaling preprocessor were used to improve the condition number (CN) of the coefficient matrix. However, when the inverse grid was refined to 100×100, Gaussian Noise Perturbation was used to limit the growth of the CN before the matrix solve. To scale the inverse problem up (i.e., without smoothing and coarsening and therefore reducing the associated estimation uncertainty), a parallel LSQR solver was written and verified. For the 50×50 grid, the parallel solver sped up the serial solution time by 14X using 4 CPUs (research on parallel performance and scaling is ongoing). A sensitivity analysis was conducted to examine the relation between the observed data and the inversion outcomes, where measurement errors of increasing magnitudes (i.e., ±1, 2, 5, and 10% of the total head variation and up to ±2% of the total flux variation) were imposed on the observed data. Inversion results were stable but the accuracy of the Ks and boundary estimation degraded with increasing errors, as expected. In particular, the quality of the observed heads is critical to hydraulic head recovery, while the quality of the observed fluxes plays a dominant role in K estimation. References: Wang, D., Y. Zhang, J. Irsa, H. Huang, and L. Wang (2013), Data integration and stochastic inversion of a confined aquifer with high performance computing, Advances in Water Resources, in preparation. Paige, C. C., and M. A. Saunders (1982), LSQR: an algorithm for sparse linear equations and sparse least squares, ACM Transactions on Mathematical Software, 8(1), 43-71.
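
    The LSQR solver cited in the record (Paige and Saunders, 1982) is available in SciPy; the small example below applies it to a random sparse least-squares system standing in for the inversion system of equations.

      # Sparse least-squares solve with LSQR.
      import numpy as np
      from scipy.sparse import random as sparse_random
      from scipy.sparse.linalg import lsqr

      rng = np.random.default_rng(6)
      A = sparse_random(300, 100, density=0.1, random_state=7, format="csr")
      x_true = rng.normal(size=100)
      b = A @ x_true + 0.01 * rng.normal(size=300)

      result = lsqr(A, b, atol=1e-8, btol=1e-8)
      x_est, istop, itn = result[0], result[1], result[2]
      print(istop, itn, np.linalg.norm(x_est - x_true))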

  12. Kinematic source inversions of teleseismic data based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, O.; McDougall, D.; Mai, P. M.; Babuska, I.

    2014-12-01

    One fundamental aspect of seismic hazard mitigation is gaining a better understanding of the rupture process. Because direct observation of the relevant parameters and properties is not possible, other means such as kinematic source inversions are used instead. By constraining the spatial and temporal evolution of fault slip during an earthquake, those inversion approaches may enable valuable insights into the physics of the rupture process. However, due to the underdetermined nature of this inversion problem (i.e., inverting a kinematic source model for an extended fault based on seismic data), the provided solutions are generally non-unique. Here we present a statistical (Bayesian) inversion approach based on an open-source library for uncertainty quantification (UQ) called QUESO that was developed at ICES (UT Austin). The approach has advantages with respect to deterministic inversion approaches as it provides not only a single (non-unique) solution but also uncertainty bounds with it. Those uncertainty bounds help to qualitatively and quantitatively judge how well constrained an inversion solution is and how much rupture complexity the data reliably resolve. The presented inversion scheme uses only tele-seismically recorded body waves, but future developments may lead us towards joint inversion schemes. After giving an insight into the inversion scheme itself (based on delayed rejection adaptive Metropolis, DRAM), we explore the method's resolution potential. For that, we synthetically generate tele-seismic data, add for example different levels of noise and/or change the fault plane parameterization, and then apply our inversion scheme in the attempt to extract the (known) kinematic rupture model. We conclude by inverting, as an example, real tele-seismic data of a recent large earthquake and comparing those results with deterministically derived kinematic source models provided by other research groups.

  13. Approximate solutions of acoustic 3D integral equation and their application to seismic modeling and full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.

    2017-10-01

    Over the recent decades, a number of fast approximate solutions of the Lippmann-Schwinger equation, which are more accurate than the classic Born and Rytov approximations, were proposed in the field of electromagnetic modeling. Those developments could be naturally extended to acoustic and elastic fields; however, until recently, they were almost unknown in seismology. This paper presents several solutions of this kind applied to acoustic modeling for both lossy and lossless media. We evaluate the numerical merits of those methods and provide an estimate of their numerical complexity. In our numerical realization we use a matrix-free implementation of the corresponding integral operator. We study the accuracy of those approximate solutions and demonstrate that the quasi-analytical approximation is more accurate than the Born approximation. Further, we apply the quasi-analytical approximation to the solution of the inverse problem. It is demonstrated that this approach improves the estimation of the data gradient compared to the Born approximation. The developed inversion algorithm is based on conjugate-gradient type optimization. A numerical model study demonstrates that the quasi-analytical solution significantly reduces the computation time of seismic full-waveform inversion. We also show how the quasi-analytical approximation can be extended to the case of elastic wavefields.

  14. Moment tensor solutions estimated using optimal filter theory for 51 selected earthquakes, 1980-1984

    USGS Publications Warehouse

    Sipkin, S.A.

    1987-01-01

    The 51 global events that occurred from January 1980 to March 1984, which were chosen by the convenors of the Symposium on Seismological Theory and Practice, have been analyzed using a moment tensor inversion algorithm (Sipkin). Many of the events were routinely analyzed as part of the National Earthquake Information Center's (NEIC) efforts to publish moment tensor and first-motion fault-plane solutions for all moderate- to large-sized (mb>5.7) earthquakes. In routine use only long-period P-waves are used and the source-time function is constrained to be a step-function at the source (δ-function in the far-field). Four of the events were of special interest, and long-period P, SH-wave solutions were obtained. For three of these events, an unconstrained inversion was performed. The resulting time-dependent solutions indicated that, for many cases, departures of the solutions from pure double-couples are caused by source complexity that has not been adequately modeled. These solutions also indicate that source complexity of moderate-sized events can be determined from long-period data. Finally, for one of the events of special interest, an inversion of the broadband P-waveforms was also performed, demonstrating the potential for using broadband waveform data in inversion procedures. © 1987.

  15. Large Scale Document Inversion using a Multi-threaded Computing System

    PubMed Central

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2018-01-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general purpose computing. We can utilize the GPU in computation as a massive parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, a vast amount of information is flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, etc., are produced or collected every moment with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full text searches or document retrieval, a large number of documents will require a tremendous amount of time to create the index. The performance of document inversion can be improved by a multi-thread or multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD), document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU, to develop high performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts •Information systems➝Information retrieval • Computing methodologies➝Massively parallel and high-performance simulations. PMID:29861701

  16. Non-invasive Hall current distribution measurement in a Hall effect thruster

    NASA Astrophysics Data System (ADS)

    Mullins, Carl R.; Farnell, Casey C.; Farnell, Cody C.; Martinez, Rafael A.; Liu, David; Branam, Richard D.; Williams, John D.

    2017-01-01

    A means is presented to determine the Hall current density distribution in a closed drift thruster by remotely measuring the magnetic field and solving the inverse problem for the current density. The magnetic field was measured by employing an array of eight tunneling magnetoresistive (TMR) sensors capable of milligauss sensitivity when placed in a high background field. The array was positioned just outside the thruster channel on a 1.5 kW Hall thruster equipped with a center-mounted hollow cathode. In the sensor array location, the static magnetic field is approximately 30 G, which is within the linear operating range of the TMR sensors. Furthermore, the induced field at this distance is approximately tens of milligauss, which is within the sensitivity range of the TMR sensors. Because of the nature of the inverse problem, the induced-field measurements do not provide the Hall current density by a simple inversion; however, a Tikhonov regularization of the induced field does provide the current density distributions. These distributions are shown as a function of time in contour plots. The measured ratios between the average Hall current and the average discharge current ranged from 6.1 to 7.3 over a range of operating conditions from 1.3 kW to 2.2 kW. The temporal inverse solution at 1.5 kW exhibited a breathing mode frequency of 24 kHz, which was in agreement with temporal measurements of the discharge current.

  17. Non-invasive Hall current distribution measurement in a Hall effect thruster.

    PubMed

    Mullins, Carl R; Farnell, Casey C; Farnell, Cody C; Martinez, Rafael A; Liu, David; Branam, Richard D; Williams, John D

    2017-01-01

    A means is presented to determine the Hall current density distribution in a closed drift thruster by remotely measuring the magnetic field and solving the inverse problem for the current density. The magnetic field was measured by employing an array of eight tunneling magnetoresistive (TMR) sensors capable of milligauss sensitivity when placed in a high background field. The array was positioned just outside the thruster channel on a 1.5 kW Hall thruster equipped with a center-mounted hollow cathode. In the sensor array location, the static magnetic field is approximately 30 G, which is within the linear operating range of the TMR sensors. Furthermore, the induced field at this distance is approximately tens of milligauss, which is within the sensitivity range of the TMR sensors. Because of the nature of the inverse problem, the induced-field measurements do not provide the Hall current density by a simple inversion; however, a Tikhonov regularization of the induced field does provide the current density distributions. These distributions are shown as a function of time in contour plots. The measured ratios between the average Hall current and the average discharge current ranged from 6.1 to 7.3 over a range of operating conditions from 1.3 kW to 2.2 kW. The temporal inverse solution at 1.5 kW exhibited a breathing mode frequency of 24 kHz, which was in agreement with temporal measurements of the discharge current.
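
    The regularization step named in both records can be sketched compactly: if a kernel matrix G relates the discretized Hall current density to the induced field measured by the sensor array, a Tikhonov-regularized least-squares solve stabilizes the otherwise ill-posed inversion. The kernel, data and regularization weight below are illustrative stand-ins.

      # Tikhonov-regularized inversion of induced-field data for a current profile.
      import numpy as np

      rng = np.random.default_rng(8)
      n_sensors, n_cells = 8, 60
      G = rng.normal(size=(n_sensors, n_cells)) / n_cells   # toy Biot-Savart-like kernel
      j_true = np.exp(-0.5 * ((np.arange(n_cells) - 30) / 6.0) ** 2)
      b = G @ j_true + 1e-4 * rng.normal(size=n_sensors)    # measured induced field

      lam = 1e-3
      # Tikhonov solution: j = (G^T G + lam^2 I)^(-1) G^T b
      j_est = np.linalg.solve(G.T @ G + lam ** 2 * np.eye(n_cells), G.T @ b)
      print(j_est[:5])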

  18. Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.

    NASA Astrophysics Data System (ADS)

    Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.

    2016-12-01

    The image reconstruction problem in electrical resistivity tomography (ERT) is highly non-linear, sparse, and ill-posed. The inverse problem is much more severe when dealing with 3-D datasets, which result in large matrices. Conventional gradient-based techniques using L2-norm minimization with some form of regularization can impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that takes advantage of the inherent sparsity of the parameter space in one form or another. If favorable conditions are met, CS has been proven to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open-source 3-D resistivity inversion tool using the CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007) with CS as the inverse code. A discrete cosine transform (DCT) was used to induce model sparsity in orthogonal form. Two CS-based algorithms, viz. the interior point method and two-step IST, were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS was shown to reconstruct the sub-surface image effectively at lower computational cost. This was observed as a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.
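
    The combination the record describes, iterative soft thresholding with sparsity imposed in a DCT basis, is illustrated by the Python sketch below; the random measurement operator and 1-D signal stand in for the ERT forward model and resistivity image.

      # Iterative soft thresholding (IST) with sparsity in an orthonormal DCT basis.
      import numpy as np
      from scipy.fft import dct, idct

      rng = np.random.default_rng(9)
      n, m = 256, 100
      coeffs = np.zeros(n)
      coeffs[[3, 10, 25]] = [5.0, -3.0, 2.0]
      signal = idct(coeffs, norm="ortho")              # signal sparse in the DCT domain
      A = rng.normal(size=(m, n)) / np.sqrt(m)         # toy measurement operator
      y = A @ signal

      lam, step = 0.02, 1.0 / np.linalg.norm(A, 2) ** 2
      x = np.zeros(n)
      for _ in range(500):
          grad = A.T @ (A @ x - y)
          c = dct(x - step * grad, norm="ortho")       # gradient step, to DCT domain
          c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)   # soft threshold
          x = idct(c, norm="ortho")                    # back to the model domain
      print(np.linalg.norm(x - signal) / np.linalg.norm(signal))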

  19. Large Scale Document Inversion using a Multi-threaded Computing System.

    PubMed

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2017-06-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general purpose computing. We can utilize the GPU in computation as a massive parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, a vast amount of information is flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, etc., are produced or collected every moment with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full text searches or document retrieval, a large number of documents will require a tremendous amount of time to create the index. The performance of document inversion can be improved by a multi-thread or multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD), document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU, to develop high performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. •Information systems➝Information retrieval • Computing methodologies➝Massively parallel and high-performance simulations.

  20. Applying a probabilistic seismic-petrophysical inversion and two different rock-physics models for reservoir characterization in offshore Nile Delta

    NASA Astrophysics Data System (ADS)

    Aleardi, Mattia

    2018-01-01

    We apply a two-step probabilistic seismic-petrophysical inversion for the characterization of a clastic, gas-saturated reservoir located in the offshore Nile Delta. In particular, we discuss and compare the results obtained when two different rock-physics models (RPMs) are employed in the inversion. The first RPM is an empirical, linear model directly derived from the available well log data by means of an optimization procedure. The second RPM is a theoretical, non-linear model based on the Hertz-Mindlin contact theory. The first step of the inversion procedure is a Bayesian linearized amplitude-versus-angle (AVA) inversion in which the elastic properties, and the associated uncertainties, are inferred from pre-stack seismic data. The estimated elastic properties constitute the input to the second step, a probabilistic petrophysical inversion in which we account for the noise contaminating the recorded seismic data and the uncertainties affecting both the derived rock-physics models and the estimated elastic parameters. In particular, a Gaussian mixture a priori distribution is used to properly take into account the facies-dependent behavior of the petrophysical properties, related to the different fluid and rock properties of the different litho-fluid classes. In both the synthetic and field data tests, the very minor differences between the results obtained with the two RPMs, and the good match between the estimated properties and well log information, confirm the applicability of the inversion approach and the suitability of the two RPMs for reservoir characterization in the investigated area.
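
    The Bayesian linearized step rests on the standard linear-Gaussian update. The sketch below illustrates that building block with a single Gaussian prior (the paper uses a Gaussian mixture prior and a full AVA operator); the operator G, covariances, and data are illustrative assumptions.

```python
# Linear-Gaussian Bayesian update, the building block behind a Bayesian
# linearized inversion step: data d = G m + noise, Gaussian prior on m.
# G, covariances, and sizes below are illustrative, not the paper's.
import numpy as np

def linear_gaussian_posterior(G, d, m_prior, C_m, C_d):
    """Posterior mean and covariance of m given d = G m + e, with e ~ N(0, C_d)."""
    C_m_inv = np.linalg.inv(C_m)
    C_d_inv = np.linalg.inv(C_d)
    C_post = np.linalg.inv(C_m_inv + G.T @ C_d_inv @ G)        # posterior covariance
    m_post = C_post @ (C_m_inv @ m_prior + G.T @ C_d_inv @ d)  # posterior mean
    return m_post, C_post

# Toy usage with 3 elastic parameters observed through 5 linearized measurements.
rng = np.random.default_rng(1)
G = rng.standard_normal((5, 3))
m_true = np.array([2.4, 1.2, 2.1])
d = G @ m_true + 0.05 * rng.standard_normal(5)
m_post, C_post = linear_gaussian_posterior(
    G, d, m_prior=np.ones(3), C_m=np.eye(3), C_d=0.05**2 * np.eye(5))
```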

  1. An evolutive real-time source inversion based on a linear inverse formulation

    NASA Astrophysics Data System (ADS)

    Sanchez Reyes, H. S.; Tago, J.; Cruz-Atienza, V. M.; Metivier, L.; Contreras Zazueta, M. A.; Virieux, J.

    2016-12-01

    Finite source inversion is a stepping stone to unveiling earthquake rupture. It is used in ground motion prediction, and its results shed light on the seismic cycle for a better tectonic understanding. It is not yet used for quasi-real-time analysis. Nowadays, significant progress has been made in earthquake imaging approaches, thanks to new data acquisition and methodological advances. However, most of these techniques are a posteriori procedures applied once the seismograms are available. Incorporating source parameter estimation into early warning systems would require updating the source build-up while recording data. In order to move toward this dynamic estimation, we developed a kinematic source inversion formulated in the time domain, for which seismograms are linearly related to the slip distribution on the fault through convolutions with Green's functions previously estimated and stored (Perton et al., 2016). These convolutions are performed in the time domain as we progressively increase the time window of records at each station specifically. The selected unknowns are the spatio-temporal slip-rate distribution, which keeps the forward problem linear in the unknowns, as promoted by Fan and Shearer (2014). Through the spatial extension of the expected rupture zone, we progressively build up the slip rate as new data are added, assuming rupture causality. The formulation is based on the adjoint-state method for efficiency (Plessix, 2006). The inverse problem is non-unique and, in most cases, underdetermined. While standard regularization terms are used to stabilize the inversion, we avoid strategies based on parameter reduction, which would lead to an unwanted non-linear relationship between parameters and seismograms in our progressive build-up. Rise time, rupture velocity, and other quantities can be extracted later as attributes from the slip-rate inversion we perform. Satisfactory results are obtained on a synthetic example (Figure 1) proposed by the Source Inversion Validation project (Mai et al., 2011). A real case application is currently being explored. Our specific formulation, combined with simple prior information, as well as the numerical results obtained so far, yields interesting perspectives for a real-time implementation.
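
    A stripped-down illustration of the linear forward problem described above (not the authors' adjoint-state implementation): the seismogram is a convolution of a stored Green's function with the unknown slip rate, written as d = G s and inverted here by simple damped least squares on synthetic signals.

```python
# Minimal sketch of a linear kinematic source inversion: seismogram d is the
# convolution of a stored Green's function with the unknown slip rate s,
# written as d = G s and solved with damped least squares. Signals are synthetic.
import numpy as np
from scipy.linalg import toeplitz

def convolution_matrix(green, n_slip):
    """Toeplitz matrix G such that G @ s equals np.convolve(green, s)."""
    n_out = len(green) + n_slip - 1
    col = np.concatenate([green, np.zeros(n_out - len(green))])
    row = np.zeros(n_slip)
    row[0] = green[0]
    return toeplitz(col, row)

def damped_least_squares(G, d, eps=1e-2):
    """Solve (G^T G + eps I) s = G^T d, a standard regularized normal equation."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + eps * np.eye(n), G.T @ d)

# Synthetic example: a short Green's function and a two-pulse slip-rate history.
g = np.array([0.0, 1.0, 0.5, 0.2, 0.05])
s_true = np.zeros(30); s_true[5:8] = 1.0; s_true[15:18] = 0.6
G = convolution_matrix(g, len(s_true))
d = G @ s_true + 0.01 * np.random.default_rng(4).standard_normal(G.shape[0])
s_est = damped_least_squares(G, d)
```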

  2. The attitude inversion method of geostationary satellites based on unscented particle filter

    NASA Astrophysics Data System (ADS)

    Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao

    2018-04-01

    The attitude of geostationary satellites is difficult to obtain because, in space object surveillance, they appear only as non-resolved images on ground observation equipment. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The UPF-based inversion algorithm is proposed to handle the strongly non-linear character of the photometric inversion for satellite attitude, and it combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The update step improves particle selection by using the UKF idea to redesign the importance density function. Moreover, it uses an RMS-UKF to partially correct the prediction covariance matrix, which addresses both the limited applicability of the UKF and the particle degradation and dilution of a PF-based attitude inversion. The paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability, and applicability of the method are verified by simulation and scaling experiments. The results show that the proposed method effectively solves the particle degradation and depletion problem of PF-based attitude inversion, as well as the unsuitability of the UKF for strongly non-linear attitude inversion. The inversion accuracy is clearly superior to that of the UKF and PF; in addition, even with large initial attitude errors the method can invert the attitude with a small number of particles and high precision.

  3. Solution of some types of differential equations: operational calculus and inverse differential operators.

    PubMed

    Zhukovsky, K

    2014-01-01

    We present a general method of operational nature to analyze and obtain solutions for a variety of equations of mathematical physics and related mathematical problems. We construct inverse differential operators and produce operational identities, involving inverse derivatives and families of generalised orthogonal polynomials, such as Hermite and Laguerre polynomial families. We develop the methodology of inverse and exponential operators, employing them for the study of partial differential equations. Advantages of the operational technique, combined with the use of integral transforms, generating functions with exponentials and their integrals, for solving a wide class of partial derivative equations, related to heat, wave, and transport problems, are demonstrated.

  4. Note on the practical significance of the Drazin inverse

    NASA Technical Reports Server (NTRS)

    Wilkinson, J. H.

    1979-01-01

    The solution of the differential system Bx′ = Ax + f, where A and B are n x n matrices and A - λB is not a singular pencil, may be expressed in terms of the Drazin inverse. It is shown that there is a simple reduced form for the pencil A - λB which is adequate for the determination of the general solution, and that although the Drazin inverse could be determined efficiently from this reduced form, it is inadvisable to do so.

  5. Mathematical Modeling of Torsional Surface Wave Propagation in a Non-Homogeneous Transverse Isotropic Elastic Solid Semi-Infinite Medium Under a Layer

    NASA Astrophysics Data System (ADS)

    Sethi, M.; Sharma, A.; Vasishth, A.

    2017-05-01

    The present paper deals with the mathematical modeling of the propagation of torsional surface waves in a non-homogeneous, transversely isotropic elastic half-space under a rigid layer. Both the rigidities and the density of the half-space are assumed to vary inverse-linearly with depth. The separation of variables method has been used to obtain analytical solutions of the dispersion equation for the torsional surface waves. The effects of the inhomogeneities on the phase velocity of the torsional surface waves are shown graphically. Dispersion equations have also been derived for some particular cases, and these are in complete agreement with classical results.

  6. Kinematic modeling of a double octahedral Variable Geometry Truss (VGT) as an extensible gimbal

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II

    1994-01-01

    This paper presents the complete forward and inverse kinematics solutions for control of the three degree-of-freedom (DOF) double octahedral variable geometry truss (VGT) module as an extensible gimbal. A VGT is a truss structure partially comprised of linearly actuated members. A VGT can be used as joints in a large, lightweight, high load-bearing manipulator for earth- and space-based remote operations, plus industrial applications. The results have been used to control the NASA VGT hardware as an extensible gimbal, demonstrating the capability of this device to be a joint in a VGT-based manipulator. This work is an integral part of a VGT-based manipulator design, simulation, and control tool.

  7. Group refractive index reconstruction with broadband interferometric confocal microscopy

    PubMed Central

    Marks, Daniel L.; Schlachter, Simon C.; Zysk, Adam M.; Boppart, Stephen A.

    2010-01-01

    We propose a novel method of measuring the group refractive index of biological tissues at the micrometer scale. The technique utilizes a broadband confocal microscope embedded into a Mach–Zehnder interferometer, with which spectral interferograms are measured as the sample is translated through the focus of the beam. The method does not require phase unwrapping and is insensitive to vibrations in the sample and reference arms. High measurement stability is achieved because a single spectral interferogram contains all the information necessary to compute the optical path delay of the beam transmitted through the sample. Included are a physical framework defining the forward problem, linear solutions to the inverse problem, and simulated images of biologically relevant phantoms. PMID:18451922

  8. A Method to Solve Interior and Exterior Camera Calibration Parameters for Image Resection

    NASA Technical Reports Server (NTRS)

    Samtaney, Ravi

    1999-01-01

    An iterative method is presented to solve the internal and external camera calibration parameters, given model target points and their images from one or more camera locations. The direct linear transform formulation was used to obtain a guess for the iterative method, and herein lies one of the strengths of the present method. In all test cases, the method converged to the correct solution. In general, an overdetermined system of nonlinear equations is solved in the least-squares sense. The iterative method presented is based on Newton-Raphson for solving systems of nonlinear algebraic equations. The Jacobian is analytically derived and the pseudo-inverse of the Jacobian is obtained by singular value decomposition.
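
    A minimal sketch of the iteration described above: Gauss-Newton/Newton-Raphson steps in which the update uses an SVD-based pseudo-inverse of the Jacobian. The residual and Jacobian below are a toy curve-fitting stand-in, not the camera resection equations.

```python
# Minimal Gauss-Newton iteration using an SVD-based pseudo-inverse of the
# Jacobian; the residual/Jacobian below are toy stand-ins for the camera model.
import numpy as np

def gauss_newton(residual, jacobian, x0, n_iter=20, tol=1e-10):
    """Iteratively minimize ||residual(x)||^2 in the least-squares sense."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.pinv(J) @ r            # pseudo-inverse via SVD
        x = x - dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy overdetermined nonlinear problem: fit (a, b) to y = a * exp(b * t).
t = np.linspace(0.0, 1.0, 10)
y = 2.0 * np.exp(-1.5 * t)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * t),
                                      p[0] * t * np.exp(p[1] * t)])
p_hat = gauss_newton(residual, jacobian, x0=[1.0, -1.0])   # -> approx [2.0, -1.5]
```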

  9. From intermediate anisotropic to isotropic friction at large strain rates to account for viscosity thickening in polymer solutions

    NASA Astrophysics Data System (ADS)

    Stephanou, Pavlos S.; Kröger, Martin

    2018-05-01

    The steady-state extensional viscosity of dense polymeric liquids in elongational flows is known to be peculiar in the sense that for entangled polymer melts it monotonically decreases—whereas for concentrated polymer solutions it increases—with increasing strain rate beyond the inverse Rouse time. To shed light on this issue, we solve the kinetic theory model for concentrated polymer solutions and entangled melts proposed by Curtiss and Bird, also known as the tumbling-snake model, supplemented by a variable link tension coefficient that we relate to the uniaxial nematic order parameter of the polymer. As a result, the friction tensor is increasingly becoming isotropic at large strain rates as the polymer concentration decreases, and the model is seen to capture the experimentally observed behavior. Additional refinements may supplement the present model to capture very strong flows. We furthermore derive analytic expressions for small rates and the linear viscoelastic behavior. This work builds upon our earlier work on the use of the tumbling-snake model under shear and demonstrates its capacity to improve our microscopic understanding of the rheology of entangled polymer melts and concentrated polymer solutions.

  10. Wormholes minimally violating the null energy condition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouhmadi-López, Mariam; Lobo, Francisco S N; Martín-Moruno, Prado

    2014-11-01

    We consider novel wormhole solutions supported by a matter content that minimally violates the null energy condition. More specifically, we consider an equation of state in which the sum of the energy density and radial pressure is proportional to a constant with a value smaller than that of the inverse area characterising the system, i.e., the area of the wormhole mouth. This approach is motivated by a recently proposed cosmological event, denoted "the little sibling of the big rip", where the Hubble rate and the scale factor blow up but the cosmic derivative of the Hubble rate does not [1]. By using the cut-and-paste approach, we match interior spherically symmetric wormhole solutions to an exterior Schwarzschild geometry, and analyse the stability of the thin-shell to linearized spherically symmetric perturbations around static solutions, by choosing suitable properties for the exotic material residing on the junction interface radius. Furthermore, we also consider an inhomogeneous generalization of the equation of state considered above and analyse the respective stability regions. In particular, we obtain a specific wormhole solution with an asymptotic behaviour corresponding to a global monopole.

  11. Enhancing Autonomy of Aerial Systems Via Integration of Visual Sensors into Their Avionics Suite

    DTIC Science & Technology

    2016-09-01

    aerial platform for subsequent visual sensor integration. Subject terms: autonomous system, quadrotors, direct method, inverse dynamics in the virtual domain (IDVD), integer linear program (ILP), inertial-navigation system (INS), Global-Positioning System (GPS).

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitanidis, Peter

    As large-scale, commercial storage projects become operational, the problem of utilizing information from diverse sources becomes more critically important. In this project, we developed, tested, and applied an advanced joint data inversion system for CO2 storage modeling with large data sets for use in site characterization and real-time monitoring. Emphasis was on the development of advanced and efficient computational algorithms for joint inversion of hydro-geophysical data, coupled with state-of-the-art forward process simulations. The developed system consists of (1) inversion tools using characterization data, such as 3D seismic survey (amplitude images), borehole log and core data, as well as hydraulic, tracer and thermal tests before CO2 injection, (2) joint inversion tools for updating the geologic model with the distribution of rock properties, thus reducing uncertainty, using hydro-geophysical monitoring data, and (3) highly efficient algorithms for directly solving the dense or sparse linear algebra systems derived from the joint inversion. The system combines methods from stochastic analysis, fast linear algebra, and high performance computing. The developed joint inversion tools have been tested through synthetic CO2 storage examples.

  13. Stochastic sediment property inversion in Shallow Water 06.

    PubMed

    Michalopoulou, Zoi-Heleni

    2017-11-01

    Received time-series at a short distance from the source allow the identification of distinct paths; four of these are the direct arrival, the surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times of the first three paths are then employed, along with linearization, for the estimation of source range and depth, water column depth, and sound speed in the water. By propagating the densities of the arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because those densities express the uncertainty in the inversion for sediment properties.

  14. SU-C-207B-06: Comparison of Registration Methods for Modeling Pathologic Response of Esophageal Cancer to Chemoradiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riyahi, S; Choi, W; Bhooshan, N

    2016-06-15

    Purpose: To compare linear and deformable registration methods for evaluation of tumor response to chemoradiation therapy (CRT) in patients with esophageal cancer. Methods: Linear and multi-resolution BSpline deformable registration were performed on pre- and post-CRT CT/PET images of 20 patients with esophageal cancer. For both registration methods, CT was registered using the Mean Square Error (MSE) metric; to register PET, which is a multi-modality problem, we applied the transformation obtained with Mutual Information (MI) from the corresponding CT. Similarity of the warped CT/PET was quantitatively evaluated using Normalized Mutual Information (NMI), and plausibility of the deformation field was assessed using the inverse consistency error. To evaluate tumor response, four groups of tumor features were examined: (1) conventional PET/CT features, e.g., SUV and diameter; (2) clinical parameters, e.g., TNM stage and histology; (3) spatial-temporal PET features that describe intensity, texture, and geometry of the tumor; and (4) all features combined. Dominant features were identified using 10-fold cross-validation, and a Support Vector Machine (SVM) was deployed for tumor response prediction, with accuracy evaluated by the ROC Area Under the Curve (AUC). Results: The mean and standard deviation of NMI for deformable registration using the MSE metric were 0.2±0.054, versus 0.1±0.026 for linear registration, showing higher NMI for deformable registration. Likewise, for the MI metric, deformable registration gave 0.13±0.035 compared with 0.12±0.037 for its linear counterpart. The inverse consistency error for deformable registration with the MSE metric was 4.65±2.49, versus 1.32±2.3 for linear registration, showing a smaller value for linear registration; the same conclusion was obtained for MI. The AUC for both linear and deformable registration was 1, showing no difference in terms of response evaluation. Conclusion: Deformable registration showed better NMI than linear registration; however, the inverse consistency error was lower for linear registration. We do not expect to see a significant difference when warping PET images using deformable or linear registration. This work was supported in part by the National Cancer Institute Grant R01CA172638.
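
    For reference, one common definition of normalized mutual information can be computed from a joint histogram as sketched below; the exact normalization used in the abstract may differ (its reported values are on a different scale), and the images and bin count here are illustrative.

```python
# Sketch of normalized mutual information (NMI) between two images from their
# joint histogram, a similarity measure of the kind used in the abstract.
# This uses the (H(a)+H(b))/H(a,b) normalization; other variants exist.
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(a, b) = (H(a) + H(b)) / H(a, b) computed from a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0
    h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (h_x + h_y) / h_xy

# Toy usage: a "fixed" image and a noisy, roughly aligned "moving" image.
rng = np.random.default_rng(5)
fixed = rng.random((64, 64))
moving = fixed + 0.1 * rng.standard_normal((64, 64))
print(normalized_mutual_information(fixed, moving))
```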

  15. Minimal-Inversion Feedforward-And-Feedback Control System

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1990-01-01

    Recent developments in control-systems theory support the concept of a minimal-inversion feedforward-and-feedback control system consisting of three independently designable control subsystems. It is applicable to the control of a linear, time-invariant plant.

  16. Computation of steady and unsteady quasi-one-dimensional viscous/inviscid interacting internal flows at subsonic, transonic, and supersonic Mach numbers

    NASA Technical Reports Server (NTRS)

    Swafford, Timothy W.; Huddleston, David H.; Busby, Judy A.; Chesser, B. Lawrence

    1992-01-01

    Computations of viscous-inviscid interacting internal flowfields are presented for steady and unsteady quasi-one-dimensional (Q1D) test cases. The unsteady Q1D Euler equations are coupled with integral boundary-layer equations for unsteady, two-dimensional (planar or axisymmetric), turbulent flow over impermeable, adiabatic walls. The coupling methodology differs from that used in most techniques reported previously in that the above mentioned equation sets are written as a complete system and solved simultaneously; that is, the coupling is carried out directly through the equations as opposed to coupling the solutions of the different equation sets. Solutions to the coupled system of equations are obtained using both explicit and implicit numerical schemes for steady subsonic, steady transonic, and both steady and unsteady supersonic internal flowfields. Computed solutions are compared with measurements as well as Navier-Stokes and inverse boundary-layer methods. An analysis of the eigenvalues of the coefficient matrix associated with the quasi-linear form of the coupled system of equations indicates the presence of complex eigenvalues for certain flow conditions. It is concluded that although reasonable solutions can be obtained numerically, these complex eigenvalues contribute to the overall difficulty in obtaining numerical solutions to the coupled system of equations.

  17. Probabilistic joint inversion of waveforms and polarity data for double-couple focal mechanisms of local earthquakes

    NASA Astrophysics Data System (ADS)

    Wéber, Zoltán

    2018-06-01

    Estimating the mechanisms of small (M < 4) earthquakes is quite challenging. A common scenario is that neither the available polarity data alone nor the well predictable near-station seismograms alone are sufficient to obtain reliable focal mechanism solutions for weak events. To handle this situation we introduce here a new method that jointly inverts waveforms and polarity data following a probabilistic approach. The procedure called joint waveform and polarity (JOWAPO) inversion maps the posterior probability density of the model parameters and estimates the maximum likelihood double-couple mechanism, the optimal source depth and the scalar seismic moment of the investigated event. The uncertainties of the solution are described by confidence regions. We have validated the method on two earthquakes for which well-determined focal mechanisms are available. The validation tests show that including waveforms in the inversion considerably reduces the uncertainties of the usually poorly constrained polarity solutions. The JOWAPO method performs best when it applies waveforms from at least two seismic stations. If the number of the polarity data is large enough, even single-station JOWAPO inversion can produce usable solutions. When only a few polarities are available, however, single-station inversion may result in biased mechanisms. In this case some caution must be taken when interpreting the results. We have successfully applied the JOWAPO method to an earthquake in North Hungary, whose mechanism could not be estimated by long-period waveform inversion. Using 17 P-wave polarities and waveforms at two nearby stations, the JOWAPO method produced a well-constrained focal mechanism. The solution is very similar to those obtained previously for four other events that occurred in the same earthquake sequence. The analysed event has a strike-slip mechanism with a P axis oriented approximately along an NE-SW direction.

  18. On the stability of a superspinar

    NASA Astrophysics Data System (ADS)

    Nakao, Ken-ichi; Joshi, Pankaj S.; Guo, Jun-Qi; Kocherlakota, Prashant; Tagoshi, Hideyuki; Harada, Tomohiro; Patil, Mandar; Królak, Andrzej

    2018-05-01

    The superspinar proposed by Gimon and Hořava is a rapidly rotating compact entity whose exterior is described by the over-spinning Kerr geometry. The compact entity itself is expected to be governed by superstringy effects, and in astrophysical scenarios it can give rise to interesting observable phenomena. Earlier it was suggested that the superspinar may not be stable but we point out here that this does not necessarily follow from earlier studies. We show, by analytically treating the Teukolsky equations by Detwiler's method, that in fact there are infinitely many boundary conditions that make the superspinar stable at least against the linear perturbations of m = l modes, and that the modes will decay in time. Further consideration leads us to the conclusion that it is possible to set the inverse problem to the linear stability issue: since the radial Teukolsky equation for the superspinar has no singular point on the real axis, we obtain regular solutions to the Teukolsky equation for arbitrary discrete frequency spectrum of the quasi-normal modes (no incoming waves) and the boundary conditions at the "surface" of the superspinar are found from obtained solutions. It follows that we need to know more on the physical nature of the superspinar in order to decide on its stability in physical reality.

  19. A fractional Fourier transform analysis of the scattering of ultrasonic waves

    PubMed Central

    Tant, Katherine M.M.; Mulholland, Anthony J.; Langer, Matthias; Gachagan, Anthony

    2015-01-01

    Many safety critical structures, such as those found in nuclear plants, oil pipelines and in the aerospace industry, rely on key components that are constructed from heterogeneous materials. Ultrasonic non-destructive testing (NDT) uses high-frequency mechanical waves to inspect these parts, ensuring they operate reliably without compromising their integrity. It is possible to employ mathematical models to develop a deeper understanding of the acquired ultrasonic data and enhance defect imaging algorithms. In this paper, a model for the scattering of ultrasonic waves by a crack is derived in the time–frequency domain. The fractional Fourier transform (FrFT) is applied to an inhomogeneous wave equation where the forcing function is prescribed as a linear chirp, modulated by a Gaussian envelope. The homogeneous solution is found via the Born approximation which encapsulates information regarding the flaw geometry. The inhomogeneous solution is obtained via the inverse Fourier transform of a Gaussian-windowed linear chirp excitation. It is observed that, although the scattering profile of the flaw does not change, it is amplified. Thus, the theory demonstrates the enhanced signal-to-noise ratio permitted by the use of coded excitation, as well as establishing a time–frequency domain framework to assist in flaw identification and classification. PMID:25792967

  20. Passive acoustic measurement of bedload grain size distribution using self-generated noise

    NASA Astrophysics Data System (ADS)

    Petrut, Teodor; Geay, Thomas; Gervaise, Cédric; Belleudy, Philippe; Zanker, Sebastien

    2018-01-01

    Monitoring sediment transport processes in rivers is of particular interest to engineers and scientists to assess the stability of rivers and hydraulic structures. Various methods for sediment transport process description were proposed using conventional or surrogate measurement techniques. This paper addresses the topic of the passive acoustic monitoring of bedload transport in rivers and especially the estimation of the bedload grain size distribution from self-generated noise. It discusses the feasibility of linking the acoustic signal spectrum shape to bedload grain sizes involved in elastic impacts with the river bed treated as a massive slab. Bedload grain size distribution is estimated by a regularized algebraic inversion scheme fed with the power spectrum density of river noise estimated from one hydrophone. The inversion methodology relies upon a physical model that predicts the acoustic field generated by the collision between rigid bodies. Here we proposed an analytic model of the acoustic energy spectrum generated by the impacts between a sphere and a slab. The proposed model computes the power spectral density of bedload noise using a linear system of analytic energy spectra weighted by the grain size distribution. The algebraic system of equations is then solved by least square optimization and solution regularization methods. The result of inversion leads directly to the estimation of the bedload grain size distribution. The inversion method was applied to real acoustic data from passive acoustics experiments realized on the Isère River, in France. The inversion of in situ measured spectra reveals good estimations of grain size distribution, fairly close to what was estimated by physical sampling instruments. These results illustrate the potential of the hydrophone technique to be used as a standalone method that could ensure high spatial and temporal resolution measurements for sediment transport in rivers.
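
    A minimal sketch of the inversion step described above: the measured spectrum is modeled as a mixture of elementary spectra weighted by the grain-size fractions, and the fractions are recovered by regularized non-negative least squares. The elementary spectra and noise level below are synthetic assumptions, not the paper's physical impact model.

```python
# Regularized non-negative least squares for a spectrum-mixing model
# P(f) = sum_i w_i * S_i(f), where w_i is the (unknown) fraction of size class i.
# The elementary spectra S_i and the measured spectrum are synthetic.
import numpy as np
from scipy.optimize import nnls

def invert_grain_fractions(S, p_obs, alpha=1e-2):
    """S: (n_freq, n_sizes) elementary spectra; p_obs: measured PSD.

    Zeroth-order Tikhonov regularization is applied by augmenting the system,
    and non-negativity of the fractions is enforced with NNLS.
    """
    n_sizes = S.shape[1]
    A = np.vstack([S, np.sqrt(alpha) * np.eye(n_sizes)])   # augmented operator
    b = np.concatenate([p_obs, np.zeros(n_sizes)])
    w, _ = nnls(A, b)
    return w / w.sum()                                      # normalize to a distribution

# Synthetic usage: three grain-size classes with bell-shaped elementary spectra.
f = np.linspace(1e3, 1e5, 200)
centers = [5e3, 2e4, 6e4]
S = np.column_stack([np.exp(-0.5 * ((np.log(f) - np.log(c)) / 0.3) ** 2) for c in centers])
w_true = np.array([0.2, 0.5, 0.3])
p_obs = S @ w_true + 1e-3 * np.random.default_rng(2).standard_normal(f.size)
w_est = invert_grain_fractions(S, p_obs)
```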

  1. Total variation regularization for seismic waveform inversion using an adaptive primal dual hybrid gradient method

    NASA Astrophysics Data System (ADS)

    Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan

    2018-04-01

    Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and velocity perturbation is strongly nonlinear. Therefore, salt inversion can easily get trapped in the local minima. Since the velocity of salt is nearly constant, we can make the most of this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total variation norm constrained convex set, through which the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are first inverted and the perturbations are gradually obtained by successively relaxing the total variation norm constraints. Numerical experiment of the projection of the BP model onto the intersection of the total variation norm and box constraints has demonstrated the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and inverts the complex salt velocity layer by layer.

  2. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models.

    PubMed

    Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.

  3. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Butler, T.; Graham, L.; Estep, D.; Dawson, C.; Westerink, J. J.

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.

  4. Extensions of the Ferry shear wave model for active linear and nonlinear microrheology

    PubMed Central

    Mitran, Sorin M.; Forest, M. Gregory; Yao, Lingxing; Lindley, Brandon; Hill, David B.

    2009-01-01

    The classical oscillatory shear wave model of Ferry et al. [J. Polym. Sci. 2:593-611, (1947)] is extended for active linear and nonlinear microrheology. In the Ferry protocol, oscillation and attenuation lengths of the shear wave measured from strobe photographs determine storage and loss moduli at each frequency of plate oscillation. The microliter volumes typical in biology require modifications of experimental method and theory. Microbead tracking replaces strobe photographs. Reflection from the top boundary yields counterpropagating modes which are modeled here for linear and nonlinear viscoelastic constitutive laws. Furthermore, bulk imposed strain is easily controlled, and we explore the onset of normal stress generation and shear thinning using nonlinear viscoelastic models. For this paper, we present the theory, exact linear and nonlinear solutions where possible, and simulation tools more generally. We then illustrate errors in inverse characterization by application of the Ferry formulas, due to both suppression of wave reflection and nonlinearity, even if there were no experimental error. This shear wave method presents an active and nonlinear analog of the two-point microrheology of Crocker et al. [Phys. Rev. Lett. 85: 888 - 891 (2000)]. Nonlocal (spatially extended) deformations and stresses are propagated through a small volume sample, on wavelengths long relative to bead size. The setup is ideal for exploration of nonlinear threshold behavior. PMID:20011614

  5. The inverse problem of refraction travel times, part I: Types of Geophysical Nonuniqueness through Minimization

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.; Park, C.B.

    2005-01-01

    In a set of two papers we study the inverse problem of refraction travel times. The purpose of this work is to use the study as a basis for development of more sophisticated methods for finding more reliable solutions to the inverse problem of refraction travel times, which is known to be nonunique. The first paper, "Types of Geophysical Nonuniqueness through Minimization," emphasizes the existence of different forms of nonuniqueness in the realm of inverse geophysical problems. Each type of nonuniqueness requires a different type and amount of a priori information to acquire a reliable solution. Based on such coupling, a nonuniqueness classification is designed. Therefore, since most inverse geophysical problems are nonunique, each inverse problem must be studied to define what type of nonuniqueness it belongs to and thus determine what type of a priori information is necessary to find a realistic solution. The second paper, "Quantifying Refraction Nonuniqueness Using a Three-layer Model," serves as an example of such an approach. However, its main purpose is to provide a better understanding of the inverse refraction problem by studying the type of nonuniqueness it possesses. An approach for obtaining a realistic solution to the inverse refraction problem is planned to be offered in a third paper that is in preparation. The main goal of this paper is to redefine the existing generalized notion of nonuniqueness and a priori information by offering a classified, discriminate structure. Nonuniqueness is often encountered when trying to solve inverse problems. However, possible nonuniqueness diversity is typically neglected and nonuniqueness is regarded as a whole, as an unpleasant "black box" and is approached in the same manner by applying smoothing constraints, damping constraints with respect to the solution increment and, rarely, damping constraints with respect to some sparse reference information about the true parameters. In practice, when solving geophysical problems different types of nonuniqueness exist, and thus there are different ways to solve the problems. Nonuniqueness is usually regarded as due to data error, assuming the true geology is acceptably approximated by simple mathematical models. Compounding the nonlinear problems, geophysical applications routinely exhibit exact-data nonuniqueness even for models with very few parameters adding to the nonuniqueness due to data error. While nonuniqueness variations have been defined earlier, they have not been linked to specific use of a priori information necessary to resolve each case. Four types of nonuniqueness, typical for minimization problems are defined with the corresponding methods for inclusion of a priori information to find a realistic solution without resorting to a non-discriminative approach. The above-developed stand-alone classification is expected to be helpful when solving any geophysical inverse problems. © Birkhäuser Verlag, Basel, 2005.

  6. Trimming and procrastination as inversion techniques

    NASA Astrophysics Data System (ADS)

    Backus, George E.

    1996-12-01

    By examining the processes of truncating and approximating the model space (trimming it), and by committing to neither the objectivist nor the subjectivist interpretation of probability (procrastinating), we construct a formal scheme for solving linear and non-linear geophysical inverse problems. The necessary prior information about the correct model xE can be either a collection of inequalities or a probability measure describing where xE was likely to be in the model space X before the data vector y0 was measured. The results of the inversion are (1) a vector z0 that estimates some numerical properties zE of xE; (2) an estimate of the error δz = z0 - zE. As y0 is finite dimensional, so is z0, and hence in principle inversion cannot describe all of xE. The error δz is studied under successively more specialized assumptions about the inverse problem, culminating in a complete analysis of the linear inverse problem with a prior quadratic bound on xE. Our formalism appears to encompass and provide error estimates for many of the inversion schemes current in geomagnetism, and would be equally applicable in geodesy and seismology if adequate prior information were available there. As an idealized example we study the magnetic field at the core-mantle boundary, using satellite measurements of field elements at sites assumed to be almost uniformly distributed on a single spherical surface. Magnetospheric currents are neglected and the crustal field is idealized as a random process with rotationally invariant statistics. We find that an appropriate data compression diagonalizes the variance matrix of the crustal signal and permits an analytic trimming of the idealized problem.

  7. FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems

    NASA Astrophysics Data System (ADS)

    Vourc'h, Eric; Rodet, Thomas

    2015-11-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2015 was a one-day workshop held in May 2015 which attracted around 70 attendees. Each of the submitted papers has been reviewed by two reviewers. There have been 15 accepted papers. In addition, three international speakers were invited to present a longer talk. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks: GDR ISIS, GDR MIA, GDR MOA and GDR Ondes. The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA and SATIE.

  8. Isometric Non-Rigid Shape-from-Motion with Riemannian Geometry Solved in Linear Time.

    PubMed

    Parashar, Shaifali; Pizarro, Daniel; Bartoli, Adrien

    2017-10-06

    We study Isometric Non-Rigid Shape-from-Motion (Iso-NRSfM): given multiple intrinsically calibrated monocular images, we want to reconstruct the time-varying 3D shape of a thin-shell object undergoing isometric deformations. We show that Iso-NRSfM is solvable from local warps, the inter-image geometric transformations. We propose a new theoretical framework based on the Riemannian manifold to represent the unknown 3D surfaces as embeddings of the camera's retinal plane. This allows us to use the manifold's metric tensor and Christoffel Symbol (CS) fields. These are expressed in terms of the first and second order derivatives of the inverse-depth of the 3D surfaces, which are the unknowns for Iso-NRSfM. We prove that the metric tensor and the CS are related across images by simple rules depending only on the warps. This forms a set of important theoretical results. We show that current solvers cannot solve for the first and second order derivatives of the inverse-depth simultaneously. We thus propose an iterative solution in two steps. 1) We solve for the first order derivatives assuming that the second order derivatives are known. We initialise the second order derivatives to zero, which is an infinitesimal planarity assumption. We derive a system of two cubics in two variables for each image pair. The sum-of-squares of these polynomials is independent of the number of images and can be solved globally, forming a well-posed problem for N ≥ 3 images. 2) We solve for the second order derivatives by initialising the first order derivatives from the previous step. We solve a linear system of 4N-4 equations in three variables. We iterate until the first order derivatives converge. The solution for the first order derivatives gives the surfaces' normal fields which we integrate to recover the 3D surfaces. The proposed method outperforms existing work in terms of accuracy and computation cost on synthetic and real datasets.

  9. A quasilinear kinetic model for solar wind electrons and protons instabilities

    NASA Astrophysics Data System (ADS)

    Sarfraz, M.; Yoon, P. H.

    2017-12-01

    In situ measurements confirm the anisotropic behavior of the temperatures of solar wind species. These anisotropies associated with charged particles are observed to relax. In the collisionless limit, kinetic instabilities play a significant role in reshaping the particle distributions. Linear analysis results are encapsulated in an inverse relationship between anisotropy and plasma beta, obtained from observational fitting techniques, simulation methods, or solutions of the linearized Vlasov equation. Here a macroscopic quasilinear technique is adopted to confirm this inverse relationship through solutions of a set of self-consistent kinetic equations. First, for a homogeneous and collisionless medium, the quasilinear kinetic model is employed to display the asymptotic evolution of core and halo electron temperatures and the saturation of wave energy densities for the electromagnetic electron cyclotron (EMEC) instability driven by T⊥ > T∥. It is shown that, in (β∥, T⊥/T∥) phase space, the saturation stages of the core and halo electron anisotropies line up on their respective marginal stability curves. Second, for the electron firehose instability, driven by an excess parallel temperature (T∥ > T⊥), both electrons and protons are allowed to evolve dynamically in time. It is also observed that the trajectories of protons and electrons at the saturation stage in the (anisotropy, plasma beta) phase space follow the proton cyclotron and firehose marginal stability curves, respectively. Next, the outstanding issue that most observed proton data reside in a nearly isotropic state in phase space is interpreted. In the quasilinear framework of an inhomogeneous solar wind system, a set of self-consistent quasilinear equations is formulated to show the dynamical variation of temperatures with spatial distribution. For different choices of initial parameters, it is shown that the interplay of electron and proton instabilities provides a counter-balancing force that drives the protons away from the marginal stability states. As both protons and electrons are treated for a radially expanding solar wind plasma, the present approach may eventually be incorporated into global kinetic models of the solar wind species.

  10. Scaling behavior of ground-state energy cluster expansion for linear polyenes

    NASA Astrophysics Data System (ADS)

    Griffin, L. L.; Wu, Jian; Klein, D. J.; Schmalz, T. G.; Bytautas, L.

    Ground-state energies for linear-chain polyenes are additively expanded in a sequence of terms for chemically relevant conjugated substructures of increasing size. The asymptotic behavior of the large-substructure limit (i.e., the high-polymer limit) is investigated as a means of characterizing the rapidity of convergence and the consequent utility of this energy cluster expansion. Consideration is directed to computations via: simple Hückel theory, a refined Hückel scheme with geometry optimization, restricted Hartree-Fock self-consistent field (RHF-SCF) solutions of fixed bond-length Pariser-Parr-Pople (PPP)/Hubbard models, and ab initio SCF approaches with and without geometry optimization. The cluster expansion in what might be described as the more "refined" approaches appears to lead to qualitatively more rapid convergence: exponentially fast as opposed to an inverse power at the simple Hückel or SCF-Hubbard levels. The substructural energy cluster expansion then seems to merit special attention. Its possible utility in making accurate extrapolations from finite systems to extended polymers is noted.

  11. VARIAN CLINAC 6 MeV Photon Spectra Unfolding using a Monte Carlo Meshed Model

    NASA Astrophysics Data System (ADS)

    Morató, S.; Juste, B.; Miró, R.; Verdú, G.

    2017-09-01

    The energy spectrum is the best descriptive function for determining the photon beam quality of a Medical Linear Accelerator (LinAc). The use of realistic photon spectra in Monte Carlo simulations is of great importance for obtaining precise dose calculations in Radiotherapy Treatment Planning (RTP). Reconstruction of photon spectra emitted by medical accelerators from measured depth dose distributions in a water cube is an important tool for commissioning a Monte Carlo treatment planning system. In this regard, the reconstruction problem is an inverse radiation transport problem, which is ill-conditioned, and its solution may become unstable due to small perturbations in the input data. This paper presents a more stable spectral reconstruction method which can be used to provide an independent confirmation of source models for a given machine without any prior knowledge of the spectral distribution. The Monte Carlo models used in this work are built with unstructured meshes to simulate the linear accelerator head geometry realistically.

  12. Analysis and design of a six-degree-of-freedom Stewart platform-based robotic wrist

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Antrazi, Sami; Zhou, Zhen-Lei

    1991-01-01

    The kinematic analysis and implementation of a six-degree-of-freedom robotic wrist, which is mounted on a general open kinematic-chain manipulator to serve as a testbed for studying precision robotic assembly in space, are discussed. The wrist design is based on the Stewart Platform mechanism and consists mainly of two platforms and six linear actuators driven by DC motors. Position feedback is achieved by linear displacement transducers mounted along the actuators, and force feedback is obtained by a six-degree-of-freedom force sensor mounted between the gripper and the payload platform. The robot wrist inverse kinematics, which computes the required actuator lengths corresponding to the Cartesian variables, has a closed-form solution. The forward kinematics is solved iteratively using the Newton-Raphson method, which simultaneously provides a modified Jacobian matrix relating actuator length velocities to Cartesian translational velocities and time rates of change of roll-pitch-yaw angles. Results of computer simulations conducted to evaluate the efficiency of the forward kinematics and the modified Jacobian matrix are discussed.

  13. FAST: a framework for simulation and analysis of large-scale protein-silicon biosensor circuits.

    PubMed

    Gu, Ming; Chakrabartty, Shantanu

    2013-08-01

    This paper presents a computer aided design (CAD) framework for verification and reliability analysis of protein-silicon hybrid circuits used in biosensors. It is envisioned that similar to integrated circuit (IC) CAD design tools, the proposed framework will be useful for system level optimization of biosensors and for discovery of new sensing modalities without resorting to laborious fabrication and experimental procedures. The framework referred to as FAST analyzes protein-based circuits by solving inverse problems involving stochastic functional elements that admit non-linear relationships between different circuit variables. In this regard, FAST uses a factor-graph netlist as a user interface and solving the inverse problem entails passing messages/signals between the internal nodes of the netlist. Stochastic analysis techniques like density evolution are used to understand the dynamics of the circuit and estimate the reliability of the solution. As an example, we present a complete design flow using FAST for synthesis, analysis and verification of our previously reported conductometric immunoassay that uses antibody-based circuits to implement forward error-correction (FEC).

  14. Inverse analysis and regularisation in conditional source-term estimation modelling

    NASA Astrophysics Data System (ADS)

    Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.

    2014-05-01

    Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the current inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture fraction grid for calculating the mixture fraction PDF. The magnitude of the perturbations is estimated using a box filter, and the uncertainty in the solution is determined based on the width of the credible intervals. The width of the credible intervals is significantly reduced with the inclusion of a smoothing prior, and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step, and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals do not depend on a previous solution and better predict characteristics at higher mixture fraction values. These characteristics make spatial smoothing a promising alternative method for recovering a solution from the CSE inversion process.
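
    For context, the zeroth-order Tikhonov regularisation referred to above has a simple generic form for a discretized Fredholm equation of the first kind; the sketch below uses an illustrative Gaussian smoothing kernel rather than the CSE equations.

```python
# Zeroth-order Tikhonov solution of a discretized Fredholm integral equation
# of the first kind, g(s) = ∫ K(s, t) f(t) dt; the kernel and the true
# solution below are illustrative, not the CSE model.
import numpy as np

def tikhonov_zeroth_order(K, g, alpha):
    """Minimize ||K f - g||^2 + alpha * ||f||^2 via the normal equations."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g)

# Discretize a Gaussian smoothing kernel on [0, 1] (a classic ill-posed test).
n = 100
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]
K = h * np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03 ** 2))
f_true = np.exp(-((t - 0.4) ** 2) / 0.01) + 0.5 * np.exp(-((t - 0.75) ** 2) / 0.005)
g = K @ f_true + 1e-4 * np.random.default_rng(3).standard_normal(n)

f_naive = np.linalg.lstsq(K, g, rcond=None)[0]   # unregularized: typically unstable
f_reg = tikhonov_zeroth_order(K, g, alpha=1e-6)  # regularized: close to f_true
```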

  15. Analysis of groundwater flow and stream depletion in L-shaped fluvial aquifers

    NASA Astrophysics Data System (ADS)

    Lin, Chao-Chih; Chang, Ya-Chi; Yeh, Hund-Der

    2018-04-01

    Understanding the head distribution in aquifers is crucial for the evaluation of groundwater resources. This article develops a model for describing flow induced by pumping in an L-shaped fluvial aquifer bounded by impermeable bedrocks and two nearly fully penetrating streams. A similar scenario for numerical studies was reported in Kihm et al. (2007). The water level of the streams is assumed to be linearly varying with distance. The aquifer is divided into two subregions and the continuity conditions of the hydraulic head and flux are imposed at the interface of the subregions. The steady-state solution describing the head distribution for the model without pumping is first developed by the method of separation of variables. The transient solution for the head distribution induced by pumping is then derived based on the steady-state solution as initial condition and the methods of finite Fourier transform and Laplace transform. Moreover, the solution for stream depletion rate (SDR) from each of the two streams is also developed based on the head solution and Darcy's law. Both head and SDR solutions in the real time domain are obtained by a numerical inversion scheme called the Stehfest algorithm. The software MODFLOW is chosen to compare with the proposed head solution for the L-shaped aquifer. The steady-state and transient head distributions within the L-shaped aquifer predicted by the present solution are compared with the numerical simulations and measurement data presented in Kihm et al. (2007).
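
    The Stehfest algorithm named above is straightforward to implement. The sketch below numerically inverts a Laplace-domain function and checks it against a known transform pair; the test transform is illustrative, not the aquifer solution.

```python
# Gaver-Stehfest numerical inversion of a Laplace-domain solution F(s),
# the scheme named in the abstract; the test transform below is illustrative.
import numpy as np
from math import factorial

def stehfest_coefficients(N):
    """Stehfest weights V_k for an even number of terms N."""
    V = np.zeros(N)
    M = N // 2
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, M) + 1):
            s += (j ** M * factorial(2 * j)
                  / (factorial(M - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V[k - 1] = (-1) ** (k + M) * s
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) at time t > 0."""
    V = stehfest_coefficients(N)
    ln2_t = np.log(2.0) / t
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

# Check against a known pair: F(s) = 1 / (s + 1)  <->  f(t) = exp(-t).
F = lambda s: 1.0 / (s + 1.0)
print(stehfest_invert(F, 1.0), np.exp(-1.0))   # both approximately 0.3679
```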

  16. Determination of thermophysical characteristics of solid materials by electrical modelling of the solutions to the inverse problems in nonsteady heat conduction

    NASA Technical Reports Server (NTRS)

    Kozdoba, L. A.; Krivoshei, F. A.

    1985-01-01

    The solution of the inverse problem of nonsteady heat conduction is discussed, based on finding the coefficient of the heat conduction and the coefficient of specific volumetric heat capacity. These findings are included in the equation used for the electrical model of this phenomenon.

  17. Double point source W-phase inversion: Real-time implementation and automated model selection

    USGS Publications Warehouse

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
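
    The model-selection step can be illustrated with a generic least-squares form of the Akaike information criterion; the exact statistic, residual sums of squares and parameter counts used in the study may differ, and the numbers below are illustrative placeholders only.

        import numpy as np

        def aic(n_obs, rss, k_params):
            """Akaike information criterion for a least-squares fit with Gaussian errors."""
            return n_obs * np.log(rss / n_obs) + 2 * k_params

        # Hypothetical misfits from single- and double-source W-phase inversions
        n_obs = 5000                                   # number of waveform samples used
        aic_single = aic(n_obs, rss=12.4, k_params=10)
        aic_double = aic(n_obs, rss=9.1, k_params=20)

        preferred = "double" if aic_double < aic_single else "single"
        print("Preferred model:", preferred, "point source")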

  18. Inferring neural activity from BOLD signals through nonlinear optimization.

    PubMed

    Vakorin, Vasily A; Krakovska, Olga O; Borowsky, Ron; Sarty, Gordon E

    2007-11-01

    The blood oxygen level-dependent (BOLD) fMRI signal does not measure neuronal activity directly. This fact is a key concern for interpreting functional imaging data based on BOLD. Mathematical models describing the path from neural activity to the BOLD response allow us to numerically solve the inverse problem of estimating the timing and amplitude of the neuronal activity underlying the BOLD signal. In fact, these models can be viewed as an advanced substitute for the impulse response function. In this work, the issue of estimating the dynamics of neuronal activity from the observed BOLD signal is considered within the framework of optimization problems. The model is based on the extended "balloon" model and describes the conversion of neuronal signals into the BOLD response through the transitional dynamics of the blood flow-inducing signal, cerebral blood flow, cerebral blood volume and deoxyhemoglobin concentration. Global optimization techniques are applied to find a control input (the neuronal activity and/or the biophysical parameters in the model) that causes the system to follow an admissible solution minimizing the discrepancy between model and experimental data. As an alternative to a local linearization (LL) filtering scheme, the optimization method avoids linearization of the transition system and provides the possibility of searching for the global optimum, avoiding spurious local minima. We have found that the dynamics of the neural signals and the physiological variables as well as the biophysical parameters can be robustly reconstructed from the BOLD responses. Furthermore, it is shown that spiking off/on dynamics of the neural activity is the natural mathematical solution of the model. Incorporating, in addition, the expansion of the neural input by smooth basis functions, representing a low-pass filtering, allows us to model local field potential (LFP) solutions instead of spiking solutions.

  19. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach makes it possible to remove posterior bias and to obtain a more realistic characterization of uncertainty.

  20. Effects of spatially variable resolution on field-scale estimates of tracer concentration from electrical inversions using Archie's law

    USGS Publications Warehouse

    Singha, Kamini; Gorelick, Steven M.

    2006-01-01

    Two important mechanisms affect our ability to estimate solute concentrations quantitatively from the inversion of field-scale electrical resistivity tomography (ERT) data: (1) the spatially variable physical processes that govern the flow of current as well as the variation of physical properties in space and (2) the overparameterization of inverse models, which requires the imposition of a smoothing constraint (regularization) to facilitate convergence of the inverse solution. Based on analyses of field and synthetic data, we find that the ability of ERT to recover the 3D shape and magnitudes of a migrating conductive target is spatially variable. Additionally, the application of Archie's law to tomograms from field ERT data produced solute concentrations that are consistently less than 10% of point measurements collected in the field and estimated from transport modeling. Estimates of concentration from ERT using Archie's law only fit measured solute concentrations if the apparent formation factor is varied with space and time and allowed to take on unreasonably high values. Our analysis suggests that the inability to find a single petrophysical relation in space and time between concentration and electrical resistivity is largely an effect of two properties of ERT surveys: (1) decreased sensitivity of ERT to detect the target plume with increasing distance from the electrodes and (2) the smoothing imprint of regularization used in inversion.
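
    For reference, the Archie's-law step that maps an ERT-derived bulk conductivity to an estimated solute concentration can be sketched as below; the formation factor, background fluid conductivity and calibration slope are illustrative assumptions, not values from the field site described above.

        def concentration_from_bulk_conductivity(sigma_bulk, formation_factor=5.0,
                                                 sigma_background=0.05, k_calib=0.6):
            """Estimate tracer concentration (g/L) from bulk conductivity (S/m):
            Archie's law gives sigma_fluid = F * sigma_bulk, followed by a linear
            fluid-conductivity-to-concentration calibration (all constants illustrative)."""
            sigma_fluid = formation_factor * sigma_bulk
            return max(0.0, (sigma_fluid - sigma_background) / k_calib)

        print(concentration_from_bulk_conductivity(0.02))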

  1. Synthesis and characterization of Fe colloid catalysts in inverse micelle solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martino, A.; Stoker, M.; Hicks, M.

    1995-12-31

    Surfactant molecules, possessing a hydrophilic head group and a hydrophobic tail group, aggregate in various solvents to form structured solutions. In two-component mixtures of surfactant and organic solvents (e.g., toluene and alkanes), surfactants aggregate to form inverse micelles. Here, the hydrophilic head groups shield themselves by forming a polar core, and the hydrophobic tail groups are free to move about in the surrounding oleic phase. The formation of Fe clusters in inverse micelles was studied. Iron salts are solubilized within the polar interior of inverse micelles, and the addition of the reducing agent LiBH4 initiates a chemical reduction to produce monodisperse, nanometer-sized Fe-based particles. The reaction sequence is sustained by material exchange between inverse micelles. The surfactant interface provides a spatial constraint on the reaction volume, and reactions carried out in these micro-heterogeneous solutions produce colloidal-sized particles (10-100 Å) stabilized in solution against flocculation by the surfactant. The clusters were characterized with respect to size by transmission electron microscopy (TEM) and with respect to chemical composition by Mössbauer spectroscopy, electron diffraction, and x-ray photoelectron spectroscopy (XPS). In addition, these iron-based clusters were tested for catalytic activity in a model hydrogenolysis reaction. The hydrogenolysis of naphthyl bibenzyl methane was used as a model for coal pyrolysis.

  2. Integration of Visual and Joint Information to Enable Linear Reaching Motions

    NASA Astrophysics Data System (ADS)

    Eberle, Henry; Nasuto, Slawomir J.; Hayashi, Yoshikatsu

    2017-01-01

    A new dynamics-driven control law was developed for a robot arm, based on the feedback control law which uses the linear transformation directly from work space to joint space. This was validated using a simulation of a two-joint planar robot arm, and an optimisation algorithm was used to find the optimum matrix to generate straight trajectories of the end-effector in the work space. We found that this linear matrix can be decomposed into the rotation matrix representing the orientation of the goal direction and the joint relation matrix (MJRM) representing the joint response to errors in the Cartesian work space. The decomposition of the linear matrix indicates the separation of path planning in terms of the direction of the reaching motion and the synergies of joint coordination. Once the MJRM is numerically obtained, the feedforward planning of reaching direction allows us to provide asymptotically stable, linear trajectories in the entire work space through rotational transformation, completely avoiding the use of inverse kinematics. Our dynamics-driven control law suggests an interesting framework for interpreting human reaching motion control as an alternative to the dominant explanations based on inverse methods, avoiding expensive computation of inverse kinematics and point-to-point control along desired trajectories.

  3. Inverse estimation of critical factors for controlling over-prediction of summertime tropospheric O3 over East Asia based on the combination of DDM sensitivity analysis and a modeled Green's function method

    NASA Astrophysics Data System (ADS)

    Itahashi, S.; Yumimoto, K.; Uno, I.; Kim, S.

    2012-12-01

    Air quality studies based on chemical transport models have provided many important results that advance our knowledge of air pollution phenomena; however, discrepancies between modeling results and observation data remain an important issue to overcome. One such issue is the over-prediction of summertime tropospheric ozone in remote areas of Japan. This problem has been pointed out in model comparison studies at both the regional scale (e.g., MICS-Asia) and the global scale (e.g., TH-FTAP). Several possible reasons can be listed: (i) the modeled reproducibility of the penetration of clean oceanic air masses, (ii) the correct estimation of anthropogenic NOx / VOC emissions over East Asia, and (iii) the chemical reaction scheme used in the model simulation. In this study, we attempt an inverse estimation of some important chemical reactions based on a combined system of DDM (decoupled direct method) sensitivity analysis and a modeled Green's function approach. The DDM is an efficient and accurate way of performing sensitivity analysis with respect to model inputs; it calculates sensitivity coefficients representing the responsiveness of atmospheric chemical concentrations to perturbations in a model input or parameter. The inverse solutions with the Green's functions are given by a linear, least-squares method but are still robust against nonlinearities. To construct the response matrix (i.e., the Green's functions), we can directly use the results of the DDM sensitivity analysis. The chemical reaction constants, which have relatively large uncertainties, are determined with constraints from observed ozone concentration data over remote areas of Japan. Our inverse estimation demonstrated an underestimation of the reaction constant producing HNO3 (NO2 + OH + M → HNO3 + M) in the SAPRC99 chemical scheme; the inversion indicated a +29.0% increment to this reaction. This estimate is in good agreement with the CB4, CB5, and SAPRC07 values. For the NO2 photolysis rate, a 49.4% reduction was obtained. This result indicates that the effect of heavy aerosol loading on photolysis rates must be incorporated in numerical studies.
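
    As a schematic illustration of the combined approach, DDM sensitivities can be assembled into a response (Green's function) matrix and the reaction-constant adjustments obtained by linear least squares; the matrix entries and residuals below are placeholders, not results from the study.

        import numpy as np

        # Rows: ozone observations at remote sites; columns: candidate reaction constants.
        # G[i, j] = d(O3_i) / d(k_j) from the DDM sensitivity runs (placeholder values).
        G = np.array([[0.8, -1.2],
                      [0.6, -0.9],
                      [0.7, -1.1]])
        residual = np.array([-4.0, -3.1, -3.6])   # observed minus modelled O3 (ppbv)

        # Least-squares adjustment of the reaction constants (Green's function inversion)
        dk, *_ = np.linalg.lstsq(G, residual, rcond=None)
        print("fractional adjustments to reaction constants:", dk)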

  4. Analysis of Interval Changes on Mammograms for Computer Aided Diagnosis

    DTIC Science & Technology

    2000-05-01

    The digitizer was calibrated so that the gray values were linearly and inversely proportional to the optical density (OD) within the range 0-4 OD. Average pixel values in the template and ROI were compared after alignment of the breast regions, and a quantity inversely proportional to the radial distance r from the nipple was also used. [The remainder of this abstract is garbled in the source record.]

  5. Application of linearized inverse scattering methods for the inspection in steel plates embedded in concrete structures

    NASA Astrophysics Data System (ADS)

    Tsunoda, Takaya; Suzuki, Keigo; Saitoh, Takahiro

    2018-04-01

    This study develops a method to visualize the state of the steel-concrete interface with ultrasonic testing (UT). Scattered waves are obtained in the UT pitch-catch mode from the surface of the concrete. A discrete wavelet transform is applied in order to extract echoes scattered from the steel-concrete interface. Linearized inverse scattering methods (LISM) are then used for imaging the interface. The results show that LISM with the Born and Kirchhoff approximations provides clear images of the target.

  6. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    PubMed

    Kholeif, S A

    2001-06-01

    A new method that belongs to the differential category for determining the end points from potentiometric titration curves is presented. It uses a preprocess to find first derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis is covered. The new method is generally applied to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves compatible with the equivalence point category of methods, such as Gran or Fortuin, are also compared with the new method.
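
    The inverse parabolic interpolation step has a closed-form answer: the abscissa of the vertex of the parabola through three points. A minimal sketch of that formula follows; the three data points are illustrative, not taken from the paper.

        def parabola_vertex(x0, y0, x1, y1, x2, y2):
            """Abscissa of the vertex of the parabola through three points
            (inverse parabolic interpolation), assuming distinct x values."""
            num = (x1 - x0)**2 * (y1 - y2) - (x1 - x2)**2 * (y1 - y0)
            den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
            return x1 - 0.5 * num / den

        # Three (titrant volume, dE/dV) points bracketing the maximum of the derivative curve
        print(parabola_vertex(9.8, 41.0, 10.0, 55.0, 10.2, 47.0))  # ~ end-point volume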

  7. Numerical reconstruction of unknown Robin inclusions inside a heat conductor by a non-iterative method

    NASA Astrophysics Data System (ADS)

    Nakamura, Gen; Wang, Haibing

    2017-05-01

    Consider the problem of reconstructing unknown Robin inclusions inside a heat conductor from boundary measurements. This problem arises from active thermography and is formulated as an inverse boundary value problem for the heat equation. In our previous works, we proposed a sampling-type method for reconstructing the boundary of the Robin inclusion and gave its rigorous mathematical justification. This method is non-iterative and based on the characterization of the solution to the so-called Neumann-to-Dirichlet map gap equation. In this paper, we give a further investigation of the reconstruction method from both the theoretical and numerical points of view. First, we clarify the solvability of the Neumann-to-Dirichlet map gap equation and establish a relation of its solution to the Green function associated with an initial-boundary value problem for the heat equation inside the Robin inclusion. This naturally provides a way of computing this Green function from the Neumann-to-Dirichlet map and explains what is the input for the linear sampling method. Assuming that the Neumann-to-Dirichlet map gap equation has a unique solution, we also show the convergence of our method for noisy measurements. Second, we give the numerical implementation of the reconstruction method for two-dimensional spatial domains. The measurements for our inverse problem are simulated by solving the forward problem via the boundary integral equation method. Numerical results are presented to illustrate the efficiency and stability of the proposed method. By using a finite sequence of transient inputs over a time interval, we also propose a new sampling method over that interval which requires only a single measurement and is therefore more likely to be practical.

  8. Modeling and Inverse Controller Design for an Unmanned Aerial Vehicle Based on the Self-Organizing Map

    NASA Technical Reports Server (NTRS)

    Cho, Jeongho; Principe, Jose C.; Erdogmus, Deniz; Motter, Mark A.

    2005-01-01

    The next generation of aircraft will have dynamics that vary considerably over the operating regime. A single controller will have difficulty meeting the design specifications. In this paper, a SOM-based local linear modeling scheme of an unmanned aerial vehicle (UAV) is developed to design a set of inverse controllers. The SOM selects the operating regime depending only on the embedded output space information and avoids normalization of the input data. Each local linear model is associated with a linear controller, which is easy to design. Switching of the controllers is done synchronously with the active local linear model that tracks the different operating conditions. The proposed multiple modeling and control strategy has been successfully tested in a simulator that models the LoFLYTE UAV.

  9. Earthquake source tensor inversion with the gCAP method and 3D Green's functions

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Ben-Zion, Y.; Zhu, L.; Ross, Z.

    2013-12-01

    We develop and apply a method to invert earthquake seismograms for source properties using a general tensor representation and 3D Green's functions. The method employs (i) a general representation of earthquake potency/moment tensors with double couple (DC), compensated linear vector dipole (CLVD), and isotropic (ISO) components, and (ii) a corresponding generalized CAP (gCap) scheme where the continuous wave trains are broken into Pnl and surface waves (Zhu & Ben-Zion, 2013). For comparison, we also use the waveform inversion method of Zheng & Chen (2012) and Ammon et al. (1998). Sets of 3D Green's functions are calculated on a grid of 1 km³ cells using the 3-D community velocity model CVM-4 (Kohler et al. 2003). A bootstrap technique is adopted to establish robustness of the inversion results using the gCap method (Ross & Ben-Zion, 2013). Synthetic tests with 1-D and 3-D waveform calculations show that the source tensor inversion procedure is reasonably reliable and robust. As an initial application, the method is used to investigate source properties of the March 11, 2013, Mw=4.7 earthquake on the San Jacinto fault using recordings of ~45 stations up to ~0.2 Hz. Both the best fitting and most probable solutions include an ISO component of ~1% and a CLVD component of ~0%. The obtained ISO component, while small, is found to be a non-negligible positive value that can have significant implications for the physics of the failure process. Work on using higher frequency data for this and other earthquakes is in progress.

  10. Invariants of the Jacobi-Porstendorfer room model for radon progeny in indoor air.

    PubMed

    Thomas, Josef; Jilek, Karel

    2012-06-01

    The Jacobi-Porstendörfer room model, describing the dynamical behaviour of radon and radon progeny in indoor air, has been successfully used for decades. The inversion of the model, i.e. the determination of the five parameters from measured results (which provide better information on the room environment than mere ratios of unattached and attached radon progeny), is treated as an algebraic task. The linear interdependence of the equations used strongly limits the algebraic invertibility of experimental results. For a unique solution, the fulfilment of two invariants of the room model by the measured results is required. Non-fulfilment of these model invariants by the measured results leads to a set of non-identical solutions and indicates violation of the conditions required by the room model, or incorrectness or excessive uncertainties of the measured results. The limited and non-unique algebraic invertibility of the room model is analysed numerically using our own data for the radon progeny.

  11. Method of adiabatic modes in studying problems of smoothly irregular open waveguide structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sevastianov, L. A., E-mail: sevast@sci.pfu.edu.ru; Egorov, A. A.; Sevastyanov, A. L.

    2013-02-15

    Basic steps in developing an original method of adiabatic modes that makes it possible to solve the direct and inverse problems of simulating and designing three-dimensional multilayered smoothly irregular open waveguide structures are described. A new element in the method is that an approximate solution of Maxwell's equations is made to obey 'inclined' boundary conditions at the interfaces between the media being considered. These boundary conditions take into account the obliqueness of planes tangent to nonplanar boundaries between the media and lead to new equations for coupled vector quasiwaveguide hybrid adiabatic modes. Solutions of these equations describe the phenomenon of 'entanglement' of two linear polarizations of an irregular multilayered waveguide, the appearance of a new mode in an entangled state, and the effect of rotation of the polarization plane of quasiwaveguide modes. The efficiency of the method is demonstrated by considering the example of numerically simulating a thin-film generalized waveguide Lueneburg lens.

  12. Transient reaction of an elastic half-plane on a source of a concentrated boundary disturbance

    NASA Astrophysics Data System (ADS)

    Okonechnikov, A. S.; Tarlakovski, D. V.; Ul'yashina, A. N.; Fedotenkov, G. V.

    2016-11-01

    One of the key problems in studying the non-stationary processes of solid mechanics is obtaining influence functions. These functions serve as solutions for problems of the effect of sudden concentrated loads on a body with linear elastic properties. Knowledge of the influence functions allows us to obtain solutions for problems with non-mixed boundary and initial conditions in the form of quadrature formulae with the help of the superposition principle, as well as to obtain the governing integral equations for problems with mixed boundary and initial conditions. This paper offers explicit derivations of all nonstationary surface influence functions of an elastic half-plane in a plane strain condition. This is achieved with the help of a combined inverse transform of the Fourier-Laplace integral transformation. The external disturbance is both dynamic and kinematic. The derived functions in the xτ-domain are studied to find and describe singularities and are supplemented with graphs.

  13. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2010-01-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366

  14. The fast multipole method and Fourier convolution for the solution of acoustic scattering on regular volumetric grids

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2010-10-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  15. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2010-10-20

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.
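
    The FFT-convolution idea used for the neighboring interactions can be sketched with a generic zero-padded 1D convolution on a regular grid (this is not the authors' FMM code, and the source and kernel samples below are illustrative).

        import numpy as np

        def fft_convolve(source, kernel):
            """Linear (zero-padded) convolution of two 1D grid functions via FFT."""
            n = len(source) + len(kernel) - 1
            nfft = 1 << (n - 1).bit_length()          # next power of two
            out = np.fft.irfft(np.fft.rfft(source, nfft) * np.fft.rfft(kernel, nfft), nfft)
            return out[:n]

        # Example: scattering amplitudes on a regular grid convolved with a sampled kernel
        src = np.array([0.0, 1.0, 0.5, 0.0])
        ker = np.array([1.0, 0.5, 0.25, 0.125])
        print(fft_convolve(src, ker))
        print(np.convolve(src, ker))   # should agree with the FFT result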

  16. Pure quasi-P-wave calculation in transversely isotropic media using a hybrid method

    NASA Astrophysics Data System (ADS)

    Wu, Zedong; Liu, Hongwei; Alkhalifah, Tariq

    2018-07-01

    The acoustic approximation for anisotropic media is widely used in current industry imaging and inversion algorithms, mainly because P waves constitute the majority of the energy recorded in seismic exploration. The resulting acoustic formulae tend to be simpler, resulting in more efficient implementations, and depend on fewer medium parameters. However, conventional solutions of the acoustic wave equation with higher-order derivatives suffer from shear wave artefacts. Thus, we derive a new acoustic wave equation for wave propagation in transversely isotropic (TI) media, which is based on a partially separable approximation of the dispersion relation for TI media and free of shear wave artefacts. Even though our resulting equation is not a partial differential equation, it is still a linear equation. Thus, we propose to implement this equation efficiently by combining the finite difference approximation with spectral evaluation of the space-independent parts. The resulting algorithm provides solutions without the constraint ɛ ≥ δ. Numerical tests demonstrate the effectiveness of the approach.

  17. Black hole algorithm for determining model parameter in self-potential data

    NASA Astrophysics Data System (ADS)

    Sungkono; Warnana, Dwa Desa

    2018-01-01

    Analysis of self-potential (SP) data is an increasingly popular geophysical method due to its relevance in many cases. However, the inversion of SP data is often highly nonlinear. Consequently, local search algorithms, commonly based on gradient approaches, have often failed to find the global optimum solution in nonlinear problems. The black hole algorithm (BHA) was proposed as a solution to such problems. As the name suggests, the algorithm was constructed based on the black hole phenomenon. This paper investigates the application of BHA to the inversion of field and synthetic self-potential (SP) data. The inversion results show that BHA accurately determines model parameters and model uncertainty. This indicates that BHA has high potential as an innovative approach for SP data inversion.
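
    A minimal sketch of the black hole algorithm on a generic misfit function follows (the event-horizon rule and star updates follow the commonly cited formulation, not necessarily the exact variant used in the paper; the misfit function and bounds are toy placeholders).

        import numpy as np

        def black_hole_optimize(misfit, bounds, n_stars=30, n_iter=200, seed=None):
            """Minimal black hole algorithm: the best star acts as the black hole, the
            others drift toward it, and stars crossing the event horizon are re-seeded."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
            stars = rng.uniform(lo, hi, size=(n_stars, lo.size))
            for _ in range(n_iter):
                fitness = np.array([misfit(s) for s in stars])
                best = np.argmin(fitness)
                bh, f_bh = stars[best].copy(), fitness[best]
                # move every star toward the black hole by a random fraction
                stars += rng.uniform(0, 1, size=(n_stars, 1)) * (bh - stars)
                # event horizon radius; swallowed stars are replaced by new random stars
                radius = f_bh / (fitness.sum() + 1e-30)
                swallowed = np.linalg.norm(stars - bh, axis=1) < radius
                stars[swallowed] = rng.uniform(lo, hi, size=(swallowed.sum(), lo.size))
                stars[best] = bh   # keep the black hole itself
            return bh, f_bh

        # Toy two-parameter misfit (quadratic bowl) standing in for an SP misfit function
        best_model, best_misfit = black_hole_optimize(
            lambda m: np.sum((m - np.array([2.0, -1.0]))**2),
            bounds=([-5.0, -5.0], [5.0, 5.0]), seed=0)
        print(best_model, best_misfit)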

  18. Numerical solution of 2D-vector tomography problem using the method of approximate inverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna

    2016-08-10

    We propose a numerical solution of reconstruction problem of a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good results of reconstruction of vector fields.

  19. Fully probabilistic earthquake source inversion on teleseismic scales

    NASA Astrophysics Data System (ADS)

    Stähler, Simon; Sigloch, Karin

    2017-04-01

    Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. References: Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 1: Efficient parameterisation, Solid Earth, 5, 1055-1069, doi:10.5194/se-5-1055-2014, 2014. Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances, Solid Earth, 7, 1521-1536, doi:10.5194/se-7-1521-2016, 2016.

  20. Linear ketenimines. Variable structures of C,C-dicyanoketenimines and C,C-bis-sulfonylketenimines.

    PubMed

    Finnerty, Justin; Mitschke, Ullrich; Wentrup, Curt

    2002-02-22

    C,C-dicyanoketenimines 10a-c were generated by flash vacuum thermolysis of ketene N,S-acetals 9a-c or by thermal or photochemical decomposition of alpha-azido-beta-cyanocinnamonitrile 11. In the latter reaction, 3,3-dicyano-2-phenyl-1-azirine 12 is also formed. IR spectroscopy of the ketenimines isolated in Ar matrixes or as neat films, NMR spectroscopy of 10c, and theoretical calculations (B3LYP/6-31G) demonstrate that these ketenimines have variable geometry, being essentially linear along the CCN-R framework in polar media (neat films and solution), but in the gas phase or Ar matrix they are bent, as is usual for ketenimines. Experiments and calculations agree that a single CN substituent as in 13 is not enough to enforce linearity, and sulfonyl groups are less effective than cyano groups in causing linearity. C,C-bis(methylsulfonyl)ketenimines 4-5 and a C-cyano-C-(methylsulfonyl)ketenimine 15 are not linear. The compound p-O2NC6H4N=C=C(COOMe)2 previously reported in the literature is probably somewhat linearized along the CCNR moiety. A computational survey (B3LYP/6-31G) of the inversion barrier at nitrogen indicates that electronegative C-substituents dramatically lower the barrier; this is also true of N-acyl substituents. Increasing polarity causes lower barriers. Although N-alkylbis(methylsulfonyl)ketenimines are not calculated to be linear, the barriers are so low that crystal lattice forces can induce planarity in N-methylbis(methylsulfonyl)ketenimine 3.

  1. Construction of a cardiac conduction system subject to extracellular stimulation.

    PubMed

    Clements, Clyde; Vigmond, Edward

    2005-01-01

    Proper electrical excitation of the heart is dependent on the specialized conduction system that coordinates the electrical activity from the atria to the ventricles. This paper describes the construction of a conduction system as a branching network of Purkinje fibers on the endocardial surface. Endocardial surfaces were extracted from an FEM model of the ventricles and transformed to 2D. A Purkinje network was drawn on top and the inverse transform performed. The underlying mathematics utilized one dimensional cubic Hermite finite elements. Compared to linear elements, the cubic Hermite solution was found to have a much smaller RMS error. Furthermore, this method has the advantage of enforcing current conservation at bifurcation and unification points, and allows for discrete coupling resistances.
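
    For reference, the 1D cubic Hermite interpolation underlying such elements can be sketched with the standard basis functions; this is a generic formula under that assumption, not the authors' FEM implementation.

        def hermite_interp(x, x0, x1, f0, f1, df0, df1):
            """Cubic Hermite interpolation on [x0, x1] from nodal values and derivatives."""
            h = x1 - x0
            t = (x - x0) / h
            h00 = 2*t**3 - 3*t**2 + 1      # value basis at node 0
            h10 = t**3 - 2*t**2 + t        # derivative basis at node 0
            h01 = -2*t**3 + 3*t**2         # value basis at node 1
            h11 = t**3 - t**2              # derivative basis at node 1
            return h00*f0 + h*h10*df0 + h01*f1 + h*h11*df1

        print(hermite_interp(0.5, 0.0, 1.0, f0=0.0, f1=1.0, df0=0.0, df1=0.0))  # 0.5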

  2. Analysis of lightning field changes produced by Florida thunderstorms

    NASA Technical Reports Server (NTRS)

    Koshak, William John

    1991-01-01

    A new method is introduced for inferring the charges deposited in a lightning flash. Lightning-caused field changes (delta E's) are described by a more general volume charge distribution that is defined on a large Cartesian grid system centered above the measuring networks. It is shown that a linear system of equations can be used to relate delta E's at the ground to the values of charge on this grid. It is possible to apply more general physical constraints to the charge solutions, and it is possible to assess the information content of the delta E data. Computer-simulated delta E inversions show that the location and symmetry of the charge retrievals are usually consistent with the known test sources.

  3. Definition and solution of a stochastic inverse problem for the Manning’s n parameter field in hydrodynamic models

    DOE PAGES

    Butler, Troy; Graham, L.; Estep, D.; ...

    2015-02-03

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model, although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented in this paper. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented, and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. Finally, this notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter, and the effect on model predictions is analyzed.

  4. Comparison of iterative inverse coarse-graining methods

    NASA Astrophysics Data System (ADS)

    Rosenberger, David; Hanke, Martin; van der Vegt, Nico F. A.

    2016-10-01

    Deriving potentials for coarse-grained Molecular Dynamics (MD) simulations is frequently done by solving an inverse problem. Methods like Iterative Boltzmann Inversion (IBI) or Inverse Monte Carlo (IMC) have been widely used to solve this problem. The solution obtained by application of these methods guarantees a match in the radial distribution function (RDF) between the underlying fine-grained system and the derived coarse-grained system. However, these methods often fail in reproducing thermodynamic properties. To overcome this deficiency, additional thermodynamic constraints such as pressure or Kirkwood-Buff integrals (KBI) may be added to these methods. In this communication we test the ability of these methods to converge to a known solution of the inverse problem. With this goal in mind we have studied a binary mixture of two simple Lennard-Jones (LJ) fluids, in which no actual coarse-graining is performed. We further discuss whether full convergence is actually needed to achieve thermodynamic representability.
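
    The IBI update rule referred to above is short enough to sketch; the following generic step (with an optional damping factor) assumes tabulated RDFs on a common grid and an illustrative unit system.

        import numpy as np

        KB = 0.0019872041  # Boltzmann constant in kcal/(mol K), assumed unit system

        def ibi_update(V_current, g_current, g_target, T, alpha=1.0, eps=1e-12):
            """One Iterative Boltzmann Inversion step:
                V_{i+1}(r) = V_i(r) + alpha * kB*T * ln( g_i(r) / g_target(r) ).
            alpha < 1 damps the update; eps guards against empty RDF bins."""
            correction = KB * T * np.log((g_current + eps) / (g_target + eps))
            return V_current + alpha * correction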

  5. Two-Port Representation of a Linear Transmission Line in the Time Domain.

    DTIC Science & Technology

    1980-01-01

    To use the Prony procedure it is necessary to inverse transform the admittance functions, which are rational functions. For the transmission line, the inverse transform of Y0(s) contains an impulse; if Y0(s) is inverse transformed numerically, this impulse is removed first and the Prony procedure is then applied to the result. [The remainder of this abstract, including the referenced equation (23), is garbled in the source record.]

  6. An iterative solver for the 3D Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Belonosov, Mikhail; Dmitriev, Maxim; Kostin, Victor; Neklyudov, Dmitry; Tcheverda, Vladimir

    2017-09-01

    We develop a frequency-domain iterative solver for numerical simulation of acoustic waves in 3D heterogeneous media. It is based on the application of a unique preconditioner to the Helmholtz equation that ensures convergence for Krylov subspace iteration methods. Effective inversion of the preconditioner involves the Fast Fourier Transform (FFT) and numerical solution of a series of boundary value problems for ordinary differential equations. Matrix-by-vector multiplication for iterative inversion of the preconditioned matrix involves inversion of the preconditioner and pointwise multiplication of grid functions. Our solver has been verified by benchmarking against exact solutions and a time-domain solver.

  7. Path Following in the Exact Penalty Method of Convex Programming.

    PubMed

    Zhou, Hua; Lange, Kenneth

    2015-07-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.

  8. Path Following in the Exact Penalty Method of Convex Programming

    PubMed Central

    Zhou, Hua; Lange, Kenneth

    2015-01-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value. PMID:26366044

  9. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agaltsov, A. D., E-mail: agalets@gmail.com; Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr; IEPT RAS, 117997 Moscow

    2014-10-15

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.

  10. Forward and inverse functional variations in rotationally inelastic scattering

    NASA Astrophysics Data System (ADS)

    Guzman, Robert; Rabitz, Herschel

    1986-09-01

    This paper considers the response of various rotational energy transfer processes to functional variations about an assumed model intermolecular potential. Attention is focused on the scattering of an atom and a linear rigid rotor. The collision dynamics are approximated by employing both the infinite order sudden (IOS) and exponential distorted wave (EDW) methods to describe Ar-N2 and He-H2, respectively. The following cross sections are considered: state-to-state differential and integral, final state summed differential and integral, and effective diffusion and viscosity cross sections. Attention is first given to the forward sensitivity densities δO/δV(R,r), where O denotes any of the aforementioned cross sections, R is the intermolecular distance, and r is the internal coordinates. These forward sensitivity densities (functional derivatives) offer a quantitative measure of the importance of different regions of the potential surface to a chosen cross section. Via knowledge of the forward sensitivities and a particular variation δV(R,r), the concomitant response δO is generated. It was found that locally a variation in the potential can give rise to a large response in the cross sections as measured by these forward densities. In contrast, a unit percent change in the overall potential produced a 1%-10% change in the cross sections studied, indicating that the large positive and negative responses to local variations tend to cancel. In addition, inverse sensitivity densities δV(R,r)/δO are obtained. These inverse densities are of interest since they are the exact solution to the infinitesimal inverse scattering problem. Although the inverse sensitivity densities do not in themselves form an inversion algorithm, they do offer a quantitative measure of the importance of performing particular measurements for the ultimate purpose of inversion. Using a set of state-to-state integral cross sections we found that the resultant responses from the infinitesimal inversion were typically small such that ‖δV(R,r)‖ ≪ ‖V(R,r)‖. From the viewpoint of an actual inversion, these results indicate that only through an extensive effort will significant knowledge of the potential be gained from the cross sections. All of these calculations serve to illustrate the methodology, and other observables as well as dynamical schemes could be explored as desired.

  11. Application of artificial intelligent tools to modeling of glucosamine preparation from exoskeleton of shrimp.

    PubMed

    Valizadeh, Hadi; Pourmahmood, Mohammad; Mojarrad, Javid Shahbazi; Nemati, Mahboob; Zakeri-Milani, Parvin

    2009-04-01

    The objective of this study was to forecast and optimize the glucosamine production yield from chitin (obtained from Persian Gulf shrimp) by means of a genetic algorithm (GA), particle swarm optimization (PSO), and artificial neural networks (ANNs) as artificial intelligence tools. Three factors (acid concentration, acid solution to chitin ratio, and reaction time) were used as the input parameters of the models investigated. According to the obtained results, the production yield of glucosamine hydrochloride depends linearly on acid concentration, acid solution to solid ratio, and time, and also on the cross-product of acid concentration and time and the cross-product of solid to acid solution ratio and time. The production yield significantly increased with an increase of acid concentration, acid solution ratio, and reaction time. The production yield is inversely related to the cross-product of acid concentration and time; this means that at high acid concentrations, longer reaction times give lower production yields. The results revealed that the average percent errors (PE) for prediction of production yield by GA, PSO, and ANN are 6.84, 7.11, and 5.49%, respectively. Considering the low PE, it might be concluded that these models have good predictive power in the studied range of variables and the ability to generalize to unknown cases.

  12. VASP- VARIABLE DIMENSION AUTOMATIC SYNTHESIS PROGRAM

    NASA Technical Reports Server (NTRS)

    White, J. S.

    1994-01-01

    VASP is a variable dimension Fortran version of the Automatic Synthesis Program, ASP. The program is used to implement Kalman filtering and control theory. Basically, it consists of 31 subprograms for solving most modern control problems in linear, time-variant (or time-invariant) control systems. These subprograms include operations of matrix algebra, computation of the exponential of a matrix and its convolution integral, and the solution of the matrix Riccati equation. The user calls these subprograms by means of a FORTRAN main program, and so can easily obtain solutions to most general problems of extremization of a quadratic functional of the state of the linear dynamical system. Particularly, these problems include the synthesis of the Kalman filter gains and the optimal feedback gains for minimization of a quadratic performance index. VASP, as an outgrowth of the Automatic Synthesis Program, has the following improvements: more versatile programming language; more convenient input/output format; some new subprograms which consolidate certain groups of statements that are often repeated; and variable dimensioning. The pertinent difference between the two programs is that VASP has variable dimensioning and more efficient storage. The documentation for the VASP program contains a VASP dictionary and example problems. The dictionary contains a description of each subroutine and instructions on its use. The example problems include dynamic response, optimal control gain, solution of the sampled data matrix Riccati equation, matrix decomposition, and a pseudo-inverse of a matrix. This program is written in FORTRAN IV and has been implemented on the IBM 360. The VASP program was developed in 1971.

  13. A new approach to the solution of the linear mixing model for a single isotope: application to the case of an opportunistic predator.

    PubMed

    Hall-Aspland, S A; Hall, A P; Rogers, T L

    2005-03-01

    Mixing models are used to determine diets where the number of prey items is greater than one; however, the limitation of the linear mixing method is the lack of a unique solution when the number of potential sources is greater than the number (n) of isotopic signatures + 1. Using the IsoSource program, all possible combinations of each source contribution (0-100%) in preselected small increments can be examined and a range of values produced for each sample analysed. We propose the use of a Moore-Penrose (M-P) pseudoinverse, which involves the inverse of a 2x2 matrix. This is easily generalized to the case of a single isotope with (p) prey sources and produces a specific solution. The Antarctic leopard seal (Hydrurga leptonyx) was used as a model species to test this method. This seal is an opportunistic predator, which preys on a wide range of species including seals, penguins, fish and krill. The M-P method was used to determine the contribution to diet from each of the four prey types based on blood and fur samples collected over three consecutive austral summers. The advantage of the M-P method was the production of a vector of fractions f for each predator isotopic value, allowing us to identify the relative variation in dietary proportions. Comparison of the calculated fractions from this method with 'means' from IsoSource allowed confidence in the new approach for the case of a single isotope, nitrogen (N).
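
    The M-P pseudoinverse solution for a single isotope with p prey sources can be sketched as below: the mass-balance row and the isotope row form a 2 x p system, and the pseudoinverse (which only requires inverting a 2x2 matrix) returns the minimum-norm fraction vector. The prey and predator signatures are illustrative placeholders, not data from the study.

        import numpy as np

        # Hypothetical trophic-corrected nitrogen isotope signatures of four prey types
        prey_d15N = np.array([11.2, 9.5, 8.1, 6.3])   # e.g. seal, penguin, fish, krill
        predator_d15N = 9.0                            # measured predator value

        # Linear mixing model: prey_d15N . f = predator_d15N and sum(f) = 1
        A = np.vstack([prey_d15N, np.ones_like(prey_d15N)])
        b = np.array([predator_d15N, 1.0])

        f = np.linalg.pinv(A) @ b   # Moore-Penrose minimum-norm solution
        print(f, f.sum())           # note: in practice negative fractions need checking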

  14. On stability of the solutions of inverse problem for determining the right-hand side of a degenerate parabolic equation with two independent variables

    NASA Astrophysics Data System (ADS)

    Kamynin, V. L.; Bukharova, T. I.

    2017-01-01

    We prove the estimates of stability with respect to perturbations of input data for the solutions of inverse problems for degenerate parabolic equations with unbounded coefficients. An important feature of these estimates is that the constants in these estimates are written out explicitly by the input data of the problem.

  15. Nonlinear Problems in Fluid Dynamics and Inverse Scattering

    DTIC Science & Technology

    1993-05-31

    The nonlinear Kadomtsev-Petviashvili (KP) equations have solutions which become infinite in finite time; this phenomenon is sometimes referred to as wave collapse (cf. "Wave Collapse and Instability of Solitary Waves of a Generalized Nonlinear Kadomtsev-Petviashvili Equation", X.P. Wang, M.J. ..., November 1992). The inverse scattering of a class of differential-difference equations and multidimensional operators has been constructed, along with solutions of the associated nonlinear equations. [The remainder of this abstract is garbled in the source record.]

  16. Final Technical Report for "Applied Mathematics Research: Simulation Based Optimization and Application to Electromagnetic Inverse Problems"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haber, Eldad

    2014-03-17

    The focus of the research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at the 0th frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results were also applied to the problem of image registration.

  17. Turbulent Equilibria for Charged Particles in Space

    NASA Astrophysics Data System (ADS)

    Yoon, Peter

    2017-04-01

    The solar wind electron distribution function is apparently composed of several components, including a non-thermal tail population. The electron distribution that contains the energetic tail feature is well fitted by the kappa distribution function. The solar wind protons also possess a quasi power-law tail distribution that is well fitted by an inverse power-law model. The present paper discusses the latest theoretical development regarding the dynamical steady-state solution for electrons and Langmuir turbulence that are in turbulent equilibrium. According to such a theory, the Maxwellian and kappa distribution functions for the electrons emerge as the only two possible solutions that satisfy the steady-state weak turbulence plasma kinetic equation. For the proton inverse power-law tail problem, a similar turbulent equilibrium solution can be conceived of, but instead of high-frequency Langmuir fluctuations, the theory involves low-frequency kinetic Alfvénic turbulence. The steady-state solution of the self-consistent proton kinetic equation and the wave kinetic equation for Alfvénic waves can be found in order to obtain a self-consistent solution for the inverse power-law tail distribution function.

  18. Convergent radial dispersion: A note on evaluation of the Laplace transform solution

    USGS Publications Warehouse

    Moench, Allen F.

    1991-01-01

    A numerical inversion algorithm for Laplace transforms that is capable of handling rapid changes in the computed function is applied to the Laplace transform solution to the problem of convergent radial dispersion in a homogeneous aquifer. Prior attempts by the author to invert this solution were unsuccessful for highly advective systems where the Peclet number was relatively large. The algorithm used in this note allows for rapid and accurate inversion of the solution for all Peclet numbers of practical interest, and beyond. Dimensionless breakthrough curves are illustrated for tracer input in the form of a step function, a Dirac impulse, or a rectangular input.
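
    The abstract does not reproduce the inversion algorithm itself, and the scheme actually used by the author is a more robust one suited to sharp breakthrough fronts. As a generic, minimal illustration of numerical Laplace-transform inversion, the sketch below implements the classical Gaver-Stehfest formula, which works well for smooth solutions but degrades for highly advective (large Peclet number) cases, exactly the difficulty that motivates the note.

```python
import numpy as np
from math import factorial, log

def stehfest_coefficients(N=12):
    # Gaver-Stehfest weights V_k (N must be even).
    V = np.zeros(N)
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j) /
                  (factorial(N // 2 - j) * factorial(j) * factorial(j - 1) *
                   factorial(k - j) * factorial(2 * j - k)))
        V[k - 1] = (-1) ** (k + N // 2) * s
    return V

def invert_laplace(F, t, N=12):
    # Approximate f(t) from its Laplace transform F(p):
    # f(t) ~ (ln 2 / t) * sum_k V_k * F(k ln 2 / t).
    V = stehfest_coefficients(N)
    a = log(2.0) / t
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# Sanity check on F(p) = 1/(p+1), whose exact inverse is exp(-t).
print(invert_laplace(lambda p: 1.0 / (p + 1.0), 1.0), np.exp(-1.0))
```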

  19. The advantages of logarithmically scaled data for electromagnetic inversion

    NASA Astrophysics Data System (ADS)

    Wheelock, Brent; Constable, Steven; Key, Kerry

    2015-06-01

    Non-linear inversion algorithms traverse a data misfit space over multiple iterations of trial models in search of either a global minimum or some target misfit contour. The success of the algorithm in reaching that objective depends upon the smoothness and predictability of the misfit space. For any given observation, there is no absolute form a datum must take, and therefore no absolute definition for the misfit space; in fact, there are many alternatives. However, not all misfit spaces are equal in terms of promoting the success of inversion. In this work, we appraise three common forms that complex data take in electromagnetic geophysical methods: real and imaginary components, a power of amplitude and phase, and logarithmic amplitude and phase. We find that the optimal form is logarithmic amplitude and phase. Single-parameter misfit curves of log-amplitude and phase data for both magnetotelluric and controlled-source electromagnetic methods are the smoothest of the three data forms and do not exhibit flattening at low model resistivities. Synthetic, multiparameter, 2-D inversions illustrate that log-amplitude and phase is the most robust data form, converging to the target misfit contour in the fewest steps regardless of starting model and the amount of noise added to the data; inversions using the other two data forms run slower or fail under various starting models and proportions of noise. It is observed that inversion with log-amplitude and phase data is nearly two times faster in converging to a solution than with other data types. We also assess the statistical consequences of transforming data in the ways discussed in this paper. With the exception of real and imaginary components, which are assumed to be Gaussian, all other data types do not produce an expected mean-squared misfit value of 1.00 at the true model (a common assumption) as the errors in the complex data become large. We recommend that real and imaginary data with errors larger than 10 per cent of the complex amplitude be withheld from a log-amplitude and phase inversion rather than retaining them with large error-bars.
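
    As an illustration of the recommended data form, the sketch below converts complex-valued EM data into log-amplitude and phase with first-order error propagation, and flags points whose errors exceed 10 per cent of the complex amplitude, since the abstract recommends withholding such data. It assumes equal, independent Gaussian errors on the real and imaginary parts; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def to_log_amplitude_phase(Z, sigma):
    # Convert complex data to log10-amplitude and phase (degrees) with
    # linearized error propagation, assuming error sigma on Re and Im parts.
    Z = np.asarray(Z, dtype=complex)
    amp = np.abs(Z)
    log_amp = np.log10(amp)
    phase = np.angle(Z, deg=True)
    sigma_log_amp = sigma / (amp * np.log(10.0))   # d(log10 A)/dA = 1/(A ln 10)
    sigma_phase = np.degrees(sigma / amp)          # phase error, radians -> degrees
    return log_amp, sigma_log_amp, phase, sigma_phase

# Keep only data whose relative error is at most 10 per cent of the amplitude.
Z = np.array([1.0 + 0.5j, 0.02 + 0.01j])
sigma = np.array([0.05, 0.05])
keep = sigma / np.abs(Z) <= 0.10
print(to_log_amplitude_phase(Z[keep], sigma[keep]))
```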

  20. A practical method to assess model sensitivity and parameter uncertainty in C cycle models

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2015-04-01

    The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists in finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the three following conditions hold: 1) a solution exists, 2) the solution is unique and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed. The inverse problem is often ill-posed; a regularization method is then required to replace the original problem with a well-posed one, and a solution strategy amounts to 1) constructing a solution x, 2) assessing the validity of the solution, and 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) for estimating model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed that parameters and initial stocks directly related to fast processes were best estimated, with narrow confidence intervals, whereas those related to slow processes were poorly estimated, with very large uncertainties. While other studies have tried to overcome this difficulty by adding complementary data streams or by considering longer observation windows, no systematic analysis has been carried out so far to explain the large differences among results. We consider adjoint-based methods to investigate inverse problems using DALEC and various data streams. Using resolution matrices we study the nature of the inverse problems (solution existence, uniqueness and stability) and show how standard regularization techniques affect resolution and stability properties. Instead of using standard prior information as a penalty term in the cost function to regularize the problems, we constrain the parameter space using ecological balance conditions and inequality constraints. The efficiency and rapidity of this approach allow us to compute ensembles of solutions to the inverse problems, from which we can establish the robustness of the variational method and obtain non-Gaussian posterior distributions for the model parameters and initial carbon stocks.
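
    The resolution-matrix diagnostic mentioned above can be illustrated with a generic linearized problem. The sketch below, which is not the DALEC adjoint machinery itself, builds the model resolution matrix for a zeroth-order Tikhonov-regularized least-squares solution; diagonal entries near one indicate well-resolved (fast-process-like) parameters and entries near zero indicate poorly resolved (slow-process-like) ones. The Jacobian G and the regularization weight are hypothetical.

```python
import numpy as np

def tikhonov_resolution(G, alpha):
    # Model resolution matrix R for a linearized problem G m = d solved with
    # zeroth-order Tikhonov regularization.
    GtG = G.T @ G
    n = GtG.shape[0]
    # Generalized inverse G# = (G^T G + alpha^2 I)^{-1} G^T, so m_est = G# d.
    G_sharp = np.linalg.solve(GtG + alpha**2 * np.eye(n), G.T)
    R = G_sharp @ G   # m_est = R m_true in the noise-free case; R = I means perfect resolution
    return R

# Hypothetical Jacobian: three well-sensed parameters (fast processes) and
# two weakly-sensed parameters (slow processes).
rng = np.random.default_rng(0)
G = np.hstack([rng.normal(size=(50, 3)), 0.01 * rng.normal(size=(50, 2))])
R = tikhonov_resolution(G, alpha=0.1)
print(np.round(np.diag(R), 3))   # near 1: resolved; near 0: poorly resolved
```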

  1. On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction.

    PubMed

    Crop, F; Van Rompaye, B; Paelinck, L; Vakaet, L; Thierens, H; De Wagter, C

    2008-07-21

    The purpose of this study was twofold: to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiochromic (Gafchromic) film. Sometimes an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve, with dose as a function of OD (inverse regression) or OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because the heteroscedasticity of the data is not taken into account. This can lead to erroneous results originating from the calibration process itself and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method for creating calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
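
    A minimal sketch of the two calibration strategies being compared is shown below, using hypothetical dose/OD data and an assumed heteroscedastic error model; real Gafchromic calibration curves are nonlinear, so a straight line is used here only to contrast OLS inverse regression with weighted inverse prediction.

```python
import numpy as np

# Hypothetical calibration data: delivered dose (cGy) and measured net OD,
# with OD uncertainty growing with dose (heteroscedastic, assumed model).
dose = np.array([0., 50., 100., 150., 200., 250., 300., 350., 400.])
od   = np.array([0.00, 0.11, 0.21, 0.30, 0.38, 0.45, 0.52, 0.58, 0.63])
sigma_od = 0.005 + 0.02 * od

# (a) OLS "inverse regression": fit dose = a + b*OD, ignoring the error structure.
b_ols, a_ols = np.polyfit(od, dose, 1)

# (b) WLS "inverse prediction": fit OD = c + d*dose with weights 1/sigma,
#     then invert the fitted curve to predict dose from a measured OD.
d_wls, c_wls = np.polyfit(dose, od, 1, w=1.0 / sigma_od)  # polyfit weights multiply residuals
predict_dose_wls = lambda od_meas: (od_meas - c_wls) / d_wls

# Compare the dose predicted for one measured OD value by both approaches.
print(a_ols + b_ols * 0.40, predict_dose_wls(0.40))
```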

  2. Female Literacy Rate is a Better Predictor of Birth Rate and Infant Mortality Rate in India.

    PubMed

    Saurabh, Suman; Sarkar, Sonali; Pandey, Dhruv K

    2013-01-01

    Educated women are known to make informed reproductive and healthcare decisions. These result in population stabilization and better infant care, reflected by lower birth rates and infant mortality rates (IMRs), respectively. Our objective was to study the relationship of male and female literacy rates with the crude birth rates (CBRs) and IMRs of the states and union territories (UTs) of India. The data were analyzed using linear regression. CBR and IMR were taken as the dependent variables, while the overall, male, and female literacy rates were the independent variables. CBRs were inversely related to literacy rates (slope parameter = -0.402, P < 0.001). On multiple linear regression with male and female literacy rates, a significant inverse relationship emerged between female literacy rate and CBR (slope = -0.363, P < 0.001), while male literacy rate was not significantly related to CBR (P = 0.674). The IMRs of the states were also inversely related to their literacy rates (slope = -1.254, P < 0.001). Multiple linear regression revealed a significant inverse relationship between IMR and female literacy (slope = -0.816, P = 0.031), whereas male literacy rate was not significantly related (P = 0.630). Female literacy is thus particularly important for both population stabilization and better infant health.

  3. Chameleon scalar fields in relativistic gravitational backgrounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsujikawa, Shinji; Tamaki, Takashi; Tavakol, Reza, E-mail: shinji@rs.kagu.tus.ac.jp, E-mail: tamaki@gravity.phys.waseda.ac.jp, E-mail: r.tavakol@qmul.ac.uk

    2009-05-15

    We study the field profile of a scalar field φ that couples to a matter fluid (dubbed a chameleon field) in the relativistic gravitational background of a spherically symmetric spacetime. Employing a linear expansion in terms of the gravitational potential Φ_c at the surface of a compact object with a constant density, we derive the thin-shell field profile both inside and outside the object, as well as the resulting effective coupling with matter, analytically. We also carry out numerical simulations for the class of inverse power-law potentials V(φ) = M^(4+n) φ^(-n) by employing the information provided by our analytical solutions to set the boundary conditions around the centre of the object, and show that thin-shell solutions in fact exist if the gravitational potential Φ_c is smaller than 0.3, which marginally covers the case of neutron stars. Thus the chameleon mechanism is present in relativistic gravitational backgrounds, capable of reducing the effective coupling. Since thin-shell solutions are sensitive to the choice of boundary conditions, our analytic field profile is very helpful for providing appropriate boundary conditions for Φ_c ≈ …

  4. Visualizing phase transition behavior of dilute stimuli responsive polymer solutions via Mueller matrix polarimetry.

    PubMed

    Narayanan, Amal; Chandel, Shubham; Ghosh, Nirmalya; De, Priyadarsi

    2015-09-15

    Probing the volume phase transition behavior of superdiluted polymer solutions both micro- and macroscopically remains an outstanding challenge. In this regard, we have explored 4 × 4 spectral Mueller matrix measurement and its inverse analysis for extracting microarchitectural information about the stimuli responsiveness of "smart" polymers. The phase separation behavior of thermoresponsive poly(N-isopropylacrylamide) (PNIPAM), pH-responsive poly(N,N-(dimethylamino)ethyl methacrylate) (PDMAEMA) and their copolymers was analyzed in terms of Mueller matrix derived polarization parameters, namely depolarization (Δ), diattenuation (d), and linear retardance (δ). The Δ, d, and δ parameters provided useful information on both macro- and microstructural alterations during the phase separation. Additionally, the two-step action ((i) breakage of polymer-water hydrogen bonding and (ii) polymer-polymer aggregation) in the molecular microenvironment during cloud point generation was successfully probed via these parameters. It is demonstrated that, in comparison to the techniques presently available for assessing the hydrophobic-hydrophilic switchover of simple stimuli-responsive polymers, Mueller matrix polarimetry offers the important advantage of requiring a polymer solution a few hundred times more dilute (0.01 mg/mL, 1.1-1.4 μM) in a low-volume format.

  5. Atypical k-essence cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chimento, Luis P.; Lazkoz, Ruth

    We analyze the implications of having a divergent speed of sound in k-essence cosmological models. We first study a known theory of that kind, for which the Lagrangian density depends linearly on the time derivative of the k-field. We show that when k-essence is the only source, consistency requires that the potential of the k-field be of the inverse square form. Then, we review the known result that the corresponding power-law solutions can be mapped to power-law solutions of theories with no divergence in the speed of sound. After that, we argue that the requirement of a divergent sound speed at some point fixes uniquely the form of the Lagrangian to be exactly the one considered earlier, and prove the asymptotic stability of the most interesting solutions belonging to the divergent theory. Then, we discuss the implications of having not just k-essence but also matter. This is interesting because introducing another component breaks the rigidity of the theory, and the form of the potential ceases to be unique as happened in the pure k-essence case. Finally, we show the finiteness of the effective sound speed under an appropriate definition.

  6. Stable and unstable roots of ion temperature gradient driven mode using curvature modified plasma dispersion functions

    NASA Astrophysics Data System (ADS)

    Gültekin, Ö.; Gürcan, Ö. D.

    2018-02-01

    The basic, local kinetic theory of the ion temperature gradient driven (ITG) mode with adiabatic electrons is reconsidered. Standard unstable, purely oscillating, as well as damped solutions of the local dispersion relation are obtained using a bracketing technique that uses the argument principle. This method requires computing the plasma dielectric function and its derivatives, which are implemented here using modified plasma dispersion functions with curvature and their derivatives, and allows bracketing/following the zeros of the plasma dielectric function, which correspond to different roots of the ITG dispersion relation. We provide an open source implementation of the derivatives of modified plasma dispersion functions with curvature, which are used in this formulation. Studying the local ITG dispersion, we find that near the threshold of instability the unstable branch is rather asymmetric, with oscillating solutions towards lower wave numbers (i.e. drift waves) and damped solutions towards higher wave numbers. This suggests that a process akin to an inverse cascade, through coupling to the oscillating branch towards lower wave numbers, may play a role in the nonlinear evolution of the ITG near the instability threshold. Also, using the algorithm, the linear wave diffusion is estimated for the marginally stable ITG mode.
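
    The root-counting step behind such a bracketing technique can be illustrated with the argument principle alone. The sketch below counts the zeros of an analytic function inside a rectangle of the complex frequency plane by integrating D'/D around the boundary; it uses a toy dispersion function rather than the gyrokinetic dielectric with curvature-modified plasma dispersion functions described in the abstract.

```python
import numpy as np

def count_zeros(D, dD, corners, n=2000):
    # Number of zeros of an analytic function D inside a rectangle, via the
    # argument principle: N = (1 / 2*pi*i) * contour integral of D'(z)/D(z) dz.
    z0, z1 = corners   # lower-left and upper-right corners of the rectangle
    pts = np.concatenate([
        np.linspace(z0, complex(z1.real, z0.imag), n),
        np.linspace(complex(z1.real, z0.imag), z1, n),
        np.linspace(z1, complex(z0.real, z1.imag), n),
        np.linspace(complex(z0.real, z1.imag), z0, n)])   # counterclockwise boundary
    integrand = dD(pts) / D(pts)
    integral = np.trapz(integrand, pts)   # complex trapezoid rule along the contour
    return int(round((integral / (2j * np.pi)).real))

# Toy "dispersion relation": D(w) = w^2 + 1 has one zero (w = +i) in this box.
D  = lambda w: w**2 + 1
dD = lambda w: 2 * w
print(count_zeros(D, dD, (-2 - 0.5j, 2 + 2j)))
```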

  7. Post-earthquake relaxation using a spectral element method: 2.5-D case

    USGS Publications Warehouse

    Pollitz, Fred

    2014-01-01

    The computation of quasi-static deformation for axisymmetric viscoelastic structures on a gravitating spherical earth is addressed using the spectral element method (SEM). A 2-D spectral element domain is defined with respect to spherical coordinates of radius and angular distance from a pole of symmetry, and 3-D viscoelastic structure is assumed to be azimuthally symmetric with respect to this pole. A point dislocation source that is periodic in azimuth is implemented with a truncated sequence of azimuthal order numbers. Viscoelasticity is limited to linear rheologies and is implemented with the correspondence principle in the Laplace transform domain. This leads to a series of decoupled 2-D problems which are solved with the SEM. Inverse Laplace transform of the independent 2-D solutions leads to the time-domain solution of the 3-D equations of quasi-static equilibrium imposed on a 2-D structure. The numerical procedure is verified through comparison with analytic solutions for finite faults embedded in a laterally homogeneous viscoelastic structure. This methodology is applicable to situations where the predominant structure varies in one horizontal direction, such as a structural contrast across (or parallel to) a long strike-slip fault.

  8. Steady Secondary Flows Generated by Periodic Compression and Expansion of an Ideal Gas in a Pulse Tube

    NASA Technical Reports Server (NTRS)

    Lee, Jeffrey M.

    1999-01-01

    This study establishes a consistent set of differential equations for use in describing the steady secondary flows generated by periodic compression and expansion of an ideal gas in pulse tubes. Also considered is heat transfer between the gas and the tube wall of finite thickness. A small-amplitude series expansion solution in the inverse Strouhal number is proposed for the two-dimensional axisymmetric mass, momentum and energy equations. The anelastic approach applies when shock and acoustic energies are small compared with the energy needed to compress and expand the gas. An analytic solution to the ordered series is obtained in the strong temperature limit where the zeroth-order temperature is constant. The solution shows that steady velocities increase linearly for small Valensi number and can be of order 1 for large Valensi number. A conversion of steady work flow to heat flow occurs whenever temperature, velocity or phase angle gradients are present. Steady enthalpy flow is reduced by heat transfer and scales with the product of the Prandtl and Valensi numbers. Particle velocities from a smoke-wire experiment were compared with predictions for the basic and orifice pulse tube configurations. The theory accurately predicted the observed steady streaming.

  9. Improved source inversion from joint measurements of translational and rotational ground motions

    NASA Astrophysics Data System (ADS)

    Donner, S.; Bernauer, M.; Reinwald, M.; Hadziioannou, C.; Igel, H.

    2017-12-01

    Waveform inversion for seismic point (moment tensor) and kinematic sources is a standard procedure. However, especially at local and regional distances, a lack of appropriate velocity models, the sparsity of station networks, or a low signal-to-noise ratio combined with more complex waveforms hampers the successful retrieval of reliable source solutions. We assess the potential of rotational ground motion recordings to increase the resolution power and reduce the non-uniqueness of point and kinematic source solutions. Based on synthetic waveform data, we perform a Bayesian (i.e. probabilistic) inversion. Thus, we avoid the subjective selection of the most reliable solution according to the lowest misfit or some other constructed criterion. In addition, we obtain unbiased measures of resolution and possible trade-offs. Testing different earthquake mechanisms and scenarios, we show that the resolution of the source solutions can be improved significantly. The depth-dependent components in particular show significant improvement. In addition to synthetic data from full station networks, we also tested sparse-network and single-station cases.

  10. Kinematics and dynamics of robotic systems with multiple closed loops

    NASA Astrophysics Data System (ADS)

    Zhang, Chang-De

    The kinematics and dynamics of robotic systems with multiple closed loops, such as Stewart platforms, walking machines, and hybrid manipulators, are studied. In the study of kinematics, the focus is on closed-form solutions of the forward position analysis of different parallel systems. A closed-form solution means that the solution is expressed as a polynomial in one variable. If the order of the polynomial is less than or equal to four, the solution has an analytical closed form. First, the conditions for obtaining analytical closed-form solutions are studied. For a Stewart platform, the condition is found to be that one rotational degree of freedom of the output link is decoupled from the other five. Based on this condition, a class of Stewart platforms which has an analytical closed-form solution is formulated. Conditions for analytical closed-form solutions of other parallel systems are also studied. Closed-form solutions of forward kinematics for walking machines and multi-fingered grippers are then studied. For a parallel system with three three-degree-of-freedom subchains, there are 84 possible ways to select six independent joints among the nine joints. These 84 ways can be classified into three categories: Category 3:3:0, Category 3:2:1, and Category 2:2:2. It is shown that the first category has no solutions, the solutions of the second category have analytical closed form, and the solutions of the last category are higher-order polynomials. The study is then extended to a nearly general Stewart platform. The solution is a 20th-order polynomial and the Stewart platform has a maximum of 40 possible configurations. The study is also extended to a new class of hybrid manipulators consisting of two serially connected parallel mechanisms. In the study of dynamics, a computationally efficient method for the inverse dynamics of manipulators based on the virtual work principle is developed. Although this method is comparable with the recursive Newton-Euler method for serial manipulators, its advantage is more noteworthy when applied to parallel systems. An approach to the inverse dynamics of a walking machine is also developed, which includes inverse dynamic modeling, foot force distribution, and joint force/torque allocation.

  11. Inverse Scattering and Local Observable Algebras in Integrable Quantum Field Theories

    NASA Astrophysics Data System (ADS)

    Alazzawi, Sabina; Lechner, Gandalf

    2017-09-01

    We present a solution method for the inverse scattering problem for integrable two-dimensional relativistic quantum field theories, specified in terms of a given massive single particle spectrum and a factorizing S-matrix. An arbitrary number of massive particles transforming under an arbitrary compact global gauge group is allowed, thereby generalizing previous constructions of scalar theories. The two-particle S-matrix S is assumed to be an analytic solution of the Yang-Baxter equation with standard properties, including unitarity, TCP invariance, and crossing symmetry. Using methods from operator algebras and complex analysis, we identify sufficient criteria on S that imply the solution of the inverse scattering problem. These conditions are shown to be satisfied in particular by so-called diagonal S-matrices, but presumably also in other cases such as the O(N)-invariant nonlinear σ-models.

  12. Kernel reconstruction methods for Doppler broadening - Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    NASA Astrophysics Data System (ADS)

    Ducru, Pablo; Josey, Colin; Dibert, Karia; Sobes, Vladimir; Forget, Benoit; Smith, Kord

    2017-04-01

    This article establishes a new family of methods to perform temperature interpolation of nuclear interaction cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross-section-independent fashion by considering the kernels of the different operators that convert cross-section-related quantities from a temperature T0 to a higher temperature T, namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus performed here by reconstructing the kernel of the operation at a given temperature T by means of a linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients as the solution of a linear algebraic system. The choice of reference temperatures (Tj) is then optimized so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin, Tmax]. The performance of these kernel reconstruction methods is then assessed against previous temperature interpolation methods by testing them on isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of the 238U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.
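
    The L2-optimal coefficients referred to above reduce to a small Gram-matrix solve once the kernels are sampled on a grid. The sketch below illustrates this with toy Gaussian kernels whose width grows with temperature; it is a schematic stand-in for the actual Doppler-broadening kernels, and the grid, kernel shape and reference temperatures are assumptions.

```python
import numpy as np

def l2_interpolation_coefficients(ref_kernels, target_kernel, x):
    # L2-optimal coefficients a_j so that sum_j a_j * k_j best matches the
    # target kernel: solve the Gram (normal-equation) system G a = b with
    # G_ij = <k_i, k_j> and b_i = <k_i, k_target>.
    K = np.array(ref_kernels)                        # shape (m, npts)
    G = np.trapz(K[:, None, :] * K[None, :, :], x)   # Gram matrix of the reference kernels
    b = np.trapz(K * target_kernel, x)               # projections of the target kernel
    return np.linalg.solve(G, b)

# Toy example: Gaussian "kernels" whose width grows with temperature.
x = np.linspace(-5, 5, 2001)
kernel = lambda T: np.exp(-x**2 / (2 * T / 300.0)) / np.sqrt(2 * np.pi * T / 300.0)
refs = [kernel(T) for T in (300.0, 1200.0, 3000.0)]
a = l2_interpolation_coefficients(refs, kernel(900.0), x)
print(np.round(a, 4))
```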

  13. Parsimony and goodness-of-fit in multi-dimensional NMR inversion

    NASA Astrophysics Data System (ADS)

    Babak, Petro; Kryuchkov, Sergey; Kantzas, Apostolos

    2017-01-01

    Multi-dimensional nuclear magnetic resonance (NMR) experiments are often used to study the molecular structure and dynamics of matter in core analysis and reservoir evaluation. Industrial applications of multi-dimensional NMR involve high-dimensional measurement datasets with complicated correlation structure and require rapid and stable inversion algorithms from the time domain to the relaxation rate and/or diffusion domains. In practice, applying existing inversion algorithms with a large number of parameter values leads to an infinite number of solutions with a reasonable fit to the NMR data. The interpretation of such variability among multiple solutions and the selection of the most appropriate solution can be a very complex problem. In most cases the characteristics of materials have sparse signatures, and investigators would like to distinguish the most significant relaxation and diffusion values of the materials. To produce a unique, easy-to-interpret NMR distribution with a finite number of principal parameter values, we introduce a new method for NMR inversion. The method is constructed as a trade-off between the conventional goodness-of-fit approach to multivariate data and the principle of parsimony, which guarantees inversion with the fewest parameter values. We suggest performing the inversion of NMR data using a forward stepwise regression selection algorithm. To account for the trade-off between goodness-of-fit and parsimony, the objective function is based on the Akaike Information Criterion (AIC). The performance of the developed multi-dimensional NMR inversion method and its comparison with conventional methods are illustrated using real data for samples with bitumen, water and clay.
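
    The parsimony idea of forward stepwise selection under AIC can be sketched in one dimension. The code below greedily adds candidate relaxation components from a dictionary of exponential decays until AIC stops improving; it is an illustrative 1-D analogue, not the authors' multi-dimensional implementation, and the decay grid and synthetic data are hypothetical.

```python
import numpy as np

def forward_stepwise_aic(A, y, max_terms=20):
    # Greedy forward selection of dictionary columns minimizing
    # AIC = n * ln(RSS / n) + 2k, where k is the number of selected columns.
    n, p = A.shape
    selected, best_aic, coef = [], np.inf, None
    while len(selected) < max_terms:
        trial_best = None
        for j in range(p):
            if j in selected:
                continue
            cols = selected + [j]
            c = np.linalg.lstsq(A[:, cols], y, rcond=None)[0]
            rss = np.sum((y - A[:, cols] @ c) ** 2)
            aic = n * np.log(rss / n) + 2 * len(cols)
            if trial_best is None or aic < trial_best[0]:
                trial_best = (aic, j, c)
        if trial_best[0] >= best_aic:   # stop when AIC no longer improves
            break
        best_aic, j_new, coef = trial_best
        selected.append(j_new)
    return selected, coef, best_aic

# Toy decay built from two T2 components, fit against a log-spaced T2 dictionary.
t = np.linspace(0.001, 1.0, 400)
T2_grid = np.logspace(-3, 1, 60)
A = np.exp(-t[:, None] / T2_grid[None, :])
y = (0.7 * np.exp(-t / 0.05) + 0.3 * np.exp(-t / 0.5)
     + 0.002 * np.random.default_rng(1).normal(size=t.size))
sel, c, aic = forward_stepwise_aic(A, y)
print(sorted(np.round(T2_grid[sel], 3)))
```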

  14. The inverse problem: Ocean tides derived from earth tide observations

    NASA Technical Reports Server (NTRS)

    Kuo, J. T.

    1978-01-01

    Indirect mapping of ocean tides by means of land- and island-based tidal gravity measurements is presented. The inverse scheme of linear programming is used for the indirect mapping of ocean tides. Open-ocean tides were obtained by the numerical integration of Laplace's tidal equations.
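
    The abstract gives no details of the linear-programming scheme, so the sketch below is only a generic illustration of how an underdetermined linear inverse problem can be posed as a linear program, here by seeking the model of smallest L1 norm that fits the data exactly. The matrix and "tidal" model are hypothetical, and the real formulation involving Laplace's tidal equations and gravity loading is far more involved.

```python
import numpy as np
from scipy.optimize import linprog

def l1_min_solution(G, d):
    # Smallest-L1-norm model with G m = d, posed as a linear program by
    # splitting m into nonnegative parts m = u - v and minimizing sum(u + v).
    n_obs, n_par = G.shape
    c = np.ones(2 * n_par)
    A_eq = np.hstack([G, -G])          # G (u - v) = d
    res = linprog(c, A_eq=A_eq, b_eq=d, bounds=(0, None), method="highs")
    u, v = res.x[:n_par], res.x[n_par:]
    return u - v

# Hypothetical under-determined system: 3 tidal-gravity observations, 6 ocean cells.
rng = np.random.default_rng(2)
G = rng.normal(size=(3, 6))
m_true = np.array([0.0, 1.5, 0.0, 0.0, -0.8, 0.0])
print(np.round(l1_min_solution(G, G @ m_true), 3))
```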

  15. A large inversion in the linear chromosome of Streptomyces griseus caused by replicative transposition of a new Tn3 family transposon.

    PubMed

    Murata, M; Uchida, T; Yang, Y; Lezhava, A; Kinashi, H

    2011-04-01

    We have comprehensively analyzed the linear chromosomes of Streptomyces griseus mutants constructed and kept in our laboratory. During this study, macrorestriction analysis of the AseI and DraI fragments of mutant 402-2 suggested a large chromosomal inversion. The junctions of the chromosomal inversion were cloned and sequenced, and compared with the corresponding target sequences in the parent strain 2247. Consequently, a transposon-mediated mechanism was revealed: a transposon originally located at the left target site was replicatively transposed to the right target site in an inverted orientation, which generated a second copy and at the same time caused a 2.5-Mb chromosomal inversion. The transposon involved, named TnSGR, was grouped into a new subfamily of the resolvase-encoding Tn3 family transposons based on its gene organization. Finally, the terminal diversity of S. griseus chromosomes is discussed by comparing the sequences of strains 2247 and IFO13350.

  16. Integrability: mathematical methods for studying solitary waves theory

    NASA Astrophysics Data System (ADS)

    Wazwaz, Abdul-Majid

    2014-03-01

    In recent decades, substantial experimental research efforts have been devoted to linear and nonlinear physical phenomena. In particular, studies of integrable nonlinear equations in solitary waves theory have attracted intensive interest from mathematicians, with the principal goal of fostering the development of new methods, and physicists, who are seeking solutions that represent physical phenomena and to form a bridge between mathematical results and scientific structures. The aim for both groups is to build up our current understanding and facilitate future developments, develop more creative results and create new trends in the rapidly developing field of solitary waves. The notion of the integrability of certain partial differential equations occupies an important role in current and future trends, but a unified rigorous definition of the integrability of differential equations still does not exist. For example, an integrable model in the Painlevé sense may not be integrable in the Lax sense. The Painlevé sense indicates that the solution can be represented as a Laurent series in powers of some function that vanishes on an arbitrary surface with the possibility of truncating the Laurent series at finite powers of this function. The concept of Lax pairs introduces another meaning of the notion of integrability. The Lax pair formulates the integrability of nonlinear equation as the compatibility condition of two linear equations. However, it was shown by many researchers that the necessary integrability conditions are the existence of an infinite series of generalized symmetries or conservation laws for the given equation. The existence of multiple soliton solutions often indicates the integrability of the equation but other tests, such as the Painlevé test or the Lax pair, are necessary to confirm the integrability for any equation. In the context of completely integrable equations, studies are flourishing because these equations are able to describe the real features in a variety of vital areas in science, technology and engineering. In recognition of the importance of solitary waves theory and the underlying concept of integrable equations, a variety of powerful methods have been developed to carry out the required analysis. Examples of such methods which have been advanced are the inverse scattering method, the Hirota bilinear method, the simplified Hirota method, the Bäcklund transformation method, the Darboux transformation, the Pfaffian technique, the Painlevé analysis, the generalized symmetry method, the subsidiary ordinary differential equation method, the coupled amplitude-phase formulation, the sine-cosine method, the sech-tanh method, the mapping and deformation approach and many new other methods. The inverse scattering method, viewed as a nonlinear analogue of the Fourier transform method, is a powerful approach that demonstrates the existence of soliton solutions through intensive computations. At the center of the theory of integrable equations lies the bilinear forms and Hirota's direct method, which can be used to obtain soliton solutions by using exponentials. The Bäcklund transformation method is a useful invariant transformation that transforms one solution into another of a differential equation. The Darboux transformation method is a well known tool in the theory of integrable systems. It is believed that there is a connection between the Bäcklund transformation and the Darboux transformation, but it is as yet not known. 
Archetypes of integrable equations are the Korteweg-de Vries (KdV) equation, the modified KdV equation, the sine-Gordon equation, the Schrödinger equation, the Vakhnenko equation, the KdV6 equation, the Burgers equation, the fifth-order Lax equation and many others. These equations yield soliton solutions, multiple soliton solutions, breather solutions, quasi-periodic solutions, kink solutions, homoclinic solutions and other solutions as well. The couplings of linear and nonlinear equations were recently discovered and subsequently received considerable attention. The concept of couplings forms a new direction for developing innovative construction methods. The recently obtained results in solitary waves theory highlight new approaches for additional creative ideas, promising further achievements and increased progress in this field. We are grateful to all of the authors who accepted our invitation to contribute to this comment section.

  17. Lithographically Encrypted Inverse Opals for Anti-Counterfeiting Applications.

    PubMed

    Heo, Yongjoon; Kang, Hyelim; Lee, Joon-Seok; Oh, You-Kwan; Kim, Shin-Hyun

    2016-07-01

    Colloidal photonic crystals possess inimitable optical properties of iridescent structural colors and unique spectral shape, which render them useful as security materials. This work reports a novel method to encrypt graphical and spectral codes in polymeric inverse opals to provide advanced security. To accomplish this, lithographically featured micropatterns are prepared on the top surface of hydrophobic inverse opals, which serve as shadow masks against the surface modification of the air cavities to achieve hydrophilicity. The resultant inverse opals allow rapid infiltration of an aqueous solution into the hydrophilic cavities while retaining air in the hydrophobic cavities. Therefore, the structural color of the inverse opals is regioselectively red-shifted, disclosing the encrypted graphical codes. The decoded inverse opals also deliver unique reflectance spectral codes originating from the two distinct regions. The combinatorial code composed of graphical and optical codes is revealed only when the aqueous solution agreed in advance is used for decoding. In addition, the encrypted inverse opals are chemically stable, providing invariant codes with high reproducibility, and their high mechanical stability enables the transfer of the films onto any surface.

  18. FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)

    NASA Astrophysics Data System (ADS)

    2014-10-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 and May 2013, (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) Workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2014 was a one-day workshop held in May 2014 which attracted around sixty attendees. Each of the submitted papers has been reviewed by two reviewers. There have been nine accepted papers. In addition, three international speakers were invited to present a longer talk. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks (GDR ISIS, GDR MIA, GDR MOA, GDR Ondes). The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA, SATIE. Eric Vourc'h and Thomas Rodet

  19. Convex blind image deconvolution with inverse filtering

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
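
    For context on why plain inverse filtering needs regularization, the sketch below performs non-blind Tikhonov-regularized inverse filtering in the Fourier domain on a toy image. It is not the paper's convex blind formulation with star-norm and total-variation terms; the kernel, noise level and regularization weight are all assumptions.

```python
import numpy as np

def regularized_inverse_filter(blurred, kernel, eps=1e-2):
    # Non-blind regularized inverse filtering in the Fourier domain:
    # X = conj(K) * Y / (|K|^2 + eps).  With eps = 0 this is plain inverse
    # filtering, which amplifies noise wherever |K| is small (ill-posedness).
    K = np.fft.fft2(kernel, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + eps)
    return np.real(np.fft.ifft2(X))

# Toy example: blur a random "image" with a box kernel, add noise, then deblur.
rng = np.random.default_rng(3)
img = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
noisy = blurred + 0.01 * rng.normal(size=img.shape)
restored = regularized_inverse_filter(noisy, kernel, eps=1e-2)
print(np.linalg.norm(restored - img) / np.linalg.norm(img))
```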

  20. Application of snapshot imaging spectrometer in environmental detection

    NASA Astrophysics Data System (ADS)

    Sun, Kai; Qin, Xiaolei; Zhang, Yu; Wang, Jinqiang

    2017-10-01

    This study addressed the application of a snapshot imaging spectrometer to environmental detection. Simulated sewage and dyeing wastewater were prepared and the optimal experimental conditions were determined. A white LED array was used as the detection light source, and the image of the sample was collected by an imaging spectrometer developed in the laboratory to obtain the spectral information of the sample in the range of 400-800 nm. A standard curve relating the absorbance to the concentration of the samples was established. The linear range of the single component Rhodamine B was 1-50 mg/L, the linear correlation coefficient was more than 0.99, the recovery was 93%-113% and the relative standard deviation (RSD) was 7.5%. The linear range of the chemical oxygen demand (COD) standard solution was 50-900 mg/L, the linear correlation coefficient was 0.981, the recovery was 91%-106% and the relative standard deviation (RSD) was 6.7%. This rapid, accurate and precise method for detecting dyes shows excellent promise for on-site and emergency detection in the environment. At the request of the proceedings editor, an updated version of this article was published on 17 October 2017. The original version of this article was replaced due to an accidental inversion of Figure 2 and Figure 3. The Figures have been corrected in the updated and republished version.
