Sample records for inverse solution based

  1. Kinematics of an in-parallel actuated manipulator based on the Stewart platform mechanism

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II

    1992-01-01

    This paper presents kinematic equations and solutions for an in-parallel actuated robotic mechanism based on Stewart's platform. These equations are required for inverse position and resolved rate (inverse velocity) platform control. NASA LaRC has a Vehicle Emulator System (VES) platform designed by MIT which is based on Stewart's platform. The inverse position solution is straightforward and computationally inexpensive: given the desired position and orientation of the moving platform with respect to the base, the lengths of the prismatic leg actuators are calculated. The forward position solution is more complicated and theoretically has 16 solutions: the position and orientation of the moving platform with respect to the base are calculated given the leg actuator lengths. Two methods are pursued in this paper to solve this problem. The resolved rate (inverse velocity) solution is also derived: given the desired Cartesian velocity of the end-effector, the required leg actuator rates are calculated. The Newton-Raphson Jacobian matrix resulting from the second forward position kinematics solution is a modified inverse Jacobian matrix. Examples and simulations are given for the VES.
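    The inverse position step described above reduces, for each leg, to a vector norm. The sketch below illustrates that computation in generic form; the attachment-point arrays and function name are illustrative assumptions, not the actual VES geometry or NASA code.

    ```python
    import numpy as np

    def stewart_leg_lengths(p, R, base_pts, plat_pts):
        """Illustrative inverse position kinematics for a Stewart platform.

        p        : (3,) position of the platform frame origin in the base frame
        R        : (3, 3) rotation of the platform frame w.r.t. the base frame
        base_pts : (6, 3) leg attachment points on the base (base frame)
        plat_pts : (6, 3) leg attachment points on the platform (platform frame)

        Returns the six prismatic leg (actuator) lengths.
        """
        # Each leg vector runs from its base attachment to its platform attachment,
        # expressed in the base frame: l_i = p + R a_i - b_i.
        leg_vectors = p + plat_pts @ R.T - base_pts
        return np.linalg.norm(leg_vectors, axis=1)
    ```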

  2. An inverse problem strategy based on forward model evaluations: Gradient-based optimization without adjoint solves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    2016-07-01

    This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.

  3. Inverse solutions for electrical impedance tomography based on conjugate gradients methods

    NASA Astrophysics Data System (ADS)

    Wang, M.

    2002-01-01

    A multistep inverse solution for the two-dimensional electric field distribution is developed to deal with the nonlinear relation between the electric field distribution and its boundary condition, and with the divergence caused by errors introduced by the ill-conditioned sensitivity matrix and by noise from electrode modelling and instruments. This solution is based on a normalized linear approximation method in which the change in mutual impedance is derived from the sensitivity theorem and a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and by the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
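    The idea of using a limited number of conjugate-gradient iterations as regularization can be illustrated with a generic CGLS loop; this is a standard sketch, not the paper's generalized conjugate gradients implementation, and the function and variable names are assumptions.

    ```python
    import numpy as np

    def truncated_cgls(S, dz, n_iter):
        """Solve S @ dsigma ~ dz in the least-squares sense with CGLS,
        stopping after n_iter iterations so that early termination acts
        as regularization of the ill-conditioned sensitivity matrix S."""
        x = np.zeros(S.shape[1])
        r = dz - S @ x
        s = S.T @ r
        p = s.copy()
        gamma = s @ s
        for _ in range(n_iter):
            q = S @ p
            alpha = gamma / (q @ q)
            x += alpha * p            # update conductivity-change estimate
            r -= alpha * q            # update data residual
            s = S.T @ r
            gamma_new = s @ s
            p = s + (gamma_new / gamma) * p
            gamma = gamma_new
        return x
    ```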

  4. Application of a stochastic inverse to the geophysical inverse problem

    NASA Technical Reports Server (NTRS)

    Jordan, T. H.; Minster, J. B.

    1972-01-01

    The inverse problem for gross earth data can be reduced to an underdetermined linear system of integral equations of the first kind. A theory is discussed for computing particular solutions to this linear system based on the stochastic inverse theory presented by Franklin. The stochastic inverse is derived and related to the generalized inverse of Penrose and Moore. A Backus-Gilbert type tradeoff curve is constructed for the problem of estimating the solution to the linear system in the presence of noise. It is shown that the stochastic inverse represents an optimal point on this tradeoff curve. A useful form of the solution autocorrelation operator as a member of a one-parameter family of smoothing operators is derived.
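    For reference, a Franklin-type stochastic inverse can be written in the familiar damped form below (standard notation, not necessarily the exact operator form used in the paper), with G the forward operator, d the data, and C_m, C_n the model and noise covariance operators:

    $$\hat{m} \;=\; C_m G^{T}\left(G\,C_m G^{T} + C_n\right)^{-1} d$$

    In the limit of vanishing noise covariance and identity model covariance this reduces to the minimum-norm (Moore-Penrose) solution, which is the sense in which the stochastic inverse is related to the generalized inverse.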

  5. New Additions to the Toolkit for Forward/Inverse Problems in Electrocardiography within the SCIRun Problem Solving Environment.

    PubMed

    Coll-Font, Jaume; Burton, Brett M; Tate, Jess D; Erem, Burak; Swenson, Darrel J; Wang, Dafang; Brooks, Dana H; van Dam, Peter; Macleod, Rob S

    2014-09-01

    Cardiac electrical imaging often requires the examination of different forward and inverse problem formulations based on mathematical and numerical approximations of the underlying source and the intervening volume conductor that can generate the associated voltages on the surface of the body. If the goal is to recover the source on the heart from body surface potentials, the solution strategy must include numerical techniques that can incorporate appropriate constraints and recover useful solutions, even though the problem is badly posed. Creating complete software solutions to such problems is a daunting undertaking. In order to make such tools more accessible to a broad array of researchers, the Center for Integrative Biomedical Computing (CIBC) has made an ECG forward/inverse toolkit available within the open source SCIRun system. Here we report on three new methods added to the inverse suite of the toolkit. These new algorithms, namely a Total Variation method, a non-decreasing TMP inverse and a spline-based inverse, consist of two inverse methods that take advantage of the temporal structure of the heart potentials and one that leverages the spatial characteristics of the transmembrane potentials. These three methods further expand the possibilities of researchers in cardiology to explore and compare solutions to their particular imaging problem.

  6. Kinematic source inversions of teleseismic data based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, O.; McDougall, D.; Mai, P. M.; Babuska, I.

    2014-12-01

    One fundamental aspect of seismic hazard mitigation is gaining a better understanding of the rupture process. Because direct observation of the relevant parameters and properties is not possible, other means such as kinematic source inversions are used instead. By constraining the spatial and temporal evolution of fault slip during an earthquake, those inversion approaches may provide valuable insights into the physics of the rupture process. However, due to the underdetermined nature of this inversion problem (i.e., inverting a kinematic source model for an extended fault based on seismic data), the solutions provided are generally non-unique. Here we present a statistical (Bayesian) inversion approach based on an open-source library for uncertainty quantification (UQ) called QUESO, developed at ICES (UT Austin). The approach has advantages over deterministic inversion approaches in that it provides not only a single (non-unique) solution but also the associated uncertainty bounds. Those uncertainty bounds help to judge, qualitatively and quantitatively, how well constrained an inversion solution is and how much rupture complexity the data reliably resolve. The presented inversion scheme uses only teleseismically recorded body waves, but future developments may lead towards joint inversion schemes. After giving an insight into the inversion scheme itself (based on Delayed Rejection Adaptive Metropolis, DRAM), we explore the method's resolution potential. For that, we synthetically generate teleseismic data, add, for example, different levels of noise and/or change the fault plane parameterization, and then apply our inversion scheme in an attempt to extract the (known) kinematic rupture model. We conclude with an example inversion of real teleseismic data from a recent large earthquake and compare the results with deterministically derived kinematic source models provided by other research groups.
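    The core of a DRAM-type sampler is a Metropolis accept/reject loop over source-model parameters; the minimal random-walk sketch below (hypothetical function names, no delayed rejection or adaptation) shows the basic mechanism on which DRAM builds.

    ```python
    import numpy as np

    def random_walk_metropolis(log_posterior, x0, n_samples, step=0.1, rng=None):
        """Minimal random-walk Metropolis sampler. DRAM adds delayed rejection
        and adaptive proposal covariance on top of this accept/reject loop."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x0, dtype=float)
        logp = log_posterior(x)
        chain = []
        for _ in range(n_samples):
            proposal = x + step * rng.standard_normal(x.shape)
            logp_prop = log_posterior(proposal)
            # accept with probability min(1, posterior ratio)
            if np.log(rng.uniform()) < logp_prop - logp:
                x, logp = proposal, logp_prop
            chain.append(x.copy())
        return np.array(chain)
    ```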

  7. Inverse kinematic solution for near-simple robots and its application to robot calibration

    NASA Technical Reports Server (NTRS)

    Hayati, Samad A.; Roston, Gerald P.

    1986-01-01

    This paper provides an inverse kinematic solution for a class of robot manipulators called near-simple manipulators. The kinematics of these manipulators differ from those of simple robots by small parameter variations. Although most robots are simple by design, in practice, due to manufacturing tolerances, every robot is near-simple. The method in this paper gives an approximate inverse kinematics solution for real-time applications based on the nominal solution for these robots. The validity of the results is tested both by a simulation study and by applying the algorithm to a PUMA robot.

  8. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models.

    PubMed

    Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.

  9. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Butler, T.; Graham, L.; Estep, D.; Dawson, C.; Westerink, J. J.

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.

  10. Hybrid-dual-Fourier tomographic algorithm for fast three-dimensional optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

    A reconstruction technique for reducing the computational burden of 3D image reconstruction, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to provide high-speed inverse computations. The inverse algorithm uses a hybrid transfer to provide fast Fourier inversion for data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.

  11. Hybrid Weighted Minimum Norm Method: a new method based on LORETA to solve the EEG inverse problem.

    PubMed

    Song, C; Zhuang, T; Wu, Q

    2005-01-01

    This paper brings forward a new method to solve the EEG inverse problem. It is based on the following physiological characteristics of neural electrical activity sources: first, neighboring neurons are prone to activate synchronously; second, the distribution of the source space is sparse; third, the activity of the sources is highly centralized. We take this prior knowledge as the prerequisite for developing the EEG inverse solution, without assuming other characteristics of the solution, to realize the most common 3D EEG reconstruction map. The proposed algorithm takes advantage of LORETA, a low-resolution method that emphasizes 'localization', and of FOCUSS, a high-resolution method that emphasizes 'separability'. The method remains within the framework of the weighted minimum norm method. The keystone is to construct a weighting matrix that takes reference from existing smoothness operators, competition mechanisms and learning algorithms. The basic procedure is to obtain an initial estimate of the solution, then construct a new estimate using the information from the initial solution, and repeat this process until the solutions of the last two estimation steps remain unchanged.
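    The reweighting idea, building each new weight matrix from the previous estimate, can be sketched with a generic FOCUSS-style iteration; this is a simplified illustration under assumed notation (lead field L, scalp data v), not the authors' exact weighting, which also incorporates smoothness and competition terms.

    ```python
    import numpy as np

    def reweighted_minimum_norm(L, v, n_iter=10, eps=1e-12):
        """FOCUSS-style iteratively reweighted minimum-norm inverse.

        L : (m, n) lead-field matrix, v : (m,) measured scalp potentials.
        Each pass builds a diagonal weight from the previous source estimate,
        sharpening an initially smooth (LORETA-like) solution."""
        m, n = L.shape
        j = np.ones(n)                          # flat starting estimate
        for _ in range(n_iter):
            W = np.diag(np.abs(j) + eps)        # weights from previous solution
            Lw = L @ W
            # weighted minimum-norm solution: j = W (L W)^+ v
            j = W @ (Lw.T @ np.linalg.solve(Lw @ Lw.T + eps * np.eye(m), v))
        return j
    ```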

  12. Fractional Gaussian model in global optimization

    NASA Astrophysics Data System (ADS)

    Dimri, V. P.; Srivastava, R. P.

    2009-12-01

    The earth system is inherently non-linear, and it can be characterized well if we incorporate non-linearity in the formulation and solution of the problem. The general tool often used for characterization of the earth system is inversion. Traditionally, inverse problems are solved using least-squares based inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a Gaussian posterior probability distribution. It is now well established that most physical properties of the earth follow a power law (fractal distribution). Thus, selecting the initial model based on a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method has been demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function, which uses the mean, variance and Hurst coefficient of the model space, to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion methods.

  13. The inverse electroencephalography pipeline

    NASA Astrophysics Data System (ADS)

    Weinstein, David Michael

    The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.

  14. Pareto joint inversion of 2D magnetotelluric and gravity data

    NASA Astrophysics Data System (ADS)

    Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek

    2015-04-01

    In this contribution, the first results of the "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" project are described. At this stage of development, a Pareto joint inversion scheme for 2D MT and gravity data was used. Additionally, seismic data were used to set constraints for the inversion. A Sharp Boundary Interface (SBI) approach, with the model described by a set of polygons, was used to limit the dimensionality of the solution space. The main engine was based on a modified Particle Swarm Optimization (PSO), adapted to handle two or more target functions at once. An additional algorithm was used to eliminate unrealistic solution proposals. Because PSO is a stochastic global optimization method, it requires many proposals to be evaluated to find a single Pareto solution and then compose a Pareto front; to speed up this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. The proposed approach to joint inversion has several advantages. First of all, the Pareto scheme eliminates the cumbersome rescaling of the target functions, which can strongly affect the final solution. Secondly, the whole set of solutions is created in one optimization run, providing a choice of the final solution; this choice can be based on qualitative data that are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the dimensionality of the problem but also makes constraining the solution easier. At this stage of the work, the approach was tested using MT and gravity data because this combination is often used in practice; the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data. The presented results were obtained for synthetic models imitating real geological conditions, where the interesting density distributions are relatively shallow and the resistivity changes are related to deeper parts; such conditions are well suited for joint inversion of MT and gravity data. In the next stage of development, further code optimization and extensive tests on real data will be carried out. The presented work was supported by the Polish National Centre for Research and Development under contract number POIG.01.04.00-12-279/13.
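    The selection step of a Pareto joint inversion, keeping only proposals that are not dominated in any of the target functions, can be sketched as below; this is a generic non-dominance filter under assumed array conventions, not the project's PSO engine.

    ```python
    import numpy as np

    def pareto_front_indices(objectives):
        """Return indices of non-dominated proposals, assuming all target
        functions (e.g. MT misfit, gravity misfit) are to be minimized.

        objectives : (n_proposals, n_targets) array of misfit values."""
        objectives = np.asarray(objectives, float)
        front = []
        for i, fi in enumerate(objectives):
            # proposal i is dominated if some other proposal is no worse in
            # every target and strictly better in at least one
            dominated = np.any(np.all(objectives <= fi, axis=1) &
                               np.any(objectives < fi, axis=1))
            if not dominated:
                front.append(i)
        return front
    ```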

  15. A complete analytical solution for the inverse instantaneous kinematics of a spherical-revolute-spherical (7R) redundant manipulator

    NASA Technical Reports Server (NTRS)

    Podhorodeski, R. P.; Fenton, R. G.; Goldenberg, A. A.

    1989-01-01

    Using a method based upon resolving joint velocities using reciprocal screw quantities, compact analytical expressions are generated for the inverse solution of the joint rates of a seven revolute (spherical-revolute-spherical) manipulator. The method uses a sequential decomposition of screw coordinates to identify reciprocal screw quantities used in the resolution of a particular joint rate solution, and also to identify a Jacobian null-space basis used for the direct solution of optimal joint rates. The results of the screw decomposition are used to study special configurations of the manipulator, generating expressions for the inverse velocity solution for all non-singular configurations of the manipulator, and identifying singular configurations and their characteristics. Two functions are therefore served: a new general method for the solution of the inverse velocity problem is presented; and complete analytical expressions are derived for the resolution of the joint rates of a seven degree of freedom manipulator useful for telerobotic and industrial robotic application.

  16. Definition and solution of a stochastic inverse problem for the Manning’s n parameter field in hydrodynamic models

    DOE PAGES

    Butler, Troy; Graham, L.; Estep, D.; ...

    2015-02-03

    The uncertainty in spatially heterogeneous Manning’s n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented in this paper. Technical details that arise in practice by applying the framework to determine the Manning’s n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of “condition” for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. Finally, this notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning’s n parameter and the effect on model predictions is analyzed.

  17. Electromagnetic inverse scattering

    NASA Technical Reports Server (NTRS)

    Bojarski, N. N.

    1972-01-01

    A three-dimensional electromagnetic inverse scattering identity, based on the physical optics approximation, is developed for the monostatic scattered far field cross section of perfect conductors. Uniqueness of this inverse identity is proven. This identity requires complete scattering information for all frequencies and aspect angles. A nonsingular integral equation is developed for the arbitrary case of incomplete frequency and/or aspect angle scattering information. A general closed-form solution to this integral equation is developed, which yields the shape of the scatterer from such incomplete information. A specific practical radar solution is presented. The resolution of this solution is developed, yielding short-pulse target resolution radar system parameter equations. The special cases of two- and one-dimensional inverse scattering and the special case of a priori knowledge of scatterer symmetry are treated in some detail. The merits of this solution over the conventional radar imaging technique are discussed.

  18. The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method

    NASA Astrophysics Data System (ADS)

    Voronina, T. A.; Romanenko, A. A.

    2016-12-01

    Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. This methodology is based on the inversion of remote measurements of water-level data. The wave propagation is considered within the scope of linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least-squares inversion using the truncated Singular Value Decomposition method. As a result of the numerical process, an r-solution is obtained. The proposed method allows one to control the instability of the numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. Application of this methodology to reconstructing the initial waveform of the 2013 Solomon Islands tsunami confirms the conclusions drawn for synthetic data and a model tsunami source: the inversion result depends strongly on the noisiness of the data and on the azimuthal and temporal coverage of the recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of the available recording stations used in the inversion process.
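    The r-solution amounts to inverting only the r largest singular values of the discretized forward operator; a minimal sketch (assumed names, dense linear algebra) is given below.

    ```python
    import numpy as np

    def r_solution(A, d, r):
        """Truncated-SVD ('r-solution') of an ill-posed linear system A m = d.
        Only the r largest singular values are inverted, which stabilizes the
        reconstruction at the cost of some resolution."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        coeffs = (U[:, :r].T @ d) / s[:r]      # spectral coefficients kept
        return Vt[:r].T @ coeffs               # back to model space
    ```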

  19. Kinematic equations for control of the redundant eight-degree-of-freedom advanced research manipulator 2

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II

    1992-01-01

    The forward position and velocity kinematics for the redundant eight-degree-of-freedom Advanced Research Manipulator 2 (ARM2) are presented. Inverse position and velocity kinematic solutions are also presented. The approach in this paper is to specify two of the unknowns and solve for the remaining six unknowns. Two unknowns can be specified with two restrictions. First, the elbow joint angle and rate cannot be specified because they are known from the end-effector position and velocity. Second, one unknown must be specified from the four-jointed wrist, and the second from joints that translate the wrist, elbow joint excluded. There are eight solutions to the inverse position problem. The inverse velocity solution is unique, assuming the Jacobian matrix is not singular. A discussion of singularities is based on specifying two joint rates and analyzing the reduced Jacobian matrix. When this matrix is singular, the generalized inverse may be used as an alternate solution. Computer simulations were developed to verify the equations. Examples demonstrate agreement between forward and inverse solutions.

  20. Black hole algorithm for determining model parameter in self-potential data

    NASA Astrophysics Data System (ADS)

    Sungkono; Warnana, Dwa Desa

    2018-01-01

    Analysis of self-potential (SP) data is increasingly popular in geophysics due to its relevance in many cases. However, the inversion of SP data is often highly nonlinear. Consequently, local search algorithms, commonly based on gradient approaches, have often failed to find the global optimum solution of nonlinear problems. The black hole algorithm (BHA) was proposed as a solution to such problems. As the name suggests, the algorithm was constructed based on the black hole phenomenon. This paper investigates the application of BHA to the inversion of field and synthetic SP data. The inversion results show that BHA accurately determines the model parameters and model uncertainty. This indicates that BHA has high potential as an innovative approach for SP data inversion.
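    A generic form of the black hole algorithm is sketched below to make the idea concrete: candidate models drift toward the current best model (the "black hole"), and candidates that cross the event horizon are re-initialized at random. Parameter names and the horizon formula follow the commonly cited generic BHA, not necessarily the authors' implementation.

    ```python
    import numpy as np

    def black_hole_minimize(misfit, lower, upper, n_stars=30, n_iter=200, rng=None):
        """Minimal black hole algorithm for bound-constrained minimization."""
        rng = np.random.default_rng() if rng is None else rng
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        stars = rng.uniform(lower, upper, size=(n_stars, lower.size))
        best = stars[0].copy()
        for _ in range(n_iter):
            f = np.array([misfit(s) for s in stars])
            i_best = int(np.argmin(f))
            best = stars[i_best].copy()                      # black hole = best star
            stars += rng.uniform(size=stars.shape) * (best - stars)
            radius = f[i_best] / (np.abs(f).sum() + 1e-30)   # event-horizon radius
            swallowed = np.linalg.norm(stars - best, axis=1) < radius
            swallowed[i_best] = False                        # keep the black hole
            stars[swallowed] = rng.uniform(lower, upper,
                                           size=(int(swallowed.sum()), lower.size))
        return best
    ```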

  1. Easy way to determine quantitative spatial resolution distribution for a general inverse problem

    NASA Astrophysics Data System (ADS)

    An, M.; Feng, M.

    2013-12-01

    Computing the spatial resolution of a solution is nontrivial and often more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic studies, almost uniformly neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be given indicatively via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices can provide resolution length information, and the computation of a resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be determined directly via a simple one-parameter nonlinear inversion performed on a limited number of pairs of random synthetic models and their inverse solutions. The whole procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the inversion technique used to obtain the solution. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.

  2. Real-Time Wing-Vortex and Pressure Distribution Estimation on Wings Via Displacements and Strains in Unsteady and Transitional Flight Conditions

    DTIC Science & Technology

    2016-09-07

    approach in co-simulation with fluid-dynamics solvers is used. An original variational formulation is developed for the inverse problem of...by the inverse solution meshing. The same approach is used to map the structural and fluid interface kinematics and loads during the fluid-structure...co-simulation. The inverse analysis is verified by reconstructing the deformed solution obtained with a corresponding direct formulation, based on

  3. Determination of thermophysical characteristics of solid materials by electrical modelling of the solutions to the inverse problems in nonsteady heat conduction

    NASA Technical Reports Server (NTRS)

    Kozdoba, L. A.; Krivoshei, F. A.

    1985-01-01

    The solution of the inverse problem of nonsteady heat conduction is discussed, based on finding the coefficient of the heat conduction and the coefficient of specific volumetric heat capacity. These findings are included in the equation used for the electrical model of this phenomenon.

  4. The neural network approximation method for solving multidimensional nonlinear inverse problems of geophysics

    NASA Astrophysics Data System (ADS)

    Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.

    2017-07-01

    The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on a neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on using the calculated values of the continuity modulus of the inverse operator and its modifications, which determine the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail given the specified degree of ambiguity, with the total number of sought medium parameters of order n × 10^3. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The work of the method is illustrated by the example of the three-dimensional (3D) inversion of synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.

  5. Solving geosteering inverse problems by stochastic Hybrid Monte Carlo method

    DOE PAGES

    Shen, Qiuyang; Wu, Xuqing; Chen, Jiefu; ...

    2017-11-20

    The inverse problems arise in almost all fields of science where real-world parameters are extracted from a set of measured data. The geosteering inversion plays an essential role in the accurate prediction of oncoming strata as well as providing reliable guidance to adjust the borehole position on the fly to reach one or more geological targets. This mathematical treatment is not easy to solve, as it requires finding an optimum solution within a large solution space, especially when the problem is non-linear and non-convex. Nowadays, a new generation of logging-while-drilling (LWD) tools has emerged on the market. The so-called azimuthal resistivity LWD tools have azimuthal sensitivity and a large depth of investigation. Hence, the associated inverse problems become much more difficult, since the earth model to be inverted will have more detailed structures. Conventional deterministic methods are incapable of solving such a complicated inverse problem, as they suffer from the local minimum trap. Alternatively, stochastic optimizations are in general better at finding global optimal solutions and handling uncertainty quantification. In this article, we investigate the Hybrid Monte Carlo (HMC) based statistical inversion approach and suggest that HMC based inference is more efficient in dealing with the increased complexity and uncertainty faced by the geosteering problems.

  6. Numerical solution of 2D-vector tomography problem using the method of approximate inverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna

    2016-08-10

    We propose a numerical solution of reconstruction problem of a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good results of reconstruction of vector fields.

  7. Synthesis and characterization of Fe colloid catalysts in inverse micelle solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martino, A.; Stoker, M.; Hicks, M.

    1995-12-31

    Surfactant molecules, possessing a hydrophilic head group and a hydrophobic tail group, aggregate in various solvents to form structured solutions. In two-component mixtures of surfactant and organic solvents (e.g., toluene and alkanes), surfactants aggregate to form inverse micelles. Here, the hydrophilic head groups shield themselves by forming a polar core, and the hydrophobic tail groups are free to move about in the surrounding oleic phase. The formation of Fe clusters in inverse micelles was studied. Iron salts are solubilized within the polar interior of inverse micelles, and the addition of the reducing agent LiBH4 initiates a chemical reduction to produce monodisperse, nanometer-sized Fe-based particles. The reaction sequence is sustained by material exchange between inverse micelles. The surfactant interface provides a spatial constraint on the reaction volume, and reactions carried out in these micro-heterogeneous solutions produce colloidal-sized particles (10-100 Å) stabilized in solution against flocculation by the surfactant. The clusters were characterized with respect to size with transmission electron microscopy (TEM) and with respect to chemical composition with Mossbauer spectroscopy, electron diffraction, and x-ray photoelectron spectroscopy (XPS). In addition, these iron-based clusters were tested for catalytic activity in a model hydrogenolysis reaction. The hydrogenolysis of naphthyl bibenzyl methane was used as a model for coal pyrolysis.

  8. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
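    The alternation described above, separate component inversions followed by a multiplier update that pulls the component models together, has the same structure as a scaled consensus ADMM loop; the sketch below is a hypothetical interface (the subproblem solvers are assumed to minimize their own misfit plus a quadratic penalty toward a supplied target model), not the authors' code.

    ```python
    import numpy as np

    def consensus_inversion(subproblem_solvers, m0, n_outer=50, rho=1.0):
        """Augmented-Lagrangian (consensus) decomposition sketch.

        subproblem_solvers : list of callables; solver(target) returns the model
            minimizing that data subset's misfit + (rho/2)*||m - target||^2.
        m0 : initial common model."""
        m_common = np.asarray(m0, float)
        K = len(subproblem_solvers)
        multipliers = [np.zeros_like(m_common) for _ in range(K)]
        for _ in range(n_outer):
            # solve each component inverse problem around the current common model
            models = [solve(m_common - multipliers[k] / rho)
                      for k, solve in enumerate(subproblem_solvers)]
            # update the common model as the average of the shifted component models
            m_common = np.mean([models[k] + multipliers[k] / rho for k in range(K)],
                               axis=0)
            # update the Lagrange multipliers to steer components toward consensus
            for k in range(K):
                multipliers[k] += rho * (models[k] - m_common)
        return m_common
    ```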

  9. An efficient approach for inverse kinematics and redundancy resolution scheme of hyper-redundant manipulators

    NASA Astrophysics Data System (ADS)

    Chembuly, V. V. M. J. Satish; Voruganti, Hari Kumar

    2018-04-01

    Hyper-redundant manipulators have a larger number of degrees of freedom (DOF) than required to perform a given task. The additional DOF of these manipulators provide the flexibility to work in highly cluttered environments and in constrained workspaces. Inverse kinematics (IK) of hyper-redundant manipulators is complicated due to the large number of DOF, and these manipulators have multiple IK solutions. The redundancy gives a choice of selecting the best solution out of multiple solutions based on certain criteria such as obstacle avoidance, singularity avoidance, joint limit avoidance and joint torque minimization. This paper focuses on the IK solution and redundancy resolution of hyper-redundant manipulators using a classical optimization approach. Joint positions are computed by optimizing various criteria for serial hyper-redundant manipulators while traversing different paths in the workspace. Several cases are addressed using this scheme to obtain the inverse kinematic solution while optimizing criteria such as obstacle avoidance and joint limit avoidance.

  10. A trade-off solution between model resolution and covariance in surface-wave inversion

    USGS Publications Warehouse

    Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.

    2010-01-01

    Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
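    For a damped least-squares solution, the model resolution and unit covariance matrices that enter this trade-off take the standard forms sketched below; the code is an illustrative sketch of those textbook expressions with an assumed regularization parameter lam, not the authors' surface-wave implementation.

    ```python
    import numpy as np

    def resolution_and_covariance(A, lam):
        """Model resolution R and unit model covariance C for the damped
        least-squares solution m = (A^T A + lam^2 I)^-1 A^T d."""
        AtA = A.T @ A
        damped_inv = np.linalg.inv(AtA + lam**2 * np.eye(AtA.shape[0]))
        R = damped_inv @ AtA                 # model resolution matrix
        C = damped_inv @ AtA @ damped_inv    # unit model covariance matrix
        return R, C
    ```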

  11. Inverse opal photonic crystal of chalcogenide glass by solution processing.

    PubMed

    Kohoutek, Tomas; Orava, Jiri; Sawada, Tsutomu; Fudouzi, Hiroshi

    2011-01-15

    Chalcogenide opal and inverse opal photonic crystals were successfully fabricated by a low-cost and low-temperature solution-based process that is well developed in polymer film processing. Highly ordered silica colloidal crystal films were successfully infilled with a nano-colloidal solution of the high-refractive-index As(30)S(70) chalcogenide glass by using a spin-coating method. The silica/As-S opal film was etched in HF acid to dissolve the silica opal template and fabricate the inverse opal As-S photonic crystal. Both the infilled silica/As-S opal film (Δn ~ 0.84 near λ=770 nm) and the inverse opal As-S photonic structure (Δn ~ 1.26 near λ=660 nm) had significantly enhanced reflectivity values and wider photonic bandgaps in comparison with the silica opal film template (Δn ~ 0.434 near λ=600 nm). The key aspects of opal film preparation by spin-coating of a nano-colloidal chalcogenide glass solution are discussed. The solution-fabricated "inorganic polymer" opal and inverse opal structures exceed the photonic properties of silica or any organic polymer opal film. The fabricated photonic structures are proposed for designing novel flexible colloidal crystal laser devices, photonic waveguides and chemical sensors. Copyright © 2010 Elsevier Inc. All rights reserved.

  12. Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.

    2010-12-01

    Almost all geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion, including non-approximated physics and solving for probability distribution functions (pdfs) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized, leading to efficiently solved (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically, incorporating any available prior information, using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully probabilistic global tomography model of the Earth’s crust and mantle, and second, inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.

  13. Improved resistivity imaging of groundwater solute plumes using POD-based inversion

    NASA Astrophysics Data System (ADS)

    Oware, E. K.; Moysey, S. M.; Khan, T.

    2012-12-01

    We propose a new approach for enforcing physics-based regularization in electrical resistivity imaging (ERI) problems. The approach utilizes a basis-constrained inversion where an optimal set of basis vectors is extracted from training data by Proper Orthogonal Decomposition (POD). The key aspect of the approach is that Monte Carlo simulation of flow and transport is used to generate a training dataset, thereby intrinsically capturing the physics of the underlying flow and transport models in a non-parametric form. POD allows for these training data to be projected onto a subspace of the original domain, resulting in the extraction of a basis for the inversion that captures characteristics of the groundwater flow and transport system, while simultaneously allowing for dimensionality reduction of the original problem in the projected space. We use two different synthetic transport scenarios in heterogeneous media to illustrate how the POD-based inversion compares with standard Tikhonov and coupled inversion. The first scenario had a single source zone leading to a unimodal solute plume (synthetic #1), whereas the second scenario had two source zones that produced a bimodal plume (synthetic #2). For both coupled inversion and the POD approach, the conceptual flow and transport model used considered only a single source zone for both scenarios. Results were compared based on multiple metrics (concentration root-mean-square error (RMSE), peak concentration, and total solute mass). In addition, results for POD inversion based on 3 different data densities (120, 300, and 560 data points) and varying numbers of selected basis images (100, 300, and 500) were compared. For synthetic #1, we found that all three methods provided qualitatively reasonable reproduction of the true plume. Quantitatively, the POD inversion performed best overall for each metric considered. Moreover, since synthetic #1 was consistent with the conceptual transport model, a small number of basis vectors (100) contained enough a priori information to constrain the inversion. Increasing the amount of data or the number of selected basis images did not translate into significant improvement in imaging results. For synthetic #2, the RMSE and error in total mass were lowest for the POD inversion. However, the peak concentration was significantly overestimated by the POD approach. Regardless, the POD-based inversion was the only technique that could capture the bimodality of the plume in the reconstructed image, thus providing critical information that could be used to reconceptualize the transport problem. We also found that, in the case of synthetic #2, increasing the number of resistivity measurements and the number of selected basis vectors allowed for significant improvements in the reconstructed images.
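    The basis-extraction step, turning Monte Carlo flow-and-transport realizations into a small set of vectors for the constrained inversion, is essentially an SVD of the training snapshots; a minimal sketch with assumed array conventions follows.

    ```python
    import numpy as np

    def pod_basis(snapshots, n_modes):
        """Extract a POD basis from training realizations.

        snapshots : (n_cells, n_realizations) array; each column is a flattened
            simulated solute plume from the Monte Carlo transport runs.
        Returns the snapshot mean and the leading n_modes basis vectors; the
        inversion is then carried out for coefficients of these vectors."""
        X = np.asarray(snapshots, float)
        mean = X.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(X - mean, full_matrices=False)
        return mean.ravel(), U[:, :n_modes]
    ```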

  14. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest-neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach makes it possible to remove posterior bias and obtain a more realistic characterization of uncertainty.

  15. An iterative solver for the 3D Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Belonosov, Mikhail; Dmitriev, Maxim; Kostin, Victor; Neklyudov, Dmitry; Tcheverda, Vladimir

    2017-09-01

    We develop a frequency-domain iterative solver for numerical simulation of acoustic waves in 3D heterogeneous media. It is based on the application of a unique preconditioner to the Helmholtz equation that ensures convergence for Krylov subspace iteration methods. Effective inversion of the preconditioner involves the Fast Fourier Transform (FFT) and numerical solution of a series of boundary value problems for ordinary differential equations. Matrix-by-vector multiplication for iterative inversion of the preconditioned matrix involves inversion of the preconditioner and pointwise multiplication of grid functions. Our solver has been verified by benchmarking against exact solutions and a time-domain solver.

  16. Final Technical Report for "Applied Mathematics Research: Simulation Based Optimization and Application to Electromagnetic Inverse Problems"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haber, Eldad

    2014-03-17

    The focus of the research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at the 0th frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results of the research were also applied to the problem of image registration.

  17. Kinematics and dynamics of robotic systems with multiple closed loops

    NASA Astrophysics Data System (ADS)

    Zhang, Chang-De

    The kinematics and dynamics of robotic systems with multiple closed loops, such as Stewart platforms, walking machines, and hybrid manipulators, are studied. In the study of kinematics, focus is on the closed-form solutions of the forward position analysis of different parallel systems. A closed-form solution means that the solution is expressed as a polynomial in one variable. If the order of the polynomial is less than or equal to four, the solution has analytical closed-form. First, the conditions of obtaining analytical closed-form solutions are studied. For a Stewart platform, the condition is found to be that one rotational degree of freedom of the output link is decoupled from the other five. Based on this condition, a class of Stewart platforms which has analytical closed-form solution is formulated. Conditions of analytical closed-form solution for other parallel systems are also studied. Closed-form solutions of forward kinematics for walking machines and multi-fingered grippers are then studied. For a parallel system with three three-degree-of-freedom subchains, there are 84 possible ways to select six independent joints among nine joints. These 84 ways can be classified into three categories: Category 3:3:0, Category 3:2:1, and Category 2:2:2. It is shown that the first category has no solutions; the solutions of the second category have analytical closed-form; and the solutions of the last category are higher order polynomials. The study is then extended to a nearly general Stewart platform. The solution is a 20th order polynomial and the Stewart platform has a maximum of 40 possible configurations. Also, the study is extended to a new class of hybrid manipulators which consists of two serially connected parallel mechanisms. In the study of dynamics, a computationally efficient method for inverse dynamics of manipulators based on the virtual work principle is developed. Although this method is comparable with the recursive Newton-Euler method for serial manipulators, its advantage is more noteworthy when applied to parallel systems. An approach of inverse dynamics of a walking machine is also developed, which includes inverse dynamic modeling, foot force distribution, and joint force/torque allocation.

  18. Parsimony and goodness-of-fit in multi-dimensional NMR inversion

    NASA Astrophysics Data System (ADS)

    Babak, Petro; Kryuchkov, Sergey; Kantzas, Apostolos

    2017-01-01

    Multi-dimensional nuclear magnetic resonance (NMR) experiments are often used for study of molecular structure and dynamics of matter in core analysis and reservoir evaluation. Industrial applications of multi-dimensional NMR involve a high-dimensional measurement dataset with complicated correlation structure and require rapid and stable inversion algorithms from the time domain to the relaxation rate and/or diffusion domains. In practice, applying existing inverse algorithms with a large number of parameter values leads to an infinite number of solutions with a reasonable fit to the NMR data. The interpretation of such variability of multiple solutions and selection of the most appropriate solution could be a very complex problem. In most cases the characteristics of materials have sparse signatures, and investigators would like to distinguish the most significant relaxation and diffusion values of the materials. To produce an easy to interpret and unique NMR distribution with the finite number of the principal parameter values, we introduce a new method for NMR inversion. The method is constructed based on the trade-off between the conventional goodness-of-fit approach to multivariate data and the principle of parsimony guaranteeing inversion with the least number of parameter values. We suggest performing the inversion of NMR data using the forward stepwise regression selection algorithm. To account for the trade-off between goodness-of-fit and parsimony, the objective function is selected based on Akaike Information Criterion (AIC). The performance of the developed multi-dimensional NMR inversion method and its comparison with conventional methods are illustrated using real data for samples with bitumen, water and clay.
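    The trade-off used to stop the stepwise selection can be illustrated with the Gaussian-error form of the Akaike Information Criterion; the sketch below is the standard formula (up to an additive constant), with names chosen for illustration.

    ```python
    import numpy as np

    def akaike_criterion(residuals, n_components):
        """AIC for a least-squares fit with Gaussian errors: lower is better.
        Adding relaxation/diffusion components lowers the misfit term but is
        penalized by the 2 * n_components parsimony term."""
        residuals = np.asarray(residuals, float)
        n = residuals.size
        rss = float(residuals @ residuals)
        return n * np.log(rss / n) + 2 * n_components
    ```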

  19. Approximate solutions of acoustic 3D integral equation and their application to seismic modeling and full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.

    2017-10-01

    Over the recent decades, a number of fast approximate solutions of the Lippmann-Schwinger equation, which are more accurate than the classic Born and Rytov approximations, were proposed in the field of electromagnetic modeling. Those developments can be naturally extended to acoustic and elastic fields; however, until recently, they were almost unknown in seismology. This paper presents several solutions of this kind applied to acoustic modeling for both lossy and lossless media. We evaluate the numerical merits of those methods and provide an estimate of their numerical complexity. In our numerical realization we use a matrix-free implementation of the corresponding integral operator. We study the accuracy of those approximate solutions and demonstrate that the quasi-analytical approximation is more accurate than the Born approximation. Further, we apply the quasi-analytical approximation to the solution of the inverse problem. It is demonstrated that this approach improves the estimation of the data gradient compared to the Born approximation. The developed inversion algorithm is based on conjugate-gradient type optimization. A numerical model study demonstrates that the quasi-analytical solution significantly reduces the computation time of seismic full-waveform inversion. We also show how the quasi-analytical approximation can be extended to the case of elastic wavefields.

  20. Recursive inversion of externally defined linear systems

    NASA Technical Reports Server (NTRS)

    Bach, Ralph E., Jr.; Baram, Yoram

    1988-01-01

    The approximate inversion of an internally unknown linear system, given by its impulse response sequence, by an inverse system having a finite impulse response, is considered. The recursive least squares procedure is shown to have an exact initialization, based on the triangular Toeplitz structure of the matrix involved. The proposed approach also suggests solutions to the problems of system identification and compensation.
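
    The least-squares problem underlying this record can be sketched as follows, assuming the impulse response is known over a finite window. The lower-triangular Toeplitz convolution matrix is the structure the paper exploits for an exact recursive initialization, whereas the batch solve below (and the function and variable names) is only an illustrative stand-in.

        import numpy as np
        from scipy.linalg import toeplitz

        def fir_inverse(h, n_taps):
            """Least-squares FIR approximation g to the inverse of a system with
            impulse response h, so that h convolved with g is close to a unit impulse."""
            h = np.asarray(h, dtype=float)
            n = len(h)
            # Lower-triangular Toeplitz convolution matrix: H @ g gives the first n samples of h * g
            H = toeplitz(h, np.r_[h[0], np.zeros(n_taps - 1)])
            d = np.zeros(n)
            d[0] = 1.0                      # target output: a delay-free unit impulse
            g, *_ = np.linalg.lstsq(H, d, rcond=None)
            return g

        # Example: fir_inverse([1.0, 0.5, 0.25, 0.125], n_taps=3)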

  1. Space structures insulating material's thermophysical and radiation properties estimation

    NASA Astrophysics Data System (ADS)

    Nenarokomov, A. V.; Alifanov, O. M.; Titov, D. M.

    2007-11-01

    In many practical situations in aerospace technology it is impossible to measure directly such properties of analyzed materials (for example, composites) as thermal and radiation characteristics. The only way that can often be used to overcome these difficulties is indirect measurement. This type of measurement is usually formulated as the solution of inverse heat transfer problems. Such problems are ill-posed in the mathematical sense, and their main feature is instability of the solution. That is why special regularizing methods are needed to solve them. Experimental methods that identify mathematical models of heat transfer by solving inverse problems are among the most effective modern approaches. The objective of this paper is to estimate thermal and radiation properties of advanced materials using an approach based on inverse methods.

  2. Improved Genetic Algorithm Based on the Cooperation of Elite and Inverse-elite

    NASA Astrophysics Data System (ADS)

    Kanakubo, Masaaki; Hagiwara, Masafumi

    In this paper, we propose an improved genetic algorithm based on the combination of the Bee system and Inverse-elitism, both of which are effective strategies for the improvement of GA. In the Bee system, in the beginning, each chromosome tries to find a good solution individually, acting as a global search. When some chromosome is regarded as a superior one, the other chromosomes search around it. However, since the chromosomes for the global search are generated randomly, the Bee system lacks global search ability. On the other hand, in Inverse-elitism, an inverse-elite whose gene values are reversed from the corresponding elite is produced. This strategy greatly contributes to the diversification of chromosomes, but it lacks local search ability. In the proposed method, Inverse-elitism with the Pseudo-simplex method is employed for the global search of the Bee system in order to strengthen its global search ability. In addition, it also has strong local search ability. The proposed method has synergistic effects of the three strategies. We confirmed the validity and superior performance of the proposed method by computer simulations.
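
    One plausible reading of the inverse-elite operator, sketched for a real-valued encoding on a known range; the abstract does not specify the encoding, so the reflection rule, range, and names below are assumptions for illustration.

        import numpy as np

        def inverse_elite(elite, low=0.0, high=1.0):
            # Reflect every gene of the elite about the centre of its range, producing
            # an "inverse-elite" that lies in the opposite region of the search space.
            return (low + high) - np.asarray(elite, dtype=float)

        # Example: on [0, 1], the elite [0.9, 0.1, 0.7] maps to the inverse-elite [0.1, 0.9, 0.3].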

  3. Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media

    NASA Astrophysics Data System (ADS)

    Jakobsen, Morten; Tveit, Svenn

    2018-05-01

    We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, which is based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which there already exist efficient (regularized) solution methods. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the Distorted Born Iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which could be an advantage when performing fast-repeat modelling and time-lapse inversion. The T-matrix is also compatible with the use of modern renormalization methods that can potentially help us to reduce the sensitivity of the CSEM inversion results to the starting model. To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data associated with 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBIT method for inversion of CSEM data in anisotropic media is both accurate and efficient.

  4. Recursive inversion of externally defined linear systems by FIR filters

    NASA Technical Reports Server (NTRS)

    Bach, Ralph E., Jr.; Baram, Yoram

    1989-01-01

    The approximate inversion of an internally unknown linear system, given by its impulse response sequence, by an inverse system having a finite impulse response, is considered. The recursive least-squares procedure is shown to have an exact initialization, based on the triangular Toeplitz structure of the matrix involved. The proposed approach also suggests solutions to the problem of system identification and compensation.

  5. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.

    2018-06-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov-chain-Monte-Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution; or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
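
    A compact sketch of the residual-correction idea described above: during sampling, the model-error component of the residual is estimated by projecting onto an orthogonal basis built from the K nearest entries of a growing model-error dictionary, and then subtracted. The nearest-neighbour metric, the SVD-based orthogonalization, and all names are assumptions for illustration, not the authors' code.

        import numpy as np

        def remove_model_error(residual, error_dict, proposals, current, k=10):
            """Subtract the model-error component of a residual by projecting it onto an
            orthogonal basis built from the k nearest model-error dictionary entries.

            residual   : (n_data,) observed data minus approximate forward response
            error_dict : (n_entries, n_data) stored model-error realizations
            proposals  : (n_entries, n_param) proposals at which the entries were computed
            current    : (n_param,) current model proposal
            """
            if len(error_dict) == 0:
                return residual
            dist = np.linalg.norm(proposals - current, axis=1)
            idx = np.argsort(dist)[:k]
            # Orthogonal basis spanning the selected model-error realizations
            U, s, _ = np.linalg.svd(error_dict[idx].T, full_matrices=False)
            basis = U[:, s > 1e-12 * s[0]]
            return residual - basis @ (basis.T @ residual)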

  6. Computational structures for robotic computations

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chang, P. R.

    1987-01-01

    The computational problems of inverse kinematics and inverse dynamics of robot manipulators are discussed, taking advantage of parallelism and pipelining architectures. For the computation of the inverse kinematic position solution, a maximum pipelined CORDIC architecture has been designed based on a functional decomposition of the closed-form joint equations. For the inverse dynamics computation, an efficient p-fold parallel algorithm that overcomes the recurrence problem of the Newton-Euler equations of motion to achieve the time lower bound of O(log2 n) has also been developed.

  7. On epicardial potential reconstruction using regularization schemes with the L1-norm data term.

    PubMed

    Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart

    2011-01-07

    The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing an L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noise was considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms of the normal derivative constraint, labelled as L1TV and L1L2) were compared with the L2-norm data terms (Tikhonov with zero-order and normal derivative constraints, labelled as ZOT and FOT, and the total variation method labelled as L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have lower relative error values. However, when larger noise occurred in some electrodes (for example, signal loss during measurement), the L1TV and L1L2 methods can obtain more accurate EPs in a robust manner. Therefore, the L1-norm data term-based solutions are generally less perturbed by measurement noise, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
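
    An iteratively reweighted least-squares loop of the kind referred to above, written for an L1 data term with an L2 penalty on a derivative-type operator (in the spirit of the L1L2 scheme). The weight update, the damping constant eps, and all names are illustrative assumptions rather than the published algorithm.

        import numpy as np

        def irls_l1_data(A, b, L, lam=1e-2, n_iter=30, eps=1e-6):
            """Minimize ||A x - b||_1 + lam * ||L x||_2^2 by iteratively reweighted
            least squares: the L1 data term is handled through weights updated
            from the current residual."""
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                r = A @ x - b
                w = 1.0 / np.sqrt(r ** 2 + eps)        # IRLS weights approximating the L1 norm
                Aw = A * w[:, None]                    # row-weighted system matrix
                lhs = A.T @ Aw + lam * (L.T @ L)
                rhs = Aw.T @ b
                x = np.linalg.solve(lhs, rhs)
            return x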

  8. NLSE: Parameter-Based Inversion Algorithm

    NASA Astrophysics Data System (ADS)

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.

    Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
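
    Since NLSE is built on Gauss-Newton nonlinear least-squares estimation, a generic Gauss-Newton update is sketched below; the forward model, its Jacobian, and the stopping rule are placeholders for illustration, not the NLSE implementation described in the book.

        import numpy as np

        def gauss_newton(forward, jacobian, p0, d_obs, n_iter=20, tol=1e-8):
            """Generic Gauss-Newton iteration for min_p ||forward(p) - d_obs||_2^2.

            forward  : callable, p -> predicted data vector
            jacobian : callable, p -> Jacobian matrix of forward at p
            """
            p = np.array(p0, dtype=float)
            for _ in range(n_iter):
                r = forward(p) - d_obs
                J = jacobian(p)
                # Gauss-Newton step: solve J dp ~ -r in the least-squares sense
                dp, *_ = np.linalg.lstsq(J, -r, rcond=None)
                p = p + dp
                if np.linalg.norm(dp) < tol:
                    break
            return p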

  9. Forward and inverse solutions for Risley prism based on the Denavit-Hartenberg methodology

    NASA Astrophysics Data System (ADS)

    Beltran-Gonzalez, A.; Garcia-Torales, G.; Strojnik, M.; Flores, J. L.; Garcia-Luna, J. L.

    2017-08-01

    In this work, forward and inverse solutions for a two-element Risley prism used in beam pointing and scanning systems are developed. A more efficient and faster algorithm is proposed by treating the Risley prism system as analogous to a robotic system with two degrees of freedom. This system of equations controls each Risley prism individually, as in a planar manipulator arm with two links. In order to evaluate the algorithm we implement it in a pointing system and perform popular routines such as linear, spiral and loop traces. Using the forward and inverse solutions for the two-element Risley prism it is also possible to point at coordinates specified by the user, provided they lie within the pointer's work area. Experimental results are shown as validation of our proposal.
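
    The two-degree-of-freedom analogy invoked above maps onto the standard closed-form inverse kinematics of a planar two-link arm; a textbook version of that solution is sketched below. The link lengths, angle conventions, and elbow-up/down choice are assumptions for illustration, not the paper's prism equations.

        import numpy as np

        def two_link_ik(x, y, l1, l2, elbow_up=True):
            """Closed-form inverse kinematics of a planar two-link arm reaching (x, y);
            returns the joint angles (theta1, theta2) in radians."""
            c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)   # law of cosines
            if not -1.0 <= c2 <= 1.0:
                raise ValueError("target lies outside the workspace")
            s2 = np.sqrt(1.0 - c2 * c2) * (1.0 if elbow_up else -1.0)
            theta2 = np.arctan2(s2, c2)
            theta1 = np.arctan2(y, x) - np.arctan2(l2 * s2, l1 + l2 * c2)
            return theta1, theta2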

  10. Solution of Inverse Kinematics for 6R Robot Manipulators With Offset Wrist Based on Geometric Algebra.

    PubMed

    Fu, Zhongtao; Yang, Wenyu; Yang, Zhen

    2013-08-01

    In this paper, we present an efficient method based on geometric algebra for computing the solutions to the inverse kinematics problem (IKP) of 6R robot manipulators with an offset wrist. Because the inverse kinematics problem is difficult to solve when the kinematics equations of these manipulators are complex, highly nonlinear, coupled, and admit multiple solutions, we apply the theory of Geometric Algebra to the kinematic modeling of 6R robot manipulators, generate closed-form kinematics equations in a simple way, reformulate the problem as a generalized eigenvalue problem using a symbolic elimination technique, and then obtain the 16 solutions. Finally, a spray-painting robot of this manipulator type is used as an implementation example to demonstrate the effectiveness and real-time performance of this method. The experimental results show that this method has a large advantage over classical methods in geometric intuition, computational efficiency and real-time performance, and that it can be directly extended to all serial robot manipulators and fully automated, providing a new tool for the analysis and application of general robot manipulators.

  11. Multiple estimation channel decoupling and optimization method based on inverse system

    NASA Astrophysics Data System (ADS)

    Wu, Peng; Mu, Rongjun; Zhang, Xin; Deng, Yanpeng

    2018-03-01

    This paper addresses the autonomous navigation requirements of an intelligent deformation missile. Based on the missile's dynamics and kinematics modeling, the navigation subsystem solution method, and error modeling, it focuses on the corresponding data fusion and decision fusion technology and decouples the sensitive channels of the filter input through an inverse system designed from the dynamics, reducing the influence of sudden changes in the measurement information on the filter input. A series of simulation experiments verified the feasibility and effectiveness of the inverse-system decoupling algorithm.

  12. Inverse scattering transform for the time dependent Schrödinger equation with applications to the KPI equation

    NASA Astrophysics Data System (ADS)

    Zhou, Xin

    1990-03-01

    For the direct-inverse scattering transform of the time dependent Schrödinger equation, rigorous results are obtained based on an operator-triangular-factorization approach. By viewing the equation as a first order operator equation, results similar to those for the first order n x n matrix system are obtained. The nonlocal Riemann-Hilbert problem for inverse scattering is shown to have a solution.

  13. Developing a Near Real-time System for Earthquake Slip Distribution Inversion

    NASA Astrophysics Data System (ADS)

    Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen

    2016-04-01

    Advances in observational and computational seismology in the past two decades have enabled completely automatic and real-time determinations of the focal mechanisms of earthquake point sources. However, seismic radiation from moderate and large earthquakes often exhibits a strong finite-source directivity effect, which is critically important for accurate ground motion estimation and earthquake damage assessment. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the fault planes identified in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green tensors (SGT) is established for a 3D structural model with realistic surface topography. The SGT database enables rapid calculation of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw~6.0) in Taiwan and in mainland China. Our results show that the 3D velocity model provides better waveform fitting with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determination of finite-source solutions for seismic hazard mitigation purposes.

  14. Improved source inversion from joint measurements of translational and rotational ground motions

    NASA Astrophysics Data System (ADS)

    Donner, S.; Bernauer, M.; Reinwald, M.; Hadziioannou, C.; Igel, H.

    2017-12-01

    Waveform inversion for seismic point (moment tensor) and kinematic sources is a standard procedure. However, especially at local and regional distances, a lack of appropriate velocity models, the sparsity of station networks, or a low signal-to-noise ratio combined with more complex waveforms hampers the successful retrieval of reliable source solutions. We assess the potential of rotational ground motion recordings to increase the resolution power and reduce non-uniqueness for point and kinematic source solutions. Based on synthetic waveform data, we perform a Bayesian (i.e. probabilistic) inversion. Thus, we avoid the subjective selection of the most reliable solution according to the lowest misfit or some other constructed criterion. In addition, we obtain unbiased measures of resolution and possible trade-offs. Testing different earthquake mechanisms and scenarios, we show that the resolution of the source solutions can be improved significantly. Depth-dependent components in particular show significant improvement. In addition to synthetic data from full station networks, we also tested sparse-network and single-station cases.

  15. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    NASA Astrophysics Data System (ADS)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as a linear inverse problem. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed to solve them and generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the arranged problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: (1) velocity inversion from (synthetic) seismic data based on the Born approximation, and (2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
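
    The second experiment relies on the Dix formula, which can be written directly; the unregularized version below illustrates why the problem is ill-conditioned. The array layout and names are assumptions for illustration.

        import numpy as np

        def dix_interval_velocities(t, v_rms):
            """Dix formula: interval velocity between two-way times t[i-1] and t[i],
            v_int = sqrt((t[i]*v[i]**2 - t[i-1]*v[i-1]**2) / (t[i] - t[i-1]))."""
            t = np.asarray(t, dtype=float)
            v = np.asarray(v_rms, dtype=float)
            num = t[1:] * v[1:] ** 2 - t[:-1] * v[:-1] ** 2
            den = t[1:] - t[:-1]
            return np.sqrt(num / den)    # negative arguments signal inconsistent input data

        # Example: dix_interval_velocities([0.5, 1.0, 1.6], [1500.0, 1800.0, 2100.0])

    Small noise in the RMS velocities can make the numerator negative or the ratio very large, which is the instability that motivates a regularized inversion such as the TV-constrained scheme above.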

  16. Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow

    NASA Astrophysics Data System (ADS)

    Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar

    2014-09-01

    We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. The adjoint (linearized) Stokes equations, which are characterized by a 4th-order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton's method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.

  17. Effects of spatially variable resolution on field-scale estimates of tracer concentration from electrical inversions using Archie's law

    USGS Publications Warehouse

    Singha, Kamini; Gorelick, Steven M.

    2006-01-01

    Two important mechanisms affect our ability to estimate solute concentrations quantitatively from the inversion of field-scale electrical resistivity tomography (ERT) data: (1) the spatially variable physical processes that govern the flow of current as well as the variation of physical properties in space and (2) the overparameterization of inverse models, which requires the imposition of a smoothing constraint (regularization) to facilitate convergence of the inverse solution. Based on analyses of field and synthetic data, we find that the ability of ERT to recover the 3D shape and magnitudes of a migrating conductive target is spatially variable. Additionally, the application of Archie's law to tomograms from field ERT data produced solute concentrations that are consistently less than 10% of point measurements collected in the field and estimated from transport modeling. Estimates of concentration from ERT using Archie's law only fit measured solute concentrations if the apparent formation factor is varied with space and time and allowed to take on unreasonably high values. Our analysis suggests that the inability to find a single petrophysical relation in space and time between concentration and electrical resistivity is largely an effect of two properties of ERT surveys: (1) decreased sensitivity of ERT to detect the target plume with increasing distance from the electrodes and (2) the smoothing imprint of regularization used in inversion.
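
    A minimal illustration of the petrophysical step discussed above: converting an ERT-derived bulk resistivity to fluid conductivity through Archie's law and then to concentration through a linear calibration. The cementation exponent, tortuosity factor, and calibration constant below are placeholder values, and surface conduction is ignored.

        def concentration_from_bulk_resistivity(rho_bulk, porosity, a=1.0, m=1.5, calib=0.15):
            """Estimate solute concentration from an ERT-derived bulk resistivity.

            Archie's law without surface conduction: sigma_bulk = sigma_fluid * porosity**m / a,
            so sigma_fluid = a * sigma_bulk / porosity**m.  'calib' is a site-specific linear
            factor converting fluid conductivity to concentration; all values are placeholders.
            """
            sigma_bulk = 1.0 / rho_bulk
            sigma_fluid = a * sigma_bulk / porosity ** m
            return calib * sigma_fluid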

  18. Significance of the model considering mixed grain-size for inverse analysis of turbidites

    NASA Astrophysics Data System (ADS)

    Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.

    2016-12-01

    A method for inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of catastrophic events from field observations has been important for sedimentological research. For instance, there are various inverse analyses to estimate hydraulic conditions from topography observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007) and ancient turbidites (Falcini et al., 2009). These inverse analyses need forward models, and most turbidity current models employ particles of uniform grain size. Turbidity currents, however, are best characterized by the variation of their grain-size distribution. Though there are numerical models with mixed grain-size particles, they are difficult to apply to natural examples because of their computational cost (Lesshaft et al., 2011). Here we extend a turbidity current model based on the non-steady 1D shallow-water equations to mixed grain-size particles at low computational cost and apply the model to inverse analysis. In this study, we compared two forward models considering uniform and mixed grain-size particles, respectively. We adopted an inverse analysis based on the Simplex method that optimizes the initial conditions (thickness, depth-averaged velocity and depth-averaged volumetric concentration of a turbidity current) with a multi-point start, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that inverse analysis using the mixed grain-size model recovers the known initial condition of the reference data even when the optimization starts far from the true solution, whereas inverse analysis using the uniform grain-size model requires starting parameters within a quite narrow range near the solution. The uniform grain-size model often reaches a local optimum that is significantly different from the true solution. In conclusion, we propose a method of optimization based on the model considering mixed grain-size particles, and show its application to examples of turbidites in the Kiyosumi Formation, Boso Peninsula, Japan.

  19. ANNIT - An Efficient Inversion Algorithm based on Prediction Principles

    NASA Astrophysics Data System (ADS)

    Růžek, B.; Kolář, P.

    2009-04-01

    The solution of inverse problems is an important task in geophysics. The amount of data is continuously increasing, modeling methods are being improved, and computer facilities are making great technical progress. Therefore the development of new and efficient algorithms and computer codes for both forward and inverse modeling remains relevant. ANNIT contributes to this effort as a tool for the efficient solution of sets of non-linear equations. Typical geophysical problems are based on a parametric approach. The system is characterized by a vector of parameters p, and the response of the system is characterized by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex: the inverse mapping p=G(d) is only exceptionally available in analytical or closed form, and in general it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D, M, respectively, are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G does exist; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted by using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, (c) linear prediction (also known as "Kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models. Archive models are re-used in a suitable way and thus the number of forward evaluations is minimized. ANNIT is now implemented both in MATLAB and SCILAB. Numerical tests show good performance of the algorithm. Both versions and documentation are available on the Internet and anybody can download them. The goal of this presentation is to offer the algorithm and computer codes to anybody interested in the solution of inverse problems.

  20. Inverse analysis and regularisation in conditional source-term estimation modelling

    NASA Astrophysics Data System (ADS)

    Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.

    2014-05-01

    Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the current inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture fraction grid for calculating the mixture fraction PDF. The magnitude of the perturbations is estimated using a box filter and the uncertainty in the solution is determined based on the width of the credible intervals. The width of the credible intervals is significantly reduced with the inclusion of a smoothing prior and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals are not dependent upon a previous solution and better predict characteristics for higher mixture fraction values. These characteristics make spatial smoothing a promising alternative method for recovering a solution from the CSE inversion process.
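
    For reference, the two Tikhonov variants compared above reduce, for a discretized kernel A and data b, to penalized least-squares solves; the sketch below shows a generic zeroth-order and a first-order (difference-operator) version. The normal-equation solve and the choice of regularisation parameter are textbook illustrations, not the authors' Bayesian formulation.

        import numpy as np

        def tikhonov_zeroth_order(A, b, lam):
            # min_x ||A x - b||^2 + lam^2 ||x||^2, solved via the regularized normal equations
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

        def tikhonov_first_order(A, b, lam):
            # First-order (smoothing) variant: penalize differences between neighbouring unknowns
            n = A.shape[1]
            D = (np.eye(n) - np.eye(n, k=1))[:-1]      # forward-difference operator
            return np.linalg.solve(A.T @ A + lam ** 2 * (D.T @ D), A.T @ b)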

  1. Using sequential self-calibration method to identify conductivity distribution: Conditioning on tracer test data

    USGS Publications Warehouse

    Hu, B.X.; He, C.

    2008-01-01

    An iterative inverse method, the sequential self-calibration method, is developed for mapping the spatial distribution of a hydraulic conductivity field by conditioning on nonreactive tracer breakthrough curves. A streamline-based, semi-analytical simulator is adopted to simulate solute transport in a heterogeneous aquifer. The simulation is used as the forward modeling step. In this study, the hydraulic conductivity is assumed to be a deterministic or random variable. Within the framework of the streamline-based simulator, the efficient semi-analytical method is used to calculate sensitivity coefficients of the solute concentration with respect to the hydraulic conductivity variation. The calculated sensitivities account for spatial correlations between the solute concentration and parameters. The performance of the inverse method is assessed by two synthetic tracer tests conducted in an aquifer with a distinct spatial pattern of heterogeneity. The study results indicate that the developed iterative inverse method is able to identify and reproduce the large-scale heterogeneity pattern of the aquifer given appropriate observation wells in these synthetic cases. © International Association for Mathematical Geology 2008.

  2. Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.

    PubMed

    Ebert, M

    1997-12-01

    This is the final article in a three part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely, those which perform some form of ordered search of the solution space (mathematical programming), and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.

  3. Using machine learning to accelerate sampling-based inversion

    NASA Astrophysics Data System (ADS)

    Valentine, A. P.; Sambridge, M.

    2017-12-01

    In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods, such as the Neighbourhood Algorithm, and which bridges the gap between prior- and posterior-sampling frameworks.
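
    A compact sketch of the surrogate idea under simple assumptions: a Gaussian Process approximates a scalar output of the expensive forward step (for example, a data misfit), is queried inside the sampler, and falls back to the exact solver and refits whenever its own predictive uncertainty is too large. The use of scikit-learn's GaussianProcessRegressor, the refinement rule, and all thresholds and names are illustrative, not the authors' implementation.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        class ForwardSurrogate:
            """Cheap stand-in for an expensive forward evaluation (here a scalar misfit)."""

            def __init__(self, expensive_misfit, std_threshold=0.1, min_points=5):
                self.expensive_misfit = expensive_misfit
                self.std_threshold = std_threshold      # refine when the GP is too uncertain
                self.min_points = min_points
                self.X, self.y = [], []
                self.gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                                   normalize_y=True)

            def __call__(self, m):
                m = np.atleast_1d(np.asarray(m, dtype=float))
                if len(self.X) >= self.min_points:
                    mu, std = self.gp.predict(m[None, :], return_std=True)
                    if std[0] < self.std_threshold:
                        return float(mu[0])             # trust the surrogate here
                value = self.expensive_misfit(m)        # fall back to the exact (expensive) solve
                self.X.append(m)
                self.y.append(value)
                self.gp.fit(np.asarray(self.X), np.asarray(self.y))  # refine the approximation
                return value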

  4. Forward and inverse solutions for three-element Risley prism beam scanners.

    PubMed

    Li, Anhu; Liu, Xingsheng; Sun, Wansong

    2017-04-03

    Scan blind zone and control singularity are two adverse issues for the beam scanning performance in double-prism Risley systems. In this paper, a theoretical model which introduces a third prism is developed. The critical condition for a fully eliminated scan blind zone is determined through a geometric derivation, providing several useful formulae for three-Risley-prism system design. Moreover, inverse solutions for a three-prism system are established, based on the damped least-squares iterative refinement by a forward ray tracing method. It is shown that the efficiency of this iterative calculation of the inverse solutions can be greatly enhanced by a numerical differentiation method. In order to overcome the control singularity problem, the motion law of any one prism in a three-prism system needs to be conditioned, resulting in continuous and steady motion profiles for the other two prisms.

  5. A novel post-processing scheme for two-dimensional electrical impedance tomography based on artificial neural networks

    PubMed Central

    2017-01-01

    Objective: Electrical Impedance Tomography (EIT) is a powerful non-invasive technique for imaging applications. The goal is to estimate the electrical properties of living tissues by measuring the potential at the boundary of the domain. Being safe with respect to patient health, non-invasive, and having no known hazards, EIT is an attractive and promising technology. However, it suffers from a particular technical difficulty, which consists of solving a nonlinear inverse problem in real time. Several nonlinear approaches have been proposed as a replacement for the linear solver, but in practice very few are capable of stable, high-quality, and real-time EIT imaging because of their very low robustness to errors and inaccurate modeling, or because they require considerable computational effort. Methods: In this paper, a post-processing technique based on an artificial neural network (ANN) is proposed to obtain a nonlinear solution to the inverse problem, starting from a linear solution. While common reconstruction methods based on ANNs estimate the solution directly from the measured data, the method proposed here enhances the solution obtained from a linear solver. Conclusion: Applying a linear reconstruction algorithm before applying an ANN reduces the effects of noise and modeling errors. Hence, this approach significantly reduces the error associated with solving 2D inverse problems using machine-learning-based algorithms. Significance: This work presents radical enhancements in the stability of nonlinear methods for biomedical EIT applications. PMID:29206856

  6. Mathematics of Computed Tomography

    NASA Astrophysics Data System (ADS)

    Hawkins, William Grant

    A review of the applications of the Radon transform is presented, with emphasis on emission computed tomography and transmission computed tomography. The theory of the 2D and 3D Radon transforms, and the effects of attenuation for emission computed tomography, are presented. The algebraic iterative methods, their importance and their limitations are reviewed. Analytic solutions of the 2D problem (the convolution and frequency filtering methods based on linear shift invariant theory, and the solution of the circular harmonic decomposition by integral transform theory) are reviewed. The relation between the invisible kernels, the inverse circular harmonic transform, and the consistency conditions is demonstrated. The discussion and review are extended to the 3D problem: convolution, frequency filtering, spherical harmonic transform solutions, and consistency conditions. The Cormack algorithm based on reconstruction with Zernike polynomials is reviewed. An analogous algorithm and set of reconstruction polynomials is developed for the spherical harmonic transform. The relations between the consistency conditions, boundary conditions and orthogonal basis functions for the 2D projection harmonics are delineated and extended to the 3D case. The equivalence of the inverse circular harmonic transform, the inverse Radon transform, and the inverse Cormack transform is presented. The use of the number of nodes of a projection harmonic as a filter is discussed. Numerical methods for the efficient implementation of angular harmonic algorithms based on orthogonal functions and stable recursion are presented. A lower bound for the signal-to-noise ratio of the Cormack algorithm is derived.

  7. Unscented Kalman filter assimilation of time-lapse self-potential data for monitoring solute transport

    NASA Astrophysics Data System (ADS)

    Cui, Yi-an; Liu, Lanbo; Zhu, Xiaoxiong

    2017-08-01

    Monitoring the extent and evolution of contaminant plumes in local and regional groundwater systems from existing landfills is critical for contamination control and remediation. The self-potential survey is an efficient and economical nondestructive geophysical technique that can be used to investigate underground contaminant plumes. Based on the unscented transform, we built a Kalman filtering cycle to conduct time-lapse data assimilation for monitoring solute transport in an experiment using a bench-scale physical model. The data assimilation combines evolution modeling based on a random walk model with observation correction based on the self-potential forward model. Thus, the monitored self-potential data can be inverted by the data assimilation technique. As a result, we can reconstruct the dynamic process of the contaminant plume instead of using traditional frame-to-frame static inversion, which may cause inversion artifacts. The data assimilation inversion algorithm was evaluated using noise-added synthetic time-lapse self-potential data. The result of the numerical experiment shows the validity, accuracy and noise tolerance of the dynamic inversion. To validate the proposed algorithm, we conducted a scaled-down sandbox self-potential observation experiment to generate time-lapse data that closely mimics a real-world contaminant monitoring setup. The results of the physical experiments support the idea that the data assimilation method is a potentially useful approach for characterizing the transport of contaminant plumes using the unscented Kalman filter (UKF) data assimilation technique applied to field time-lapse self-potential data.

  8. Acoustic propagation in a thermally stratified atmosphere

    NASA Technical Reports Server (NTRS)

    Vanmoorhem, W. K.

    1988-01-01

    Acoustic propagation in an atmosphere with a specific form of a temperature profile has been investigated by analytical means. The temperature profile used is representative of an actual atmospheric profile and contains three free parameters. Both lapse and inversion cases have been considered. Although ray solutions have been considered, the primary emphasis has been on solutions of the acoustic wave equation with a point source, where the sound speed varies with height above the ground corresponding to the assumed temperature profile. The method used to obtain the solution of the wave equation is based on Hankel transformation of the wave equation, approximate solution of the transformed equation for wavelengths small compared to the scale of the temperature (or sound speed) profile, and approximate or numerical inversion of the Hankel-transformed solution. The solution displays the characteristics found in experimental data, but extensive comparison between the models and experimental data has not been carried out.

  9. Acoustic propagation in a thermally stratified atmosphere

    NASA Technical Reports Server (NTRS)

    Vanmoorhem, W. K.

    1987-01-01

    Acoustic propagation in an atmosphere with a specific form of a temperature profile has been investigated by analytical means. The temperature profile used is representative of an actual atmospheric profile and contains three free parameters. Both lapse and inversion cases have been considered. Although ray solutions have been considered, the primary emphasis has been on solutions of the acoustic wave equation with a point source, where the sound speed varies with height above the ground corresponding to the assumed temperature profile. The method used to obtain the solution of the wave equation is based on Hankel transformation of the wave equation, approximate solution of the transformed equation for wavelengths small compared to the scale of the temperature (or sound speed) profile, and approximate or numerical inversion of the Hankel-transformed solution. The solution displays the characteristics found in experimental data, but extensive comparison between the models and experimental data has not been carried out.

  10. Numerical solution of inverse scattering for near-field optics.

    PubMed

    Bao, Gang; Li, Peijun

    2007-06-01

    A novel regularized recursive linearization method is developed for a two-dimensional inverse medium scattering problem that arises in near-field optics, which reconstructs the scatterer of an inhomogeneous medium located on a substrate from data accessible through photon scanning tunneling microscopy experiments. Based on multiple frequency scattering data, the method starts from the Born approximation corresponding to weak scattering at a low frequency, and each update is obtained by continuation on the wavenumber from solutions of one forward problem and one adjoint problem of the Helmholtz equation.

  11. Wavelet-based 3-D inversion for frequency-domain airborne EM data

    NASA Astrophysics Data System (ADS)

    Liu, Yunhe; Farquharson, Colin G.; Yin, Changchun; Baranwal, Vikas C.

    2018-04-01

    In this paper, we propose a new wavelet-based 3-D inversion method for frequency-domain airborne electromagnetic (FDAEM) data. Instead of inverting the model in the space domain using a smoothing constraint, this new method recovers the model in the wavelet domain based on a sparsity constraint. In the wavelet domain, the model is represented by two types of coefficients, which contain both large- and fine-scale information about the model, meaning the wavelet-domain inversion has inherent multiresolution. To enforce the sparsity constraint, we minimize an L1-norm measure in the wavelet domain, which generally yields a sparse solution. The final inversion system is solved by an iteratively reweighted least-squares method. We investigate different orders of Daubechies wavelets in our inversion algorithm and test them on a synthetic frequency-domain AEM data set. The results show that higher order wavelets, having larger vanishing moments and regularity, deliver a more stable inversion process and better local resolution, while lower order wavelets are simpler and less smooth, and thus capable of recovering sharp discontinuities if the model is simple. Finally, we test this new inversion algorithm on a frequency-domain helicopter EM (HEM) field data set acquired in Byneset, Norway. The wavelet-based 3-D inversion of the HEM data is compared to the result of an L2-norm-based 3-D inversion to further investigate the features of the new method.

  12. General Theory and Algorithms for the Non-Causal Inversion, Slewing and Control of Space-Based Articulated Structures

    DTIC Science & Technology

    1993-10-01

    The available text for this report is fragmentary. Recoverable section titles include "... Structures: Simultaneous Trajectory Tracking and Vibration Reduction" and "Buckling Control of a Flexible Beam Using Piezoelectric Actuators". The recoverable body text states that a bounded solution for the inverse dynamic torque has to be non-causal, that Bayo et al. [3] extended the inverse dynamics to planar, multiple-link systems, and that a formulation was presented by Bayo and Moulin [4] for the single-link system, with provisions for extension to multiple-link systems; a final fragment referring to "an equivalent time domain approach" is truncated.

  13. Solution of underdetermined systems of equations with gridded a priori constraints.

    PubMed

    Stiros, Stathis C; Saltogianni, Vasso

    2014-01-01

    The TOPINV (Topological Inversion) algorithm, or TGS (Topological Grid Search), initially developed for the inversion of highly non-linear redundant systems of equations, can solve a wide range of underdetermined systems of non-linear equations. This approach is a generalization of a previous conclusion that this algorithm can be used for the solution of certain integer ambiguity problems in Geodesy. The overall approach is based on additional (a priori) information for the unknown variables. In the past, such information was used either to linearize equations around approximate solutions, or to expand systems of observation equations solved on the basis of generalized inverses. In the proposed algorithm, the a priori additional information is used in a third way, as topological constraints on the n unknown variables, leading to an R^n grid containing an approximation of the real solution. The TOPINV algorithm does not focus on point solutions, but exploits the structural and topological constraints in each system of underdetermined equations in order to identify an optimal closed space in R^n containing the real solution. The centre of gravity of the grid points defining this space corresponds to global, minimum-norm solutions. The rationale and validity of the overall approach are demonstrated on the basis of examples and case studies, including fault modelling, in comparison with SVD solutions and true (reference) values, in an accuracy-oriented approach.
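
    The core idea of scanning an R^n box defined by a priori bounds, keeping the grid points compatible with the observation equations, and returning the centre of gravity of the accepted region can be sketched as below. The tolerance test, gridding, and names are illustrative simplifications, not the published TOPINV/TGS algorithm.

        import numpy as np
        from itertools import product

        def topological_grid_search(equations, bounds, n_grid=25, tol=1.0):
            """Scan an R^n grid defined by a priori bounds, keep the points at which every
            observation equation is satisfied within 'tol', and return the centre of
            gravity of the accepted region (None if no grid point is accepted)."""
            axes = [np.linspace(lo, hi, n_grid) for lo, hi in bounds]
            accepted = []
            for point in product(*axes):
                x = np.array(point)
                if all(abs(f(x)) <= tol for f in equations):
                    accepted.append(x)
            return np.mean(accepted, axis=0) if accepted else None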

  14. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By evoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  15. FWT2D: A massively parallel program for frequency-domain full-waveform tomography of wide-aperture seismic data—Part 1: Algorithm

    NASA Astrophysics Data System (ADS)

    Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves

    2009-03-01

    This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the resolution of a large sparse system of linear equations which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, this latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper. The parallel efficiency and the scalability of the code will also be quantified.

  16. SISYPHUS: A high performance seismic inversion factory

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In the recent years the massively parallel high performance computers became the standard instruments for solving the forward and inverse problems in seismology. The respective software packages dedicated to forward and inverse waveform modelling specially designed for such computers (SPECFEM3D, SES3D) became mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset performance benefits provided by even the most powerful modern supercomputers. Furthermore, a typical system architecture of modern supercomputing platforms is oriented towards the maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for the modern massively parallel high performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with branches for the static process setup, inversion iterations, and solver runs, each branch specifying information at the event, station and channel levels. The workflow management framework is based on an embedded scripting engine that allows definition of various workflow scenarios using a high-level scripting language and provides access to all available inversion components represented as standard library functions. At present the SES3D wave propagation solver is integrated in the solution; the work is in progress for interfacing with SPECFEM3D. A separate framework is designed for interoperability with an optimization module; the workflow manager and optimization process run in parallel and cooperate by exchanging messages according to a specially designed protocol. A library of high-performance modules implementing signal pre-processing, misfit and adjoint computations according to established good practices is included. Monitoring is based on information stored in the inversion state database and at present implements a command line interface; design of a graphical user interface is in progress. The software design fits well into the common massively parallel system architecture featuring a large number of computational nodes running distributed applications under control of batch-oriented resource managers. The solution prototype has been implemented on the "Piz Daint" supercomputer provided by the Swiss Supercomputing Centre (CSCS).

  17. Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.

    PubMed

    Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens

    2005-05-01

    Functional brain imaging and source localization based on the scalp's potential field require a solution to an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full resolution lead-field, with a localization accuracy that was not significantly different than an exhaustive search through a fully-sampled source space. The technique is, therefore, applicable for use with anatomically-realistic, subject-specific forward models for applications with spatially concentrated source activity.

  18. Determination of unknown coefficient in a non-linear elliptic problem related to the elastoplastic torsion of a bar

    NASA Astrophysics Data System (ADS)

    Hasanov, Alemdar; Erdem, Arzu

    2008-08-01

    The inverse problem of determining the unknown coefficient of the non-linear differential equation of torsional creep is studied. The unknown coefficient g = g(ξ²) depends on the gradient ξ := |∇u| of the solution u(x), x ∈ Ω ⊂ R^n, of the direct problem. It is proved that this gradient is bounded in C-norm. This permits one to choose the natural class of admissible coefficients for the considered inverse problem. The continuity in the norm of the Sobolev space H¹(Ω) of the solution u(x; g) of the direct problem with respect to the unknown coefficient g = g(ξ²) is obtained in the following sense: ||u(x; g) - u(x; g_m)||₁ → 0 when g_m(η) → g(η) point-wise as m → ∞. Based on these results, the existence of a quasi-solution of the inverse problem in the considered class of admissible coefficients is obtained. Numerical examples related to determination of the unknown coefficient are presented.

  19. The inverse problem of refraction travel times, part I: Types of Geophysical Nonuniqueness through Minimization

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.; Park, C.B.

    2005-01-01

    In a set of two papers we study the inverse problem of refraction travel times. The purpose of this work is to use the study as a basis for development of more sophisticated methods for finding more reliable solutions to the inverse problem of refraction travel times, which is known to be nonunique. The first paper, "Types of Geophysical Nonuniqueness through Minimization," emphasizes the existence of different forms of nonuniqueness in the realm of inverse geophysical problems. Each type of nonuniqueness requires a different type and amount of a priori information to acquire a reliable solution. Based on such coupling, a nonuniqueness classification is designed. Therefore, since most inverse geophysical problems are nonunique, each inverse problem must be studied to define what type of nonuniqueness it belongs to and thus determine what type of a priori information is necessary to find a realistic solution. The second paper, "Quantifying Refraction Nonuniqueness Using a Three-layer Model," serves as an example of such an approach. However, its main purpose is to provide a better understanding of the inverse refraction problem by studying the type of nonuniqueness it possesses. An approach for obtaining a realistic solution to the inverse refraction problem is planned to be offered in a third paper that is in preparation. The main goal of this paper is to redefine the existing generalized notion of nonuniqueness and a priori information by offering a classified, discriminate structure. Nonuniqueness is often encountered when trying to solve inverse problems. However, possible nonuniqueness diversity is typically neglected and nonuniqueness is regarded as a whole, as an unpleasant "black box", and is approached in the same manner by applying smoothing constraints, damping constraints with respect to the solution increment and, rarely, damping constraints with respect to some sparse reference information about the true parameters. In practice, when solving geophysical problems, different types of nonuniqueness exist, and thus there are different ways to solve the problems. Nonuniqueness is usually regarded as due to data error, assuming the true geology is acceptably approximated by simple mathematical models. Compounding the nonlinear problems, geophysical applications routinely exhibit exact-data nonuniqueness even for models with very few parameters, adding to the nonuniqueness due to data error. While nonuniqueness variations have been defined earlier, they have not been linked to specific use of a priori information necessary to resolve each case. Four types of nonuniqueness, typical for minimization problems, are defined with the corresponding methods for inclusion of a priori information to find a realistic solution without resorting to a non-discriminative approach. The above-developed stand-alone classification is expected to be helpful when solving any geophysical inverse problems. © Birkhäuser Verlag, Basel, 2005.

  20. A new wavelet transform to sparsely represent cortical current densities for EEG/MEG inverse problems.

    PubMed

    Liao, Ke; Zhu, Min; Ding, Lei

    2013-08-01

    The present study investigated the use of transform sparseness of cortical current density on the human brain surface to improve electroencephalography/magnetoencephalography (EEG/MEG) inverse solutions. Transform sparseness was assessed by evaluating compressibility of cortical current densities in transform domains. To do that, a structure compression method from computer graphics was first adopted to compress cortical surface structure, either regular or irregular, into hierarchical multi-resolution meshes. Then, a new face-based wavelet method based on generated multi-resolution meshes was proposed to compress current density functions defined on cortical surfaces. Twelve cortical surface models were built by three EEG/MEG software packages and their structural compressibility was evaluated and compared by the proposed method. Monte Carlo simulations were implemented to evaluate the performance of the proposed wavelet method in compressing various cortical current density distributions as compared to two other available vertex-based wavelet methods. The present results indicate that the face-based wavelet method can achieve higher transform sparseness than vertex-based wavelet methods. Furthermore, basis functions from the face-based wavelet method have lower coherence against typical EEG and MEG measurement systems than vertex-based wavelet methods. Both high transform sparseness and low measurement coherence suggest that the proposed face-based wavelet method can improve the performance of L1-norm regularized EEG/MEG inverse solutions, which was further demonstrated in simulations and experimental setups using MEG data. Thus, this new transform on complicated cortical structure is promising to significantly advance EEG/MEG inverse source imaging technologies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  1. A new frequency domain analytical solution of a cascade of diffusive channels for flood routing

    NASA Astrophysics Data System (ADS)

    Cimorelli, Luigi; Cozzolino, Luca; Della Morte, Renata; Pianese, Domenico; Singh, Vijay P.

    2015-04-01

    Simplified flood propagation models are often employed in practical applications for hydraulic and hydrologic analyses. In this paper, we present a new numerical method for the solution of the Linear Parabolic Approximation (LPA) of the De Saint Venant equations (DSVEs), accounting for the space variation of model parameters and the imposition of appropriate downstream boundary conditions. The new model is based on the analytical solution of a cascade of linear diffusive channels in the Laplace Transform domain. The time domain solutions are obtained using a Fourier series approximation of the Laplace Inversion formula. The new Inverse Laplace Transform Diffusive Flood Routing model (ILTDFR) can be used as a building block for the construction of real-time flood forecasting models or in optimization models, because it is unconditionally stable and allows fast and fairly precise computation.
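
    For context, a Fourier-series approximation of the Laplace inversion (Bromwich) integral of the kind mentioned above can be sketched as follows; this is a generic Durbin-type formula with illustrative parameter choices, not the authors' ILTDFR implementation:

```python
import numpy as np

def laplace_invert_fourier(F, t, a=2.0, T=10.0, n_terms=5000):
    """Durbin/Fourier-series approximation of the Bromwich inversion integral.

    F       : callable, Laplace-domain function F(s) accepting complex s
    t       : time at which f(t) is wanted (0 < t < 2*T for small aliasing error)
    a, T    : contour shift and half-period; illustrative default values
    n_terms : number of Fourier terms retained
    """
    k = np.arange(1, n_terms + 1)
    s = a + 1j * k * np.pi / T
    Fk = np.array([F(sk) for sk in s])
    series = 0.5 * np.real(F(a)) + np.sum(
        np.real(Fk) * np.cos(k * np.pi * t / T)
        - np.imag(Fk) * np.sin(k * np.pi * t / T)
    )
    return np.exp(a * t) / T * series

# quick check against a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
print(laplace_invert_fourier(lambda s: 1.0 / (s + 1.0), t=1.0))  # approximately exp(-1) = 0.368
```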

  2. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    NASA Astrophysics Data System (ADS)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse-kinematics-based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple-solution ambiguity in the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
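
    A minimal sketch of the incremental update idea, under the assumption of a generic manipulator whose forward kinematics and Jacobian are supplied as callables (all names hypothetical, not the authors' controller):

```python
import numpy as np

def incremental_ik_step(q, target_pos, forward_kin, jacobian, qdot_max, dt):
    """One incremental inverse-kinematics step towards a predicted target position.

    q           : current joint configuration (n,)
    target_pos  : instantaneous desired end-effector position (3,)
    forward_kin : callable q -> end-effector position (3,)
    jacobian    : callable q -> position Jacobian (3, n)
    qdot_max    : per-joint speed limits (n,)
    dt          : control period
    """
    err = target_pos - forward_kin(q)                 # Cartesian position error
    J = jacobian(q)
    dq = np.linalg.pinv(J) @ err                      # least-squares joint increment
    dq = np.clip(dq, -qdot_max * dt, qdot_max * dt)   # enforce joint speed limits
    return q + dq
```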

  3. MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion

    NASA Astrophysics Data System (ADS)

    Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong

    This paper investigates the simulation of a gradient-based recurrent neural network for online solution of the matrix-inverse problem. Several important techniques are employed to simulate such a neural system, as follows. 1) The Kronecker product of matrices is introduced to transform the matrix differential equation (MDE) into a vector differential equation (VDE); i.e., finally, a standard ordinary differential equation (ODE) is obtained. 2) The MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and efficacy of the gradient-based neural network for online constant matrix inversion.
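
    A hedged sketch of the same idea in Python (not the authors' MATLAB code): the gradient-flow dynamics dX/dt = -γ Aᵀ(AX − I) are flattened to a vector ODE and integrated with SciPy's solve_ivp, an adaptive Runge-Kutta 4(5) solver analogous to ode45:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of a gradient-based recurrent network for online matrix inversion:
# dX/dt = -gamma * A.T @ (A @ X - I).  The matrix ODE is flattened to a vector
# ODE, which is the role the Kronecker-product step plays in the paper.

def invert_online(A, gamma=50.0, t_final=1.0):
    n = A.shape[0]
    I = np.eye(n)

    def rhs(_t, x_vec):
        X = x_vec.reshape(n, n)
        dX = -gamma * A.T @ (A @ X - I)   # gradient flow on ||A X - I||_F^2 / 2
        return dX.ravel()

    sol = solve_ivp(rhs, (0.0, t_final), np.zeros(n * n), method="RK45",
                    rtol=1e-8, atol=1e-10)
    return sol.y[:, -1].reshape(n, n)

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(np.allclose(invert_online(A), np.linalg.inv(A), atol=1e-4))  # expected: True
```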

  4. SaaS Platform for Time Series Data Handling

    NASA Astrophysics Data System (ADS)

    Oplachko, Ekaterina; Rykunov, Stanislav; Ustinin, Mikhail

    2018-02-01

    The paper is devoted to the description of MathBrain, a cloud-based resource, which works as a "Software as a Service" model. It is designed to maximize the efficiency of the current technology and to provide a tool for time series data handling. The resource provides access to the following analysis methods: direct and inverse Fourier transforms, Principal component analysis and Independent component analysis decompositions, quantitative analysis, magnetoencephalography inverse problem solution in a single dipole model based on multichannel spectral data.

  5. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pablant, N. A.; Bell, R. E.; Bitter, M.

    2014-11-15

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at the Large Helical Device. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature, and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear-regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allows unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.

  6. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE PAGES

    Pablant, N. A.; Bell, R. E.; Bitter, M.; ...

    2014-08-08

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at LHD. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear-regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allows unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.

  7. Real-time inverse kinematics and inverse dynamics for lower limb applications using OpenSim

    PubMed Central

    Modenese, L.; Lloyd, D.G.

    2017-01-01

    Real-time estimation of joint angles and moments can be used for rapid evaluation in clinical, sport, and rehabilitation contexts. However, real-time calculation of kinematics and kinetics is currently based on approximate solutions or generic anatomical models. We present a real-time system based on OpenSim solving inverse kinematics and dynamics without simplifications at 2000 frames per second with less than 31.5 ms of delay. We describe the software architecture, sensitivity analyses to minimise delays and errors, and compare offline and real-time results. This system has the potential to strongly impact current rehabilitation practices by enabling the use of personalised musculoskeletal models in real time. PMID:27723992

  8. Real-time inverse kinematics and inverse dynamics for lower limb applications using OpenSim.

    PubMed

    Pizzolato, C; Reggiani, M; Modenese, L; Lloyd, D G

    2017-03-01

    Real-time estimation of joint angles and moments can be used for rapid evaluation in clinical, sport, and rehabilitation contexts. However, real-time calculation of kinematics and kinetics is currently based on approximate solutions or generic anatomical models. We present a real-time system based on OpenSim solving inverse kinematics and dynamics without simplifications at 2000 frames per second with less than 31.5 ms of delay. We describe the software architecture, sensitivity analyses to minimise delays and errors, and compare offline and real-time results. This system has the potential to strongly impact current rehabilitation practices by enabling the use of personalised musculoskeletal models in real time.

  9. Feasibility of creating a specialized reactimeter based on the inverse solution to kinetics equation with a current-mode neutron detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koshelev, A. S., E-mail: alexsander.coshelev@yandex.ru; Arapov, A. V.; Ovchinnikov, M. A.

    2016-12-15

    The file-evaluation results of a reactimeter based on the inverse solution to the kinetics equation (ISKE) are presented, which were obtained using an operating hardware-measuring complex with a KNK-4 neutron detector working in the current mode. The processing of power-recording files of the BR-1M, BR-K1, and VIR-2M reactors of the Russian Federal Nuclear Center—All-Russian Research Institute of Experimental Physics, which was performed with the use of Excel simulation of the ISKE formalism, demonstrated the feasibility of implementation of the reactivity monitoring (during the operation of these reactors at stationary power) beginning from the level of ~5 × 10⁻⁴ β_eff.
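
    For context, the textbook inverse point kinetics relation expresses reactivity directly from a measured power history as ρ(t) = β + (Λ/n)(dn/dt − Σ λ_i C_i); a one-group, discrete-time sketch with illustrative constants (not the authors' Excel implementation) is given below:

```python
import numpy as np

# Textbook one-group inverse point kinetics (ISKE) sketch; constants are illustrative.
BETA = 0.0065      # delayed neutron fraction
LAMBDA_GEN = 1e-4  # prompt neutron generation time [s]
LAM = 0.08         # delayed-neutron precursor decay constant [1/s]

def reactivity_history(t, n):
    """Reactivity from a sampled power history n(t) via inverse point kinetics."""
    dt = np.diff(t)
    C = BETA * n[0] / (LAMBDA_GEN * LAM)          # equilibrium precursor level
    rho = np.empty_like(n)
    rho[0] = 0.0
    for k in range(1, len(n)):
        # advance precursor concentration (implicit Euler for stability)
        C = (C + dt[k-1] * BETA * n[k] / LAMBDA_GEN) / (1.0 + LAM * dt[k-1])
        dn_dt = (n[k] - n[k-1]) / dt[k-1]
        rho[k] = BETA + (LAMBDA_GEN / n[k]) * (dn_dt - LAM * C)
    return rho  # absolute reactivity; divide by BETA for units of beta_eff

t = np.linspace(0.0, 50.0, 5001)
rho = reactivity_history(t, np.full_like(t, 1.0))   # constant power -> rho ~ 0
print(abs(rho[-1]) < 1e-6)                           # expected: True
```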

  10. Noise spectroscopy as an equilibrium analysis tool for highly sensitive electrical biosensing

    NASA Astrophysics Data System (ADS)

    Guo, Qiushi; Kong, Tao; Su, Ruigong; Zhang, Qi; Cheng, Guosheng

    2012-08-01

    We demonstrate an approach for highly sensitive bio-detection based on silicon nanowire field-effect transistors by employing low-frequency noise spectroscopy analysis. The inverse of the noise amplitude of the device exhibits an enhanced gate coupling effect in the strong inversion regime when measured in buffer solution compared with air. The approach was further validated by the detection of cardiac troponin I at 0.23 ng/ml in fetal bovine serum, in which a change of two orders of magnitude in noise amplitude was characterized. The selectivity of the proposed approach was also assessed by the addition of a 10 μg/ml bovine serum albumin solution.

  11. The double universal joint wrist on a manipulator: Solution of inverse position kinematics and singularity analysis

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., III

    1992-01-01

    This paper presents three methods to solve the inverse position kinematics problem of the double universal joint attached to a manipulator: (1) an analytical solution for two specific cases; (2) an approximate closed-form solution based on ignoring the wrist offset; and (3) an iterative method which repeats closed-form position and orientation calculations until the solution is achieved. Several manipulators are used to demonstrate the solution methods: Cartesian, cylindrical, spherical, and an anthropomorphic articulated arm, based on the Flight Telerobotic Servicer (FTS) arm. A singularity analysis is presented for the double universal joint wrist attached to the above manipulator arms. While the double universal joint wrist standing alone is singularity-free in orientation, the singularity analysis indicates the presence of coupled position/orientation singularities of the spherical and articulated manipulators with the wrist. The Cartesian and cylindrical manipulators with the double universal joint wrist were found to be singularity-free. The methods of this paper can be implemented in a real-time controller for manipulators with the double universal joint wrist. Such mechanically dextrous systems could be used in telerobotic and industrial applications, but further work is required to avoid the singularities.

  12. Interactive Inverse Groundwater Modeling - Addressing User Fatigue

    NASA Astrophysics Data System (ADS)

    Singh, A.; Minsker, B. S.

    2006-12-01

    This paper builds on ongoing research on developing an interactive and multi-objective framework to solve the groundwater inverse problem. In this work we solve the classic groundwater inverse problem of estimating a spatially continuous conductivity field, given field measurements of hydraulic heads. The proposed framework is based on an interactive multi-objective genetic algorithm (IMOGA) that not only considers quantitative measures such as calibration error and degree of regularization, but also takes into account expert knowledge about the structure of the underlying conductivity field expressed as subjective rankings of potential conductivity fields by the expert. The IMOGA converges to the optimal Pareto front representing the best trade-off among the qualitative as well as quantitative objectives. However, since the IMOGA is a population-based iterative search, it requires the user to evaluate hundreds of solutions. This leads to the problem of 'user fatigue'. We propose a two-step methodology to combat user fatigue in such interactive systems. The first step is choosing only a few highly representative solutions to be shown to the expert for ranking. Spatial clustering is used to group the search space based on the similarity of the conductivity fields. Sampling is then carried out from different clusters to improve the diversity of solutions shown to the user. Once the expert has ranked representative solutions from each cluster, a machine learning model is used to 'learn user preference' and extrapolate these rankings to the solutions not ranked by the expert. We investigate different machine learning models such as decision trees, Bayesian learning models, and instance-based weighting to model user preference. In addition, we also investigate ways to improve the performance of these models by providing information about the spatial structure of the conductivity fields (which is what the expert bases his or her rank on). Results are shown for each of these machine learning models and the advantages and disadvantages of each approach are discussed. These results indicate that using the proposed two-step methodology leads to a significant reduction in user fatigue without deteriorating the solution quality of the IMOGA.
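
    A rough sketch of this two-step idea under stated assumptions (scikit-learn, placeholder data; not the authors' IMOGA code): cluster the candidate conductivity fields, let the expert rank one representative per cluster, then learn and extrapolate the ranking:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeRegressor

# fields: (n_solutions, n_cells) flattened conductivity fields from the search population
rng = np.random.default_rng(0)
fields = rng.lognormal(mean=0.0, sigma=1.0, size=(200, 400))   # placeholder population

# Step 1: cluster the population and show only one representative per cluster.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(np.log(fields))
reps = [int(np.argmin(np.linalg.norm(np.log(fields) - c, axis=1)))
        for c in kmeans.cluster_centers_]

# The expert ranks only the representatives (placeholder ranks here).
expert_ranks = rng.integers(1, 6, size=len(reps)).astype(float)

# Step 2: learn the preference model and extrapolate ranks to the full population.
model = DecisionTreeRegressor(max_depth=4).fit(np.log(fields[reps]), expert_ranks)
predicted_ranks = model.predict(np.log(fields))
print(predicted_ranks.shape)   # (200,) - a rank estimate for every candidate field
```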

  13. Bayesian inversion of refraction seismic traveltime data

    NASA Astrophysics Data System (ADS)

    Ryberg, T.; Haberland, Ch

    2018-03-01

    We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when using the far-offset observations, are known to have experimental geometries which are very poor, highly ill-posed and far from ideal. As a consequence, the structural resolution quickly degrades with depth. Conventional inversion techniques, based on regularization, potentially suffer from the choice of appropriate inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and only local model space exploration. McMC techniques are used for exhaustive sampling of the model space without the need for prior knowledge (or assumptions) of inversion parameters, resulting in a large number of models fitting the observations. Statistical analysis of these models allows one to derive an average (reference) solution and its standard deviation, thus providing uncertainty estimates of the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not allow one to derive a reference solution and error map by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution in the posterior values due to poor ray coverage, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey from Northern Namibia and compared to conventional tomography. An inversion test for a synthetic data set from a known model is also presented.

  14. High-resolution atmospheric inversion of urban CO2 emissions during the dormant season of the Indianapolis Flux Experiment (INFLUX)

    NASA Astrophysics Data System (ADS)

    Lauvaux, Thomas; Miles, Natasha L.; Deng, Aijun; Richardson, Scott J.; Cambaliza, Maria O.; Davis, Kenneth J.; Gaudet, Brian; Gurney, Kevin R.; Huang, Jianhua; O'Keefe, Darragh; Song, Yang; Karion, Anna; Oda, Tomohiro; Patarasuk, Risa; Razlivanov, Igor; Sarmiento, Daniel; Shepson, Paul; Sweeney, Colm; Turnbull, Jocelyn; Wu, Kai

    2016-05-01

    Based on a uniquely dense network of surface towers continuously measuring the atmospheric concentrations of greenhouse gases (GHGs), we developed the first comprehensive monitoring system of CO2 emissions at high resolution over the city of Indianapolis. The urban inversion evaluated over the 2012-2013 dormant season showed a statistically significant increase of about 20% (from 4.5 to 5.7 MtC ± 0.23 MtC) compared to the Hestia CO2 emission estimate, a state-of-the-art building-level emission product. Spatial structures in prior emission errors, mostly undetermined, appeared to affect the spatial pattern in the inverse solution and the total carbon budget over the entire area by up to 15%, while the inverse solution remains fairly insensitive to the CO2 boundary inflow and to the different prior emissions (i.e., ODIAC). Preceding the surface emission optimization, we improved the atmospheric simulations using a meteorological data assimilation system also informing our Bayesian inversion system through updated observation error variances. Finally, we estimated the uncertainties associated with undetermined parameters using an ensemble of inversions. The total CO2 emissions based on the ensemble mean and quartiles (5.26-5.91 MtC) were statistically different compared to the prior total emissions (4.1 to 4.5 MtC). Considering the relatively small sensitivity to the different parameters, we conclude that atmospheric inversions are potentially able to constrain the carbon budget of the city, assuming sufficient data to measure the inflow of GHG over the city, but additional information on prior emission error structures is required to determine the spatial structures of urban emissions at high resolution.

  15. A statistical kinematic source inversion approach based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo

    2014-05-01

    Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust, and — more generally — how accurate are fault parameterization and solution predictions? These issues are not included in "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based (a) on a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and (b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library that uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e. for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) no longer leads to a decreasing misfit. Identification of this cross-over is of importance as it reveals the resolution power of the studied data set (i.e. teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.

  16. Convergence acceleration in scattering series and seismic waveform inversion using nonlinear Shanks transformation

    NASA Astrophysics Data System (ADS)

    Eftekhar, Roya; Hu, Hao; Zheng, Yingcai

    2018-06-01

    An iterative solution process is fundamental in seismic inversions, such as in full-waveform inversions and some inverse scattering methods. However, the convergence can be slow, or the iteration may even diverge, depending on the initial model used in the iteration. We propose to apply the Shanks transformation (ST for short) to accelerate the convergence of the iterative solution. ST is a local nonlinear transformation, which transforms a series locally into another series with an improved convergence property. ST works by separating the series into a smooth background trend, called the secular term, and an oscillatory transient term. ST then accelerates the convergence of the secular term. Since the transformation is local, we do not need to know all the terms in the original series, which is very important in the numerical implementation. The ST performance was tested numerically for both the forward Born series and the inverse scattering series (ISS). The ST has been shown to accelerate the convergence in several examples, including three examples of forward modeling using the Born series and two examples of velocity inversion based on a particular type of the ISS. We observe that ST is effective in accelerating the convergence and it can also achieve convergence even for a weakly divergent scattering series. As such, it provides a useful technique to invert for a large-contrast medium perturbation in seismic inversion.
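
    The first-order Shanks transformation replaces three consecutive partial sums by S(A_n) = (A_{n+1}A_{n-1} − A_n²)/(A_{n+1} − 2A_n + A_{n-1}); a generic numerical sketch (not the seismic implementation) illustrating the acceleration on a slowly converging series:

```python
import numpy as np

def shanks(partial_sums):
    """First-order Shanks transformation of a sequence of partial sums."""
    A = np.asarray(partial_sums, dtype=float)
    num = A[2:] * A[:-2] - A[1:-1] ** 2
    den = A[2:] - 2.0 * A[1:-1] + A[:-2]
    return num / den

# Example: ln(2) = 1 - 1/2 + 1/3 - ... converges slowly; Shanks accelerates it.
n = np.arange(1, 12)
partial = np.cumsum((-1.0) ** (n + 1) / n)
print(partial[-1])                   # ~0.7365 after 11 terms (ln 2 = 0.6931)
print(shanks(partial)[-1])           # ~0.6932, much closer to ln 2
print(shanks(shanks(partial))[-1])   # repeated transformation converges even faster
```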

  17. Applications of He's semi-inverse method, ITEM and GGM to the Davey-Stewartson equation

    NASA Astrophysics Data System (ADS)

    Zinati, Reza Farshbaf; Manafian, Jalil

    2017-04-01

    We investigate the Davey-Stewartson (DS) equation, for which travelling wave solutions are found. In this paper, we demonstrate the effectiveness of the analytical methods, namely, He's semi-inverse variational principle method (SIVPM), the improved tan(φ/2)-expansion method (ITEM) and the generalized G'/G-expansion method (GGM), for seeking more exact solutions of the DS equation. These methods are direct, concise and simple to implement compared to other existing methods. Exact solutions comprising four types of solutions have been achieved. The results demonstrate that the aforementioned methods are more efficient than the Ansatz method applied by Mirzazadeh (2015). Abundant exact travelling wave solutions, including soliton, kink, periodic and rational solutions, have been found by the improved tan(φ/2)-expansion and generalized G'/G-expansion methods. By He's semi-inverse variational principle we have obtained dark and bright soliton wave solutions. The obtained semi-inverse variational principle also has profound implications for physical understanding. These solutions might play an important role in engineering and physics fields. Moreover, using Matlab, some graphical simulations were done to show the behavior of these solutions.

  18. A practical method to assess model sensitivity and parameter uncertainty in C cycle models

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2015-04-01

    The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells to several hundred years for the exchange between the atmosphere and the deep ocean and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists in finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the three following conditions hold: 1) there exists a solution, 2) the solution is unique and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed. The inverse problem is often ill-posed; a regularization method is then required to replace the original problem with a well posed problem, and a solution strategy amounts to 1) constructing a solution x, 2) assessing the validity of the solution, and 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) to estimate model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed on the fact that parameters and initial stocks directly related to fast processes were best estimated with narrow confidence intervals, whereas those related to slow processes were poorly estimated with very large uncertainties. While other studies have tried to overcome this difficulty by adding complementary data streams or by considering longer observation windows, no systematic analysis has been carried out so far to explain the large differences among results. We consider adjoint-based methods to investigate inverse problems using DALEC and various data streams. Using resolution matrices, we study the nature of the inverse problems (solution existence, uniqueness and stability) and show how standard regularization techniques affect resolution and stability properties. Instead of using standard prior information as a penalty term in the cost function to regularize the problems, we constrain the parameter space using ecological balance conditions and inequality constraints. The efficiency and rapidity of this approach allow us to compute ensembles of solutions to the inverse problems, from which we can establish the robustness of the variational method and obtain non-Gaussian posterior distributions for the model parameters and initial carbon stocks.

  19. 2-D inversion of VES data in Saqqara archaeological area, Egypt

    NASA Astrophysics Data System (ADS)

    El-Qady, Gad; Sakamoto, Chika; Ushijima, Keisuke

    1999-10-01

    The interpretation of actual geophysical field data still suffers from the problem of obtaining a unique solution. In order to investigate the groundwater potential in the Saqqara archaeological area, vertical electrical soundings with a Schlumberger array have been carried out. In the interpretation of the VES data, 1D resistivity inversion was performed based on a horizontally layered earth model by El-Qady (1995). However, some results of the 1D inversion are not fully satisfactory for actual 3D structures such as archaeological tombs. Therefore, we have carried out a 2D inversion based on the ABIC least-squares method for the Schlumberger VES data obtained in the Saqqara area. Although the resulting 2D cross sections correlated with the previous interpretation, the 2D inversion still shows a rough spatial resistivity distribution, i.e. abrupt changes in resistivity between neighboring blocks of the computed region. It is concluded that 3D interpretation is recommended for visualizing the groundwater distribution with depth in the Saqqara area.

  20. Mantle viscosity structure constrained by joint inversions of seismic velocities and density

    NASA Astrophysics Data System (ADS)

    Rudolph, M. L.; Moulik, P.; Lekic, V.

    2017-12-01

    The viscosity structure of Earth's deep mantle affects the thermal evolution of Earth, the ascent of mantle upwellings, sinking of subducted oceanic lithosphere, and the mixing of compositional heterogeneities in the mantle. Modeling the long-wavelength dynamic geoid allows us to constrain the radial viscosity profile of the mantle. Typically, in inversions for the mantle viscosity structure, wavespeed variations are mapped into density variations using a constant- or depth-dependent scaling factor. Here, we use a newly developed joint model of anisotropic Vs, Vp, density and transition zone topographies to generate a suite of solutions for the mantle viscosity structure directly from the seismologically constrained density structure. The density structure used to drive our forward models includes contributions from both thermal and compositional variations, including important contributions from compositionally dense material in the Large Low Velocity Provinces at the base of the mantle. These compositional variations have been neglected in the forward models used in most previous inversions and have the potential to significantly affect large-scale flow and thus the inferred viscosity structure. We use a transdimensional, hierarchical, Bayesian approach to solve the inverse problem, and our solutions for viscosity structure include an increase in viscosity below the base of the transition zone, in the shallow lower mantle. Using geoid dynamic response functions and an analysis of the correlation between the observed geoid and mantle structure, we demonstrate the underlying reason for this inference. Finally, we present a new family of solutions in which the data uncertainty is accounted for using covariance matrices associated with the mantle structure models.

  1. On the Use of Nonlinear Regularization in Inverse Methods for the Solar Tachocline Profile Determination

    NASA Astrophysics Data System (ADS)

    Corbard, T.; Berthomieu, G.; Provost, J.; Blanc-Feraud, L.

    Inferring the solar rotation from observed frequency splittings represents an ill-posed problem in the sense of Hadamard, and the traditional approach used to overcome this difficulty consists in regularizing the problem by adding some a priori information on the global smoothness of the solution, defined as the norm of its first or second derivative. Nevertheless, inversions of rotational splittings (e.g. Corbard et al., 1998; Schou et al., 1998) have shown that the surface layers and the so-called solar tachocline (Spiegel & Zahn 1992) at the base of the convection zone are regions in which high radial gradients of the rotation rate occur. Therefore, the global smoothness a priori, which tends to smooth out every high gradient in the solution, may not be appropriate for the study of a zone like the tachocline, which is of particular interest for the study of solar dynamics (e.g. Elliot 1997). In order to infer the fine structure of such regions with high gradients by inverting helioseismic data, we have to find a way to preserve these zones in the inversion process. Setting a more adapted constraint on the solution leads to non-linear regularization methods that are in current use for edge-preserving regularization in computed imaging (e.g. Blanc-Feraud et al. 1995). In this work, we investigate their use in the helioseismic context of rotational inversions.

  2. Profiling of poorly stratified atmospheres with scanning lidar

    Treesearch

    C. E. Wold; V. A. Kovalev; A. P. Petkov; W. M. Hao

    2012-01-01

    The direct multiangle solution may allow inversion of the scanning lidar data even when the requirement of the horizontally stratified atmosphere is poorly met. The solution is based on two principles: (1) The signal measured in zenith is the core source for extracting the information about the atmospheric aerosol loading, and (2) The multiangle signals are used as...

  3. Multi-scale signed envelope inversion

    NASA Astrophysics Data System (ADS)

    Chen, Guo-Xin; Wu, Ru-Shan; Wang, Yu-Qing; Chen, Sheng-Chang

    2018-06-01

    Envelope inversion based on the modulation signal model was proposed to reconstruct the large-scale structures of underground media. To overcome the shortcomings of conventional envelope inversion, multi-scale envelope inversion was proposed, using a new envelope Fréchet derivative and a multi-scale inversion strategy to invert strong-contrast models. In multi-scale envelope inversion, amplitude demodulation is used to extract the low-frequency information from the envelope data. However, using only the amplitude demodulation method causes the loss of wavefield polarity information, thus increasing the possibility that the inversion obtains multiple solutions. In this paper we propose a new demodulation method that retains both the amplitude and the polarity information of the envelope data. We then introduce this demodulation method into multi-scale envelope inversion and propose a new misfit functional: multi-scale signed envelope inversion. In the numerical tests, we applied the new inversion method to a salt layer model and the SEG/EAGE 2-D Salt model using a low-cut source (frequency components below 4 Hz were truncated). The results of the numerical tests demonstrate the effectiveness of this method.
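
    Purely as an illustration of retaining polarity during demodulation (this is not necessarily the authors' operator), one plausible construction attaches the sign of the trace to its Hilbert envelope before low-pass filtering:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def signed_envelope(trace, fs, cutoff_hz=4.0):
    """Illustrative signed-envelope demodulation (not necessarily the authors' operator).

    The Hilbert envelope extracts the low-frequency modulation of the trace;
    multiplying by the trace sign keeps the wavefield polarity information,
    and a low-pass filter retains the ultra-low-frequency content.
    """
    env = np.abs(hilbert(trace))               # amplitude demodulation
    signed = np.sign(trace) * env              # re-attach polarity
    b, a = butter(4, cutoff_hz / (0.5 * fs))   # low-pass below cutoff_hz
    return filtfilt(b, a, signed)

fs = 250.0                                      # sampling rate [Hz], illustrative
t = np.arange(0, 4.0, 1.0 / fs)
trace = np.sin(2 * np.pi * 12 * t) * np.exp(-((t - 2.0) ** 2) / 0.1)
print(signed_envelope(trace, fs).shape)         # same length as the input trace
```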

  4. A new, double-inversion mechanism of the F- + CH3Cl SN2 reaction in aqueous solution.

    PubMed

    Liu, Peng; Wang, Dunyou; Xu, Yulong

    2016-11-23

    Atomic-level, bimolecular nucleophilic substitution reaction mechanisms have been studied mostly in the gas phase, but the gas-phase results cannot be expected to reliably describe condensed-phase chemistry. As a novel, double-inversion mechanism has just been found for the F- + CH3Cl SN2 reaction in the gas phase [Nat. Commun., 2015, 6, 5972], here, using multi-level quantum mechanics methods combined with the molecular mechanics method, we discovered a new, double-inversion mechanism for this reaction in aqueous solution. However, the structures of the stationary points along the reaction path show significant differences from those in the gas phase due to the strong influence of solvent and solute interactions, especially due to the hydrogen bonds formed between the solute and the solvent. More importantly, the relationship between the two double-inversion transition states is not clear in the gas phase, but, here we revealed a novel intermediate complex serving as a "connecting link" between the two transition states of the abstraction-induced inversion and the Walden-inversion mechanisms. A detailed reaction path was constructed to show the atomic-level evolution of this novel double reaction mechanism in aqueous solution. The potentials of mean force were calculated and the obtained Walden-inversion barrier height agrees well with the available experimental value.

  5. A multi-frequency iterative imaging method for discontinuous inverse medium problem

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Feng, Lixin

    2018-06-01

    The inverse medium problem with discontinuous refractive index is a challenging kind of inverse problem. We employ primal-dual theory and fast solution of integral equations, and propose a new iterative imaging method. The selection criterion for the regularization parameter is given by the method of generalized cross-validation. Based on multi-frequency measurements of the scattered field, a recursive linearization algorithm is presented that proceeds from low to high frequency. We also discuss the initial-guess selection strategy based on semi-analytical approaches. Numerical experiments are presented to show the effectiveness of the proposed method.

  6. Gravity inversion of a fault by Particle swarm optimization (PSO).

    PubMed

    Toushmalani, Reza

    2013-01-01

    Particle swarm optimization (PSO) is a heuristic global optimization algorithm based on swarm intelligence. It originates from research on the movement behavior of bird flocks and fish schools. In this paper we introduce and apply this method to the gravity inverse problem. We discuss the solution of the inverse problem of determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems. The technique proved to work efficiently when tested on a number of models.
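
    A compact, generic particle swarm optimizer of the kind described, with illustrative hyper-parameters and a toy misfit standing in for the gravity-anomaly forward model:

```python
import numpy as np

def pso(misfit, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer; `misfit` maps a parameter vector to a scalar."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))     # positions
    v = np.zeros_like(x)                                      # velocities
    pbest, pbest_f = x.copy(), np.array([misfit(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([misfit(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Toy usage: recover fault parameters (e.g. depth, dip) that minimise a synthetic misfit.
true = np.array([2.5, 45.0])
misfit = lambda p: np.sum((p - true) ** 2)      # stand-in for the gravity-anomaly misfit
print(pso(misfit, bounds=[(0.1, 10.0), (0.0, 90.0)])[0])   # ~ [2.5, 45.0]
```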

  7. A combined direct/inverse three-dimensional transonic wing design method for vector computers

    NASA Technical Reports Server (NTRS)

    Weed, R. A.; Carlson, L. A.; Anderson, W. K.

    1984-01-01

    A three-dimensional transonic-wing design algorithm for vector computers is developed, and the results of sample computations are presented graphically. The method incorporates the direct/inverse scheme of Carlson (1975), a Cartesian grid system with boundary conditions applied at a mean plane, and a potential-flow solver based on the conservative form of the full potential equation and using the ZEBRA II vectorizable solution algorithm of South et al. (1980). The accuracy and consistency of the method with regard to direct and inverse analysis and trailing-edge closure are verified in the test computations.

  8. Solutions to inverse plume in a crosswind problem using a predictor - corrector method

    NASA Astrophysics Data System (ADS)

    Vanderveer, Joseph; Jaluria, Yogesh

    2013-11-01

    Investigation of minimalist solutions to the inverse convection problem of a plume in a crosswind has led to the development of a predictor-corrector method. The inverse problem is to predict the strength and location of the plume with respect to a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, allowing the generation of two inverse interpolation functions. These functions in turn are utilized by the predictor step to acquire the plume strength. Finally, the same interpolation functions, with the corrections from the plume strength, are used to solve for the plume location. Through optimization of the relative location of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After the optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.

  9. The inverse Wiener polarity index problem for chemical trees.

    PubMed

    Du, Zhibin; Ali, Akbar

    2018-01-01

    The Wiener polarity number (nowadays known as the Wiener polarity index and usually denoted by Wp) was devised by the chemist Harold Wiener for predicting the boiling points of alkanes. The index Wp of chemical trees (chemical graphs representing alkanes) is defined as the number of unordered pairs of vertices (carbon atoms) at distance 3. Inverse problems based on some well-known topological indices have already been addressed in the literature. The solution of such inverse problems may be helpful in speeding up the discovery of lead compounds having the desired properties. This paper is devoted to solving a stronger version of the inverse problem based on the Wiener polarity index for chemical trees. More precisely, it is proved that for every integer t ∈ {n - 3, n - 2,…,3n - 16, 3n - 15}, n ≥ 6, there exists an n-vertex chemical tree T such that Wp(T) = t.
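
    For concreteness, Wp simply counts unordered vertex pairs at graph distance 3; a small plain-Python sketch (BFS on an illustrative chemical tree) is:

```python
from collections import deque

def wiener_polarity(adj):
    """Wiener polarity index: number of unordered vertex pairs at distance exactly 3."""
    def bfs_dist(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    count = 0
    for u in adj:
        d = bfs_dist(u)
        count += sum(1 for dv in d.values() if dv == 3)
    return count // 2   # each unordered pair was counted twice

# Carbon skeleton of 2,3-dimethylbutane (hydrogens suppressed), illustrative only
adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4, 5], 4: [3], 5: [3]}
print(wiener_polarity(adj))   # expected: 4
```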

  10. Multicomponent pre-stack seismic waveform inversion in transversely isotropic media using a non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Padhi, Amit; Mallick, Subhashis

    2014-03-01

    Inversion of band- and offset-limited single-component (P-wave) seismic data does not provide robust estimates of subsurface elastic parameters and density. Multicomponent seismic data can, in principle, circumvent this limitation but adds to the complexity of the inversion algorithm because it requires simultaneous optimization of multiple objective functions, one for each data component. In seismology, these multiple objectives are typically handled by constructing a single objective given as a weighted sum of the objectives of individual data components, sometimes with additional regularization terms reflecting their interdependence, which is then followed by a single-objective optimization. Multi-objective problems, including multicomponent seismic inversion, are however non-linear. They have non-unique solutions, known as the Pareto-optimal solutions. Therefore, casting such problems as a single-objective optimization provides one out of the entire set of the Pareto-optimal solutions, which, in turn, may be biased by the choice of the weights. To handle multiple objectives, it is thus appropriate to treat the objective as a vector and simultaneously optimize each of its components so that the entire Pareto-optimal set of solutions can be estimated. This paper proposes such a novel multi-objective methodology using a non-dominated sorting genetic algorithm for waveform inversion of multicomponent seismic data. The applicability of the method is demonstrated using synthetic data generated from multilayer models based on a real well log. We document that the proposed method can reliably extract subsurface elastic parameters and density from multicomponent seismic data both when the subsurface is considered isotropic and when it is transversely isotropic with a vertical symmetry axis. We also compute approximate uncertainty values in the derived parameters. Although we restrict our inversion applications to horizontally stratified models, we outline a practical procedure of extending the method to approximately include local dips for each source-receiver offset pair. Finally, the applicability of the proposed method is not just limited to seismic inversion; it could be used to invert different data types that require not only multiple objectives but also multiple physics to describe them.

  11. Modeling the 16 September 2015 Chile tsunami source with the inversion of deep-ocean tsunami records by means of the r-solution method

    NASA Astrophysics Data System (ADS)

    Voronina, Tatyana; Romanenko, Alexey; Loskutov, Artem

    2017-04-01

    The key point in state-of-the-art tsunami forecasting is constructing a reliable tsunami source. In this study, we present an application of an original numerical inversion technique to modeling the source of the 16 September 2015 Chile tsunami. The problem of recovering a tsunami source from remote measurements of the incoming wave at deep-water tsunameters is considered as an inverse problem of mathematical physics in the class of ill-posed problems. This approach is based on the least squares and the truncated singular value decomposition techniques. The tsunami wave propagation is considered within the scope of the linear shallow-water theory. As in the inverse seismic problem, the numerical solutions obtained by mathematical methods become unstable due to the presence of noise in real data. The method of r-solutions makes it possible to avoid instability in the solution to the ill-posed problem under study. This method seems attractive from the computational point of view since the main effort is required only once, for calculating the matrix whose columns consist of computed waveforms for each harmonic as a source (the unknown tsunami source is represented as a part of a spatial harmonics series in the source area). Furthermore, by analyzing the singular spectrum of the matrix obtained in the course of numerical calculations, one can estimate the future inversion by a certain observational system, which makes it possible to propose a more effective disposition of the tsunameters with the help of precomputations. In other words, the results obtained allow finding a way to improve the inversion by selecting the most informative set of available recording stations. The case study of the 6 February 2013 Solomon Islands tsunami highlights the critical role of arranging deep-water tsunameters for obtaining the inversion results. Application of the proposed methodology to the 16 September 2015 Chile tsunami has successfully produced a tsunami source model. The function recovered by the proposed method can find practical applications both as an initial condition for various optimization approaches and for computer calculation of the tsunami wave propagation.
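
    The r-solution is essentially a least-squares solution truncated to the r largest singular values of the source-to-receiver matrix; a generic NumPy sketch of that truncation (synthetic matrix, not the authors' code):

```python
import numpy as np

def r_solution(A, b, r):
    """Truncated-SVD (r-solution) least squares: keep only the r largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U[:, :r].T @ b) / s[:r]        # projections scaled by singular values
    return Vt[:r].T @ coeffs

# Synthetic ill-conditioned example: the truncated solution stays stable under noise.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50)) @ np.diag(np.logspace(0, -8, 50)) @ rng.standard_normal((50, 50))
x_true = rng.standard_normal(50)
b = A @ x_true + 1e-6 * rng.standard_normal(200)
print(np.linalg.norm(r_solution(A, b, r=15)))   # remains bounded, unlike the full pseudo-inverse solution
```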

  12. Real-time inverse kinematics for the upper limb: a model-based algorithm using segment orientations.

    PubMed

    Borbély, Bence J; Szolgay, Péter

    2017-01-17

    Model-based analysis of human upper limb movements has key importance in understanding the motor control processes of our nervous system. Various simulation software packages have been developed over the years to perform model-based analysis. These packages provide computationally intensive (and therefore off-line) solutions to calculate the anatomical joint angles from motion-captured raw measurement data (also referred to as inverse kinematics). In addition, recent developments in inertial motion sensing technology show that it may replace large, immobile and expensive optical systems with small, mobile and cheaper solutions in cases when a laboratory-free measurement setup is needed. The objective of the presented work is to extend the workflow of measurement and analysis of human arm movements with an algorithm that allows accurate and real-time estimation of anatomical joint angles for a widely used OpenSim upper limb kinematic model when inertial sensors are used for movement recording. The internal structure of the selected upper limb model is analyzed and used as the underlying platform for the development of the proposed algorithm. Based on this structure, a prototype marker set is constructed that facilitates the reconstruction of model-based joint angles using orientation data directly available from inertial measurement systems. The mathematical formulation of the reconstruction algorithm is presented along with the validation of the algorithm on various platforms, including embedded environments. Execution performance tables of the proposed algorithm show significant improvement on all tested platforms. Compared to OpenSim's Inverse Kinematics tool, a 50-15,000x speedup is achieved while maintaining numerical accuracy. The proposed algorithm is capable of real-time reconstruction of standardized anatomical joint angles even in embedded environments, establishing a new way for complex applications to take advantage of accurate and fast model-based inverse kinematics calculations.

  13. Bayesian Inversion of 2D Models from Airborne Transient EM Data

    NASA Astrophysics Data System (ADS)

    Blatter, D. B.; Key, K.; Ray, A.

    2016-12-01

    The inherent non-uniqueness in most geophysical inverse problems leads to an infinite number of Earth models that fit observed data to within an adequate tolerance. To resolve this ambiguity, traditional inversion methods based on optimization techniques such as the Gauss-Newton and conjugate gradient methods rely on an additional regularization constraint on the properties that an acceptable model can possess, such as having minimal roughness. While allowing such an inversion scheme to converge on a solution, regularization makes it difficult to estimate the uncertainty associated with the model parameters. This is because regularization biases the inversion process toward certain models that satisfy the regularization constraint and away from others that don't, even when both may suitably fit the data. By contrast, a Bayesian inversion framework aims to produce not a single `most acceptable' model but an estimate of the posterior likelihood of the model parameters, given the observed data. In this work, we develop a 2D Bayesian framework for the inversion of transient electromagnetic (TEM) data. Our method relies on a reversible-jump Markov Chain Monte Carlo (RJ-MCMC) Bayesian inverse method with parallel tempering. Previous gradient-based inversion work in this area used a spatially constrained scheme wherein individual (1D) soundings were inverted together and non-uniqueness was tackled by using lateral and vertical smoothness constraints. By contrast, our work uses a 2D model space of Voronoi cells whose parameterization (including number of cells) is fully data-driven. To make the problem work practically, we approximate the forward solution for each TEM sounding using a local 1D approximation where the model is obtained from the 2D model by retrieving a vertical profile through the Voronoi cells. The implicit parsimony of the Bayesian inversion process leads to the simplest models that adequately explain the data, obviating the need for explicit smoothness constraints. In addition, credible intervals in model space are directly obtained, resolving some of the uncertainty introduced by regularization. An example application shows how the method can be used to quantify the uncertainty in airborne EM soundings for imaging subglacial brine channels and groundwater systems.

  14. Inverse problem of radiofrequency sounding of ionosphere

    NASA Astrophysics Data System (ADS)

    Velichko, E. N.; Yu. Grishentsev, A.; Korobeynikov, A. G.

    2016-01-01

    An algorithm for the solution of the inverse problem of vertical ionosphere sounding and a mathematical model of noise filtering are presented. An automated system for processing and analysis of spectrograms of vertical ionosphere sounding based on our algorithm is described. It is shown that the algorithm we suggest has a rather high efficiency. This is supported by the data obtained at the ionospheric stations of the so-called “AIS-M” type.

  15. Alternative formulations of the Laplace transform boundary element (LTBE) numerical method for the solution of diffusion-type equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moridis, G.

    1992-03-01

    The Laplace Transform Boundary Element (LTBE) method is a recently introduced numerical method that has been used for the solution of diffusion-type PDEs. It completely eliminates the time dependency of the problem and the need for time discretization, yielding solutions that are numerical in space and semi-analytical in time. In LTBE, solutions are obtained in the Laplace space and are then inverted numerically to yield the solution in time. The Stehfest and the DeHoog formulations of LTBE, based on two different inversion algorithms, are investigated. Both formulations produce comparable, extremely accurate solutions.
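
    For orientation, the sketch below implements the textbook Gaver-Stehfest numerical inverse Laplace transform, one of the two inversion algorithms named above; it is a generic illustration, not the LTBE code itself.

        # Textbook Gaver-Stehfest inversion of a Laplace-space solution F(s).
        import numpy as np
        from math import factorial

        def stehfest_weights(N=12):
            """Stehfest coefficients V_k for an even number of terms N."""
            half = N // 2
            V = np.zeros(N)
            for k in range(1, N + 1):
                s = 0.0
                for j in range((k + 1) // 2, min(k, half) + 1):
                    s += (j ** half * factorial(2 * j)
                          / (factorial(half - j) * factorial(j) * factorial(j - 1)
                             * factorial(k - j) * factorial(2 * j - k)))
                V[k - 1] = (-1) ** (k + half) * s
            return V

        def stehfest_invert(F, t, N=12):
            """Approximate f(t) from its Laplace transform F(s)."""
            V = stehfest_weights(N)
            a = np.log(2.0) / t
            return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

        # Check against a known transform pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
        print(stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0))   # ~0.3679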

  16. Real-time Inversion of Tsunami Source from GNSS Ground Deformation Observations and Tide Gauges.

    NASA Astrophysics Data System (ADS)

    Arcas, D.; Wei, Y.

    2017-12-01

    Over the last decade, the NOAA Center for Tsunami Research (NCTR) has developed an inversion technique to constrain tsunami sources based on the use of Green's functions in combination with data reported by NOAA's Deep-ocean Assessment and Reporting of Tsunamis (DART®) systems. The system has consistently proven effective in providing highly accurate tsunami forecasts of wave amplitude throughout an entire basin. However, improvement is necessary in two critical areas: reduction of data latency for near-field tsunami predictions and reduction of maintenance cost of the network. Two types of sensors have been proposed as supplementary to the existing network of DART® systems: Global Navigation Satellite System (GNSS) stations and coastal tide gauges. The use of GNSS stations to provide autonomous geo-spatial positioning at specific sites during an earthquake has been proposed in recent years to supplement the DART® array in tsunami source inversion. GNSS technology has the potential to provide substantial contributions in the two critical areas of DART® technology where improvement is most necessary. The present study uses GNSS ground displacement observations of the 2011 Tohoku-Oki earthquake in combination with the NCTR operational database of Green's functions to produce a rapid estimate of the tsunami source based on GNSS observations alone. The solution is then compared with that obtained via DART® data inversion, and the difficulties in obtaining an accurate GNSS-based solution are underlined. The study also identifies the set of conditions required for source inversion from coastal tide gauges, using the degree of nonlinearity of the signal as the primary criterion. We then proceed to identify the conditions and scenarios under which a particular gauge could be used to invert a tsunami source.
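
    A hedged sketch of the linear-algebra core of such a Green's-function inversion is given below: observations are fit by a non-negative combination of precomputed unit-source responses. The matrix shapes and the choice of non-negative least squares are illustrative assumptions, not NCTR's operational implementation.

        # Toy Green's-function source inversion by non-negative least squares.
        import numpy as np
        from scipy.optimize import nnls

        def invert_source(G, d):
            """
            G : (n_samples, n_unit_sources) precomputed unit-source time series
                stacked over all GNSS / tide-gauge / DART observation points.
            d : (n_samples,) observed time series stacked the same way.
            Returns non-negative weights for each unit source and the residual norm.
            """
            weights, residual = nnls(G, d)
            return weights, residual

        # Toy example: two unit sources, true weights (2.0, 0.5), noisy observations
        rng = np.random.default_rng(0)
        G = rng.standard_normal((200, 2))
        d = G @ np.array([2.0, 0.5]) + 0.01 * rng.standard_normal(200)
        print(invert_source(G, d)[0])   # approximately [2.0, 0.5]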

  17. Inverse scattering method and soliton double solution family for the general symplectic gravity model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao Yajun

    A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the method straightforward and effective to apply. As an application, a concrete family of soliton double solutions for the considered theory is obtained.

  18. Applications of Bayesian spectrum representation in acoustics

    NASA Astrophysics Data System (ADS)

    Botts, Jonathan M.

    This dissertation utilizes a Bayesian inference framework to enhance the solution of inverse problems where the forward model maps to acoustic spectra. A Bayesian solution to filter design inverts acoustic spectra to the pole-zero locations of a discrete-time filter model. Spatial sound field analysis with a spherical microphone array is a data analysis problem that requires inversion of spatio-temporal spectra to directions of arrival. As with many inverse problems, a probabilistic analysis results in richer solutions than can be achieved with ad-hoc methods. In the filter design problem, the Bayesian inversion results in globally optimal coefficient estimates as well as an estimate of the most concise filter capable of representing the given spectrum, within a single framework. This approach is demonstrated on synthetic spectra, head-related transfer function spectra, and measured acoustic reflection spectra. The Bayesian model-based analysis of spatial room impulse responses is presented as an analogous problem with an equally rich solution. The model selection mechanism provides an estimate of the number of arrivals, which is necessary to properly infer the directions of simultaneous arrivals. Although spectrum inversion problems are fairly ubiquitous, the scope of this dissertation has been limited to these two and derivative problems. The Bayesian approach to filter design is demonstrated on an artificial spectrum to illustrate the model comparison mechanism and then on measured head-related transfer functions to show the potential range of application. Coupled with sampling methods, the Bayesian approach is shown to outperform least-squares filter design methods commonly used in commercial software, confirming the need for a global search of the parameter space. The resulting designs are shown to be comparable to those that result from global optimization methods, but the Bayesian approach has the added advantage of a filter length estimate within the same unified framework. The application to reflection data is useful for representing frequency-dependent impedance boundaries in finite difference acoustic simulations. Furthermore, since the filter transfer function is a parametric model, it can be modified to incorporate arbitrary frequency weighting and account for the band-limited nature of measured reflection spectra. Finally, the model is modified to compensate for dispersive error in the finite difference simulation as part of the filter design process. Stemming from the filter boundary problem, the implementation of pressure sources in finite difference simulation is addressed in order to assure that schemes properly converge. A class of parameterized source functions is proposed and shown to offer straightforward control of residual error in the simulation. Guided by the notion that the solution to be approximated affects the approximation error, sources are designed which reduce residual dispersive error to the size of round-off errors. The early part of a room impulse response can be characterized by a series of isolated plane waves. Measured with an array of microphones, plane waves map to a directional response of the array or spatial intensity map. Probabilistic inversion of this response results in estimates of the number and directions of image source arrivals. The model-based inversion is shown to avoid ambiguities associated with peak-finding or inspection of the spatial intensity map. For this problem, determining the number of arrivals in a given frame is critical for properly inferring the state of the sound field. This analysis is effectively a compression of the spatial room response, which is useful for analysis or encoding of the spatial sound field. Parametric, model-based formulations of these problems enhance the solution in all cases, and a Bayesian interpretation provides a principled approach to model comparison and parameter estimation.

  19. Sources of unbounded priority inversions in real-time systems and a comparative study of possible solutions

    NASA Technical Reports Server (NTRS)

    Davari, Sadegh; Sha, Lui

    1992-01-01

    In the design of real-time systems, tasks are often assigned priorities. Preemptive priority driven schedulers are used to schedule tasks to meet the timing requirements. Priority inversion is the term used to describe the situation when a higher priority task's execution is delayed by lower priority tasks. Priority inversion can occur when there is contention for resources among tasks of different priorities. The duration of priority inversion could be long enough to cause tasks to miss their deadlines. Priority inversion cannot be completely eliminated. However, it is important to identify sources of priority inversion and minimize its duration. In this paper, a comprehensive review of the problem of and solutions to unbounded priority inversion is presented.
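
    The toy tick-based simulation below (an illustration only, not from the paper) shows why the inversion can become unbounded: a high-priority task H blocks on a resource held by a low-priority task L, while medium-priority tasks keep preempting L and therefore keep delaying the release of the resource.

        # Toy sketch of unbounded priority inversion (illustrative assumptions only).
        def simulate(medium_busy_ticks):
            """Return how many ticks the high-priority task H spends blocked."""
            l_remaining = 3          # ticks L still needs inside its critical section
            h_blocked = 0
            for t in range(1000):
                h_ready = t >= 2                  # H arrives at t = 2 and needs the lock
                m_ready = t < medium_busy_ticks   # M tasks are CPU-bound, no lock needed
                if l_remaining == 0 and h_ready:
                    return h_blocked              # lock released: H finally runs
                if h_ready:
                    h_blocked += 1                # H is blocked by L: priority inversion
                if m_ready:
                    continue                      # M preempts L, so L makes no progress
                if l_remaining > 0:
                    l_remaining -= 1              # L runs only when no M task is ready
            return h_blocked

        print(simulate(medium_busy_ticks=0))    # bounded inversion (a few ticks)
        print(simulate(medium_busy_ticks=500))  # inversion grows with the M workload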

  20. Model based approach to UXO imaging using the time domain electromagnetic method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lavely, E.M.

    1999-04-01

    Time domain electromagnetic (TDEM) sensors have emerged as a field-worthy technology for UXO detection in a variety of geological and environmental settings. This success has been achieved with commercial equipment that was not optimized for UXO detection and discrimination. The TDEM response displays a rich spatial and temporal behavior which is not currently utilized. Therefore, in this paper the author describes a research program for enhancing the effectiveness of the TDEM method for UXO detection and imaging. Fundamental research is required in at least three major areas: (a) model based imaging capability, i.e. the forward and inverse problem, (b) detector modeling and instrument design, and (c) target recognition and discrimination algorithms. These research problems are coupled and demand a unified treatment. For example: (1) the inverse solution depends on solution of the forward problem and knowledge of the instrument response; (2) instrument design with improved diagnostic power requires forward and inverse modeling capability; and (3) improved target recognition algorithms (such as neural nets) must be trained with data collected from the new instrument and with synthetic data computed using the forward model. Further, the design of the appropriate input and output layers of the net will be informed by the results of the forward and inverse modeling. A more fully developed model of the TDEM response would enable the joint inversion of data collected from multiple sensors (e.g., TDEM sensors and magnetometers). Finally, the author suggests that a complementary approach to joint inversions is the statistical recombination of data using principal component analysis. The decomposition into principal components is useful since the first principal component contains those features that are most strongly correlated from image to image.

  1. Prestack density inversion using the Fatti equation constrained by the P- and S-wave impedance and density

    NASA Astrophysics Data System (ADS)

    Liang, Li-Feng; Zhang, Hong-Bing; Dan, Zhi-Wei; Xu, Zi-Qiang; Liu, Xiu-Juan; Cao, Cheng-Hao

    2017-03-01

    Simultaneous prestack inversion is based on the modified Fatti equation and uses the ratio of the P- and S-wave velocities as a constraint. We use the relations between P-wave impedance and density (PID) and between S-wave impedance and density (SID) to replace the constant Vp/Vs constraint, and we propose the improved constrained Fatti equation to overcome the effect of P-wave impedance on density. We compare the sensitivity of both methods using numerical simulations and conclude that the density inversion sensitivity improves when using the proposed method. In addition, the random conjugate-gradient method is used in the inversion because it is fast and produces global solutions. The use of synthetic and field data suggests that the proposed inversion method is effective in conventional and nonconventional lithologies.
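
    For reference, the sketch below evaluates the standard three-term Fatti approximation of P-wave reflectivity from impedance and density contrasts (textbook form; the constrained variant introduced in the paper, which uses the PID/SID relations in place of a constant Vp/Vs constraint, is not reproduced, and the input contrasts are illustrative assumptions).

        # Three-term Fatti reflectivity approximation (textbook form).
        import numpy as np

        def fatti_reflectivity(theta_deg, dIp_Ip, dIs_Is, drho_rho, vs_vp=0.5):
            """P-wave reflectivity R(theta) from relative contrasts in P-impedance,
            S-impedance and density; vs_vp is the background Vs/Vp ratio."""
            theta = np.radians(theta_deg)
            sin2, tan2 = np.sin(theta) ** 2, np.tan(theta) ** 2
            c1 = 0.5 * (1.0 + tan2)
            c2 = -4.0 * vs_vp ** 2 * sin2
            c3 = -(0.5 * tan2 - 2.0 * vs_vp ** 2 * sin2)
            return c1 * dIp_Ip + c2 * dIs_Is + c3 * drho_rho

        # Reflectivity of a weak-contrast interface at 0, 20 and 40 degrees incidence
        print(fatti_reflectivity(np.array([0.0, 20.0, 40.0]), 0.05, 0.03, 0.02))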

  2. Harmony: EEG/MEG Linear Inverse Source Reconstruction in the Anatomical Basis of Spherical Harmonics

    PubMed Central

    Petrov, Yury

    2012-01-01

    EEG/MEG source localization based on a “distributed solution” is severely underdetermined, because the number of sources is much larger than the number of measurements. In particular, this makes the solution strongly affected by sensor noise. A new way to constrain the problem is presented. By using the anatomical basis of spherical harmonics (or spherical splines) instead of single dipoles, the dimensionality of the inverse solution is greatly reduced without sacrificing the quality of the data fit. The smoothness of the resulting solution reduces the surface bias and scatter of the sources (incoherency) compared to the popular minimum-norm algorithms where a single-dipole basis is used (MNE, depth-weighted MNE, dSPM, sLORETA, LORETA, IBF) and allows the effect of sensor noise to be reduced efficiently. This approach, termed Harmony, performed well when applied to experimental data (two exemplars of early evoked potentials) and showed better localization precision and solution coherence than the other tested algorithms when applied to realistically simulated data. PMID:23071497
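
    The sketch below illustrates the basis-reduction idea in its simplest form: rather than estimating one amplitude per dipole, the source distribution is expanded in a small basis B of smooth patterns and a regularized least-squares problem is solved for the basis coefficients. The lead field, basis, noise level and regularization weight are all illustrative assumptions, not the Harmony implementation.

        # Toy basis-reduced, regularized EEG/MEG inverse solution.
        import numpy as np

        def basis_reduced_inverse(L, B, d, lam=1e-2):
            """
            L : (n_sensors, n_dipoles) lead-field matrix
            B : (n_dipoles, n_basis)  basis of smooth source patterns (n_basis small)
            d : (n_sensors,)          measured EEG/MEG data
            Returns the reconstructed dipole amplitudes x = B @ c.
            """
            A = L @ B                                    # reduced forward operator
            c = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ d)
            return B @ c

        rng = np.random.default_rng(1)
        L = rng.standard_normal((64, 5000))              # 64 sensors, 5000 dipoles
        B = rng.standard_normal((5000, 25))              # 25 smooth basis patterns
        x_true = B @ rng.standard_normal(25)
        d = L @ x_true + 0.1 * rng.standard_normal(64)
        x_hat = basis_reduced_inverse(L, B, d)
        print(np.corrcoef(x_true, x_hat)[0, 1])          # correlation close to 1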

  3. Inverse scattering pre-stack depth imaging and its comparison to some depth migration methods for imaging rich fault complex structure

    NASA Astrophysics Data System (ADS)

    Nurhandoko, Bagus Endar B.; Sukmana, Indriani; Mubarok, Syahrul; Deny, Agus; Widowati, Sri; Kurniadi, Rizal

    2012-06-01

    Migration is an important issue for seismic imaging in complex structure. In this decade, depth imaging has become an important tool for producing accurate images, instead of time-domain imaging. The challenge for depth migration methods, however, is in revealing the complex structure of the subsurface. There are many methods of depth migration, with their advantages and weaknesses. In this paper, we present our proposed method of pre-stack depth migration based on a time-domain inverse scattering wave equation. This method is intended as a solution for imaging complex structures in Indonesia, especially in rich thrusting fault zones. In this research, we develop a recently advanced wave-equation migration based on time-domain inverse scattering, which uses a more natural description of wave propagation in terms of scattered waves. This wave-equation pre-stack depth migration uses a time-domain inverse scattering wave equation based on the Helmholtz equation. To provide true amplitude recovery, an inverse-divergence procedure and the recovery of transmission losses are included in the pre-stack migration. Benchmarks of the proposed inverse scattering pre-stack depth migration against other migration methods are also presented, i.e., wave-equation pre-stack depth migration, wave-equation depth migration, and the pre-stack time migration method. The proposed inverse scattering pre-stack depth migration successfully images the fault-rich zone, which contains extremely steep dips, and yields a seismic image of superior quality. The image quality of the inverse scattering migration is much better than that of the other migration methods.

  4. Ellipsoidal head model for fetal magnetoencephalography: forward and inverse solutions

    NASA Astrophysics Data System (ADS)

    Gutiérrez, David; Nehorai, Arye; Preissl, Hubert

    2005-05-01

    Fetal magnetoencephalography (fMEG) is a non-invasive technique where measurements of the magnetic field outside the maternal abdomen are used to infer the source location and signals of the fetus' neural activity. There are a number of aspects related to fMEG modelling that must be addressed, such as the conductor volume, fetal position and orientation, gestation period, etc. We propose a solution to the forward problem of fMEG based on an ellipsoidal head geometry. This model has the advantage of highlighting special characteristics of the field that are inherent to the anisotropy of the human head, such as the spread and orientation of the field in relationship with the localization and position of the fetal head. Our forward solution is presented in the form of a kernel matrix that facilitates the solution of the inverse problem through decoupling of the dipole localization parameters from the source signals. Then, we use this model and the maximum likelihood technique to solve the inverse problem assuming the availability of measurements from multiple trials. The applicability and performance of our methods are illustrated through numerical examples based on a real 151-channel SQUID fMEG measurement system (SARA). SARA is an MEG system especially designed for fetal assessment and is currently used for heart and brain studies. Finally, since our model requires knowledge of the best-fitting ellipsoid's centre location and semiaxes lengths, we propose a method for estimating these parameters through a least-squares fit on anatomical information obtained from three-dimensional ultrasound images.

  5. Extended resolvent and inverse scattering with an application to KPI

    NASA Astrophysics Data System (ADS)

    Boiti, M.; Pempinelli, F.; Pogrebkov, A. K.; Prinari, B.

    2003-08-01

    We present in detail an extended resolvent approach for investigating linear problems associated to 2+1 dimensional integrable equations. Our presentation is based as an example on the nonstationary Schrödinger equation with potential being a perturbation of the one-soliton potential by means of a decaying two-dimensional function. Modification of the inverse scattering theory as well as properties of the Jost solutions and spectral data as follows from the resolvent approach are given.

  6. Hamiltonian Monte Carlo Inversion of Seismic Sources in Complex Media

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; Simutė, S.

    2017-12-01

    We present a probabilistic seismic source inversion method that properly accounts for 3D heterogeneous Earth structure and provides full uncertainty information on the timing, location and mechanism of the event. Our method rests on two essential elements: (1) reciprocity and spectral-element simulations in complex media, and (2) Hamiltonian Monte Carlo sampling that requires only a small number of test models. Using spectral-element simulations of 3D, visco-elastic, anisotropic wave propagation, we precompute a database of the strain tensor in time and space by placing sources at the positions of receivers. Exploiting reciprocity, this receiver-side strain database can be used to promptly compute synthetic seismograms at the receiver locations for any hypothetical source within the volume of interest. The rapid solution of the forward problem enables a Bayesian solution of the inverse problem. For this, we developed a variant of Hamiltonian Monte Carlo (HMC) sampling. Taking advantage of easily computable derivatives, HMC converges to the posterior probability density with orders of magnitude fewer samples than derivative-free Monte Carlo methods. (Exact numbers depend on observational errors and the quality of the prior.) We apply our method to the Japanese Islands region, where we previously constrained the 3D structure of the crust and upper mantle using full-waveform inversion with a minimum period of around 15 s.
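
    For orientation, the sketch below implements a single generic leapfrog HMC step and uses it to sample a toy two-parameter posterior; the authors' HMC variant and the spectral-element forward modelling are not reproduced, and all tuning values here are illustrative assumptions.

        # Generic leapfrog Hamiltonian Monte Carlo step on a toy posterior.
        import numpy as np

        def hmc_step(x, log_post, grad_log_post, eps=0.1, n_leapfrog=20, rng=np.random):
            p = rng.standard_normal(x.size)               # auxiliary momentum
            x_new, p_new = x.copy(), p.copy()
            p_new += 0.5 * eps * grad_log_post(x_new)     # leapfrog integration
            for _ in range(n_leapfrog - 1):
                x_new += eps * p_new
                p_new += eps * grad_log_post(x_new)
            x_new += eps * p_new
            p_new += 0.5 * eps * grad_log_post(x_new)
            # Metropolis acceptance on the total Hamiltonian
            h_old = -log_post(x) + 0.5 * p @ p
            h_new = -log_post(x_new) + 0.5 * p_new @ p_new
            return x_new if np.log(rng.uniform()) < h_old - h_new else x

        # Sample a 2-D standard normal as a stand-in for a source-parameter posterior
        logp = lambda x: -0.5 * x @ x
        grad = lambda x: -x
        x, samples = np.zeros(2), []
        for _ in range(2000):
            x = hmc_step(x, logp, grad)
            samples.append(x.copy())
        print(np.cov(np.array(samples).T))                # roughly the identity matrix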

  7. Semianalytical solutions for transport in aquifer and fractured clay matrix system

    NASA Astrophysics Data System (ADS)

    Huang, Junqi; Goltz, Mark N.

    2015-09-01

    A three-dimensional mathematical model that describes transport of contaminant in a horizontal aquifer with simultaneous diffusion into a fractured clay formation is proposed. A group of semianalytical solutions is derived based on specific initial and boundary conditions as well as various source functions. The analytical model solutions are evaluated by numerical Laplace inverse transformation and analytical Fourier inverse transformation. The model solutions can be used to study the fate and transport in a three-dimensional spatial domain in which a nonaqueous phase liquid exists as a pool atop a fractured low-permeability clay layer. The nonaqueous phase liquid gradually dissolves into the groundwater flowing past the pool, while simultaneously diffusing into the fractured clay formation below the aquifer. Mass transfer of the contaminant into the clay formation is demonstrated to be significantly enhanced by the existence of the fractures, even though the volume of fractures is relatively small compared to the volume of the clay matrix. The model solution is a useful tool in assessing contaminant attenuation processes in a confined aquifer underlain by a fractured clay formation.

  8. 3D Gravity Inversion using Tikhonov Regularization

    NASA Astrophysics Data System (ADS)

    Toushmalani, Reza; Saibi, Hakim

    2015-08-01

    Subsalt exploration for oil and gas is attractive in regions where 3D seismic depth-migration to recover the geometry of a salt base is difficult. Additional information to reduce the ambiguity in seismic images would be beneficial. Gravity data often serve these purposes in the petroleum industry. In this paper, the authors present an algorithm for gravity inversion based on Tikhonov regularization and an automatically regularized solution process. They examined 3D Euler deconvolution to extract the best anomaly source depth as a priori information to invert the gravity data and provided a synthetic example. Finally, they applied the gravity inversion to recently obtained gravity data from Bandar Charak (Hormozgan, Iran) to identify its subsurface density structure. Their model showed the 3D shape of the salt dome in this region.
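
    The sketch below shows the generic Tikhonov-regularized linear inversion that such a scheme is built around, with the regularization weight picked by a simple discrepancy rule; the gravity kernel, noise level and the paper's automatic selection procedure are replaced here by illustrative assumptions.

        # Generic Tikhonov-regularized linear inversion with a discrepancy rule.
        import numpy as np

        def tikhonov(G, d, lam):
            """Minimize ||G m - d||^2 + lam * ||m||^2 for the model m."""
            n = G.shape[1]
            return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)

        def discrepancy_search(G, d, noise_level, lams=np.logspace(-6, 2, 50)):
            """Pick the largest lambda whose data misfit is still at the noise level."""
            for lam in sorted(lams, reverse=True):
                m = tikhonov(G, d, lam)
                if np.linalg.norm(G @ m - d) <= noise_level:
                    return lam, m
            return lams.min(), tikhonov(G, d, lams.min())

        rng = np.random.default_rng(2)
        G = rng.standard_normal((100, 400))               # under-determined kernel
        m_true = np.zeros(400); m_true[180:220] = 1.0     # compact dense body
        d = G @ m_true + 0.05 * rng.standard_normal(100)
        lam, m_est = discrepancy_search(G, d, noise_level=0.05 * np.sqrt(100))
        print(lam, np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))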

  9. Inversion method based on stochastic optimization for particle sizing.

    PubMed

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.

  10. Stress pattern of the Shanxi rift system, North China, inferred from the inversion of new focal mechanisms

    NASA Astrophysics Data System (ADS)

    Li, Bin; Atakan, Kuvvet; Sørensen, Mathilde Bøttger; Havskov, Jens

    2015-05-01

    Earthquake focal mechanisms of the Shanxi rift system, North China, are investigated for the time period 1965-April 2014. A total of 143 focal mechanisms of ML ≥ 3.0 earthquakes were compiled. Among them, 105 solutions are newly determined in this study by combining P-wave first motions and full waveform inversion, and 38 solutions are from available published data. Stress tensor inversion was then performed based on the new database. The results show that most solutions in the Shanxi rift system exhibit normal or strike-slip faulting, and the regional stress field is transtensional and dominated by NNW-SSE extension. This correlates well with results from GPS data, geological field observations and levelling measurements across the faults. Heterogeneity exists in the regional stress field, as indicated by individual stress tensor inversions conducted for five subzones. While the minimum stress axis (σ3) appears to be consistent and stable, the orientations, especially the plunges, of the maximum and intermediate stresses (σ1 and σ2) vary significantly along the strike of the different subzones. Based on our results, and combining multidisciplinary observations from geological surveys, GPS and cross-fault monitoring, a kinematic model is proposed for the Shanxi rift system, in which the rift is situated between two oppositely rotating crustal blocks and exhibits a transtensional stress regime. This model illustrates the present-day stress field and its correlation to the regional tectonics, as well as the current crustal deformation of the Shanxi rift system. Results obtained in this study may help in understanding the geodynamics, neotectonic activity, active seismicity and potential seismic hazard in this region.

  11. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
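
    For context, the sketch below implements the basic subspace pursuit recovery step (in the generic Dai-Milenkovic form) on a toy sparse problem; the hierarchical, MEG-specific machinery of the SPIGH algorithm is not reproduced, and the problem sizes are illustrative assumptions.

        # Generic subspace pursuit for K-sparse recovery from y ~= A @ x.
        import numpy as np

        def subspace_pursuit(A, y, K, max_iter=20):
            def ls_on(support):
                x = np.zeros(A.shape[1])
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                x[support] = coef
                return x
            T = np.argsort(-np.abs(A.T @ y))[:K]          # initial support estimate
            x = ls_on(T)
            r = y - A @ x
            for _ in range(max_iter):
                cand = np.union1d(T, np.argsort(-np.abs(A.T @ r))[:K])
                x_cand = ls_on(cand)                       # least squares on merged support
                T_new = np.argsort(-np.abs(x_cand))[:K]    # keep the K largest entries
                x_new = ls_on(T_new)
                r_new = y - A @ x_new
                if np.linalg.norm(r_new) >= np.linalg.norm(r):
                    break                                  # residual no longer shrinks
                T, x, r = T_new, x_new, r_new
            return x

        rng = np.random.default_rng(3)
        A = rng.standard_normal((60, 200))
        x_true = np.zeros(200); x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
        y = A @ x_true + 0.01 * rng.standard_normal(60)
        x_hat = subspace_pursuit(A, y, K=3)
        print(np.flatnonzero(np.round(x_hat, 1)))          # recovered support ~ {5, 50, 120}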

  12. Inverse scattering transform and soliton classification of the coupled modified Korteweg-de Vries equation

    NASA Astrophysics Data System (ADS)

    Wu, Jianping; Geng, Xianguo

    2017-12-01

    The inverse scattering transform of the coupled modified Korteweg-de Vries equation is studied by the Riemann-Hilbert approach. In the direct scattering process, the spectral analysis of the Lax pair is performed, from which a Riemann-Hilbert problem is established for the equation. In the inverse scattering process, by solving Riemann-Hilbert problems corresponding to the reflectionless cases, three types of multi-soliton solutions are obtained. The multi-soliton classification is based on the zero structures of the Riemann-Hilbert problem. In addition, some figures are given to illustrate the soliton characteristics of the coupled modified Korteweg-de Vries equation.

  13. Multi-level quantum mechanics theories and molecular mechanics study of the double-inversion mechanism of the F- + CH3I reaction in aqueous solution.

    PubMed

    Liu, Peng; Zhang, Jingxue; Wang, Dunyou

    2017-06-07

    A double-inversion mechanism of the F- + CH3I reaction was discovered in aqueous solution using combined multi-level quantum mechanics theories and molecular mechanics. The stationary points along the reaction path show very different structures to the ones in the gas phase due to the interactions between the solvent and solute, especially strong hydrogen bonds. An intermediate complex, a minimum on the potential of mean force, was found to serve as a connecting-link between the abstraction-induced inversion transition state and the Walden-inversion transition state. The potentials of mean force were calculated with both the DFT/MM and CCSD(T)/MM levels of theory. Our calculated free energy barrier of the abstraction-induced inversion is 69.5 kcal mol-1 at the CCSD(T)/MM level of theory, which agrees with the one at 72.9 kcal mol-1 calculated using the Born solvation model and gas-phase data; and our calculated free energy barrier of the Walden inversion is 24.2 kcal mol-1, which agrees very well with the experimental value at 25.2 kcal mol-1 in aqueous solution. The calculations show that the aqueous solution makes significant contributions to the potentials of mean force and exerts a big impact on the molecular-level evolution along the reaction pathway.

  14. In situ forming phase-inversion implants for sustained ocular delivery of triamcinolone acetonide.

    PubMed

    Sheshala, Ravi; Hong, Gan Chew; Yee, Wong Pui; Meka, Venkata Srikanth; Thakur, Raghu Raj Singh

    2018-02-26

    The objectives of this study were to develop a biodegradable poly-lactic-co-glycolic acid (PLGA) based injectable phase inversion in situ forming system for sustained delivery of triamcinolone acetonide (TA) and to conduct physicochemical characterisation, including in vitro drug release, of the prepared formulations. TA (at 0.5%, 1% and 2.5% w/w loading) was dissolved in N-methyl-2-pyrrolidone (NMP) solvent, and 30% w/w PLGA (50/50 and 75/25) polymer was then incorporated to prepare a homogeneous injectable solution. The formulations were evaluated for rheological behaviour using a rheometer, syringeability by texture analyser, and water uptake and rate of implant formation by optical coherence tomography (OCT) microscopy. Phase inversion in situ forming formulations were injected into PBS pH 7.3 to form an implant, and release samples were collected and analysed for drug content using an HPLC method. All formulations exhibited good syringeability and rheological properties (viscosity: 0.19-3.06 Pa.s), showing shear-thinning behaviour which enables them to remain free-flowing solutions for ease of administration. The results from the OCT microscope demonstrated that the thickness of the implants increased with time and that the rate of implant formation indicated fast phase inversion. The drug release from the implants was sustained over a period of 42 days. The research findings demonstrated that PLGA/NMP-based phase inversion in situ forming implants can improve compliance in patients suffering from ocular diseases by sustaining drug release for a prolonged period of time and thereby reducing the frequency of ocular injections.

  15. Effects of adaptive refinement on the inverse EEG solution

    NASA Astrophysics Data System (ADS)

    Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.

    1995-10-01

    One of the fundamental problems in electroencephalography can be characterized as an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically, the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of the electric and potential fields within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.

  16. Applied Mathematics in EM Studies with Special Emphasis on an Uncertainty Quantification and 3-D Integral Equation Modelling

    NASA Astrophysics Data System (ADS)

    Pankratov, Oleg; Kuvshinov, Alexey

    2016-01-01

    Despite impressive progress in the development and application of electromagnetic (EM) deterministic inverse schemes to map the 3-D distribution of electrical conductivity within the Earth, there is one question which remains poorly addressed—uncertainty quantification of the recovered conductivity models. Apparently, only an inversion based on a statistical approach provides a systematic framework to quantify such uncertainties. The Metropolis-Hastings (M-H) algorithm is the most popular technique for sampling the posterior probability distribution that describes the solution of the statistical inverse problem. However, all statistical inverse schemes require an enormous amount of forward simulations and thus appear to be extremely demanding computationally, if not prohibitive, if a 3-D set up is invoked. This urges development of fast and scalable 3-D modelling codes which can run large-scale 3-D models of practical interest for fractions of a second on high-performance multi-core platforms. But, even with these codes, the challenge for M-H methods is to construct proposal functions that simultaneously provide a good approximation of the target density function while being inexpensive to be sampled. In this paper we address both of these issues. First we introduce a variant of the M-H method which uses information about the local gradient and Hessian of the penalty function. This, in particular, allows us to exploit adjoint-based machinery that has been instrumental for the fast solution of deterministic inverse problems. We explain why this modification of M-H significantly accelerates sampling of the posterior probability distribution. In addition we show how Hessian handling (inverse, square root) can be made practicable by a low-rank approximation using the Lanczos algorithm. Ultimately we discuss uncertainty analysis based on stochastic inversion results. In addition, we demonstrate how this analysis can be performed within a deterministic approach. In the second part, we summarize modern trends in the development of efficient 3-D EM forward modelling schemes with special emphasis on recent advances in the integral equation approach.

  17. Estimating uncertainty of Full Waveform Inversion with Ensemble-based methods

    NASA Astrophysics Data System (ADS)

    Thurin, J.; Brossier, R.; Métivier, L.

    2017-12-01

    Uncertainty estimation is one key feature of tomographic applications for robust interpretation. However, this information is often missing in the frame of large-scale linearized inversions, and only the results at convergence are shown, despite the ill-posed nature of the problem. This issue is common in the Full Waveform Inversion community. While a few methodologies have already been proposed in the literature, standard FWI workflows do not yet include any systematic uncertainty quantification methods, but often try to assess the result's quality through cross-comparison with other seismic results or comparison with other geophysical data. With the development of large seismic networks/surveys, the increase in computational power and the more and more systematic application of FWI, it is crucial to tackle this problem and to propose robust and affordable workflows, in order to address the uncertainty quantification problem faced for near surface targets, crustal exploration, as well as regional and global scales. In this work (Thurin et al., 2017a,b), we propose an approach which takes advantage of the Ensemble Transform Kalman Filter (ETKF) proposed by Bishop et al. (2001), in order to estimate a low-rank approximation of the posterior covariance matrix of the FWI problem, allowing us to evaluate uncertainty information on the solution. Instead of solving the FWI problem through a Bayesian inversion with the ETKF, we chose to combine a conventional FWI, based on local optimization, with the ETKF strategy. This scheme combines the efficiency of local optimization for solving large-scale inverse problems and makes the sampling of the local solution space possible thanks to its embarrassingly parallel property. References: Bishop, C. H., Etherton, B. J. and Majumdar, S. J., 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly Weather Review, 129(3), 420-436. Thurin, J., Brossier, R. and Métivier, L., 2017a. Ensemble-Based Uncertainty Estimation in Full Waveform Inversion. 79th EAGE Conference and Exhibition 2017 (12-15 June, 2017). Thurin, J., Brossier, R. and Métivier, L., 2017b. An Ensemble-Transform Kalman Filter - Full Waveform Inversion scheme for Uncertainty estimation. SEG Technical Program Expanded Abstracts 2012.

  18. Comparative Study of Three Data Assimilation Methods for Ice Sheet Model Initialisation

    NASA Astrophysics Data System (ADS)

    Mosbeux, Cyrille; Gillet-Chaulet, Fabien; Gagliardini, Olivier

    2015-04-01

    Current global warming has direct consequences for ice-sheet mass loss, which contributes to sea level rise. This loss is generally driven by an acceleration of some coastal outlet glaciers, and reproducing these mechanisms is one of the major issues in ice-sheet and ice-flow modelling. The construction of an initial state, as close as possible to current observations, is required before producing any reliable projection of the evolution of ice sheets. For this step, inverse methods are often used to infer poorly known or unknown parameters. For instance, the adjoint inverse method has been implemented and applied with success by different authors in different ice flow models in order to infer the basal drag [Schafer et al., 2012; Gillet-Chaulet et al., 2012; Morlighem et al., 2010]. Other data fields, such as ice surface and bedrock topography, are measurable with varying uncertainty, but only locally along tracks, and must be interpolated onto the finer model grid. All these approximations lead to errors in the elevation model data and give rise to an ill-posed problem inducing non-physical anomalies in flux divergence [Seroussi et al., 2011]. One solution to dissipate these flux divergence anomalies is to conduct a surface relaxation step at the expense of the accuracy of the modelled surface [Gillet-Chaulet et al., 2012]. Other solutions, based on the inversion of ice thickness and basal drag, have been proposed [Perego et al., 2014; Pralong & Gudmundsson, 2011]. In this study, we create a twin experiment to compare three different assimilation algorithms based on inverse methods and nudging to constrain the bedrock friction and the bedrock elevation: (i) cyclic inversion of the friction parameter and bedrock topography using the adjoint method, (ii) cycles coupling inversion of the friction parameter using the adjoint method with nudging of the bedrock topography, and (iii) one-step inversion of both parameters with the adjoint method. The three methods show a clear improvement in parameter knowledge, leading to a significant reduction of the flux divergence of the model before forecasting.

  19. Galerkin approximation for inverse problems for nonautonomous nonlinear distributed systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Reich, Simeon; Rosen, I. G.

    1988-01-01

    An abstract framework and convergence theory is developed for Galerkin approximation for inverse problems involving the identification of nonautonomous nonlinear distributed parameter systems. A set of relatively easily verified conditions is provided which are sufficient to guarantee the existence of optimal solutions and their approximation by a sequence of solutions to a sequence of approximating finite dimensional identification problems. The approach is based on the theory of monotone operators in Banach spaces and is applicable to a reasonably broad class of nonlinear distributed systems. Operator theoretic and variational techniques are used to establish a fundamental convergence result. An example involving evolution systems with dynamics described by nonstationary quasilinear elliptic operators along with some applications are presented and discussed.

  20. New procedure for the determination of Hansen solubility parameters by means of inverse gas chromatography.

    PubMed

    Adamska, K; Bellinghausen, R; Voelkel, A

    2008-06-27

    The Hansen solubility parameter (HSP) seems to be a useful tool for the thermodynamic characterization of different materials. Unfortunately, estimation of the HSP values can cause some problems. In this work, different procedures using inverse gas chromatography are presented for calculating the solubility parameters of pharmaceutical excipients. The newly proposed procedure, based on the methodology of Lindvig et al. and using experimental Flory-Huggins interaction parameter data, can be a reasonable alternative for the estimation of HSP values. The advantage of this method is that the Flory-Huggins interaction parameter chi values for all test solutes are used in the further calculation, so that diverse interactions between the test solutes and the material are taken into consideration.

  1. Joint three-dimensional inversion of coupled groundwater flow and heat transfer based on automatic differentiation: sensitivity calculation, verification, and synthetic examples

    NASA Astrophysics Data System (ADS)

    Rath, V.; Wolf, A.; Bücker, H. M.

    2006-10-01

    Inverse methods are useful tools not only for deriving estimates of unknown parameters of the subsurface, but also for appraisal of the models thus obtained. While neither the most general nor the most efficient approach, Bayesian inversion based on the calculation of the Jacobian of a given forward model can be used to evaluate many quantities useful in this process. The calculation of the Jacobian, however, is computationally expensive and, if done by divided differences, prone to truncation error. Here, automatic differentiation can be used to produce derivative code by source transformation of an existing forward model. We describe this process for a coupled fluid flow and heat transport finite difference code, which is used in a Bayesian inverse scheme to estimate thermal and hydraulic properties and boundary conditions from measured hydraulic potentials and temperatures. The resulting derivative code was validated by comparison to simple analytical solutions and divided differences. Synthetic examples from different flow regimes demonstrate the use of the inverse scheme and its behaviour in different configurations.
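
    As a small illustration of the underlying idea, the sketch below obtains the exact Jacobian of a toy layered heat-conduction forward model by automatic differentiation (via jax, purely for illustration; the paper applies source transformation to an existing finite-difference code, and the toy model, parameter names and values here are assumptions).

        # Toy illustration of Jacobian calculation by automatic differentiation,
        # avoiding the truncation error of divided differences.
        import jax
        import jax.numpy as jnp

        def forward(params, depths=jnp.linspace(0.0, 2000.0, 21)):
            """Temperature profile for a two-layer conductivity model:
            surface heat flow q0 and conductivities k1 (upper), k2 (lower)."""
            q0, k1, k2 = params
            k = jnp.where(depths < 1000.0, k1, k2)
            dz = depths[1] - depths[0]
            return 10.0 + jnp.cumsum(q0 / k) * dz      # degrees Celsius

        params = jnp.array([0.06, 2.5, 3.0])           # W/m^2, W/(m K), W/(m K)
        J = jax.jacfwd(forward)(params)                # exact Jacobian of the forward map
        print(J.shape)                                 # (21, 3), no truncation error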

  2. Sparse-grid, reduced-basis Bayesian inversion: Nonaffine-parametric nonlinear equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Peng, E-mail: peng@ices.utexas.edu; Schwab, Christoph, E-mail: christoph.schwab@sam.math.ethz.ch

    2016-07-01

    We extend the reduced basis (RB) accelerated Bayesian inversion methods for affine-parametric, linear operator equations which are considered in [16,17] to non-affine, nonlinear parametric operator equations. We generalize the analysis of sparsity of parametric forward solution maps in [20] and of Bayesian inversion in [48,49] to the fully discrete setting, including Petrov–Galerkin high-fidelity (“HiFi”) discretization of the forward maps. We develop adaptive, stochastic collocation based reduction methods for the efficient computation of reduced bases on the parametric solution manifold. The nonaffinity and nonlinearity with respect to (w.r.t.) the distributed, uncertain parameters and the unknown solution is collocated; specifically, by the so-called Empirical Interpolation Method (EIM). For the corresponding Bayesian inversion problems, computational efficiency is enhanced in two ways: first, expectations w.r.t. the posterior are computed by adaptive quadratures with dimension-independent convergence rates proposed in [49]; the present work generalizes [49] to account for the impact of the PG discretization in the forward maps on the convergence rates of the Quantities of Interest (QoI for short). Second, we propose to perform the Bayesian estimation only w.r.t. a parsimonious, RB approximation of the posterior density. Based on the approximation results in [49], the infinite-dimensional parametric, deterministic forward map and operator admit N-term RB and EIM approximations which converge at rates which depend only on the sparsity of the parametric forward map. In several numerical experiments, the proposed algorithms exhibit dimension-independent convergence rates which equal, at least, the currently known rate estimates for N-term approximation. We propose to accelerate Bayesian estimation by first offline construction of reduced basis surrogates of the Bayesian posterior density. The parsimonious surrogates can then be employed for online data assimilation and for Bayesian estimation. They also open a perspective for optimal experimental design.

  3. Using a derivative-free optimization method for multiple solutions of inverse transport problems

    DOE PAGES

    Armstrong, Jerawan C.; Favorite, Jeffrey A.

    2016-01-14

    Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.

  4. Modulating light propagation in ZnO-Cu₂O-inverse opal solar cells for enhanced photocurrents.

    PubMed

    Yantara, Natalia; Pham, Thi Thu Trang; Boix, Pablo P; Mathews, Nripan

    2015-09-07

    The advantages of employing an interconnected periodic ZnO morphology, i.e. an inverse opal structure, in electrodeposited ZnO/Cu2O devices are presented. The solar cells are fabricated using low cost solution based methods such as spin coating and electrodeposition. The impact of inverse opal geometry, mainly the diameter and thickness, is scrutinized. By employing 3 layers of an inverse opal structure with a 300 nm pore diameter, higher short circuit photocurrents (∼84% improvement) are observed; however the open circuit voltages decrease with increasing interfacial area. Optical simulation using a finite difference time domain method shows that the inverse opal structure modulates light propagation within the devices such that more photons are absorbed close to the ZnO/Cu2O junction. This increases the collection probability resulting in improved short circuit currents.

  5. Adaptive eigenspace method for inverse scattering problems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Grote, Marcus J.; Kray, Marie; Nahum, Uri

    2017-02-01

    A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.

  6. Inverse problems and optimal experiment design in unsteady heat transfer processes identification

    NASA Technical Reports Server (NTRS)

    Artyukhin, Eugene A.

    1991-01-01

    Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.

  7. Probabilistic Magnetotelluric Inversion with Adaptive Regularisation Using the No-U-Turns Sampler

    NASA Astrophysics Data System (ADS)

    Conway, Dennis; Simpson, Janelle; Didana, Yohannes; Rugari, Joseph; Heinson, Graham

    2018-04-01

    We present the first inversion of magnetotelluric (MT) data using a Hamiltonian Monte Carlo algorithm. The inversion of MT data is an underdetermined problem which leads to an ensemble of feasible models for a given dataset. A standard approach in MT inversion is to perform a deterministic search for the single solution which is maximally smooth for a given data-fit threshold. An alternative approach is to use Markov Chain Monte Carlo (MCMC) methods, which have been used in MT inversion to explore the entire solution space and produce a suite of likely models. This approach has the advantage of assigning confidence to resistivity models, leading to better geological interpretations. Recent advances in MCMC techniques include the No-U-Turns Sampler (NUTS), an efficient and rapidly converging method which is based on Hamiltonian Monte Carlo. We have implemented a 1D MT inversion which uses the NUTS algorithm. Our model includes a fixed number of layers of variable thickness and resistivity, as well as probabilistic smoothing constraints which allow sharp and smooth transitions. We present the results of a synthetic study and show the accuracy of the technique, as well as the fast convergence, independence of starting models, and sampling efficiency. Finally, we test our technique on MT data collected from a site in Boulia, Queensland, Australia to show its utility in geological interpretation and ability to provide probabilistic estimates of features such as depth to basement.
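
    For context, the sketch below gives the standard 1D MT forward recursion (apparent resistivity and phase for a stack of layers over a halfspace) that any layered-model sampler of this kind must evaluate for every proposed model; the layer values are illustrative assumptions, and the NUTS sampler itself is not reproduced.

        # Standard 1D magnetotelluric forward response (textbook impedance recursion).
        import numpy as np

        MU0 = 4e-7 * np.pi

        def mt1d_forward(resistivities, thicknesses, freqs):
            """Apparent resistivity (ohm-m) and phase (deg) for layers over a halfspace.
            resistivities: top to bottom, last entry is the halfspace; thicknesses: finite layers."""
            rho_a, phase = [], []
            for f in freqs:
                w = 2.0 * np.pi * f
                Z = np.sqrt(1j * w * MU0 * resistivities[-1])       # basement impedance
                for rho, h in zip(resistivities[-2::-1], thicknesses[::-1]):
                    k = np.sqrt(1j * w * MU0 / rho)
                    Zi = 1j * w * MU0 / k                            # intrinsic impedance
                    t = np.tanh(k * h)
                    Z = Zi * (Z + Zi * t) / (Zi + Z * t)             # recursion upward
                rho_a.append(abs(Z) ** 2 / (w * MU0))
                phase.append(np.degrees(np.angle(Z)))
            return np.array(rho_a), np.array(phase)

        # Conductive layer (10 ohm-m, 500 m) under 1 km of 100 ohm-m, over a 100 ohm-m halfspace
        print(mt1d_forward([100.0, 10.0, 100.0], [1000.0, 500.0], [0.01, 0.1, 1.0, 10.0]))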

  8. Identification procedure for epistemic uncertainties using inverse fuzzy arithmetic

    NASA Astrophysics Data System (ADS)

    Haag, T.; Herrmann, J.; Hanss, M.

    2010-10-01

    For the mathematical representation of systems with epistemic uncertainties, arising, for example, from simplifications in the modeling procedure, models with fuzzy-valued parameters prove to be a suitable and promising approach. In practice, however, the determination of these parameters turns out to be a non-trivial problem. The identification procedure to appropriately update these parameters on the basis of a reference output (measurement or output of an advanced model) requires the solution of an inverse problem. Against this background, an inverse method for the computation of the fuzzy-valued parameters of a model with epistemic uncertainties is presented. This method stands out because it only uses feedforward simulations of the model, based on the transformation method of fuzzy arithmetic, along with the reference output. An inversion of the system equations is not necessary. The advance presented in this paper is the identification of multiple input parameters based on a single reference output or measurement. An optimization is used to solve the resulting underdetermined problems by minimizing the uncertainty of the identified parameters. Regions where the identification procedure is reliable are determined by the computation of a feasibility criterion which is also based only on the output data of the transformation method. For a frequency response function of a mechanical system, this criterion allows the identification process to be restricted to a particular frequency range where its solution can be guaranteed. Finally, the practicability of the method is demonstrated by conservatively covering the measured output of a fluid-filled piping system with the corresponding uncertain FE model.

  9. Probabilistic numerical methods for PDE-constrained Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Cockayne, Jon; Oates, Chris; Sullivan, Tim; Girolami, Mark

    2017-06-01

    This paper develops meshless methods for probabilistically describing discretisation error in the numerical solution of partial differential equations. This construction enables the solution of Bayesian inverse problems while accounting for the impact of the discretisation of the forward problem. In particular, this drives statistical inferences to be more conservative in the presence of significant solver error. Theoretical results are presented describing rates of convergence for the posteriors in both the forward and inverse problems. This method is tested on a challenging inverse problem with a nonlinear forward model.

  10. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a complex numerical evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
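
    The toy sketch below illustrates the surrogate idea in its simplest form: a small neural network is trained on runs of a (deliberately trivial) forward function and then used inside a Metropolis sampler. The forward function, network size and noise level are illustrative assumptions; the traveltime physics and the probabilistic treatment of the surrogate modeling error from the paper are not reproduced.

        # Toy neural-network surrogate of a forward model used inside a Metropolis sampler.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        expensive_forward = lambda m: np.array([np.sum(np.sin(m)), np.sum(m ** 2)])

        # 1. Train the surrogate on forward runs drawn from the prior range
        rng = np.random.default_rng(4)
        M = rng.uniform(-1, 1, size=(2000, 3))
        D = np.array([expensive_forward(m) for m in M])
        surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(M, D)

        # 2. Metropolis sampling using the fast surrogate instead of the full model
        d_obs, sigma = expensive_forward(np.array([0.3, -0.2, 0.5])), 0.05
        def log_like(m):
            return -0.5 * np.sum((surrogate.predict(m[None])[0] - d_obs) ** 2) / sigma ** 2

        m, ll_m, samples = np.zeros(3), None, []
        ll_m = log_like(m)
        for _ in range(5000):
            prop = m + 0.1 * rng.standard_normal(3)
            ll_prop = log_like(prop)
            if np.log(rng.uniform()) < ll_prop - ll_m:
                m, ll_m = prop, ll_prop
            samples.append(m.copy())
        print(np.mean(samples[1000:], axis=0))   # posterior mean under the surrogate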

  11. Phases, periphases, and interphases equilibrium by molecular modeling. I. Mass equilibrium by the semianalytical stochastic perturbations method and application to a solution between (120) gypsum faces

    NASA Astrophysics Data System (ADS)

    Pedesseau, Laurent; Jouanna, Paul

    2004-12-01

    The SASP (semianalytical stochastic perturbations) method is an original mixed macro-nano-approach dedicated to the mass equilibrium of multispecies phases, periphases, and interphases. This general method, applied here to the reflexive relation Ck⇔μk between the concentrations Ck and the chemical potentials μk of k species within a fluid in equilibrium, leads to the distribution of the particles at the atomic scale. The macroaspects of the method, based on analytical Taylor expansions of the chemical potentials, are intimately mixed with the nanoaspects of molecular mechanics computations on stochastically perturbed states. This numerical approach, directly linked to definitions, is universal by comparison with current approaches (DLVO Derjaguin-Landau-Verwey-Overbeek, grand canonical Monte Carlo, etc.), without any restriction on the number of species, concentrations, or boundary conditions. The determination of the relation Ck⇔μk in fact involves two problems: a direct problem Ck⇒μk and an inverse problem μk⇒Ck. Validation of the method is demonstrated in case studies A and B, which treat, respectively, a direct problem and an inverse problem within a free saturated gypsum solution. The flexibility of the method is illustrated in case study C, dealing with an inverse problem within a solution interphase, confined between two (120) gypsum faces, remaining in connection with a reference solution. This last inverse problem leads to the mass equilibrium of ions and water molecules within a 3 Å thick gypsum interface. The major unexpected observation is the repulsion of SO42- ions towards the reference solution and the attraction of Ca2+ ions from the reference solution, the concentration being 50 times higher within the interphase as compared to the free solution. The SASP method is currently the only approach able to tackle the simulation of the number and distribution of ions plus water molecules in such extreme confined conditions. This result is of prime importance for all coupled chemical-mechanical problems dealing with interfaces, and more generally for a wide variety of applications such as phase changes, osmotic equilibrium, surface energy, etc., in complex physicochemical situations.

  12. Direct and Inverse Kinematics of a Novel Tip-Tilt-Piston Parallel Manipulator

    NASA Technical Reports Server (NTRS)

    Tahmasebi, Farhad

    2004-01-01

    Closed-form direct and inverse kinematics of a new three degree-of-freedom (DOF) parallel manipulator with inextensible limbs and base-mounted actuators are presented. The manipulator has higher resolution and precision than the existing three DOF mechanisms with extensible limbs. Since all of the manipulator actuators are base-mounted, higher payload capacity, smaller actuator sizes, and lower power dissipation can be obtained. The manipulator is suitable for alignment applications where only tip, tilt, and piston motions are significant. The direct kinematics of the manipulator is reduced to solving an eighth-degree polynomial in the square of the tangent of the half-angle between one of the limbs and the base plane. Hence, there are at most 16 assembly configurations for the manipulator. In addition, it is shown that the 16 solutions are eight pairs of reflected configurations with respect to the base plane. Numerical examples for the direct and inverse kinematics of the manipulator are also presented.

  13. Robust inverse kinematics using damped least squares with dynamic weighting

    NASA Technical Reports Server (NTRS)

    Schinstock, D. E.; Faddis, T. N.; Greenway, R. B.

    1994-01-01

    This paper presents a general method for calculating the inverse kinematics with singularity and joint limit robustness for both redundant and non-redundant serial-link manipulators. A damped least squares inverse of the Jacobian is used with dynamic weighting matrices in approximating the solution; the weighting reduces specific joint differential vectors. The algorithm gives an exact solution away from the singularities and joint limits, and an approximate solution at or near the singularities and/or joint limits. The procedure was implemented for a six-d.o.f. teleoperator, and a well-behaved slave manipulator resulted under teleoperational control.
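
    A minimal Python sketch of one damped least-squares step with a diagonal weighting matrix follows; the Jacobian, weights and joint-limit heuristic are illustrative assumptions, not the implementation reported in the paper.

      import numpy as np

      def dls_step(J, e, W, damping=0.1):
          """One damped least-squares step minimizing |J dq - e|^2 + damping^2 |W dq|^2;
          large weights in W penalize motion of the corresponding joints."""
          A = J.T @ J + damping**2 * (W.T @ W)
          return np.linalg.solve(A, J.T @ e)

      # Illustrative 6-joint arm: random Jacobian, Cartesian error e, and weights that
      # grow for joints approaching an assumed limit at +/- pi.
      rng = np.random.default_rng(1)
      J = rng.normal(size=(6, 6))
      e = rng.normal(size=6)
      q = rng.uniform(-np.pi, np.pi, 6)
      w = 1.0 + 10.0 * np.maximum(0.0, np.abs(q) - 0.8 * np.pi) / (0.2 * np.pi)
      dq = dls_step(J, e, np.diag(w))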

  14. User's guide to PHREEQC (Version 2) : a computer program for speciation, batch-reaction, one-dimensional transport, and inverse geochemical calculations

    USGS Publications Warehouse

    Parkhurst, David L.; Appelo, C.A.J.

    1999-01-01

    PHREEQC version 2 is a computer program written in the C programming language that is designed to perform a wide variety of low-temperature aqueous geochemical calculations. PHREEQC is based on an ion-association aqueous model and has capabilities for (1) speciation and saturation-index calculations; (2) batch-reaction and one-dimensional (1D) transport calculations involving reversible reactions, which include aqueous, mineral, gas, solid-solution, surface-complexation, and ion-exchange equilibria, and irreversible reactions, which include specified mole transfers of reactants, kinetically controlled reactions, mixing of solutions, and temperature changes; and (3) inverse modeling, which finds sets of mineral and gas mole transfers that account for differences in composition between waters, within specified compositional uncertainty limits. New features in PHREEQC version 2 relative to version 1 include capabilities to simulate dispersion (or diffusion) and stagnant zones in 1D-transport calculations, to model kinetic reactions with user-defined rate expressions, to model the formation or dissolution of ideal, multicomponent or nonideal, binary solid solutions, to model fixed-volume gas phases in addition to fixed-pressure gas phases, to allow the number of surface or exchange sites to vary with the dissolution or precipitation of minerals or kinetic reactants, to include isotope mole balances in inverse modeling calculations, to automatically use multiple sets of convergence parameters, to print user-defined quantities to the primary output file and (or) to a file suitable for importation into a spreadsheet, and to define solution compositions in a format more compatible with spreadsheet programs. This report presents the equations that are the basis for chemical equilibrium, kinetic, transport, and inverse-modeling calculations in PHREEQC; describes the input for the program; and presents examples that demonstrate most of the program's capabilities.

  15. The inverse problem of refraction travel times, part II: Quantifying refraction nonuniqueness using a three-layer model

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.

    2005-01-01

    This paper is the second of a set of two papers in which we study the inverse refraction problem. The first paper, "Types of Geophysical Nonuniqueness through Minimization," studies and classifies the types of nonuniqueness that exist when solving inverse problems depending on the participation of a priori information required to obtain reliable solutions of inverse geophysical problems. In view of the classification developed, in this paper we study the type of nonuniqueness associated with the inverse refraction problem. An approach for obtaining a realistic solution to the inverse refraction problem is offered in a third paper that is in preparation. The nonuniqueness of the inverse refraction problem is examined by using a simple three-layer model. Like many other inverse geophysical problems, the inverse refraction problem does not have a unique solution. Conventionally, nonuniqueness is considered to be a result of insufficient data and/or error in the data, for any fixed number of model parameters. This study illustrates that even for overdetermined and error free data, nonlinear inverse refraction problems exhibit exact-data nonuniqueness, which further complicates the problem of nonuniqueness. By evaluating the nonuniqueness of the inverse refraction problem, this paper targets the improvement of refraction inversion algorithms, and as a result, the achievement of more realistic solutions. The nonuniqueness of the inverse refraction problem is examined initially by using a simple three-layer model. The observations and conclusions of the three-layer model nonuniqueness study are used to evaluate the nonuniqueness of more complicated n-layer models and multi-parameter cell models such as in refraction tomography. For any fixed number of model parameters, the inverse refraction problem exhibits continuous ranges of exact-data nonuniqueness. Such an unfavorable type of nonuniqueness can be uniquely solved only by providing abundant a priori information. Insufficient a priori information during the inversion is the reason why refraction methods often may not produce desired results or even fail. This work also demonstrates that the application of the smoothing constraints, typical when solving ill-posed inverse problems, has a dual and contradictory role when applied to the ill-posed inverse problem of refraction travel times. This observation indicates that smoothing constraints may play such a two-fold role when applied to other inverse problems. Other factors that contribute to inverse-refraction-problem nonuniqueness are also considered, including indeterminacy, statistical data-error distribution, numerical error and instability, finite data, and model parameters. © Birkhäuser Verlag, Basel, 2005.

  16. Reconstructing source terms from atmospheric concentration measurements: Optimality analysis of an inversion technique

    NASA Astrophysics Data System (ADS)

    Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre

    2014-12-01

    In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
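
    As an illustration of this family of solutions (and not of the renormalization weights themselves), the Python sketch below computes a minimum weighted norm solution of an underdetermined linear source-term problem using a weighted generalized inverse; the sensitivity matrix, source and weights are arbitrary placeholders.

      import numpy as np

      def minimum_weighted_norm(H, d, W):
          """Solution of the underdetermined system H s = d minimizing s^T W s:
          s = W^-1 H^T (H W^-1 H^T)^-1 d (a weighted generalized inverse applied to d)."""
          Winv_Ht = np.linalg.solve(W, H.T)
          return Winv_Ht @ np.linalg.solve(H @ Winv_Ht, d)

      # Illustrative source-term estimation: 5 concentration measurements, 200 grid cells.
      rng = np.random.default_rng(0)
      H = rng.random((5, 200))                  # one row of sensitivities per detector
      s_true = np.zeros(200)
      s_true[42] = 3.0                          # single point source
      d = H @ s_true
      W = np.diag(H.sum(axis=0) + 1e-12)        # illustrative weights; the renormalization method
      s_hat = minimum_weighted_norm(H, d, W)    #  uses a specific choice satisfying its condition
      print(int(np.argmax(s_hat)))              # indicates where the source is localized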

  17. Moving from pixel to object scale when inverting radiative transfer models for quantitative estimation of biophysical variables in vegetation (Invited)

    NASA Astrophysics Data System (ADS)

    Atzberger, C.

    2013-12-01

    The robust and accurate retrieval of vegetation biophysical variables using RTM is seriously hampered by the ill-posedness of the inverse problem. The contribution presents our object-based inversion approach and evaluates it against measured data. The proposed method takes advantage of the fact that nearby pixels are generally more similar than those at a larger distance. For example, within a given vegetation patch, nearby pixels often share similar leaf angular distributions. This leads to spectral co-variations in the n-dimensional spectral feature space, which can be used for regularization purposes. Using a set of leaf area index (LAI) measurements (n=26) acquired over alfalfa, sugar beet and garlic crops of the Barrax test site (Spain), it is demonstrated that the proposed regularization using neighbourhood information yields more accurate results compared to the traditional pixel-based inversion. [Figure captions: principle of the ill-posed inverse problem and the proposed solution illustrated in the red-nIR feature space using PROSAIL: (A) a spectral trajectory ('soil trajectory') obtained for one leaf angle (ALA) and one soil brightness (αsoil) as LAI varies between 0 and 10; (B) 'soil trajectories' for five soil brightness values and three leaf angles; (C) the ill-posed inverse problem, where different combinations of ALA × αsoil yield an identical crossing point; (D) object-based RTM inversion, where only one 'soil trajectory' fits all nine pixels within a gliding 3×3 window; assuming that over short distances (about 1 pixel) variations in soil brightness can be neglected, the proposed object-based inversion searches for one common set of ALA × αsoil so that the resulting 'soil trajectory' best fits the nine measured pixels. Ground-measured vs. retrieved LAI values for three crops: left, proposed object-based approach; right, pixel-based inversion.]

  18. Parana Basin Structure from Multi-Objective Inversion of Surface Wave and Receiver Function by Competent Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    An, M.; Assumpcao, M.

    2003-12-01

    The joint inversion of receiver function and surface wave is an effective way to diminish the influences of the strong tradeoff among parameters and the different sensitivity to the model parameters in their respective inversions, but the inversion problem becomes more complex. Multi-objective problems can be much more complicated than single-objective inversion in the model selection and optimization. If multiple conflicting objectives are involved, models can be ordered only partially. In this case, Pareto-optimal preference should be used to select solutions. On the other hand, an inversion that retrieves only a few optimal solutions cannot deal properly with the strong tradeoff between parameters, the uncertainties in the observation, the geophysical complexities and even the incompetency of the inversion technique. An effective way is to retrieve the geophysical information statistically from many acceptable solutions, which requires more competent global algorithms. Competent genetic algorithms recently proposed are far superior to the conventional genetic algorithm and can solve hard problems quickly, reliably and accurately. In this work we used one of the competent genetic algorithms, the Bayesian Optimization Algorithm, as the main inverse procedure. This algorithm uses Bayesian networks to draw out inherited information and can use Pareto-optimal preference in the inversion. With this algorithm, the lithospheric structure of the Paraná basin is inverted to fit both the observations of inter-station surface wave dispersion and receiver function.

  19. Toward 2D and 3D imaging of magnetic nanoparticles using EPR measurements.

    PubMed

    Coene, A; Crevecoeur, G; Leliaert, J; Dupré, L

    2015-09-01

    Magnetic nanoparticles (MNPs) are an important asset in many biomedical applications. An effective working of these applications requires an accurate knowledge of the spatial MNP distribution. A promising, noninvasive, and sensitive technique to visualize MNP distributions in vivo is electron paramagnetic resonance (EPR). Currently only 1D MNP distributions can be reconstructed. In this paper, the authors propose extending 1D EPR toward 2D and 3D using computer simulations to allow accurate imaging of MNP distributions. To find the MNP distribution belonging to EPR measurements, an inverse problem needs to be solved. The solution of this inverse problem highly depends on the stability of the inverse problem. The authors adapt 1D EPR imaging to realize the imaging of multidimensional MNP distributions. Furthermore, the authors introduce partial volume excitation in which only parts of the volume are imaged to increase stability of the inverse solution and to speed up the measurements. The authors simulate EPR measurements of different 2D and 3D MNP distributions and solve the inverse problem. The stability is evaluated by calculating the condition measure and by comparing the actual MNP distribution to the reconstructed MNP distribution. Based on these simulations, the authors define requirements for the EPR system to cope with the added dimensions. Moreover, the authors investigate how EPR measurements should be conducted to improve the stability of the associated inverse problem and to increase reconstruction quality. The approach used in 1D EPR can only be employed for the reconstruction of small volumes in 2D and 3D EPRs due to numerical instability of the inverse solution. The authors performed EPR measurements of increasing cylindrical volumes and evaluated the condition measure. This showed that a reduction of the inherent symmetry in the EPR methodology is necessary. By reducing the symmetry of the EPR setup, quantitative images of larger volumes can be obtained. The authors found that, by selectively exciting parts of the volume, they could increase the reconstruction quality even further while reducing the number of measurements. Additionally, the inverse solution of this activation method degrades more slowly for increasing volumes. Finally, the methodology was applied to noisy EPR measurements: using the reduced EPR setup's symmetry and the partial activation method, an increase in reconstruction quality of ≈ 80% can be seen, along with a speedup of the measurements by 10%. Applying the aforementioned requirements to the EPR setup and stabilizing the EPR measurements showed a tremendous increase in noise robustness, thereby making EPR a valuable method for quantitative imaging of multidimensional MNP distributions.

  20. Absolute mass scale calibration in the inverse problem of the physical theory of fireballs.

    NASA Astrophysics Data System (ADS)

    Kalenichenko, V. V.

    A method of the absolute mass scale calibration is suggested for solving the inverse problem of the physical theory of fireballs. The method is based on the data on the masses of the fallen meteorites whose fireballs have been photographed in their flight. The method may be applied to those fireballs whose bodies have not experienced considerable fragmentation during their destruction in the atmosphere and have kept their form well enough. Statistical analysis of the inverse problem solution for a sufficiently representative sample makes it possible to separate a subsample of such fireballs. The data on the Lost City and Innisfree meteorites are used to obtain calibration coefficients.

  1. Joint inversion of regional and teleseismic earthquake waveforms

    NASA Astrophysics Data System (ADS)

    Baker, Mark R.; Doser, Diane I.

    1988-03-01

    A least squares joint inversion technique for regional and teleseismic waveforms is presented. The mean square error between seismograms and synthetics is minimized using true amplitudes. Matching true amplitudes in modeling requires meaningful estimates of modeling uncertainties and of seismogram signal-to-noise ratios. This also permits calculating linearized uncertainties on the solution based on accuracy and resolution. We use a priori estimates of earthquake parameters to stabilize unresolved parameters, and for comparison with a posteriori uncertainties. We verify the technique on synthetic data, and on the 1983 Borah Peak, Idaho (M = 7.3), earthquake. We demonstrate the inversion on the August 1954 Rainbow Mountain, Nevada (M = 6.8), earthquake and find parameters consistent with previous studies.

  2. Trajectory planning and control of a 6 DOF manipulator with Stewart platform-based mechanism

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Antrazi, Sami

    1990-01-01

    The trajectory planning and control of a robot manipulator that has 6 degrees of freedom and was designed based on the mechanism of the Stewart Platform are studied. First, the main components of the manipulator are described along with its operation. The solutions for the forward and inverse kinematics of the manipulator are briefly described. After that, two trajectory planning schemes are developed using the manipulator inverse kinematics to track straight lines and circular paths. Finally, experiments conducted to study the performance of the developed planning schemes in tracking a straight line and a circle are presented and discussed.

  3. Performance evaluation of the inverse dynamics method for optimal spacecraft reorientation

    NASA Astrophysics Data System (ADS)

    Ventura, Jacopo; Romano, Marcello; Walter, Ulrich

    2015-05-01

    This paper investigates the application of the inverse dynamics in the virtual domain method to Euler angles, quaternions, and modified Rodrigues parameters for rapid optimal attitude trajectory generation for spacecraft reorientation maneuvers. The impact of the virtual domain and attitude representation is numerically investigated for both minimum time and minimum energy problems. Owing to the nature of the inverse dynamics method, it yields sub-optimal solutions for minimum time problems. Furthermore, the virtual domain improves the optimality of the solution, but at the cost of more computational time. The attitude representation also affects solution quality and computational speed. For minimum energy problems, the optimal solution can be obtained without the virtual domain with any considered attitude representation.

  4. A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications

    NASA Astrophysics Data System (ADS)

    Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.

    2018-04-01

    Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Application of smoothness constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces separated by hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative for reconstructing the image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through a block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior point method. The applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness constrained) least squares method in recovering the model parameters with much less data, yet preserving the sharp resistivity fronts separated by geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS has resulted in an efficient (an increase in R2 from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and fast converging (run time decreased by about 25%) solution.
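
    As a simplified stand-in for the paper's primal-dual interior point solver, the Python sketch below solves the same type of l1-regularized least squares problem with iterative soft thresholding (ISTA); the matrix, sparsity pattern and noise level are made-up placeholders.

      import numpy as np

      def soft_threshold(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def ista(A, b, lam, n_iter=500):
          """Iterative soft thresholding for min_x 0.5*|A x - b|^2 + lam*|x|_1."""
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth term's gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
          return x

      # Illustrative underdetermined problem: sparse model update, few measurements.
      rng = np.random.default_rng(3)
      A = rng.normal(size=(40, 200))             # stand-in for the Jacobian in a transform domain
      x_true = np.zeros(200)
      x_true[[10, 50, 120]] = [1.0, -2.0, 0.5]
      b = A @ x_true + 0.01 * rng.normal(size=40)
      x_hat = ista(A, b, lam=0.1)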

  5. Feasible Muscle Activation Ranges Based on Inverse Dynamics Analyses of Human Walking

    PubMed Central

    Simpson, Cole S.; Sohn, M. Hongchul; Allen, Jessica L.; Ting, Lena H.

    2015-01-01

    Although it is possible to produce the same movement using an infinite number of different muscle activation patterns owing to musculoskeletal redundancy, the degree to which observed variations in muscle activity can deviate from optimal solutions computed from biomechanical models is not known. Here, we examined the range of biomechanically permitted activation levels in individual muscles during human walking using a detailed musculoskeletal model and experimentally-measured kinetics and kinematics. Feasible muscle activation ranges define the minimum and maximum possible level of each muscle’s activation that satisfy inverse dynamics joint torques assuming that all other muscles can vary their activation as needed. During walking, 73% of the muscles had feasible muscle activation ranges that were greater than 95% of the total muscle activation range over more than 95% of the gait cycle, indicating that, individually, most muscles could be fully active or fully inactive while still satisfying inverse dynamics joint torques. Moreover, the shapes of the feasible muscle activation ranges did not resemble previously-reported muscle activation patterns nor optimal solutions, i.e. static optimization and computed muscle control, that are based on the same biomechanical constraints. Our results demonstrate that joint torque requirements from standard inverse dynamics calculations are insufficient to define the activation of individual muscles during walking in healthy individuals. Identifying feasible muscle activation ranges may be an effective way to evaluate the impact of additional biomechanical and/or neural constraints on possible versus actual muscle activity in both normal and impaired movements. PMID:26300401
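
    The feasible range of an individual muscle's activation can be phrased as a pair of linear programs: minimize and maximize that activation subject to the inverse-dynamics torque equality and 0-1 bounds on all activations. A hedged Python sketch using scipy's linprog follows; the moment-arm matrix and torque vector are random placeholders, not a musculoskeletal model.

      import numpy as np
      from scipy.optimize import linprog

      def feasible_activation_range(R, tau, i):
          """Min and max activation of muscle i over all activation vectors a in [0, 1]^n
          that reproduce the inverse-dynamics joint torques tau through the map R a = tau."""
          n = R.shape[1]
          c = np.zeros(n)
          c[i] = 1.0
          lo = linprog(c, A_eq=R, b_eq=tau, bounds=[(0, 1)] * n, method="highs")
          hi = linprog(-c, A_eq=R, b_eq=tau, bounds=[(0, 1)] * n, method="highs")
          return lo.x[i], hi.x[i]

      # Illustrative system: 3 joint torques, 10 muscles (real models have far more muscles).
      rng = np.random.default_rng(2)
      R = rng.normal(size=(3, 10))                    # stand-in for the moment-arm/force matrix
      tau = R @ rng.uniform(0.2, 0.6, size=10)        # torques generated by a feasible activation
      print(feasible_activation_range(R, tau, i=0))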

  6. A fast iterative convolution weighting approach for gridding-based direct Fourier three-dimensional reconstruction with correction for the contrast transfer function.

    PubMed

    Abrishami, V; Bilbao-Castro, J R; Vargas, J; Marabini, R; Carazo, J M; Sorzano, C O S

    2015-10-01

    We describe a fast and accurate method for the reconstruction of macromolecular complexes from a set of projections. Direct Fourier inversion (in which the Fourier Slice Theorem plays a central role) is a solution for dealing with this inverse problem. Unfortunately, the set of projections provides a non-equidistantly sampled version of the macromolecule Fourier transform in the single particle field and, therefore, a direct Fourier inversion may not be an optimal solution. In this paper, we introduce a gridding-based direct Fourier method for three-dimensional reconstruction that uses a weighting technique to compute a uniformly sampled Fourier transform. Moreover, the contrast transfer function of the microscope, which is a limiting factor in pursuing a high resolution reconstruction, is corrected by the algorithm. Parallelization of this algorithm, both on threads and on multiple CPU's, makes the process of three-dimensional reconstruction even faster. The experimental results show that our proposed gridding-based direct Fourier reconstruction is slightly more accurate than similar existing methods and presents a lower computational complexity both in terms of time and memory, thereby allowing its use on larger volumes. The algorithm is fully implemented in the open-source Xmipp package and is downloadable from http://xmipp.cnb.csic.es. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Cerebellum-inspired neural network solution of the inverse kinematics problem.

    PubMed

    Asadi-Eydivand, Mitra; Ebadzadeh, Mohammad Mehdi; Solati-Hashjin, Mehran; Darlot, Christian; Abu Osman, Noor Azuan

    2015-12-01

    The demand today for more complex robots that have manipulators with higher degrees of freedom is increasing because of technological advances. Obtaining the precise movement for a desired trajectory or a sequence of arm positions requires the computation of the inverse kinematic (IK) function, which is a major problem in robotics. The solution of the IK problem leads robots to the precise position and orientation of their end-effector. We developed a bioinspired solution comparable with the cerebellar anatomy and function to solve the said problem. The proposed model is stable under all conditions merely by parameter determination, in contrast to recursive model-based solutions, which remain stable only under certain conditions. We modified the proposed model for the simple two-segmented arm to prove the feasibility of the model under a basic condition. A fuzzy neural network through its learning method was used to compute the parameters of the system. Simulation results show the practical feasibility and efficiency of the proposed model in robotics. The main advantage of the proposed model is its generalizability and potential use in any robot.

  8. The Priority Inversion Problem and Real-Time Symbolic Model Checking

    DTIC Science & Technology

    1993-04-23

    Priority inversion can make real-time systems unpredictable in subtle ways. This makes it more difficult to implement and debug such systems. Our work discusses this problem and presents one possible solution. The solution is formalized and verified using temporal logic model checking techniques. In order to perform the verification, the BDD-based symbolic model checking algorithm given in previous works was extended to handle real-time properties using the bounded until operator. We believe that this algorithm, which is based on discrete time, is able to handle many real-time properties.

  9. Double Fourier Series Solution of Poisson’s Equation on a Sphere.

    DTIC Science & Technology

    1980-10-29

    The available excerpt of this record is fragmentary. It describes the solution procedure as involving the formation of algebraic systems, the solution of these systems, and the inverse transform of the solution in Fourier space back to physical space, and it lists per-step operation counts for the algorithm (for example, of order K(J - 1) log2 K for the inverse transform step).

  10. Analytic Closed-Form Solution of a Mixed Layer Model for Stratocumulus Clouds

    NASA Astrophysics Data System (ADS)

    Akyurek, Bengu Ozge

    Stratocumulus clouds play an important role in climate cooling and are hard to predict using global climate and weather forecast models. Thus, previous studies in the literature use observations and numerical simulation tools, such as large-eddy simulation (LES), to solve the governing equations for the evolution of stratocumulus clouds. In contrast to the previous works, this work provides an analytic closed-form solution to the cloud thickness evolution of stratocumulus clouds in a mixed-layer model framework. With a focus on application over coastal lands, the diurnal cycle of cloud thickness and whether or not clouds dissipate are of particular interest. An analytic solution enables the sensitivity analysis of implicitly interdependent variables and extrema analysis of cloud variables that are hard to achieve using numerical solutions. In this work, the sensitivity of inversion height, cloud-base height, and cloud thickness with respect to initial and boundary conditions, such as Bowen ratio, subsidence, surface temperature, and initial inversion height, are studied. A critical initial cloud thickness value that can be dissipated pre- and post-sunrise is provided. Furthermore, an extrema analysis is provided to obtain the minima and maxima of the inversion height and cloud thickness within 24 h. The proposed solution is validated against LES results under the same initial and boundary conditions. Then, the proposed analytic framework is extended to incorporate multiple vertical columns that are coupled by advection through wind flow. This enables a bridge between the micro-scale and the mesoscale relations. The effect of advection on cloud evolution is studied and a sensitivity analysis is provided.

  11. Are two plasma equilibrium states possible when the emission coefficient exceeds unity?

    NASA Astrophysics Data System (ADS)

    Campanell, M. D.; Umansky, M. V.

    2017-05-01

    Two floating sheath solutions with strong electron emission in planar geometry have been proposed, a "space-charge limited" (SCL) sheath and an "inverse" sheath. SCL and inverse models contain different assumptions about conditions outside the sheath (e.g., the velocity of ions entering the sheath). So it is not yet clear whether both sheaths are possible in practice, or only one. Here we treat the global presheath-sheath problem for a plasma produced volumetrically between two planar walls. We show that all equilibrium requirements (a) floating condition, (b) plasma shielding, and (c) presheath force balance, can indeed be satisfied in two different ways when the emission coefficient γ > 1. There is one solution with SCL sheaths and one with inverse sheaths, each with sharply different presheath distributions. As we show for the first time in 1D-1V simulations, a SCL and inverse equilibrium are both possible in plasmas with the same upstream properties (e.g., same N and Te). However, maintaining a true SCL equilibrium requires no ionization or charge exchange collisions in the sheath, or else cold ion accumulation in the SCL's "dip" forces a transition to the inverse. This suggests that only a monotonic inverse type sheath potential should exist at any plasma-facing surface with strong emission, whether be a divertor plate, emissive probe, dust grain, Hall thruster channel wall, sunlit object in space, etc. Nevertheless, SCL sheaths might still be possible if the ions in the dip can escape. Our simulations demonstrate ways in which SCL and inverse regimes might be distinguished experimentally based on large-scale presheath effects, without having to probe inside the sheath.

  12. Prediction and assimilation of surf-zone processes using a Bayesian network: Part II: Inverse models

    USGS Publications Warehouse

    Plant, Nathaniel G.; Holland, K. Todd

    2011-01-01

    A Bayesian network model has been developed to simulate a relatively simple problem of wave propagation in the surf zone (detailed in Part I). Here, we demonstrate that this Bayesian model can provide both inverse modeling and data-assimilation solutions for predicting offshore wave heights and depth estimates given limited wave-height and depth information from an onshore location. The inverse method is extended to allow data assimilation using observational inputs that are not compatible with deterministic solutions of the problem. These inputs include sand bar positions (instead of bathymetry) and estimates of the intensity of wave breaking (instead of wave-height observations). Our results indicate that wave breaking information is essential to reduce prediction errors. In many practical situations, this information could be provided from a shore-based observer or from remote-sensing systems. We show that various combinations of the assimilated inputs significantly reduce the uncertainty in the estimates of water depths and wave heights in the model domain. Application of the Bayesian network model to new field data demonstrated significant predictive skill (R2 = 0.7) for the inverse estimate of a month-long time series of offshore wave heights. The Bayesian inverse results include uncertainty estimates that were shown to be most accurate when given uncertainty in the inputs (e.g., depth and tuning parameters). Furthermore, the inverse modeling was extended to directly estimate tuning parameters associated with the underlying wave-process model. The inverse estimates of the model parameters not only showed an offshore wave height dependence consistent with results of previous studies but the uncertainty estimates of the tuning parameters also explain previously reported variations in the model parameters.

  13. Full-Wave Based Validation of Stripline Field Applicator For Low Frequency Material Measurements

    DTIC Science & Technology

    2009-03-01

    The available excerpt of this record is fragmentary, consisting mainly of table-of-contents and figure remnants. The recoverable technical content gives boundary conditions on the magnetic vector potential, Ax(x, y = ±h, z) = 0 and ∂Ay(x, y = ±h, z)/∂y = 0 for all x, z (the condition on Az is truncated in the record), and notes that to obtain the solution at y = ±h, an inverse Fourier transform must be performed on the principal contribution.

  14. Reconstruction of the temperature field for inverse ultrasound hyperthermia calculations at a muscle/bone interface.

    PubMed

    Liauh, Chihng-Tsung; Shih, Tzu-Ching; Huang, Huang-Wen; Lin, Win-Li

    2004-02-01

    An inverse algorithm with Tikhonov regularization of order zero has been used to estimate the intensity ratios of the reflected longitudinal wave to the incident longitudinal wave and of the refracted shear wave to the total transmitted wave into bone in calculating the absorbed power field, and then to reconstruct the temperature distribution in muscle and bone regions based on a limited number of temperature measurements during simulated ultrasound hyperthermia. The effects of the number of temperature sensors, the amount of noise superimposed on the temperature measurements, and the sensor locations on the performance of the inverse algorithm are investigated. Results show that noisy input data degrades the performance of this inverse algorithm, especially when the number of temperature sensors is small. Results are also presented demonstrating an improvement in the accuracy of the temperature estimates by employing an optimal value of the regularization parameter. Based on the analysis of singular-value decomposition, the optimal sensor position in a case utilizing only one temperature sensor can be determined to make the inverse algorithm converge to the true solution.
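
    A minimal sketch of zero-order Tikhonov regularization (ridge-type damping of the least squares problem) is given below in Python; the sensitivity matrix, the two unknown intensity ratios and the noise level are illustrative assumptions.

      import numpy as np

      def tikhonov_order_zero(A, b, alpha):
          """Regularized least squares min_x |A x - b|^2 + alpha*|x|^2, solved through
          the normal equations (A^T A + alpha I) x = A^T b."""
          return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ b)

      # Illustrative setup: six temperature sensors, two unknown intensity ratios.
      rng = np.random.default_rng(4)
      A = rng.normal(size=(6, 2))                    # sensitivity of the readings to the two ratios
      x_true = np.array([0.3, 0.6])
      b = A @ x_true + 0.05 * rng.normal(size=6)     # noisy temperature measurements
      for alpha in (0.0, 0.1, 1.0):                  # the paper tunes an optimal regularization value
          print(alpha, tikhonov_order_zero(A, b, alpha))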

  15. Analytical and numerical analysis of inverse optimization problems: conditions of uniqueness and computational methods

    PubMed Central

    Zatsiorsky, Vladimir M.

    2011-01-01

    One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907

  16. Trajectory Correction and Locomotion Analysis of a Hexapod Walking Robot with Semi-Round Rigid Feet

    PubMed Central

    Zhu, Yaguang; Jin, Bo; Wu, Yongsheng; Guo, Tong; Zhao, Xiangmo

    2016-01-01

    Aimed at solving the misplaced body trajectory problem caused by the rolling of semi-round rigid feet when a robot is walking, a legged kinematic trajectory correction methodology based on the Least Squares Support Vector Machine (LS-SVM) is proposed. The concept of ideal foothold is put forward for the three-dimensional kinematic model modification of a robot leg, and the deviation value between the ideal foothold and real foothold is analyzed. The forward/inverse kinematic solutions between the ideal foothold and joint angular vectors are formulated and the problem of direct/inverse kinematic nonlinear mapping is solved by using the LS-SVM. Compared with the previous approximation method, this correction methodology has better accuracy and faster calculation speed with regards to inverse kinematics solutions. Experiments on a leg platform and a hexapod walking robot are conducted with multi-sensors for the analysis of foot tip trajectory, base joint vibration, contact force impact, direction deviation, and power consumption, respectively. The comparative analysis shows that the trajectory correction methodology can effectively correct the joint trajectory, thus eliminating the contact force influence of semi-round rigid feet, significantly improving the locomotion of the walking robot and reducing the total power consumption of the system. PMID:27589766

  17. Fuzzy logic control of telerobot manipulators

    NASA Technical Reports Server (NTRS)

    Franke, Ernest A.; Nedungadi, Ashok

    1992-01-01

    Telerobot systems for advanced applications will require manipulators with redundant 'degrees of freedom' (DOF) that are capable of adapting manipulator configurations to avoid obstacles while achieving the user specified goal. Conventional methods for control of manipulators (based on solution of the inverse kinematics) cannot be easily extended to these situations. Fuzzy logic control offers a possible solution to these needs. A current research program at SRI developed a fuzzy logic controller for a redundant, 4 DOF, planar manipulator. The manipulator end point trajectory can be specified by either a computer program (robot mode) or by manual input (teleoperator). The approach used expresses end-point error and the location of manipulator joints as fuzzy variables. Joint motions are determined by a fuzzy rule set without requiring solution of the inverse kinematics. Additional rules for sensor data, obstacle avoidance and preferred manipulator configuration, e.g., 'righty' or 'lefty', are easily accommodated. The procedure used to generate the fuzzy rules can be extended to higher DOF systems.

  18. The inverse problem of estimating the gravitational time dilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gusev, A. V., E-mail: avg@sai.msu.ru; Litvinov, D. A.; Rudenko, V. N.

    2016-11-15

    Precise testing of the gravitational time dilation effect suggests comparing the clocks at points with different gravitational potentials. Such a configuration arises when radio frequency standards are installed at orbital and ground stations. The ground-based standard is accessible directly, while the spaceborne one is accessible only via the electromagnetic signal exchange. Reconstructing the current frequency of the spaceborne standard is an ill-posed inverse problem whose solution depends significantly on the characteristics of the stochastic electromagnetic background. The solution for Gaussian noise is known, but the nature of the standards themselves is associated with nonstationary fluctuations of a wide class of distributions. A solution is proposed for a background of flicker fluctuations with a spectrum (1/f)^γ, where 1 < γ < 3, and stationary increments. The results include formulas for the error in reconstructing the frequency of the spaceborne standard and numerical estimates for the accuracy of measuring the relativistic redshift effect.

  19. Reconstruction of electrical impedance tomography (EIT) images based on the expectation maximum (EM) method.

    PubMed

    Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi

    2012-11-01

    Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body using electrical contact measurements. The image reconstruction for EIT is an inverse problem, which is both non-linear and ill-posed. The traditional regularization method cannot avoid introducing negative values in the solution. The negativity of the solution produces artifacts in reconstructed images in the presence of noise. A statistical method, namely, the expectation maximization (EM) method, is used to solve the inverse problem for EIT in this paper. The mathematical model of EIT is transformed to a non-negatively constrained likelihood minimization problem. The solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses strategies for choosing parameters. Simulation and experimental results indicate that reconstructed images with higher quality can be obtained by the EM method, compared with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
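
    As a generic illustration of an EM-type, non-negativity preserving update (not the gradient projection-reduced Newton scheme used in the paper), the Python sketch below applies a multiplicative maximum-likelihood update to a toy linear reconstruction problem with a non-negative system matrix.

      import numpy as np

      def ml_em(A, b, n_iter=200):
          """Multiplicative EM-type update for a non-negativity constrained linear inverse
          problem with non-negative A and b; the estimate stays positive at every iteration."""
          x = np.ones(A.shape[1])
          column_sums = A.sum(axis=0)
          for _ in range(n_iter):
              x *= (A.T @ (b / (A @ x))) / column_sums
          return x

      # Illustrative EIT-like setup: non-negative sensitivity matrix and conductivity change.
      rng = np.random.default_rng(5)
      A = rng.random((30, 100))
      x_true = np.zeros(100)
      x_true[40:45] = 1.0
      b = A @ x_true
      x_hat = ml_em(A, b)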

  20. Application of genetic algorithms to focal mechanism determination

    NASA Astrophysics Data System (ADS)

    Kobayashi, Reiji; Nakanishi, Ichiro

    1994-04-01

    Genetic algorithms are a new class of methods for global optimization. They resemble Monte Carlo techniques, but search for solutions more efficiently than uniform Monte Carlo sampling. In the field of geophysics, genetic algorithms have recently been used to solve some non-linear inverse problems (e.g., earthquake location, waveform inversion, migration velocity estimation). We present an application of genetic algorithms to focal mechanism determination from first-motion polarities of P-waves and apply our method to two recent large events, the Kushiro-oki earthquake of January 15, 1993 and the SW Hokkaido (Japan Sea) earthquake of July 12, 1993. Initial solution and curvature information of the objective function that gradient methods need are not required in our approach. Moreover globally optimal solutions can be efficiently obtained. Calculation of polarities based on double-couple models is the most time-consuming part of the source mechanism determination. The amount of calculations required by the method designed in this study is much less than that of previous grid search methods.

  1. Continuous-wave infrared optical gain and amplified spontaneous emission at ultralow threshold by colloidal HgTe quantum dots.

    PubMed

    Geiregat, Pieter; Houtepen, Arjan J; Sagar, Laxmi Kishore; Infante, Ivan; Zapata, Felipe; Grigel, Valeriia; Allan, Guy; Delerue, Christophe; Van Thourhout, Dries; Hens, Zeger

    2018-01-01

    Colloidal quantum dots (QDs) raise more and more interest as solution-processable and tunable optical gain materials. However, especially for infrared active QDs, optical gain remains inefficient. Since stimulated emission involves multifold degenerate band-edge states, population inversion can be attained only at high pump power and must compete with efficient multi-exciton recombination. Here, we show that mercury telluride (HgTe) QDs exhibit size-tunable stimulated emission throughout the near-infrared telecom window at thresholds unmatched by any QD studied before. We attribute this unique behaviour to surface-localized states in the bandgap that turn HgTe QDs into 4-level systems. The resulting long-lived population inversion induces amplified spontaneous emission under continuous-wave optical pumping at power levels compatible with solar irradiation and direct current electrical pumping. These results introduce an alternative approach for low-threshold QD-based gain media based on intentional trap states that paves the way for solution-processed infrared QD lasers and amplifiers.

  2. Continuous-wave infrared optical gain and amplified spontaneous emission at ultralow threshold by colloidal HgTe quantum dots

    NASA Astrophysics Data System (ADS)

    Geiregat, Pieter; Houtepen, Arjan J.; Sagar, Laxmi Kishore; Infante, Ivan; Zapata, Felipe; Grigel, Valeriia; Allan, Guy; Delerue, Christophe; van Thourhout, Dries; Hens, Zeger

    2018-01-01

    Colloidal quantum dots (QDs) raise more and more interest as solution-processable and tunable optical gain materials. However, especially for infrared active QDs, optical gain remains inefficient. Since stimulated emission involves multifold degenerate band-edge states, population inversion can be attained only at high pump power and must compete with efficient multi-exciton recombination. Here, we show that mercury telluride (HgTe) QDs exhibit size-tunable stimulated emission throughout the near-infrared telecom window at thresholds unmatched by any QD studied before. We attribute this unique behaviour to surface-localized states in the bandgap that turn HgTe QDs into 4-level systems. The resulting long-lived population inversion induces amplified spontaneous emission under continuous-wave optical pumping at power levels compatible with solar irradiation and direct current electrical pumping. These results introduce an alternative approach for low-threshold QD-based gain media based on intentional trap states that paves the way for solution-processed infrared QD lasers and amplifiers.

  3. Invisibility problem in acoustics, electromagnetism and heat transfer. Inverse design method

    NASA Astrophysics Data System (ADS)

    Alekseev, G.; Tokhtina, A.; Soboleva, O.

    2017-10-01

    Two approaches (direct design and inverse design methods) for solving problems of designing devices that render material bodies invisible to detection by different physical fields - electromagnetic, acoustic and static - are discussed. The second method is applied to problems of designing cloaking devices for the 3D stationary thermal scattering model. Based on this method, the design problems under study are reduced to respective control problems. The material parameters (radial and tangential heat conductivities) of the inhomogeneous anisotropic medium filling the thermal cloak and the density of auxiliary heat sources play the role of controls. The unique solvability of the direct thermal scattering problem in the Sobolev space is proved and new estimates of the solutions are established. Using these results, the solvability of the control problem is proved and the optimality system is derived. Based on an analysis of the optimality system, stability estimates of the optimal solutions are established and numerical algorithms for solving a particular thermal cloaking problem are proposed.

  4. Inverse transport calculations in optical imaging with subspace optimization algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu

    2014-09-15

    Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.

  5. A stationary bulk planar ideal flow solution for the double shearing model

    NASA Astrophysics Data System (ADS)

    Lyamina, E. A.; Kalenova, N. V.; Date, P. P.

    2018-04-01

    This paper provides a general ideal flow solution for the double shearing model of pressure-dependent plasticity. This new solution is restricted to a special class of stationary planar flows. A distinguished feature of this class of solutions is that one family of characteristic lines is straight. The solution is analytic. The mapping between Cartesian and principal lines based coordinate systems is given in parametric form with characteristic coordinates being the parameters. A simple relation that connects the scale factor for one family of coordinate curves of the principal lines based coordinate system and the magnitude of velocity is derived. The original ideal flow theory is widely used as the basis for inverse methods for the preliminary design of metal forming processes driven by minimum plastic work. The new theory extends this area of application to granular materials.

  6. Solving Large-Scale Inverse Magnetostatic Problems using the Adjoint Method

    PubMed Central

    Bruckner, Florian; Abert, Claas; Wautischer, Gregor; Huber, Christian; Vogler, Christoph; Hinze, Michael; Suess, Dieter

    2017-01-01

    An efficient algorithm for the reconstruction of the magnetization state within magnetic components is presented. The occurring inverse magnetostatic problem is solved by means of an adjoint approach, based on the Fredkin-Koehler method for the solution of the forward problem. Due to the use of hybrid FEM-BEM coupling combined with matrix compression techniques the resulting algorithm is well suited for large-scale problems. Furthermore the reconstruction of the magnetization state within a permanent magnet as well as an optimal design application are demonstrated. PMID:28098851

  7. Control Theory based Shape Design for the Incompressible Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Cowles, G.; Martinelli, L.

    2003-12-01

    A design method for shape optimization in incompressible turbulent viscous flow has been developed and validated for inverse design. The gradient information is determined using a control theory based algorithm. With such an approach, the cost of computing the gradient is negligible. An additional adjoint system must be solved which requires the cost of a single steady state flow solution. Thus, this method has an enormous advantage over traditional finite-difference based algorithms. The method of artificial compressibility is utilized to solve both the flow and adjoint systems. An algebraic turbulence model is used to compute the eddy viscosity. The method is validated using several inverse wing design test cases. In each case, the program must modify the shape of the initial wing such that its pressure distribution matches that of the target wing. Results are shown for the inversion of both finite thickness wings as well as zero thickness wings which can be considered a model of yacht sails.

  8. Statistic inversion of multi-zone transition probability models for aquifer characterization in alluvial fans

    DOE PAGES

    Zhu, Lin; Dai, Zhenxue; Gong, Huili; ...

    2015-06-12

    Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with optimized different integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
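
    A hedged Python sketch of a Gauss-Newton-Levenberg-Marquardt update loop of the kind used to fit such statistical parameters is shown below; the forward model, data and the finite-difference Jacobian are placeholders (the paper computes the Jacobian more accurately with the sensitivity equation method).

      import numpy as np

      def lm_fit(forward, p0, d_obs, n_iter=50, lam=1e-2):
          """Levenberg-Marquardt iteration p <- p + (J^T J + lam I)^-1 J^T (d_obs - forward(p)),
          with a finite-difference Jacobian and simple accept/reject damping control."""
          p = np.asarray(p0, dtype=float)
          for _ in range(n_iter):
              r = d_obs - forward(p)
              steps = 1e-6 * np.maximum(np.abs(p), 1.0)
              J = np.column_stack([(forward(p + h * e) - forward(p)) / h
                                   for h, e in zip(steps, np.eye(p.size))])
              dp = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
              if np.sum((d_obs - forward(p + dp))**2) < np.sum(r**2):
                  p, lam = p + dp, lam * 0.5        # accept the step, relax damping
              else:
                  lam *= 10.0                       # reject the step, increase damping
          return p

      # Placeholder forward model: (integral scale, proportion) -> synthetic transition-probability curve.
      lags = np.linspace(0.0, 5.0, 20)
      forward = lambda p: p[1] + (1.0 - p[1]) * np.exp(-lags / p[0])
      d_obs = forward(np.array([2.0, 0.4])) + 0.01 * np.random.default_rng(6).normal(size=20)
      print(lm_fit(forward, [1.0, 0.2], d_obs))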

  9. Moment tensor solutions estimated using optimal filter theory for 51 selected earthquakes, 1980-1984

    USGS Publications Warehouse

    Sipkin, S.A.

    1987-01-01

    The 51 global events that occurred from January 1980 to March 1984, which were chosen by the convenors of the Symposium on Seismological Theory and Practice, have been analyzed using a moment tensor inversion algorithm (Sipkin). Many of the events were routinely analyzed as part of the National Earthquake Information Center's (NEIC) efforts to publish moment tensor and first-motion fault-plane solutions for all moderate- to large-sized (mb>5.7) earthquakes. In routine use only long-period P-waves are used and the source-time function is constrained to be a step-function at the source (δ-function in the far-field). Four of the events were of special interest, and long-period P, SH-wave solutions were obtained. For three of these events, an unconstrained inversion was performed. The resulting time-dependent solutions indicated that, for many cases, departures of the solutions from pure double-couples are caused by source complexity that has not been adequately modeled. These solutions also indicate that source complexity of moderate-sized events can be determined from long-period data. Finally, for one of the events of special interest, an inversion of the broadband P-waveforms was also performed, demonstrating the potential for using broadband waveform data in inversion procedures. © 1987.

  10. Recovery of time-dependent volatility in option pricing model

    NASA Astrophysics Data System (ADS)

    Deng, Zui-Cha; Hon, Y. C.; Isakov, V.

    2016-11-01

    In this paper we investigate an inverse problem of determining the time-dependent volatility from observed market prices of options with different strikes. Due to the nonlinearity and sparsity of observations, an analytical solution to the problem is generally not available. Numerical approximation is also difficult to obtain using most of the existing numerical algorithms. Based on our recent theoretical results, we apply the linearisation technique to convert the problem into an inverse source problem from which recovery of the unknown volatility function can be achieved. Two kinds of strategies, namely, the integral equation method and the Landweber iterations, are adopted to obtain the stable numerical solution to the inverse problem. Both theoretical analysis and numerical examples confirm that the proposed approaches are effective. The work described in this paper was partially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region (Project No. CityU 101112) and grants from the NNSF of China (Nos. 11261029, 11461039), and NSF grants DMS 10-08902 and 15-14886 and by Emylou Keith and Betty Dutcher Distinguished Professorship at the Wichita State University (USA).
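
    One of the two strategies mentioned above, Landweber iteration, has a very compact generic form for a discretized linearized problem: repeated gradient steps on the data misfit, with early stopping playing the role of regularization. The sketch below is a toy illustration on a synthetic ill-conditioned linear system, not the option-pricing operator of the paper.

    ```python
    # Landweber iteration sketch for a discrete linear inverse problem A m = d.
    # The smoothing operator, noise level and stopping rule are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 50
    x = np.linspace(0, 1, n)
    A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01)   # mildly ill-conditioned kernel
    m_true = np.sin(2 * np.pi * x)
    d = A @ m_true + 1e-3 * rng.standard_normal(n)

    omega = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below 2/||A||^2 for convergence
    m = np.zeros(n)
    for k in range(500):                       # early stopping acts as regularization
        residual = d - A @ m
        m = m + omega * (A.T @ residual)
        if np.linalg.norm(residual) < 1e-2:    # discrepancy-type stopping (illustrative)
            break

    print("iterations:", k + 1,
          "relative error:", np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
    ```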

  11. Scaling of plane-wave functions in statistically optimized near-field acoustic holography.

    PubMed

    Hald, Jørgen

    2014-11-01

    Statistically Optimized Near-field Acoustic Holography (SONAH) is a Patch Holography method, meaning that it can be applied in cases where the measurement area covers only part of the source surface. The method performs projections directly in the spatial domain, avoiding the use of spatial discrete Fourier transforms and the associated errors. First, an inverse problem is solved using regularization. For each calculation point a multiplication must then be performed with two transfer vectors--one to get the sound pressure and the other to get the particle velocity. Considering SONAH based on sound pressure measurements, existing derivations consider only pressure reconstruction when setting up the inverse problem, so the evanescent wave amplification associated with the calculation of particle velocity is not taken into account in the regularized solution of the inverse problem. The present paper introduces a scaling of the applied plane wave functions that takes the amplification into account, and it is shown that the previously published virtual source-plane retraction has almost the same effect. The effectiveness of the different solutions is verified through a set of simulated measurements.

  12. Research on Inversion Models for Forest Height Estimation Using Polarimetric SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Duan, B.; Zou, B.

    2017-09-01

    Forest height is an important forest resource parameter and is commonly used in biomass estimation. Forest height extraction with PolInSAR is an active research field in imaging SAR remote sensing. SAR interferometry is a well-established SAR technique for estimating the vertical location of the effective scattering center in each resolution cell from the phase difference between images acquired by spatially separated antennas. PolInSAR has applications ranging from climate monitoring to disaster detection and is of particular interest in forested areas because it is sensitive to the location and vertical distribution of vegetation structure components. However, some existing methods cannot estimate forest height accurately. Here we introduce several available inversion models and compare the precision of some classical inversion approaches using simulated data. By comparing the advantages and disadvantages of these inversion methods, researchers can more conveniently identify suitable solutions based on them.

  13. Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model

    NASA Astrophysics Data System (ADS)

    Mejer Hansen, Thomas

    2017-04-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo based sampling methods. In principle both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. In practice, however, these methods can be associated with huge computational costs that limit their application, not least because of the computational requirements of solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a computationally complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically so that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of cross-hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.
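
    A minimal sketch of the idea, replacing the expensive forward call inside a sampler with a trained network, is given below. It uses scikit-learn's MLPRegressor and a three-parameter toy traveltime model (both assumptions for illustration, not the paper's 2D full-waveform forward model), and unlike the paper it does not add an explicit modeling-error term to the likelihood.

    ```python
    # Sketch: neural-network surrogate of a forward model inside Metropolis sampling.
    # scikit-learn's MLPRegressor stands in for the paper's trained network (assumption).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)

    def forward(slowness):
        # toy "traveltime" forward model: ray lengths times slowness values
        L = np.array([[1.0, 0.5, 0.2],
                      [0.3, 1.2, 0.4],
                      [0.1, 0.6, 1.1]])
        return L @ slowness

    # train the surrogate on random models drawn from the prior box
    models = rng.uniform(0.2, 0.6, size=(2000, 3))
    times = np.array([forward(m) for m in models])
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    net.fit(models, times)

    # observed data from a "true" model, with noise
    m_true = np.array([0.30, 0.45, 0.50])
    sigma = 0.005
    d_obs = forward(m_true) + sigma * rng.standard_normal(3)

    def log_like(m):
        r = d_obs - net.predict(m[None, :])[0]      # fast surrogate evaluation
        return -0.5 * np.sum(r**2) / sigma**2

    # plain Metropolis random walk over the prior box [0.2, 0.6]^3
    m = np.full(3, 0.4)
    ll = log_like(m)
    samples = []
    for _ in range(5000):
        prop = m + 0.01 * rng.standard_normal(3)
        if np.all((prop > 0.2) & (prop < 0.6)):
            llp = log_like(prop)
            if np.log(rng.uniform()) < llp - ll:
                m, ll = prop, llp
        samples.append(m.copy())

    print("posterior mean:", np.mean(samples[1000:], axis=0), "true:", m_true)
    ```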

  14. Solution of some types of differential equations: operational calculus and inverse differential operators.

    PubMed

    Zhukovsky, K

    2014-01-01

    We present a general method of operational nature to analyze and obtain solutions for a variety of equations of mathematical physics and related mathematical problems. We construct inverse differential operators and produce operational identities, involving inverse derivatives and families of generalised orthogonal polynomials, such as Hermite and Laguerre polynomial families. We develop the methodology of inverse and exponential operators, employing them for the study of partial differential equations. Advantages of the operational technique, combined with the use of integral transforms, generating functions with exponentials and their integrals, for solving a wide class of partial differential equations related to heat, wave, and transport problems, are demonstrated.
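
    Two standard identities in the spirit of the operational approach described above are stated here for illustration; they are textbook results consistent with, but not quoted from, the abstract.

    ```latex
    % Inverse-derivative identities (with zero integration constant):
    \[
      \hat{D}_x^{-1} f(x) = \int_0^{x} f(\xi)\,\mathrm{d}\xi ,
      \qquad
      \hat{D}_x^{-n} 1 = \frac{x^{n}}{n!} .
    \]
    % Operational solution of the heat equation \partial_t F = \partial_x^2 F,
    % F(x,0) = f(x): the exponential operator acts as the Gauss--Weierstrass transform,
    \[
      F(x,t) = e^{\,t\,\partial_x^{2}} f(x)
             = \frac{1}{2\sqrt{\pi t}}
               \int_{-\infty}^{\infty} e^{-\frac{(x-\xi)^{2}}{4t}} f(\xi)\,\mathrm{d}\xi .
    \]
    ```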

  15. Approximate non-linear multiparameter inversion for multicomponent single and double P-wave scattering in isotropic elastic media

    NASA Astrophysics Data System (ADS)

    Ouyang, Wei; Mao, Weijian

    2018-03-01

    An asymptotic quadratic true-amplitude inversion method for isotropic elastic P waves is proposed to invert medium parameters. The multicomponent P-wave scattered wavefield is computed based on a forward relationship using second-order Born approximation and corresponding high-frequency ray theoretical methods. Within the local double scattering mechanism, the P-wave transmission factors are elaborately calculated, which results in the radiation pattern for P-wave scattering being a quadratic combination of the density and Lamé's moduli perturbation parameters. We further express the elastic P-wave scattered wavefield in the form of a generalized Radon transform (GRT). After introducing classical backprojection operators, we obtain an approximate solution of the inverse problem by solving a quadratic non-linear system. Numerical tests with synthetic data computed by a finite-difference scheme demonstrate that our quadratic inversion can accurately invert perturbation parameters for strong perturbations, compared with the P-wave single-scattering linear inversion method. Although our inversion strategy here is only syncretized with P-wave scattering, it can be extended to invert multicomponent elastic data containing both P-wave and S-wave information.

  16. Approximate nonlinear multiparameter inversion for multicomponent single and double P-wave scattering in isotropic elastic media

    NASA Astrophysics Data System (ADS)

    Ouyang, Wei; Mao, Weijian

    2018-07-01

    An asymptotic quadratic true-amplitude inversion method for isotropic elastic P waves is proposed to invert medium parameters. The multicomponent P-wave scattered wavefield is computed based on a forward relationship using second-order Born approximation and corresponding high-frequency ray theoretical methods. Within the local double scattering mechanism, the P-wave transmission factors are elaborately calculated, which results in the radiation pattern for P-wave scattering being a quadratic combination of the density and Lamé's moduli perturbation parameters. We further express the elastic P-wave scattered wavefield in the form of a generalized Radon transform. After introducing classical backprojection operators, we obtain an approximate solution of the inverse problem by solving a quadratic nonlinear system. Numerical tests with synthetic data computed by a finite-difference scheme demonstrate that our quadratic inversion can accurately invert perturbation parameters for strong perturbations, compared with the P-wave single-scattering linear inversion method. Although our inversion strategy here is only syncretized with P-wave scattering, it can be extended to invert multicomponent elastic data containing both P- and S-wave information.

  17. Note on the practical significance of the Drazin inverse

    NASA Technical Reports Server (NTRS)

    Wilkinson, J. H.

    1979-01-01

    The solution of the differential system Bẋ = Ax + f, where A and B are n x n matrices and A - λB is not a singular pencil, may be expressed in terms of the Drazin inverse. It is shown that there is a simple reduced form for the pencil A - λB which is adequate for the determination of the general solution and that, although the Drazin inverse could be determined efficiently from this reduced form, it is inadvisable to do so.

  18. Multimodal, high-dimensional, model-based, Bayesian inverse problems with applications in biomechanics

    NASA Astrophysics Data System (ADS)

    Franck, I. M.; Koutsourelakis, P. S.

    2017-01-01

    This paper is concerned with the numerical solution of model-based, Bayesian inverse problems. We are particularly interested in cases where the cost of each likelihood evaluation (forward-model call) is expensive and the number of unknown (latent) variables is high. This is the setting in many problems in computational physics where forward models with nonlinear PDEs are used and the parameters to be calibrated involve spatio-temporally varying coefficients, which upon discretization give rise to a high-dimensional vector of unknowns. One of the consequences of the well-documented ill-posedness of inverse problems is the possibility of multiple solutions. While such information is contained in the posterior density in Bayesian formulations, the discovery of a single mode, let alone multiple, poses a formidable computational task. The goal of the present paper is two-fold. On one hand, we propose approximate, adaptive inference strategies using mixture densities to capture multi-modal posteriors. On the other, we extend our work in [1] with regard to effective dimensionality reduction techniques that reveal low-dimensional subspaces where the posterior variance is mostly concentrated. We validate the proposed model by employing Importance Sampling which confirms that the bias introduced is small and can be efficiently corrected if the analyst wishes to do so. We demonstrate the performance of the proposed strategy in nonlinear elastography where the identification of the mechanical properties of biological materials can inform non-invasive, medical diagnosis. The discovery of multiple modes (solutions) in such problems is critical in achieving the diagnostic objectives.

  19. The inverse Numerical Computer Program FLUX-BOT for estimating Vertical Water Fluxes from Temperature Time-Series.

    NASA Astrophysics Data System (ADS)

    Trauth, N.; Schmidt, C.; Munz, M.

    2016-12-01

    Heat as a natural tracer to quantify water fluxes between groundwater and surface water has evolved into a standard hydrological method. Typically, time series of temperatures in the surface water and in the sediment are observed and are subsequently evaluated with a vertical 1D representation of heat transport by advection and dispersion. Several analytical solutions, as well as their implementations in user-friendly software, exist to estimate water fluxes from the observed temperatures. Analytical solutions can be easily implemented, but assumptions on the boundary conditions have to be made a priori, e.g. a sinusoidal upper temperature boundary. Numerical models offer more flexibility and can handle temperature data characterized by irregular variations, such as storm-event induced temperature changes, which cannot readily be incorporated in analytical solutions; this also reduces the effort of data preprocessing, such as extracting the diurnal temperature variation. We developed software to estimate water FLUXes Based On Temperatures (FLUX-BOT). FLUX-BOT is a numerical code written in MATLAB which is intended to calculate vertical water fluxes in saturated sediments, based on the inversion of measured temperature time series observed at multiple depths. It applies a cell-centered Crank-Nicolson implicit finite difference scheme to solve the one-dimensional heat advection-conduction equation. Besides its core inverse numerical routines, FLUX-BOT includes functions for visualizing the results and for performing uncertainty analysis. We provide applications of FLUX-BOT to generic as well as to measured temperature data to demonstrate its performance.
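
    The forward step inside such a code is a discretized 1D heat advection-conduction equation. The sketch below shows a minimal Crank-Nicolson solve of dT/dt = D d²T/dx² - v dT/dx with Dirichlet boundaries; it is a Python illustration under made-up parameter values, not FLUX-BOT itself (which is written in MATLAB), and it omits the inverse routine.

    ```python
    # Crank-Nicolson sketch for 1D heat advection-conduction with Dirichlet
    # boundaries; D, v, grid and boundary values are illustrative only.
    import numpy as np

    D = 5e-7          # effective thermal diffusivity [m^2/s]
    v = 1e-5          # thermal front velocity, proportional to water flux [m/s]
    nx, dx = 51, 0.01 # 0.5 m deep vertical profile
    dt = 60.0         # time step [s]
    nt = 24 * 60      # one day of minute steps

    # spatial operator for D*d2T/dx2 - v*dT/dx (central differences)
    main = np.full(nx, -2.0 * D / dx**2)
    upper = np.full(nx - 1, D / dx**2 - v / (2 * dx))
    lower = np.full(nx - 1, D / dx**2 + v / (2 * dx))
    L = np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)

    I = np.eye(nx)
    A = I - 0.5 * dt * L          # Crank-Nicolson left-hand matrix
    B = I + 0.5 * dt * L          # Crank-Nicolson right-hand matrix
    # Dirichlet boundaries: overwrite first and last rows of A
    A[0, :] = 0.0; A[0, 0] = 1.0
    A[-1, :] = 0.0; A[-1, -1] = 1.0

    T = np.full(nx, 12.0)         # initial temperature [deg C]
    for n in range(nt):
        t = n * dt
        rhs = B @ T
        rhs[0] = 12.0 + 3.0 * np.sin(2 * np.pi * t / 86400.0)  # diurnal surface signal
        rhs[-1] = 12.0                                          # fixed deep temperature
        T = np.linalg.solve(A, rhs)

    print("temperature at 0.1 m depth after one day:", round(T[10], 3))
    ```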

  20. Model-based assist feature insertion for sub-40nm memory device

    NASA Astrophysics Data System (ADS)

    Suh, Sungsoo; Lee, Suk-joo; Choi, Seong-woon; Lee, Sung-Woo; Park, Chan-hoon

    2009-04-01

    Many issues need to be resolved to obtain a production-worthy model-based assist feature insertion flow for single and double exposure patterning processes that extends low-k1 processing with 193 nm immersion technology. Model-based assist feature insertion is not trivial to implement for either single or double exposure patterning compared with rule-based methods. As shown in Fig. 1, pixel-based mask inversion technology in itself has difficulties in mask writing and inspection, although it is one of the key technologies for extending single exposure to the contact layer. Thus far, inversion technology has been tried as a co-optimization of the target mask to simultaneously generate optimized main and sub-resolution assist features for a desired process window. Alternatively, this technology can also be used to optimize a target feature after assist feature types are inserted, in order to reduce mask complexity. Simplification of the inversion mask is a major issue in applying inversion technology to device development, even if smaller mask features can be fabricated, since mask writing time is also a major factor. As shown in Figure 2, mask writing time may be a limiting factor in determining whether or not an inversion solution is viable. It can be reasoned that an increased shot count relates to an increased margin for the inversion methodology. On the other hand, there is a limit on how complex a mask can be and remain production worthy. There is also source and mask co-optimization, which influences the final mask patterns and the assist feature sizes and positions for a given target. In this study, we discuss assist feature insertion methods for sub-40 nm technology.

  1. Superresolution radar imaging based on fast inverse-free sparse Bayesian learning for multiple measurement vectors

    NASA Astrophysics Data System (ADS)

    He, Xingyu; Tong, Ningning; Hu, Xiaowei

    2018-01-01

    Compressive sensing has been successfully applied to inverse synthetic aperture radar (ISAR) imaging of moving targets. By exploiting the block sparse structure of the target image, sparse solution for multiple measurement vectors (MMV) can be applied in ISAR imaging and a substantial performance improvement can be achieved. As an effective sparse recovery method, sparse Bayesian learning (SBL) for MMV involves a matrix inverse at each iteration. Its associated computational complexity grows significantly with the problem size. To address this problem, we develop a fast inverse-free (IF) SBL method for MMV. A relaxed evidence lower bound (ELBO), which is computationally more amenable than the traditional ELBO used by SBL, is obtained by invoking a fundamental property of smooth functions. A variational expectation-maximization scheme is then employed to maximize the relaxed ELBO, and a computationally efficient IF-MSBL algorithm is proposed. Numerical results based on simulated and real data show that the proposed method can reconstruct row sparse signals accurately and obtain clear superresolution ISAR images. Moreover, the running time and computational complexity are reduced to a great extent compared with traditional SBL methods.

  2. A comparison between Gauss-Newton and Markov chain Monte Carlo based methods for inverting spectral induced polarization data for Cole-Cole parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.

    2008-05-15

    We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC based stochastic method with an iterative Gauss-Newton based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
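
    For orientation, the sketch below pairs a Pelton-form Cole-Cole forward model with a plain Metropolis sampler on synthetic data. The priors, noise level, step sizes and chain length are illustrative choices, not those of the study, and the study's Gauss-Newton comparison is omitted.

    ```python
    # Cole-Cole SIP forward model (Pelton form) plus a plain Metropolis sampler
    # for its four parameters; all tuning values are illustrative.
    import numpy as np

    def cole_cole(freq, rho0, m, tau, c):
        iwt = (1j * 2 * np.pi * freq * tau) ** c
        return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + iwt)))

    rng = np.random.default_rng(3)
    freq = np.logspace(-2, 3, 30)
    true = dict(rho0=100.0, m=0.2, tau=0.05, c=0.4)
    z_true = cole_cole(freq, **true)
    sigma = 0.2
    d_obs = np.concatenate([z_true.real, z_true.imag]) \
            + sigma * rng.standard_normal(2 * freq.size)

    def log_like(p):
        rho0, m, tau, c = p
        if rho0 <= 0 or not (0 < m < 1) or tau <= 0 or not (0 < c < 1):
            return -np.inf
        z = cole_cole(freq, rho0, m, tau, c)
        r = d_obs - np.concatenate([z.real, z.imag])
        return -0.5 * np.sum(r**2) / sigma**2

    p = np.array([90.0, 0.25, 0.03, 0.45])      # arbitrary starting values
    ll = log_like(p)
    step = np.array([1.0, 0.01, 0.002, 0.01])
    chain = []
    for _ in range(20000):
        prop = p + step * rng.standard_normal(4)
        llp = log_like(prop)
        if np.log(rng.uniform()) < llp - ll:
            p, ll = prop, llp
        chain.append(p.copy())

    chain = np.array(chain[10000:])             # discard burn-in
    print("posterior means (rho0, m, tau, c):", chain.mean(axis=0))
    ```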

  3. Forward and inverse kinematics of double universal joint robot wrists

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II

    1991-01-01

    A robot wrist consisting of two universal joints can eliminate the wrist singularity problem found on many individual robots. Forward and inverse position and velocity kinematics are presented for such a wrist having three degrees of freedom. Denavit-Hartenberg parameters are derived to find the transforms required for the kinematic equations. The Omni-Wrist, a commercial double universal joint robot wrist, is studied in detail. There are four levels of kinematic parameters identified for this wrist; three forward and three inverse maps are presented for both position and velocity. These equations relate the hand coordinate frame to the wrist base frame. They are sufficient for control of the wrist standing alone. When the wrist is attached to a manipulator arm; the offset between the two universal joints complicates the solution of the overall kinematics problem. All wrist coordinate frame origins are not coincident, which prevents decoupling of position and orientation for manipulator inverse kinematics.

  4. A model reduction approach to numerical inversion for a parabolic partial differential equation

    NASA Astrophysics Data System (ADS)

    Borcea, Liliana; Druskin, Vladimir; Mamonov, Alexander V.; Zaslavsky, Mikhail

    2014-12-01

    We propose a novel numerical inversion algorithm for the coefficients of parabolic partial differential equations, based on model reduction. The study is motivated by the application of controlled source electromagnetic exploration, where the unknown is the subsurface electrical resistivity and the data are time resolved surface measurements of the magnetic field. The algorithm presented in this paper considers inversion in one and two dimensions. The reduced model is obtained with rational interpolation in the frequency (Laplace) domain and a rational Krylov subspace projection method. It amounts to a nonlinear mapping from the function space of the unknown resistivity to the small dimensional space of the parameters of the reduced model. We use this mapping as a nonlinear preconditioner for the Gauss-Newton iterative solution of the inverse problem. The advantage of the inversion algorithm is twofold. First, the nonlinear preconditioner resolves most of the nonlinearity of the problem. Thus the iterations are less likely to get stuck in local minima and the convergence is fast. Second, the inversion is computationally efficient because it avoids repeated accurate simulations of the time-domain response. We study the stability of the inversion algorithm for various rational Krylov subspaces, and assess its performance with numerical experiments.

  5. Stability and uncertainty of finite-fault slip inversions: Application to the 2004 Parkfield, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Liu, P.; Mendoza, C.; Ji, C.; Larson, K.M.

    2007-01-01

    The 2004 Parkfield, California, earthquake is used to investigate stability and uncertainty aspects of the finite-fault slip inversion problem with different a priori model assumptions. We utilize records from 54 strong ground motion stations and 13 continuous, 1-Hz sampled, geodetic instruments. Two inversion procedures are compared: a linear least-squares subfault-based methodology and a nonlinear global search algorithm. These two methods encompass a wide range of the different approaches that have been used to solve the finite-fault slip inversion problem. For the Parkfield earthquake and the inversion of velocity or displacement waveforms, near-surface related site response (top 100 m, frequencies above 1 Hz) is shown to not significantly affect the solution. Results are also insensitive to selection of slip rate functions with similar duration and to subfault size if proper stabilizing constraints are used. The linear and nonlinear formulations yield consistent results when the same limitations in model parameters are in place and the same inversion norm is used. However, the solution is sensitive to the choice of inversion norm, the bounds on model parameters, such as rake and rupture velocity, and the size of the model fault plane. The geodetic data set for Parkfield gives a slip distribution different from that of the strong-motion data, which may be due to the spatial limitation of the geodetic stations and the bandlimited nature of the strong-motion data. Cross validation and the bootstrap method are used to set limits on the upper bound for rupture velocity and to derive mean slip models and standard deviations in model parameters. This analysis shows that slip on the northwestern half of the Parkfield rupture plane from the inversion of strong-motion data is model dependent and has a greater uncertainty than slip near the hypocenter.

  6. Simultaneous estimation of aquifer thickness, conductivity, and BC using borehole and hydrodynamic data with geostatistical inverse direct method

    NASA Astrophysics Data System (ADS)

    Gao, F.; Zhang, Y.

    2017-12-01

    A new inverse method is developed to simultaneously estimate aquifer thickness and boundary conditions using borehole and hydrodynamic measurements from a homogeneous confined aquifer under steady-state ambient flow. This method extends a previous groundwater inversion technique which had assumed known aquifer geometry and thickness. In this research, thickness inversion was successfully demonstrated when hydrodynamic data were supplemented with measured thicknesses from boreholes. Based on a set of hybrid formulations which describe approximate solutions to the groundwater flow equation, the new inversion technique can incorporate noisy observed data (i.e., thicknesses, hydraulic heads, Darcy fluxes or flow rates) at measurement locations as a set of conditioning constraints. Given sufficient quantity and quality of the measurements, the inverse method yields a single well-posed system of equations that can be solved efficiently with nonlinear optimization. The method is successfully tested on two-dimensional synthetic aquifer problems with regular geometries. The solution is stable when measurement errors are increased, with error magnitude reaching up to +/- 10% of the range of the respective measurement. When error-free observed data are used to condition the inversion, the estimated thickness is within a +/- 5% error envelope surrounding the true value; when data contain increasing errors, the estimated thickness becomes less accurate, as expected. Different combinations of measurement types are then investigated to evaluate data worth. Thickness can be inverted with the combination of observed heads and at least one of the other types of observations, such as thickness, Darcy fluxes, or flow rates. The data requirement of the new inversion method is thus not much different from that of interpreting classic well tests. Future work will improve upon this research by developing an estimation strategy for heterogeneous aquifers, while drawdown data from hydraulic tests will also be incorporated as conditioning measurements.

  7. Top-down constraints on global N2O emissions at optimal resolution: application of a new dimension reduction technique

    NASA Astrophysics Data System (ADS)

    Wells, Kelley C.; Millet, Dylan B.; Bousserez, Nicolas; Henze, Daven K.; Griffis, Timothy J.; Chaliyakunnel, Sreelekha; Dlugokencky, Edward J.; Saikawa, Eri; Xiang, Gao; Prinn, Ronald G.; O'Doherty, Simon; Young, Dickon; Weiss, Ray F.; Dutton, Geoff S.; Elkins, James W.; Krummel, Paul B.; Langenfelds, Ray; Steele, L. Paul

    2018-01-01

    We present top-down constraints on global monthly N2O emissions for 2011 from a multi-inversion approach and an ensemble of surface observations. The inversions employ the GEOS-Chem adjoint and an array of aggregation strategies to test how well current observations can constrain the spatial distribution of global N2O emissions. The strategies include (1) a standard 4D-Var inversion at native model resolution (4° × 5°), (2) an inversion for six continental and three ocean regions, and (3) a fast 4D-Var inversion based on a novel dimension reduction technique employing randomized singular value decomposition (SVD). The optimized global flux ranges from 15.9 Tg N yr-1 (SVD-based inversion) to 17.5-17.7 Tg N yr-1 (continental-scale, standard 4D-Var inversions), with the former better capturing the extratropical N2O background measured during the HIAPER Pole-to-Pole Observations (HIPPO) airborne campaigns. We find that the tropics provide a greater contribution to the global N2O flux than is predicted by the prior bottom-up inventories, likely due to underestimated agricultural and oceanic emissions. We infer an overestimate of natural soil emissions in the extratropics and find that predicted emissions are seasonally biased in northern midlatitudes. Here, optimized fluxes exhibit a springtime peak consistent with the timing of spring fertilizer and manure application, soil thawing, and elevated soil moisture. Finally, the inversions reveal a major emission underestimate in the US Corn Belt in the bottom-up inventory used here. We extensively test the impact of initial conditions on the analysis and recommend formally optimizing the initial N2O distribution to avoid biasing the inferred fluxes. We find that the SVD-based approach provides a powerful framework for deriving emission information from N2O observations: by defining the optimal resolution of the solution based on the information content of the inversion, it provides spatial information that is lost when aggregating to political or geographic regions, while also providing more temporal information than a standard 4D-Var inversion.
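
    The dimension-reduction step named above rests on a randomized SVD of a large matrix. The snippet below is a generic Halko-style randomized SVD in NumPy, applied to a synthetic low-rank matrix standing in for the inversion's reduced operator; it illustrates the building block only, not the 4D-Var machinery.

    ```python
    # Randomized SVD sketch (random range finder + small dense SVD).
    import numpy as np

    rng = np.random.default_rng(4)

    def randomized_svd(A, rank, oversample=10, power_iters=2):
        n = A.shape[1]
        Omega = rng.standard_normal((n, rank + oversample))   # random test matrix
        Y = A @ Omega
        for _ in range(power_iters):                          # sharpen the range estimate
            Y = A @ (A.T @ Y)
        Q, _ = np.linalg.qr(Y)                                # orthonormal range basis
        B = Q.T @ A                                           # small projected matrix
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

    # synthetic low-rank test matrix
    A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 500))
    U, s, Vt = randomized_svd(A, rank=40)
    print("relative reconstruction error:",
          np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
    ```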

  8. Two-dimensional joint inversion of Magnetotelluric and local earthquake data: Discussion on the contribution to the solution of deep subsurface structures

    NASA Astrophysics Data System (ADS)

    Demirci, İsmail; Dikmen, Ünal; Candansayar, M. Emin

    2018-02-01

    Joint inversion of data sets collected by using several geophysical exploration methods has gained importance and associated algorithms have been developed. To explore deep subsurface structures, Magnetotelluric and local earthquake tomography algorithms are generally used individually. Because both methods rely on natural sources, data quality and the resolution of model parameters cannot be increased at will, so deep structures cannot be fully resolved by either method used alone. In this paper, we first focus on the effects of both Magnetotelluric and local earthquake data sets on the solution for deep structures and discuss the results on the basis of the resolving power of the methods. The presence of deep-focus seismic sources increases the resolution of deep structures. Moreover, the conductivity distribution of relatively shallow structures can be resolved with high resolution by using the MT algorithm. Therefore, we developed a new joint inversion algorithm based on the cross-gradient function in order to jointly invert Magnetotelluric and local earthquake data sets. In this study, we added a new regularization parameter into the second term of the parameter correction vector of Gallardo and Meju (2003). The new regularization parameter enhances the stability of the algorithm and controls the contribution of the cross-gradient term in the solution. The results show that even in cases where resistivity and velocity boundaries are different, both methods influence each other positively. In addition, the regions of common structural boundaries of the models are clearly mapped compared with the original models. Furthermore, deep structures are identified satisfactorily even when using the minimum number of seismic sources. In this paper, as a basis for future studies, we discuss joint inversion of Magnetotelluric and local earthquake data sets only in two-dimensional space. In the light of these results, and given advances in three-dimensional modelling and inversion algorithms, it should become easier to identify underground structures with high resolution.
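
    The structural coupling in such joint inversions is the cross-gradient function of Gallardo and Meju, whose out-of-plane component vanishes when the two models share boundary geometry. The sketch below evaluates it on two synthetic 2D grids; the grids, property values and boundary shapes are illustrative only.

    ```python
    # Cross-gradient term t = grad(m1) x grad(m2) (out-of-plane component) on a
    # 2D grid; the joint inversion drives this quantity toward zero.
    import numpy as np

    def cross_gradient(m1, m2, dx=1.0, dz=1.0):
        dm1_dz, dm1_dx = np.gradient(m1, dz, dx)
        dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
        return dm1_dx * dm2_dz - dm1_dz * dm2_dx

    # two models sharing the same boundary geometry -> cross gradient ~ 0
    z, x = np.mgrid[0:50, 0:80]
    structure = (x + 0.5 * z > 60).astype(float)
    resistivity = 100.0 + 900.0 * structure
    velocity = 2000.0 + 1500.0 * structure
    t_aligned = cross_gradient(np.log(resistivity), velocity)

    # a rotated boundary in one model -> nonzero cross gradient where they disagree
    velocity_rot = 2000.0 + 1500.0 * ((x - 0.5 * z > 20).astype(float))
    t_mismatch = cross_gradient(np.log(resistivity), velocity_rot)

    print("sum |t|, aligned vs mismatched:",
          np.abs(t_aligned).sum(), np.abs(t_mismatch).sum())
    ```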

  9. Probabilistic joint inversion of waveforms and polarity data for double-couple focal mechanisms of local earthquakes

    NASA Astrophysics Data System (ADS)

    Wéber, Zoltán

    2018-06-01

    Estimating the mechanisms of small (M < 4) earthquakes is quite challenging. A common scenario is that neither the available polarity data alone nor the well predictable near-station seismograms alone are sufficient to obtain reliable focal mechanism solutions for weak events. To handle this situation we introduce here a new method that jointly inverts waveforms and polarity data following a probabilistic approach. The procedure called joint waveform and polarity (JOWAPO) inversion maps the posterior probability density of the model parameters and estimates the maximum likelihood double-couple mechanism, the optimal source depth and the scalar seismic moment of the investigated event. The uncertainties of the solution are described by confidence regions. We have validated the method on two earthquakes for which well-determined focal mechanisms are available. The validation tests show that including waveforms in the inversion considerably reduces the uncertainties of the usually poorly constrained polarity solutions. The JOWAPO method performs best when it applies waveforms from at least two seismic stations. If the number of the polarity data is large enough, even single-station JOWAPO inversion can produce usable solutions. When only a few polarities are available, however, single-station inversion may result in biased mechanisms. In this case some caution must be taken when interpreting the results. We have successfully applied the JOWAPO method to an earthquake in North Hungary, whose mechanism could not be estimated by long-period waveform inversion. Using 17 P-wave polarities and waveforms at two nearby stations, the JOWAPO method produced a well-constrained focal mechanism. The solution is very similar to those obtained previously for four other events that occurred in the same earthquake sequence. The analysed event has a strike-slip mechanism with a P axis oriented approximately along an NE-SW direction.

  10. Astrophysical masers - Inverse methods, precision, resolution and uniqueness

    NASA Astrophysics Data System (ADS)

    Lerche, I.

    1986-07-01

    The paper provides exact analytic solutions to the two-level, steady-state, maser problem in parametric form, with the emergent intensities expressed in terms of the incident intensities and with the maser length also given in terms of an integral over the intensities. It is shown that some assumption must be made on the emergent intensity on the nonobservable side of the astrophysical maser in order to obtain any inversion of the equations. The incident intensities can then be expressed in terms of the emergent, observable, flux. It is also shown that the inversion is nonunique unless a homogeneous linear integral equation has only a null solution. Constraints imposed by knowledge of the physical length of the maser are felt in a nonlinear manner by the parametric variable and do not appear to provide any substantive additional information to reduce the degree of nonuniqueness of the inverse solutions. It is concluded that the questions of precision, resolution and uniqueness for solutions to astrophysical maser problems will remain more of an emotional art than a logical science for some time to come.

  11. Large-scale 3D inversion of marine controlled source electromagnetic data using the integral equation method

    NASA Astrophysics Data System (ADS)

    Zhdanov, M. S.; Cuma, M.; Black, N.; Wilson, G. A.

    2009-12-01

    The marine controlled source electromagnetic (MCSEM) method has become widely used in offshore oil and gas exploration. Interpretation of MCSEM data is still a very challenging problem, especially if one would like to take into account the realistic 3D structure of the subsurface. The inversion of MCSEM data is complicated by the fact that the EM response of a hydrocarbon-bearing reservoir is very weak in comparison with the background EM fields generated by an electric dipole transmitter in complex geoelectrical structures formed by a conductive sea-water layer and the terranes beneath it. In this paper, we present a review of the recent developments in the area of large-scale 3D EM forward modeling and inversion. Our approach is based on using a new integral form of Maxwell’s equations allowing for an inhomogeneous background conductivity, which results in a numerically effective integral representation for 3D EM field. This representation provides an efficient tool for the solution of 3D EM inverse problems. To obtain a robust inverse model of the conductivity distribution, we apply regularization based on a focusing stabilizing functional which allows for the recovery of models with both smooth and sharp geoelectrical boundaries. The method is implemented in a fully parallel computer code, which makes it possible to run large-scale 3D inversions on grids with millions of inversion cells. This new technique can be effectively used for active EM detection and monitoring of the subsurface targets.

  12. Moment Inversion of the DPRK Nuclear Tests Using Finite-Difference Three-dimensional Strain Green's Tensors

    NASA Astrophysics Data System (ADS)

    Bao, X.; Shen, Y.; Wang, N.

    2017-12-01

    Accurate estimation of the source moment is important for discriminating underground explosions from earthquakes and other seismic sources. In this study, we invert for the full moment tensors of the recent seismic events (since 2016) at the Democratic People's Republic of Korea (DPRK) Punggye-ri test site. We use waveform data from broadband seismic stations located in China, Korea, and Japan in the inversion. Using a non-staggered-grid, finite-difference algorithm, we calculate the strain Green's tensors (SGT) based on one-dimensional (1D) and three-dimensional (3D) Earth models. Taking advantage of the source-receiver reciprocity, a SGT database pre-calculated and stored for the Punggye-ri test site is used in inversion for the source mechanism of each event. With the source locations estimated from cross-correlation using regional Pn and Pn-coda waveforms, we obtain the optimal source mechanism that best fits synthetics to the observed waveforms of both body and surface waves. The moment solutions of the first three events (2016-01-06, 2016-09-09, and 2017-09-03) show dominant isotropic components, as expected from explosions, though there are also notable non-isotropic components. The last event (8 minutes after the mb 6.3 explosion in 2017) contained a mainly implosive component, suggesting a collapse following the explosion. The solutions from the 3D model can better fit observed waveforms than the corresponding solutions from the 1D model. The uncertainty in the resulting moment solution is influenced by heterogeneities not resolved by the Earth model, according to the waveform misfit. Using the moment solutions, we predict the peak ground acceleration at the Punggye-ri test site and compare the prediction with corresponding InSAR and other satellite images.
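
    Once strain Green's tensors are available, the moment tensor inversion itself is linear in the six independent components. The sketch below solves that linear step by least squares and splits the result into isotropic and deviatoric parts; the G matrix and data are random stand-ins rather than SGT-derived kernels.

    ```python
    # Linear moment tensor inversion sketch: d = G m for the six independent
    # components, followed by an isotropic/deviatoric decomposition.
    import numpy as np

    rng = np.random.default_rng(5)
    n_data = 600                                  # waveform samples over all stations
    G = rng.standard_normal((n_data, 6))          # columns: Mxx, Myy, Mzz, Mxy, Mxz, Myz

    # dominantly isotropic (explosion-like) source and synthetic data
    m_true = np.array([1.0, 1.0, 1.0, 0.1, -0.05, 0.08])
    d = G @ m_true + 0.05 * rng.standard_normal(n_data)

    m_est, *_ = np.linalg.lstsq(G, d, rcond=None)

    # isotropic / deviatoric decomposition of the recovered tensor
    M = np.array([[m_est[0], m_est[3], m_est[4]],
                  [m_est[3], m_est[1], m_est[5]],
                  [m_est[4], m_est[5], m_est[2]]])
    iso = np.trace(M) / 3.0
    deviatoric = M - iso * np.eye(3)
    print("isotropic part:", iso)
    print("deviatoric eigenvalues:", np.linalg.eigvalsh(deviatoric))
    ```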

  13. Inverse opal carbons for counter electrode of dye-sensitized solar cells.

    PubMed

    Kang, Da-Young; Lee, Youngshin; Cho, Chang-Yeol; Moon, Jun Hyuk

    2012-05-01

    We investigated the fabrication of inverse opal carbon counter electrodes for DSSCs using a colloidal templating method. Specifically, bare inverse opal carbon, mesopore-incorporated inverse opal carbon, and graphitized inverse opal carbon were synthesized and stably dispersed in ethanol solution for spray coating on an FTO substrate. The thickness of the electrode was controlled by the number of coatings, and the average relative thickness was evaluated by measuring the transmittance spectrum. The effect of the counter electrode thickness on the photovoltaic performance of the DSSCs was investigated and analyzed via the interfacial charge transfer resistance (R(CT)) under EIS measurement. The effect of the surface area and conductivity of the inverse opal was also investigated by considering the increase in surface area due to the mesopores in the inverse opal carbon and the conductivity gained by graphitization of the carbon matrix. The results showed that the FF, and thereby the efficiency of the DSSCs, increased as the electrode thickness increased. Consequently, a larger FF and thereby a greater efficiency of the DSSCs were achieved for mIOC and gIOC compared to IOC, which was attributed to the lower R(CT). Finally, compared to a conventional Pt counter electrode, the inverse opal-based carbon showed a comparable efficiency upon application to DSSCs.

  14. Feasibility of waveform inversion of Rayleigh waves for shallow shear-wave velocity using a genetic algorithm

    USGS Publications Warehouse

    Zeng, C.; Xia, J.; Miller, R.D.; Tsoflias, G.P.

    2011-01-01

    Conventional surface wave inversion for shallow shear (S)-wave velocity relies on the generation of dispersion curves of Rayleigh waves. This constrains the method to only laterally homogeneous (or very smooth laterally heterogeneous) earth models. Waveform inversion directly fits waveforms on seismograms and, hence, does not have such a limitation. Waveforms of Rayleigh waves are highly related to S-wave velocities. By inverting the waveforms of Rayleigh waves on a near-surface seismogram, shallow S-wave velocities can be estimated for earth models with strong lateral heterogeneity. We employ a genetic algorithm (GA) to perform waveform inversion of Rayleigh waves for S-wave velocities. The forward problem is solved by finite-difference modeling in the time domain. The model space is updated by generating offspring models using the GA. Final solutions can be found through an iterative waveform-fitting scheme. Inversions based on synthetic records show that the S-wave velocities can be recovered successfully with errors of no more than 10% for several typical near-surface earth models. For layered earth models, the proposed method can generate one-dimensional S-wave velocity profiles without the knowledge of initial models. For earth models containing lateral heterogeneity, in which case conventional dispersion-curve-based inversion methods are challenging, it is feasible to produce high-resolution S-wave velocity sections by GA waveform inversion with appropriate a priori information. The synthetic tests indicate that GA waveform inversion of Rayleigh waves has great potential for shallow S-wave velocity imaging in the presence of strong lateral heterogeneity. © 2011 Elsevier B.V.
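
    A minimal real-coded genetic algorithm of the kind used for such waveform fitting is sketched below. The "misfit" is a cheap stand-in for the finite-difference waveform misfit of the paper, and the operators (tournament selection, arithmetic crossover, Gaussian mutation, elitism) are simple illustrative choices.

    ```python
    # Real-coded GA sketch minimizing a stand-in "waveform misfit" over three
    # layer S-wave velocities; all operator settings are illustrative.
    import numpy as np

    rng = np.random.default_rng(6)
    v_true = np.array([200.0, 350.0, 600.0])        # "layer S-wave velocities"
    lo, hi = 100.0, 800.0

    def misfit(v):
        # stand-in objective; in the paper this is a waveform misfit from FD modeling
        return np.sum((v - v_true) ** 2)

    pop_size, n_gen, n_par = 60, 100, 3
    pop = rng.uniform(lo, hi, size=(pop_size, n_par))
    for gen in range(n_gen):
        fit = np.array([misfit(ind) for ind in pop])
        new_pop = [pop[np.argmin(fit)].copy()]       # elitism: keep the best model
        while len(new_pop) < pop_size:
            # tournament selection of two parents
            i, j = rng.integers(pop_size, size=2), rng.integers(pop_size, size=2)
            p1 = pop[i[np.argmin(fit[i])]]
            p2 = pop[j[np.argmin(fit[j])]]
            alpha = rng.uniform(size=n_par)          # arithmetic crossover
            child = alpha * p1 + (1 - alpha) * p2
            # Gaussian mutation applied to ~20% of the genes
            child += rng.standard_normal(n_par) * 10.0 * (rng.uniform(size=n_par) < 0.2)
            new_pop.append(np.clip(child, lo, hi))
        pop = np.array(new_pop)

    best = pop[np.argmin([misfit(ind) for ind in pop])]
    print("best model:", best, "true:", v_true)
    ```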

  15. Inverse kinematic-based robot control

    NASA Technical Reports Server (NTRS)

    Wolovich, W. A.; Flueckiger, K. F.

    1987-01-01

    A fundamental problem which must be resolved in virtually all non-trivial robotic operations is the well-known inverse kinematic question. More specifically, most of the tasks which robots are called upon to perform are specified in Cartesian (x,y,z) space, such as simple tracking along one or more straight line paths or following a specified surface with compliant force sensors and/or visual feedback. In all cases, control is actually implemented through coordinated motion of the various links which comprise the manipulator; i.e., in link space. As a consequence, the control computer of every sophisticated anthropomorphic robot must contain provisions for solving the inverse kinematic problem which, in the case of simple, non-redundant position control, involves the determination of the first three link angles, θ1, θ2, and θ3, which produce a desired wrist origin position Pxw, Pyw, and Pzw at the end of link 3 relative to some fixed base frame. Researchers outline a new inverse kinematic solution and demonstrate its potential via some recent computer simulations. They also compare it to current inverse kinematic methods and outline some of the remaining problems which will be addressed in order to render it fully operational. Also discussed are a number of practical consequences of this technique beyond its obvious use in solving the inverse kinematic question.
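
    To make the link-space idea concrete, the sketch below gives the closed-form inverse kinematics of a two-link planar arm, with its elbow-down and elbow-up branches. It is a simplified illustration only; the entry above concerns the first three joints of a full anthropomorphic arm and a different solution method.

    ```python
    # Closed-form inverse kinematics of a two-link planar arm (both branches),
    # verified by forward kinematics; link lengths are illustrative.
    import numpy as np

    def two_link_ik(x, y, l1=1.0, l2=0.8):
        """Return (theta1, theta2) for the elbow-down and elbow-up branches."""
        c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
        if abs(c2) > 1.0:
            raise ValueError("target outside workspace")
        sols = []
        for s2 in (np.sqrt(1 - c2**2), -np.sqrt(1 - c2**2)):
            theta2 = np.arctan2(s2, c2)
            theta1 = np.arctan2(y, x) - np.arctan2(l2 * s2, l1 + l2 * c2)
            sols.append((theta1, theta2))
        return sols

    def forward(theta1, theta2, l1=1.0, l2=0.8):
        return (l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2),
                l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2))

    for th1, th2 in two_link_ik(1.2, 0.6):
        print("solution:", th1, th2, "reaches:", forward(th1, th2))
    ```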

  16. Inversion Of Jacobian Matrix For Robot Manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Report discusses inversion of the Jacobian matrix for a class of six-degree-of-freedom arms with a spherical wrist, i.e., with the last three joint axes intersecting. Shows that, by taking advantage of the simple geometry of such arms, the closed-form solution of Q = J^(-1)X, which represents the linear transformation from task space to joint space, is obtained efficiently. Presents solutions for the PUMA arm, the JPL/Stanford arm, and a six-revolute-joint coplanar arm, along with all singular points. The main contribution of the paper is to show that the simple geometry of this type of arm can be exploited to perform the inverse transformation without any need to compute the Jacobian or its inverse explicitly. An implication of this computational efficiency is that advanced task-space control schemes for spherical-wrist arms can be implemented more efficiently.

  17. Are two plasma equilibrium states possible when the emission coefficient exceeds unity?

    DOE PAGES

    Campanell, Michael D.; Umansky, M. V.

    2017-02-28

    Two floating sheath solutions with strong electron emission in planar geometry have been proposed, a “space-charge limited” (SCL) sheath and an “inverse” sheath. SCL and inverse models contain different assumptions about conditions outside the sheath (e.g., the velocity of ions entering the sheath). So it is not yet clear whether both sheaths are possible in practice, or only one. Here we treat the global presheath-sheath problem for a plasma produced volumetrically between two planar walls. We show that all equilibrium requirements, (a) floating condition, (b) plasma shielding, and (c) presheath force balance, can indeed be satisfied in two different ways when the emission coefficient γ > 1. There is one solution with SCL sheaths and one with inverse sheaths, each with sharply different presheath distributions. As we show for the first time in 1D-1V simulations, a SCL and an inverse equilibrium are both possible in plasmas with the same upstream properties (e.g., same N and Te). However, maintaining a true SCL equilibrium requires no ionization or charge exchange collisions in the sheath, or else cold ion accumulation in the SCL's “dip” forces a transition to the inverse. This suggests that only a monotonic inverse-type sheath potential should exist at any plasma-facing surface with strong emission, whether it be a divertor plate, emissive probe, dust grain, Hall thruster channel wall, sunlit object in space, etc. Nevertheless, SCL sheaths might still be possible if the ions in the dip can escape. Finally, our simulations demonstrate ways in which SCL and inverse regimes might be distinguished experimentally based on large-scale presheath effects, without having to probe inside the sheath.

  18. 2D Seismic Imaging of Elastic Parameters by Frequency Domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Brossier, R.; Virieux, J.; Operto, S.

    2008-12-01

    Thanks to recent advances in parallel computing, full waveform inversion is today a tractable seismic imaging method to reconstruct physical parameters of the earth interior at different scales, ranging from the near-surface to the deep crust. We present a massively parallel 2D frequency-domain full-waveform algorithm for imaging visco-elastic media from multi-component seismic data. The forward problem (i.e. the solution of the frequency-domain 2D P-SV elastodynamic equations) is based on a low-order Discontinuous Galerkin (DG) method (P0 and/or P1 interpolations). Thanks to triangular unstructured meshes, the DG method allows accurate modeling of both body waves and surface waves in the case of complex topography for a discretization of 10 to 15 cells per shear wavelength. The frequency-domain DG system is solved efficiently for multiple sources with the parallel direct solver MUMPS. The local inversion procedure (i.e. minimization of residuals between observed and computed data) is based on the adjoint-state method, which allows efficient computation of the gradient of the objective function. Applying the inversion hierarchically from the low frequencies to the higher ones defines a multiresolution imaging strategy which helps convergence towards the global minimum. In place of the expensive Newton algorithm, the combined use of the diagonal terms of the approximate Hessian matrix and optimization algorithms based on quasi-Newton methods (Conjugate Gradient, LBFGS, ...) improves the convergence of the iterative inversion. The distribution of forward-problem solutions over processors, driven by a mesh partitioning performed by METIS, allows most of the inversion to be applied in parallel. We shall present the main features of the parallel modeling/inversion algorithm, assess its scalability and illustrate its performance with realistic synthetic case studies.

  19. Value of epicardial potential maps in localizing pre-excitation sites for radiofrequency ablation. A simulation study

    NASA Astrophysics Data System (ADS)

    Hren, Rok

    1998-06-01

    Using computer simulations, we systematically investigated the limitations of an inverse solution that employs the potential distribution on the epicardial surface as an equivalent source model in localizing pre-excitation sites in Wolff-Parkinson-White syndrome. A model of the human ventricular myocardium that features an anatomically accurate geometry, an intramural rotating anisotropy and a computational implementation of the excitation process based on electrotonic interactions among cells, was used to simulate body surface potential maps (BSPMs) for 35 pre-excitation sites positioned along the atrioventricular ring. Two individualized torso models were used to account for variations in torso boundaries. Epicardial potential maps (EPMs) were computed using the L-curve inverse solution. The measure for accuracy of the localization was the distance between a position of the minimum in the inverse EPMs and the actual site of pre-excitation in the ventricular model. When the volume conductor properties and lead positions of the torso were precisely known and the measurement noise was added to the simulated BSPMs, the minimum in the inverse EPMs was at 12 ms after the onset on average within cm of the pre-excitation site. When the standard torso model was used to localize the sites of onset of the pre-excitation sequence initiated in individualized male and female torso models, the mean distance between the minimum and the pre-excitation site was cm for the male torso and cm for the female torso. The findings of our study indicate that a location of the minimum in EPMs computed using the inverse solution can offer non-invasive means for pre-interventional planning of the ablative treatment.

  20. Stochastic seismic inversion based on an improved local gradual deformation method

    NASA Astrophysics Data System (ADS)

    Yang, Xiuwei; Zhu, Peimin

    2017-12-01

    A new stochastic seismic inversion method based on the local gradual deformation method is proposed, which can incorporate seismic data, well data, geology and their spatial correlations into the inversion process. Geological information, such as sedimentary facies and structures, could provide significant a priori information to constrain an inversion and arrive at reasonable solutions. The local a priori conditional cumulative distributions at each node of model to be inverted are first established by indicator cokriging, which integrates well data as hard data and geological information as soft data. Probability field simulation is used to simulate different realizations consistent with the spatial correlations and local conditional cumulative distributions. The corresponding probability field is generated by the fast Fourier transform moving average method. Then, optimization is performed to match the seismic data via an improved local gradual deformation method. Two improved strategies are proposed to be suitable for seismic inversion. The first strategy is that we select and update local areas of bad fitting between synthetic seismic data and real seismic data. The second one is that we divide each seismic trace into several parts and obtain the optimal parameters for each part individually. The applications to a synthetic example and a real case study demonstrate that our approach can effectively find fine-scale acoustic impedance models and provide uncertainty estimations.
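
    The core of the gradual deformation idea used above can be shown in a few lines: two independent realizations are combined with cos/sin weights so that the spatial statistics are preserved for any value of the deformation parameter, which is then tuned against the data. The sketch below is a global (not local) one-parameter toy version with a made-up "seismic" misfit.

    ```python
    # Gradual deformation sketch: y(t) = y1*cos(pi*t) + y2*sin(pi*t) preserves the
    # mean and covariance of the Gaussian realizations for any t; t is tuned to
    # reduce a toy data misfit.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 200
    kernel = np.exp(-np.linspace(-3, 3, 31) ** 2)     # smoothing kernel -> spatial correlation
    kernel /= kernel.sum()

    def realization():
        return np.convolve(rng.standard_normal(n + 30), kernel, mode="valid")

    def forward(field):
        # toy "seismic" response: reflectivity-like derivative of the field
        return np.diff(field)

    d_obs = forward(realization())                    # data from a hidden realization

    y1, y2 = realization(), realization()
    ts = np.linspace(-1.0, 1.0, 201)
    misfits = [np.sum((forward(y1 * np.cos(np.pi * t) + y2 * np.sin(np.pi * t)) - d_obs) ** 2)
               for t in ts]

    best = ts[int(np.argmin(misfits))]
    print("best deformation parameter t:", best, "misfit:", min(misfits))
    # In practice the best y(t) becomes y1 for the next iteration and a fresh y2
    # is drawn, so a chain of one-parameter searches gradually matches the data.
    ```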

  1. Double point source W-phase inversion: Real-time implementation and automated model selection

    USGS Publications Warehouse

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
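
    The model-selection step can be illustrated with a generic least-squares form of the AIC: fit the same record with a one-source and a two-source parameterization and keep the model with the lower AIC. The Gaussian-pulse "sources" and scipy curve fits below are simple stand-ins for the single and double W-phase inversions.

    ```python
    # AIC comparison of a one-pulse and a two-pulse fit to the same synthetic record.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(8)
    t = np.linspace(0, 100, 400)

    def one_pulse(t, a1, t1, w1):
        return a1 * np.exp(-((t - t1) / w1) ** 2)

    def two_pulse(t, a1, t1, w1, a2, t2, w2):
        return one_pulse(t, a1, t1, w1) + one_pulse(t, a2, t2, w2)

    # synthetic "complex event": two sub-events plus noise
    d = two_pulse(t, 1.0, 40, 5, 0.6, 62, 7) + 0.05 * rng.standard_normal(t.size)

    def aic(model, p0):
        popt, _ = curve_fit(model, t, d, p0=p0, maxfev=20000)
        rss = np.sum((d - model(t, *popt)) ** 2)
        return t.size * np.log(rss / t.size) + 2 * len(popt)

    aic1 = aic(one_pulse, [1.0, 50, 10])
    aic2 = aic(two_pulse, [1.0, 45, 10, 0.5, 60, 10])
    print("AIC single:", aic1, " AIC double:", aic2)
    print("preferred model:", "double" if aic2 < aic1 else "single")
    ```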

  2. Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm

    USGS Publications Warehouse

    Chen, C.; Xia, J.; Liu, J.; Feng, G.

    2006-01-01

    Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computing gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As is the case in other optimization approaches, the search efficiency of a genetic algorithm is vital for finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect search processes in the evolution. It is clear that improved operators and proper population size promote convergence. Nevertheless, not all genetic operations perform perfectly while searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches for exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method based on a routine in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability through the mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems. Another significant result is that the final solution is determined by the average model derived from multiple trials instead of a single computation, owing to the randomness in a genetic algorithm procedure. These advantages were demonstrated by synthetic and real-world examples of inversion of potential-field data. © 2005 Elsevier Ltd. All rights reserved.

  3. An industrial robot singular trajectories planning based on graphs and neural networks

    NASA Astrophysics Data System (ADS)

    Łęgowski, Adrian; Niezabitowski, Michał

    2016-06-01

    Singular trajectories are rarely used because of the difficulties that arise during their realization. A method of planning trajectories for a given set of points in task space with the use of graphs and neural networks is presented. At every desired point the inverse kinematics problem is solved in order to derive all possible solutions, and a graph of these solutions is built. The shortest path through the graph is determined to define the required nodes in joint space. Neural networks are then used to define the path between these nodes.
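
    The graph search described above can be illustrated with a simple layered shortest-path (dynamic programming) sketch: each layer holds the inverse kinematics solutions for one waypoint, and edge costs are joint-space distances between consecutive configurations. The cost function and names below are hypothetical, not the authors' implementation.

```python
import numpy as np

def shortest_joint_path(ik_solutions):
    """Layered shortest path through per-waypoint IK solution sets.

    ik_solutions[k] is a list of joint-space vectors that all reach
    task-space waypoint k.  The edge cost is the Euclidean joint-space
    distance between consecutive configurations; the result is one
    configuration per waypoint minimizing the total joint travel.
    """
    n_layers = len(ik_solutions)
    cost = [np.zeros(len(ik_solutions[0]))]
    back = []
    for k in range(1, n_layers):
        layer_cost, layer_back = [], []
        for q in ik_solutions[k]:
            d = [cost[-1][j] + np.linalg.norm(np.asarray(q) - np.asarray(qp))
                 for j, qp in enumerate(ik_solutions[k - 1])]
            layer_back.append(int(np.argmin(d)))
            layer_cost.append(min(d))
        cost.append(np.array(layer_cost))
        back.append(layer_back)
    # Backtrack from the cheapest final configuration.
    idx = int(np.argmin(cost[-1]))
    path = [idx]
    for k in range(n_layers - 1, 0, -1):
        idx = back[k - 1][idx]
        path.append(idx)
    path.reverse()
    return [ik_solutions[k][i] for k, i in enumerate(path)]
```

    For a typical six-axis arm each waypoint has at most a handful of analytic IK solutions (commonly up to eight), so the layers stay small and the search is inexpensive.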

  4. Efficient option valuation of single and double barrier options

    NASA Astrophysics Data System (ADS)

    Kabaivanov, Stanimir; Milev, Mariyan; Koleva-Petkova, Dessislava; Vladev, Veselin

    2017-12-01

    In this paper we present an implementation of a pricing algorithm for single and double barrier options using the Mellin transform with Maximum Entropy Inversion, and we assess its suitability for real-world applications. A detailed analysis of the algorithm is accompanied by a C++ implementation that is compared to existing solutions in terms of efficiency and required computational power. We then compare the applied method with existing closed-form solutions and with well-known finite-difference methods for pricing barrier options.

  5. The Ostrovsky-Vakhnenko equation by a Riemann-Hilbert approach

    NASA Astrophysics Data System (ADS)

    Boutet de Monvel, Anne; Shepelsky, Dmitry

    2015-01-01

    We present an inverse scattering transform (IST) approach for the (differentiated) Ostrovsky-Vakhnenko equation. This equation can also be viewed as the short wave model for the Degasperis-Procesi (sDP) equation. Our IST approach is based on an associated Riemann-Hilbert problem, which allows us to give a representation of the classical (smooth) solution, to obtain the principal term of its long-time asymptotics, and to describe loop soliton solutions. Dedicated to Johannes Sjöstrand with gratitude and admiration.

  6. Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis.

    PubMed

    Abbasi, Mahdi

    2014-01-01

    The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems, including the inverse conductivity problem considered in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the second suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using sinc basis functions. The orthogonal properties of their products then dissolve the integral operator of the D-bar equation and yield a discrete convolution equation. That is, the new moment method leads to the solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure and solved using the fast Fourier transform, which reduces the computational complexity to as low as O(N^2 log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR.
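
    The key computational step described above, solving a discrete convolution equation via the FFT, can be sketched for a simplified linear analogue in which the operator acts by circular convolution. The actual D-bar equation additionally involves the complex conjugate of the unknown, so this is an illustration of the FFT idea only, with hypothetical names.

```python
import numpy as np

def solve_convolution_equation(kernel, rhs):
    """Solve (I + K) x = rhs where K acts by circular convolution with `kernel`.

    Circular convolution is diagonalized by the discrete Fourier transform,
    so the system is solved pointwise in the Fourier domain, roughly
    O(N^2 log N) work for an N x N grid.
    """
    K_hat = np.fft.fft2(kernel)
    rhs_hat = np.fft.fft2(rhs)
    x_hat = rhs_hat / (1.0 + K_hat)
    return np.fft.ifft2(x_hat)
```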

  7. Sensitivity computation of the ell1 minimization problem and its application to dictionary design of ill-posed problems

    NASA Astrophysics Data System (ADS)

    Horesh, L.; Haber, E.

    2009-09-01

    The ell1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application to inverse problems. Most studies have concentrated on devising ways for the sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, a sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
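
    For context, the inner sparse-representation problem that such a dictionary is used for is typically an l1-regularized least-squares fit. A minimal iterative soft-thresholding (ISTA) sketch is shown below; it illustrates only the sparse coding step with a fixed dictionary, not the dictionary design algorithm of the paper, and all names are hypothetical.

```python
import numpy as np

def ista(A, d, lam, n_iter=500):
    """Iterative soft-thresholding for min_m 0.5*||A m - d||^2 + lam*||m||_1.

    A is a fixed dictionary (columns are atoms), d the data vector; the
    returned m is a sparse coefficient vector.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    m = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ m - d)
        z = m - grad / L
        m = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return m
```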

  8. Polynomial compensation, inversion, and approximation of discrete time linear systems

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1987-01-01

    The least-squares transformation of a discrete-time multivariable linear system into a desired one by convolving the first with a polynomial system yields optimal polynomial solutions to the problems of system compensation, inversion, and approximation. The polynomial coefficients are obtained from the solution to a so-called normal linear matrix equation, whose coefficients are shown to be the weighting patterns of certain linear systems. These, in turn, can be used in the recursive solution of the normal equation.

  9. The Foundation GPS Water Vapor Inversion and its Application Research

    NASA Astrophysics Data System (ADS)

    Liu, R.; Lee, T.; Lv, H.; Fan, C.; Liu, Q.

    2018-04-01

    Using GPS technology to retrieve atmospheric water vapor is a relatively new water vapor detection method that can effectively compensate for the shortcomings of conventional water vapor detection methods and provide high-precision, high-volume, near real-time water vapor information. In-depth study of ground-based GPS detection of atmospheric water vapor aims to further improve the accuracy and practicability of GPS water vapor inversion and to explore its ability to detect atmospheric water vapor so as to better serve meteorological applications. In this paper, the influence of the initial station coordinates, the satellite ephemeris, and the solution observation settings on the accuracy of the tropospheric zenith total delay is discussed based on observed data. Observations from a network of 8 IGS stations in China in June 2013 are used to invert the water vapor at those stations. The data of the Wuhan station are further selected and compared with the data of the Nanhu sounding station in Wuhan. The error between the two data sets lies between -6 mm and 6 mm, their trends are almost identical, and the correlation reaches 95.8%. The experimental results also verify the reliability of ground-based GPS water vapor inversion.
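
    A standard final step in ground-based GPS water vapor retrieval is converting the zenith wet delay to precipitable water vapor. A minimal sketch using commonly quoted refractivity constants is given below; the constants and the handling of the weighted mean temperature are textbook assumptions, not necessarily the values used in this study.

```python
def zwd_to_pwv(zwd_m, tm_kelvin):
    """Convert zenith wet delay (m) to precipitable water vapor (m).

    Uses the commonly quoted conversion factor
    Pi = 1e6 / (rho_w * Rv * (k3 / Tm + k2')),
    where Tm is the weighted mean temperature of the atmosphere.
    """
    rho_w = 1000.0          # density of liquid water, kg/m^3
    Rv = 461.5              # specific gas constant of water vapor, J/(kg K)
    k2_prime = 0.221        # K/Pa   (22.1 K/hPa)
    k3 = 3.739e3            # K^2/Pa (3.739e5 K^2/hPa)
    pi_factor = 1.0e6 / (rho_w * Rv * (k3 / tm_kelvin + k2_prime))
    return pi_factor * zwd_m

# Example: zwd_to_pwv(0.20, 270.0) is roughly 0.031 m, i.e. about 31 mm of PWV.
```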

  10. Solution of the Inverse Problem for Thin Film Patterning by Electrohydrodynamic Forces

    NASA Astrophysics Data System (ADS)

    Zhou, Chengzhe; Troian, Sandra

    2017-11-01

    Micro- and nanopatterning techniques for applications ranging from optoelectronics to biofluidics have multiplied in number over the past decade to include adaptations of mature technologies as well as novel lithographic techniques based on periodic spatial modulation of surface stresses. We focus here on one such technique, which relies on shape changes in nanofilms responding to a patterned counter-electrode. The interaction of a patterned electric field with the polarization charges at the liquid interface causes a patterned electrostatic pressure, counterbalanced by capillary pressure, which leads to 3D protrusions whose shape and evolution can be terminated as needed. All studies to date, however, have investigated the evolution of the liquid film in response to a preset counter-electrode pattern. In this talk, we present a solution of the inverse problem for the thin film equation governing the electrohydrodynamic response by treating the system as a transient control problem. Optimality conditions are derived and a corresponding efficient solution algorithm is presented. We demonstrate the implementation of such film control to achieve periodic free-surface shapes ranging from simple circular cap arrays to more complex square and sawtooth patterns.

  11. Inverse dynamics of underactuated mechanical systems: A simple case study and experimental verification

    NASA Astrophysics Data System (ADS)

    Blajer, W.; Dziewiecki, K.; Kołodziejczyk, K.; Mazur, Z.

    2011-05-01

    Underactuated systems are characterized by fewer control inputs than degrees of freedom, m < n. The determination of an input control strategy that forces such a system to complete a set of m specified motion tasks is challenging, and the existence of an explicit solution is conditioned on the differential flatness of the problem. A flatness-based solution means that all 2n states and m control inputs can be algebraically expressed in terms of the m specified outputs and their time derivatives up to a certain order, which in practice is attainable only for simple systems. In this contribution the problem is posed in a more practical way as a set of index-three differential-algebraic equations, and the solution is obtained numerically. The formulation is then illustrated by a two-degree-of-freedom underactuated system composed of two rotating discs connected by a torsional spring, in which the pre-specified motion of one of the discs is actuated by the torque applied to the other disc, n = 2 and m = 1. Experimental verification of the inverse simulation control methodology is reported.

  12. Inverse dynamics of a 3 degree of freedom spatial flexible manipulator

    NASA Technical Reports Server (NTRS)

    Bayo, Eduardo; Serna, M.

    1989-01-01

    A technique is presented for solving the inverse dynamics and kinematics of a 3-degree-of-freedom spatial flexible manipulator. The proposed method finds the joint torques necessary to produce a specified end-effector motion. Since the inverse dynamics problem in elastic manipulators is closely coupled to the inverse kinematics problem, the solution of the first also yields the displacements and rotations at any point of the manipulator, including the joints. Furthermore, the formulation is complete in the sense that it includes all the nonlinear terms due to the large rotation of the links. Timoshenko beam theory is used to model the elastic characteristics, and the resulting equations of motion are discretized using the finite element method. An iterative solution scheme is proposed that relies on local linearization of the problem; the solution of each linearization is carried out in the frequency domain. The performance and capabilities of this technique are tested through simulation analysis. Results show the potential use of this method for the smooth motion control of space telerobots.

  13. Pumping Test Determination of Unsaturated Aquifer Properties

    NASA Astrophysics Data System (ADS)

    Mishra, P. K.; Neuman, S. P.

    2008-12-01

    Tartakovsky and Neuman [2007] presented a new analytical solution for flow to a partially penetrating well pumping at a constant rate from a compressible unconfined aquifer, considering the unsaturated zone. In their solution, three-dimensional, axially symmetric unsaturated flow is described by a linearized version of Richards' equation in which both hydraulic conductivity and water content vary exponentially with incremental capillary pressure head relative to its air entry value, the latter defining the interface between the saturated and unsaturated zones. Both exponential functions are characterized by a common exponent k having the dimension of inverse length, or equivalently a dimensionless exponent kd = kb, where b is the initial saturated thickness. The authors used their solution to analyze drawdown data from a pumping test conducted by Moench et al. [2001] in a Glacial Outwash Deposit at Cape Cod, Massachusetts. Their analysis yielded estimates of horizontal and vertical saturated hydraulic conductivities, specific storage, specific yield, and k. Recognizing that hydraulic conductivity and water content seldom vary identically with incremental capillary pressure head, as assumed by Tartakovsky and Neuman [2007], we note that k is at best an effective rather than a directly measurable soil parameter. We therefore ask to what extent interpretation of a pumping test based on the Tartakovsky-Neuman solution allows estimating aquifer unsaturated parameters as described by more common constitutive water retention and relative hydraulic conductivity models such as those of Brooks and Corey [1964] or van Genuchten [1980] and Mualem [1976a]. We address this question by showing how k may be used to estimate the capillary air entry pressure head and the parameters of such constitutive models directly, without a need for inverse unsaturated numerical simulations of the kind described by Moench [2003]. To assess the validity of such direct estimates we use maximum likelihood-based model selection criteria to compare the abilities of numerical models based on the STOMP code to reproduce observed drawdowns during the test when saturated and unsaturated aquifer parameters are estimated either in the above manner or by means of the inverse code PEST.

  14. An introduction of Markov chain Monte Carlo method to geochemical inverse problems: Reading melting parameters from REE abundances in abyssal peridotites

    NASA Astrophysics Data System (ADS)

    Liu, Boda; Liang, Yan

    2017-04-01

    Markov chain Monte Carlo (MCMC) simulation is a powerful statistical method for solving inverse problems that arise from a wide range of applications. In the Earth sciences, applications of MCMC simulation have primarily been in the field of geophysics. The purpose of this study is to introduce MCMC methods to geochemical inverse problems related to trace element fractionation during mantle melting. MCMC methods have several advantages over least squares methods in deciphering melting processes from trace element abundances in basalts and mantle rocks. Here we use an MCMC method to invert for the extent of melting, the fraction of melt present during melting, and the extent of chemical disequilibrium between the melt and residual solid from REE abundances in clinopyroxene in abyssal peridotites from the Mid-Atlantic Ridge, Central Indian Ridge, Southwest Indian Ridge, Lena Trough, and American-Antarctic Ridge. We consider two melting models: one with an exact analytical solution and the other without. We solve the latter numerically in a chain of melting models according to the Metropolis-Hastings algorithm. The probability distribution of the inverted melting parameters depends on the assumptions of the physical model, knowledge of the mantle source composition, and constraints from the REE data. Results from MCMC inversion are consistent with, and provide more reliable uncertainty estimates than, results based on nonlinear least squares inversion. We show that chemical disequilibrium is likely to play an important role in fractionating LREE in residual peridotites during partial melting beneath mid-ocean ridge spreading centers. MCMC simulation is well suited for more complicated but physically more realistic melting problems that do not have analytical solutions.
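
    The sampler at the heart of such an inversion can be illustrated with a minimal random-walk Metropolis-Hastings sketch; the proposal scale, step count, and parameter names below are hypothetical, and the melting forward model enters only through the user-supplied log-posterior function.

```python
import numpy as np

def metropolis_hastings(log_posterior, theta0, n_steps=10000, step=0.05):
    """Random-walk Metropolis-Hastings sampler with a symmetric Gaussian proposal.

    `log_posterior` combines the data misfit and the prior; `theta0` is a
    starting vector of model parameters (e.g. extent of melting, melt
    fraction, disequilibrium parameter).  Returns the chain of samples.
    """
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    chain = []
    for _ in range(n_steps):
        proposal = theta + step * np.random.randn(theta.size)
        logp_new = log_posterior(proposal)
        if np.log(np.random.rand()) < logp_new - logp:   # accept/reject step
            theta, logp = proposal, logp_new
        chain.append(theta.copy())
    return np.array(chain)
```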

  15. Hessian Schatten-norm regularization for linear inverse problems.

    PubMed

    Lefkimmiatis, Stamatios; Ward, John Paul; Unser, Michael

    2013-05-01

    We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto lq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data.
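
    The link mentioned above between matrix projections onto Schatten norm balls and vector projections onto lq norm balls can be illustrated for the q = 1 (nuclear norm) case: project the singular values onto the l1 ball and rebuild the matrix. This is a generic sketch, not the paper's primal-dual algorithm.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of a nonnegative vector onto the l1 ball."""
    if v.sum() <= radius:
        return v
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - radius) / (np.arange(len(u)) + 1) > 0)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_schatten1_ball(M, radius):
    """Project a matrix onto the Schatten-1 (nuclear norm) ball of given radius
    by projecting its singular values onto the corresponding l1 ball."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(project_l1_ball(s, radius)) @ Vt
```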

  16. Amplitude inversion of the 2D analytic signal of magnetic anomalies through the differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Ekinci, Yunus Levent; Özyalın, Şenol; Sındırgı, Petek; Balkaya, Çağlayan; Göktürkler, Gökhan

    2017-12-01

    In this work, analytic signal amplitude (ASA) inversion of total field magnetic anomalies has been achieved by differential evolution (DE), which is a population-based evolutionary metaheuristic algorithm. Using an elitist strategy, the applicability and effectiveness of the proposed inversion algorithm have been evaluated through the anomalies due to both hypothetical model bodies and real isolated geological structures. Some parameter tuning studies, relying mainly on choosing the optimum control parameters of the algorithm, have also been performed to enhance the performance of the proposed metaheuristic. Since ASAs of magnetic anomalies are independent of both the ambient field direction and the direction of magnetization of the causative sources in the two-dimensional (2D) case, inversions of synthetic noise-free and noisy single model anomalies have produced satisfactory solutions showing the practical applicability of the algorithm. Moreover, hypothetical studies using multiple model bodies have clearly shown that the DE algorithm is able to cope with complicated anomalies and some interference from neighbouring sources. The proposed algorithm has then been used to invert small-scale (120 m) and large-scale (40 km) magnetic profile anomalies of an iron deposit (Kesikköprü-Bala, Turkey) and a deep-seated magnetized structure (Sea of Marmara, Turkey), respectively, to determine the depths, geometries and exact origins of the source bodies. Inversion studies have yielded geologically reasonable solutions which are also in good accordance with the results of normalized full gradient and Euler deconvolution techniques. Thus, we propose the use of DE not only for the amplitude inversion of 2D analytic signals of magnetic profile anomalies having induced or remanent magnetization effects but also for low-dimensional data inversions in geophysics. A part of this paper was presented as an abstract at the 2nd International Conference on Civil and Environmental Engineering, 8-10 May 2017, Cappadocia-Nevşehir (Turkey).
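
    A minimal DE/rand/1/bin sketch of the kind of optimizer described above is given below; the control parameters (population size, F, CR) and the misfit interface are generic illustrative choices, not the tuned values reported in the paper.

```python
import numpy as np

def differential_evolution(misfit, bounds, pop_size=30, F=0.7, CR=0.9, n_gen=200):
    """Minimal DE/rand/1/bin optimizer for a model-parameter misfit function.

    `bounds` is a list of (low, high) pairs for each model parameter
    (e.g. depth, horizontal location, amplitude factor of a 2D source).
    """
    rng = np.random.default_rng()
    lo, hi = np.asarray(bounds, float).T
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    cost = np.array([misfit(p) for p in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            idxs = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idxs, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                 # keep at least one gene
            trial = np.where(cross, mutant, pop[i])
            trial_cost = misfit(trial)
            if trial_cost < cost[i]:                        # greedy selection
                pop[i], cost[i] = trial, trial_cost
    return pop[np.argmin(cost)], cost.min()
```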

  17. Probability density of spatially distributed soil moisture inferred from crosshole georadar traveltime measurements

    NASA Astrophysics Data System (ADS)

    Linde, N.; Vrugt, J. A.

    2009-04-01

    Geophysical models are increasingly used in hydrological simulations and inversions, where they are typically treated as an artificial data source with known uncorrelated "data errors". The model appraisal problem in classical deterministic linear and non-linear inversion approaches based on linearization is often addressed by calculating model resolution and model covariance matrices. These measures offer only a limited potential to assign a more appropriate "data covariance matrix" for future hydrological applications, simply because the regularization operators used to construct a stable inverse solution bear a strong imprint on such estimates and because the non-linearity of the geophysical inverse problem is not explored. We present a parallelized Markov Chain Monte Carlo (MCMC) scheme to efficiently derive the posterior spatially distributed radar slowness and water content between boreholes given first-arrival traveltimes. This method is called DiffeRential Evolution Adaptive Metropolis (DREAM_ZS) with snooker updater and sampling from past states. Our inverse scheme does not impose any smoothness on the final solution, and uses uniform prior ranges of the parameters. The posterior distribution of radar slowness is converted into spatially distributed soil moisture values using a petrophysical relationship. To benchmark the performance of DREAM_ZS, we first apply our inverse method to a synthetic two-dimensional infiltration experiment using 9421 traveltimes contaminated with Gaussian errors and 80 different model parameters, corresponding to a model discretization of 0.3 m × 0.3 m. After this, the method is applied to field data acquired in the vadose zone during snowmelt. This work demonstrates that fully non-linear stochastic inversion can be applied with few limiting assumptions to a range of common two-dimensional tomographic geophysical problems. The main advantage of DREAM_ZS is that it provides a full view of the posterior distribution of spatially distributed soil moisture, which is key to appropriately treat geophysical parameter uncertainty and infer hydrologic models.

  18. Developments in spectrophotometry III: Multiple-field-of-view spectrometer to determine particle-size distribution and refractive index

    NASA Technical Reports Server (NTRS)

    Fymat, A. L.

    1975-01-01

    The instrument is based on an inverse solution of the equations for light scattered by a transparent medium. Measurements are taken over several angles of incidence rather than over several frequencies, and can be used to simultaneously determine chemical and physical properties of particles in a mixed gas or liquid.

  19. Fast inverse scattering solutions using the distorted Born iterative method and the multilevel fast multipole algorithm

    PubMed Central

    Hesford, Andrew J.; Chew, Weng C.

    2010-01-01

    The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438

  20. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    NASA Astrophysics Data System (ADS)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posterior probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them, and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate through optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are initially inverted for using the least squares method without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied to the slip parameters using the Monte Carlo Inversion (MCI) technique and with all parameters obtained from step one serving as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion, with the fault geometry parameters fixed. We first used a designed model with a 45-degree dipping angle and oblique slip, together with corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake, and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of the method in earthquake studies and a number of its advantages over other methods. The details will be reported at the meeting.

  1. Integrated Analytic and Linearized Inverse Kinematics for Precise Full Body Interactions

    NASA Astrophysics Data System (ADS)

    Boulic, Ronan; Raunhardt, Daniel

    Despite the large success of games grounded in movement-based interactions, the current state of full-body motion capture technologies still prevents the exploitation of precise interactions with complex environments. This paper focuses on ensuring a precise spatial correspondence between the user and the avatar. We build upon our past effort in human postural control with a Prioritized Inverse Kinematics framework. One of its key advantages is to ease the dynamic combination of postural and collision-avoidance constraints. However, its reliance on a linearized approximation of the problem makes it vulnerable to the well-known full-extension singularity of the limbs. In such a context the tracking performance is reduced and/or less believable intermediate postural solutions are produced. We address this issue by introducing a new type of analytic constraint that smoothly integrates within the prioritized Inverse Kinematics framework. The paper first recalls the background of full-body 3D interactions and the advantages and drawbacks of the linearized IK solution. Then the Flexion-EXTension constraint (FLEXT in short) is introduced for the partial position control of limb-like articulated structures. Comparative results illustrate the interest of this new type of integrated analytical and linearized IK control.

  2. Destructive materials thermal characteristics determination with application for spacecraft structures testing

    NASA Astrophysics Data System (ADS)

    Alifanov, O. M.; Budnik, S. A.; Nenarokomov, A. V.; Netelev, A. V.; Titov, D. M.

    2013-04-01

    In many practical situations it is impossible to directly measure the thermal and thermokinetic properties of the composite materials being analyzed. Often the only way to overcome this difficulty is indirect measurement. This type of measurement is usually formulated as the solution of an inverse heat transfer problem. Such problems are ill-posed in the mathematical sense, and their main feature shows itself in solution instabilities, which is why special regularizing methods are needed to solve them. The general method of iterative regularization is considered with application to the estimation of material properties. The objective of this paper is to estimate the thermal and thermokinetic properties of advanced materials using an approach based on inverse methods. An experimental-computational system is presented for investigating the thermal and kinetic properties of composite materials by methods of inverse heat transfer problems, developed at the Thermal Laboratory of the Department of Space Systems Engineering, Moscow Aviation Institute (MAI). The system is aimed at investigating materials under conditions of unsteady contact and/or radiation heating over a wide range of temperature changes and heating rates in vacuum, air, and inert gas media.

  3. Inverse imaging of the breast with a material classification technique.

    PubMed

    Manry, C W; Broschat, S L

    1998-03-01

    In recent publications [Chew et al., IEEE Trans. Biomed. Eng. BME-9, 218-225 (1990); Borup et al., Ultrason. Imaging 14, 69-85 (1992)] the inverse imaging problem has been solved by means of a two-step iterative method. In this paper, a third step is introduced for ultrasound imaging of the breast. In this step, which is based on statistical pattern recognition, classification of tissue types and a priori knowledge of the anatomy of the breast are integrated into the iterative method. Use of this material classification technique results in more rapid convergence to the inverse solution--approximately 40% fewer iterations are required--as well as greater accuracy. In addition, tumors are detected early in the reconstruction process. Results for reconstructions of a simple two-dimensional model of the human breast are presented. These reconstructions are extremely accurate when system noise and variations in tissue parameters are not too great. However, for the algorithm used, degradation of the reconstructions and divergence from the correct solution occur when system noise and variations in parameters exceed threshold values. Even in this case, however, tumors are still identified within a few iterations.

  4. Inversion of sonobuoy data from shallow-water sites with simulated annealing.

    PubMed

    Lindwall, Dennis; Brozena, John

    2005-02-01

    An enhanced simulated annealing algorithm is used to invert sparsely sampled seismic data collected with sonobuoys to obtain seafloor geoacoustic properties at two littoral marine environments as well as for a synthetic data set. Inversion of field data from a 750-m water-depth site using a water-gun sound source found a good solution which included a pronounced subbottom reflector after 6483 iterations over seven variables. Field data from a 250-m water-depth site using an air-gun source required 35,421 iterations for a good inversion solution because 30 variables had to be solved for, including the shot-to-receiver offsets. The sonobuoy derived compressional wave velocity-depth (Vp-Z) models compare favorably with Vp-Z models derived from nearby, high-quality, multichannel seismic data. There are, however, substantial differences between seafloor reflection coefficients calculated from field models and seafloor reflection coefficients based on commonly used Vp regression curves (gradients). Reflection loss is higher at one field site and lower at the other than predicted from commonly used Vp gradients for terrigenous sediments. In addition, there are strong effects on reflection loss due to the subseafloor interfaces that are also not predicted by Vp gradients.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lin; Dai, Zhenxue; Gong, Huili

    Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with optimized different integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
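
    The analytical transition probability model referred to above is commonly written as a matrix exponential of a transition rate matrix, T(h) = expm(R h). A minimal sketch is given below; the way the off-diagonal rates are distributed according to facies proportions is a common simplifying choice used here for illustration, not necessarily the multi-zone parameterization estimated in this study.

```python
import numpy as np
from scipy.linalg import expm

def transition_probability(lag, mean_lengths, proportions):
    """Markov-chain transition probability matrix T(h) = expm(R * h).

    Diagonal rates are -1 / (mean length of each hydrofacies); off-diagonal
    rates distribute the transitions among the remaining facies in
    proportion to their volumetric proportions.
    """
    mean_lengths = np.asarray(mean_lengths, float)
    p = np.asarray(proportions, float)
    n = len(p)
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = -1.0 / mean_lengths[k]
        others = [j for j in range(n) if j != k]
        w = p[others] / p[others].sum()
        R[k, others] = w / mean_lengths[k]      # rows of R sum to zero
    return expm(R * lag)
```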

  6. A matched-peak inversion approach for ocean acoustic travel-time tomography

    PubMed

    Skarsoulis

    2000-03-01

    A new approach for the inversion of travel-time data is proposed, based on the matching between model arrivals and observed peaks. Using the linearized model relations between sound-speed and arrival-time perturbations about a set of background states, arrival times and associated errors are calculated on a fine grid of model states discretizing the sound-speed parameter space. Each model state can explain (identify) a number of observed peaks in a particular reception lying within the uncertainty intervals of the corresponding predicted arrival times. The model states that explain the maximum number of observed peaks are considered as the more likely parametric descriptions of the reception; these model states can be described in terms of mean values and variances providing a statistical answer (matched-peak solution) to the inversion problem. A basic feature of the matched-peak inversion approach is that each reception can be treated independently, i.e., no constraints are posed from previous-reception identification or inversion results. Accordingly, there is no need for initialization of the inversion procedure and, furthermore, discontinuous travel-time data can be treated. The matched-peak inversion method is demonstrated by application to 9-month-long travel-time data from the Thetis-2 tomography experiment in the western Mediterranean sea.

  7. Comparison of iterative inverse coarse-graining methods

    NASA Astrophysics Data System (ADS)

    Rosenberger, David; Hanke, Martin; van der Vegt, Nico F. A.

    2016-10-01

    Deriving potentials for coarse-grained Molecular Dynamics (MD) simulations is frequently done by solving an inverse problem. Methods like Iterative Boltzmann Inversion (IBI) or Inverse Monte Carlo (IMC) have been widely used to solve this problem. The solution obtained by application of these methods guarantees a match in the radial distribution function (RDF) between the underlying fine-grained system and the derived coarse-grained system. However, these methods often fail in reproducing thermodynamic properties. To overcome this deficiency, additional thermodynamic constraints such as pressure or Kirkwood-Buff integrals (KBI) may be added to these methods. In this communication we test the ability of these methods to converge to a known solution of the inverse problem. With this goal in mind we have studied a binary mixture of two simple Lennard-Jones (LJ) fluids, in which no actual coarse-graining is performed. We further discuss whether full convergence is actually needed to achieve thermodynamic representability.
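
    The Iterative Boltzmann Inversion update mentioned above has a simple closed form; a one-step sketch on a tabulated pair potential is given below. The grid, the damping prefactor alpha, and the array names are illustrative, and pressure or Kirkwood-Buff corrections are not included.

```python
import numpy as np

def ibi_update(U, g_cg, g_target, kBT=1.0, alpha=1.0):
    """One Iterative Boltzmann Inversion step on a tabulated pair potential.

    U, g_cg and g_target are arrays on a common distance grid; the update
    U_(i+1)(r) = U_i(r) + alpha * kBT * ln( g_i(r) / g_target(r) )
    drives the coarse-grained RDF g_i toward the reference RDF g_target.
    """
    with np.errstate(divide='ignore', invalid='ignore'):
        correction = np.where((g_cg > 0) & (g_target > 0),
                              np.log(g_cg / g_target), 0.0)
    return U + alpha * kBT * correction
```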

  8. Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data

    USGS Publications Warehouse

    Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.

    2015-01-01

    We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
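
    For a fixed fault geometry and a linear forward model relating slip to static offsets, the Gaussian posterior referred to above has a closed form; a generic sketch is shown below. This is the textbook linear-Gaussian result, not the full Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip implementation, and the matrix names are placeholders.

```python
import numpy as np

def gaussian_slip_posterior(G, d, Cd, Cm):
    """Analytic Gaussian posterior for slip s in a linear model d = G s + e.

    With Gaussian data errors e ~ N(0, Cd) and a Gaussian prior s ~ N(0, Cm),
    the posterior is Gaussian with the closed-form mean and covariance below,
    so it can be evaluated rapidly for any candidate fault geometry.
    """
    Cd_inv = np.linalg.inv(Cd)
    post_cov = np.linalg.inv(G.T @ Cd_inv @ G + np.linalg.inv(Cm))
    post_mean = post_cov @ G.T @ Cd_inv @ d
    return post_mean, post_cov
```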

  9. Performance impact of mutation operators of a subpopulation-based genetic algorithm for multi-robot task allocation problems.

    PubMed

    Liu, Chun; Kroll, Andreas

    2016-01-01

    Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems. It is a constrained combinatorial optimization problem that becomes more complex in the case of cooperative tasks because these introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm can obtain better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than the other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than the other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
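
    The four mutation operators named above act on a permutation of task indices; minimal illustrative versions are sketched below. These are generic permutation operators, not the paper's implementation, which must additionally respect robot assignments and cooperative-task constraints.

```python
import random

def swap(seq):
    """Exchange two randomly chosen tasks."""
    s = list(seq); i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def insertion(seq):
    """Remove one task and reinsert it at a random position."""
    s = list(seq); i, j = random.sample(range(len(s)), 2)
    s.insert(j, s.pop(i))
    return s

def inversion(seq):
    """Reverse the order of a random contiguous segment."""
    s = list(seq); i, j = sorted(random.sample(range(len(s)), 2))
    s[i:j + 1] = reversed(s[i:j + 1])
    return s

def displacement(seq):
    """Move a random contiguous segment to a new position."""
    s = list(seq); i, j = sorted(random.sample(range(len(s)), 2))
    segment = s[i:j + 1]
    rest = s[:i] + s[j + 1:]
    k = random.randint(0, len(rest))
    return rest[:k] + segment + rest[k:]
```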

  10. Laplace Transform Based Radiative Transfer Studies

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Lin, B.; Ng, T.; Yang, P.; Wiscombe, W.; Herath, J.; Duffy, D.

    2006-12-01

    Multiple scattering is the major uncertainty in the data analysis of space-based lidar measurements. Until now, accurate quantitative lidar data analysis has been limited to very thin objects that are dominated by single scattering, where photons from the laser beam scatter only a single time off particles in the atmosphere before reaching the receiver and a simple linear relationship between physical properties and the lidar signal exists. In reality, multiple scattering is always a factor in space-based lidar measurements, and it dominates space-based lidar returns from clouds, dust aerosols, vegetation canopy and phytoplankton. While multiple scattering returns are clear signals, the lack of a fast-enough lidar multiple scattering computation tool forces us to treat the signal as unwanted "noise" and to use simple multiple scattering correction schemes to remove it. Such treatments waste the multiple scattering signals and may cause orders-of-magnitude errors in the retrieved physical properties. Thus the lack of fast and accurate time-dependent radiative transfer tools significantly limits lidar remote sensing capabilities. Analyzing lidar multiple scattering signals requires fast and accurate time-dependent radiative transfer computations. Currently, multiple scattering is handled with Monte Carlo simulations, which take minutes to hours, are too slow for interactive satellite data analysis, and can only be used to help system/algorithm design and error assessment. We present an innovative physics approach to solve the time-dependent radiative transfer problem. The technique utilizes FPGA-based reconfigurable computing hardware. The approach is as follows. 1. Physics solution: perform a Laplace transform on the time and spatial dimensions and a Fourier transform on the viewing azimuth dimension, converting the solution of the radiative transfer differential equation into a fast matrix inversion problem; the majority of the radiative transfer computation then goes into matrix inversion, FFTs and inverse Laplace transforms. 2. Hardware solution: perform the well-defined matrix inversion, FFT and Laplace transforms on highly parallel, reconfigurable computing hardware. This physics-based computational tool leads to accurate quantitative analysis of space-based lidar signals and improves the data quality of current lidar missions such as CALIPSO. This presentation will introduce the basic idea of this approach, preliminary results based on SRC's FPGA-based Mapstation, and how we may apply it to CALIPSO data analysis.

  11. Cybernetic group method of data handling (GMDH) statistical learning for hyperspectral remote sensing inverse problems in coastal ocean optics

    NASA Astrophysics Data System (ADS)

    Filippi, Anthony Matthew

    For complex systems, sufficient a priori knowledge is often lacking about the mathematical or empirical relationship between cause and effect or between the inputs and outputs of a given system. Automated machine learning may offer a useful solution in such cases. Coastal marine optical environments represent such a case, as the optical remote sensing inverse problem remains largely unsolved. A self-organizing, cybernetic mathematical modeling approach known as the group method of data handling (GMDH), a type of statistical learning network (SLN), was used to generate explicit spectral inversion models for optically shallow coastal waters. Optically shallow water light fields represent a particularly difficult challenge in oceanographic remote sensing. Several algorithm-input data treatment combinations were utilized in multiple experiments to automatically generate inverse solutions for various inherent optical property (IOP), bottom optical property (BOP), constituent concentration, and bottom depth estimations. The objective was to identify the optimal remote-sensing reflectance Rrs(lambda) inversion algorithm. The GMDH also has the potential for inductive discovery of physical hydro-optical laws. Simulated data were used to develop generalized, quasi-universal relationships. The Hydrolight numerical forward model, based on radiative transfer theory, was used to compute simulated above-water remote-sensing reflectance Rrs(lambda) pseudodata, matching the spectral channels and resolution of the experimental Naval Research Laboratory Ocean PHILLS (Portable Hyperspectral Imager for Low-Light Spectroscopy) sensor. The input-output pairs were used for GMDH and artificial neural network (ANN) model development, the latter of which served as a baseline, or control, algorithm. Both types of models were applied to in situ and aircraft data. Also, in situ spectroradiometer-derived Rrs(lambda) were used as input to an optimization-based inversion procedure. Target variables included bottom depth zb, chlorophyll a concentration [chl-a], spectral bottom irradiance reflectance Rb(lambda), and the spectral total absorption a(lambda) and spectral total backscattering bb(lambda) coefficients. When applying the cybernetic and neural models to in situ HyperTSRB-derived Rrs, the difference in the means of the absolute error of the inversion estimates for zb was significant (alpha = 0.05): GMDH yielded significantly better zb estimates than the ANN, with a mean absolute error (MAE) of 0.55161 m for GMDH compared with 0.62214 m for the ANN model.

  12. Comparison result of inversion of gravity data of a fault by particle swarm optimization and Levenberg-Marquardt methods.

    PubMed

    Toushmalani, Reza

    2013-01-01

    The purpose of this study was to compare the performance of two methods for the gravity inversion of a fault. The first method, particle swarm optimization (PSO), is a heuristic global optimization algorithm based on swarm intelligence; it originates from research on the movement behavior of bird and fish flocks. The second method, the Levenberg-Marquardt (LM) algorithm, is an approximation to Newton's method that is also used for training ANNs. In this paper we first discuss the gravity field of a fault, then describe the PSO and LM algorithms, and present the application of the Levenberg-Marquardt algorithm and a particle swarm algorithm to solving the inverse problem of a fault. The parameters used for the algorithms are given for the individual tests. The inverse solutions reveal that the fault model parameters agree quite well with the known results, and better agreement was found between the predicted model anomaly and the observed gravity anomaly with the PSO method than with the LM method.
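
    A minimal particle swarm optimization sketch of the type compared above is shown below; the inertia and acceleration coefficients, bounds handling, and misfit interface are generic illustrative choices, not the settings reported in the paper.

```python
import numpy as np

def pso(misfit, bounds, n_particles=40, n_iter=300, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer for fault-model parameters.

    Each particle is a candidate parameter vector (e.g. depth, dip, density
    contrast); velocities are driven toward the particle's own best position
    and the swarm's global best position.
    """
    rng = np.random.default_rng()
    lo, hi = np.asarray(bounds, float).T
    dim = len(bounds)
    x = lo + rng.random((n_particles, dim)) * (hi - lo)
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([misfit(p) for p in x])
    g = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([misfit(p) for p in x])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g, pbest_cost.min()
```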

  13. Real-time monitoring and massive inversion of source parameters of very long period seismic signals: An application to Stromboli Volcano, Italy

    USGS Publications Warehouse

    Auger, E.; D'Auria, L.; Martini, M.; Chouet, B.; Dawson, P.

    2006-01-01

    We present a comprehensive processing tool for the real-time analysis of the source mechanism of very long period (VLP) seismic data based on waveform inversions performed in the frequency domain for a point source. A search for the source providing the best-fitting solution is conducted over a three-dimensional grid of assumed source locations, in which the Green's functions associated with each point source are calculated by finite differences using the reciprocal relation between source and receiver. Tests performed on 62 nodes of a Linux cluster indicate that the waveform inversion and search for the best-fitting signal over 100,000 point sources require roughly 30 s of processing time for a 2-min-long record. The procedure is applied to post-processing of a data archive and to continuous automatic inversion of real-time data at Stromboli, providing insights into different modes of degassing at this volcano. Copyright 2006 by the American Geophysical Union.

  14. Determination of elastic moduli from measured acoustic velocities.

    PubMed

    Brown, J Michael

    2018-06-01

    Methods are evaluated in solution of the inverse problem associated with determination of elastic moduli for crystals of arbitrary symmetry from elastic wave velocities measured in many crystallographic directions. A package of MATLAB functions provides a robust and flexible environment for analysis of ultrasonic, Brillouin, or Impulsive Stimulated Light Scattering datasets. Three inverse algorithms are considered: the gradient-based methods of Levenberg-Marquardt and Backus-Gilbert, and a non-gradient-based (Nelder-Mead) simplex approach. Several data types are considered: body wave velocities alone, surface wave velocities plus a side constraint on X-ray-diffraction-based axes compressibilities, or joint body and surface wave velocities. The numerical algorithms are validated through comparisons with prior published results and through analysis of synthetic datasets. Although all approaches succeed in finding low-misfit solutions, the Levenberg-Marquardt method consistently demonstrates effectiveness and computational efficiency. However, linearized gradient-based methods, when applied to a strongly non-linear problem, may not adequately converge to the global minimum. The simplex method, while slower, is less susceptible to being trapped in local misfit minima. A "multi-start" strategy (initiate searches from more than one initial guess) provides better assurance that global minima have been located. Numerical estimates of parameter uncertainties based on Monte Carlo simulations are compared to formal uncertainties based on covariance calculations. Copyright © 2018 Elsevier B.V. All rights reserved.
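
    The multi-start simplex strategy described above can be sketched with scipy's derivative-free Nelder-Mead minimizer, shown here in Python for illustration rather than the paper's MATLAB package; the misfit function, bounds, and number of restarts are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def multi_start_simplex(misfit, bounds, n_starts=20, seed=0):
    """Multi-start Nelder-Mead search for elastic moduli.

    Random initial guesses are drawn inside `bounds`; each start is refined
    with the derivative-free simplex method and the lowest-misfit solution
    is returned, reducing the risk of being trapped in a local minimum.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    best = None
    for _ in range(n_starts):
        x0 = lo + rng.random(len(bounds)) * (hi - lo)
        res = minimize(misfit, x0, method='Nelder-Mead')
        if best is None or res.fun < best.fun:
            best = res
    return best.x, best.fun
```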

  15. Comparison of gravimetric and mantle flow solutions for sub-lithopsheric stress modeling and their combination

    NASA Astrophysics Data System (ADS)

    Eshagh, Mehdi; Steinberger, Bernhard; Tenzer, Robert; Tassara, Andrés

    2018-05-01

    Based on Hager and O'Connell's solution to mantle flow equations, the stresses induced by mantle convection are determined using the density and viscosity structure in addition to topographic data and a plate velocity model. The solution to mantle flow equations requires the knowledge of mantle properties that are typically retrieved from seismic information. Large parts of the world are, however, not yet covered sufficiently by seismic surveys. An alternative method of modeling the stress field was introduced by Runcorn. He formulated a direct relation between the stress field and gravity data, while adopting several assumptions, particularly disregarding the toroidal mantle flow component and mantle viscosity variations. A possible way to overcome theoretical deficiencies of Runcorn's theory as well as some practical limitations of applying Hager and O'Connell's theory (in the absence of seismic data) is to combine these two methods. In this study, we apply a least-squares analysis to combine these two methods based on the gravity data inversion constraint on mantle flow equations. In particular, we use vertical gravity gradients from the Gravity field and steady state Ocean Circulation Explorer that are corrected for the gravitational contribution of crustal density heterogeneities prior to applying a localized gravity-gradient inversion. This gravitational contribution is estimated based on combining the Vening Meinesz-Moritz and flexural isostatic theories. Moreover, we treat the non-isostatic effect implicitly by applying a band-limited kernel of the integral equation during the inversion. In numerical studies of modeling the stress field within the South American continental lithosphere, we compare the results obtained after applying Runcorn's as well as Hager and O'Connell's methods and their combination. The results show that, according to Hager and O'Connell's (mantle flow) solution, the maximum stress intensity is inferred under the northern Andes. Additional large stress anomalies are detected along the central and southern Andes, while stresses under most of the old, stable cratonic formations are much less pronounced or absent. A prevailing stress-vector orientation realistically resembles a convergent mantle flow and downward currents under continental basins that separate the Andean Orogeny from the Amazonian Shield and adjacent cratons. Runcorn's (gravimetric) solution, on the other hand, reflects a tectonic response of the lithosphere to mantle flow, with the maximum stress intensity detected along the subduction zone between the Nazca and Altiplano plates and along the convergent tectonic margin between the Altiplano and South American plates. The results also reveal a very close agreement between the results obtained from the combined and Hager and O'Connell's solutions.

  16. Adjoint Sensitivity Analysis of Orbital Mechanics: Application to Computations of Observables' Partials with Respect to Harmonics of the Planetary Gravity Fields

    NASA Technical Reports Server (NTRS)

    Ustinov, Eugene A.; Sunseri, Richard F.

    2005-01-01

    An approach is presented to the inversion of gravity fields based on the evaluation of partials of observables with respect to gravity harmonics using the solution of the adjoint problem of the orbital dynamics of the spacecraft. The corresponding adjoint operator is derived directly from the linear operator of the linearized forward problem of orbital dynamics. The resulting adjoint problem is similar to the forward problem and can be solved by the same methods. For a given highest degree N of gravity harmonics desired, this method involves the integration of N adjoint solutions, as compared to the integration of N^2 partials of the forward solution with respect to gravity harmonics in the conventional approach. Thus, for higher resolution gravity models, this approach becomes increasingly more effective in terms of computer resources as compared to the approach based on the solution of the forward problem of orbital dynamics.

  17. An approximation theory for the identification of linear thermoelastic systems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.; Su, Chien-Hua Frank

    1990-01-01

    An abstract approximation framework and convergence theory for the identification of thermoelastic systems is developed. Starting from an abstract operator formulation consisting of a coupled second order hyperbolic equation of elasticity and first order parabolic equation for heat conduction, well-posedness is established using linear semigroup theory in Hilbert space, and a class of parameter estimation problems is then defined involving mild solutions. The approximation framework is based upon generic Galerkin approximation of the mild solutions, and convergence of solutions of the resulting sequence of approximating finite dimensional parameter identification problems to a solution of the original infinite dimensional inverse problem is established using approximation results for operator semigroups. An example involving the basic equations of one dimensional linear thermoelasticity and a linear spline based scheme are discussed. Numerical results indicate how the approach might be used in a study of damping mechanisms in flexible structures.

  18. Numerical solution of quadratic matrix equations for free vibration analysis of structures

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1975-01-01

    This paper is concerned with the efficient and accurate solution of the eigenvalue problem represented by quadratic matrix equations. Such matrix forms are obtained in connection with the free vibration analysis of structures, discretized by finite 'dynamic' elements, resulting in frequency-dependent stiffness and inertia matrices. The paper presents a new numerical solution procedure of the quadratic matrix equations, based on a combined Sturm sequence and inverse iteration technique enabling economical and accurate determination of a few required eigenvalues and associated vectors. An alternative procedure based on a simultaneous iteration procedure is also described when only the first few modes are the usual requirement. The employment of finite dynamic elements in conjunction with the presently developed eigenvalue routines results in a most significant economy in the dynamic analysis of structures.
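
    For reference, a quadratic matrix eigenvalue problem of the kind described above can also be reduced to a linear generalized eigenproblem by a standard companion linearization. The sketch below shows that textbook route for a generic problem (lam^2 M + lam C + K) x = 0; it is not the combined Sturm sequence and inverse iteration procedure developed in the paper.

```python
import numpy as np
from scipy.linalg import eig

def quadratic_eigenvalues(M, C, K):
    """Solve the quadratic eigenproblem (lam^2 M + lam C + K) x = 0.

    A companion linearization doubles the dimension and reduces the problem
    to a generalized linear eigenproblem A z = lam B z with z = [x; lam x].
    """
    M, C, K = map(np.asarray, (M, C, K))
    n = M.shape[0]
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-K,               -C       ]])
    B = np.block([[np.eye(n),        np.zeros((n, n))],
                  [np.zeros((n, n)), M               ]])
    lam, V = eig(A, B)
    return lam, V[:n, :]      # eigenvalues and the displacement part of z
```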

  19. On stability of the solutions of inverse problem for determining the right-hand side of a degenerate parabolic equation with two independent variables

    NASA Astrophysics Data System (ADS)

    Kamynin, V. L.; Bukharova, T. I.

    2017-01-01

We prove stability estimates with respect to perturbations of the input data for solutions of inverse problems for degenerate parabolic equations with unbounded coefficients. An important feature of these estimates is that their constants are written out explicitly in terms of the input data of the problem.

  20. Nonlinear Problems in Fluid Dynamics and Inverse Scattering

    DTIC Science & Technology

    1993-05-31

nonlinear Kadomtsev-Petviashvili (KP) equations, have solutions which will become infinite in finite time. This phenomenon is sometimes referred to as ... 40 (November 1992). 7. Wave Collapse and Instability of Solitary Waves of a Generalized Nonlinear Kadomtsev-Petviashvili Equation, X.P. Wang, M.J. ... The inverse scattering of a class of differential-difference equations and multidimensional operators has been constructed. Solutions of nonlinear

  1. Turbulent Equilibria for Charged Particles in Space

    NASA Astrophysics Data System (ADS)

    Yoon, Peter

    2017-04-01

The solar wind electron distribution function is apparently composed of several components, including a non-thermal tail population. The electron distribution containing the energetic tail feature is well fitted by the kappa distribution function. The solar wind protons also possess a quasi-power-law tail that is well fitted by an inverse power-law model. The present paper discusses the latest theoretical development regarding the dynamical steady-state solution of electrons and Langmuir turbulence that are in turbulent equilibrium. According to this theory, the Maxwellian and kappa distribution functions for the electrons emerge as the only two possible solutions that satisfy the steady-state weak turbulence plasma kinetic equation. For the proton inverse power-law tail problem, a similar turbulent equilibrium solution can be conceived of, but instead of high-frequency Langmuir fluctuations, the theory involves low-frequency kinetic Alfvenic turbulence. The steady-state solution of the self-consistent proton kinetic equation and the wave kinetic equation for Alfvenic waves can be found in order to obtain a self-consistent solution for the inverse power-law tail distribution function.

  2. Convergent radial dispersion: A note on evaluation of the Laplace transform solution

    USGS Publications Warehouse

    Moench, Allen F.

    1991-01-01

    A numerical inversion algorithm for Laplace transforms that is capable of handling rapid changes in the computed function is applied to the Laplace transform solution to the problem of convergent radial dispersion in a homogeneous aquifer. Prior attempts by the author to invert this solution were unsuccessful for highly advective systems where the Peclet number was relatively large. The algorithm used in this note allows for rapid and accurate inversion of the solution for all Peclet numbers of practical interest, and beyond. Dimensionless breakthrough curves are illustrated for tracer input in the form of a step function, a Dirac impulse, or a rectangular input.
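
    The record above does not reproduce the inversion algorithm itself; as a hedged illustration of numerical Laplace inversion in general, the sketch below implements the classical Gaver-Stehfest scheme, which works well for smooth functions but, unlike the algorithm referenced above, is not well suited to rapid changes or discontinuities. The test transform F(s) = 1/(s+1) is an assumption chosen only so the result can be checked against exp(-t).

```python
import math

def stehfest_weights(N=12):
    """Gaver-Stehfest weights V_k for even N (classical formula)."""
    assert N % 2 == 0
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k)
            )
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) by the Stehfest sum."""
    V = stehfest_weights(N)
    ln2_t = math.log(2.0) / t
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

if __name__ == "__main__":
    # Check against a transform with a known inverse: F(s) = 1/(s+1) -> exp(-t)
    F = lambda s: 1.0 / (s + 1.0)
    for t in (0.5, 1.0, 2.0):
        print(t, invert_laplace(F, t), math.exp(-t))
```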

  3. Frnakenstein: multiple target inverse RNA folding.

    PubMed

    Lyngsø, Rune B; Anderson, James W J; Sizikova, Elena; Badugu, Amarendra; Hyland, Tomas; Hein, Jotun

    2012-10-09

RNA secondary structure prediction, or folding, is a classic problem in bioinformatics: given a sequence of nucleotides, the aim is to predict the base pairs formed in its three dimensional conformation. The inverse problem of designing a sequence folding into a particular target structure has only more recently received notable interest. With a growing appreciation and understanding of the functional and structural properties of RNA motifs, and a growing interest in utilising biomolecules in nano-scale designs, the interest in the inverse RNA folding problem is bound to increase. However, whereas the RNA folding problem from an algorithmic viewpoint has an elegant and efficient solution, the inverse RNA folding problem appears to be hard. In this paper we present a genetic algorithm approach to solve the inverse folding problem. The main aims of the development were to address the hitherto mostly ignored extension of the inverse folding problem, the multi-target inverse folding problem, while simultaneously designing a method with superior performance when measured on the quality of designed sequences. The genetic algorithm has been implemented as a Python program called Frnakenstein. It was benchmarked against four existing methods and several data sets totalling 769 real and predicted single structure targets, and on 292 two structure targets. It performed as well as or better than all existing methods at finding sequences which folded in silico into the target structure, without the heavy bias towards CG base pairs that was observed for all other top performing methods. On the two structure targets it also performed well, generating a perfect design for about 80% of the targets. Our method illustrates that successful designs for the inverse RNA folding problem do not necessarily have to rely on heavy biases in base pair and unpaired base distributions. The design problem seems to become more difficult on larger structures when the target structures are real structures, while no deterioration was observed for predicted structures. Design for two structure targets is considerably more difficult, but far from impossible, demonstrating the feasibility of automated design of artificial riboswitches. The Python implementation is available at http://www.stats.ox.ac.uk/research/genome/software/frnakenstein.

  4. Frnakenstein: multiple target inverse RNA folding

    PubMed Central

    2012-01-01

Background RNA secondary structure prediction, or folding, is a classic problem in bioinformatics: given a sequence of nucleotides, the aim is to predict the base pairs formed in its three dimensional conformation. The inverse problem of designing a sequence folding into a particular target structure has only more recently received notable interest. With a growing appreciation and understanding of the functional and structural properties of RNA motifs, and a growing interest in utilising biomolecules in nano-scale designs, the interest in the inverse RNA folding problem is bound to increase. However, whereas the RNA folding problem from an algorithmic viewpoint has an elegant and efficient solution, the inverse RNA folding problem appears to be hard. Results In this paper we present a genetic algorithm approach to solve the inverse folding problem. The main aims of the development were to address the hitherto mostly ignored extension of the inverse folding problem, the multi-target inverse folding problem, while simultaneously designing a method with superior performance when measured on the quality of designed sequences. The genetic algorithm has been implemented as a Python program called Frnakenstein. It was benchmarked against four existing methods and several data sets totalling 769 real and predicted single structure targets, and on 292 two structure targets. It performed as well as or better than all existing methods at finding sequences which folded in silico into the target structure, without the heavy bias towards CG base pairs that was observed for all other top performing methods. On the two structure targets it also performed well, generating a perfect design for about 80% of the targets. Conclusions Our method illustrates that successful designs for the inverse RNA folding problem do not necessarily have to rely on heavy biases in base pair and unpaired base distributions. The design problem seems to become more difficult on larger structures when the target structures are real structures, while no deterioration was observed for predicted structures. Design for two structure targets is considerably more difficult, but far from impossible, demonstrating the feasibility of automated design of artificial riboswitches. The Python implementation is available at http://www.stats.ox.ac.uk/research/genome/software/frnakenstein. PMID:23043260
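
    The sketch below is not Frnakenstein; it is a toy genetic-algorithm skeleton for inverse folding in which the fitness is only a crude proxy (the fraction of target base pairs, read from a dot-bracket string, that are Watson-Crick or wobble complementary). A real method would instead score candidates with a folding predictor, but the mutate/crossover/select loop has the same shape.

```python
import random

BASES = "ACGU"
COMPLEMENT = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def pairs_from_dotbracket(structure):
    """Return the list of (i, j) base pairs encoded by a dot-bracket string."""
    stack, pairs = [], []
    for i, c in enumerate(structure):
        if c == "(":
            stack.append(i)
        elif c == ")":
            pairs.append((stack.pop(), i))
    return pairs

def fitness(seq, pairs):
    """Toy proxy: fraction of target pairs that are complementary in seq."""
    if not pairs:
        return 1.0
    ok = sum((seq[i], seq[j]) in COMPLEMENT for i, j in pairs)
    return ok / len(pairs)

def evolve(structure, pop_size=50, generations=200, mutation_rate=0.05):
    pairs = pairs_from_dotbracket(structure)
    n = len(structure)
    pop = ["".join(random.choice(BASES) for _ in range(n)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, pairs), reverse=True)
        if fitness(pop[0], pairs) == 1.0:
            break
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)              # one-point crossover
            child = list(a[:cut] + b[cut:])
            for i in range(n):                        # point mutations
                if random.random() < mutation_rate:
                    child[i] = random.choice(BASES)
            children.append("".join(child))
        pop = survivors + children
    return max(pop, key=lambda s: fitness(s, pairs))

if __name__ == "__main__":
    target = "((((....))))"                           # illustrative target structure
    best = evolve(target)
    print(best, fitness(best, pairs_from_dotbracket(target)))
```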

  5. Joint Geophysical Inversion With Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelievre, P. G.; Bijani, R.; Farquharson, C. G.

    2015-12-01

    Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class are standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class of problems are also mesh-based but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods. This includes the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used but these can be ameliorated using parallelization and problem dimension reduction strategies.
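
    A minimal sketch of the Pareto-optimality test underlying PMOGO methods is given below: given candidate models with several objective values (for example, data misfits and a regularization term), only the non-dominated models are retained. The example objective values are illustrative assumptions.

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated rows of an (n_models, n_objectives) array.

    Model a dominates model b if a is no worse in every objective and strictly
    better in at least one (all objectives are minimized)."""
    F = np.asarray(objectives, dtype=float)
    keep = []
    for i in range(F.shape[0]):
        dominated = np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))
        if not dominated:
            keep.append(i)
    return keep

if __name__ == "__main__":
    # Columns: gravity misfit, magnetic misfit, model roughness (all minimized)
    objs = [[1.0, 3.0, 0.2],
            [2.0, 1.0, 0.5],
            [1.5, 2.5, 0.3],
            [2.5, 3.5, 0.6]]   # the last model is dominated by the first
    print(pareto_front(objs))   # -> [0, 1, 2]
```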

  6. Inverse approaches with lithologic information for a regional groundwater system in southwest Kansas

    USGS Publications Warehouse

    Tsou, Ming‐shu; Perkins, S.P.; Zhan, X.; Whittemore, Donald O.; Zheng, Lingyun

    2006-01-01

Two practical approaches incorporating lithologic information for groundwater modeling calibration are presented to estimate distributed, cell-based hydraulic conductivity. The first approach is to estimate optimal hydraulic conductivities for geological materials by incorporating thickness distribution of materials into inverse modeling. In the second approach, residuals for the groundwater model solution are minimized according to a globalized Newton method with the aid of a Geographic Information System (GIS) to calculate a cell-wise distribution of hydraulic conductivity. Both approaches honor geologic data and were effective in characterizing the heterogeneity of a regional groundwater modeling system in southwest Kansas. © 2005 Elsevier Ltd. All rights reserved.

  7. Prize for Industrial Applications of Physics Talk: The Inverse Scattering Problem and the role of measurements in its solution

    NASA Astrophysics Data System (ADS)

    Wyatt, Philip

    2009-03-01

The electromagnetic inverse scattering problem suggests that if a homogeneous and non-absorbing object is illuminated with a monochromatic light source and the far field scattered light intensity is known at sufficient scattering angles, then, in principle, one could derive the dielectric structure of the scattering object. In general, this is an ill-posed problem and methods must be developed to regularize the search for unique solutions. An iterative procedure often begins with a model of the scattering object, solves the forward scattering problem using this model, and then compares these calculated results with the measured values. Key to any such solution is instrumentation capable of providing adequate data. To this end, the development of the first laser-based absolute light scattering photometers is described together with their continuing evolution and some of the remarkable discoveries made with them. For particles much smaller than the wavelength of the incident light (e.g. macromolecules), the inverse scattering problems are easily solved. Among the many results derived with this instrumentation are the in situ structure of bacterial cells, new drug delivery mechanisms, the development of new vaccines and other biologicals, characterization of wines, the possibility of custom chemotherapy, development of new polymeric materials, identification of protein crystallization conditions, and a variety of discoveries concerning protein interactions. A new form of the problem is described to address bioterrorist threats. Over the many years of development and refinement, one element stands out as essential for the successes that followed: the R and D teams were always directed and staffed by physics-trained theorists and experimentalists. Fourteen Ph.D. physicists each made his/her unique contribution to the development of these evolving instruments and the interpretation of their results.

  8. Estimating Discharge, Depth and Bottom Friction in Sand Bed Rivers Using Surface Currents and Water Surface Elevation Observations

    NASA Astrophysics Data System (ADS)

    Simeonov, J.; Czapiga, M. J.; Holland, K. T.

    2017-12-01

We developed an inversion model for river bathymetry estimation using measurements of surface currents, water surface elevation slope, and shoreline position. The inversion scheme is based on explicit velocity-depth and velocity-slope relationships derived from the along-channel momentum balance and mass conservation. The velocity-depth relationship requires the discharge value to quantitatively relate the depth to the measured velocity field. The ratio of the discharge to the bottom friction enters as a coefficient in the velocity-slope relationship and is determined by minimizing the difference between the predicted and the measured streamwise variation of the total head. Completing the inversion requires an estimate of the bulk friction, which in the case of sand bed rivers is a strong function of the size of dune bedforms. We explored the accuracy of existing and new empirical closures that relate the bulk roughness to parameters such as the median grain size diameter, the ratio of shear velocity to sediment fall velocity, or the Froude number. For a given roughness parameterization, the inversion solution is determined iteratively since the hydraulic roughness depends on the unknown depth. We first test the new hydraulic roughness parameterization using estimates of the Manning roughness in sand bed rivers based on field measurements. The coupled inversion and roughness model is then tested using in situ and remote sensing measurements of the Kootenai River east of Bonners Ferry, ID.

  9. Markov chain Monte Carlo techniques and spatial-temporal modelling for medical EIT.

    PubMed

    West, Robert M; Aykroyd, Robert G; Meng, Sha; Williams, Richard A

    2004-02-01

Many imaging problems, such as imaging with electrical impedance tomography (EIT), can be shown to be inverse problems: that is, either there is no unique solution or the solution does not depend continuously on the data. As a consequence, solution of inverse problems based on measured data alone is unstable, particularly if the mapping between the solution distribution and the measurements is also nonlinear, as in EIT. To deliver a practical stable solution, it is necessary to make considerable use of prior information or regularization techniques. The role of a Bayesian approach is therefore of fundamental importance, especially when coupled with Markov chain Monte Carlo (MCMC) sampling to provide information about solution behaviour. Spatial smoothing is a commonly used approach to regularization. In the human thorax EIT example considered here, nonlinearity increases the difficulty of imaging using only boundary data, leading to reconstructions which are often rather too smooth. In particular, in medical imaging the resistivity distribution usually contains substantial jumps at the boundaries of different anatomical regions. With spatial smoothing these boundaries can be masked by blurring. This paper focuses on the medical application of EIT to monitor lung and cardiac function; it uses explicit geometric information regarding anatomical structure and incorporates temporal correlation. Some simple properties are assumed known, or at least reliably estimated from separate studies, whereas others are estimated from the voltage measurements. This structural formulation will also allow direct estimation of clinically important quantities, such as ejection fraction and residual capacity, along with assessment of precision.
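
    As a hedged illustration of the MCMC machinery mentioned above (not of the EIT reconstruction itself), the sketch below implements a random-walk Metropolis-Hastings sampler for a generic log-posterior; the toy Gaussian likelihood and smoothness prior are assumptions standing in for the full EIT forward model and structural priors.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_samples=5000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings sampler targeting exp(log_post(x))."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    lp = log_post(x)
    samples, accepted = [], 0
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal(x.shape)   # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:          # accept/reject step
            x, lp = prop, lp_prop
            accepted += 1
        samples.append(x.copy())
    return np.array(samples), accepted / n_samples

if __name__ == "__main__":
    # Toy posterior: Gaussian likelihood around data d plus a smoothness prior
    d = np.array([1.0, 1.2, 0.8])
    def log_post(m):
        misfit = -0.5 * np.sum((m - d) ** 2) / 0.1 ** 2
        smooth = -0.5 * np.sum(np.diff(m) ** 2) / 0.5 ** 2
        return misfit + smooth
    chain, acc = metropolis_hastings(log_post, x0=np.zeros(3))
    print("acceptance rate:", acc)
    print("posterior mean:", chain[1000:].mean(axis=0))
```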

  10. The inverse problem for definition of the shape of a molten contact bridge

    NASA Astrophysics Data System (ADS)

    Kharin, Stanislav N.; Sarsengeldin, Merey M.

    2017-09-01

The paper presents the results of an investigation of the bridging phenomenon occurring at the opening of electrical contacts. The mathematical model describing the dynamics of the molten metal bridge takes into account the Thomson effect. It is based on a system of partial differential equations for the temperature and electrical fields of the bridge in a domain containing two moving unknown boundaries. One of them is the interface between the liquid and solid zones of the bridge and is found by solving the corresponding Stefan problem. The second free boundary corresponds to the shape of the visible part of the bridge. Its definition is an inverse problem, whose solution requires minimizing the energy consumed in forming the shape of a quasi-stationary bridge. Three components of this energy, namely surface tension, pinch effect, and gravitation, are defined by a functional whose minimum gives the required shape of the bridge. The solution of the corresponding variational problem is found by reducing it to a system of ordinary differential equations. Calculated values of the voltage of bridge rupture for various metals are in good agreement with the experimental data. Criteria responsible for the mechanism of molten bridge rupture are introduced in the paper.

  11. A Model-Based Architecture Approach to Ship Design Linking Capability Needs to System Solutions

    DTIC Science & Technology

    2012-06-01

NSSM NATO Sea Sparrow Missile RAM Rolling Airframe Missile CIWS Close-In Weapon System 3D Three Dimensional Ps Probability of Survival PHit ... example effectiveness model. The primary MOP is the inverse of the probability of taking a hit (1 - PHit), which, in this study, will be referred to as

  12. An efficient assisted history matching and uncertainty quantification workflow using Gaussian processes proxy models and variogram based sensitivity analysis: GP-VARS

    NASA Astrophysics Data System (ADS)

    Rana, Sachin; Ertekin, Turgay; King, Gregory R.

    2018-05-01

Reservoir history matching is frequently viewed as an optimization problem which involves minimizing the misfit between simulated and observed data. Many gradient- and evolutionary-strategy-based optimization algorithms have been proposed to solve this problem, which typically require a large number of numerical simulations to find feasible solutions. Therefore, a new methodology referred to as GP-VARS is proposed in this study, which uses forward and inverse Gaussian process (GP) based proxy models combined with a novel application of variogram analysis of response surface (VARS) based sensitivity analysis to efficiently solve high-dimensional history matching problems. An empirical Bayes approach is proposed to optimally train GP proxy models for any given data. The history matching solutions are found via Bayesian optimization (BO) on forward GP models and via predictions of the inverse GP model in an iterative manner. An uncertainty quantification method using MCMC sampling in conjunction with the GP model is also presented to obtain a probabilistic estimate of reservoir properties and estimated ultimate recovery (EUR). An application of the proposed GP-VARS methodology to the PUNQ-S3 reservoir is presented, in which it is shown that GP-VARS provides history match solutions in approximately four times fewer numerical simulations than the differential evolution (DE) algorithm. Furthermore, a comparison of uncertainty quantification results obtained by GP-VARS, EnKF, and other previously published methods shows that the P50 estimate of oil EUR obtained by GP-VARS is in close agreement with the true values for the PUNQ-S3 reservoir.
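
    The following is a self-contained sketch, not the GP-VARS workflow: a small NumPy Gaussian-process proxy (RBF kernel, Cholesky solve) guides the choice of the next "simulation" to run via a lower-confidence-bound criterion. The one-dimensional misfit function and kernel settings are assumptions used only to make the proxy-assisted search loop concrete.

```python
import numpy as np

def rbf(XA, XB, length=1.0, var=1.0):
    """Squared-exponential (RBF) covariance between two point sets."""
    d2 = np.sum((XA[:, None, :] - XB[None, :, :]) ** 2, axis=-1)
    return var * np.exp(-0.5 * d2 / length ** 2)

def gp_fit_predict(Xtr, ytr, Xte, length=1.0, var=1.0, noise=1e-6):
    """GP regression: posterior mean and std at Xte given training data."""
    K = rbf(Xtr, Xtr, length, var) + noise * np.eye(len(Xtr))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    Ks = rbf(Xtr, Xte, length, var)
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var_te = np.clip(np.diag(rbf(Xte, Xte, length, var)) - np.sum(v ** 2, axis=0), 0, None)
    return mean, np.sqrt(var_te)

def next_point(Xtr, ytr, candidates, kappa=2.0):
    """Pick the candidate minimizing the lower confidence bound mean - kappa*std."""
    mean, std = gp_fit_predict(Xtr, ytr, candidates)
    return candidates[np.argmin(mean - kappa * std)]

if __name__ == "__main__":
    # Toy "history-match misfit" standing in for an expensive simulator run
    misfit = lambda x: (x - 0.3) ** 2 + 0.05 * np.sin(20 * x)
    X = np.array([[0.0], [0.5], [1.0]])
    y = misfit(X[:, 0])
    cand = np.linspace(0, 1, 201)[:, None]
    for _ in range(10):                     # proxy-guided search loop
        x_new = next_point(X, y, cand)
        X = np.vstack([X, [x_new]])
        y = np.append(y, misfit(x_new[0]))
    print("best x found:", X[np.argmin(y), 0], "misfit:", y.min())
```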

  13. Inverse Scattering and Local Observable Algebras in Integrable Quantum Field Theories

    NASA Astrophysics Data System (ADS)

    Alazzawi, Sabina; Lechner, Gandalf

    2017-09-01

We present a solution method for the inverse scattering problem for integrable two-dimensional relativistic quantum field theories, specified in terms of a given massive single particle spectrum and a factorizing S-matrix. An arbitrary number of massive particles transforming under an arbitrary compact global gauge group is allowed, thereby generalizing previous constructions of scalar theories. The two-particle S-matrix S is assumed to be an analytic solution of the Yang-Baxter equation with standard properties, including unitarity, TCP invariance, and crossing symmetry. Using methods from operator algebras and complex analysis, we identify sufficient criteria on S that imply the solution of the inverse scattering problem. These conditions are shown to be satisfied in particular by so-called diagonal S-matrices, but presumably also in other cases such as the O(N)-invariant nonlinear σ-models.

  14. On one-dimensional stretching functions for finite-difference calculations. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1979-01-01

    The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type of function is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent.
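
    As a small illustration of sinh-type stretchings (not the exact interior or two-sided functions derived in the paper), the sketch below clusters grid points near one end of the interval; larger values of the assumed parameter beta give stronger clustering.

```python
import numpy as np

def sinh_stretch(n, beta=4.0):
    """Map a uniform parameter xi in [0, 1] to x in [0, 1] with points clustered
    near x = 0; larger beta gives stronger clustering.  (A generic one-sided
    example of the sinh-type stretchings discussed above.)"""
    xi = np.linspace(0.0, 1.0, n)
    return np.sinh(beta * xi) / np.sinh(beta)

if __name__ == "__main__":
    x = sinh_stretch(11, beta=4.0)
    dx = np.diff(x)
    print("grid:", np.round(x, 4))
    print("smallest / largest spacing:", dx.min(), dx.max())
```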

  15. Multicasting based optical inverse multiplexing in elastic optical network.

    PubMed

    Guo, Bingli; Xu, Yingying; Zhu, Paikun; Zhong, Yucheng; Chen, Yuanxiang; Li, Juhao; Chen, Zhangyuan; He, Yongqi

    2014-06-16

Optical multicasting based inverse multiplexing (IM) is introduced into the spectrum allocation of elastic optical networks to resolve the spectrum fragmentation problem, where superchannels can be split and fitted into several discrete spectrum blocks at the intermediate node. We experimentally demonstrate it with a 1-to-7 optical superchannel multicasting module and selecting/coupling components. Simulation results also show that, compared with several emerging spectrum defragmentation solutions (e.g., spectrum conversion, split spectrum), IM reduces blocking significantly without adding as much system complexity as split spectrum. On the other hand, the service fairness of these schemes for traffic with different granularity is investigated for the first time; IM performs better than spectrum conversion and almost as well as split spectrum, especially for smaller size traffic under light traffic intensity.

  16. Inverse design of near unity efficiency perfectly vertical grating couplers

    NASA Astrophysics Data System (ADS)

    Michaels, Andrew; Yablonovitch, Eli

    2018-02-01

    Efficient coupling between integrated optical waveguides and optical fibers is essential to the success of integrated photonics. While many solutions exist, perfectly vertical grating couplers which scatter light out of a waveguide in the direction normal to the waveguide's top surface are an ideal candidate due to their potential to reduce packaging complexity. Designing such couplers with high efficiency, however, has proven difficult. In this paper, we use electromagnetic inverse design techniques to optimize a high efficiency two-layer perfectly vertical silicon grating coupler. Our base design achieves a chip-to-fiber coupling efficiency of over 99% (-0.04 dB) at 1550 nm. Using this base design, we apply subsequent constrained optimizations to achieve vertical couplers with over 96% efficiency which are fabricable using a 65 nm process.

  17. Lithographically Encrypted Inverse Opals for Anti-Counterfeiting Applications.

    PubMed

    Heo, Yongjoon; Kang, Hyelim; Lee, Joon-Seok; Oh, You-Kwan; Kim, Shin-Hyun

    2016-07-01

Colloidal photonic crystals possess inimitable optical properties of iridescent structural colors and unique spectral shape, which render them useful for security materials. This work reports a novel method to encrypt graphical and spectral codes in polymeric inverse opals to provide advanced security. To accomplish this, this study prepares lithographically featured micropatterns on the top surface of hydrophobic inverse opals, which serve as shadow masks against the surface modification of air cavities to achieve hydrophilicity. The resultant inverse opals allow rapid infiltration of aqueous solution into the hydrophilic cavities while retaining air in the hydrophobic cavities. Therefore, the structural color of the inverse opals is regioselectively red-shifted, disclosing the encrypted graphical codes. The decoded inverse opals also deliver unique reflectance spectral codes originating from the two distinct regions. The combinatorial code composed of graphical and optical codes is revealed only when the aqueous solution agreed upon in advance is used for decoding. The encrypted inverse opals are chemically stable, providing invariant codes with high reproducibility. Moreover, their high mechanical stability enables the transfer of the films onto any surface. This novel encryption technology will provide new opportunities in a wide range of security applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Adjoint tomography and centroid-moment tensor inversion of the Kanto region, Japan

    NASA Astrophysics Data System (ADS)

    Miyoshi, T.

    2017-12-01

A three-dimensional seismic wave speed model of the Kanto region of Japan was developed using adjoint tomography based on large-scale computing. Starting from a model based on previous travel time tomographic results, we inverted the waveforms obtained at seismic broadband stations from 140 local earthquakes in the Kanto region to obtain the P- and S-wave speeds Vp and Vs. The synthetic displacements were calculated using the spectral element method (SEM; e.g. Komatitsch and Tromp 1999; Peter et al. 2011), in which the Kanto region was parameterized using 16 million grid points. The model parameters Vp and Vs were updated iteratively by Newton's method using the misfit and Hessian kernels until the misfit between the observed and synthetic waveforms was minimized. The proposed model reveals several anomalous areas with extremely low Vs values in comparison with those of the initial model. The synthetic waveforms obtained using the newly proposed model for the selected earthquakes fit the observed waveforms better than those of the initial model in different period ranges within 5-30 s. In the present study, all centroid times of the source solutions were determined using time shifts based on cross correlation before the structural inversion, to avoid excessive computational cost. Additionally, the parameters of the centroid-moment solutions were fully determined using the SEM assuming the 3D structure (e.g. Liu et al. 2004). As a preliminary result, the new solutions were essentially the same as their initial solutions. This may indicate that the 3D structure has little effect on the source estimation. Acknowledgements: This study was supported by JSPS KAKENHI Grant Number 16K21699.

  19. Convex blind image deconvolution with inverse filtering

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.

  20. Bayesian Approach to the Joint Inversion of Gravity and Magnetic Data, with Application to the Ismenius Area of Mars

    NASA Technical Reports Server (NTRS)

    Jewell, Jeffrey B.; Raymond, C.; Smrekar, S.; Millbury, C.

    2004-01-01

This viewgraph presentation reviews a Bayesian approach to the inversion of gravity and magnetic data, with specific application to the Ismenius area of Mars. Many inverse problems encountered in geophysics and planetary science are well known to be non-unique (e.g. inversion of gravity data for the density structure of a body). In hopes of reducing the non-uniqueness of solutions, there has been interest in the joint analysis of data. An example is the joint inversion of gravity and magnetic data, with the assumption that the same physical anomalies generate both the observed magnetic and gravitational anomalies. In this talk, we formulate the joint analysis of different types of data in a Bayesian framework and apply the formalism to the inference of the density and remanent magnetization structure for a local region in the Ismenius area of Mars. The Bayesian approach allows prior information or constraints on the solutions to be incorporated in the inversion, with the "best" solutions those whose forward predictions most closely match the data while remaining consistent with assumed constraints. The application of this framework to the inversion of gravity and magnetic data on Mars reveals two typical challenges: the forward predictions of the data have a linear dependence on some of the quantities of interest, and a non-linear dependence on others (termed the "linear" and "non-linear" variables, respectively). For observations with Gaussian noise, a Bayesian approach to inversion for "linear" variables reduces to a linear filtering problem, with an explicitly computable "error" matrix. However, for models whose forward predictions have non-linear dependencies, inference is no longer given by such a simple linear problem, and moreover, the uncertainty in the solution is no longer completely specified by a computable "error matrix". It is therefore important to develop methods for sampling from the full Bayesian posterior to provide a complete and statistically consistent picture of model uncertainty, and what has been learned from observations. We will discuss advanced numerical techniques, including Monte Carlo Markov

  1. Model Order Reduction for the fast solution of 3D Stokes problems and its application in geophysical inversion

    NASA Astrophysics Data System (ADS)

    Ortega Gelabert, Olga; Zlotnik, Sergio; Afonso, Juan Carlos; Díez, Pedro

    2017-04-01

The determination of the present-day physical state of the thermal and compositional structure of the Earth's lithosphere and sub-lithospheric mantle is one of the main goals in modern lithospheric research. This information is essential to build Earth evolution models and to reproduce many geophysical observables (e.g. elevation, gravity anomalies, travel time data, heat flow, etc.), as well as to understand the relationships between them. Determining the lithospheric state involves the solution of high-resolution inverse problems and, consequently, the solution of many direct models is required. The main objective of this work is to contribute to existing inversion techniques by improving the estimation of the elevation (topography) through the inclusion of a dynamic component arising from sub-lithospheric mantle flow. In order to do so, we implement an efficient Reduced Order Method (ROM) built upon classic Finite Elements. ROM significantly reduces the computational cost of solving a family of problems, for example all the direct models that are required in the solution of the inverse problem. The strategy of the method consists of creating a (reduced) basis of solutions, so that when a new problem has to be solved, its solution is sought within the basis instead of attempting to solve the problem itself. In order to check the Reduced Basis approach, we implemented the method in a 3D domain reproducing a portion of the Earth down to 400 km depth. Within the domain the Stokes equation is solved with realistic viscosities and densities. The different realizations (the family of problems) are created by varying viscosities and densities in a similar way as would happen in an inversion problem. The Reduced Basis method is shown to be an extremely efficient solver for the Stokes equation in this context.
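
    The sketch below illustrates the reduced-basis idea on a generic parametrized linear system A(mu) x = b rather than the 3D Stokes problem: snapshot solutions for training parameters are compressed with an SVD, and new parameter values are handled by solving the small Galerkin-projected system. The toy tridiagonal system is an assumption standing in for the finite element Stokes solver.

```python
import numpy as np

def full_system(mu, n=200):
    """Toy parametrized problem standing in for the (much larger) Stokes solve."""
    main = 2.0 + mu * np.linspace(0.5, 1.5, n)
    A = np.diag(main) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    b = np.sin(np.linspace(0, np.pi, n))
    return A, b

def build_basis(training_mus, tol=1e-8):
    """Snapshot matrix -> SVD -> orthonormal reduced basis V."""
    snapshots = []
    for mu in training_mus:
        A, b = full_system(mu)
        snapshots.append(np.linalg.solve(A, b))
    S = np.array(snapshots).T
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    k = int(np.sum(s / s[0] > tol))
    return U[:, :k]

def reduced_solve(mu, V):
    """Galerkin projection: solve the small system V^T A V x_r = V^T b."""
    A, b = full_system(mu)
    Ar, br = V.T @ A @ V, V.T @ b
    return V @ np.linalg.solve(Ar, br)

if __name__ == "__main__":
    V = build_basis(np.linspace(0.1, 2.0, 8))
    for mu in (0.33, 1.27):                       # "new" parameter values
        A, b = full_system(mu)
        x_full = np.linalg.solve(A, b)
        x_rb = reduced_solve(mu, V)
        err = np.linalg.norm(x_full - x_rb) / np.linalg.norm(x_full)
        print(f"mu = {mu}: basis size {V.shape[1]}, relative error {err:.2e}")
```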

  2. Cancellation of the central singularity of the Schwarzschild solution with natural mass inversion process

    NASA Astrophysics Data System (ADS)

    Petit, Jean-Pierre; D'Agostini, G.

    2015-03-01

We reconsider the classical Schwarzschild solution in the context of a Janus cosmological model. We show that the central singularity can be eliminated through a simple coordinate change and that the subsequent transit from one fold to the other is accompanied by mass inversion. In such a scenario, matter swallowed by black holes could be ejected as invisible negative mass and dispersed in space.

  3. High performance GPU processing for inversion using uniform grid searches

    NASA Astrophysics Data System (ADS)

    Venetis, Ioannis E.; Saltogianni, Vasso; Stiros, Stathis; Gallopoulos, Efstratios

    2017-04-01

Many geophysical problems are described by redundant, highly non-linear systems of ordinary equations whose constant terms derive from measurements and hence represent stochastic variables. Solution (inversion) of such problems is based on numerical optimization methods, on Monte Carlo sampling, or on exhaustive searches in cases of two or even three "free" unknown variables. Recently the TOPological INVersion (TOPINV) algorithm, a grid-search-based technique in R^n space, has been proposed. TOPINV is not based on the minimization of a certain cost function and involves only forward computations, hence avoiding computational errors. The basic concept is to transform observation equations into inequalities on the basis of an optimization parameter k and of their standard errors, and through repeated "scans" of n-dimensional search grids for decreasing values of k to identify the optimal clusters of gridpoints which satisfy the observation inequalities and by definition contain the "true" solution. Stochastic optimal solutions and their variance-covariance matrices are then computed as first and second statistical moments. Such exhaustive uniform searches produce an excessive computational load and are extremely time consuming for common CPU-based computers. An alternative is to use a computing platform based on a GPU, which is nowadays affordable to the research community and provides much higher computing performance. Implementing TOPINV in the CUDA programming language allows the investigation of the attained speedup in execution time on such a high-performance platform. Based on synthetic data we compared the execution time required for two typical geophysical problems, modeling magma sources and seismic faults, described with up to 18 unknown variables, on both CPU/FORTRAN and GPU/CUDA platforms. The same problems were solved on both platforms for several different sizes of search grids (up to 10^12 grid points) and numbers of unknown variables, and the execution time as a function of the grid dimension for each problem was recorded. Results indicate an average speedup in calculations by a factor of 100 on the GPU platform; for example, problems with 10^12 grid points require less than two hours instead of several days on conventional desktop computers. Such a speedup encourages the application of TOPINV on high-performance platforms such as a GPU in cases where nearly real-time decisions are necessary, for example finite fault modeling to identify possible tsunami sources.
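
    A schematic NumPy version of the grid-scan concept described above (not the CUDA implementation) is sketched below: each observation equation is turned into an inequality |g_i(m) - d_i| <= k * sigma_i, grid points satisfying all inequalities are retained for decreasing k, and the first two statistical moments of the accepted cluster give the solution and its covariance. The exponential forward model is an illustrative assumption.

```python
import numpy as np

def grid_scan(g, d, sigma, grids, k_values):
    """Exhaustive grid scan: for decreasing k, keep grid points m with
    |g_i(m) - d_i| <= k * sigma_i for all observations i, and return the
    mean and covariance of the last non-empty accepted cluster."""
    mesh = np.meshgrid(*grids, indexing="ij")
    points = np.stack([m.ravel() for m in mesh], axis=1)   # (n_points, n_params)
    residuals = np.abs(g(points) - d)                       # (n_points, n_obs)
    mean = cov = None
    for k in sorted(k_values, reverse=True):
        ok = np.all(residuals <= k * sigma, axis=1)
        if ok.sum() == 0:
            break                                           # k too small, stop
        accepted = points[ok]
        mean = accepted.mean(axis=0)
        cov = (np.cov(accepted.T) if len(accepted) > 1
               else np.zeros((points.shape[1], points.shape[1])))
    return mean, cov

if __name__ == "__main__":
    # Toy nonlinear forward model with two unknowns (a, b): d_i = a * exp(b * t_i)
    t = np.linspace(0.0, 1.0, 6)
    true = np.array([2.0, -1.5])
    d = true[0] * np.exp(true[1] * t)
    sigma = 0.02 * np.ones_like(d)
    g = lambda m: m[:, [0]] * np.exp(m[:, [1]] * t)         # vectorized forward model
    grids = (np.linspace(1.0, 3.0, 201), np.linspace(-3.0, 0.0, 201))
    mean, cov = grid_scan(g, d, sigma, grids, k_values=[4, 3, 2, 1])
    print("estimate:", mean, "\ncovariance:\n", cov)
```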

  4. Three-dimensional inversion of multisource array electromagnetic data

    NASA Astrophysics Data System (ADS)

    Tartaras, Efthimios

    Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.

  5. Space-Based Remote Sensing of Atmospheric Aerosols: The Multi-Angle Spectro-Polarimetric Frontier

    NASA Technical Reports Server (NTRS)

Kokhanovsky, A. A.; Davis, A. B.; Cairns, B.; Dubovik, O.; Hasekamp, O. P.; Sano, I.; Mukai, S.; Rozanov, V. V.; Litvinov, P.; Lapyonok, T.; et al.

    2015-01-01

A review of optical instrumentation, forward modeling, and inverse problem solution for polarimetric aerosol remote sensing from space is presented. Special emphasis is given to the description of current airborne and satellite imaging polarimeters and also to modern satellite aerosol retrieval algorithms based on measurements of the Stokes vector of reflected solar light as detected on a satellite. Various underlying surface reflectance models are discussed and evaluated.

  6. Graph-cut based discrete-valued image reconstruction.

    PubMed

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.

  7. LS-APC v1.0: a tuning-free method for the linear inverse problem and its application to source-term determination

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas

    2016-11-01

Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility of also estimating all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
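
    For context, the sketch below shows the kind of conventional, pre-specified-uncertainty solution that LS-APC replaces with full Bayesian estimation: a linear model y ≈ M x solved as a Tikhonov-regularized least-squares problem, where the regularization weight alpha is exactly the manual tuning parameter discussed above. The random SRS matrix and release profile are assumptions for illustration only.

```python
import numpy as np

def regularized_source_term(M, y, alpha=1.0):
    """Solve min_x ||y - M x||^2 + alpha * ||D x||^2 with D a first-difference
    operator (smoothness prior on the release time series); non-negativity of
    the source term is not enforced in this sketch."""
    n = M.shape[1]
    D = np.diff(np.eye(n), axis=0)                 # (n-1, n) first differences
    A = M.T @ M + alpha * D.T @ D
    return np.linalg.solve(A, M.T @ y)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_obs, n_src = 60, 20
    M = rng.random((n_obs, n_src))                 # stand-in for an SRS matrix
    x_true = np.zeros(n_src)
    x_true[8:12] = [1.0, 3.0, 2.0, 0.5]            # a short release episode
    y = M @ x_true + 0.05 * rng.standard_normal(n_obs)
    for alpha in (0.01, 1.0, 100.0):               # the "tuning parameter" issue
        x_hat = regularized_source_term(M, y, alpha)
        print(alpha, np.round(x_hat[6:14], 2))
```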

  8. 3D CSEM inversion based on goal-oriented adaptive finite element method

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Key, K.

    2016-12-01

We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt the dual-grid approach where the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent from the fine meshing generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multiple-scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the map of conductivity and sensitivity kernels between the forward and inverse mesh is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and the potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled-source EM data generated for a complex 3D offshore model with significant seafloor topography.

  9. Experimental validation of a Monte-Carlo-based inversion scheme for 3D quantitative photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Buchmann, Jens; Kaplan, Bernhard A.; Prohaska, Steffen; Laufer, Jan

    2017-03-01

    Quantitative photoacoustic tomography (qPAT) aims to extract physiological parameters, such as blood oxygen saturation (sO2), from measured multi-wavelength image data sets. The challenge of this approach lies in the inherently nonlinear fluence distribution in the tissue, which has to be accounted for by using an appropriate model, and the large scale of the inverse problem. In addition, the accuracy of experimental and scanner-specific parameters, such as the wavelength dependence of the incident fluence, the acoustic detector response, the beam profile and divergence, needs to be considered. This study aims at quantitative imaging of blood sO2, as it has been shown to be a more robust parameter compared to absolute concentrations. We propose a Monte-Carlo-based inversion scheme in conjunction with a reduction in the number of variables achieved using image segmentation. The inversion scheme is experimentally validated in tissue-mimicking phantoms consisting of polymer tubes suspended in a scattering liquid. The tubes were filled with chromophore solutions at different concentration ratios. 3-D multi-spectral image data sets were acquired using a Fabry-Perot based PA scanner. A quantitative comparison of the measured data with the output of the forward model is presented. Parameter estimates of chromophore concentration ratios were found to be within 5 % of the true values.

  10. Three-dimensional full waveform inversion of short-period teleseismic wavefields based upon the SEM-DSM hybrid method

    NASA Astrophysics Data System (ADS)

    Monteiller, Vadim; Chevrot, Sébastien; Komatitsch, Dimitri; Wang, Yi

    2015-08-01

    We present a method for high-resolution imaging of lithospheric structures based on full waveform inversion of teleseismic waveforms. We model the propagation of seismic waves using our recently developed direct solution method/spectral-element method hybrid technique, which allows us to simulate the propagation of short-period teleseismic waves through a regional 3-D model. We implement an iterative quasi-Newton method based upon the L-BFGS algorithm, where the gradient of the misfit function is computed using the adjoint-state method. Compared to gradient or conjugate-gradient methods, the L-BFGS algorithm has a much faster convergence rate. We illustrate the potential of this method on a synthetic test case that consists of a crustal model with a crustal discontinuity at 25 km depth and a sharp Moho jump. This model contains short- and long-wavelength heterogeneities along the lateral and vertical directions. The iterative inversion starts from a smooth 1-D model derived from the IASP91 reference Earth model. We invert both radial and vertical component waveforms, starting from long-period signals filtered at 10 s and gradually decreasing the cut-off period down to 1.25 s. This multiscale algorithm quickly converges towards a model that is very close to the true model, in contrast to inversions involving short-period waveforms only, which always get trapped into a local minimum of the cost function.

  11. Inverts permittivity and conductivity with structural constraint in GPR FWI based on truncated Newton method

    NASA Astrophysics Data System (ADS)

    Ren, Qianci

    2018-04-01

Full waveform inversion (FWI) of ground penetrating radar (GPR) data is a promising technique to quantitatively evaluate the permittivity and conductivity of the near subsurface. However, these two parameters are inverted simultaneously in GPR FWI, which increases the difficulty of obtaining accurate inversion results for both. In this study, I present a structurally constrained GPR FWI procedure to jointly invert the two parameters, aiming to enforce a structural relationship between permittivity and conductivity in the process of model reconstruction. The structural constraint is enforced by a cross-gradient function. In this procedure, the permittivity and conductivity models are inverted alternately at each iteration and updated with hierarchical frequency components in the frequency domain. The joint inverse problem is solved by the truncated Newton method, which accounts for the effect of the Hessian operator and uses the approximate solution of the Newton equation as the model perturbation in the updating process. The joint inversion procedure is tested on three synthetic examples. The results show that jointly inverting permittivity and conductivity in GPR FWI effectively increases the structural similarity between the two parameters, corrects the structures of the parameter models, and significantly improves the accuracy of the conductivity model, resulting in a better inversion result than individual inversion.
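
    A minimal NumPy version of a cross-gradient measure of structural similarity between two 2-D models is sketched below; it is not the paper's FWI updates or truncated Newton solver, only an illustration of the constraint: the out-of-plane component of grad(m1) x grad(m2) vanishes where the two models share structure.

```python
import numpy as np

def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    """Out-of-plane component of grad(m1) x grad(m2) for two 2-D models indexed [z, x].

    It vanishes wherever the gradients of the two models are parallel (or one of
    them is zero), i.e. where the models share structure."""
    dm1_dz, dm1_dx = np.gradient(m1, dz, dx)
    dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
    return dm1_dx * dm2_dz - dm1_dz * dm2_dx

if __name__ == "__main__":
    z, x = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50), indexing="ij")
    permittivity = 5.0 + 3.0 * (x > 0.5)                  # vertical interface
    conductivity_aligned = 0.01 + 0.02 * (x > 0.5)        # same structure
    conductivity_rotated = 0.01 + 0.02 * (z > 0.5)        # different structure
    t_aligned = cross_gradient(permittivity, conductivity_aligned)
    t_rotated = cross_gradient(permittivity, conductivity_rotated)
    print("aligned vs misaligned cross-gradient norm:",
          np.linalg.norm(t_aligned), "<", np.linalg.norm(t_rotated))
```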

  12. Inverse metal-assisted chemical etching produces smooth high aspect ratio InP nanostructures.

    PubMed

    Kim, Seung Hyun; Mohseni, Parsian K; Song, Yi; Ishihara, Tatsumi; Li, Xiuling

    2015-01-14

    Creating high aspect ratio (AR) nanostructures by top-down fabrication without surface damage remains challenging for III-V semiconductors. Here, we demonstrate uniform, array-based InP nanostructures with lateral dimensions as small as sub-20 nm and AR > 35 using inverse metal-assisted chemical etching (I-MacEtch) in hydrogen peroxide (H2O2) and sulfuric acid (H2SO4), a purely solution-based yet anisotropic etching method. The mechanism of I-MacEtch, in contrast to regular MacEtch, is explored through surface characterization. Unique to I-MacEtch, the sidewall etching profile is remarkably smooth, independent of metal pattern edge roughness. The capability of this simple method to create various InP nanostructures, including high AR fins, can potentially enable the aggressive scaling of InP based transistors and optoelectronic devices with better performance and at lower cost than conventional etching methods.

  13. Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design

    NASA Astrophysics Data System (ADS)

    Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.

    2018-05-01

The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising the posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods were assessed: an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison to the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction than the GA solution, at only a quarter of the computational cost of the smallest GA configuration considered. The GA solution set showed more inconsistency if the number of iterations or the population size was small, and more so for a complex prior flux covariance matrix. When the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA might outperform the IO. The first scenario considered an established network, where the optimisation was required to add five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest that resources for the network design problem would be better spent on improving the prior estimates of the flux uncertainties than on running a complex evolutionary optimisation algorithm. The authors recommend that, if time and computational resources allow, multiple optimisation techniques should be used as part of a comprehensive suite of sensitivity tests when performing such an optimisation exercise. This will provide a selection of best solutions which can be ranked based on their utility and practicality.
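
    The sketch below illustrates the incremental (greedy) selection idea in a linear-Gaussian setting, not the atmospheric transport modelling used in the study: given a prior flux covariance and candidate observation operators, the station whose Bayesian update most reduces the total posterior variance is added at each step. The random prior covariance and station footprints are assumptions for illustration.

```python
import numpy as np

def posterior_cov(P, H, r):
    """Bayesian covariance update for observations y = H f + noise of variance r."""
    n_obs = H.shape[0]
    S = H @ P @ H.T + r * np.eye(n_obs)
    K = P @ H.T @ np.linalg.solve(S, np.eye(n_obs))
    return P - K @ H @ P

def incremental_design(P0, candidate_H, n_select, r=0.1):
    """Greedily pick stations that maximize the reduction of total posterior variance."""
    chosen, P = [], P0.copy()
    for _ in range(n_select):
        best, best_trace = None, np.inf
        for i, H in enumerate(candidate_H):
            if i in chosen:
                continue
            tr = np.trace(posterior_cov(P, H, r))
            if tr < best_trace:
                best, best_trace = i, tr
        chosen.append(best)
        P = posterior_cov(P, candidate_H[best], r)
    return chosen, P

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_flux, n_candidates = 30, 12
    P0 = np.diag(rng.uniform(0.5, 2.0, n_flux))            # prior flux uncertainty
    # Each candidate station observes a random footprint of the flux vector
    candidate_H = [rng.random((1, n_flux)) for _ in range(n_candidates)]
    stations, P_post = incremental_design(P0, candidate_H, n_select=5)
    print("chosen stations:", stations)
    print("uncertainty reduction: %.1f%%" %
          (100 * (1 - np.trace(P_post) / np.trace(P0))))
```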

  14. A gradient-based model parametrization using Bernstein polynomials in Bayesian inversion of surface wave dispersion

    NASA Astrophysics Data System (ADS)

    Gosselin, Jeremy M.; Dosso, Stan E.; Cassidy, John F.; Quijano, Jorge E.; Molnar, Sheri; Dettmer, Jan

    2017-10-01

    This paper develops and applies a Bernstein-polynomial parametrization to efficiently represent general, gradient-based profiles in nonlinear geophysical inversion, with application to ambient-noise Rayleigh-wave dispersion data. Bernstein polynomials provide a stable parametrization in that small perturbations to the model parameters (basis-function coefficients) result in only small perturbations to the geophysical parameter profile. A fully nonlinear Bayesian inversion methodology is applied to estimate shear wave velocity (VS) profiles and uncertainties from surface wave dispersion data extracted from ambient seismic noise. The Bayesian information criterion is used to determine the appropriate polynomial order consistent with the resolving power of the data. Data error correlations are accounted for in the inversion using a parametric autoregressive model. The inversion solution is defined in terms of marginal posterior probability profiles for VS as a function of depth, estimated using Metropolis-Hastings sampling with parallel tempering. This methodology is applied to synthetic dispersion data as well as data processed from passive array recordings collected on the Fraser River Delta in British Columbia, Canada. Results from this work are in good agreement with previous studies, as well as with co-located invasive measurements. The approach considered here is better suited than `layered' modelling approaches in applications where smooth gradients in geophysical parameters are expected, such as soil/sediment profiles. Further, the Bernstein polynomial representation is more general than smooth models based on a fixed choice of gradient type (e.g. power-law gradient) because the form of the gradient is determined objectively by the data, rather than by a subjective parametrization choice.
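
    A minimal sketch of the parametrization itself follows (the Bayesian sampling machinery is not shown): a Vs profile is represented as a linear combination of Bernstein basis polynomials of the normalized depth, so small perturbations of the coefficients produce small, smooth perturbations of the profile. The example coefficients are illustrative assumptions.

```python
import numpy as np
from math import comb

def bernstein_basis(order, z):
    """Bernstein basis B_{k,n}(z) for z in [0, 1]; returns shape (order+1, len(z))."""
    z = np.asarray(z, dtype=float)
    return np.array([comb(order, k) * z ** k * (1 - z) ** (order - k)
                     for k in range(order + 1)])

def profile(coeffs, z):
    """Geophysical parameter profile (e.g. Vs) as a Bernstein expansion."""
    return np.asarray(coeffs) @ bernstein_basis(len(coeffs) - 1, z)

if __name__ == "__main__":
    depth = np.linspace(0.0, 1.0, 11)          # normalized depth
    vs_coeffs = [200.0, 350.0, 400.0, 600.0]   # order-3 expansion (m/s), assumed values
    vs = profile(vs_coeffs, depth)
    # A small coefficient perturbation yields a small, smooth profile change
    vs_pert = profile([200.0, 350.0, 430.0, 600.0], depth)
    print(np.round(vs, 1))
    print(np.round(vs_pert - vs, 1))
```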

  15. Effects of two-temperature parameter and thermal nonlocal parameter on transient responses of a half-space subjected to ramp-type heating

    NASA Astrophysics Data System (ADS)

    Xue, Zhang-Na; Yu, Ya-Jun; Tian, Xiao-Geng

    2017-07-01

    Based upon coupled thermoelasticity and the Green and Lindsay theory, new governing equations of a two-temperature thermoelastic theory with a thermal nonlocal parameter are formulated. To more realistically model thermal loading of a half-space surface, a linear temperature ramping function is adopted. Laplace transform techniques are used to obtain the general analytical solutions in the Laplace domain, and inverse Laplace transforms based on Fourier expansion techniques are numerically implemented to obtain the numerical solutions in the time domain. Specific attention is paid to the effects of the thermal nonlocal parameter, ramping time, and two-temperature parameter on the distributions of temperature, displacement and stress.

  16. Estimating uncertainties in complex joint inverse problems

    NASA Astrophysics Data System (ADS)

    Afonso, Juan Carlos

    2016-04-01

    Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should be always conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, ALL UNCERTAINTY ESTIMATES ARE MODEL DEPENDENT, and therefore, besides a thorough characterization of experimental uncertainties, particular care must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes by G. Box and M. Gunzburger, respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related to the forward and statistical models, I will also address other uncertainties associated with data and uncertainty propagation.

  17. Calculation of earthquake rupture histories using a hybrid global search algorithm: Application to the 1992 Landers, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Liu, P.

    1996-01-01

    A method is presented for the simultaneous calculation of slip amplitudes and rupture times for a finite fault using a hybrid global search algorithm. The method we use combines simulated annealing with the downhill simplex method to produce a more efficient search algorithm than either of the two constituent parts. This formulation has advantages over traditional iterative or linearized approaches to the problem because it is able to escape local minima in its search through model space for the global optimum. We apply this global search method to the calculation of the rupture history for the Landers, California, earthquake. The rupture is modeled using three separate finite-fault planes to represent the three main fault segments that failed during this earthquake. Both the slip amplitude and the time of slip are calculated for a grid work of subfaults. The data used consist of digital, teleseismic P and SH body waves. Long-period, broadband, and short-period records are utilized to obtain a wideband characterization of the source. The results of the global search inversion are compared with a more traditional linear-least-squares inversion for only slip amplitudes. We use a multi-time-window linear analysis to relax the constraints on rupture time and rise time in the least-squares inversion. Both inversions produce similar slip distributions, although the linear-least-squares solution has a 10% larger moment (7.3 × 10^26 dyne-cm compared with 6.6 × 10^26 dyne-cm). Both inversions fit the data equally well and point out the importance of (1) using a parameterization with sufficient spatial and temporal flexibility to encompass likely complexities in the rupture process, (2) including suitable physically based constraints on the inversion to reduce instabilities in the solution, and (3) focusing on those robust rupture characteristics that rise above the details of the parameterization and data set.
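
    The hybrid of simulated annealing with the downhill simplex method can be sketched, very loosely and not as the authors' implementation, as an annealing loop whose candidate moves are short Nelder-Mead polishes of randomly perturbed models:

      # Loose sketch of a hybrid global search: a simulated-annealing outer loop with
      # short downhill-simplex (Nelder-Mead) polishing of each perturbed candidate.
      # Illustrative only; not the published implementation.
      import numpy as np
      from scipy.optimize import minimize

      def hybrid_search(misfit, x0, bounds, n_temps=50, t0=1.0, cooling=0.9, rng=None):
          rng = np.random.default_rng(rng)
          lo, hi = np.array(bounds).T
          x_best = x_cur = np.clip(np.asarray(x0, float), lo, hi)
          f_best = f_cur = misfit(x_cur)
          temp = t0
          for _ in range(n_temps):
              # Random perturbation scaled by temperature, then a short simplex polish
              trial = np.clip(x_cur + temp * (hi - lo) * rng.normal(size=x_cur.size), lo, hi)
              res = minimize(misfit, trial, method="Nelder-Mead",
                             options={"maxiter": 50, "xatol": 1e-3, "fatol": 1e-3})
              x_new = np.clip(res.x, lo, hi)
              f_new = misfit(x_new)
              # Metropolis acceptance: always take improvements, sometimes accept worse
              if f_new < f_cur or rng.random() < np.exp(-(f_new - f_cur) / temp):
                  x_cur, f_cur = x_new, f_new
                  if f_cur < f_best:
                      x_best, f_best = x_cur, f_cur
              temp *= cooling
          return x_best, f_best

      # Toy usage: minimum of a simple quadratic misfit at (1, -2)
      x, f = hybrid_search(lambda m: (m[0] - 1)**2 + (m[1] + 2)**2, [0.0, 0.0], [(-5, 5), (-5, 5)])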

  18. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg

    We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection–diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov–Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems – i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian – we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. Here, we show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.

  19. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    DOE PAGES

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg; ...

    2016-07-13

    We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection–diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov–Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems – i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian – we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. Here, we show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.

  20. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    NASA Astrophysics Data System (ADS)

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg; Isaac, Tobin; Hughes, Thomas J. R.; Ghattas, Omar

    2016-07-01

    We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection-diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov-Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems - i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian - we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. We show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.
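
    The gradient-consistency issues discussed above (discretize-then-differentiate versus differentiate-then-discretize, and one-way versus two-way adjoint coupling) are commonly diagnosed with a directional finite-difference check of the adjoint gradient. A generic sketch of such a check, not the paper's code, is:

      # Generic directional finite-difference check of an adjoint-computed gradient
      # (a common diagnostic for the gradient-consistency issues discussed above;
      # not the paper's implementation).
      import numpy as np

      def gradient_check(cost, grad, m0, direction=None, eps_list=(1e-2, 1e-4, 1e-6), rng=0):
          rng = np.random.default_rng(rng)
          d = direction if direction is not None else rng.normal(size=m0.shape)
          d = d / np.linalg.norm(d)
          g_dot_d = grad(m0) @ d                      # adjoint-based directional derivative
          for eps in eps_list:
              fd = (cost(m0 + eps * d) - cost(m0 - eps * d)) / (2 * eps)  # central difference
              rel_err = abs(fd - g_dot_d) / max(abs(fd), 1e-30)
              print(f"eps={eps:.0e}  fd={fd:.6e}  adjoint={g_dot_d:.6e}  rel_err={rel_err:.2e}")

      # Example with a quadratic cost, where the supplied gradient is exact:
      A = np.diag([1.0, 2.0, 3.0])
      cost = lambda m: 0.5 * m @ A @ m
      grad = lambda m: A @ m
      gradient_check(cost, grad, np.array([1.0, -1.0, 0.5]))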

  1. Lax Integrability and the Peakon Problem for the Modified Camassa-Holm Equation

    NASA Astrophysics Data System (ADS)

    Chang, Xiangke; Szmigielski, Jacek

    2018-02-01

    Peakons are special weak solutions of a class of nonlinear partial differential equations modelling non-linear phenomena such as the breakdown of regularity and the onset of shocks. We show that the natural concept of weak solutions in the case of the modified Camassa-Holm equation studied in this paper is dictated by the distributional compatibility of its Lax pair and, as a result, it differs from the one proposed and used in the literature based on the concept of weak solutions used for equations of the Burgers type. Subsequently, we give a complete construction of peakon solutions satisfying the modified Camassa-Holm equation in the sense of distributions; our approach is based on solving a certain inverse boundary value problem, the solution of which hinges on a combination of classical techniques of analysis involving Stieltjes' continued fractions and multi-point Padé approximations. We propose sufficient conditions needed to ensure the global existence of peakon solutions and analyze the large time asymptotic behaviour, whose special features include the formation of pairs of peakons that share asymptotic speeds, as well as a Toda-like sorting property.

  2. Iterative solution of the inverse Cauchy problem for an elliptic equation by the conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Vasil'ev, V. I.; Kardashevsky, A. M.; Popov, V. V.; Prokopev, G. A.

    2017-10-01

    This article presents results of a computational experiment carried out using a finite-difference method for solving the inverse Cauchy problem for a two-dimensional elliptic equation. The computational algorithm involves an iterative determination of the missing boundary condition from the overdetermination condition using the conjugate gradient method. Results of calculations for examples with exact solutions, as well as for cases in which the additional condition is specified with random errors, are presented. The results show the high efficiency of the conjugate gradient iteration for the numerical solution of this inverse problem.
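
    For reference, the linear conjugate gradient iteration that underlies such schemes can be sketched as follows for a symmetric positive definite system; this is the textbook form, not the authors' finite-difference implementation.

      # Textbook linear conjugate gradient iteration for A x = b with A symmetric
      # positive definite (the building block of CG-based iterative schemes);
      # not the authors' finite-difference implementation.
      import numpy as np

      def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=1000):
          x = np.zeros_like(b) if x0 is None else x0.astype(float)
          r = b - A @ x          # residual
          p = r.copy()           # search direction
          rs_old = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs_old / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs_old) * p
              rs_old = rs_new
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])
      b = np.array([1.0, 2.0])
      print(conjugate_gradient(A, b))   # approx [0.0909, 0.6364]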

  3. Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models

    PubMed Central

    Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A.; Valdés-Hernández, Pedro A.; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A.

    2017-01-01

    The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of L1-norm-based and L2-norm-based constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but it relies on computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical, while the ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios, than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines, are freely available on a public website. PMID:29200994

  4. Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models.

    PubMed

    Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A; Valdés-Hernández, Pedro A; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A

    2017-01-01

    The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of L1-norm-based and L2-norm-based constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but it relies on computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical, while the ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios, than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines, are freely available on a public website.
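
    For readers unfamiliar with the penalties involved, the non-Bayesian elastic net solves min_b ||y - Xb||^2 + lambda1*||b||_1 + lambda2*||b||_2^2. A minimal sketch using scikit-learn (assumed available, and unrelated to the authors' released MATLAB code) is:

      # Minimal illustration of the (non-Bayesian) Elastic Net penalty,
      #   min_b ||y - X b||^2 + lambda1 * ||b||_1 + lambda2 * ||b||_2^2,
      # using scikit-learn; this is not the authors' released MATLAB code.
      import numpy as np
      from sklearn.linear_model import ElasticNet

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 50))
      b_true = np.zeros(50)
      b_true[:5] = [2.0, -1.5, 1.0, 0.5, -0.5]        # sparse ground truth
      y = X @ b_true + 0.1 * rng.normal(size=100)

      # alpha scales the total penalty; l1_ratio splits it between L1 and L2 parts
      model = ElasticNet(alpha=0.05, l1_ratio=0.7).fit(X, y)
      print("non-zero coefficients:", np.count_nonzero(model.coef_))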

  5. New solitary wave and multiple soliton solutions for fifth order nonlinear evolution equation with time variable coefficients

    NASA Astrophysics Data System (ADS)

    Jaradat, H. M.; Syam, Muhammed; Jaradat, M. M. M.; Mustafa, Zead; Moman, S.

    2018-03-01

    In this paper, we investigate the multiple soliton solutions and multiple singular soliton solutions of a class of fifth-order nonlinear evolution equations with time-dependent coefficients using the simplified bilinear method, based on a transformation combined with Hirota's bilinear sense. In addition, we present an analysis of parameters such as the soliton amplitude and the characteristic line. Several equations in the literature, such as the Caudrey-Dodd-Gibbon and Sawada-Kotera equations, are special cases of the class we discuss. Comparisons with several methods in the literature, such as the Helmholtz solution of the inverse variational problem, the rational exponential function method, the tanh method, the homotopy perturbation method, the exp-function method, and the coth method, are made. From these comparisons, we conclude that the proposed method is efficient and our solutions are correct. It is worth mentioning that the proposed approach can be applied to many physical problems.

  6. A variational regularization of Abel transform for GPS radio occultation

    NASA Astrophysics Data System (ADS)

    Wee, Tae-Kwon

    2018-04-01

    In the Global Positioning System (GPS) radio occultation (RO) technique, the inverse Abel transform of measured bending angle (Abel inversion, hereafter AI) is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The measurement error propagation is detrimental to the refractivity at lower altitudes. In particular, it builds up a negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not incur the integration of error-possessing measurement and thus precludes the error propagation. The variational regularization (VR) proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The performance of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors compared to AI. A noteworthy finding is that at heights and in areas where the measurement bias is presumably small, VR follows AI very closely in the mean refractivity, departing from the first guess. In the lowest few kilometers, where AI produces a large negative refractivity bias, VR reduces the refractivity bias substantially with the aid of the background, which in this study is provided by the operational forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF). It is concluded based on the results presented in this study that VR offers a definite advantage over AI in the quality of refractivity.
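
    For context, and using standard RO notation rather than symbols taken from the paper, the forward Abel transform relates the bending angle to the refractive index, and AI inverts it in closed form:

      \alpha(a) = -2a \int_{a}^{\infty} \frac{\mathrm{d}\ln n/\mathrm{d}x}{\sqrt{x^{2}-a^{2}}}\,\mathrm{d}x,
      \qquad
      n(a_{1}) = \exp\!\left(\frac{1}{\pi}\int_{a_{1}}^{\infty}\frac{\alpha(a)}{\sqrt{a^{2}-a_{1}^{2}}}\,\mathrm{d}a\right),

    where a is the impact parameter and x = nr is the refractional radius; VR instead fits the forward integral to the measured bending angle rather than evaluating the second expression directly.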

  7. Seismic data restoration with a fast L1 norm trust region method

    NASA Astrophysics Data System (ADS)

    Cao, Jingjie; Wang, Yanfei

    2014-08-01

    Seismic data restoration is a major strategy to provide a reliable wavefield when field data do not satisfy the Shannon sampling theorem. Recovery by sparsity-promoting inversion seeks sparse solutions of seismic data in a transformed domain; however, most methods for sparsity-promoting inversion are line-search methods, which are efficient but inclined to converge to local solutions. Using a trust region method, which can provide globally convergent solutions, is a good choice to overcome this shortcoming. A trust region method for sparse inversion has been proposed previously; however, its efficiency must be improved to be suitable for large-scale computation. In this paper, a new L1 norm trust region model is proposed for seismic data restoration, and a robust gradient projection method for solving the sub-problem is utilized. Numerical results on synthetic and field data demonstrate that the proposed trust region method achieves excellent computational speed and is a viable alternative for large-scale computation.

  8. The in situ synthesis of PbS nanocrystals from lead(II) n-octylxanthate within a 1,3-diisopropenylbenzene–bisphenol A dimethacrylate sulfur copolymer

    PubMed Central

    Bear, J. C.; Mayes, A. G.; Parkin, I. P.; O'Brien, P.

    2017-01-01

    The synthesis of lead sulfide nanocrystals within a solution processable sulfur ‘inverse vulcanization’ polymer thin film matrix was achieved from the in situ thermal decomposition of lead(II) n-octylxanthate, [Pb(S2COOct)2]. The growth of nanocrystals within polymer thin films from single-source precursors offers a faster route to networks of nanocrystals within polymers when compared with ex situ routes. The ‘inverse vulcanization’ sulfur polymer described herein contains a hybrid linker system which demonstrates high solubility in organic solvents, allowing solution processing of the sulfur-based polymer, ideal for the formation of thin films. The process of nanocrystal synthesis within sulfur films was optimized by observing nanocrystal formation by X-ray photoelectron spectroscopy and X-ray diffraction. Examination of the film morphology by scanning electron microscopy showed that beyond a certain precursor concentration the nanocrystals formed were not only within the film but also on the surface suggesting a loading limit within the polymer. We envisage this material could be used as the basis of a new generation of materials where solution processed sulfur polymers act as an alternative to traditional polymers. PMID:28878986

  9. Inverse effects of Polyacrylamide (PAM) usage in furrow irrigation on advance time and deep percolation.

    PubMed

    Meral, Ramazan; Cemek, Bilal; Apan, Mehmet; Merdun, Hasan

    2006-10-01

    The positive effects of Polyacrylamide (PAM), which is used as a soil conditioner in furrow irrigation, on sediment transport, erosion, and infiltration have been investigated intensively in recent years. However, the effects of PAM have not been considered enough in irrigation system planning and design. As a result of increased infiltration because of PAM, advance time may be inversely affected and deep percolation increases. However, advance time in furrow irrigation is a crucial parameter for achieving high application efficiency. In this study, the inverse effects of PAM were discussed, and, as an alternative solution, the applicability of surge flow was investigated. PAM application significantly increased the advance time, at rates of 41.3-56.3% in the first irrigation. The application of surge flow with PAM removed this negative effect on advance time, with no statistically significant difference relative to normal continuous flow (without PAM). PAM applications significantly increased deep percolation, by 80.3-117.1%. Surge flow with PAM had a significantly positive effect on deep percolation compared to continuous flow with PAM, but not compared to normal continuous flow. These results suggest that irrigation planning should be made based on the new soil and flow conditions resulting from PAM usage, and that surge flow can be a solution to these problems.

  10. Radiative-conductive inverse problem for lumped parameter systems

    NASA Astrophysics Data System (ADS)

    Alifanov, O. M.; Nenarokomov, A. V.; Gonzalez, V. M.

    2008-11-01

    The purpose of this paper is to introduce an iterative regularization method for the investigation of the radiative and thermal properties of materials, with applications in the design of Thermal Control Systems (TCS) of spacecraft. In this paper the radiative and thermal properties (emissivity and thermal conductance) of a multilayered thermal-insulating blanket (MLI), which is a screen-vacuum thermal insulation used as a part of the TCS for prospective spacecraft, are estimated. Properties of the materials under study are determined by processing temperature and heat flux measurement data based on the solution of an Inverse Heat Transfer Problem (IHTP). Physical and mathematical models of heat transfer processes in a specimen of the multilayered thermal-insulating blanket located in the experimental facility are given. A mathematical formulation of the inverse heat conduction problem is also presented. Practical testing was performed on a specimen of a real MLI.

  11. Study of multilayer thermal insulation by inverse problems method

    NASA Astrophysics Data System (ADS)

    Alifanov, O. M.; Nenarokomov, A. V.; Gonzalez, V. M.

    2009-11-01

    The purpose of this paper is to introduce a new method for the investigation of the radiative and thermal properties of materials, with further applications in the design of thermal control systems (TCS) of spacecraft. In this paper the radiative and thermal properties (emissivity and thermal conductance) of a multilayered thermal-insulating blanket (MLI), which is a screen-vacuum thermal insulation used as a part of the TCS for prospective spacecraft, are estimated. Properties of the materials under study are determined by processing temperature and heat flux measurement data based on the solution of an inverse heat transfer problem (IHTP). Physical and mathematical models of heat transfer processes in a specimen of the multilayered thermal-insulating blanket located in the experimental facility are given. A mathematical formulation of the inverse heat conduction problem is presented as well. Practical validation was performed on a specimen of a real MLI.

  12. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm= d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
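
    MRE derives a full multivariate probability density for m; as a much simpler point of comparison for the same linear setup Gm = d with lower and upper bounds on m, a bound-constrained least-squares solve can be sketched with SciPy (this is not the MRE method):

      # A simple bound-constrained least-squares solve of G m = d, shown only as a
      # point of comparison for the same linear setup; this is NOT the MRE method,
      # which instead derives a full multivariate pdf for m.
      import numpy as np
      from scipy.optimize import lsq_linear

      rng = np.random.default_rng(1)
      G = rng.normal(size=(30, 10))
      m_true = np.abs(rng.normal(size=10))            # non-negative "source history"
      d = G @ m_true + 0.01 * rng.normal(size=30)

      res = lsq_linear(G, d, bounds=(0.0, 5.0))       # lower/upper bounds on m
      print("recovered m:", np.round(res.x, 3))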

  13. On the inverse problem of blade design for centrifugal pumps and fans

    NASA Astrophysics Data System (ADS)

    Kruyt, N. P.; Westra, R. W.

    2014-06-01

    The inverse problem of blade design for centrifugal pumps and fans has been studied. The solution to this problem provides the geometry of rotor blades that realize specified performance characteristics, together with the corresponding flow field. Here a three-dimensional solution method is described in which the so-called meridional geometry is fixed and the distribution of the azimuthal angle at the three-dimensional blade surface is determined for blades of infinitesimal thickness. The developed formulation is based on potential-flow theory. Besides the blade impermeability condition at the pressure and suction side of the blades, an additional boundary condition at the blade surface is required in order to fix the unknown blade geometry. For this purpose the mean-swirl distribution is employed. The iterative numerical method is based on a three-dimensional finite element method approach in which the flow equations are solved on the domain determined by the latest estimate of the blade geometry, with the mean-swirl distribution boundary condition at the blade surface being enforced. The blade impermeability boundary condition is then used to find an improved estimate of the blade geometry. The robustness of the method is increased by specific techniques, such as spanwise-coupled solution of the discretized impermeability condition and the use of under-relaxation in adjusting the estimates of the blade geometry. Various examples are shown that demonstrate the effectiveness and robustness of the method in finding a solution for the blade geometry of different types of centrifugal pumps and fans. The influence of the employed mean-swirl distribution on the performance characteristics is also investigated.

  14. The determination of solubility and diffusion coefficient for solids in liquids by an inverse measurement technique using cylinders of amorphous glucose as a model compound

    NASA Astrophysics Data System (ADS)

    Hu, Chengyao; Huang, Pei

    2011-05-01

    The importance of sugar and sugar-containing materials is well recognized nowadays, owing to their application in industrial processes, particularly in the food, pharmaceutical and cosmetic industries. Because of the large number of such compounds involved and the relatively small number of solubility and/or diffusion coefficient data available for each compound, it is highly desirable to measure the solubility and/or diffusion coefficient as efficiently as possible and to improve the accuracy of the methods used. In this work, a new technique was developed for the measurement of the diffusion coefficient of a stationary solid solute in a stagnant solvent, which simultaneously measures solubility, based on an inverse measurement problem algorithm using the real-time dissolved amount profile as a function of time. This study differs from established techniques in both the experimental method and the data analysis. An experimental method was developed in which the dissolved amount of solid solute in quiescent solvent was investigated using a continuous weighing technique. In the data analysis, a hybrid genetic algorithm is used to minimize an objective function containing the calculated and measured dissolved amounts as functions of time. This is measured on a cylindrical sample of amorphous glucose in methanol or ethanol. The calculated dissolved amount, which is a function of the unknown physical properties of the solid solute in the solvent, is obtained from the solution of the two-dimensional nonlinear inverse natural convection problem. The estimated values of the solubility of amorphous glucose in methanol and ethanol at 293 K were 32.1 g/100 g methanol and 1.48 g/100 g ethanol, respectively, in agreement with literature values, supporting the validity of the simultaneously measured diffusion coefficient. These results show the efficiency and stability of the developed technique for simultaneously estimating the solubility and diffusion coefficient. The influence of the solution density change and the initial concentration conditions on the dissolved amount was also investigated numerically using the estimated parameters. It is found that the theoretical assumption made to simplify the inverse measurement problem algorithm is reasonable for low solubility.

  15. Inverse scattering transform for the nonlocal nonlinear Schrödinger equation with nonzero boundary conditions

    NASA Astrophysics Data System (ADS)

    Ablowitz, Mark J.; Luo, Xu-Dan; Musslimani, Ziad H.

    2018-01-01

    In 2013, a new nonlocal symmetry reduction of the well-known AKNS scattering problem (an integrable system of partial differential equations introduced by and named after Ablowitz, Kaup, Newell and Segur (1974)) was found. It was shown to give rise to a new nonlocal PT symmetric and integrable Hamiltonian nonlinear Schrödinger (NLS) equation. Subsequently, the inverse scattering transform was constructed for the case of rapidly decaying initial data and a family of spatially localized, time periodic one-soliton solutions was found. In this paper, the inverse scattering transform for the nonlocal NLS equation with nonzero boundary conditions at infinity is presented in four different cases when the data at infinity have constant amplitudes. The direct and inverse scattering problems are analyzed. Specifically, the direct problem is formulated, and the analytic properties of the eigenfunctions and scattering data and their symmetries are obtained. The inverse scattering problem, which arises from a novel nonlocal system, is developed via a left-right Riemann-Hilbert problem in terms of a suitable uniformization variable, and the time dependence of the scattering data is obtained. This leads to a method to linearize/solve the Cauchy problem. Pure soliton solutions are discussed, and an explicit 1-soliton solution and two 2-soliton solutions are provided for three of the four different cases corresponding to two different signs of nonlinearity and two different values of the phase difference between plus and minus infinity. In the remaining case, there are no solitons.
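
    For orientation, the nonlocal NLS equation referred to here is usually written (sign conventions vary between papers) as

      \mathrm{i}\,q_{t}(x,t) = q_{xx}(x,t) + 2\sigma\, q^{2}(x,t)\, q^{*}(-x,t), \qquad \sigma = \pm 1,

    where the nonlocality enters through the evaluation of the complex-conjugate field at -x rather than at x.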

  16. Wavelet-sparsity based regularization over time in the inverse problem of electrocardiography.

    PubMed

    Cluitmans, Matthijs J M; Karel, Joël M H; Bonizzi, Pietro; Volders, Paul G A; Westra, Ronald L; Peeters, Ralf L M

    2013-01-01

    Noninvasive, detailed assessment of electrical cardiac activity at the level of the heart surface has the potential to revolutionize diagnostics and therapy of cardiac pathologies. Due to the requirement of noninvasiveness, body-surface potentials are measured and have to be projected back to the heart surface, yielding an ill-posed inverse problem. Ill-posedness ensures that there are non-unique solutions to this problem, resulting in a problem of choice. In the current paper, it is proposed to restrict this choice by requiring that the time series of reconstructed heart-surface potentials is sparse in the wavelet domain. A local search technique is introduced that pursues a sparse solution, using an orthogonal wavelet transform. Epicardial potentials reconstructed from this method are compared to those from existing methods, and validated with actual intracardiac recordings. The new technique improves the reconstructions in terms of smoothness and recovers physiologically meaningful details. Additionally, reconstruction of activation timing seems to be improved when pursuing sparsity of the reconstructed signals in the wavelet domain.
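
    The wavelet-domain sparsity constraint can be illustrated independently of the paper's local search by soft-thresholding the coefficients of an orthogonal wavelet transform, for example with the PyWavelets package (assumed available; the wavelet family and threshold are arbitrary choices here):

      # Illustration of wavelet-domain sparsity (independent of the paper's local
      # search): transform a signal with an orthogonal wavelet, soft-threshold the
      # coefficients, and reconstruct. Assumes the PyWavelets package.
      import numpy as np
      import pywt

      def sparsify_in_wavelet_domain(signal, wavelet="db4", level=4, threshold=0.1):
          coeffs = pywt.wavedec(signal, wavelet, level=level)            # forward DWT
          coeffs = [pywt.threshold(c, threshold, mode="soft") for c in coeffs]
          return pywt.waverec(coeffs, wavelet)[: len(signal)]            # inverse DWT

      t = np.linspace(0, 1, 512)
      potentials = np.sin(2 * np.pi * 5 * t) + 0.2 * np.random.default_rng(0).normal(size=t.size)
      smooth = sparsify_in_wavelet_domain(potentials)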

  17. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.

  18. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonetto, Andrea; Dall'Anese, Emiliano

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
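
    A toy prediction-correction loop for an unconstrained time-varying quadratic is sketched below to make the idea concrete; the paper's contribution is precisely to handle constraints and to avoid the Hessian inverse, neither of which this illustration attempts.

      # Toy prediction-correction tracker for an unconstrained time-varying problem
      # f(x; t) = 0.5 * ||x - c(t)||^2 (so the optimal trajectory is x*(t) = c(t)).
      # The prediction uses a finite-difference estimate of the time derivative of the
      # gradient; constraints and Hessian-inverse-free prediction are not attempted here.
      import numpy as np

      def c(t):                      # moving target (illustrative)
          return np.array([np.cos(t), np.sin(t)])

      def grad(x, t):                # gradient of f(x; t) in x
          return x - c(t)

      h, gamma, n_corr = 0.1, 0.5, 3          # sampling period, step size, correction steps
      x = np.zeros(2)
      for k in range(100):
          t_k, t_next = k * h, (k + 1) * h
          # Prediction: for this quadratic the Hessian is the identity, so the step is
          # minus the (finite-difference) time derivative of the gradient times h.
          dt_grad = (grad(x, t_next) - grad(x, t_k)) / h
          x = x - h * dt_grad
          # Correction: a few gradient steps on the new cost f(.; t_next)
          for _ in range(n_corr):
              x = x - gamma * grad(x, t_next)
      print("tracking error:", np.linalg.norm(x - c(100 * h)))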

  19. Three-dimensional magnetotelluric inversion in practice—the electrical conductivity structure of the San Andreas Fault in Central California

    NASA Astrophysics Data System (ADS)

    Tietze, Kristina; Ritter, Oliver

    2013-10-01

    3-D inversion techniques have become a widely used tool in magnetotelluric (MT) data interpretation. However, with real data sets, many of the controlling factors for the outcome of 3-D inversion are little explored, such as alignment of the coordinate system, handling and influence of data errors and model regularization. Here we present 3-D inversion results of 169 MT sites from the central San Andreas Fault in California. Previous extensive 2-D inversion and 3-D forward modelling of the data set revealed significant along-strike variation of the electrical conductivity structure. 3-D inversion can recover these features but only if the inversion parameters are tuned in accordance with the particularities of the data set. Based on synthetic 3-D data we explore the model space and test the impacts of a wide range of inversion settings. The tests showed that the recovery of a pronounced regional 2-D structure in inversion of the complete impedance tensor depends on the coordinate system. As interdependencies between data components are not considered in standard 3-D MT inversion codes, 2-D subsurface structures can vanish if data are not aligned with the regional strike direction. A priori models and data weighting, that is, how strongly individual components of the impedance tensor and/or vertical magnetic field transfer functions dominate the solution, are crucial controls for the outcome of 3-D inversion. If deviations from a prior model are heavily penalized, regularization is prone to result in erroneous and misleading 3-D inversion models, particularly in the presence of strong conductivity contrasts. A `good' overall rms misfit is often meaningless or misleading as a huge range of 3-D inversion results exist, all with similarly `acceptable' misfits but producing significantly differing images of the conductivity structures. Reliable and meaningful 3-D inversion models can only be recovered if data misfit is assessed systematically in the frequency-space domain.

  20. Nonlinear Rayleigh wave inversion based on the shuffled frog-leaping algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Cheng-Yu; Wang, Yan-Yan; Wu, Dun-Shi; Qin, Xiao-Jun

    2017-12-01

    At present, near-surface shear wave velocities are mainly calculated through Rayleigh wave dispersion-curve inversions in engineering surface investigations, but the required calculations pose a highly nonlinear global optimization problem. In order to alleviate the risk of falling into a local optimal solution, this paper introduces a new global optimization method, the shuffled frog-leaping algorithm (SFLA), into the Rayleigh wave dispersion-curve inversion process. SFLA is a swarm-intelligence-based algorithm that simulates a group of frogs searching for food. It uses a few parameters, achieves rapid convergence, and is capable of effective global searching. In order to test the reliability and calculation performance of SFLA, noise-free and noisy synthetic datasets were inverted. We conducted a comparative analysis with other established algorithms using the noise-free dataset, and then tested the ability of SFLA to cope with data noise. Finally, we inverted a real-world example to examine the applicability of SFLA. Results from both synthetic and field data demonstrated the effectiveness of SFLA in the interpretation of Rayleigh wave dispersion curves. We found that SFLA is superior to the established methods in terms of both reliability and computational efficiency, so it offers great potential to improve our ability to solve geophysical inversion problems.
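
    One memetic step of the frog-leaping update, following the commonly published SFLA rules rather than the authors' exact settings, can be sketched as follows: the worst frog in a memeplex jumps towards the local best, then towards the global best, and is replaced by a random frog if neither jump improves its fitness.

      # Compact sketch of one shuffled frog-leaping update of a single memeplex,
      # following the commonly published SFLA rules; step limits and other details
      # differ between implementations and from the authors' code.
      import numpy as np

      def evolve_memeplex(memeplex, fitness, global_best, lo, hi, rng):
          """memeplex: (n_frogs, n_dims) array; lower fitness is better."""
          f = np.apply_along_axis(fitness, 1, memeplex)
          order = np.argsort(f)
          best, worst = memeplex[order[0]], memeplex[order[-1]]
          for leader in (best, global_best):                 # try local best, then global best
              trial = np.clip(worst + rng.random(worst.size) * (leader - worst), lo, hi)
              if fitness(trial) < f[order[-1]]:
                  memeplex[order[-1]] = trial
                  return memeplex
          memeplex[order[-1]] = rng.uniform(lo, hi, worst.size)   # censor: random new frog
          return memeplex

      # Toy usage: minimize a sphere function in 2-D
      rng = np.random.default_rng(0)
      sphere = lambda x: float(np.sum(x**2))
      frogs = rng.uniform(-5, 5, size=(10, 2))
      frogs = evolve_memeplex(frogs, sphere, frogs[np.argmin([sphere(x) for x in frogs])], -5, 5, rng)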

  1. The inverse problem of acoustic wave scattering by an air-saturated poroelastic cylinder.

    PubMed

    Ogam, Erick; Fellah, Z E A; Baki, Paul

    2013-03-01

    The efficient use of plastic foams in a diverse range of structural applications, such as noise reduction, cushioning, and sleeping mattresses, requires detailed characterization of their permeability and deformation (load-bearing) behavior. The elastic moduli and airflow resistance properties of foams are often measured using two separate techniques, one employing mechanical vibration methods and the other employing flow rates of fluids based on fluid mechanics technology. A multi-parameter inverse acoustic scattering problem to recover the airflow resistivity (AR) and mechanical properties of an air-saturated foam cylinder is solved. A wave-fluid saturated poroelastic structure interaction model based on the modified Biot theory and plane-wave decomposition using orthogonal cylindrical functions is employed to solve the inverse problem. The solutions to the inverse problem are obtained by constructing and minimizing the objective functional given by the total square of the difference between predictions from the model and scattered acoustic field data acquired in an anechoic chamber. The value of the recovered AR is in good agreement with that of a slab sample cut from the cylinder and characterized using a method employing low frequency transmitted and reflected acoustic waves in a long waveguide, developed by Fellah et al. [Rev. Sci. Instrum. 78(11), 114902 (2007)].

  2. A New Paradigm for Satellite Retrieval of Hydrologic Variables: The CDRD Methodology

    NASA Astrophysics Data System (ADS)

    Smith, E. A.; Mugnai, A.; Tripoli, G. J.

    2009-09-01

    Historically, retrieval of thermodynamically active geophysical variables in the atmosphere (e.g., temperature, moisture, precipitation) involved some type of inversion scheme - embedded within the retrieval algorithm - to transform radiometric observations (a vector) to the desired geophysical parameter(s) (either a scalar or a vector). Inversion is fundamentally a mathematical operation involving some type of integral-differential radiative transfer equation - often resisting a straightforward algebraic solution - in which the integral side of the equation (typically the right-hand side) contains the desired geophysical vector, while the left-hand side contains the radiative measurement vector often free of operators. Inversion was considered more desirable than forward modeling because the forward model solution had to be selected from a generally unmanageable set of parameter-observation relationships. However, in the classical inversion problem for retrieval of temperature using multiple radiative frequencies along the wing of an absorption band (or line) of a well-mixed radiatively active gas, in either the infrared or microwave spectra, the inversion equation to be solved consists of a Fredholm integral equation of the 2nd kind - a specific type of transform problem in which there are an infinite number of solutions. This meant that special treatment of the transform process was required in order to obtain a single solution. Inversion had become the method of choice for retrieval in the 1950s because it appealed to mathematical elegance, and because the numerical approaches used to solve the problems (typically some type of relaxation or perturbation scheme) were computationally fast in an age when computer speeds were slow. Like many solution schemes, inversion has lingered on regardless of the fact that computer speeds have increased many orders of magnitude and forward modeling itself has become far more elegant in combination with Bayesian averaging procedures given that the a priori probabilities of occurrence in the true environment of the parameter(s) in question can be approximated (or are actually known). In this presentation, the theory of the more modern retrieval approach using a combination of cloud, radiation and other specialized forward models in conjunction with Bayesian weighted averaging will be reviewed in light of a brief history of inversion. The application of the theory will be cast in the framework of what we call the Cloud-Dynamics-Radiation-Database (CDRD) methodology - which we now use for the retrieval of precipitation from spaceborne passive microwave radiometers. In a companion presentation, we will specifically describe the CDRD methodology and present results for its application within the Mediterranean basin.

  3. A three-step maximum a posteriori probability method for InSAR data inversion of coseismic rupture with application to the 14 April 2010 Mw 6.9 Yushu, China, earthquake

    NASA Astrophysics Data System (ADS)

    Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei

    2013-08-01

    We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the posterior probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Then slip artifacts are eliminated from slip models in the third step using the same procedure as the second step, with fixed fault geometry parameters. We first design a fault model with 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China earthquake. Our preferred slip model is composed of three segments with most of the slip occurring within 15 km depth; the maximum slip reaches 1.38 m at the surface. The seismic moment released is estimated to be 2.32 × 10^19 Nm, consistent with the seismic estimate of 2.50 × 10^19 Nm.

  4. Fully probabilistic earthquake source inversion on teleseismic scales

    NASA Astrophysics Data System (ADS)

    Stähler, Simon; Sigloch, Karin

    2017-04-01

    Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. References: Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 1: Efficient parameterisation, Solid Earth, 5, 1055-1069, doi:10.5194/se-5-1055-2014, 2014. Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances, Solid Earth, 7, 1521-1536, doi:10.5194/se-7-1521-2016, 2016.

  5. TOPEX/POSEIDON tides estimated using a global inverse model

    NASA Technical Reports Server (NTRS)

    Egbert, Gary D.; Bennett, Andrew F.; Foreman, Michael G. G.

    1994-01-01

    Altimetric data from the TOPEX/POSEIDON mission will be used for studies of global ocean circulation and marine geophysics. However, it is first necessary to remove the ocean tides, which are aliased in the raw data. The tides are constrained by the two distinct types of information: the hydrodynamic equations which the tidal fields of elevations and velocities must satisfy, and direct observational data from tide gauges and satellite altimetry. Here we develop and apply a generalized inverse method, which allows us to combine rationally all of this information into global tidal fields best fitting both the data and the dynamics, in a least squares sense. The resulting inverse solution is a sum of the direct solution to the astronomically forced Laplace tidal equations and a linear combination of the representers for the data functionals. The representer functions (one for each datum) are determined by the dynamical equations, and by our prior estimates of the statistics or errors in these equations. Our major task is a direct numerical calculation of these representers. This task is computationally intensive, but well suited to massively parallel processing. By calculating the representers we reduce the full (infinite dimensional) problem to a relatively low-dimensional problem at the outset, allowing full control over the conditioning and hence the stability of the inverse solution. With the representers calculated we can easily update our model as additional TOPEX/POSEIDON data become available. As an initial illustration we invert harmonic constants from a set of 80 open-ocean tide gauges. We then present a practical scheme for direct inversion of TOPEX/POSEIDON crossover data. We apply this method to 38 cycles of geophysical data records (GDR) data, computing preliminary global estimates of the four principal tidal constituents, M(sub 2), S(sub 2), K(sub 1) and O(sub 1). The inverse solution yields tidal fields which are simultaneously smoother, and in better agreement with altimetric and ground truth data, than previously proposed tidal models. Relative to the 'default' tidal corrections provided with the TOPEX/POSEIDON GDR, the inverse solution reduces crossover difference variances significantly (approximately 20-30%), even though only a small number of free parameters (approximately equal to 1000) are actually fit to the crossover data.

  6. Super-resolution Doppler beam sharpening method using fast iterative adaptive approach-based spectral estimation

    NASA Astrophysics Data System (ADS)

    Mao, Deqing; Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu

    2018-01-01

    Doppler beam sharpening (DBS) is a critical technology for airborne radar ground mapping in the forward-squint region. In conventional DBS technology, the narrow-band Doppler filter groups formed by the fast Fourier transform (FFT) method suffer from low spectral resolution and high side-lobe levels. The iterative adaptive approach (IAA), based on weighted least squares (WLS), is applied to DBS imaging, forming narrower Doppler filter groups than the FFT with lower side-lobe levels. Regrettably, the IAA is iterative and requires matrix multiplication and inversion when forming the covariance matrix and its inverse, and when traversing the WLS estimate for each sampling point, resulting in a notably high, cubic-time computational complexity. We propose a fast IAA (FIAA)-based super-resolution DBS imaging method that takes advantage of the rich matrix structure of classical narrow-band filtering. First, we formulate the covariance matrix via the FFT instead of the conventional matrix multiplication, based on the Fourier structure of the steering matrix. Then, by exploiting the Gohberg-Semencul representation, the inverse of the Toeplitz covariance matrix is computed by the celebrated Levinson-Durbin (LD) and Toeplitz-vector algorithms. Finally, the FFT and the fast Toeplitz-vector algorithm are further used to traverse the WLS estimates based on data-dependent trigonometric polynomials. The method uses the Hermitian structure of the echo autocorrelation matrix R to achieve a fast solution and the Toeplitz structure of R to realize a fast inversion. The proposed method enjoys a lower computational complexity without performance loss compared with the conventional IAA-based super-resolution DBS imaging method. Results based on simulations and measured data verify the imaging performance and operational efficiency.
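    For reference, a plain (unaccelerated) IAA iteration is sketched below in Python; the function name iaa, the frequency grid and the small diagonal loading are illustrative choices, and the FIAA of the abstract additionally exploits the Toeplitz structure of R via the Gohberg-Semencul and Levinson-Durbin recursions, which this sketch does not.

        import numpy as np

        def iaa(y, freqs, n_iter=10):
            # Basic iterative adaptive approach: y is a length-N complex data
            # vector, freqs are normalised frequencies in cycles/sample.
            N = len(y)
            A = np.exp(2j * np.pi * np.outer(np.arange(N), freqs))   # steering matrix
            p = np.abs(A.conj().T @ y) ** 2 / N**2                   # periodogram start
            for _ in range(n_iter):
                R = (A * p) @ A.conj().T                             # R = A diag(p) A^H
                R += 1e-10 * np.trace(R).real / N * np.eye(N)        # slight loading
                Rinv_y = np.linalg.solve(R, y)
                Rinv_A = np.linalg.solve(R, A)
                num = A.conj().T @ Rinv_y                            # a_k^H R^-1 y
                den = np.sum(A.conj() * Rinv_A, axis=0).real         # a_k^H R^-1 a_k
                p = np.abs(num / den) ** 2                           # WLS power update
            return p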

  7. Exact solution of a model DNA-inversion genetic switch with orientational control.

    PubMed

    Visco, Paolo; Allen, Rosalind J; Evans, Martin R

    2008-09-12

    DNA inversion is an important mechanism by which bacteria and bacteriophage switch reversibly between phenotypic states. In such switches, the orientation of a short DNA element is flipped by a site-specific recombinase enzyme. We propose a simple model for a DNA-inversion switch in which recombinase production is dependent on the switch state (orientational control). Our model is inspired by the fim switch in E. coli. We present an exact analytical solution of the chemical master equation for the model switch, as well as stochastic simulations. Orientational control causes the switch to deviate from Poissonian behavior: the distribution of times in the on state shows a peak and successive flip times are correlated.

  8. Inverse boundary-layer theory and comparison with experiment

    NASA Technical Reports Server (NTRS)

    Carter, J. E.

    1978-01-01

    Inverse boundary layer computational procedures, which permit nonsingular solutions at separation and reattachment, are presented. In the first technique, which is for incompressible flow, the displacement thickness is prescribed; in the second technique, for compressible flow, a perturbation mass flow is the prescribed condition. The pressure is deduced implicitly along with the solution in each of these techniques. Laminar and turbulent computations, which are typical of separated flow, are presented and comparisons are made with experimental data. In both inverse procedures, finite difference techniques are used along with Newton iteration. The resulting procedure is no more complicated than conventional boundary layer computations. These separated boundary layer techniques appear to be well suited for complete viscous-inviscid interaction computations.

  9. The incomplete inverse and its applications to the linear least squares problem

    NASA Technical Reports Server (NTRS)

    Morduch, G. E.

    1977-01-01

    A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It is proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least-squares solution. An answer is provided to the problem that occurs when the data residuals are too large and there are insufficient data to justify augmenting the model.

  10. Mixed linear-non-linear inversion of crustal deformation data: Bayesian inference of model, weighting and regularization parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun'ichi; Johnson, Kaj M.

    2010-06-01

    We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.

  11. Joint two dimensional inversion of gravity and magnetotelluric data using correspondence maps

    NASA Astrophysics Data System (ADS)

    Carrillo Lopez, J.; Gallardo, L. A.

    2016-12-01

    Inverse problems in Earth sciences are inherently non-unique. To improve models and reduce the number of solutions we need to provide extra information. In a geological context, this could be a priori information, for example geological knowledge, well-log data, smoothness constraints, or measurements of different kinds of data. Joint inversion provides an approach to improve the solution and reduce the errors due to the assumptions of each individual method. To do that, we need a link between two or more models. Some approaches have been explored successfully in recent years. For example, Gallardo and Meju (2003, 2004, 2011) and Gallardo et al. (2012) used the directions of property variation to measure the similarity between models by minimizing their cross gradients. In this work, we propose a joint iterative inversion method that uses the spatial distribution of properties as the link. Correspondence maps may characterize specific Earth systems better because they account for the relation between properties. We implemented a Fortran code for two-dimensional inversion of magnetotelluric and gravity data, two of the standard methods in geophysical exploration. Synthetic tests show the advantages of joint inversion using correspondence maps over separate inversions. Finally, we applied this technique to magnetotelluric and gravity data from the Cerro Prieto geothermal zone, México.

  12. Computationally efficient control allocation

    NASA Technical Reports Server (NTRS)

    Durham, Wayne (Inventor)

    2001-01-01

    A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudo-inverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
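    For comparison, the baseline pseudo-inverse allocation referred to above can be sketched in a few lines of Python (the names B, d, u_min and u_max are hypothetical); this is the reference solution the patented method is compared against, not the facet-based method itself.

        import numpy as np

        def pinv_allocation(B, d, u_min, u_max):
            # Minimum-norm solution of B u = d via the pseudo-inverse, with the
            # result simply clipped to the individual effector limits afterwards.
            u = np.linalg.pinv(B) @ d
            return np.clip(u, u_min, u_max)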

  13. C–IBI: Targeting cumulative coordination within an iterative protocol to derive coarse-grained models of (multi-component) complex fluids

    DOE PAGES

    de Oliveira, Tiago E.; Netz, Paulo A.; Kremer, Kurt; ...

    2016-05-03

    We present a coarse-graining strategy that we test for aqueous mixtures. The method uses pair-wise cumulative coordination as a target function within an iterative Boltzmann inversion (IBI)-like protocol. We name this method coordination iterative Boltzmann inversion (C–IBI). While the underlying coarse-grained model is still structure-based and thus preserves pair-wise solution structure, our method also reproduces the solvation thermodynamics of binary and/or ternary mixtures. In addition, we observe much faster convergence with C–IBI compared to IBI. To validate its robustness, we apply C–IBI to test cases of the solvation thermodynamics of aqueous urea and of triglycine solvation in aqueous urea.
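    The update at the heart of an IBI-like protocol can be written in a few lines; the Python sketch below (with an illustrative relaxation factor alpha) shows the standard IBI potential update, with the understanding that C–IBI targets the cumulative coordination number rather than the radial distribution function itself.

        import numpy as np

        def ibi_update(V, target, current, kBT, alpha=1.0, eps=1e-12):
            # V_{i+1}(r) = V_i(r) + alpha * kBT * ln( current(r) / target(r) ),
            # where target and current are the target and current distribution
            # functions tabulated on the same r grid as the potential V.
            return V + alpha * kBT * np.log((current + eps) / (target + eps))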

  14. Optimization method for an evolutional type inverse heat conduction problem

    NASA Astrophysics Data System (ADS)

    Deng, Zui-Cha; Yu, Jian-Ning; Yang, Liu

    2008-01-01

    This paper deals with the determination of a pair (q, u) in the heat conduction equation u_t - u_{xx} + q(x,t)u = 0, with initial and boundary conditions u(x,0) = u_0(x), u_x|_{x=0} = u_x|_{x=1} = 0, from the overspecified data u(x, t) = g(x, t). By a time semi-discrete scheme, the problem is transformed into a sequence of inverse problems in which the unknown coefficients are purely space dependent. Based on an optimal control framework, the existence, uniqueness and stability of the solution (q, u) are proved. A necessary condition, which is a coupled system of a parabolic equation and a parabolic variational inequality, is deduced.

  15. On One-Dimensional Stretching Functions for Finite-Difference Calculations

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1980-01-01

    The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent. The general two-sided function has many applications in the construction of finite-difference grids.
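    A generic sinh/asinh-type interior clustering can be sketched as below (Python); the midpoint clustering and the parameter beta are illustrative, whereas the paper derives the exact form, location and slope of the clustering region from truncation-error criteria.

        import numpy as np

        def interior_stretch(n, beta=5.0):
            # Grid on [0, 1] clustered around the midpoint: a uniform parameter
            # xi in [-1, 1] is mapped through sinh, i.e. the physical coordinate
            # is an inverse-hyperbolic-sine stretching of the uniform one.
            xi = np.linspace(-1.0, 1.0, n)
            return 0.5 * (1.0 + np.sinh(beta * xi) / np.sinh(beta))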

  16. On one-dimensional stretching functions for finite-difference calculations. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1983-01-01

    The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type of function is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent. Previously announced in STAR as N80-25055

  17. Fate of Volatile Organic Compounds in Constructed Wastewater Treatment Wetlands

    USGS Publications Warehouse

    Keefe, S.H.; Barber, L.B.; Runkel, R.L.; Ryan, J.N.

    2004-01-01

    The fate of volatile organic compounds was evaluated in a wastewater-dependent constructed wetland near Phoenix, AZ, using field measurements and solute transport modeling. Numerically based volatilization rates were determined using inverse modeling techniques and hydraulic parameters established by sodium bromide tracer experiments. Theoretical volatilization rates were calculated from the two-film method incorporating physicochemical properties and environmental conditions. Additional analyses were conducted using graphically determined volatilization rates based on field measurements. Transport (with first-order removal) simulations were performed using a range of volatilization rates and were evaluated with respect to field concentrations. The inverse and two-film reactive transport simulations demonstrated excellent agreement with measured concentrations for 1,4-dichlorobenzene, tetrachloroethene, dichloromethane, and trichloromethane and fair agreement for dibromochloromethane, bromo-dichloromethane, and toluene. Wetland removal efficiencies from inlet to outlet ranged from 63% to 87% for target compounds.

  18. Implementation of one and three dimensional models for heat transfer coeffcient identification over the plate cooled by the circular water jets

    NASA Astrophysics Data System (ADS)

    Malinowski, Zbigniew; Cebo-Rudnicka, Agnieszka; Hadała, Beata; Szajding, Artur; Telejko, Tadeusz

    2017-10-01

    The cooling rate affects the mechanical properties of steel, which strongly depend on microstructure evolution processes. The heat transfer boundary condition for the numerical simulation of steel cooling by water jets can be determined from local one-dimensional or from three-dimensional inverse solutions in space and time. In the present study an Inconel plate was heated to about 900 °C and then cooled by six circular water jets. The plate temperature was measured by 30 thermocouples. The heat transfer coefficient and heat flux distributions at the plate surface were determined in time and space. The one-dimensional solutions gave a local error in the heat transfer coefficient of about 35%. The three-dimensional inverse solution reduced the local error to about 20%. The uncertainty test confirmed that a better approximation of the heat transfer coefficient distribution over the cooled surface can be obtained even for a limited number of thermocouples. In such a case it was necessary to constrain the inverse solution with interpolated temperature sensor data.

  19. Efficient Jacobian inversion for the control of simple robot manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1988-01-01

    Symbolic inversion of the Jacobian matrix for spherical wrist arms is investigated. It is shown that, taking advantage of the simple geometry of these arms, the closed-form solution of the system Q = J⁻¹X, representing a transformation from task space to joint space, can be obtained very efficiently. The solutions for PUMA, Stanford, and a six-revolute-joint coplanar arm, along with all singular points, are presented. The solution for each joint variable is found as an explicit function of the singular points, which provides better insight into the effect of different singular points on the motion and force exertion of each individual joint. For the above arms, the computational cost of the solution is of the same order as the cost of the forward kinematic solution, and it is significantly reduced if the forward kinematic solution has already been obtained. A comparison with previous methods shows that this method is the most efficient to date.

  20. Efficient mapping algorithms for scheduling robot inverse dynamics computation on a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chen, C. L.

    1989-01-01

    Two efficient mapping algorithms for scheduling the robot inverse dynamics computation consisting of m computational modules with precedence relationship to be executed on a multiprocessor system consisting of p identical homogeneous processors with processor and communication costs to achieve minimum computation time are presented. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the searching for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donovan, Ellen M., E-mail: ellen.donovan@icr.ac.u; Ciurlionis, Laura; Fairfoul, Jamie

    Purpose: To establish planning solutions for a concomitant three-level radiation dose distribution to the breast using linear accelerator- or tomotherapy-based intensity-modulated radiotherapy (IMRT), for the U.K. Intensity Modulated and Partial Organ (IMPORT) High trial. Methods and Materials: Computed tomography data sets for 9 patients undergoing breast conservation surgery with implanted tumor bed gold markers were used to prepare three-level dose distributions encompassing the whole breast (36 Gy), partial breast (40 Gy), and tumor bed boost (48 or 53 Gy) treated concomitantly in 15 fractions within 3 weeks. Forward and inverse planned IMRT and tomotherapy were investigated as solutions. A standard electron field was compared with a photon field arrangement encompassing the tumor bed boost volume. The out-of-field doses were measured for all methods. Results: Dose-volume constraints of volume >90% receiving 32.4 Gy and volume >95% receiving 50.4 Gy for the whole breast and tumor bed were achieved. The constraint of volume >90% receiving 36 Gy for the partial breast was fulfilled in the inverse IMRT and tomotherapy plans and in 7 of 9 cases of a forward planned IMRT distribution. An electron boost to the tumor bed was inadequate in 8 of 9 cases. The IMRT methods delivered a greater whole body dose than the standard breast tangents. The contralateral lung volume receiving >2.5 Gy was increased in the inverse IMRT and tomotherapy plans, although it did not exceed the constraint. Conclusion: We have demonstrated a set of widely applicable solutions that fulfilled the stringent clinical trial requirements for the delivery of a concomitant three-level dose distribution to the breast.

  2. Oil core microcapsules by inverse gelation technique.

    PubMed

    Martins, Evandro; Renard, Denis; Davy, Joëlle; Marquis, Mélanie; Poncelet, Denis

    2015-01-01

    A promising technique for oil encapsulation in Ca-alginate capsules by inverse gelation was proposed by Abang et al. This method consists of emulsifying calcium chloride solution in oil and then adding it dropwise to an alginate solution to produce Ca-alginate capsules. Spherical capsules with diameters around 3 mm were produced by this technique; however, the production of smaller capsules was not demonstrated. The objective of this study is to propose a new method of oil encapsulation in a Ca-alginate membrane by inverse gelation. Optimisation of the method leads to microcapsules with diameters around 500 μm. In the search for microcapsules with improved diffusion characteristics, size reduction is essential to broaden applications in the food, cosmetics and pharmaceutical areas. This work contributes to a better understanding of the inverse gelation technique and allows the production of microcapsules with a well-defined shell-core structure.

  3. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    NASA Astrophysics Data System (ADS)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor-product kernel and lower-bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least-squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analysing in greater depth the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
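    A single-parameter Tikhonov solution expressed through the SVD, with optional truncation of small singular values, is sketched below in Python; it only illustrates the kind of filtering involved and is not the multi-parameter, locally adapted 2DUPEN/I2DUPEN scheme.

        import numpy as np

        def tikhonov_svd(K, s, lam, k_trunc=None):
            # Solve min ||K f - s||^2 + lam^2 ||f||^2 via the SVD:
            # f = sum_i sigma_i / (sigma_i^2 + lam^2) * (u_i . s) * v_i,
            # optionally keeping only the k_trunc largest singular values.
            U, sig, Vt = np.linalg.svd(K, full_matrices=False)
            if k_trunc is not None:
                U, sig, Vt = U[:, :k_trunc], sig[:k_trunc], Vt[:k_trunc]
            filt = sig / (sig**2 + lam**2)
            return Vt.T @ (filt * (U.T @ s))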

  4. Use of Linear Prediction Uncertainty Analysis to Guide Conditioning of Models Simulating Surface-Water/Groundwater Interactions

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; White, J.; Doherty, J.

    2011-12-01

    Linear prediction uncertainty analysis in a Bayesian framework was applied to guide the conditioning of an integrated surface water/groundwater model that will be used to predict the effects of groundwater withdrawals on surface-water and groundwater flows. Linear prediction uncertainty analysis is an effective approach for identifying (1) raw and processed data most effective for model conditioning prior to inversion, (2) specific observations and periods of time critically sensitive to specific predictions, and (3) additional observation data that would reduce model uncertainty relative to specific predictions. We present results for a two-dimensional groundwater model of a 2,186 km2 area of the Biscayne aquifer in south Florida implicitly coupled to a surface-water routing model of the actively managed canal system. The model domain includes 5 municipal well fields withdrawing more than 1 Mm3/day and 17 operable surface-water control structures that control freshwater releases from the Everglades and freshwater discharges to Biscayne Bay. More than 10 years of daily observation data from 35 groundwater wells and 24 surface water gages are available to condition model parameters. A dense parameterization was used to fully characterize the contribution of the inversion null space to predictive uncertainty and included bias-correction parameters. This approach allows better resolution of the boundary between the inversion null space and solution space. Bias-correction parameters (e.g., rainfall, potential evapotranspiration, and structure flow multipliers) absorb information that is present in structural noise that may otherwise contaminate the estimation of more physically-based model parameters. This allows greater precision in predictions that are entirely solution-space dependent, and reduces the propensity for bias in predictions that are not. Results show that application of this analysis is an effective means of identifying those surface-water and groundwater data, both raw and processed, that minimize predictive uncertainty, while simultaneously identifying the maximum solution-space dimensionality of the inverse problem supported by the data.

  5. Derivation and application of an analytical rock displacement solution on rectangular cavern wall using the inverse mapping method.

    PubMed

    Gao, Mingzhong; Yu, Bin; Qiu, Zhiqiang; Yin, Xiangang; Li, Shengwei; Liu, Qiang

    2017-01-01

    Rectangular caverns are increasingly used in underground engineering projects; the failure mechanism of rectangular cavern wall rock differs significantly as a result of the cross-sectional shape and variations in wall stress distributions. However, the conventional computational method always results in a long-winded computational process and multiple displacement solutions of internal rectangular wall rock. This paper uses a Laurent series complex method to obtain a mapping function expression based on complex variable function theory and conformal transformation. This method is combined with the Schwarz-Christoffel method to calculate the mapping function coefficient and to determine the rectangular cavern wall rock deformation. With regard to the inverse mapping concept, the mapping relation between the polar coordinate system within plane ς and a corresponding unique plane coordinate point inside the cavern wall rock is discussed. The disadvantage of multiple solutions when mapping from the plane to the polar coordinate system is addressed. This theoretical formula is used to calculate wall rock boundary deformation and displacement field nephograms inside the wall rock for a given cavern height and width. A comparison with ANSYS numerical software results suggests that the theoretical solution and numerical solution exhibit identical trends, thereby demonstrating the method's validity. This method greatly improves the computing accuracy and reduces the difficulty in solving for cavern boundary and internal wall rock displacements. The proposed method provides a theoretical guide for controlling cavern wall rock deformation failure.

  6. Derivation and application of an analytical rock displacement solution on rectangular cavern wall using the inverse mapping method

    PubMed Central

    Gao, Mingzhong; Qiu, Zhiqiang; Yin, Xiangang; Li, Shengwei; Liu, Qiang

    2017-01-01

    Rectangular caverns are increasingly used in underground engineering projects; the failure mechanism of rectangular cavern wall rock differs significantly as a result of the cross-sectional shape and variations in wall stress distributions. However, the conventional computational method always results in a long-winded computational process and multiple displacement solutions of internal rectangular wall rock. This paper uses a Laurent series complex method to obtain a mapping function expression based on complex variable function theory and conformal transformation. This method is combined with the Schwarz-Christoffel method to calculate the mapping function coefficient and to determine the rectangular cavern wall rock deformation. With regard to the inverse mapping concept, the mapping relation between the polar coordinate system within plane ς and a corresponding unique plane coordinate point inside the cavern wall rock is discussed. The disadvantage of multiple solutions when mapping from the plane to the polar coordinate system is addressed. This theoretical formula is used to calculate wall rock boundary deformation and displacement field nephograms inside the wall rock for a given cavern height and width. A comparison with ANSYS numerical software results suggests that the theoretical solution and numerical solution exhibit identical trends, thereby demonstrating the method’s validity. This method greatly improves the computing accuracy and reduces the difficulty in solving for cavern boundary and internal wall rock displacements. The proposed method provides a theoretical guide for controlling cavern wall rock deformation failure. PMID:29155892

  7. Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin

    2016-04-01

    Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure geophysical inversions are performed on meshes of space-filling cells (typically prisms or tetrahedra) and recover smoothly varying physical property distributions that are inconsistent with typical geological interpretations. There are several approaches through which mesh-based minimum-structure geophysical inversion can help recover models with some of the desired characteristics. However, a more effective strategy may be to consider two fundamentally different types of inversions: lithological and surface geometry inversions. A major advantage of these two inversion approaches is that joint inversion of multiple types of geophysical data is greatly simplified. In a lithological inversion, the subsurface is discretized into a mesh and each cell contains a particular rock type. A lithological model must be translated to a physical property model before geophysical data simulation. Each lithology may map to discrete property values or there may be some a priori probability density function associated with the mapping. Through this mapping, lithological inverse problems limit the parameter domain and consequently reduce the non-uniqueness from that presented by standard mesh-based inversions that allow physical property values on continuous ranges. Furthermore, joint inversion is greatly simplified because no additional mathematical coupling measure is required in the objective function to link multiple physical property models. In a surface geometry inversion, the model comprises wireframe surfaces representing contacts between rock units. This parameterization is then fully consistent with Earth models built by geologists, which in 3D typically comprise wireframe contact surfaces of tessellated triangles. As for the lithological case, the physical properties of the units lying between the contact surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one can not apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues including parallelization and problem dimension reduction.
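    The Pareto-optimality criterion underlying the PMOGO approach described above can be illustrated with a short dominance filter (Python sketch; the array layout is an assumption): a model survives if no other model is at least as good in every objective and strictly better in at least one.

        import numpy as np

        def pareto_front(objectives):
            # objectives: (n_models, n_objectives) array, all to be minimised
            # (e.g. one data misfit per geophysical survey plus a regularisation term).
            obj = np.asarray(objectives)
            front = []
            for i in range(len(obj)):
                dominated = np.any(np.all(obj <= obj[i], axis=1) &
                                   np.any(obj < obj[i], axis=1))
                if not dominated:
                    front.append(i)
            return front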

  8. Minimum relative entropy, Bayes and Kapur

    NASA Astrophysics Data System (ADS)

    Woodbury, Allan D.

    2011-04-01

    The focus of this paper is to illustrate important philosophies on inversion and the similarities and differences between Bayesian and minimum relative entropy (MRE) methods. The development of each approach is illustrated through the general discrete linear inverse problem. MRE differs from both Bayes and classical statistical methods in that knowledge of moments is used as ‘data’ rather than sample values. MRE, like Bayes, presumes knowledge of a prior probability distribution and produces the posterior pdf itself. MRE attempts to produce this pdf based on the information provided by new moments. It will use moments of the prior distribution only if new data on these moments is not available. It is important to note that MRE makes a strong statement that the imposed constraints are exact and complete. In this way, MRE is maximally uncommitted with respect to unknown information. In general, since input data are known only to within a certain accuracy, it is important that any inversion method should allow for errors in the measured data. The MRE approach can accommodate such uncertainty and, in new work described here, previous results are modified to include a Gaussian prior. A variety of MRE solutions are reproduced under a number of assumed moments, and these include second-order central moments. Various solutions of Jacobs & van der Geest were repeated and clarified. Menke's weighted minimum length solution was shown to have a basis in information theory, and the classic least-squares estimate is shown as a solution to MRE under the conditions of more data than unknowns and where we utilize the observed data and their associated noise. An example inverse problem involving a gravity survey over a layered and faulted zone is shown. In all cases the inverse results match quite closely the actual density profile, at least in the upper portions of the profile. The similarity to the Bayesian results reflects the fact that the MRE posterior pdf, and its mean, are constrained not by d = Gm but by its first moment E(d) = Gm, a weakened form of the constraints. If there is no error in the data then one should expect complete agreement between Bayes and MRE, and this is what is shown. Similar results are shown when second-moment data are available (e.g. posterior covariance equal to zero). But dissimilar results are noted when we attempt to derive a Bayesian-like result from MRE. In the various examples given in this paper, the problems look similar but are, in the final analysis, not equal. The methods of attack are different and so are the results, even though we have used the linear inverse problem as a common template.

  9. Open database of epileptic EEG with MRI and postoperational assessment of foci--a real world verification for the EEG inverse solutions.

    PubMed

    Zwoliński, Piotr; Roszkowski, Marcin; Zygierewicz, Jaroslaw; Haufe, Stefan; Nolte, Guido; Durka, Piotr J

    2010-12-01

    This paper introduces a freely accessible database http://eeg.pl/epi , containing 23 datasets from patients diagnosed with and operated on for drug-resistant epilepsy. This was collected as part of the clinical routine at the Warsaw Memorial Child Hospital. Each record contains (1) pre-surgical electroencephalography (EEG) recording (10-20 system) with inter-ictal discharges marked separately by an expert, (2) a full set of magnetic resonance imaging (MRI) scans for calculations of the realistic forward models, (3) structural placement of the epileptogenic zone, recognized by electrocorticography (ECoG) and post-surgical results, plotted on pre-surgical MRI scans in transverse, sagittal and coronal projections, (4) brief clinical description of each case. The main goal of this project is evaluation of possible improvements of localization of epileptic foci from the surface EEG recordings. These datasets offer a unique possibility for evaluating different EEG inverse solutions. We present preliminary results from a subset of these cases, including comparison of different schemes for the EEG inverse solution and preprocessing. We report also a finding which relates to the selective parametrization of single waveforms by multivariate matching pursuit, which is used in the preprocessing for the inverse solutions. It seems to offer a possibility of tracing the spatial evolution of seizures in time.

  10. Moment Tensor Inversion of the 1998 Aiquile Earthquake Using Long-period surface waves

    NASA Astrophysics Data System (ADS)

    Wang, H.

    2016-12-01

    On 22 May 1998 at 04:49 (GMT), an earthquake of magnitude Mw = 6.6 struck the Aiquile region of Bolivia, causing 105 deaths and significant damage to the nearby towns of Hoyadas and Pampa Grande. This was the largest shallow earthquake (15 km depth) in Bolivia in over 50 years, and was felt as far away as Sucre, approximately 100 km distant. In this report, a centroid moment tensor (CMT) inversion is carried out using body waves and surface waves from the 1998 Aiquile earthquake with 1-D and 3-D earth models to obtain the source parameters and moment tensor, values that are subsequently compared against the Global Centroid Moment Tensor catalogue (GCMT). The excitation kernels can also be obtained and synthetic data created with different earth models. The two methods used for calculating synthetic seismograms are SPECFEM3D Globe, based on the shear-wave mantle model S40RTS and crustal model CRUST 2.0, and AxiSEM, based on the 1-D PREM earth model. The report explains the theory behind the CMT inversion; the source parameters obtained from the inversion can be used to reveal the tectonics of the source of this earthquake, and this information could be helpful in assessing the seismic hazard and overall tectonic regime of the region. Furthermore, the synthetic seismograms and the inversion solution are used to assess the two earth models.

  11. Inverse estimation of critical factors for controlling over-prediction of summertime tropospheric O3 over East Asia based on the combination of DDM sensitivity analysis and a modeled Green's function method

    NASA Astrophysics Data System (ADS)

    Itahashi, S.; Yumimoto, K.; Uno, I.; Kim, S.

    2012-12-01

    Air quality studies based on chemical transport models have provided many important results promoting our knowledge of air pollution phenomena; however, discrepancies between modelling results and observational data remain an important issue to overcome. One concerning issue is the over-prediction of summertime tropospheric ozone in remote areas of Japan. This problem has been pointed out in model intercomparison studies at both the regional scale (e.g., MICS-Asia) and the global scale (e.g., TF-HTAP). Several possible reasons can be listed: (i) the modelled reproducibility of the penetration of clean oceanic air masses, (ii) correct estimation of the anthropogenic NOx/VOC emissions over East Asia, and (iii) the chemical reaction scheme used in the model simulation. In this study, we attempt an inverse estimation of some important chemical reaction constants based on a system combining DDM (decoupled direct method) sensitivity analysis and a modelled Green's function approach. The DDM is an efficient and accurate way of performing sensitivity analysis with respect to model inputs; it calculates sensitivity coefficients representing the responsiveness of atmospheric chemical concentrations to perturbations in a model input or parameter. The inverse solutions with the Green's functions are given by a linear, least-squares method but are still robust against nonlinearities. To construct the response matrix (i.e., the Green's functions), we can directly use the results of the DDM sensitivity analysis. The chemical reaction constants that have relatively large uncertainties are determined under the constraint of observed ozone concentration data over remote areas of Japan. Our inverse estimation demonstrated an underestimation of the reaction constant for HNO3 production (NO2 + OH + M → HNO3 + M) in the SAPRC99 chemical scheme; the inverse results indicated a +29.0% increment to this reaction. This estimate agrees well with the CB4 and CB5 values, and also with the SAPRC07 estimate. For the NO2 photolysis rate, a 49.4% reduction was indicated. This result shows that the effect of heavy aerosol loading on photolysis rates must be incorporated in numerical studies.

  12. A systematic linear space approach to solving partially described inverse eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Hu, Sau-Lon James; Li, Haujun

    2008-06-01

    Most applications of the inverse eigenvalue problem (IEP), which concerns the reconstruction of a matrix from prescribed spectral data, are associated with special classes of structured matrices. Solving the IEP requires one to satisfy both the spectral constraint and the structural constraint. If the spectral constraint consists of only one or few prescribed eigenpairs, this kind of inverse problem has been referred to as the partially described inverse eigenvalue problem (PDIEP). This paper develops an efficient, general and systematic approach to solve the PDIEP. Basically, the approach, applicable to various structured matrices, converts the PDIEP into an ordinary inverse problem that is formulated as a set of simultaneous linear equations. While solving simultaneous linear equations for model parameters, the singular value decomposition method is applied. Because of the conversion to an ordinary inverse problem, other constraints associated with the model parameters can be easily incorporated into the solution procedure. The detailed derivation and numerical examples to implement the newly developed approach to symmetric Toeplitz and quadratic pencil (including mass, damping and stiffness matrices of a linear dynamic system) PDIEPs are presented. Excellent numerical results for both kinds of problem are achieved under the situations that have either unique or infinitely many solutions.
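    As a concrete instance of the conversion described above, the Python sketch below (the helper name and the single-eigenpair setting are illustrative, not taken from the paper) reconstructs a symmetric Toeplitz matrix from one prescribed eigenpair by writing T v = lambda v row by row as linear equations in the first-row entries and solving them with an SVD-based least-squares solver.

        import numpy as np
        from scipy.linalg import toeplitz

        def toeplitz_from_eigenpair(lam, v):
            # Row i of T v = lam v reads sum_j t_{|i-j|} v_j = lam v_i, which is
            # linear in the unknown first-row entries t_0 .. t_{n-1}.
            v = np.asarray(v, float)
            n = len(v)
            M = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    M[i, abs(i - j)] += v[j]
            # lstsq uses the SVD and returns the minimum-norm solution if the
            # single prescribed eigenpair leaves the system underdetermined.
            t, *_ = np.linalg.lstsq(M, lam * v, rcond=None)
            return toeplitz(t)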

  13. Data inversion immune to cycle-skipping using AWI

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Umpleby, A.; Yao, G.; Morgan, J. V.

    2014-12-01

    Over the last decade, 3D Full Waveform Inversion (FWI) has become a standard model-building tool in exploration seismology, especially in oil and gas applications, thanks to the high-quality (spatially dense in sources and receivers) datasets acquired by the industry. FWI provides superior quantitative images compared with its travel-time counterparts (travel-time based inversion methods) because it aims to match all the information in the observations instead of a severely restricted subset of it, namely picked arrivals. The downside is that the solution space explored by FWI has a large number of local minima, and since the solution is restricted to local optimization methods (due to the cost of objective function evaluations), the success of the inversion depends on starting within the basin of attraction of the global minimum. Local minima can exist for a wide variety of reasons, and it seems unlikely that a formulation of the problem can eliminate all of them, i.e. define the optimization problem in a form that results in a monotonic objective function. However, a significant number of local minima are created by the definition of the data misfit. In its standard formulation, FWI compares observed data (field data) with predicted data (generated with a synthetic model) by subtracting one from the other, and the objective function is defined as some norm of this difference. The combination of this criterion and the oscillatory nature of seismic data produces the well-known phenomenon of cycle skipping, where model updates try to match the nearest cycles of one dataset to the other. In order to avoid cycle skipping, we propose a different comparison between observed and predicted data, based on Wiener filters, which exploits the fact that the 'identity' Wiener filter is a spike at zero lag. This gives rise to a new objective function without cycle-skip related local minima, and therefore suppresses the need for accurate starting models or low frequencies in the data. This new technique, called Adaptive Waveform Inversion (AWI), appears consistently superior to conventional FWI.
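    A single-trace sketch of a Wiener-filter-based misfit in this spirit is given below (Python); the damping eps and the |lag| weighting are illustrative choices, not the authors' exact implementation. Perfect agreement gives a zero-lag spike and zero misfit, while a time shift merely moves the spike rather than creating cycle-skipped local minima.

        import numpy as np

        def awi_style_misfit(pred, obs, eps=1e-3):
            # Frequency-domain Wiener filter mapping the predicted trace onto the
            # observed one, followed by a penalty on filter energy away from zero lag:
            # f = sum( (lag * w)^2 ) / sum( w^2 ).
            n = len(pred)
            P = np.fft.rfft(pred, 2 * n)
            D = np.fft.rfft(obs, 2 * n)
            W = D * np.conj(P) / (np.abs(P) ** 2 + eps * np.max(np.abs(P)) ** 2)
            w = np.fft.fftshift(np.fft.irfft(W, 2 * n))   # zero lag at the centre
            lag = np.arange(2 * n) - n
            return np.sum((lag * w) ** 2) / np.sum(w ** 2)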

  14. Kinematics of a New High Precision Three Degree-of-Freedom Parallel Manipulator

    NASA Technical Reports Server (NTRS)

    Tahmasebi, Farhad

    2005-01-01

    Closed-form direct and inverse kinematics of a new three degree-of-freedom (DOF) parallel manipulator with inextensible limbs and base-mounted actuators are presented. The manipulator has higher resolution and precision than the existing three DOF mechanisms with extensible limbs. Since all of the manipulator actuators are base-mounted, higher payload capacity, smaller actuator sizes, and lower power dissipation can be obtained. The manipulator is suitable for alignment applications where only tip, tilt, and piston motions are significant. The direct kinematics of the manipulator is reduced to solving an eighth-degree polynomial in the square of the tangent of the half-angle between one of the limbs and the base plane. Hence, there are at most sixteen assembly configurations for the manipulator. In addition, it is shown that the sixteen solutions are eight pairs of reflected configurations with respect to the base plane. Numerical examples for the direct and inverse kinematics of the manipulator are also presented.

  15. Mixed linear-nonlinear fault slip inversion: Bayesian inference of model, weighting, and smoothing parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, J.; Johnson, K. M.

    2009-12-01

    Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is not strong a priori information on the geometry of the fault that produced the earthquake and the data is not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through combination of both analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress drop.
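    A toy version of the mixed linear/non-linear machinery described above can be sketched as follows (Python; G_of, theta0, step, sigma and lam are hypothetical arguments): random-walk Metropolis over the non-linear fault-geometry parameters, with the linearly related slip solved analytically at each step. The actual method also samples data weights and the regularization weight, and treats the linear parameters more carefully than simply plugging in their conditional estimate.

        import numpy as np

        def sample_geometry_and_slip(d, G_of, n_steps, theta0, step, sigma, lam, rng=None):
            # d: data vector; G_of(theta): Green's function matrix for geometry theta;
            # sigma: data standard deviation; lam: fixed damping/smoothing weight.
            rng = rng or np.random.default_rng()

            def solve_slip(theta):
                G = G_of(theta)
                m = np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ d)
                r = d - G @ m
                return m, -0.5 * np.sum(r ** 2) / sigma ** 2   # slip, log-likelihood

            theta = np.asarray(theta0, float)
            m, logL = solve_slip(theta)
            samples = []
            for _ in range(n_steps):
                prop = theta + step * rng.standard_normal(theta.size)
                m_p, logL_p = solve_slip(prop)
                if np.log(rng.uniform()) < logL_p - logL:      # Metropolis acceptance
                    theta, m, logL = prop, m_p, logL_p
                samples.append((theta.copy(), m.copy()))
            return samples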

  16. Fully Nonlinear Modeling and Analysis of Precision Membranes

    NASA Technical Reports Server (NTRS)

    Pai, P. Frank; Young, Leyland G.

    2003-01-01

    High precision membranes are used in many current space applications. This paper presents a fully nonlinear membrane theory with forward and inverse analyses of high precision membrane structures. The fully nonlinear membrane theory is derived from Jaumann strains and stresses, exact coordinate transformations, the concept of local relative displacements, and orthogonal virtual rotations. In this theory, energy and Newtonian formulations are fully correlated, and every structural term can be interpreted in terms of vectors. Fully nonlinear ordinary differential equations (ODEs) governing the large static deformations of known axisymmetric membranes under known axisymmetric loading (i.e., forward problems) are presented as first-order ODEs, and a method for obtaining numerically exact solutions using the multiple shooting procedure is shown. A method for obtaining the undeformed geometry of any axisymmetric membrane with a known inflated geometry and a known internal pressure (i.e., inverse problems) is also derived. Numerical results from forward analysis are verified using results in the literature, and results from inverse analysis are verified using known exact solutions and solutions from the forward analysis. Results show that the membrane theory and the proposed numerical methods for solving nonlinear forward and inverse membrane problems are accurate.

  17. On the existence of a solution to a quasilinear elliptic system of the Lane, Emden and Fowler type

    NASA Astrophysics Data System (ADS)

    Covei, Dragoş-Pǎtru

    2012-11-01

    In this article, we give an algorithm to establish the existence of a solution for a quasilinear elliptic system. Our result is new and is based on a recent work of [R.J. Biezuner, J. Brown, G. Ercole and E.M. Martins, Computing the first eigenpair of the p-Laplacian via inverse iteration of sublinear supersolutions, J. Sci. Computation, 2011]. Such problems appear in boundary layer phenomena for viscous fluids, the equilibrium configuration of mass in a spherical cloud of gas and thermal explosions, as well as in other applications.

  18. Method of Harmonic Balance in Full-Scale-Model Tests of Electrical Devices

    NASA Astrophysics Data System (ADS)

    Gorbatenko, N. I.; Lankin, A. M.; Lankin, M. V.

    2017-01-01

    Methods for determining the weber-ampere characteristics of electrical devices are suggested, one based on the solution of the direct harmonic-balance problem and the other on the solution of the inverse harmonic-balance problem by the method of full-scale-model tests. The mathematical model of the device is constructed using the describing function and simplex optimization methods. The presented results of experimental applications of the method demonstrate its efficiency. An advantage of the method is its applicability to nondestructive inspection of electrical devices during their production and operation.

  19. Direct manipulation of wave amplitude and phase through inverse design of isotropic media

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Vial, B.; Horsley, S. A. R.; Philbin, T. G.; Hao, Y.

    2017-07-01

    In this article we propose a new design methodology allowing us to control both amplitude and phase of electromagnetic waves from a cylindrical incident wave. This results in isotropic materials and does not resort to transformation optics or its quasi-conformal approximations. Our method leads to two-dimensional isotropic, inhomogeneous material profiles of permittivity and permeability, to which a general class of scattering-free wave solutions arise. Our design is based on the separation of the complex wave solution into amplitude and phase. We give two types of examples to validate our methodology.

  20. Novel Dry-Type Glucose Sensor Based on a Metal-Oxide-Semiconductor Capacitor Structure with Horseradish Peroxidase + Glucose Oxidase Catalyzing Layer

    NASA Astrophysics Data System (ADS)

    Lin, Jing-Jenn; Wu, You-Lin; Hsu, Po-Yen

    2007-10-01

    In this paper, we present a novel dry-type glucose sensor based on a metal-oxide-semiconductor capacitor (MOSC) structure using SiO2 as a gate dielectric in conjunction with a horseradish peroxidase (HRP) + glucose oxidase (GOD) catalyzing layer. The tested glucose solution was dropped directly onto the window opened in the SiO2 layer, with a coating of the HRP + GOD catalyzing layer on top of the gate dielectric. From the capacitance-voltage (C-V) characteristics of the sensor, we found that the glucose solution can induce an inversion layer on the silicon surface, causing a gate leakage current to flow along the SiO2 surface. The change in gate current, ΔI, before and after the drop of glucose solution exhibits a near-linear relationship with increasing glucose concentration. The ΔI sensitivity is about 1.76 nA cm⁻² M⁻¹, and the current remains quite stable 20 min after the glucose solution is applied.

  1. Vacuum solutions of five dimensional Einstein equations generated by inverse scattering method. II. Production of the black ring solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomizawa, Shinya; Nozawa, Masato

    2006-06-15

    We study vacuum solutions of five-dimensional Einstein equations generated by the inverse scattering method. We reproduce the black ring solution which was found by Emparan and Reall by taking the Euclidean Levi-Civita metric plus one-dimensional flat space as a seed. This transformation consists of two successive processes; the first step is to perform the three-solitonic transformation of the Euclidean Levi-Civita metric with one-dimensional flat space as a seed. The resulting metric is the Euclidean C-metric with extra one-dimensional flat space. The second is to perform the two-solitonic transformation by taking it as a new seed. Our result may serve as a stepping stone to find new exact solutions in higher dimensions.

  2. Ultra-low density microcellular polymer foam and method

    DOEpatents

    Simandl, Ronald F.; Brown, John D.

    1996-01-01

    An ultra-low density, microcellular open-celled polymer foam and a method for making such foam. A polymer is dissolved in a heated solution consisting essentially of at least one solvent for the dissolution of the polymer in the heated solution and the phase inversion of the dissolved polymer to a liquid gel upon sufficient cooling of the heated solution. The heated solution is contained in a containment means provided with a nucleating promoting means having a relatively rough surface formed of fixed nucleating sites. The heated solution is cooled for a period of time sufficient to form a liquid gel of the polymer by phase inversion. From the gel, a porous foam having a density of less than about 12.0 mg/cm.sup.3 and open porosity provided by well interconnected strut morphology is formed.

  3. Ultra-low density microcellular polymer foam and method

    DOEpatents

    Simandl, R.F.; Brown, J.D.

    1996-03-19

    An ultra-low density, microcellular open-celled polymer foam and a method for making such foam are disclosed. A polymer is dissolved in a heated solution consisting essentially of at least one solvent for the dissolution of the polymer in the heated solution and the phase inversion of the dissolved polymer to a liquid gel upon sufficient cooling of the heated solution. The heated solution is contained in a containment means provided with a nucleating promoting means having a relatively rough surface formed of fixed nucleating sites. The heated solution is cooled for a period of time sufficient to form a liquid gel of the polymer by phase inversion. From the gel, a porous foam having a density of less than about 12.0 mg/cm{sup 3} and open porosity provided by well interconnected strut morphology is formed.

  4. Extended fault inversion with random slipmaps: A resolution test for the 2012 Mw 7.6 Nicoya, Costa Rica earthquake from a Popperian inversion strategy.

    NASA Astrophysics Data System (ADS)

    Ángel López Comino, José; Stich, Daniel; Ferreira, Ana M. G.; Morales Soto, José

    2015-04-01

    The inversion of seismic data for extended fault slip distributions provides us with detailed models of earthquake sources. The validity of the solutions depends on the fit between observed and synthetic seismograms generated with the source model. However, there may exist more than one model that fits the data in a similar way, leading to a multiplicity of solutions. This underdetermined problem has been analyzed and studied by several authors, who agree that inverting for a single best model may become overly dependent on the details of the procedure. We have addressed this resolution problem by using a global search that scans the solution domain using random slipmaps, applying a Popperian inversion strategy that involves the generation of a representative set of slip distributions. The proposed technique solves the forward problem for a large set of models, calculating their corresponding synthetic seismograms. Then, we propose to perform extended fault inversion through falsification, that is, to falsify inappropriate trial models that do not reproduce the data within a reasonable level of mismodelling. The set of surviving trial models forms our set of coequal solutions. In this way, any ambiguities that might exist can be detected by inspecting the solutions, allowing for an efficient assessment of the resolution. The solution set may contain only members with similar slip distributions, or else uncover some fundamental ambiguity such as, for example, different patterns of main slip patches or different patterns of rupture propagation. For a feasibility study, the proposed resolution test has been evaluated using teleseismic body wave recordings from the September 5th, 2012 Nicoya, Costa Rica earthquake. Note that the inversion strategy can be applied to any type of seismic, geodetic or tsunami data for which we can handle the forward problem. A 2D von Karman distribution is used to describe the spectrum of heterogeneity in slipmaps, and we generate possible models by spectral synthesis with random phase, keeping the rake angle, rupture velocity and slip velocity function fixed. The 2012 Nicoya earthquake turns out to be relatively well constrained from 50 teleseismic waveforms. The solution set contains 252 out of 10,000 trial models with normalized L1-fit within 5 percent of the global minimum. The set includes only similar solutions, each with a single centred slip patch and only minor differences. Uncertainties are related to the details of the slip maximum, including the amount of peak slip (2 m to 3.5 m), as well as the characteristics of peripheral slip below 1 m. Synthetic tests suggest that slip patterns like Nicoya's may be a fortunate case, while it may be more difficult to unambiguously reconstruct more distributed slip from teleseismic data.
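
    The slipmap-generation-and-falsification workflow described above can be condensed into a short sketch: 2D von Karman spectral synthesis with random phase, a forward solve for each trial model, and retention of all models whose misfit lies within a fixed band of the best one. Everything numerical below (grid size, correlation lengths, the linear forward operator G, the 5 percent band) is an illustrative assumption, not a value from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nz, ntrial = 32, 16, 2000            # fault grid and number of trial models (assumed)
ax, az, hurst = 20.0, 10.0, 0.75         # von Karman correlation lengths / Hurst exponent (assumed)

kx = np.fft.fftfreq(nx)[None, :]
kz = np.fft.fftfreq(nz)[:, None]
# 2D von Karman amplitude spectrum ~ (1 + (k*a)^2)^(-(H+1)/2) with anisotropic correlation lengths.
amp = (1.0 + (kx * ax) ** 2 + (kz * az) ** 2) ** (-(hurst + 1.0) / 2.0)

def random_slipmap():
    """Spectral synthesis: fixed von Karman amplitude, random phase, non-negative slip."""
    phase = np.exp(2j * np.pi * rng.random((nz, nx)))
    slip = np.real(np.fft.ifft2(amp * phase))
    slip -= slip.min()                    # enforce non-negative slip
    return slip / slip.sum()              # normalise to unit total slip (illustrative)

# Hypothetical linear forward operator mapping slip to waveform samples.
G = rng.standard_normal((200, nz * nx))
d_obs = G @ random_slipmap().ravel()      # synthetic "observed" data from a hidden true model

trials = [random_slipmap() for _ in range(ntrial)]
misfits = np.array([np.sum(np.abs(G @ m.ravel() - d_obs)) for m in trials])   # L1 misfit

# Popperian falsification: keep every trial model within 5% of the best misfit.
survivors = [m for m, f in zip(trials, misfits) if f <= 1.05 * misfits.min()]
print(f"{len(survivors)} of {ntrial} trial models survive falsification")
```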

  5. GRACE RL03-v2 monthly time series of solutions from CNES/GRGS

    NASA Astrophysics Data System (ADS)

    Lemoine, Jean-Michel; Bourgogne, Stéphane; Bruinsma, Sean; Gégout, Pascal; Reinquin, Franck; Biancale, Richard

    2015-04-01

    Based on GRACE GPS and KBR Level-1B.v2 data, as well as on LAGEOS-1/2 SLR data, CNES/GRGS published in 2014 the third full re-iteration of its GRACE gravity field solutions. This monthly time series of solutions, named RL03-v1, complete to spherical harmonic degree/order 80, has displayed interesting performance in terms of spatial resolution and signal amplitude compared to JPL/GFZ/CSR RL05. This is due to a careful selection of the background models (FES2014 ocean tides, ECMWF ERA-interim (atmosphere) and TUGO (non-IB ocean) "dealiasing" models every 3 hours) and to the choice of an original method for gravity field inversion: truncated SVD. As with the previous CNES/GRGS releases, no additional filtering of the solutions is necessary before using them. Some problems have, however, been identified in CNES/GRGS RL03-v1: an erroneous mass signal located in two small circular rings close to the Earth's poles, leading to the recommendation not to use RL03-v1 above 82° latitude North and South; and a weakness in the sectorial coefficients due to an excessive downweighting of the GRACE GPS observations. These two problems have been understood and addressed, leading to the computation of a corrected time series of solutions, RL03-v2. The corrective steps have been: to strengthen the determination of the very low degrees by adding Starlette and Stella SLR data to the normal equations; to increase the weight of the GRACE GPS observations; and to adopt a two-step approach for the computation of the solutions, first a Cholesky inversion for the low degrees, followed by a truncated SVD solution. The identification of these problems will be discussed and the performance of the new time series evaluated.
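
    Truncated SVD, the regularization named above for the gravity-field inversion, keeps only the largest singular values of the system and discards the poorly constrained directions. The sketch below shows the generic idea on a small, nearly rank-deficient linear system; the matrix, noise level and truncation level are arbitrary illustrations, not GRACE processing values.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Solve A x ≈ b keeping only the k largest singular values (truncated SVD)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(np.arange(s.size) < k, 1.0 / s, 0.0)   # zero out the small singular values
    return Vt.T @ (s_inv * (U.T @ b))

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 30))
A[:, -1] = A[:, 0] + 1e-8 * rng.standard_normal(50)   # make the system nearly rank-deficient
x_true = rng.standard_normal(30)
b = A @ x_true + 1e-3 * rng.standard_normal(50)       # noisy observations

x_full = np.linalg.lstsq(A, b, rcond=None)[0]          # unregularised least squares
x_tsvd = tsvd_solve(A, b, k=25)                        # truncation level chosen for illustration
print("full LS error:", np.linalg.norm(x_full - x_true))
print("TSVD error:   ", np.linalg.norm(x_tsvd - x_true))
```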

  6. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. A parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.
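
    At each control step, the inverse differential kinematics computed by such an architecture amounts to solving J(q) q̇ = ẋ for the joint rates. The serial (non-parallel) sketch below illustrates this resolved-rate step on a planar 3R arm; the link lengths and target velocity are assumptions, and a real PUMA implementation would use its full 6-DOF Jacobian.

```python
import numpy as np

def jacobian_planar_3r(q, link_lengths=(0.4, 0.3, 0.2)):
    """Geometric Jacobian (x, y, phi) of a planar 3R arm, a simple stand-in for the PUMA Jacobian."""
    l1, l2, l3 = link_lengths
    s1, s12, s123 = np.sin(q[0]), np.sin(q[0] + q[1]), np.sin(q[0] + q[1] + q[2])
    c1, c12, c123 = np.cos(q[0]), np.cos(q[0] + q[1]), np.cos(q[0] + q[1] + q[2])
    return np.array([
        [-l1 * s1 - l2 * s12 - l3 * s123, -l2 * s12 - l3 * s123, -l3 * s123],
        [ l1 * c1 + l2 * c12 + l3 * c123,  l2 * c12 + l3 * c123,  l3 * c123],
        [ 1.0,                             1.0,                   1.0      ],
    ])

q = np.array([0.3, -0.5, 0.8])           # current joint angles (rad), illustrative
x_dot = np.array([0.05, -0.02, 0.1])     # desired Cartesian velocity (m/s, m/s, rad/s)

# Resolved-rate step: joint rates from the inverse Jacobian.
q_dot = np.linalg.solve(jacobian_planar_3r(q), x_dot)
print("joint rates (rad/s):", q_dot)
```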

  7. Determination of Structural Parameters from EXAFS (Extended X-Ray Absorption Fine Structure): Application to Solutions and Catalysts.

    DTIC Science & Technology

    1984-05-23

    ...the disorder was accurately known. Inverse Transform: To isolate the EXAFS contribution due to a single feature in the Fourier transform, the inverse ... is associated with setting the "fold" components to zero in r-space. An inverse transform (real part) of the major feature of the Fig. 4 Fourier ... phase of the resulting inverse transform represents only the differences between the material being studied and the reference. This residual is ...

  8. A Solution of the System of Partial Differential Equations Which Describe the Propagation of Acoustic Pulses in Layered Fluid Media,

    DTIC Science & Technology

    ...transformed problem. Then, using several changes of integration variables, the inverse transform is obtained by direct identification, without recourse to the complex Laplace transform inversion integral. (Author)

  9. Systematic evaluation of sequential geostatistical resampling within MCMC for posterior sampling of near-surface geophysical inverse problems

    NASA Astrophysics Data System (ADS)

    Ruggeri, Paolo; Irving, James; Holliger, Klaus

    2015-08-01

    We critically examine the performance of sequential geostatistical resampling (SGR) as a model proposal mechanism for Bayesian Markov-chain-Monte-Carlo (MCMC) solutions to near-surface geophysical inverse problems. Focusing on a series of simple yet realistic synthetic crosshole georadar tomographic examples characterized by different numbers of data, levels of data error and degrees of model parameter spatial correlation, we investigate the efficiency of three different resampling strategies with regard to their ability to generate statistically independent realizations from the Bayesian posterior distribution. Quite importantly, our results show that, no matter what resampling strategy is employed, many of the examined test cases require an unreasonably high number of forward model runs to produce independent posterior samples, meaning that the SGR approach as currently implemented will not be computationally feasible for a wide range of problems. Although use of a novel gradual-deformation-based proposal method can help to alleviate these issues, it does not offer a full solution. Further, the nature of the SGR proposal strongly influences MCMC performance; however, no clear rule exists as to which set of inversion parameters and/or overall proposal acceptance rate allows for the most efficient implementation. We conclude that, although the SGR methodology is highly attractive as it allows for the consideration of complex geostatistical priors as well as conditioning to hard and soft data, further developments are necessary in the context of novel or hybrid MCMC approaches for it to be considered generally suitable for near-surface geophysical inversions.
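
    The core loop of such an MCMC scheme, a resampling proposal followed by a Metropolis accept/reject step, can be sketched in a few lines. The version below is deliberately simplified: it uses an uncorrelated Gaussian prior, so that redrawing a random subset of cells from their marginal prior is an exact conditional resample and the acceptance ratio reduces to the likelihood ratio. Real SGR instead conditions the resampled cells on the retained ones through kriging, and the forward model would be a georadar solver rather than the hypothetical linear operator used here.

```python
import numpy as np

rng = np.random.default_rng(2)
n_param, n_data = 50, 40
G = rng.standard_normal((n_data, n_param))      # hypothetical linear forward operator
m_true = rng.standard_normal(n_param)           # "true" model drawn from the prior
sigma = 0.1
d_obs = G @ m_true + sigma * rng.standard_normal(n_data)

def log_like(m):
    r = G @ m - d_obs
    return -0.5 * np.sum(r ** 2) / sigma ** 2

def mcmc(n_iter=20000, frac=0.1):
    """Metropolis-Hastings with a resampling proposal over a random fraction of cells."""
    m = rng.standard_normal(n_param)
    ll = log_like(m)
    accepted = 0
    for _ in range(n_iter):
        prop = m.copy()
        idx = rng.choice(n_param, size=max(1, int(frac * n_param)), replace=False)
        prop[idx] = rng.standard_normal(idx.size)    # redraw the chosen cells from the prior
        ll_prop = log_like(prop)
        if np.log(rng.random()) < ll_prop - ll:      # acceptance = likelihood ratio (see note above)
            m, ll = prop, ll_prop
            accepted += 1
    return m, accepted / n_iter

m_post, acc = mcmc()
print(f"acceptance rate: {acc:.2f}, data misfit: {np.linalg.norm(G @ m_post - d_obs):.3f}")
```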

  10. Using the sensors of shaft position for simulation of misalignments of shafting supports of turbounits

    NASA Astrophysics Data System (ADS)

    Kumenko, A. I.; Kostyukov, V. N.; Kuz'minykh, N. Yu.; Timin, A. V.; Boichenko, S. N.

    2017-09-01

    Examples of applying the method developed for the previously proposed concept of a system for monitoring the technical condition of a turbounit are presented. Methods for solving the inverse problem, namely the calculation of support misalignments from the measured positions of rotor pins in the bearing bores during turbounit operation, are demonstrated. Results of determining the static reactions of the supports under operational misalignments are presented. Examples of simulation and calculation of support misalignments are given for the three-bearing "high-pressure rotor - middle-pressure rotor" (HPR-MPR) system of a 250 MW turbounit and for the 14-support shafting of a 1000 MW turbounit. Calculated coefficients of the shafting stiffness matrix and model-based testing of the methods for solving the inverse problem are presented. The inverse problem, solved by inverting the shafting stiffness matrix, is shown to determine the corrective rotor alignments of multi-support shafting with high accuracy. The stiffness matrix can be recommended for analyzing the influence of displacements of one or several supports on changes in the support reactions of the turbounit shafting during adjustment after assembly or repair. It is proposed to use the considered methods of misalignment evaluation in systems that monitor changes in the mutual position of supports and in the alignment of rotors at the half-couplings of turbounits, especially for seismically dangerous regions and regions with increased sagging of foundations due to watering of soils.
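
    At its core, the inverse step described above is a linear solve: measured changes in the support reactions (obtained from rotor-position measurements) are related to support misalignments through the shafting stiffness matrix. The sketch below is a generic illustration, with an arbitrary symmetric positive-definite matrix standing in for the real stiffness matrix of the shafting model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_supports = 14                                   # e.g. the 14-support shafting mentioned above

# Stand-in stiffness matrix relating support displacements to reaction changes
# (a real matrix would come from the shafting beam model, not from random numbers).
A = rng.standard_normal((n_supports, n_supports))
K = A @ A.T + n_supports * np.eye(n_supports)     # symmetric positive definite

misalign_true = 1e-3 * rng.standard_normal(n_supports)   # support misalignments (m)
reactions = K @ misalign_true                             # resulting changes in support reactions (N)

# Inverse problem: recover the misalignments from the measured reaction changes.
misalign_est = np.linalg.solve(K, reactions)
print("max recovery error (m):", np.abs(misalign_est - misalign_true).max())
```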

  11. Computational inverse methods of heat source in fatigue damage problems

    NASA Astrophysics Data System (ADS)

    Chen, Aizhou; Li, Yuan; Yan, Bo

    2018-04-01

    Fatigue dissipation energy is currently a research focus in the field of fatigue damage. Introducing inverse heat-source methods into the parameter identification of fatigue dissipation energy models is a new approach to calculating this energy. This paper reviews research advances in computational inverse methods for heat sources and in regularization techniques for solving the inverse problem, as well as existing heat-source solution methods for the fatigue process. It discusses the prospects of applying inverse heat-source methods in the fatigue damage field and lays a foundation for further improving the effectiveness of rapid fatigue dissipation energy prediction.
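
    Inverse heat-source estimation of the kind surveyed here is typically ill-posed and is therefore solved with regularization. A minimal Tikhonov sketch on a generic linear source-to-temperature operator is given below; the smoothing kernel, noise level and regularization parameter are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n_src, n_obs = 60, 80
x_obs = np.linspace(0.0, 1.0, n_obs)[:, None]
x_src = np.linspace(0.0, 1.0, n_src)[None, :]
A = np.exp(-0.5 * ((x_obs - x_src) / 0.05) ** 2)             # smoothing "heat" kernel (assumed)

q_true = np.exp(-0.5 * ((x_src.ravel() - 0.4) / 0.08) ** 2)   # true heat source
T_obs = A @ q_true + 0.01 * rng.standard_normal(n_obs)        # noisy temperature observations

def tikhonov(A, b, lam):
    """Minimise ||A q - b||^2 + lam * ||q||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

q_naive = np.linalg.lstsq(A, T_obs, rcond=None)[0]   # unregularised solution, typically noisy
q_reg = tikhonov(A, T_obs, lam=1e-3)                 # lam would be chosen by, e.g., an L-curve in practice
print("naive error:   ", np.linalg.norm(q_naive - q_true))
print("Tikhonov error:", np.linalg.norm(q_reg - q_true))
```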

  12. Resolution and Trade-offs in Finite Fault Inversions for Large Earthquakes Using Teleseismic Signals (Invited)

    NASA Astrophysics Data System (ADS)

    Lay, T.; Ammon, C. J.

    2010-12-01

    An unusually large number of widely distributed great earthquakes have occurred in the past six years, with extensive data sets of teleseismic broadband seismic recordings being available in near-real time for each event. Numerous research groups have implemented finite-fault inversions that utilize the rapidly accessible teleseismic recordings, and slip models are regularly determined and posted on websites for all major events. The source inversion validation project has already demonstrated that for events of all sizes there is often significant variability in models for a given earthquake. Some of these differences can be attributed to variations in data sets and procedures used for including signals with very different bandwidth and signal characteristics into joint inversions. Some differences can also be attributed to choice of velocity structure and data weighting. However, our experience is that some of the primary causes of solution variability involve rupture model parameterization and imposed kinematic constraints such as rupture velocity and subfault source time function description. In some cases it is viable to rapidly perform separate procedures such as teleseismic array back-projection or surface wave directivity analysis to reduce the uncertainties associated with rupture velocity, and it is possible to explore a range of subfault source parameterizations to place some constraints on which model features are robust. In general, many such tests are performed, but not fully described, with single model solutions being posted or published, with limited insight into solution confidence being conveyed. Using signals from recent great earthquakes in the Kuril Islands, Solomon Islands, Peru, Chile and Samoa, we explore issues of uncertainty and robustness of solutions that can be rapidly obtained by inversion of teleseismic signals. Formalizing uncertainty estimates remains a formidable undertaking and some aspects of that challenge will be addressed.

  13. Joint Inversion of Vp, Vs, and Resistivity at SAFOD

    NASA Astrophysics Data System (ADS)

    Bennington, N. L.; Zhang, H.; Thurber, C. H.; Bedrosian, P. A.

    2010-12-01

    Seismic and resistivity models at SAFOD have been derived from separate inversions that show significant spatial similarity between the main model features. Previous work [Zhang et al., 2009] used cluster analysis to make lithologic inferences from trends in the seismic and resistivity models. We have taken this one step further by developing a joint inversion scheme that uses the cross-gradient penalty function to achieve structurally similar Vp, Vs, and resistivity images that adequately fit the seismic and magnetotelluric (MT) data without forcing model similarity where none exists. The new inversion code, tomoDDMT, merges the seismic inversion code tomoDD [Zhang and Thurber, 2003] and the MT inversion code Occam2DMT [Constable et al., 1987; deGroot-Hedlin and Constable, 1990]. We are exploring the utility of the cross-gradient penalty function in improving models of fault-zone structure at SAFOD on the San Andreas Fault in the Parkfield, California area. Two different sets of end-member starting models are being tested. One set is the separately inverted Vp, Vs, and resistivity models. The other set consists of simple, geologically based block models developed from borehole information at the SAFOD drill site and a simplified version of features seen in geophysical models at Parkfield. For both starting models, our preliminary results indicate that the inversion produces a converging solution with resistivity, seismic, and cross-gradient misfits decreasing over successive iterations. We also compare the jointly inverted Vp, Vs, and resistivity models to borehole information from SAFOD to provide a "ground truth" comparison.
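
    The structural coupling used in this kind of joint inversion is the cross-gradient, t = ∇m₁ × ∇m₂, which vanishes wherever the two models' gradients are parallel or one of them is zero. A small numpy sketch of the 2D cross-gradient and its summed squared value, on illustrative velocity and log-resistivity grids, follows; the models are synthetic stand-ins, not the SAFOD models.

```python
import numpy as np

# Illustrative 2D models on the same grid: a velocity model and a log-resistivity model.
ny, nx = 60, 80
y, x = np.mgrid[0:ny, 0:nx].astype(float)
vp = 3.0 + 0.01 * y + 0.5 * np.exp(-((x - 40.0) ** 2 + (y - 30.0) ** 2) / 100.0)
log_rho = 2.0 - 0.3 * np.exp(-((x - 40.0) ** 2 + (y - 30.0) ** 2) / 100.0)

def cross_gradient(m1, m2):
    """2D cross-gradient t = dm1/dy * dm2/dx - dm1/dx * dm2/dy (out-of-plane component)."""
    dm1_dy, dm1_dx = np.gradient(m1)
    dm2_dy, dm2_dx = np.gradient(m2)
    return dm1_dy * dm2_dx - dm1_dx * dm2_dy

t = cross_gradient(vp, log_rho)
# The summed squared cross-gradient enters the joint objective as the structural penalty term.
print("cross-gradient penalty:", np.sum(t ** 2))
```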

  14. Porosity Estimation By Artificial Neural Networks Inversion . Application to Algerian South Field

    NASA Astrophysics Data System (ADS)

    Eladj, Said; Aliouane, Leila; Ouadfeul, Sid-Ali

    2017-04-01

    One of the main current challenges for geophysicists is the discovery and study of stratigraphic traps; this is a difficult task that requires a very fine analysis of the seismic data. Seismic data inversion allows lithological and stratigraphic information to be obtained for reservoir characterization. However, when solving the inverse problem we encounter difficulties such as non-existence and non-uniqueness of the solution, as well as instability of the processing algorithm. Therefore, uncertainties in the data and the non-linearity of the relationship between the data and the parameters must be taken seriously. In this case, artificial intelligence techniques such as Artificial Neural Networks (ANNs) are used to resolve this ambiguity; this can be done by integrating data on different physical properties, which requires supervised learning methods. In this work, we invert the 3D seismic cube for acoustic impedance using the colored inversion method; the acoustic impedance volume resulting from this first step is then introduced as the input of a model-based inversion, which allows the porosity volume to be calculated using a Multilayer Perceptron Artificial Neural Network. Application to an Algerian South hydrocarbon field clearly demonstrates the power of the proposed processing technique to predict porosity from seismic data; the obtained results can be used for reserve estimation, permeability prediction, recovery factor estimation and reservoir monitoring. Keywords: Artificial Neural Networks, inversion, non-uniqueness, nonlinear, 3D porosity volume, reservoir characterization.
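
    The porosity-prediction step can be sketched with a standard multilayer perceptron regressor. The example below uses scikit-learn and a synthetic impedance-porosity relationship; the network architecture and the data are assumptions made for illustration and are not the configuration or data of the field study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
# Synthetic training pairs: acoustic impedance (arbitrary units) vs. porosity (fraction),
# following an assumed decreasing trend with noise (not the Algerian field data).
impedance = rng.uniform(6000.0, 12000.0, size=(2000, 1))
porosity = 0.35 - 2.5e-5 * (impedance[:, 0] - 6000.0) + 0.02 * rng.standard_normal(2000)

X_train, X_test, y_train, y_test = train_test_split(impedance, porosity, random_state=0)

# Multilayer perceptron regressor standing in for the porosity-prediction network.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("R^2 on held-out samples (synthetic):", model.score(X_test, y_test))

# The trained network would then be applied sample by sample to the inverted impedance cube.
print("predicted porosity at AI = 8000 and 10000:", model.predict([[8000.0], [10000.0]]))
```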

  15. Electro-magneto interaction in fractional Green-Naghdi thermoelastic solid with a cylindrical cavity

    NASA Astrophysics Data System (ADS)

    Ezzat, M. A.; El-Bary, A. A.

    2018-01-01

    A unified mathematical model of Green-Naghdi (GN) thermoelasticity theories, based on a fractional time-derivative of heat transfer, is constructed. The model is applied to solve a one-dimensional problem of a perfectly conducting unbounded body with a cylindrical cavity subjected to sinusoidal pulse heating in the presence of an axial uniform magnetic field. Laplace transform techniques are used to obtain the general analytical solutions in the Laplace domain, and inverse Laplace transforms based on Fourier expansion techniques are numerically implemented to obtain the numerical solutions in the time domain. Comparisons are made with the results predicted by the two theories. The effects of the fractional derivative parameter on the thermoelastic fields for the different theories are discussed.
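
    The numerical Laplace inversion used in this kind of study relies on a Fourier-series expansion of the Bromwich integral. A generic sketch of that expansion is given below, checked against a transform with a known inverse; the contour shift sigma and the truncation level are tuning assumptions, and practical implementations usually add series acceleration for better accuracy.

```python
import numpy as np

def inverse_laplace_fourier(F, t, sigma=1.0, T=None, n_terms=5000):
    """Approximate f(t) from its Laplace transform F(s) via a Fourier-series
    expansion of the Bromwich integral (trapezoidal rule with step pi/T)."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    if T is None:
        T = 2.0 * t.max()
    k = np.arange(1, n_terms + 1)
    s = sigma + 1j * k * np.pi / T
    series = 0.5 * np.real(F(sigma)) + np.sum(
        np.real(F(s)[None, :] * np.exp(1j * np.outer(t, k) * np.pi / T)), axis=1)
    return np.exp(sigma * t) / T * series

# Check against a known pair: F(s) = 1/(s + 1)  <->  f(t) = exp(-t).
ts = np.array([0.5, 1.0, 2.0])
approx = inverse_laplace_fourier(lambda s: 1.0 / (s + 1.0), ts)
print(np.column_stack([approx, np.exp(-ts)]))
```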

  16. Recently Developed Formulations of the Inverse Problem in Acoustics and Electromagnetics

    DTIC Science & Technology

    1974-12-01

    solution for scattering by a sphere. The inverse transform of irs?(K) is calculated, this function yielding --y (x). Figure 4.2 is a graph of this...time or decays "sufficiently rapidly", then T+- o. In this case, we may let T -1 in (8.9) and obtain the inverse transform (k = w/c) of (5.6) as the

  17. Forward and Inverse Modeling of Self-potential. A Tomography of Groundwater Flow and Comparison Between Deterministic and Stochastic Inversion Methods

    NASA Astrophysics Data System (ADS)

    Quintero-Chavarria, E.; Ochoa Gutierrez, L. H.

    2016-12-01

    Applications of the self-potential method in the fields of hydrogeology and environmental sciences have seen significant developments during the last two decades, with strong use in the identification of groundwater flows. Although only a few authors deal with the solution of the forward problem, especially in the geophysics literature, different inversion procedures are currently being developed; in most cases, however, they are compared with unconventional groundwater velocity fields and restricted to structured meshes. This research solves the forward problem with the finite element method, using St. Venant's principle to transform a point dipole, which is the field generated by a single vector, into a distribution of electrical monopoles. Then, two simple aquifer models were generated with specific boundary conditions, and head potentials, velocity fields and electric potentials in the medium were computed. With the model's surface electric potential, the inverse problem is solved to retrieve the source of the electric potential (the vector field associated with groundwater flow) using deterministic and stochastic approaches. The first approach was carried out by implementing a Tikhonov regularization with a stabilized operator adapted to the finite element mesh, while for the second a hierarchical Bayesian model based on Markov chain Monte Carlo (McMC) and Markov Random Fields (MRF) was constructed. For all implemented methods, the direct and inverse models were contrasted in two ways: 1) the shape and distribution of the vector field, and 2) the histogram of magnitudes. Finally, it was concluded that inversion procedures are improved when the velocity field's behavior is considered; thus, the deterministic method is more suitable for unconfined aquifers than for confined ones. McMC has restricted applications and requires a lot of information (particularly in potential fields), while MRF has a remarkable response, especially when dealing with confined aquifers.

  18. Coupled Hydrogeophysical Inversion and Hydrogeological Data Fusion

    NASA Astrophysics Data System (ADS)

    Cirpka, O. A.; Schwede, R. L.; Li, W.

    2012-12-01

    Tomographic geophysical monitoring methods give the opportunity to observe hydrogeological tests at higher spatial resolution than is possible with classical hydraulic monitoring tools. This has been demonstrated in a substantial number of studies in which electrical resistivity tomography (ERT) has been used to monitor salt-tracer experiments. It is now accepted that inversion of such data sets requires a fully coupled framework, explicitly accounting for the hydraulic processes (groundwater flow and solute transport), the relationship between solute and geophysical properties (a petrophysical relationship such as Archie's law), and the governing equations of the geophysical surveying techniques (e.g., the Poisson equation) as a consistent coupled system. These data sets can be amended with data from other, more direct, hydrogeological tests to infer the distribution of hydraulic aquifer parameters. In the inversion framework, meaningful condensation of data not only contributes to inversion efficiency but also increases the stability of the inversion. In particular, transient concentration data themselves only weakly depend on hydraulic conductivity, and model improvement using gradient-based methods is only possible when a substantial agreement between measurements and model output already exists. The latter also holds when concentrations are monitored by ERT. Tracer arrival times, by contrast, show high sensitivity and a more monotonic dependence on hydraulic conductivity than the concentrations themselves. Thus, even without using temporal-moment generating equations, inverting travel times rather than concentrations or the related geoelectrical signals themselves is advantageous. We have applied this approach to concentrations measured directly or via ERT, and to heat-tracer data. We present a consistent inversion framework including temporal moments of concentrations, geoelectrical signals obtained during salt-tracer tests, drawdown data from hydraulic tomography and flowmeter measurements to identify mainly the hydraulic-conductivity distribution. By stating the inversion as a geostatistical conditioning problem, we obtain parameter sets together with their correlated uncertainty. While we have applied the quasi-linear geostatistical approach as the inverse kernel, other methods, such as ensemble Kalman methods, may suit the same purpose, particularly when many data points are to be included. In order to identify 3-D fields, discretized by about 50 million grid points, we use the high-performance-computing framework DUNE to solve the involved partial differential equations on a midrange computer cluster. We have quantified the worth of different data types in these inference problems. In practical applications, the constitutive relationships between geophysical, thermal, and hydraulic properties can pose a problem, requiring additional inversion. However, poorly constrained transient boundary conditions may put inversion efforts on larger (e.g. regional) scales even more into question. We envision that future hydrogeophysical inversion efforts will target boundary conditions, such as groundwater recharge rates, in conjunction with, or instead of, aquifer parameters. By this, the distinction between data assimilation and parameter estimation will gradually vanish.
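
    The travel-time data favoured above are temporal moments of the concentration breakthrough curve, m_k = ∫ t^k c(t) dt, with the mean arrival time given by m₁/m₀. The short sketch below computes them by trapezoidal integration on a synthetic breakthrough curve; the curve itself is an arbitrary illustration, not data from a field test.

```python
import numpy as np

# Synthetic breakthrough curve at one observation point (time in days, concentration in arbitrary units).
t = np.linspace(0.0, 50.0, 501)
c = np.exp(-0.5 * ((t - 12.0) / 3.0) ** 2)          # illustrative Gaussian-shaped tracer pulse

def temporal_moment(t, c, k):
    """k-th temporal moment m_k = integral of t**k * c(t) dt (trapezoidal rule)."""
    return np.trapz(t ** k * c, t)

m0 = temporal_moment(t, c, 0)
m1 = temporal_moment(t, c, 1)
m2 = temporal_moment(t, c, 2)

mean_arrival = m1 / m0                              # characteristic travel time used as an inversion datum
spread = m2 / m0 - mean_arrival ** 2                # variance of arrival times
print(f"mean arrival time: {mean_arrival:.2f} d, variance: {spread:.2f} d^2")
```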

  19. Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.

    2008-05-01

    The estimation of area source pollutant strength is a relevant issue for the atmospheric environment. This constitutes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed with the delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization approach, whose objective function is given by the squared difference between the measured pollutant concentrations and the mathematical model predictions, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.

  20. Exact finite element method analysis of viscoelastic tapered structures to transient loads

    NASA Technical Reports Server (NTRS)

    Spyrakos, Constantine Chris

    1987-01-01

    A general method is presented for determining the dynamic torsional/axial response of linear structures composed of either tapered bars or shafts to transient excitations. The method consists of formulating and solving the dynamic problem in the Laplace transform domain by the finite element method and obtaining the response by a numerical inversion of the transformed solution. The derivation of the torsional and axial stiffness matrices is based on the exact solution of the transformed governing equation of motion, and it consequently leads to the exact solution of the problem. The solution permits treatment of the most practical cases of linear tapered bars and shafts, and employs modeling of structures with only one element per member which reduces the number of degrees of freedom involved. The effects of external viscous or internal viscoelastic damping are also taken into account.

  1. Inversion of Zeeman polarization for solar magnetic field diagnostics

    NASA Astrophysics Data System (ADS)

    Derouich, M.

    2017-05-01

    The topic of magnetic field diagnostics with the Zeeman effect is currently being actively discussed. There are some well-tested inversion codes available to the spectropolarimetry community, and their application has allowed a better understanding of the magnetism of the solar atmosphere. In this context, we propose an inversion technique associated with a new numerical code. The inversion procedure is promising and particularly successful for interpreting the Stokes profiles in a quick and sufficiently precise way. In our inversion, we fit a part of each Stokes profile around a target wavelength, and then determine the magnetic field as a function of wavelength, which is equivalent to obtaining the magnetic field as a function of the height of line formation. To test the performance of the new numerical code, we employed a "hare and hound" approach, comparing an exact solution (called the input) with the solution obtained by the code (called the output). The precision of the code is also checked by comparing our results to the ones obtained with the HAO MERLIN code. The inversion code has been applied to synthetic Stokes profiles of the Na D1 line available in the literature. We investigated the limitations in recovering the input field in the case of noisy data. As an application, we applied our inversion code to the polarization profiles of the Fe I λ6302.5 Å line observed at IRSOL in Locarno.

  2. A recursive algorithm for the three-dimensional imaging of brain electric activity: Shrinking LORETA-FOCUSS.

    PubMed

    Liu, Hesheng; Gao, Xiaorong; Schimpf, Paul H; Yang, Fusheng; Gao, Shangkai

    2004-10-01

    Estimation of intracranial electric activity from the scalp electroencephalogram (EEG) requires a solution to the EEG inverse problem, which is known to be an ill-conditioned problem. In order to yield a unique solution, weighted minimum-norm least-squares (MNLS) inverse methods are generally used. This paper proposes a recursive algorithm, termed Shrinking LORETA-FOCUSS, which combines and expands upon the central features of two well-known weighted MNLS methods: LORETA and FOCUSS. This recursive algorithm makes iterative adjustments to the solution space as well as the weighting matrix, thereby dramatically reducing the computational load and increasing local source resolution. Simulations are conducted on a 3-shell spherical head model registered to the Talairach human brain atlas. A comparative study of four different inverse methods, standard weighted minimum norm, L1-norm, LORETA-FOCUSS and Shrinking LORETA-FOCUSS, is presented. The results demonstrate that Shrinking LORETA-FOCUSS is able to reconstruct a three-dimensional source distribution with smaller localization and energy errors compared to the other methods.
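
    The weighted minimum-norm least-squares estimate that both LORETA and FOCUSS build on has the closed form s = W Lᵀ (L W Lᵀ + λI)⁻¹ b, where L is the lead-field matrix, W a diagonal weighting and b the sensor data. A generic numpy sketch follows; the lead field, weights and regularization parameter are random or illustrative stand-ins, and FOCUSS-style algorithms would update W iteratively.

```python
import numpy as np

rng = np.random.default_rng(6)
n_sensors, n_sources = 32, 500
L = rng.standard_normal((n_sensors, n_sources))      # stand-in lead-field matrix
W = np.ones(n_sources)                               # diagonal source weighting (updated iteratively in FOCUSS)

s_true = np.zeros(n_sources)
s_true[[40, 41, 42]] = [1.0, 2.0, 1.0]               # sparse "active" sources
b = L @ s_true + 0.01 * rng.standard_normal(n_sensors)

def weighted_min_norm(L, b, W, lam=1e-2):
    """Weighted minimum-norm least-squares estimate s = W L^T (L W L^T + lam I)^-1 b."""
    LW = L * W                                        # equals L @ diag(W)
    gram = LW @ L.T + lam * np.eye(L.shape[0])
    return W * (L.T @ np.linalg.solve(gram, b))

s_hat = weighted_min_norm(L, b, W)
print("indices of the largest estimated sources:", np.argsort(np.abs(s_hat))[-5:])
```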

  3. Colloidal inverse bicontinuous cubic membranes of block copolymers with tunable surface functional groups

    NASA Astrophysics Data System (ADS)

    La, Yunju; Park, Chiyoung; Shin, Tae Joo; Joo, Sang Hoon; Kang, Sebyung; Kim, Kyoung Taek

    2014-06-01

    Analogous to the complex membranes found in cellular organelles, such as the endoplasmic reticulum, the inverse cubic mesophases of lipids and their colloidal forms (cubosomes) possess internal networks of water channels arranged in crystalline order, which provide a unique nanospace for membrane-protein crystallization and guest encapsulation. Polymeric analogues of cubosomes formed by the direct self-assembly of block copolymers in solution could provide new polymeric mesoporous materials with a three-dimensionally organized internal maze of large water channels. Here we report the self-assembly of amphiphilic dendritic-linear block copolymers into polymer cubosomes in aqueous solution. The presence of precisely defined bulky dendritic blocks drives the block copolymers to form spontaneously highly curved bilayers in aqueous solution. This results in the formation of colloidal inverse bicontinuous cubic mesophases. The internal networks of water channels provide a high surface area with tunable surface functional groups that can serve as anchoring points for large guests such as proteins and enzymes.

  4. Effective one-dimensional approach to the source reconstruction problem of three-dimensional inverse optoacoustics

    NASA Astrophysics Data System (ADS)

    Stritzel, J.; Melchert, O.; Wollweber, M.; Roth, B.

    2017-09-01

    The direct problem of optoacoustic signal generation in biological media consists of solving an inhomogeneous three-dimensional (3D) wave equation for an initial acoustic stress profile. In contrast, the more challenging inverse problem requires the reconstruction of the initial stress profile from a proper set of observed signals. In this article, we consider an effectively 1D approach, based on the assumption of a Gaussian transverse irradiation source profile and plane acoustic waves, in which the effects of acoustic diffraction are described in terms of a linear integral equation. The respective inverse problem along the beam axis can be cast into a Volterra integral equation of the second kind, for which we explore efficient numerical schemes in order to reconstruct initial stress profiles from observed signals, constituting methodical progress on the computational aspects of optoacoustics. In this regard, we explore the validity as well as the limits of the inversion scheme via numerical experiments, with parameters geared toward actual optoacoustic problem instances. The considered inversion input consists of synthetic data, obtained in terms of the effectively 1D approach, and, more generally, a solution of the 3D optoacoustic wave equation. Finally, we also analyze the effect of noise and different detector-to-sample distances on the optoacoustic signal and the reconstructed pressure profiles.
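
    The axial inverse problem described above is cast as a Volterra integral equation of the second kind, u(t) = f(t) + ∫₀ᵗ K(t,s) u(s) ds, which can be marched forward in time. Below is a generic trapezoidal-rule solver checked against a case with a known analytical solution; the kernel used here is an illustration, not the optoacoustic diffraction kernel of the article.

```python
import numpy as np

def solve_volterra2(f, K, t):
    """Solve u(t) = f(t) + int_0^t K(t, s) u(s) ds on the grid t using the trapezoidal rule."""
    n = len(t)
    h = t[1] - t[0]
    u = np.zeros(n)
    u[0] = f(t[0])
    for i in range(1, n):
        acc = 0.5 * K(t[i], t[0]) * u[0]
        acc += sum(K(t[i], t[j]) * u[j] for j in range(1, i))
        u[i] = (f(t[i]) + h * acc) / (1.0 - 0.5 * h * K(t[i], t[i]))
    return u

# Check: u(t) = 1 + int_0^t u(s) ds has the exact solution u(t) = exp(t).
t = np.linspace(0.0, 1.0, 101)
u = solve_volterra2(lambda x: 1.0, lambda ti, si: 1.0, t)
print("max error vs exp(t):", np.max(np.abs(u - np.exp(t))))
```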

  5. Whole head quantitative susceptibility mapping using a least-norm direct dipole inversion method.

    PubMed

    Sun, Hongfu; Ma, Yuhan; MacDonald, M Ethan; Pike, G Bruce

    2018-06-15

    A new dipole field inversion method for whole head quantitative susceptibility mapping (QSM) is proposed. Instead of performing background field removal and local field inversion sequentially, the proposed method performs dipole field inversion directly on the total field map in a single step. To aid this under-determined and ill-posed inversion process and obtain robust QSM images, Tikhonov regularization is implemented to seek the local susceptibility solution with the least norm (LN) using the L-curve criterion. The proposed LN-QSM does not require brain edge erosion, thereby preserving the cerebral cortex in the final images. This should improve its applicability for QSM-based cortical grey matter measurement, functional imaging and venography of the full brain. Furthermore, LN-QSM also enables susceptibility mapping of the entire head without the need for brain extraction, which makes QSM reconstruction more automated and less dependent on intermediate pre-processing methods and their associated parameters. It is shown that the proposed LN-QSM method reduced errors in a numerical phantom simulation, improved accuracy in a gadolinium phantom experiment, and suppressed artefacts in nine subjects, as compared to two-step and other single-step QSM methods. Measurements of deep grey matter and skull susceptibilities from LN-QSM are consistent with established reconstruction methods.

  6. Using the ARTMO toolbox for automated retrieval of biophysical parameters through radiative transfer model inversion: Optimizing LUT-based inversion

    NASA Astrophysics Data System (ADS)

    Verrelst, J.; Rivera, J. P.; Leonenko, G.; Alonso, L.; Moreno, J.

    2012-04-01

    Radiative transfer (RT) modeling plays a key role in earth observation (EO) because it is needed to design EO instruments and to develop and test inversion algorithms. The inversion of an RT model is considered a successful approach for the retrieval of biophysical parameters because it is physically based and generally applicable. However, to the broader community this approach is considered laborious because of its many processing steps, and expert knowledge is required to realize precise model parameterization. We have recently developed a radiative transfer toolbox, ARTMO (Automated Radiative Transfer Models Operator), with the purpose of providing in a graphical user interface (GUI) the essential models and tools required for terrestrial EO applications such as model inversion. In short, the toolbox allows the user: i) to choose between various plant leaf and canopy RT models (e.g. models from the PROSPECT and SAIL family, FLIGHT), ii) to choose between spectral band settings of various air- and space-borne sensors or to define custom sensor settings, iii) to simulate a massive number of spectra based on a look-up table (LUT) approach and store them in a relational database, iv) to plot spectra of multiple models and compare them with measured spectra, and finally, v) to run model inversion against optical imagery given several cost options and accuracy estimates. In this work ARTMO was used to tackle some well-known problems related to model inversion. According to the Hadamard conditions, mathematical models of physical phenomena are mathematically invertible if the solution of the inverse problem exists, is unique and depends continuously on the data. These conditions are not always met because of the large number of unknowns, and different strategies have been proposed to overcome this problem. Several of these strategies have been implemented in ARTMO and were analyzed here to optimize inversion performance. Data came from the SPARC-2003 dataset, which was acquired at the agricultural test site of Barrax, Spain. LUTs were created using the 4SAIL and FLIGHT models and were inverted against CHRIS data in order to retrieve maps of chlorophyll content (Chl) and leaf area index (LAI). The following inversion steps have been optimized: 1. Cost function. The performance of about 50 different cost functions (i.e. minimum distance functions) was compared. Remarkably, in none of the studied cases did the widely used root mean square error (RMSE) lead to the most accurate results. Depending on the retrieved parameter, more successful functions were 'Sharma and Mittal', 'Shannon's entropy', 'Hellinger distance' and 'Pearson's chi-square'. 2. Gaussian noise. Earth observation data typically encompass a certain degree of noise due to errors related to radiometric and geometric processing. In all cases, adding 5% Gaussian noise to the simulated spectra led to more accurate retrievals than without noise. 3. Average of multiple best solutions. Because multiple parameter combinations may lead to the same spectra, a way to overcome this problem is to search not for the single best match but for a percentage of best matches. Optimized retrievals were found when averaging over the 7% (Chl) to 10% (LAI) top best matches. 4. Integration of estimates. The option is provided to integrate estimates of biochemical contents at the canopy level (e.g., total chlorophyll: Chl × LAI, or water: Cw × LAI), which can lead to increased robustness and accuracy. 5. Class-based inversion. This option is probably ARTMO's most powerful feature, as it allows model parameterization to depend on the image's land cover classes (e.g. different soil or vegetation types). Class-based inversion can lead to considerably improved accuracies compared to one generic class. Results suggest that 4SAIL and FLIGHT performed alike for Chl but not for LAI. While both models rely on the leaf model PROSPECT for Chl retrieval, their different nature (e.g. numerical vs. ray tracing) may cause the retrieval of structural parameters such as LAI to differ. Finally, it should be noted that the whole analysis can be performed intuitively in the toolbox. ARTMO is freely available to the EO community for further development. Expressions of interest are welcome and should be directed to the corresponding author.
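
    The LUT inversion steps discussed above (choice of cost function, added noise on the simulated spectra, and averaging over a percentage of best matches) can be condensed into a few lines. The sketch below uses RMSE as the cost function and synthetic "spectra" from a toy two-parameter model; all numbers are illustrative assumptions with no relation to the SPARC/CHRIS data or to ARTMO's internals.

```python
import numpy as np

rng = np.random.default_rng(7)
wavelengths = np.linspace(400.0, 900.0, 62)          # pretend sensor bands (nm)

def toy_rt_model(chl, lai):
    """Toy stand-in for a radiative transfer model producing a reflectance 'spectrum'."""
    red_edge = 1.0 / (1.0 + np.exp(-(wavelengths - 700.0 - 2.0 * chl) / 15.0))
    return lai / (lai + 1.0) * (0.05 + 0.45 * red_edge)

# Build the look-up table over a parameter grid.
chl_grid = np.linspace(10.0, 80.0, 40)
lai_grid = np.linspace(0.5, 6.0, 40)
params = np.array([(c, l) for c in chl_grid for l in lai_grid])
lut = np.array([toy_rt_model(c, l) for c, l in params])
lut += 0.005 * rng.standard_normal(lut.shape)         # add noise to the simulated spectra (step 2)

# "Observed" spectrum generated from hidden true parameters.
obs = toy_rt_model(45.0, 3.2) + 0.005 * rng.standard_normal(wavelengths.size)

cost = np.sqrt(np.mean((lut - obs) ** 2, axis=1))     # RMSE cost function (step 1)
best = np.argsort(cost)[: int(0.07 * len(cost))]      # keep the top 7% of matches (step 3)
chl_est, lai_est = params[best].mean(axis=0)          # average the retained solutions
print(f"estimated Chl ≈ {chl_est:.1f}, LAI ≈ {lai_est:.2f}")
```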

  7. A homologous series of regioselectively tetradeprotonated group 8 metallocenes: new inverse crown ring compounds synthesized via a mixed sodium-magnesium tris(diisopropylamide) synergic base.

    PubMed

    Andrikopoulos, Prokopis C; Armstrong, David R; Clegg, William; Gilfillan, Carly J; Hevia, Eva; Kennedy, Alan R; Mulvey, Robert E; O'Hara, Charles T; Parkinson, John A; Tooke, Duncan M

    2004-09-22

    Subjecting ferrocene, ruthenocene, or osmocene to the synergic amide base sodium-magnesium tris(diisopropylamido) affords a unique homologous series of metallocene derivatives of general formula [(M(C(5)H(3))(2))Na(4)Mg(4)(i-Pr(2)N)(8)] (where M = Fe (1), Ru (2), or Os (3)). X-ray crystallographic studies of 1-3 reveal a common molecular "inverse crown" structure comprising a 16-membered [(NaNMgN)(4)](4+) "host" ring and a metallocenetetraide [M(C(5)H(3))(2)](4-) "guest" core, the cleaved protons of which are lost selectively from the 1, 1', 3, and 3'-positions. Variable-temperature NMR spectroscopic studies indicate that 1, 2, and 3 each exist as two distinct interconverting conformers in arene solution, the rates of exchange of which have been calculated using coalescence and EXSY NMR measurements.

  8. Determination of Diffusion Coefficients in Cement-Based Materials: An Inverse Problem for the Nernst-Planck and Poisson Models

    NASA Astrophysics Data System (ADS)

    Szyszkiewicz-Warzecha, Krzysztof; Jasielec, Jerzy J.; Fausek, Janusz; Filipek, Robert

    2016-08-01

    Transport properties of ions have a significant impact on the possibility of rebar corrosion; thus, knowledge of the diffusion coefficient is important for reinforced concrete durability. Numerous tests for the determination of diffusion coefficients have been proposed, but analysis of some of these tests shows that they are too simplistic or even invalid. Hence, more rigorous models to calculate the coefficients should be employed. Here we propose the Nernst-Planck and Poisson equations, which take into account the concentration and electric potential fields. Based on this model, a special inverse method is presented for the determination of the chloride diffusion coefficient. It requires the measurement of concentration profiles or of the flux on the boundary, and the solution of the NPP model to define the goal function. Finding the global minimum is equivalent to the determination of the diffusion coefficients. Typical examples of the application of the presented method are given.

  9. Efficient Terahertz Wide-Angle NUFFT-Based Inverse Synthetic Aperture Imaging Considering Spherical Wavefront.

    PubMed

    Gao, Jingkun; Deng, Bin; Qin, Yuliang; Wang, Hongqiang; Li, Xiang

    2016-12-14

    An efficient wide-angle inverse synthetic aperture imaging method considering the spherical wavefront effects and suitable for the terahertz band is presented. Firstly, the echo signal model under spherical wave assumption is established, and the detailed wavefront curvature compensation method accelerated by 1D fast Fourier transform (FFT) is discussed. Then, to speed up the reconstruction procedure, the fast Gaussian gridding (FGG)-based nonuniform FFT (NUFFT) is employed to focus the image. Finally, proof-of-principle experiments are carried out and the results are compared with the ones obtained by the convolution back-projection (CBP) algorithm. The results demonstrate the effectiveness and the efficiency of the presented method. This imaging method can be directly used in the field of nondestructive detection and can also be used to provide a solution for the calculation of the far-field RCSs (Radar Cross Section) of targets in the terahertz regime.

  10. Joint inversion of satellite-detected tidal and magnetospheric signals constrains electrical conductivity and water content of the upper mantle and transition zone.

    PubMed

    Grayver, A V; Munch, F D; Kuvshinov, A V; Khan, A; Sabaka, T J; Tøffner-Clausen, L

    2017-06-28

    We present a new global electrical conductivity model of Earth's mantle. The model was derived by using a novel methodology, which is based on inverting satellite magnetic field measurements from different sources simultaneously. Specifically, we estimated responses of magnetospheric origin and ocean tidal magnetic signals from the most recent Swarm and CHAMP data. The challenging task of properly accounting for the ocean effect in the data was addressed through full three-dimensional solution of Maxwell's equations. We show that simultaneous inversion of magnetospheric and tidal magnetic signals results in a model with much improved resolution. Comparison with laboratory-based conductivity profiles shows that obtained models are compatible with a pyrolytic composition and a water content of 0.01 wt% and 0.1 wt% in the upper mantle and transition zone, respectively.

  11. New Approaches to Minimum-Energy Design of Integer- and Fractional-Order Perfect Control Algorithms

    NASA Astrophysics Data System (ADS)

    Hunek, Wojciech P.; Wach, Łukasz

    2017-10-01

    In this paper, new methods concerning the energy-based minimization of perfect control inputs are presented. For that purpose, multivariable integer- and fractional-order models are applied, which can be used to describe various real-world processes. Up to now, classical approaches have been used in the form of minimum-norm/least-squares inverses. However, these tools do not guarantee optimal control corresponding to optimal input energy. Therefore, a new class of inverse-based methods has been introduced, in particular the new σ- and H-inverses of nonsquare parameter and polynomial matrices. Thus, the proposed solution remarkably outperforms typical ones in systems where the control runs can be understood in terms of different physical quantities, for example heat and mass transfer, electricity, etc. A simulation study performed in the Matlab/Simulink environment confirms the great potential of the new energy-based approaches.

  12. Computational modelling of cellular level metabolism

    NASA Astrophysics Data System (ADS)

    Calvetti, D.; Heino, J.; Somersalo, E.

    2008-07-01

    The steady and stationary state inverse problems consist of estimating the reaction and transport fluxes, blood concentrations and possibly the rates of change of some of the concentrations based on data which are often scarce, noisy and sampled over a population. The Bayesian framework provides a natural setting for the solution of this inverse problem, because a priori knowledge about the system itself and the unknown reaction fluxes and transport rates can compensate for the insufficiency of measured data, provided that the computational costs do not become prohibitive. This article identifies the computational challenges which have to be met when analyzing the steady and stationary states of a multicompartment model of cellular metabolism and suggests stable and efficient ways to handle the computations. The outline of a computational tool based on the Bayesian paradigm for the simulation and analysis of complex cellular metabolic systems is also presented.

  13. Dose and detectability for a cone-beam C-arm CT system revisited

    PubMed Central

    Ganguly, Arundhuti; Yoon, Sungwon; Fahrig, Rebecca

    2010-01-01

    Purpose: The authors had previously published measurements of the detectability of disk-shaped contrast objects in images obtained from a C-arm CT system. A simple approach based on Rose's criterion was used to scale the data, assuming the threshold for the smallest diameter detected should be inversely proportional to (dose)^(1/2). A more detailed analysis based on recent theoretical modeling of C-arm CT images is presented in this work. Methods: The signal and noise propagation in a C-arm based CT system has been formulated by other authors using cascaded systems analysis. They established a relationship between detectability and the noise equivalent quanta. Based on this model, the authors obtained a relation between x-ray dose and the diameter of the smallest disks detected. A closed-form solution was established by assuming no rebinning and no resampling of data, with low additive noise and using a ramp filter. For the case when no such assumptions were made, a numerically calculated solution using previously reported imaging and reconstruction parameters was obtained. The detection probabilities for a range of dose and kVp values had been measured previously. These probabilities were normalized to a single dose of 56.6 mGy using the Rose-criterion-based relation to obtain a universal curve. Normalizations based on the new numerically calculated relationship were compared to the measured results. Results: The theoretical and numerical calculations have similar results and predict the detected diameter size to be inversely proportional to (dose)^(1/3) and (dose)^(1/2.8), respectively. The normalized experimental curves and the associated universal plot using the new relation were not significantly different from those obtained using the Rose-criterion-based normalization. Conclusions: From numerical simulations, the authors found that the diameter of detected disks depends inversely on the cube root of the dose. For observer studies of disks larger than 4 mm, the cube-root as well as square-root relations appear to give similar results when used for normalization. PMID:20527560

  14. Moment tensor analysis of very shallow sources

    DOE PAGES

    Chiang, Andrea; Dreger, Douglas S.; Ford, Sean R.; ...

    2016-10-11

    An issue for moment tensor (MT) inversion of shallow seismic sources is that some components of the Green's functions have vanishing amplitudes at the free surface, which can result in bias in the MT solution. The effects of the free surface on the stability of the MT method become important as we continue to investigate and improve the capabilities of regional full MT inversion for source-type identification and discrimination. It is important to understand free-surface effects on discriminating shallow explosive sources for nuclear monitoring purposes. It may also be important in natural systems that have very shallow seismicity, such as volcanic and geothermal systems. We examine the effects of the free surface on the MT via synthetic testing and apply the MT-based discrimination method to three quarry blasts from the HUMMING ALBATROSS experiment. These shallow chemical explosions at ~10 m depth and recorded up to several kilometers distance represent rather severe source-station geometry in terms of free-surface effects. We show that the method is capable of recovering a predominantly explosive source mechanism, and the combined waveform and first-motion method enables the unique discrimination of these events. Furthermore, recovering the design yield using seismic moment estimates from MT inversion remains challenging, but we can begin to put error bounds on our moment estimates using the network sensitivity solution technique.

  15. Effective scattering coefficient of the cerebral spinal fluid in adult head models for diffuse optical imaging

    NASA Astrophysics Data System (ADS)

    Custo, Anna; Wells, William M., III; Barnett, Alex H.; Hillman, Elizabeth M. C.; Boas, David A.

    2006-07-01

    An efficient computation of the time-dependent forward solution for photon transport in a head model is a key capability for performing accurate inversion for functional diffuse optical imaging of the brain. The diffusion approximation to photon transport is much faster to simulate than the physically correct radiative transport equation (RTE); however, it is commonly assumed that scattering lengths must be much smaller than all system dimensions and all absorption lengths for the approximation to be accurate. Neither of these conditions is satisfied in the cerebrospinal fluid (CSF). Since line-of-sight distances in the CSF are small, of the order of a few millimeters, we explore the idea that the CSF scattering coefficient may be modeled by any value from zero up to the order of the typical inverse line-of-sight distance, or approximately 0.3 mm-1, without significantly altering the calculated detector signals or the partial path lengths relevant for functional measurements. We demonstrate this in detail by using a Monte Carlo simulation of the RTE in a three-dimensional head model based on clinical magnetic resonance imaging data, with realistic optode geometries. Our findings lead us to expect that the diffusion approximation will be valid even in the presence of the CSF, with consequences for faster solution of the inverse problem.

  16. Conventional and reciprocal approaches to the inverse dipole localization problem for N(20)-P(20) somatosensory evoked potentials.

    PubMed

    Finke, Stefan; Gulrajani, Ramesh M; Gotman, Jean; Savard, Pierre

    2013-01-01

    The non-invasive localization of the primary sensory hand area can be achieved by solving the inverse problem of electroencephalography (EEG) for N(20)-P(20) somatosensory evoked potentials (SEPs). This study compares two different mathematical approaches for the computation of the transfer matrices used to solve the EEG inverse problem. Forward transfer matrices relating dipole sources to scalp potentials are determined via conventional and reciprocal approaches using individual, realistically shaped head models. The reciprocal approach entails calculating the electric field at the dipole position when the scalp electrodes are reciprocally energized with unit current; scalp potentials are obtained from the scalar product of this electric field and the dipole moment. Median nerve stimulation is performed on three healthy subjects, and single-dipole inverse solutions for the N(20)-P(20) SEPs are then obtained by simplex minimization and validated against the primary sensory hand area identified on magnetic resonance images. Solutions are presented for different time points, filtering strategies, boundary-element method discretizations, and skull conductivity values. Both approaches produce similarly small position errors for the N(20)-P(20) SEP. The position error for single-dipole inverse solutions is inherently robust to inaccuracies in the forward transfer matrices but dependent on the overlapping activity of other neural sources. Significantly smaller time and storage requirements are the principal advantages of the reciprocal approach. Reduced computational requirements and similar dipole position accuracy support the use of reciprocal approaches over conventional approaches for N(20)-P(20) SEP source localization.

  17. Parallelized Three-Dimensional Resistivity Inversion Using Finite Elements And Adjoint State Methods

    NASA Astrophysics Data System (ADS)

    Schaa, Ralf; Gross, Lutz; Du Plessis, Jaco

    2015-04-01

    The resistivity method is one of the oldest geophysical exploration methods. It employs one pair of electrodes to inject current into the ground and one or more pairs of electrodes to measure the electrical potential difference. The potential difference is a non-linear function of the subsurface resistivity distribution described by an elliptic partial differential equation (PDE) of the Poisson type. Inversion of measured potentials solves for the subsurface resistivity represented by the PDE coefficients. With increasing advances in multichannel resistivity acquisition systems (systems with more than 60 channels and full waveform recording are now emerging), inversion software requires efficient storage and solver algorithms. We developed the finite element solver Escript, which provides a user-friendly programming environment in Python to solve large-scale PDE-based problems (see https://launchpad.net/escript-finley). Using finite elements, highly irregularly shaped geology and topography can readily be taken into account. For the 3D resistivity problem, we have implemented the secondary potential approach, where the PDE is decomposed into a primary potential caused by the source current and a secondary potential caused by changes in subsurface resistivity. The primary potential is calculated analytically, and the boundary value problem for the secondary potential is solved using nodal finite elements. This approach removes the singularity caused by the source currents and provides more accurate 3D resistivity models. To solve the inversion problem we apply a 'first optimize then discretize' approach using a quasi-Newton scheme in the form of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method (see Gross & Kemp 2013). The evaluation of the cost function requires the solution of the secondary potential PDE for each source current and the solution of the corresponding adjoint-state PDE for the cost function gradients with respect to the subsurface resistivity. The Hessian of the regularization term is used as a preconditioner, which requires an additional PDE solution in each iteration step. As it turns out, the relevant PDEs are naturally formulated in the finite element framework. Using the domain decomposition method provided in Escript, the inversion scheme has been parallelized for distributed-memory computers with multi-core shared-memory nodes. We show numerical examples from simple layered models to complex 3D models and compare them with the results from other methods. The inversion scheme is furthermore tested on a field data example to characterise localised freshwater discharge in a coastal environment. References: L. Gross and C. Kemp (2013) Large Scale Joint Inversion of Geophysical Data using the Finite Element Method in escript. ASEG Extended Abstracts 2013, http://dx.doi.org/10.1071/ASEG2013ab306

  18. Identification of subsurface structures using electromagnetic data and shape priors

    NASA Astrophysics Data System (ADS)

    Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond

    2015-03-01

    We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of kernel function, which is application dependent. We argue for using the conditionally positive definite kernel which is shown to have computational advantages over the commonly applied gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.
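
    For readers unfamiliar with the kernel terminology, the snippet below contrasts the common Gaussian kernel with a standard conditionally positive definite alternative (the multiquadric, which is conditionally positive definite of order one). It only illustrates the distinction drawn in the abstract; the specific kernel adopted in the paper may differ.

```python
# Illustrative kernel functions only; parameter values are assumptions.
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Positive definite Gaussian (RBF) kernel
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2.0 * sigma ** 2))

def multiquadric_kernel(x, y, c=1.0):
    # Conditionally positive definite (order 1) multiquadric kernel
    return -np.sqrt(np.linalg.norm(x - y) ** 2 + c ** 2)

a, b = np.array([0.0, 0.0]), np.array([1.0, 2.0])
print(gaussian_kernel(a, b), multiquadric_kernel(a, b))
```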

  19. The experimental identification of magnetorheological dampers and evaluation of their controllers

    NASA Astrophysics Data System (ADS)

    Metered, H.; Bonello, P.; Oyadiji, S. O.

    2010-05-01

    Magnetorheological (MR) fluid dampers are semi-active control devices that have been applied over a wide range of practical vibration control applications. This paper concerns the experimental identification of the dynamic behaviour of an MR damper and the use of the identified parameters in the control of such a damper. Feed-forward and recurrent neural networks are used to model both the direct and inverse dynamics of the damper. Training and validation of the proposed neural networks are achieved by using the data generated through dynamic tests with the damper mounted on a tensile testing machine. The validation test results clearly show that the proposed neural networks can reliably represent both the direct and inverse dynamic behaviours of an MR damper. The effect of the cylinder's surface temperature on both the direct and inverse dynamics of the damper is studied, and the neural network model is shown to be reasonably robust against significant temperature variation. The inverse recurrent neural network model is introduced as a damper controller and experimentally evaluated against alternative controllers proposed in the literature. The results reveal that the neural-based damper controller offers superior damper control. This observation and the added advantages of low-power requirement, extended service life of the damper and the minimal use of sensors, indicate that a neural-based damper controller potentially offers the most cost-effective vibration control solution among the controllers investigated.
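
    A toy version of the inverse-dynamics idea (predict the control current that produces a desired damper force at a given velocity) can be sketched with a small feed-forward network, as below. The Bingham-style force formula used to generate training data and all numerical values are invented for the illustration; the study itself trains feed-forward and recurrent networks on measured damper data.

```python
# Toy inverse-dynamics learning: (velocity, desired force) -> coil current.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
v = rng.uniform(-0.2, 0.2, 5000)                 # piston velocity [m/s]
i = rng.uniform(0.0, 2.0, 5000)                  # coil current [A]
force = 800.0 * v + 300.0 * i * np.sign(v)       # synthetic damper force [N]

X = np.column_stack([v, force])                  # inverse map inputs
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X, i)
print("predicted current for v=0.1 m/s, F=400 N:",
      net.predict([[0.1, 400.0]])[0])
```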

  20. Controlled thermal sintering of a metal-metal oxide-carbon ternary composite with a multi-scale hollow nanostructure for use as an anode material in Li-ion batteries.

    PubMed

    Kim, Hwan Jin; Zhang, Kan; Choi, Jae-Man; Song, Min Sang; Park, Jong Hyeok

    2014-03-11

    We report a synthetic scheme for preparing a SnO2-Sn-carbon triad inverse opal porous material using the controlled sintering of Sn precursor-infiltrated polystyrene (PS) nanobead films. Because the uniform PS nanobead film, which can be converted into carbon via a sintering step, uptakes the precursor solution, the carbon can be uniformly distributed throughout the Sn-based anode material. Moreover, the partial carbonization of the PS nanobeads under a controlled Ar/oxygen environment not only produces a composite material with an inverse opal-like porous nanostructure but also converts the Sn precursor/PS into a SnO2-Sn-C triad electrode.

  1. A Strassen-Newton algorithm for high-speed parallelizable matrix inversion

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Ferguson, Helaman R. P.

    1988-01-01

    Techniques are described for computing matrix inverses by algorithms that are highly suited to massively parallel computation. The techniques are based on an algorithm suggested by Strassen (1969). Variations of this scheme use matrix Newton iterations and other methods to improve the numerical stability while at the same time preserving a very high level of parallelism. One-processor Cray-2 implementations of these schemes range from one that is up to 55 percent faster than a conventional library routine to one that is slower than a library routine but achieves excellent numerical stability. The problem of computing the solution to a single set of linear equations is discussed, and it is shown that this problem can also be solved efficiently using these techniques.
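
    The matrix Newton iteration mentioned above can be sketched in a few lines (Newton-Schulz form X_{k+1} = X_k(2I - A X_k), with the standard scaled-transpose starting guess). The Strassen-style fast multiplication that makes the original algorithm attractive on parallel machines is not reproduced here; NumPy's ordinary matrix product stands in for it.

```python
# Newton-Schulz iteration for a matrix inverse; the test matrix is illustrative.
import numpy as np

def newton_inverse(A, iters=50):
    n = A.shape[0]
    # Standard starting guess ensuring ||I - A X0|| < 1
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2 * I - A @ X)      # quadratic convergence near the inverse
    return X

A = np.random.default_rng(1).normal(size=(5, 5)) + 5 * np.eye(5)
print(np.allclose(newton_inverse(A) @ A, np.eye(5), atol=1e-8))
```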

  2. A physiologically motivated sparse, compact, and smooth (SCS) approach to EEG source localization.

    PubMed

    Cao, Cheng; Akalin Acar, Zeynep; Kreutz-Delgado, Kenneth; Makeig, Scott

    2012-01-01

    Here, we introduce a novel approach to the EEG inverse problem based on the assumption that the principal cortical sources of multi-channel EEG recordings are spatially sparse, compact, and smooth (SCS). To enforce these characteristics of solutions to the EEG inverse problem, we propose a correlation-variance model which factors a cortical source space covariance matrix into the product of a pre-given correlation coefficient matrix and the square root of the diagonal variance matrix learned from the data under a Bayesian learning framework. We tested the SCS method using simulated EEG data with various SNRs and applied it to a real ECoG data set. We compare the results of SCS with those of an established sparse Bayesian learning (SBL) algorithm.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Simonetto, Andrea

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function can be computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
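
    The prediction-correction idea can be illustrated on an unconstrained toy problem: track the minimizer of f_t(x) = 0.5*||x - c(t)||^2 as c(t) drifts, using a first-order prediction step built from the time drift of the gradient followed by a gradient correction step. The drift model and step sizes below are invented for the sketch; the paper's constrained, Hessian-based scheme is more elaborate.

```python
# Hedged sketch of prediction-correction tracking on a toy quadratic cost.
import numpy as np

def c(t):                               # slowly moving optimum
    return np.array([np.cos(t), np.sin(t)])

h, alpha, x = 0.1, 0.5, np.zeros(2)     # sampling period, step size, iterate
for k in range(100):
    t = k * h
    # Prediction: first-order step from the time drift of the gradient,
    # approximated by finite differences (no Hessian inverse needed here,
    # since the Hessian of this toy cost is the identity).
    drift = ((x - c(t + h)) - (x - c(t))) / h
    x = x - alpha * h * drift
    # Correction: one plain gradient step on the cost at the new time.
    x = x - alpha * (x - c(t + h))
print("tracking error:", np.linalg.norm(x - c(100 * h)))
```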

  4. Research on the Optimization Method of Arm Movement in the Assembly Workshop Based on Ergonomics

    NASA Astrophysics Data System (ADS)

    Hu, X. M.; Qu, H. W.; Xu, H. J.; Yang, L.; Yu, C. C.

    2017-12-01

    In order to improve work efficiency and comfort, ergonomics is used to study the work of the operator in the assembly workshop. An optimization algorithm for arm movement in the assembly workshop is proposed. In the algorithm, a mathematical model of arm movement is established based on a multi-rigid-body movement model and the D-H method. The inverse kinematics equation for arm movement is solved using kinematics theory. Evaluation functions for each joint movement and for the whole arm movement are given based on the comfort of the human body joints. The solution method for the optimal arm movement posture based on the evaluation functions is described. The software CATIA is used to verify the optimal arm movement posture in an example, and the experimental results show the effectiveness of the algorithm.
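
    For concreteness, the snippet below shows the standard Denavit-Hartenberg (D-H) homogeneous transform and a forward-kinematics chain for a simple two-revolute-joint planar arm. The link parameters are made up for the example; the paper's arm model and comfort-based evaluation functions are not reproduced.

```python
# Standard D-H transform and a toy planar forward-kinematics chain.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint/link pair (standard D-H convention)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [ 0,       sa,       ca,      d],
                     [ 0,        0,        0,      1]])

def forward_kinematics(joint_angles, link_lengths):
    T = np.eye(4)
    for theta, a in zip(joint_angles, link_lengths):
        T = T @ dh_transform(theta, d=0.0, a=a, alpha=0.0)
    return T[:3, 3]                      # hand position in the base frame

print(forward_kinematics([np.pi / 4, np.pi / 6], [0.30, 0.25]))
```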

  5. The Ellipticity Filter-A Proposed Solution to the Mixed Event Problem in Nuclear Seismic Discrimination

    DTIC Science & Technology

    1974-09-07

    ellipticity filter. The source waveforms are recreated by an inverse transform of those complex amplitudes associated with the same azimuth...terms of the three complex data points and the ellipticity. Having solved the equations for all frequency bins, the inverse transform of...Transform of those complex amplitudes associated with Source 1, yielding the signal a (t). Similarly, take the inverse Transform of all

  6. Application of viscous-inviscid interaction methods to transonic turbulent flows

    NASA Technical Reports Server (NTRS)

    Lee, D.; Pletcher, R. H.

    1986-01-01

    Two different viscous-inviscid interaction schemes were developed for the analysis of steady, turbulent, transonic, separated flows over axisymmetric bodies. The viscous and inviscid solutions are coupled through the displacement concept using a transpiration velocity approach. In the semi-inverse interaction scheme, the viscous and inviscid equations are solved in an explicitly separate manner and the displacement thickness distribution is iteratively updated by a simple coupling algorithm. In the simultaneous interaction method, local solutions of viscous and inviscid equations are treated simultaneously, and the displacement thickness is treated as an unknown and is obtained as a part of the solution through a global iteration procedure. The inviscid flow region is described by a direct finite-difference solution of a velocity potential equation in conservative form. The potential equation is solved on a numerically generated mesh by an approximate factorization (AF2) scheme in the semi-inverse interaction method and by a successive line overrelaxation (SLOR) scheme in the simultaneous interaction method. The boundary-layer equations are used for the viscous flow region. The continuity and momentum equations are solved inversely in a coupled manner using a fully implicit finite-difference scheme.

  7. The uniqueness of the solution of cone-like inversion models for halo CMEs

    NASA Astrophysics Data System (ADS)

    Zhao, X. P.

    2006-12-01

    Most elliptic halo CMEs are believed to be formed by Thomson scattering of photospheric light by the 3-D cone-like shell of the CME plasma. To obtain the real propagation direction and angular width of halo CMEs, cone-like inversion models such as the circular cone, the elliptic cone and the ice-cream cone models have been suggested recently. Because the number of given parameters used to characterize 2-D elliptic halo CMEs observed by one spacecraft is less than the number of unknown parameters used to characterize the 3-D elliptic cone model, the solution of the elliptic cone model is not unique. Since it is difficult to determine whether an observed halo CME is formed by a circular cone or an elliptic cone shell, the solution of the circular cone model may often not be unique either. To address the uniqueness problem of various 3-D cone-like inversion models, this work develops an algorithm that uses data from multiple spacecraft, such as STEREO A and B and the Solar Sentinels.

  8. Integrated Chassis Control of Active Front Steering and Yaw Stability Control Based on Improved Inverse Nyquist Array Method

    PubMed Central

    2014-01-01

    An integrated chassis control (ICC) system with active front steering (AFS) and yaw stability control (YSC) is introduced in this paper. The proposed ICC algorithm uses the improved Inverse Nyquist Array (INA) method based on a 2-degree-of-freedom (DOF) planar vehicle reference model to decouple the plant dynamics under different frequency bands, and changes in velocity and cornering stiffness are considered when calculating the analytical solution in the precompensator design, so that the INA-based algorithm runs quickly and reliably on the nonlinear vehicle system. The stability of the system is guaranteed by a dynamic compensator together with a proposed PI feedback controller. After a response analysis of the system in the frequency and time domains, simulations under a step steering maneuver were carried out using a 2-DOF vehicle model and a 14-DOF vehicle model in Matlab/Simulink. The results show that the system is decoupled and that vehicle handling and stability performance are significantly improved by the proposed method. PMID:24782676

  9. Integrated chassis control of active front steering and yaw stability control based on improved inverse nyquist array method.

    PubMed

    Zhu, Bing; Chen, Yizhou; Zhao, Jian

    2014-01-01

    An integrated chassis control (ICC) system with active front steering (AFS) and yaw stability control (YSC) is introduced in this paper. The proposed ICC algorithm uses the improved Inverse Nyquist Array (INA) method based on a 2-degree-of-freedom (DOF) planar vehicle reference model to decouple the plant dynamics under different frequency bands, and changes in velocity and cornering stiffness are considered when calculating the analytical solution in the precompensator design, so that the INA-based algorithm runs quickly and reliably on the nonlinear vehicle system. The stability of the system is guaranteed by a dynamic compensator together with a proposed PI feedback controller. After a response analysis of the system in the frequency and time domains, simulations under a step steering maneuver were carried out using a 2-DOF vehicle model and a 14-DOF vehicle model in Matlab/Simulink. The results show that the system is decoupled and that vehicle handling and stability performance are significantly improved by the proposed method.

  10. A Markov Chain Monte Carlo Inversion Approach For Inverting InSAR Data With Application To Subsurface CO2 Injection

    NASA Astrophysics Data System (ADS)

    Ramirez, A. L.; Foxall, W.

    2011-12-01

    Surface displacements caused by reservoir pressure perturbations resulting from CO2 injection can often be measured by geodetic methods such as InSAR, tilt and GPS. We have developed a Markov Chain Monte Carlo (MCMC) approach to invert surface displacements measured by InSAR to map the pressure distribution associated with CO2 injection at the In Salah Krechba field, Algeria. The MCMC inversion entails sampling the solution space by proposing a series of trial 3D pressure-plume models. In the case of In Salah, the range of allowable models is constrained by prior information provided by well and geophysical data for the reservoir and possible fluid pathways in the overburden, and by injection pressures and volumes. Each trial pressure distribution is run through a (mathematical) forward model to calculate a set of synthetic surface deformation data. The likelihood that a particular proposal represents the true source is determined from the fit of the calculated data to the InSAR measurements, and those having higher likelihoods are passed to the posterior distribution. This procedure is repeated over typically ~10^4 to 10^5 trials until the posterior distribution converges to a stable solution. The solution to each stochastic inversion is in the form of a Bayesian posterior probability density function (pdf) over the range of the alternative models that are consistent with the measured data and prior information. Therefore, the solution provides not only the highest-likelihood model but also a realistic estimate of the solution uncertainty. Our In Salah work considered three flow model alternatives: 1) The first model assumed that the CO2 saturation and fluid pressure changes were confined to the reservoir; 2) the second model allowed the perturbations to occur also in a damage zone inferred in the lower caprock from 3D seismic surveys; and 3) the third model allowed fluid pressure changes anywhere within the reservoir and overburden. Alternative (2) yielded optimal fits to the data in inversions of InSAR data collected in 2007. The results indicate that pressure changes developed near the injection well and then penetrated into the lower caprock along the postulated damage zone. As in many geophysical inverse problems, inversion of surface displacement data for subsurface sources of deformation is inherently uncertain and non-unique. We will also discuss the approach used to characterize solution uncertainty. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
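
    The propose/evaluate/accept loop described above is the standard Metropolis-Hastings recipe. The sketch below runs it on a single toy "pressure" parameter with a linear forward model and Gaussian likelihood, purely to make the mechanics concrete; the actual inversion proposes full 3D pressure-plume models and evaluates them with a geomechanical forward model.

```python
# Minimal Metropolis-Hastings sketch; all models and values are synthetic.
import numpy as np

rng = np.random.default_rng(0)
d_obs, sigma = 2.0, 0.3                  # one synthetic "InSAR" datum

def forward(p):                          # toy forward model
    return 0.8 * p

def log_likelihood(p):
    r = forward(p) - d_obs
    return -0.5 * (r / sigma) ** 2

p, samples = 0.0, []
for _ in range(20000):
    p_try = p + rng.normal(scale=0.5)            # propose a trial model
    if np.log(rng.uniform()) < log_likelihood(p_try) - log_likelihood(p):
        p = p_try                                 # accept into the chain
    samples.append(p)
post = np.array(samples[5000:])                   # discard burn-in
print("posterior mean and std:", post.mean(), post.std())
```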

  11. Fast downscaled inverses for images compressed with M-channel lapped transforms.

    PubMed

    de Queiroz, R L; Eschbach, R

    1997-01-01

    Compressed images may be decompressed and displayed or printed using different devices at different resolutions. Full decompression and rescaling in space domain is a very expensive method. We studied downscaled inverses where the image is decompressed partially, and a reduced inverse transform is used to recover the image. In this fashion, fewer transform coefficients are used and the synthesis process is simplified. We studied the design of fast inverses, for a given forward transform. General solutions are presented for M-channel finite impulse response (FIR) filterbanks, of which block and lapped transforms are a subset. Designs of faster inverses are presented for popular block and lapped transforms.

  12. A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Zuo, B.; Hu, X.; Li, H.

    2011-12-01

    A model-enhancement technique is proposed to enhance the edges and details of geophysical inversion models without introducing any additional information. Firstly, the theoretical correctness of the proposed geophysical inversion model-enhancement technique is discussed. An inversion MRM (model resolution matrix) convolution approximating the PSF (point spread function) is designed to demonstrate the correctness of the deconvolution model enhancement method. Then, a total-variation regularization blind deconvolution geophysical inversion model-enhancement algorithm is proposed. In previous research, Oldenburg et al. demonstrate the connection between the PSF and the geophysical inverse solution. Alumbaugh et al. propose that more information could be provided by the PSF if we return to the idea of it behaving as an averaging or low-pass filter. We consider the PSF as a low-pass filter to enhance the inversion model, based on the theory of the PSF convolution approximation. Both 1D linear and 2D magnetotelluric inversion examples are used to analyze the validity of the theory and the algorithm. To prove the proposed PSF convolution approximation theory, the 1D linear inversion problem is considered; the ratio of the convolution approximation error is only 0.15%. A 2D synthetic model enhancement experiment is also presented. After the deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, and the enhancement result is closer to the actual model than the original inversion model according to the numerical statistical analysis. Moreover, artifacts in the inversion model are suppressed. The overall precision of the model increases by 75%. All of the experiments show that the structural details and the numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficient between the enhanced inversion model and the actual model is shown in Fig. 1. The figure illustrates that more of the detailed structure of the actual model is recovered by the proposed enhancement algorithm. Using the proposed enhancement method can help us gain a clearer insight into the results of the inversions and help make better-informed decisions.

  13. Crustal Stress and Strain Distribution in Sicily (Southern Italy) from Joint Analysis of Seismicity and Geodetic Data

    NASA Astrophysics Data System (ADS)

    Presti, D.; Neri, G.; Aloisi, M.; Cannavo, F.; Orecchio, B.; Palano, M.; Siligato, G.; Totaro, C.

    2014-12-01

    An updated database of earthquake focal mechanisms is compiled for the Sicilian region (southern Italy) and surrounding off-shore areas where the Nubia-Eurasia convergence coexists with the very-slow residual rollback of the Ionian subducting slab. High-quality solutions selected from literature and catalogs have been integrated with new solutions estimated in the present work using the Cut And Paste (CAP) waveform inversion method. In the CAP algorithm (Zhao and Helmberger, 1994; Zhu and Helmberger, 1996), each waveform is broken up into Pnl and surface wave segments, which are weighted differently during the inversion procedure. Integration of the new solutions with the ones selected from literature and official catalogs led us to collect a database consisting exclusively of waveform inversion data relative to earthquakes with minimum magnitude 2.6. The seismicity and focal mechanism distributions have been compared with crustal motion and strain data coming from GNSS analyses. For this purpose GNSS-based observations collected over the investigated area by episodic measurements (1994-2013) as well as continuous monitoring (since 2006) were processed by the GAMIT/GLOBK software packages (Herring et al., 2010) following the approach described in Palano et al. (2011). To adequately investigate the crustal deformation pattern, the estimated GNSS velocities were aligned to a fixed Eurasian reference frame. The good agreement found between seismic and geodetic information contributes to better define seismotectonic domains characterized by different kinematics. Moving from the available geophysical information and from an early application of FEM algorithms, we have also started to investigate stress/strain fields in the crust of the study area including depth dependence and relationships with rupture of the main seismogenic structures.

  14. Localization Accuracy of Distributed Inverse Solutions for Electric and Magnetic Source Imaging of Interictal Epileptic Discharges in Patients with Focal Epilepsy.

    PubMed

    Heers, Marcel; Chowdhury, Rasheda A; Hedrich, Tanguy; Dubeau, François; Hall, Jeffery A; Lina, Jean-Marc; Grova, Christophe; Kobayashi, Eliane

    2016-01-01

    Distributed inverse solutions aim to realistically reconstruct the origin of interictal epileptic discharges (IEDs) from noninvasively recorded electroencephalography (EEG) and magnetoencephalography (MEG) signals. Our aim was to compare the performance of different distributed inverse solutions in localizing IEDs: coherent maximum entropy on the mean (cMEM), hierarchical Bayesian implementations of independent identically distributed sources (IID, minimum norm prior) and spatially coherent sources (COH, spatial smoothness prior). Source maxima (i.e., the vertex with the maximum source amplitude) of IEDs in 14 EEG and 19 MEG studies from 15 patients with focal epilepsy were analyzed. We visually compared their concordance with intracranial EEG (iEEG) based on 17 cortical regions of interest and their spatial dispersion around source maxima. Magnetic source imaging (MSI) maxima from cMEM were most often confirmed by iEEG (cMEM: 14/19, COH: 9/19, IID: 8/19 studies). COH electric source imaging (ESI) maxima co-localized best with iEEG (cMEM: 8/14, COH: 11/14, IID: 10/14 studies). In addition, cMEM was less spatially spread than COH and IID for ESI and MSI (p < 0.001 Bonferroni-corrected post hoc t test). Highest positive predictive values for cortical regions with IEDs in iEEG could be obtained with cMEM for MSI and with COH for ESI. Additional realistic EEG/MEG simulations confirmed our findings. Accurate spatially extended sources, as found in cMEM (ESI and MSI) and COH (ESI) are desirable for source imaging of IEDs because this might influence surgical decision. Our simulations suggest that COH and IID overestimate the spatial extent of the generators compared to cMEM.

  15. Root finding in the complex plane for seismo-acoustic propagation scenarios with Green's function solutions.

    PubMed

    McCollom, Brittany A; Collis, Jon M

    2014-09-01

    A normal mode solution to the ocean acoustic problem of the Pekeris waveguide with an elastic bottom using a Green's function formulation for a compressional wave point source is considered. Analytic solutions to these types of waveguide propagation problems are strongly dependent on the eigenvalues of the problem; these eigenvalues represent horizontal wavenumbers, corresponding to propagating modes of energy. The eigenvalues arise as singularities in the inverse Hankel transform integral and are specified by roots to a characteristic equation. These roots manifest themselves as poles in the inverse transform integral and can be both subtle and difficult to determine. Following methods previously developed [S. Ivansson et al., J. Sound Vib. 161 (1993)], a root finding routine has been implemented using the argument principle. Using the roots to the characteristic equation in the Green's function formulation, full-field solutions are calculated for scenarios where an acoustic source lies in either the water column or elastic half space. Solutions are benchmarked against laboratory data and existing numerical solutions.
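
    The root-counting step based on the argument principle can be sketched directly: the contour integral of f'/f around a closed curve, divided by 2*pi*i, equals the number of enclosed zeros of an analytic f. The toy polynomial below stands in for the waveguide characteristic equation, whose roots would be located by combining such counts with contour subdivision.

```python
# Zero counting via the argument principle on a toy analytic function.
import numpy as np

def count_zeros(f, df, center, radius, n=4000):
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * t)
    dz = 1j * radius * np.exp(1j * t) * (2.0 * np.pi / n)   # dz along the circle
    integral = np.sum(df(z) / f(z) * dz)
    return integral / (2j * np.pi)

f = lambda z: (z - 1.0) * (z - 2.0) * (z + 3.0j)
df = lambda z: (z - 2.0) * (z + 3.0j) + (z - 1.0) * (z + 3.0j) + (z - 1.0) * (z - 2.0)
# Circle of radius 2.5 about the origin encloses the zeros at 1 and 2 only.
print(np.round(count_zeros(f, df, center=0.0, radius=2.5).real))   # expect 2
```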

  16. Pattern-Based Inverse Modeling for Characterization of Subsurface Flow Models with Complex Geologic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.

    2017-12-01

    Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.

  17. Estimation of near-surface shear-wave velocity by inversion of Rayleigh waves

    USGS Publications Warehouse

    Xia, J.; Miller, R.D.; Park, C.B.

    1999-01-01

    The shear-wave (S-wave) velocity of near-surface materials (soil, rocks, pavement) and its effect on seismic-wave propagation are of fundamental interest in many groundwater, engineering, and environmental studies. Rayleigh-wave phase velocity of a layered-earth model is a function of frequency and four groups of earth properties: P-wave velocity, S-wave velocity, density, and thickness of layers. Analysis of the Jacobian matrix provides a measure of dispersion-curve sensitivity to earth properties. S-wave velocities are the dominant influence on a dispersion curve in a high-frequency range (>5 Hz) followed by layer thickness. An iterative solution technique to the weighted equation proved very effective in the high-frequency range when using the Levenberg-Marquardt and singular-value decomposition techniques. Convergence of the weighted solution is guaranteed through selection of the damping factor using the Levenberg-Marquardt method. Synthetic examples demonstrated calculation efficiency and stability of inverse procedures. We verify our method using borehole S-wave velocity measurements. Iterative solutions to the weighted equation by the Levenberg-Marquardt and singular-value decomposition techniques are derived to estimate near-surface shear-wave velocity. Synthetic and real examples demonstrate the calculation efficiency and stability of the inverse procedure. The inverse results of the real example are verified by borehole S-wave velocity measurements.
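
    A generic damped Gauss-Newton (Levenberg-Marquardt) update of the kind referred to above is sketched below on a toy exponential curve fit: the damping factor is loosened when a step reduces the misfit and tightened otherwise. Model, data, and parameter values are illustrative only and unrelated to Rayleigh-wave dispersion inversion.

```python
# Levenberg-Marquardt sketch on a toy two-parameter exponential model.
import numpy as np

def lm_fit(x, y, p0, iters=50, lam=1e-2):
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        model = p[0] * np.exp(-p[1] * x)
        r = y - model
        # Jacobian of the model with respect to the parameters
        J = np.column_stack([np.exp(-p[1] * x), -p[0] * x * np.exp(-p[1] * x)])
        # Damped normal equations: (J^T J + lam*I) dp = J^T r
        dp = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
        p_new = p + dp
        if np.sum((y - p_new[0] * np.exp(-p_new[1] * x)) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.7     # accept the step, relax damping
        else:
            lam *= 2.0                    # reject the step, increase damping
    return p

x = np.linspace(0, 4, 40)
y = 2.0 * np.exp(-1.3 * x) + 0.01 * np.random.default_rng(0).normal(size=40)
print(lm_fit(x, y, p0=[1.0, 1.0]))        # should approach [2.0, 1.3]
```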

  18. A problem with inverse time for a singularly perturbed integro-differential equation with diagonal degeneration of the kernel of high order

    NASA Astrophysics Data System (ADS)

    Bobodzhanov, A. A.; Safonov, V. F.

    2016-04-01

    We consider an algorithm for constructing asymptotic solutions regularized in the sense of Lomov (see [1], [2]). We show that such problems can be reduced to integro-differential equations with inverse time. In contrast to known papers devoted to this topic (see, for example, [3]), this paper studies a fundamentally new case, which is characterized by the absence, in the differential part, of a linear operator that isolates, in the asymptotics of the solution, constituents described by boundary functions, and by the fact that the integral operator has a kernel with diagonal degeneration of high order. Furthermore, the spectrum of the regularization operator A(t) (see below) may contain purely imaginary eigenvalues, which causes difficulties in the application of the methods of construction of asymptotic solutions proposed in the monograph [3]. Based on an analysis of the principal term of the asymptotics, we isolate a class of inhomogeneities and initial data for which the exact solution of the original problem tends to the limit solution (as ε → +0) on the entire time interval under consideration, including a boundary-layer zone (that is, we solve the so-called initialization problem). The paper is of a theoretical nature and is designed to lead to a greater understanding of the problems in the theory of singular perturbations. There may be applications in various applied areas where models described by integro-differential equations are used (for example, in elasticity theory, the theory of electrical circuits, and so on).

  19. A note on convergence of solutions of total variation regularized linear inverse problems

    NASA Astrophysics Data System (ADS)

    Iglesias, José A.; Mercier, Gwenael; Scherzer, Otmar

    2018-05-01

    In a recent paper by Chambolle et al (2017 Inverse Problems 33 015002) it was proven that if the subgradient of the total variation at the noise-free data is not empty, the level-sets of the total variation denoised solutions converge to the level-sets of the noise-free data with respect to the Hausdorff distance. The condition on the subgradient corresponds to the source condition introduced by Burger and Osher (2007 Multiscale Model. Simul. 6 365–95), who proved convergence rates results with respect to the Bregman distance under this condition. We generalize the result of Chambolle et al to total variation regularization of general linear inverse problems under such a source condition. As particular applications we present denoising in bounded and unbounded, convex and non-convex domains, deblurring, and inversion of the circular Radon transform. In all these examples the convergence result applies. Moreover, we illustrate the convergence behavior through numerical examples.

  20. Automatic 3D Moment tensor inversions for southern California earthquakes

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Tape, C.; Friberg, P.; Tromp, J.

    2008-12-01

    We present a new source mechanism (moment-tensor and depth) catalog for about 150 recent southern California earthquakes with Mw ≥ 3.5. We carefully select the initial solutions from a few available earthquake catalogs as well as our own preliminary 3D moment tensor inversion results. We pick useful data windows by assessing the quality of fits between the data and synthetics using an automatic windowing package FLEXWIN (Maggi et al 2008). We compute the source Fréchet derivatives of moment-tensor elements and depth for a recent 3D southern California velocity model inverted based upon finite-frequency event kernels calculated by the adjoint methods and a nonlinear conjugate gradient technique with subspace preconditioning (Tape et al 2008). We then invert for the source mechanisms and event depths based upon the techniques introduced by Liu et al 2005. We assess the quality of this new catalog, as well as the other existing ones, by computing the 3D synthetics for the updated 3D southern California model. We also plan to implement the moment-tensor inversion methods to automatically determine the source mechanisms for earthquakes with Mw ≥ 3.5 in southern California.

  1. Waveform-based Bayesian full moment tensor inversion and uncertainty determination for the induced seismicity in an oil/gas field

    NASA Astrophysics Data System (ADS)

    Gu, Chen; Marzouk, Youssef M.; Toksöz, M. Nafi

    2018-03-01

    Small earthquakes occur due to natural tectonic motions and are induced by oil and gas production processes. In many oil/gas fields and hydrofracking processes, induced earthquakes result from fluid extraction or injection. The locations and source mechanisms of these earthquakes provide valuable information about the reservoirs. Analysis of induced seismic events has mostly assumed a double-couple source mechanism. However, recent studies have shown a non-negligible percentage of non-double-couple components of source moment tensors in hydraulic fracturing events, assuming a full moment tensor source mechanism. Without uncertainty quantification of the moment tensor solution, it is difficult to determine the reliability of these source models. This study develops a Bayesian method to perform waveform-based full moment tensor inversion and uncertainty quantification for induced seismic events, accounting for both location and velocity model uncertainties. We conduct tests with synthetic events to validate the method, and then apply our newly developed Bayesian inversion approach to real induced seismicity in an oil/gas field in the sultanate of Oman—determining the uncertainties in the source mechanism and in the location of that event.

  2. Encapsulation of nanoclusters in dried gel materials via an inverse micelle/sol gel synthesis

    DOEpatents

    Martino, Anthony; Yamanaka, Stacey A.; Kawola, Jeffrey S.; Showalter, Steven K.; Loy, Douglas A.

    1998-01-01

    A dried gel material sterically entrapping nanoclusters of a catalytically active material and a process to make the material via an inverse micelle/sol-gel synthesis. A surfactant is mixed with an apolar solvent to form an inverse micelle solution. A salt of a catalytically active material, such as gold chloride, is added along with a silica gel precursor to the solution to form a mixture. To the mixture are then added a reducing agent for the purpose of reducing the gold in the gold chloride to atomic gold to form the nanoclusters and a condensing agent to form the gel which sterically entraps the nanoclusters. The nanoclusters are normally in the average size range of from 5-10 nm in diameter with a monodisperse size distribution.

  3. Error analysis applied to several inversion techniques used for the retrieval of middle atmospheric constituents from limb-scanning MM-wave spectroscopic measurements

    NASA Technical Reports Server (NTRS)

    Puliafito, E.; Bevilacqua, R.; Olivero, J.; Degenhardt, W.

    1992-01-01

    The formal retrieval error analysis of Rodgers (1990) allows the quantitative determination of such retrieval properties as measurement error sensitivity, resolution, and inversion bias. This technique was applied to five numerical inversion techniques and two nonlinear iterative techniques used for the retrieval of middle atmospheric constituent concentrations from limb-scanning millimeter-wave spectroscopic measurements. It is found that the iterative methods have better vertical resolution, but are slightly more sensitive to measurement error than constrained matrix methods. The iterative methods converge to the exact solution, whereas two of the matrix methods under consideration have an explicit constraint, the sensitivity of the solution to the a priori profile. Tradeoffs of these retrieval characteristics are presented.

  4. Improving the Computational Effort of Set-Inversion-Based Prandial Insulin Delivery for Its Integration in Insulin Pumps

    PubMed Central

    León-Vargas, Fabian; Calm, Remei; Bondia, Jorge; Vehí, Josep

    2012-01-01

    Objective Set-inversion-based prandial insulin delivery is a new model-based bolus advisor for postprandial glucose control in type 1 diabetes mellitus (T1DM). It automatically coordinates the values of basal–bolus insulin to be infused during the postprandial period so as to achieve some predefined control objectives. However, the method requires an excessive computation time to compute the solution set of feasible insulin profiles, which impedes its integration into an insulin pump. In this work, a new algorithm is presented, which reduces computation time significantly and enables the integration of this new bolus advisor into current processing features of smart insulin pumps. Methods A new strategy was implemented that focused on finding the combined basal–bolus solution of interest rather than an extensive search of the feasible set of solutions. Analysis of interval simulations, inclusion of physiological assumptions, and search domain contractions were used. Data from six real patients with T1DM were used to compare the performance between the optimized and the conventional computations. Results In all cases, the optimized version yielded the basal–bolus combination recommended by the conventional method and in only 0.032% of the computation time. Simulations show that the mean number of iterations for the optimized computation requires approximately 3.59 s at 20 MHz processing power, in line with current features of smart pumps. Conclusions A computationally efficient method for basal–bolus coordination in postprandial glucose control has been presented and tested. The results indicate that an embedded algorithm within smart insulin pumps is now feasible. Nonetheless, we acknowledge that a clinical trial will be needed in order to justify this claim. PMID:23294789

  5. Inverse solution of ear-canal area function from reflectance

    PubMed Central

    Rasetshwane, Daniel M.; Neely, Stephen T.

    2011-01-01

    A number of acoustical applications require the transformation of acoustical quantities, such as impedance and pressure that are measured at the entrance of the ear canal, to quantities at the eardrum. This transformation often requires knowledge of the shape of the ear canal. Previous attempts to measure ear-canal area functions were either invasive, non-reproducible, or could only measure the area function up to a point mid-way along the canal. A method to determine the area function of the ear canal from measurements of acoustic impedance at the entrance of the ear canal is described. The method is based on a solution to the inverse problem in which measurements of impedance are used to calculate reflectance, which is then used to determine the area function of the canal. The mean ear-canal area function determined using this method is similar to mean ear-canal area functions measured by other researchers using different techniques. The advantage of the proposed method over previous methods is that it is non-invasive, fast, and reproducible. PMID:22225043

  6. Numerical Procedures for Inlet/Diffuser/Nozzle Flows

    NASA Technical Reports Server (NTRS)

    Rubin, Stanley G.

    1998-01-01

    Two primitive variable, pressure based, flux-split, RNS/NS solution procedures for viscous flows are presented. Both methods are uniformly valid across the full Mach number range, i.e., from the incompressible limit to high supersonic speeds. The first method is an 'optimized' version of a previously developed global pressure relaxation RNS procedure. Considerable reduction in the number of relatively expensive matrix inversions, and thereby in the computational time, has been achieved with this procedure. CPU times are reduced by a factor of 15 for predominantly elliptic flows (incompressible and low subsonic). The second method is a time-marching, 'linearized' convection RNS/NS procedure. The key to the efficiency of this procedure is the reduction to a single LU inversion at the inflow cross-plane. The remainder of the algorithm simply requires back-substitution with this LU and the corresponding residual vector at any cross-plane location. This method is not time-consistent, but has a convective-type CFL stability limitation. Both formulations are robust and provide accurate solutions for a variety of the internal viscous flows presented herein.

  7. Efficient preconditioning of the electronic structure problem in large scale ab initio molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiffmann, Florian; VandeVondele, Joost, E-mail: Joost.VandeVondele@mat.ethz.ch

    2015-06-28

    We present an improved preconditioning scheme for electronic structure calculations based on the orbital transformation method. First, a preconditioner is developed which includes information from the full Kohn-Sham matrix but avoids computationally demanding diagonalisation steps in its construction. This reduces the computational cost of its construction, eliminating a bottleneck in large scale simulations, while maintaining rapid convergence. In addition, a modified form of Hotelling's iterative inversion is introduced to replace the exact inversion of the preconditioner matrix. This method is highly effective during molecular dynamics (MD), as the solution obtained in earlier MD steps is a suitable initial guess. Filtering small elements during sparse matrix multiplication leads to linear scaling inversion, while retaining robustness, already for relatively small systems. For system sizes ranging from a few hundred to a few thousand atoms, which are typical for many practical applications, the improvements to the algorithm lead to a 2-5 fold speedup per MD step.

  8. Support Minimized Inversion of Acoustic and Elastic Wave Scattering

    NASA Astrophysics Data System (ADS)

    Safaeinili, Ali

    Inversion of limited data is common in many areas of NDE such as X-ray Computed Tomography (CT), Ultrasonic and eddy current flaw characterization and imaging. In many applications, it is common to have a bias toward a solution with minimum (L^2)^2 norm without any physical justification. When it is a priori known that objects are compact as, say, with cracks and voids, by choosing "Minimum Support" functional instead of the minimum (L^2)^2 norm, an image can be obtained that is equally in agreement with the available data, while it is more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency in imaging process. The results indicate that by using minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of the pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without the knowledge of the zero-of-time. The main drawback to this type of inversion is its computer intensiveness. In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of theoretical formulation of the scattering process for better computation efficiency, and (3) development of better methods for guiding the non-linear inversion. (Abstract shortened by UMI.).

  9. Multiple Fan-Beam Optical Tomography: Modelling Techniques

    PubMed Central

    Rahim, Ruzairi Abdul; Chen, Leong Lai; San, Chan Kok; Rahiman, Mohd Hafiz Fazalul; Fea, Pang Jon

    2009-01-01

    This paper explains in detail the solution to the forward and inverse problem faced in this research. In the forward problem section, the projection geometry and the sensor modelling are discussed. The dimensions, distributions and arrangements of the optical fibre sensors are determined based on the real hardware constructed and these are explained in the projection geometry section. The general idea in sensor modelling is to simulate an artificial environment, but with similar system properties, to predict the actual sensor values for various flow models in the hardware system. The sensitivity maps produced from the solution of the forward problems are important in reconstructing the tomographic image. PMID:22291523

  10. Kinematics and design of a class of parallel manipulators

    NASA Astrophysics Data System (ADS)

    Hertz, Roger Barry

    1998-12-01

    This dissertation is concerned with the kinematic analysis and design of a class of three degree-of-freedom, spatial parallel manipulators. The class of manipulators is characterized by two platforms, between which are three legs, each possessing a succession of revolute, spherical, and revolute joints. The class is termed the "revolute-spherical-revolute" class of parallel manipulators. Two members of this class are examined. The first mechanism is a double-octahedral variable-geometry truss, and the second is termed a double tripod. The history of the mechanisms is explored: the variable-geometry truss dates back to 1984, while predecessors of the double tripod mechanism date back to 1869. This work centers on the displacement analysis of these three-degree-of-freedom mechanisms. Two types of problem are solved: the forward displacement analysis (forward kinematics) and the inverse displacement analysis (inverse kinematics). The kinematic model of the class of mechanism is general in nature. A classification scheme for the revolute-spherical-revolute class of mechanism is introduced, which uses dominant geometric features to group designs into 8 different sub-classes. The forward kinematics problem is discussed: given a set of independently controllable input variables, solve for the relative position and orientation between the two platforms. For the variable-geometry truss, the controllable input variables are assumed to be the linear (prismatic) joints. For the double tripod, the controllable input variables are the three revolute joints adjacent to the base (proximal) platform. Multiple solutions are presented to the forward kinematics problem, indicating that there are many different positions (assemblies) that the manipulator can assume with equivalent inputs. For the double tripod these solutions can be expressed as a 16th degree polynomial in one unknown, while for the variable-geometry truss there exist two 16th degree polynomials, giving rise to 256 solutions. For special cases of the double tripod, the forward kinematics problem is shown to have a closed-form solution. Numerical examples are presented for the solution to the forward kinematics. A double tripod is presented that admits 16 unique and real forward kinematics solutions. Another example for a variable-geometry truss is given that possesses 64 real solutions: 8 for each 16th order polynomial. The inverse kinematics problem is also discussed: given the relative position of the hand (end-effector), which is rigidly attached to one platform, solve for the independently controlled joint variables. Iterative solutions are proposed for both the variable-geometry truss and the double tripod. For special cases of both mechanisms, closed-form solutions are given. The practical problems of designing, building, and controlling a double-tripod manipulator are addressed. The resulting manipulator is a first-of-its-kind prototype of a tapered (asymmetric) double-tripod manipulator. Real-time forward and inverse kinematics algorithms implemented on an industrial robot controller are presented. The resulting performance of the prototype is impressive, since it was able to achieve a maximum tool-tip speed of 4064 mm/s, a maximum acceleration of 5 g, and a cycle time of 1.2 seconds for a typical pick-and-place pattern.

  11. Joint Inversion of Gravity and Gravity Tensor Data Using the Structural Index as Weighting Function Rate Decay

    NASA Astrophysics Data System (ADS)

    Ialongo, S.; Cella, F.; Fedi, M.; Florio, G.

    2011-12-01

    Most geophysical inversion problems are characterized by a number of data considerably higher than the number of unknown parameters. This corresponds to solving highly underdetermined systems. To get a unique solution, a priori information must therefore be introduced. We here analyze the inversion of the gravity gradient tensor (GGT). Previous approaches to inverting gradient components, jointly or independently, include Li (2001), who proposed an algorithm using a depth weighting function, and Zhdanov et al. (2004), who provided a well-focused inversion of gradient data. Both methods give a much-improved solution compared with the minimum length solution, which is invariably shallow and not representative of the true source distribution. For very underdetermined problems, this feature is due to the role of the depth weighting matrices used by both methods. Recently, Cella and Fedi (2011) showed, however, that for magnetic and gravity data the depth weighting function has to be defined carefully, under a preliminary application of Euler Deconvolution or Depth from Extreme Point methods, yielding the appropriate structural index and then using it as the rate decay of the weighting function. We therefore propose to extend this last approach to invert the GGT, jointly or independently, using the structural index as the weighting function rate decay. In the case of a joint inversion, gravity data can be added as well. This multicomponent case is also relevant because the simultaneous use of several components and gravity increases the number of data and reduces the algebraic ambiguity compared to the inversion of a single component. The reduction of such ambiguity was shown in Fedi et al. (2005) to be decisive in obtaining improved depth resolution in inverse problems, independently of any form of depth weighting function. The method is demonstrated on synthetic cases and applied to real cases, such as the Vredefort impact area (South Africa), characterized by a complex density distribution, well defining a central uplift area, ring structures and low-density sediments. REFERENCES Cella F., and Fedi M., 2011, Inversion of potential field data using the structural index as weighting function rate decay, Geophysical Prospecting, doi: 10.1111/j.1365-2478.2011.00974.x. Fedi M., Hansen P. C., and Paoletti V., 2005, Analysis of depth resolution in potential-field inversion, Geophysics, 70, no. 6. Li, Y., 2001, 3-D inversion of gravity gradiometry data: 71st Annual Meeting, SEG, Expanded Abstracts, 1470-1473. Zhdanov, M. S., Ellis, R. G., and Mukherjee, S., 2004, Regularized focusing inversion of 3-D gravity tensor data: Geophysics, 69, 925-937.
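
    The depth weighting idea is compact enough to show directly: each model cell is weighted by a function that decays with depth at a rate tied to the structural index N, so that the regularization does not force all sources to the surface. The exponent convention, z0, and the depth values below are illustrative assumptions, not the exact formulation of the cited papers.

```python
# Sketch of a depth weighting function with the decay rate set by a structural index.
import numpy as np

def depth_weights(z, z0=1.0, N=2.0):
    """w(z) = 1 / (z + z0)^(N/2), applied cell-wise in the regularization."""
    return 1.0 / (z + z0) ** (N / 2.0)

depths = np.linspace(0.0, 50.0, 6)          # cell-centre depths in metres (example)
print(np.round(depth_weights(depths, z0=1.0, N=2.0), 4))
```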

  12. Open-ocean boundary conditions from interior data: Local and remote forcing of Massachusetts Bay

    USGS Publications Warehouse

    Bogden, P.S.; Malanotte-Rizzoli, P.; Signell, R.

    1996-01-01

    Massachusetts and Cape Cod Bays form a semienclosed coastal basin that opens onto the much larger Gulf of Maine. Subtidal circulation in the bay is driven by local winds and remotely driven flows from the gulf. The local-wind forced flow is estimated with a regional shallow water model driven by wind measurements. The model uses a gravity wave radiation condition along the open-ocean boundary. Results compare reasonably well with observed currents near the coast. In some offshore regions however, modeled flows are an order of magnitude less energetic than the data. Strong flows are observed even during periods of weak local wind forcing. Poor model-data comparisons are attributable, at least in part, to open-ocean boundary conditions that neglect the effects of remote forcing. Velocity measurements from within Massachusetts Bay are used to estimate the remotely forced component of the flow. The data are combined with shallow water dynamics in an inverse-model formulation that follows the theory of Bennett and McIntosh [1982], who considered tides. We extend their analysis to consider the subtidal response to transient forcing. The inverse model adjusts the a priori open-ocean boundary condition, thereby minimizing a combined measure of model-data misfit and boundary condition adjustment. A "consistency criterion" determines the optimal trade-off between the two. The criterion is based on a measure of plausibility for the inverse solution. The "consistent" inverse solution reproduces 56% of the average squared variation in the data. The local-wind-driven flow alone accounts for half of the model skill. The other half is attributable to remotely forced flows from the Gulf of Maine. The unexplained 44% comes from measurement errors and model errors that are not accounted for in the analysis. 

  13. Crustal velocity structure of central Gansu Province from regional seismic waveform inversion using firework algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yanyang; Wang, Yanbin; Zhang, Yuansheng

    2017-04-01

    The firework algorithm (FWA) is a novel swarm intelligence-based method recently proposed for the optimization of multi-parameter, nonlinear functions. Numerical waveform inversion experiments using a synthetic model show that the FWA performs well in both solution quality and efficiency. We apply the FWA in this study to crustal velocity structure inversion using regional seismic waveform data of central Gansu on the northeastern margin of the Qinghai-Tibet plateau. Seismograms recorded from the moment magnitude (Mw) 5.4 Minxian earthquake enable us to obtain an average crustal velocity model for this region. We initially carried out a series of FWA robustness tests in regional waveform inversion at the same earthquake and station positions across the study region, inverting two velocity structure models, with and without a low-velocity crustal layer; the accuracy of our average inversion results and their standard deviations reveal the advantages of the FWA for the inversion of regional seismic waveforms. We applied the FWA across our study area using three-component waveform data recorded by nine broadband permanent seismic stations with epicentral distances ranging between 146 and 437 km. These inversion results show that the average thickness of the crust in this region is 46.75 km, while the thicknesses of the sedimentary layer and the upper, middle, and lower crust are 3.15, 15.69, 13.08, and 14.83 km, respectively. Results also show that the P-wave velocities of these layers and the upper mantle are 4.47, 6.07, 6.12, 6.87, and 8.18 km/s, respectively.
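
    A heavily simplified firework-style search loop is sketched below on a toy two-parameter misfit: better "fireworks" receive more "sparks" within a smaller explosion amplitude, and the best points seed the next generation. The spark counts, amplitudes, and elitist selection are assumptions for the illustration and omit the Gaussian sparks and distance-based selection of the full FWA.

```python
# Schematic firework-style stochastic search on a toy misfit function.
import numpy as np

rng = np.random.default_rng(0)
cost = lambda x: np.sum((x - np.array([1.0, -2.0])) ** 2)   # toy misfit

fireworks = rng.uniform(-5, 5, size=(5, 2))
for _ in range(60):
    fitness = np.array([cost(fw) for fw in fireworks])
    rank = fitness.argsort().argsort()              # 0 = best firework
    population = list(fireworks)
    for fw, r in zip(fireworks, rank):
        n_sparks = 8 - r                            # more sparks for better fireworks
        amplitude = 0.2 + 0.4 * r                   # smaller amplitude for better ones
        for _ in range(n_sparks):
            population.append(fw + rng.uniform(-amplitude, amplitude, size=2))
    population = np.array(population)
    order = np.argsort([cost(p) for p in population])
    fireworks = population[order[:5]]               # keep the 5 best as new fireworks
print("best model found:", fireworks[0], "cost:", cost(fireworks[0]))
```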

  14. Dynamic Shape Reconstruction of Three-Dimensional Frame Structures Using the Inverse Finite Element Method

    NASA Technical Reports Server (NTRS)

    Gherlone, Marco; Cerracchio, Priscilla; Mattone, Massimiliano; Di Sciuva, Marco; Tessler, Alexander

    2011-01-01

    A robust and efficient computational method for reconstructing the three-dimensional displacement field of truss, beam, and frame structures, using measured surface-strain data, is presented. Known as shape sensing, this inverse problem has important implications for real-time actuation and control of smart structures, and for monitoring of structural integrity. The present formulation, based on the inverse Finite Element Method (iFEM), uses a least-squares variational principle involving strain measures of Timoshenko theory for stretching, torsion, bending, and transverse shear. Two inverse-frame finite elements are derived using interdependent interpolations whose interior degrees-of-freedom are condensed out at the element level. In addition, relationships between the order of kinematic-element interpolations and the number of required strain gauges are established. As an example problem, a thin-walled, circular cross-section cantilevered beam subjected to harmonic excitations in the presence of structural damping is modeled using iFEM, where, to simulate strain-gauge values and to provide reference displacements, a high-fidelity MSC/NASTRAN shell finite element model is used. Examples of low- and high-frequency dynamic motion are analyzed and the solution accuracy is examined with respect to various levels of discretization and the number of strain gauges.
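
    A schematic of the least-squares functional that an iFEM-type formulation minimizes (our simplified notation, not the paper's exact expressions): \(\Phi(\mathbf{u}) = \sum_i w_i \,\lVert e_i(\mathbf{u}) - e_i^{\,\varepsilon}\rVert^2\), where \(e_i(\mathbf{u})\) are the section strain measures (stretching, torsion, bending, transverse shear) written in terms of the element displacement interpolations, \(e_i^{\,\varepsilon}\) are their experimentally measured counterparts, and minimization over the nodal degrees of freedom yields the reconstructed displacement field.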

  15. Preparation and Optical Properties of Spherical Inverse Opals by Liquid Phase Deposition Using Spherical Colloidal Crystals

    NASA Astrophysics Data System (ADS)

    Aoi, Y.; Tominaga, T.

    2013-03-01

    Titanium dioxide (TiO2) inverse opals in spherical shape were prepared by liquid phase deposition (LPD) using spherical colloidal crystals as templates. Spherical colloidal crystals were produced by ink-jet drying technique. Aqueous emulsion droplets that contain polystyrene latex particles were ejected into air and dried. Closely packed colloidal crystals with spherical shape were obtained. The obtained spherical colloidal crystals were used as templates for the LPD. The templates were dispersed in the deposition solution of the LPD, i.e. a mixed solution of ammonium hexafluorotitanate and boric acid and reacted for 4 h at 30 °C. After the LPD process, the interstitial spaces of the spherical colloidal crystals were completely filled with titanium oxide. Subsequent heat treatment resulted in removal of templates and spherical titanium dioxide inverse opals. The spherical shape of the template was retained. SEM observations indicated that the periodic ordered voids were surrounded by titanium dioxide. The optical reflectance spectra indicated that the optical properties of the spherical titanium dioxide inverse opals were due to Bragg diffractions from the ordered structure. Filling in the voids of the inverse opals with different solvents caused remarkable changes in the reflectance peak.

  16. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Guthier, C.; Aschenbrenner, K. P.; Buergy, D.; Ehmann, M.; Wenz, F.; Hesser, J. W.

    2015-03-01

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.
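
    As a rough illustration of the matching-pursuit idea invoked above, the sketch below runs orthogonal matching pursuit on a random, hypothetical dose-influence matrix; it is not the clinical solver or the TG-43 dose engine, only the greedy sparse-selection pattern that underlies this class of solvers.

    ```python
    import numpy as np

    # Orthogonal matching pursuit sketch for a sparse problem min ||D x - d||
    # with few active columns (hypothetical influence matrix D and target d).

    def omp(D, d, n_nonzero):
        residual = d.copy()
        support, x = [], np.zeros(D.shape[1])
        for _ in range(n_nonzero):
            # Pick the column most correlated with the current residual.
            j = int(np.argmax(np.abs(D.T @ residual)))
            if j not in support:
                support.append(j)
            # Re-fit all active coefficients by least squares on the support.
            coef, *_ = np.linalg.lstsq(D[:, support], d, rcond=None)
            x[:] = 0.0
            x[support] = coef
            residual = d - D @ x
        return x

    rng = np.random.default_rng(1)
    D = rng.normal(size=(200, 500))          # candidate seed/dwell contributions
    x_true = np.zeros(500)
    x_true[[10, 99, 321]] = [2.0, 1.5, 3.0]
    d = D @ x_true
    print(np.flatnonzero(omp(D, d, 3)))      # recovers the three active columns
    ```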

  17. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning.

    PubMed

    Guthier, C; Aschenbrenner, K P; Buergy, D; Ehmann, M; Wenz, F; Hesser, J W

    2015-03-21

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.

  18. FAST TRACK PAPER: Non-iterative multiple-attenuation methods: linear inverse solutions to non-linear inverse problems - II. BMG approximation

    NASA Astrophysics Data System (ADS)

    Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing

    2004-12-01

    The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution or F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate. These assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data. We showed in that paper that a linear and non-iterative solution to the problem of attenuating free-surface multiples which is as accurate as iterative non-linear solutions can be constructed for OBS data. We here present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.

  19. The boundary element method applied to 3D magneto-electro-elastic dynamic problems

    NASA Astrophysics Data System (ADS)

    Igumnov, L. A.; Markov, I. P.; Kuznetsov, Iu A.

    2017-11-01

    Owing to their coupling properties, magneto-electro-elastic materials have a wide range of applications. They exhibit general anisotropic behaviour. Three-dimensional transient analyses of magneto-electro-elastic solids can hardly be found in the literature. A 3-D direct boundary element formulation based on the weakly singular boundary integral equations in the Laplace domain is presented in this work for solving dynamic linear magneto-electro-elastic problems. Integral expressions of the three-dimensional fundamental solutions are employed. Spatial discretization is based on a collocation method with mixed boundary elements. The convolution quadrature method is used as a numerical inverse Laplace transform scheme to obtain time domain solutions. Numerical examples are provided to illustrate the capability of the proposed approach to treat highly dynamic problems.

  20. An Extended Trajectory Mechanics Approach for Calculating the Path of a Pressure Transient: Derivation and Illustration

    NASA Astrophysics Data System (ADS)

    Vasco, D. W.

    2018-04-01

    Following an approach used in quantum dynamics, an exponential representation of the hydraulic head transforms the diffusion equation governing pressure propagation into an equivalent set of ordinary differential equations. Using a reservoir simulator to determine one set of dependent variables leaves a reduced set of equations for the path of a pressure transient. Unlike the current approach for computing the path of a transient, based on a high-frequency asymptotic solution, the trajectories resulting from this new formulation are valid for arbitrary spatial variations in aquifer properties. For a medium containing interfaces and layers with sharp boundaries, the trajectory mechanics approach produces paths that are compatible with travel time fields produced by a numerical simulator, while the asymptotic solution produces paths that bend too strongly into high permeability regions. The breakdown of the conventional asymptotic solution, due to the presence of sharp boundaries, has implications for model parameter sensitivity calculations and the solution of the inverse problem. For example, near an abrupt boundary, trajectories based on the asymptotic approach deviate significantly from regions of high sensitivity observed in numerical computations. In contrast, paths based on the new trajectory mechanics approach coincide with regions of maximum sensitivity to permeability changes.

  1. Workflows for Full Waveform Inversions

    NASA Astrophysics Data System (ADS)

    Boehm, Christian; Krischer, Lion; Afanasiev, Michael; van Driel, Martin; May, Dave A.; Rietmann, Max; Fichtner, Andreas

    2017-04-01

    Despite many theoretical advances and the increasing availability of high-performance computing clusters, full seismic waveform inversions still face considerable challenges regarding data and workflow management. While the community has access to solvers which can harness modern heterogeneous computing architectures, the computational bottleneck has fallen to these often manpower-bounded issues that need to be overcome to facilitate further progress. Modern inversions involve huge amounts of data and require a tight integration between numerical PDE solvers, data acquisition and processing systems, nonlinear optimization libraries, and job orchestration frameworks. To this end we created a set of libraries and applications revolving around Salvus (http://salvus.io), a novel software package designed to solve large-scale full waveform inverse problems. This presentation focuses on solving passive source seismic full waveform inversions from local to global scales with Salvus. We discuss (i) design choices for the aforementioned components required for full waveform modeling and inversion, (ii) their implementation in the Salvus framework, and (iii) how it is all tied together by a usable workflow system. We combine state-of-the-art algorithms ranging from high-order finite-element solutions of the wave equation to quasi-Newton optimization algorithms using trust-region methods that can handle inexact derivatives. All is steered by an automated interactive graph-based workflow framework capable of orchestrating all necessary pieces. This naturally facilitates the creation of new Earth models and hopefully sparks new scientific insights. Additionally, and even more importantly, it enhances reproducibility and reliability of the final results.

  2. Parametrization study of the land multiparameter VTI elastic waveform inversion

    NASA Astrophysics Data System (ADS)

    He, W.; Plessix, R.-É.; Singh, S.

    2018-06-01

    Multiparameter inversion of seismic data remains challenging due to the trade-off between the different elastic parameters and the non-uniqueness of the solution. The sensitivity of the seismic data to a given subsurface elastic parameter depends on the source and receiver ray/wave path orientations at the subsurface point. In a high-frequency approximation, this is commonly analysed through the study of the radiation patterns that indicate the sensitivity of each parameter versus the incoming (from the source) and outgoing (to the receiver) angles. In practice, this means that the inversion result becomes sensitive to the choice of parametrization, notably because the null-space of the inversion depends on this choice. We can use a least-overlapping parametrization that minimizes the overlaps between the radiation patterns, in which case each parameter is only sensitive in a restricted angle domain, or an overlapping parametrization that contains a parameter sensitive to all angles, in which case overlaps between the radiation patterns occur. Considering a multiparameter inversion in an elastic vertically transverse isotropic medium and a complex land geological setting, we show that the inversion with the least-overlapping parametrization gives less satisfactory results than with the overlapping parametrization. The difficulties come from the complex wave paths that make it difficult to predict the areas of sensitivity of each parameter. This shows that the parametrization choice should not only be based on the radiation pattern analysis but also on the angular coverage at each subsurface point, which depends on the geology and the acquisition layout.

  3. New Analysis of Solute Drag in AA5754 by Precise Determination of Point Defect Generation and the Orowan Relation

    NASA Astrophysics Data System (ADS)

    Diak, Brad J.; Penlington, Alex; Saimoto, Shig

    Serrated deformation in Al-Mg alloys creates problems that affect consumer product acceptability. This effect is usually attributed to the Portevin-LeChâtelier (PLC) effect. In this study the inverse PLC effect due to solute drag on moving dislocations is examined in AA5754. The drag mechanism depends on the diffusivity of the solute, which is in turn dependent on the point defect evolution during deformation. Experimental determination of the parabolic James-Barnett drag profile by strain rate change experiments indicates that the peak stress is centered at 1.5×10^-9 m/s, which requires a mechanical formation energy for vacancies of 0.4 eV/at. A new slip-based constitutive relation was used to determine the evolution of the vacancy volume fraction with strain, which is greater than the volume fraction of vacancies predicted by the solute drag profile.

  4. A Solution Framework for Environmental Characterization Problems

    EPA Science Inventory

    This paper describes experiences developing a grid-enabled framework for solving environmental inverse problems. The solution approach taken here couples environmental simulation models with global search methods and requires readily available computational resources of the grid ...

  5. Full-waveform inversion of GPR data for civil engineering applications

    NASA Astrophysics Data System (ADS)

    van der Kruk, Jan; Kalogeropoulos, Alexis; Hugenschmidt, Johannes; Klotzsche, Anja; Busch, Sebastian; Vereecken, Harry

    2014-05-01

    Conventional GPR ray-based techniques are often limited in their capability to image complex structures due to the underlying approximations. With increased computational power, it is becoming easier to use modeling and inversion tools that explicitly take into account the detailed electromagnetic wave propagation characteristics. In this way, new civil engineering application avenues are opening up that enable an improved high resolution imaging of quantitative medium properties. In this contribution, we show recent developments that enable the full-waveform inversion of off-ground, on-ground and crosshole GPR data. For a successful inversion, a proper start model must be used that generates synthetic data matching the measured data to within half a wavelength. In addition, the GPR system must be calibrated such that an effective wavelet is obtained that encompasses the complexity of the GPR source and receiver antennas. Simple geometries such as horizontal layers can be described with a limited number of model parameters, which enables the use of a combined global and local search using the Simplex search algorithm. This approach has been implemented for the full-waveform inversion of off-ground and on-ground GPR data measured over horizontally layered media. In this way, an accurate 3D frequency domain forward model of Maxwell's equations can be used where the integral representation of the electric field is numerically evaluated. The full-waveform inversion (FWI) for a large number of unknowns uses gradient-based optimization methods, where a 3D to 2D conversion is used to apply this method to experimental data. Off-ground GPR data, measured over homogeneous concrete specimens, were inverted using the full-waveform inversion. In contrast to traditional ray-based techniques, we were able to obtain quantitative values for the permittivity and conductivity and in this way distinguish between moisture and chloride effects. For increasing chloride content, increasing frequency-dependent conductivity values were obtained. The off-ground full-waveform inversion was extended to invert for positive and negative gradients in conductivity, and the conductivity gradient direction could be correctly identified. Experimental specimens containing gradients were generated by exposing a concrete slab to controlled wetting-drying cycles using a saline solution. Full-waveform inversion of the measured data correctly identified the conductivity gradient direction, which was confirmed by destructive analysis. On-ground CMP GPR data measured over a concrete layer overlying a metal plate show interfering multiple reflections, which indicates that the structure acts as a waveguide. Calculation of the phase-velocity spectrum shows the presence of several higher order modes. Whereas the dispersion inversion returns the thickness and layer height, the full-waveform inversion was also able to estimate quantitative conductivity values. This abstract is a contribution to COST Action TU1208.
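
    For the few-parameter layered case mentioned above, a local simplex (Nelder-Mead) search is straightforward to set up. The sketch below fits three layer parameters to a toy response; forward() is a hypothetical stand-in for a full-waveform GPR forward model, and the parameter values are illustrative only.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Sketch of a simplex (Nelder-Mead) search over a small layered-model
    # parameter vector; forward() is a placeholder, not a Maxwell solver.

    def forward(params, t):
        eps_r, sigma, thickness = params
        # Toy response: a damped oscillation whose delay scales with sqrt(eps_r).
        delay = 2.0 * thickness * np.sqrt(eps_r) / 0.3      # two-way time, ns
        return np.exp(-sigma * t) * np.cos(2.0 * np.pi * (t - delay))

    t = np.linspace(0.0, 20.0, 400)                          # time axis in ns
    observed = forward([6.0, 0.05, 0.25], t)                 # synthetic "data"

    def misfit(params):
        return float(np.sum((forward(params, t) - observed) ** 2))

    start = [5.0, 0.08, 0.20]            # start model must be close enough
    result = minimize(misfit, start, method="Nelder-Mead")
    print(result.x)
    ```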

  6. Active and Passive Hydrologic Tomographic Surveys:A Revolution in Hydrology (Invited)

    NASA Astrophysics Data System (ADS)

    Yeh, T. J.

    2013-12-01

    Mathematical forward or inverse problems of flow through geological media always have unique solutions if the necessary conditions are given. Unique mathematical solutions to forward or inverse modeling of field problems are, however, always uncertain (an infinite number of possibilities exist) for many reasons. These include non-representativeness of the governing equations, inaccurate necessary conditions, multi-scale heterogeneity, scale discrepancies between observation and model, noise, and others. Conditional stochastic approaches, which derive the unbiased solution and quantify the solution uncertainty, are therefore most appropriate for forward and inverse modeling of hydrological processes. Conditioning using non-redundant data sets reduces uncertainty. In this presentation, we explain non-redundant data sets in cross-hole aquifer tests, and demonstrate that an active hydraulic tomographic survey (using man-made excitations) is a cost-effective approach to collect the same type of, but non-redundant, data sets for reducing uncertainty in the inverse modeling. We subsequently show that including flux measurements (a piece of non-redundant data) collected in the same well setup as in hydraulic tomography improves the estimated hydraulic conductivity field. We finally conclude with examples and propositions regarding how to collect and analyze data intelligently by exploiting natural recurrent events (river stage fluctuations, earthquakes, lightning, etc.) as energy sources for basin-scale passive tomographic surveys. The development of information fusion technologies that integrate traditional point measurements and active/passive hydrogeophysical tomographic surveys, as well as advances in sensor, computing, and information technologies, may ultimately advance our capability of characterizing groundwater basins to achieve resolution far beyond the reach of current science and technology.

  7. PREFACE: The Second International Conference on Inverse Problems: Recent Theoretical Developments and Numerical Approaches

    NASA Astrophysics Data System (ADS)

    Cheng, Jin; Hon, Yiu-Chung; Seo, Jin Keun; Yamamoto, Masahiro

    2005-01-01

    The Second International Conference on Inverse Problems: Recent Theoretical Developments and Numerical Approaches was held at Fudan University, Shanghai from 16-21 June 2004. The first conference in this series was held at the City University of Hong Kong in January 2002 and it was agreed to hold the conference once every two years in a Pan-Pacific Asian country. The next conference is scheduled to be held at Hokkaido University, Sapporo, Japan in July 2006. The purpose of this series of biennial conferences is to establish and develop constant international collaboration, especially among the Pan-Pacific Asian countries. In recent decades, interest in inverse problems has been flourishing all over the globe because of both the theoretical interest and practical requirements. In particular, in Asian countries, one is witnessing remarkable new trends of research in inverse problems as well as the participation of many young talents. Considering these trends, the second conference was organized with the chairperson Professor Li Tat-tsien (Fudan University), in order to provide forums for developing research cooperation and to promote activities in the field of inverse problems. Because solutions to inverse problems are needed in various applied fields, we entertained a total of 92 participants at the second conference and arranged various talks which ranged from mathematical analyses to solutions of concrete inverse problems in the real world. This volume contains 18 selected papers, all of which have undergone peer review. The 18 papers are classified as follows: Surveys: four papers give reviews of specific inverse problems. Theoretical aspects: six papers investigate the uniqueness, stability, and reconstruction schemes. Numerical methods: four papers devise new numerical methods and their applications to inverse problems. Solutions to applied inverse problems: four papers discuss concrete inverse problems such as scattering problems and inverse problems in atmospheric sciences and oceanography. Last but not least is our gratitude. As editors we would like to express our sincere thanks to all the plenary and invited speakers, the members of the International Scientific Committee and the Advisory Board for the success of the conference, which has given rise to this present volume of selected papers. We would also like to thank Mr Wang Yanbo, Miss Wan Xiqiong and the graduate students at Fudan University for their effective work to make this conference a success. The conference was financially supported by the NFS of China, the Mathematical Center of Ministry of Education of China, E-Institutes of Shanghai Municipal Education Commission (No E03004) and Fudan University, Grant 15340027 from the Japan Society for the Promotion of Science, and Grant 15654015 from the Ministry of Education, Cultures, Sports and Technology.

  8. Analytical solutions for efficient interpretation of single-well push-pull tracer tests

    NASA Astrophysics Data System (ADS)

    Huang, Junqi; Christ, John A.; Goltz, Mark N.

    2010-08-01

    Single-well push-pull tracer tests have been used to characterize the extent, fate, and transport of subsurface contamination. Analytical solutions provide one alternative for interpreting test results. In this work, an exact analytical solution to two-dimensional equations describing the governing processes acting on a dissolved compound during a modified push-pull test (advection, longitudinal and transverse dispersion, first-order decay, and rate-limited sorption/partitioning in steady, divergent, and convergent flow fields) is developed. The coupling of this solution with inverse modeling to estimate aquifer parameters provides an efficient methodology for subsurface characterization. Synthetic data for single-well push-pull tests are employed to demonstrate the utility of the solution for determining (1) estimates of aquifer longitudinal and transverse dispersivities, (2) sorption distribution coefficients and rate constants, and (3) non-aqueous phase liquid (NAPL) saturations. Employment of the solution to estimate NAPL saturations based on partitioning and non-partitioning tracers is designed to overcome limitations of previous efforts by including rate-limited mass transfer. This solution provides a new tool for use by practitioners when interpreting single-well push-pull test results.
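
    A minimal sketch of the "analytical solution plus inverse modelling" workflow described above, under strong simplifying assumptions: a hypothetical lumped exponential breakthrough model stands in for the paper's exact two-dimensional solution, and effective parameters are estimated by nonlinear least squares. Times and concentrations below are illustrative, not measured values.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Fit a simplified tracer decay model to extraction-phase concentrations.

    def model(t, c0, k):
        # Hypothetical lumped model: relative concentration decays at rate k.
        return c0 * np.exp(-k * t)

    t_obs = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])       # hours since pull start
    c_obs = np.array([0.92, 0.84, 0.70, 0.50, 0.26, 0.07])  # relative concentration

    popt, pcov = curve_fit(model, t_obs, c_obs, p0=[1.0, 0.1])
    perr = np.sqrt(np.diag(pcov))                            # 1-sigma uncertainties
    print(popt, perr)
    ```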

  9. Bayesian seismic tomography by parallel interacting Markov chains

    NASA Astrophysics Data System (ADS)

    Gesret, Alexandrine; Bottero, Alexis; Romary, Thomas; Noble, Mark; Desassis, Nicolas

    2014-05-01

    The velocity field estimated by first arrival traveltime tomography is commonly used as a starting point for further seismological, mineralogical, tectonic or similar analysis. In order to interpret the results quantitatively, the tomography uncertainty values as well as their spatial distribution are required. The estimated velocity model is obtained through inverse modeling by minimizing an objective function that compares observed and computed traveltimes. This step is often performed by gradient-based optimization algorithms. The major drawback of such local optimization schemes, beyond the possibility of being trapped in a local minimum, is that they do not account for the multiple possible solutions of the inverse problem. They are therefore unable to assess the uncertainties linked to the solution. Within a Bayesian (probabilistic) framework, solving the tomography inverse problem aims at estimating the posterior probability density function of the velocity model using a global sampling algorithm. Markov chain Monte Carlo (MCMC) methods are known to produce samples of virtually any distribution. In such a Bayesian inversion, the total number of simulations we can afford is strongly tied to the computational cost of the forward model. Although fast algorithms have been recently developed for computing first arrival traveltimes of seismic waves, a complete exploration of the posterior distribution of the velocity model is rarely feasible, especially when it is high dimensional and/or multimodal. In the latter case, the chain may even stay stuck in one of the modes. In order to improve the mixing properties of a classical single MCMC chain, we propose to make several Markov chains at different temperatures interact. This method can make efficient use of large CPU clusters without increasing the global computational cost with respect to classical MCMC and is therefore particularly suited for Bayesian inversion. The exchanges between the chains allow a precise sampling of the high probability zones of the model space while preventing the chains from getting stuck in a single probability maximum. This approach thus supplies a robust way to analyze the tomography imaging uncertainties. The interacting MCMC approach is illustrated on two synthetic examples of tomography of calibration shots such as encountered in induced microseismic studies. In the second application, a wavelet-based model parameterization is presented that significantly reduces the dimension of the problem, thus making the algorithm efficient even for a complex velocity model.
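
    The sketch below shows the interacting-chains idea (commonly called parallel tempering) on a toy two-mode 1-D posterior: chains at higher temperatures explore widely, and occasional state swaps let the target-temperature chain visit both modes. The density and tuning values are placeholders, not the paper's tomography posterior.

    ```python
    import numpy as np

    # Parallel tempering sketch: several Metropolis chains at different
    # temperatures with neighbour swaps; the posterior is a toy mixture.

    def log_post(x):
        return np.logaddexp(-0.5 * ((x - 2.0) / 0.3) ** 2,
                            -0.5 * ((x + 2.0) / 0.3) ** 2)

    def parallel_tempering(n_steps=20000, temps=(1.0, 2.0, 4.0, 8.0), seed=0):
        rng = np.random.default_rng(seed)
        x = np.zeros(len(temps))                 # one chain per temperature
        samples = []
        for _ in range(n_steps):
            for i, T in enumerate(temps):        # within-chain Metropolis updates
                prop = x[i] + rng.normal(scale=0.5)
                if np.log(rng.uniform()) < (log_post(prop) - log_post(x[i])) / T:
                    x[i] = prop
            i = rng.integers(len(temps) - 1)     # propose a swap between neighbours
            a = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (log_post(x[i + 1]) - log_post(x[i]))
            if np.log(rng.uniform()) < a:
                x[i], x[i + 1] = x[i + 1], x[i]
            samples.append(x[0])                 # keep only the target (T = 1) chain
        return np.array(samples)

    samples = parallel_tempering()
    print(samples.mean(), (samples > 0).mean())  # both modes should be visited
    ```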

  10. Goal driven kinematic simulation of flexible arm robot for space station missions

    NASA Technical Reports Server (NTRS)

    Janssen, P.; Choudry, A.

    1987-01-01

    Flexible arms offer a great degree of flexibility in maneuvering in the space environment. The problem of transporting an astronaut for extra-vehicular activity using a space station based flexible arm robot was studied. Inverse kinematic solutions of the multilink structure were developed. The technique is goal driven and can support decision making for configuration selection as required for stability and obstacle avoidance. Details of this technique and results are given.

  11. Optimization Parameters of Air-Conditioning and Heat Insulation Systems of Pressurized Cabins of Long-Distance Airplanes

    NASA Astrophysics Data System (ADS)

    Gusev, Sergey A.; Nikolaev, Vladimir N.

    2018-01-01

    A method for determining the thermal condition of an aircraft compartment, based on a mathematical model of the compartment's thermal state, was developed. Solution techniques were developed for the direct and inverse heat-exchange problems and for determining confidence intervals of the parametric identification estimates. The required performance of the air-conditioning and ventilation systems and the required heat insulation depth of the crew and passenger cabins were obtained.

  12. The Role of Eigensolutions in Nonlinear Inverse Cavity-Flow-Theory. Revision.

    DTIC Science & Technology

    1985-06-10

    The method of Levi Civita is applied to an isolated fully cavitating body at zero cavitation number and adapted to the solution of the inverse... The problem is not thought to present much of a challenge at zero cavitation number; in this case, the classical method of Levi Civita [7] can be...

  13. Review of the inverse scattering problem at fixed energy in quantum mechanics

    NASA Technical Reports Server (NTRS)

    Sabatier, P. C.

    1972-01-01

    Methods of solution of the inverse scattering problem at fixed energy in quantum mechanics are presented. Scattering experiments of a beam of particles at a nonrelativistic energy by a target made up of particles are analyzed. The Schroedinger equation is used to develop the quantum mechanical description of the system, with the interaction represented by one of several functions depending on the relative distance of the particles. The inverse problem is the construction of the potentials from experimental measurements.

  14. Encapsulation of nanoclusters in dried gel materials via an inverse micelle/sol gel synthesis

    DOEpatents

    Martino, A.; Yamanaka, S.A.; Kawola, J.S.; Showalter, S.K.; Loy, D.A.

    1998-09-29

    A dried gel material sterically entrapping nanoclusters of a catalytically active material, and a process to make the material via an inverse micelle/sol-gel synthesis, are disclosed. A surfactant is mixed with an apolar solvent to form an inverse micelle solution. A salt of a catalytically active material, such as gold chloride, is added along with a silica gel precursor to the solution to form a mixture. A reducing agent is then added to the mixture to reduce the gold in the gold chloride to atomic gold, forming the nanoclusters, together with a condensing agent to form the gel that sterically entraps the nanoclusters. The nanoclusters typically average 5-10 nm in diameter with a monodisperse size distribution. 1 fig.

  15. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

    Estimating the rupture pattern for large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first-order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude and a long-period character, the W phase is regularly used to estimate point source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of three-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and the regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating it based on the residual errors of consecutive inversions. A covariance matrix for the parameters is then computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of their slip distributions. Also, reliable solutions are generally obtained with data in a 30-minute window following the origin time, suggesting that a real-time system could obtain solutions in less than one hour after the origin time.
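
    A minimal sketch of Tikhonov regularization with the regularization parameter chosen by the discrepancy principle over a grid, the strategy named above. The operator G, data d, and noise level are random, hypothetical stand-ins for the linearized multiple-time-window slip problem.

    ```python
    import numpy as np

    # Tikhonov-regularized least squares with a discrepancy-principle grid search.

    def tikhonov_solve(G, d, lam):
        n = G.shape[1]
        return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)

    def discrepancy_grid_search(G, d, noise_level, lams):
        target = noise_level * np.sqrt(len(d))   # expected residual norm
        best = None
        for lam in lams:
            m = tikhonov_solve(G, d, lam)
            misfit = np.linalg.norm(d - G @ m)
            # Keep the largest lambda whose residual still matches the noise.
            if misfit <= target and (best is None or lam > best[0]):
                best = (lam, m)
        return best if best is not None else (lams[0], tikhonov_solve(G, d, lams[0]))

    rng = np.random.default_rng(0)
    G = rng.normal(size=(120, 40))
    m_true = np.zeros(40)
    m_true[15:25] = 1.0
    noise = 0.1
    d = G @ m_true + noise * rng.normal(size=120)
    lam, m_est = discrepancy_grid_search(G, d, noise, np.logspace(-3, 2, 30))
    print(lam, np.linalg.norm(m_est - m_true))
    ```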

  16. Joint Model and Parameter Dimension Reduction for Bayesian Inversion Applied to an Ice Sheet Flow Problem

    NASA Astrophysics Data System (ADS)

    Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.

    2016-12-01

    Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed using ``snapshots'' from the parameter reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.

  17. Analytically based forward and inverse models of fluvial landscape evolution during temporally continuous climatic and tectonic variations

    NASA Astrophysics Data System (ADS)

    Goren, Liran; Petit, Carole

    2017-04-01

    Fluvial channels respond to changing tectonic and climatic conditions by adjusting their patterns of erosion and relief. It is therefore expected that by examining these patterns, we can infer the tectonic and climatic conditions that shaped the channels. However, the potential interference between climatic and tectonic signals complicates this inference. Within the framework of the stream power model that describes incision rate of mountainous bedrock rivers, climate variability has two effects: it influences the erosive power of the river, causing local slope change, and it changes the fluvial response time that controls the rate at which tectonically and climatically induced slope breaks are communicated upstream. Because of this dual role, the fluvial response time during continuous climate change has so far been elusive, which hinders our understanding of environmental signal propagation and preservation in the fluvial topography. An analytic solution of the stream power model during general tectonic and climatic histories gives rise to a new definition of the fluvial response time. The analytic solution offers accurate predictions for landscape evolution that are hard to achieve with classical numerical schemes and thus can be used to validate and evaluate the accuracy of numerical landscape evolution models. The analytic solution together with the new definition of the fluvial response time allow inferring either the tectonic history or the climatic history from river long profiles by using simple linear inversion schemes. Analytic study of landscape evolution during periodic climate change reveals that high frequency (10-100 kyr) climatic oscillations with respect to the response time, such as Milankovitch cycles, are not expected to leave significant fingerprints in the upstream reaches of fluvial channels. Linear inversion schemes are applied to the Tinee river tributaries in the southern French Alps, where tributary long profiles are used to recover the incision rate history of the Tinee main trunk. Inversion results show periodic, high incision rate pulses, which are correlated with interglacial episodes. Similar incision rate histories are recovered for the past 100 kyr when assuming constant climatic conditions or periodic climatic oscillations, in agreement with theoretical predictions.

  18. Inverse axial mounting stiffness design for lithographic projection lenses.

    PubMed

    Wen-quan, Yuan; Hong-bo, Shang; Wei, Zhang

    2014-09-01

    In order to balance the axial mounting stiffness of lithographic projection lenses against image quality under dynamic working conditions, a simple inverse axial mounting stiffness design method is developed in this article. Imaging quality deterioration at the wafer under different axial vibration levels is analyzed. The desired image quality can be determined according to practical requirements, and the axial vibrational tolerance of each lens is solved with the damped least-squares method. Based on adaptive interval adjustment, a binary search algorithm, and the finite element method, the axial mounting stiffness of each lens can be searched over a large interval and converges to a moderate numerical solution that brings the axial vibrational amplitude of the lens to its axial vibrational tolerance. Model simulation is carried out to validate the effectiveness of the method.
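
    The binary-search step can be pictured as below: bisect a stiffness interval until the predicted vibration amplitude of a lens meets its tolerance. The single-degree-of-freedom amplitude model is a hypothetical stand-in for the finite-element computation used in the paper.

    ```python
    import numpy as np

    # Bisection on axial stiffness against a vibration-amplitude tolerance.

    def vibration_amplitude(stiffness, mass=2.0, base_accel=1.0, damping=0.02):
        # Toy single-degree-of-freedom estimate: softer mounts vibrate more.
        omega_n = np.sqrt(stiffness / mass)
        return base_accel / (2.0 * damping * omega_n ** 2)

    def solve_stiffness(tolerance, k_lo=1e4, k_hi=1e9, max_iter=60):
        for _ in range(max_iter):
            k_mid = np.sqrt(k_lo * k_hi)                  # bisect in log space
            if vibration_amplitude(k_mid) > tolerance:
                k_lo = k_mid                              # too soft: stiffen
            else:
                k_hi = k_mid                              # tolerance met: soften
            if k_hi / k_lo < 1.001:
                break
        return k_hi                                       # softest stiffness that works

    print(solve_stiffness(tolerance=1e-7))
    ```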

  19. Singular value decomposition for the truncated Hilbert transform

    NASA Astrophysics Data System (ADS)

    Katsevich, A.

    2010-11-01

    Starting from a breakthrough result by Gelfand and Graev, inversion of the Hilbert transform became a very important tool for image reconstruction in tomography. In particular, their result is useful when the tomographic data are truncated and one deals with an interior problem. As was established recently, the interior problem admits a stable and unique solution when some a priori information about the object being scanned is available. The most common approach to solving the interior problem is based on converting it to the Hilbert transform and performing analytic continuation. Depending on what type of tomographic data are available, one gets different Hilbert inversion problems. In this paper, we consider two such problems and establish singular value decomposition for the operators involved. We also propose algorithms for performing analytic continuation.
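
    For reference, the Hilbert transform restricted to an interval (the object analyzed above) can be written, up to sign and normalization conventions, as \((\mathcal{H}f)(x) = \frac{1}{\pi}\,\mathrm{p.v.}\int_{a}^{b}\frac{f(y)}{y-x}\,dy\); the different truncated-data problems correspond to knowing \((\mathcal{H}f)(x)\) only on part of the line, which is what makes the singular value decomposition of the resulting operators nontrivial.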

  20. The novel high-performance 3-D MT inverse solver

    NASA Astrophysics Data System (ADS)

    Kruglyakov, Mikhail; Geraskin, Alexey; Kuvshinov, Alexey

    2016-04-01

    We present a novel, robust, scalable, and fast 3-D magnetotelluric (MT) inverse solver. The solver is written in a multi-language paradigm to make it as efficient, readable and maintainable as possible. Separation-of-concerns and single-responsibility principles guide the implementation of the solver. As a forward modelling engine, a modern scalable solver, extrEMe, based on the contracting integral equation approach, is used. An iterative gradient-type (quasi-Newton) optimization scheme is invoked to search for the (regularized) inverse problem solution, and an adjoint source approach is used to calculate the gradient of the misfit efficiently. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT response, and supports massive parallelization. Moreover, different parallelization strategies implemented in the code allow optimal usage of available computational resources for a given problem statement. To parameterize an inverse domain, the so-called mask parameterization is implemented, which means that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on platforms ranging from modern laptops to the HPC system Piz Daint (the world's 6th-ranked supercomputer), demonstrate practically linear scalability of the code up to thousands of nodes.

  1. Black-hole solutions with scalar hair in Einstein-scalar-Gauss-Bonnet theories

    NASA Astrophysics Data System (ADS)

    Antoniou, G.; Bakopoulos, A.; Kanti, P.

    2018-04-01

    In the context of the Einstein-scalar-Gauss-Bonnet theory, with a general coupling function between the scalar field and the quadratic Gauss-Bonnet term, we investigate the existence of regular black-hole solutions with scalar hair. Based on a previous theoretical analysis, which studied the evasion of the old and novel no-hair theorems, we consider a variety of forms for the coupling function (exponential, even and odd polynomial, inverse polynomial, and logarithmic) that, in conjunction with the profile of the scalar field, satisfy a basic constraint. Our numerical analysis then always leads to families of regular, asymptotically flat black-hole solutions with nontrivial scalar hair. The solution for the scalar field and the profile of the corresponding energy-momentum tensor, depending on the value of the coupling constant, may exhibit a nonmonotonic behavior, an unusual feature that highlights the limitations of the existing no-hair theorems. We also determine and study in detail the scalar charge, horizon area, and entropy of our solutions.

  2. A thin-shock-layer solution for nonequilibrium, inviscid hypersonic flows in earth, Martian, and Venusian atmospheres

    NASA Technical Reports Server (NTRS)

    Grose, W. L.

    1971-01-01

    An approximate inverse solution is presented for the nonequilibrium flow in the inviscid shock layer about a vehicle in hypersonic flight. The method is based upon a thin-shock-layer approximation and has the advantage of being applicable to both subsonic and supersonic regions of the shock layer. The relative simplicity of the method makes it ideally suited for programming on a digital computer with a significant reduction in storage capacity and computing time required by other more exact methods. Comparison of nonequilibrium solutions for an air mixture obtained by the present method is made with solutions obtained by two other methods. Additional cases are presented for entry of spherical nose cones into representative Venusian and Martian atmospheres. A digital computer program written in FORTRAN language is presented that permits an arbitrary gas mixture to be employed in the solution. The effects of vibration, dissociation, recombination, electronic excitation, and ionization are included in the program.

  3. An algorithm for the simultaneous reconstruction of faults and slip fields

    NASA Astrophysics Data System (ADS)

    Volkov, D.

    2017-12-01

    We introduce an algorithm for the simultaneous reconstruction of faults and slip fields on those faults. We define a regularized functional to be minimized for the reconstruction. We prove that the minimum of that functional converges to the unique solution of the related fault inverse problem. Due to inherent uncertainties in measurements, rather than seeking a deterministic solution to the fault inverse problem, we consider a Bayesian approach. The advantage of such an approach is that we obtain a way of quantifying uncertainties as part of our final answer. On the downside, this Bayesian approach leads to a very large computation. To contend with the size of this computation we developed an algorithm for the numerical solution to the stochastic minimization problem which can be easily implemented on a parallel multi-core platform and we discuss techniques to save on computational time. After showing how this algorithm performs on simulated data and assessing the effect of noise, we apply it to measured data. The data was recorded during a slow slip event in Guerrero, Mexico.

  4. Inverse scattering transform and soliton solutions for square matrix nonlinear Schrödinger equations with non-zero boundary conditions

    NASA Astrophysics Data System (ADS)

    Prinari, Barbara; Demontis, Francesco; Li, Sitai; Horikis, Theodoros P.

    2018-04-01

    The inverse scattering transform (IST) with non-zero boundary conditions at infinity is developed for an m × m matrix nonlinear Schrödinger-type equation which, in the case m = 2, has been proposed as a model to describe hyperfine spin F = 1 spinor Bose-Einstein condensates with either repulsive interatomic interactions and anti-ferromagnetic spin-exchange interactions (self-defocusing case), or attractive interatomic interactions and ferromagnetic spin-exchange interactions (self-focusing case). The IST for this system was first presented by Ieda et al. (2007), using a different approach. In our formulation, both the direct and the inverse problems are posed in terms of a suitable uniformization variable which allows the IST to be developed on the standard complex plane, instead of a two-sheeted Riemann surface or the cut plane with discontinuities along the cuts. Analyticity of the scattering eigenfunctions and scattering data, symmetries, properties of the discrete spectrum, and asymptotics are derived. The inverse problem is posed as a Riemann-Hilbert problem for the eigenfunctions, and the reconstruction formula of the potential in terms of eigenfunctions and scattering data is provided. In addition, the general behavior of the soliton solutions is analyzed in detail in the 2 × 2 self-focusing case, including some special solutions not previously discussed in the literature.

  5. Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.

    NASA Astrophysics Data System (ADS)

    Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.

    2016-12-01

    Image reconstruction in electrical resistivity tomography (ERT) is a highly non-linear, sparse, and ill-posed inverse problem. The problem becomes much more severe when dealing with 3-D datasets, which result in large matrices. Conventional gradient-based techniques using L2-norm minimization with some form of regularization can impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that takes advantage of the inherent sparsity of the parameter space in one form or another. If favorable conditions are met, CS has been proven to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open source 3-D resistivity inversion tool using the CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007) with CS as the inverse code. A discrete cosine transform (DCT) was used to induce model sparsity in an orthogonal basis. Two CS-based algorithms, viz. the interior point method and two-step IST, were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS was shown to effectively reconstruct the subsurface image at lower computational cost. This was observed as a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.
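
    The two-step IST scheme is not specified in the abstract; the sketch below reduces the idea to plain iterative soft thresholding with sparsity imposed in a DCT basis. The sensitivity matrix J, data d, and threshold are random, hypothetical stand-ins, not the 3-D ERT operators.

    ```python
    import numpy as np
    from scipy.fft import dct, idct

    # ISTA with DCT-domain sparsity for a linearized problem min ||J m - d||.

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista_dct(J, d, n_iter=200, thresh=0.05):
        step = 1.0 / np.linalg.norm(J, 2) ** 2        # <= 1/L, L = Lipschitz const.
        c = np.zeros(J.shape[1])                      # DCT coefficients of the model
        for _ in range(n_iter):
            m = idct(c, norm="ortho")                 # model in physical space
            grad = J.T @ (J @ m - d)                  # data-misfit gradient
            c = soft(c - step * dct(grad, norm="ortho"), step * thresh)
        return idct(c, norm="ortho")

    rng = np.random.default_rng(0)
    J = rng.normal(size=(80, 120))
    m_true = idct(np.where(np.arange(120) < 6, 1.0, 0.0), norm="ortho")
    d = J @ m_true + 0.01 * rng.normal(size=80)
    print(np.linalg.norm(ista_dct(J, d) - m_true))    # small reconstruction error
    ```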

  6. Limitations of polynomial chaos expansions in the Bayesian solution of inverse problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Fei; Department of Mathematics, University of California, Berkeley; Morzfeld, Matthias, E-mail: mmo@math.lbl.gov

    2015-02-01

    Polynomial chaos expansions are used to reduce the computational cost in the Bayesian solutions of inverse problems by creating a surrogate posterior that can be evaluated inexpensively. We show, by analysis and example, that when the data contain significant information beyond what is assumed in the prior, the surrogate posterior can be very different from the posterior, and the resulting estimates become inaccurate. One can improve the accuracy by adaptively increasing the order of the polynomial chaos, but the cost may increase too fast for this to be cost effective compared to Monte Carlo sampling without a surrogate posterior.

  7. Analytical YORP torques model with an improved temperature distribution function

    NASA Astrophysics Data System (ADS)

    Breiter, S.; Vokrouhlický, D.; Nesvorný, D.

    2010-01-01

    Previous models of the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect relied either on the zero thermal conductivity assumption or on solutions of the heat conduction equations assuming an infinite body size. We present the first YORP solution accounting for a finite size and the non-radial direction of the surface normal vectors in the temperature distribution. The new thermal model implies that the YORP effect on the rotation rate depends on the asteroid's conductivity. It is shown that the effect on small objects does not scale as the inverse square of the diameter, but rather as its inverse first power.

  8. Numerical Inverse Scattering for the Toda Lattice

    NASA Astrophysics Data System (ADS)

    Bilman, Deniz; Trogdon, Thomas

    2017-06-01

    We present a method to compute the inverse scattering transform (IST) for the famed Toda lattice by solving the associated Riemann-Hilbert (RH) problem numerically. Deformations for the RH problem are incorporated so that the IST can be evaluated in O(1) operations for arbitrary points in the ( n, t)-domain, including short- and long-time regimes. No time-stepping is required to compute the solution because ( n, t) appear as parameters in the associated RH problem. The solution of the Toda lattice is computed in long-time asymptotic regions where the asymptotics are not known rigorously.

  9. An eigenvalue approach for the automatic scaling of unknowns in model-based reconstructions: Application to real-time phase-contrast flow MRI.

    PubMed

    Tan, Zhengguo; Hohage, Thorsten; Kalentev, Oleksandr; Joseph, Arun A; Wang, Xiaoqing; Voit, Dirk; Merboldt, K Dietmar; Frahm, Jens

    2017-12-01

    The purpose of this work is to develop an automatic method for the scaling of unknowns in model-based nonlinear inverse reconstructions and to evaluate its application to real-time phase-contrast (RT-PC) flow magnetic resonance imaging (MRI). Model-based MRI reconstructions of parametric maps which describe a physical or physiological function require the solution of a nonlinear inverse problem, because the list of unknowns in the extended MRI signal equation comprises multiple functional parameters and all coil sensitivity profiles. Iterative solutions therefore rely on an appropriate scaling of unknowns to numerically balance partial derivatives and regularization terms. The scaling of unknowns emerges as a self-adjoint and positive-definite matrix which is expressible by its maximal eigenvalue and solved by power iterations. The proposed method is applied to RT-PC flow MRI based on highly undersampled acquisitions. Experimental validations include numerical phantoms providing ground truth and a wide range of human studies in the ascending aorta, carotid arteries, deep veins during muscular exercise and cerebrospinal fluid during deep respiration. For RT-PC flow MRI, model-based reconstructions with automatic scaling not only offer velocity maps with high spatiotemporal acuity and much reduced phase noise, but also ensure fast convergence as well as accurate and precise velocities for all conditions tested, i.e. for different velocity ranges, vessel sizes and the simultaneous presence of signals with velocity aliasing. In summary, the proposed automatic scaling of unknowns in model-based MRI reconstructions yields quantitatively reliable velocities for RT-PC flow MRI in various experimental scenarios. Copyright © 2017 John Wiley & Sons, Ltd.
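
    The maximal-eigenvalue step described above amounts to a power iteration on a self-adjoint, positive-definite operator. The sketch below uses a small random symmetric matrix as a stand-in for the scaling operator assembled from the MRI model.

    ```python
    import numpy as np

    # Power iteration for the largest eigenvalue of a positive-definite operator.

    def max_eigenvalue(apply_A, n, n_iter=200, tol=1e-12, seed=0):
        rng = np.random.default_rng(seed)
        v = rng.normal(size=n)
        v /= np.linalg.norm(v)
        lam = 0.0
        for _ in range(n_iter):
            w = apply_A(v)
            lam_new = float(v @ w)             # Rayleigh quotient (v has unit norm)
            v = w / np.linalg.norm(w)
            if abs(lam_new - lam) <= tol * abs(lam_new):
                break
            lam = lam_new
        return lam_new

    B = np.random.default_rng(1).normal(size=(50, 50))
    A = B @ B.T + 50.0 * np.eye(50)            # self-adjoint and positive definite
    print(max_eigenvalue(lambda x: A @ x, 50), np.linalg.eigvalsh(A).max())
    ```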

  10. Measuring the misfit between seismograms using an optimal transport distance: application to full waveform inversion

    NASA Astrophysics Data System (ADS)

    Métivier, L.; Brossier, R.; Mérigot, Q.; Oudet, E.; Virieux, J.

    2016-04-01

    Full waveform inversion using the conventional L2 distance to measure the misfit between seismograms is known to suffer from cycle skipping. An alternative strategy is proposed in this study, based on a measure of the misfit computed with an optimal transport distance. This measure makes it possible to account for the lateral coherency of events within the seismograms, instead of considering each seismic trace independently, as is generally done in full waveform inversion. The computation of this optimal transport distance relies on a particular mathematical formulation allowing for the non-conservation of the total energy between seismograms. The numerical solution of the optimal transport problem is performed using proximal splitting techniques. Three synthetic case studies are investigated using this strategy: the Marmousi 2 model, the BP 2004 salt model, and the Chevron 2014 benchmark data. The results emphasize interesting properties of the optimal transport distance. The associated misfit function is less prone to cycle skipping. A workflow is designed to reconstruct accurately the salt structures in the BP 2004 model, starting from an initial model containing no information about these structures. A high-resolution P-wave velocity estimation is built from the Chevron 2014 benchmark data, following a frequency continuation strategy. This estimation explains the data accurately. Using the same workflow, full waveform inversion based on the L2 distance converges towards a local minimum. These results yield encouraging perspectives regarding the use of the optimal transport distance for full waveform inversion: the sensitivity to the accuracy of the initial model is reduced, the reconstruction of complex salt structures is made possible, the method is robust to noise, and the interpretation of seismic data dominated by reflections is enhanced.
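
    A simplified 1-D illustration of why transport-based misfits behave better than L2 under time shifts: for non-negative, mass-normalized traces the W1 distance reduces to the L1 distance between cumulative distributions. The paper's misfit uses a dedicated formulation that tolerates signed, non-conserved signals; this sketch only conveys the qualitative behaviour with an assumed Ricker wavelet.

    ```python
    import numpy as np

    # W1 distance between two traces treated as normalized 1-D densities.

    def w1_distance(f, g, dt):
        f = f - f.min(); g = g - g.min()                  # make traces non-negative
        f = f / (f.sum() * dt); g = g / (g.sum() * dt)    # normalize to unit mass
        return float(np.sum(np.abs(np.cumsum(f - g) * dt)) * dt)

    t = np.linspace(0.0, 4.0, 800)
    dt = t[1] - t[0]

    def ricker(t0):
        arg = (np.pi * (t - t0)) ** 2
        return (1.0 - 2.0 * arg) * np.exp(-arg)

    # The transport distance keeps growing with the time shift instead of
    # oscillating, which is what mitigates cycle skipping.
    print([round(w1_distance(ricker(1.0), ricker(1.0 + s), dt), 3)
           for s in (0.1, 0.5, 1.0, 2.0)])
    ```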

  11. 3-D minimum-structure inversion of magnetotelluric data using the finite-element method and tetrahedral grids

    NASA Astrophysics Data System (ADS)

    Jahandari, H.; Farquharson, C. G.

    2017-11-01

    Unstructured grids enable representing arbitrary structures more accurately and with fewer cells compared to regular structured grids. These grids also allow more efficient refinements compared to rectilinear meshes. In this study, tetrahedral grids are used for the inversion of magnetotelluric (MT) data, which allows for the direct inclusion of topography in the model, for constraining an inversion using a wireframe-based geological model and for local refinement at the observation stations. A minimum-structure method with an iterative model-space Gauss-Newton algorithm for optimization is used. An iterative solver is employed for solving the normal system of equations at each Gauss-Newton step, and the sensitivity matrix-vector products that are required by this solver are calculated using pseudo-forward problems. This method alleviates the need to explicitly form the Hessian or Jacobian matrices, which significantly reduces the required computation memory. Forward problems are formulated using an edge-based finite-element approach and a sparse direct solver is used for the solutions. This solver allows saving and re-using the factorization of matrices for similar pseudo-forward problems within a Gauss-Newton iteration, which greatly reduces the computation time. Two examples are presented to show the capability of the algorithm: the first example uses a benchmark model while the second example represents a realistic geological setting with topography and a sulphide deposit. The data that are inverted are the full-tensor impedance and the magnetic transfer function vector. The inversions sufficiently recovered the models and reproduced the data, which shows the effectiveness of unstructured grids for complex and realistic MT inversion scenarios. The first example is also used to demonstrate the computational efficiency of the presented model-space method by comparison with its data-space counterpart.
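
    The matrix-free strategy described above (sensitivity-vector products supplied to an iterative solver instead of an explicit Jacobian or Hessian) can be sketched generically as follows; the functions jvp and jtvp are hypothetical placeholders for the pseudo-forward computations, and conjugate gradients stands in for whichever iterative solver is actually used.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, cg

      def gauss_newton_step(jvp, jtvp, residual, n_model, lam=1e-2):
          # One model-space Gauss-Newton step without forming J explicitly:
          # jvp(v) = J v and jtvp(w) = J^T w are supplied by the caller
          # (in the paper such products come from pseudo-forward problems).
          def normal_matvec(v):
              return jtvp(jvp(v)) + lam * v          # (J^T J + lam I) v
          A = LinearOperator((n_model, n_model), matvec=normal_matvec)
          dm, _ = cg(A, jtvp(residual), maxiter=200)
          return dm

      # Toy check with an explicit matrix standing in for the sensitivities.
      J = np.random.default_rng(0).standard_normal((80, 40))
      r = np.random.default_rng(1).standard_normal(80)
      dm = gauss_newton_step(lambda v: J @ v, lambda w: J.T @ w, r, n_model=40)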

  12. Ice Cores Dating With a New Inverse Method Taking Account of the Flow Modeling Errors

    NASA Astrophysics Data System (ADS)

    Lemieux-Dudon, B.; Parrenin, F.; Blayo, E.

    2007-12-01

    Deep ice cores extracted from Antarctica or Greenland record a wide range of past climatic events. To contribute to the understanding of the Quaternary climate system, the calculation of an accurate depth-age relationship is crucial. Up to now, ice chronologies for deep ice cores estimated with inverse approaches have been based on rather simplified ice-flow models that fail to reproduce flow irregularities and, consequently, to respect all available age markers. We describe in this paper a new inverse method that takes the model uncertainty into account in order to circumvent the restrictions linked to the use of simplified flow models. This method uses first guesses of two physical flow quantities, the ice thinning function and the accumulation rate, and then identifies correction functions for both. We highlight two major benefits of this new method: the ability to respect a large set of observations and, as a consequence, the feasibility of estimating a synchronized common ice chronology for several cores at the same time. The inverse approach relies on a Bayesian framework. To respect the positivity constraint on the correction functions, we assume lognormal probability distributions for the background errors as well as for one particular set of the observation errors. We test this new inversion method on three cores simultaneously (the two EPICA cores, DC and DML, and the Vostok core) and assimilate more than 150 observations (e.g., age markers, stratigraphic links, ...). We analyze the sensitivity of the solution with respect to the background information, especially the prior error covariance matrix. Confidence intervals based on the posterior covariance matrix are estimated for the correction functions and, for the first time, for the overall output chronologies.

  13. Development of an inverse heat conduction model and its application to determination of heat transfer coefficient during casting solidification

    NASA Astrophysics Data System (ADS)

    Zhang, Liqiang; Reilly, Carl; Li, Luoxing; Cockcroft, Steve; Yao, Lu

    2014-07-01

    The interfacial heat transfer coefficient (IHTC) is required for the accurate simulation of heat transfer in castings, especially for near net-shape processes. The large number of factors influencing heat transfer renders quantification by theoretical means a challenge. Likewise, experimental methods applied directly to temperature data collected from castings are also a challenge to interpret because of the transient nature of many casting processes. Inverse methods offer a solution and have been applied successfully to predict the IHTC in many cases. However, most inverse approaches thus far focus on the use of in-mold temperature data, which may be a challenge to obtain in cases where the molds are water-cooled. Methods based on temperature data from the casting have the potential to be used; however, the latent heat released during the solidification of the molten metal complicates the associated IHTC calculations. Furthermore, there are limits on the maximum distance the thermocouples can be placed from the interface under analysis. An inverse conduction-based method has been developed, verified and applied successfully to temperature data collected from within an aluminum casting in proximity to the mold. A modified specific heat method was used to account for latent heat evolution, in which the rate of change of fraction solid with temperature was held constant. An analysis conducted with the inverse model suggests that the thermocouples must be placed no more than 2 mm from the interface. The IHTC values calculated for an aluminum alloy casting were shown to vary from 1,200 to 6,200 W m-2 K-1. Additionally, the characteristics of the time-varying IHTC are also discussed.
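
    A minimal sketch of this kind of calculation is given below, under strong assumptions: a 1-D explicit finite-difference casting model, a constant (rather than time-varying) IHTC, illustrative aluminum properties, and the modified specific heat treatment of latent heat with a constant rate of change of fraction solid with temperature. It is not the authors' model, only an illustration of fitting an IHTC to a near-interface temperature history.

      import numpy as np
      from scipy.optimize import minimize_scalar

      # Illustrative 1-D explicit finite-difference model of an aluminum casting
      # cooled through a mold interface with a constant heat transfer coefficient h
      # (the IHTC). Latent heat uses the modified specific heat method: fraction
      # solid is assumed to vary linearly with temperature between liquidus and
      # solidus, so cp is augmented by L/(T_liq - T_sol) inside the mushy zone.
      k, rho, cp, L = 180.0, 2700.0, 900.0, 3.9e5      # W/m/K, kg/m3, J/kg/K, J/kg
      T_liq, T_sol, T_mold, T0 = 650.0, 600.0, 100.0, 700.0
      nx, dx, dt, nsteps = 20, 1e-3, 4e-3, 7500        # 20 mm domain, 30 s of cooling

      def simulate(h):
          T = np.full(nx, T0)
          rec = []
          for _ in range(nsteps):
              cp_eff = np.where((T < T_liq) & (T > T_sol), cp + L/(T_liq - T_sol), cp)
              alpha = k / (rho * cp_eff)
              Tn = T.copy()
              Tn[1:-1] = T[1:-1] + alpha[1:-1]*dt/dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
              # interface cell: conduction from inside balanced against h*(T - T_mold)
              Tn[0] = T[0] + dt/(rho*cp_eff[0]*dx) * (k*(T[1] - T[0])/dx - h*(T[0] - T_mold))
              Tn[-1] = Tn[-2]                          # insulated far end
              T = Tn
              rec.append(T[2])                         # "thermocouple" 2 mm from the interface
          return np.array(rec)

      T_meas = simulate(3000.0) + np.random.default_rng(0).normal(0, 0.5, nsteps)
      fit = minimize_scalar(lambda h: np.sum((simulate(h) - T_meas)**2),
                            bounds=(500.0, 8000.0), method="bounded", options={"xatol": 1.0})
      print(fit.x)   # recovered IHTC, close to the 3000 W/m2/K used to generate the data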

  14. The inverse resonance problem for CMV operators

    NASA Astrophysics Data System (ADS)

    Weikard, Rudi; Zinchenko, Maxim

    2010-05-01

    We consider the class of CMV operators with super-exponentially decaying Verblunsky coefficients. For these we define the concept of a resonance. Then we prove the existence of Jost solutions and a uniqueness theorem for the inverse resonance problem: given the location of all resonances, taking multiplicities into account, the Verblunsky coefficients are uniquely determined.

  15. An inverse problem for a semilinear parabolic equation arising from cardiac electrophysiology

    NASA Astrophysics Data System (ADS)

    Beretta, Elena; Cavaterra, Cecilia; Cerutti, M. Cristina; Manzoni, Andrea; Ratti, Luca

    2017-10-01

    In this paper we develop theoretical analysis and numerical reconstruction techniques for the solution of an inverse boundary value problem dealing with the nonlinear, time-dependent monodomain equation, which models the evolution of the electric potential in the myocardial tissue. The goal is the detection of an inhomogeneity ...

  16. Calculation of gravity and magnetic anomalies along profiles with end corrections and inverse solutions for density and magnetization

    USGS Publications Warehouse

    Cady, John W.

    1977-01-01

    A computer program is presented which, for one or more bodies along a profile perpendicular to strike, performs both forward calculations of the magnetic and gravity anomaly fields and independent gravity and magnetic inverse calculations for density and susceptibility or remanent magnetization.

  17. A posteriori error estimates in voice source recovery

    NASA Astrophysics Data System (ADS)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of recovering the voice source pulse from a segment of a speech signal is considered. A special mathematical model relating these quantities is used for the solution. A variational method for solving the inverse problem of voice source recovery is proposed for a new parametric class of sources, namely piecewise-linear sources (PWL-sources). A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments with speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds of the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that a posteriori error estimates can be used as a quality criterion for the obtained voice source pulses in application to speaker recognition.

  18. On computational experiments in some inverse problems of heat and mass transfer

    NASA Astrophysics Data System (ADS)

    Bilchenko, G. G.; Bilchenko, N. G.

    2016-11-01

    The results of mathematical modeling of effective heat and mass transfer on the permeable surfaces of hypersonic aircraft are considered. The physico-chemical processes (dissociation and ionization) in the laminar boundary layer of a compressible gas are taken into account. Algorithms for control restoration are suggested for the interpolation and approximation statements of the heat and mass transfer inverse problems, and the differences between the solution methods applied to these statements are discussed. Both algorithms are implemented as programs, and numerous computational experiments were performed with them. The boundary-layer parameters obtained from solving the direct problems by means of A. A. Dorodnicyn's generalized integral relations method were used to obtain the inverse problem solutions. Two examples of blowing-law restoration for the inverse problem in the interpolation statement are presented. The influence of the temperature factor on the blowing restoration is investigated. The different sensitivity of the controllable parameters (the local heat flow and the local tangential friction) to a stepwise (discrete) change of the control (the blowing) and to the position of the switching point is studied.

  19. Waveform inversion for 3-D earth structure using the Direct Solution Method implemented on vector-parallel supercomputer

    NASA Astrophysics Data System (ADS)

    Hara, Tatsuhiko

    2004-08-01

    We implement the Direct Solution Method (DSM) on a vector-parallel supercomputer and show that it is possible to significantly improve its computational efficiency through parallel computing. We apply the parallel DSM calculation to waveform inversion of long period (250-500 s) surface wave data for three-dimensional (3-D) S-wave velocity structure in the upper and uppermost lower mantle. We use a spherical harmonic expansion to represent lateral variation up to maximum angular degree 16. We find significant low velocities under South Pacific hot spots in the transition zone. This is consistent with other seismological studies conducted in the Superplume project, which suggests deep roots of these hot spots. We also perform simultaneous waveform inversion for 3-D S-wave velocity and Q structure. Since the resolution for Q is not good, we develop a new technique in which power spectra are used as data for inversion. We find good correlation between long-wavelength patterns of Vs and Q in the transition zone, such as high Vs and high Q under the western Pacific.

  20. Spatial operator factorization and inversion of the manipulator mass matrix

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo; Kreutz-Delgado, Kenneth

    1992-01-01

    This paper advances two linear operator factorizations of the manipulator mass matrix. Embedded in the factorizations are many of the techniques that are regarded as very efficient computational solutions to inverse and forward dynamics problems. The operator factorizations provide a high-level architectural understanding of the mass matrix and its inverse, which is not visible in the detailed algorithms. They also lead to a new approach to the development of computer programs that organize complexity in robot dynamics.

  1. Identification of Scattering Mechanisms from Measured Impulse Response Signatures of Several Conducting Objects.

    DTIC Science & Technology

    1984-02-01

    conducting sphere compared to inverse transform of exact solution. ... Measured impulse response of a conducting 2:1 right circular cylinder with...frequency domain. This is equivalent to multiplication in the time domain by the inverse transform of w(n), which is shown in Figure 3-1 for N=15. The...equivalent pulse width from 0.066 T for the rectangular window to 0.10 T for the Hanning window. The inverse transform of the Hanning window is shown

  2. Saddlepoint Approximations in Conditional Inference

    DTIC Science & Technology

    1990-06-11

    Then the inverse transform can be written as (X, Y) = (T, q(T, Z)) for some function q. When the transform is not one to one, the domain should be...general regularity conditions described at the beginning of this section hold and that the solution t1 in (9) exists. Denote the inverse transform by (X, Y...density hn(t0 | z) are desired. Then the inverse transform (Y, ) = (T, q(T, Z)) exists and the variable v in the cumulant generating function K(u, v

  3. Inverse kinematics problem in robotics using neural networks

    NASA Technical Reports Server (NTRS)

    Choi, Benjamin B.; Lawrence, Charles

    1992-01-01

    In this paper, Multilayer Feedforward Networks are applied to the robot inverse kinematics problem. The networks are trained with end-effector positions and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary end-effector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way to both model the manipulator inverse kinematics and circumvent the problems associated with algorithmic solution methods.
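
    As a simplified illustration of the approach (the study used a 3-DOF spatial manipulator; the sketch below substitutes a planar 2-link arm and scikit-learn's MLPRegressor), a feedforward network can be trained on forward-kinematics samples and then queried for joint angles:

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Simplified stand-in: a planar 2-link arm with joint ranges restricted so
      # that the inverse mapping is single-valued. Training pairs come from the
      # forward kinematics; the network maps end-effector positions to joint angles.
      l1, l2 = 1.0, 0.8
      rng = np.random.default_rng(0)
      theta = np.column_stack([rng.uniform(0.0, np.pi/2, 5000),
                               rng.uniform(0.1, np.pi/2, 5000)])
      xy = np.column_stack([l1*np.cos(theta[:, 0]) + l2*np.cos(theta[:, 0] + theta[:, 1]),
                            l1*np.sin(theta[:, 0]) + l2*np.sin(theta[:, 0] + theta[:, 1])])

      net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
      net.fit(xy, theta)                      # inputs: positions, targets: joint angles

      # Check: feed the predicted angles through the forward kinematics again.
      target = np.array([[1.2, 0.9]])
      th = net.predict(target)[0]
      reached = [l1*np.cos(th[0]) + l2*np.cos(th[0] + th[1]),
                 l1*np.sin(th[0]) + l2*np.sin(th[0] + th[1])]
      print(target[0], reached)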

  4. Modeling and inversion Matlab algorithms for resistivity, induced polarization and seismic data

    NASA Astrophysics Data System (ADS)

    Karaoulis, M.; Revil, A.; Minsley, B. J.; Werkema, D. D.

    2011-12-01

    We propose a 2D and 3D forward modeling and inversion package for DC resistivity, time-domain induced polarization (IP), frequency-domain IP, and seismic refraction data. For the resistivity and IP cases, discretization is based on rectangular cells, where each cell has an unknown resistivity for DC modeling, a resistivity and a chargeability for time-domain IP modeling, and a complex resistivity for spectral IP modeling. The governing partial-differential equations are solved with the finite element method, which can be applied to both the real and complex variables that are solved for. For the seismic case, forward modeling is based on solving the eikonal equation using a second-order fast marching method. The wavepaths are materialized by Fresnel volumes rather than by conventional rays. This approach accounts for complicated velocity models and is advantageous because it considers frequency effects on the velocity resolution. The inversion can accommodate data at a single time step, or as a time-lapse dataset if the geophysical data are gathered for monitoring purposes. The aim of time-lapse inversion is to find the change in the velocities or resistivities of each model cell as a function of time. Different time-lapse algorithms can be applied, such as independent inversion, difference inversion, 4D inversion, and 4D active time-constrained inversion. The forward algorithms are benchmarked against analytical solutions and the inversion results are compared with existing ones. The algorithms are packaged as Matlab codes with a simple graphical user interface. Although the code is parallelized for multi-core CPUs, it is not as fast as machine code; for large datasets, one should consider transferring parts of the code to C or Fortran through mex files. The code is available through EPA's website at the following link: http://www.epa.gov/esd/cmb/GeophysicsWebsite/index.html Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy.

  5. Full-wave Nonlinear Inverse Scattering for Acoustic and Electromagnetic Breast Imaging

    NASA Astrophysics Data System (ADS)

    Haynes, Mark Spencer

    Acoustic and electromagnetic full-wave nonlinear inverse scattering techniques are explored in both theory and experiment with the ultimate aim of noninvasively mapping the material properties of the breast. There is evidence that benign and malignant breast tissue have different acoustic and electrical properties and imaging these properties directly could provide higher quality images with better diagnostic certainty. In this dissertation, acoustic and electromagnetic inverse scattering algorithms are first developed and validated in simulation. The forward solvers and optimization cost functions are modified from traditional forms in order to handle the large or lossy imaging scenes present in ultrasonic and microwave breast imaging. An antenna model is then presented, modified, and experimentally validated for microwave S-parameter measurements. Using the antenna model, a new electromagnetic volume integral equation is derived in order to link the material properties of the inverse scattering algorithms to microwave S-parameters measurements allowing direct comparison of model predictions and measurements in the imaging algorithms. This volume integral equation is validated with several experiments and used as the basis of a free-space inverse scattering experiment, where images of the dielectric properties of plastic objects are formed without the use of calibration targets. These efforts are used as the foundation of a solution and formulation for the numerical characterization of a microwave near-field cavity-based breast imaging system. The system is constructed and imaging results of simple targets are given. Finally, the same techniques are used to explore a new self-characterization method for commercial ultrasound probes. The method is used to calibrate an ultrasound inverse scattering experiment and imaging results of simple targets are presented. This work has demonstrated the feasibility of quantitative microwave inverse scattering by way of a self-consistent characterization formalism, and has made headway in the same area for ultrasound.

  6. A new art code for tomographic interferometry

    NASA Technical Reports Server (NTRS)

    Tan, H.; Modarress, D.

    1987-01-01

    A new algebraic reconstruction technique (ART) code based on the iterative refinement method of least squares solution for tomographic reconstruction is presented. The accuracy and convergence of the technique are evaluated through the application of numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to that of other reported techniques. The iterative method unconditionally converged to a solution for which the residual was a minimum. The effects of increased amounts of data were studied. The inversion error was found to be a function of the input data error only. The convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
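
    For orientation, a generic ART (Kaczmarz-type) reconstruction loop is sketched below; it is not the iterative-refinement least-squares variant of the paper, and the path-length matrix is a random stand-in for actual interferometric projection geometry.

      import numpy as np

      def art_reconstruct(A, b, n_sweeps=50, relax=0.5):
          # Classical ART (Kaczmarz) sweeps: project the current estimate onto
          # the hyperplane of each ray equation in turn, with relaxation.
          x = np.zeros(A.shape[1])
          row_norm2 = np.einsum('ij,ij->i', A, A)
          for _ in range(n_sweeps):
              for i in range(A.shape[0]):
                  if row_norm2[i] > 0:
                      x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
          return x

      # Tiny synthetic test: a random "path-length" matrix and a known field.
      rng = np.random.default_rng(0)
      A = rng.random((120, 64))
      x_true = rng.random(64)
      x_hat = art_reconstruct(A, A @ x_true)
      print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))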

  7. 3-D linear inversion of gravity data: method and application to Basse-Terre volcanic island, Guadeloupe, Lesser Antilles

    NASA Astrophysics Data System (ADS)

    Barnoud, Anne; Coutant, Olivier; Bouligand, Claire; Gunawan, Hendra; Deroussi, Sébastien

    2016-04-01

    We use a Bayesian formalism combined with a grid node discretization for the linear inversion of gravimetric data in terms of 3-D density distribution. The forward modelling and the inversion method are derived from seismological inversion techniques in order to facilitate joint inversion or interpretation of density and seismic velocity models. The Bayesian formulation introduces covariance matrices on model parameters to regularize the ill-posed problem and reduce the non-uniqueness of the solution. This formalism favours smooth solutions and allows us to specify a spatial correlation length and to perform inversions at multiple scales. We also extract resolution parameters from the resolution matrix to discuss how well our density models are resolved. This method is applied to the inversion of data from the volcanic island of Basse-Terre in Guadeloupe, Lesser Antilles. A series of synthetic tests are performed to investigate advantages and limitations of the methodology in this context. This study results in the first 3-D density models of the island of Basse-Terre for which we identify: (i) a southward decrease of densities parallel to the migration of volcanic activity within the island, (ii) three dense anomalies beneath Petite Plaine Valley, Beaugendre Valley and the Grande-Découverte-Carmichaël-Soufrière Complex that may reflect the trace of former major volcanic feeding systems, (iii) shallow low-density anomalies in the southern part of Basse-Terre, especially around La Soufrière active volcano, Piton de Bouillante edifice and along the western coast, reflecting the presence of hydrothermal systems and fractured and altered rocks.
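
    A minimal sketch of a linear Bayesian inversion with a correlation-length-controlled model covariance, of the kind described above, is given below; the 1-D geometry, exponential covariance and stand-in sensitivity kernel are illustrative assumptions, and the resolution matrix is extracted as the product of the gain and the forward operator.

      import numpy as np

      def bayesian_linear_inversion(G, d, m_prior, sigma_d, sigma_m, corr_len, x_cells):
          # Linear Bayesian inversion with an exponential model covariance whose
          # correlation length plays the role of the smoothing regularization.
          dist = np.abs(x_cells[:, None] - x_cells[None, :])
          Cm = sigma_m**2 * np.exp(-dist / corr_len)        # model covariance
          Cd = sigma_d**2 * np.eye(len(d))                  # data covariance
          K = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cd)   # gain
          m_post = m_prior + K @ (d - G @ m_prior)
          R = K @ G                                         # resolution matrix
          return m_post, R

      rng = np.random.default_rng(0)
      x_cells = np.linspace(0.0, 10.0, 50)                  # 1-D line of density cells
      G = rng.random((20, 50)) * 0.1                        # stand-in gravity kernel
      m_true = np.exp(-((x_cells - 4.0)/1.0)**2)            # one dense anomaly
      d = G @ m_true + rng.normal(0, 0.01, 20)
      m_post, R = bayesian_linear_inversion(G, d, np.zeros(50), 0.01, 0.5, 1.5, x_cells)
      print(np.round(np.diag(R)[:5], 2))                    # how well the first cells are resolved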

  8. The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook

    NASA Astrophysics Data System (ADS)

    Mai, P. M.

    2017-12-01

    Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams then use these validation exercises to test their codes and methods, but also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.

  9. Comparison of adjoint and analytical Bayesian inversion methods for constraining Asian sources of carbon monoxide using satellite (MOPITT) measurements of CO columns

    NASA Astrophysics Data System (ADS)

    Kopacz, Monika; Jacob, Daniel J.; Henze, Daven K.; Heald, Colette L.; Streets, David G.; Zhang, Qiang

    2009-02-01

    We apply the adjoint of an atmospheric chemical transport model (GEOS-Chem CTM) to constrain Asian sources of carbon monoxide (CO) with 2° × 2.5° spatial resolution using Measurement of Pollution in the Troposphere (MOPITT) satellite observations of CO columns in February-April 2001. Results are compared to the more common analytical method for solving the same Bayesian inverse problem and applied to the same data set. The analytical method is more exact but because of computational limitations it can only constrain emissions over coarse regions. We find that the correction factors to the a priori CO emission inventory from the adjoint inversion are generally consistent with those of the analytical inversion when averaged over the large regions of the latter. The adjoint solution reveals fine-scale variability (cities, political boundaries) that the analytical inversion cannot resolve, for example, in the Indian subcontinent or between Korea and Japan, and some of that variability is of opposite sign which points to large aggregation errors in the analytical solution. Upward correction factors to Chinese emissions from the prior inventory are largest in central and eastern China, consistent with a recent bottom-up revision of that inventory, although the revised inventory also sees the need for upward corrections in southern China where the adjoint and analytical inversions call for downward correction. Correction factors for biomass burning emissions derived from the adjoint and analytical inversions are consistent with a recent bottom-up inventory on the basis of MODIS satellite fire data.
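
    The analytical counterpart referred to above is the closed-form solution of the linear-Gaussian Bayesian problem; a minimal sketch with synthetic stand-ins for the CTM Jacobian and the error covariances is:

      import numpy as np

      # Closed-form linear-Gaussian Bayesian update for source scaling factors.
      # K, S_a and S_e are synthetic stand-ins for the model Jacobian and the
      # prior and observation error covariances; eight "coarse regions" mimic
      # the analytical inversion's low-dimensional state vector.
      rng = np.random.default_rng(0)
      n_obs, n_src = 300, 8
      K = rng.random((n_obs, n_src))                 # sensitivity of CO columns to sources
      x_true = rng.uniform(0.5, 2.0, n_src)
      y = K @ x_true + rng.normal(0, 0.05, n_obs)

      x_a = np.ones(n_src)                           # a priori scaling factors
      S_a = 0.5**2 * np.eye(n_src)
      S_e = 0.05**2 * np.eye(n_obs)

      S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
      x_hat = x_a + S_hat @ K.T @ np.linalg.inv(S_e) @ (y - K @ x_a)   # MAP estimate
      print(np.round(x_hat, 2), np.round(x_true, 2))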

  10. Hydrologic Process Regularization for Improved Geoelectrical Monitoring of a Lab-Scale Saline Tracer Experiment

    NASA Astrophysics Data System (ADS)

    Oware, E. K.; Moysey, S. M.

    2016-12-01

    Regularization stabilizes the geophysical imaging problem resulting from sparse and noisy measurements that render solutions unstable and non-unique. Conventional regularization constraints are, however, independent of the physics of the underlying process and often produce smoothed-out tomograms with mass underestimation. Cascaded time-lapse (CTL) is a widely used reconstruction technique for monitoring wherein a tomogram obtained from the background dataset is employed as the starting model for the inversion of subsequent time-lapse datasets. In contrast, a proper orthogonal decomposition (POD)-constrained inversion framework enforces physics-based regularization based upon prior understanding of the expected evolution of state variables. The physics-based constraints are represented in the form of POD basis vectors. The basis vectors are constructed from numerically generated training images (TIs) that mimic the desired process. The target can be reconstructed from a small number of selected basis vectors; hence, there is a reduction in the number of inversion parameters compared to the full-dimensional space. The inversion involves finding the optimal combination of the selected basis vectors conditioned on the geophysical measurements. We apply the algorithm to 2-D lab-scale saline transport experiments with electrical resistivity (ER) monitoring. We consider two transport scenarios with one and two mass injection points evolving into unimodal and bimodal plume morphologies, respectively. The unimodal plume is consistent with the assumptions underlying the generation of the TIs, whereas bimodality in plume morphology was not conceptualized. We compare difference tomograms retrieved from POD with those obtained from CTL. Qualitative comparisons of the difference tomograms with images of their corresponding dye plumes suggest that POD recovered more compact plumes than CTL. While mass recovery generally deteriorated with an increasing number of time steps, POD outperformed CTL in terms of mass recovery accuracy. POD is also computationally superior, requiring only 2.5 minutes to complete each inversion compared to 3 hours for CTL.
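
    A minimal sketch of the POD-constrained idea, under simplifying assumptions (1-D Gaussian "plumes" as training images, a random stand-in for the ER sensitivity operator, and a plain least-squares fit of the basis coefficients), is:

      import numpy as np

      # Build POD basis vectors from training images of simulated plumes, then
      # invert for a handful of basis coefficients instead of every pixel.
      rng = np.random.default_rng(0)
      nx, n_ti = 400, 200
      x = np.linspace(0, 1, nx)
      TIs = np.array([np.exp(-((x - c)/w)**2)                       # simulated saline plumes
                      for c, w in zip(rng.uniform(.2, .8, n_ti), rng.uniform(.05, .2, n_ti))])
      U, s, _ = np.linalg.svd(TIs.T, full_matrices=False)           # POD basis vectors
      Phi = U[:, :10]                                               # keep 10 of them

      F = rng.random((60, nx)) * 0.05                               # stand-in sensitivity operator
      m_true = np.exp(-((x - 0.55)/0.1)**2)                         # unimodal "true" plume
      d = F @ m_true + rng.normal(0, 0.01, 60)

      A = F @ Phi                                                   # forward operator in coefficient space
      coef, *_ = np.linalg.lstsq(A, d, rcond=None)                  # only 10 unknowns
      m_hat = Phi @ coef                                            # reconstructed plume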

  11. Convergence analysis of surrogate-based methods for Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Yan, Liang; Zhang, Yuan-Xiang

    2017-12-01

    The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations that are performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least two times faster in the KL sense. An error bound on the Hellinger distance is also provided. To provide concrete examples focusing on the use of surrogate-model-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate the Bayesian inference approach. The Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.

  12. Dynamic data integration and stochastic inversion of a confined aquifer

    NASA Astrophysics Data System (ADS)

    Wang, D.; Zhang, Y.; Irsa, J.; Huang, H.; Wang, L.

    2013-12-01

    Much work has been done in developing and applying inverse methods to aquifer modeling. The scope of this paper is to investigate the applicability of a new direct method for large inversion problems and to incorporate uncertainty measures in the inversion outcomes (Wang et al., 2013). The problem considered is a two-dimensional inverse model (50×50 grid) of steady-state flow for a heterogeneous ground truth model (500×500 grid) with two hydrofacies. From the ground truth model, a decreasing number of wells (12, 6, 3) was sampled for facies types, based on which experimental indicator histograms and directional variograms were computed. These parameters and models were used by Sequential Indicator Simulation to generate 100 realizations of hydrofacies patterns in a 100×100 (geostatistical) grid, which were conditioned to the facies measurements at wells. These realizations were smoothed with Simulated Annealing and coarsened to the 50×50 inverse grid, before they were conditioned with the direct method to the dynamic data, i.e., observed heads and groundwater fluxes at the same sampled wells. A set of realizations of estimated hydraulic conductivities (Ks), flow fields, and boundary conditions was created, which centered on the 'true' solutions from solving the ground truth model. Both hydrofacies conductivities were computed with an estimation accuracy of ±10% (12 wells), ±20% (6 wells), and ±35% (3 wells) of the true values. For boundary condition estimation, the accuracy was within ±15% (12 wells), ±30% (6 wells), and ±50% (3 wells) of the true values. The inversion system of equations was solved with LSQR (Paige and Saunders, 1982), for which a coordinate transform and a matrix scaling preprocessor were used to improve the condition number (CN) of the coefficient matrix. However, when the inverse grid was refined to 100×100, Gaussian noise perturbation was used to limit the growth of the CN before the matrix solve. To scale the inverse problem up (i.e., without smoothing and coarsening and therefore reducing the associated estimation uncertainty), a parallel LSQR solver was written and verified. For the 50×50 grid, the parallel solver sped up the serial solution time by 14X using 4 CPUs (research on parallel performance and scaling is ongoing). A sensitivity analysis was conducted to examine the relation between the observed data and the inversion outcomes, where measurement errors of increasing magnitudes (i.e., ±1, 2, 5, 10% of the total head variation and up to ±2% of the total flux variation) were imposed on the observed data. Inversion results were stable, but the accuracy of the Ks and boundary estimation degraded with increasing errors, as expected. In particular, the quality of the observed heads is critical to hydraulic head recovery, while the quality of the observed fluxes plays a dominant role in K estimation. References: Wang, D., Y. Zhang, J. Irsa, H. Huang, and L. Wang (2013), Data integration and stochastic inversion of a confined aquifer with high performance computing, Advances in Water Resources, in preparation. Paige, C. C., and M. A. Saunders (1982), LSQR: an algorithm for sparse linear equations and sparse least squares, ACM Transactions on Mathematical Software, 8(1), 43-71.
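
    The linear-algebra step described above can be sketched with SciPy's LSQR and a simple column-scaling preconditioner; the sparse system below is synthetic and stands in for the assembled inversion equations.

      import numpy as np
      from scipy.sparse import random as sprandom, diags, eye
      from scipy.sparse.linalg import lsqr

      # Synthetic sparse system solved with LSQR after a simple column-scaling
      # preconditioning step to improve the condition number.
      rng = np.random.default_rng(0)
      A = eye(2500, format="csr") + 0.1 * sprandom(2500, 2500, density=0.002,
                                                   random_state=0, format="csr")
      x_true = rng.standard_normal(2500)
      b = A @ x_true

      col_norms = np.sqrt(np.asarray(A.power(2).sum(axis=0)).ravel())
      D = diags(1.0 / np.maximum(col_norms, 1e-12))         # column scaling
      y, istop, itn, r1norm = lsqr(A @ D, b, atol=1e-10, btol=1e-10, iter_lim=5000)[:4]
      x_hat = D @ y                                          # undo the scaling
      print(itn, np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))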

  13. Improve earthquake hypocenter using adaptive simulated annealing inversion in regional tectonic, volcano tectonic, and geothermal observation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ry, Rexha Verdhora, E-mail: rexha.vry@gmail.com; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id

    Observation of earthquakes is routinely and widely used in tectonic activity monitoring, and also at local scales such as volcano-tectonic and geothermal activity observation. Determining precise hypocenter locations is necessary; the process involves finding a hypocenter location that minimizes the error between the observed and the calculated travel times. When solving this nonlinear inverse problem, the simulated annealing inversion method can be applied as a global optimization technique whose convergence is independent of the initial model. In this study, we developed our own program code by applying adaptive simulated annealing inversion in the Matlab environment. We applied this method to determine earthquake hypocenters using several data sets from regional tectonic, volcano-tectonic, and geothermal settings. The travel times were calculated using a ray-tracing shooting method. We then compared the results with those from Geiger's method to analyze reliability. Our results show that the hypocenter locations have smaller RMS errors than Geiger's results, which can be statistically associated with better solutions. The earthquake hypocenters also correlate well with geological structures in the study areas. We recommend using adaptive simulated annealing inversion to relocate hypocenters in order to obtain precise and accurate earthquake locations.
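
    For illustration only, the sketch below locates a hypocenter by annealing-type global optimization in a homogeneous half-space with straight rays, using SciPy's dual_annealing as a stand-in for the authors' adaptive simulated annealing code and shooting ray tracer.

      import numpy as np
      from scipy.optimize import dual_annealing

      # Homogeneous half-space (straight rays, constant velocity) forward model.
      # Unknowns: epicentral x, y, depth z (km) and origin time (s).
      v = 6.0                                                   # km/s, assumed constant
      stations = np.array([[0, 0], [30, 5], [10, 40], [45, 25], [20, 15]], float)
      src_true = np.array([22.0, 18.0, 8.0, 0.5])

      def travel_times(src):
          x, y, z, t0 = src
          d = np.sqrt((stations[:, 0] - x)**2 + (stations[:, 1] - y)**2 + z**2)
          return t0 + d / v

      t_obs = travel_times(src_true) + np.random.default_rng(0).normal(0, 0.02, len(stations))
      misfit = lambda src: np.sqrt(np.mean((travel_times(src) - t_obs)**2))   # RMS residual

      bounds = [(0, 50), (0, 50), (0, 30), (-5, 5)]
      result = dual_annealing(misfit, bounds, seed=0)
      print(result.x, result.fun)            # recovered hypocenter and its RMS misfit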

  14. Vortex breakdown simulation

    NASA Technical Reports Server (NTRS)

    Hafez, M.; Ahmad, J.; Kuruvila, G.; Salas, M. D.

    1987-01-01

    In this paper, steady, axisymmetric inviscid, and viscous (laminar) swirling flows representing vortex breakdown phenomena are simulated using a stream function-vorticity-circulation formulation and two numerical methods. The first is based on an inverse iteration, where a norm of the solution is prescribed and the swirling parameter is calculated as a part of the output. The second is based on direct Newton iterations, where the linearized equations, for all the unknowns, are solved simultaneously by an efficient banded Gaussian elimination procedure. Several numerical solutions for inviscid and viscous flows are demonstrated, followed by a discussion of the results. Some improvements on previous work have been achieved: first order upwind differences are replaced by second order schemes, line relaxation procedure (with linear convergence rate) is replaced by Newton's iterations (which converge quadratically), and Reynolds numbers are extended from 200 up to 1000.

  15. Optimization Under Uncertainty for Electronics Cooling Design

    NASA Astrophysics Data System (ADS)

    Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.

    Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that would result in satisfactory thermal solutions. Apart from providing basic statistical information such as mean and standard deviation in the output quantities, auxiliary data from an uncertainty based optimization, such as local and global sensitivities, help the designer decide the input parameter(s) to which the output quantity of interest is most sensitive. This helps the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...

  16. Analytical inversions in remote sensing of particle size distributions. IV - Comparison of Fymat and Box-McKellar solutions in the anomalous diffraction approximation

    NASA Technical Reports Server (NTRS)

    Fymat, A. L.; Smith, C. B.

    1979-01-01

    It is shown that the inverse analytical solutions, provided separately by Fymat and Box-McKellar, for reconstructing particle size distributions from remote spectral transmission measurements under the anomalous diffraction approximation can be derived using a cosine and a sine transform, respectively. Sufficient conditions of validity of the two formulas are established. Their comparison shows that the former solution is preferable to the latter in that it requires less a priori information (knowledge of the particle number density is not needed) and has wider applicability. For gamma-type distributions, and either a real or a complex refractive index, explicit expressions are provided for retrieving the distribution parameters; such expressions are, interestingly, proportional to the geometric area of the polydispersion.

  17. Inverse gas chromatographic determination of solubility parameters of excipients.

    PubMed

    Adamska, Katarzyna; Voelkel, Adam

    2005-11-04

    The principle aim of this work was an application of inverse gas chromatography (IGC) for the estimation of solubility parameter for pharmaceutical excipients. The retention data of number of test solutes were used to calculate Flory-Huggins interaction parameter (chi1,2infinity) and than solubility parameter (delta2), corrected solubility parameter (deltaT) and its components (deltad, deltap, deltah) by using different procedures. The influence of different values of test solutes solubility parameter (delta1) over calculated values was estimated. The solubility parameter values obtained for all excipients from the slope, from Guillet and co-workers' procedure are higher than that obtained from components according Voelkel and Janas procedure. It was found that solubility parameter's value of the test solutes influences, but not significantly, values of solubility parameter of excipients.

  18. Reducing the uncertainty in the fidelity of seismic imaging results

    NASA Astrophysics Data System (ADS)

    Zhou, H. W.; Zou, Z.

    2017-12-01

    A key aspect in geoscientific inversion is quantifying the quality of the results. In seismic imaging, we must quantify the uncertainty of every imaging result based on field data, because data noise and methodology limitations may produce artifacts. Detection of artifacts is therefore an important aspect of uncertainty quantification in geoscientific inversion. Quantifying the uncertainty of seismic imaging solutions means assessing their fidelity, which defines the truthfulness of the imaged targets in terms of their resolution, position errors and artifacts. Key challenges to achieving the fidelity of seismic imaging include: (1) difficulty in telling signal from artifact and noise; (2) limitations in signal-to-noise ratio and seismic illumination; and (3) the multi-scale nature of the data space and model space. Most seismic imaging studies of the Earth's crust and mantle have employed inversion or modeling approaches. Though they work in opposite directions of mapping between the data space and model space, both inversion and modeling seek the best model to minimize the misfit in the data space, which unfortunately is not the output space. The fact that the selection and uncertainty of the output model are not judged in the output space has exacerbated the nonuniqueness problem for inversion and modeling. In contrast, the practice in exploration seismology has long established a two-fold approach to seismic imaging: using velocity model building to establish the long-wavelength reference velocity models, and using seismic migration to map the short-wavelength reflectivity structures. Most interestingly, seismic migration maps the data into an output space called the imaging space, where the output reflection images of the subsurface are formed based on an imaging condition. A good example is reverse time migration, which seeks the reflectivity image as the best fit in the image space between the extrapolation of time-reversed waveform data and a prediction based on the estimated velocity model and source parameters. I will illustrate the benefits of deciding the best output result in the output space for inversion, using examples from seismic imaging.

  19. Automated rapid finite fault inversion for megathrust earthquakes: Application to the Maule (2010), Iquique (2014) and Illapel (2015) great earthquakes

    NASA Astrophysics Data System (ADS)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2016-04-01

    Rapid estimation of the spatial and temporal rupture characteristics of large megathrust earthquakes by finite fault inversion is important for disaster mitigation. For example, estimates of the spatio-temporal evolution of rupture can be used to evaluate population exposure to tsunami waves and ground shaking soon after the event by providing more accurate predictions than possible with point source approximations. In addition, rapid inversion results can reveal seismic source complexity to guide additional, more detailed subsequent studies. This work develops a method to rapidly estimate the slip distribution of megathrust events while reducing subjective parameter choices by automation. The method is simple yet robust, and we show that it provides excellent preliminary rupture models within as little as 30 minutes for three great earthquakes in the South American subduction zone. This timing may change slightly for other regions depending on seismic station coverage, but the method can be applied to any subduction region. The inversion is based on W-phase data, since it is rapidly and widely available and of low amplitude, which avoids clipping at close stations for large events. In addition, prior knowledge of the slab geometry (e.g. SLAB 1.0) is applied, and rapid W-phase point source information (time delay and centroid location) is used to constrain the fault geometry and extent. Since the linearization by the multiple time window (MTW) parametrization requires regularization, objective smoothing is achieved by the discrepancy principle in two fully automated steps. First, the residuals are estimated assuming unknown noise levels, and second, a subsequent solution is sought which fits the data to the noise level. The MTW scheme is applied with positivity constraints and a solution is obtained by an efficient non-negative least squares solver. Systematic application of the algorithm to the Maule (2010), Iquique (2014) and Illapel (2015) events illustrates that rapid finite fault inversion with teleseismic data is feasible and provides meaningful results. The results for the three events show excellent data fits and are consistent with other solutions, showing most of the slip occurring close to the trench for the Maule and Illapel events and some deeper slip for the Iquique event. Importantly, the Illapel source model predicts tsunami waveforms in close agreement with observed waveforms. Finally, we develop a new Bayesian approach to approximate uncertainties as part of the rapid inversion scheme with positivity constraints. Uncertainties are estimated by approximating the posterior distribution as a multivariate log-normal distribution. While solving for the posterior adds some additional computational cost, we illustrate that uncertainty estimation is important for meaningful interpretation of finite fault models.
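
    A minimal sketch of the regularized, positivity-constrained step is given below: slip is obtained from an augmented non-negative least-squares system, and the smoothing weight is increased until the data misfit reaches an assumed noise level (a simple reading of the discrepancy principle). The Green's function matrix, Laplacian smoother and noise level are synthetic stand-ins.

      import numpy as np
      from scipy.optimize import nnls

      # Positivity-constrained, smoothed slip estimate with a discrepancy-principle
      # loop over the regularization weight lam.
      rng = np.random.default_rng(0)
      n_data, n_slip = 120, 30
      G = rng.random((n_data, n_slip))                                    # stand-in Green's functions
      slip_true = np.maximum(np.sin(np.linspace(0, np.pi, n_slip)), 0)    # non-negative slip
      noise = 0.05
      d = G @ slip_true + rng.normal(0, noise, n_data)

      L = -2*np.eye(n_slip) + np.eye(n_slip, k=1) + np.eye(n_slip, k=-1)  # 1-D Laplacian
      target_misfit = noise * np.sqrt(n_data)

      for lam in np.logspace(-3, 2, 30):
          A = np.vstack([G, np.sqrt(lam) * L])
          b = np.concatenate([d, np.zeros(n_slip)])
          slip, _ = nnls(A, b)
          if np.linalg.norm(G @ slip - d) >= target_misfit:               # fits data to noise level
              break
      print(lam, np.round(slip, 2))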

  20. Efficient calculation of full waveform time domain inversion for electromagnetic problem using fictitious wave domain method and cascade decimation decomposition

    NASA Astrophysics Data System (ADS)

    Imamura, N.; Schultz, A.

    2016-12-01

    Recently, a full waveform time domain inverse solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion to solve simultaneously for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations, the ability to operate in areas of high levels of source signal spatial complexity, and non-stationarity. This goal would not be obtainable if one were to adopt a pure time domain solution for the inverse problem. This is particularly true for MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across a large frequency bandwidth. For the forward simulation, the smallest time steps should be finer than those required to represent the highest frequency, while the number of time steps should also cover the lowest frequency. This leads to a sensitivity matrix for which computing a model update is computationally burdensome. We have implemented a code that addresses this situation through the use of cascade decimation decomposition, a quasi-equivalent time-domain decomposition that substantially reduces the size of the sensitivity matrix. We also use a fictitious wave domain method to speed up the computation time of the forward simulation in the time domain. By combining these refinements, we have developed a full waveform joint source field/earth conductivity inverse modeling method. We found that cascade decimation speeds computation of the sensitivity matrices dramatically, keeping the solution close to that of the undecimated case. For example, for a model discretized into 2.6×10^5 cells, we obtain model updates in less than 1 hour on a 4U rack-mounted workgroup Linux server, which is a practical computational time for the inverse problem.
