DOE Office of Scientific and Technical Information (OSTI.GOV)
Haber, Eldad
2014-03-17
The focus of the research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at zero frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results were also applied to the problem of image registration.
Inverse problems in quantum chemistry
NASA Astrophysics Data System (ADS)
Karwowski, Jacek
Inverse problems constitute a branch of applied mathematics with well-developed methodology and formalism. A broad family of tasks met in theoretical physics, in civil and mechanical engineering, as well as in various branches of medical and biological sciences has been formulated as specific implementations of the general theory of inverse problems. In this article, it is pointed out that a number of approaches met in quantum chemistry can (and should) be classified as inverse problems. Consequently, the methodology used in these approaches may be enriched by applying ideas and theorems developed within the general field of inverse problems. Several examples, including the RKR method for the construction of potential energy curves, determining parameter values in semiempirical methods, and finding external potentials for which the pertinent Schrödinger equation is exactly solvable, are discussed in detail.
Numerical methods for the inverse problem of density functional theory
Jensen, Daniel S.; Wasserman, Adam
2017-07-17
Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.
Bayesian Inference in Satellite Gravity Inversion
NASA Technical Reports Server (NTRS)
Kis, K. I.; Taylor, Patrick T.; Wittmann, G.; Kim, Hyung Rae; Torony, B.; Mayer-Guerr, T.
2005-01-01
To solve a geophysical inverse problem means applying measurements to determine the parameters of the selected model. The inverse problem is formulated as Bayesian inference, with Gaussian probability density functions applied in Bayes' equation. The CHAMP satellite gravity data are determined at an altitude of 400 km over the southern part of the Pannonian basin. The interpretation model is a right vertical cylinder, and its parameters are obtained from the resulting minimization problem, solved by the simplex method.
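As a minimal sketch of this workflow (with an invented forward model and made-up numbers, not those of the paper), a Gaussian likelihood and Gaussian prior combine into a negative log posterior whose minimizer is the MAP estimate; a crude 1-D grid search stands in for the simplex (Nelder-Mead) method here:

```python
import math

# Toy 1-parameter Bayesian inversion: Gaussian likelihood and prior, so the
# posterior maximum minimizes a sum of data-misfit and prior-misfit terms.
def forward(radius):
    # Hypothetical stand-in for the gravity anomaly of a vertical cylinder;
    # not the actual cylinder formula used in the paper.
    return 0.5 * radius ** 2

d_obs, sigma_d = 8.0, 0.5     # observed anomaly and its standard deviation
m_prior, sigma_m = 3.5, 1.0   # prior mean and standard deviation

def neg_log_posterior(m):
    return ((d_obs - forward(m)) / sigma_d) ** 2 / 2.0 + \
           ((m - m_prior) / sigma_m) ** 2 / 2.0

# A crude grid search stands in for the simplex (Nelder-Mead) minimization.
map_radius = min([i * 0.001 for i in range(1, 8000)], key=neg_log_posterior)
```

With these numbers the data alone would give a radius of 4.0; the prior pulls the MAP estimate slightly below that.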
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.
2005-01-01
This paper is the second of a set of two papers in which we study the inverse refraction problem. The first paper, "Types of Geophysical Nonuniqueness through Minimization," studies and classifies the types of nonuniqueness that arise when solving inverse problems, according to the amount of a priori information required to obtain reliable solutions. In view of that classification, in this paper we study the type of nonuniqueness associated with the inverse refraction problem. An approach for obtaining a realistic solution to the inverse refraction problem is offered in a third paper that is in preparation. Like many other inverse geophysical problems, the inverse refraction problem does not have a unique solution. Conventionally, nonuniqueness is considered to be a result of insufficient data and/or error in the data, for any fixed number of model parameters. This study illustrates that even for overdetermined and error-free data, nonlinear inverse refraction problems exhibit exact-data nonuniqueness, which further complicates the problem. By evaluating the nonuniqueness of the inverse refraction problem, this paper targets the improvement of refraction inversion algorithms and, as a result, the achievement of more realistic solutions. The nonuniqueness of the inverse refraction problem is examined initially by using a simple three-layer model. The observations and conclusions of the three-layer nonuniqueness study are then used to evaluate the nonuniqueness of more complicated n-layer models and multi-parameter cell models such as those in refraction tomography. For any fixed number of model parameters, the inverse refraction problem exhibits continuous ranges of exact-data nonuniqueness. Such an unfavorable type of nonuniqueness can be resolved uniquely only by providing abundant a priori information.
Insufficient a priori information during the inversion is the reason why refraction methods often may not produce the desired results, or even fail. This work also demonstrates that the application of smoothing constraints, typical when solving ill-posed inverse problems, has a dual and contradictory role when applied to the ill-posed inverse problem of refraction travel times. This observation indicates that smoothing constraints may play such a two-fold role when applied to other inverse problems. Other factors that contribute to inverse-refraction-problem nonuniqueness are also considered, including indeterminacy, statistical data-error distribution, numerical error and instability, finite data, and model parameters. © Birkhäuser Verlag, Basel, 2005.
An inverse problem in thermal imaging
NASA Technical Reports Server (NTRS)
Bryan, Kurt; Caudill, Lester F., Jr.
1994-01-01
This paper examines uniqueness and stability results for an inverse problem in thermal imaging. The goal is to identify an unknown boundary of an object by applying a heat flux and measuring the induced temperature on the boundary of the sample. The problem is studied both in the case in which one has data at every point on the boundary of the region and the case in which only finitely many measurements are available. An inversion procedure is developed and used to study the stability of the inverse problem for various experimental configurations.
Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J
2015-04-01
The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure; the physics-based model considered here is the state-of-the-art ADCIRC model, although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented, and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest, based on maximum water elevations, to use for the inverse problem for the Manning's n parameter, and the effect on model predictions is analyzed.
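The measure-theoretic update behind such stochastic inverse problems can be sketched in one parameter: push prior samples through the quantity-of-interest map, estimate the pushforward density, and re-weight the prior by the ratio of the observed density to the pushforward density. The quadratic map and densities below are invented stand-ins for the Manning's n field and maximum water elevations:

```python
import random

random.seed(0)

# "Prior" samples of a scalar stand-in for the Manning's n parameter.
samples = [random.uniform(0.0, 1.0) for _ in range(20000)]
Q = [s ** 2 for s in samples]   # hypothetical quantity-of-interest map

# Observed density on Q: uniform on [0.25, 0.64] (i.e. parameters in [0.5, 0.8]).
def obs_density(q):
    return 1.0 / (0.64 - 0.25) if 0.25 <= q <= 0.64 else 0.0

# Histogram estimate of the pushforward density of Q under the prior.
nbins = 50
counts = [0] * nbins
for q in Q:
    counts[min(int(q * nbins), nbins - 1)] += 1

def pushforward_density(q):
    return counts[min(int(q * nbins), nbins - 1)] * nbins / len(Q)

# Re-weight prior samples: updated density = prior * observed / pushforward.
w = [obs_density(q) / pushforward_density(q) for q in Q]
updated_mean = sum(wi * s for wi, s in zip(w, samples)) / sum(w)
```

For this toy problem the exact updated density is proportional to the parameter on [0.5, 0.8], giving an updated mean of about 0.66.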
Computational inverse methods of heat source in fatigue damage problems
NASA Astrophysics Data System (ADS)
Chen, Aizhou; Li, Yuan; Yan, Bo
2018-04-01
Fatigue dissipation energy is a current research focus in the field of fatigue damage. Introducing inverse heat-source methods into the parameter identification of fatigue dissipation energy models offers a new way to approach the calculation of fatigue dissipation energy. This paper reviews research advances in computational inverse methods for heat sources and in regularization techniques for solving the inverse problem, as well as existing methods for determining the heat source during the fatigue process; it then discusses the prospects for applying inverse heat-source methods in the fatigue damage field, laying a foundation for further improving the effectiveness of rapid prediction of fatigue dissipation energy.
Inverse kinematics of a dual linear actuator pitch/roll heliostat
NASA Astrophysics Data System (ADS)
Freeman, Joshua; Shankar, Balakrishnan; Sundaram, Ganesh
2017-06-01
This work presents a simple, computationally efficient inverse kinematics solution for a pitch/roll heliostat using two linear actuators. The heliostat design and kinematics have been developed, modeled and tested using computer simulation software. A physical heliostat prototype was fabricated to validate the theoretical computations and data. Pitch/roll heliostats have numerous advantages including reduced cost potential and reduced space requirements, with a primary disadvantage being the significantly more complicated kinematics, which are solved here. Novel methods are applied to simplify the inverse kinematics problem which could be applied to other similar problems.
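The flavor of such a closed-form inverse kinematics solution can be sketched as follows. The axis conventions and rotation order below are assumptions for illustration, not those of the paper, and mapping the recovered angles to the two linear-actuator lengths would additionally require the specific linkage geometry:

```python
import math

# Forward kinematics: mirror normal for a given pitch (rotation about x)
# followed by roll (rotation about y), starting from a vertical normal (0, 0, 1).
def forward_normal(pitch, roll):
    nx = math.sin(roll) * math.cos(pitch)
    ny = -math.sin(pitch)
    nz = math.cos(roll) * math.cos(pitch)
    return nx, ny, nz

# Inverse kinematics: recover pitch and roll from a desired unit normal
# (valid for |pitch| < pi/2 with this convention).
def inverse_angles(nx, ny, nz):
    pitch = -math.asin(ny)
    roll = math.atan2(nx, nz)
    return pitch, roll
```

A round trip through the forward and inverse maps recovers the original angles, which is the basic consistency check used when validating such kinematics against simulation.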
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
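The key trick, solving for the linearly related parameters analytically inside a sampling loop over the non-linear ones, can be sketched on a toy exponential model; a grid sweep stands in for the Monte Carlo sampling, and all model details are invented:

```python
import math

# Synthetic data d(t) = a * exp(-c * t): a enters linearly, c non-linearly.
t = [0.1 * i for i in range(20)]
a_true, c_true = 2.0, 0.7
d = [a_true * math.exp(-c_true * ti) for ti in t]

def best_linear(c):
    # For a fixed non-linear parameter c, the linear parameter a has a
    # closed-form least-squares solution.
    f = [math.exp(-c * ti) for ti in t]
    a = sum(di * fi for di, fi in zip(d, f)) / sum(fi * fi for fi in f)
    misfit = sum((di - a * fi) ** 2 for di, fi in zip(d, f))
    return a, misfit

# Sweep c (a grid stand-in for Monte Carlo sampling of the non-linear
# parameters), with the analytic linear solve inside the loop.
best = min((best_linear(0.001 * k)[1], 0.001 * k) for k in range(200, 1500))
c_est = best[1]
a_est = best_linear(c_est)[0]
```

With error-free data the sweep recovers both parameters essentially exactly; combining the analytic linear solve with sampling over only the non-linear parameters is what makes the full Bayesian version efficient.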
Domain identification in impedance computed tomography by spline collocation method
NASA Technical Reports Server (NTRS)
Kojima, Fumio
1990-01-01
A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem for integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.
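A minimal numerical illustration of a second-kind integral equation is given below, using midpoint quadrature and fixed-point iteration as a simple stand-in for the paper's spline collocation scheme; the kernel and right-hand side are chosen so the exact solution is known:

```python
# Discretization of a Fredholm integral equation of the second kind:
#   u(x) = f(x) + \int_0^1 K(x, y) u(y) dy,  with K(x, y) = x*y, f(x) = x.
# For this separable kernel the exact solution is u(x) = 1.5 * x.
n = 200
h = 1.0 / n
xs = [(i + 0.5) * h for i in range(n)]    # midpoint quadrature nodes
f = [x for x in xs]
u = f[:]                                   # initial guess
for _ in range(100):                       # fixed-point (Neumann) iteration
    integral = sum(y * uy for y, uy in zip(xs, u)) * h
    u = [fx + x * integral for fx, x in zip(f, xs)]
```

The iteration converges geometrically here because the integral operator has norm less than one; collocation with splines replaces the midpoint rule with a piecewise-polynomial representation but leads to a linear system of the same shape.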
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10⁷ or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
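The sketching idea can be illustrated on a toy scalar estimation problem: a random matrix compresses many observations into a few sketched observations, and the small sketched least-squares problem is solved instead of the large one. The ±1 sketch matrix below is one common choice; everything else is invented and much simpler than the RGA/PCGA machinery:

```python
import math
import random

random.seed(1)

# Toy large least-squares problem: n = 10000 observations of a single
# parameter m (true value 3), sketched down to k = 50 observations.
n, k = 10000, 50
x = [random.random() for _ in range(n)]                 # sensitivities
y = [3.0 * xi + random.gauss(0.0, 0.01) for xi in x]    # noisy data

scale = 1.0 / math.sqrt(k)
S = [[scale * random.choice((-1.0, 1.0)) for _ in range(n)]
     for _ in range(k)]                                  # sketch matrix
SG = [sum(Sij * xj for Sij, xj in zip(row, x)) for row in S]
Sy = [sum(Sij * yj for Sij, yj in zip(row, y)) for row in S]

# Solve the k-dimensional sketched least-squares problem (scalar model).
m_est = sum(a * b for a, b in zip(SG, Sy)) / sum(a * a for a in SG)
```

The estimate stays close to the true value even though only 50 sketched observations (rather than 10,000 raw ones) enter the solve, which is the cost saving the abstract describes.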
Inverse models: A necessary next step in ground-water modeling
Poeter, E.P.; Hill, M.C.
1997-01-01
Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best-fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
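A sketch of nonlinear least-squares calibration on an invented pumping-test model follows: drawdowns depend non-linearly on transmissivity, the log-transmissivity is estimated (a common practice), and a Gauss-Newton iteration finds the best-fit value. All numbers are made up for illustration:

```python
import math

# Toy steady pumping-test model: drawdown s_i = (Q / (2*pi*T)) * ln(R / r_i),
# calibrated for log-transmissivity p = ln(T) by Gauss-Newton iteration.
Q, R = 0.01, 100.0
radii = [1.0, 2.0, 5.0, 10.0, 20.0]
c = [Q / (2.0 * math.pi) * math.log(R / r) for r in radii]
T_true = 50.0
s_obs = [ci / T_true for ci in c]       # error-free synthetic drawdowns

p = math.log(20.0)                      # initial guess T = 20
for _ in range(10):
    model = [ci * math.exp(-p) for ci in c]
    resid = [so - mo for so, mo in zip(s_obs, model)]
    J = [-mo for mo in model]           # sensitivity d(model_i)/dp
    # Gauss-Newton update: p += (J^T J)^{-1} J^T resid (scalar case).
    p += sum(Ji * ri for Ji, ri in zip(J, resid)) / \
         sum(Ji * Ji for Ji in J)
T_est = math.exp(p)
```

The iteration converges to the true transmissivity in a handful of steps; with noisy field data, the residuals and the curvature of the misfit also yield the calibration-quality statistics and confidence limits the abstract lists.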
Inverse kinematics problem in robotics using neural networks
NASA Technical Reports Server (NTRS)
Choi, Benjamin B.; Lawrence, Charles
1992-01-01
In this paper, multilayer feedforward networks are applied to the robot inverse kinematics problem. The networks are trained with end-effector positions and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary end-effector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way both to model the manipulator inverse kinematics and to circumvent the problems associated with algorithmic solution methods.
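The train-then-query workflow can be sketched for a 2-DOF planar arm (rather than the paper's 3-DOF spatial manipulator): generate (joint angles → end-effector) training pairs from forward kinematics, then answer inverse-kinematics queries from the data. A nearest-neighbour lookup is used below as a crude, easily checked stand-in for the trained feedforward network:

```python
import math

# Forward kinematics of a 2-link planar arm with unit link lengths.
L1 = L2 = 1.0
def fk(t1, t2):
    return (L1 * math.cos(t1) + L2 * math.cos(t1 + t2),
            L1 * math.sin(t1) + L2 * math.sin(t1 + t2))

# Training set: sampled joint angles paired with end-effector positions,
# exactly the kind of data a feedforward network would be trained on.
steps = [i * 0.05 for i in range(32)]        # angles in [0, 1.55] rad
training = [((fk(t1, t2)), (t1, t2)) for t1 in steps for t2 in steps]

def ik(target):
    # Return the training angles whose end-effector lies closest to target
    # (nearest-neighbour stand-in for evaluating the trained network).
    return min(training,
               key=lambda rec: (rec[0][0] - target[0]) ** 2 +
                               (rec[0][1] - target[1]) ** 2)[1]
```

The accuracy of this lookup is limited by the sampling grid; the appeal of a trained network is that it interpolates smoothly between training samples.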
Real Variable Inversion of Laplace Transforms: An Application in Plasma Physics.
ERIC Educational Resources Information Center
Bohn, C. L.; Flynn, R. W.
1978-01-01
Discusses the nature of Laplace transform techniques and explains an alternative to them: the Widder's real inversion. To illustrate the power of this new technique, it is applied to a difficult inversion: the problem of Landau damping. (GA)
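Widder's real-variable inversion (the Post-Widder formula) can be demonstrated on a toy transform; the specific transform below is an illustration chosen so the n-th derivative is available in closed form, not an example from the article:

```python
import math

# Post-Widder real-variable inversion of a Laplace transform F(s):
#   f(t) = lim_{n->inf} ((-1)^n / n!) * (n/t)^(n+1) * F^(n)(n/t).
# Toy transform F(s) = 1/(s + 1), so f(t) = exp(-t). Its n-th derivative is
# F^(n)(s) = (-1)^n * n! / (s + 1)^(n+1); substituting cancels the signs and
# factorials, leaving a numerically stable expression.
def post_widder(t, n):
    s = n / t
    return (s / (s + 1.0)) ** (n + 1)
```

Only real values of the transform (and its derivatives) are used, which is the whole point of the technique; the price is slow, O(1/n), convergence, visible even in this exactly differentiable toy case.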
A direct method for nonlinear ill-posed problems
NASA Astrophysics Data System (ADS)
Lakhal, A.
2018-02-01
We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula that we compute explicitly by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.
Force sensing using 3D displacement measurements in linear elastic bodies
NASA Astrophysics Data System (ADS)
Feng, Xinzeng; Hui, Chung-Yuen
2016-07-01
In cell traction microscopy, the mechanical forces exerted by a cell on its environment are usually determined from experimentally measured displacements by solving an inverse problem in elasticity. In this paper, an innovative numerical method is proposed which finds the "optimal" traction for the inverse problem. When sufficient regularization is applied, we demonstrate that the proposed method significantly improves on the widely used approach based on Green's functions. Motivated by real cell experiments, the equilibrium condition of a slowly migrating cell is imposed as a set of equality constraints on the unknown traction. Our validation benchmarks demonstrate that the numerical solution to the constrained inverse problem recovers the actual traction well when the optimal regularization parameter is used. The proposed method can thus be applied to study general force-sensing problems, which utilize displacement measurements to sense inaccessible forces in linear elastic bodies with a priori constraints.
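A toy version of regularized, equilibrium-constrained traction recovery is sketched below. The two-patch "Green's function" matrix and all numbers are invented; the force-balance constraint is imposed by elimination, a simple stand-in for the paper's constrained optimization:

```python
# Two traction unknowns f1, f2 on a 1-D substrate; force balance for a
# slowly migrating cell imposes f1 + f2 = 0, which eliminates f2.
G = [(1.0, 0.2), (0.8, 0.5), (0.3, 0.9), (0.1, 1.0)]   # hypothetical Green's fn
f1_true = 2.0
u = [g1 * f1_true + g2 * (-f1_true) for g1, g2 in G]    # exact displacements

g = [g1 - g2 for g1, g2 in G]    # effective sensitivity after elimination
alpha = 1e-3                     # Tikhonov regularization parameter
# Regularized least squares: f1 = (g^T u) / (g^T g + alpha).
f1 = sum(gi * ui for gi, ui in zip(g, u)) / (sum(gi * gi for gi in g) + alpha)
f2 = -f1
```

Because the constraint is built in by elimination, the recovered traction satisfies force balance exactly, while the regularization term controls noise amplification at the cost of a small bias.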
The importance of coherence in inverse problems in optics
NASA Astrophysics Data System (ADS)
Ferwerda, H. A.; Baltes, H. P.; Glass, A. S.; Steinle, B.
1981-12-01
Current inverse problems of statistical optics are presented with a guide to relevant literature. The inverse problems are categorized into four groups, and the Van Cittert-Zernike theorem and its generalization are discussed. The retrieval of structural information from the far-zone degree of coherence and the time-averaged intensity distribution of radiation scattered by a superposition of random and periodic scatterers are also discussed. In addition, formulas for the calculation of far-zone properties are derived within the framework of scalar optics, and results are applied to two examples.
NASA Astrophysics Data System (ADS)
Uhlmann, Gunther
2008-07-01
This volume represents the proceedings of the fourth Applied Inverse Problems (AIP) international conference and the first congress of the Inverse Problems International Association (IPIA) which was held in Vancouver, Canada, June 25-29, 2007. The organizing committee was formed by Uri Ascher, University of British Columbia, Richard Froese, University of British Columbia, Gary Margrave, University of Calgary, and Gunther Uhlmann, University of Washington, chair. The conference was part of the activities of the Pacific Institute of Mathematical Sciences (PIMS) Collaborative Research Group on inverse problems (http://www.pims.math.ca/scientific/collaborative-research-groups/past-crgs). This event was also supported by grants from NSF and MITACS. Inverse Problems (IP) are problems where causes for a desired or an observed effect are to be determined. They lie at the heart of scientific inquiry and technological development. The enormous increase in computing power and the development of powerful algorithms have made it possible to apply the techniques of IP to real-world problems of growing complexity. Applications include a number of medical as well as other imaging techniques, location of oil and mineral deposits in the earth's substructure, creation of astrophysical images from telescope data, finding cracks and interfaces within materials, shape optimization, model identification in growth processes and, more recently, modelling in the life sciences. The series of Applied Inverse Problems (AIP) Conferences aims to provide a primary international forum for academic and industrial researchers working on all aspects of inverse problems, such as mathematical modelling, functional analytic methods, computational approaches, numerical algorithms etc.
The steering committee of the AIP conferences consists of Heinz Engl (Johannes Kepler Universität, Austria), Joyce McLaughlin (RPI, USA), William Rundell (Texas A&M, USA), Erkki Somersalo (Helsinki University of Technology, Finland), Masahiro Yamamoto (University of Tokyo, Japan), Gunther Uhlmann (University of Washington) and Jun Zou (Chinese University of Hong Kong). IPIA is a recently formed organization that intends to promote the field of inverse problems at all levels. See http://www.inverse-problems.net/. IPIA awarded the first Calderón prize at the opening of the conference to Matti Lassas (see first article in the Proceedings). There was also a general meeting of IPIA during the workshop. This was probably the largest conference ever on IP with 350 registered participants. The program consisted of 18 invited speakers and the Calderón Prize Lecture given by Matti Lassas. Another integral part of the program was the more than 60 mini-symposia that covered a broad spectrum of the theory and applications of inverse problems, focusing on recent developments in medical imaging, seismic exploration, remote sensing, industrial applications, and numerical and regularization methods in inverse problems. Another important related topic was image processing, in particular the advances which have allowed for significant enhancement of widely used imaging techniques. For more details on the program see the web page: http://www.pims.math.ca/science/2007/07aip. These proceedings reflect the broad spectrum of topics covered in AIP 2007. The conference and these proceedings would not have happened without the contributions of many people. I thank all my fellow organizers, the invited speakers, and the speakers and organizers of mini-symposia for making this an exciting and vibrant event. I also thank PIMS, NSF and MITACS for their generous financial support. I take this opportunity to thank the PIMS staff, particularly Ken Leung, for making the local arrangements.
Also thanks are due to Stephen McDowall for his help in preparing the schedule of the conference and Xiaosheng Li for the help in preparing these proceedings. I also would like to thank the contributors of this volume and the referees. Finally, many thanks are due to Graham Douglas and Elaine Longden-Chapman for suggesting publication in Journal of Physics: Conference Series.
NASA Astrophysics Data System (ADS)
Reiter, D. T.; Rodi, W. L.
2015-12-01
Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher-mode Rayleigh-wave group velocities.
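The augmented-Lagrangian consensus idea can be sketched with two scalar quadratic component objectives standing in for two data-subset misfits (an ADMM-style iteration; all details are invented for illustration):

```python
# Consensus splitting of a decomposed inverse problem: minimize
#   F1(x) + F2(x), with F1(x) = (x - 1)^2 and F2(x) = (x - 3)^2,
# by giving each component its own model copy constrained to agree.
# The full minimizer is x = 2.
rho = 1.0                      # augmented-Lagrangian penalty parameter
z, u1, u2 = 0.0, 0.0, 0.0      # common model and scaled multipliers
for _ in range(200):
    # Separate (closed-form) minimizations of each augmented component:
    #   x_i = argmin F_i(x) + (rho/2) * (x - z + u_i)^2.
    x1 = (2.0 * 1.0 + rho * (z - u1)) / (2.0 + rho)
    x2 = (2.0 * 3.0 + rho * (z - u2)) / (2.0 + rho)
    z = 0.5 * (x1 + u1 + x2 + u2)   # merge step: update the common model
    u1 += x1 - z                     # multiplier updates steer the copies
    u2 += x2 - z                     # toward the common model
```

The two component problems are solved independently at every iteration, and the multiplier updates pull both copies to the minimizer of the full objective, which is the merging mechanism the abstract describes.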
NASA Astrophysics Data System (ADS)
Cheng, Jin; Hon, Yiu-Chung; Seo, Jin Keun; Yamamoto, Masahiro
2005-01-01
The Second International Conference on Inverse Problems: Recent Theoretical Developments and Numerical Approaches was held at Fudan University, Shanghai, from 16-21 June 2004. The first conference in this series was held at the City University of Hong Kong in January 2002 and it was agreed to hold the conference once every two years in a Pan-Pacific Asian country. The next conference is scheduled to be held at Hokkaido University, Sapporo, Japan in July 2006. The purpose of this series of biennial conferences is to establish and develop constant international collaboration, especially among the Pan-Pacific Asian countries. In recent decades, interest in inverse problems has been flourishing all over the globe because of both the theoretical interest and practical requirements. In particular, in Asian countries, one is witnessing remarkable new trends of research in inverse problems as well as the participation of many young talents. Considering these trends, the second conference was organized with the chairperson Professor Li Tat-tsien (Fudan University), in order to provide forums for developing research cooperation and to promote activities in the field of inverse problems. Because solutions to inverse problems are needed in various applied fields, we welcomed a total of 92 participants at the second conference and arranged various talks ranging from mathematical analyses to solutions of concrete inverse problems in the real world. This volume contains 18 selected papers, all of which have undergone peer review. The 18 papers are classified as follows: Surveys: four papers give reviews of specific inverse problems. Theoretical aspects: six papers investigate the uniqueness, stability, and reconstruction schemes. Numerical methods: four papers devise new numerical methods and their applications to inverse problems.
Solutions to applied inverse problems: four papers discuss concrete inverse problems such as scattering problems and inverse problems in atmospheric sciences and oceanography. Last but not least is our gratitude. As editors we would like to express our sincere thanks to all the plenary and invited speakers, the members of the International Scientific Committee and the Advisory Board for the success of the conference, which has given rise to this present volume of selected papers. We would also like to thank Mr Wang Yanbo, Miss Wan Xiqiong and the graduate students at Fudan University for their effective work to make this conference a success. The conference was financially supported by the NSF of China, the Mathematical Center of the Ministry of Education of China, E-Institutes of Shanghai Municipal Education Commission (No E03004) and Fudan University, Grant 15340027 from the Japan Society for the Promotion of Science, and Grant 15654015 from the Ministry of Education, Culture, Sports, Science and Technology.
PREFACE: Inverse Problems in Applied Sciences—towards breakthrough
NASA Astrophysics Data System (ADS)
Cheng, Jin; Iso, Yuusuke; Nakamura, Gen; Yamamoto, Masahiro
2007-06-01
These are the proceedings of the international conference `Inverse Problems in Applied Sciences—towards breakthrough', which was held at Hokkaido University, Sapporo, Japan on 3-7 July 2006 (http://coe.math.sci.hokudai.ac.jp/sympo/inverse/). There were 88 presentations and more than 100 participants, and we are proud to say that the conference was very successful. Nowadays, many new activities on inverse problems are flourishing at many centers of research around the world, and the conference successfully gathered a worldwide variety of researchers. We believe that this volume not only contains the main papers but also conveys the general status of current research into inverse problems. This conference was the third biennial international conference on inverse problems, the core of which is the Pan-Pacific Asian area. The purpose of this series of conferences is to establish and develop lasting international collaboration, especially among the Pan-Pacific Asian countries, and to lead the organization of activities concerning inverse problems centered in East Asia. The first conference was held at City University of Hong Kong in January 2002 and the second was held at Fudan University in June 2004. Following the two preceding successes, the third conference was organized in order to extend the scope of activities and to build useful bridges to the next conference in Seoul in 2008. This third biennial conference was therefore intended not only to establish collaboration and links between researchers in Asia and leading researchers worldwide in inverse problems, but also to nurture interdisciplinary collaboration between theoretical fields such as mathematics, applied fields, and evolving aspects of inverse problems. For these purposes, we organized tutorial lectures, serial lectures and a panel discussion as well as conference research presentations. This volume contains three lecture notes from the tutorial and serial lectures, and 22 papers.
Especially at this flourishing time, it is necessary to carefully analyse the current status of inverse problems for further development. Thus we have opened with the panel discussion entitled `Future of Inverse Problems' with panelists: Professors J Cheng, H W Engl, V Isakov, R Kress, J-K Seo, G Uhlmann and the commentator: Elaine Longden-Chapman from IOP Publishing. The aims of the panel discussion were to examine the current research status from various viewpoints, to discuss how we can overcome any difficulties and how we can promote young researchers and open new possibilities for inverse problems such as industrial linkages. As one output, the panel discussion has triggered the organization of the Inverse Problems International Association (IPIA) which has led to its first international congress in the summer of 2007. Another remarkable outcome of the conference is, of course, the present volume: this is the very high quality online proceedings volume of Journal of Physics: Conference Series. Readers can see in these proceedings very well written tutorial lecture notes, and very high quality original research and review papers all of which show what was achieved by the time the conference was held. The electronic publication of the proceedings is a new way of publicizing the achievement of the conference. It has the advantage of wide circulation and cost reduction. We believe this is a most efficient method for our needs and purposes. We would like to take this opportunity to acknowledge all the people who helped to organize the conference. Guest Editors Jin Cheng, Fudan University, Shanghai, China Yuusuke Iso, Kyoto University, Kyoto, Japan Gen Nakamura, Hokkaido University, Sapporo, Japan Masahiro Yamamoto, University of Tokyo, Tokyo, Japan
EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.
Hadinia, M; Jafari, R; Soleimani, M
2016-06-01
This paper presents the application of the hybrid finite element-element-free Galerkin (FE-EFG) method to the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. The finite element (FE) and element-free Galerkin (EFG) methods are both accurate numerical techniques; however, the FE technique involves burdensome meshing tasks, while the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to combine the advantages of the FE and EFG methods: the complete electrode model of the forward problem is solved, and an iteratively regularized Gauss-Newton method is adopted to solve the inverse problem. The hybrid method is also used to compute the Jacobian in the inverse problem. Using 2D circular homogeneous models, the numerical results are validated against analytical and experimental results, and the performance of the hybrid FE-EFG method is compared with that of the FE method. Image reconstruction results are presented for a human chest experimental phantom.
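The iteratively regularized Gauss-Newton update at the heart of such inversions can be sketched generically; the exponential forward map below is a hypothetical stand-in for the actual EIT forward model, not the paper's FE-EFG solver:

```python
import numpy as np

def gauss_newton(F, J, d_obs, m0, lam=1e-6, n_iter=30):
    """Regularized Gauss-Newton: solve (J^T J + lam I) dm = J^T (d_obs - F(m))
    at each step and update the model m."""
    m = m0.astype(float).copy()
    for _ in range(n_iter):
        r = d_obs - F(m)                       # data residual
        Jm = J(m)                              # Jacobian at the current model
        H = Jm.T @ Jm + lam * np.eye(m.size)   # regularized normal matrix
        m = m + np.linalg.solve(H, Jm.T @ r)   # model update
    return m

# Toy nonlinear forward model d = exp(A m), a stand-in for the EIT map
A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
F = lambda m: np.exp(A @ m)
J = lambda m: np.exp(A @ m)[:, None] * A       # chain rule: diag(exp(Am)) A
m_true = np.array([0.3, -0.2])
m_est = gauss_newton(F, J, F(m_true), np.zeros(2))
```

With noise-free data and a small damping parameter the iteration recovers the toy model parameters to high accuracy; in practice the regularization parameter is tied to the noise level.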
3D first-arrival traveltime tomography with modified total variation regularization
NASA Astrophysics Data System (ADS)
Jiang, Wenbin; Zhang, Jie
2018-02-01
Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing this ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, Tikhonov regularization recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization that preserves sharp velocity contrasts and improves the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with traditional Tikhonov regularization, and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher-resolution models than conventional traveltime tomography with Tikhonov regularization. We also apply the technique to field data. The stacked section shows significant improvements with static corrections from the MTV traveltime tomography.
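The L2 total variation subproblem is exactly the kind of problem the split-Bregman iteration handles well. A minimal 1-D denoising sketch (not the paper's tomography code; signal and weights are made up) shows the alternation between a linear solve, shrinkage, and a Bregman update:

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding, the closed-form prox of the l1 term."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_denoise_1d(f, mu=10.0, lam=1.0, n_iter=100):
    """Split-Bregman for min_u  mu/2 ||u - f||^2 + |D u|_1,
    with D the forward-difference operator."""
    n = f.size
    u = f.copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    D = np.diff(np.eye(n), axis=0)             # (n-1) x n difference matrix
    A = mu * np.eye(n) + lam * D.T @ D         # small dense system for the sketch
    for _ in range(n_iter):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))  # quadratic solve
        Du = D @ u
        d = shrink(Du + b, 1.0 / lam)          # shrinkage on the gradient field
        b = b + Du - d                         # Bregman variable update
    return u

# Piecewise-constant signal plus noise: TV removes noise but keeps the jump
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
u = tv_denoise_1d(f)
```

The flat segments come out nearly constant while the step is preserved, which is the edge-preserving behavior the MTV regularization exploits.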
A systematic linear space approach to solving partially described inverse eigenvalue problems
NASA Astrophysics Data System (ADS)
Hu, Sau-Lon James; Li, Haujun
2008-06-01
Most applications of the inverse eigenvalue problem (IEP), which concerns the reconstruction of a matrix from prescribed spectral data, are associated with special classes of structured matrices. Solving the IEP requires satisfying both a spectral constraint and a structural constraint. If the spectral constraint consists of only one or a few prescribed eigenpairs, this kind of inverse problem is referred to as the partially described inverse eigenvalue problem (PDIEP). This paper develops an efficient, general and systematic approach to solving the PDIEP. Basically, the approach, applicable to various structured matrices, converts the PDIEP into an ordinary inverse problem formulated as a set of simultaneous linear equations. The simultaneous linear equations are solved for the model parameters using the singular value decomposition method. Because of the conversion to an ordinary inverse problem, other constraints on the model parameters can easily be incorporated into the solution procedure. A detailed derivation and numerical examples implementing the newly developed approach are presented for symmetric Toeplitz and quadratic pencil (including the mass, damping and stiffness matrices of a linear dynamic system) PDIEPs. Excellent numerical results are achieved for both kinds of problem, in situations with either a unique solution or infinitely many solutions.
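The conversion to simultaneous linear equations can be sketched on a made-up miniature symmetric Toeplitz PDIEP (not the paper's formulation): given one prescribed eigenpair (λ, v), the equation T(c)v = λv is linear in the first-row entries c, and the SVD gives a robust (minimum-norm least-squares) solution:

```python
import numpy as np

def svd_solve(A, b, tol=1e-10):
    """Minimum-norm least-squares solution via the SVD (pseudo-inverse),
    robust even when the simultaneous equations are rank-deficient."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > tol * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

# Toy PDIEP: find a symmetric Toeplitz matrix T(c), T[i,j] = c[|i-j|],
# with the prescribed eigenpair (lmbda, v). T(c) v = lmbda v is linear in c.
n = 4
v = np.array([1.0, 2.0, 0.5, -1.0])
lmbda = 3.0
M = np.zeros((n, n))                  # coefficient matrix: (M c)_i = (T(c) v)_i
for i in range(n):
    for j in range(n):
        M[i, abs(i - j)] += v[j]
c = svd_solve(M, lmbda * v)
```

Reassembling T from the recovered first row and checking T v = λv confirms the eigenpair constraint is met; extra structural or side constraints would simply add rows to M.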
Layer Stripping Solutions of Inverse Seismic Problems.
1985-03-21
problems--more so than has generally been recognized. The subject of this thesis is the theoretical development of the layer-stripping methodology, and... medium varies sharply at each interface, which would be expected to cause difficulties for the algorithm, since it was designed for a smoothly varying... methodology was applied in a novel way. The inverse problem considered in this chapter was that of reconstructing a layered medium from measurement of its
The Role of Eigensolutions in Nonlinear Inverse Cavity-Flow-Theory. Revision.
1985-06-10
The method of Levi Civita is applied to an isolated fully cavitating body at zero cavitation number and adapted to the solution of the inverse... problem is not thought to present much of a challenge at zero cavitation number. In this case, the classical method of Levi Civita [7] can be
Viscoelastic material inversion using Sierra-SD and ROL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walsh, Timothy; Aquino, Wilkins; Ridzal, Denis
2014-11-01
In this report we derive frequency-domain methods for the inverse characterization of the constitutive parameters of viscoelastic materials. The inverse problem is cast in a PDE-constrained optimization framework with efficient computation of gradients and Hessian-vector products through matrix-free operations. The abstract optimization operators for first and second derivatives are derived from first principles. Various methods from the Rapid Optimization Library (ROL) are tested on the viscoelastic inversion problem. The methods described herein are applied to compute the viscoelastic bulk and shear moduli of a foam block model, which was recently used in experimental testing for viscoelastic property characterization.
A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem
NASA Astrophysics Data System (ADS)
Park, Taehoon; Park, Won-Kwang
2015-09-01
Numerical simulations have shown that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied to limited-view inverse scattering problems; however, the application has remained somewhat heuristic. In this contribution, we identify a necessary condition for MUSIC imaging of a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of integer order of the first kind. Numerical experiments with noisy synthetic data support our investigation.
Total-variation based velocity inversion with Bregmanized operator splitting algorithm
NASA Astrophysics Data System (ADS)
Zand, Toktam; Gholami, Ali
2018-04-01
Many problems in applied geophysics can be formulated as linear inverse problems. The associated problems, however, are large-scale and ill-conditioned; regularization techniques must therefore be employed to generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. To tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated in two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
Electromagnetic Inverse Methods and Applications for Inhomogeneous Media Probing and Synthesis.
NASA Astrophysics Data System (ADS)
Xia, Jake Jiqing
The electromagnetic inverse scattering problems considered in this thesis are to find unknown inhomogeneous permittivity and conductivity profiles in a medium from scattering data. Both analytical and numerical methods are studied in the thesis. The inverse methods can be applied to geophysical medium probing, non-destructive testing, medical imaging, optical waveguide synthesis and material characterization. An introduction is given in Chapter 1. The first part of the thesis presents inhomogeneous media probing. The Riccati equation approach is discussed in Chapter 2 for a one-dimensional planar profile inversion problem. Two types of Riccati equations are derived and distinguished. New renormalized formulae based on inverting one specific type of Riccati equation are derived. Relations between the inverse methods of the Green's function, the Riccati equation and the Gel'fand-Levitan-Marchenko (GLM) theory are studied. In Chapter 3, the renormalized source-type integral equation (STIE) approach is formulated for the inversion of cylindrically inhomogeneous permittivity and conductivity profiles. The advantages of the renormalized STIE approach are demonstrated in numerical examples. The cylindrical profile inversion problem has an application in borehole inversion. In Chapter 4 the renormalized STIE approach is extended to a planar case where the two background media are different. Numerical results have shown fast convergence. This formulation is applied to the inversion of underground soil moisture profiles in remote sensing. The second part of the thesis presents the synthesis problem of inhomogeneous dielectric waveguides using electromagnetic inverse methods. As a particular example, the rational function representation of reflection coefficients in the GLM theory is used. The GLM method is reviewed in Chapter 5. Relations between the modal structures and transverse reflection coefficients of an inhomogeneous medium are established in Chapter 6.
A stratified medium model is used to derive the guidance condition and the reflection coefficient. Results obtained in Chapter 6 provide the physical foundation for applying the inverse methods for the waveguide design problem. In Chapter 7, a global guidance condition for continuously varying medium is derived using the Riccati equation. It is further shown that the discrete modes in an inhomogeneous medium have the same wave vectors as the poles of the transverse reflection coefficient. An example of synthesizing an inhomogeneous dielectric waveguide using a rational reflection coefficient is presented. A summary of the thesis is given in Chapter 8. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.).
Inversion methods for interpretation of asteroid lightcurves
NASA Technical Reports Server (NTRS)
Kaasalainen, Mikko; Lamberg, L.; Lumme, K.
1992-01-01
We have developed methods of inversion that can be used in the determination of the three-dimensional shape or the albedo distribution of the surface of a body from disk-integrated photometry, assuming the shape to be strictly convex. In addition to the theory of inversion methods, we have studied the practical aspects of the inversion problem and applied our methods to lightcurve data of 39 Laetitia and 16 Psyche.
Full-Physics Inverse Learning Machine for Satellite Remote Sensing Retrievals
NASA Astrophysics Data System (ADS)
Loyola, D. G.
2017-12-01
Satellite remote sensing retrievals are usually ill-posed inverse problems that are typically solved by finding a state vector that minimizes the residual between simulated data and real measurements. Classical inversion methods are very time-consuming, as they require iterative calls to complex radiative-transfer forward models to simulate radiances and Jacobians, followed by the inversion of relatively large matrices. In this work we present a novel and extremely fast algorithm for solving inverse problems, called the full-physics inverse learning machine (FP-ILM). The FP-ILM algorithm consists of a training phase, in which machine learning techniques are used to derive an inversion operator from synthetic data generated using a radiative transfer model (which expresses the "full-physics" component) and a smart sampling technique, and an operational phase, in which the inversion operator is applied to real measurements. FP-ILM has been successfully applied to the retrieval of SO2 plume height during volcanic eruptions and to the retrieval of ozone profile shapes from UV/VIS satellite sensors. Furthermore, FP-ILM will be used for the near-real-time processing of the upcoming generation of European Sentinel sensors, with their unprecedented spectral and spatial resolution and the associated large increase in the amount of data.
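The two-phase structure — train an inversion operator on forward-model simulations, then apply it cheaply to measurements — can be sketched with a toy forward model standing in for the radiative transfer code; the ridge-regression "learning machine" here is an illustrative simplification, not the FP-ILM implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "full-physics" forward model: state x -> radiance y (stand-in for RT code)
W_true = rng.standard_normal((10, 3))
forward = lambda x: np.tanh(0.3 * (W_true @ x))

# Training phase: sample states, simulate radiances, fit the inversion operator G.
X = rng.uniform(-0.5, 0.5, size=(500, 3))            # sampled state vectors
Y = np.array([forward(x) for x in X])                # simulated radiances
lam = 1e-3                                           # ridge regularization
G = np.linalg.solve(Y.T @ Y + lam * np.eye(10), Y.T @ X)

# Operational phase: one matrix product per retrieval, no iterative RT calls
x_true = np.array([0.2, -0.1, 0.3])
x_ret = forward(x_true) @ G
```

All the expensive forward-model evaluations happen offline; the operational retrieval reduces to applying the learned operator, which is what makes the approach fast enough for near-real-time processing.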
Time-reversal and Bayesian inversion
NASA Astrophysics Data System (ADS)
Debski, Wojciech
2017-04-01
The probabilistic inversion technique is superior to the classical optimization-based approach in all but one respect: it requires exhaustive computations, which prohibits its use in very large inverse problems such as global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is a continuous ongoing effort to make such large inverse tasks manageable with the probabilistic inverse approach. One promising possibility for achieving this goal relies on exploiting the internal symmetries of the seismological modeling problems at hand, namely time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.
NASA Astrophysics Data System (ADS)
Ryzhikov, I. S.; Semenkin, E. S.
2017-02-01
This study focuses on solving an inverse mathematical modelling problem for dynamical systems based on observation data and control inputs. The mathematical model is sought in the form of a linear differential equation, which determines a system with multiple inputs and a single output, together with a vector of initial point coordinates. The problem is complex and multimodal, and for this reason an evolutionary optimization technique oriented towards dynamical system identification was applied. To improve its performance, an algorithm restart operator was implemented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ry, Rexha Verdhora, E-mail: rexha.vry@gmail.com; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id
Earthquake observation is routinely and widely used in monitoring tectonic activity, and also at local scales such as monitoring volcano-tectonic and geothermal activity. Determining precise hypocenters requires finding a hypocenter location that minimizes the error between the observed and calculated travel times. When solving this nonlinear inverse problem, simulated annealing can be applied as a global optimization method whose convergence is independent of the initial model. In this study, we developed our own program code applying adaptive simulated annealing inversion in the Matlab environment. We applied this method to determine earthquake hypocenters for several data cases: regional tectonic, volcano-tectonic, and geothermal field data. The travel times were calculated using the ray-tracing shooting method. We then compared the results with those of Geiger's method to analyze reliability. Our results show that the hypocenter locations have smaller RMS errors than Geiger's results, which can be statistically associated with better solutions. The earthquake hypocenters also correlate well with the geological structure in the study area. We recommend using adaptive simulated annealing inversion to relocate hypocenters in order to obtain precise and accurate earthquake locations.
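A toy version of simulated-annealing hypocenter search in a uniform-velocity 2-D medium can illustrate the idea (made-up station geometry and a simple geometric schedule; the study uses 3-D ray tracing and an adaptive schedule):

```python
import numpy as np

rng = np.random.default_rng(2)

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0],
                     [10.0, 10.0], [5.0, 12.0]])     # km, hypothetical network
v = 5.0                                              # assumed uniform velocity, km/s
tt = lambda src: np.linalg.norm(stations - src, axis=1) / v

src_true = np.array([3.2, 6.7])
t_obs = tt(src_true)                                 # noise-free "picks"

def anneal(t_obs, n_iter=5000, T0=1.0, step=1.0):
    """Simulated annealing: random perturbations, accept uphill moves with
    probability exp(-dE/T); temperature decays geometrically."""
    x = np.array([5.0, 5.0])                         # initial guess (not critical)
    E = np.sum((tt(x) - t_obs) ** 2)
    best, Ebest = x.copy(), E
    T = T0
    for _ in range(n_iter):
        xn = x + step * T * rng.standard_normal(2)   # perturbation shrinks with T
        En = np.sum((tt(xn) - t_obs) ** 2)
        if En < E or rng.random() < np.exp(-(En - E) / T):
            x, E = xn, En
            if E < Ebest:
                best, Ebest = x.copy(), E
        T *= 0.999
    return best

src_est = anneal(t_obs)
```

Because the acceptance rule allows occasional uphill moves at high temperature, the search does not depend on the initial guess the way a Geiger (linearized) iteration does.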
Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.
2008-05-01
The estimation of area source pollutant strength is a relevant issue for the atmospheric environment, and it characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed by the delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization problem, whose objective function is given by the squared difference between the measured pollutant concentration and the mathematical model, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E. F. P. da Luz, H. F. de Campos Velho, J. C. Becceneri, D. R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
The Role of Eigensolutions in Nonlinear Inverse Cavity-Flow-Theory.
1983-01-25
The method of Levi Civita is applied to an isolated fully cavitating body at zero cavitation number and adapted to the solution of the inverse problem in which one... case, the classical method of Levi Civita [7] can be applied to an isolated
NASA Astrophysics Data System (ADS)
Quy Muoi, Pham; Nho Hào, Dinh; Sahoo, Sujit Kumar; Tang, Dongliang; Cong, Nguyen Huu; Dang, Cuong
2018-05-01
In this paper, we study a gradient-type method and a semismooth Newton method for minimization problems arising in regularizing inverse problems with nonnegative and sparse solutions. We propose a special penalty functional forcing the minimizers of the regularized minimization problems to be nonnegative and sparse, and then apply the proposed algorithms to a practical problem. The strong convergence of the gradient-type method and the local superlinear convergence of the semismooth Newton method are proven. We then use these algorithms for the phase retrieval problem and illustrate their efficiency in numerical examples, particularly in the practical problem of optical imaging through scattering media, where all the noise from the experiment is present.
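A gradient-type (proximal gradient) sketch for nonnegative sparse regularization, using a generic linear forward operator rather than the paper's phase retrieval setup: on the nonnegative orthant the l1 penalty is linear, so the prox step is a shifted projection.

```python
import numpy as np

def nonneg_ista(A, b, alpha=0.1, n_iter=1000):
    """Projected ISTA for  min 0.5||Ax - b||^2 + alpha*sum(x),  x >= 0.
    Prox of the penalty on the nonnegative orthant: max(x - t*alpha, 0)."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2       # step <= 1/L with L = ||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - t * A.T @ (A @ x - b)         # gradient step on the data term
        x = np.maximum(x - t * alpha, 0.0)    # nonnegativity + sparsity prox
    return x

# Toy recovery: a nonnegative 3-sparse vector from 40 random measurements
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[7, 42, 77]] = [1.0, 2.0, 1.5]
x_rec = nonneg_ista(A, A @ x_true)
```

The iterate stays nonnegative by construction, and with exact data the three active coefficients dominate the reconstruction; the semismooth Newton method in the paper accelerates exactly this kind of problem near the solution.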
Absolute mass scale calibration in the inverse problem of the physical theory of fireballs.
NASA Astrophysics Data System (ADS)
Kalenichenko, V. V.
A method of the absolute mass scale calibration is suggested for solving the inverse problem of the physical theory of fireballs. The method is based on the data on the masses of the fallen meteorites whose fireballs have been photographed in their flight. The method may be applied to those fireballs whose bodies have not experienced considerable fragmentation during their destruction in the atmosphere and have kept their form well enough. Statistical analysis of the inverse problem solution for a sufficiently representative sample makes it possible to separate a subsample of such fireballs. The data on the Lost City and Innisfree meteorites are used to obtain calibration coefficients.
Three-dimensional inversion of multisource array electromagnetic data
NASA Astrophysics Data System (ADS)
Tartaras, Efthimios
Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. 
I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.
Louis, A. K.
2006-01-01
Many algorithms applied in inverse scattering problems use source-field systems instead of the direct computation of the unknown scatterer. It is well known that the resulting source problem does not have a unique solution, since certain parts of the source totally vanish outside of the reconstruction area. This paper provides for the two-dimensional case special sets of functions, which include all radiating and all nonradiating parts of the source. These sets are used to solve an acoustic inverse problem in two steps. The problem under discussion consists of determining an inhomogeneous obstacle supported in a part of a disc, from data, known for a subset of a two-dimensional circle. In a first step, the radiating parts are computed by solving a linear problem. The second step is nonlinear and consists of determining the nonradiating parts.
Reconstruction of local perturbations in periodic surfaces
NASA Astrophysics Data System (ADS)
Lechleiter, Armin; Zhang, Ruming
2018-03-01
This paper concerns the inverse scattering problem of reconstructing a local perturbation in a periodic structure. Unlike in purely periodic problems, the scattered field is no longer periodic, so classical methods, which reduce quasi-periodic fields to one periodic cell, are no longer available. Based on the Floquet-Bloch transform, a numerical method has been developed to solve the direct problem, which opens up the possibility of designing an algorithm for the inverse problem. The numerical method introduced in this paper consists of two steps. The first step is initialization, i.e., locating the support of the perturbation by a simple method; this step reduces the inverse problem from an infinite domain to one periodic cell. The second step applies the Newton-CG method to the associated optimization problem, with the perturbation approximated in a finite spline basis. Numerical examples are given at the end of the paper, showing the efficiency of the numerical method.
On computational experiments in some inverse problems of heat and mass transfer
NASA Astrophysics Data System (ADS)
Bilchenko, G. G.; Bilchenko, N. G.
2016-11-01
The results of mathematical modeling of effective heat and mass transfer on hypersonic aircraft permeable surfaces are considered. The physico-chemical processes (dissociation and ionization) in the laminar boundary layer of a compressible gas are taken into account. Algorithms for control restoration are suggested for the interpolation and approximation statements of the heat and mass transfer inverse problems, and the differences between the solution methods for these statements are discussed. Both algorithms have been implemented as programs, and many computational experiments were carried out with them. The boundary layer parameters obtained by the A. A. Dorodnicyn generalized integral relations method from solving the direct problems were used to obtain the inverse problem solutions. Two types of blowing-law restoration for the inverse problem in the interpolation statement are presented as examples. The influence of the temperature factor on the blowing restoration is investigated. The differing sensitivity of the controllable parameters (local heat flow and local tangential friction) to stepwise (discrete) changes of the control (the blowing) and to the switching point position is studied.
NASA Astrophysics Data System (ADS)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. For practical problems, however, where the number of measurements is large and the model parameters are numerous, conventional inverse modeling methods can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations, which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem onto a Krylov subspace so that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for subsequent damping parameters. These computational techniques significantly improve the efficiency of the new inverse modeling algorithm. We apply the method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) across the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Our new inverse modeling method is therefore a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
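The core trick — solving each damped normal system with a Krylov method so that only matrix-vector products are needed — can be sketched as follows (a toy curve fit, not the MADS implementation; the subspace-recycling step is omitted for brevity):

```python
import numpy as np

def cg(matvec, b, n_iter=50, tol=1e-12):
    """Plain conjugate gradients; only matrix-vector products are required,
    which is what makes the Krylov approach attractive at scale."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        if np.sqrt(rs) < tol:
            break
        Ap = matvec(p)
        a = rs / (p @ Ap)
        x += a * p
        r -= a * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def levenberg_marquardt(F, J, d_obs, m0, lam=1.0, n_iter=30):
    m = m0.astype(float).copy()
    for _ in range(n_iter):
        r = d_obs - F(m)
        Jm = J(m)
        # Solve (J^T J + lam I) dm = J^T r matrix-free, instead of factorizing
        dm = cg(lambda v: Jm.T @ (Jm @ v) + lam * v, Jm.T @ r)
        if np.sum((d_obs - F(m + dm)) ** 2) < np.sum(r ** 2):
            m += dm
            lam *= 0.5          # good step: trust the Gauss-Newton model more
        else:
            lam *= 2.0          # bad step: back off toward gradient descent
    return m

# Toy nonlinear least squares: fit amplitude and rate of an exponential decay
t = np.linspace(0, 3, 30)
F = lambda m: m[0] * np.exp(-m[1] * t)
J = lambda m: np.column_stack([np.exp(-m[1] * t), -m[0] * t * np.exp(-m[1] * t)])
m_true = np.array([2.0, 1.5])
m_est = levenberg_marquardt(F, J, F(m_true), np.array([1.0, 0.5]))
```

For each damping parameter only the right-hand side and the diagonal shift change, which is why recycling the Krylov subspace across damping values pays off.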
NASA Astrophysics Data System (ADS)
Jensen, Daniel; Wasserman, Adam; Baczewski, Andrew
The construction of approximations to the exchange-correlation potential for warm dense matter (WDM) is a topic of significant recent interest. In this work, we study the inverse problem of Kohn-Sham (KS) DFT as a means of guiding functional design at zero temperature and in WDM. Whereas the forward problem solves the KS equations to produce a density from a specified exchange-correlation potential, the inverse problem seeks to construct the exchange-correlation potential from specified densities. These two problems require different computational methods and convergence criteria despite sharing the same mathematical equations. We present two new inversion methods based on constrained variational and PDE-constrained optimization methods. We adapt these methods to finite temperature calculations to reveal the exchange-correlation potential's temperature dependence in WDM-relevant conditions. The different inversion methods presented are applied to both non-interacting and interacting model systems for comparison. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94.
NASA Astrophysics Data System (ADS)
Linde, N.; Vrugt, J. A.
2009-04-01
Geophysical models are increasingly used in hydrological simulations and inversions, where they are typically treated as an artificial data source with known uncorrelated "data errors". The model appraisal problem in classical deterministic linear and non-linear inversion approaches based on linearization is often addressed by calculating model resolution and model covariance matrices. These measures offer only a limited potential to assign a more appropriate "data covariance matrix" for future hydrological applications, simply because the regularization operators used to construct a stable inverse solution bear a strong imprint on such estimates and because the non-linearity of the geophysical inverse problem is not explored. We present a parallelized Markov Chain Monte Carlo (MCMC) scheme to efficiently derive the posterior spatially distributed radar slowness and water content between boreholes given first-arrival traveltimes. This method is called DiffeRential Evolution Adaptive Metropolis (DREAM_ZS) with snooker updater and sampling from past states. Our inverse scheme does not impose any smoothness on the final solution, and uses uniform prior ranges of the parameters. The posterior distribution of radar slowness is converted into spatially distributed soil moisture values using a petrophysical relationship. To benchmark the performance of DREAM_ZS, we first apply our inverse method to a synthetic two-dimensional infiltration experiment using 9421 traveltimes contaminated with Gaussian errors and 80 different model parameters, corresponding to a model discretization of 0.3 m × 0.3 m. After this, the method is applied to field data acquired in the vadose zone during snowmelt. This work demonstrates that fully non-linear stochastic inversion can be applied with few limiting assumptions to a range of common two-dimensional tomographic geophysical problems. 
The main advantage of DREAM_ZS is that it provides a full view of the posterior distribution of spatially distributed soil moisture, which is key to appropriately treat geophysical parameter uncertainty and infer hydrologic models.
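DREAM_ZS itself uses differential-evolution proposals, snooker updates, and sampling from past states; as a far simpler stand-in, the sketch below runs a plain random-walk Metropolis sampler on a two-parameter toy slowness inversion with a uniform box prior and no smoothness constraint. The ray matrix, noise level, and dimensions are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis(log_post, x0, n_steps, scale):
    """Plain random-walk Metropolis sampler."""
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n_steps, x.size))
    for t in range(n_steps):
        prop = x + scale * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain[t] = x
    return chain

# Toy crosshole setup: traveltimes t = L @ s for a 2-parameter slowness model
L_rays = np.array([[1.0, 0.5], [0.3, 1.2], [0.8, 0.8]])   # ray path lengths
s_true = np.array([0.4, 0.7])
t_obs = L_rays @ s_true
sigma = 0.01                                              # noise level

def log_post(s):
    if np.any(s < 0.0) or np.any(s > 2.0):                # uniform box prior
        return -np.inf
    resid = L_rays @ s - t_obs
    return -0.5 * np.sum(resid**2) / sigma**2

chain = metropolis(log_post, x0=[1.0, 1.0], n_steps=20000, scale=0.03)
post_mean = chain[5000:].mean(axis=0)
```

After burn-in, the chain concentrates around the true slowness, and the retained samples give the full posterior rather than a single regularized model.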
NASA Astrophysics Data System (ADS)
Fukahata, Y.; Wright, T. J.
2006-12-01
We developed a method of geodetic data inversion for slip distribution on a fault with an unknown dip angle. When fault geometry is unknown, the problem of geodetic data inversion is non-linear. A common strategy for obtaining slip distribution is to first determine the fault geometry by minimizing the square misfit under the assumption of a uniform slip on a rectangular fault, and then apply the usual linear inversion technique to estimate a slip distribution on the determined fault. It is not guaranteed, however, that the fault determined under the assumption of a uniform slip gives the best fault geometry for a spatially variable slip distribution. In addition, in obtaining a uniform slip fault model, we have to simultaneously determine the values of the nine mutually dependent parameters, which is a highly non-linear, complicated process. Although the inverse problem is non-linear for cases with unknown fault geometries, the non-linearity of the problem is actually weak when we can assume the fault surface to be flat. In particular, when a clear fault trace is observed on the Earth's surface after an earthquake, we can precisely estimate the strike and the location of the fault. In this case only the dip angle has large ambiguity. In geodetic data inversion we usually need to introduce smoothness constraints in order to reconcile, in a natural way, the competing requirements of model resolution and estimation error. Strictly speaking, the inverse problem with smoothness constraints is also non-linear, even if the fault geometry is known. This non-linearity has been resolved by introducing Akaike's Bayesian Information Criterion (ABIC), with which the optimal value of the relative weight of observed data to smoothness constraints is objectively determined. In this study, using ABIC in determining the optimal dip angle, we resolved the non-linearity of the inverse problem. 
We applied the method to the InSAR data of the 1995 Dinar, Turkey earthquake and obtained a much shallower dip angle than before.
The inverse electroencephalography pipeline
NASA Astrophysics Data System (ADS)
Weinstein, David Michael
The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.
An Inverse Problem for a Class of Conditional Probability Measure-Dependent Evolution Equations
Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.
2016-01-01
We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by Partial Differential Equation (PDE) models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach. PMID:28316360
NASA Technical Reports Server (NTRS)
Devasia, Santosh
1996-01-01
A technique to achieve output tracking for nonminimum phase linear systems with non-hyperbolic and near non-hyperbolic internal dynamics is presented. This approach integrates stable inversion techniques, which achieve exact tracking, with approximation techniques, which modify the internal dynamics to achieve desirable performance. Such modification of the internal dynamics is used (1) to remove non-hyperbolicity, which is an obstruction to applying stable inversion techniques, and (2) to reduce the large pre-actuation time needed to apply stable inversion in near non-hyperbolic cases. The method is applied to an example helicopter hover control problem with near non-hyperbolic internal dynamics, illustrating the trade-off between exact tracking and reduction of pre-actuation time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, conventional methods for inverse modeling can be computationally expensive because the number of measurements is often large and the model parameters are numerous. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
NASA Astrophysics Data System (ADS)
Hansen, T. M.; Cordua, K. S.
2017-12-01
Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
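The key step above, quantifying the surrogate's modeling error probabilistically and folding it into the likelihood, can be illustrated without training an actual network. In the made-up toy below, an analytic "fast" model stands in for the trained neural network and an analytic "accurate" model for the full-waveform simulator: the discrepancy between the two is sampled over the prior, summarized as a Gaussian, and absorbed into the data covariance.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Accurate" forward model (stands in for full-waveform modeling + picking)
def forward_accurate(m):
    return np.sin(m) + 0.05 * np.sin(5.0 * m)

# Fast approximate model (stands in for the trained neural network)
def forward_fast(m):
    return np.sin(m)

# Sample the modeling error over the prior and summarize it as a Gaussian
prior_samples = rng.uniform(-np.pi, np.pi, size=2000)
model_err = forward_accurate(prior_samples) - forward_fast(prior_samples)
mu_T, var_T = model_err.mean(), model_err.var()

sigma_d = 0.02                       # observational noise std (illustrative)

def log_like(m, d_obs):
    """Likelihood using the fast model, with the modeling error's mean and
    variance folded into the total data covariance."""
    resid = d_obs - (forward_fast(m) + mu_T)
    return -0.5 * resid**2 / (sigma_d**2 + var_T)

m_true = 0.7
d_obs = forward_accurate(m_true)
```

Because the inflated covariance accounts for the surrogate's systematic error, sampling with `log_like` remains consistent even though the fast model is biased.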
NASA Astrophysics Data System (ADS)
Voznyuk, I.; Litman, A.; Tortel, H.
2015-08-01
A Quasi-Newton method for reconstructing the constitutive parameters of three-dimensional (3D) penetrable scatterers from scattered field measurements is presented. This method is adapted for handling large-scale electromagnetic problems while keeping the memory requirement and the time flexibility as low as possible. The forward scattering problem is solved by applying the finite-element tearing and interconnecting full-dual-primal (FETI-FDP2) method, which shares the same spirit as domain decomposition methods for finite element methods. The idea is to split the computational domain into smaller non-overlapping subdomains in order to simultaneously solve local sub-problems. Various strategies are proposed in order to efficiently couple the inversion algorithm with the FETI-FDP2 method: a separation into permanent and non-permanent subdomains is performed, iterative solvers are favored for resolving the interface problem, and a marching-on-in-anything initial guess selection further accelerates the process. The computational burden is also reduced by applying the adjoint state vector methodology. Finally, the inversion algorithm is tested against measurements extracted from the 3D Fresnel database.
NASA Astrophysics Data System (ADS)
Codd, A. L.; Gross, L.
2018-03-01
We present a new inversion method for Electrical Resistivity Tomography which, in contrast to established approaches, minimizes the cost function prior to finite element discretization for the unknown electric conductivity and electric potential. Minimization is performed with the Broyden-Fletcher-Goldfarb-Shanno method (BFGS) in an appropriate function space. BFGS is self-preconditioning and avoids construction of the dense Hessian, which is the major obstacle to solving large 3-D problems using parallel computers. In addition to the forward problem predicting the measurement from the injected current, the so-called adjoint problem also needs to be solved. For this problem a virtual current is injected through the measurement electrodes and an adjoint electric potential is obtained. The magnitude of the injected virtual current is equal to the misfit at the measurement electrodes. This new approach has the advantage that the solution process of the optimization problem remains independent of the meshes used for discretization and allows for mesh adaptation during inversion. Computation time is reduced by using superposition of pole loads for the forward and adjoint problems. A smoothed aggregation algebraic multigrid (AMG) preconditioned conjugate gradient is applied to construct the potentials for a given electric conductivity estimate and for constructing a first level BFGS preconditioner. Through the additional reuse of AMG operators and coarse grid solvers, inversion time for large 3-D problems can be reduced further. We apply our new inversion method to synthetic survey data created from a resistivity profile representing the characteristics of subsurface fluid injection. We further test it on data obtained from a 2-D surface electrode survey on Heron Island, a small tropical island off the east coast of central Queensland, Australia.
Atmospheric inverse modeling via sparse reconstruction
NASA Astrophysics Data System (ADS)
Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten
2017-10-01
Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill-equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with a sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
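A standard workhorse for the sparsity-regularized least-squares problem this abstract describes is iterative soft-thresholding (ISTA). The sketch below is a generic l1-regularized solver on synthetic data, not the authors' dictionary-based implementation with bound constraints; the sensing matrix and the 3-sparse "emission" vector are invented for illustration.

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L      # gradient step on the misfit
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # l1 prox
    return x

# Synthetic scenario: a 3-sparse source vector seen through 60 measurements
rng = np.random.default_rng(2)
A = rng.standard_normal((60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[[7, 42, 73]] = [3.0, -2.0, 4.0]     # three localized "hot spots"
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = ista(A, b, lam=0.05)
```

Unlike a Gaussian (l2) prior, the l1 penalty drives most coefficients exactly to zero, so the localized sources are recovered rather than smeared out.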
Inversion of particle-size distribution from angular light-scattering data with genetic algorithms.
Ye, M; Wang, S; Lu, Y; Hu, T; Zhu, Z; Xu, Y
1999-04-20
A stochastic inverse technique based on a genetic algorithm (GA) to invert particle-size distribution from angular light-scattering data is developed. This inverse technique is independent of any given a priori information of particle-size distribution. Numerical tests show that this technique can be successfully applied to inverse problems with high stability in the presence of random noise and low susceptibility to the shape of distributions. It has also been shown that the GA-based inverse technique is more efficient in use of computing time than the inverse Monte Carlo method recently developed by Ligon et al. [Appl. Opt. 35, 4297 (1996)].
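A minimal real-coded genetic algorithm of the kind used in such inversions can be sketched as follows. This is not the authors' GA: the operators (tournament selection, arithmetic crossover, decaying Gaussian mutation, elitism) and the two-moment toy objective standing in for angular scattering data are illustrative choices of my own.

```python
import numpy as np

rng = np.random.default_rng(3)

def ga_minimize(f, bounds, pop=60, gens=150, mut=0.1):
    """Minimal real-coded genetic algorithm minimizing f over a box."""
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    best, best_f = None, np.inf
    for g in range(gens):
        fit = np.array([f(p) for p in P])
        if fit.min() < best_f:
            best_f, best = fit.min(), P[np.argmin(fit)].copy()
        # tournament selection of parents
        i, j = rng.integers(pop, size=(2, pop))
        parents = np.where((fit[i] < fit[j])[:, None], P[i], P[j])
        # arithmetic crossover with a randomly permuted partner
        w = rng.random((pop, 1))
        P = w * parents + (1 - w) * parents[rng.permutation(pop)]
        # Gaussian mutation, shrinking as generations pass
        scale = mut * (1.0 - g / gens) + 1e-3
        P = np.clip(P + scale * (hi - lo) * rng.standard_normal(P.shape),
                    lo, hi)
        P[0] = best                        # elitism: keep the best found
    return best

# Toy inversion: recover (mu, sigma) of a log-normal size distribution
# from two synthetic moments (a stand-in for scattering observables)
def moments(p):
    mu, sig = p
    return np.array([np.exp(mu + 0.5 * sig**2), np.exp(2*mu + 2*sig**2)])

true = np.array([1.2, 0.3])
d_obs = moments(true)
best = ga_minimize(lambda p: np.sum((moments(p) - d_obs)**2),
                   bounds=[(0.5, 2.0), (0.05, 1.0)])
```

No gradient or a priori distribution shape is needed, which is the property the abstract highlights; only forward evaluations of the misfit are used.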
Scaling of plane-wave functions in statistically optimized near-field acoustic holography.
Hald, Jørgen
2014-11-01
Statistically Optimized Near-field Acoustic Holography (SONAH) is a patch holography method, meaning that it can be applied in cases where the measurement area covers only part of the source surface. The method performs projections directly in the spatial domain, avoiding the use of spatial discrete Fourier transforms and the associated errors. First, an inverse problem is solved using regularization. For each calculation point a multiplication must then be performed with two transfer vectors: one to obtain the sound pressure and the other to obtain the particle velocity. Considering SONAH based on sound pressure measurements, existing derivations consider only pressure reconstruction when setting up the inverse problem, so the evanescent wave amplification associated with the calculation of particle velocity is not taken into account in the regularized solution of the inverse problem. The present paper introduces a scaling of the applied plane wave functions that takes the amplification into account, and it is shown that the previously published virtual source-plane retraction has almost the same effect. The effectiveness of the different solutions is verified through a set of simulated measurements.
NASA Astrophysics Data System (ADS)
He, Xingyu; Tong, Ningning; Hu, Xiaowei
2018-01-01
Compressive sensing has been successfully applied to inverse synthetic aperture radar (ISAR) imaging of moving targets. By exploiting the block sparse structure of the target image, sparse solution for multiple measurement vectors (MMV) can be applied in ISAR imaging and a substantial performance improvement can be achieved. As an effective sparse recovery method, sparse Bayesian learning (SBL) for MMV involves a matrix inverse at each iteration, and its associated computational complexity grows significantly with the problem size. To address this problem, we develop a fast inverse-free (IF) SBL method for MMV. A relaxed evidence lower bound (ELBO), which is computationally more amenable than the traditional ELBO used by SBL, is obtained by invoking a fundamental property of smooth functions. A variational expectation-maximization scheme is then employed to maximize the relaxed ELBO, and a computationally efficient IF-MSBL algorithm is proposed. Numerical results based on simulated and real data show that the proposed method can reconstruct row-sparse signals accurately and obtain clear superresolution ISAR images. Moreover, the running time and computational complexity are reduced to a great extent compared with traditional SBL methods.
NASA Astrophysics Data System (ADS)
Luo, Y.; Nissen-Meyer, T.; Morency, C.; Tromp, J.
2008-12-01
Seismic imaging in the exploration industry is often based upon ray-theoretical migration techniques (e.g., Kirchhoff) or other ideas which neglect some fraction of the seismic wavefield (e.g., wavefield continuation for acoustic-wave first arrivals) in the inversion process. In a companion paper we discuss the possibility of solving the full physical forward problem (i.e., including visco- and poroelastic, anisotropic media) using the spectral-element method. With such a tool at hand, we can readily apply the adjoint method to tomographic inversions, i.e., iteratively improving an initial 3D background model to fit the data. In the context of this inversion process, we draw connections between kernels in adjoint tomography and basic imaging principles in migration. We show that the images obtained by migration are nothing but particular kinds of adjoint kernels (mainly density kernels). Migration is basically a first step in the iterative inversion process of adjoint tomography. We apply the approach to basic 2D problems involving layered structures, overthrusting faults, topography, salt domes, and poroelastic regions.
Remarks on a financial inverse problem by means of Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Cuomo, Salvatore; Di Somma, Vittorio; Sica, Federica
2017-10-01
Estimating the price of a barrier option is a typical inverse problem. In this paper we present a numerical and statistical framework for a market with risk-free interest rate and a risk asset, described by a Geometric Brownian Motion (GBM). After approximating the risk asset with a numerical method, we find the final option price by following an approach based on sequential Monte Carlo methods. All theoretical results are applied to the case of an option whose underlying is a real stock.
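The pricing setup described here can be sketched with a plain (non-sequential) Monte Carlo estimator under GBM. The paper's sequential Monte Carlo machinery and its real-stock calibration are omitted; the contract is a down-and-out European call with discrete barrier monitoring, and all numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def barrier_call_mc(S0, K, B, r, sigma, T, n_paths=100_000, n_steps=200):
    """Monte Carlo price of a down-and-out European call under GBM,
    monitoring the barrier at each discrete time step."""
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        # exact GBM step under the risk-neutral measure
        S *= np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        alive &= S > B                      # knocked out once S touches B
    payoff = np.where(alive, np.maximum(S - K, 0.0), 0.0)
    return np.exp(-r * T) * payoff.mean()

price = barrier_call_mc(S0=100.0, K=100.0, B=80.0, r=0.05, sigma=0.2, T=1.0)
```

Because the knock-out feature can only remove payoff, the estimate is bounded above by the corresponding vanilla Black-Scholes call price (about 10.45 for these parameters).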
Joint Geophysical Inversion With Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelievre, P. G.; Bijani, R.; Farquharson, C. G.
2015-12-01
Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class are standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class of problems are also mesh-based but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods. This includes the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used but these can be ameliorated using parallelization and problem dimension reduction strategies.
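Extracting the Pareto-optimal subset from a set of candidate models is the core bookkeeping step in any PMOGO workflow. Below is a minimal non-dominated filter of my own, with a data-misfit versus model-norm trade-off standing in for the real geophysical objectives.

```python
import numpy as np

def pareto_front(F):
    """Row indices of F that are Pareto-optimal (every column minimized)."""
    n = F.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # i is dominated if some j is no worse everywhere and better somewhere
        dominated_by = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        keep[i] = not dominated_by.any()
    return np.flatnonzero(keep)

# Toy trade-off: data misfit (x-1)^2 vs. model-norm regularizer x^2;
# the Pareto set is exactly x in [0, 1]
x = np.linspace(0.0, 2.0, 201)
F = np.column_stack([(x - 1.0)**2, x**2])
front = pareto_front(F)
```

Returning the whole front, rather than the minimizer of one weighted sum, is precisely what lets the user assess the range of acceptable models instead of committing to a single objective weighting.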
Efficient Inversion of Multi-frequency and Multi-Source Electromagnetic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gary D. Egbert
2007-03-22
The project covered by this report focused on development of efficient but robust non-linear inversion algorithms for electromagnetic induction data, in particular for data collected with multiple receivers and multiple transmitters, a situation extremely common in geophysical EM subsurface imaging methods. A key observation is that for such multi-transmitter problems each step in commonly used linearized iterative limited memory search schemes such as conjugate gradients (CG) requires solution of forward and adjoint EM problems for each of the N frequencies or sources, essentially generating data sensitivities for an N dimensional data-subspace. These multiple sensitivities allow a good approximation to the full Jacobian of the data mapping to be built up in many fewer search steps than would be required by application of textbook optimization methods, which take no account of the multiplicity of forward problems that must be solved for each search step. We have applied this idea to develop a hybrid inversion scheme that combines features of the iterative limited memory type methods with a Newton-type approach using a partial calculation of the Jacobian. Initial tests on 2D problems show that the new approach produces results essentially identical to a Newton type Occam minimum structure inversion, while running more rapidly than an iterative (fixed regularization parameter) CG style inversion. Memory requirements, while greater than for something like CG, are modest enough that even in 3D the scheme should allow 3D inverse problems to be solved on a common desktop PC, at least for modest (~100 sites, 15-20 frequencies) data sets. A secondary focus of the research has been development of a modular system for EM inversion, using an object-oriented approach. 
This system has proven useful for more rapid prototyping of inversion algorithms, in particular allowing initial development and testing to be conducted with two-dimensional example problems before approaching more computationally cumbersome three-dimensional problems.
Variable-permittivity linear inverse problem for the H(sub z)-polarized case
NASA Technical Reports Server (NTRS)
Moghaddam, M.; Chew, W. C.
1993-01-01
The H(sub z)-polarized inverse problem has rarely been studied before due to the complicated way in which the unknown permittivity appears in the wave equation. This problem is equivalent to the acoustic inverse problem with variable density. We have recently reported the solution to the nonlinear variable-permittivity H(sub z)-polarized inverse problem using the Born iterative method. Here, the linear inverse problem is solved for permittivity (epsilon) and permeability (mu) using a different approach which is an extension of the basic ideas of diffraction tomography (DT). The key to solving this problem is to utilize frequency diversity to obtain the required independent measurements. The receivers are assumed to be in the far field of the object, and plane wave incidence is also assumed. It is assumed that the scatterer is weak, so that the Born approximation can be used to arrive at a relationship between the measured pressure field and two terms related to the spatial Fourier transform of the two unknowns, epsilon and mu. The term involving permeability corresponds to monopole scattering and that for permittivity to dipole scattering. Measurements at several frequencies are used and a least squares problem is solved to reconstruct epsilon and mu. It is observed that the low spatial frequencies in the spectra of epsilon and mu produce inaccuracies in the results. Hence, a regularization method is devised to remove this problem. Several results are shown. Low contrast objects for which the above analysis holds are used to show that good reconstructions are obtained for both permittivity and permeability after regularization is applied.
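The final reconstruction step, a regularized least-squares solve over measurements stacked across frequencies, can be sketched generically. This is not the paper's diffraction-tomography formulation: the random matrices below stand in for the frequency-dependent scattering operators relating the unknowns (epsilon and mu) to the measured fields, and the Tikhonov term plays the role of the regularization used to suppress the troublesome low spatial frequencies.

```python
import numpy as np

def regularized_lstsq(A_per_freq, d_per_freq, alpha):
    """Tikhonov solution of argmin sum_f ||A_f m - d_f||^2 + alpha*||m||^2,
    with the per-frequency systems stacked into one least-squares problem."""
    A = np.vstack(A_per_freq)
    d = np.concatenate(d_per_freq)
    n = A.shape[1]
    # normal equations with Tikhonov damping
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ d)

rng = np.random.default_rng(5)
m_true = rng.standard_normal(20)           # stand-in for the (eps, mu) unknowns
A_f = [rng.standard_normal((15, 20)) for _ in range(4)]   # 4 frequencies
d_f = [A @ m_true + 0.01 * rng.standard_normal(15) for A in A_f]
m_hat = regularized_lstsq(A_f, d_f, alpha=1e-3)
```

Stacking several frequencies turns an underdetermined single-frequency system into an overdetermined joint one, which is the frequency-diversity idea the abstract relies on.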
An accurate, fast, and scalable solver for high-frequency wave propagation
NASA Astrophysics Data System (ADS)
Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.
2017-12-01
In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems can be the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy is historically challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods, and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method relies on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which limits straightforward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages. 
We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and in parallel. We demonstrate that this produces an even more effective and parallelizable preconditioner for a single right-hand side. As before, additional speed can be gained by pipelining several right-hand sides.
NASA Astrophysics Data System (ADS)
Wang, Feiyan; Morten, Jan Petter; Spitzer, Klaus
2018-05-01
In this paper, we present a recently developed anisotropic 3-D inversion framework for interpreting controlled-source electromagnetic (CSEM) data in the frequency domain. The framework integrates a high-order finite-element forward operator and a Gauss-Newton inversion algorithm. Conductivity constraints are applied using a parameter transformation. We discretize the continuous forward and inverse problems on unstructured grids for a flexible treatment of arbitrarily complex geometries. Moreover, an unstructured mesh is more desirable than a single rectilinear mesh for multisource problems because local grid refinement will not significantly influence the mesh density outside the region of interest. The non-uniform spatial discretization facilitates parametrization of the inversion domain at a suitable scale. For a rapid simulation of multisource EM data, we opt to use a parallel direct solver. We further accelerate the inversion process by decomposing the entire data set into subsets with respect to frequencies (and transmitters, if memory requirements allow). The computational tasks associated with each data subset are distributed to different processes and run in parallel. We validate the scheme using a synthetic marine CSEM model with rough bathymetry, and finally, apply it to an industrial-size 3-D data set from the Troll field oil province in the North Sea acquired in 2008 to examine its robustness and practical applicability.
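The parameter transformation used to enforce conductivity bounds can be sketched with a logistic map (a minimal NumPy illustration; the logistic form and the bound values are assumptions, not necessarily the authors' exact choice):

```python
import numpy as np

def to_sigma(m, s_min, s_max):
    """Map an unconstrained parameter m to a conductivity in (s_min, s_max)."""
    return s_min + (s_max - s_min) / (1.0 + np.exp(-m))

def to_m(sigma, s_min, s_max):
    """Inverse map: conductivity back to the unconstrained parameter."""
    return np.log((sigma - s_min) / (s_max - sigma))

m = np.linspace(-10.0, 10.0, 5)          # unconstrained inversion parameters
sigma = to_sigma(m, 1e-4, 10.0)          # always strictly inside the bounds
m_back = to_m(sigma, 1e-4, 10.0)         # round trip recovers m
```

Because the Gauss-Newton updates act on the unconstrained parameter m, every iterate maps back to a conductivity strictly inside the bounds, so no explicit bound handling is needed during the model update.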
A note on convergence of solutions of total variation regularized linear inverse problems
NASA Astrophysics Data System (ADS)
Iglesias, José A.; Mercier, Gwenael; Scherzer, Otmar
2018-05-01
In a recent paper by Chambolle et al (2017 Inverse Problems 33 015002) it was proven that if the subgradient of the total variation at the noise free data is not empty, the level-sets of the total variation denoised solutions converge to the level-sets of the noise free data with respect to the Hausdorff distance. The condition on the subgradient corresponds to the source condition introduced by Burger and Osher (2007 Multiscale Model. Simul. 6 365–95), who proved convergence rates results with respect to the Bregman distance under this condition. We generalize the result of Chambolle et al to total variation regularization of general linear inverse problems under such a source condition. As particular applications we present denoising in bounded and unbounded, convex and nonconvex domains, deblurring and inversion of the circular Radon transform. In all these examples the convergence result applies. Moreover, we illustrate the convergence behavior through numerical examples.
Round-off errors in cutting plane algorithms based on the revised simplex procedure
NASA Technical Reports Server (NTRS)
Moore, J. E.
1973-01-01
This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 x 10^-12 is reasonable.
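The two procedures described, rounding tiny values to zero and improving an approximate inverse, can be sketched as follows (a NumPy illustration; the Newton-Schulz step is a standard reinversion technique and is an assumption here, since the report's exact procedure is not reproduced):

```python
import numpy as np

TOL = 1e-13  # tolerance: round computed values below this magnitude to exact zero

def clean(M, tol=TOL):
    """Zero out entries whose magnitude is below the tolerance."""
    M = M.copy()
    M[np.abs(M) < tol] = 0.0
    return M

def refine_inverse(A, X):
    """One Newton-Schulz step: improves an approximate inverse X of A."""
    I = np.eye(A.shape[0])
    return X @ (2.0 * I - A @ X)

A = np.array([[4.0, 1.0], [2.0, 3.0]])
X = np.linalg.inv(A) + 1e-6 * np.ones((2, 2))   # perturbed (approximate) inverse
X1 = clean(refine_inverse(A, X))                # one reinversion pass

err0 = np.linalg.norm(A @ X - np.eye(2))        # residual before refinement
err1 = np.linalg.norm(A @ X1 - np.eye(2))       # residual after refinement
```

One refinement pass roughly squares the residual of the approximate inverse, which mirrors the report's finding that a single application of the reinversion procedure per computed inverse suffices.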
Group-theoretic models of the inversion process in bacterial genomes.
Egri-Nagy, Attila; Gebhardt, Volker; Tanaka, Mark M; Francis, Andrew R
2014-07-01
The variation in genome arrangements among bacterial taxa is largely due to the process of inversion. Recent studies indicate that not all inversions are equally probable, suggesting, for instance, that shorter inversions are more frequent than longer ones, and that those that move the terminus of replication are less probable than those that do not. Current methods for establishing the inversion distance between two bacterial genomes are unable to incorporate such information. In this paper we suggest a group-theoretic framework that in principle can take these constraints into account. In particular, we show that by lifting the problem from circular permutations to the affine symmetric group, the inversion distance can be found in polynomial time for a model in which inversions are restricted to acting on two regions. This requires the proof of new results in group theory, and suggests a vein of new combinatorial problems concerning permutation groups on which group theorists will be needed to collaborate with biologists. We apply the new method to inferring distances and phylogenies for published Yersinia pestis data.
The inverse problem of the calculus of variations for discrete systems
NASA Astrophysics Data System (ADS)
Barbero-Liñán, María; Farré Puiggalí, Marta; Ferraro, Sebastián; Martín de Diego, David
2018-05-01
We develop a geometric version of the inverse problem of the calculus of variations for discrete mechanics and constrained discrete mechanics. The geometric approach consists of using suitable Lagrangian and isotropic submanifolds. We also provide a transition between the discrete and the continuous problems and propose variationality as an interesting geometric property to take into account in the design and computer simulation of numerical integrators for constrained systems. For instance, nonholonomic mechanics is generally not variational, but some special cases admit an alternative variational description. We apply some standard nonholonomic integrators to such an example to study which ones conserve this property.
Absolute calibration of the mass scale in the inverse problem of the physical theory of fireballs
NASA Astrophysics Data System (ADS)
Kalenichenko, V. V.
1992-08-01
A method of the absolute calibration of the mass scale is proposed for solving the inverse problem of the physical theory of fireballs. The method is based on data on the masses of fallen meteorites whose fireballs have been photographed in flight. The method can be applied to fireballs whose bodies have not experienced significant fragmentation during their flight in the atmosphere and have kept their shape relatively well. Data on the Lost City and Innisfree meteorites are used to calculate the calibration coefficients.
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.; Park, C.B.
2005-01-01
In a set of two papers we study the inverse problem of refraction travel times. The purpose of this work is to use the study as a basis for development of more sophisticated methods for finding more reliable solutions to the inverse problem of refraction travel times, which is known to be nonunique. The first paper, "Types of Geophysical Nonuniqueness through Minimization," emphasizes the existence of different forms of nonuniqueness in the realm of inverse geophysical problems. Each type of nonuniqueness requires a different type and amount of a priori information to acquire a reliable solution. Based on such coupling, a nonuniqueness classification is designed. Therefore, since most inverse geophysical problems are nonunique, each inverse problem must be studied to define what type of nonuniqueness it belongs to and thus determine what type of a priori information is necessary to find a realistic solution. The second paper, "Quantifying Refraction Nonuniqueness Using a Three-layer Model," serves as an example of such an approach. However, its main purpose is to provide a better understanding of the inverse refraction problem by studying the type of nonuniqueness it possesses. An approach for obtaining a realistic solution to the inverse refraction problem is planned to be offered in a third paper that is in preparation. The main goal of this paper is to redefine the existing generalized notion of nonuniqueness and a priori information by offering a classified, discriminating structure. Nonuniqueness is often encountered when trying to solve inverse problems. However, possible nonuniqueness diversity is typically neglected and nonuniqueness is regarded as a whole, as an unpleasant "black box", and is approached in the same manner by applying smoothing constraints, damping constraints with respect to the solution increment, and, rarely, damping constraints with respect to some sparse reference information about the true parameters.
In practice, when solving geophysical problems different types of nonuniqueness exist, and thus there are different ways to solve the problems. Nonuniqueness is usually regarded as due to data error, assuming the true geology is acceptably approximated by simple mathematical models. Compounding the nonlinear problems, geophysical applications routinely exhibit exact-data nonuniqueness even for models with very few parameters, adding to the nonuniqueness due to data error. While nonuniqueness variations have been defined earlier, they have not been linked to specific use of a priori information necessary to resolve each case. Four types of nonuniqueness, typical for minimization problems, are defined with the corresponding methods for inclusion of a priori information to find a realistic solution without resorting to a non-discriminative approach. The above-developed stand-alone classification is expected to be helpful when solving any geophysical inverse problems. © Birkhäuser Verlag, Basel, 2005.
On uncertainty quantification in hydrogeology and hydrogeophysics
NASA Astrophysics Data System (ADS)
Linde, Niklas; Ginsbourger, David; Irving, James; Nobile, Fabio; Doucet, Arnaud
2017-12-01
Recent advances in sensor technologies, field methodologies, numerical modeling, and inversion approaches have contributed to unprecedented imaging of hydrogeological properties and detailed predictions at multiple temporal and spatial scales. Nevertheless, imaging results and predictions will always remain imprecise, which calls for appropriate uncertainty quantification (UQ). In this paper, we outline selected methodological developments together with pioneering UQ applications in hydrogeology and hydrogeophysics. The applied mathematics and statistics literature is not easy to penetrate, and this review aims at helping hydrogeologists and hydrogeophysicists identify suitable UQ approaches that can be applied and further developed for their specific needs. To bypass the tremendous computational costs associated with forward UQ based on full-physics simulations, we discuss proxy-modeling strategies and multi-resolution (multilevel Monte Carlo) methods. We consider Bayesian inversion for non-linear and non-Gaussian state-space problems and discuss how Sequential Monte Carlo may become a practical alternative. We also describe strategies to account for forward modeling errors in Bayesian inversion. Finally, we consider hydrogeophysical inversion, where petrophysical uncertainty is often ignored leading to overconfident parameter estimation. The high parameter and data dimensions encountered in hydrogeological and geophysical problems make UQ a complicated and important challenge that has only been partially addressed to date.
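The multilevel Monte Carlo idea mentioned above can be sketched with a toy telescoping estimator (all models, levels and sample counts are hypothetical; real applications replace the quadrature levels with forward simulations of increasing resolution):

```python
import numpy as np

rng = np.random.default_rng(0)

def P(x, level):
    """Level-l approximation of 1 - cos(x): midpoint rule with 2**level cells."""
    n = 2 ** level
    h = x / n
    t = (np.arange(n) + 0.5) * h
    return np.sum(np.sin(t)) * h

def mlmc(L, N0=4000):
    """Multilevel Monte Carlo estimate of E[1 - cos(X)], X ~ U(0, 1).

    Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]; most samples
    are spent on the cheap coarse level, few on the expensive fine levels.
    """
    est = 0.0
    for level in range(L + 1):
        N = max(N0 // 2 ** level, 100)
        xs = rng.random(N)
        if level == 0:
            diffs = [P(x, 0) for x in xs]
        else:
            diffs = [P(x, level) - P(x, level - 1) for x in xs]
        est += np.mean(diffs)
    return est

estimate = mlmc(L=5)
exact = 1.0 - np.sin(1.0)   # E[1 - cos(X)] for X ~ U(0, 1)
```

The level corrections have rapidly shrinking variance, which is what lets the fine, expensive levels get by with few samples.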
A fast direct solver for boundary value problems on locally perturbed geometries
NASA Astrophysics Data System (ADS)
Zhang, Yabin; Gillman, Adrianna
2018-03-01
Many applications, including optimal design and adaptive discretization techniques, involve solving several boundary value problems on geometries that are local perturbations of an original geometry. This manuscript presents a fast direct solver for boundary value problems that are recast as boundary integral equations. The idea is to write the discretized boundary integral equation on a new geometry as a low rank update to the discretized problem on the original geometry. Using the Sherman-Morrison formula, the inverse can be expressed in terms of the inverse of the original system applied to the low rank factors and the right hand side. Numerical results illustrate that, for problems where the perturbation is localized, the fast direct solver is about three times faster than building a new solver from scratch.
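The core idea, expressing the inverse of the perturbed system through the inverse of the original one, can be sketched with the Sherman-Morrison-Woodbury identity (a dense NumPy toy; the real solver applies a fast direct factorization rather than an explicit inverse, and all sizes here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 5

A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned original system
U = rng.standard_normal((n, k))                    # low-rank perturbation factors
V = rng.standard_normal((n, k))
b = rng.standard_normal(n)

# Proxy for the precomputed fast direct solver on the original geometry.
A_inv = np.linalg.inv(A)

def solve_perturbed(A_inv, U, V, b):
    """Solve (A + U V^T) x = b via Sherman-Morrison-Woodbury,
    reusing only applications of the original inverse A_inv."""
    AinvU = A_inv @ U
    Ainvb = A_inv @ b
    S = np.eye(U.shape[1]) + V.T @ AinvU          # small k x k capacitance matrix
    return Ainvb - AinvU @ np.linalg.solve(S, V.T @ Ainvb)

x = solve_perturbed(A_inv, U, V, b)
residual = np.linalg.norm((A + U @ V.T) @ x - b)
```

Only the small k x k capacitance matrix is factorized anew; the applications of A_inv stand in for reusing the solver built for the original geometry.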
Data matching for free-surface multiple attenuation by multidimensional deconvolution
NASA Astrophysics Data System (ADS)
van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald
2012-09-01
A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removing multiples by multidimensional deconvolution (MDD) (inversion) does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially not at interpolated near-offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.
Computation of forces from deformed visco-elastic biological tissues
NASA Astrophysics Data System (ADS)
Muñoz, José J.; Amat, David; Conte, Vito
2018-04-01
We present a least-squares based inverse analysis of visco-elastic biological tissues. The proposed method computes the set of contractile forces (dipoles) at the cell boundaries that induce the observed and quantified deformations. We show that the computation of these forces requires the regularisation of the problem functional for some load configurations that we study here. The functional measures the error of the dynamic problem being discretised in time with a second-order implicit time-stepping and in space with standard finite elements. We analyse the uniqueness of the inverse problem and estimate the regularisation parameter by means of an L-curve criterion. We apply the methodology to a simple toy problem and to an in vivo set of morphogenetic deformations of the Drosophila embryo.
2D Inversion of Transient Electromagnetic Method (TEM)
NASA Astrophysics Data System (ADS)
Bortolozo, Cassiano Antonio; Luís Porsani, Jorge; Acácio Monteiro dos Santos, Fernando
2017-04-01
A new methodology was developed for 2D inversion of the Transient Electromagnetic Method (TEM). The methodology consists of a set of routines in Matlab code for modeling and inversion of TEM data and the determination of the most efficient field array for the problem. In this research, the 2D TEM modeling uses a finite-difference discretization. To solve the inverse problem, we applied an algorithm based on the Marquardt technique, also known as Ridge Regression. The algorithm is stable and efficient and is widely used in geoelectrical inversion problems. The main advantage of a 1D survey is rapid data acquisition over a large area, but in regions with two-dimensional structures, or where more detail is needed, it is essential to use two-dimensional interpretation methodologies. For efficient field acquisition we used the fixed-loop array in an innovative form, with a square transmitter loop (200m x 200m) and 25m spacing between the sounding points. The TEM surveys were conducted only inside the transmitter loop, in order to avoid negative apparent resistivity values. Although it is possible to model the negative values, they make the inversion convergence more difficult. The methodology described above was therefore developed to optimize data acquisition, since only one transmitter loop layout on the surface is needed for each series of soundings inside the loop. The algorithms were tested with synthetic data, and the results were essential to the interpretation of the real data and will be useful in future situations. A 2D TEM inversion of real data acquired over the Paraná Sedimentary Basin (PSB) was successfully carried out. The results indicate a robust geoelectrical characterization of the sedimentary and crystalline aquifers in the PSB.
Therefore, using a new and relevant approach for 2D TEM inversion, this research effectively contributed to mapping the most promising regions for groundwater exploration. In addition, new geophysical software was developed that can be applied as an important tool for many geological/hydrogeological applications and educational purposes.
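The Marquardt (ridge regression) iteration used for the inversion can be sketched on a toy forward model (the exponential model, the damping value, and the finite-difference Jacobian are illustrative assumptions; the actual code inverts 2D finite-difference TEM responses):

```python
import numpy as np

def forward(m, t):
    """Toy forward model standing in for the TEM response (an assumption)."""
    return m[0] * np.exp(-m[1] * t)

def jacobian(m, t, eps=1e-7):
    """Finite-difference Jacobian of the forward model."""
    J = np.zeros((t.size, m.size))
    f0 = forward(m, t)
    for j in range(m.size):
        dm = m.copy()
        dm[j] += eps
        J[:, j] = (forward(dm, t) - f0) / eps
    return J

def marquardt(d, t, m0, lam=1e-2, iters=50):
    """Marquardt / ridge-regression iterations: (J^T J + lam I) dm = J^T r."""
    m = m0.copy()
    for _ in range(iters):
        r = d - forward(m, t)
        J = jacobian(m, t)
        dm = np.linalg.solve(J.T @ J + lam * np.eye(m.size), J.T @ r)
        m = m + dm
    return m

t = np.linspace(0.0, 2.0, 40)
m_true = np.array([3.0, 1.5])
d = forward(m_true, t)                      # noise-free synthetic data
m_est = marquardt(d, t, m0=np.array([2.0, 1.0]))
```

The damping term lam stabilizes each linearized step, which is what makes the scheme robust for ill-conditioned geoelectrical problems.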
Individual differences in children's understanding of inversion and arithmetical skill.
Gilmore, Camilla K; Bryant, Peter
2006-06-01
Background and aims. In order to develop arithmetic expertise, children must understand arithmetic principles, such as the inverse relationship between addition and subtraction, in addition to learning calculation skills. We report two experiments that investigate children's understanding of the principle of inversion and the relationship between their conceptual understanding and arithmetical skills. A group of 127 children from primary schools took part in the study. The children were from two age groups (6-7 and 8-9 years). Children's accuracy on inverse and control problems in a variety of presentation formats and in canonical and non-canonical forms was measured. Tests of general arithmetic ability were also administered. Children consistently performed better on inverse than control problems, which indicates that they could make use of the inverse principle. Presentation format affected performance: picture presentation allowed children to apply their conceptual understanding flexibly regardless of the problem type, while word problems restricted their ability to use their conceptual knowledge. Cluster analyses revealed three subgroups with different profiles of conceptual understanding and arithmetical skill. Children in the 'high ability' and 'low ability' groups showed conceptual understanding that was in line with their arithmetical skill, whilst a third group of children had more advanced conceptual understanding than arithmetical skill. The three subgroups may represent different points along a single developmental path or distinct developmental paths. The discovery of these three groups has important consequences for education. It demonstrates the importance of considering the pattern of individual children's conceptual understanding and problem-solving skills.
Inversion for the driving forces of plate tectonics
NASA Technical Reports Server (NTRS)
Richardson, R. M.
1983-01-01
Inverse modeling techniques have been applied to the problem of determining the roles of various forces that may drive and resist plate tectonic motions. Separate linear inverse problems have been solved to find the best fitting pole of rotation for finite element grid point velocities and to find the best combination of force models to fit the observed relative plate velocities for the earth's twelve major plates using the generalized inverse operator. Variance-covariance data on plate motion have also been included. Results emphasize the relative importance of ridge push forces in the driving mechanism. Convergent margin forces are smaller by at least a factor of two, and perhaps by as much as a factor of twenty. Slab pull, apparently, is poorly transmitted to the surface plate as a driving force. Drag forces at the base of the plate are smaller than ridge push forces, although the sign of the force remains in question.
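The second linear inverse problem described above, finding the best combination of candidate force models to fit observed plate velocities, amounts to a generalized-inverse (least-squares) fit. A hypothetical toy version (the sizes, weights, and noise level are assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs = 36                           # observed plate-velocity components (toy size)

# Columns: predicted velocities from three candidate force models
# (stand-ins for e.g. ridge-push, slab-pull and basal-drag predictions).
G = rng.standard_normal((n_obs, 3))
c_true = np.array([1.0, 0.4, -0.2])  # assumed "true" relative force weights
v_obs = G @ c_true + 0.01 * rng.standard_normal(n_obs)   # noisy observations

# Generalized (Moore-Penrose) inverse solution for the force weights.
c_est, *_ = np.linalg.lstsq(G, v_obs, rcond=None)
```

With variance-covariance information on the observations, the plain least-squares solve would be replaced by a weighted one, but the structure of the problem is the same.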
Hesford, Andrew J.; Chew, Weng C.
2010-01-01
The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438
The 2-D magnetotelluric inverse problem solved with optimization
NASA Astrophysics Data System (ADS)
van Beusekom, Ashley E.; Parker, Robert L.; Bank, Randolph E.; Gill, Philip E.; Constable, Steven
2011-02-01
The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on using PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell's equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert penalizing size and roughness giving solutions that are similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.
Guidance of Nonlinear Nonminimum-Phase Dynamic Systems
NASA Technical Reports Server (NTRS)
Devasia, Santosh
1996-01-01
The research work has advanced the inversion-based guidance theory for: systems with non-hyperbolic internal dynamics; systems with parameter jumps; and systems where a redesign of the output trajectory is desired. A technique to achieve output tracking for nonminimum phase linear systems with non-hyperbolic and near non-hyperbolic internal dynamics was developed. This approach integrated stable inversion techniques, that achieve exact-tracking, with approximation techniques, that modify the internal dynamics to achieve desirable performance. Such modification of the internal dynamics was used (a) to remove non-hyperbolicity which is an obstruction to applying stable inversion techniques and (b) to reduce large preactuation times needed to apply stable inversion for near non-hyperbolic cases. The method was applied to an example helicopter hover control problem with near non-hyperbolic internal dynamics for illustrating the trade-off between exact tracking and reduction of preactuation time. Future work will extend these results to guidance of nonlinear non-hyperbolic systems. The exact output tracking problem for systems with parameter jumps was considered. Necessary and sufficient conditions were derived for the elimination of switching-introduced output transient. While previous works had studied this problem by developing a regulator that maintains exact tracking through parameter jumps (switches), such techniques are, however, only applicable to minimum-phase systems. In contrast, our approach is also applicable to nonminimum-phase systems and leads to bounded but possibly non-causal solutions. In addition, for the case when the reference trajectories are generated by an exosystem, we developed an exact-tracking controller which could be written in a feedback form. As in standard regulator theory, we also obtained a linear map from the states of the exosystem to the desired system state, which was defined via a matrix differential equation.
Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique
NASA Astrophysics Data System (ADS)
Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi
2013-09-01
According to the direct exposure measurements from flash radiographic images, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on a compressive sensing-based technique. There are three features in this solver: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capabilities. (2) The forward projection matrix rather than a Gauss matrix is constructed by the visualization tool generator. (3) The Fourier transform and the Daubechies wavelet transform are adopted to convert an underdetermined system to a well-posed system in the algorithm. Simulations are performed, and numerical results for the pseudo-sine absorption, two-cube, and two-cylinder problems when using the compressive sensing-based solver agree well with the reference values.
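The orthogonal matching pursuit step used for the sparse reconstruction can be sketched as follows (a generic NumPy version with a random toy measurement matrix; the solver's actual projection matrices and wavelet transforms are not reproduced here):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x with A x ~ y."""
    r = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))      # column most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef             # re-fit on the support, update residual
    x[support] = coef
    return x

rng = np.random.default_rng(3)
m, n, k = 50, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)     # toy underdetermined measurement matrix
x_true = np.zeros(n)
x_true[[5, 17, 42, 77]] = [1.0, -2.0, 1.5, 0.7]  # sparse ground truth
y = A @ x_true
x_rec = omp(A, y, k)
```

The greedy selection plus the least-squares re-fit on the accumulated support is what distinguishes OMP from plain matching pursuit.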
Recovery of time-dependent volatility in option pricing model
NASA Astrophysics Data System (ADS)
Deng, Zui-Cha; Hon, Y. C.; Isakov, V.
2016-11-01
In this paper we investigate an inverse problem of determining the time-dependent volatility from observed market prices of options with different strikes. Due to the nonlinearity and sparsity of observations, an analytical solution to the problem is generally not available. Numerical approximation is also difficult to obtain using most of the existing numerical algorithms. Based on our recent theoretical results, we apply the linearisation technique to convert the problem into an inverse source problem from which recovery of the unknown volatility function can be achieved. Two kinds of strategies, namely, the integral equation method and the Landweber iterations, are adopted to obtain the stable numerical solution to the inverse problem. Both theoretical analysis and numerical examples confirm that the proposed approaches are effective. The work described in this paper was partially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region (Project No. CityU 101112) and grants from the NNSF of China (Nos. 11261029, 11461039), and NSF grants DMS 10-08902 and 15-14886 and by the Emylou Keith and Betty Dutcher Distinguished Professorship at Wichita State University (USA).
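Of the two strategies, the Landweber iteration is easy to sketch for a generic linearized problem (a NumPy toy; the operator and step size are illustrative assumptions, not the paper's linearized pricing operator):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30
# Symmetric positive definite toy operator standing in for the linearised
# forward map (an assumption; the paper's operator is different).
W = rng.standard_normal((n, n))
A = W @ W.T / n + 0.5 * np.eye(n)
m_true = np.sin(np.linspace(0, np.pi, n))   # smooth "volatility-like" profile
d = A @ m_true                              # observed data

omega = 1.0 / np.linalg.norm(A, 2) ** 2     # step size below 2 / ||A||^2
m = np.zeros(n)
for _ in range(5000):
    m = m + omega * A.T @ (d - A @ m)       # Landweber iteration

rel_err = np.linalg.norm(m - m_true) / np.linalg.norm(m_true)
```

With noisy data the iteration count itself acts as the regularization parameter: one stops early, before the high-frequency noise components are amplified.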
NASA Astrophysics Data System (ADS)
Arsenault, Louis-François; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.
2017-11-01
We present a supervised machine learning approach to the inversion of Fredholm integrals of the first kind as they arise, for example, in the analytic continuation problem of quantum many-body physics. The approach provides a natural regularization for the ill-conditioned inverse of the Fredholm kernel, as well as an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. Applying machine learning to this database generates a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. Under standard error metrics the method performs as well or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We suggest that the methodology will be similarly effective for other problems involving a formally ill-conditioned inversion of an integral operator, provided that the forward problem can be efficiently solved.
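The workflow above, building a database from the stable forward problem and fitting a regression that acts as a regularized inverse, can be sketched with a plain ridge regression (a toy; the kernel, the input family and the regression model are assumptions, and the paper's constraint-projection step is omitted):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
s = np.linspace(0, 1, n)
t = np.linspace(0, 1, n)
K = np.exp(-5.0 * (s[:, None] - t[None, :]) ** 2) / n    # smooth Fredholm kernel

# "Physically meaningful" inputs (an assumption): smooth combinations of bumps.
centers = np.linspace(0.1, 0.9, 5)
B = np.exp(-((t[:, None] - centers[None, :]) / 0.15) ** 2)   # n x 5 smooth basis

def random_input():
    return B @ rng.uniform(0.0, 1.0, 5)

# Database: the stable forward problem maps inputs x to outputs y = K x.
X = np.array([random_input() for _ in range(2000)])      # inputs
Y = X @ K.T + 1e-4 * rng.standard_normal((2000, n))      # noisy outputs

# Ridge regression from outputs back to inputs: a regularised inverse map.
lam = 1e-6
Wmap = np.linalg.solve(Y.T @ Y + lam * np.eye(n), Y.T @ X)

x_new = random_input()                                   # previously unseen input
y_new = K @ x_new
x_rec = y_new @ Wmap                                     # regression-based inversion
rel_err = np.linalg.norm(x_rec - x_new) / np.linalg.norm(x_new)
```

The regularization comes entirely from the training database and the ridge penalty; the ill-conditioned kernel is never inverted directly.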
Structural damage identification using an enhanced thermal exchange optimization algorithm
NASA Astrophysics Data System (ADS)
Kaveh, A.; Dadras, A.
2018-03-01
The recently developed thermal exchange optimization (TEO) algorithm is enhanced and applied to a damage detection problem. An offline parameter tuning approach is utilized to set the internal parameters of the TEO, resulting in the enhanced thermal exchange optimization (ETEO) algorithm. The damage detection problem is defined as an inverse problem, and ETEO is applied to a wide range of structures. Several scenarios with noisy and noise-free modal data are tested, and the locations and extents of damage are identified with good accuracy.
An l1-TV algorithm for deconvolution with salt and pepper noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohlberg, Brendt; Rodriguez, Paul
2008-01-01
There has recently been considerable interest in applying Total Variation with an ℓ1 data fidelity term to the denoising of images subject to salt and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention, most probably because the most efficient algorithms for ℓ1-TV denoising cannot handle more general inverse problems. We apply the Iteratively Reweighted Norm algorithm to this problem, and compare performance with an alternative algorithm based on the Mumford-Shah functional.
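The ℓ1 data-fidelity idea behind the Iteratively Reweighted Norm approach can be sketched with an iteratively reweighted least-squares fit (a simplified toy without the TV term or the convolution operator; the model and outlier settings are assumptions):

```python
import numpy as np

def irls_l1(A, y, iters=100, eps=1e-6):
    """Iteratively reweighted least squares for min_x ||A x - y||_1:
    each pass solves a weighted normal equation with weights 1/|residual|."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(iters):
        r = A @ x - y
        w = 1.0 / np.maximum(np.abs(r), eps)   # large residuals get small weight
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ y)
    return x

rng = np.random.default_rng(6)
n = 200
t = np.linspace(0, 1, n)
A = np.column_stack([np.ones(n), t])           # straight-line model
y = 2.0 + 3.0 * t
salt = rng.choice(n, size=20, replace=False)   # salt-and-pepper-style outliers
y[salt] = rng.choice([-10.0, 10.0], size=20)

x_l1 = irls_l1(A, y)                           # robust l1 fit
x_l2 = np.linalg.lstsq(A, y, rcond=None)[0]    # ordinary least squares, for contrast
```

The ℓ1 fit essentially ignores the impulsive outliers, which is exactly why the ℓ1 fidelity term suits salt and pepper noise.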
Variational approach to direct and inverse problems of atmospheric pollution studies
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey
2016-04-01
We present the development of a variational approach for solving interrelated problems of atmospheric hydrodynamics and chemistry concerning air pollution transport and transformations. The proposed approach allows us to carry out complex studies of different-scale physical and chemical processes using the methods of direct and inverse modeling [1-3]. We formulate the problems of risk/vulnerability and uncertainty assessment, sensitivity studies, variational data assimilation procedures [4], etc. A computational technology of constructing consistent mathematical models and methods of their numerical implementation is based on the variational principle in the weak constraint formulation, specifically designed to account for uncertainties in models and observations. Algorithms for direct and inverse modeling are designed with the use of global and local adjoint problems. Implementing the idea of adjoint integrating factors provides unconditionally monotone and stable discrete-analytic approximations for convection-diffusion-reaction problems [5,6]. The general framework is applied to the direct and inverse problems for the models of transport and transformation of pollutants in Siberian and Arctic regions. The work has been partially supported by the RFBR grant 14-01-00125 and RAS Presidium Program I.33P. References: 1. V. Penenko, A. Baklanov, E. Tsvetova and A. Mahura. Direct and inverse problems in a variational concept of environmental modeling // Pure and Applied Geophysics (2012), v. 169: 447-465. 2. V.V. Penenko, E.A. Tsvetova, and A.V. Penenko. Development of variational approach for direct and inverse problems of atmospheric hydrodynamics and chemistry // Izvestiya, Atmospheric and Oceanic Physics, 2015, Vol. 51, No. 3, pp. 311-319, DOI: 10.1134/S0001433815030093. 3. V.V. Penenko, E.A. Tsvetova, A.V. Penenko. Methods based on the joint use of models and observational data in the framework of variational approach to forecasting weather and atmospheric composition quality // Russian Meteorology and Hydrology, V. 40, Issue 6, pp. 365-373, DOI: 10.3103/S1068373915060023. 4. A.V. Penenko and V.V. Penenko. Direct data assimilation method for convection-diffusion models based on splitting scheme // Computational Technologies, 2014, 19(4): 69-83. 5. V.V. Penenko, E.A. Tsvetova, A.V. Penenko. Variational approach and Euler's integrating factors for environmental studies // Computers and Mathematics with Applications, 2014, V. 67, Issue 12, pp. 2240-2256, DOI: 10.1016/j.camwa.2014.04.004. 6. V.V. Penenko, E.A. Tsvetova. Variational methods of constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013, V. 6, Issue 3, pp. 210-220, DOI: 10.1134/S199542391303004X.
A general approach to regularizing inverse problems with regional data using Slepian wavelets
NASA Astrophysics Data System (ADS)
Michel, Volker; Simons, Frederik J.
2017-12-01
Slepian functions are orthogonal function systems that live on subdomains (for example, geographical regions on the Earth’s surface, or bandlimited portions of the entire spectrum). They have been firmly established as a useful tool for the synthesis and analysis of localized (concentrated or confined) signals, and for the modeling and inversion of noise-contaminated data that are only regionally available or only of regional interest. In this paper, we consider a general abstract setup for inverse problems represented by a linear and compact operator between Hilbert spaces with a known singular-value decomposition (svd). In practice, such an svd is often only given for the case of a global expansion of the data (e.g. on the whole sphere) but not for regional data distributions. We show that, in either case, Slepian functions (associated with an arbitrarily prescribed region and the given compact operator) can be determined and applied to construct a regularization for the ill-posed regional inverse problem. Moreover, we describe an algorithm for constructing the Slepian basis via an algebraic eigenvalue problem. The obtained Slepian functions can be used to derive an svd for the combination of the regionalizing projection and the compact operator. As a result, standard regularization techniques relying on a known svd become applicable to those inverse problems where the data are given only regionally. In particular, wavelet-based multiscale techniques can be used. The latter case is elaborated theoretically and tested on two synthetic numerical examples.
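The algebraic eigenvalue problem behind the Slepian basis can be sketched in a few lines. The cosine basis, the interval region, and the sizes below are illustrative assumptions for a 1D toy domain, not taken from the paper:

```python
import numpy as np

n, k = 512, 24
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = 1.0 / n

# Global orthonormal bandlimited basis: first k cosine modes on [0, 1]
B = np.ones((n, k))
for j in range(1, k):
    B[:, j] = np.sqrt(2.0) * np.cos(np.pi * j * x)

region = (x >= 0.2) & (x <= 0.5)              # subdomain R of interest

# Localization matrix K_ij = integral over R of b_i * b_j
K = (B[region].T @ B[region]) * dx

# Algebraic eigenvalue problem: each eigenvalue is the energy
# concentration of the corresponding Slepian function inside R
lam, V = np.linalg.eigh(K)
lam, V = lam[::-1], V[:, ::-1]                # best-concentrated modes first
slepian = B @ V                               # Slepian functions on the grid
```

The well-concentrated leading modes (eigenvalues near 1) form the regional basis in which standard svd-based regularization can then be applied.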
NASA Astrophysics Data System (ADS)
Wang, Jun; Wang, Yang; Zeng, Hui
2016-01-01
A key issue to address in synthesizing spatial data with variable support in spatial analysis and modeling is the change-of-support problem. We present an approach for solving the change-of-support and variable-support data fusion problems. This approach is based on geostatistical inverse modeling that explicitly accounts for differences in spatial support. The inverse model is applied here to produce both the best predictions on a target support and the prediction uncertainties, based on one or more measurements, while honoring those measurements. Spatial data covering large geographic areas often exhibit spatial nonstationarity, and their large size can pose computational challenges. We developed a local-window geostatistical inverse modeling approach to accommodate spatial nonstationarity and alleviate the computational burden. We conducted experiments using synthetic and real-world raster data. Synthetic data were generated, aggregated to multiple supports, and downscaled back to the original support to analyze the accuracy of spatial predictions and the correctness of prediction uncertainties. Similar experiments were conducted for real-world raster data. Real-world data with variable support were statistically fused to produce single-support predictions and associated uncertainties. The modeling results demonstrate that geostatistical inverse modeling can produce accurate predictions and associated prediction uncertainties. It is shown that the suggested local-window geostatistical inverse modeling approach offers a practical way to solve the well-known change-of-support problem and the variable-support data fusion problem in spatial analysis and modeling.
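The core downscaling step of such a geostatistical inverse can be sketched on a hypothetical 1D grid, assuming an exponential prior covariance and block-average coarse supports (all values below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, block = 60, 10                        # fine cells; fine cells per coarse support
x = np.arange(n, dtype=float)

# Prior covariance on the fine support: exponential model with range 8 cells
Q = np.exp(-np.abs(x[:, None] - x[None, :]) / 8.0)

# H maps the fine support to coarse supports by block averaging
m = n // block
H = np.zeros((m, n))
for i in range(m):
    H[i, i * block:(i + 1) * block] = 1.0 / block

z_true = np.linalg.cholesky(Q + 1e-10 * np.eye(n)) @ rng.standard_normal(n)
R = 1e-6 * np.eye(m)                     # small measurement-error covariance
y = H @ z_true + 1e-3 * rng.standard_normal(m)

# Geostatistical inverse (simple kriging form, zero prior mean):
# downscale the coarse observations y back to the fine support
G = Q @ H.T @ np.linalg.inv(H @ Q @ H.T + R)
z_hat = G @ y                            # best linear prediction per fine cell
var = np.diag(Q - G @ H @ Q)             # prediction uncertainty per fine cell
```

With a small error covariance R the fine-scale predictions honor the coarse measurements when re-aggregated, which is the defining property mentioned in the abstract.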
Toward 2D and 3D imaging of magnetic nanoparticles using EPR measurements.
Coene, A; Crevecoeur, G; Leliaert, J; Dupré, L
2015-09-01
Magnetic nanoparticles (MNPs) are an important asset in many biomedical applications. An effective working of these applications requires an accurate knowledge of the spatial MNP distribution. A promising, noninvasive, and sensitive technique to visualize MNP distributions in vivo is electron paramagnetic resonance (EPR). Currently only 1D MNP distributions can be reconstructed. In this paper, the authors propose extending 1D EPR toward 2D and 3D using computer simulations to allow accurate imaging of MNP distributions. To find the MNP distribution belonging to EPR measurements, an inverse problem needs to be solved. The solution of this inverse problem highly depends on the stability of the inverse problem. The authors adapt 1D EPR imaging to realize the imaging of multidimensional MNP distributions. Furthermore, the authors introduce partial volume excitation in which only parts of the volume are imaged to increase stability of the inverse solution and to speed up the measurements. The authors simulate EPR measurements of different 2D and 3D MNP distributions and solve the inverse problem. The stability is evaluated by calculating the condition measure and by comparing the actual MNP distribution to the reconstructed MNP distribution. Based on these simulations, the authors define requirements for the EPR system to cope with the added dimensions. Moreover, the authors investigate how EPR measurements should be conducted to improve the stability of the associated inverse problem and to increase reconstruction quality. The approach used in 1D EPR can only be employed for the reconstruction of small volumes in 2D and 3D EPRs due to numerical instability of the inverse solution. The authors performed EPR measurements of increasing cylindrical volumes and evaluated the condition measure. This showed that a reduction of the inherent symmetry in the EPR methodology is necessary. 
By reducing the symmetry of the EPR setup, quantitative images of larger volumes can be obtained. The authors found that, by selectively exciting parts of the volume, they could increase the reconstruction quality even further while reducing the number of measurements. Additionally, the inverse solution of this activation method degrades more slowly for increasing volumes. Finally, the methodology was applied to noisy EPR measurements: using the reduced symmetry of the EPR setup and the partial activation method, an increase in reconstruction quality of ≈ 80% can be seen, together with a 10% speedup of the measurements. Applying the aforementioned requirements to the EPR setup and stabilizing the EPR measurements showed a tremendous increase in noise robustness, thereby making EPR a valuable method for quantitative imaging of multidimensional MNP distributions.
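The effect of setup symmetry on the condition measure can be illustrated with a toy sensitivity matrix. The Gaussian kernel, voxel grid, and offset below are invented stand-ins, not the authors' EPR model; the point is only that a setup sensing |position| makes mirrored voxels indistinguishable, and that stacking a second, offset measurement set repairs the conditioning:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 16)                # voxel positions along the sample
b = np.linspace(0.0, 1.5, 16)                 # scanned resonance offsets

def kernel(centers, offset):
    """Gaussian stand-in for per-voxel sensitivity; with offset 0 the setup
    only senses |x|, so mirrored voxels produce identical columns."""
    return np.exp(-((centers[:, None] - np.abs(x[None, :] - offset)) / 0.25) ** 2)

A_sym = kernel(b, 0.0)                                # fully symmetric setup
A_red = np.vstack([kernel(b, 0.0), kernel(b, 0.3)])   # symmetry-reduced setup

cond_sym = np.linalg.cond(A_sym)              # near-singular: mirror ambiguity
cond_red = np.linalg.cond(A_red)              # dramatically better conditioned
```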
NASA Astrophysics Data System (ADS)
Kountouris, Panagiotis; Gerbig, Christoph; Rödenbeck, Christian; Karstens, Ute; Koch, Thomas Frank; Heimann, Martin
2018-03-01
Atmospheric inversions are widely used in the optimization of surface carbon fluxes on a regional scale using information from atmospheric CO2 dry mole fractions. In many studies the prior flux uncertainty applied to the inversion schemes does not directly reflect the true flux uncertainties but is used to regularize the inverse problem. Here, we aim to implement an inversion scheme using the Jena inversion system and applying a prior flux error structure derived from a model-data residual analysis at high spatial and temporal resolution over a full year in the European domain. We analyzed the performance of the inversion system with a synthetic experiment, in which the flux constraint is derived following the same residual analysis but applied to the model-model mismatch. The synthetic study showed quite good agreement between posterior and true fluxes on European, country, annual and monthly scales. Posterior monthly, country-aggregated fluxes improved their correlation coefficient with the known truth by 7 % compared to the prior estimates, with a mean correlation of 0.92. The SD ratio between the posterior and the reference, relative to that between the prior and the reference, was also reduced by 33 %, with a mean value of 1.15. We identified the temporal and spatial scales on which the inversion system maximizes the derived information; monthly temporal scales at around 200 km spatial resolution seem to maximize the information gain.
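The two verification metrics quoted above (correlation with the known truth and the SD ratio against the reference) are straightforward to reproduce; the synthetic flux series below is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120                                            # e.g. country x month aggregates
truth = np.sin(np.linspace(0.0, 6.0 * np.pi, n))   # reference ("true") fluxes
prior = truth + 0.8 * rng.standard_normal(n)       # loosely constrained prior
posterior = truth + 0.2 * rng.standard_normal(n)   # fluxes after the inversion

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

r_prior, r_post = corr(prior, truth), corr(posterior, truth)
sd_prior = float(np.std(prior) / np.std(truth))    # SD ratio vs. reference
sd_post = float(np.std(posterior) / np.std(truth)) # ideal value: 1.0
```

A successful inversion should raise the correlation and pull the SD ratio toward 1, which is exactly the pattern reported in the abstract.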
Inverse problems in 1D hemodynamics on systemic networks: a sequential approach.
Lombardi, D
2014-02-01
In this work, a sequential approach based on the unscented Kalman filter is applied to solve inverse problems in 1D hemodynamics on a systemic network. In particular, the arterial stiffness is estimated by exploiting cross-sectional area and mean speed observations in several locations of the arteries. The results are compared with those obtained by estimating the pulse wave velocity and applying the Moens-Korteweg formula. In the last section, a perspective concerning the identification of the terminal model parameters describing the peripheral circulation (modeled by a Windkessel circuit) is presented. Copyright © 2013 John Wiley & Sons, Ltd.
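For reference, the Moens-Korteweg relation mentioned above links pulse wave velocity to arterial stiffness and can be inverted directly; the vessel parameters below are hypothetical illustrative values, not from the study:

```python
import math

def moens_korteweg_pwv(E, h, rho, r):
    """Pulse wave velocity c = sqrt(E*h / (2*rho*r)) for a thin-walled vessel."""
    return math.sqrt(E * h / (2.0 * rho * r))

def stiffness_from_pwv(c, h, rho, r):
    """Inverted formula: Young's modulus from an observed pulse wave velocity."""
    return 2.0 * rho * r * c ** 2 / h

# Hypothetical aortic values: E = 0.4 MPa, wall 1.5 mm, radius 10 mm,
# blood density 1060 kg/m^3
c = moens_korteweg_pwv(4.0e5, 1.5e-3, 1060.0, 1.0e-2)
E = stiffness_from_pwv(c, 1.5e-3, 1060.0, 1.0e-2)
```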
Pseudo 2D elastic waveform inversion for attenuation in the near surface
NASA Astrophysics Data System (ADS)
Wang, Yue; Zhang, Jie
2017-08-01
Seismic waveform propagation can be significantly affected by heterogeneities in the near-surface zone (0 m-500 m depth). As a result, it is important to obtain as much near-surface information as possible. Seismic attenuation, characterized by the QP and QS factors, may affect seismic waveforms in both phase and amplitude; however, it is rarely estimated and applied to the near-surface zone in seismic data processing. Applying a 1D elastic full waveform modelling program, we demonstrate that such effects cannot be overlooked in the waveform computation if the value of the Q factor is lower than approximately 100. Further, we develop a pseudo 2D elastic waveform inversion method in the common midpoint (CMP) domain that jointly inverts early arrivals for QP and surface waves for QS. In this method, although the forward problem is 1D, by applying 2D model regularization we obtain 2D QP and QS models through simultaneous inversion. A cross-gradient constraint between the QP and QS models is applied to ensure structural consistency of the 2D inversion results. We present synthetic examples and a real case study from an oil field in China.
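The cross-gradient constraint used for structural consistency can be sketched directly; the layered toy models below are illustrative, not from the study:

```python
import numpy as np

def cross_gradient(m1, m2, dz=1.0, dx=1.0):
    """t = dm1/dx * dm2/dz - dm1/dz * dm2/dx; zero wherever the two models
    have parallel gradients, i.e. share structure."""
    g1z, g1x = np.gradient(m1, dz, dx)
    g2z, g2x = np.gradient(m2, dz, dx)
    return g1x * g2z - g1z * g2x

zz, xx = np.mgrid[0:32, 0:32]
qp = 50.0 + 1.0 * zz               # layered model (depth axis = rows)
qs_consistent = 20.0 + 0.5 * zz    # same layering: structurally consistent
qs_oblique = 20.0 + 0.5 * xx       # structure rotated 90 degrees

t_con = cross_gradient(qp, qs_consistent)   # vanishes everywhere
t_obl = cross_gradient(qp, qs_oblique)      # large: structures disagree
```

Penalizing the norm of this quantity during joint inversion drives the two models toward shared structure without forcing their values to correlate.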
Preview-Based Stable-Inversion for Output Tracking
NASA Technical Reports Server (NTRS)
Zou, Qing-Ze; Devasia, Santosh
1999-01-01
Stable Inversion techniques can be used to achieve high-accuracy output tracking. However, for nonminimum phase systems, the inverse is non-causal - hence the inverse has to be pre-computed using a pre-specified desired-output trajectory. This requirement for pre-specification of the desired output restricts the use of inversion-based approaches to trajectory planning problems (for nonminimum phase systems). In the present article, it is shown that preview information of the desired output can be used to achieve online inversion-based output tracking of linear systems. The amount of preview-time needed is quantified in terms of the tracking error and the internal dynamics of the system (zeros of the system). The methodology is applied to the online output tracking of a flexible structure and experimental results are presented.
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
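A much simplified (mu + lambda) evolution strategy, standing in for the paper's HEOA (the locally weighted regression step is omitted), can illustrate the retrieval of lognormal PSD parameters from a target curve; the data here are synthetic, not a Mie-generated pattern:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = np.linspace(46.0, 150.0, 200)          # modal size-parameter range

def lognormal_psd(a, mu, sig):
    return (np.exp(-((np.log(a) - mu) ** 2) / (2.0 * sig ** 2))
            / (a * sig * np.sqrt(2.0 * np.pi)))

mu_true, sig_true = np.log(90.0), 0.20
data = lognormal_psd(alpha, mu_true, sig_true)  # stand-in for the inverted pattern

def objective(p):
    return float(np.sum((lognormal_psd(alpha, p[0], p[1]) - data) ** 2))

# (mu + lambda) evolution strategy with elitist selection and step annealing
parents, step = [np.array([np.log(70.0), 0.5])], 0.3
for _ in range(80):
    children = []
    for p in parents:
        for _ in range(10):
            c = p + step * rng.standard_normal(2)
            c[1] = max(c[1], 1e-3)              # keep the width parameter positive
            children.append(c)
    pool = parents + children
    pool.sort(key=objective)
    parents = pool[:3]                          # elitist (mu + lambda) selection
    step *= 0.95
best = parents[0]
```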
The inverse problems of wing panel manufacture processes
NASA Astrophysics Data System (ADS)
Oleinikov, A. I.; Bormotin, K. S.
2013-12-01
It is shown that inverse problems of steady-state creep bending of plates in both the geometrically linear and nonlinear formulations can be represented in a variational formulation. Steady-state values of the obtained functionals corresponding to the solutions of the problems of inelastic deformation and springback are determined by applying a finite element procedure to the functionals. Optimal laws of creep deformation are formulated using the criterion of minimizing damage in the functionals of the inverse problems. The formulated problems are reduced to problems solved by the finite element method using MSC.Marc software. Currently, the forming of light metals poses tremendous challenges due to their low ductility at room temperature and their unusual deformation characteristics under hot-cold working: a strong asymmetry between tensile and compressive behavior, and a very pronounced anisotropy. We used constitutive models of steady-state creep for structural materials that are initially transversely isotropic and whose behavior depends on the kind of stress state. The paper gives the basics of the developed computer-aided system of design, modeling, and electronic simulation targeting the processes of manufacture of integral wing panels. The modeling results can be used to calculate the die tooling, determine the panel processability, and control panel rejection in the course of forming.
An alternative empirical likelihood method in missing response problems and causal inference.
Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao
2016-11-30
Missing responses are common problems in medical, social, and economic studies. When responses are missing at random, a complete-case data analysis may result in biases. A popular method to correct this bias is inverse probability weighting, proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and the propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to that of Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied to the estimation of the average treatment effect in observational causal inference. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial. Copyright © 2016 John Wiley & Sons, Ltd.
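The complete-case, Horvitz-Thompson IPW, and augmented IPW estimators referenced above can be compared on simulated missing-at-random data; the data-generating model below is invented for illustration, with both working models taken as correctly specified:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.standard_normal(n)
y_full = 2.0 + 1.5 * x + rng.standard_normal(n)   # outcomes; true mean E[Y] = 2
e = 1.0 / (1.0 + np.exp(-(0.5 + x)))              # propensity of response (MAR)
r = rng.random(n) < e                              # response indicator
y_obs = np.where(r, y_full, 0.0)                   # unobserved values never used

# Working models, here taken as correctly specified for the sketch
e_hat = e                                          # propensity score model
m_hat = 2.0 + 1.5 * x                              # outcome regression model

mu_cc = y_full[r].mean()                           # complete-case mean (biased)
mu_ipw = np.mean(r * y_obs / e_hat)                # Horvitz-Thompson IPW
mu_aipw = np.mean(r * y_obs / e_hat - (r - e_hat) / e_hat * m_hat)  # augmented IPW
```

Because response depends on x, the complete-case mean is biased away from 2, while both weighted estimators recover it; the augmentation term reduces the variance relative to plain IPW.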
An Inverse Problem Formulation Methodology for Stochastic Models
2010-05-02
from the surveillance data. Infection control measures were implemented in the form of health care worker hand-hygiene before and after patient contact...manuscript derives from our interest in understanding the spread of infectious diseases, in particular nosocomial infections, in order to prevent major...given by the inverse of the parameter of the exponential distribution. A hand-hygiene policy applied to health care workers on isolated VRE colonized
Computational analysis for biodegradation of exogenously depolymerizable polymer
NASA Astrophysics Data System (ADS)
Watanabe, M.; Kawai, F.
2018-03-01
This study shows that microbial growth and decay in a biodegradation process of exogenously depolymerizable polymer are controlled by consumption of monomer units. Experimental outcomes for residual polymer were incorporated in inverse analysis for a degradation rate. The Gauss-Newton method was applied to an inverse problem for two parameter values associated with the microbial population. A biodegradation process of polyethylene glycol was analyzed numerically, and numerical outcomes were obtained.
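A Gauss-Newton iteration for a two-parameter inverse problem can be sketched as follows; the exponential decay model and the numbers are illustrative stand-ins, not the paper's biodegradation model:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 25)
rng = np.random.default_rng(4)
a_true, k_true = 100.0, 0.35
data = a_true * np.exp(-k_true * t) + 0.5 * rng.standard_normal(t.size)

def model(a, k):
    return a * np.exp(-k * t)

def jacobian(a, k):
    e = np.exp(-k * t)
    return np.column_stack([e, -a * t * e])     # d(model)/da, d(model)/dk

p = np.array([80.0, 0.2])                        # initial guess
for _ in range(20):
    r = model(*p) - data                         # residual vector
    J = jacobian(*p)
    p = p - np.linalg.solve(J.T @ J, J.T @ r)    # Gauss-Newton update
```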
NASA Astrophysics Data System (ADS)
Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.
2015-12-01
Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for involved error statistics and model parameterizations, and, in turn, allow more rigorous estimations of the same. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia including east China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. As for the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partition. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in northeast Asia.
2D Seismic Imaging of Elastic Parameters by Frequency Domain Full Waveform Inversion
NASA Astrophysics Data System (ADS)
Brossier, R.; Virieux, J.; Operto, S.
2008-12-01
Thanks to recent advances in parallel computing, full waveform inversion is today a tractable seismic imaging method to reconstruct physical parameters of the Earth's interior at different scales, ranging from the near-surface to the deep crust. We present a massively parallel 2D frequency-domain full-waveform algorithm for imaging visco-elastic media from multi-component seismic data. The forward problem (i.e. the solution of the frequency-domain 2D PSV elastodynamics equations) is based on a low-order Discontinuous Galerkin (DG) method (P0 and/or P1 interpolations). Thanks to triangular unstructured meshes, the DG method allows accurate modeling of both body waves and surface waves in the case of complex topography for a discretization of 10 to 15 cells per shear wavelength. The frequency-domain DG system is solved efficiently for multiple sources with the parallel direct solver MUMPS. The local inversion procedure (i.e. minimization of residuals between observed and computed data) is based on the adjoint-state method, which allows efficient computation of the gradient of the objective function. Applying the inversion hierarchically from the low frequencies to the higher ones defines a multiresolution imaging strategy which helps convergence towards the global minimum. In place of the expensive Newton algorithm, the combined use of the diagonal terms of the approximate Hessian matrix and optimization algorithms based on quasi-Newton methods (Conjugate Gradient, LBFGS, ...) improves the convergence of the iterative inversion. The distribution of forward-problem solutions over processors, driven by a mesh partitioning performed by METIS, allows most of the inversion to be performed in parallel. We shall present the main features of the parallel modeling/inversion algorithm, assess its scalability and illustrate its performances with realistic synthetic case studies.
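The adjoint-state gradient (one extra linear solve per source, rather than one solve per model parameter) can be demonstrated on a toy symmetric system standing in for the frequency-domain elastodynamic equations; the operator, source, receivers, and data below are all invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
# 1D "stiffness" operator plus a model-dependent diagonal: a toy stand-in
# for a discretized frequency-domain system A(m) u = s
K = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
s = np.zeros(n); s[0] = 1.0                       # source term
P = np.eye(n)[[5, 15, 25]]                        # receivers sample 3 points
m = 1.0 + 0.1 * rng.random(n)                     # current model parameters
d = rng.standard_normal(3)                        # "observed" data (synthetic)

def A(model):
    return K + np.diag(model)

u = np.linalg.solve(A(m), s)                      # one forward solve
res = P @ u - d                                   # data residual
lam = np.linalg.solve(A(m).T, -P.T @ res)         # one adjoint solve
grad = lam * u                                    # adjoint-state gradient of J(m)

def J(model):                                     # misfit functional
    return 0.5 * float(np.sum((P @ np.linalg.solve(A(model), s) - d) ** 2))

eps = 1e-6
pert = np.zeros(n); pert[7] = eps
fd = (J(m + pert) - J(m - pert)) / (2.0 * eps)    # finite-difference check
```

The finite-difference check confirms that two linear solves yield the full gradient, which is what makes quasi-Newton iterations affordable at scale.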
NASA Astrophysics Data System (ADS)
Gross, Lutz; Altinay, Cihan; Fenwick, Joel; Smith, Troy
2014-05-01
The program package escript has been designed for solving mathematical modeling problems using python, see Gross et al. (2013). Its development and maintenance has been funded by the Australian Commonwealth to provide open source software infrastructure for the Australian Earth Science community (recent funding by the Australian Geophysical Observing System EIF (AGOS) and the AuScope Collaborative Research Infrastructure Scheme (CRIS)). The key concepts of escript are based on the terminology of spatial functions and partial differential equations (PDEs) - an approach providing abstraction from the underlying spatial discretization method (i.e. the finite element method (FEM)). This presents the user with a programming environment that is easy to use even for complex models. Because implementations are independent of the underlying data structures, simulations are easily portable across desktop computers and scalable compute clusters without modifications to the program code. escript has been successfully applied in a variety of applications including modeling mantle convection, melting processes, volcanic flow, earthquakes, faulting, multi-phase flow, block caving and mineralization (see Poulet et al. 2013). The recent escript release (see Gross et al. (2013)) provides an open framework for solving joint inversion problems for geophysical data sets (potential field, seismic and electro-magnetic). The strategy is based on the idea of formulating the inversion problem as an optimization problem with PDE constraints, where the cost function is defined by the data defect and the regularization term for the rock properties, see Gross & Kemp (2013). This first-optimize-then-discretize approach avoids the assemblage of the (in general dense) sensitivity matrix used in conventional approaches, where discrete programming techniques are applied to the discretized problem (first-discretize-then-optimize).
In this paper we will discuss the mathematical framework for inversion and appropriate solution schemes in escript. We will also give a brief introduction into escript's open framework for defining and solving geophysical inversion problems. Finally we will show some benchmark results to demonstrate the computational scalability of the inversion method across a large number of cores and compute nodes in a parallel computing environment. References: - L. Gross et al. (2013): Escript Solving Partial Differential Equations in Python Version 3.4, The University of Queensland, https://launchpad.net/escript-finley - L. Gross and C. Kemp (2013) Large Scale Joint Inversion of Geophysical Data using the Finite Element Method in escript. ASEG Extended Abstracts 2013, http://dx.doi.org/10.1071/ASEG2013ab306 - T. Poulet, L. Gross, D. Georgiev, J. Cleverley (2012): escript-RT: Reactive transport simulation in Python using escript, Computers & Geosciences, Volume 45, 168-176. http://dx.doi.org/10.1016/j.cageo.2011.11.005.
NASA Astrophysics Data System (ADS)
Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.
2016-12-01
Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. 
To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed using "snapshots" from the parameter-reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that, using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.
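The POD step (snapshot SVD, energy-based truncation, projection of new states) can be sketched with synthetic snapshots; the two-mode state model below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
nx, ns = 200, 40
x = np.linspace(0.0, 1.0, nx)

# Snapshot matrix: each column is a forward-model state for one parameter draw
snapshots = np.column_stack([
    a * np.sin(np.pi * x) + b * np.sin(2.0 * np.pi * x)
    for a, b in rng.random((ns, 2))
])

U, svals, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(svals ** 2) / np.sum(svals ** 2)
r = int(np.searchsorted(energy, 0.9999)) + 1     # modes capturing 99.99% energy
basis = U[:, :r]                                 # reduced POD basis

# Project a new state onto the POD subspace and measure reconstruction error
new_state = 0.3 * np.sin(np.pi * x) + 0.7 * np.sin(2.0 * np.pi * x)
recon = basis @ (basis.T @ new_state)
err = np.linalg.norm(recon - new_state) / np.linalg.norm(new_state)
```

Because the synthetic states live on a two-dimensional manifold, the energy criterion retains just two modes and reconstruction is essentially exact; real ice-sheet states would need more modes, selected by the same criterion.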
Invisibility problem in acoustics, electromagnetism and heat transfer. Inverse design method
NASA Astrophysics Data System (ADS)
Alekseev, G.; Tokhtina, A.; Soboleva, O.
2017-10-01
Two approaches (direct design and inverse design methods) to the problem of designing devices that make material bodies invisible to detection by different physical fields - electromagnetic, acoustic and static - are discussed. The second method is applied to the problem of designing cloaking devices for the 3D stationary thermal scattering model. Based on this method, the design problems under study are reduced to respective control problems. The material parameters (radial and tangential heat conductivities) of the inhomogeneous anisotropic medium filling the thermal cloak and the density of auxiliary heat sources play the role of controls. Unique solvability of the direct thermal scattering problem in a Sobolev space is proved and new estimates of the solutions are established. Using these results, the solvability of the control problem is proved and the optimality system is derived. Based on an analysis of the optimality system, stability estimates of optimal solutions are established and numerical algorithms for solving a particular thermal cloaking problem are proposed.
Three-dimensional imaging of buried objects in very lossy earth by inversion of VETEM data
Cui, T.J.; Aydiner, A.A.; Chew, W.C.; Wright, D.L.; Smith, D.V.
2003-01-01
The very early time electromagnetic system (VETEM) is an efficient tool for the detection of buried objects in very lossy earth, which allows a greater penetration depth compared to ground-penetrating radar. In this paper, the inversion of VETEM data is investigated using three-dimensional (3-D) inverse scattering techniques, where multiple frequencies in the range from 0 to 5 MHz are applied. For small and moderately sized problems, the Born approximation and/or the Born iterative method have been used, with the aid of the singular value decomposition and/or the conjugate gradient method, in solving the linearized integral equations. For large-scale problems, a localized 3-D inversion method based on the Born approximation has been proposed for the inversion of VETEM data over a large measurement domain. Ways to process and to calibrate the experimental VETEM data are discussed to capture the real physics of buried objects. Reconstruction examples using synthesized VETEM data and real-world VETEM data are given to test the validity and efficiency of the proposed approach.
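The combination of a Born-linearized integral equation with SVD-based regularization can be sketched in 1D; the smooth Gaussian kernel below is an invented stand-in for the VETEM sensitivity, not the system's actual kernel:

```python
import numpy as np

rng = np.random.default_rng(7)
n_d, n_m = 40, 60
xs = np.linspace(0.0, 1.0, n_m)                  # subsurface model grid
xc = np.linspace(0.0, 1.0, n_d)                  # measurement positions

# Linearized (Born) sensitivity kernel: smooth rows make the system ill-posed
G = np.exp(-((xc[:, None] - xs[None, :]) / 0.08) ** 2)

m_true = np.exp(-((xs - 0.4) / 0.12) ** 2)       # buried-object contrast
d = G @ m_true + 1e-3 * rng.standard_normal(n_d)

# Truncated SVD regularization of the linearized integral equation
U, sv, Vt = np.linalg.svd(G, full_matrices=False)
k = int(np.sum(sv > 1e-2 * sv[0]))               # keep well-resolved modes only
m_tsvd = Vt[:k].T @ ((U[:, :k].T @ d) / sv[:k])
m_naive = Vt.T @ ((U.T @ d) / sv)                # unregularized: noise blows up

err_tsvd = np.linalg.norm(m_tsvd - m_true) / np.linalg.norm(m_true)
err_naive = np.linalg.norm(m_naive - m_true) / np.linalg.norm(m_true)
```

Truncating the small singular values trades a little resolution for stability, which is why the SVD appears alongside the Born approximation in the abstract.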
Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.
Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D
2017-11-01
We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems, and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
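Sparse regularization in the synthesis setting is commonly solved by iterative soft-thresholding (ISTA); a minimal flat-domain sketch with a random dictionary (not spherical wavelets, and not the authors' solver) looks like this:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(8)
n, p, k = 80, 200, 5
Phi = rng.standard_normal((n, p)) / np.sqrt(n)    # sensing/dictionary matrix
x_true = np.zeros(p)
support = rng.choice(p, k, replace=False)
x_true[support] = rng.uniform(2.0, 4.0, k)        # sparse coefficient vector
y = Phi @ x_true + 0.01 * rng.standard_normal(n)

# ISTA for the synthesis problem  min_x 0.5*||y - Phi x||^2 + lam*||x||_1
lam = 0.05
L = np.linalg.norm(Phi, 2) ** 2                   # Lipschitz constant of gradient
x = np.zeros(p)
for _ in range(500):
    x = soft(x + Phi.T @ (y - Phi @ x) / L, lam / L)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```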
Optimization, Monotonicity and the Determination of Nash Equilibria — An Algorithmic Analysis
NASA Astrophysics Data System (ADS)
Lozovanu, D.; Pickl, S. W.; Weber, G.-W.
2004-08-01
This paper is concerned with the optimization of a nonlinear time-discrete model exploiting the special structure of the underlying cost game and the property of inverse matrices. The costs are interlinked by a system of linear inequalities. It is shown that, if the players cooperate, i.e., minimize the sum of all the costs, they achieve a Nash equilibrium. In order to determine Nash equilibria, the simplex method can be applied with respect to the dual problem. An introduction into the TEM model and its relationship to an economic Joint Implementation program is given. The equivalence problem is presented. The construction of the emission cost game and the allocation problem is explained. The assumption of inverse monotony for the matrices leads to a new result in the area of such allocation problems. A generalization of such problems is presented.
Source localization in electromyography using the inverse potential problem
NASA Astrophysics Data System (ADS)
van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.
2011-02-01
We describe an efficient method for reconstructing the activity in human muscles from an array of voltage sensors on the skin surface. MRI is used to obtain morphometric data which are segmented into muscle tissue, fat, bone and skin, from which a finite element model for volume conduction is constructed. The inverse problem of finding the current sources in the muscles is solved using a careful regularization technique which adds a priori information, yielding physically reasonable solutions from among those that satisfy the basic potential problem. Several regularization functionals are considered and numerical experiments on a 2D test model are performed to determine which performs best. The resulting scheme leads to numerical difficulties when applied to large-scale 3D problems. We clarify the nature of these difficulties and provide a method to overcome them, which is shown to perform well in the large-scale problem setting.
Neural network explanation using inversion.
Saad, Emad W; Wunsch, Donald C
2007-01-01
An important drawback of many artificial neural networks (ANN) is their lack of explanation capability [Andrews, R., Diederich, J., & Tickle, A. B. (1996). A survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8, 373-389]. This paper starts with a survey of algorithms which attempt to explain the ANN output. We then present HYPINV, a new explanation algorithm which relies on network inversion, i.e., calculating the ANN input which produces a desired output. HYPINV is a pedagogical algorithm that extracts rules in the form of hyperplanes. It is able to generate rules with arbitrarily desired fidelity, maintaining a fidelity-complexity tradeoff. To our knowledge, HYPINV is the only pedagogical rule extraction method which extracts hyperplane rules from continuous- or binary-attribute neural networks. Different network inversion techniques, involving gradient descent as well as an evolutionary algorithm, are presented. An information-theoretic treatment of rule extraction is also given. HYPINV is applied to example synthetic problems, to a real aerospace problem, and compared with similar algorithms using benchmark problems.
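The core operation HYPINV builds on, inverting a trained network by gradient descent on its input, can be sketched as follows. This is an illustrative reconstruction, not the HYPINV code; the tiny network, its random "trained" weights, and the target output are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny trained network: 2 inputs -> 3 tanh units -> 1 sigmoid
# output. The weights are held fixed; inversion searches for an *input*
# that produces a desired output.
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))

def invert(y_target, steps=3000, lr=0.1):
    """Gradient descent on the input, with the weights held fixed."""
    x = np.zeros(2)
    for _ in range(steps):
        h = np.tanh(W1 @ x + b1)
        y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))
        dy = 2.0 * (y - y_target) * y * (1.0 - y)   # d(loss)/d(logit)
        dh = (W2.T @ dy) * (1.0 - h ** 2)           # back through W2 and tanh
        x -= lr * (W1.T @ dh)                       # step the input, not weights
    return x

target = np.array([0.5])
x_inv = invert(target)
print(forward(x_inv))  # driven toward the target output
```

The evolutionary-algorithm variant mentioned in the abstract would replace the gradient step with mutation and selection over candidate inputs.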
Using Inverse Problem Methods with Surveillance Data in Pneumococcal Vaccination
Sutton, Karyn L.; Banks, H. T.; Castillo-Chavez, Carlos
2010-01-01
The design and evaluation of epidemiological control strategies is central to public health policy. While inverse problem methods are routinely used in many applications, their use remains relatively rare in epidemiology, although their potential impact is great. We describe methods particularly relevant to epidemiological modeling at the population level. These methods are then applied to the study of pneumococcal vaccination strategies as a relevant example which poses many challenges common to other infectious diseases. We demonstrate that relevant yet typically unknown parameters may be estimated, and show that a calibrated model may be used to assess implemented vaccine policies through the estimation of parameters if vaccine history is recorded along with infection and colonization information. Finally, we show how one might determine an appropriate level of refinement or aggregation in the age-structured model given age-stratified observations. These results illustrate ways in which the collection and analysis of surveillance data can be improved using inverse problem methods. PMID:20209093
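The kind of inverse problem described, estimating unknown epidemiological parameters by fitting a population-level model to surveillance data, can be sketched with an ordinary-least-squares fit of a toy SIS-type colonization model. Everything here (the one-parameter model, the synthetic "surveillance" data, the grid search) is a hypothetical illustration, not the authors' model.

```python
import numpy as np

def simulate(beta, gamma=0.1, i0=0.01, days=100):
    """Forward-Euler SIS model: di/dt = beta*i*(1-i) - gamma*i."""
    i, traj = i0, []
    for _ in range(days):
        i += beta * i * (1 - i) - gamma * i
        traj.append(i)
    return np.array(traj)

# Synthetic surveillance data generated with an assumed "true" transmission
# rate, lightly perturbed by observation noise.
rng = np.random.default_rng(1)
beta_true = 0.3
data = simulate(beta_true) + rng.normal(scale=0.005, size=100)

# Inverse problem: ordinary least squares over a grid of candidate betas.
grid = np.linspace(0.1, 0.5, 401)
sse = [np.sum((simulate(b) - data) ** 2) for b in grid]
beta_hat = grid[int(np.argmin(sse))]
print(beta_hat)  # close to 0.3
```

A realistic study would use an age-structured model and a derivative-based optimizer rather than a grid, but the structure (forward model inside a misfit minimization) is the same.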
NASA Technical Reports Server (NTRS)
Oliver, A. Brandon
2017-01-01
Obtaining measurements of flight environments on ablative heat shields is both critical for spacecraft development and extremely challenging due to the harsh heating environment and surface recession. Thermocouples installed several millimeters below the surface are commonly used to measure the heat shield temperature response, but an ill-posed inverse heat conduction problem must be solved to reconstruct the surface heating environment from these measurements. Ablation can contribute substantially to the measurement response, making solutions to the inverse problem strongly dependent on the recession model, which is often poorly characterized. To enable efficient surface reconstruction for recession model sensitivity analysis, a method for decoupling the surface recession evaluation from the inverse heat conduction problem is presented. The decoupled method is shown to provide reconstructions of equivalent accuracy to the traditional coupled method but with substantially reduced computational effort. These methods are applied to reconstruct the environments on the Mars Science Laboratory heat shield using diffusion-limited and kinetically limited recession models.
NASA Astrophysics Data System (ADS)
Schumacher, F.; Friederich, W.
2015-12-01
We present the modularized software package ASKI, which is a flexible and extendable toolbox for seismic full waveform inversion (FWI) as well as sensitivity or resolution analysis operating on the sensitivity matrix. It utilizes established wave propagation codes for solving the forward problem and offers an alternative to the monolithic, inflexible and hard-to-modify codes that have typically been written for solving inverse problems. It is available under the GPL at www.rub.de/aski. The Gauss-Newton FWI method for 3D-heterogeneous elastic earth models is based on waveform sensitivity kernels and can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. The kernels are derived in the frequency domain from Born scattering theory as the Fréchet derivatives of linearized full waveform data functionals, quantifying the influence of elastic earth model parameters on the particular waveform data values. As an important innovation, we keep two independent spatial descriptions of the earth model: one for solving the forward problem and one representing the inverted model updates. Thereby we account for the different spatial resolution needs of the forward and inverse problems, respectively. Due to pre-integration of the kernels over the (in general much coarser) inversion grid, storage requirements for the sensitivity kernels are dramatically reduced. ASKI can be flexibly extended to other forward codes by providing it with specific interface routines that contain knowledge about forward code-specific file formats and auxiliary information provided by the new forward code. In order to sustain flexibility, the ASKI tools communicate via file output/input, so large storage capacities need to be accessible in a convenient way.
Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion.
Nonlinear functional approximation with networks using adaptive neurons
NASA Technical Reports Server (NTRS)
Tawel, Raoul
1992-01-01
A novel mathematical framework for the rapid learning of nonlinear mappings and topological transformations is presented. It is based on allowing the neuron's parameters to adapt as a function of learning. This fully recurrent adaptive neuron model (ANM) has been successfully applied to complex nonlinear function approximation problems such as the highly degenerate inverse kinematics problem in robotics.
Parameterizations for ensemble Kalman inversion
NASA Astrophysics Data System (ADS)
Chada, Neil K.; Iglesias, Marco A.; Roininen, Lassi; Stuart, Andrew M.
2018-05-01
The use of ensemble methods to solve inverse problems is attractive because it is a derivative-free methodology which is also well-adapted to parallelization. In its basic iterative form the method produces an ensemble of solutions which lie in the linear span of the initial ensemble. Choice of the parameterization of the unknown field is thus a key component of the success of the method. We demonstrate how both geometric ideas and hierarchical ideas can be used to design effective parameterizations for a number of applied inverse problems arising in electrical impedance tomography, groundwater flow and source inversion. In particular we show how geometric ideas, including the level set method, can be used to reconstruct piecewise continuous fields, and we show how hierarchical methods can be used to learn key parameters in continuous fields, such as length-scales, resulting in improved reconstructions. Geometric and hierarchical ideas are combined in the level set method to find piecewise constant reconstructions with interfaces of unknown topology.
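The basic iterative form of ensemble Kalman inversion referred to above can be sketched for a linear forward map. This is a generic textbook-style sketch under assumed dimensions and noise level, not the paper's geometric or hierarchical parameterizations; note that the update uses only forward evaluations of the ensemble members, no derivatives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward map G and a synthetic truth; EKI never
# differentiates G, it only evaluates it on ensemble members.
G = rng.normal(size=(10, 5))
u_true = rng.normal(size=5)
y = G @ u_true
Gamma = 0.01 * np.eye(10)            # assumed observation-noise covariance

J = 200                              # ensemble size
U = rng.normal(size=(5, J))          # initial ensemble of parameter vectors

for _ in range(20):
    W = G @ U                        # forward evaluations of all members
    du = U - U.mean(axis=1, keepdims=True)
    dw = W - W.mean(axis=1, keepdims=True)
    Cuw = du @ dw.T / (J - 1)        # parameter-data cross-covariance
    Cww = dw @ dw.T / (J - 1)        # data covariance
    K = Cuw @ np.linalg.inv(Cww + Gamma)
    # Perturbed observations keep the ensemble spread statistically consistent.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(10), Gamma, size=J).T
    U = U + K @ (Y - W)

u_hat = U.mean(axis=1)
print(np.linalg.norm(u_hat - u_true))
```

Because each update is a linear combination of ensemble members, the iterates stay in the span of the initial ensemble, which is exactly why the choice of parameterization discussed in the abstract matters.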
Controlling bridging and pinching with pixel-based mask for inverse lithography
NASA Astrophysics Data System (ADS)
Kobelkov, Sergey; Tritchkov, Alexander; Han, JiWan
2016-03-01
Inverse Lithography Technology (ILT) has become a viable computational lithography candidate in recent years, as it can produce mask output that yields process latitude and CD control in the fab that is hard to match with conventional OPC/SRAF insertion approaches. An approach to solving the inverse lithography problem as a nonlinear, constrained minimization problem over a domain of mask pixels was suggested by Y. Granik in the 2006 paper "Fast pixel-based mask optimization for inverse lithography". The present paper extends this method to satisfy bridging and pinching constraints imposed on print contours. Specifically, objective functions are proposed that penalize constraint violations, and their minimization with gradient-descent methods is considered. This approach has been tested with an ILT-based Local Printability Enhancement (LPTM) tool in an automated flow to eliminate hotspots that can be present on the full chip after conventional SRAF placement/OPC, and has been applied in 14 nm and 10 nm node production, in single- and multiple-patterning flows.
NASA Astrophysics Data System (ADS)
Baronian, Vahan; Bourgeois, Laurent; Chapuis, Bastien; Recoquillay, Arnaud
2018-07-01
This paper presents an application of the linear sampling method to ultrasonic non-destructive testing of an elastic waveguide. In particular, the NDT context implies that both the solicitations and the measurements are located on the surface of the waveguide and are given in the time domain. Our strategy consists in using a modal formulation of the linear sampling method at multiple frequencies, such a modal formulation being justified theoretically in Bourgeois et al (2011 Inverse Problems 27 055001) for rigid obstacles and in Bourgeois and Lunéville (2013 Inverse Problems 29 025017) for cracks. Our strategy requires the inversion of some emission and reception matrices, which deserve special attention due to potential ill-conditioning. The feasibility of our method is demonstrated with both artificial and real data.
Applications of the JARS method to study levee sites in southern Texas and southern New Mexico
Ivanov, J.; Miller, R.D.; Xia, J.; Dunbar, J.B.
2007-01-01
We apply the joint analysis of refractions with surface waves (JARS) method to several sites and compare its results to traditional refraction-tomography methods in an effort to find a more realistic solution to the inverse refraction-traveltime problem. The JARS method uses a reference model, derived from surface-wave shear-wave velocity estimates, as a constraint. In all of the cases the JARS estimates appear more realistic than those from the conventional refraction-tomography methods. As a result, we consider the JARS algorithm the preferred method for finding solutions to inverse refraction-tomography problems. © 2007 Society of Exploration Geophysicists.
Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin
2016-04-01
Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure geophysical inversions are performed on meshes of space-filling cells (typically prisms or tetrahedra) and recover smoothly varying physical property distributions that are inconsistent with typical geological interpretations. There are several approaches through which mesh-based minimum-structure geophysical inversion can help recover models with some of the desired characteristics. However, a more effective strategy may be to consider two fundamentally different types of inversions: lithological and surface geometry inversions. A major advantage of these two inversion approaches is that joint inversion of multiple types of geophysical data is greatly simplified. In a lithological inversion, the subsurface is discretized into a mesh and each cell contains a particular rock type. A lithological model must be translated to a physical property model before geophysical data simulation. Each lithology may map to discrete property values or there may be some a priori probability density function associated with the mapping. Through this mapping, lithological inverse problems limit the parameter domain and consequently reduce the non-uniqueness from that presented by standard mesh-based inversions that allow physical property values on continuous ranges. Furthermore, joint inversion is greatly simplified because no additional mathematical coupling measure is required in the objective function to link multiple physical property models. In a surface geometry inversion, the model comprises wireframe surfaces representing contacts between rock units. This parameterization is then fully consistent with Earth models built by geologists, which in 3D typically comprise wireframe contact surfaces of tessellated triangles. 
As for the lithological case, the physical properties of the units lying between the contact surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one can not apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues including parallelization and problem dimension reduction.
NASA Astrophysics Data System (ADS)
Sirota, Dmitry; Ivanov, Vadim
2017-11-01
Mining operations affect the stability of natural and man-made rock massifs and give rise to sources of differential mechanical stress. These sources generate a quasi-stationary electric field with a Newtonian potential. The paper reviews a method for determining the shape and size of a flat source of a field with this kind of potential. This problem arises in many areas of mining: geological exploration of mineral resources, ore deposits, control of underground mining, locating sources of coal self-heating, localizing the sources of rock cracks, and other applied problems of practical physics. These problems are ill-posed inverse problems; they are solved by conversion to a Fredholm-Urysohn integral equation of the first kind, which is in turn solved by A. N. Tikhonov's regularization method.
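The final inversion step, solving a discretized first-kind integral equation by Tikhonov regularization, can be sketched as follows. The smoothing kernel, noise level, and regularization weight are hypothetical stand-ins, since the paper's Fredholm-Urysohn kernel is not given here.

```python
import numpy as np

# Discretized first-kind equation K f = g. A Gaussian smoothing kernel is
# used as a hypothetical stand-in for the Newtonian-potential kernel.
n = 100
s = np.linspace(0.0, 1.0, n)
h = s[1] - s[0]
K = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * 0.05 ** 2)) * h

f_true = np.sin(2 * np.pi * s)                     # source profile to recover
g = K @ f_true + 1e-4 * np.random.default_rng(0).normal(size=n)

# A naive solve of K f = g would amplify the noise through K's tiny singular
# values. Zeroth-order Tikhonov regularization damps them:
#   f_a = argmin ||K f - g||^2 + a ||f||^2 = (K^T K + a I)^{-1} K^T g
a = 1e-4
f_a = np.linalg.solve(K.T @ K + a * np.eye(n), K.T @ g)

rel_err = np.linalg.norm(f_a - f_true) / np.linalg.norm(f_true)
print(rel_err)  # small relative error despite the ill-posedness
```

The choice of the weight `a` trades data fit against stability, which is the central practical difficulty of Tikhonov's method.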
Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations
NASA Astrophysics Data System (ADS)
Zhi, Longxiao; Gu, Hanming
2018-03-01
The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though the approximations are concise and convenient to use, they have certain limitations: they are valid only when the contrast in elastic parameters between the upper and lower media is small and the incident angle is small, and the inversion for density is not stable. Therefore, we develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the PP-wave reflection coefficient, and in the construction of the objective function for inversion we use a Taylor series expansion to linearize the inverse problem. Through joint AVO inversion of seismic data from the baseline and monitor surveys, we can simultaneously obtain the P-wave velocity, S-wave velocity, and density in the baseline survey and their time-lapse changes. We can also estimate the change in oil saturation from the inversion results. Compared with time-lapse difference inversion, the joint inversion does not rely on these assumptions and can estimate more parameters simultaneously, so it is more widely applicable. Meanwhile, by using the generalized linear method, the inversion is easily implemented and its computational cost is low. We use a theoretical model to generate synthetic seismic records to test the method and analyze the influence of random noise. The results demonstrate the effectiveness and noise robustness of our method. We also apply the inversion to field data and demonstrate its feasibility in practice.
NASA Technical Reports Server (NTRS)
Shkarayev, S.; Krashantisa, R.; Tessler, A.
2004-01-01
An important and challenging technology aimed at the next generation of aerospace vehicles is that of structural health monitoring. The key problem is to determine accurately, reliably, and in real time the applied loads, stresses, and displacements experienced in flight, with such data establishing an information database for structural health monitoring. The present effort is aimed at developing a finite element-based methodology involving an inverse formulation that employs measured surface strains to recover the applied loads, stresses, and displacements in an aerospace vehicle in real time. The computational procedure uses a standard finite element model (i.e., "direct analysis") of a given airframe, with the subsequent application of the inverse interpolation approach. The inverse interpolation formulation is based on a parametric approximation of the loading and is further constructed through a least-squares minimization of calculated and measured strains. This procedure results in the governing system of linear algebraic equations, providing the unknown coefficients that accurately define the load approximation. Numerical simulations are carried out for problems involving various levels of structural approximation. These include plate-loading examples and an aircraft wing box. Accuracy and computational efficiency of the proposed method are discussed in detail. The experimental validation of the methodology by way of structural testing of an aircraft wing is also discussed.
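The least-squares step described, recovering load-approximation coefficients from measured strains through a linear sensitivity relation, reduces to an ordinary linear least-squares problem. The sketch below assumes a hypothetical sensitivity matrix S in place of one assembled from a finite element model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setting: the load is approximated by m basis functions, and
# the k measured strains depend linearly on the load coefficients through a
# sensitivity matrix S (in practice assembled from the direct FE analysis).
m, k = 4, 40
S = rng.normal(size=(k, m))
c_true = np.array([1.0, -0.5, 0.25, 2.0])              # "true" coefficients
eps_measured = S @ c_true + 0.01 * rng.normal(size=k)  # noisy strain gauges

# Least-squares minimization of ||S c - eps_measured||^2 yields the governing
# linear system S^T S c = S^T eps_measured, solved here via lstsq.
c_hat, *_ = np.linalg.lstsq(S, eps_measured, rcond=None)
print(c_hat)  # close to c_true
```

With many more gauges than load parameters (k >> m), the normal equations are well conditioned, which is what makes the real-time recovery described above feasible.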
NASA Astrophysics Data System (ADS)
Park, Won-Kwang
2015-02-01
Multi-frequency subspace migration imaging techniques are usually adopted for the non-iterative imaging of unknown electromagnetic targets, such as cracks in concrete walls or bridges and anti-personnel mines in the ground, in inverse scattering problems. This technique is known to be fast, effective, and robust, and it can be applied not only to full-view but also to limited-view inverse problems, provided a suitable number of incident fields are applied and the corresponding scattered fields collected. However, in many works the application of such techniques is heuristic. Motivated by this, this study analyzes the structure of the imaging functional employed in the subspace migration imaging technique in two-dimensional full- and limited-view inverse scattering problems when the unknown targets are arbitrarily shaped, arc-like perfectly conducting cracks located in two-dimensional homogeneous space. In contrast to the statistical approach based on statistical hypothesis testing, our approach is based on the fact that the subspace migration imaging functional can be expressed as a linear combination of Bessel functions of integer order of the first kind. This follows from the structure of the Multi-Static Response (MSR) matrix collected in the far field at nonzero frequency in either Transverse Magnetic (TM) mode (Dirichlet boundary condition) or Transverse Electric (TE) mode (Neumann boundary condition). The investigation of these expressions reveals certain properties of subspace migration and explains why multiple frequencies enhance imaging resolution. In particular, we carefully analyze the subspace migration and confirm some properties of imaging when a small number of incident fields are applied. Consequently, we introduce a weighted multi-frequency imaging functional and confirm that it is an improved version of subspace migration in TM mode.
Various results of numerical simulations performed on the far-field data affected by large amounts of random noise are similar to the analytical results derived in this study, and they provide a direction for future studies.
NASA Astrophysics Data System (ADS)
Bassrei, A.; Terra, F. A.; Santos, E. T.
2007-12-01
Inverse problems in applied geophysics are usually ill-posed. One way to reduce this deficiency is through derivative matrices, a particular case of a more general family of techniques called regularization. Regularization by derivative matrices has an input parameter, the regularization parameter, whose choice is itself a problem. In the 1970s a heuristic approach, later called the L-curve, was suggested for providing the optimum regularization parameter. The L-curve is a parametric curve in which each point is associated with a value of the parameter λ. The horizontal axis represents the misfit between the observed and calculated data, and the vertical axis represents the norm of the product between the regularization matrix and the estimated model. The ideal point is the knee of the L-curve, where there is a balance between the quantities represented on the two axes. The L-curve has been applied to a variety of inverse problems, including geophysical ones. However, visualizing the knee is not always easy, especially when the curve does not have the L shape. In this work three methodologies are employed to search for the optimal regularization parameter on the L-curve. The first criterion uses Hansen's toolbox, which extracts λ automatically. The second criterion consists of extracting the optimal parameter visually. The third criterion constructs the first derivative of the L-curve and then automatically extracts the inflection point. The L-curve with these three criteria was applied and validated in traveltime tomography and 2-D gravity inversion. After many simulations with synthetic data, both noise-free and corrupted with noise, with regularization orders 0, 1, and 2, we verified that the three criteria are valid and provide satisfactory results. The third criterion presented the best performance, especially in cases where the L-curve has an irregular shape.
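A minimal numerical sketch of the L-curve construction: Tikhonov solutions are computed over a range of λ, the misfit and solution norms are plotted in log-log space, and the knee is extracted automatically. The test problem is hypothetical, and the knee finder used here (the point farthest from the chord between the curve's endpoints) is a simple stand-in for the three criteria discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ill-conditioned test problem A x = b with noisy data.
n = 50
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = U @ np.diag(10.0 ** np.linspace(0, -8, n)) @ V.T
x_true = V[:, 0]
b = A @ x_true + 1e-3 * rng.normal(size=n)

lams = 10.0 ** np.linspace(-12, 2, 60)
xs, rho, eta = [], [], []
for lam in lams:
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    xs.append(x)
    rho.append(np.linalg.norm(A @ x - b))   # data misfit (horizontal axis)
    eta.append(np.linalg.norm(x))           # solution norm (vertical axis)

# Knee: the log-log point farthest from the chord joining the curve's
# endpoints, a simple automatic stand-in for visual corner inspection.
P = np.log10(np.column_stack([rho, eta]))
d = (P[-1] - P[0]) / np.linalg.norm(P[-1] - P[0])
dist = np.abs((P - P[0]) @ np.array([-d[1], d[0]]))
i_opt = int(np.argmax(dist))
print(lams[i_opt])
```

Under-regularized solutions (tiny λ) sit on the steep branch with huge solution norms; over-regularized ones sit on the flat branch with large misfits; the chord heuristic lands in between.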
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallick, S.
1999-03-01
In this paper, a prestack inversion method using a genetic algorithm (GA) is presented, and issues relating to the implementation of prestack GA inversion in practice are discussed. GA is a Monte-Carlo-type inversion, using a natural analogy to the biological evolution process. When GA is cast into a Bayesian framework, a priori information on the model parameters and the physics of the forward problem are used to compute synthetic data. These synthetic data can then be matched with observations to obtain approximate estimates of the marginal a posteriori probability density (PPD) functions in the model space. Plots of these PPD functions allow an interpreter to choose models which best describe the specific geologic setting and lead to an accurate prediction of seismic lithology. Poststack inversion and prestack GA inversion were applied to a Woodbine gas sand data set from East Texas. A comparison of prestack inversion with poststack inversion demonstrates that prestack inversion shows detailed stratigraphic features of the subsurface which are not visible on the poststack inversion.
Feedback control by online learning an inverse model.
Waegeman, Tim; Wyffels, Francis; Schrauwen, Benjamin
2012-10-01
A model, predictor, or error estimator is often used by a feedback controller to control a plant. Creating such a model is difficult when the plant exhibits nonlinear behavior. In this paper, a novel online learning control framework is proposed that does not require explicit knowledge about the plant. This framework uses two learning modules, one for creating an inverse model, and the other for actually controlling the plant. Except for their inputs, they are identical. The inverse model learns by the exploration performed by the not yet fully trained controller, while the actual controller is based on the currently learned model. The proposed framework allows fast online learning of an accurate controller. The controller can be applied on a broad range of tasks with different dynamic characteristics. We validate this claim by applying our control framework on several control tasks: 1) the heating tank problem (slow nonlinear dynamics); 2) flight pitch control (slow linear dynamics); and 3) the balancing problem of a double inverted pendulum (fast linear and nonlinear dynamics). The results of these experiments show that fast learning and accurate control can be achieved. Furthermore, a comparison is made with some classical control approaches, and observations concerning convergence and stability are made.
NASA Astrophysics Data System (ADS)
Lin, Y.; O'Malley, D.; Vesselinov, V. V.
2015-12-01
Inverse modeling seeks model parameters given a set of observed state variables. For many practical problems, however, because the observed data sets are large and the model parameters numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system anew for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment.
Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
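The core Levenberg-Marquardt iteration the paper accelerates can be sketched on a small dense problem. This sketch solves the damped normal equations directly and adapts the damping parameter; the paper's contribution (Krylov projection and subspace recycling across damping parameters) replaces exactly this dense solve. The exponential test model and all settings are hypothetical.

```python
import numpy as np

def residuals(p):
    """Hypothetical small nonlinear model: fit y = p0 * exp(p1 * t)."""
    return p[0] * np.exp(p[1] * t) - y

def jacobian(p):
    J = np.empty((t.size, 2))
    J[:, 0] = np.exp(p[1] * t)
    J[:, 1] = p[0] * t * np.exp(p[1] * t)
    return J

t = np.linspace(0, 1, 30)
y = 2.0 * np.exp(-1.5 * t)           # synthetic noise-free data

p = np.array([1.0, 0.0])             # initial guess
lam = 1e-2                           # damping parameter
for _ in range(50):
    r, J = residuals(p), jacobian(p)
    # Damped normal equations: (J^T J + lam I) dp = -J^T r.
    # This dense solve, repeated for each lam, is what the Krylov
    # projection-and-recycling scheme in the abstract avoids at scale.
    dp = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    if np.sum(residuals(p + dp) ** 2) < np.sum(r ** 2):
        p, lam = p + dp, lam * 0.5   # accept step, trust the model more
    else:
        lam *= 2.0                   # reject step, increase damping

print(p)  # approx [2.0, -1.5]
```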
Inverse transport calculations in optical imaging with subspace optimization algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu
2014-09-15
Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
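The low-/high-frequency splitting idea can be sketched for a linear measurement operator: the component in the dominant singular subspace is recovered analytically from the truncated SVD, and minimization handles only the complement. The operator, subspace dimension, and noise level below are hypothetical stand-ins for the linearized transport setting.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical ill-posed linear measurement operator with decaying spectrum.
m, n, k = 40, 60, 10                 # k = dimension of "low-frequency" subspace
U, _ = np.linalg.qr(rng.normal(size=(m, m)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = 10.0 ** np.linspace(0, -6, m)
A = U @ np.diag(s) @ V[:, :m].T

x_true = V[:, :k] @ rng.normal(size=k)   # unknown lives in the low modes
b = A @ x_true + 1e-5 * rng.normal(size=m)

# Low-frequency part: recovered analytically from the truncated SVD.
x_low = V[:, :k] @ ((U[:, :k].T @ b) / s[:k])

# High-frequency part: a few gradient-descent steps on the complement
# subspace, where the operator is nearly singular and barely informative.
Vp = V[:, k:]
c = np.zeros(n - k)
for _ in range(100):
    grad = (A @ Vp).T @ (A @ (x_low + Vp @ c) - b)
    c -= 1.0 * grad

x_hat = x_low + Vp @ c
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The analytic low-frequency recovery does the bulk of the work; the iterative part only refines components the data weakly constrain, which is what makes the split robust.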
NASA Technical Reports Server (NTRS)
Fymat, A. L.
1976-01-01
The paper studies the inversion of the radiative transfer equation describing the interaction of electromagnetic radiation with atmospheric aerosols. The interaction can be considered as the propagation in the aerosol medium of two light beams: the direct beam in the line-of-sight attenuated by absorption and scattering, and the diffuse beam arising from scattering into the viewing direction, which propagates more or less in random fashion. The latter beam has single scattering and multiple scattering contributions. In the single-scattering case the problem is reducible to first-kind Fredholm equations, while for multiple scattering it is necessary to invert partial integrodifferential equations. A nonlinear minimization search method, applicable to the solution of both types of problems, has been developed and is applied here to the problem of monitoring aerosol pollution, namely the complex refractive index and size distribution of aerosol particles.
Analysis of nonlinear internal waves observed by Landsat thematic mapper
NASA Astrophysics Data System (ADS)
Artale, V.; Levi, D.; Marullo, S.; Santoleri, R.
1990-09-01
In this work we test the compatibility between the theoretical parameters of a nonlinear wave model and the quantitative information that one can deduce from satellite-derived data. The theoretical parameters are obtained by applying an inverse problem to the solution of the Cauchy problem for the Korteweg-de Vries equation. Our results are applied to the case of internal wave patterns elaborated from two different satellite sensors at the south of Messina (the thematic mapper) and at the north of Messina (the synthetic aperture radar).
Identification of subsurface structures using electromagnetic data and shape priors
NASA Astrophysics Data System (ADS)
Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond
2015-03-01
We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a Gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of a kernel function, which is application dependent. We argue for using the conditionally positive definite kernel, which is shown to have computational advantages over the commonly applied Gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.
On the Inversion for Mass (Re)Distribution from Global (Time-Variable) Gravity Field
NASA Technical Reports Server (NTRS)
Chao, Benjamin F.
2004-01-01
The well-known non-uniqueness of the gravitational inverse problem states the following: the external gravity field, even if completely and exactly known, cannot uniquely determine the density distribution of the body that produces the gravity field. This is an intrinsic property of a field that obeys the Laplace equation, as already treated in the mathematical as well as geophysical literature. In this paper we provide conceptual insight by examining the problem in terms of the spherical harmonic expansion of the global gravity field. By comparing the multipoles and the moments of the density function, we show that in 3-D the degree of knowledge deficiency in trying to inversely recover the density distribution from the external gravity field is (n+1)(n+2)/2 - (2n+1) = n(n-1)/2 for each harmonic degree n. On the other hand, on a 2-D spherical shell we show via a simple relationship that the inverse solution of the surface density distribution is unique. The latter applies quite readily in the inversion of time-variable gravity signals (such as those observed by the GRACE space mission), where the sources over a wide range of scales largely come from the Earth's surface.
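The degree-by-degree bookkeeping above is easy to verify numerically. The following sketch (function names are ours, not from the paper) counts the independent density moments and external multipole coefficients per harmonic degree and checks the deficiency formula n(n-1)/2:

```python
def density_dof(n):
    # Independent moments of a 3-D density function at harmonic degree n
    return (n + 1) * (n + 2) // 2

def gravity_dof(n):
    # External gravity multipole coefficients of degree n
    return 2 * n + 1

def deficiency(n):
    # Knowledge deficiency when recovering 3-D density from external gravity
    return density_dof(n) - gravity_dof(n)

checks = [deficiency(n) == n * (n - 1) // 2 for n in range(20)]
```

Note that the deficiency vanishes for degrees 0 and 1 and grows quadratically thereafter, which is the quantitative content of the non-uniqueness statement.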
Genetic algorithms and their use in Geophysical Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, Paul B.
1999-04-01
Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution, are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or ''fittest'' models from a ''population'' and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate does not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used, such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca Mountain using gravity data, the second an inversion for velocity structure in the crust of the South Island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California.
The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.
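The parameter recommendations above (binary tournament selection, a high crossover rate, and a mutation rate of about half the inverse of the population size) can be illustrated with a minimal real-coded GA. This is a generic toy on a quadratic misfit, not the inversion code from the case studies:

```python
import random

def genetic_minimize(fitness, n_params, pop_size=40, generations=200,
                     crossover_rate=0.9, seed=0):
    rng = random.Random(seed)
    mutation_rate = 0.5 / pop_size          # ~ half the inverse population size
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(n_params)]
           for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)           # binary tournament selection
        return a if fitness(a) < fitness(b) else b

    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < crossover_rate:   # uniform crossover
                child = [x if rng.random() < 0.5 else y for x, y in zip(p1, p2)]
            else:
                child = list(p1)
            child = [x + rng.gauss(0.0, 0.1) if rng.random() < mutation_rate
                     else x for x in child]      # rare Gaussian mutation
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=fitness)

best = genetic_minimize(lambda m: sum(x * x for x in m), n_params=3)
```

Tournament selection needs no fitness scaling, which is the autoscaling property the abstract refers to.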
NASA Astrophysics Data System (ADS)
Revil, A.
2015-12-01
Geological expertise and petrophysical relationships can be brought together to provide prior information while inverting multiple geophysical datasets. The merging of such information can result in a more realistic distribution of the model parameters, reducing ipso facto the non-uniqueness of the inverse problem. We consider two levels of heterogeneity: facies, described by facies boundaries, and heterogeneities inside each facies, determined by a correlogram. In this presentation, we pose the geophysical inverse problem in terms of Gaussian random fields with mean functions controlled by petrophysical relationships and covariance functions controlled by a prior geological cross-section, including the definition of spatial boundaries for the geological facies. The petrophysical relationship problem is formulated as a regression problem upon each facies. The inversion of the geophysical data is performed in a Bayesian framework. We demonstrate the usefulness of this strategy using a first synthetic case, for which we perform a joint inversion of gravity and galvanometric resistivity data with the stations located at the ground surface. The joint inversion is used to recover the density and resistivity distributions of the subsurface. In a second step, we consider the possibility that the facies boundaries are deformable and invert their shapes as well. We use the level set approach to perform such deformation, preserving the prior topological properties of the facies throughout the inversion. With the help of prior facies petrophysical relationships and the topological characteristics of each facies, we make posterior inferences about multiple geophysical tomograms based on their corresponding geophysical data misfits.
The method is applied to a second synthetic case, showing that we can recover the heterogeneities inside the facies, the mean values of the petrophysical properties and, to some extent, the facies boundaries using the 2D joint inversion of gravity and galvanometric resistivity data. For this 2D synthetic example, we note that the positions of the facies are well recovered except far from the ground surface, where the sensitivity is too low. The figure shows the evolution of the shape of the facies during the inversion, iteration by iteration.
Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization
NASA Astrophysics Data System (ADS)
Yamagishi, Masao; Yamada, Isao
2017-04-01
Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, there has not yet been reported any nonexpansive operator that yields an update free from the inversions of linear operators in cases where it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.
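The HSDM iteration described above can be sketched in a few lines. The operator T and the second-stage objective here are toy choices of ours (metric projection onto a hyperplane, squared norm), not the paper's linearized augmented Lagrangian operator:

```python
import numpy as np

def hsdm(T, grad_f, x0, steps=500):
    # Hybrid steepest descent: x_{k+1} = T(x_k) - lambda_k * grad_f(T(x_k)),
    # with diminishing steps lambda_k = 1/k.
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        y = T(x)
        x = y - (1.0 / k) * grad_f(y)
    return x

a = np.array([1.0, 1.0])
# Nonexpansive operator: metric projection onto the hyperplane {x : a.x = 1},
# so Fix(T) plays the role of the first-stage solution set.
T = lambda x: x - (a @ x - 1.0) / (a @ a) * a
# Second-stage objective ||x||^2; its minimizer over Fix(T) is (0.5, 0.5).
x_star = hsdm(T, lambda x: 2.0 * x, np.array([3.0, -1.0]))
```

The iterates converge to the minimizer of the second-stage objective over Fix(T), which is the defining behavior of the HSDM.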
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.; Kuvshinov, Alexey V.
2016-05-01
This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied the state-of-the-art stochastic optimization algorithm called Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMA-ES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMA-ES provides. This methodology was tested using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
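The sampling idea can be illustrated with a much-simplified sketch in which a (1+1) evolution strategy stands in for full CMA-ES; the step-size rule and the misfit threshold are our toy choices, not the paper's:

```python
import random

def sample_equivalence_domain(misfit, x0, threshold, iters=3000, seed=1):
    # (1+1) evolution strategy with a simple step-size adaptation rule;
    # every visited model whose misfit falls below `threshold` joins the
    # ensemble of equivalent models.
    rng = random.Random(seed)
    x = list(x0)
    fx = misfit(x)
    sigma = 0.5
    ensemble = []
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = misfit(cand)
        if fc < fx:              # accept improvement, widen the search
            x, fx = cand, fc
            sigma *= 1.1
        else:                    # reject, contract the step size
            sigma *= 0.98
        if fc < threshold:
            ensemble.append(cand)
    return ensemble

# Toy misfit with a flat valley: m[0] is well constrained, m[1] barely is,
# so the ensemble spreads out along the poorly constrained direction.
mis = lambda m: m[0] ** 2 + 0.001 * m[1] ** 2
equivalent_models = sample_equivalence_domain(mis, [2.0, 2.0], threshold=0.05)
```

The spread of the ensemble along each coordinate is a crude proxy for the parameter uncertainty the paper quantifies.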
Fu, Zhongtao; Yang, Wenyu; Yang, Zhen
2013-08-01
In this paper, we present an efficient method based on geometric algebra for computing the solutions to the inverse kinematics problem (IKP) of 6R robot manipulators with offset wrist. The inverse kinematics problem is difficult to solve for these manipulators because, stated mathematically, their kinematics equations are complex, highly nonlinear, coupled, and admit multiple solutions. We therefore apply the theory of geometric algebra to the kinematic modeling of 6R robot manipulators in a simple way, generate closed-form kinematics equations, reformulate the problem as a generalized eigenvalue problem with a symbolic elimination technique, and then obtain 16 solutions. Finally, a spray painting robot of this manipulator type is used as an implementation example to demonstrate the effectiveness and real-time capability of the method. The experimental results show that this method has a large advantage over the classical methods in geometric intuition, computational cost, and real-time performance, and that it can be directly extended to all serial robot manipulators and completely automated, providing a new tool for the analysis and application of general robot manipulators.
Metamodel-based inverse method for parameter identification: elastic-plastic damage model
NASA Astrophysics Data System (ADS)
Huang, Changwu; El Hami, Abdelkhalak; Radi, Bouchaïb
2017-04-01
This article proposes a metamodel-based inverse method for material parameter identification and applies it to an elastic-plastic damage model. An elastic-plastic damage model is presented and implemented in numerical simulation. The metamodel-based inverse method is proposed in order to overcome the computational cost of the conventional inverse method. In the metamodel-based inverse method, a Kriging metamodel is constructed from an experimental design in order to model the relationship between the material parameters and the objective-function values of the inverse problem, and the optimization procedure is then executed on the metamodel. Application of the presented material model and the proposed parameter identification method to the standard A 2017-T4 tensile test shows that the presented elastic-plastic damage model adequately describes the material's mechanical behaviour and that the proposed metamodel-based inverse method not only enhances the efficiency of parameter identification but also gives reliable results.
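A minimal sketch of the metamodel idea: fit a simple-Kriging (zero-mean Gaussian-process) surrogate to a small design of parameter/objective pairs, then optimize on the cheap surrogate instead of rerunning the expensive simulation. The kernel, length scale, and toy objective are our assumptions, not the article's settings:

```python
import numpy as np

def kriging_fit(X, y, length=0.25, nugget=1e-6):
    # Simple-Kriging (zero-mean GP) surrogate with a Gaussian kernel
    X = np.atleast_2d(X).astype(float)
    k = lambda A, B: np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                            / (2.0 * length ** 2))
    K = k(X, X) + nugget * np.eye(len(X))
    alpha = np.linalg.solve(K, np.asarray(y, dtype=float))
    return lambda Z: k(np.atleast_2d(Z).astype(float), X) @ alpha

# Toy identification: pretend the "experiment" has true parameter p = 0.3 and
# the misfit is (p_trial - 0.3)^2; build the metamodel from a 9-point design,
# then minimize the surrogate on a fine grid.
design = np.linspace(0.0, 1.0, 9)[:, None]
objective = (design[:, 0] - 0.3) ** 2
predict = kriging_fit(design, objective)
grid = np.linspace(0.0, 1.0, 201)[:, None]
p_hat = float(grid[np.argmin(predict(grid)), 0])
```

In a real application each design point costs one full forward simulation, which is exactly why the optimization loop runs on the surrogate.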
Marais, Willem J; Holz, Robert E; Hu, Yu Hen; Kuehn, Ralph E; Eloranta, Edwin E; Willett, Rebecca M
2016-10-10
Atmospheric lidar observations provide a unique capability to directly observe the vertical column of cloud and aerosol scattering properties. Detector and solar-background noise, however, hinder the ability of lidar systems to provide reliable backscatter and extinction cross-section estimates. Standard methods for solving this inverse problem are most effective with high signal-to-noise ratio observations that are only available at low resolution in uniform scenes. This paper describes a novel method for solving the inverse problem with high-resolution, lower signal-to-noise ratio observations that is effective in non-uniform scenes. The novelty is twofold. First, the inferences of the backscatter and extinction are applied to images, whereas current lidar algorithms use only the information content of single profiles. Hence, the latent spatial and temporal information in noisy images is utilized to infer the cross-sections. Second, the noise associated with photon-counting lidar observations can be modeled using a Poisson distribution, and state-of-the-art tools for solving Poisson inverse problems are adapted to the atmospheric lidar problem. It is demonstrated through photon-counting high spectral resolution lidar (HSRL) simulations that the proposed algorithm yields inverted backscatter and extinction cross-sections (per unit volume) with smaller mean squared error values at higher spatial and temporal resolutions, compared to the standard approach. Two case studies of real experimental data are also provided, in which the proposed algorithm is applied to HSRL observations and the inverted backscatter and extinction cross-sections are compared against the standard approach.
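The Poisson noise model mentioned above leads to multiplicative expectation-maximization updates; the classic Richardson-Lucy scheme below is a minimal illustration on a toy 3-bin forward model, not the paper's HSRL algorithm:

```python
import numpy as np

def richardson_lucy(counts, A, iters=5000):
    # Multiplicative EM updates for the model counts ~ Poisson(A @ x), x >= 0:
    # x_j <- x_j * [A^T (counts / (A x))]_j / sum_i A_ij
    x = np.ones(A.shape[1])
    colsum = A.sum(axis=0)
    for _ in range(iters):
        x *= (A.T @ (counts / (A @ x))) / colsum
    return x

# Toy smoothing forward model and noiseless synthetic counts.
A = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
x_true = np.array([5.0, 1.0, 3.0])
x_hat = richardson_lucy(A @ x_true, A)
```

The update preserves nonnegativity automatically, which is one reason multiplicative schemes are natural for photon-counting data.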
Angle-domain inverse scattering migration/inversion in isotropic media
NASA Astrophysics Data System (ADS)
Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan
2018-07-01
The classical seismic asymptotic inversion can be transformed into a problem of inversion of the generalized Radon transform (GRT). In such methods, the combined parameters are linearly attached to the scattered wave-field by the Born approximation and recovered by applying an inverse GRT operator to the scattered wave-field data. A typical GRT-style true-amplitude inversion procedure contains an amplitude compensation process after the weighted migration, via division by an illumination-associated matrix whose elements are integrals over scattering angles. Performing the generalized linear inversion and the inversion of the GRT together through this process for direct inversion is intuitive to some extent. However, the operation becomes imprecise when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally eliminates the external integral term related to the illumination in the conventional case. We solve the linearized integral equation for combined parameters at different fixed scattering-angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop, which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. We then deal with the over-determined problem of solving for each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method can obtain more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate its effectiveness and practicality.
Indoor detection of passive targets recast as an inverse scattering problem
NASA Astrophysics Data System (ADS)
Gottardi, G.; Moriyama, T.
2017-10-01
Wireless local area networks represent an alternative to custom sensors and dedicated surveillance systems for indoor target detection. The availability of channel state information has opened up the exploitation of the spatial and frequency diversity afforded by orthogonal frequency division multiplexing. Such fine-grained information can be used to solve the detection problem as an inverse scattering problem. The goal of the detection is to reconstruct the properties of the investigation domain, namely to estimate whether the domain is empty or occupied by targets, starting from measurements of the electromagnetic perturbation of the wireless channel. An innovative inversion strategy exploiting both the frequency and the spatial diversity of the channel state information is proposed. The target-dependent features are identified by combining the Kruskal-Wallis test and principal component analysis. The experimental validation points out the detection performance of the proposed method when applied to an existing wireless link of a WiFi architecture deployed in a real indoor scenario. False detection rates lower than 2% have been obtained.
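A sketch of the feature-selection chain (Kruskal-Wallis screening followed by PCA) on synthetic data; the data layout (rows = measurements, columns = channel features) and the H-statistic threshold (chi-squared at the 1% level, one degree of freedom) are our assumptions, not the paper's settings:

```python
import numpy as np

def kw_stat(a, b):
    # Kruskal-Wallis H statistic for two samples (continuous data, no ties)
    pooled = np.concatenate([a, b])
    ranks = pooled.argsort().argsort() + 1.0
    N = len(pooled)
    ra = ranks[:len(a)].mean()
    rb = ranks[len(a):].mean()
    return 12.0 / (N * (N + 1)) * (len(a) * (ra - (N + 1) / 2.0) ** 2
                                   + len(b) * (rb - (N + 1) / 2.0) ** 2)

def select_features(empty, occupied, n_keep=2, h_crit=6.63):
    # Keep features whose empty/occupied distributions differ (H test),
    # then build a PCA projection from the pooled retained features.
    keep = [j for j in range(empty.shape[1])
            if kw_stat(empty[:, j], occupied[:, j]) > h_crit]
    pooled = np.vstack([empty[:, keep], occupied[:, keep]])
    mean = pooled.mean(axis=0)
    _, _, Vt = np.linalg.svd(pooled - mean, full_matrices=False)
    comps = Vt[:n_keep]
    return keep, lambda X: (np.atleast_2d(X)[:, keep] - mean) @ comps.T

rng = np.random.default_rng(0)
empty = rng.normal(0.0, 1.0, size=(100, 3))
occupied = rng.normal(0.0, 1.0, size=(100, 3))
occupied[:, 0] += 3.0            # only feature 0 reacts to the target
keep, project = select_features(empty, occupied)
```

The screening step discards features that carry no target information before the PCA compresses the survivors.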
Coupled Hydrogeophysical Inversion and Hydrogeological Data Fusion
NASA Astrophysics Data System (ADS)
Cirpka, O. A.; Schwede, R. L.; Li, W.
2012-12-01
Tomographic geophysical monitoring methods give the opportunity to observe hydrogeological tests at higher spatial resolution than is possible with classical hydraulic monitoring tools. This has been demonstrated in a substantial number of studies in which electrical resistivity tomography (ERT) has been used to monitor salt-tracer experiments. It is now accepted that inversion of such data sets requires a fully coupled framework, explicitly accounting for the hydraulic processes (groundwater flow and solute transport), the relationship between solute and geophysical properties (a petrophysical relationship such as Archie's law), and the governing equations of the geophysical surveying techniques (e.g., the Poisson equation) as a consistent coupled system. These data sets can be amended with data from other, more direct, hydrogeological tests to infer the distribution of hydraulic aquifer parameters. In the inversion framework, meaningful condensation of data not only contributes to inversion efficiency but also increases the stability of the inversion. In particular, transient concentration data themselves depend only weakly on hydraulic conductivity, and model improvement using gradient-based methods is only possible when a substantial agreement between measurements and model output already exists. The latter also holds when concentrations are monitored by ERT. Tracer arrival times, by contrast, show high sensitivity and a more monotonic dependence on hydraulic conductivity than the concentrations themselves. Thus, even without using temporal-moment generating equations, inverting travel times rather than concentrations or the related geoelectrical signals themselves is advantageous. We have applied this approach to concentrations measured directly or via ERT, and to heat-tracer data.
We present a consistent inversion framework including temporal moments of concentrations, geoelectrical signals obtained during salt-tracer tests, drawdown data from hydraulic tomography, and flowmeter measurements to identify mainly the hydraulic-conductivity distribution. By stating the inversion as a geostatistical conditioning problem, we obtain parameter sets together with their correlated uncertainty. While we have applied the quasi-linear geostatistical approach as the inverse kernel, other methods, such as ensemble Kalman methods, may suit the same purpose, particularly when many data points are to be included. In order to identify 3-D fields, discretized by about 50 million grid points, we use the high-performance-computing framework DUNE to solve the involved partial differential equations on a midrange computer cluster. We have quantified the worth of different data types in these inference problems. In practical applications, the constitutive relationships between geophysical, thermal, and hydraulic properties can pose a problem, requiring additional inversion. However, poorly constrained transient boundary conditions may call inversion efforts on larger (e.g., regional) scales even more into question. We envision that future hydrogeophysical inversion efforts will target boundary conditions, such as groundwater recharge rates, in conjunction with, or instead of, aquifer parameters. By this, the distinction between data assimilation and parameter estimation will gradually vanish.
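The travel-time quantity favored above is the first normalized temporal moment of a concentration breakthrough curve; a minimal sketch with a synthetic Gaussian breakthrough:

```python
import numpy as np

def mean_arrival_time(t, c):
    # First normalized temporal moment m1/m0 of a breakthrough curve c(t),
    # computed with the trapezoid rule.
    dt = np.diff(t)
    m0 = float(((c[1:] + c[:-1]) / 2.0 * dt).sum())
    tc = t * c
    m1 = float(((tc[1:] + tc[:-1]) / 2.0 * dt).sum())
    return m1 / m0

t = np.linspace(0.0, 100.0, 2001)            # seconds
c = np.exp(-0.5 * ((t - 40.0) / 5.0) ** 2)   # synthetic breakthrough at 40 s
t_bar = mean_arrival_time(t, c)
```

Condensing a noisy time series into one moment per location is the kind of data reduction the framework exploits for stability.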
Prediction-Correction Algorithms for Time-Varying Constrained Optimization
Simonetto, Andrea; Dall'Anese, Emiliano
2017-07-26
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
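A toy prediction-correction tracker in the spirit described above: a drift-extrapolation prediction followed by a few first-order gradient correction steps. The specific steps are illustrative choices of ours, not the paper's algorithm:

```python
import numpy as np

def track(grad_f, x0, times, alpha=0.3, n_corr=3):
    # Prediction: extrapolate the optimizer's recent motion (constant drift).
    # Correction: a few gradient steps on the cost at the new time instant.
    x_prev = x = np.asarray(x0, dtype=float)
    traj = []
    for t in times:
        pred = 2.0 * x - x_prev
        x_prev = x
        x = pred
        for _ in range(n_corr):
            x = x - alpha * grad_f(x, t)
        traj.append(x.copy())
    return np.array(traj)

times = np.linspace(0.0, 5.0, 51)
# Time-varying cost f(x; t) = 0.5 * (x - t)^2: the minimizer drifts as x*(t) = t.
traj = track(lambda x, t: x - t, np.array([0.0]), times)
```

Without the prediction step the tracker lags the drifting optimum by a constant offset; the extrapolation removes most of that lag.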
3D gravity inversion and uncertainty assessment of basement relief via Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Pallero, J. L. G.; Fernández-Martínez, J. L.; Bonvalot, S.; Fudym, O.
2017-04-01
Nonlinear gravity inversion in sedimentary basins is a classical problem in applied geophysics. Although a 2D approximation is widely used, 3D models have also been proposed to better take into account the basin geometry. A common nonlinear approach to this 3D problem consists of modeling the basin as a set of right rectangular prisms with prescribed density contrast, whose depths are the unknowns. The problem is then iteratively solved via local optimization techniques from an initial model computed using some simplifications or estimated from prior geophysical models. Nevertheless, this kind of approach is highly dependent on the prior information that is used and lacks a correct solution appraisal (nonlinear uncertainty analysis). In this paper, we use the family of global Particle Swarm Optimization (PSO) optimizers for 3D gravity inversion and appraisal of the solution adopted for basement relief estimation in sedimentary basins. Synthetic and real cases are illustrated, showing that robust results are obtained. PSO therefore seems to be a very good alternative for 3D gravity inversion and uncertainty assessment of basement relief when used in a sampling-while-optimizing approach. In that way, important geological questions can be answered probabilistically in order to perform risk assessment for the decisions that are made.
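A minimal global-best PSO sketch on a toy quadratic misfit; the inertia and acceleration coefficients are common textbook values, not the paper's settings, and the positions visited by the swarm could be collected for the sampling-while-optimizing appraisal described above:

```python
import random

def pso_minimize(f, n_dim, n_particles=30, iters=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5):
    # Minimal particle swarm optimizer (global-best topology)
    rng = random.Random(seed)
    X = [[rng.uniform(-5.0, 5.0) for _ in range(n_dim)]
         for _ in range(n_particles)]
    V = [[0.0] * n_dim for _ in range(n_particles)]
    P = [list(x) for x in X]          # personal-best positions
    Pf = [f(x) for x in X]            # personal-best misfits
    gi = Pf.index(min(Pf))
    g, gf = list(P[gi]), Pf[gi]       # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = list(X[i]), fx
                if fx < gf:
                    g, gf = list(X[i]), fx
    return g, gf

best, best_misfit = pso_minimize(lambda m: sum((x - 1.0) ** 2 for x in m), 4)
```

In a real basin inversion, each particle position would be a vector of prism depths and f the gravity data misfit.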
NASA Astrophysics Data System (ADS)
Juhojuntti, N. G.; Kamm, J.
2010-12-01
We present a layered-model approach to joint inversion of shallow seismic refraction and resistivity (DC) data, which we believe is a seldom-tested way of addressing the problem. This method has been developed because we believe that for shallow sedimentary environments (roughly <100 m depth) a model with a few layers and sharp layer boundaries represents the subsurface better than a smooth minimum-structure (grid) model. Because our model parameterization imposes a strong assumption on the subsurface, only a small number of well-resolved model parameters has to be estimated, and provided that this assumption holds, our method can also be applied to other environments. We use a least-squares inversion with lateral smoothness constraints, allowing lateral variations in the seismic velocity and the resistivity but no vertical variations. One exception is a positive gradient in the seismic velocity in the uppermost layer in order to obtain diving rays (the refractions in the deeper layers are modeled as head waves). We assume no connection between seismic velocity and resistivity, and these parameters are allowed to vary individually within the layers. The layer boundaries are, however, common to both parameters. During the inversion, lateral smoothing can be applied to the layer boundaries as well as to the seismic velocity and the resistivity. The number of layers is specified before the inversion, and typically we use models with three layers. Depending on the type of environment, it is possible to apply smoothing either to the depth of the layer boundaries or to the thickness of the layers, although the former is normally used for shallow sedimentary environments. The smoothing parameters can be chosen independently for each layer.
For the DC data we use a finite-difference algorithm to perform the forward modeling and to calculate the Jacobian matrix, while for the seismic data the corresponding entities are retrieved via ray-tracing, using components from the RAYINVR package. The modular layout of the code makes it straightforward to include other types of geophysical data, e.g., gravity. The code has been tested using synthetic examples with fairly simple 2D geometries, mainly to check the validity of the calculations. The inversion generally converges towards the correct solution, although there can be stability problems if the starting model is too inaccurate. We have also applied the code to field data from seismic refraction and multi-electrode resistivity measurements at typical sand-gravel groundwater reservoirs. The tests are promising, as the calculated depths agree fairly well with information from drilling, and the velocity and resistivity values appear reasonable. Current work includes better regularization of the inversion as well as defining individual weight factors for the different datasets, as the present algorithm tends to constrain the depths mainly by using the seismic data. More complex synthetic examples will also be tested, including models addressing the seismic hidden-layer problem.
Estimating surface acoustic impedance with the inverse method.
Piechowicz, Janusz
2011-01-01
Sound field parameters are predicted with numerical methods in sound control systems, in the acoustic design of buildings, and in sound field simulations. Those methods define the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques have been developed; one of them uses two microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary element method, in which estimating the acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics.
Sanz, E.; Voss, C.I.
2006-01-01
Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. 
For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only concentration observations. Permeability, freshwater inflow, solute molecular diffusivity, and porosity can be estimated with roughly equivalent confidence using observations of only the logarithm of concentration. Furthermore, covariance analysis allows a logical reduction of the number of estimated parameters for ill-posed inverse seawater intrusion problems. Ill-posed problems may exhibit poor estimation convergence, have a non-unique solution, have multiple minima, or require excessive computational effort, and the condition often occurs when estimating too many or co-dependent parameters. For the Henry problem, such analysis allows selection of the two parameters that control system physics from among all possible system parameters. © 2005 Elsevier Ltd. All rights reserved.
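The covariance analysis described above rests on the standard linearized estimator covariance, Cov(p) ≈ σ²(JᵀJ)⁻¹, where J is the sensitivity (Jacobian) matrix of the observations with respect to the parameters. A toy sketch (not the Henry-problem model itself; the linear "head" model and sensor positions are illustrative) shows how observation placement alone changes the parameter variance:

```python
import numpy as np

# Linearized covariance analysis (sketch): for a model d = f(p) observed
# with independent Gaussian errors of variance sigma^2, the least-squares
# parameter estimates have covariance ~ sigma^2 * (J^T J)^{-1},
# where J[i, j] = d f_i / d p_j is the sensitivity matrix.

def estimate_covariance(J, sigma=1.0):
    return sigma**2 * np.linalg.inv(J.T @ J)

# toy model: head h(x) = a + b*x observed at sensor positions xs;
# parameters p = (a, b), so the sensitivities are 1 and x.
def jacobian(xs):
    return np.column_stack([np.ones_like(xs), xs])

near = estimate_covariance(jacobian(np.array([0.1, 0.2, 0.3])))
far = estimate_covariance(jacobian(np.array([0.1, 5.0, 10.0])))
# spreading sensors far from the boundary reduces the slope variance
```

This is the same mechanism by which measuring pressure far from the coast reduces estimation variance in the abstract: observation locations enter the covariance only through the sensitivity matrix.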
An Inverse Neural Controller Based on the Applicability Domain of RBF Network Models
Alexandridis, Alex; Stogiannos, Marios; Papaioannou, Nikolaos; Zois, Elias; Sarimveis, Haralambos
2018-01-01
This paper presents a novel generic methodology for controlling nonlinear systems, using inverse radial basis function neural network models, which may combine diverse data originating from various sources. The algorithm starts by applying the particle swarm optimization-based non-symmetric variant of the fuzzy means (PSO-NSFM) algorithm so that an approximation of the inverse system dynamics is obtained. PSO-NSFM offers models of high accuracy combined with small network structures. Next, the applicability domain concept is suitably tailored and embedded into the proposed control structure in order to ensure that extrapolation is avoided in the controller predictions. Finally, an error correction term, estimating the error produced by the unmodeled dynamics and/or unmeasured external disturbances, is included in the control scheme to increase robustness. The resulting controller guarantees bounded input-bounded state (BIBS) stability for the closed-loop system when the open-loop system is BIBS stable. The proposed methodology is evaluated on two different control problems, namely, the control of an experimental armature-controlled direct current (DC) motor and the stabilization of a highly nonlinear simulated inverted pendulum. For each of these problems, appropriate case studies are tested, in which a conventional neural controller employing inverse models and a PID controller are also applied. The results reveal the ability of the proposed control scheme to handle and manipulate diverse data through a data fusion approach and illustrate the superiority of the method in terms of faster and less oscillatory responses. PMID:29361781
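The combination of an inverse RBF model with an applicability-domain check can be sketched as follows. This is a deliberately crude stand-in, not PSO-NSFM: centers are the training points, widths are fixed, weights come from linear least squares, and the applicability domain is approximated by a nearest-neighbour distance threshold.

```python
import numpy as np

# Sketch of an inverse RBF model with a crude applicability-domain (AD)
# check: predictions are flagged as extrapolation when the query is
# farther than ad_radius from every training input.

class RBFInverseModel:
    def __init__(self, X, u, width=0.5, ad_radius=0.3):
        self.X, self.width, self.ad_radius = X, width, ad_radius
        self.w, *_ = np.linalg.lstsq(self._phi(X), u, rcond=None)

    def _phi(self, Xq):                      # Gaussian kernel matrix
        d2 = ((Xq[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.width**2))

    def predict(self, Xq):
        """Return (u_hat, in_domain); in_domain=False means extrapolation."""
        d2 = ((Xq[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        in_domain = np.sqrt(d2).min(axis=1) <= self.ad_radius
        return self._phi(Xq) @ self.w, in_domain

# inverse of a toy static plant y = u^3: learn u = y^(1/3) from samples
Y = np.linspace(-1, 1, 60)[:, None]          # plant outputs (model inputs)
U = np.cbrt(Y[:, 0])                         # plant inputs (model targets)
model = RBFInverseModel(Y, U, width=0.2, ad_radius=0.1)
u_hat, ok = model.predict(np.array([[0.5], [5.0]]))  # 5.0 is extrapolation
```

The point of the AD flag is exactly the one made in the abstract: an inverse model queried outside the region covered by its training data cannot be trusted, so the controller must detect and avoid that situation.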
Parameter estimation using meta-heuristics in systems biology: a comprehensive review.
Sun, Jianyong; Garibaldi, Jonathan M; Hodgman, Charlie
2012-01-01
This paper gives a comprehensive review of the application of meta-heuristics to optimization problems in systems biology, mainly focussing on the parameter estimation problem (also called the inverse problem or model calibration). It is intended for either the systems biologist who wishes to learn more about the various optimization techniques available and/or the meta-heuristic optimizer who is interested in applying such techniques to problems in systems biology. First, the parameter estimation problems emerging from different areas of systems biology are described from the point of view of machine learning. Brief descriptions of various meta-heuristics developed for these problems follow, along with outlines of their advantages and disadvantages. Several important issues in applying meta-heuristics to the systems biology modelling problem are addressed, including the reliability and identifiability of model parameters, optimal design of experiments, and so on. Finally, we highlight some possible future research directions in this field.
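A representative meta-heuristic from the class the review covers is differential evolution (DE). The sketch below is a minimal generic DE, not any package's implementation, fitting the two parameters of a toy kinetic model y(t) = A·exp(-k·t) to noisy data; model, bounds, and control parameters are illustrative.

```python
import numpy as np

# Minimal differential evolution (rand/1/bin) for parameter estimation:
# mutate with a scaled difference of two population members, crossover
# with the current member, keep the trial if it lowers the cost.

def differential_evolution(cost, bounds, pop=30, gens=200, F=0.7, CR=0.9, seed=1):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    f = np.array([cost(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            trial = np.clip(a + F * (b - c), lo, hi)        # mutation
            cross = rng.random(len(lo)) < CR                # crossover mask
            trial = np.where(cross, trial, X[i])
            ft = cost(trial)
            if ft < f[i]:                                   # greedy selection
                X[i], f[i] = trial, ft
    return X[np.argmin(f)]

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 50)
y_obs = 2.0 * np.exp(-0.8 * t) + 0.01 * rng.standard_normal(t.size)
cost = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - y_obs) ** 2)
A_hat, k_hat = differential_evolution(cost, np.array([[0.0, 5.0], [0.0, 5.0]]))
```

In real systems-biology calibration the cost evaluation would integrate an ODE model, and issues the review raises (identifiability, experiment design) dominate; the optimizer skeleton itself stays this simple.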
2017-01-01
Objective Electrical Impedance Tomography (EIT) is a powerful non-invasive technique for imaging applications. The goal is to estimate the electrical properties of living tissues by measuring the potential at the boundary of the domain. Being safe for the patient and having no known hazards, EIT is an attractive and promising technology. However, it suffers from a particular technical difficulty, which consists of solving a nonlinear inverse problem in real time. Several nonlinear approaches have been proposed as a replacement for the linear solver, but in practice very few are capable of stable, high-quality, and real-time EIT imaging, because of their very low robustness to errors and inaccurate modeling, or because they require considerable computational effort. Methods In this paper, a post-processing technique based on an artificial neural network (ANN) is proposed to obtain a nonlinear solution to the inverse problem, starting from a linear solution. While common reconstruction methods based on ANNs estimate the solution directly from the measured data, the method proposed here enhances the solution obtained from a linear solver. Conclusion Applying a linear reconstruction algorithm before applying an ANN reduces the effects of noise and modeling errors. Hence, this approach significantly reduces the error associated with solving 2D inverse problems using machine-learning-based algorithms. Significance This work presents radical enhancements in the stability of nonlinear methods for biomedical EIT applications. PMID:29206856
Methodes entropiques appliquees au probleme inverse en magnetoencephalographie
NASA Astrophysics Data System (ADS)
Lapalme, Ervig
2005-07-01
This thesis is devoted to biomagnetic source localization using magnetoencephalography. This problem is known to have an infinite number of solutions, so methods are required that take anatomical and functional information about the solution into account. The work presented in this thesis uses the maximum entropy on the mean method to constrain the solution. This method originates from statistical mechanics and information theory. The thesis is divided into two main parts containing three chapters each. The first part reviews the magnetoencephalographic inverse problem: the theory needed to understand its context and the hypotheses for simplifying the problem. In the last chapter of this first part, the maximum entropy on the mean method is presented: its origins are explained, as is the way it is applied to our problem. The second part is the original work of this thesis, presenting three articles; one of them already published and two others submitted for publication. In the first article, a biomagnetic source model is developed and applied in a theoretical context, demonstrating the efficiency of the method. In the second article, we go one step further towards a realistic modelling of the cerebral activation. The main priors are estimated using the magnetoencephalographic data. This method proved to be very efficient in realistic simulations. In the third article, the previous method is extended to deal with time signals, thus exploiting the excellent time resolution offered by magnetoencephalography. Compared with our previous work, the temporal method is applied to real magnetoencephalographic data coming from a somatotopy experiment, and the results agree with previous physiological knowledge about this kind of cognitive process.
A physiologically motivated sparse, compact, and smooth (SCS) approach to EEG source localization.
Cao, Cheng; Akalin Acar, Zeynep; Kreutz-Delgado, Kenneth; Makeig, Scott
2012-01-01
Here, we introduce a novel approach to the EEG inverse problem based on the assumption that the principal cortical sources of multi-channel EEG recordings are spatially sparse, compact, and smooth (SCS). To enforce these characteristics of solutions to the EEG inverse problem, we propose a correlation-variance model which factors a cortical source space covariance matrix into the product of a pre-given correlation coefficient matrix and the square root of the diagonal variance matrix learned from the data under a Bayesian learning framework. We tested the SCS method using simulated EEG data with various SNRs and applied it to a real ECoG data set. We compare the results of SCS to those of an established sparse Bayesian learning (SBL) algorithm.
NASA Astrophysics Data System (ADS)
Avdyushev, Victor A.
2017-12-01
Orbit determination from a small sample of observations over a very short observed orbital arc is a strongly nonlinear inverse problem. In such problems an evaluation of orbital uncertainty due to random observation errors is greatly complicated, since the linear estimations conventionally used are no longer acceptable for describing the uncertainty even as a rough approximation. Nevertheless, if an inverse problem is weakly intrinsically nonlinear, then one can resort to the so-called method of disturbed observations (aka observational Monte Carlo). Previously, we showed that the weaker the intrinsic nonlinearity, the more efficient the method, i.e., the more accurately it simulates the orbital uncertainty stochastically, while it is strictly exact only when the problem is intrinsically linear. As we ascertained experimentally, its efficiency is nonetheless higher than that of other stochastic methods widely applied in practice. In the present paper we investigate the intrinsic nonlinearity in complicated inverse problems of celestial mechanics when orbits are determined from poorly informative samples of observations, as typically occurs for recently discovered asteroids. To inquire into the question, we introduce an index of intrinsic nonlinearity. In asteroid problems this index shows that the intrinsic nonlinearity can be strong enough to affect probabilistic estimates appreciably, especially for the very short observed orbital arcs that asteroids cover in about a hundredth of an orbital period or less. As is known from regression analysis, the source of intrinsic nonlinearity is the nonflatness of the estimation subspace specified by a dynamical model in the observation space. Our numerical results indicate that when determining asteroid orbits it is actually very slight. However, in the parametric space the effect of intrinsic nonlinearity is exaggerated mainly by the ill-conditioning of the inverse problem. 
Even so, as for the method of disturbed observations, we conclude that in practice it should still be entirely acceptable for adequately describing the orbital uncertainty since, from a geometrical point of view, the efficiency of the method depends directly only on the nonflatness of the estimation subspace, and it improves as the nonflatness decreases.
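The method of disturbed observations can be illustrated on a toy nonlinear fit (a sinusoid standing in for an orbit model; the model, arc length, and noise level are illustrative, not an asteroid problem): fit the parameters once, then repeatedly re-fit noise-perturbed copies of the fitted observations and read the parameter uncertainty off the scatter of the re-fitted estimates.

```python
import numpy as np

# Observational Monte Carlo (sketch): estimate parameter uncertainty by
# re-fitting many noise-disturbed replicas of the observations.

def gauss_newton(t, y, p, iters=20):
    for _ in range(iters):
        A, w = p
        r = y - A * np.sin(w * t)
        J = np.column_stack([np.sin(w * t), A * t * np.cos(w * t)])
        p = p + np.linalg.lstsq(J, r, rcond=None)[0]
    return p

rng = np.random.default_rng(2)
t = np.linspace(0, 2, 40)                # a short observed arc
sigma = 0.05
y = 1.0 * np.sin(1.5 * t) + sigma * rng.standard_normal(t.size)
p_hat = gauss_newton(t, y, np.array([0.8, 1.2]))

# disturb the *fitted* observations and re-estimate, many times
samples = np.array([
    gauss_newton(t, p_hat[0] * np.sin(p_hat[1] * t)
                 + sigma * rng.standard_normal(t.size), p_hat)
    for _ in range(300)
])
spread = samples.std(axis=0)             # empirical parameter uncertainty
```

For a weakly intrinsically nonlinear problem this scatter closely reproduces the true error distribution; as the abstract notes, the approximation degrades as the nonflatness of the estimation subspace grows.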
Inverse methods for 3D quantitative optical coherence elasticity imaging (Conference Presentation)
NASA Astrophysics Data System (ADS)
Dong, Li; Wijesinghe, Philip; Hugenberg, Nicholas; Sampson, David D.; Munro, Peter R. T.; Kennedy, Brendan F.; Oberai, Assad A.
2017-02-01
In elastography, quantitative elastograms are desirable as they are system and operator independent. Such quantification also facilitates more accurate diagnosis, longitudinal studies and studies performed across multiple sites. In optical elastography (compression, surface-wave or shear-wave), quantitative elastograms are typically obtained by assuming some form of homogeneity. This simplifies data processing at the expense of smearing sharp transitions in elastic properties, and/or introducing artifacts in these regions. Recently, we proposed an inverse problem-based approach to compression OCE that does not assume homogeneity, and overcomes the drawbacks described above. In this approach, the difference between the measured and predicted displacement field is minimized by seeking the optimal distribution of elastic parameters. The predicted displacements and recovered elastic parameters together satisfy the constraint of the equations of equilibrium. This approach, which has been applied in two spatial dimensions assuming plane strain, has yielded accurate material property distributions. Here, we describe the extension of the inverse problem approach to three dimensions. In addition to the advantage of visualizing elastic properties in three dimensions, this extension eliminates the plane strain assumption and is therefore closer to the true physical state. It does, however, incur greater computational costs. We address this challenge through a modified adjoint problem, spatially adaptive grid resolution, and three-dimensional decomposition techniques. Through these techniques the inverse problem is solved on a typical desktop machine within a wall clock time of 20 hours. We present the details of the method and quantitative elasticity images of phantoms and tissue samples.
On the inversion of geodetic integrals defined over the sphere using 1-D FFT
NASA Astrophysics Data System (ADS)
García, R. V.; Alejo, C. A.
2005-08-01
An iterative method is presented which performs inversion of integrals defined over the sphere. The method is based on one-dimensional fast Fourier transform (1-D FFT) inversion and is implemented with the projected Landweber technique, which is used to solve constrained least-squares problems while reducing the associated 1-D cyclic-convolution error. The results obtained are as precise as the direct matrix inversion approach, but with better computational efficiency. A case study uses the inversion of Hotine’s integral to obtain gravity disturbances from geoid undulations. Numerical convergence is also analyzed, and comparisons with respect to the direct matrix inversion method using conjugate gradient (CG) iteration are presented. As with the CG method, the number of iterations needed to reach the optimum (i.e., smallest) error decreases as the measurement noise increases. Nevertheless, for discrete data given over a whole parallel band, the method can be applied directly without the projected Landweber technique, since no cyclic-convolution error exists.
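The projected Landweber iteration with FFT-based products can be sketched on a generic 1-D cyclic deconvolution problem (a Gaussian smoothing kernel standing in for the geodetic integral kernel): x ← P[x + τ·Hᵀ(b − Hx)], where H applies cyclic convolution via the FFT and P projects onto the non-negativity constraint.

```python
import numpy as np

# Projected Landweber iteration for a cyclic-convolution inverse problem
# b = h (*) x with a non-negativity constraint; all operator products are
# carried out with the 1-D FFT.

def projected_landweber(b, h, iters=500):
    H = np.fft.fft(h)
    tau = 1.0 / np.max(np.abs(H)) ** 2       # step size < 2 / sigma_max^2
    x = np.zeros_like(b)
    for _ in range(iters):
        r = b - np.real(np.fft.ifft(H * np.fft.fft(x)))            # residual
        x = x + tau * np.real(np.fft.ifft(np.conj(H) * np.fft.fft(r)))
        x = np.maximum(x, 0.0)               # projection onto the constraint
    return x

n = 128
h = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)  # smoothing kernel
h /= h.sum()
x_true = np.zeros(n); x_true[40] = 1.0; x_true[80] = 0.5
b = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x_true)))
x_rec = projected_landweber(b, h)
resid = np.linalg.norm(np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x_rec))) - b)
```

Each iteration costs a few O(n log n) FFTs instead of an O(n²) matrix-vector product, which is the efficiency advantage the abstract reports over direct matrix inversion.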
NASA Astrophysics Data System (ADS)
Zhdanov, M. S.; Cuma, M.; Black, N.; Wilson, G. A.
2009-12-01
The marine controlled source electromagnetic (MCSEM) method has become widely used in offshore oil and gas exploration. Interpretation of MCSEM data is still a very challenging problem, especially if one would like to take into account the realistic 3D structure of the subsurface. The inversion of MCSEM data is complicated by the fact that the EM response of a hydrocarbon-bearing reservoir is very weak in comparison with the background EM fields generated by an electric dipole transmitter in complex geoelectrical structures formed by a conductive sea-water layer and the terranes beneath it. In this paper, we present a review of the recent developments in the area of large-scale 3D EM forward modeling and inversion. Our approach is based on using a new integral form of Maxwell’s equations allowing for an inhomogeneous background conductivity, which results in a numerically effective integral representation for the 3D EM field. This representation provides an efficient tool for the solution of 3D EM inverse problems. To obtain a robust inverse model of the conductivity distribution, we apply regularization based on a focusing stabilizing functional which allows for the recovery of models with both smooth and sharp geoelectrical boundaries. The method is implemented in a fully parallel computer code, which makes it possible to run large-scale 3D inversions on grids with millions of inversion cells. This new technique can be effectively used for active EM detection and monitoring of subsurface targets.
Regularized minimum I-divergence methods for the inverse blackbody radiation problem
NASA Astrophysics Data System (ADS)
Choi, Kerkil; Lanterman, Aaron D.; Shin, Jaemin
2006-08-01
This paper proposes iterative methods for estimating the area temperature distribution of a blackbody from its total radiated power spectrum measurements. This is called the inverse blackbody radiation problem. This problem is inherently ill-posed due to the characteristics of the kernel in the underlying integral equation given by Planck's law. The functions involved in the problem are all non-negative. Csiszár's I-divergence is an information-theoretic discrepancy measure between two non-negative functions. We derive iterative methods for minimizing Csiszár's I-divergence between the measured power spectrum and the power spectrum arising from the estimate according to the integral equation. Due to the ill-posedness of the problem, unconstrained algorithms often produce poor estimates, especially when the measurements are corrupted by noise. To alleviate this difficulty, we apply regularization methods to our algorithms. Penalties based on Shannon's entropy, the L1-norm and Good's roughness are chosen to suppress the undesirable artefacts. When a penalty is applied, the pertinent optimization that needs to be performed at each iteration is no longer trivial. In particular, Good's roughness causes couplings between estimate components. To handle this issue, we adapt Green's one-step-late method. This choice is based on the important fact that our minimum I-divergence algorithms can be interpreted as asymptotic forms of certain expectation-maximization algorithms. The effectiveness of our methods is illustrated via various numerical experiments.
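The unregularized minimum I-divergence iteration has a simple multiplicative (EM-like) form. The sketch below discretizes the Planck-law kernel on illustrative frequency and temperature grids and applies the update a ← a · (Kᵀ(m / Ka)) / (Kᵀ1); this is a sketch of the unpenalized iteration only, without the entropy, L1, or Good's-roughness regularization the paper adds.

```python
import numpy as np

# Minimum I-divergence iteration (sketch) for the inverse blackbody
# problem: measured power spectrum m(nu) = sum_T K(nu, T) * a(T), with
# K given by Planck's law; estimate the non-negative a(T) by the
# multiplicative update  a_j <- a_j * sum_i K_ij m_i/(Ka)_i / sum_i K_ij.

h_, c_, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(nu, T):
    return 2 * h_ * nu**3 / c_**2 / np.expm1(h_ * nu / (kB * T))

nu = np.linspace(1e13, 3e14, 200)        # frequency grid [Hz]
T = np.linspace(300.0, 1500.0, 60)       # temperature grid [K]
K = planck(nu[:, None], T[None, :])
K /= K.max()                             # scale for numerical conditioning

a_true = np.exp(-0.5 * ((T - 800.0) / 100.0) ** 2)   # area-temperature dist.
m = K @ a_true                           # noise-free measured spectrum

a = np.ones_like(T)                      # positive initial estimate
for _ in range(2000):
    a *= (K.T @ (m / (K @ a))) / K.sum(axis=0)
fit_err = np.linalg.norm(K @ a - m) / np.linalg.norm(m)
```

The update preserves positivity automatically, which is why I-divergence is a natural discrepancy measure here; without the penalties discussed in the paper, however, the recovered a(T) degrades quickly once noise is added to m.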
NASA Astrophysics Data System (ADS)
Fukuda, J.; Johnson, K. M.
2009-12-01
Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is no strong a priori information on the geometry of the fault that produced the earthquake, and the data are not always sufficient to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through a combination of analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. 
We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress drop.
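The mixed linear/nonlinear strategy can be sketched on a toy problem (not a fault model; the Gaussian "Green's functions" and the single geometry parameter are illustrative): sample the nonlinear geometry parameter with Metropolis-Hastings, and at each proposal solve for the linear "slip" analytically by least squares before evaluating the likelihood. Using the profile (best-fit) slip rather than fully marginalizing it is an additional simplification here.

```python
import numpy as np

# Sketch of mixed linear/nonlinear Bayesian inversion: MCMC over a
# geometry parameter theta, with the linear parameters m(theta) solved
# analytically inside the likelihood evaluation.

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 60)                      # "surface station" positions

def greens(theta):                             # columns depend on geometry
    return np.column_stack([np.exp(-((x - 0.3) ** 2) / theta),
                            np.exp(-((x - 0.7) ** 2) / theta)])

theta_true, m_true, sigma = 0.02, np.array([1.0, 0.6]), 0.01
d = greens(theta_true) @ m_true + sigma * rng.standard_normal(x.size)

def log_like(theta):
    G = greens(theta)
    m, *_ = np.linalg.lstsq(G, d, rcond=None)  # analytic linear step
    return -0.5 * np.sum((d - G @ m) ** 2) / sigma**2

chain, theta = [], 0.05                        # start away from the truth
ll = log_like(theta)
for _ in range(4000):
    prop = theta + 0.005 * rng.standard_normal()
    if prop > 0:
        llp = log_like(prop)
        if np.log(rng.random()) < llp - ll:    # Metropolis accept/reject
            theta, ll = prop, llp
    chain.append(theta)
post = np.array(chain[1000:])                  # discard burn-in
```

Splitting the problem this way keeps the Monte Carlo search low-dimensional (geometry only), which is what makes sampling the joint slip-geometry posterior tractable in the full method.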
Noncolocated Structural Vibration Suppression Using Zero Annihilation Periodic Control
NASA Technical Reports Server (NTRS)
Bayard, David S.; Boussalis, Dhemetrios
1993-01-01
The Zero Annihilation Periodic (ZAP) controller is applied to the problem of vibration control of a noncolocated flexible structure. It is shown that even though the transfer function is nonminimum-phase, a plant inverse controller can be designed which elicits a deadbeat closed-loop response.
Variational methods for direct/inverse problems of atmospheric dynamics and chemistry
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena
2013-04-01
We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. It is important that accurate matching of numerical schemes be provided in the chain of objects: direct/adjoint problems - sensitivity relations - inverse problems, including assimilation of all available measurement data. To solve the problems we have developed a new enhanced set of cost-effective algorithms. The matched description of the multi-scale processes is provided by a specific choice of the variational principle functionals for the whole set of integrated models. All functionals of the variational principle are then approximated in space and time by splitting and decomposition methods. This approach allows us to consider separately, for example, the space-time problems of atmospheric chemistry within decomposition schemes for the integral identity sum analogs of the variational principle at each time step and in each 3D finite volume. To enhance efficiency, the set of chemical reactions is divided into subsets related to the operators of production and destruction. The idea of Euler's integrating factors is then applied within the local adjoint problem technique [1]-[3]. The analytical solutions of such adjoint problems play the role of integrating factors for the differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed into an equivalent system of integral equations. As a result we avoid the construction and inversion of preconditioning operators containing the Jacobian matrices which arise in traditional implicit schemes for ODE solution. This is the main advantage of our schemes. At the same time step, but at different stages of the "global" splitting scheme, the system of atmospheric dynamic equations is solved. 
For the convection-diffusion equations for all state functions in the integrated models we have developed monotone and stable discrete-analytical numerical schemes [1]-[3] that conserve the positivity of the chemical substance concentrations and possess the properties of energy and mass balance postulated in the general variational principle for integrated models. All algorithms for the solution of transport, diffusion and transformation problems are direct (without iterations). The work is partially supported by Programs No. 4 of the Presidium of RAS and No. 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187, and by Integrating Projects No. 8 and 35 of SD RAS. Our studies are in line with the goals of COST Action ES1004. References: [1] Penenko V., Tsvetova E. Discrete-analytical methods for the implementation of variational principles in environmental applications // Journal of Computational and Applied Mathematics, 2009, v. 226, pp. 319-330. [2] Penenko A.V. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods // Numerical Analysis and Applications, 2012, v. 5, pp. 326-341. [3] Penenko V., Tsvetova E. Variational methods for constructing the monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013 (in press).
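The integrating-factor idea for a production-destruction chemistry step can be illustrated on the scalar case (a one-species sketch, not the authors' full scheme): for dc/dt = P − D·c with P, D > 0 frozen over a step Δt, the update c(t+Δt) = c·e^(−DΔt) + (P/D)(1 − e^(−DΔt)) is exact, unconditionally positive, and needs no Jacobian inversion.

```python
import numpy as np

# Integrating-factor (exact) step for a production-destruction ODE
#   dc/dt = P - D*c,  P, D > 0 frozen over the step:
#   c(t+dt) = c(t)*exp(-D*dt) + (P/D)*(1 - exp(-D*dt))
# Positivity is preserved for any dt, with no implicit solve required.

def pd_step(c, P, D, dt):
    e = np.exp(-D * dt)
    return c * e + (P / D) * (1.0 - e)

# compose many steps and compare with the analytic solution
P, D, c0, dt, n = 3.0, 2.0, 0.1, 0.25, 40
c = c0
for _ in range(n):
    c = pd_step(c, P, D, dt)
c_exact = P / D + (c0 - P / D) * np.exp(-D * dt * n)
```

For a real mechanism, P and D become state-dependent operators evaluated per species at each splitting stage, but the same closed-form update replaces the Jacobian-based implicit solve, which is the advantage claimed above.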
A simulation based method to assess inversion algorithms for transverse relaxation data
NASA Astrophysics Data System (ADS)
Ghosh, Supriyo; Keener, Kevin M.; Pan, Yong
2008-04-01
NMR relaxometry is a very useful tool for understanding various chemical and physical phenomena in complex multiphase systems. A Carr-Purcell-Meiboom-Gill (CPMG) [P.T. Callaghan, Principles of Nuclear Magnetic Resonance Microscopy, Clarendon Press, Oxford, 1991] experiment is an easy and quick way to obtain the transverse relaxation constant (T2) in low field. Most samples have a distribution of T2 values, and extracting this distribution from the noisy decay data is essentially an ill-posed inverse problem. Various inversion approaches have been used to solve this problem to date. A major issue in using an inversion algorithm is determining how accurate the computed distribution is. A systematic analysis of the inversion algorithm UPEN [G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data, Journal of Magnetic Resonance 132 (1998) 65-77; G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data II. Data spacing, T2 data, systematic data errors, and diagnostics, Journal of Magnetic Resonance 147 (2000) 273-285] was performed by means of simulated CPMG data generation. Through our simulation technique and statistical analyses, the effects of various experimental parameters on the computed distribution were evaluated. We assessed convergence to the true distribution by comparing inversion results for series of noise-free and noisy simulated decay data. In addition to the simulation studies, the same approach was also applied to real experimental data to support the simulation results.
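The simulation-based assessment loop can be sketched as follows: generate a synthetic CPMG decay from a known T2 distribution, add noise, invert, and compare. The inversion below is a generic non-negative Tikhonov solve by projected gradient, standing in for UPEN (whose uniform-penalty scheme is not reproduced here); the grids, noise level, and regularization weight are illustrative.

```python
import numpy as np

# Simulated CPMG decay and a simple non-negative, penalized inversion
# for the T2 distribution f:  y = K f + noise,  K_ij = exp(-t_i / T2_j).

t = np.arange(1, 501) * 2e-3                      # echo times [s]
T2 = np.logspace(-3, 0, 80)                       # T2 grid [s]
K = np.exp(-t[:, None] / T2[None, :])             # multiexponential kernel

rng = np.random.default_rng(4)
f_true = (np.exp(-0.5 * ((np.log10(T2) + 2.0) / 0.1) ** 2)
          + 0.7 * np.exp(-0.5 * ((np.log10(T2) + 0.7) / 0.1) ** 2))
y = K @ f_true + 0.001 * rng.standard_normal(t.size)

# Tikhonov-regularized non-negative least squares by projected gradient
lam = 1e-3
A = K.T @ K + lam * np.eye(T2.size)
b = K.T @ y
step = 1.0 / np.linalg.norm(A, 2)
f = np.zeros_like(T2)
for _ in range(20000):
    f = np.maximum(f + step * (b - A @ f), 0.0)   # gradient step + projection
misfit = np.linalg.norm(K @ f - y) / np.linalg.norm(y)
```

Repeating this over many noise realizations and experimental settings (echo spacing, number of echoes, SNR) gives exactly the kind of statistical picture of inversion accuracy the abstract describes.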
Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.
Dosso, Stan E; Nielsen, Peter L
2002-01-01
This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.
NASA Astrophysics Data System (ADS)
Shirzaei, M.; Walter, T. R.
2009-10-01
Modern geodetic techniques provide valuable and near real-time observations of volcanic activity. Characterizing the source of deformation based on these observations has become of major importance in related monitoring efforts. We investigate two random search approaches, simulated annealing (SA) and a genetic algorithm (GA), and utilize them in an iterated manner. The iterated approach helps to prevent GA in general and SA in particular from getting trapped in local minima, and it also increases redundancy for exploring the search space. We apply a statistical competency test for estimating the confidence interval of the inversion source parameters, considering their internal interaction through the model, the effect of the model deficiency, and the observational error. Here, we present and test this new randomly iterated search and statistical competency (RISC) optimization method together with GA and SA for the modeling of data associated with volcanic deformations. Following synthetic and sensitivity tests, we apply the improved inversion techniques to two episodes of activity in the Campi Flegrei volcanic region in Italy, observed by the interferometric synthetic aperture radar technique. Inversion of these data allows derivation of deformation source parameters and their associated quality so that we can compare the two inversion methods. The RISC approach was found to be an efficient method in terms of computation time and search results and may be applied to other optimization problems in volcanic and tectonic environments.
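The benefit of iterating a random search can be sketched with plain simulated annealing restarted several times on a multimodal misfit surface (a toy test function, not a volcanic source model; the schedule and step size are illustrative, and the RISC statistical competency test is not reproduced):

```python
import numpy as np

# Iterated simulated annealing (sketch): restart SA from random points and
# keep the best result, reducing the chance of a single run staying
# trapped in a local minimum.

def sa(cost, x0, rng, T0=1.0, cooling=0.995, steps=2000):
    x, fx = x0, cost(x0)
    best, fbest, T = x, fx, T0
    for _ in range(steps):
        xp = x + 0.3 * rng.standard_normal(x.size)   # random move
        fp = cost(xp)
        # accept downhill always, uphill with Boltzmann probability
        if fp < fx or rng.random() < np.exp(-(fp - fx) / T):
            x, fx = xp, fp
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling                                  # cool the temperature
    return best, fbest

# Rastrigin-like multimodal cost with global minimum 0 at the origin
cost = lambda x: np.sum(x**2 + 2.0 * (1.0 - np.cos(3.0 * np.pi * x)))
rng = np.random.default_rng(5)
results = [sa(cost, rng.uniform(-3, 3, 2), rng) for _ in range(10)]
fbest = min(f for _, f in results)
```

In the full method, the ensemble of restarts also feeds the statistical competency analysis that turns repeated solutions into confidence intervals on the source parameters.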
NASA Astrophysics Data System (ADS)
Sun, Jiajia; Li, Yaoguo
2017-02-01
Joint inversion that simultaneously inverts multiple geophysical data sets to recover a common Earth model is increasingly being applied to exploration problems. Petrophysical data can serve as an effective constraint to link different physical property models in such inversions. There are two challenges, among others, associated with the petrophysical approach to joint inversion. One is related to the multimodality of petrophysical data because there often exist more than one relationship between different physical properties in a region of study. The other challenge arises from the fact that petrophysical relationships have different characteristics and can exhibit point, linear, quadratic, or exponential forms in a crossplot. The fuzzy c-means (FCM) clustering technique is effective in tackling the first challenge and has been applied successfully. We focus on the second challenge in this paper and develop a joint inversion method based on variations of the FCM clustering technique. To account for the specific shapes of petrophysical relationships, we introduce several different fuzzy clustering algorithms that are capable of handling different shapes of petrophysical relationships. We present two synthetic and one field data examples and demonstrate that, by choosing appropriate distance measures for the clustering component in the joint inversion algorithm, the proposed joint inversion method provides an effective means of handling common petrophysical situations we encounter in practice. The jointly inverted models have both enhanced structural similarity and increased petrophysical correlation, and better represent the subsurface in the spatial domain and the parameter domain of physical properties.
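The clustering component the paper builds on can be sketched with plain fuzzy c-means (FCM) on a synthetic two-trend "crossplot" of physical properties. This sketch uses the standard point-cluster Euclidean form; the paper's contribution is precisely to swap that distance for measures suited to linear, quadratic, or exponential petrophysical trends, which is not reproduced here.

```python
import numpy as np

# Standard fuzzy c-means: memberships u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1)),
# centers v_k = sum_i u_ik^m x_i / sum_i u_ik^m, alternated until stable.

def fcm(X, n_clusters, m=2.0, iters=100, seed=6):
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), n_clusters, replace=False)]  # initial centers
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                         axis=2)                          # fuzzy memberships
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]          # center update
    return U, V

# synthetic crossplot: two property clusters (e.g. density vs. log-resistivity)
rng = np.random.default_rng(0)
A = rng.normal([0.0, 0.0], 0.1, size=(100, 2))
B = rng.normal([1.0, 1.0], 0.1, size=(100, 2))
U, V = fcm(np.vstack([A, B]), n_clusters=2)
V = V[np.argsort(V[:, 0])]          # order centers for a stable comparison
```

In the joint inversion, a term built from these memberships and centers is added to the objective function, pulling the recovered physical-property pairs toward the petrophysical trends during the inversion iterations.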
NASA Astrophysics Data System (ADS)
Zhang, Junwei
I built parts-based and manifold-based mathematical learning models for the geophysical inverse problem and applied this approach to two problems. One is related to the detection of the oil-water encroachment front during the water flooding of an oil reservoir. In this application, I propose a new 4D inversion approach based on the Gauss-Newton method to invert time-lapse cross-well resistance data. The goal of this study is to image the position of the oil-water encroachment front in a heterogeneous clayey sand reservoir. This approach is based on explicitly connecting the change of resistivity to the petrophysical properties controlling the position of the front (porosity and permeability) and to the saturation of the water phase through a petrophysical resistivity model accounting for bulk and surface conductivity contributions and saturation. The distributions of the permeability and porosity are also inverted using the time-lapse resistivity data in order to better reconstruct the position of the oil-water encroachment front. In our synthetic test case, we obtain a better position of the front, with porosity and permeability inferred as by-products near the flow trajectory and close to the wells. The numerical simulations show that the position of the front is recovered well but the distribution of the recovered porosity and permeability is only fair. A comparison with a commercial code based on a classical Gauss-Newton approach with no information provided by the two-phase flow model fails to recover the position of the front. The new approach could also be used for the time-lapse monitoring of various processes in both geothermal fields and oil and gas reservoirs using a combination of geophysical methods. A paper has been published in Geophysical Journal International on this topic, and I am the first author of this paper. 
The second application is related to the detection of geological facies boundaries and their deformation to satisfy geophysical data and prior distributions. We pose the geophysical inverse problem in terms of Gaussian random fields with mean functions controlled by petrophysical relationships and covariance functions controlled by a prior geological cross-section, including the definition of spatial boundaries for the geological facies. The petrophysical relationship problem is formulated as a regression problem upon each facies. The inversion is performed in a Bayesian framework. We demonstrate the usefulness of this strategy using a first synthetic case study, performing a joint inversion of gravity and galvanometric resistivity data with the stations all located at the ground surface. The joint inversion is used to recover the density and resistivity distributions of the subsurface. In a second step, we consider the possibility that the facies boundaries are deformable, and their shapes are inverted as well. We use the level set approach to deform the facies boundaries while preserving prior topological properties of the facies throughout the inversion. With the additional help of prior facies petrophysical relationships and the topological characteristics of each facies, we make posterior inferences about multiple geophysical tomograms based on their corresponding geophysical data misfits. The result of the inversion technique is encouraging when applied to a second synthetic case study, showing that we can recover the heterogeneities inside the facies, the mean values for the petrophysical properties, and, to some extent, the facies boundaries. A paper has been submitted to Geophysics on this topic, and I am the first author of this paper. During this thesis, I also worked on the time-lapse inversion problem of gravity data in collaboration with Marios Karaoulis, and a paper was published in Geophysical Journal International on this topic.
I also worked on the time-lapse inversion of cross-well geophysical data (seismic and resistivity) using both a structural approach named the cross-gradient approach and a petrophysical approach. A paper was published in Geophysics on this topic.
Automatic alignment for three-dimensional tomographic reconstruction
NASA Astrophysics Data System (ADS)
van Leeuwen, Tristan; Maretzke, Simon; Joost Batenburg, K.
2018-02-01
In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals for various angles and directions, explicit inverse formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately due to, e.g. calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply and we have to solve an additional inverse problem to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.
NASA Astrophysics Data System (ADS)
Arsenault, Louis-Francois; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.
We present a machine learning-based statistical regression approach to the inversion of Fredholm integrals of the first kind by studying an important example for the quantum materials community, the analytical continuation problem of quantum many-body physics. It involves reconstructing the frequency dependence of physical excitation spectra from data obtained at specific points in the complex frequency plane. The approach provides a natural regularization in cases where the inverse of the Fredholm kernel is ill-conditioned and yields robust error metrics. The stability of the forward problem permits the construction of a large database of input-output pairs. Machine learning methods applied to this database generate approximate solutions which are projected onto the subspace of functions satisfying relevant constraints. We show that for low input noise the method performs as well as or better than Maximum Entropy (MaxEnt) under standard error metrics, and is substantially more robust to noise. We expect the methodology to be similarly effective for any problem involving a formally ill-conditioned inversion, provided that the forward problem can be efficiently solved. AJM was supported by the Office of Science of the U.S. Department of Energy under Subcontract No. 3F-3138 and LFA by the Columbia University IDS-ROADS project, UR009033-05, which also provided partial support to RN and LH.
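The ill-conditioning of first-kind Fredholm kernels, and the regularization that any stable inversion must supply, can be illustrated on a discretized toy problem: a ridge (Tikhonov) solve stands in here for the implicit damping that the learned regression provides. This is an assumption-laden sketch, not the paper's method.

```python
import numpy as np

def ridge_invert(K, g, alpha=1e-4):
    """Tikhonov solution f = (K^T K + alpha I)^{-1} K^T g of g = K f."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g)

# Discretized smoothing kernel: a narrow Gaussian blur on [0, 1].
x = np.linspace(0.0, 1.0, 60)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.05 ** 2))
K /= K.sum(axis=1, keepdims=True)

f_true = np.sin(2 * np.pi * x)
rng = np.random.default_rng(0)
g = K @ f_true + 1e-3 * rng.normal(size=x.size)   # slightly noisy data

f_ridge = ridge_invert(K, g)          # regularized inverse: stable
f_naive = np.linalg.solve(K, g)       # direct inverse: amplifies the noise
```

Even at a noise level of 0.1%, the direct inverse is useless while the regularized solution tracks the true function, which is the qualitative point the paper makes about the learned regression.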
NASA Astrophysics Data System (ADS)
Volkov, D.
2017-12-01
We introduce an algorithm for the simultaneous reconstruction of faults and slip fields on those faults. We define a regularized functional to be minimized for the reconstruction. We prove that the minimum of that functional converges to the unique solution of the related fault inverse problem. Due to inherent uncertainties in measurements, rather than seeking a deterministic solution to the fault inverse problem, we consider a Bayesian approach. The advantage of such an approach is that we obtain a way of quantifying uncertainties as part of our final answer. On the downside, this Bayesian approach leads to a very large computation. To contend with the size of this computation we developed an algorithm for the numerical solution to the stochastic minimization problem which can be easily implemented on a parallel multi-core platform and we discuss techniques to save on computational time. After showing how this algorithm performs on simulated data and assessing the effect of noise, we apply it to measured data. The data was recorded during a slow slip event in Guerrero, Mexico.
A 2D forward and inverse code for streaming potential problems
NASA Astrophysics Data System (ADS)
Soueid Ahmed, A.; Jardani, A.; Revil, A.
2013-12-01
The self-potential method corresponds to the passive measurement of the electrical field in response to the occurrence of natural sources of current in the ground. One of these sources corresponds to the streaming current associated with the flow of the groundwater. We can therefore apply the self-potential method to recover non-intrusively some information regarding the groundwater flow. We first solve the forward problem starting with the solution of the groundwater flow problem, then computing the source current density, and finally solving a Poisson equation for the electrical potential. We use the finite-element method to solve the relevant partial differential equations. In order to reduce the number of (petrophysical) model parameters required to solve the forward problem, we introduce an effective charge density tensor of the pore water, which can be determined directly from the permeability tensor for neutral pore waters. The second aspect of our work concerns the inversion of the self-potential data using Tikhonov regularization with smoothness and depth-weighting constraints. This approach accounts for the distribution of the electrical resistivity, which can be independently and approximately determined from electrical resistivity tomography. A numerical code, SP2DINV, has been implemented in Matlab to perform both the forward and inverse modeling. Three synthetic case studies are discussed.
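The last step of the forward problem, the Poisson solve for the electrical potential, can be sketched with a 5-point finite-difference stencil. SP2DINV uses finite elements and a heterogeneous resistivity; uniform conductivity, a unit grid spacing, and zero-potential (Dirichlet) boundaries are assumed in this simplified stand-in.

```python
import numpy as np

def solve_poisson_2d(source, h=1.0):
    """Solve -Laplacian(phi) = source on an n x n interior grid, phi = 0
    on the boundary, by assembling the dense 5-point system."""
    n = source.shape[0]
    N = n * n
    A = np.zeros((N, N))
    b = source.ravel() * h * h
    for i in range(n):
        for j in range(n):
            k = i * n + j
            A[k, k] = 4.0                      # center of the stencil
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[k, ii * n + jj] = -1.0   # interior neighbors
    return np.linalg.solve(A, b).reshape(n, n)
```

For a positive point source (e.g. a localized streaming-current divergence) the discrete maximum principle gives a nonnegative potential peaking at the source, which is a quick sanity check on the solver.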
Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method
NASA Astrophysics Data System (ADS)
Sun, Yong; Meng, Zhaohai; Li, Fengting
2018-03-01
Following the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can be acquired on airborne and marine platforms. Large-scale geophysical data can be obtained using these methods, placing such data sets in the "big data" category. Therefore, a fast and effective inversion method is developed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate FTG data inversion, such as the conjugate gradient method. However, the conventional conjugate gradient method takes a long time to complete data processing. Thus, a fast and effective iterative algorithm is necessary to improve the utilization of FTG data. The inversion is formulated by incorporating regularizing constraints, and a non-monotone gradient-descent method is then introduced to accelerate the convergence rate of FTG data inversion. Compared with the conventional gradient method, the steepest-descent gradient algorithm, and the conjugate gradient algorithm, the non-monotone iterative gradient-descent algorithm shows clear advantages. Simulated and field FTG data were used to demonstrate the application value of this new fast inversion method.
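A common non-monotone gradient scheme is the Barzilai-Borwein step, in which the step length mimics a quasi-Newton scale and the objective is allowed to rise between iterations; whether this is the exact variant used in the paper is not stated, so treat the following as a generic sketch.

```python
import numpy as np

def bb_gradient(grad, x0, n_iter=300):
    """Non-monotone gradient descent with the Barzilai-Borwein step
    alpha_k = (s^T s) / (s^T y), where s = x_k - x_{k-1}, y = g_k - g_{k-1}.
    No line search is performed, so the objective may oscillate while
    still converging much faster than steepest descent."""
    x = x0.copy()
    g = grad(x)
    alpha = 1e-4                      # small safeguarded first step
    for _ in range(n_iter):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        denom = s @ y
        alpha = (s @ s) / denom if denom > 1e-30 else 1e-4
        x, g = x_new, g_new
    return x
```

On an ill-conditioned quadratic (condition number 1000 below) the BB iteration reaches the minimizer where fixed-step steepest descent would crawl.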
Nonlinear refraction and reflection travel time tomography
Zhang, Jiahua; ten Brink, Uri S.; Toksoz, M.N.
1998-01-01
We develop a rapid nonlinear travel time tomography method that simultaneously inverts refraction and reflection travel times on a regular velocity grid. For travel time and ray path calculations, we apply a wave front method employing graph theory. The first-arrival refraction travel times are calculated on the basis of cell velocities, and the later refraction and reflection travel times are computed using both cell velocities and given interfaces. We solve a regularized nonlinear inverse problem. A Laplacian operator is applied to regularize the model parameters (cell slownesses and reflector geometry) so that the inverse problem is valid for a continuum. The travel times are also regularized such that we invert travel time curves rather than travel time points. A conjugate gradient method is applied to minimize the nonlinear objective function. After obtaining a solution, we perform nonlinear Monte Carlo inversions for uncertainty analysis and compute the posterior model covariance. In numerical experiments, we demonstrate that combining the first arrival refraction travel times with later reflection travel times can better reconstruct the velocity field as well as the reflector geometry. This combination is particularly important for modeling crustal structures where large velocity variations occur in the upper crust. We apply this approach to model the crustal structure of the California Borderland using ocean bottom seismometer and land data collected during the Los Angeles Region Seismic Experiment along two marine survey lines. Details of our image include a high-velocity zone under the Catalina Ridge but a smooth gradient zone between Catalina Ridge and San Clemente Ridge. The Moho depth is about 22 km with lateral variations. Copyright 1998 by the American Geophysical Union.
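The conjugate gradient minimization referred to can be sketched for the symmetric positive-definite normal equations that arise after linearization; below, `A` stands for an operator of the form G^T G + mu L^T L (sensitivities plus Laplacian regularization), an assumption on our part rather than the paper's exact system.

```python
import numpy as np

def conjugate_gradient(A, b, n_iter=200, tol=1e-10):
    """Textbook CG for an SPD system A x = b, starting from x = 0."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p                 # model update along conjugate direction
        r -= alpha * Ap                # residual update
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # new search direction
        rs = rs_new
    return x
```

In exact arithmetic CG terminates in at most n steps for an n-dimensional SPD system, which is why it is a standard workhorse for regularized tomographic updates.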
NASA Astrophysics Data System (ADS)
Kiełczyński, P.; Szalewski, M.; Balcerzak, A.
2014-07-01
Simultaneous determination of the viscosity and density of liquids is of great importance in the monitoring of technological processes in the chemical, petroleum, and pharmaceutical industries, as well as in geophysics. In this paper, the authors present the application of Love waves for simultaneous inverse determination of the viscosity and density of liquids. The inversion procedure is based on measurements of the dispersion curves of phase velocity and attenuation of ultrasonic Love waves. The direct problem of Love wave propagation in a layered waveguide covered by a viscous liquid was formulated and solved. Love waves propagate in an elastic layered waveguide covered on its surface with a viscous (Newtonian) liquid. The inverse problem is formulated as an optimization problem with an appropriately constructed objective function that depends on the material properties of the elastic waveguide of the Love wave, the material parameters of the liquid (i.e., viscosity and density), and the experimental data. The results of numerical calculations show that Love waves can be efficiently applied to determine simultaneously the physical properties of liquids (i.e., viscosity and density). Sensors based on this method can be very attractive for industrial applications to monitor on-line the parameters (density and viscosity) of a process liquid during the course of technological processes, e.g., in the polymer industry.
NASA Astrophysics Data System (ADS)
Li, Jinghe; Song, Linping; Liu, Qing Huo
2016-02-01
A simultaneous multiple frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions. It simulates the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver for the 2D volume integral equation for the forward computation. The inversion technique with CSI combines the efficient FFT algorithm to speed up the matrix-vector multiplication and the stable convergence of the simultaneous multiple frequency CSI in the iteration process. As a result, this method is capable of making quantitative conductivity image reconstruction effectively for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples have been demonstrated to validate the effectiveness and capacity of the simultaneous multiple frequency CSI method for a limited array view in VEP.
Easy way to determine quantitative spatial resolution distribution for a general inverse problem
NASA Astrophysics Data System (ADS)
An, M.; Feng, M.
2013-12-01
The spatial resolution of a solution is nontrivial to compute and often more difficult to obtain than the solution of the inverse problem itself. Most geophysical studies, except for tomographic studies, neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be indicated via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices can provide resolution length information, and the computation of a resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be determined directly via a simple one-parameter nonlinear inversion performed on limited pairs of random synthetic models and their inverse solutions. The entire procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the inversion method used to obtain the solution. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.
Recursive inverse kinematics for robot arms via Kalman filtering and Bryson-Frazier smoothing
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Scheid, R. E., Jr.
1987-01-01
This paper applies linear filtering and smoothing theory to solve recursively the inverse kinematics problem for serial multilink manipulators. This problem is to find a set of joint angles that achieve a prescribed tip position and/or orientation. A widely applicable numerical search solution is presented. The approach finds the minimum of a generalized distance between the desired and the actual manipulator tip position and/or orientation. Both a first-order steepest-descent gradient search and a second-order Newton-Raphson search are developed. The optimal relaxation factor required for the steepest descent method is computed recursively using an outward/inward procedure similar to those used typically for recursive inverse dynamics calculations. The second-order search requires evaluation of a gradient and an approximate Hessian. A Gauss-Markov approach is used to approximate the Hessian matrix in terms of products of first-order derivatives. This matrix is inverted recursively using a two-stage process of inward Kalman filtering followed by outward smoothing. This two-stage process is analogous to that recently developed by the author to solve by means of spatial filtering and smoothing the forward dynamics problem for serial manipulators.
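A non-recursive version of the second-order search is easy to sketch for a planar 2-link arm: the Hessian is approximated by the Gauss-Markov product J^T J of first derivatives, as in the paper, but inverted by a direct solve rather than the Kalman filtering/smoothing recursion. The link lengths and damping constant are illustrative.

```python
import numpy as np

def ik_gauss_newton(target, q0, l1=1.0, l2=1.0, n_iter=50):
    """Joint angles q = (q1, q2) that place the 2-link tip at `target`,
    found by minimizing the tip-position error with a damped
    Gauss-Newton (J^T J Hessian approximation) search."""
    q = q0.astype(float).copy()
    for _ in range(n_iter):
        c1, s1 = np.cos(q[0]), np.sin(q[0])
        c12, s12 = np.cos(q[0] + q[1]), np.sin(q[0] + q[1])
        tip = np.array([l1 * c1 + l2 * c12, l1 * s1 + l2 * s12])
        e = tip - target                              # tip position error
        J = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                      [ l1 * c1 + l2 * c12,  l2 * c12]])
        H = J.T @ J + 1e-8 * np.eye(2)                # Gauss-Markov Hessian
        q -= np.linalg.solve(H, J.T @ e)              # second-order step
    return q
```

Starting from a rough initial guess, the search drives the tip error to machine precision in a few iterations, away from kinematic singularities.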
NASA Astrophysics Data System (ADS)
Al-Ma'shumah, Fathimah; Permana, Dony; Sidarto, Kuntjoro Adji
2015-12-01
Customer lifetime value (CLV) is an important and useful concept in marketing. One of its benefits is to help a company budget marketing expenditure for customer acquisition and customer retention. Many mathematical models have been introduced to calculate CLV considering the customer retention/migration classification scheme. A fairly new class of these models, described in this paper, uses Markov chain models (MCM). This class of models has the major advantage of being flexible enough to be modified for several different cases/classification schemes. In this model, the probabilities of customer retention and acquisition play an important role. Following Pfeifer and Carraway (2000), the final formula of CLV obtained from an MCM usually contains a nonlinear form of the transition probability matrix. This nonlinearity makes the inverse problem of CLV difficult to solve. This paper aims to solve this inverse problem, yielding the approximate transition probabilities for the customers, by applying the Flower Pollination Algorithm, a metaheuristic optimization algorithm developed by Yang (2013). The main use of the obtained transition probabilities is to set goals for marketing teams in keeping the relative frequencies of customer acquisition and customer retention.
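For a retention-style scheme in the spirit of Pfeifer and Carraway, the forward CLV formula is a geometric series in the discounted transition matrix, which can be evaluated in closed form; the inverse problem the paper solves then searches over the entries of P to match a target CLV. This sketch shows only the forward formula, with illustrative numbers.

```python
import numpy as np

def clv(P, R, d=0.1):
    """Expected discounted lifetime value per starting state:
    CLV = sum_{t>=0} [(1+d)^{-1} P]^t R = (I - P/(1+d))^{-1} R,
    where P is the state-transition matrix, R the per-period reward
    vector, and d the per-period discount rate."""
    n = P.shape[0]
    return np.linalg.solve(np.eye(n) - P / (1.0 + d), R)

# Two states: "active" (retained with prob. 0.7, earns 100/period)
# and "lost" (absorbing, earns nothing).
P = np.array([[0.7, 0.3],
              [0.0, 1.0]])
R = np.array([100.0, 0.0])
v = clv(P, R, d=0.1)   # v[0] = 100 / (1 - 0.7/1.1) = 275
```

The nonlinear dependence of `v` on the entries of `P` is exactly what makes the inverse problem awkward for closed-form methods and attractive for a metaheuristic search.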
INFO-RNA--a fast approach to inverse RNA folding.
Busch, Anke; Backofen, Rolf
2006-08-01
The structure of RNA molecules is often crucial for their function. Therefore, secondary structure prediction has gained much interest. Here, we consider the inverse RNA folding problem, which means designing RNA sequences that fold into a given structure. We introduce a new algorithm for the inverse folding problem (INFO-RNA) that consists of two parts: a dynamic programming method for good initial sequences, followed by an improved stochastic local search that uses an effective neighbor selection method. During the initialization, we design a sequence that, among all sequences, adopts the given structure with the lowest possible energy. For the selection of neighbors during the search, we use a kind of one-step look-ahead, applying an additional energy-based criterion. Afterwards, the pre-ordered neighbors are tested using the actual optimization criterion of minimizing the structure distance between the target structure and the mfe structure of the considered neighbor. We compared our algorithm to RNAinverse and RNA-SSD for artificial and biological test sets. Using INFO-RNA, we performed better than RNAinverse and in most cases, we gained better results than RNA-SSD, probably the best inverse RNA folding tool on the market. www.bioinf.uni-freiburg.de?Subpages/software.html.
Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems
NASA Astrophysics Data System (ADS)
Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.
2010-12-01
Almost all geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdf's) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized leading to efficiently-solved, (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically and incorporating any available prior information using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully-probabilistic global tomography model of the Earth's crust and mantle, and second inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.
On the optimization of electromagnetic geophysical data: Application of the PSO algorithm
NASA Astrophysics Data System (ADS)
Godio, A.; Santilano, A.
2018-01-01
The particle swarm optimization (PSO) algorithm solves constrained multi-parameter problems and is suitable for the simultaneous optimization of linear and nonlinear problems, under the assumption that the forward modeling rests on a good understanding of the ill-posed geophysical inverse problem. We apply PSO to the geophysical inverse problem of inferring an Earth model, i.e., the electrical resistivity at depth, consistent with the observed geophysical data. The method does not require an initial model and can easily be constrained according to external information for each single sounding. The optimization process for estimating the model parameters from the electromagnetic soundings focuses on the discussion of the objective function to be minimized. We discuss the possibility of introducing vertical and lateral constraints into the objective function, with an Occam-like regularization. A sensitivity analysis allowed us to check the performance of the algorithm. The reliability of the approach is tested on synthetic, real audio-magnetotelluric (AMT), and long-period MT data. The method appears able to solve complex problems and allows us to estimate the a posteriori distribution of the model parameters.
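A bare-bones global-best PSO illustrates the optimizer: box bounds play the role of the per-sounding constraints, and the geophysical objective would add the data misfit and Occam-like regularization terms discussed. Inertia and acceleration coefficients below are common textbook defaults, not the paper's tuning.

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best particle swarm minimization of f within box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))     # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()                                # per-particle best
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()                 # swarm (global) best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                  # enforce the constraints
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

No gradient or starting model is needed, which is the property the abstract emphasizes for resistivity sounding inversion.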
NASA Technical Reports Server (NTRS)
Smith, James A.
1992-01-01
The inversion of the leaf area index (LAI) canopy parameter from optical spectral reflectance measurements is obtained using a backpropagation artificial neural network trained using input-output pairs generated by a multiple scattering reflectance model. The problem of LAI estimation over sparse canopies (LAI < 1.0) with varying soil reflectance backgrounds is particularly difficult. Standard multiple regression methods applied to canopies within a single homogeneous soil type yield good results but perform unacceptably when applied across soil boundaries, resulting in absolute percentage errors of >1000 percent for low LAI. Minimization methods applied to merit functions constructed from differences between measured reflectances and predicted reflectances using multiple-scattering models are unacceptably sensitive to a good initial guess for the desired parameter. In contrast, the neural network reported generally yields absolute percentage errors of <30 percent when weighting coefficients trained on one soil type were applied to predicted canopy reflectance at a different soil background.
Robinson, Katherine M; Ninowski, Jerilyn E
2003-12-01
Problems of the form a + b - b have been used to assess conceptual understanding of the relationship between addition and subtraction. No study has investigated the same relationship between multiplication and division on problems of the form d x e / e. In both types of inversion problems, no calculation is required if the inverse relationship between the operations is understood. Adult participants solved addition/subtraction and multiplication/division inversion (e.g., 9 x 22 / 22) and standard (e.g., 2 + 27 - 28) problems. Participants started to use the inversion strategy earlier and more frequently on addition/subtraction problems. Participants took longer to solve both types of multiplication/division problems. Overall, conceptual understanding of the relationship between multiplication and division was not as strong as that between addition and subtraction. One explanation for this difference in performance is that the operation of division is more weakly represented and understood than the other operations and that this weakness affects performance on problems of the form d x e / e.
NASA Technical Reports Server (NTRS)
Mutterperl, William
1944-01-01
A method of conformal transformation is developed that maps an airfoil into a straight line, the line being chosen as the extended chord line of the airfoil. The mapping is accomplished by operating directly with the airfoil ordinates. The absence of any preliminary transformation is found to shorten the work substantially over that of previous methods. Use is made of the superposition of solutions to obtain a rigorous counterpart of the approximate methods of thin-airfoil theory. The method is applied to the solution of the direct and inverse problems for arbitrary airfoils and pressure distributions. Numerical examples are given. Applications to more general types of regions, in particular to biplanes and to cascades of airfoils, are indicated.
Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1989-01-01
Progress has been made in the direct-inverse wing design method in curvilinear coordinates. This includes remedying a spanwise oscillation problem and assessing the effects of grid skewness, viscous interaction, and the initial airfoil section on the final design. It was found that, in response to the spanwise oscillation problem, designing at every other spanwise station produced the best results for the cases presented; that a smoothly varying grid is especially needed for accurate design at the wing tip; that the boundary layer displacement thicknesses must be included in a successful wing design; that the design of high and medium aspect ratio wings is possible with this code; and that the final airfoil section designed is fairly independent of the initial section.
Extracting Low-Frequency Information from Time Attenuation in Elastic Waveform Inversion
NASA Astrophysics Data System (ADS)
Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong
2017-03-01
Low-frequency information is crucial for recovering background velocity, but the lack of low-frequency information in field data makes inversion impractical without accurate initial models. Laplace-Fourier domain waveform inversion can recover a smooth model from real data without low-frequency information, which can be used for subsequent inversion as an ideal starting model. In general, it also starts with low frequencies and includes higher frequencies at later inversion stages, while the difference is that its ultralow frequency information comes from the Laplace-Fourier domain. Meanwhile, a direct implementation of the Laplace-transformed wavefield using frequency domain inversion is also very convenient. However, because broad frequency bands are often used in the pure time domain waveform inversion, it is difficult to extract the wavefields dominated by low frequencies in this case. In this paper, low-frequency components are constructed by introducing time attenuation into the recorded residuals, and the rest of the method is identical to the traditional time domain inversion. Time windowing and frequency filtering are also applied to mitigate the ambiguity of the inverse problem. Therefore, we can start at low frequencies and move to higher frequencies. The experiment shows that the proposed method can achieve a good inversion result in the presence of a linear initial model and records without low-frequency information.
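Introducing time attenuation amounts to multiplying the recorded residual by a decaying exponential, which spreads a band-limited signal's spectrum toward zero frequency; the sketch below shows only this damping operator, with an illustrative damping constant s and reference time t0 (not the paper's parameterization).

```python
import numpy as np

def attenuate(trace, dt, s, t0=0.0):
    """Apply the time damping exp(-s * (t - t0)) to a recorded trace.
    Larger s emphasizes early arrivals and synthesizes the ultralow
    frequency content used in Laplace-Fourier-style inversion."""
    t = np.arange(trace.size) * dt
    return trace * np.exp(-s * np.clip(t - t0, 0.0, None))
```

A quick spectral check: a pure 30 Hz trace has essentially no energy below 10 Hz, but after damping a substantial low-frequency fraction appears, which is the information the inversion starts from.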
Point-source inversion techniques
NASA Astrophysics Data System (ADS)
Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.
1982-11-01
A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
Techniques for Accelerating Iterative Methods for the Solution of Mathematical Problems
1989-07-01
On numerical reconstructions of lithographic masks in DUV scatterometry
NASA Astrophysics Data System (ADS)
Henn, M.-A.; Model, R.; Bär, M.; Wurm, M.; Bodermann, B.; Rathsfeld, A.; Gross, H.
2009-06-01
The solution of the inverse problem in scatterometry employing deep ultraviolet light (DUV) is discussed, i.e. we consider the determination of periodic surface structures from light diffraction patterns. With decreasing dimensions of the structures on photolithography masks and wafers, increasing demands on the required metrology techniques arise. Scatterometry as a non-imaging indirect optical method is applied to periodic line structures in order to determine the sidewall angles, heights, and critical dimensions (CD), i.e., the top and bottom widths. The latter quantities are typically in the range of tens of nanometers. All these angles, heights, and CDs are the fundamental figures in order to evaluate the quality of the manufacturing process. To measure those quantities a DUV scatterometer is used, which typically operates at a wavelength of 193 nm. The diffraction of light by periodic 2D structures can be simulated using the finite element method for the Helmholtz equation. The corresponding inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Fixing the class of gratings and the set of measurements, this inverse problem reduces to a finite dimensional nonlinear operator equation. Reformulating the problem as an optimization problem, a vast number of numerical schemes can be applied. Our tool is a sequential quadratic programming (SQP) variant of the Gauss-Newton iteration. In a first step, in which we use a simulated data set, we investigate how accurately the geometrical parameters of an EUV mask can be reconstructed, using light in the DUV range. We then determine the expected uncertainties of geometric parameters by reconstructing from simulated input data perturbed by noise representing the estimated uncertainties of input data. In the last step, we use the measurement data obtained from the new DUV scatterometer at PTB to determine the geometrical parameters of a typical EUV mask with our reconstruction algorithm.
The results are compared to the outcomes of investigations with two alternative methods, namely EUV scatterometry and SEM measurements.
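The Gauss-Newton iteration at the core of the SQP reconstruction can be sketched on a toy least-squares problem; the exponential forward model, parameter values, and synthetic data below are illustrative stand-ins for the FEM solution of the Helmholtz equation, not the actual scatterometry model:

```python
import numpy as np

# Toy forward model standing in for the FEM Helmholtz solver: two parameters,
# f(p, t) = p0 * exp(-p1 * t). Purely illustrative.
def forward(p, t):
    return p[0] * np.exp(-p[1] * t)

def jacobian(p, t):
    # Analytic Jacobian of the forward model with respect to p.
    return np.column_stack([np.exp(-p[1] * t),
                            -p[0] * t * np.exp(-p[1] * t)])

def gauss_newton(p0, t, y, n_iter=20):
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y - forward(p, t)                       # data residual
        J = jacobian(p, t)
        dp, *_ = np.linalg.lstsq(J, r, rcond=None)  # linearized LSQ step
        p = p + dp
    return p

t = np.linspace(0.0, 2.0, 25)
y = forward([2.0, 1.5], t)              # noise-free synthetic "measurements"
p_est = gauss_newton([1.0, 1.0], t, y)
```

Each step linearizes the forward model around the current parameters and solves the resulting linear least-squares problem for the update.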
Joint Inversion of Source Location and Source Mechanism of Induced Microseismics
NASA Astrophysics Data System (ADS)
Liang, C.
2014-12-01
The seismic source mechanism is a useful indicator of source physics and of stress and strain distribution at regional, local, and micro scales. In this study we jointly invert source mechanisms and locations for microseismic events induced by fluid-fracturing treatments in the oil and gas industry. For events large enough to show clear waveforms, quite a few techniques can be applied to invert the source mechanism, including waveform inversion, first-motion polarity inversion, and many other methods and variants based on them. However, for events too small to identify in seismic traces, such as microseismic events induced by fluid fracturing, a source scanning algorithm (SSA) with waveform stacking is usually applied. A joint inversion of location and source mechanism is also possible, but at the cost of a high computational budget. The algorithm is thereby called the Source Location and Mechanism Scanning Algorithm (SLMSA). For a given velocity structure, all possible combinations of source location (X, Y, Z) and source mechanism (strike, dip, rake) are used to compute travel times and waveform polarities. After correcting normal-moveout times and polarities and stacking all waveforms, the (X, Y, Z, strike, dip, rake) combination that gives the strongest stacked waveform is identified as the solution. To handle the high computational cost, CPU-GPU programming is applied. Numerical datasets are used to test the algorithm. The SLMSA has also been applied to a fluid-fracturing dataset and reveals several advantages over the location-only method: (1) for shear sources, the location-only program can hardly locate events because positively and negatively polarized traces cancel out, but the SLMSA method can successfully pick up those events; (2) microseismic locations alone may not be enough to indicate the directionality of micro-fractures.
The statistics of source mechanisms can certainly provide more knowledge of the orientation of fractures; (3) in our practice, the joint inversion method almost always yields more events than the location-only method, and for events that are also picked by the SSA method, the stacking power of the SLMSA is consistently higher than that obtained with the SSA.
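A drastically simplified 1-D sketch of the scanning idea, assuming spike traces, a constant velocity, and a "mechanism" reduced to a polarity pattern across the receivers (all names and values are illustrative, not the production SLMSA code):

```python
import numpy as np

# Four receivers on a line; constant velocity; spike traces.
rec_x = np.array([0.0, 1.0, 2.0, 3.0])    # receiver positions (km)
v, dt, nt = 2.0, 0.01, 400                # velocity (km/s), sampling, samples

def arrival_sample(src_x, rx):
    # Travel time from a candidate source to one receiver, in samples.
    return int(round(abs(src_x - rx) / v / dt))

# Synthetic traces for a true source at x = 1.3 with polarity pattern (+,+,-,-).
true_x, true_pol = 1.3, [1, 1, -1, -1]
traces = np.zeros((len(rec_x), nt))
for i, rx in enumerate(rec_x):
    traces[i, arrival_sample(true_x, rx)] = true_pol[i]

# Scan every (location, polarity-pattern) combination; the one whose
# moveout- and polarity-corrected stack is strongest wins.
cand_x = np.arange(0.0, 3.0, 0.1)
cand_pol = [(1, 1, 1, 1), (1, 1, -1, -1), (1, -1, 1, -1)]
best, best_score = None, -np.inf
for x in cand_x:
    for pol in cand_pol:
        score = sum(pol[i] * traces[i, arrival_sample(x, rx)]
                    for i, rx in enumerate(rec_x))
        if score > best_score:
            best_score, best = score, (x, pol)
```

For a shear-like pattern such as (+,+,-,-), stacking without the polarity correction would cancel to zero, which is exactly the failure mode of location-only stacking noted in point (1).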
Information fusion in regularized inversion of tomographic pumping tests
Bohling, Geoffrey C.; ,
2008-01-01
In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. © 2008 Springer-Verlag Berlin Heidelberg.
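The regularization tradeoff described above can be sketched in a few lines: minimize ||Gm - d||^2 + alpha^2 ||m - m0||^2 for several weights alpha and inspect how the data misfit trades off against the model norm. The operator G, data d, and background model m0 below are random toy stand-ins for the hydraulic problem:

```python
import numpy as np

# Toy stand-ins for the forward operator, data, and background model.
rng = np.random.default_rng(0)
G = rng.standard_normal((20, 10))
m_true = rng.standard_normal(10)
d = G @ m_true + 0.05 * rng.standard_normal(20)
m0 = np.zeros(10)                      # prior/background model

def tikhonov(G, d, m0, alpha):
    # Solve min ||G m - d||^2 + alpha^2 ||m - m0||^2 via the normal equations.
    A = G.T @ G + alpha**2 * np.eye(G.shape[1])
    return np.linalg.solve(A, G.T @ d + alpha**2 * m0)

# Sweep the regularization weight and record the tradeoff curve.
alphas = [0.01, 0.1, 1.0, 10.0]
models = [tikhonov(G, d, m0, a) for a in alphas]
misfits = [float(np.linalg.norm(G @ m - d)) for m in models]
norms = [float(np.linalg.norm(m - m0)) for m in models]
```

As the weight grows, the misfit can only increase and the deviation from the background model can only shrink; plotting one against the other gives the usual L-curve used to pick the weighting level.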
Parameter estimation in nonlinear distributed systems - Approximation theory and convergence results
NASA Technical Reports Server (NTRS)
Banks, H. T.; Reich, Simeon; Rosen, I. G.
1988-01-01
An abstract approximation framework and convergence theory are described for Galerkin approximations applied to inverse problems involving nonlinear distributed parameter systems. Parameter estimation problems are considered and formulated as the minimization of a least-squares-like performance index over a compact admissible parameter set subject to state constraints given by an inhomogeneous nonlinear distributed system. The theory applies to systems whose dynamics can be described by either time-independent or nonstationary strongly maximal monotone operators defined on a reflexive Banach space which is densely and continuously embedded in a Hilbert space. It is demonstrated that if readily verifiable conditions on the system's dependence on the unknown parameters are satisfied, and the usual Galerkin approximation assumption holds, then solutions to the approximating problems exist and approximate a solution to the original infinite-dimensional identification problem.
Children's Understanding of the Arithmetic Concepts of Inversion and Associativity
ERIC Educational Resources Information Center
Robinson, Katherine M.; Ninowski, Jerilyn E.; Gray, Melissa L.
2006-01-01
Previous studies have shown that even preschoolers can solve inversion problems of the form a + b - b by using the knowledge that addition and subtraction are inverse operations. In this study, a new type of inversion problem of the form d x e [divided by] e was also examined. Grade 6 and 8 students solved inversion problems of both types as well…
NASA Astrophysics Data System (ADS)
Kopacz, Monika; Jacob, Daniel J.; Henze, Daven K.; Heald, Colette L.; Streets, David G.; Zhang, Qiang
2009-02-01
We apply the adjoint of an atmospheric chemical transport model (GEOS-Chem CTM) to constrain Asian sources of carbon monoxide (CO) with 2° × 2.5° spatial resolution using Measurement of Pollution in the Troposphere (MOPITT) satellite observations of CO columns in February-April 2001. Results are compared to the more common analytical method for solving the same Bayesian inverse problem and applied to the same data set. The analytical method is more exact but because of computational limitations it can only constrain emissions over coarse regions. We find that the correction factors to the a priori CO emission inventory from the adjoint inversion are generally consistent with those of the analytical inversion when averaged over the large regions of the latter. The adjoint solution reveals fine-scale variability (cities, political boundaries) that the analytical inversion cannot resolve, for example, in the Indian subcontinent or between Korea and Japan, and some of that variability is of opposite sign which points to large aggregation errors in the analytical solution. Upward correction factors to Chinese emissions from the prior inventory are largest in central and eastern China, consistent with a recent bottom-up revision of that inventory, although the revised inventory also sees the need for upward corrections in southern China where the adjoint and analytical inversions call for downward correction. Correction factors for biomass burning emissions derived from the adjoint and analytical inversions are consistent with a recent bottom-up inventory on the basis of MODIS satellite fire data.
Inverse simulation system for evaluating handling qualities during rendezvous and docking
NASA Astrophysics Data System (ADS)
Zhou, Wanmeng; Wang, Hua; Thomson, Douglas; Tang, Guojin; Zhang, Fan
2017-08-01
The traditional method used for handling qualities assessment of manned space vehicles is too time-consuming to meet the requirements of an increasingly fast design process. In this study, a rendezvous and docking inverse simulation system to assess the handling qualities of spacecraft is proposed using a previously developed model-predictive-control architecture. By considering the fixed discrete force of the thrusters of the system, the inverse model is constructed using the least squares estimation method with a hyper-ellipsoidal restriction, and its continuous control outputs are subsequently discretized by pulse-width modulation with sensitivity factors introduced. The inputs in every step are treated as constant parameters, and the method can be considered a general method for solving nominal, redundant, and insufficient inverse problems. The rendezvous and docking inverse simulation is applied to a nine-degree-of-freedom platform, and a novel handling qualities evaluation scheme is established according to the operation precision and astronauts' workload. Finally, different nominal trajectories are scored by the inverse simulation and the established evaluation scheme. The scores can offer theoretical guidance for astronaut training and more complex operation missions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kitanidis, Peter
As large-scale, commercial storage projects become operational, the problem of utilizing information from diverse sources becomes more critically important. In this project, we developed, tested, and applied an advanced joint data inversion system for CO2 storage modeling with large data sets for use in site characterization and real-time monitoring. Emphasis was on the development of advanced and efficient computational algorithms for joint inversion of hydro-geophysical data, coupled with state-of-the-art forward process simulations. The developed system consists of (1) inversion tools using characterization data, such as 3D seismic survey (amplitude images), borehole log and core data, as well as hydraulic, tracer and thermal tests before CO2 injection, (2) joint inversion tools for updating the geologic model with the distribution of rock properties, thus reducing uncertainty, using hydro-geophysical monitoring data, and (3) highly efficient algorithms for directly solving the dense or sparse linear algebra systems derived from the joint inversion. The system combines methods from stochastic analysis, fast linear algebra, and high performance computing. The developed joint inversion tools have been tested through synthetic CO2 storage examples.
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
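A toy linear analogue of the WISE idea, assuming per-source linear operators in place of the wave-equation solver: one random +/-1 encoding vector collapses all sources into a single "simulation" per stochastic-gradient step. All names and sizes are illustrative:

```python
import numpy as np

# Per-source linear operators A_s and data d_s stand in for wave simulations.
rng = np.random.default_rng(1)
S, n, m = 8, 5, 12
A = [rng.standard_normal((m, n)) for _ in range(S)]
x_true = rng.standard_normal(n)                  # "sound speed" model
d = [A_s @ x_true for A_s in A]                  # noise-free per-source data

x = np.zeros(n)
step = 0.001
for _ in range(5000):
    w = rng.choice([-1.0, 1.0], size=S)          # random encoding vector
    A_enc = sum(w[s] * A[s] for s in range(S))   # encoded "source"
    d_enc = sum(w[s] * d[s] for s in range(S))   # encoded data
    grad = A_enc.T @ (A_enc @ x - d_enc)         # gradient from one simulation
    x -= step * grad

rel_err = float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Because the +/-1 weights are drawn independently each iteration, the encoded gradient is an unbiased estimate of the full multi-source gradient, which is what lets one simulation per step replace S of them.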
The Earthquake‐Source Inversion Validation (SIV) Project
Mai, P. Martin; Schorlemmer, Danijel; Page, Morgan T.; Ampuero, Jean-Paul; Asano, Kimiyuki; Causse, Mathieu; Custodio, Susana; Fan, Wenyuan; Festa, Gaetano; Galis, Martin; Gallovic, Frantisek; Imperatori, Walter; Käser, Martin; Malytskyy, Dmytro; Okuwaki, Ryo; Pollitz, Fred; Passone, Luca; Razafindrakoto, Hoby N. T.; Sekiguchi, Haruko; Song, Seok Goo; Somala, Surendra N.; Thingbaijam, Kiran K. S.; Twardzik, Cedric; van Driel, Martin; Vyas, Jagdish C.; Wang, Rongjiang; Yagi, Yuji; Zielke, Olaf
2016-01-01
Finite‐fault earthquake source inversions infer the (time‐dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, multiple source models for the same earthquake, obtained by different research teams, often exhibit remarkable dissimilarities. To address the uncertainties in earthquake‐source inversion methods and to understand strengths and weaknesses of the various approaches used, the Source Inversion Validation (SIV) project conducts a set of forward‐modeling exercises and inversion benchmarks. In this article, we describe the SIV strategy, the initial benchmarks, and current SIV results. Furthermore, we apply statistical tools for quantitative waveform comparison and for investigating source‐model (dis)similarities that enable us to rank the solutions, and to identify particularly promising source inversion approaches. All SIV exercises (with related data and descriptions) and statistical comparison tools are available via an online collaboration platform, and we encourage source modelers to use the SIV benchmarks for developing and testing new methods. We envision that the SIV efforts will lead to new developments for tackling the earthquake‐source imaging problem.
Numerical Leak Detection in a Pipeline Network of Complex Structure with Unsteady Flow
NASA Astrophysics Data System (ADS)
Aida-zade, K. R.; Ashrafova, E. R.
2017-12-01
An inverse problem for a pipeline network of complex loopback structure is solved numerically. The problem is to determine the locations and amounts of leaks from unsteady flow characteristics measured at some pipeline points. The features of the problem include impulse functions involved in a system of hyperbolic differential equations, the absence of classical initial conditions, and boundary conditions specified as nonseparated relations between the states at the endpoints of adjacent pipeline segments. The problem is reduced to a parametric optimal control problem without initial conditions, but with nonseparated boundary conditions. The latter problem is solved by applying first-order optimization methods. Results of numerical experiments are presented.
NASA Technical Reports Server (NTRS)
Sabatier, P. C.
1972-01-01
The progressive realization of the consequences of nonuniqueness implies an evolution of both the methods and the centers of interest in inverse problems. This evolution is schematically described together with the various mathematical methods used. A comparative description is given of inverse methods in scientific research, with examples taken from mathematics, quantum and classical physics, seismology, transport theory, radiative transfer, electromagnetic scattering, electrocardiology, etc. It is hoped that this paper will pave the way for an interdisciplinary study of inverse problems.
Refining mortality estimates in shark demographic analyses: a Bayesian inverse matrix approach.
Smart, Jonathan J; Punt, André E; White, William T; Simpfendorfer, Colin A
2018-01-18
Leslie matrix models are an important analysis tool in conservation biology that is applied to a diversity of taxa. The standard approach estimates the finite rate of population growth (λ) from a set of vital rates. In some instances, an estimate of λ is available, but the vital rates are poorly understood and can be solved for using an inverse matrix approach. However, these approaches are rarely attempted because they require information on the structure of age or stage classes. This study addressed this issue by using a combination of Monte Carlo simulations and the sample-importance-resampling (SIR) algorithm to solve the inverse matrix problem without data on population structure. This approach was applied to the grey reef shark (Carcharhinus amblyrhynchos) from the Great Barrier Reef (GBR) in Australia to determine the demography of this population. Additionally, these outputs were applied to another heavily fished population from Papua New Guinea (PNG) that requires estimates of λ for fisheries management. The SIR analysis determined that natural mortality (M) and total mortality (Z) based on indirect methods have previously been overestimated for C. amblyrhynchos, leading to an underestimated λ. The updated Z distributions determined using SIR provided λ estimates that matched an empirical λ for the GBR population and corrected obvious error in the demographic parameters for the PNG population. This approach provides opportunity for the inverse matrix approach to be applied more broadly to situations where information on population structure is lacking. © 2018 by the Ecological Society of America.
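The forward calculation underlying the approach, and a crude rejection step standing in for the full sample-importance-resampling machinery, can be sketched as follows; the three-age-class vital rates are made up for illustration:

```python
import numpy as np

# Forward step: lambda is the dominant eigenvalue of the Leslie matrix built
# from age-specific fecundities (top row) and survival rates (sub-diagonal).
def leslie_lambda(fecundity, survival):
    n = len(fecundity)
    L = np.zeros((n, n))
    L[0, :] = fecundity
    for i, s in enumerate(survival):
        L[i + 1, i] = s
    return float(max(np.linalg.eigvals(L).real))

# Made-up three-age-class vital rates for illustration.
lam = leslie_lambda([0.0, 1.2, 1.5], [0.6, 0.5])

# Inverse direction: draw candidate survival rates and keep those whose
# implied lambda matches a "known" growth rate (a crude rejection step
# standing in for the full SIR algorithm).
rng = np.random.default_rng(2)
kept = [s for s in rng.uniform(0.3, 0.9, 500)
        if abs(leslie_lambda([0.0, 1.2, 1.5], [s, 0.5]) - lam) < 0.05]
```

The retained sample approximates the distribution of survival rates consistent with the observed growth rate, which is the sense in which mortality is "solved for" from λ rather than the other way around.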
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tupek, Michael R.
2016-06-30
In recent years there has been a proliferation of modeling techniques for forward predictions of crack propagation in brittle materials, including: phase-field/gradient damage models, peridynamics, cohesive-zone models, and G/XFEM enrichment techniques. However, progress on the corresponding inverse problems has been relatively lacking. Taking advantage of key features of existing modeling approaches, we propose a parabolic regularization of Barenblatt cohesive models which borrows extensively from previous phase-field and gradient damage formulations. An efficient explicit time integration strategy for this type of nonlocal fracture model is then proposed and justified. In addition, we present a C++ computational framework for computing input parameter sensitivities efficiently for explicit dynamic problems using the adjoint method. This capability allows for solving inverse problems involving crack propagation to answer interesting engineering questions such as: 1) what is the optimal design topology and material placement for a heterogeneous structure to maximize fracture resistance, 2) what loads must have been applied to a structure for it to have failed in an observed way, 3) what are the existing cracks in a structure given various experimental observations, etc. In this work, we focus on the first of these engineering questions and demonstrate a capability to automatically and efficiently compute optimal designs intended to minimize crack propagation in structures.
Borghero, Francesco; Demontis, Francesco
2016-09-01
In the framework of geometrical optics, we consider the following inverse problem: given a two-parameter family of curves (congruence) (i.e., f(x,y,z)=c1,g(x,y,z)=c2), construct the refractive-index distribution function n=n(x,y,z) of a 3D continuous transparent inhomogeneous isotropic medium, allowing for the creation of the given congruence as a family of monochromatic light rays. We solve this problem by following two different procedures: 1. By applying Fermat's principle, we establish a system of two first-order linear nonhomogeneous PDEs in the unique unknown function n=n(x,y,z) relating the assigned congruence of rays with all possible refractive-index profiles compatible with this family. Moreover, we furnish analytical proof that the family of rays must be a normal congruence. 2. By applying the eikonal equation, we establish a second system of two first-order linear homogeneous PDEs whose solutions give the equation S(x,y,z)=const. of the geometric wavefronts and, consequently, all pertinent refractive-index distribution functions n=n(x,y,z). Finally, we make a comparison between the two procedures described above, discussing appropriate examples having exact solutions.
NASA Technical Reports Server (NTRS)
Jewell, Jeffrey B.; Raymond, C.; Smrekar, S.; Millbury, C.
2004-01-01
This viewgraph presentation reviews a Bayesian approach to the inversion of gravity and magnetic data with specific application to the Ismenius Area of Mars. Many inverse problems encountered in geophysics and planetary science are well known to be non-unique (e.g., inversion of gravity data for the density structure of a body). In hopes of reducing the non-uniqueness of solutions, there has been interest in the joint analysis of data. An example is the joint inversion of gravity and magnetic data, with the assumption that the same physical anomalies generate both the observed magnetic and gravitational anomalies. In this talk, we formulate the joint analysis of different types of data in a Bayesian framework and apply the formalism to the inference of the density and remanent magnetization structure for a local region in the Ismenius area of Mars. The Bayesian approach allows prior information or constraints in the solutions to be incorporated in the inversion, with the "best" solutions those whose forward predictions most closely match the data while remaining consistent with assumed constraints. The application of this framework to the inversion of gravity and magnetic data on Mars reveals two typical challenges - the forward predictions of the data have a linear dependence on some of the quantities of interest, and non-linear dependence on others (termed the "linear" and "non-linear" variables, respectively). For observations with Gaussian noise, a Bayesian approach to inversion for "linear" variables reduces to a linear filtering problem, with an explicitly computable "error" matrix. However, for models whose forward predictions have non-linear dependencies, inference is no longer given by such a simple linear problem, and moreover, the uncertainty in the solution is no longer completely specified by a computable "error matrix".
It is therefore important to develop methods for sampling from the full Bayesian posterior to provide a complete and statistically consistent picture of model uncertainty, and what has been learned from observations. We will discuss advanced numerical techniques, including Monte Carlo Markov
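For the "linear variables" case described above, the Bayesian update is explicit: with forward model y = Gm + noise, a Gaussian prior, and Gaussian noise, the posterior mean and "error matrix" are computable in closed form. The small operator and covariances below are illustrative placeholders:

```python
import numpy as np

# Illustrative placeholders for the forward operator and covariances.
rng = np.random.default_rng(3)
G = rng.standard_normal((6, 3))        # linearized forward model
Cm = np.eye(3)                         # prior covariance
Cd = 0.1 * np.eye(6)                   # observation-noise covariance
m0 = np.zeros(3)                       # prior mean
m_true = np.array([1.0, -0.5, 2.0])
y = G @ m_true                         # noise-free synthetic data

# Posterior mean and covariance (the computable "error matrix"), Kalman form.
K = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cd)
m_post = m0 + K @ (y - G @ m0)
C_post = Cm - K @ G @ Cm
```

For the non-linear variables no such closed form exists, which is why the text turns to sampling from the full posterior instead.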
BOOK REVIEW: Inverse Problems. Activities for Undergraduates
NASA Astrophysics Data System (ADS)
Yamamoto, Masahiro
2003-06-01
This book is a valuable introduction to inverse problems. In particular, from the educational point of view, the author addresses the questions of what constitutes an inverse problem and how and why we should study them. Such an approach has been eagerly awaited for a long time. Professor Groetsch, of the University of Cincinnati, is a world-renowned specialist in inverse problems, in particular the theory of regularization. Moreover, he has made a remarkable contribution to educational activities in the field of inverse problems, which was the subject of his previous book (Groetsch C W 1993 Inverse Problems in the Mathematical Sciences (Braunschweig: Vieweg)). For this reason, he is one of the most qualified to write an introductory book on inverse problems. Without question, inverse problems are important, necessary and appear in various aspects. So it is crucial to introduce students to exercises in inverse problems. However, there are not many introductory books which are directly accessible by students in the first two undergraduate years. As a consequence, students often encounter diverse concrete inverse problems before becoming aware of their general principles. The main purpose of this book is to present activities to allow first-year undergraduates to learn inverse theory. To my knowledge, this book is a rare attempt to do this and, in my opinion, a great success. The author emphasizes that it is very important to teach inverse theory in the early years. He writes: `If students consider only the direct problem, they are not looking at the problem from all sides .... The habit of always looking at problems from the direct point of view is intellectually limiting ...' (page 21). The book is very carefully organized so that teachers will be able to use it as a textbook. After an introduction in chapter 1, successive chapters deal with inverse problems in precalculus, calculus, differential equations and linear algebra.
In order to let one gain some insight into the nature of inverse problems and the appropriate mode of thought, chapter 1 offers historical vignettes, most of which have played an essential role in the development of natural science. These vignettes cover the first successful application of `non-destructive testing' by Archimedes (page 4) via Newton's laws of motion up to literary tomography, and readers will be able to enjoy a wide overview of inverse problems. Therefore, as the author asks, the reader should not skip this chapter. This may not be hard to do, since the headings of the sections are quite intriguing (`Archimedes' Bath', `Another World', `Got the Time?', `Head Games', etc). The author embarks on the technical approach to inverse problems in chapter 2. He has elegantly designed each section with a guide specifying course level, objective, mathematical and scientific background and appropriate technology (e.g. types of calculators required). The guides are designed such that teachers may be able to construct effective and attractive courses by themselves. The book is not intended to offer one rigidly determined course, but should be used flexibly and independently according to the situation. Moreover, every section closes with activities which can be chosen according to the students' interests and levels of ability. Some of these exercises do not have ready solutions, but require long-term study, so readers are not required to solve all of them. After chapter 5, which contains discrete inverse problems such as the algebraic reconstruction technique and the Backus-Gilbert method, there are answers and commentaries to the activities. Finally, scripts in MATLAB are attached, although they can also be downloaded from the author's web page (http://math.uc.edu/~groetsch/). This book is aimed at students but it will be very valuable to researchers wishing to retain a wide overview of inverse problems in the midst of busy research activities.
A Japanese version was published in 2002.
Martín, Andrés; Barrientos, Antonio; Del Cerro, Jaime
2018-03-22
This article presents a new method to solve the inverse kinematics problem of hyper-redundant and soft manipulators. From an engineering perspective, such robots are underdetermined systems. Therefore, they exhibit an infinite number of solutions for the inverse kinematics problem, and choosing the best one can be a great challenge. A new algorithm based on cyclic coordinate descent (CCD), named natural-CCD, is proposed to solve this issue. It takes its name from generating very harmonious robot movements and trajectories that also appear in nature, such as the golden spiral. In addition, it has been applied to perform continuous trajectories, to develop whole-body movements, to analyze motion planning in complex environments, and to study fault tolerance, for both prismatic and rotational joints. The proposed algorithm is very simple, precise, and computationally efficient. It works for robots in either two or three spatial dimensions and handles a large number of degrees of freedom. Because of this, it aims to break down barriers between discrete hyper-redundant and continuum soft robots.
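A minimal sketch of plain CCD for a planar chain of unit-length links (the paper's natural-CCD variant adds further structure not reproduced here): each sweep rotates every joint, tip to base, so that the end effector moves toward the target.

```python
import math

def fk(angles, link=1.0):
    """Forward kinematics: joint positions of a planar chain of unit links."""
    pts, x, y, th = [(0.0, 0.0)], 0.0, 0.0, 0.0
    for a in angles:
        th += a
        x += link * math.cos(th)
        y += link * math.sin(th)
        pts.append((x, y))
    return pts

def ccd(angles, target, iters=100, tol=1e-6):
    angles = list(angles)
    for _ in range(iters):
        for j in reversed(range(len(angles))):   # sweep tip to base
            pts = fk(angles)
            jx, jy = pts[j]                      # joint j position
            ex, ey = pts[-1]                     # end-effector position
            # Rotate joint j so the joint->effector ray points at the target.
            a1 = math.atan2(ey - jy, ex - jx)
            a2 = math.atan2(target[1] - jy, target[0] - jx)
            angles[j] += a2 - a1
        ex, ey = fk(angles)[-1]
        if math.hypot(ex - target[0], ey - target[1]) < tol:
            break
    return angles

angles = ccd([0.1] * 6, target=(3.0, 2.0))   # 6-DOF arm, reachable target
ex, ey = fk(angles)[-1]
```

Each joint update is a closed-form rotation, which is what makes CCD cheap per iteration and easy to scale to many degrees of freedom.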
Learning the inverse kinetics of an octopus-like manipulator in three-dimensional space.
Giorelli, M; Renda, F; Calisti, M; Arienti, A; Ferri, G; Laschi, C
2015-05-13
This work addresses the inverse kinematics problem of a bioinspired octopus-like manipulator moving in three-dimensional space. The bioinspired manipulator has a conical soft structure that confers the ability of twirling around objects as a real octopus arm does. Despite the simple design, the soft conical shape manipulator driven by cables is described by nonlinear differential equations, which are difficult to solve analytically. Since exact solutions of the equations are not available, the Jacobian matrix cannot be calculated analytically and the classical iterative methods cannot be used. To overcome the intrinsic problems of methods based on the Jacobian matrix, this paper proposes a neural network that learns the inverse kinematics of a soft octopus-like manipulator driven by cables. After the learning phase, a feed-forward neural network is able to represent the relation between manipulator tip positions and forces applied to the cables. Experimental results show that a desired tip position can be achieved in a short time, since heavy computations are avoided, with an accuracy of 8% average relative error with respect to the total arm length.
NASA Astrophysics Data System (ADS)
Chen, Siyu; Zhang, Hanming; Li, Lei; Xi, Xiaoqi; Han, Yu; Yan, Bin
2016-10-01
X-ray computed tomography (CT) has been extensively applied in industrial non-destructive testing (NDT). However, in practical applications, the polychromaticity of the X-ray beam often causes beam hardening problems for image reconstruction. Beam hardening artifacts, which manifest as cupping, streaks, and flares, not only degrade the image quality but also disturb subsequent analyses. Unfortunately, conventional CT scanning requires that the scanned object be completely covered by the field of view (FOV); state-of-the-art beam hardening correction methods consider only this ideal scanning configuration and often fail for interior tomography because of projection truncation. To address this problem, this paper proposes a beam hardening correction method based on the Radon inversion transform for interior tomography. Experimental results show that, compared to conventional correction algorithms, the proposed approach achieves excellent performance in both beam hardening artifact reduction and truncation artifact suppression. The presented method is therefore of theoretical and practical significance for artifact correction in industrial CT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manoli, Gabriele, E-mail: manoli@dmsa.unipd.it; Nicholas School of the Environment, Duke University, Durham, NC 27708; Rossi, Matteo
The modeling of unsaturated groundwater flow is affected by a high degree of uncertainty related to both measurement and model errors. Geophysical methods such as Electrical Resistivity Tomography (ERT) can provide useful indirect information on the hydrological processes occurring in the vadose zone. In this paper, we propose and test an iterated particle filter method to solve the coupled hydrogeophysical inverse problem. We focus on an infiltration test monitored by time-lapse ERT and modeled using Richards equation. The goal is to identify hydrological model parameters from ERT electrical potential measurements. Traditional uncoupled inversion relies on the solution of two sequential inverse problems, the first one applied to the ERT measurements, the second one to Richards equation. This approach does not ensure an accurate quantitative description of the physical state, typically violating mass balance. To avoid one of these two inversions and incorporate in the process more physical simulation constraints, we cast the problem within the framework of a SIR (Sequential Importance Resampling) data assimilation approach that uses a Richards equation solver to model the hydrological dynamics and a forward ERT simulator combined with Archie's law to serve as measurement model. ERT observations are then used to update the state of the system as well as to estimate the model parameters and their posterior distribution. The limitations of the traditional sequential Bayesian approach are investigated and an innovative iterative approach is proposed to estimate the model parameters with high accuracy. The numerical properties of the developed algorithm are verified on both homogeneous and heterogeneous synthetic test cases based on a real-world field experiment.
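A single SIR update can be sketched on a scalar parameter; the identity "forward model" and Gaussian likelihood below are deliberate simplifications of the Richards-equation/ERT measurement model described above:

```python
import numpy as np

# Particles sample one scalar parameter; the forward model mapping the
# parameter to the observation is the identity here (a simplification).
rng = np.random.default_rng(4)
particles = rng.uniform(0.0, 2.0, 1000)      # samples from the prior
obs, sigma = 1.2, 0.1                        # observation and noise level

def sir_update(particles, obs, sigma, rng):
    # Importance weights from the Gaussian measurement likelihood.
    w = np.exp(-0.5 * ((particles - obs) / sigma) ** 2)
    w /= w.sum()
    # Resample particle indices in proportion to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

posterior = sir_update(particles, obs, sigma, rng)
```

In the coupled scheme, the likelihood evaluation for each particle would instead run the forward ERT simulator on the state produced by the Richards solver, so physical constraints such as mass balance are enforced by construction.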
Remote sensing of phytoplankton chlorophyll-a concentration by use of ridge function fields.
Pelletier, Bruno; Frouin, Robert
2006-02-01
A methodology is presented for retrieving phytoplankton chlorophyll-a concentration from space. The data to be inverted, namely, vectors of top-of-atmosphere reflectance in the solar spectrum, are treated as explanatory variables conditioned by angular geometry. This approach leads to a continuum of inverse problems, i.e., a collection of similar inverse problems continuously indexed by the angular variables. The resolution of the continuum of inverse problems is studied from the least-squares viewpoint and yields a solution expressed as a function field over the set of permitted values for the angular variables, i.e., a map defined on that set and valued in a subspace of a function space. The function fields of interest, for reasons of approximation theory, are those valued in nested sequences of subspaces, such as ridge function approximation spaces, the union of which is dense. Ridge function fields constructed on synthetic yet realistic data for case I waters handle well situations of both weakly and strongly absorbing aerosols, and they are robust to noise, showing improvement in accuracy compared with classic inversion techniques. The methodology is applied to actual imagery from the Sea-Viewing Wide Field-of-View Sensor (SeaWiFS); noise in the data is taken into account. The chlorophyll-a concentration obtained with the function field methodology differs from that obtained by use of the standard SeaWiFS algorithm by 15.7% on average. The results empirically validate the underlying hypothesis that the inversion is solved in a least-squares sense. They also show that large levels of noise can be managed if the noise distribution is known or estimated.
Inverse Problems in Complex Models and Applications to Earth Sciences
NASA Astrophysics Data System (ADS)
Bosch, M. E.
2015-12-01
The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model.
At regional scale, joint inversion of gravity and magnetic data is applied for the estimation of the lithological structure of the crust, with the lithotype body regions conditioning the mass density and magnetic susceptibility fields. At planetary scale, the Earth's mantle temperature and element composition are inferred from seismic travel-time and geodetic data.
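The hierarchical factorization described above, in which a primary layer (lithology) conditions a secondary layer (a physical property) that in turn conditions the data, can be illustrated on a discrete grid as p(c, m | d) ∝ p(d | m) p(m | c) p(c). The lithotype means, spreads, and the measurement model below are invented purely for illustration:

```python
import numpy as np

# Toy hierarchical posterior: lithology class c (primary layer) conditions a
# physical property m (secondary layer), which conditions the datum d.
# p(c, m | d) ∝ p(d | m) p(m | c) p(c), evaluated on a discrete grid.
classes = [0, 1]
prior_c = np.array([0.6, 0.4])                 # prior over lithotypes
m_grid = np.linspace(0.0, 4.0, 401)            # property values (arbitrary units)

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Property distribution per lithotype, and a noisy measurement d = m + noise
p_m_given_c = np.stack([gauss(m_grid, 1.0, 0.3),    # lithotype 0
                        gauss(m_grid, 3.0, 0.3)])   # lithotype 1
d_obs = 2.8
likelihood = gauss(d_obs, m_grid, 0.5)              # p(d | m)

joint = prior_c[:, None] * p_m_given_c * likelihood[None, :]
joint /= joint.sum()                                # normalize the posterior
posterior_c = joint.sum(axis=1)                     # marginal over property values
```

Because the datum 2.8 sits close to the property mode of lithotype 1, the posterior marginal strongly favors that class, despite its smaller prior weight; the same factorization logic extends to many layers and many datasets.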
NASA Astrophysics Data System (ADS)
Khachaturov, R. V.
2016-09-01
It is shown that finding the equivalence set for solving multiobjective discrete optimization problems is advantageous over finding the set of Pareto optimal decisions. An example of a set of key parameters characterizing the economic efficiency of a commercial firm is proposed, and a mathematical model of its activities is constructed. In contrast to the classical problem of finding the maximum profit for any business, this study deals with a multiobjective optimization problem. A method for solving inverse multiobjective problems in a multidimensional pseudometric space is proposed for finding the best project of the firm's activities. The solution of a particular problem of this type is presented.
Probabilistic numerical methods for PDE-constrained Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Cockayne, Jon; Oates, Chris; Sullivan, Tim; Girolami, Mark
2017-06-01
This paper develops meshless methods for probabilistically describing discretisation error in the numerical solution of partial differential equations. This construction enables the solution of Bayesian inverse problems while accounting for the impact of the discretisation of the forward problem. In particular, this drives statistical inferences to be more conservative in the presence of significant solver error. Theoretical results are presented describing rates of convergence for the posteriors in both the forward and inverse problems. This method is tested on a challenging inverse problem with a nonlinear forward model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro
2016-07-01
This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.
Inverse dynamic substructuring using the direct hybrid assembly in the frequency domain
NASA Astrophysics Data System (ADS)
D'Ambrogio, Walter; Fregolent, Annalisa
2014-04-01
The paper deals with the identification of the dynamic behaviour of a structural subsystem, starting from the known dynamic behaviour of both the coupled system and the remaining part of the structural system (residual subsystem). This topic is also known as decoupling problem, subsystem subtraction or inverse dynamic substructuring. Whenever it is necessary to combine numerical models (e.g. FEM) and test models (e.g. FRFs), one speaks of experimental dynamic substructuring. Substructure decoupling techniques can be classified as inverse coupling or direct decoupling techniques. In inverse coupling, the equations describing the coupling problem are rearranged to isolate the unknown substructure instead of the coupled structure. On the contrary, direct decoupling consists in adding to the coupled system a fictitious subsystem that is the negative of the residual subsystem. Starting from a reduced version of the 3-field formulation (dynamic equilibrium using FRFs, compatibility and equilibrium of interface forces), a direct hybrid assembly is developed by requiring that both compatibility and equilibrium conditions are satisfied exactly, either at coupling DoFs only, or at additional internal DoFs of the residual subsystem. Equilibrium and compatibility DoFs might not be the same: this generates the so-called non-collocated approach. The technique is applied using experimental data from an assembled system made by a plate and a rigid mass.
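The impedance-subtraction idea behind inverse coupling can be illustrated in its simplest form for a single coupling DoF, where the dynamic stiffness of the unknown subsystem is the difference between the coupled and residual dynamic stiffnesses. The mass, stiffness, and damping values below are illustrative, not taken from the paper:

```python
import numpy as np

# Single-DoF illustration of decoupling by dynamic stiffness subtraction:
# Z_A(w) = Z_coupled(w) - Z_residual(w), then H_A = 1 / Z_A.
w = np.linspace(0.1, 50.0, 500)          # angular frequency axis

def drive_point_frf(m, k, c):
    """Receptance FRF of a single mass-spring-damper: 1 / (k - m w^2 + i c w)."""
    return 1.0 / (k - m * w**2 + 1j * c * w)

m_A, k_A, c_A = 2.0, 800.0, 1.5          # unknown subsystem (to be identified)
m_B, k_B, c_B = 1.0, 300.0, 0.8          # residual subsystem (known)
H_coupled = drive_point_frf(m_A + m_B, k_A + k_B, c_A + c_B)
H_B = drive_point_frf(m_B, k_B, c_B)

# Inverse coupling: subtract dynamic stiffnesses, invert back to an FRF
H_A = 1.0 / (1.0 / H_coupled - 1.0 / H_B)
H_A_true = drive_point_frf(m_A, k_A, c_A)
```

In this noise-free scalar case the subtraction is exact; the matrix version with measured FRFs is far less benign (ill-conditioning near resonances), which is what motivates the direct hybrid assembly and the choice of compatibility/equilibrium DoFs discussed in the paper.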
Localization of synchronous cortical neural sources.
Zerouali, Younes; Herry, Christophe L; Jemel, Boutheina; Lina, Jean-Marc
2013-03-01
Neural synchronization is a key mechanism in a wide variety of brain functions, such as cognition, perception, or memory. The high temporal resolution achieved by EEG recordings allows the study of the dynamical properties of synchronous patterns of activity at a very fine temporal scale, but with very low spatial resolution. Spatial resolution can be improved by retrieving the neural sources of the EEG signal, thus solving the so-called inverse problem. Although many methods have been proposed to solve the inverse problem and localize brain activity, few of them target the synchronous brain regions. In this paper, we propose a novel algorithm aimed at localizing specifically synchronous brain regions and reconstructing the time course of their activity. Using multivariate wavelet ridge analysis, we extract signals capturing the synchronous events buried in the EEG and then solve the inverse problem on these signals. Using simulated data, we compare the source reconstruction accuracy achieved by our method to a standard source reconstruction approach. We show that the proposed method performs better across a wide range of noise levels and source configurations. In addition, we applied our method to a real dataset and successfully identified cortical areas involved in the functional network underlying visual face perception. We conclude that the proposed approach allows an accurate localization of synchronous brain regions and a robust estimation of their activity.
NASA Astrophysics Data System (ADS)
Pontes, P. C.; Naveira-Cotta, C. P.
2016-09-01
The theoretical analysis for the design of microreactors in biodiesel production is a complicated task due to the complex liquid-liquid flow and mass transfer processes, and the transesterification reaction that takes place within these microsystems. Thus, computational simulation is an important tool that aids in understanding the physical-chemical phenomena and, consequently, in determining the suitable conditions that maximize the conversion of triglycerides during the biodiesel synthesis. A diffusive-convective-reactive coupled nonlinear mathematical model, which governs the mass transfer process during the transesterification reaction in parallel-plate microreactors under isothermal conditions, is here described. A hybrid numerical-analytical solution via the Generalized Integral Transform Technique (GITT) for this partial differential system is developed, and the convergence rates of the eigenfunction expansions are extensively analyzed and illustrated. The heuristic method of Particle Swarm Optimization (PSO) is applied in the inverse analysis of the proposed direct problem to estimate the reaction kinetics constants, which is a critical step in the design of such microsystems. The results present a good agreement with the limited experimental data in the literature, and indicate that the GITT methodology combined with the PSO approach provides a reliable computational algorithm for direct-inverse analysis in such reactive mass transfer problems.
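A minimal global-best PSO of the kind used for such parameter estimation can be sketched as follows. The forward model here is a hypothetical first-order decay standing in for the transesterification kinetics, and the swarm parameters are generic textbook values, not those of the paper:

```python
import numpy as np

# Minimal particle swarm optimization (PSO) for an inverse analysis:
# estimate a rate constant kr from noisy observations of C(t) = exp(-kr t).
rng = np.random.default_rng(5)
t_obs = np.linspace(0.0, 2.0, 20)
kr_true = 1.3
c_obs = np.exp(-kr_true * t_obs) + 0.01 * rng.standard_normal(20)

def misfit(kr):
    """Least-squares misfit between model prediction and observations."""
    return np.sum((np.exp(-kr * t_obs) - c_obs) ** 2)

# Swarm state: positions, velocities, personal bests, global best
n_part, n_iter = 30, 200
pos = rng.uniform(0.0, 5.0, n_part)
vel = np.zeros(n_part)
pbest = pos.copy()
pbest_val = np.array([misfit(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]
w_in, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
for _ in range(n_iter):
    r1, r2 = rng.random(n_part), rng.random(n_part)
    vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([misfit(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]
```

Being derivative-free, this kind of search pairs naturally with a semi-analytical forward solver such as the GITT solution: each misfit evaluation is just one forward solve.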
Physics-based Inverse Problem to Deduce Marine Atmospheric Boundary Layer Parameters
2017-03-07
Final Technical Report (with SF 298) for Dr. Erin E. Hackett's ONR grant entitled "Physics-based Inverse Problem to Deduce Marine Atmospheric Boundary Layer Parameters," covering the period Dec 2012 to Dec 2016. The report describes research results related to the development and implementation of an inverse problem approach for deducing marine atmospheric boundary layer parameters.
The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions
NASA Astrophysics Data System (ADS)
Loaiciga, Hugo A.; Mariño, Miguel A.
1987-01-01
The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared between themselves and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors as compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and perform hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases, are also discussed. A simulation study is used to evaluate the methods developed in this study.
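The contrast between ordinary and two-stage least squares can be demonstrated on a toy endogenous-regressor problem (not the aquifer flow equations of the paper): when the regressor is correlated with the error, OLS is biased, while instrumenting it in a first stage restores consistency. All numbers below are illustrative:

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: beta = (X'X)^{-1} X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def tsls(X, Z, y):
    """Two-stage least squares with instrument matrix Z.

    Stage 1: project the (possibly endogenous) regressors X onto Z.
    Stage 2: OLS of y on the projected regressors.
    """
    Xhat = Z @ ols(Z, X)          # fitted values from the first stage
    return ols(Xhat, y)

# Toy demonstration of endogeneity bias and its removal
rng = np.random.default_rng(1)
n = 50_000
z = rng.standard_normal((n, 1))            # exogenous instrument
u = rng.standard_normal((n, 1))            # confounder
x = z + u                                  # regressor correlated with the error
y = 1.5 * x + u + 0.1 * rng.standard_normal((n, 1))   # error contains u
beta_ols = ols(x, y)[0, 0]                 # biased (tends to 2.0 here)
beta_2sls = tsls(x, z, y)[0, 0]            # consistent (tends to 1.5)
```

With this data-generating process the OLS slope converges to 2.0 while 2SLS recovers the true 1.5; in the groundwater setting, the "instruments" are the exogenous quantities that make the head-parameter system identifiable.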
A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem
Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.
2013-01-01
Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. 
These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
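A minimal subspace pursuit iteration (in the spirit of the Dai-Milenkovic algorithm on which SPIGH builds) can be sketched for a generic sparse recovery problem. The random Gaussian operator below is a stand-in for the MEG lead-field matrix, which in practice is highly correlated and requires the hierarchical treatment developed in the paper:

```python
import numpy as np

def subspace_pursuit(A, y, k, n_iter=10):
    """Recover a k-sparse x from y = A x by subspace pursuit."""
    m, n = A.shape
    # Initial support: k columns most correlated with y
    support = np.argsort(np.abs(A.T @ y))[-k:]
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(n_iter):
        # Expand the support with k columns most correlated with the residual
        candidates = np.union1d(support,
                                np.argsort(np.abs(A.T @ residual))[-k:])
        # Least squares on the expanded support, then prune back to k entries
        coef, *_ = np.linalg.lstsq(A[:, candidates], y, rcond=None)
        support = candidates[np.argsort(np.abs(coef))[-k:]]
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        new_residual = y - A @ x
        if np.linalg.norm(new_residual) >= np.linalg.norm(residual):
            break                      # stop when the fit no longer improves
        residual = new_residual
    return x

rng = np.random.default_rng(2)
n, m, k = 256, 80, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_rec = subspace_pursuit(A, A @ x_true, k)
```

Each iteration costs one correlation and two small least-squares solves, which is the source of the low computational complexity the abstract emphasizes.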
A fast marching algorithm for the factored eikonal equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Treister, Eran, E-mail: erantreister@gmail.com; Haber, Eldad, E-mail: haber@math.ubc.ca; Department of Mathematics, The University of British Columbia, Vancouver, BC
The eikonal equation is instrumental in many applications in several fields ranging from computer vision to geoscience. This equation can be efficiently solved using the iterative Fast Sweeping (FS) methods and the direct Fast Marching (FM) methods. However, when used for a point source, the original eikonal equation is known to yield inaccurate numerical solutions, because of a singularity at the source. In this case, the factored eikonal equation is often preferred, and is known to yield a more accurate numerical solution. One application that requires the solution of the eikonal equation for point sources is travel time tomography. This inverse problem may be formulated using the eikonal equation as a forward problem. While this problem has been solved using FS in the past, the more recent choice for applying it involves FM methods because of the efficiency with which sensitivities can be obtained using them. However, while several FS methods are available for solving the factored equation, the FM method is available only for the original eikonal equation. In this paper we develop a Fast Marching algorithm for the factored eikonal equation, using both first and second order finite-difference schemes. Our algorithm follows the same lines as the original FM algorithm and requires the same computational effort. In addition, we show how to obtain sensitivities using this FM method and apply travel time tomography, formulated as an inverse factored eikonal equation. Numerical results in two and three dimensions show that our algorithm solves the factored eikonal equation efficiently, and demonstrate the achieved accuracy for computing the travel time. We also demonstrate a recovery of a 2D and 3D heterogeneous medium by travel time tomography using the eikonal equation for forward modeling and inversion by Gauss–Newton.
Refraction traveltime tomography based on damped wave equation for irregular topographic model
NASA Astrophysics Data System (ADS)
Park, Yunhui; Pyun, Sukjoon
2018-03-01
Land seismic data generally have time-static issues due to irregular topography and weathered layers at shallow depths. Unless the time static is handled appropriately, interpretation of the subsurface structures can be easily distorted. Therefore, static corrections are commonly applied to land seismic data. The near-surface velocity, which is required for static corrections, can be inferred from first-arrival traveltime tomography, which must consider the irregular topography, as land seismic data are generally obtained over irregular topography. This paper proposes a refraction traveltime tomography technique that is applicable to an irregular topographic model. This technique uses unstructured meshes to express an irregular topography, and traveltimes are calculated from the frequency-domain damped wavefields using the finite element method. The diagonal elements of the approximate Hessian matrix are adopted for preconditioning, and the principle of reciprocity is introduced to efficiently calculate the Fréchet derivative. We also include regularization to resolve the ill-posed inverse problem, and use the nonlinear conjugate gradient method to solve the inverse problem. As the damped wavefields are used, there are no issues associated with artificial reflections caused by unstructured meshes. In addition, the shadow zone problem can be circumvented because this method is based on the exact wave equation, which does not require a high-frequency assumption. Furthermore, the proposed method is both robust to the initial velocity model and efficient compared to full wavefield inversions. Through synthetic and field data examples, our method is shown to successfully reconstruct shallow velocity structures. To verify our method, static corrections were roughly applied to the field data using the estimated near-surface velocity.
By comparing common shot gathers and stack sections with and without static corrections, we confirmed that the proposed tomography algorithm can be used to correct the statics of land seismic data.
Analysing 21cm signal with artificial neural network
NASA Astrophysics Data System (ADS)
Shimabukuro, Hayato; Semelin, Benoit
2018-05-01
The 21cm signal at the epoch of reionization (EoR) should be observed within the next decade. We expect the cosmic 21cm signal at the EoR to provide both cosmological and astrophysical information. In order to extract fruitful information from observation data, we need to develop an inversion method. For such a method, we introduce the artificial neural network (ANN), one of the machine learning techniques. We apply the ANN to the inversion problem of constraining astrophysical parameters from the 21cm power spectrum. We train the architecture of the neural network with 70 training datasets and apply it to 54 test datasets with different parameter values. We find that the quality of the parameter reconstruction depends on the sensitivity of the power spectrum to the different parameter sets at a given redshift, and that the accuracy of reconstruction is improved by increasing the number of given redshifts. We conclude that the ANN is a viable inversion method whose main strength is that it requires only a sparse sampling of the parameter space and thus should be usable with full simulations.
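A from-scratch sketch of the idea (a small feed-forward network trained to map summary statistics back to a parameter) is given below. The three-component "spectrum", the network size, and the training settings are invented for illustration and are far simpler than a real 21cm power spectrum:

```python
import numpy as np

rng = np.random.default_rng(3)

def forward_model(theta):
    """Toy 'power spectrum': three smooth summary statistics of theta."""
    return np.stack([theta, theta**2, np.sin(theta)], axis=1)

# Training set: sample parameters, simulate their summary statistics
theta_train = rng.uniform(0.0, 2.0, 400)
X_raw = forward_model(theta_train)
Xm, Xs = X_raw.mean(axis=0), X_raw.std(axis=0)
X = (X_raw - Xm) / Xs                        # standardize the inputs
t = theta_train[:, None]

# One-hidden-layer tanh network trained by full-batch gradient descent
W1 = rng.standard_normal((3, 16)) * 0.5
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5
b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    pred = h @ W2 + b2                       # network output
    err = pred - t
    # Backpropagated mean-squared-error gradients
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
train_mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - t) ** 2))

# Invert unseen 'observations' back to the parameter
theta_test = np.array([0.5, 1.0, 1.5])
X_test = (forward_model(theta_test) - Xm) / Xs
pred_test = np.tanh(X_test @ W1 + b1) @ W2 + b2
```

Once trained, inversion of a new observation is a single cheap forward pass through the network, which is why only a sparse set of expensive simulations is needed to cover the parameter space.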
NASA Astrophysics Data System (ADS)
Pedesseau, Laurent; Jouanna, Paul
2004-12-01
The SASP (semianalytical stochastic perturbations) method is an original mixed macro-nano-approach dedicated to the mass equilibrium of multispecies phases, periphases, and interphases. This general method, applied here to the reflexive relation Ck⇔μk between the concentrations Ck and the chemical potentials μk of k species within a fluid in equilibrium, leads to the distribution of the particles at the atomic scale. The macroaspects of the method, based on analytical Taylor expansions of the chemical potentials, are intimately mixed with the nanoaspects of molecular mechanics computations on stochastically perturbed states. This numerical approach, directly linked to definitions, is universal by comparison with current approaches (DLVO Derjaguin-Landau-Verwey-Overbeek, grand canonical Monte Carlo, etc.), without any restriction on the number of species, concentrations, or boundary conditions. The determination of the relation Ck⇔μk implies in fact two problems: a direct problem Ck⇒μk and an inverse problem μk⇒Ck. Validation of the method is demonstrated in case studies A and B, which treat, respectively, a direct problem and an inverse problem within a free saturated gypsum solution. The flexibility of the method is illustrated in case study C, dealing with an inverse problem within a solution interphase, confined between two (120) gypsum faces, remaining in connection with a reference solution. This last inverse problem leads to the mass equilibrium of ions and water molecules within a 3 Å thick gypsum interface. The major unexpected observation is the repulsion of SO4^2- ions towards the reference solution and the attraction of Ca^2+ ions from the reference solution, the concentration being 50 times higher within the interphase as compared to the free solution. The SASP method is currently the only approach able to tackle the simulation of the number and distribution of ions plus water molecules in such extreme confined conditions.
This result is of prime importance for all coupled chemical-mechanical problems dealing with interfaces, and more generally for a wide variety of applications such as phase changes, osmotic equilibrium, surface energy, etc., in complex chemical-physics situations.
3D CSEM inversion based on goal-oriented adaptive finite element method
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.
2016-12-01
We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt the dual-grid approach where the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent from the fine meshing generated around the transmitters and receivers by the adaptive finite element method. Moreover, the unstructured inverse mesh efficiently handles multiple scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the map of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability is obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library.
We show the scalability, reliability and the potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled source EM data generated for a complex 3D offshore model with significant seafloor topography.
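The damped Gauss-Newton update at the core of Occam-style inversion can be sketched in miniature. Here a hypothetical two-parameter nonlinear map replaces the 3D CSEM forward operator, and a fixed regularization weight `mu` stands in for Occam's adaptively chosen Lagrange multiplier:

```python
import numpy as np

def gauss_newton(forward, jacobian, d_obs, m0, mu, n_iter=20):
    """Regularized Gauss-Newton: minimize 0.5||F(m)-d||^2 + 0.5*mu*||m||^2."""
    m = m0.copy()
    for _ in range(n_iter):
        J = jacobian(m)                       # sensitivity matrix at m
        r = forward(m) - d_obs                # data residual
        # Normal equations of the linearized, damped problem
        H = J.T @ J + mu * np.eye(len(m))
        g = J.T @ r + mu * m
        m = m - np.linalg.solve(H, g)
    return m

# Toy nonlinear forward operator F(m) = [m1*m2, m1 + m2**2]
forward = lambda m: np.array([m[0] * m[1], m[0] + m[1] ** 2])
jacobian = lambda m: np.array([[m[1], m[0]], [1.0, 2.0 * m[1]]])
m_true = np.array([2.0, 3.0])
d_obs = forward(m_true)
m_est = gauss_newton(forward, jacobian, d_obs, np.array([1.0, 1.0]), mu=1e-8)
```

In the full-scale code the dense products `J.T @ J` and the solve are exactly the operations distributed with ScaLAPACK, and `J` is assembled column-wise from the adjoint-reciprocity sensitivities rather than by an analytic formula.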
NASA Astrophysics Data System (ADS)
Jardani, A.; Soueid Ahmed, A.; Revil, A.; Dupont, J.
2013-12-01
Pumping tests are usually employed to predict the hydraulic conductivity field from the inversion of head measurements. Nevertheless, the inverse problem is strongly underdetermined and a reliable imaging requires a considerable number of wells. We propose to add more information to the inversion of the heads by adding (non-intrusive) streaming potential (SP) data. The SP corresponds to perturbations in the local electrical field caused directly by the flow of groundwater. These SP data are obtained with a set of non-polarising electrodes installed at the ground surface. We developed a geostatistical method for the estimation of the hydraulic conductivity field from measurements of hydraulic heads and SP during pumping and injection experiments. We use the adjoint-state method and a recent petrophysical formulation of the streaming potential problem in which the streaming coupling coefficient is derived from the hydraulic conductivity, which reduces the number of unknown parameters. The geostatistical inverse framework is applied to three synthetic case studies with different numbers of wells and electrodes used to measure the hydraulic heads and the streaming potentials. To evaluate the benefits of incorporating the streaming potential data, we compared the cases in which the data are coupled or not to map the hydraulic conductivity. The results of the inversion revealed that a dense distribution of electrodes can be used to infer the heterogeneities in the hydraulic conductivity field. Incorporating the streaming potential information into the hydraulic head data improves the estimate of the hydraulic conductivity field, especially when the number of piezometers is limited.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oleinikov, A. I., E-mail: a.i.oleinikov@mail.ru; Bormotin, K. S., E-mail: cvmi@knastu.ru
It is shown that inverse problems of steady-state creep bending of plates in both the geometrically linear and nonlinear formulations can be represented in a variational formulation. Steady-state values of the obtained functionals corresponding to the solutions of the problems of inelastic deformation and springback are determined by applying a finite element procedure to the functionals. Optimal laws of creep deformation are formulated using the criterion of minimizing damage in the functionals of the inverse problems. The formulated problems are reduced to problems solved by the finite element method using MSC.Marc software. Currently, forming of light metals poses tremendous challenges due to their low ductility at room temperature and their unusual deformation characteristics under hot-cold work: strong asymmetry between tensile and compressive behavior, and a very pronounced anisotropy. We used constitutive models of steady-state creep for initially transversely isotropic structural materials in which the kind of stress state has an influence. The paper gives the basics of the developed computer-aided system for design, modeling, and electronic simulation targeting the processes of manufacture of wing integral panels. The modeling results can be used to calculate the die tooling, determine panel processibility, and control panel rejection in the course of forming.
NASA Astrophysics Data System (ADS)
López Comino, José Ángel; Stich, Daniel; Ferreira, Ana M. G.; Morales Soto, José
2015-04-01
The inversion of seismic data for extended fault slip distributions provides us with detailed models of earthquake sources. The validity of the solutions depends on the fit between observed and synthetic seismograms generated with the source model. However, there may exist more than one model that fits the data in a similar way, leading to a multiplicity of solutions. This underdetermined problem has been analyzed and studied by several authors, who agree that inverting for a single best model may become overly dependent on the details of the procedure. We have addressed this resolution problem by using a global search that scans the solution domain using random slipmaps, applying a Popperian inversion strategy that involves the generation of a representative set of slip distributions. The proposed technique solves the forward problem for a large set of models, calculating their corresponding synthetic seismograms. Then, we perform extended fault inversion through falsification, that is, we falsify inappropriate trial models that do not reproduce the data within a reasonable level of mismodelling. The remainder of surviving trial models forms our set of coequal solutions. Thereby the ambiguities that might exist can be detected by inspecting the solutions, allowing for an efficient assessment of the resolution. The solution set may contain only members with similar slip distributions, or else uncover some fundamental ambiguity like, for example, different patterns of main slip patches or different patterns of rupture propagation. For a feasibility study, the proposed resolution test has been evaluated using teleseismic body wave recordings from the September 5th 2012 Nicoya, Costa Rica earthquake. Note that the inversion strategy can be applied to any type of seismic, geodetic or tsunami data for which we can handle the forward problem.
A 2D von Karman distribution is used to describe the spectrum of heterogeneity in slipmaps, and we generate possible models by spectral synthesis for random phase, keeping the rake angle, rupture velocity and slip velocity function fixed. The 2012 Nicoya earthquake turns out to be relatively well constrained from 50 teleseismic waveforms. The solution set contains 252 out of 10,000 trial models with normalized L1-fit within 5 percent of the global minimum. The set includes only similar solutions (a single centred slip patch) with minor differences. Uncertainties are related to the details of the slip maximum, including the amount of peak slip (2 m to 3.5 m), as well as the characteristics of peripheral slip below 1 m. Synthetic tests suggest that slip patterns like Nicoya may be a fortunate case, while it may be more difficult to unambiguously reconstruct more distributed slip from teleseismic data.
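The falsification strategy (keep every trial model whose synthetic data fit the observations within a mismodelling tolerance, rather than searching for one best model) can be sketched with a toy linear forward operator standing in for seismogram synthesis; the operator, tolerance, and trial counts below are all illustrative:

```python
import numpy as np

# Falsification-style ensemble inversion: the survivors of the misfit test
# form the set of coequal solutions, whose spread measures the resolution.
rng = np.random.default_rng(4)

def forward(m):
    """Hypothetical linear forward problem standing in for seismogram synthesis."""
    G = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
    return G @ m

m_true = np.array([1.0, 2.0])
d_obs = forward(m_true) + 0.05 * rng.standard_normal(3)   # noisy 'data'

trials = rng.uniform(0.0, 3.0, size=(10_000, 2))          # random trial models
misfit = np.array([np.sum(np.abs(forward(m) - d_obs)) for m in trials])
threshold = misfit.min() + 0.05          # absolute mismodelling tolerance
survivors = trials[misfit <= threshold]  # the set of coequal solutions
```

If the survivors cluster around a single point, the problem is well resolved; widely separated survivor clusters would reveal exactly the kind of fundamental ambiguity the abstract describes.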
NASA Astrophysics Data System (ADS)
Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.
2017-10-01
Over recent decades, a number of fast approximate solutions of the Lippmann-Schwinger equation, more accurate than the classic Born and Rytov approximations, have been proposed in the field of electromagnetic modeling. Those developments extend naturally to acoustic and elastic fields; however, until recently, they were almost unknown in seismology. This paper presents several solutions of this kind applied to acoustic modeling of both lossy and lossless media. We evaluate the numerical merits of these methods and provide an estimate of their computational complexity. Our numerical realization uses a matrix-free implementation of the corresponding integral operator. We study the accuracy of these approximate solutions and demonstrate that the quasi-analytical approximation is more accurate than the Born approximation. Further, we apply the quasi-analytical approximation to the solution of the inverse problem and demonstrate that this approach improves the estimation of the data gradient compared to the Born approximation. The developed inversion algorithm is based on conjugate-gradient-type optimization. A numerical model study demonstrates that the quasi-analytical solution significantly reduces the computation time of seismic full-waveform inversion. We also show how the quasi-analytical approximation can be extended to the case of elastic wavefields.
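The accuracy gain from going beyond the Born approximation can be illustrated on a discretized Lippmann-Schwinger equation u = u0 + GVu. The sketch below uses random stand-in matrices and a generic higher-order Neumann correction, not the paper's quasi-analytical approximation itself:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# Discretized Lippmann-Schwinger equation u = u0 + G V u, with random
# stand-ins for the Green's operator G and scattering potential V, scaled
# so that the Neumann series converges (weak scattering).
u0 = rng.normal(size=n)
GV = 0.3 * rng.normal(size=(n, n)) / np.sqrt(n)

u_full = np.linalg.solve(np.eye(n) - GV, u0)   # "exact" scattered field
u_born = u0 + GV @ u0                          # Born approximation
u_2nd = u0 + GV @ u0 + GV @ (GV @ u0)          # one higher-order correction

def rel_err(u):
    return np.linalg.norm(u - u_full) / np.linalg.norm(u_full)

print(f"Born error: {rel_err(u_born):.2e}")
print(f"second-order error: {rel_err(u_2nd):.2e}")
```

Each extra term costs one more operator application but no new matrix solve, which is the kind of trade-off that makes such approximations attractive inside an inversion loop.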
NASA Astrophysics Data System (ADS)
Podlipenko, Yu. K.; Shestopalov, Yu. V.
2017-09-01
We investigate the guaranteed estimation problem for linear functionals of solutions to transmission problems for the Helmholtz equation with inexact data. The right-hand sides of the equations entering the statements of the transmission problems, and the statistical characteristics of the observation errors, are assumed to be unknown but belonging to certain sets. It is shown that the optimal linear mean-square estimates of the above-mentioned functionals, and the estimation errors, are expressed via solutions to systems of transmission problems of a special type. The results and techniques can be applied to the analysis and estimation of solutions to forward and inverse electromagnetic and acoustic problems with uncertain data that arise in mathematical models of wave diffraction on transparent bodies.
Bayesian inversion of refraction seismic traveltime data
NASA Astrophysics Data System (ADS)
Ryberg, T.; Haberland, Ch
2018-03-01
We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when far-offset observations are used, are known for poor experimental geometries, making the problem highly ill-posed and far from ideal. As a consequence, structural resolution quickly degrades with depth. Conventional inversion techniques based on regularization are sensitive to the choice of inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and explore the model space only locally. McMC techniques instead sample the model space exhaustively, without prior knowledge (or assumptions) of these inversion parameters, resulting in a large number of models fitting the observations. Statistical analysis of these models yields an average (reference) solution and its standard deviation, thus providing uncertainty estimates for the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not permit deriving a reference solution and error map by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution from the posterior values in regions of poor ray coverage, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey in northern Namibia and compared to conventional tomography.
An inversion test for a synthetic data set from a known model is also presented.
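A minimal Metropolis sampler for a one-parameter traveltime problem illustrates the Bayesian McMC workflow described above (the straight-ray, single-slowness forward model is a hypothetical stand-in for the eikonal solver):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy traveltime data: a single homogeneous layer of slowness s_true,
# t_i = s_true * x_i + noise (stand-in for the finite-difference forward solver).
x = np.linspace(5.0, 50.0, 20)           # source-receiver offsets, km
s_true, sigma = 1.0 / 4.0, 0.01          # slowness in s/km, noise level in s
t_obs = s_true * x + rng.normal(0.0, sigma, size=x.size)

def log_like(s):
    return -0.5 * np.sum((t_obs - s * x) ** 2) / sigma ** 2

# Metropolis sampling of the posterior with a flat prior on [0.1, 1.0] s/km.
samples, s = [], 0.5
ll = log_like(s)
for _ in range(20000):
    s_new = s + rng.normal(0.0, 0.002)   # random-walk proposal
    if 0.1 <= s_new <= 1.0:
        ll_new = log_like(s_new)
        if np.log(rng.uniform()) < ll_new - ll:   # Metropolis acceptance
            s, ll = s_new, ll_new
    samples.append(s)

post = np.array(samples[5000:])          # discard burn-in
print(f"mean slowness {post.mean():.4f} +/- {post.std():.4f} s/km")
```

The ensemble mean and standard deviation play the role of the reference solution and uncertainty map described in the abstract; the real algorithm additionally samples over the number and geometry of Voronoi cells.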
Asteroseismic inversions in the Kepler era: application to the Kepler Legacy sample
NASA Astrophysics Data System (ADS)
Buldgen, Gaël; Reese, Daniel; Dupret, Marc-Antoine
2017-10-01
In the past few years, the CoRoT and Kepler missions have carried out what is now called the space photometry revolution. This revolution is still ongoing thanks to K2 and will be continued by the Tess and Plato2.0 missions. However, the photometry revolution must also be followed by progress in stellar modelling, in order to lead to more precise and accurate determinations of fundamental stellar parameters such as masses, radii and ages. In this context, the long-standing problems related to mixing processes in stellar interiors are the main obstacle to further improvements of stellar modelling. In this contribution, we apply structural asteroseismic inversion techniques to targets from the Kepler Legacy sample and analyse how these can help us constrain the fundamental parameters and mixing processes in these stars. Our approach is based on previous studies using the SOLA inversion technique [1] to determine integrated quantities such as the mean density [2], the acoustic radius, and core condition indicators [3], and has already been successfully applied to the 16Cyg binary system [4]. We show how this technique can be applied to the Kepler Legacy sample and how new indicators can help us further constrain the chemical composition profiles of stars as well as provide stringent constraints on stellar ages.
NASA Astrophysics Data System (ADS)
Ialongo, S.; Cella, F.; Fedi, M.; Florio, G.
2011-12-01
Most geophysical inversion problems are characterized by a number of unknown parameters considerably higher than the number of data, which amounts to solving highly underdetermined systems. To obtain a unique solution, a priori information must therefore be introduced. We analyze here the inversion of the gravity gradient tensor (GGT). Previous approaches to inverting several gradient components, jointly or independently, are those of Li (2001), who proposed an algorithm using a depth weighting function, and Zhdanov et al. (2004), who provided a well-focused inversion of gradient data. Both methods give a much-improved solution compared with the minimum-length solution, which is invariably shallow and not representative of the true source distribution. For very underdetermined problems, this improvement is due to the role of the depth weighting matrices used by both methods. Recently, however, Cella and Fedi (2011) showed that for magnetic and gravity data the depth weighting function has to be defined carefully, through a preliminary application of Euler Deconvolution or Depth from Extreme Point methods yielding the appropriate structural index, which is then used as the decay rate of the weighting function. We therefore propose to extend this approach to inverting the GGT jointly or independently, using the structural index as the weighting function decay rate. In a joint inversion, gravity data can be added as well. This multicomponent case is also relevant because the simultaneous use of several components and gravity increases the number of data and reduces the algebraic ambiguity compared to the inversion of a single component. Such a reduction of ambiguity was shown by Fedi et al. (2005) to be decisive in obtaining improved depth resolution in inverse problems, independently of any form of depth weighting function.
The method is demonstrated on synthetic cases and applied to real ones, such as the Vredefort impact area (South Africa), characterized by a complex density distribution that well defines a central uplift area, ring structures and low-density sediments. REFERENCES: Cella F. and Fedi M., 2011, Inversion of potential field data using the structural index as weighting function rate decay: Geophysical Prospecting, doi: 10.1111/j.1365-2478.2011.00974.x. Fedi M., Hansen P. C. and Paoletti V., 2005, Analysis of depth resolution in potential-field inversion: Geophysics, 70, no. 6. Li Y., 2001, 3-D inversion of gravity gradiometry data: 71st Annual Meeting, SEG, Expanded Abstracts, 1470-1473. Zhdanov M. S., Ellis R. G. and Mukherjee S., 2004, Regularized focusing inversion of 3-D gravity tensor data: Geophysics, 69, 925-937.
A Forward Glimpse into Inverse Problems through a Geology Example
ERIC Educational Resources Information Center
Winkel, Brian J.
2012-01-01
This paper describes a forward approach to an inverse problem related to detecting the nature of geological substrata, which makes use of optimization techniques in a multivariable calculus setting. The true nature of the related inverse problem is highlighted. (Contains 2 figures.)
Using field inversion to quantify functional errors in turbulence closures
NASA Astrophysics Data System (ADS)
Singh, Anand Pratap; Duraisamy, Karthik
2016-04-01
A data-informed approach is presented with the objective of quantifying errors and uncertainties in the functional forms of turbulence closure models. The approach creates modeling information from higher-fidelity simulations and experimental data. Specifically, a Bayesian formalism is adopted to infer discrepancies in the source terms of transport equations. A key enabling idea is the transformation of the functional inversion procedure (which is inherently infinite-dimensional) into a finite-dimensional problem in which the distribution of the unknown function is estimated at discrete mesh locations in the computational domain. This allows for the use of an efficient adjoint-driven inversion procedure. The output of the inversion is a full field of discrepancy that provides hitherto inaccessible modeling information. The utility of the approach is demonstrated by applying it to a number of problems including channel flow, shock-boundary layer interactions, and flows with curvature and separation. In all these cases, the posterior model correlates well with the data. Furthermore, it is shown that even if limited data (such as surface pressures) are used, the accuracy of the inferred solution is improved over the entire computational domain. The results suggest that, by directly addressing the connection between physical data and model discrepancies, the field inversion approach materially enhances the value of computational and experimental data for model improvement. The resulting information can be used by the modeler as a guiding tool to design more accurate model forms, or serve as input to machine learning algorithms to directly replace deficient modeling terms.
Nonstationary Deformation of an Elastic Layer with Mixed Boundary Conditions
NASA Astrophysics Data System (ADS)
Kubenko, V. D.
2016-11-01
The analytic solution to the plane problem for an elastic layer under a nonstationary surface load is found for mixed boundary conditions: normal stress and tangential displacement are specified on one side of the layer (the fourth boundary-value problem of elasticity), and tangential stress and normal displacement on the other side (the second boundary-value problem of elasticity). The Laplace and Fourier integral transforms are applied. The inverse Laplace and Fourier transforms are found exactly using tabulated formulas and convolution theorems for various nonstationary loads. Explicit analytical expressions for stresses and displacements are derived. Loads applied to a constant surface area and to a surface area varying in a prescribed manner are considered. Computations demonstrate the dependence of the normal stress on time and the spatial coordinates. Features of the wave processes are analyzed.
NASA Astrophysics Data System (ADS)
Ray, Anandaroop; Key, Kerry; Bodin, Thomas; Myer, David; Constable, Steven
2014-12-01
We apply a reversible-jump Markov chain Monte Carlo method to sample the Bayesian posterior model probability density function of 2-D seafloor resistivity as constrained by marine controlled source electromagnetic data. This density function of earth models conveys information on which parts of the model space are illuminated by the data. Whereas conventional gradient-based inversion approaches require subjective regularization choices to stabilize this highly non-linear and non-unique inverse problem and provide only a single solution with no model uncertainty information, the method we use entirely avoids model regularization. The result of our approach is an ensemble of models that can be visualized and queried to provide meaningful information about the sensitivity of the data to the subsurface, and the level of resolution of model parameters. We represent models in 2-D using a Voronoi cell parametrization. To make the 2-D problem practical, we use a source-receiver common midpoint approximation with 1-D forward modelling. Our algorithm is transdimensional and self-parametrizing where the number of resistivity cells within a 2-D depth section is variable, as are their positions and geometries. Two synthetic studies demonstrate the algorithm's use in the appraisal of a thin, segmented, resistive reservoir which makes for a challenging exploration target. As a demonstration example, we apply our method to survey data collected over the Scarborough gas field on the Northwest Australian shelf.
A Tensor-Train accelerated solver for integral equations in complex geometries
NASA Astrophysics Data System (ADS)
Corona, Eduardo; Rahimian, Abtin; Zorin, Denis
2017-04-01
We present a framework using the Quantized Tensor Train (QTT) decomposition to accurately and efficiently solve volume and boundary integral equations in three dimensions. We describe how the QTT decomposition can be used as a hierarchical compression and inversion scheme for matrices arising from the discretization of integral equations. For a broad range of problems, the computational and storage costs of the inversion scheme are extremely modest, O(log N), and once the inverse is computed, it can be applied in O(N log N). We analyze the QTT ranks for hierarchically low-rank matrices and discuss the relationship to commonly used hierarchical compression techniques such as FMM and HSS. We prove that the QTT ranks are bounded for translation-invariant systems and argue that this behavior extends to non-translation-invariant volume and boundary integrals. For volume integrals, the QTT decomposition provides an efficient direct solver requiring significantly less memory compared to other fast direct solvers. We present results demonstrating the remarkable performance of the QTT-based solver when applied to both translation- and non-translation-invariant volume integrals in 3D. For boundary integral equations, we demonstrate that using a QTT decomposition to construct preconditioners for a Krylov subspace method leads to an efficient and robust solver with a small memory footprint. We test the QTT preconditioners in the iterative solution of an exterior elliptic boundary value problem (Laplace) formulated as a boundary integral equation in complex, multiply connected geometries.
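The core TT-SVD idea behind QTT compression, reshaping a length-2^d vector into d binary dimensions and compressing by successive SVDs, can be sketched as follows (a minimal version; production QTT solvers involve far more machinery):

```python
import numpy as np

def qtt_decompose(vec, eps=1e-12):
    """TT-SVD of a length-2**d vector quantized into d binary dimensions."""
    d = int(np.log2(vec.size))
    cores, r = [], 1
    C = vec.astype(float)
    for _ in range(d - 1):
        C = C.reshape(r * 2, -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))   # truncate small modes
        cores.append(U[:, :rank].reshape(r, 2, rank))
        C = s[:rank, None] * Vt[:rank]
        r = rank
    cores.append(C.reshape(r, 2, 1))
    return cores

def qtt_reconstruct(cores):
    """Contract the train of 3-way cores back into a full vector."""
    full = cores[0].reshape(-1, cores[0].shape[2])
    for c in cores[1:]:
        full = (full @ c.reshape(c.shape[0], -1)).reshape(-1, c.shape[2])
    return full.reshape(-1)

# A smooth (here, exponential) vector of length 2**16 compresses to rank 1:
# 16 tiny cores instead of 65536 stored values.
x = np.exp(0.0001 * np.arange(2 ** 16))
cores = qtt_decompose(x)
ranks = [c.shape[2] for c in cores[:-1]]
print("QTT ranks:", ranks)
print("max reconstruction error:", np.abs(qtt_reconstruct(cores) - x).max())
```

The logarithmic storage claim in the abstract rests on exactly this behavior: for structured operators the ranks stay bounded while the number of cores grows only as log N.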
Bayesian inference for disease prevalence using negative binomial group testing
Pritchard, Nicholas A.; Tebbs, Joshua M.
2011-01-01
Group testing, also known as pooled testing, and inverse sampling are both widely used methods of data collection when the goal is to estimate a small proportion. Taking a Bayesian approach, we consider the new problem of estimating disease prevalence from group testing when inverse (negative binomial) sampling is used. Using different distributions to incorporate prior knowledge of disease incidence and different loss functions, we derive closed form expressions for posterior distributions and resulting point and credible interval estimators. We then evaluate our new estimators, on Bayesian and classical grounds, and apply our methods to a West Nile Virus data set. PMID:21259308
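A grid-based version of this Bayesian calculation can be sketched as follows (the paper derives closed-form posteriors under various priors and loss functions; here a uniform prior and a numerical grid stand in for those results, and all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

s, p_true, r = 5, 0.05, 10           # pool size, true prevalence, positives needed
theta = 1.0 - (1.0 - p_true) ** s    # probability that a pool tests positive

# Inverse (negative binomial) group-testing sampling: keep testing pools of
# size s until r positive pools have been observed.
pools, positives = 0, 0
while positives < r:
    pools += 1
    positives += rng.uniform() < theta

# Posterior over prevalence p on a grid, Beta(1, 1) (uniform) prior on p;
# likelihood: r positive pools and (pools - r) negative pools observed.
p = np.linspace(1e-4, 0.5, 2000)
th = 1.0 - (1.0 - p) ** s
log_post = r * np.log(th) + (pools - r) * np.log(1.0 - th)
post = np.exp(log_post - log_post.max())
post /= post.sum()

mean = (p * post).sum()
print(f"tested {pools} pools; posterior mean prevalence {mean:.3f}")
```

Credible intervals follow by accumulating the normalized posterior; the paper's closed-form estimators make this grid step unnecessary in practice.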
Convergent radial dispersion: A note on evaluation of the Laplace transform solution
Moench, Allen F.
1991-01-01
A numerical inversion algorithm for Laplace transforms that is capable of handling rapid changes in the computed function is applied to the Laplace transform solution to the problem of convergent radial dispersion in a homogeneous aquifer. Prior attempts by the author to invert this solution were unsuccessful for highly advective systems where the Peclet number was relatively large. The algorithm used in this note allows for rapid and accurate inversion of the solution for all Peclet numbers of practical interest, and beyond. Dimensionless breakthrough curves are illustrated for tracer input in the form of a step function, a Dirac impulse, or a rectangular input.
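For illustration, a simple numerical Laplace-transform inversion can be written with the Stehfest algorithm. Note this is a generic sketch, not the algorithm of the note; Stehfest-type schemes are known to struggle with rapid changes in the computed function, which is precisely the limitation the note's algorithm addresses:

```python
import math

def stehfest_coeffs(N):
    """Stehfest weights V_i for even N."""
    V = []
    for i in range(1, N + 1):
        acc = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            acc += (k ** (N // 2) * math.factorial(2 * k) /
                    (math.factorial(N // 2 - k) * math.factorial(k) *
                     math.factorial(k - 1) * math.factorial(i - k) *
                     math.factorial(2 * k - i)))
        V.append((-1) ** (N // 2 + i) * acc)
    return V

def invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    ln2 = math.log(2.0)
    V = stehfest_coeffs(N)
    return ln2 / t * sum(V[i] * F((i + 1) * ln2 / t) for i in range(N))

# Check against a pair with a known inverse: F(s) = 1/(s + 1) <-> exp(-t).
for t in (0.5, 1.0, 2.0):
    print(t, invert(lambda s: 1.0 / (s + 1.0), t), math.exp(-t))
```

The method evaluates the transform at a handful of real points per time value, which is why it is fast but loses accuracy near sharp fronts such as high-Peclet breakthrough curves.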
NASA Astrophysics Data System (ADS)
Xu, J.; Heue, K.-P.; Coldewey-Egbers, M.; Romahn, F.; Doicu, A.; Loyola, D.
2018-04-01
Characterizing vertical distributions of ozone from nadir-viewing satellite measurements is known to be challenging, particularly for ozone information in the troposphere. A novel retrieval algorithm, the Full-Physics Inverse Learning Machine (FP-ILM), has been developed at DLR in order to estimate ozone profile shapes based on machine learning techniques. In contrast to traditional inversion methods, the FP-ILM algorithm formulates the profile shape retrieval as a classification problem. Its implementation comprises a training phase, in which an inverse function is derived from synthetic measurements, and an operational phase, in which the inverse function is applied to real measurements. This paper extends the FP-ILM retrieval to derive tropospheric ozone columns from GOME-2 measurements. Results for total and tropical tropospheric ozone columns are compared with those from the official GOME Data Processing (GDP) product and the convective-cloud-differential (CCD) method, respectively. Furthermore, the FP-ILM framework will be used for the near-real-time processing of the new European Sentinel sensors, with their unprecedented spectral and spatial resolution and the corresponding large increase in the amount of data.
Breast ultrasound computed tomography using waveform inversion with source encoding
NASA Astrophysics Data System (ADS)
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm² reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
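The source-encoding idea, replacing per-source simulations with one simulation for a randomly weighted "super source", can be sketched on a linear toy problem (the operators and sizes below are hypothetical stand-ins; the WISE method applies this to the acoustic wave equation):

```python
import numpy as np

rng = np.random.default_rng(4)
n_src, m, n = 16, 30, 20

# One linear forward operator per source (stand-ins for wave simulations)
# and the corresponding noiseless per-source data.
A = rng.normal(size=(n_src, m, n)) / np.sqrt(m)
x_true = rng.normal(size=n)
d = np.einsum('smn,n->sm', A, x_true)

# Stochastic gradient descent with random source encoding: each iteration
# uses one Rademacher-weighted combination of all sources instead of n_src
# separate simulations; the encoded gradient is an unbiased estimate of
# the full gradient.
x = np.zeros(n)
step = 0.01
for _ in range(3000):
    w = rng.choice([-1.0, 1.0], size=n_src)   # random encoding vector
    A_w = np.einsum('s,smn->mn', w, A)        # encoded "super source" operator
    d_w = w @ d                               # encoded data
    x -= step * (A_w.T @ (A_w @ x - d_w))

print("relative model error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The per-iteration cost drops by a factor of n_src, which is the source of the large speed-ups reported for encoded waveform inversion.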
Regularization of soft-X-ray imaging in the DIII-D tokamak
Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...
2015-03-02
We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method, since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise-to-signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
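A minimal Tikhonov/L-curve workflow of the kind described can be sketched on a generic ill-posed deblurring problem (the SXRIS scheme uses the generalized SVD of the actual line-integration geometry; here a plain Tikhonov solve and a finite-difference curvature estimate stand in, and all sizes and noise levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Generic ill-posed problem: a Gaussian smoothing kernel (line-of-sight-style
# blurring), a localized "emissivity" profile, and noisy data.
n = 60
t = np.linspace(0.0, 1.0, n)
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.01) / n
f_true = np.exp(-((t - 0.4) ** 2) / 0.004)
g = K @ f_true + rng.normal(0.0, 1e-2, size=n)

# Tikhonov solutions over a grid of regularization parameters; the L-curve
# is log residual norm versus log solution norm, and its corner (maximum
# curvature) selects the regularization parameter.
lams = np.logspace(-8, 0, 60)
sols, rho, eta = [], [], []
for lam in lams:
    f = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ g)
    sols.append(f)
    rho.append(np.log(np.linalg.norm(K @ f - g)))
    eta.append(np.log(np.linalg.norm(f)))
rho, eta = np.array(rho), np.array(eta)

# Finite-difference curvature of the parametric curve (rho, eta)(lambda).
dr, de = np.gradient(rho), np.gradient(eta)
d2r, d2e = np.gradient(dr), np.gradient(de)
kappa = (dr * d2e - de * d2r) / (dr ** 2 + de ** 2) ** 1.5
best = 3 + int(np.argmax(kappa[3:-3]))    # ignore grid endpoints
print(f"L-curve corner at lambda = {lams[best]:.1e}")
```

Too little regularization amplifies noise; too much over-smooths; the corner sits at the trade-off between the two, which is the balance the paper tunes for the SXRIS images.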
Numerical reconstruction of tsunami source using combined seismic, satellite and DART data
NASA Astrophysics Data System (ADS)
Krivorotko, Olga; Kabanikhin, Sergey; Marinin, Igor
2014-05-01
Recent tsunamis, for instance in Japan (2011), in Sumatra (2004) and at the Indian coast (2004), showed that a system producing exact and timely information about tsunamis is of vital importance. Numerical simulation is an effective instrument for providing such information. Bottom relief characteristics and initial perturbation data (the tsunami source) are required for the direct simulation of tsunamis. Seismic data about the source are usually obtained within a few tens of minutes after an event has occurred (the velocity of seismic waves being about five hundred kilometres per minute, while the velocity of tsunami waves is less than twelve kilometres per minute). This difference in arrival times of seismic and tsunami waves can be used to operationally refine the tsunami source parameters and model the expected tsunami wave height on the shore. The most suitable physical models for tsunami simulation are based on the shallow water equations. The problem of identifying the parameters of a tsunami source using additional measurements of a passing wave is called the inverse tsunami problem. We investigate three inverse problems of determining a tsunami source using three different kinds of additional data: Deep-ocean Assessment and Reporting of Tsunamis (DART) measurements, satellite waveform images and seismic data. These problems are severely ill-posed. We apply regularization techniques to control the degree of ill-posedness, such as Fourier expansion, truncated singular value decomposition and numerical regularization. An algorithm for selecting the truncation number of singular values of the inverse problem operator, consistent with the error level in the measured data, is described and analyzed. In numerical experiments we used gradient methods (Landweber iteration and the conjugate gradient method), based on minimizing the corresponding misfit function, for solving the inverse tsunami problems.
To calculate the gradient of the misfit function, the adjoint problem is solved. Conservative finite-difference schemes for solving the direct and adjoint problems in the shallow water approximation are constructed. Results of numerical experiments on tsunami source reconstruction are presented and discussed. We show that using a combination of the three different types of data allows one to increase the stability and efficiency of tsunami source reconstruction. The non-profit organization WAPMERR (World Agency of Planetary Monitoring and Earthquake Risk Reduction), in collaboration with the Informap software development department, developed the Integrated Tsunami Research and Information System (ITRIS) to simulate tsunami waves, earthquakes, river course changes and coastal zone floods, and to estimate risks for coastal constructions under wave run-up and earthquakes. The special scientific plug-in components are embedded in a specially developed GIS-type graphic shell for easy data retrieval, visualization and processing. This work was supported by the Russian Foundation for Basic Research (project No. 12-01-00773 'Theory and Numerical Methods for Solving Combined Inverse Problems of Mathematical Physics') and interdisciplinary project of SB RAS 14 'Inverse Problems and Applications: Theory, Algorithms, Software'.
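The truncated-SVD regularization step, with the truncation number tied to the data error level, can be sketched on a generic linear toy problem (this is not the shallow-water operator; matrix, spectrum and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)

# Ill-posed toy operator with rapidly decaying singular values.
n = 40
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = 10.0 ** -np.arange(n, dtype=float)      # severe singular-value decay
A = (U * s) @ V.T
x_true = rng.normal(size=n)
noise = 1e-6
b = A @ x_true + rng.normal(0.0, noise, size=n)

# Truncated SVD: keep only the singular values above the data error level,
# i.e. the truncation number is agreed with the noise in the measurements.
k = int(np.sum(s > noise * np.sqrt(n)))
x_tsvd = V[:, :k] @ ((U[:, :k].T @ b) / s[:k])

# Without truncation, division by tiny singular values blows up the noise.
x_naive = V @ ((U.T @ b) / s)

print("TSVD error:", np.linalg.norm(x_tsvd - x_true))
print("untruncated error:", np.linalg.norm(x_naive - x_true))
```

Components below the noise floor are discarded rather than amplified, which is the stabilizing effect the paper relies on for its severely ill-posed source reconstruction.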
NASA Astrophysics Data System (ADS)
Schumacher, F.; Friederich, W.; Lamara, S.
2016-02-01
We present a new conceptual approach to scattering-integral-based seismic full waveform inversion (FWI) that allows a flexible, extendable, modular and both computationally and storage-efficient numerical implementation. To achieve maximum modularity and extendability, interactions between the three fundamental steps carried out sequentially in each iteration of the inversion procedure, namely, solving the forward problem, computing waveform sensitivity kernels and deriving a model update, are kept at an absolute minimum and are implemented by dedicated interfaces. To realize storage efficiency and maximum flexibility, the spatial discretization of the inverted earth model is allowed to be completely independent of the spatial discretization employed by the forward solver. For computational efficiency reasons, the inversion is done in the frequency domain. The benefits of our approach are as follows: (1) Each of the three stages of an iteration is realized by a stand-alone software program. In this way, we avoid the monolithic, inflexible and hard-to-modify codes that have often been written for solving inverse problems. (2) The solution of the forward problem, required for kernel computation, can be obtained by any wave propagation modelling code, giving users maximum flexibility in choosing the forward modelling method. Both time-domain and frequency-domain approaches can be used. (3) Forward solvers typically demand spatial discretizations that are significantly denser than actually desired for the inverted model. Exploiting this fact by pre-integrating the kernels allows a dramatic reduction of disk space and makes kernel storage feasible. No assumptions are made on the spatial discretization scheme employed by the forward solver. (4) In addition, working in the frequency domain effectively reduces the amount of data, the number of kernels to be computed and the number of equations to be solved.
(5) Updating the model by solving a large equation system can be done using different mathematical approaches. Since the kernels are stored on disk, the update can be repeated many times for different regularization parameters without the need to solve the forward problem again, making the approach accessible to Occam's method. Changes in the choice of misfit functional, weighting of data and selection of data subsets are still possible at this stage. We have coded our approach to FWI into a program package called ASKI (Analysis of Sensitivity and Kernel Inversion), which can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. It is written in modern Fortran using object-oriented concepts that reflect the modular structure of the inversion procedure. We validate our FWI method by a small-scale synthetic study and present first results of its application to high-quality seismological data acquired in the southern Aegean.
Eigenproblem solution by a combined Sturm sequence and inverse iteration technique.
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1973-01-01
Description of an efficient and numerically stable algorithm, along with a complete listing of the associated computer program, developed for the accurate computation of specified roots and associated vectors of the eigenvalue problem Aq = lambda Bq with band symmetric A and B, with B also positive-definite. The desired roots are first isolated by the Sturm sequence procedure; then a special variant of the inverse iteration technique is applied for the individual determination of each root along with its vector. The algorithm fully exploits the banded form of the relevant matrices, and the associated program, written in FORTRAN V for the JPL UNIVAC 1108 computer, proves significantly more economical than similar existing procedures. The program may be conveniently utilized for the efficient solution of practical engineering problems involving free vibration and buckling analysis of structures. Results of such analyses are presented for representative structures.
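The two-stage procedure, Sturm-sequence bisection to isolate a root followed by inverse iteration for its vector, can be sketched for the special case B = I with a tridiagonal A (the paper treats the banded generalized problem Aq = lambda Bq; this simplification is an assumption for illustration):

```python
import numpy as np

def sturm_count(a, b, x):
    """Number of eigenvalues below x for the symmetric tridiagonal matrix
    with diagonal a and off-diagonal b (count of negative LDL^T pivots)."""
    count, d = 0, 1.0
    for i in range(len(a)):
        d = a[i] - x - (b[i - 1] ** 2 / d if i else 0.0)
        if d == 0.0:
            d = 1e-300
        if d < 0.0:
            count += 1
    return count

def eig_k(a, b, k, lo, hi, tol=1e-12):
    """Isolate the k-th smallest eigenvalue by bisection on the Sturm count,
    then refine its eigenvector by inverse iteration."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sturm_count(a, b, mid) >= k + 1:
            hi = mid
        else:
            lo = mid
    lam = 0.5 * (lo + hi)
    n = len(a)
    A = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    v = np.ones(n)
    for _ in range(5):                    # inverse iteration near lam
        v = np.linalg.solve(A - (lam + 1e-10) * np.eye(n), v)
        v /= np.linalg.norm(v)
    return lam, v

# Example: 1-D Laplacian-type matrix with a known spectrum.
n = 50
a = np.full(n, 2.0)
b = np.full(n - 1, -1.0)
lam, v = eig_k(a, b, 0, 0.0, 4.0)
exact = 2.0 - 2.0 * np.cos(np.pi / (n + 1))
print(lam, exact)
```

The Sturm count makes root isolation cheap and robust, while the nearly singular shifted solve in inverse iteration amplifies the wanted eigenvector; the paper's program does both while exploiting the band structure.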
Structural Damage Detection Using Changes in Natural Frequencies: Theory and Applications
NASA Astrophysics Data System (ADS)
He, K.; Zhu, W. D.
2011-07-01
A vibration-based method that uses changes in the natural frequencies of a structure to detect damage has advantages over conventional nondestructive tests in detecting various types of damage, including loosening of bolted joints, using minimal measurement data. Two major challenges associated with applications of the vibration-based damage detection method to engineering structures are addressed: accurate modeling of structures and the development of a robust inverse algorithm to detect damage, which are defined as the forward and inverse problems, respectively. To resolve the forward problem, new physics-based finite element modeling techniques are developed for fillets in thin-walled beams and for bolted joints, so that complex structures can be accurately modeled with a reasonable model size. To resolve the inverse problem, a logistic function transformation is introduced to convert the constrained optimization problem to an unconstrained one, and a robust iterative algorithm using a trust-region method, the Levenberg-Marquardt method, is developed to accurately detect the locations and extent of damage. The new methodology can ensure global convergence of the iterative algorithm in solving under-determined system equations and can deal with damage detection problems with relatively large modeling error and measurement noise. The vibration-based damage detection method is applied to various structures including lightning masts, a space frame structure and one of its components, and a pipeline. The exact locations and extent of damage can be detected in numerical simulations, where there is no modeling error or measurement noise, and the locations and extent of damage can be successfully detected in experiments.
NASA Astrophysics Data System (ADS)
Filippi, Anthony Matthew
For complex systems, sufficient a priori knowledge is often lacking about the mathematical or empirical relationship between cause and effect, or between the inputs and outputs of a given system. Automated machine learning may offer a useful solution in such cases. Coastal marine optical environments represent such a case, as the optical remote sensing inverse problem remains largely unsolved. A self-organizing, cybernetic mathematical modeling approach known as the group method of data handling (GMDH), a type of statistical learning network (SLN), was used to generate explicit spectral inversion models for optically shallow coastal waters. Optically shallow water light fields represent a particularly difficult challenge in oceanographic remote sensing. Several algorithm-input data treatment combinations were utilized in multiple experiments to automatically generate inverse solutions for various inherent optical property (IOP), bottom optical property (BOP), constituent concentration, and bottom depth estimations. The objective was to identify the optimal remote-sensing reflectance Rrs(lambda) inversion algorithm. The GMDH also has the potential for inductive discovery of physical hydro-optical laws. Simulated data were used to develop generalized, quasi-universal relationships. The Hydrolight numerical forward model, based on radiative transfer theory, was used to compute simulated above-water remote-sensing reflectance Rrs(lambda) pseudodata, matching the spectral channels and resolution of the experimental Naval Research Laboratory Ocean PHILLS (Portable Hyperspectral Imager for Low-Light Spectroscopy) sensor. The input-output pairs were used for GMDH and artificial neural network (ANN) model development, the latter serving as a baseline, or control, algorithm. Both types of models were applied to in situ and aircraft data. Also, in situ spectroradiometer-derived Rrs(lambda) were used as input to an optimization-based inversion procedure.
Target variables included bottom depth zb, chlorophyll a concentration [chl-a], spectral bottom irradiance reflectance Rb(lambda), and the spectral total absorption a(lambda) and spectral total backscattering bb(lambda) coefficients. When the cybernetic and neural models were applied to in situ HyperTSRB-derived Rrs, the difference in the means of the absolute error of the zb inversion estimates was significant (alpha = 0.05): GMDH yielded significantly better zb estimates than the ANN. The ANN model posted a mean absolute error (MAE) of 0.62214 m, compared with 0.55161 m for GMDH.
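The core GMDH step described above, fitting simple polynomial candidate models to pairs of inputs and ranking them by error on held-out data, can be sketched as follows. This is a minimal single-layer illustration assuming the standard quadratic Ivakhnenko polynomial; the study's actual network topology, inputs, and selection criteria are not reproduced here.

```python
import numpy as np

def gmdh_layer(X_train, y_train, X_val, y_val, keep=4):
    """One GMDH layer: fit a quadratic polynomial to every pair of input
    columns by least squares, rank candidates by validation error, and
    return the `keep` best validation errors plus candidate outputs."""
    n_feat = X_train.shape[1]
    candidates = []
    for i in range(n_feat):
        for j in range(i + 1, n_feat):
            def design(X):
                a, b = X[:, i], X[:, j]
                # Ivakhnenko polynomial: 1, a, b, ab, a^2, b^2
                return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])
            coef, *_ = np.linalg.lstsq(design(X_train), y_train, rcond=None)
            err = float(np.mean((design(X_val) @ coef - y_val) ** 2))
            candidates.append((err, design(X_train) @ coef, design(X_val) @ coef))
    candidates.sort(key=lambda c: c[0])          # external (validation) criterion
    best = candidates[:keep]
    return ([c[0] for c in best],
            np.column_stack([c[1] for c in best]),
            np.column_stack([c[2] for c in best]))

# Toy usage: recover y = x0 * x1 + noise from 3 candidate inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] * X[:, 1] + 0.01 * rng.normal(size=200)
errs, _, _ = gmdh_layer(X[:100], y[:100], X[100:], y[100:])
```

In a full GMDH network the surviving candidate outputs would feed the next layer, and layers are added until the external criterion stops improving.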
TOPEX/POSEIDON tides estimated using a global inverse model
NASA Technical Reports Server (NTRS)
Egbert, Gary D.; Bennett, Andrew F.; Foreman, Michael G. G.
1994-01-01
Altimetric data from the TOPEX/POSEIDON mission will be used for studies of global ocean circulation and marine geophysics. However, it is first necessary to remove the ocean tides, which are aliased in the raw data. The tides are constrained by two distinct types of information: the hydrodynamic equations which the tidal fields of elevations and velocities must satisfy, and direct observational data from tide gauges and satellite altimetry. Here we develop and apply a generalized inverse method, which allows us to rationally combine all of this information into global tidal fields best fitting both the data and the dynamics, in a least squares sense. The resulting inverse solution is a sum of the direct solution to the astronomically forced Laplace tidal equations and a linear combination of the representers for the data functionals. The representer functions (one for each datum) are determined by the dynamical equations, and by our prior estimates of the statistics of errors in these equations. Our major task is a direct numerical calculation of these representers. This task is computationally intensive, but well suited to massively parallel processing. By calculating the representers we reduce the full (infinite dimensional) problem to a relatively low-dimensional problem at the outset, allowing full control over the conditioning and hence the stability of the inverse solution. With the representers calculated we can easily update our model as additional TOPEX/POSEIDON data become available. As an initial illustration we invert harmonic constants from a set of 80 open-ocean tide gauges. We then present a practical scheme for direct inversion of TOPEX/POSEIDON crossover data. We apply this method to 38 cycles of geophysical data records (GDR) data, computing preliminary global estimates of the four principal tidal constituents, M(sub 2), S(sub 2), K(sub 1), and O(sub 1).
The inverse solution yields tidal fields which are simultaneously smoother, and in better agreement with altimetric and ground truth data, than previously proposed tidal models. Relative to the 'default' tidal corrections provided with the TOPEX/POSEIDON GDR, the inverse solution reduces crossover difference variances significantly (approximately 20-30%), even though only a small number of free parameters (approximately equal to 1000) are actually fit to the crossover data.
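The representer construction described above can be sketched in finite dimensions: the inverse estimate is the prior (dynamics-only) solution plus a linear combination of one representer per datum, with coefficients obtained from a small m-by-m solve. This is a toy with an assumed prior covariance P and point-observation functionals L; the actual computation involves the Laplace tidal equations and is vastly larger.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 8                      # state size, number of data
u_true = np.sin(np.linspace(0, np.pi, n))
L = np.zeros((m, n))
L[np.arange(m), rng.choice(n, m, replace=False)] = 1.0   # point observations
d = L @ u_true + 0.01 * rng.normal(size=m)

u0 = np.zeros(n)                  # prior (dynamics-only) solution
x = np.arange(n)
P = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)      # smooth prior covariance
Cd = 1e-4 * np.eye(m)             # data error covariance

reps = P @ L.T                    # one representer (column) per datum
R = L @ reps                      # m x m representer matrix
b = np.linalg.solve(R + Cd, d - L @ u0)
u_inv = u0 + reps @ b             # inverse solution: prior + representer combination
```

The key point mirrored here is the dimension reduction: only an m-dimensional system is solved, however large the state is.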
On the computation of molecular surface correlations for protein docking using Fourier techniques.
Sakk, Eric
2007-08-01
The computation of surface correlations using a variety of molecular models has been applied to the unbound protein docking problem. Because of the computational complexity involved in examining all possible molecular orientations, the fast Fourier transform (FFT) (a fast numerical implementation of the discrete Fourier transform (DFT)) is generally applied to minimize the number of calculations. This approach is rooted in the convolution theorem which allows one to inverse transform the product of two DFTs in order to perform the correlation calculation. However, such a DFT calculation results in a cyclic or "circular" correlation which, in general, does not lead to the same result as the linear correlation desired for the docking problem. In this work, we provide computational bounds for constructing molecular models used in the molecular surface correlation problem. The derived bounds are then shown to be consistent with various intuitive guidelines previously reported in the protein docking literature. Finally, these bounds are applied to different molecular models in order to investigate their effect on the correlation calculation.
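The circular-versus-linear distinction above has a standard remedy: zero-pad both grids so the DFT product length is at least the sum of the signal lengths minus one, after which the cyclic correlation coincides with the linear one. A one-dimensional signal-processing sketch (not the paper's molecular models):

```python
import numpy as np

def linear_correlation_fft(f, g):
    """Linear cross-correlation of two real 1-D grids via the FFT.
    Zero-padding to length >= len(f) + len(g) - 1 prevents the cyclic
    wrap-around inherent in the DFT convolution theorem."""
    n = len(f) + len(g) - 1
    F = np.fft.rfft(f, n)
    G = np.fft.rfft(g, n)
    # Correlation theorem: corr(f, g) <-> conj(F) * G
    return np.fft.irfft(np.conj(F) * G, n)

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])
c_fft = linear_correlation_fft(f, g)          # lags 0, 1, 2, -2, -1
c_direct = np.correlate(f, g, mode="full")    # same lag values, ordered differently
```

Without the padding (transform length 3 here), lag +2 and lag -1 contributions would wrap onto each other, which is exactly the circular-correlation artifact the paper's bounds are designed to avoid.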
Bayesian approach to inverse statistical mechanics.
Habeck, Michael
2014-05-01
Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
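The simplest case mentioned in the abstract, estimation of a temperature, can be sketched for a two-level toy system whose partition function Z(beta) = 1 + exp(-beta) is tractable, so the posterior over the inverse temperature can be evaluated on a grid. The paper's sequential Monte Carlo machinery is needed precisely when Z is intractable; all numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
beta_true = 1.5
p1 = np.exp(-beta_true) / (1.0 + np.exp(-beta_true))   # P(E = 1) at beta_true
E = (rng.random(2000) < p1).astype(float)              # observed energies in {0, 1}

beta = np.linspace(0.01, 5.0, 500)                     # grid, flat prior
logZ = np.log1p(np.exp(-beta))
loglik = -beta * E.sum() - len(E) * logZ               # log p(data | beta)
post = np.exp(loglik - loglik.max())                   # unnormalized posterior
post /= post.sum()
beta_mean = (beta * post).sum()                        # posterior mean estimate
```

With 2000 samples the posterior concentrates tightly around the true inverse temperature of 1.5.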
Regolith thermal property inversion in the LUNAR-A heat-flow experiment
NASA Astrophysics Data System (ADS)
Hagermann, A.; Tanaka, S.; Yoshida, S.; Fujimura, A.; Mizutani, H.
2001-11-01
In 2003, two penetrators of the LUNAR-A mission of ISAS will investigate the internal structure of the Moon by conducting seismic and heat-flow experiments. Heat flow is the product of the thermal gradient ∂T/∂z and the thermal conductivity λ of the lunar regolith. For measuring the thermal conductivity (or diffusivity), each penetrator will carry five thermal property sensors, consisting of small disc heaters. The thermal response Ts(t) of the heater itself to a constant known power supply of approx. 50 mW serves as the data for the subsequent interpretation. Horai et al. (1991) found a forward analytical solution to the problem of determining the thermal inertia λρc of the regolith for constant thermal properties and a simplified geometry. In the inversion, the problem of deriving the unknown thermal properties of a medium from known heat sources and temperatures is an Identification Heat Conduction Problem (IDHCP), an ill-posed inverse problem. Assuming that the thermal conductivity λ and heat capacity ρc are linear functions of temperature (which is reasonable in most cases), one can apply a Kirchhoff transformation to linearize the heat conduction equation, which minimizes computing time. Then the error functional, i.e. the difference between the measured temperature response of the heater and the predicted temperature response, can be minimized, thus solving for the thermal diffusivity κ = λ/(ρc), which will complete the set of parameters needed for a detailed description of the thermal properties of the lunar regolith. Results of model calculations will be presented, in which synthetic data and calibration data are used to invert the unknown thermal diffusivity of the medium by means of a modified Newton method. Due to the ill-posedness of the problem, the number of parameters to be solved for should be limited. As the model calculations reveal, a homogeneous regolith allows for a fast and accurate inversion.
Young children's use of derived fact strategies for addition and subtraction
Dowker, Ann
2014-01-01
Forty-four children between 6;0 and 7;11 took part in a study of derived fact strategy use. They were assigned to addition and subtraction levels on the basis of calculation pretests. They were then given Dowker's (1998) test of derived fact strategies in addition, involving strategies based on the Identity, Commutativity, Addend +1, Addend −1, and addition/subtraction Inverse principles, and a test of derived fact strategies in subtraction, involving strategies based on the Identity, Minuend +1, Minuend −1, Subtrahend +1, Subtrahend −1, Complement and addition/subtraction Inverse principles. The exact arithmetic problems given varied according to the child's previously assessed calculation level and were selected to be just a little too difficult for the child to solve unaided. Children were given the answer to a problem and then asked to solve another problem that could be solved quickly by using this answer, together with the principle being assessed. The children also took the WISC Arithmetic subtest. Strategies differed greatly in difficulty, with Identity being the easiest, and the Inverse and Complement principles being the most difficult. The Subtrahend +1 and Subtrahend −1 problems often elicited incorrect strategies based on an overextension of the principles of addition to subtraction. It was concluded that children may have difficulty with understanding and applying the relationships between addition and subtraction. Derived fact strategy use was significantly related both to calculation level and to WISC Arithmetic scaled score. PMID:24431996
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
ERIC Educational Resources Information Center
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
NASA Astrophysics Data System (ADS)
Gutowitz, Howard
1991-08-01
Cellular automata, dynamic systems in which space and time are discrete, are yielding interesting applications in both the physical and natural sciences. The thirty four contributions in this book cover many aspects of contemporary studies on cellular automata and include reviews, research reports, and guides to recent literature and available software. Chapters cover mathematical analysis; the structure of the space of cellular automata; learning rules with specified properties; cellular automata in biology, physics, chemistry, and computation theory; and generalizations of cellular automata in neural nets, Boolean nets, and coupled map lattices. Current work on cellular automata may be viewed as revolving around two central and closely related problems: the forward problem and the inverse problem. The forward problem concerns the description of properties of given cellular automata. Properties considered include reversibility, invariants, criticality, fractal dimension, and computational power. The role of cellular automata in computation theory is seen as a particularly exciting venue for exploring parallel computers as theoretical and practical tools in mathematical physics. The inverse problem, an area of study gaining prominence particularly in the natural sciences, involves designing rules that possess specified properties or perform specified tasks. A long-term goal is to develop a set of techniques that can find a rule or set of rules that can reproduce quantitative observations of a physical system. Studies of the inverse problem take up the organization and structure of the set of automata, in particular the parameterization of the space of cellular automata. Optimization and learning techniques, such as the genetic algorithm and adaptive stochastic cellular automata, are applied to find cellular automaton rules that model such physical phenomena as crystal growth or perform such adaptive-learning tasks as balancing an inverted pole.
Howard Gutowitz is Collaborateur in the Service de Physique du Solide et Résonance Magnétique, Commissariat à l'Énergie Atomique, Saclay, France.
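The forward problem (applying a given rule) and a toy version of the inverse problem (searching for rules that reproduce an observed transition) can both be sketched for elementary cellular automata, whose rule space has only 256 members. This is a minimal illustration; the book's inverse-problem techniques, such as genetic algorithms, operate on far larger rule spaces.

```python
import numpy as np

def step(state, rule):
    """One synchronous update of an elementary (binary, radius-1) cellular
    automaton with periodic boundaries; `rule` is the Wolfram rule number."""
    table = [(rule >> i) & 1 for i in range(8)]   # output for each neighborhood 0..7
    left, right = np.roll(state, 1), np.roll(state, -1)
    idx = 4 * left + 2 * state + right            # neighborhood code per cell
    return np.array([table[i] for i in idx])

# Forward problem: iterate a given rule from a single seed.
init = np.zeros(11, dtype=int)
init[5] = 1
after = step(init, 90)            # rule 90: new cell = left XOR right

# Tiny inverse problem: brute-force search for rules reproducing the
# observed one-step transition.
matches = [r for r in range(256) if np.array_equal(step(init, r), after)]
```

Note that the inverse search returns several rules, not just rule 90: a single observed transition only constrains the rule table on the neighborhoods that actually occur, a small-scale instance of the non-uniqueness that makes the inverse problem hard.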
Joint two dimensional inversion of gravity and magnetotelluric data using correspondence maps
NASA Astrophysics Data System (ADS)
Carrillo Lopez, J.; Gallardo, L. A.
2016-12-01
Inverse problems in Earth sciences are inherently non-unique. To improve models and reduce the number of solutions, we need to provide extra information. In a geological context, this could be a priori information, for example geological knowledge, well-log data, or smoothness, or measurements of a different kind of data. Joint inversion provides an approach to improve the solution and reduce the errors caused by the assumptions of each method. To do that, we need a link between two or more models. Some approaches have been explored successfully in recent years. For example, Gallardo and Meju (2003, 2004, 2011) and Gallardo et al. (2012) used the directions of property gradients to measure the similarity between models by minimizing their cross-gradients. In this work, we propose a joint iterative inversion method that uses the spatial distribution of properties as the link. Correspondence maps may better characterize specific Earth systems because they consider the relation between properties. We implemented a code in Fortran to perform a two-dimensional inversion of magnetotelluric and gravity data, which are two of the standard methods in geophysical exploration. Synthetic tests show the advantages of joint inversion using correspondence maps over separate inversion. Finally, we applied this technique to magnetotelluric and gravity data from the geothermal zone located in Cerro Prieto, México.
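The cross-gradient criterion of Gallardo and Meju, which the correspondence-map approach is contrasted with, can be sketched on a regular 2-D grid. This is a minimal illustration with assumed step-function models; the correspondence maps themselves are not implemented here.

```python
import numpy as np

def cross_gradient(m1, m2):
    """Cross-gradient function on a regular 2-D grid:
    t = dm1/dx * dm2/dz - dm1/dz * dm2/dx.
    It vanishes wherever the two models' spatial gradients are parallel,
    i.e. where the models are structurally similar."""
    dz1, dx1 = np.gradient(m1)    # np.gradient returns (d/axis0, d/axis1)
    dz2, dx2 = np.gradient(m2)
    return dx1 * dz2 - dz1 * dx2

z, x = np.mgrid[0:20, 0:20]
m1 = 1.0 * (x > 10)               # resistivity-like model: vertical contact
m2 = -3.0 * (x > 10)              # density-like model with the same geometry
m3 = 2.0 * (z > 10)               # different structure: horizontal contact

t_same = cross_gradient(m1, m2)   # identical structure -> zero everywhere
t_diff = cross_gradient(m1, m3)   # crossing edges -> nonzero near the intersection
```

In a joint inversion this quantity (or, in the paper's case, a correspondence-map misfit) enters the objective function as a structural coupling term between the magnetotelluric and gravity models.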
NASA Astrophysics Data System (ADS)
Penfold, Scott; Zalas, Rafał; Casiraghi, Margherita; Brooke, Mark; Censor, Yair; Schulte, Reinhard
2017-05-01
A split feasibility formulation for the inverse problem of intensity-modulated radiation therapy treatment planning with dose-volume constraints included in the planning algorithm is presented. It involves a new type of sparsity constraint that enables the inclusion of a percentage-violation constraint in the model problem and its handling by continuous (as opposed to integer) methods. We propose an iterative algorithmic framework for solving such a problem by applying the feasibility-seeking CQ-algorithm of Byrne combined with the automatic relaxation method that uses cyclic projections. Detailed implementation instructions are furnished. Functionality of the algorithm was demonstrated through the creation of an intensity-modulated proton therapy plan for a simple 2D C-shaped geometry and also for a realistic base-of-skull chordoma treatment site. Monte Carlo simulations of proton pencil beams of varying energy were conducted to obtain dose distributions for the 2D test case. A research release of the Pinnacle 3 proton treatment planning system was used to extract pencil beam doses for a clinical base-of-skull chordoma case. In both cases the beamlet doses were calculated to satisfy dose-volume constraints according to our new algorithm. Examination of the dose-volume histograms following inverse planning with our algorithm demonstrated that it performed as intended. The application of our proposed algorithm to dose-volume constraint inverse planning was successfully demonstrated. Comparison with optimized dose distributions from the research release of the Pinnacle 3 treatment planning system showed the algorithm could achieve equivalent or superior results.
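The feasibility-seeking iteration at the heart of the method, Byrne's CQ algorithm for finding x in C with Ax in Q, can be sketched on a toy problem. The dose-influence matrix and per-voxel dose window below are assumptions chosen to make the problem feasible by construction; the paper's percentage-violation constraints and automatic relaxation method are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((6, 4))                    # toy dose-influence matrix (6 voxels, 4 beamlets)
x_true = np.array([0.5, 1.0, 0.2, 0.8])  # known-feasible beamlet weights
b = A @ x_true
lo, hi = b - 0.1, b + 0.1                 # per-voxel dose window Q (feasible by construction)

def P_C(x):                               # projection onto C: nonnegative beamlet weights
    return np.clip(x, 0.0, None)

def P_Q(y):                               # projection onto Q: clip doses into the window
    return np.clip(y, lo, hi)

gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step length in (0, 2 / ||A||^2)
x = np.zeros(4)
for _ in range(5000):                     # CQ iteration of Byrne
    y = A @ x
    x = P_C(x - gamma * A.T @ (y - P_Q(y)))
```

The iteration only ever uses the two projections and matrix-vector products, which is what makes the feasibility-seeking approach attractive for large treatment-planning problems.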
Application of a stochastic inverse to the geophysical inverse problem
NASA Technical Reports Server (NTRS)
Jordan, T. H.; Minster, J. B.
1972-01-01
The inverse problem for gross earth data can be reduced to an underdetermined linear system of integral equations of the first kind. A theory is discussed for computing particular solutions to this linear system based on the stochastic inverse theory presented by Franklin. The stochastic inverse is derived and related to the generalized inverse of Penrose and Moore. A Backus-Gilbert type tradeoff curve is constructed for the problem of estimating the solution to the linear system in the presence of noise. It is shown that the stochastic inverse represents an optimal point on this tradeoff curve. A useful form of the solution autocorrelation operator as a member of a one-parameter family of smoothing operators is derived.
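In finite dimensions the stochastic inverse takes the familiar form m_hat = Cm G^T (G Cm G^T + Cn)^(-1) d, with prior model covariance Cm and noise covariance Cn, and its relation to the Penrose-Moore generalized inverse can be checked numerically: with Cm = I and vanishing noise it reproduces the pseudoinverse solution. This is a sketch with assumed covariances, not Franklin's operator-theoretic construction.

```python
import numpy as np

rng = np.random.default_rng(4)
G = rng.normal(size=(5, 20))              # 5 data, 20 unknowns: underdetermined
m_true = rng.normal(size=20)
d = G @ m_true                            # noise-free data for the comparison

Cm = np.eye(20)                           # prior model covariance
Cn = 1e-10 * np.eye(5)                    # (nearly) vanishing noise covariance
m_stoch = Cm @ G.T @ np.linalg.solve(G @ Cm @ G.T + Cn, d)
m_pinv = np.linalg.pinv(G) @ d            # Penrose-Moore generalized-inverse solution
```

With a nontrivial Cm (e.g. a smoothing covariance) the same formula produces the regularized estimates that trade resolution against noise amplification along the Backus-Gilbert type tradeoff curve.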
Analysis of space telescope data collection system
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Schoggen, W. O.
1982-01-01
An analysis of the expected performance for the Multiple Access (MA) system is provided. The analysis covers the expected bit error rate performance, the effects of synchronization loss, the problem of self-interference, and the problem of phase ambiguity. The problem of false acceptance of a command word due to data inversion is discussed. A mathematical determination of the probability of accepting an erroneous command word due to a data inversion is presented. The problem is examined for three cases: (1) a data inversion only, (2) a data inversion and a random error within the same command word, and (3) a block (up to 256 48-bit words) containing both a data inversion and a random error.
Analysis of a Two-Dimensional Thermal Cloaking Problem on the Basis of Optimization
NASA Astrophysics Data System (ADS)
Alekseev, G. V.
2018-04-01
For a two-dimensional model of thermal scattering, inverse problems arising in the development of tools for cloaking material bodies on the basis of a mixed thermal cloaking strategy are considered. By applying the optimization approach, these problems are reduced to optimization ones in which the role of controls is played by variable parameters of the medium occupying the cloaking shell and by the heat flux through a boundary segment of the basic domain. The solvability of the direct and optimization problems is proved, and an optimality system is derived. Based on its analysis, sufficient conditions on the input data are established that ensure the uniqueness and stability of optimal solutions.
NASA Astrophysics Data System (ADS)
Klepikova, Maria V.; Le Borgne, Tanguy; Bour, Olivier; Davy, Philippe
2011-09-01
Temperature profiles in the subsurface are known to be sensitive to groundwater flow. Here we show that they are also strongly related to vertical flow in the boreholes themselves. Based on a numerical model of flow and heat transfer at the borehole scale, we propose a method to invert temperature measurements to derive borehole flow velocities. This method is applied to an experimental site in fractured crystalline rocks. Vertical flow velocities deduced from the inversion of temperature measurements are compared with direct heat-pulse flowmeter measurements, showing good agreement over two orders of magnitude. Applying this methodology under ambient, single and cross-borehole pumping conditions allows us to estimate fracture hydraulic head and local transmissivity, as well as inter-borehole fracture connectivity. Thus, these results provide new insights on how to include temperature profiles in inverse problems for estimating hydraulic fracture properties.
Lane, John W.; Day-Lewis, Frederick D.; Versteeg, Roelof J.; Casey, Clifton C.
2004-01-01
Crosswell radar methods can be used to dynamically image ground-water flow and mass transport associated with tracer tests, hydraulic tests, and natural physical processes, for improved characterization of preferential flow paths and complex aquifer heterogeneity. Unfortunately, because the raypath coverage of the interwell region is limited by the borehole geometry, the tomographic inverse problem is typically underdetermined, and tomograms may contain artifacts such as spurious blurring or streaking that confuse interpretation. We implement object-based inversion (using a constrained, non-linear, least-squares algorithm) to improve results from pixel-based inversion approaches that utilize regularization criteria, such as damping or smoothness. Our approach requires pre- and post-injection travel-time data. Parameterization of the image plane comprises a small number of objects rather than a large number of pixels, resulting in an overdetermined problem that reduces the need for prior information. The nature and geometry of the objects are based on hydrologic insight into aquifer characteristics, the nature of the experiment, and the planned use of the geophysical results. The object-based inversion is demonstrated using synthetic and crosswell radar field data acquired during vegetable-oil injection experiments at a site in Fridley, Minnesota. The region where oil has displaced ground water is discretized as a stack of rectangles of variable horizontal extents. The inversion provides the geometry of the affected region and an estimate of the radar slowness change for each rectangle. Applying petrophysical models to these results and porosity from neutron logs, we estimate the vegetable-oil emulsion saturation in various layers. Using synthetic- and field-data examples, object-based inversion is shown to be an effective strategy for inverting crosswell radar tomography data acquired to monitor the emplacement of vegetable-oil emulsions.
A principal advantage of object-based inversion is that it yields images that hydrologists and engineers can easily interpret and use for model calibration.
Riemann–Hilbert problem approach for two-dimensional flow inverse scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agaltsov, A. D., E-mail: agalets@gmail.com; Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr; IEPT RAS, 117997 Moscow
2014-10-15
We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.
Inversion of geophysical potential field data using the finite element method
NASA Astrophysics Data System (ADS)
Lamichhane, Bishnu P.; Gross, Lutz
2017-12-01
The inversion of geophysical potential field data can be formulated as an optimization problem with a constraint in the form of a partial differential equation (PDE). It is common practice, if possible, to provide an analytical solution for the forward problem and to reduce the problem to a finite-dimensional optimization problem. In an alternative approach, the optimization is applied to the continuous problem, and the resulting system of coupled PDEs is subsequently solved using a standard PDE discretization method, such as the finite element method (FEM). In this paper, we show that under very mild conditions on the data misfit functional and the forward problem in three-dimensional space, the continuous optimization problem and its FEM discretization are well-posed, including the existence and uniqueness of the respective solutions. We provide error estimates for the FEM solution. A main result of the paper is that the FEM spaces used for the forward problem and the Lagrange multiplier need to be identical but can be chosen independently from the FEM space used to represent the unknown physical property. We demonstrate the convergence of the solution approximations in a numerical example. The second numerical example, which investigates the selection of FEM spaces, shows that from the perspective of computational efficiency one should use a 2 to 4 times finer mesh for the forward problem than for the physical property.
Xia, J.; Miller, R.D.; Xu, Y.
2008-01-01
Inversion of multimode surface-wave data is of increasing interest in the near-surface geophysics community. For a given near-surface geophysical problem, it is essential to understand how well the data, calculated according to a layered-earth model, might match the observed data. A data-resolution matrix is a function of the data kernel (determined by a geophysical model and a priori information applied to the problem), not the data. A data-resolution matrix of high-frequency (>2 Hz) Rayleigh-wave phase velocities, therefore, offers a quantitative tool for designing field surveys and predicting the match between calculated and observed data. We employed a data-resolution matrix to select data that would be well predicted and we find that there are advantages of incorporating higher modes in inversion. The resulting discussion using the data-resolution matrix provides insight into the process of inverting Rayleigh-wave phase velocities with higher-mode data to estimate S-wave velocity structure. Discussion also suggested that each near-surface geophysical target can only be resolved using Rayleigh-wave phase velocities within specific frequency ranges, and higher-mode data are normally more accurately predicted than fundamental-mode data because of restrictions on the data kernel for the inversion system. We used synthetic and real-world examples to demonstrate that selected data with the data-resolution matrix can provide better inversion results and to explain with the data-resolution matrix why incorporating higher-mode data in inversion can provide better results. We also calculated model-resolution matrices in these examples to show the potential of increasing model resolution with selected surface-wave data. © Birkhäuser 2008.
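The data-resolution matrix for a damped least-squares inversion of d = G m can be sketched directly: N = G G⁻ᵍ depends only on the kernel G and the regularization (the a priori information), never on observed data, which is why it can be computed before any field survey. The kernel below is a generic toy; the paper's kernel comes from Rayleigh-wave phase-velocity modeling.

```python
import numpy as np

rng = np.random.default_rng(5)
G = rng.normal(size=(12, 6))              # toy kernel: 12 data, 6 model parameters
eps = 0.1                                 # damping (a priori information)

# Damped least-squares generalized inverse: (G^T G + eps^2 I)^(-1) G^T
G_inv = np.linalg.solve(G.T @ G + eps**2 * np.eye(6), G.T)

N = G @ G_inv                             # data-resolution matrix (12 x 12)
R = G_inv @ G                             # model-resolution matrix (6 x 6)
importance = np.diag(N)                   # closeness to 1 flags well-predicted data
```

Selecting data whose diagonal entries of N are close to 1 is the selection strategy the abstract describes; the trace of N (equal to the trace of R) measures how many independent pieces of information the data set actually carries.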
Convergence of Chahine's nonlinear relaxation inversion method used for limb viewing remote sensing
NASA Technical Reports Server (NTRS)
Chu, W. P.
1985-01-01
The application of Chahine's (1970) inversion technique to remote sensing problems utilizing the limb viewing geometry is discussed. The problem considered here involves occultation-type measurements and limb radiance-type measurements from either spacecraft or balloon platforms. The kernel matrix of the inversion problem is either an upper or lower triangular matrix. It is demonstrated that the Chahine inversion technique always converges, provided the diagonal elements of the kernel matrix are nonzero.
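Chahine's relaxation step is a multiplicative update of each unknown by the ratio of the measured to the computed signal in its matched channel, and, as the abstract notes, it converges readily when the kernel is triangular with nonzero diagonal. A toy illustration with an assumed lower-triangular kernel:

```python
import numpy as np

# Lower-triangular kernel: in limb-viewing geometry each tangent height
# only senses layers at or below it, giving this structure.
K = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.3, 0.6, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
y = K @ x_true                            # simulated measurements

x = np.ones(3)                            # positive initial guess
for _ in range(200):
    x = x * y / (K @ x)                   # Chahine relaxation step
```

The triangular structure is what drives the convergence: the first unknown locks on after a single iteration, and each subsequent component then relaxes onto its value in turn.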
Hamiltonian Monte Carlo Inversion of Seismic Sources in Complex Media
NASA Astrophysics Data System (ADS)
Fichtner, A.; Simutė, S.
2017-12-01
We present a probabilistic seismic source inversion method that properly accounts for 3D heterogeneous Earth structure and provides full uncertainty information on the timing, location and mechanism of the event. Our method rests on two essential elements: (1) reciprocity and spectral-element simulations in complex media, and (2) Hamiltonian Monte Carlo sampling that requires only a small number of test models. Using spectral-element simulations of 3D, visco-elastic, anisotropic wave propagation, we precompute a data base of the strain tensor in time and space by placing sources at the positions of receivers. Exploiting reciprocity, this receiver-side strain data base can be used to promptly compute synthetic seismograms at the receiver locations for any hypothetical source within the volume of interest. The rapid solution of the forward problem enables a Bayesian solution of the inverse problem. For this, we developed a variant of Hamiltonian Monte Carlo (HMC) sampling. Taking advantage of easily computable derivatives, HMC converges to the posterior probability density with orders of magnitude fewer samples than derivative-free Monte Carlo methods. (Exact numbers depend on observational errors and the quality of the prior.) We apply our method to the Japanese Islands region, where we previously constrained the 3D structure of the crust and upper mantle using full-waveform inversion with a minimum period of around 15 s.
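The two ingredients of HMC, leapfrog integration of Hamilton's equations and a Metropolis accept test, can be shown on a Gaussian stand-in target. This is illustrative only: the actual posterior involves the precomputed strain data base, and the gradients there come from the fast forward solver, not a closed form.

```python
import numpy as np

rng = np.random.default_rng(6)
Sigma_inv = np.array([[2.0, 0.6],
                      [0.6, 1.0]])               # precision of the stand-in target

def grad_neglogp(q):
    return Sigma_inv @ q                          # gradient of -log p(q)

def hmc_step(q, eps=0.2, n_leap=20):
    p = rng.normal(size=q.shape)                  # momentum refresh
    q_new, p_new = q.copy(), p.copy()
    for _ in range(n_leap):                       # leapfrog integration
        p_new = p_new - 0.5 * eps * grad_neglogp(q_new)
        q_new = q_new + eps * p_new
        p_new = p_new - 0.5 * eps * grad_neglogp(q_new)
    H_old = 0.5 * q @ Sigma_inv @ q + 0.5 * p @ p
    H_new = 0.5 * q_new @ Sigma_inv @ q_new + 0.5 * p_new @ p_new
    return q_new if rng.random() < np.exp(H_old - H_new) else q  # Metropolis test

samples = []
q = np.zeros(2)
for _ in range(3000):
    q = hmc_step(q)
    samples.append(q)
samples = np.array(samples)
```

Because the leapfrog trajectory follows the gradient, nearly every proposal is accepted while still moving far across the target, which is the source of the sample-efficiency advantage the abstract describes.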
NASA Astrophysics Data System (ADS)
Ahn, Chi Young; Jeon, Kiwan; Park, Won-Kwang
2015-06-01
This study analyzes the well-known MUltiple SIgnal Classification (MUSIC) algorithm to identify the unknown support of a thin penetrable electromagnetic inhomogeneity from scattered field data collected within the so-called multi-static response matrix in limited-view inverse scattering problems. The mathematical theory of MUSIC has been partially established, e.g., in the full-view problem, for an unknown target of dielectric contrast or a perfectly conducting crack with the Dirichlet boundary condition (Transverse Magnetic, TM, polarization), and so on. Hence, we perform further research to analyze the MUSIC-type imaging functional and to explain some well-known but theoretically unexplained phenomena. For this purpose, we establish a relationship between the MUSIC imaging functional and an infinite series of Bessel functions of integer order of the first kind. This relationship is based on a rigorous asymptotic expansion formula in the presence of a thin inhomogeneity with a smooth supporting curve. Various numerical simulation results are presented in order to support the identified structure of MUSIC. Although a priori information about the target is needed, we suggest a minimal condition on the range of incident and observation directions for applying MUSIC in the limited-view problem.
NASA Astrophysics Data System (ADS)
Llopis-Albert, Carlos; Palacios-Marqués, Daniel; Merigó, José M.
2014-04-01
In this paper a methodology for the stochastic management of groundwater quality problems is presented, which can be used to provide agricultural advisory services. A stochastic algorithm to solve the coupled flow and mass transport inverse problem is combined with a stochastic management approach to develop methods for integrating uncertainty, thus obtaining more reliable policies on groundwater nitrate pollution control from agriculture. The stochastic inverse model allows identifying non-Gaussian parameters and reducing uncertainty in heterogeneous aquifers by constraining stochastic simulations to data. The management model determines the spatial and temporal distribution of fertilizer application rates that maximizes net benefits in agriculture constrained by quality requirements in groundwater at various control sites. The quality constraints can be taken, for instance, from those given by water laws such as the EU Water Framework Directive (WFD). Furthermore, the methodology allows providing the trade-off between higher economic returns and reliability in meeting the environmental standards. Therefore, this new technology can help stakeholders in the decision-making process in an uncertain environment. The methodology has been successfully applied to a 2D synthetic aquifer, where an uncertainty assessment has been carried out by means of Monte Carlo simulation techniques.
FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems
NASA Astrophysics Data System (ADS)
Vourc'h, Eric; Rodet, Thomas
2015-11-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. 
The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2015 was a one-day workshop held in May 2015 which attracted around 70 attendees. Each submitted paper was reviewed by two reviewers, and 15 papers were accepted. In addition, three international speakers were invited to present longer talks. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks: GDR ISIS, GDR MIA, GDR MOA and GDR Ondes. The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA and SATIE.
NASA Astrophysics Data System (ADS)
Chanthawara, Krittidej; Kaennakham, Sayan; Toutip, Wattana
2016-02-01
The methodology of the Dual Reciprocity Boundary Element Method (DRBEM) is applied to convection-diffusion problems, and investigating its performance is the first objective of this work. Seven types of Radial Basis Function (RBF), namely linear, thin-plate spline, cubic, compactly supported, inverse multiquadric, quadratic, and the type proposed by [12], were closely investigated in order to compare their effectiveness, drawbacks, etc. numerically; this is our second objective. A sufficient number of simulations were performed, covering as many aspects as possible. Validated against both exact solutions and other numerical works, the final results strongly imply that the thin-plate spline and linear types of RBF are superior to the others in terms of both solution quality and CPU time spent, while the inverse multiquadric yields comparatively poor results. It is also found that DRBEM can perform relatively well at moderate levels of convective force and, as anticipated, becomes unstable when the problem becomes more convection-dominated, as is normally found in all classical mesh-dependent methods.
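As a rough sketch of how such RBFs enter the dual reciprocity step (the exact definitions, shape parameters, and the paper's DRBEM coupling are assumptions here, not the authors' formulation), the inhomogeneous term is interpolated with an RBF expansion whose coefficients come from a collocation solve:

```python
import numpy as np

# Illustrative forms of some RBF families compared in DRBEM studies; any of
# these can be swapped in to build the interpolation matrix F below.
def linear_rbf(r):
    return r

def thin_plate_spline(r):
    # r^2 log r, with the removable singularity at r = 0 set to 0
    return np.where(r > 0, r**2 * np.log(np.maximum(r, 1e-300)), 0.0)

def cubic_rbf(r):
    return r**3

def inverse_multiquadric(r, c=0.5):
    return 1.0 / np.sqrt(r**2 + c**2)

# Dual reciprocity step: interpolate a source term b(x) by RBFs centred on the
# collocation points, b ~ F @ alpha, so alpha = solve(F, b).
points = np.linspace(0.0, 1.0, 9)
r = np.abs(points[:, None] - points[None, :])   # pairwise distance matrix
F = inverse_multiquadric(r)                     # interpolation matrix (positive definite)
b = np.sin(np.pi * points)                      # sample source term
alpha = np.linalg.solve(F, b)
print(np.allclose(F @ alpha, b))
```

Comparing RBF families then amounts to swapping the function used to build F and examining the conditioning of the solve and the quality of the resulting interpolation.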
Coll-Font, Jaume; Burton, Brett M; Tate, Jess D; Erem, Burak; Swenson, Darrel J; Wang, Dafang; Brooks, Dana H; van Dam, Peter; Macleod, Rob S
2014-09-01
Cardiac electrical imaging often requires the examination of different forward and inverse problem formulations based on mathematical and numerical approximations of the underlying source and the intervening volume conductor that can generate the associated voltages on the surface of the body. If the goal is to recover the source on the heart from body surface potentials, the solution strategy must include numerical techniques that can incorporate appropriate constraints and recover useful solutions, even though the problem is badly posed. Creating complete software solutions to such problems is a daunting undertaking. In order to make such tools more accessible to a broad array of researchers, the Center for Integrative Biomedical Computing (CIBC) has made an ECG forward/inverse toolkit available within the open source SCIRun system. Here we report on three new methods added to the inverse suite of the toolkit. These new algorithms, namely a Total Variation method, a non-decreasing TMP inverse and a spline-based inverse, consist of two inverse methods that take advantage of the temporal structure of the heart potentials and one that leverages the spatial characteristics of the transmembrane potentials. These three methods further expand the possibilities of researchers in cardiology to explore and compare solutions to their particular imaging problem.
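As a generic illustration of why such solution strategies need regularizing constraints (this is plain zeroth-order Tikhonov on a toy smoothing operator, not any of the toolkit's actual Total Variation, TMP, or spline methods), compare an unregularized and a regularized solve:

```python
import numpy as np

# Toy ill-posed problem: A is a smoothing (Gaussian-kernel) forward operator,
# so inverting it amplifies noise unless a regularization term is added.
rng = np.random.default_rng(0)
n = 50
idx = np.arange(n)
A = np.exp(-0.1 * (idx[:, None] - idx[None, :]) ** 2)   # severely ill-conditioned
x_true = np.zeros(n)
x_true[20:30] = 1.0                                     # blocky "source"
b = A @ x_true + 1e-3 * rng.standard_normal(n)          # noisy data

lam = 1e-2
x_naive = np.linalg.solve(A, b)                         # noise blows up
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)  # Tikhonov solution

print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
```

The regularized solution trades a small bias for a dramatic reduction in noise amplification, which is the trade-off all the toolkit's inverse methods manage in more sophisticated ways.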
NASA Astrophysics Data System (ADS)
Dorn, O.; Lesselier, D.
2010-07-01
Inverse problems in electromagnetics have a long history and have stimulated exciting research over many decades. New applications and solution methods are still emerging, providing a rich source of challenging topics for further investigation. The purpose of this special issue is to combine descriptions of several such developments that are expected to have the potential to fundamentally fuel new research, and to provide an overview of novel methods and applications for electromagnetic inverse problems. There have been several special sections published in Inverse Problems over the last decade addressing fully, or partly, electromagnetic inverse problems. Examples are: Electromagnetic imaging and inversion of the Earth's subsurface (Guest Editors: D Lesselier and T Habashy) October 2000 Testing inversion algorithms against experimental data (Guest Editors: K Belkebir and M Saillard) December 2001 Electromagnetic and ultrasonic nondestructive evaluation (Guest Editors: D Lesselier and J Bowler) December 2002 Electromagnetic characterization of buried obstacles (Guest Editors: D Lesselier and W C Chew) December 2004 Testing inversion algorithms against experimental data: inhomogeneous targets (Guest Editors: K Belkebir and M Saillard) December 2005 Testing inversion algorithms against experimental data: 3D targets (Guest Editors: A Litman and L Crocco) February 2009 In a certain sense, the current issue can be understood as a continuation of this series of special sections on electromagnetic inverse problems. On the other hand, its focus is intended to be more general than previous ones. Instead of trying to cover a well-defined, somewhat specialized research topic as completely as possible, this issue aims to show the broad range of techniques and applications that are relevant to electromagnetic imaging nowadays, which may serve as a source of inspiration and encouragement for all those entering this active and rapidly developing research area. 
The construction of this special issue has also been somewhat different from that of preceding ones. In addition to the invitations sent to specific research groups involved in electromagnetic inverse problems, the Guest Editors also solicited recommendations, from a large number of experts, of potential authors who were thereupon encouraged to contribute. Moreover, an open call for contributions was published on the homepage of Inverse Problems in order to attract as wide a scope of contributions as possible. This special issue's attempt at generality might also define its limitations: by no means could this collection of papers be exhaustive or complete, and as Guest Editors we are well aware that many exciting topics and potential contributions will be missing. This, however, also determines its very special flavor: besides addressing electromagnetic inverse problems in a broad sense, there were only a few restrictions on the contributions considered for this section. One requirement was plausible evidence of either novelty or the emergent nature of the technique or application described, judged mainly by the referees, and in some cases by the Guest Editors. The technical quality of the contributions always remained a stringent condition of acceptance, final adjudication (possibly questionable either way, not always positive) being made in most cases once a thorough revision process had been carried out. Therefore, we hope that the final result presented here constitutes an interesting collection of novel ideas and applications, properly refereed and edited, which will find its own readership and which can stimulate significant new research in the topics represented. Overall, as Guest Editors, we feel quite fortunate to have obtained such a strong response to the call for this issue and to have a really wide-ranging collection of high-quality contributions which, indeed, can be read from the first to the last page with sustained enthusiasm. 
A large number of applications and techniques are represented, overall via 16 contributions with 45 authors in total. This shows, in our opinion, that electromagnetic imaging and inversion remain amongst the most challenging and active research areas in applied inverse problems today. Below, we give a brief overview of the contributions included in this issue, ordered alphabetically by the surname of the leading author. 1. The complexity of handling potential randomness of the source in an inverse scattering problem is not minor, and the literature is far from replete for this configuration. The contribution by G Bao, S N Chow, P Li and H Zhou, `Numerical solution of an inverse medium scattering problem with a stochastic source', exemplifies how to hybridize a Wiener chaos expansion with a recursive linearization method in order to solve the stochastic problem as a set of decoupled deterministic ones. 2. In cases where the forward problem is expensive to evaluate, database methods can become a reliable choice, while also delivering more information on the inversion itself. The contribution by S Bilicz, M Lambert and Sz Gyimóthy, `Kriging-based generation of optimal databases as forward and inverse surrogate models', describes such a technique which uses kriging for constructing an efficient database with the goal of achieving an equidistant distribution of points in the measurement space. 3. Anisotropy remains a considerable challenge in electromagnetic imaging, which is tackled in the contribution by F Cakoni, D Colton, P Monk and J Sun, `The inverse electromagnetic scattering problem for anisotropic media', via the fact that transmission eigenvalues can be retrieved from a far-field scattering pattern, yielding, in particular, lower and upper bounds of the index of refraction of the unknown (dielectric anisotropic) scatterer. 4. So-called subspace optimization methods (SOM) have attracted a lot of interest recently in many fields. 
The contribution by X Chen, `Subspace-based optimization method for inverse scattering problems with an inhomogeneous background medium', illustrates how to address a realistic situation in which the medium containing the unknown obstacles is not homogeneous, via blending a properly developed SOM with a finite-element approach to the required Green's functions. 5. H Egger, M Hanke, C Schneider, J Schöberl and S Zaglmayr, in their contribution `Adjoint-based sampling methods for electromagnetic scattering', show how to efficiently develop sampling methods without explicit knowledge of the dyadic Green's function once an adjoint problem has been solved at much lower computational cost. This is demonstrated by examples in demanding propagative and diffusive situations. 6. Passive sensor arrays can be employed to image reflectors from ambient noise via proper migration of cross-correlation matrices into their embedding medium. This is investigated, and resolution, in particular, is considered in detail, as a function of the characteristics of the sensor array and those of the noise, in the contribution by J Garnier and G Papanicolaou, `Resolution analysis for imaging with noise'. 7. A direct reconstruction technique based on the conformal mapping theorem is proposed and investigated in depth in the contribution by H Haddar and R Kress, `Conformal mapping and impedance tomography'. This paper expands on previous work, with inclusions in homogeneous media, convergence results, and numerical illustrations. 8. 
The contribution by T Hohage and S Langer, `Acceleration techniques for regularized Newton methods applied to electromagnetic inverse medium scattering problems', focuses on a spectral preconditioner intended to accelerate regularized Newton methods as employed for the retrieval of a local inhomogeneity in a three-dimensional vector electromagnetic case, while also illustrating the implementation of a Lepskiĭ-type stopping rule outsmarting a traditional discrepancy principle. 9. Geophysical applications are a rich source of practically relevant inverse problems. The contribution by M Li, A Abubakar and T Habashy, `Application of a two-and-a-half dimensional model-based algorithm to crosswell electromagnetic data inversion', deals with a model-based inversion technique for electromagnetic imaging which addresses novel challenges such as multi-physics inversion, and incorporation of prior knowledge, such as in hydrocarbon recovery. 10. Non-stationary inverse problems, considered as a special class of Bayesian inverse problems, are framed via an orthogonal decomposition representation in the contribution by A Lipponen, A Seppänen and J P Kaipio, `Reduced order estimation of nonstationary flows with electrical impedance tomography'. The goal is to simultaneously estimate, from electrical impedance tomography data, certain characteristics of the Navier-Stokes fluid flow model together with the time-varying concentration distribution. 11. Non-iterative imaging methods of thin, penetrable cracks, based on asymptotic expansion of the scattering amplitude and analysis of the multi-static response matrix, are discussed in the contribution by W-K Park, `On the imaging of thin dielectric inclusions buried within a half-space', completing, for a shallow burial case at multiple frequencies, the direct imaging of small obstacles (here, along their transverse dimension), MUSIC and non-MUSIC type indicator functions being used for that purpose. 12. 
The contribution by R Potthast, `A study on orthogonality sampling' envisages quick localization and shaping of obstacles from (portions of) far-field scattering patterns collected at one or more time-harmonic frequencies, via the simple calculation (and summation) of scalar products between those patterns and a test function. This is numerically exemplified for Neumann/Dirichlet boundary conditions and homogeneous/heterogeneous embedding media. 13. The contribution by J D Shea, P Kosmas, B D Van Veen and S C Hagness, `Contrast-enhanced microwave imaging of breast tumors: a computational study using 3D realistic numerical phantoms', aims at microwave medical imaging, namely the early detection of breast cancer. The use of contrast enhancing agents is discussed in detail and a number of reconstructions in three-dimensional geometry of realistic numerical breast phantoms are presented. 14. The contribution by D A Subbarayappa and V Isakov, `Increasing stability of the continuation for the Maxwell system', discusses enhanced log-type stability results for continuation of solutions of the time-harmonic Maxwell system, adding a fresh chapter to the interesting story of the study of the Cauchy problem for PDE. 15. In their contribution, `Recent developments of a monotonicity imaging method for magnetic induction tomography in the small skin-depth regime', A Tamburrino, S Ventre and G Rubinacci extend the recently developed monotonicity method toward the application of magnetic induction tomography in order to map surface-breaking defects affecting a damaged metal component. 16. 
The contribution by F Viani, P Rocca, M Benedetti, G Oliveri and A Massa, `Electromagnetic passive localization and tracking of moving targets in a WSN-infrastructured environment', contributes to what could still be seen as a niche problem, yet one both useful in terms of applications, e.g., security, and challenging in terms of methodologies and experiments, in particular, in view of the complexity of environments in which this endeavor is to take place and the variability of the wireless sensor networks employed. To conclude, we would like to thank Kate Watt and Zoë Crossman, as past and present Publishers of the Journal, for their able and tireless work on what was definitely a long and exciting journey (sometimes a little discouraging when reports were not arriving, or authors were late, or Guest Editors overwhelmed) that started from a thorough discussion at the `Manchester workshop on electromagnetic inverse problems' held in mid-June 2009, between Kate Watt and the Guest Editors. We gratefully acknowledge the fact that W W Symes gave us his full backing to carry out this special issue and that A K Louis completed it successfully. Last, but not least, the staff of Inverse Problems should be thanked, since they work together to make it a premier journal.
Children's Understanding of the Inverse Relation between Multiplication and Division
ERIC Educational Resources Information Center
Robinson, Katherine M.; Dube, Adam K.
2009-01-01
Children's understanding of the inversion concept in multiplication and division problems (i.e., that on problems of the form "d multiplied by e/e" no calculations are required) was investigated. Children in Grades 6, 7, and 8 completed an inversion problem-solving task, an assessment of procedures task, and a factual knowledge task of simple…
A Volunteer Computing Project for Solving Geoacoustic Inversion Problems
NASA Astrophysics Data System (ADS)
Zaikin, Oleg; Petrov, Pavel; Posypkin, Mikhail; Bulavintsev, Vadim; Kurochkin, Ilya
2017-12-01
A volunteer computing project aimed at solving computationally hard inverse problems in underwater acoustics is described. This project was used to study the possibilities of the sound speed profile reconstruction in a shallow-water waveguide using a dispersion-based geoacoustic inversion scheme. The computational capabilities provided by the project allowed us to investigate the accuracy of the inversion for different mesh sizes of the sound speed profile discretization grid. This problem suits well for volunteer computing because it can be easily decomposed into independent simpler subproblems.
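A minimal sketch of this decomposition, with an illustrative misfit function and trial sound speeds standing in for the project's actual forward acoustic model and inversion grid:

```python
from itertools import product

# Stand-in for a (costly) forward model compared against measured dispersion
# data; the target profile and quadratic misfit are illustrative assumptions.
def misfit(profile):
    target = (1500.0, 1505.0, 1510.0)
    return sum((a - b) ** 2 for a, b in zip(profile, target))

# Each node of the discretised sound speed profile takes one of a few trial
# values; the Cartesian product enumerates independent work units.
trial_speeds = [1495.0, 1500.0, 1505.0, 1510.0]
n_nodes = 3
tasks = list(product(trial_speeds, repeat=n_nodes))

# On the server side, results returned by volunteers reduce to a simple min().
best = min(tasks, key=misfit)
print(best)
```

Each tuple in `tasks` is an independent subproblem that can be shipped to a volunteer host, which is what makes the problem "easily decomposed" in the sense of the abstract; refining the discretization grid simply enlarges the product.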
Efficient Implementation of High Order Inverse Lax-Wendroff Boundary Treatment for Conservation Laws
2011-07-15
with or without source terms representing chemical reactions in detonations. The results demonstrate the designed fifth order accuracy, stability, and...good performance for problems involving complicated interactions between detonation/shock waves and solid boundaries. AMS subject classification... detonation; no-penetration conditions
Photon-limited Sensing and Surveillance
2015-01-29
considerable time delay). More specifically, there were four main outcomes from this work: • Improved understanding of the fundamental limitations of...that we design novel cameras for photon-limited settings based on the principles of CS. Most prior theoretical results in compressed sensing and related...inverse problems apply to idealized settings where the noise is i.i.d., and do not account for signal-dependent noise and physical sensing
Correlation-based regularization and gradient operators for (joint) inversion on unstructured meshes
NASA Astrophysics Data System (ADS)
Jordi, Claudio; Doetsch, Joseph; Günther, Thomas; Schmelzbach, Cedric; Robertsson, Johan
2017-04-01
When working with unstructured meshes for geophysical inversions, special attention should be paid to the design of the operators that are used for regularizing the inverse problem and coupling different property models in joint inversions. Regularization constraints for inversions on unstructured meshes are often defined in a rather ad hoc manner and usually involve only the cell to which the operator is applied and its direct neighbours. Similarly, most structural coupling operators for joint inversion, such as the popular cross-gradients operator, are only defined in the direct neighbourhood of a cell. As a result, the regularization and coupling length scales and the strength of these operators depend on the discretization as well as on cell sizes and shapes. Especially for unstructured meshes, where the cell sizes vary throughout the model domain, the dependency of the operator on the discretization may lead to artefacts. Designing operators that are based on a spatial correlation model makes it possible to define correlation length scales over which an operator acts (its footprint), reducing the dependency on the discretization and the effects of variable cell sizes. Moreover, correlation-based operators can accommodate expected anisotropy by using different length scales in the horizontal and vertical directions. Correlation-based regularization operators, also known as stochastic regularization operators, have already been successfully applied to inversions on regular grids. Here, we formulate stochastic operators for unstructured meshes and apply them in 2D surface and 3D cross-well electrical resistivity tomography data inversion examples of layered media. Especially for the synthetic cross-well example, improved inversion results are achieved when stochastic regularization is used instead of a classical smoothness constraint. 
For the case of cross-gradients operators for joint inversion, the correlation model is used to define the footprint of the operator and weigh the contributions of the property values that are used to calculate the cross-gradients. In a first series of synthetic-data tests, we examined the mesh dependency of the cross-gradients operators. Compared to operators that are only defined in the direct neighbourhood of a cell, the dependency on the cell size of the cross-gradients calculation is markedly reduced when using operators with larger footprints. A second test with synthetic models focussed on the effect of small-scale variabilities of the parameter value on the cross-gradients calculation. Small-scale variabilities that are superimposed on a global trend of the property value can potentially degrade the cross-gradients calculation and destabilize joint inversion. We observe that the cross-gradients from operators with footprints larger than the length scale of the variabilities are less affected compared to operators with a small footprint. In joint inversions on unstructured meshes, we thus expect the correlation-based coupling operators to ensure robust coupling on a physically meaningful scale.
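A minimal sketch of such a correlation-based operator on scattered cell centres (the exponential covariance model, length scales, and truncation threshold below are illustrative assumptions, not the authors' parameterization):

```python
import numpy as np

# Each cell is coupled to all cells within a correlation footprint, with
# weights from an assumed exponential covariance and different horizontal and
# vertical length scales to encode anisotropy.
rng = np.random.default_rng(1)
centers = rng.uniform(0.0, 100.0, size=(40, 2))   # (x, z) cell centres of an arbitrary mesh

lx, lz = 20.0, 5.0                                # horizontal/vertical correlation lengths
dx = centers[:, 0][:, None] - centers[:, 0][None, :]
dz = centers[:, 1][:, None] - centers[:, 1][None, :]
h = np.sqrt((dx / lx) ** 2 + (dz / lz) ** 2)      # anisotropic normalised distance
C = np.exp(-h)                                    # exponential correlation model

C[C < 0.05] = 0.0                                 # truncate the footprint for sparsity
W = C / C.sum(axis=1, keepdims=True)              # row-normalised averaging weights
R = np.eye(len(centers)) - W                      # roughness operator: m minus its footprint average

# A constant model lies in the null space of R, as a smoothness constraint requires.
m_const = np.ones(len(centers))
print(np.allclose(R @ m_const, 0.0))
```

Because the weights depend on physical distances rather than on mesh adjacency, the operator's footprint is fixed in metres, which is what reduces the dependency on cell size and discretization discussed above.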
Self-constrained inversion of potential fields
NASA Astrophysics Data System (ADS)
Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.
2013-11-01
We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data themselves (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate, through analysis of the gravity or magnetic field, some or all of the following source parameters: the depth-to-top, the structural index, and the horizontal position of the source body edges and their dip. The second step is to incorporate the information related to these constraints into the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential-field-based constraints, such as the structural index and source boundaries, are usually enough to obtain a substantial improvement in the density and magnetization models.
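As one concrete example of such a depth weighting function (the Li-Oldenburg power-law form is a standard choice in potential-field inversion; the parameter values below are illustrative, not the authors'):

```python
import numpy as np

# Depth weighting of the form w(z) = (z + z0)^(-beta/2): beta is tied to the
# field decay rate (roughly the structural index plus one, e.g. ~3 for magnetic
# data), and z0 offsets the surface singularity and can absorb an estimated
# depth-to-top. Both values here are assumptions for illustration.
def depth_weight(z, z0=1.0, beta=3.0):
    return (z + z0) ** (-beta / 2.0)

z = np.linspace(0.0, 50.0, 6)   # cell depths
w = depth_weight(z)
print(w)
```

Dividing the model objective by such weights counteracts the natural tendency of unweighted inversions to concentrate all recovered sources at the surface, which is how a field-derived depth estimate enters the objective function as a constraint.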
Inversion of residual stress profiles from ultrasonic Rayleigh wave dispersion data
NASA Astrophysics Data System (ADS)
Mora, P.; Spies, M.
2018-05-01
We investigate, theoretically and with synthetic data, the performance of several inversion methods for inferring a residual stress state from ultrasonic surface wave dispersion data. We show that, in relevant materials, this particular problem can expose undesired behaviors in methods that are otherwise reliable for inferring other properties. We focus on two methods: one based on a Taylor expansion, and one based on a piecewise linear expansion regularized by a singular value decomposition. We explain the instabilities of the Taylor-based method by highlighting singularities in the series of coefficients. We also show that the second method performs well, with results that depend only weakly on the material.
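The core step of the second approach can be sketched as truncated-SVD regularization of a linearized dispersion system (the kernel, the profile, and the truncation level below are toy assumptions, not the paper's actual model):

```python
import numpy as np

# Synthetic stand-in for a dispersion sensitivity matrix: one row per frequency,
# with exponentially decaying depth kernels, which makes G nearly rank-deficient.
rng = np.random.default_rng(2)
n = 30
depth = np.linspace(0.0, 1.0, n)
freqs = np.linspace(1.0, 10.0, 12)
G = np.exp(-freqs[:, None] * depth[None, :])

m_true = np.exp(-((depth - 0.3) ** 2) / 0.02)     # toy residual stress profile
d = G @ m_true + 1e-4 * rng.standard_normal(len(freqs))

# Truncated SVD: discard singular values below a threshold before inverting,
# which stabilises the solution against noise in the dispersion data.
U, s, Vt = np.linalg.svd(G, full_matrices=False)
keep = s > 1e-3 * s[0]                            # truncation level (assumed)
m_tsvd = Vt[keep].T @ ((U[:, keep].T @ d) / s[keep])

print(int(keep.sum()), "singular values kept")
```

The truncation level plays the role of the regularization parameter: raising it discards more of the unstable directions at the cost of depth resolution.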
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bledsoe, Keith C.
2015-04-01
The DiffeRential Evolution Adaptive Metropolis (DREAM) method is a powerful optimization/uncertainty quantification tool used to solve inverse transport problems in Los Alamos National Laboratory’s INVERSE code system. The DREAM method has been shown to be adept at accurate uncertainty quantification, but it can be very computationally demanding. Previously, the DREAM method in INVERSE performed a user-defined number of particle transport calculations. This placed a burden on the user to guess the number of calculations that would be required to accurately solve any given problem. This report discusses a new approach that has been implemented into INVERSE, the Gelman-Rubin convergence metric.more » This metric automatically detects when an appropriate number of transport calculations have been completed and the uncertainty in the inverse problem has been accurately calculated. In a test problem with a spherical geometry, this method was found to decrease the number of transport calculations (and thus time required) to solve a problem by an average of over 90%. In a cylindrical test geometry, a 75% decrease was obtained.« less
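For one scalar parameter sampled by several chains, the Gelman-Rubin statistic can be sketched as follows (this is the textbook formula, not the INVERSE implementation itself):

```python
import numpy as np

# The Gelman-Rubin statistic compares within-chain and between-chain variance;
# sampling is commonly declared converged when R-hat drops near 1 (e.g. < 1.1).
def gelman_rubin(chains):
    """chains: array of shape (m_chains, n_samples) for one scalar parameter."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()      # mean within-chain variance
    B = n * chain_means.var(ddof=1)            # between-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(3)
mixed = rng.normal(0.0, 1.0, size=(4, 1000))            # chains sampling the same target
stuck = mixed + np.array([[0.0], [0.0], [0.0], [5.0]])  # one chain stuck elsewhere
print(gelman_rubin(mixed), gelman_rubin(stuck))
```

A stopping rule of the kind described above keeps adding transport calculations until R-hat for every inferred parameter falls below a chosen threshold, removing the need for the user to guess the calculation budget in advance.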
NASA Astrophysics Data System (ADS)
Guseinov, I. M.; Khanmamedov, A. Kh.; Mamedova, A. F.
2018-04-01
We consider the Schrödinger equation with an additional quadratic potential on the entire axis and use the transformation operator method to study the direct and inverse problems of the scattering theory. We obtain the main integral equations of the inverse problem and prove that the basic equations are uniquely solvable.
Assessing non-uniqueness: An algebraic approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasco, Don W.
Geophysical inverse problems are endowed with a rich mathematical structure. When discretized, most differential and integral equations of interest are algebraic (polynomial) in form. Techniques from algebraic geometry and computational algebra provide a means to address questions of existence and uniqueness for both linear and non-linear inverse problems. In a sense, these methods extend ideas which have proven fruitful in treating linear inverse problems.
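A toy illustration of this algebraic viewpoint (the system below is invented for illustration and is not from the report): two polynomial "data equations" in two model parameters are triangularised by a lexicographic Groebner basis, from which the number of solutions, and hence the degree of non-uniqueness, can be read off:

```python
import sympy as sp

# A discretised non-linear inverse problem reduces to a polynomial system in
# the model parameters; here two invented data equations in (x, y).
x, y = sp.symbols('x y')
eqs = [x**2 + y**2 - 5, x*y - 2]

# A lex Groebner basis triangularises the system: it contains an x-free
# eliminant in y whose degree bounds the number of candidate models.
gb = sp.groebner(eqs, x, y, order='lex')
eliminant = [g for g in gb.exprs if sp.degree(g, x) <= 0][0]

solutions = sp.solve(eqs, [x, y])
print(eliminant, len(solutions))
```

Here the eliminant is quartic in y, so the "data" admit exactly four models: the inverse problem has solutions (existence) but they are non-unique, which is precisely the kind of question the algebraic machinery answers without exhaustive search.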
FOREWORD: Imaging from coupled physics Imaging from coupled physics
NASA Astrophysics Data System (ADS)
Arridge, S. R.; Scherzer, O.
2012-08-01
Due to the increased demand for tomographic imaging in applied sciences, such as medicine, biology and nondestructive testing, the field has expanded enormously in the past few decades. The common task of tomography is to image the interior of three-dimensional objects from indirect measurement data. In practical realizations, the specimen to be investigated is exposed to probing fields. A variety of these, such as acoustic, electromagnetic or thermal radiation, amongst others, have been advocated in the literature. In all cases, the field is measured after interaction with internal mechanisms of attenuation and/or scattering and images are reconstructed using inverse problems techniques, representing spatial maps of the parameters of these perturbation mechanisms. In the majority of these imaging modalities, either the useful contrast is of low resolution, or high resolution images are obtained with limited contrast or quantitative discriminatory ability. In the last decade, an alternative phenomenon has become of increasing interest, although its origins can be traced much further back; see Widlak and Scherzer [1], Kuchment and Steinhauer [2], and Seo et al [3] in this issue for references to this historical context. Rather than using the same physical field for probing and measurement, with a contrast caused by perturbation, these methods exploit the generation of a secondary physical field which can be measured in addition to, or without, the often dominating effect of the primary probe field. These techniques are variously called 'hybrid imaging' or 'multimodality imaging'. However, in this article and special section we suggest the term 'imaging from coupled physics' (ICP) to more clearly distinguish this methodology from those that simply measure several types of data simultaneously. The key idea is that contrast induced by one type of radiation is read by another kind, so that both high resolution and high contrast are obtained simultaneously. 
As with all new imaging techniques, the discovery of physical principles which can be exploited to yield information about internal physical parameters has led, hand in hand, to the development of new mathematical methods for solving the corresponding inverse problems. In many cases, the coupled physics imaging problems are expected to be much better posed than conventional tomographical imaging problems. However, at the current state of research, there still exist a variety of open mathematical questions regarding uniqueness, existence and stability. In this special section we have invited contributions from many of the leading researchers in the mathematics, physics and engineering of these techniques to survey and to elaborate on these novel methodologies, and to present recent research directions. Historically, one of the best studied strongly ill-posed problems in the mathematical literature is the Calderón problem occurring in conductivity imaging, and one of the first examples of ICP is the use of magnetic resonance imaging (MRI) to detect internal current distributions. This topic, known as current density imaging (CDI) or magnetic resonance electrical impedance tomography (MREIT), and its related technique of magnetic resonance electrical property tomography (MREPT), is reviewed by Widlak and Scherzer [1], and also by Seo et al [3], where experimental studies are documented. Mathematically, several of the ICP problems can be analyzed in terms of the 'p-Laplacian', which raises interesting research questions of non-linear partial differential equations. One approach for analyzing and for the solution of the CDI problem, using characteristics of the 1-Laplacian, is discussed by Tamasan and Veras [4]. Moreover, Moradifam et al [5] present a novel iterative algorithm based on Bregman splitting for solving the CDI problem. 
Probably the most active research areas in ICP are related to acoustic detection, because most of these techniques rely on the photoacoustic effect wherein absorption of an ultrashort pulse of light, having propagated by multiple scattering some distance into a diffusing medium, generates a source of acoustic waves that are propagated with hyperbolic stability to a surface detector. A complementary problem is that of 'acousto-optics' which uses focussed acoustic waves as the primary field to induce perturbations in optical or electrical properties, which are thus spatially localized. Similar physical principles apply to implement ultrasound modulated electrical impedance tomography (UMEIT). These topics are included in the review of Widlak and Scherzer [1], and Kuchment and Steinhauer [2] offer a general analysis of their structure in terms of pseudo-differential operators. 'Acousto-electrical' imaging is analyzed as a particular case by Ammari et al [6]. In the paper by Tarvainen et al [7], the photo-acoustic problem is studied with respect to different models of the light propagation step. In the paper by Monard and Bal [8], a more general problem for the reconstruction of an anisotropic diffusion parameter from power density measurements is considered; here, issues of uniqueness with respect to the number of measurements are of great importance. A distinctive, and highly important, example of ICP is that of elastography, in which the primary field is low-frequency ultrasound giving rise to mechanical displacement that reveals information on the local elasticity tensor. As in all the methods discussed in this section, this contrast mechanism is measured internally, with a secondary technique, which in this case can be either MRI or ultrasound. McLaughlin et al [9] give a comprehensive analysis of this problem. Our intention for this special section was to provide both an overview and a snapshot of current work in this exciting area. 
The increasing interest, and the involvement of cross-disciplinary groups of scientists, will continue to lead to rapid expansion and important new results in this novel area of imaging science.
References
[1] Widlak T and Scherzer O 2012 Inverse Problems 28 084008
[2] Kuchment P and Steinhauer D 2012 Inverse Problems 28 084007
[3] Seo J K, Kim D-H, Lee J, Kwon O I, Sajib S Z K and Woo E J 2012 Inverse Problems 28 084002
[4] Tamasan A and Veras J 2012 Inverse Problems 28 084006
[5] Moradifam A, Nachman A and Timonov A 2012 Inverse Problems 28 084003
[6] Ammari H, Garnier J and Jing W 2012 Inverse Problems 28 084005
[7] Tarvainen T, Cox B T, Kaipio J P and Arridge S R 2012 Inverse Problems 28 084009
[8] Monard F and Bal G 2012 Inverse Problems 28 084001
[9] McLaughlin J, Oberai A and Yoon J R 2012 Inverse Problems 28 084004
Including geological information in the inverse problem of palaeothermal reconstruction
NASA Astrophysics Data System (ADS)
Trautner, S.; Nielsen, S. B.
2003-04-01
A reliable reconstruction of sediment thermal history is of central importance to the assessment of hydrocarbon potential and the understanding of basin evolution. However, only rarely do sedimentation history and borehole data in the form of present-day temperatures and vitrinite reflectance constrain the past thermal evolution to a useful level of accuracy (Gallagher and Sambridge, 1992; Nielsen, 1998; Trautner and Nielsen, 2003). This is reflected in the inverse solutions to the problem of determining heat flow history from borehole data: the recent heat flow is constrained by data, while older values are governed by the chosen a priori heat flow. In this paper we reduce this problem by including geological information in the inverse problem. Through a careful analysis of geological and geophysical data, the timing of the tectonic processes which may influence heat flow can be inferred. The heat flow history is then parameterised to allow for the temporal variations characteristic of the different tectonic events. The inversion scheme applies a Markov chain Monte Carlo (MCMC) approach (Nielsen and Gallagher, 1999; Ferrero and Gallagher, 2002), which efficiently explores the model space and furthermore samples the posterior probability distribution of the model. The technique is demonstrated on wells in the northern North Sea with emphasis on the stretching event in the Late Jurassic. The wells are characterised by maximum sediment temperature at the present day, which is the worst case for resolution of the past thermal history because vitrinite reflectance is determined mainly by the maximum temperature. Including geological information significantly improves the thermal resolution. Ferrero, C. and Gallagher, K., 2002. Stochastic thermal history modelling. 1. Constraining heat flow histories and their uncertainty. Marine and Petroleum Geology, 19, 633-648. Gallagher, K. and Sambridge, M., 1992. 
The resolution of past heat flow in sedimentary basins from non-linear inversion of geochemical data: the smoothest model approach, with synthetic examples. Geophysical Journal International, 109, 78-95. Nielsen, S.B., 1998. Inversion and sensitivity analysis in basin modelling. Geoscience 98, Keele University, UK, Abstract Volume, 56. Nielsen, S.B. and Gallagher, K., 1999. Efficient sampling of 3-D basin modelling scenarios. Extended Abstracts Volume, 1999 AAPG International Conference & Exhibition, Birmingham, England, September 12-15, 1999, p. 369-372. Trautner, S. and Nielsen, S.B., 2003. 2-D inverse thermal modelling in the Norwegian shelf using Fast Approximate Forward (FAF) solutions. In R. Marzi and S. Duppenbecker (Eds.), Multi-Dimensional Basin Modeling, AAPG, in press.
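The Metropolis-type MCMC sampling used in such schemes can be sketched for a toy one-parameter heat-flow inversion. The linear temperature model, all numerical values and the Gaussian likelihood below are illustrative assumptions, not the authors' basin model:

```python
import math
import random

random.seed(0)

# Toy forward model: a constant heat flow q (mW/m^2) predicts borehole
# temperatures T(z) = T0 + q/k * z (T0 in deg C, k in W/(m K), z in km);
# all values are illustrative.
T0, k = 10.0, 2.5
depths_km = [0.5, 1.0, 1.5, 2.0]
q_true, sigma = 60.0, 2.0
data = [T0 + q_true / k * z + random.gauss(0.0, sigma) for z in depths_km]

def log_likelihood(q):
    misfit = sum((d - (T0 + q / k * z)) ** 2 for d, z in zip(data, depths_km))
    return -0.5 * misfit / sigma ** 2

# Metropolis sampler: random-walk proposal, accept with prob min(1, L'/L);
# under a flat prior the retained chain samples the posterior of q.
q, ll = 50.0, log_likelihood(50.0)
samples = []
for _ in range(20000):
    q_prop = q + random.gauss(0.0, 2.0)
    ll_prop = log_likelihood(q_prop)
    if math.log(random.random()) < ll_prop - ll:
        q, ll = q_prop, ll_prop
    samples.append(q)

q_mean = sum(samples[5000:]) / len(samples[5000:])   # discard burn-in
```

The histogram of the retained samples approximates the posterior of the heat flow; in the full problem the single parameter q is replaced by a piecewise heat-flow history tied to the tectonic events.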
Geoacoustic inversion with two source-receiver arrays in shallow water.
Sukhovich, Alexey; Roux, Philippe; Wathelet, Marc
2010-08-01
A geoacoustic inversion scheme based on a double beamforming algorithm in shallow water is proposed and tested. Double beamforming allows identification of multi-reverberated eigenrays propagating between two vertical transducer arrays according to their emission and reception angles and arrival times. Analysis of eigenray intensities yields the bottom reflection coefficient as a function of angle of incidence. By fitting the experimental reflection coefficient with a theoretical prediction, values of the acoustic parameters of the waveguide bottom can be extracted. The procedure was initially tested in a small-scale tank experiment for a waveguide with a Plexiglas bottom. Inversion results for the speed of shear waves in Plexiglas are in good agreement with tabulated values. A similar analysis was applied to data collected during an at-sea experiment in shallow coastal waters of the Mediterranean. The bottom reflection coefficient was fitted with a theory in which bottom sediments are modeled as a multi-layered system. Retrieved bottom parameters are in quantitative agreement with those determined from a prior inversion scheme performed in the same area. The present study confirms the value of processing source-receiver array data through the double beamforming algorithm, and indicates the potential for application of eigenray intensity analysis to geoacoustic inversion problems.
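The fitting step can be sketched for the simplest case of a fluid-fluid bottom (no shear), where the angle-dependent reflection coefficient has a closed form. The parameter values and grid search below are illustrative, not the experiment's layered elastic model:

```python
import cmath
import math

# Water properties: density (kg/m^3) and sound speed (m/s).
rho_w, c_w = 1000.0, 1500.0

def reflection_coeff(theta, rho_b, c_b):
    """Fluid-fluid plane-wave reflection coefficient at incidence angle theta (rad)."""
    cos_t = math.cos(theta)
    # Snell's law; the transmitted cosine turns complex past the critical angle.
    cos_tb = cmath.sqrt(1.0 - (c_b / c_w * math.sin(theta)) ** 2)
    Z1 = rho_w * c_w / cos_t
    Z2 = rho_b * c_b / cos_tb
    return (Z2 - Z1) / (Z2 + Z1)

# Synthetic 'measured' coefficients for an assumed true bottom.
angles = [math.radians(a) for a in range(5, 85, 5)]
rho_true, c_true = 1800.0, 1700.0
measured = [abs(reflection_coeff(t, rho_true, c_true)) for t in angles]

# Invert by grid search on bottom sound speed (density held fixed),
# minimizing the L2 misfit to the measured coefficients.
def misfit(c_b):
    return sum((abs(reflection_coeff(t, rho_true, c_b)) - r) ** 2
               for t, r in zip(angles, measured))

c_grid = [1500.0 + 10.0 * i for i in range(41)]      # 1500-1900 m/s
c_best = min(c_grid, key=misfit)
```

In the actual experiment the forward model additionally carries shear waves and layering, but the structure of the fit, a forward reflection model evaluated over candidate bottom parameters, is the same.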
Elastic-Waveform Inversion with Compressive Sensing for Sparse Seismic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Youzuo; Huang, Lianjie
2015-01-28
Accurate velocity models of compressional- and shear-waves are essential for geothermal reservoir characterization and microseismic imaging. Elastic-waveform inversion of multi-component seismic data can provide high-resolution inversion results of subsurface geophysical properties. However, the method requires seismic data acquired using dense source and receiver arrays. In practice, seismic sources and/or geophones are often sparsely distributed on the surface and/or in a borehole, such as 3D vertical seismic profiling (VSP) surveys. We develop a novel elastic-waveform inversion method with compressive sensing for inversion of sparse seismic data. We employ an alternating-minimization algorithm to solve the optimization problem of our new waveform inversion method. We validate our new method using synthetic VSP data for a geophysical model built using geologic features found at the Raft River enhanced-geothermal-system (EGS) field. We apply our method to synthetic VSP data with a sparse source array and compare the results with those obtained with a dense source array. Our numerical results demonstrate that the velocity models produced with our new method using a sparse source array are almost as accurate as those obtained using a dense source array.
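One common way to carry out the sparsity-promoting subproblem inside such an alternating-minimization scheme is iterative soft thresholding (ISTA). The toy linear problem below is an illustrative stand-in for the elastic-waveform modelling operator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem d = G m with a sparse model m.
n_data, n_model = 40, 100
G = rng.standard_normal((n_data, n_model))
m_true = np.zeros(n_model)
m_true[[5, 37, 80]] = [1.0, -0.7, 0.5]
d = G @ m_true

# ISTA: a gradient step on the data misfit, then a soft-threshold (the
# proximal operator of the l1 norm) that promotes sparsity.
lam = 0.05
step = 1.0 / np.linalg.norm(G, 2) ** 2   # 1/L, L = Lipschitz const of gradient
m = np.zeros(n_model)
obj = []
for _ in range(500):
    g = G.T @ (G @ m - d)
    z = m - step * g
    m = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    obj.append(0.5 * np.sum((G @ m - d) ** 2) + lam * np.abs(m).sum())
```

With the step size bounded by the reciprocal Lipschitz constant, the composite objective decreases monotonically; in the full method the gradient step involves the elastic wave equation and its adjoint rather than a dense matrix.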
NASA Astrophysics Data System (ADS)
Guthier, C.; Aschenbrenner, K. P.; Buergy, D.; Ehmann, M.; Wenz, F.; Hesser, J. W.
2015-03-01
This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.
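A matching-pursuit-type solver greedily selects dictionary atoms (here, candidate seed dose kernels) that best reduce the residual dose misfit. The random dictionary below is an illustrative stand-in for TG-43 dose kernels:

```python
import numpy as np

rng = np.random.default_rng(1)

# Dictionary of candidate dose contributions (columns) and a target dose d.
# In LDR brachytherapy each column would be the dose kernel of one candidate
# seed position; here the dictionary is random for illustration.
n_voxels, n_seeds = 200, 50
D = rng.standard_normal((n_voxels, n_seeds))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
x_true = np.zeros(n_seeds)
x_true[[3, 17, 42]] = [2.0, 1.5, 1.0]       # three 'active' seed positions
d = D @ x_true

def matching_pursuit(D, d, n_iter=50, tol=1e-10):
    """Greedy MP: repeatedly pick the atom most correlated with the residual."""
    x = np.zeros(D.shape[1])
    r = d.copy()
    for _ in range(n_iter):
        corr = D.T @ r
        k = int(np.argmax(np.abs(corr)))
        x[k] += corr[k]
        r -= corr[k] * D[:, k]
        if np.dot(r, r) < tol:
            break
    return x

x = matching_pursuit(D, d)
```

The sparsity of the recovered coefficient vector corresponds directly to the small number of needles and seeds the paper emphasizes.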
NASA Astrophysics Data System (ADS)
Mandolesi, E.; Jones, A. G.; Roux, E.; Lebedev, S.
2009-12-01
Recently, several studies have investigated the correlation between diverse geophysical datasets. Magnetotelluric (MT) data are used to map the electrical conductivity structure beneath the Earth's surface, but one of the problems of the MT method is its lack of resolution in mapping zones beneath a region of high conductivity. Joint inversion of different datasets in which a common structure is recognizable reduces non-uniqueness and may improve the quality of interpretation when the datasets are sensitive to different physical properties sharing an underlying common structure. A common structure is recognized if the changes in physical properties occur at the same spatial locations. Common structure may be recognized in 1D inversion of seismic and MT datasets, and numerous authors have shown that a 2D common structure may also lead to an improvement in inversion quality when the datasets are jointly inverted. In this presentation, a tool to constrain MT 2D inversion with phase velocities of surface-wave seismic data (SW) is proposed and is being developed and tested on synthetic data. The results obtained suggest that a joint inversion scheme could be applied with success along a profile for which data are compatible with a 2D MT model.
LES on Plume Dispersion in the Convective Boundary Layer Capped by a Temperature Inversion
NASA Astrophysics Data System (ADS)
Nakayama, Hiromasa; Tamura, Tetsuro; Abe, Satoshi
Large-eddy simulation (LES) is applied to the problem of plume dispersion in the spatially-developing convective boundary layer (CBL) capped by a temperature inversion. In order to generate inflow turbulence with buoyant forcing, we first simulate the neutral boundary layer (NBL) flow in the driver region using Lund's method. At the same time, the temperature profile possessing the inversion part is imposed at the entrance of the driver region and the temperature field is calculated as a passive scalar. Next, the buoyancy effect is introduced into the flow field in the main region. We evaluate the applicability of the LES model for atmospheric dispersion in the CBL flow and compare the characteristics of plume dispersion in the CBL flow with those in the neutral boundary layer. The Richardson number based on the temperature increment across the inversion obtained by the present LES model is 22.4, and the capping effect of the temperature inversion can be captured qualitatively in the upper portion of the CBL. Characteristics of flow and temperature fields in the main portion of the CBL flow are similar to those of previous experiments [1], [2] and observations [3]. Concerning dispersion behavior, we also find that mean concentrations decrease immediately above the inversion height and the peak values of r.m.s. concentrations are located near the inversion height at larger distances from the point source.
Computational methods for inverse problems in geophysics: inversion of travel time observations
Pereyra, V.; Keller, H.B.; Lee, W.H.K.
1980-01-01
General ways of solving various inverse problems are studied for given travel time observations between sources and receivers. These problems are separated into three components: (a) the representation of the unknown quantities appearing in the model; (b) the nonlinear least-squares problem; (c) the direct, two-point ray-tracing problem used to compute travel time once the model parameters are given. Novel software is described for (b) and (c), and some ideas are given on (a). Numerical results obtained with artificial data and an implementation of the algorithm are also presented.
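Component (b), the nonlinear least-squares step, can be sketched with a Gauss-Newton iteration for a toy constant-velocity source-location problem, in which component (c), the ray tracing, reduces to straight rays. All values are illustrative:

```python
import numpy as np

# Receivers at known positions (km); unknown source (x, y) and origin time
# t0; constant velocity v (km/s), so travel paths are straight rays.
v = 5.0
rec = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 12.0]])
src_true = np.array([3.0, 4.0])
t0_true = 1.0
t_obs = t0_true + np.linalg.norm(rec - src_true, axis=1) / v

def residual(p):
    x, y, t0 = p
    return t_obs - (t0 + np.hypot(rec[:, 0] - x, rec[:, 1] - y) / v)

def jacobian(p, h=1e-6):
    """Central finite-difference Jacobian of the residual vector."""
    J = np.zeros((len(t_obs), len(p)))
    for j in range(len(p)):
        dp = np.zeros(len(p))
        dp[j] = h
        J[:, j] = (residual(p + dp) - residual(p - dp)) / (2 * h)
    return J

# Gauss-Newton: linearize r(p + dp) ~ r + J dp, solve for dp in the
# least-squares sense, and iterate.
p = np.array([5.0, 5.0, 0.5])
for _ in range(20):
    J = jacobian(p)
    dp, *_ = np.linalg.lstsq(J, -residual(p), rcond=None)
    p = p + dp
```

In the paper's setting the residual evaluation itself requires solving the two-point ray-tracing problem, but the outer linearize-and-solve loop has this same structure.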
A fixed energy fixed angle inverse scattering in interior transmission problem
NASA Astrophysics Data System (ADS)
Chen, Lung-Hui
2017-06-01
We study the inverse acoustic scattering problem in mathematical physics. The problem is to recover the index of refraction in an inhomogeneous medium by measuring the scattered wave fields in the far field. We transform the problem into the interior transmission problem in the study of the Helmholtz equation. We establish an inverse uniqueness result for the scatterer from knowledge of a fixed interior transmission eigenvalue. By expanding the far-field solution in a series of spherical harmonics, we can uniquely determine the perturbation source for radially symmetric perturbations.
Zatsiorsky, Vladimir M.
2011-01-01
One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of the infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907
Exploring L1 model space in search of conductivity bounds for the MT problem
NASA Astrophysics Data System (ADS)
Wheelock, B. D.; Parker, R. L.
2013-12-01
Geophysical inverse problems of the type encountered in electromagnetic techniques are highly non-unique. As a result, any single inverted model, though feasible, is at best inconclusive and at worst misleading. In this paper, we use modified inversion methods to establish bounds on electrical conductivity within a model of the earth. Our method consists of two steps, each making use of the 1-norm in model regularization. Both 1-norm minimization problems are framed without approximation as non-negative least-squares (NNLS) problems. First, we must identify a parsimonious set of regions within the model for which upper and lower bounds on average conductivity will be sought. This is accomplished by minimizing the 1-norm of spatial variation, which produces a model with a limited number of homogeneous regions; in fact, the number of homogeneous regions will never be greater than the number of data, regardless of the number of free parameters supplied. The second step establishes bounds for each of these regions with pairs of inversions. The new suite of inversions also uses a 1-norm penalty, but applied to the conductivity values themselves, rather than the spatial variation thereof. In the bounding step we use the 1-norm of our model parameters because it is proportional to average conductivity. For a lower bound on average conductivity, the 1-norm within a bounding region is minimized. For an upper bound on average conductivity, the 1-norm everywhere outside a bounding region is minimized. The latter minimization has the effect of concentrating conductance into the bounding region. Taken together, these bounds are a measure of the uncertainty in the associated region of our model. Starting with a blocky inverse solution is key in the selection of the bounding regions. Of course, there is a tradeoff between resolution and uncertainty: an increase in resolution (smaller bounding regions), results in greater uncertainty (wider bounds). 
Minimization of the 1-norm of spatial variation delivers the fewest possible regions defined by a mean conductivity, the quantity we wish to bound. Thus, these regions present a natural set for which the most narrow and discriminating bounds can be found. For illustration, we apply these techniques to synthetic magnetotelluric (MT) data sets resulting from one-dimensional (1D) earth models. In each case we find that with realistic data coverage, any single inverted model can often stray from the truth, while the computed bounds on an encompassing region contain both the inverted and the true conductivities, indicating that our measure of model uncertainty is robust. Such estimates of uncertainty for conductivity can then be translated to bounds on important petrological parameters such as mineralogy, porosity, saturation, and fluid type.
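The bounding inversions minimize a 1-norm subject to fitting the data. With the standard split x = u − v, u, v ≥ 0, the 1-norm becomes a linear objective, and the problem can be posed without approximation as a linear program. The sketch below uses scipy's linprog as a stand-in for the authors' NNLS formulation, on an illustrative random system:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

# Underdetermined system A x = b: many feasible models; pick the
# minimum-1-norm one via the split x = u - v with u, v >= 0.
A = rng.standard_normal((8, 20))
x_sparse = np.zeros(20)
x_sparse[[2, 11]] = [1.5, -1.0]
b = A @ x_sparse

n = A.shape[1]
c = np.ones(2 * n)                 # minimize sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])          # A u - A v = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x = res.x[:n] - res.x[n:]
```

Minimizing the 1-norm inside (or outside) a bounding region, as in the paper, adds region-dependent weights to the objective vector c but leaves this structure unchanged.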
NASA Astrophysics Data System (ADS)
Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.
2017-07-01
The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on the neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on using the calculated values of the continuity modulus of the inverse operator and its modifications determining the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail given the specified degree of ambiguity, with the total number of sought medium parameters of the order of n × 10³. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The work of the method is illustrated by the example of the three-dimensional (3D) inversion of the synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.
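The core idea, approximating the inverse operator with a network trained on forward-modelled pairs, can be sketched for a scalar toy problem. The forward map, network size and training schedule below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy conditionally well-posed problem: monotone forward map d = m + 0.2 m^3.
# A small one-hidden-layer network is trained to approximate the inverse
# operator m = F^{-1}(d) from simulated (m, d) pairs.
m_train = rng.uniform(-1.0, 1.0, (400, 1))
d_train = m_train + 0.2 * m_train ** 3

hidden = 16
W1 = rng.standard_normal((1, hidden)) * 0.5
b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, 1)) * 0.5
b2 = np.zeros(1)

lr = 0.1
losses = []
for _ in range(2000):
    h = np.tanh(d_train @ W1 + b1)       # forward pass: data -> model estimate
    m_hat = h @ W2 + b2
    err = m_hat - m_train
    losses.append(float((err ** 2).mean()))
    # Backpropagation for the two-layer network (full-batch gradient descent).
    gW2 = h.T @ err / len(err)
    gb2 = err.mean(axis=0)
    gh = err @ W2.T * (1 - h ** 2)
    gW1 = d_train.T @ gh / len(err)
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

Once trained, the network maps observed data directly to model parameters; in the paper this is done on a regularized parameterization grid with thousands of parameters rather than one.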
Geostatistical regularization operators for geophysical inverse problems on irregular meshes
NASA Astrophysics Data System (ADS)
Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. O. A.
2018-05-01
Irregular meshes allow complicated subsurface structures to be included in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are defined using only the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows information about geological structures to be incorporated. We propose an approach to calculating geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D synthetic surface electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results than the anisotropic smoothness constraints.
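A minimal sketch of the proposed construction: build a covariance matrix from a correlation model evaluated at the (irregular) cell centres, eigendecompose it, and form a square-root operator whose penalty reproduces the geostatistical prior. The exponential covariance and anisotropy lengths below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Cell centres of an irregular 2-D mesh (random here for illustration).
n_cells = 60
xy = rng.uniform(0.0, 100.0, (n_cells, 2))

# Exponential covariance from an anisotropic correlation model: longer
# correlation length in x than in y, mimicking a layered-structure prior.
lx, ly = 20.0, 5.0
dx = xy[:, None, 0] - xy[None, :, 0]
dy = xy[:, None, 1] - xy[None, :, 1]
C = np.exp(-np.sqrt((dx / lx) ** 2 + (dy / ly) ** 2))

# Eigendecomposition C = V diag(w) V^T yields a square-root regularization
# operator W = diag(w)^(-1/2) V^T, so that ||W m||^2 = m^T C^{-1} m, i.e. a
# model penalty consistent with the geostatistical prior.
w, V = np.linalg.eigh(C)
W = (V / np.sqrt(w)).T
```

Because the covariance is evaluated directly at cell centres, nothing in this construction requires the mesh to be regular, which is the point of the approach.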
FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)
NASA Astrophysics Data System (ADS)
2014-10-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 and May 2013, (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) Workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. 
The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2014 was a one-day workshop held in May 2014 which attracted around sixty attendees. Each submitted paper was reviewed by two reviewers, and nine papers were accepted. In addition, three international speakers were invited to present longer talks. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks (GDR ISIS, GDR MIA, GDR MOA, GDR Ondes). The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA, SATIE. Eric Vourc'h and Thomas Rodet
Multistatic aerosol-cloud lidar in space: A theoretical perspective
NASA Astrophysics Data System (ADS)
Mishchenko, M. I.; Alexandrov, M. D.; Cairns, B.; Travis, L. D.
2016-12-01
Accurate aerosol and cloud retrievals from space remain quite challenging and typically involve solving a severely ill-posed inverse scattering problem. In this Perspective, we formulate in general terms an aerosol and aerosol-cloud interaction space mission concept intended to provide detailed horizontal and vertical profiles of aerosol physical characteristics as well as identify mutually induced changes in the properties of aerosols and clouds. We argue that a natural and feasible way of addressing the ill-posedness of the inverse scattering problem while having an exquisite vertical-profiling capability is to fly a multistatic (including bistatic) lidar system. We analyze theoretically the capabilities of a formation-flying constellation of a primary satellite equipped with a conventional monostatic (backscattering) lidar and one or more additional platforms each hosting a receiver of the scattered laser light. If successfully implemented, this concept would combine the measurement capabilities of a passive multi-angle multi-spectral polarimeter with the vertical profiling capability of a lidar; address the ill-posedness of the inverse problem caused by the highly limited information content of monostatic lidar measurements; address the ill-posedness of the inverse problem caused by vertical integration and surface reflection in passive photopolarimetric measurements; relax polarization accuracy requirements; eliminate the need for exquisite radiative-transfer modeling of the atmosphere-surface system in data analyses; yield the day-and-night observation capability; provide direct characterization of ground-level aerosols as atmospheric pollutants; and yield direct measurements of polarized bidirectional surface reflectance. 
We demonstrate, in particular, that supplementing the conventional backscattering lidar with just one additional receiver flown in formation at a scattering angle close to 170° can dramatically increase the information content of the measurements. Although the specific subject of this Perspective is the multistatic lidar concept, all our conclusions equally apply to a multistatic radar system intended to study from space the global distribution of cloud and precipitation characteristics.
NASA Astrophysics Data System (ADS)
Madsen, Line Meldgaard; Fiandaca, Gianluca; Auken, Esben; Christiansen, Anders Vest
2017-12-01
The application of time-domain induced polarization (TDIP) is increasing with advances in acquisition techniques, data processing and spectral inversion schemes. An inversion of TDIP data for the spectral Cole-Cole parameters is a non-linear problem, but by applying a 1-D Markov chain Monte Carlo (MCMC) inversion algorithm, a full non-linear uncertainty analysis of the parameters and the parameter correlations can be accessed. This is essential for understanding to what degree the spectral Cole-Cole parameters can be resolved from TDIP data. MCMC inversions of synthetic TDIP data, which show bell-shaped probability distributions with a single maximum, show that the Cole-Cole parameters can be resolved from TDIP data if an acquisition range above two decades in time is applied. Linear correlations between the Cole-Cole parameters are observed, and as the acquisition range decreases, the correlations increase and become non-linear. It is further investigated how waveform and parameter values influence the resolution of the Cole-Cole parameters. A limiting factor is the value of the frequency exponent, C. As C decreases, the resolution of all the Cole-Cole parameters decreases and the results become increasingly non-linear. While the values of the time constant, τ, must be in the acquisition range to resolve the parameters well, the choice between a 50 per cent and a 100 per cent duty cycle for the current injection does not have an influence on the parameter resolution. The limits of resolution and linearity are also studied in a comparison between the MCMC and a linearized gradient-based inversion approach. The two methods are consistent for resolved models, but the linearized approach tends to underestimate the uncertainties for poorly resolved parameters due to the corresponding non-linear features. Finally, an MCMC inversion of 1-D field data verifies that spectral Cole-Cole parameters can also be resolved from time-domain field measurements.
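For reference, the spectral Cole-Cole model being inverted for can be written down directly; the sketch below evaluates the complex resistivity in the Pelton form, with illustrative parameter values:

```python
import math

def cole_cole(omega, rho0, m, tau, c):
    """Complex resistivity of the Cole-Cole model (Pelton form):
    rho(w) = rho0 * (1 - m * (1 - 1 / (1 + (i w tau)**c)))."""
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

# Illustrative parameters: DC resistivity rho0 (ohm m), chargeability m,
# time constant tau (s), frequency exponent c.
rho0, m, tau, c = 100.0, 0.3, 0.01, 0.5
freqs = [10.0 ** k for k in range(-2, 6)]
spectrum = [cole_cole(2 * math.pi * f, rho0, m, tau, c) for f in freqs]
```

The limits make the parameter roles explicit: the spectrum tends to rho0 at low frequency and to rho0·(1 − m) at high frequency, with tau and c controlling where and how sharply the transition occurs, which is why a wide acquisition range matters for resolving them.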
Inverse problems in the design, modeling and testing of engineering systems
NASA Technical Reports Server (NTRS)
Alifanov, Oleg M.
1991-01-01
Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.
Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations
NASA Astrophysics Data System (ADS)
Zhi, L.; Gu, H.
2017-12-01
The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though such approximations are concise and convenient to use, they have certain limitations: they are valid only when the contrast in elastic parameters between the upper and lower media is small and the incident angle is small, and the inversion for density is not stable. Therefore, we develop a time-lapse joint AVO inversion method based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the reflection coefficient of the PP wave, and in constructing the objective function for inversion we use a Taylor expansion to linearize the inversion problem. Through joint AVO inversion of the seismic data in the baseline and monitor surveys, we can simultaneously obtain the P-wave velocity, S-wave velocity and density in the baseline survey, together with their time-lapse changes. We can also estimate the change in oil saturation from the inversion results. Compared with time-lapse difference inversion, the joint inversion has better applicability: it requires fewer assumptions and can estimate more parameters simultaneously. Meanwhile, by using the generalized linear method, the inversion is easily implemented and its computational cost is small. We use the Marmousi model to generate synthetic seismic records and analyze the influence of random noise. Without noise, all estimated results are relatively accurate. As the noise increases, the P-wave velocity change and oil saturation change remain stable and are less affected by noise, while the S-wave velocity change is the most affected. Finally, we apply the method to actual field data from time-lapse seismic prospecting, and the results demonstrate the availability and feasibility of our method in practical situations.
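The abstract contrasts the exact Zoeppritz equations with the common approximations. As an illustration of the approximate route it criticizes, here is a sketch of the classic three-term Aki-Richards approximation to the PP reflection coefficient; this is a well-known small-contrast approximation, not the authors' exact-Zoeppritz method, and the interface values below are made up.

```python
import math

def aki_richards_rpp(theta, vp1, vs1, rho1, vp2, vs2, rho2):
    """Three-term Aki-Richards approximation to the PP reflection
    coefficient across a small-contrast interface; theta (radians) is
    taken as the average of incidence and transmission angles."""
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    k = (vs / vp) ** 2
    s2 = math.sin(theta) ** 2
    return (0.5 * (1 - 4 * k * s2) * drho / rho
            + dvp / (2 * vp * math.cos(theta) ** 2)
            - 4 * k * s2 * dvs / vs)

# At normal incidence (theta = 0) this reduces to the familiar
# 0.5 * (dvp/vp + drho/rho) normal-incidence reflectivity.
r0 = aki_richards_rpp(0.0, 3000, 1500, 2300, 3200, 1650, 2400)
```

The validity conditions stated in the abstract (small contrasts, small angles) are exactly the regime in which this expansion is accurate.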
NASA Astrophysics Data System (ADS)
Zielke, O.; McDougall, D.; Mai, P. M.; Babuska, I.
2014-12-01
One fundamental aspect of seismic hazard mitigation is gaining a better understanding of the rupture process. Because direct observation of the relevant parameters and properties is not possible, other means such as kinematic source inversions are used instead. By constraining the spatial and temporal evolution of fault slip during an earthquake, those inversion approaches may enable valuable insights into the physics of the rupture process. However, due to the underdetermined nature of this inversion problem (i.e., inverting a kinematic source model for an extended fault based on seismic data), the provided solutions are generally non-unique. Here we present a statistical (Bayesian) inversion approach based on an open-source library for uncertainty quantification (UQ) called QUESO, developed at ICES (UT Austin). The approach has advantages over deterministic inversion approaches in that it provides not only a single (non-unique) solution but also uncertainty bounds with it. Those uncertainty bounds help to judge, qualitatively and quantitatively, how well constrained an inversion solution is and how much rupture complexity the data reliably resolve. The presented inversion scheme uses only tele-seismically recorded body waves, but future developments may lead us towards joint inversion schemes. After giving an insight into the inversion scheme itself (based on delayed rejection adaptive Metropolis, DRAM), we explore the method's resolution potential. For that, we synthetically generate tele-seismic data, add, for example, different levels of noise and/or change the fault-plane parameterization, and then apply our inversion scheme in an attempt to recover the (known) kinematic rupture model. We conclude by inverting, as an example, real tele-seismic data from a recent large earthquake and comparing the results with deterministically derived kinematic source models provided by other research groups.
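The DRAM sampler named above builds delayed rejection and proposal adaptation on top of the basic random-walk Metropolis accept/reject loop. A minimal sketch of that core loop, applied to a toy one-dimensional posterior (not the kinematic source model):

```python
import math
import random

def metropolis(logpost, x0, step, n, seed=0):
    """Plain random-walk Metropolis sampler. DRAM extends this core
    loop with delayed rejection and covariance adaptation; both are
    omitted here for brevity."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)          # Gaussian proposal
        lpp = logpost(xp)
        if math.log(rng.random()) < lpp - lp:  # accept with prob min(1, ratio)
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy posterior: standard normal log-density (up to an additive constant).
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 1.0, 5000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

The resulting chain approximates the posterior, so the spread of the samples directly provides the uncertainty bounds the abstract emphasizes.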
Riverine Bathymetry Imaging with Indirect Observations
NASA Astrophysics Data System (ADS)
Farthing, M.; Lee, J. H.; Ghorbanidehno, H.; Hesser, T.; Darve, E. F.; Kitanidis, P. K.
2017-12-01
Bathymetry (i.e., depth) imaging in a river is of crucial importance for shipping operations and flood management. With advancements in sensor technology and computational resources, various types of indirect measurements can be used to estimate high-resolution riverbed topography. In particular, the use of surface velocity measurements has been actively investigated recently, since they are easy to acquire at low cost in all river conditions and surface velocities are sensitive to the river depth. In this work, we image riverbed topography using depth-averaged quasi-steady velocity observations related to the topography through the 2D shallow water equations (SWE). The principal component geostatistical approach (PCGA), a fast and scalable variational inverse modeling method powered by a low-rank representation of the covariance matrix structure, is presented and applied to two "twin" riverine bathymetry identification problems. To compare the efficiency and effectiveness of the proposed method, an ensemble-based approach is also applied to the test problems. Results demonstrate that PCGA is superior to the ensemble-based approach in terms of computational effort and accuracy. In particular, the results obtained from PCGA capture small-scale bathymetry features irrespective of the initial guess, through successive linearization of the forward model. Analysis of the direct survey data of the riverine bathymetry used in one of the test problems shows an efficient, parsimonious choice of the solution basis in PCGA, so that the number of numerical model runs needed to achieve the inversion results is close to the minimum number that reconstructs the underlying bathymetry.
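The low-rank covariance representation at the heart of PCGA can be illustrated with a truncated eigendecomposition of a prior covariance matrix. In this sketch the grid size, correlation length and rank are arbitrary illustrative choices, and the exponential kernel is a common generic prior, not necessarily the one used in the paper.

```python
import numpy as np

n, ell, k = 200, 0.2, 20          # grid points, correlation length, rank
x = np.linspace(0.0, 1.0, n)
Q = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)  # exponential covariance

w, V = np.linalg.eigh(Q)          # eigenvalues in ascending order
w, V = w[::-1][:k], V[:, ::-1][:, :k]  # keep the k leading eigenpairs
Qk = (V * w) @ V.T                # rank-k approximation of Q

rel_err = np.linalg.norm(Q - Qk) / np.linalg.norm(Q)
```

Because the eigenvalues of smooth covariance kernels decay quickly, a small rank k captures most of the prior structure, which is what lets PCGA keep the number of forward-model runs close to the minimum.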
Monotonicity based imaging method for time-domain eddy current problems
NASA Astrophysics Data System (ADS)
Su, Z.; Ventre, S.; Udpa, L.; Tamburrino, A.
2017-12-01
Eddy current imaging is an example of an inverse problem in nondestructive evaluation for detecting anomalies in conducting materials. This paper introduces the concept of time constants and associated natural modes in eddy current imaging. The monotonicity of the time constants is then described and applied to develop a non-iterative imaging method. The proposed imaging method has a low computational cost, which makes it suitable for real-time operation. Full 3D numerical examples prove the effectiveness of the method in realistic scenarios. This paper is dedicated to Professor Guglielmo Rubinacci on the occasion of his 65th birthday.
Identifying the principal coefficient of parabolic equations with non-divergent form
NASA Astrophysics Data System (ADS)
Jiang, L. S.; Bian, B. J.
2005-01-01
We deal with an inverse problem of determining the coefficient a(x, t) of the principal part of second-order parabolic equations in non-divergence form when the solution is known. Such problems have important applications in many fields of applied science. We propose a well-posed approximate algorithm to identify the coefficient. The existence, uniqueness and stability of such solutions a(x, t) are proved. A necessary condition, which is a coupled system of a parabolic equation and a parabolic variational inequality, is deduced. Our numerical simulations show that the coefficient is recovered very well.
How to Detect the Location and Time of a Covert Chemical Attack: A Bayesian Approach
2009-12-01
Inverse Problems, Design and Optimization Symposium 2004. Rio de Janeiro, Brazil. Chan, R., and Yee, E. (1997). A simple model for the probability...sensor interpretation applications and has been successfully applied, for example, to estimate the source strength of pollutant releases in multi...coagulation, and second-order pollutant diffusion in sorption-desorption, are not linear. Furthermore, wide uncertainty bounds exist for several of
NASA Astrophysics Data System (ADS)
Hou, Zhenlong; Huang, Danian
2017-09-01
In this paper, we first study the inversion of probability tomography (IPT) with gravity gradiometry data. The spatial resolution of the results is improved by multi-tensor joint inversion, a depth-weighting matrix and other techniques. To address the problems posed by the large data volumes encountered in exploration, we present a parallel algorithm and its performance analysis, combining Compute Unified Device Architecture (CUDA) with Open Multi-Processing (OpenMP) for Graphics Processing Unit (GPU) acceleration. In tests on a synthetic model and on real data from the Vinton Dome, we obtain improved results, demonstrating that the improved inversion algorithm is effective and feasible. The performance of the parallel algorithm we designed is better than that of the other CUDA implementations, with a maximum speedup of more than 200. In the performance analysis, multi-GPU speedup and multi-GPU efficiency are applied to analyze the scalability of the multi-GPU programs. The designed parallel algorithm is shown to be able to process larger-scale data, and the new analysis method is practical.
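The multi-GPU speedup and efficiency metrics used in the performance analysis have standard definitions, sketched below; the timings are hypothetical numbers for illustration, not results from the paper.

```python
def speedup(t_serial, t_parallel):
    """Speedup S = T_1 / T_p: serial runtime over parallel runtime."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_devices):
    """Parallel efficiency E = S / p: speedup divided by device count;
    E = 1 would be ideal linear scaling."""
    return speedup(t_serial, t_parallel) / n_devices

# Hypothetical runtimes (seconds) for 1, 2 and 4 GPUs.
t1, times = 400.0, {1: 400.0, 2: 210.0, 4: 115.0}
for p, tp in times.items():
    s, e = speedup(t1, tp), efficiency(t1, tp, p)
```

Efficiency typically falls below 1 as devices are added, which is why the abstract reports both quantities when judging scalability.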
Three-Dimensional Anisotropic Acoustic and Elastic Full-Waveform Seismic Inversion
NASA Astrophysics Data System (ADS)
Warner, M.; Morgan, J. V.
2013-12-01
Three-dimensional full-waveform inversion is a high-resolution, high-fidelity, quantitative, seismic imaging technique that has advanced rapidly within the oil and gas industry. The method involves the iterative improvement of a starting model using a series of local linearized updates to solve the full non-linear inversion problem. During the inversion, forward modeling employs the full two-way three-dimensional heterogeneous anisotropic acoustic or elastic wave equation to predict the observed raw field data, wiggle-for-wiggle, trace-by-trace. The method is computationally demanding; it is highly parallelized, and runs on large multi-core multi-node clusters. Here, we demonstrate what can be achieved by applying this newly practical technique to several high-density 3D seismic datasets that were acquired to image four contrasting sedimentary targets: a gas cloud above an oil reservoir, a radially faulted dome, buried fluvial channels, and collapse structures overlying an evaporite sequence. We show that the resulting anisotropic P-wave velocity models match in situ measurements in deep boreholes, reproduce detailed structure observed independently on high-resolution seismic reflection sections, accurately predict the raw seismic data, simplify and sharpen reverse-time-migrated reflection images of deeper horizons, and flatten Kirchhoff-migrated common-image gathers. We also show that full-elastic 3D full-waveform inversion of pure pressure data can generate a reasonable shear-wave velocity model for one of these datasets. For two of the four datasets, the inclusion of significant transversely isotropic anisotropy with a vertical axis of symmetry was necessary in order to fit the kinematics of the field data properly. For the faulted dome, the full-waveform-inversion P-wave velocity model recovers the detailed structure of every fault that can be seen on coincident seismic reflection data.
Some of the individual faults represent high-velocity zones, some represent low-velocity zones, some have more complex internal structure, and some are visible merely as offsets between two regions with contrasting velocity. Although this has not yet been demonstrated quantitatively for this dataset, it seems likely that at least some of this fine structure in the recovered velocity model is related to the detailed lithology, strain history and fluid properties within the individual faults. We have here applied this technique to seismic data that were acquired by the extractive industries; however, this inversion scheme is immediately scalable and applicable to a much wider range of problems, given sufficient quality and density of observed data. Potential targets range from shallow magma chambers beneath active volcanoes, through whole-crustal sections across plate boundaries, to regional and whole-Earth models.
NASA Astrophysics Data System (ADS)
Luo, H.; Zhang, H.; Gao, J.
2016-12-01
Seismic and magnetotelluric (MT) imaging methods are generally used to characterize subsurface structures at various scales. The two methods are complementary, and integrating them helps to determine the resistivity and velocity models of the target region more reliably. Because of the difficulty of finding an empirical relationship between resistivity and velocity parameters, Gallardo and Meju [2003] proposed a joint inversion method that enforces structural consistency between the resistivity and velocity models, realized by minimizing the cross gradients between the two models. However, it is extremely challenging to combine two different inversion systems together along with the cross-gradient constraints. For this reason, Gallardo [2007] proposed a joint inversion scheme that decouples the seismic and MT inversion systems by iteratively performing the seismic and MT inversions and the cross-gradient minimization separately. This scheme avoids the complexity of combining two different systems, but it suffers from the difficulty of balancing data fitting against the structure constraint. In this study, we have developed a new joint inversion scheme that avoids the problem encountered by the scheme of Gallardo [2007]. In the new scheme, the seismic and MT inversions are still performed separately, but the cross-gradient minimization is also constrained by the model perturbations from the separate inversions. In this way, the new scheme still avoids the complexity of combining two different systems, and at the same time the balance between data fitting and the structure-consistency constraint can be enforced. We have tested our joint inversion algorithm for both 2D and 3D cases. Synthetic tests show that joint inversion reconstructs the velocity and resistivity models better than separate inversions. Compared to separate inversions, joint inversion can remove artifacts in the resistivity model and can improve the resolution of deeper resistivity structures.
We will also show results applying the new joint seismic and MT inversion scheme to southwest China, where several MT profiles are available and earthquakes are very active.
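The cross-gradient function minimized in this family of joint inversions has a compact form: the cross product of the spatial gradients of the two models, which vanishes wherever the models are structurally consistent. A minimal 2D sketch (grid and model values are illustrative):

```python
import numpy as np

def cross_gradient(m1, m2, dy=1.0, dx=1.0):
    """z-component of the cross product of the two model gradients:

        t = (dm1/dx)(dm2/dy) - (dm1/dy)(dm2/dx)

    t is zero where the spatial gradients of m1 and m2 are parallel,
    i.e. where the two models share structure."""
    g1y, g1x = np.gradient(m1, dy, dx)
    g2y, g2x = np.gradient(m2, dy, dx)
    return g1x * g2y - g1y * g2x

yy, xx = np.mgrid[0:32, 0:32]
m1 = np.hypot(xx - 16, yy - 16)   # concentric "structure"
m2 = 3.0 * m1 + 5.0               # same structure, different property values
t = cross_gradient(m1, m2)
```

Because m2 here is an affine function of m1, the two gradient fields are parallel everywhere and the cross-gradient is numerically zero, which is exactly the structural-consistency condition the joint inversion enforces.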
Zhan, X.
2005-01-01
A parallel Fortran-MPI (Message Passing Interface) software package for numerical inversion of the Laplace transform, based on a Fourier series method, has been developed to meet the need of solving computationally intensive problems involving the response of oscillatory water levels to hydraulic tests in a groundwater environment. The software is a parallel version of ACM (Association for Computing Machinery) Transactions on Mathematical Software (TOMS) Algorithm 796. Running 38 test examples indicated that implementing MPI techniques on a distributed-memory architecture speeds up the processing and improves efficiency. Applications to oscillatory water levels in a well during aquifer tests are presented to illustrate how this package can be applied to solve complicated environmental problems involving differential and integral equations. The package is free and is easy to use for people with little or no previous experience with MPI who wish to get off to a quick start in parallel computing.
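Numerical inverse Laplace transforms like the Fourier-series scheme of TOMS Algorithm 796 can be illustrated compactly with a different classic method, the Gaver-Stehfest algorithm, shown here purely as a stand-in (it is not the algorithm the paper parallelizes). It works well for smooth, non-oscillatory time functions.

```python
from math import factorial, log

def stehfest(F, t, N=12):
    """Gaver-Stehfest numerical inversion of the Laplace transform:
    approximates f(t) from samples of F(s) on the real axis.
    N must be even; larger N improves accuracy up to round-off."""
    ln2 = log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        vk = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            vk += (j ** (N // 2) * factorial(2 * j)
                   / (factorial(N // 2 - j) * factorial(j)
                      * factorial(j - 1) * factorial(k - j)
                      * factorial(2 * j - k)))
        vk *= (-1) ** (k + N // 2)
        total += vk * F(k * ln2 / t)
    return ln2 / t * total

# F(s) = 1/(s + 1) has the known inverse f(t) = exp(-t).
approx = stehfest(lambda s: 1.0 / (s + 1.0), 1.0)
```

The outer sum over k is embarrassingly parallel, which hints at why such inversions are natural candidates for the MPI parallelization the abstract describes.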
On seismological moments and magnitudes
Bolt, B. A.
1991-01-01
My approach to seismology over the years has always been from the point of view of applied mathematics, as exemplified broadly by the work of the late Sir Harold Jeffreys and Professor K. E. Bullen. Both stressed the development of mathematics in the context of physical systems and of modeling, with an eye always on the side of inference. Seismology provided for them, and still provides today, the almost perfect paradigm; the problem is the resolution of the detailed constitution of the Earth and its geologically short-term dynamics. The latter part includes, of course, seismic-risk estimation. The last 20 years have seen the construction of a brilliant theoretical formalism for linear inverse problems in seismology, although, oddly enough, the current popular Earth models do not take account of it. It is interesting too that the narrow opinion, prevalent a decade ago, to the effect that the traditional seismic body-wave approaches to structural definition were superseded, has been largely abandoned under today's banner of tomography, as though the Oldham-Jeffreys-Gutenberg inversions were not tomography.
A stochastic vortex structure method for interacting particles in turbulent shear flows
NASA Astrophysics Data System (ADS)
Dizaji, Farzad F.; Marshall, Jeffrey S.; Grant, John R.
2018-01-01
In a recent study, we have proposed a new synthetic turbulence method based on stochastic vortex structures (SVSs), and we have demonstrated that this method can accurately predict particle transport, collision, and agglomeration in homogeneous, isotropic turbulence in comparison to direct numerical simulation results. The current paper extends the SVS method to non-homogeneous, anisotropic turbulence. The key element of this extension is a new inversion procedure, by which the vortex initial orientation can be set so as to generate a prescribed Reynolds stress field. After validating this inversion procedure for simple problems, we apply the SVS method to the problem of interacting particle transport by a turbulent planar jet. Measures of the turbulent flow and of particle dispersion, clustering, and collision obtained by the new SVS simulations are shown to compare well with direct numerical simulation results. The influence of different numerical parameters, such as number of vortices and vortex lifetime, on the accuracy of the SVS predictions is also examined.
Maximum likelihood techniques applied to quasi-elastic light scattering
NASA Technical Reports Server (NTRS)
Edwards, Robert V.
1992-01-01
There is a need for an automatic procedure to reliably estimate the quality of particle-size measurements from QELS (quasi-elastic light scattering). Obtaining the measurement itself, before any error estimates can be made, is a problem because it comes from a very indirect measurement of a signal derived from the motion of particles in the system and requires the solution of an inverse problem. The eigenvalue structure of the transform that generates the signal is such that an arbitrarily small amount of noise can obliterate parts of any practical inversion spectrum. This project uses maximum likelihood estimation (MLE) as a framework to develop a theory and a working set of software to oversee the measurement process and extract the particle-size information, while at the same time providing error estimates for those measurements. The theory involved verifying a correct form of the covariance matrix for the noise on the measurement and then estimating particle-size parameters using a modified histogram approach.
NASA Astrophysics Data System (ADS)
Yee, Eugene
2007-04-01
Although a great deal of research effort has been focused on the forward prediction of the dispersion of contaminants (e.g., chemical and biological warfare agents) released into the turbulent atmosphere, much less work has been directed toward the inverse prediction of agent source location and strength from the measured concentration, even though the importance of this problem for a number of practical applications is obvious. In general, the inverse problem of source reconstruction is ill-posed and unsolvable without additional information. It is demonstrated that a Bayesian probabilistic inferential framework provides a natural and logically consistent method for source reconstruction from a limited number of noisy concentration data. In particular, the Bayesian approach permits one to incorporate prior knowledge about the source as well as additional information regarding both model and data errors. The latter enables a rigorous determination of the uncertainty in the inference of the source parameters (e.g., spatial location, emission rate, release time, etc.), hence extending the potential of the methodology as a tool for quantitative source reconstruction. A model (or, source-receptor relationship) that relates the source distribution to the concentration data measured by a number of sensors is formulated, and Bayesian probability theory is used to derive the posterior probability density function of the source parameters. A computationally efficient methodology for determination of the likelihood function for the problem, based on an adjoint representation of the source-receptor relationship, is described. Furthermore, we describe the application of efficient stochastic algorithms based on Markov chain Monte Carlo (MCMC) for sampling from the posterior distribution of the source parameters, the latter of which is required to undertake the Bayesian computation. 
The Bayesian inferential methodology for source reconstruction is validated against real dispersion data for two cases involving contaminant dispersion in highly disturbed flows over urban and complex environments where the idealizations of horizontal homogeneity and/or temporal stationarity in the flow cannot be applied to simplify the problem. Furthermore, the methodology is applied to the case of reconstruction of multiple sources.
Sharp Boundary Inversion of 2D Magnetotelluric Data using Bayesian Method.
NASA Astrophysics Data System (ADS)
Zhou, S.; Huang, Q.
2017-12-01
Conventional magnetotelluric (MT) inversion methods cannot show the distribution of underground resistivity with clear boundaries, even when clearly distinct blocks are present. To solve this problem, we develop a Bayesian framework to invert 2D MT data for sharp boundaries, using the boundary locations and interior resistivities as the random variables. First, we use other MT inversion results, such as those from ModEM, to analyze the resistivity distribution roughly. Then, we select suitable random variables and convert them to traditional staggered-grid parameters, which can be used in the finite-difference forward computation. Finally, we obtain the posterior probability density (PPD), which contains all the prior information and the model-data correlation, by Markov chain Monte Carlo (MCMC) sampling from the prior distribution. The depth, resistivity and their uncertainties can then be evaluated, and sensitivity estimation is also possible. We applied the method to a synthetic case composed of two large anomalous blocks in a uniform background. When we impose boundary-smoothness and near-true-model weighting constraints that mimic joint or constrained inversion, we find that the model yields a more precise and focused depth distribution. We also tested the inversion without constraints and found that the boundaries can still be recovered, though not as well. Both inversions estimate the resistivity well. The constrained result has a lower root-mean-square misfit than the ModEM inversion result. The data sensitivity obtained via the PPD shows that the resistivity is the most sensitive parameter, the center depth comes second, and the two sides are the worst resolved.
NASA Astrophysics Data System (ADS)
Lezina, Natalya; Agoshkov, Valery
2017-04-01
The domain decomposition method (DDM) allows one to represent a domain with complex geometry as a set of essentially simpler subdomains. The method is particularly well suited to the hydrodynamics of oceans and seas. In each subdomain the system of thermo-hydrodynamic equations is solved in the Boussinesq and hydrostatic approximations. The difficulty in obtaining a solution in the whole domain is that the subdomain solutions must be combined. For this purpose an iterative algorithm is constructed, and numerical experiments are conducted to investigate its effectiveness. For symmetric operators in DDM, Poincare-Steklov operators [1] are used, but for hydrodynamics problems this is not suitable; in this case the adjoint equation method [2] and inverse problem theory are used instead. In addition, DDM makes it possible to design algorithms for parallel computation on multiprocessor computer systems. DDM is studied numerically for a model of the Baltic Sea dynamics, and the results of the numerical experiments are compared with the solution of the system of hydrodynamic equations in the whole domain. The work was supported by the Russian Science Foundation (project 14-11-00609, the formulation of the iterative process and numerical experiments). [1] V.I. Agoshkov, Domain Decompositions Methods in the Mathematical Physics Problem // Numerical processes and systems, No 8, Moscow, 1991 (in Russian). [2] V.I. Agoshkov, Optimal Control Approaches and Adjoint Equations in the Mathematical Physics Problem, Institute of Numerical Mathematics, RAS, Moscow, 2003 (in Russian).
Solving geosteering inverse problems by stochastic Hybrid Monte Carlo method
Shen, Qiuyang; Wu, Xuqing; Chen, Jiefu; ...
2017-11-20
Inverse problems arise in almost all fields of science where real-world parameters are extracted from a set of measured data. Geosteering inversion plays an essential role in the accurate prediction of oncoming strata, as well as in providing reliable guidance for adjusting the borehole position on the fly to reach one or more geological targets. This problem is not easy to solve: it requires finding an optimal solution in a large solution space, especially when the problem is non-linear and non-convex. Nowadays, a new generation of logging-while-drilling (LWD) tools has emerged on the market. These so-called azimuthal resistivity LWD tools have azimuthal sensitivity and a large depth of investigation. Hence, the associated inverse problems become much more difficult, since the earth model to be inverted will have more detailed structure. Conventional deterministic methods are incapable of solving such a complicated inverse problem, as they are easily trapped in local minima. Alternatively, stochastic optimizations are in general better at finding globally optimal solutions and at handling uncertainty quantification. In this article, we investigate the Hybrid Monte Carlo (HMC) based statistical inversion approach and suggest that HMC-based inference is more efficient in dealing with the increased complexity and uncertainty faced by geosteering problems.
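Hybrid (Hamiltonian) Monte Carlo augments Metropolis sampling with gradient-guided leapfrog trajectories, which is what lets it explore large, correlated solution spaces efficiently. A minimal sketch on a toy one-dimensional target (a standard normal, not the geosteering earth model; step size and trajectory length are illustrative):

```python
import math
import random

def hmc(grad_u, u, x0, eps, L, n, seed=0):
    """Hybrid/Hamiltonian Monte Carlo with a leapfrog integrator.
    u is the negative log-density (potential energy), grad_u its
    derivative; eps is the leapfrog step size, L the number of steps."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n):
        p = rng.gauss(0.0, 1.0)               # resample momentum
        xn, pn = x, p
        pn -= 0.5 * eps * grad_u(xn)          # initial half step (momentum)
        for _ in range(L - 1):
            xn += eps * pn                    # full step (position)
            pn -= eps * grad_u(xn)            # full step (momentum)
        xn += eps * pn
        pn -= 0.5 * eps * grad_u(xn)          # final half step (momentum)
        h0 = u(x) + 0.5 * p * p               # Hamiltonian before/after
        h1 = u(xn) + 0.5 * pn * pn
        if math.log(rng.random()) < h0 - h1:  # Metropolis accept/reject
            x = xn
        chain.append(x)
    return chain

# Toy target: standard normal, u(x) = x^2 / 2, so grad_u(x) = x.
chain = hmc(lambda x: x, lambda x: 0.5 * x * x, 0.0, 0.2, 10, 4000)
mean = sum(chain) / len(chain)
var = sum((s - mean) ** 2 for s in chain) / len(chain)
```

Because the leapfrog integrator nearly conserves the Hamiltonian, acceptance rates stay high even for long jumps, which is the efficiency advantage the abstract claims over plain stochastic search.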
Adjoint-Based Sensitivity Kernels for Glacial Isostatic Adjustment in a Laterally Varying Earth
NASA Astrophysics Data System (ADS)
Crawford, O.; Al-Attar, D.; Tromp, J.; Mitrovica, J. X.; Austermann, J.; Lau, H. C. P.
2017-12-01
We consider a new approach to both the forward and inverse problems in glacial isostatic adjustment. We present a method for forward modelling GIA in compressible and laterally heterogeneous earth models with a variety of linear and non-linear rheologies. Instead of using the so-called sea level equation, which must be solved iteratively, the forward theory we present consists of a number of coupled evolution equations that can be straightforwardly numerically integrated. We also apply the adjoint method to the inverse problem in order to calculate the derivatives of measurements of GIA with respect to the viscosity structure of the Earth. Such derivatives quantify the sensitivity of the measurements to the model. The adjoint method enables efficient calculation of continuous and laterally varying derivatives, allowing us to calculate the sensitivity of measurements of glacial isostatic adjustment to the Earth's three-dimensional viscosity structure. The derivatives have a number of applications within the inverse method. Firstly, they can be used within a gradient-based optimisation method to find a model which minimises some data misfit function. The derivatives can also be used to quantify the uncertainty in such a model and hence to provide understanding of which parts of the model are well constrained. Finally, they enable construction of measurements which provide sensitivity to a particular part of the model space. We illustrate both the forward and inverse aspects with numerical examples in a spherically symmetric earth model.
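Adjoint-derived derivatives such as those described above are routinely validated against finite differences. The following toy sketch performs that standard check for a linear-Gaussian misfit, where the "adjoint-style" gradient has a closed form; the operator, data and model are random illustrative values, not GIA quantities.

```python
import numpy as np

def misfit(m, G, d):
    """Least-squares data misfit J(m) = 0.5 * ||G m - d||^2."""
    r = G @ m - d
    return 0.5 * float(r @ r)

def misfit_grad(m, G, d):
    """Analytic (adjoint-style) gradient: G^T (G m - d)."""
    return G.T @ (G @ m - d)

rng = np.random.default_rng(0)
G = rng.standard_normal((6, 4))   # toy forward operator
d = rng.standard_normal(6)        # toy data
m = rng.standard_normal(4)        # toy model

g = misfit_grad(m, G, d)
h = 1e-6                          # central finite differences, one per parameter
g_fd = np.array([(misfit(m + h * e, G, d) - misfit(m - h * e, G, d)) / (2 * h)
                 for e in np.eye(4)])
```

The adjoint computation yields the full gradient at the cost of roughly one extra solve, whereas finite differences need one forward solve per model parameter, which is why the adjoint method is essential for continuous, laterally varying viscosity structure.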
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
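The multistart idea behind MLSL, running a local search from many start points and collecting the distinct minima, can be sketched on a toy double-well objective. Here plain gradient descent stands in for the derivative-free MADS local phase of the actual algorithm, and the objective is illustrative.

```python
def local_min(f_grad, x0, lr=0.05, iters=500):
    """Simple gradient-descent local search (a stand-in for the
    derivative-free MADS local phase used in the paper)."""
    x = x0
    for _ in range(iters):
        x -= lr * f_grad(x)
    return x

# f(x) = (x^2 - 1)^2 has two global minima, at x = -1 and x = +1,
# mimicking an inverse problem with multiple solutions.
grad = lambda x: 4.0 * x * (x * x - 1.0)

starts = [-2.0, -0.6, 0.6, 2.0]                       # multistart phase
minima = sorted({round(local_min(grad, s), 3) for s in starts})
```

Deduplicating the converged points (here by rounding) recovers both solutions, which is exactly the behavior a single local optimization run would miss.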
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang
Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in Earth sciences. Here, we give a description, from a user's and a programmer's perspective, of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are handled by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/weighting the inverted dataset, without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows the user to compose customized workflows in a consistent computational environment. ASKI is written in modern Fortran and Python, is well documented and is freely available under the terms of the GNU General Public License (http://www.rub.de/aski).
Frnakenstein: multiple target inverse RNA folding.
Lyngsø, Rune B; Anderson, James W J; Sizikova, Elena; Badugu, Amarendra; Hyland, Tomas; Hein, Jotun
2012-10-09
RNA secondary structure prediction, or folding, is a classic problem in bioinformatics: given a sequence of nucleotides, the aim is to predict the base pairs formed in its three dimensional conformation. The inverse problem of designing a sequence folding into a particular target structure has only more recently received notable interest. With a growing appreciation and understanding of the functional and structural properties of RNA motifs, and a growing interest in utilising biomolecules in nano-scale designs, the interest in the inverse RNA folding problem is bound to increase. However, whereas the RNA folding problem from an algorithmic viewpoint has an elegant and efficient solution, the inverse RNA folding problem appears to be hard. In this paper we present a genetic algorithm approach to solve the inverse folding problem. The main aims of the development were to address the hitherto mostly ignored extension of the inverse folding problem, the multi-target inverse folding problem, while simultaneously designing a method with superior performance when measured on the quality of designed sequences. The genetic algorithm has been implemented as a Python program called Frnakenstein. It was benchmarked against four existing methods and several data sets totalling 769 real and predicted single structure targets, and on 292 two structure targets. It performed as well as or better at finding sequences which folded in silico into the target structure than all existing methods, without the heavy bias towards CG base pairs that was observed for all other top performing methods. On the two structure targets it also performed well, generating a perfect design for about 80% of the targets. Our method illustrates that successful designs for the inverse RNA folding problem do not necessarily have to rely on heavy biases in base pair and unpaired base distributions.
The design problem seems to become more difficult on larger structures when the target structures are real structures, while no deterioration was observed for predicted structures. Design for two structure targets is considerably more difficult, but far from impossible, demonstrating the feasibility of automated design of artificial riboswitches. The Python implementation is available at http://www.stats.ox.ac.uk/research/genome/software/frnakenstein.
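The genetic-algorithm design loop described in this record can be sketched in a few lines. This is a toy illustration, not Frnakenstein's implementation: instead of calling a real folding engine, the fitness function here merely scores how many target base pairs could form complementary pairs, which is a stand-in assumption.

```python
import random

random.seed(0)

BASES = "ACGU"
PAIRS = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C"), ("G", "U"), ("U", "G")}

def pair_table(structure):
    """Map each '(' position to its matching ')' in a dot-bracket string."""
    stack, pairs = [], []
    for i, c in enumerate(structure):
        if c == "(":
            stack.append(i)
        elif c == ")":
            pairs.append((stack.pop(), i))
    return pairs

def fitness(seq, pairs):
    """Fraction of target base pairs that are complementary in seq
    (a crude surrogate for running a real folding algorithm)."""
    if not pairs:
        return 1.0
    ok = sum((seq[i], seq[j]) in PAIRS for i, j in pairs)
    return ok / len(pairs)

def design(target, pop_size=30, generations=200):
    """Elitist GA: keep the fitter half, refill with point mutants."""
    pairs = pair_table(target)
    pop = ["".join(random.choice(BASES) for _ in target) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, pairs), reverse=True)
        if fitness(pop[0], pairs) == 1.0:
            break
        survivors = pop[: pop_size // 2]
        children = []
        for s in survivors:
            i = random.randrange(len(s))
            children.append(s[:i] + random.choice(BASES) + s[i + 1:])
        pop = survivors + children
    return max(pop, key=lambda s: fitness(s, pairs))

seq = design("((..(((...)))..))")
```

A real design tool would replace the surrogate fitness with an actual folding prediction of the candidate sequence, compared against the target structure.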
Frnakenstein: multiple target inverse RNA folding
2012-01-01
PMID:23043260
NASA Astrophysics Data System (ADS)
Rundell, William; Somersalo, Erkki
2008-07-01
The Inverse Problems International Association (IPIA) awarded the first Calderón Prize to Matti Lassas for his outstanding contributions to the field of inverse problems, especially in geometric inverse problems. The Calderón Prize is given to a researcher under the age of 40 who has made distinguished contributions to the field of inverse problems broadly defined. The first Calderón Prize Committee consisted of Professors Adrian Nachman, Lassi Päivärinta, William Rundell (chair), and Michael Vogelius. William Rundell For the Calderón Prize Committee Prize ceremony The ceremony awarding the Calderón Prize. Matti Lassas is on the left. He and William Rundell are on the right. Photos by P Stefanov. Brief Biography of Matti Lassas Matti Lassas was born in 1969 in Helsinki, Finland, and studied at the University of Helsinki. He finished his Master's studies in 1992 in three years and earned his PhD in 1996. His PhD thesis, written under the supervision of Professor Erkki Somersalo, was entitled `Non-selfadjoint inverse spectral problems and their applications to random bodies'. Already in his thesis, Matti demonstrated a remarkable command of different fields of mathematics, bringing together the spectral theory of operators, geometry of Riemannian surfaces, Maxwell's equations and stochastic analysis. He has continued to develop all of these branches in the framework of inverse problems, the most remarkable results perhaps being in the field of differential geometry and inverse problems. Matti has always been a very generous researcher, sharing his ideas with his numerous collaborators. He has authored over sixty scientific articles, among them a monograph on inverse boundary spectral problems with Alexander Kachalov and Yaroslav Kurylev and over forty articles in peer reviewed journals of the highest standards. To get an idea of the wide range of Matti's interests, it is enough to say that he also has three US patents on medical imaging applications. 
Matti is currently professor of mathematics at Helsinki University of Technology, where he has created his own line of research with young talented researchers around him. He is a central person in the Centre of Excellence in Inverse Problems Research of the Academy of Finland. Previously, Matti Lassas has won several awards in his home country, including the prestigious Väisälä Prize of the Finnish Academy of Science and Letters in 2004. He is a highly esteemed colleague, teacher and friend, and the Great Diving Beetle of the Finnish Inverse Problems Society (http://venda.uku.fi/research/FIPS/), an honorary title for a person who has no fear of the deep. Erkki Somersalo
Adaptive eigenspace method for inverse scattering problems in the frequency domain
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nahum, Uri
2017-02-01
A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency of the resulting adaptive eigenspace inversion method and its robustness to missing or noisy data.
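The regularizing effect of truncating an eigenfunction basis can be illustrated on a linear toy problem. This is a sketch under simplifying assumptions: a random linear operator stands in for the Helmholtz forward solver, and a fixed 1-D Laplacian eigenbasis (sine modes) replaces the adaptively computed basis of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = np.linspace(0, 1, n)

def basis(k):
    """First k eigenvectors of the 1-D Dirichlet Laplacian: sine modes."""
    return np.column_stack([np.sqrt(2 / n) * np.sin((j + 1) * np.pi * x)
                            for j in range(k)])

# A smooth "wave speed" profile that lies in the span of the first modes,
# and a random linear forward operator (purely illustrative).
m_true = np.sin(np.pi * x) + 0.5 * np.sin(2 * np.pi * x)
A = rng.standard_normal((60, n)) / np.sqrt(n)
d = A @ m_true + 0.01 * rng.standard_normal(60)

def invert(k):
    """Least-squares fit of the model in a k-dimensional eigenbasis.
    Small k acts as implicit regularization, as in the AE method."""
    Phi = basis(k)
    c, *_ = np.linalg.lstsq(A @ Phi, d, rcond=None)
    return Phi @ c

m_few = invert(5)     # strongly regularized by truncation
m_many = invert(50)   # nearly unregularized: far more sensitive to noise
```

The truncated reconstruction `m_few` stays close to `m_true`, while increasing the basis size progressively re-introduces noise sensitivity.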
Clinical knowledge-based inverse treatment planning
NASA Astrophysics Data System (ADS)
Yang, Yong; Xing, Lei
2004-11-01
Clinical IMRT treatment plans are currently made using dose-based optimization algorithms, which do not consider the nonlinear dose-volume effects for tumours and normal structures. The choice of structure specific importance factors represents an additional degree of freedom of the system and makes rigorous optimization intractable. The purpose of this work is to circumvent the two problems by developing a biologically more sensible yet clinically practical inverse planning framework. To implement this, the dose-volume status of a structure was characterized by using the effective volume in the voxel domain. A new objective function was constructed with the incorporation of the volumetric information of the system, so that the figure of merit of a given IMRT plan depends not only on the dose deviation from the desired distribution but also on the dose-volume status of the involved organs. The conventional importance factor of an organ was written as a product of two components: (i) a generic importance that parametrizes the relative importance of the organs in the ideal situation when the goals for all the organs are met; (ii) a dose-dependent factor that quantifies our level of clinical/dosimetric satisfaction for a given plan. The generic importance can be determined a priori and, in most circumstances, does not need adjustment, whereas the second one, which is responsible for the intractable behaviour of the trade-off seen in conventional inverse planning, was determined automatically. An inverse planning module based on the proposed formalism was implemented and applied to a prostate case and a head-and-neck case. A comparison with the conventional inverse planning technique indicated that, for the same target dose coverage, the critical structure sparing was substantially improved for both cases. 
The incorporation of clinical knowledge allows us to obtain better IMRT plans and makes it possible to auto-select the importance factors, greatly facilitating the inverse planning process. The new formalism proposed also reveals the relationship between different inverse planning schemes and gives important insight into the problem of therapeutic plan optimization. In particular, we show that the EUD-based optimization is a special case of the general inverse planning formalism described in this paper.
Harmony: EEG/MEG Linear Inverse Source Reconstruction in the Anatomical Basis of Spherical Harmonics
Petrov, Yury
2012-01-01
EEG/MEG source localization based on a “distributed solution” is severely underdetermined, because the number of sources is much larger than the number of measurements. In particular, this makes the solution strongly affected by sensor noise. A new way to constrain the problem is presented. By using the anatomical basis of spherical harmonics (or spherical splines) instead of single dipoles, the dimensionality of the inverse solution is greatly reduced without sacrificing the quality of the data fit. The smoothness of the resulting solution reduces the surface bias and scatter of the sources (incoherency) compared to the popular minimum-norm algorithms where a single-dipole basis is used (MNE, depth-weighted MNE, dSPM, sLORETA, LORETA, IBF), and allows the effect of sensor noise to be reduced efficiently. This approach, termed Harmony, performed well when applied to experimental data (two exemplars of early evoked potentials) and showed better localization precision and solution coherence than the other tested algorithms when applied to realistically simulated data. PMID:23071497
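For reference, the classic regularized minimum-norm estimate that such methods are compared against can be sketched on synthetic data. The lead field, source patch, and noise level below are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_sources = 32, 500

# Hypothetical lead field: many more sources than sensors (underdetermined).
G = rng.standard_normal((n_sensors, n_sources))
s_true = np.zeros(n_sources)
s_true[100:110] = 1.0                      # a small active patch
y = G @ s_true + 0.1 * rng.standard_normal(n_sensors)

def minimum_norm(G, y, lam):
    """Regularized minimum-norm estimate: s = G' (G G' + lam I)^{-1} y."""
    n = G.shape[0]
    return G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n), y)

s_hat = minimum_norm(G, y, lam=1.0)
```

The estimate fits the measurements well but spreads energy over many sources; restricting the solution to a smooth low-dimensional basis, as in the abstract above, is one way to tame that spread.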
A deep learning framework for causal shape transformation.
Lore, Kin Gwn; Stoecklein, Daniel; Davies, Michael; Ganapathysubramanian, Baskar; Sarkar, Soumik
2018-02-01
Recurrent neural network (RNN) and Long Short-Term Memory (LSTM) networks are the common go-to architectures for exploiting sequential information where the output is dependent on a sequence of inputs. However, in most considered problems, the dependencies typically lie in the latent domain, which may not be suitable for applications involving the prediction of a step-wise transformation sequence that is dependent on the previous states only in the visible domain, with a known terminal state. We propose a hybrid architecture of convolutional neural networks (CNN) and stacked autoencoders (SAE) to learn a sequence of causal actions that nonlinearly transform an input visual pattern or distribution into a target visual pattern or distribution with the same support, and demonstrate its practicality in a real-world engineering problem involving the physics of fluids. We solved a high-dimensional one-to-many inverse mapping problem concerning microfluidic flow sculpting, where the use of deep learning methods as an inverse map is very seldom explored. This work serves as a fruitful use-case for applied scientists and engineers in how deep learning can be beneficial as a solution for high-dimensional physical problems, potentially opening doors to impactful advances in fields such as material science and medical biology, where multistep topological transformations are a key element.
Biohybrid Control of General Linear Systems Using the Adaptive Filter Model of Cerebellum.
Wilson, Emma D; Assaf, Tareq; Pearson, Martin J; Rossiter, Jonathan M; Dean, Paul; Anderson, Sean R; Porrill, John
2015-01-01
The adaptive filter model of the cerebellar microcircuit has been successfully applied to biological motor control problems, such as the vestibulo-ocular reflex (VOR), and to sensory processing problems, such as the adaptive cancelation of reafferent noise. It has also been successfully applied to problems in robotics, such as adaptive camera stabilization and sensor noise cancelation. In previous applications to inverse control problems, the algorithm was applied to the velocity control of a plant dominated by viscous and elastic elements. Naive application of the adaptive filter model to the displacement (as opposed to velocity) control of this plant results in unstable learning and control. To be more generally useful in engineering problems, it is essential to remove this restriction to enable the stable control of plants of any order. We address this problem here by developing a biohybrid model reference adaptive control (MRAC) scheme, which stabilizes the control algorithm for strictly proper plants. We evaluate the performance of this novel cerebellar-inspired algorithm with MRAC scheme in the experimental control of a dielectric electroactive polymer, a class of artificial muscle. The results show that the augmented cerebellar algorithm is able to accurately control the displacement response of the artificial muscle. The proposed solution not only greatly extends the practical applicability of the cerebellar-inspired algorithm, but may also shed light on cerebellar involvement in a wider range of biological control tasks.
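At its core, the adaptive filter model of the cerebellar microcircuit uses a covariance (LMS-style) learning rule: weights over a bank of input signals adapt to cancel an error signal. The sketch below is a minimal illustration with arbitrary synthetic inputs standing in for parallel-fiber signals; it is not the biohybrid MRAC scheme of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

n_taps, n_steps, lr = 8, 5000, 0.01
w_true = rng.standard_normal(n_taps)   # unknown mapping to be learned
w = np.zeros(n_taps)

for _ in range(n_steps):
    u = rng.standard_normal(n_taps)    # "parallel fiber" inputs
    target = w_true @ u                # desired output
    e = target - w @ u                 # "climbing fiber" teaching signal
    w += lr * e * u                    # covariance learning rule
```

With white inputs and a noiseless target, the weights converge to the true mapping; the stability issues discussed in the abstract arise when this loop is wrapped around a dynamic plant rather than a static mapping.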
Ground-based microwave radiometric remote sensing of the tropical atmosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Yong.
1992-01-01
A partially developed 9-channel ground-based microwave radiometer for the Department of Meteorology at Penn State was completed and tested. Complementary units were added, corrections to both hardware and software were made, and system software was corrected and upgraded. Measurements from this radiometer were used to infer tropospheric temperature, water vapor and cloud liquid water. The various weighting functions at each of the 9 channels were calculated and analyzed to estimate the sensitivities of the brightness temperature to the desired atmospheric variables. The mathematical inversion problem, in a linear form, was viewed in terms of the theory of linear algebra. Several methods for solving the inversion problem were reviewed. Radiometric observations were conducted during the 1990 Tropical Cyclone Motion Experiment, with the radiometer installed on the island of Saipan in a tropical region. The radiometer was calibrated using tipping curve and radiosonde data as well as measurements of the radiation from a blackbody absorber. A linear statistical method was applied for the data inversion. The inversion coefficients in the equation were obtained using a large number of radiosonde profiles from Guam and a radiative transfer model. Retrievals were compared with those from local, Saipan, radiosonde measurements. Water vapor profiles, integrated water vapor, and integrated liquid water were retrieved successfully. For temperature profile retrievals, however, the radiometric measurements with experimental noise added little profile information to the inversion; the retrieved temperature profiles were determined mainly by the surface pressure measurements. A method was developed to derive the integrated water vapor and liquid water from combined radiometer and ceilometer measurements. Significant improvement in radiometric measurements of the integrated liquid water can be gained with this method.
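A linear statistical retrieval of this kind can be sketched as a regression of training profiles on simulated brightness temperatures. The random "weighting functions" and profiles below are purely illustrative stand-ins for the radiative transfer model and the radiosonde climatology.

```python
import numpy as np

rng = np.random.default_rng(3)
n_train, n_chan, n_levels = 500, 9, 20

# Synthetic climatology: smooth random profiles, and a fixed linear
# forward model standing in for radiative transfer (illustrative only).
W = rng.standard_normal((n_chan, n_levels)) / n_levels   # weighting functions
profiles = np.cumsum(rng.standard_normal((n_train, n_levels)), axis=1)
tb = profiles @ W.T + 0.05 * rng.standard_normal((n_train, n_chan))

# Linear statistical inversion: regress profiles on brightness temperatures
# to obtain retrieval coefficients, as in the abstract above.
C, *_ = np.linalg.lstsq(tb, profiles, rcond=None)

def retrieve(tb_obs):
    """Apply the retrieval coefficients to observed brightness temperatures."""
    return tb_obs @ C

x_hat = retrieve(tb[:1])
```

In practice the coefficients would be derived from real radiosonde profiles and a physical forward model, and the retrieval applied to measured brightness temperatures.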
Numerical convergence and validation of the DIMP inverse particle transport model
Nelson, Noel; Azmy, Yousry
2017-09-01
The data integration with modeled predictions (DIMP) model is a promising inverse radiation transport method for solving the special nuclear material (SNM) holdup problem. Unlike previous methods, DIMP is a completely passive nondestructive assay technique that requires no initial assumptions regarding the source distribution or active measurement time. DIMP predicts the most probable source location and distribution through Bayesian inference and quasi-Newtonian optimization of predicted detector responses (using the adjoint transport solution) with measured responses. DIMP performs well with forward hemispherical collimation and unshielded measurements, but several considerations are required when using narrow-view collimated detectors. DIMP converged well to the correct source distribution as the number of synthetic responses increased. DIMP also performed well for the first experimental validation exercise after applying a collimation factor and sufficiently reducing the source search volume's extent to prevent the optimizer from getting stuck in local minima. DIMP's simple point detector response function (DRF) is being improved to address coplanar false positive/negative responses, and an angular DRF is being considered for integration with the next version of DIMP to account for highly collimated responses. Overall, DIMP shows promise for solving the SNM holdup inverse problem, especially once an improved optimization algorithm is implemented.
Inverse design of a proper number, shapes, sizes, and locations of coolant flow passages
NASA Technical Reports Server (NTRS)
Dulikravich, George S.
1992-01-01
During the past several years we have developed an inverse method that allows a thermal cooling system designer to determine proper sizes, shapes, and locations of coolant passages (holes) in, say, an internally cooled turbine blade, a scram jet strut, a rocket chamber wall, etc. Using this method the designer can enforce a desired heat flux distribution on the hot outer surface of the object, while simultaneously enforcing desired temperature distributions on the same hot outer surface as well as on the cooled interior surfaces of each of the coolant passages. This constitutes an over-specified problem which is solved by allowing the number, sizes, locations and shapes of the holes to adjust iteratively until the final internally cooled configuration satisfies the over-specified surface thermal conditions and the governing equation for the steady temperature field. The problem is solved by minimizing an error function expressing the difference between the specified and the computed hot surface heat fluxes. The temperature field analysis was performed using our highly accurate boundary integral element code with linearly varying temperature along straight surface panels. Examples of the inverse design applied to internally cooled turbine blades and scram jet struts (coated and non-coated) having circular and non-circular coolant flow passages will be shown.
NASA Astrophysics Data System (ADS)
Li, Guo-Yang; Zheng, Yang; Liu, Yanlin; Destrade, Michel; Cao, Yanping
2016-11-01
A body force concentrated at a point and moving at a high speed can induce shear-wave Mach cones in dusty-plasma crystals or soft materials, as observed experimentally and named the elastic Cherenkov effect (ECE). The ECE in soft materials forms the basis of the supersonic shear imaging (SSI) technique, an ultrasound-based dynamic elastography method applied in clinics in recent years. Previous studies on the ECE in soft materials have focused on isotropic material models. In this paper, we investigate the existence and key features of the ECE in anisotropic soft media, by using both theoretical analysis and finite element (FE) simulations, and we apply the results to the non-invasive and non-destructive characterization of biological soft tissues. We also theoretically study the characteristics of the shear waves induced in a deformed hyperelastic anisotropic soft material by a source moving with high speed, considering that contact between the ultrasound probe and the soft tissue may lead to finite deformation. On the basis of our theoretical analysis and numerical simulations, we propose an inverse approach to infer both the anisotropic and hyperelastic parameters of incompressible transversely isotropic (TI) soft materials. Finally, we investigate the properties of the solutions to the inverse problem by deriving the condition numbers in analytical form and performing numerical experiments. In Part II of the paper, both ex vivo and in vivo experiments are conducted to demonstrate the applicability of the inverse method in practical use.
NASA Astrophysics Data System (ADS)
Wang, Jun; Meng, Xiaohong; Li, Fang
2017-11-01
Generalized inversion is one of the important steps in the quantitative interpretation of gravity data. With an appropriate algorithm and parameters, it gives a view of the subsurface which characterizes different geological bodies. However, generalized inversion of gravity data is time consuming due to the large number of data points and model cells adopted. Incorporating various kinds of prior information as constraints makes the situation worse. In the work discussed in this paper, a method for fast nonlinear generalized inversion of gravity data is proposed. The fast multipole method is employed for forward modelling. The inversion objective function is established with a weighted data misfit function along with a model objective function. The total objective function is solved by a data-space algorithm. Moreover, a depth weighting factor is used to improve the depth resolution of the result, and a bound constraint is incorporated by a transfer function to limit the model parameters to a reliable range. The matrix inversion is accomplished by a preconditioned conjugate gradient method. With the above algorithm, equivalent density vectors can be obtained, and interpolation is performed to get the final density model on the fine mesh in the model domain. Testing on synthetic gravity data demonstrated that the proposed method is faster than the conventional generalized inversion algorithm in producing an acceptable solution for the gravity inversion problem. The newly developed inversion method was also applied to inversion of the gravity data collected over the Sichuan basin, southwest China. The established density structure helps in understanding the crustal structure of the Sichuan basin and provides a reference for further oil and gas exploration in this area.
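The preconditioned conjugate gradient step mentioned above is a standard building block; a textbook Jacobi-preconditioned version (not the authors' code) on a random symmetric positive definite test system looks like this:

```python
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=200):
    """Jacobi-preconditioned conjugate gradients for a SPD matrix A
    (standard textbook implementation)."""
    M_inv = 1.0 / np.diag(A)             # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(4)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)            # SPD test matrix
b = rng.standard_normal(50)
x = pcg(A, b)
```

In a gravity inversion the matrix would be the (regularized) normal-equations operator and the preconditioner chosen to suit its structure; the iteration itself is unchanged.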
NASA Astrophysics Data System (ADS)
Mai, P. M.; Schorlemmer, D.; Page, M.
2012-04-01
Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i)-(iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained from solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need to develop a long-standing and rigorous testing platform to examine the current state of the art in earthquake source inversion, and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and how they assess the posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault, embedded in a layered crustal velocity-density structure.
Solvability of the electrocardiology inverse problem for a moving dipole.
Tolkachev, V; Bershadsky, B; Nemirko, A
1993-01-01
New formulations of the direct and inverse problems for the moving dipole are offered. It has been suggested to limit the study to a small area on the chest surface; this lowers the role of the medium inhomogeneity. In formulating the direct problem, irregular components are considered. The algorithm for simultaneous determination of the dipole and regular noise parameters is described and analytically investigated. It is shown that temporal overdetermination of the equations yields a unique solution of the inverse problem for the four leads.
NASA Astrophysics Data System (ADS)
Chvetsov, Alevei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh
2015-11-01
The main objective of this article is to improve the stability of reconstruction algorithms for estimating radiobiological parameters from serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth rates, and the rate of cell loss. Accurate assessment of treatment response requires separation of these processes because they define the radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters from imaging data can be considered an ill-posed inverse problem, because a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of reconstructing radiobiological parameters presents a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for survival fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, where only cell surviving fractions were reconstructed. 
We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables the development of more advanced parameter reconstruction algorithms.
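A simulated annealing reconstruction with a Tikhonov-style penalty can be sketched on a one-exponential toy model. The model, step sizes, and penalty weight below are illustrative assumptions, far simpler than the two-level cell population model used in the paper.

```python
import math
import random

random.seed(5)

# Synthetic "tumor volume" curve: the dynamics are lumped into a single
# decaying exponential a*exp(-b*t) for illustration only.
ts = [i * 0.5 for i in range(20)]
a_true, b_true = 2.0, 0.7
ys = [a_true * math.exp(-b_true * t) for t in ts]

def objective(a, b, reg=1e-3):
    """Least-squares misfit plus a small penalty that stabilizes the
    reconstruction, in the spirit of variational regularization."""
    misfit = sum((a * math.exp(-b * t) - y) ** 2 for t, y in zip(ts, ys))
    return misfit + reg * (a * a + b * b)

def anneal(steps=20000, temp0=1.0):
    """Metropolis-style annealing over (a, b), tracking the best visit."""
    a, b = 1.0, 1.0
    best = (a, b, objective(a, b))
    for k in range(steps):
        temp = temp0 * (1.0 - k / steps) + 1e-6
        a2 = a + random.gauss(0, 0.05)
        b2 = b + random.gauss(0, 0.05)
        dE = objective(a2, b2) - objective(a, b)
        if dE < 0 or random.random() < math.exp(-dE / temp):
            a, b = a2, b2
            if objective(a, b) < best[2]:
                best = (a, b, objective(a, b))
    return best

a_hat, b_hat, _ = anneal()
```

With a sum of two or more exponentials the misfit surface develops the near-degenerate valleys that make the problem ill posed, which is where the regularization term earns its keep.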
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schunert, Sebastian; Hammer, Hans; Lou, Jijie
2016-11-01
The common definition of the diffusion coefficient as the inverse of three times the transport cross section is not compatible with voids. Morel introduced a non-local tensor diffusion coefficient that remains finite in voids [1]. It can be obtained by solving an auxiliary transport problem without scattering or fission. Larsen and Trahan successfully applied this diffusion coefficient to enhance the accuracy of diffusion solutions of very high temperature reactor (VHTR) problems that feature large, optically thin channels in the z-direction [2]. It is demonstrated that a significant reduction of error can be achieved, in particular in the optically thin region. Along the same line of thought, non-local diffusion tensors have been applied to modeling the TREAT reactor, confirming the findings of Larsen and Trahan [3]. Previous work by the authors introduced a flexible Nonlinear Diffusion Acceleration (NDA) method for the first-order SN equations discretized with the discontinuous finite element method (DFEM) [4], [5], [6]. This NDA method uses a scalar diffusion coefficient in the low-order system that is obtained as the flux-weighted average of the inverse transport cross section. Hence, it suffers from very large and potentially unbounded diffusion coefficients in the low-order problem. However, it was noted that the choice of the diffusion coefficient does not influence the consistency of the method at convergence, and hence the diffusion coefficient is essentially a free parameter. The choice of the diffusion coefficient does, however, affect the convergence behavior of the nonlinear diffusion iterations. Within this work we use Morel's non-local diffusion coefficient in the aforementioned NDA formulation in lieu of the flux-weighted inverse of three times the transport cross section. The goal of this paper is to demonstrate that significant enhancement of the spectral properties of NDA can be achieved in near-void regions. 
For testing the spectral properties of NDA with non-local diffusion coefficients, the periodic horizontal interface problem is used [7]. This problem consists of alternating stripes of optically thin and thick materials, both of which feature scattering ratios close to unity.
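The incompatibility of D = 1/(3·Σtr) with voids is easy to see numerically: as the transport cross section of a cell goes to zero, both the local coefficient and a flux-weighted average blow up. The cross sections and flat flux below are illustrative numbers only, not from the paper.

```python
import numpy as np

# Per-cell transport cross sections, with a near-void region in the middle.
sigma_tr = np.array([1.0, 1.0, 1e-8, 1e-8, 1.0, 1.0])
flux = np.ones_like(sigma_tr)            # flat flux, for illustration

# Standard definition D = 1/(3*sigma_tr): diverges in the near-void cells.
D_local = 1.0 / (3.0 * sigma_tr)

# Flux-weighted average of the inverse transport cross section, as used in
# the low-order NDA system: a single huge value once any near-void
# cell contributes.
D_avg = (flux / sigma_tr).sum() / (3.0 * flux.sum())
```

A non-local diffusion coefficient obtained from an auxiliary purely-absorbing transport solve remains finite in the same situation, which is the motivation of the work above.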
Inferring Spatial Variations of Microstructural Properties from Macroscopic Mechanical Response
Liu, Tengxiao; Hall, Timothy J.; Barbone, Paul E.; Oberai, Assad A.
2016-01-01
Disease alters tissue microstructure, which in turn affects the macroscopic mechanical properties of tissue. In elasticity imaging, the macroscopic response is measured and is used to infer the spatial distribution of the elastic constitutive parameters. When an empirical constitutive model is used these parameters cannot be linked to the microstructure. However, when the constitutive model is derived from a microstructural representation of the material, it allows for the possibility of inferring the local averages of the spatial distribution of the microstructural parameters. This idea forms the basis of this study. In particular, we first derive a constitutive model by homogenizing the mechanical response of a network of elastic, tortuous fibers. Thereafter, we use this model in an inverse problem to determine the spatial distribution of the microstructural parameters. We solve the inverse problem as a constrained minimization problem, and develop efficient methods for solving it. We apply these methods to displacement fields obtained by deforming gelatin-agar co-gels, and determine the spatial distribution of agar concentration and fiber tortuosity, thereby demonstrating that it is possible to image local averages of microstructural parameters from macroscopic measurements of deformation. PMID:27655420
MAP Estimators for Piecewise Continuous Inversion
2016-08-08
MAP estimators for piecewise continuous inversion. M M Dunlop and A M Stuart, Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK. Published 8 August 2016. Abstract: We study the inverse problem of estimating a field u from data comprising a finite set of nonlinear functionals of u ... It is then natural to study maximum a posteriori (MAP) estimators. Recently (Dashti et al 2013 Inverse Problems 29 095017) it has been shown that MAP ...
Time-domain full waveform inversion using instantaneous phase information with damping
NASA Astrophysics Data System (ADS)
Luo, Jingrui; Wu, Ru-Shan; Gao, Fuchun
2018-06-01
In the time domain, the instantaneous phase can be obtained from the complex seismic trace using the Hilbert transform. Instantaneous phase information has great potential for overcoming the local-minima problem and improving the results of full waveform inversion. However, the phase wrapping problem, which arises in numerical calculation, prevents its direct application. To avoid the phase wrapping problem, we use the exponential of the instantaneous phase combined with a damping method, which yields an instantaneous phase-based multi-stage inversion. We construct objective functions based on the exponential instantaneous phase and derive the corresponding gradient operators. Conventional full waveform inversion and the instantaneous phase-based inversion are compared in numerical examples, which indicate that, when seismic data lack low-frequency information, our method is an effective and efficient approach for constructing initial models for full waveform inversion.
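The exponential instantaneous-phase objective described above can be sketched as follows; this is a minimal illustration using `scipy.signal.hilbert`, not the authors' implementation, and the trace length, sample interval, and damping value are assumptions:

```python
import numpy as np
from scipy.signal import hilbert

def exp_instantaneous_phase(trace, damping=0.0, dt=0.002):
    """Exponential of the instantaneous phase of an (optionally damped) trace."""
    t = np.arange(len(trace)) * dt
    damped = trace * np.exp(-damping * t)   # damping down-weights late arrivals
    analytic = hilbert(damped)              # complex seismic trace
    phase = np.angle(analytic)              # wrapped instantaneous phase
    return np.exp(1j * phase)               # exponential phase avoids unwrapping

def phase_misfit(obs, syn, damping, dt):
    """L2 misfit between observed and synthetic exponential phases."""
    d = exp_instantaneous_phase(obs, damping, dt)
    s = exp_instantaneous_phase(syn, damping, dt)
    return 0.5 * np.sum(np.abs(s - d) ** 2)
```

With damping > 0, late arrivals are suppressed before the phase is extracted, which is the basis of a multi-stage strategy that relaxes the damping as the model improves.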
Structural-change localization and monitoring through a perturbation-based inverse problem.
Roux, Philippe; Guéguen, Philippe; Baillet, Laurent; Hamze, Alaa
2014-11-01
Structural-change detection and characterization, or structural-health monitoring, is generally based on modal analysis, for detection, localization, and quantification of changes in structure. Classical methods combine both variations in frequencies and mode shapes, which require accurate and spatially distributed measurements. In this study, the detection and localization of a local perturbation are assessed by analysis of frequency changes (in the fundamental mode and overtones) that are combined with a perturbation-based linear inverse method and a deconvolution process. This perturbation method is applied first to a bending beam with the change considered as a local perturbation of the Young's modulus, using a one-dimensional finite-element model for modal analysis. Localization is successful, even for extended and multiple changes. In a second step, the method is numerically tested under ambient-noise vibration from the beam support with local changes that are shifted step by step along the beam. The frequency values are revealed using the random decrement technique that is applied to the time-evolving vibrations recorded by one sensor at the free extremity of the beam. Finally, the inversion method is experimentally demonstrated at the laboratory scale with data recorded at the free end of a Plexiglas beam attached to a metallic support.
Three Dimensional Inverse Synthetic Aperture Radar Imaging
1995-12-01
unfortunately produces a blurred image. To correct this problem, a deblurring filter must be applied to the data. It is preferred in some applications to...when the pulse is an impulse in time. So in order to get a high degree of downrange resolution directly it would be necessary to transmit the entire...bandwidth of frequencies simultaneously such as in an Impulse Radar. This would prove to be extremely difficult if not impossible. Luckily, the same
Solutions to inverse plume in a crosswind problem using a predictor-corrector method
NASA Astrophysics Data System (ADS)
Vanderveer, Joseph; Jaluria, Yogesh
2013-11-01
An investigation of minimalist solutions to the inverse convection problem of a plume in a crosswind has led to a predictor-corrector method. The inverse problem is to predict the strength and location of the plume with respect to a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, allowing the generation of two inverse interpolation functions. These functions in turn are utilized by the predictor step to acquire the plume strength. Finally, the same interpolation functions, with corrections from the plume strength, are used to solve for the plume location. Through optimization of the relative location of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After the optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.
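The predictor-corrector idea can be sketched with a hypothetical one-dimensional forward model standing in for the CFD simulations (the Gaussian kernel and all numbers are invented; in this toy the trial location matches the true one, so the predictor is exact, whereas in general predictor and corrector would be iterated):

```python
import numpy as np

# Hypothetical 1D stand-in for the CFD simulations: each downstream sample
# reads strength * kernel(sensor position - plume location).
def forward(strength, plume_x, sensor_xs):
    return strength * np.exp(-0.5 * (sensor_xs - plume_x) ** 2)

sensors = np.array([2.0, 3.0, 4.0])            # few downstream sampling points
data = forward(5.0, 2.5, sensors)              # "measured" samples (q=5, x=2.5)

# Two reference simulations at differing source strengths
q1, q2, x_trial = 1.0, 2.0, 2.5
r1, r2 = forward(q1, x_trial, sensors), forward(q2, x_trial, sensors)

# Predictor: linear inverse interpolation in strength, averaged over sensors
q_pred = float(np.mean(q1 + (data - r1) * (q2 - q1) / (r2 - r1)))

# Corrector: with the predicted strength fixed, locate the plume by matching
xs = np.linspace(0.0, 5.0, 501)
mis = [float(np.sum((forward(q_pred, x, sensors) - data) ** 2)) for x in xs]
x_pred = float(xs[int(np.argmin(mis))])
```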
A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion
NASA Astrophysics Data System (ADS)
CUI, C.; Hou, W.
2017-12-01
Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time, and phase. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion fall easily into local minima. In addition, 3D inversion methods based on the acoustic approximation ignore elastic effects in the real seismic field, making inversion harder. As a result, the accuracy of the final inversion results relies heavily on the quality of the initial model. To improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequencies, is applied; but the absence of very low frequencies (< 3 Hz) in field data is still a bottleneck for FWI. By extracting ultra-low-frequency data from the field data with a demodulation operator (envelope operator), envelope inversion is able to recover a low-wavenumber model even though these low frequencies do not really exist in the field data. To improve the efficiency and viability of the inversion, in this study we propose a joint method of envelope inversion combined with hybrid-domain FWI. First, we developed 3D elastic envelope inversion and derived the misfit function and the corresponding gradient operator. Then we performed hybrid-domain FWI using the envelope inversion result as the initial model, which provides the low-wavenumber component of the model. Here, forward modeling is implemented in the time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques with two levels of parallelism. At the first level, the inversion tasks are decomposed and assigned to computation nodes by shot number. At the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation, and gradient calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI obtains a much more faithful and accurate result than conventional hybrid-domain FWI, and that the CPU/GPU heterogeneous parallel computation improves performance speed.
On the recovery of missing low and high frequency information from bandlimited reflectivity data
NASA Astrophysics Data System (ADS)
Sacchi, M. D.; Ulrych, T. J.
2007-12-01
During the last two decades, an important effort in the seismic exploration community has been made to retrieve broad-band seismic data by means of deconvolution and inversion. In general, the problem can be stated as a spectral reconstruction problem. In other words, given limited spectral information about the earth's reflectivity sequence, one attempts to create a broadband estimate of the Fourier spectrum of the unknown reflectivity. Techniques based on the principle of parsimony can be effectively used to retrieve a sparse spike sequence and, consequently, a broad-band signal. Alternatively, continuation methods, e.g., autoregressive modeling, can be used to extrapolate the recorded bandwidth of the seismic signal. The goal of this paper is to examine under what conditions the recovery of low and high frequencies from band-limited and noisy signals is possible. At the heart of the methods we discuss is the celebrated non-Gaussian assumption so important in many modern signal processing methods, such as ICA, for example. Spectral recovery from limited information tends to work when the reflectivity consists of a few well-isolated events. Results degrade with the number of reflectors, decreasing SNR, and decreasing bandwidth of the source wavelet. Constraints and information-based priors can be used to stabilize the recovery but, as in all inverse problems, the solution is nonunique and effort is required to understand the level of recovery that is achievable, always keeping the physics of the problem in mind. We provide in this paper a survey of methods to recover broad-band reflectivity sequences and examine the role that these techniques can play in processing and inversion as applied to exploration and global seismology.
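A parsimony-based recovery of the kind discussed can be sketched with iterative soft thresholding (ISTA) on a synthetic sparse-spike model; the Gaussian wavelet, sizes, and regularization weight are assumptions, and ISTA is one standard sparse-recovery choice, not necessarily the authors':

```python
import numpy as np

def ista_sparse_spikes(d, W, lam=0.05, n_iter=500):
    """Recover a sparse reflectivity r from band-limited data d = W r
    by iterative soft thresholding (ISTA), one parsimony-based approach."""
    L = np.linalg.norm(W, 2) ** 2              # Lipschitz constant of gradient
    r = np.zeros(W.shape[1])
    for _ in range(n_iter):
        z = r - W.T @ (W @ r - d) / L          # gradient step on the misfit
        r = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return r

# Band-limited source wavelet acting by convolution (a smooth low-pass kernel)
n = 128
taps = np.exp(-0.5 * (np.arange(-16, 17) / 4.0) ** 2)
W = np.column_stack([np.convolve(np.eye(n)[k], taps, mode="same")
                     for k in range(n)])
r_true = np.zeros(n)
r_true[[30, 70, 100]] = [1.0, -0.8, 0.6]       # few well-isolated reflectors
d = W @ r_true
r_hat = ista_sparse_spikes(d, W)
```

As the abstract notes, the recovery degrades once the reflectors stop being well isolated or noise is added; this noise-free toy is the favorable case.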
Acoustic Inversion in Optoacoustic Tomography: A Review
Rosenthal, Amir; Ntziachristos, Vasilis; Razansky, Daniel
2013-01-01
Optoacoustic tomography enables volumetric imaging with optical contrast in biological tissue at depths beyond the optical mean free path by the use of optical excitation and acoustic detection. The hybrid nature of optoacoustic tomography gives rise to two distinct inverse problems: The optical inverse problem, related to the propagation of the excitation light in tissue, and the acoustic inverse problem, which deals with the propagation and detection of the generated acoustic waves. Since the two inverse problems have different physical underpinnings and are governed by different types of equations, they are often treated independently as unrelated problems. From an imaging standpoint, the acoustic inverse problem relates to forming an image from the measured acoustic data, whereas the optical inverse problem relates to quantifying the formed image. This review focuses on the acoustic aspects of optoacoustic tomography, specifically acoustic reconstruction algorithms and imaging-system practicalities. As these two aspects are intimately linked, and no silver bullet exists in the path towards high-performance imaging, we adopt a holistic approach in our review and discuss the many links between the two aspects. Four classes of reconstruction algorithms are reviewed: time-domain (so called back-projection) formulae, frequency-domain formulae, time-reversal algorithms, and model-based algorithms. These algorithms are discussed in the context of the various acoustic detectors and detection surfaces which are commonly used in experimental studies. We further discuss the effects of non-ideal imaging scenarios on the quality of reconstruction and review methods that can mitigate these effects. Namely, we consider the cases of finite detector aperture, limited-view tomography, spatial under-sampling of the acoustic signals, and acoustic heterogeneities and losses. PMID:24772060
Review of the inverse scattering problem at fixed energy in quantum mechanics
NASA Technical Reports Server (NTRS)
Sabatier, P. C.
1972-01-01
Methods of solution of the inverse scattering problem at fixed energy in quantum mechanics are presented. Scattering experiments of a beam of particles at a nonrelativistic energy by a target made up of particles are analyzed. The Schroedinger equation is used to develop the quantum mechanical description of the system in terms of one of several functions depending on the relative distance of the particles. The inverse problem is the construction of the potentials from experimental measurements.
Iterative algorithms for a non-linear inverse problem in atmospheric lidar
NASA Astrophysics Data System (ADS)
Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto
2017-08-01
We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative, and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can improve substantially the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
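The flavor of such KKT-derived iterations can be sketched on a toy problem; the multiplicative update below follows from the stationarity conditions for the Poisson likelihood with mean b·exp(-Ax), but it is a generic scheme of this family, not either of the paper's two algorithms, and the geometry and numbers are invented:

```python
import numpy as np

def poisson_exp_inversion(y, A, b, n_iter=5000):
    """One KKT-derived multiplicative scheme (a sketch, not the paper's exact
    algorithms) for y ~ Poisson(b * exp(-A x)) with x >= 0 and A >= 0.

    The Poisson negative log-likelihood has gradient A^T (y - mu) with
    mu = b * exp(-A x); splitting it at the KKT stationarity condition gives
    the fixed-point update x <- x * (A^T mu) / (A^T y), which preserves
    non-negativity automatically.
    """
    x = np.full(A.shape[1], 0.1)
    for _ in range(n_iter):
        mu = b * np.exp(-A @ x)
        x = x * (A.T @ mu) / (A.T @ y + 1e-12)
    return x

# Toy lidar-like geometry: cumulative path integral of the extinction profile
A = 0.5 * np.tril(np.ones((6, 6)))
x_true = np.array([0.2, 0.3, 0.1, 0.4, 0.2, 0.3])
b = 1000.0
y = b * np.exp(-A @ x_true)                    # noise-free mean counts
x_hat = poisson_exp_inversion(y, A, b)
```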
Eddy Current Testing and Sizing of Deep Cracks in a Thick Structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, H.; Endo, H.; Uchimoto, T.
2004-02-26
Due to the skin effect, the target of eddy current testing (ECT) is restricted to thin structures such as steam generator tubes with 1.27 mm wall thickness, and detecting and sizing a deep crack in a thick structure remains a problem. In this paper, an ECT probe is presented to solve this problem with the help of numerical analysis. Parameters such as frequency and coil size are discussed. The inverse problem of crack sizing is solved by applying a fast ECT simulator based on an edge-based finite element method together with the steepest descent method, and reconstructed results of 5, 10 and 15 mm deep cracks from experimental signals are shown.
The incomplete inverse and its applications to the linear least squares problem
NASA Technical Reports Server (NTRS)
Morduch, G. E.
1977-01-01
A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It is proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least squares solution. An answer is provided to the problem that arises when the data residuals are too large and there are insufficient data to justify augmenting the model.
A new stochastic algorithm for inversion of dust aerosol size distribution
NASA Astrophysics Data System (ADS)
Wang, Li; Li, Feng; Yang, Ma-ying
2015-08-01
Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to invert the dust aerosol size distribution by the light extinction method. The direct problems for the size distributions of water drops and dust particles, which are the main elements of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. The parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representations of aerosol size distributions, are then inverted by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability even in the presence of random noise.
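A minimal sketch of the ABC search loop follows; the misfit here is a toy quadratic stand-in (in the paper it would wrap the Mie/Lambert-Beer forward model), the onlooker phase is simplified to reuse the employed-bee move, and all colony sizes and limits are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_minimize(f, bounds, n_food=20, limit=30, n_iter=200):
    """Minimal artificial bee colony (ABC) minimizer; a sketch, not the full
    canonical variant (fitness-proportional onlooker selection is omitted)."""
    lo, hi = np.array(bounds, float).T
    dim = len(lo)
    X = rng.uniform(lo, hi, (n_food, dim))          # food sources
    fit = np.array([f(x) for x in X])
    trials = np.zeros(n_food, dtype=int)
    for _ in range(n_iter):
        for i in range(n_food):
            k = rng.integers(n_food - 1)
            k += k >= i                             # random partner != i
            j = rng.integers(dim)                   # perturb one dimension
            v = X[i].copy()
            v[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
            v = np.clip(v, lo, hi)
            fv = f(v)
            if fv < fit[i]:                         # greedy selection
                X[i], fit[i], trials[i] = v, fv, 0
            else:
                trials[i] += 1
        for i in np.where(trials > limit)[0]:       # scout phase
            X[i] = rng.uniform(lo, hi)
            fit[i] = f(X[i])
            trials[i] = 0
    best = int(np.argmin(fit))
    return X[best], fit[best]
```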
An inverse method to estimate the flow through a levee breach
NASA Astrophysics Data System (ADS)
D'Oria, Marco; Mignosa, Paolo; Tanda, Maria Giovanna
2015-08-01
We propose a procedure to estimate the flow through a levee breach based on water levels recorded in river stations downstream and/or upstream of the failure site. The inverse problem is solved using a Bayesian approach and requires the execution of several forward unsteady flow simulations. For this purpose, we have used the well-known 1-D HEC-RAS model, but any unsteady flow model could be adopted in the same way. The procedure has been tested using four synthetic examples. Levee breaches with different characteristics (free flow, flow with tailwater effects, etc.) have been simulated to collect the synthetic level data used at a later stage in the inverse procedure. The method was able to accurately reproduce the flow through the breach in all cases. The practicability of the procedure was then confirmed applying it to the inundation of the Polesine Region (Northern Italy) which occurred in 1951 and was caused by three contiguous and almost simultaneous breaches on the left embankment of the Po River.
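The Bayesian approach can be illustrated with a random-walk Metropolis sampler over the breach outflow; the stage function below is a hypothetical stand-in for the HEC-RAS unsteady-flow model, and all numbers (noise level, prior bounds, proposal step) are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the unsteady-flow model: downstream stage as a
# simple monotone function of the breach outflow Q.
def stage(Q):
    return 2.0 + 0.03 * np.sqrt(Q)

Q_true = 900.0
sigma = 0.005                                       # gauge noise (assumed)
obs = stage(Q_true) + rng.normal(0.0, sigma, size=20)

def log_post(Q):
    if not 0.0 < Q < 5000.0:                        # flat prior on (0, 5000)
        return -np.inf
    return -0.5 * np.sum((obs - stage(Q)) ** 2) / sigma ** 2

# Random-walk Metropolis over the breach discharge
Q, lp = 500.0, log_post(500.0)
samples = []
for _ in range(20000):
    Qp = Q + rng.normal(0.0, 25.0)
    lpp = log_post(Qp)
    if np.log(rng.uniform()) < lpp - lp:            # accept/reject
        Q, lp = Qp, lpp
    samples.append(Q)
post = np.array(samples[5000:])                     # discard burn-in
```

Each `log_post` evaluation stands for one forward unsteady-flow simulation, which is why the paper's emphasis on reusing an existing 1-D model matters in practice.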
Parallelized Three-Dimensional Resistivity Inversion Using Finite Elements And Adjoint State Methods
NASA Astrophysics Data System (ADS)
Schaa, Ralf; Gross, Lutz; Du Plessis, Jaco
2015-04-01
The resistivity method is one of the oldest geophysical exploration methods, which employs one pair of electrodes to inject current into the ground and one or more pairs of electrodes to measure the electrical potential difference. The potential difference is a non-linear function of the subsurface resistivity distribution described by an elliptic partial differential equation (PDE) of the Poisson type. Inversion of measured potentials solves for the subsurface resistivity represented by PDE coefficients. With increasing advances in multichannel resistivity acquisition systems (systems with more than 60 channels and full waveform recording are now emerging), inversion software requires efficient storage and solver algorithms. We developed the finite element solver Escript, which provides a user-friendly programming environment in Python to solve large-scale PDE-based problems (see https://launchpad.net/escript-finley). Using finite elements, highly irregularly shaped geology and topography can readily be taken into account. For the 3D resistivity problem, we have implemented the secondary potential approach, where the PDE is decomposed into a primary potential caused by the source current and the secondary potential caused by changes in subsurface resistivity. The primary potential is calculated analytically, and the boundary value problem for the secondary potential is solved using nodal finite elements. This approach removes the singularity caused by the source currents and provides more accurate 3D resistivity models. To solve the inversion problem we apply a 'first optimize then discretize' approach using the quasi-Newton scheme in the form of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method (see Gross & Kemp 2013).
The evaluation of the cost function requires the solution of the secondary potential PDE for each source current and the solution of the corresponding adjoint-state PDE for the cost function gradients with respect to the subsurface resistivity. The Hessian of the regularization term is used as a preconditioner, which requires an additional PDE solution in each iteration step. As it turns out, the relevant PDEs are naturally formulated in the finite element framework. Using the domain decomposition method provided in Escript, the inversion scheme has been parallelized for distributed memory computers with multi-core shared memory nodes. We show numerical examples from simple layered models to complex 3D models and compare with the results from other methods. The inversion scheme is furthermore tested on a field data example to characterise localised freshwater discharge in a coastal environment. References: L. Gross and C. Kemp (2013) Large Scale Joint Inversion of Geophysical Data using the Finite Element Method in escript. ASEG Extended Abstracts 2013, http://dx.doi.org/10.1071/ASEG2013ab306
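The shape of the 'first optimize then discretize' loop can be sketched with SciPy's L-BFGS implementation on a linear stand-in for the PDE-constrained problem (no PDE or adjoint solves here; the operator G, the sizes, and the Tikhonov weight are all assumptions for illustration):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Linear stand-in for the forward map: observed potentials d = G(m), where in
# the paper G would involve secondary-potential PDE solves per source current.
G = rng.normal(size=(80, 20))
m_true = rng.normal(size=20)
d = G @ m_true
alpha = 1e-3                                   # Tikhonov regularization weight

def cost_and_grad(m):
    """Misfit + regularization; the gradient plays the role of the
    adjoint-state computation in the PDE-constrained setting."""
    r = G @ m - d
    J = 0.5 * r @ r + 0.5 * alpha * m @ m
    return J, G.T @ r + alpha * m

res = minimize(cost_and_grad, np.zeros(20), jac=True, method="L-BFGS-B")
```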
Analytic semigroups: Applications to inverse problems for flexible structures
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rebnord, D. A.
1990-01-01
Convergence and stability results for least squares inverse problems involving systems described by analytic semigroups are presented. The practical importance of these results is demonstrated by application to several examples from problems of estimation of material parameters in flexible structures using accelerometer data.
A gradient based algorithm to solve inverse plane bimodular problems of identification
NASA Astrophysics Data System (ADS)
Ran, Chunjiang; Yang, Haitian; Zhang, Guoqing
2018-02-01
This paper presents a gradient-based algorithm to solve inverse plane bimodular problems of identifying constitutive parameters, including tensile/compressive moduli and tensile/compressive Poisson's ratios. For the forward bimodular problem, a FE tangent stiffness matrix is derived, facilitating the implementation of gradient-based algorithms; for the inverse bimodular problem of identification, a two-level sensitivity-analysis-based strategy is proposed. Numerical verification in terms of accuracy and efficiency is provided, and the impacts of the initial guess, the number of measurement points, regional inhomogeneity, and noisy data on the identification are taken into account.
Gravity inversion of a fault by Particle swarm optimization (PSO).
Toushmalani, Reza
2013-01-01
Particle swarm optimization (PSO) is a heuristic global optimization algorithm based on swarm intelligence; it originates from research on the movement behavior of bird flocks and fish schools. In this paper we introduce and apply this method to a gravity inverse problem: determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems. The technique proved to work efficiently when tested on a number of models.
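A global-best PSO applied to an idealized fault-anomaly forward model might look as follows; the semi-infinite-slab formula is a textbook idealization (not necessarily the paper's fault model), and the bounds and PSO coefficients are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Idealized gravity anomaly across a vertical fault (thin semi-infinite slab):
# the factor 2*G*rho*t is folded into a single amplitude parameter `amp`.
def anomaly(x, z, amp):
    return amp * (np.pi / 2 + np.arctan(x / z))

xs = np.linspace(-50.0, 50.0, 41)
obs = anomaly(xs, 12.0, 1.5)                   # synthetic "observed" anomaly

def misfit(p):
    z, amp = p
    return float(np.sum((anomaly(xs, z, amp) - obs) ** 2))

# Plain global-best PSO; many variants exist, this is a minimal sketch
n_part, dim = 30, 2
lo, hi = np.array([1.0, 0.1]), np.array([50.0, 5.0])
X = rng.uniform(lo, hi, (n_part, dim))
V = np.zeros((n_part, dim))
P, Pf = X.copy(), np.array([misfit(x) for x in X])   # personal bests
gbest = P[np.argmin(Pf)].copy()                      # global best
for _ in range(300):
    r1, r2 = rng.uniform(size=(2, n_part, dim))
    V = 0.7 * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (gbest - X)
    X = np.clip(X + V, lo, hi)
    F = np.array([misfit(x) for x in X])
    better = F < Pf
    P[better], Pf[better] = X[better], F[better]
    gbest = P[np.argmin(Pf)].copy()
```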
The Inverse Problem in Jet Acoustics
NASA Technical Reports Server (NTRS)
Wooddruff, S. L.; Hussaini, M. Y.
2001-01-01
The inverse problem for jet acoustics, or the determination of noise sources from far-field pressure information, is proposed as a tool for understanding the generation of noise by turbulence and for the improved prediction of jet noise. An idealized version of the problem is investigated first to establish the extent to which information about the noise sources may be determined from far-field pressure data and to determine how a well-posed inverse problem may be set up. Then a version of the industry-standard MGB code is used to predict a jet noise source spectrum from experimental noise data.
NASA Astrophysics Data System (ADS)
Szabó, Norbert Péter
2018-03-01
An evolutionary inversion approach is suggested for the interpretation of nuclear and resistivity logs measured by direct-push tools in shallow unsaturated sediments. The efficiency of formation evaluation is improved by estimating simultaneously (1) the petrophysical properties that vary rapidly along a drill hole with depth and (2) the zone parameters that can be treated as constant, in one inversion procedure. In the workflow, the fractional volumes of water, air, matrix and clay are estimated in adjacent depths by linearized inversion, whereas the clay and matrix properties are updated using a float-encoded genetic meta-algorithm. The proposed inversion method provides an objective estimate of the zone parameters that appear in the tool response equations applied to solve the forward problem, which can significantly increase the reliability of the petrophysical model as opposed to setting these parameters arbitrarily. The global optimization meta-algorithm not only assures the best fit between the measured and calculated data but also gives a reliable solution, practically independent of the initial model, as laboratory data are unnecessary in the inversion procedure. The feasibility test uses engineering geophysical sounding logs observed in an unsaturated loessy-sandy formation in Hungary. The multi-borehole extension of the inversion technique is developed to determine the petrophysical properties and their estimation errors along a profile of drill holes. The genetic meta-algorithmic inversion method is recommended for hydrogeophysical logging applications of various kinds to automatically extract the volumetric ratios of rock and fluid constituents as well as the most important zone parameters in a reliable inversion procedure.
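The global-optimization layer of such a workflow can be sketched with a generic float-encoded genetic algorithm (tournament selection, blend crossover, Gaussian mutation, elitism); this is not the paper's coupled linearized-inversion meta-algorithm, and all operators and rates are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def float_ga_minimize(f, lo, hi, pop=40, gens=150, pc=0.8, pm=0.1):
    """Generic float-encoded GA minimizer (a sketch of the meta-algorithm
    level only; the paper couples this layer to a linearized inversion)."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    X = rng.uniform(lo, hi, (pop, len(lo)))
    F = np.array([f(x) for x in X])
    for _ in range(gens):
        # Tournament selection of parents
        idx = rng.integers(pop, size=(pop, 2))
        parents = X[np.where(F[idx[:, 0]] < F[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Blend (arithmetic) crossover with a random mate
        mates = parents[rng.permutation(pop)]
        a = rng.uniform(size=(pop, 1))
        kids = np.where(rng.uniform(size=(pop, 1)) < pc,
                        a * parents + (1 - a) * mates, parents)
        # Gaussian mutation, clipped to the bounds
        mut = rng.normal(0.0, 0.05 * (hi - lo), kids.shape)
        kids = np.clip(kids + np.where(rng.uniform(size=kids.shape) < pm,
                                       mut, 0.0), lo, hi)
        Fk = np.array([f(x) for x in kids])
        # Elitism: carry the best individual seen so far into the next generation
        worst = int(np.argmax(Fk))
        best = int(np.argmin(F))
        kids[worst], Fk[worst] = X[best], F[best]
        X, F = kids, Fk
    b = int(np.argmin(F))
    return X[b], F[b]
```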
Study of multi-dimensional radiative energy transfer in molecular gases
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, S. N.
1993-01-01
The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.
EDITORIAL: Inverse Problems in Engineering
NASA Astrophysics Data System (ADS)
West, Robert M.; Lesnic, Daniel
2007-01-01
Presented here are 11 noteworthy papers selected from the Fifth International Conference on Inverse Problems in Engineering: Theory and Practice held in Cambridge, UK during 11-15 July 2005. The papers have been peer-reviewed to the usual high standards of this journal and the contributions of reviewers are much appreciated. The conference featured a good balance of the fundamental mathematical concepts of inverse problems with a diverse range of important and interesting applications, which are represented here by the selected papers. Aspects of finite-element modelling and the performance of inverse algorithms are investigated by Autrique et al and Leduc et al. Statistical aspects are considered by Emery et al and Watzenig et al with regard to Bayesian parameter estimation and inversion using particle filters. Electrostatic applications are demonstrated by van Berkel and Lionheart and also Nakatani et al. Contributions to the applications of electrical techniques and specifically electrical tomographies are provided by Wakatsuki and Kagawa, Kim et al and Kortschak et al. Aspects of inversion in optical tomography are investigated by Wright et al and Douiri et al. The authors are representative of the worldwide interest in inverse problems relating to engineering applications and their efforts in producing these excellent papers will be appreciated by many readers of this journal.
Inverse problem for multispecies ferromagneticlike mean-field models in phase space with many states
NASA Astrophysics Data System (ADS)
Fedele, Micaela; Vernia, Cecilia
2017-10-01
In this paper we solve the inverse problem for the Curie-Weiss model and its multispecies version when multiple thermodynamic states are present, as in the low temperature phase where the phase space is clustered. The inverse problem consists of reconstructing the model parameters starting from configuration data generated according to the distribution of the model. We demonstrate that, without taking into account the presence of many states, the application of the inversion procedure produces very poor inference results. To overcome this problem, we use a clustering algorithm. When the system has two symmetric states of positive and negative magnetizations, the parameter reconstruction can also be obtained with smaller computational effort simply by flipping the sign of the magnetizations from positive to negative (or vice versa). The parameter reconstruction fails when the system undergoes a phase transition; in that case we give the correct inversion formulas for the Curie-Weiss model and we show that they can be used to measure how close the system gets to being critical.
NASA Technical Reports Server (NTRS)
Liu, Gao-Lian
1991-01-01
Advances in inverse design and optimization theory in engineering fields in China are presented. Two original approaches, the image-space approach and the variational approach, are discussed in terms of turbomachine aerodynamic inverse design. Other areas of research in turbomachine aerodynamic inverse design include the improved mean-streamline (stream surface) method and optimization theory based on optimal control. Among the additional engineering fields discussed are the following: the inverse problem of heat conduction, free-surface flow, variational cogeneration of optimal grid and flow field, and optimal meshing theory of gears.
Dynamic data integration and stochastic inversion of a confined aquifer
NASA Astrophysics Data System (ADS)
Wang, D.; Zhang, Y.; Irsa, J.; Huang, H.; Wang, L.
2013-12-01
Much work has been done in developing and applying inverse methods to aquifer modeling. The scope of this paper is to investigate the applicability of a new direct method for large inversion problems and to incorporate uncertainty measures in the inversion outcomes (Wang et al., 2013). The problem considered is a two-dimensional inverse model (50×50 grid) of steady-state flow for a heterogeneous ground truth model (500×500 grid) with two hydrofacies. From the ground truth model, a decreasing number of wells (12, 6, 3) was sampled for facies types, based on which experimental indicator histograms and directional variograms were computed. These parameters and models were used by Sequential Indicator Simulation to generate 100 realizations of hydrofacies patterns in a 100×100 (geostatistical) grid, which were conditioned to the facies measurements at wells. These realizations were smoothed with Simulated Annealing and coarsened to the 50×50 inverse grid, before they were conditioned with the direct method to the dynamic data, i.e., observed heads and groundwater fluxes at the same sampled wells. A set of realizations of estimated hydraulic conductivities (Ks), flow fields, and boundary conditions was created, which centered on the 'true' solutions from solving the ground truth model. Both hydrofacies conductivities were computed with an estimation accuracy of ±10% (12 wells), ±20% (6 wells), and ±35% (3 wells) of the true values. For boundary condition estimation, the accuracy was within ±15% (12 wells), ±30% (6 wells), and ±50% (3 wells) of the true values. The inversion system of equations was solved with LSQR (Paige and Saunders, 1982), for which a coordinate transform and a matrix scaling preprocessor were used to improve the condition number (CN) of the coefficient matrix. However, when the inverse grid was refined to 100×100, Gaussian Noise Perturbation was used to limit the growth of the CN before the matrix solve.
To scale the inverse problem up (i.e., without smoothing and coarsening and therefore reducing the associated estimation uncertainty), a parallel LSQR solver was written and verified. For the 50×50 grid, the parallel solver sped up the serial solution time by 14X using 4 CPUs (research on parallel performance and scaling is ongoing). A sensitivity analysis was conducted to examine the relation between the observed data and the inversion outcomes, where measurement errors of increasing magnitudes (i.e., ±1, 2, 5, 10% of the total head variation and up to ±2% of the total flux variation) were imposed on the observed data. Inversion results were stable but the accuracy of Ks and boundary estimation degraded with increasing errors, as expected. In particular, quality of the observed heads is critical to hydraulic head recovery, while quality of the observed fluxes plays a dominant role in K estimation. References: Wang, D., Y. Zhang, J. Irsa, H. Huang, and L. Wang (2013), Data integration and stochastic inversion of a confined aquifer with high performance computing, Advances in Water Resources, in preparation. Paige, C. C., and M. A. Saunders (1982), LSQR: an algorithm for sparse linear equations and sparse least squares, ACM Transactions on Mathematical Software, 8(1), 43-71.
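The effect of a matrix-scaling preconditioner on LSQR can be sketched with SciPy's serial `lsqr` (a toy stand-in for the parallel solver described above; the matrix sizes and density are arbitrary):

```python
import numpy as np
from scipy.sparse import diags, random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(3)

# Sparse overdetermined system A m = d standing in for the inversion equations
A = sparse_random(400, 100, density=0.05, random_state=7, format="csr")
A = (A + diags(np.ones(100), shape=(400, 100))).tocsr()  # ensure full column rank
m_true = rng.normal(size=100)
d = A @ m_true

# Column scaling as a simple matrix-scaling preprocessor: it improves the
# condition number seen by LSQR without changing the least-squares solution
col_norms = np.sqrt(np.asarray(A.power(2).sum(axis=0)).ravel())
S = diags(1.0 / col_norms)
result = lsqr(A @ S, d, atol=1e-12, btol=1e-12, iter_lim=5000)
m_hat = S @ result[0]                          # undo the scaling
```

Because LSQR only needs matrix-vector products with A and A^T, the same loop parallelizes naturally by distributing rows across nodes, which is the idea behind the parallel solver mentioned in the abstract.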
A gradiometric version of contactless inductive flow tomography: theory and first applications
Wondrak, Thomas; Stefani, Frank
2016-01-01
The contactless inductive flow tomography (CIFT) is a measurement technique that allows reconstructing the flow of electrically conducting fluids by measuring the flow-induced perturbations of one or various applied magnetic fields and solving the underlying inverse problem. One of the most promising application fields of CIFT is the continuous casting of steel, for which online monitoring of the flow in the mould would be highly desirable. In previous experiments at a small-scale model of continuous casting, CIFT has been applied to various industrially relevant problems, including sudden changes of flow structures in the case of argon injection and the influence of a magnetic stirrer at the submerged entry nozzle. The application of CIFT in the presence of electromagnetic brakes, which are widely used to stabilize the flow in the mould, has turned out to be more challenging due to the extreme dynamic range between the strong applied brake field and the weak flow-induced perturbations of the measuring field. In this paper, we present a gradiometric version of CIFT, relying on gradiometric field measurements, that is capable of overcoming these problems and therefore seems a promising candidate for applying CIFT in the steel casting industry. This article is part of the themed issue ‘Supersensing through industrial process tomography’. PMID:27185963
Computational structures for robotic computations
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chang, P. R.
1987-01-01
The computational problems of inverse kinematics and inverse dynamics of robot manipulators are discussed, taking advantage of parallelism and pipelining architectures. For the computation of the inverse kinematic position solution, a maximum pipelined CORDIC architecture has been designed based on a functional decomposition of the closed-form joint equations. For the inverse dynamics computation, an efficient p-fold parallel algorithm that overcomes the recurrence problem of the Newton-Euler equations of motion to achieve the time lower bound of O(log_2 n) has also been developed.
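Since the joint equations are evaluated with CORDIC stages, a software model of the basic rotation-mode iteration may clarify why the architecture pipelines well. This is a generic CORDIC sine/cosine sketch, not the paper's pipelined design; the iteration count and floating-point arithmetic are illustrative stand-ins for fixed-point hardware.

```python
import math

def cordic_sincos(theta, n=32):
    """Compute (sin, cos) of theta (|theta| <= pi/2) via CORDIC rotations.

    Each iteration rotates by +/- atan(2^-i) using only additions and
    multiplications by powers of two (shifts in hardware), which is what
    makes one pipeline stage per iteration attractive.
    """
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    # Accumulated gain of the pseudo-rotations; K compensates for it.
    K = 1.0
    for i in range(n):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

    x, y, z = 1.0, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y * K, x * K

s, c = cordic_sincos(0.5)
print(s, c)
```

In a hardware pipeline the loop body becomes one stage per iteration, so a new angle can enter every clock cycle, which is the throughput argument behind the "maximum pipelined" design.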
Burton, Brett M; Tate, Jess D; Erem, Burak; Swenson, Darrell J; Wang, Dafang F; Steffen, Michael; Brooks, Dana H; van Dam, Peter M; Macleod, Rob S
2012-01-01
Computational modeling in electrocardiography often requires the examination of cardiac forward and inverse problems in order to non-invasively analyze physiological events that are otherwise inaccessible or unethical to explore. The study of these models can be performed in the open-source SCIRun problem solving environment developed at the Center for Integrative Biomedical Computing (CIBC). A new toolkit within SCIRun provides researchers with essential frameworks for constructing and manipulating electrocardiographic forward and inverse models in a highly efficient and interactive way. The toolkit contains sample networks, tutorials and documentation which direct users through SCIRun-specific approaches in the assembly and execution of these specific problems. PMID:22254301
Iterative electromagnetic Born inversion applied to earth conductivity imaging
NASA Astrophysics Data System (ADS)
Alumbaugh, D. L.
1993-08-01
This thesis investigates the use of a fast imaging technique to deduce the spatial conductivity distribution in the earth from low frequency (less than 1 MHz), cross well electromagnetic (EM) measurements. The theory embodied in this work is the extension of previous strategies and is based on the Born series approximation to solve both the forward and inverse problem. Nonlinear integral equations are employed to derive the series expansion which accounts for the scattered magnetic fields that are generated by inhomogeneities embedded in either a homogeneous or a layered earth. A sinusoidally oscillating, vertically oriented magnetic dipole is employed as a source, and it is assumed that the scattering bodies are azimuthally symmetric about the source dipole axis. The use of this model geometry reduces the 3-D vector problem to a more manageable 2-D scalar form. The validity of the cross well EM method is tested by applying the imaging scheme to two sets of field data. Images of the data collected at the Devine, Texas test site show excellent correlation with the well logs. Unfortunately there is a drift error present in the data that limits the accuracy of the results. A more complete set of data collected at the Richmond field station in Richmond, California demonstrates that cross well EM can be successfully employed to monitor the position of an injected mass of salt water. Both the data and the resulting images clearly indicate the plume migrates toward the north-northwest. The plausibility of these conclusions is verified by applying the imaging code to synthetic data generated by a 3-D sheet model.
Dushaw, Brian D; Sagen, Hanne
2017-12-01
Ocean acoustic tomography depends on a suitable reference ocean environment with which to set the basic parameters of the inverse problem. Some inverse problems may require a reference ocean that includes the small-scale variations from internal waves, small mesoscale, or spice. Tomographic inversions that employ data of stable shadow zone arrivals, such as those that have been observed in the North Pacific and Canary Basin, are an example. Estimating temperature from the unique acoustic data that have been obtained in Fram Strait is another example. The addition of small-scale variability to augment a smooth reference ocean is essential to understanding the acoustic forward problem in these cases. Rather than being a hindrance, the stochastic influences of the small scale can be exploited to obtain accurate inverse estimates. Inverse solutions are readily obtained, and they give computed arrival patterns that match the observations. The approach is not ad hoc, but universal, and it has allowed inverse estimates of ocean temperature variations in Fram Strait to be readily computed on several acoustic paths for which tomographic data were obtained.
An inverse dynamics approach to trajectory optimization and guidance for an aerospace plane
NASA Technical Reports Server (NTRS)
Lu, Ping
1992-01-01
The optimal ascent problem for an aerospace plane is formulated as an optimal inverse dynamics problem. Both minimum-fuel and minimax types of performance indices are considered. Some important features of the optimal trajectory and controls are used to construct a nonlinear feedback midcourse controller, which not only greatly simplifies the difficult constrained optimization problem and yields improved solutions, but is also suited for onboard implementation. Robust ascent guidance is obtained by using a combination of feedback compensation and onboard generation of control through the inverse dynamics approach. Accurate orbital insertion can be achieved with near-optimal control of the rocket through inverse dynamics even in the presence of disturbances.
Lattice enumeration for inverse molecular design using the signature descriptor.
Martin, Shawn
2012-07-23
We describe an inverse quantitative structure-activity relationship (QSAR) framework developed for the design of molecular structures with desired properties. This framework uses chemical fragments encoded with a molecular descriptor known as a signature. It solves a system of linear constrained Diophantine equations to reorganize the fragments into novel molecular structures. The method has been previously applied to problems in drug and materials design but has inherent computational limitations due to the necessity of solving the Diophantine constraints. We propose a new approach to overcome these limitations using the Fincke-Pohst algorithm for lattice enumeration. We benchmark the new approach against previous results on LFA-1/ICAM-1 inhibitory peptides, linear homopolymers, and hydrofluoroether foam blowing agents. Software implementing the new approach is available at www.cs.otago.ac.nz/homepages/smartin.
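The core combinatorial step above, finding non-negative integer fragment counts that satisfy linear signature constraints, can be illustrated with a toy enumerator. This is a brute-force bounded search standing in for the Fincke-Pohst lattice enumeration; the constraint matrix and bounds are invented for the example.

```python
from itertools import product

def enumerate_solutions(A, b, bound):
    """Enumerate non-negative integer vectors n with A @ n == b, 0 <= n_i <= bound.

    Each solution corresponds to a way of reassembling molecular fragments
    (columns of A) into a structure consistent with the linear Diophantine
    signature constraints. Fincke-Pohst prunes this search via a lattice
    reduction; here we simply scan the bounded box for clarity.
    """
    ncols = len(A[0])
    sols = []
    for n in product(range(bound + 1), repeat=ncols):
        if all(sum(a * x for a, x in zip(row, n)) == rhs
               for row, rhs in zip(A, b)):
            sols.append(n)
    return sols

# Toy constraints: two fragment-count balance equations over three fragments.
A = [[1, 1, 2],
     [2, 0, 1]]
b = [4, 3]
print(enumerate_solutions(A, b, 4))  # -> [(1, 1, 1)]
```

The combinatorial explosion of this box scan is exactly the computational limitation the paper's lattice-enumeration approach is designed to overcome.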
Mass, heat and nutrient fluxes in the Atlantic Ocean determined by inverse methods. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rintoul, Stephen Rich
1988-01-01
Inverse methods are applied to historical hydrographic data to address two aspects of the general circulation of the Atlantic Ocean. The method allows conservation statements for mass and other properties, along with a variety of other constraints, to be combined in a dynamically consistent way to estimate the absolute velocity field and associated property transports. The method was first used to examine the exchange of mass and heat between the South Atlantic and the neighboring ocean basins. The second problem addressed concerns the circulation and property fluxes across 24 and 36 deg N in the subtropical North Atlantic. Conservation statements are considered for the nutrients as well as mass, and the nutrients are found to contribute significant information independent of temperature and salinity.
Performance limits for maritime Inverse Synthetic Aperture Radar (ISAR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin Walter
2013-11-01
The performance of an Inverse Synthetic Aperture Radar (ISAR) system depends on a variety of factors, many of which are interdependent in some manner. In this report we specifically examine ISAR as applied to maritime targets (e.g. ships). It is often difficult to get your arms around the problem of ascertaining achievable performance limits, and yet those limits exist and are dictated by physics. This report identifies and explores those limits, and how they depend on hardware system parameters and environmental conditions. Ultimately, this leads to a characterization of parameters that offer optimum performance for the overall ISAR system. While the information herein is not new to the literature, its collection into a single report hopes to offer some value in reducing the seek time.
NASA Astrophysics Data System (ADS)
Gosselin, J.; Audet, P.; Schaeffer, A. J.
2017-12-01
The seismic velocity structure in the forearc of subduction zones provides important constraints on material properties, with implications for seismogenesis. In Cascadia, previous studies have imaged a downgoing low-velocity zone (LVZ) characterized by an elevated P-to-S velocity ratio (Vp/Vs) down to 45 km depth, near the intersection with the mantle wedge corner, beyond which the signature of the LVZ disappears. These results, combined with the absence of a "normal" continental Moho, indicate that the down-going oceanic crust likely carries large amounts of overpressured free fluids that are released downdip at the onset of crustal eclogitization, and are further stored in the mantle wedge as serpentinite. These overpressured free fluids affect the stability of the plate interface and facilitate slow slip. These results are based on the inversion and migration of scattered teleseismic data for individual layer properties, a methodology that suffers from regularization and smoothing effects and from non-uniqueness, and that does not account for model uncertainty. This study instead applies trans-dimensional Bayesian inversion of teleseismic data collected in the forearc of northern Cascadia (the CAFÉ experiment in northern Washington) to provide rigorous, quantitative estimates of local velocity structure, and associated uncertainties (particularly Vp/Vs structure and depth to the plate interface). Trans-dimensional inversion is a generalization of fixed-dimensional inversion that includes the number (and type) of parameters required to describe the velocity model (or data error model) as unknowns in the problem. This allows model complexity to be inherently determined by data information content, not by subjective regularization. The inversion is implemented here using the reversible-jump Markov chain Monte Carlo algorithm. The result is an ensemble set of candidate velocity-structure models which approximate the posterior probability density (PPD) of the model parameters.
The solution to the inverse problem, and associated uncertainties, are described by properties of the PPD. The results obtained here will eventually be integrated with teleseismic data from OBS stations from the Cascadia Initiative to provide constraints across the entire seismogenic portion of the plate interface.
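The idea of approximating the PPD by an ensemble of sampled models can be illustrated with a deliberately simplified, fixed-dimensional Metropolis-Hastings sampler (the actual study uses reversible-jump MCMC over a variable number of parameters). The one-parameter "Vp/Vs" model, its prior bounds, and the noise level are all invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: noisy observations of a single velocity-ratio parameter.
true_vpvs, sigma = 1.8, 0.05
data = true_vpvs + sigma * rng.standard_normal(100)

def log_post(m):
    # Flat prior on [1.5, 2.5]; Gaussian likelihood.
    if not (1.5 <= m <= 2.5):
        return -np.inf
    return -0.5 * np.sum((data - m) ** 2) / sigma ** 2

# Random-walk Metropolis-Hastings: the retained chain samples form the
# ensemble approximating the posterior probability density (PPD).
m, lp = 2.0, log_post(2.0)
samples = []
for _ in range(20000):
    prop = m + 0.02 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        m, lp = prop, lp_prop
    samples.append(m)

samples = np.array(samples[5000:])       # discard burn-in
print(samples.mean(), samples.std())     # posterior mean and its uncertainty
```

The posterior standard deviation reported by the ensemble is the kind of rigorous uncertainty estimate that deterministic, regularized inversions do not provide.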
NASA Astrophysics Data System (ADS)
Pankratov, Oleg; Kuvshinov, Alexey
2016-01-01
Despite impressive progress in the development and application of electromagnetic (EM) deterministic inverse schemes to map the 3-D distribution of electrical conductivity within the Earth, there is one question which remains poorly addressed: uncertainty quantification of the recovered conductivity models. Apparently, only an inversion based on a statistical approach provides a systematic framework to quantify such uncertainties. The Metropolis-Hastings (M-H) algorithm is the most popular technique for sampling the posterior probability distribution that describes the solution of the statistical inverse problem. However, all statistical inverse schemes require an enormous amount of forward simulations and thus appear to be extremely demanding computationally, if not prohibitive, if a 3-D setup is invoked. This urges the development of fast and scalable 3-D modelling codes which can run large-scale 3-D models of practical interest in fractions of a second on high-performance multi-core platforms. But, even with these codes, the challenge for M-H methods is to construct proposal functions that simultaneously provide a good approximation of the target density function while being inexpensive to sample. In this paper we address both of these issues. First we introduce a variant of the M-H method which uses information about the local gradient and Hessian of the penalty function. This, in particular, allows us to exploit adjoint-based machinery that has been instrumental for the fast solution of deterministic inverse problems. We explain why this modification of M-H significantly accelerates sampling of the posterior probability distribution. In addition we show how Hessian handling (inverse, square root) can be made practicable by a low-rank approximation using the Lanczos algorithm. Ultimately we discuss uncertainty analysis based on stochastic inversion results. In addition, we demonstrate how this analysis can be performed within a deterministic approach.
In the second part, we summarize modern trends in the development of efficient 3-D EM forward modelling schemes with special emphasis on recent advances in the integral equation approach.
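The effect of a gradient-informed proposal can be sketched with a Langevin-type (MALA) sampler on a quadratic penalty. This is only an illustration of the gradient part of the idea: the paper's scheme also exploits Hessian information and adjoint machinery, which are omitted here, and the 2-D penalty function is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Quadratic penalty phi(m) = 0.5 * m^T H m, so the target density is
# proportional to exp(-phi) and its true covariance is inv(H).
H = np.array([[4.0, 1.0], [1.0, 2.0]])

def phi(m):
    return 0.5 * m @ H @ m

def grad(m):
    return H @ m

# Langevin proposal: drift the chain along -grad(phi) before adding noise.
eps = 0.25
m = np.zeros(2)
samples = []
for _ in range(40000):
    drift = m - 0.5 * eps ** 2 * grad(m)
    prop = drift + eps * rng.standard_normal(2)
    # The M-H acceptance ratio must include the asymmetric proposal densities.
    drift_back = prop - 0.5 * eps ** 2 * grad(prop)
    log_q_fwd = -np.sum((prop - drift) ** 2) / (2 * eps ** 2)
    log_q_bwd = -np.sum((m - drift_back) ** 2) / (2 * eps ** 2)
    log_a = phi(m) - phi(prop) + log_q_bwd - log_q_fwd
    if np.log(rng.uniform()) < log_a:
        m = prop
    samples.append(m.copy())

cov = np.cov(np.array(samples[10000:]).T)
print(np.round(cov, 2))  # should approach inv(H)
```

Because the drift term pushes proposals toward high-probability regions, the acceptance rate stays high for larger step sizes than a plain random walk allows, which is the acceleration mechanism the paper describes.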
A Hybrid Algorithm for Non-negative Matrix Factorization Based on Symmetric Information Divergence
Devarajan, Karthik; Ebrahimi, Nader; Soofi, Ehsan
2017-01-01
The objective of this paper is to provide a hybrid algorithm for non-negative matrix factorization based on a symmetric version of Kullback-Leibler divergence, known as intrinsic information. The convergence of the proposed algorithm is shown for several members of the exponential family such as the Gaussian, Poisson, gamma and inverse Gaussian models. The speed of this algorithm is examined and its usefulness is illustrated through some applied problems. PMID:28868206
Tomographic Neutron Imaging using SIRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregor, Jens; Finney, Charles E. A.; Toops, Todd J
2013-01-01
Neutron imaging is complementary to x-ray imaging in that materials such as water and plastic are highly attenuating while material such as metal is nearly transparent. We showcase tomographic imaging of a diesel particulate filter. Reconstruction is done using a modified version of SIRT called PSIRT. We expand on previous work and introduce Tikhonov regularization. We show that near-optimal relaxation can still be achieved. The algorithmic ideas apply to cone beam x-ray CT and other inverse problems.
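A minimal SIRT iteration with Tikhonov regularization can be sketched as follows. This is generic SIRT, not the PSIRT variant from the report: the regularization is folded in by augmenting the system with sqrt(alpha)-scaled identity rows, and the toy "projection" matrix stands in for a real CT system geometry.

```python
import numpy as np

rng = np.random.default_rng(3)

def sirt(A, b, alpha=0.0, iters=2000, lam=1.0):
    """SIRT with optional Tikhonov regularization.

    Augmenting A with sqrt(alpha)*I rows (and b with zeros) makes the same
    simultaneous-update iteration minimize ||Ax - b||^2 + alpha * ||x||^2.
    """
    if alpha > 0.0:
        A = np.vstack([A, np.sqrt(alpha) * np.eye(A.shape[1])])
        b = np.concatenate([b, np.zeros(A.shape[1])])
    R = 1.0 / np.abs(A).sum(axis=1)   # inverse row sums
    C = 1.0 / np.abs(A).sum(axis=0)   # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # Simultaneous update: all rays contribute before x changes.
        x = x + lam * C * (A.T @ (R * (b - A @ x)))
    return x

# Toy non-negative "ray sum" system standing in for neutron CT projections.
A = np.abs(rng.standard_normal((80, 20)))
x_true = rng.uniform(0.0, 1.0, 20)
b = A @ x_true
x = sirt(A, b, alpha=1e-6)
print(np.max(np.abs(x - x_true)))
```

With the row/column-sum scalings shown, the iteration is provably convergent for relaxation parameters lam in (0, 2), which is the context in which near-optimal relaxation is discussed.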
NASA Astrophysics Data System (ADS)
Fujii, M.
2017-07-01
Two variations of a depth-selective back-projection filter for functional near-infrared spectroscopy (fNIRS) systems are introduced. The filter comprises a depth-selective algorithm that uses inverse problems applied to an optically diffusive multilayer medium. In this study, simultaneous signal reconstruction of both superficial and deep tissue from fNIRS experiments of the human forehead using a prototype of a CW-NIRS system is demonstrated.
Investigation of inversion polymorphisms in the human genome using principal components analysis.
Ma, Jianzhong; Amos, Christopher I
2012-01-01
Despite the significant advances made over the last few years in mapping inversions with the advent of paired-end sequencing approaches, our understanding of the prevalence and spectrum of inversions in the human genome has lagged behind that of other types of structural variants, mainly due to the lack of a cost-efficient method applicable to large-scale samples. We propose a novel method based on principal components analysis (PCA) to characterize inversion polymorphisms using high-density SNP genotype data. Our method applies to non-recurrent inversions for which recombination between the inverted and non-inverted segments in inversion heterozygotes is suppressed due to the loss of unbalanced gametes. Inside such an inversion region, an effect similar to population substructure is thus created: two distinct "populations" of inversion homozygotes of different orientations and their 1:1 admixture, namely the inversion heterozygotes. This kind of substructure can be readily detected by performing PCA locally in the inversion regions. Using simulations, we demonstrated that the proposed method can be used to detect and genotype inversion polymorphisms using unphased genotype data. We applied our method to the phase III HapMap data and inferred the inversion genotypes of known inversion polymorphisms at 8p23.1 and 17q21.31. These inversion genotypes were validated by comparing with literature results and by checking Mendelian consistency using the family data whenever available. Based on the PCA approach, we also performed a preliminary genome-wide scan for inversions using the HapMap data, which resulted in 2040 candidate inversions, 169 of which overlapped with previously reported inversions. Our method can be readily applied to the abundant SNP data, and is expected to play an important role in developing human genome maps of inversions and exploring associations between inversions and susceptibility to diseases.
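The local-PCA signal described above can be reproduced on simulated data in a few lines. This is a hedged sketch, not the authors' pipeline: the SNP count, allele-frequency divergence between orientations, and inversion-genotype frequencies are all invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate unphased genotypes (0/1/2 allele counts) at 50 SNPs inside a
# non-recurrent inversion: suppressed recombination keeps the two
# orientations diverged, so allele frequencies differ between them.
n_snps, n_ind = 50, 300
freq_std = rng.uniform(0.1, 0.4, n_snps)                       # standard orientation
freq_inv = np.clip(freq_std + rng.uniform(0.3, 0.5, n_snps), 0.0, 1.0)

# Each individual carries g inverted haplotypes (g = 0, 1, or 2).
inv_geno = rng.choice([0, 1, 2], size=n_ind, p=[0.4, 0.4, 0.2])
G = np.array([rng.binomial(g, freq_inv) + rng.binomial(2 - g, freq_std)
              for g in inv_geno])

# Local PCA inside the inversion region: PC1 separates the three
# "populations" (two homozygote orientations plus their admixture).
X = G - G.mean(axis=0)
pc1 = X @ np.linalg.svd(X, full_matrices=False)[2][0]

r = np.corrcoef(pc1, inv_geno)[0, 1]
print(abs(r))  # PC1 tracks the inversion genotype
```

Clustering individuals along PC1 into three groups is then, in effect, genotyping the inversion from unphased SNP data, which is the core of the proposed method.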
NASA Astrophysics Data System (ADS)
Tian, Xiang-Dong
The purpose of this research is to simulate induction and measuring-while-drilling (MWD) logs. In the simulation of logs, there are two tasks. The first task, the forward modeling procedure, is to compute the logs from a known formation. The second task, the inversion procedure, is to determine the unknown properties of the formation from the measured field logs. In general, the inversion procedure requires the solution of a forward model. In this study, a stable numerical method to simulate induction and MWD logs is presented. The proposed algorithm is based on a horizontal eigenmode expansion method. Vertical propagation of modes is modeled by a three-layer module. The multilayer cases are treated as a cascade of these modules. The mode tracing algorithm possesses stable characteristics that are superior to other methods. This method is applied to simulate the logs in formations with both vertical and horizontal layers, and also used to study the groove effects of the MWD tool. The results are very good. Two-dimensional inversion of induction logs is a nonlinear problem. Nonlinear functions of the apparent conductivity are expanded into a Taylor series. After truncating the high order terms in this Taylor series, the nonlinear functions are linearized. An iterative procedure is then devised to solve the inversion problem. In each iteration, the Jacobian matrix is calculated, and a small variation computed using the least-squares method is used to modify the background medium. Finally, the inverted medium is obtained. The horizontal eigenmode method is used to solve the forward problem. It is found that a good inverted formation can be obtained from the measured logs. In order to help the user simulate the induction logs conveniently, a Wellog Simulator, based on the X-window system, is developed.
The application software (FORTRAN codes) embedded in the Simulator is designed to simulate the responses of the induction tools in the layered formation with dipping beds. The graphic user-interface part of the Wellog Simulator is implemented with C and Motif. Through the user interface, the user can prepare the simulation data, select the tools, simulate the logs and plot the results.
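The linearize-solve-update loop described above (a Gauss-Newton iteration) can be sketched with a toy forward model. This is not an induction-log simulator: the two-parameter nonlinear response and the starting model are invented purely to show the Taylor-series linearization and least-squares update at work.

```python
import numpy as np

def forward(m):
    # Toy nonlinear tool response standing in for the apparent-conductivity model.
    return np.array([m[0] + m[1], m[0] * m[1], m[0] - 2.0 * m[1]])

def jacobian(m, h=1e-6):
    # Jacobian by finite differences, recomputed at each linearization point.
    f0 = forward(m)
    J = np.empty((f0.size, m.size))
    for j in range(m.size):
        dm = m.copy()
        dm[j] += h
        J[:, j] = (forward(dm) - f0) / h
    return J

# Iterative linearized inversion: truncate the Taylor series, solve the
# least-squares subproblem, and use the small variation to update the
# background medium.
m_true = np.array([2.0, 0.5])
d_obs = forward(m_true)
m = np.array([1.0, 1.0])
for _ in range(25):
    J = jacobian(m)
    step, *_ = np.linalg.lstsq(J, d_obs - forward(m), rcond=None)
    m = m + step
print(m)  # converges to m_true
```

Each pass of the loop corresponds to one iteration of the procedure in the abstract: Jacobian evaluation, least-squares solve for the model perturbation, and background update.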
Vibrato in Singing Voice: The Link between Source-Filter and Sinusoidal Models
NASA Astrophysics Data System (ADS)
Arroabarren, Ixone; Carlosena, Alfonso
2004-12-01
The application of inverse filtering techniques for high-quality singing voice analysis/synthesis is discussed. In the context of source-filter models, inverse filtering provides a noninvasive method to extract the voice source, and thus to study voice quality. Although this approach is widely used in speech synthesis, this is not the case in singing voice. Several studies have proved that inverse filtering techniques fail in the case of singing voice, the reasons being unclear. In order to shed light on this problem, we will consider here an additional feature of singing voice, not present in speech: the vibrato. Vibrato has been traditionally studied by sinusoidal modeling. As an alternative, we will introduce here a novel noninteractive source filter model that incorporates the mechanisms of vibrato generation. This model will also allow the comparison of the results produced by inverse filtering techniques and by sinusoidal modeling, as they apply to singing voice and not to speech. In this way, the limitations of these conventional techniques, described in previous literature, will be explained. Both synthetic signals and singer recordings are used to validate and compare the techniques presented in the paper.
On the Development of Multi-Step Inverse FEM with Shell Model
NASA Astrophysics Data System (ADS)
Huang, Y.; Du, R.
2005-08-01
The inverse or one-step finite element approach is increasingly used in the sheet metal stamping industry to predict strain distribution and the initial blank shape in the preliminary design stage. Based on the existing theory, there are two types of method: one is based on the principle of virtual work and the other is based on the principle of extreme work. Much research has been conducted to improve the accuracy of simulation results. For example, based on the virtual work principle, Batoz et al. developed a new method using triangular DKT shell elements. In this new method, the bending and unbending effects are considered. Based on the principle of extreme work, Majlessi et al. proposed the multi-step inverse approach with membrane elements and applied it to an axisymmetric part. Lee et al. presented an axisymmetric shell element model to solve a similar problem. In this paper, a new multi-step inverse method is introduced with no limitation on the workpiece shape. It is a shell element model based on the virtual work principle. The new method is validated by comparing it to the commercial software system PAMSTAMP®. The comparison results indicate that the accuracy is good.
Application of Dynamic Logic Algorithm to Inverse Scattering Problems Related to Plasma Diagnostics
NASA Astrophysics Data System (ADS)
Perlovsky, L.; Deming, R. W.; Sotnikov, V.
2010-11-01
In plasma diagnostics, scattering of electromagnetic waves is widely used for identification of density and wave field perturbations. In the present work we use a powerful mathematical approach, dynamic logic (DL), to identify the spectra of scattered electromagnetic (EM) waves produced by the interaction of the incident EM wave with a Langmuir soliton in the presence of noise. The problem is especially difficult since the spectral amplitudes of the noise pattern are comparable with the amplitudes of the scattered waves. In the past DL has been applied to a number of complex problems in artificial intelligence, pattern recognition, and signal processing, resulting in revolutionary improvements. Here we demonstrate its application to plasma diagnostic problems. Reference: Perlovsky, L. I. (2001). Neural Networks and Intellect: Using Model-Based Concepts. Oxford University Press, New York, NY.
Inverse random source scattering for the Helmholtz equation in inhomogeneous media
NASA Astrophysics Data System (ADS)
Li, Ming; Chen, Chuchu; Li, Peijun
2018-01-01
This paper is concerned with an inverse random source scattering problem in an inhomogeneous background medium. The wave propagation is modeled by the stochastic Helmholtz equation with the source driven by additive white noise. The goal is to reconstruct the statistical properties of the random source such as the mean and variance from the boundary measurement of the radiated random wave field at multiple frequencies. Both the direct and inverse problems are considered. We show that the direct problem has a unique mild solution by a constructive proof. For the inverse problem, we derive Fredholm integral equations, which connect the boundary measurement of the radiated wave field with the unknown source function. A regularized block Kaczmarz method is developed to solve the ill-posed integral equations. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
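A block Kaczmarz iteration of the kind mentioned above can be sketched for a generic consistent linear system. This is a hedged illustration, not the paper's implementation: the Fredholm integral operator is replaced by a random dense matrix, and a small damping term stands in for the regularization needed on genuinely ill-posed data.

```python
import numpy as np

rng = np.random.default_rng(5)

def block_kaczmarz(A, b, n_blocks=10, sweeps=200, damp=1e-8):
    """Regularized block Kaczmarz: cycle over row blocks, moving the iterate
    to the damped least-squares solution of each block's equations."""
    m, n = A.shape
    x = np.zeros(n)
    blocks = np.array_split(np.arange(m), n_blocks)
    for _ in range(sweeps):
        for idx in blocks:
            Ab, bb = A[idx], b[idx]
            r = bb - Ab @ x
            # Damped block projection; damp regularizes near-singular blocks.
            x = x + Ab.T @ np.linalg.solve(Ab @ Ab.T + damp * np.eye(len(idx)), r)
    return x

# Consistent overdetermined test system standing in for the discretized
# Fredholm integral equations relating boundary data to the source.
A = rng.standard_normal((100, 30))
x_true = rng.standard_normal(30)
b = A @ x_true
x = block_kaczmarz(A, b)
print(np.allclose(x, x_true, atol=1e-6))
```

Sweeping over row blocks rather than single rows lets each update use a batch of measurements at once, which is the usual motivation for the block variant on multi-frequency data.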
NASA Astrophysics Data System (ADS)
Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.
2012-04-01
We consider new techniques and methods for earthquake and tsunami related problems, particularly inverse problems for the determination of tsunami source parameters, numerical simulation of long wave propagation in soil and water, and tsunami risk estimation. In addition, we will touch upon the issues of database management and destruction scenario visualization. New approaches and strategies, as well as mathematical tools and software, are to be shown. The long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of the propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms are developed for the operational determination of the origin and form of the tsunami source. The system TSS numerically simulates the source of a tsunami and/or earthquake and includes the possibility to solve both the direct and the inverse problem. It becomes possible to involve advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors, as well as to achieve optimum computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use the optimization approach to solve this problem and SVD-analysis to estimate the degree of ill-posedness and to find the quasi-solution.
The software system we developed is intended to implement a «no frost» technology, realizing a continuous cycle of direct and inverse problems: solving the direct problem, visualizing and comparing with observed data, and solving the inverse problem (correcting the model parameters). The main objective of further work is the creation of a workstation-based operational emergency tool that could be used by an emergency duty officer in real time.
NASA Astrophysics Data System (ADS)
Tandon, K.; Egbert, G.; Siripunvaraporn, W.
2003-12-01
We are developing a modular system for three-dimensional inversion of electromagnetic (EM) induction data, using an object oriented programming approach. This approach allows us to modify the individual components of the proposed inversion scheme, and also to reuse the components for a variety of problems in earth science computing, however diverse they might be. In particular, the modularity allows us to (a) change modeling codes independently of inversion algorithm details; (b) experiment with new inversion algorithms; and (c) modify the way prior information is imposed in the inversion to test competing hypotheses and techniques required to solve an earth science problem. Our initial code development is for EM induction equations on a staggered grid, using iterative solution techniques in 3D. An example illustrated here is an experiment with the sensitivity of 3D magnetotelluric inversion to uncertainties in the boundary conditions required for regional induction problems. These boundary conditions should reflect the large-scale geoelectric structure of the study area, which is usually poorly constrained. In general for inversion of MT data, one fixes boundary conditions at the edge of the model domain, and adjusts the earth's conductivity structure within the modeling domain. Allowing for errors in specification of the open boundary values is simple in principle, but no existing inversion codes that we are aware of have this feature. Adding a feature such as this is straightforward within the context of the modular approach. More generally, a modular approach provides an efficient methodology for setting up earth science computing problems to test various ideas. As a concrete illustration relevant to EM induction problems, we investigate the sensitivity of MT data near the San Andreas Fault at Parkfield (California) to uncertainties in the regional geoelectric structure.
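The modularity argument, that the inversion should depend only on an interface so modeling codes can be swapped independently, can be sketched with a minimal abstract base class. The class and function names here are invented for illustration and do not correspond to the authors' code.

```python
from abc import ABC, abstractmethod
import numpy as np

class ForwardModel(ABC):
    """Interface the inversion sees; any modeling code can be plugged in
    (staggered-grid EM solver, test stub, ...) without touching the inversion."""
    @abstractmethod
    def predict(self, model: np.ndarray) -> np.ndarray: ...

class LinearForward(ForwardModel):
    """A trivial linear modeling code used as a stand-in implementation."""
    def __init__(self, G):
        self.G = G
    def predict(self, model):
        return self.G @ model

def misfit(fwd: ForwardModel, model, data):
    # The inversion-side computation depends only on the interface above.
    return float(np.sum((fwd.predict(model) - data) ** 2))

G = np.array([[1.0, 2.0], [3.0, 4.0]])
fwd = LinearForward(G)
print(misfit(fwd, np.array([1.0, 1.0]), np.array([3.0, 7.0])))  # -> 0.0
```

Swapping in a different `ForwardModel` subclass, for example one with perturbed boundary conditions, changes nothing on the inversion side, which is exactly the experiment described in the abstract.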
Laterally constrained inversion for CSAMT data interpretation
NASA Astrophysics Data System (ADS)
Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun
2015-10-01
Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this can improve the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than the traditional single-station inversion; for the noisy data, the true model is recovered even at a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. Then, we re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison with the previous inversion in the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while comparison with a global search algorithm, simulated annealing (SA), in the watershed shows that although both methods deliver similarly good results, the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
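The two ingredients above, Jacobian column weighting and lateral constraints tying neighbouring stations together, can be sketched on a deliberately tiny problem. This is an invented toy (one model parameter per station, an identity-like data kernel, made-up noise and constraint weights), not the CSAMT Jacobian, but it shows how the augmented least-squares system is assembled.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy laterally constrained inversion: one parameter per station, each
# observed twice with noise; neighbouring stations are coupled through
# first-difference lateral constraints of strength beta.
n_sta = 20
m_true = 2.0 + np.sin(np.linspace(0.0, np.pi, n_sta))
d = np.repeat(m_true, 2) + 0.2 * rng.standard_normal(2 * n_sta)

J = np.repeat(np.eye(n_sta), 2, axis=0)              # data kernel ("Jacobian")
L = np.eye(n_sta)[:-1] - np.eye(n_sta, k=1)[:-1]     # lateral differences
beta = 2.0

# Column weighting balances parameter sensitivities (trivial here, but it is
# the preconditioning step applied to the Jacobian before solving).
w = np.linalg.norm(J, axis=0)
A = np.vstack([J / w, beta * (L / w)])
rhs = np.concatenate([d, np.zeros(n_sta - 1)])
m = np.linalg.lstsq(A, rhs, rcond=None)[0] / w

# The laterally constrained model is smoother than the raw per-station data.
print(np.abs(np.diff(m)).sum() < np.abs(np.diff(d[::2])).sum())
```

Stacking the weighted Jacobian over the lateral-constraint rows and solving one joint least-squares problem is the essence of LCI: every station's model is estimated simultaneously, with smoothness between neighbours imposed by beta.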
Angular velocity of gravitational radiation from precessing binaries and the corotating frame
NASA Astrophysics Data System (ADS)
Boyle, Michael
2013-05-01
This paper defines an angular velocity for time-dependent functions on the sphere and applies it to gravitational waveforms from compact binaries. Because it is geometrically meaningful and has a clear physical motivation, the angular velocity is uniquely useful in helping to solve an important—and largely ignored—problem in models of compact binaries: the inverse problem of deducing the physical parameters of a system from the gravitational waves alone. It is also used to define the corotating frame of the waveform. When decomposed in this frame, the waveform has no rotational dynamics and is therefore as slowly evolving as possible. The resulting simplifications lead to straightforward methods for accurately comparing waveforms and constructing hybrids. As formulated in this paper, the methods can be applied robustly to both precessing and nonprecessing waveforms, providing a clear, comprehensive, and consistent framework for waveform analysis. Explicit implementations of all these methods are provided in accompanying computer code.
A Riemann-Hilbert approach to the inverse problem for the Stark operator on the line
NASA Astrophysics Data System (ADS)
Its, A.; Sukhanov, V.
2016-05-01
The paper is concerned with the inverse scattering problem for the Stark operator on the line with a potential from the Schwartz class. In our study of the inverse problem, we use the Riemann-Hilbert formalism. This allows us to overcome the principal technical difficulties which arise in the more traditional approaches based on the Gel’fand-Levitan-Marchenko equations, and indeed solve the problem. We also produce a complete description of the relevant scattering data (which have not been obtained in the previous works on the Stark operator) and establish the bijection between the Schwartz class potentials and the scattering data.
SU-E-J-161: Inverse Problems for Optical Parameters in Laser Induced Thermal Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahrenholtz, SJ; Stafford, RJ; Fuentes, DT
Purpose: Magnetic resonance-guided laser-induced thermal therapy (MRgLITT) is being investigated in active post-market studies as a neurosurgical intervention for oncological applications throughout the body. Real-time MR temperature imaging is used to monitor ablative thermal delivery in the clinic. Additionally, brain MRgLITT could improve through effective planning of laser fiber placement. Mathematical bioheat models have been extensively investigated but require reliable patient-specific physical parameter data, e.g. optical parameters. This abstract applies an inverse problem algorithm to characterize optical parameter data obtained from previous MRgLITT interventions. Methods: The implemented inverse problem has three primary components: a parameter-space search algorithm, a physics model, and training data. First, the parameter-space search algorithm uses a gradient-based quasi-Newton method to optimize the effective optical attenuation coefficient, μ_eff. A parameter reduction reduces the amount of optical parameter space the algorithm must search. Second, the physics model is a simplified bioheat model for homogeneous tissue in which closed-form Green's functions represent the exact solution. Third, the training data were temperature imaging data from 23 MRgLITT oncological brain ablations (980 nm wavelength) in seven different patients. Results: To three significant figures, the descriptive statistics for μ_eff were: mean 1470 m⁻¹, median 1360 m⁻¹, standard deviation 369 m⁻¹, minimum 933 m⁻¹ and maximum 2260 m⁻¹. The standard deviation normalized by the mean was 25.0%. The inverse problem took <30 minutes to optimize all 23 datasets. Conclusion: As expected, the inferred average is biased by the underlying physics model. However, the standard deviation normalized by the mean is smaller than literature values and indicates an increased precision in the characterization of the optical parameters needed to plan MRgLITT procedures.
This investigation demonstrates the potential for the optimization and validation of more sophisticated bioheat models that incorporate the uncertainty of the data into the predictions, e.g. stochastic finite element methods.
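As a rough illustration of the kind of fit involved, the sketch below recovers an effective attenuation coefficient from synthetic fluence data by Gauss-Newton iteration. The point-source model `fluence`, the log-residual formulation, and all numerical values are assumptions for this toy; the paper's actual model is a closed-form Green's-function bioheat solution fitted to MR temperature data.

```python
import numpy as np

def fluence(r, mu_eff):
    """Toy point-source fluence ~ exp(-mu_eff * r) / r, standing in for
    the closed-form Green's-function model (an assumption of this sketch)."""
    return np.exp(-mu_eff * r) / r

def fit_mu_eff(r, phi, mu0=500.0, iters=20):
    """Gauss-Newton fit of mu_eff on log-fluence residuals (the log model
    is linear in mu_eff, so the iteration converges essentially in one step)."""
    mu = mu0
    for _ in range(iters):
        res = np.log(phi) - np.log(fluence(r, mu))  # log-domain residual
        jac = -r                                    # d log(fluence) / d mu_eff
        mu = mu + (jac @ res) / (jac @ jac)         # Gauss-Newton update
    return mu

r = np.linspace(0.005, 0.03, 40)        # radii in metres
phi_obs = fluence(r, 1470.0)            # synthetic data, mu_eff = 1470 1/m
mu_hat = fit_mu_eff(r, phi_obs)
```

The log transform is a common trick for exponential attenuation fits: it turns a stiff nonlinear problem into a nearly linear one.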
Van Regenmortel, Marc H. V.
2018-01-01
Hypotheses and theories are essential constituents of the scientific method. Many vaccinologists are unaware that the problems they try to solve are mostly inverse problems, which consist in imagining what could bring about a desired outcome. An inverse problem starts with the result and tries to guess the multiple causes that could have produced it. Compared to the usual direct scientific problems, which start with the causes and derive or calculate the results using deductive reasoning and known mechanisms, solving an inverse problem uses a less reliable inductive approach and requires the development of a theoretical model that may have different solutions or none at all. Unsuccessful attempts to solve inverse problems in HIV vaccinology by reductionist methods, systems biology and structure-based reverse vaccinology are described. The popular strategy known as rational vaccine design is unable to solve the multiple inverse problems faced by HIV vaccine developers. The term “rational” is derived from “rational drug design”, which uses the 3D structure of a biological target to design molecules that will selectively bind to it and inhibit its biological activity. In vaccine design, however, the word “rational” simply means that the investigator is concentrating on parts of the system for which molecular information is available. The economist and Nobel laureate Herbert Simon introduced the concept of “bounded rationality” to explain why the complexity of the world economic system makes it impossible, for instance, to predict an event like the financial crash of 2007–2008. Humans always operate under unavoidable constraints, such as insufficient information, a limited capacity to process huge amounts of data and a limited amount of time in which to reach a decision. Such limitations always prevent us from achieving the complete understanding and optimization of a complex system that would be needed to achieve a truly rational design process. 
This is why the complexity of the human immune system prevents us from rationally designing an HIV vaccine by solving inverse problems. PMID:29387066
Mathematical inference in one point microrheology
NASA Astrophysics Data System (ADS)
Hohenegger, Christel; McKinley, Scott
2016-11-01
Pioneered by the work of Mason and Weitz, one-point passive microrheology has been successfully applied to obtain estimates of the loss and storage moduli of viscoelastic fluids when the mean-square displacement obeys a local power law. Using numerical simulations of a fluctuating viscoelastic fluid model, we study the problem of recovering the mechanical parameters of the fluid's memory kernel from statistics such as mean-square displacements and increment autocorrelation functions. Seeking a better understanding of the influence of the assumptions made in the inversion process, we mathematically quantify the uncertainty in traditional one-point microrheology for simulated data and demonstrate that a large family of memory kernels yields the same statistical signature. We consider simulated data obtained both from a full viscoelastic fluid simulation of the unsteady Stokes equations with fluctuations and from a generalized Langevin equation for the particle's motion described by the same memory kernel. Using the theory of inverse problems, we propose an alternative method that can be used to recover information about the loss and storage moduli and discuss its limitations and uncertainties. NSF-DMS 1412998.
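A minimal sketch of the first statistic mentioned above, the mean-square displacement, checked on an ordinary Brownian trajectory where the local power-law exponent should be 1. The trajectory generator and all parameter values are assumptions for illustration, not the authors' simulation.

```python
import numpy as np

def mean_square_displacement(x, lags):
    """MSD of a 1-D trajectory at the given integer lags,
    averaged over all overlapping windows."""
    return np.array([np.mean((x[l:] - x[:-l]) ** 2) for l in lags])

# Sanity check on ordinary diffusion, where MSD(t) = 2*D*t (exponent 1).
rng = np.random.default_rng(1)
dt, D = 1e-3, 0.5
x = np.cumsum(np.sqrt(2 * D * dt) * rng.normal(size=200_000))
lags = np.array([10, 20, 40, 80, 160])
msd = mean_square_displacement(x, lags)
# Local power-law exponent from a log-log fit.
alpha = np.polyfit(np.log(lags * dt), np.log(msd), 1)[0]
```

For a viscoelastic memory kernel the exponent deviates from 1, which is precisely the regime where the inversion ambiguity discussed in the abstract arises.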
NASA Astrophysics Data System (ADS)
Strąk, Kinga; Maciejewska, Beata; Piasecka, Magdalena
2018-06-01
In this paper, a solution of the two-dimensional inverse heat transfer problem using the Beck method coupled with the Trefftz method is proposed. This method was applied to solve an inverse heat conduction problem. The aim of the calculation was to determine the boiling heat transfer coefficient on the basis of temperature measurements taken by infrared thermography. Experimental data on flow boiling heat transfer in a single vertical minichannel of 1.7 mm depth, heated asymmetrically, were used in the calculations. The heating element was a plate enhanced on the side in contact with the fluid; two refrigerants (FC-72 and HFE-7100, 3M) flowed in the minichannel. The analysis of the results was performed on the basis of experimental series obtained for the same heat flux and two different mass flow velocities. The results are presented as infrared thermographs, and as heated-wall temperature and heat transfer coefficient as functions of the distance from the minichannel inlet. The results are discussed for the subcooled and saturated boiling regions separately.
Probing clouds in planets with a simple radiative transfer model: the Jupiter case
NASA Astrophysics Data System (ADS)
Mendikoa, Iñigo; Pérez-Hoyos, Santiago; Sánchez-Lavega, Agustín
2012-11-01
Remote sensing of planets evokes the use of expensive orbiting satellites and the gathering of complex data from space. However, the basic properties of clouds in planetary atmospheres can be successfully estimated with small telescopes, even from an urban environment, using currently available and affordable technology. This makes the process accessible to undergraduate students while preserving most of the physics and mathematics involved. This paper presents the methodology for carrying out a photometric study of planetary atmospheres, focused on the planet Jupiter. The method introduces the basics of radiative transfer in planetary atmospheres, some notions of inverse problem theory and the fundamentals of planetary photometry. As will be shown, the procedure allows the student to derive the spectral reflectivity and top altitude of clouds from observations at different wavelengths by applying a simple but enlightening ‘reflective layer model’. In this way, the planet's atmospheric structure is estimated by students as an inverse problem from the observed photometry. Web resources are also provided to help those unable to obtain telescopic observations of the planets.
NASA Astrophysics Data System (ADS)
Blajer, W.; Dziewiecki, K.; Kołodziejczyk, K.; Mazur, Z.
2011-05-01
Underactuated systems have fewer control inputs than degrees of freedom, m < n. The determination of an input control strategy that forces such a system to complete a set of m specified motion tasks is challenging, and the existence of an explicit solution is conditioned on the differential flatness of the problem. A flatness-based solution means that all 2n states and m control inputs can be expressed algebraically in terms of the m specified outputs and their time derivatives up to a certain order, which in practice is attainable only for simple systems. In this contribution the problem is posed in a more practical way as a set of index-three differential-algebraic equations, and the solution is obtained numerically. The formulation is then illustrated with a two-degree-of-freedom underactuated system composed of two rotating discs connected by a torsional spring, in which the pre-specified motion of one of the discs is actuated by the torque applied to the other disc, n = 2 and m = 1. Experimental verification of the inverse simulation control methodology is reported.
NASA Astrophysics Data System (ADS)
Audebert, M.; Clément, R.; Touze-Foltz, N.; Günther, T.; Moreau, S.; Duquennoi, C.
2014-12-01
Leachate recirculation is a key process in municipal waste landfills operated as bioreactors. To quantify the water content and to assess the leachate injection system, in-situ methods such as electrical resistivity tomography (ERT) are required to obtain spatially distributed information. This geophysical method is based on an inversion process, which presents two major problems for delimiting the infiltration area. First, it is difficult for ERT users to choose an appropriate inversion parameter set. Indeed, it may not be sufficient to interpret only the optimum model (i.e. the model with the chosen regularisation strength), because it is not necessarily the model that best represents the physical process studied. Second, it is difficult to delineate the infiltration front from resistivity models because of the smoothness of the inversion results. This paper proposes a new methodology called MICS (multiple inversions and clustering strategy), which allows ERT users to improve the delimitation of the infiltration area when monitoring leachate injection. The MICS methodology is based on (i) a multiple-inversion step, in which the inversion parameter values are varied to take a wide range of resistivity models into account, and (ii) a clustering strategy to improve the delineation of the infiltration front. In this paper, MICS was assessed on two types of data. First, a numerical assessment allowed us to optimise and test MICS for different infiltration area sizes, contrasts and shapes. Second, MICS was applied to a field data set gathered during leachate recirculation in a bioreactor.
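The clustering step of a MICS-like workflow can be illustrated with a tiny hand-rolled k-means applied to the cell-wise median of several inversion runs. The two-cluster split, the toy resistivity values, and the use of the median are assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def two_means(values, iters=50):
    """Minimal 1-D k-means (k=2): splits cell values into a conductive
    ('infiltrated') cluster and a resistive ('background') cluster."""
    c = np.array([values.min(), values.max()], dtype=float)  # initial centres
    for _ in range(iters):
        labels = np.abs(values[:, None] - c[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                c[k] = values[labels == k].mean()
    return labels, c

# Rows: inversion runs with different parameter sets; columns: model cells.
# Clustering the cell-wise median sharpens the smeared infiltration front.
models = np.array([[10.0, 12.0, 80.0, 95.0, 100.0],
                   [12.0, 15.0, 70.0, 90.0, 105.0],
                   [ 9.0, 11.0, 85.0, 98.0,  99.0]])
labels, centres = two_means(np.median(models, axis=0))
```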
NASA Astrophysics Data System (ADS)
Barnoud, Anne; Coutant, Olivier; Bouligand, Claire; Gunawan, Hendra; Deroussi, Sébastien
2016-04-01
We use a Bayesian formalism combined with a grid node discretization for the linear inversion of gravimetric data in terms of 3-D density distribution. The forward modelling and the inversion method are derived from seismological inversion techniques in order to facilitate joint inversion or interpretation of density and seismic velocity models. The Bayesian formulation introduces covariance matrices on model parameters to regularize the ill-posed problem and reduce the non-uniqueness of the solution. This formalism favours smooth solutions and allows us to specify a spatial correlation length and to perform inversions at multiple scales. We also extract resolution parameters from the resolution matrix to discuss how well our density models are resolved. This method is applied to the inversion of data from the volcanic island of Basse-Terre in Guadeloupe, Lesser Antilles. A series of synthetic tests are performed to investigate advantages and limitations of the methodology in this context. This study results in the first 3-D density models of the island of Basse-Terre for which we identify: (i) a southward decrease of densities parallel to the migration of volcanic activity within the island, (ii) three dense anomalies beneath Petite Plaine Valley, Beaugendre Valley and the Grande-Découverte-Carmichaël-Soufrière Complex that may reflect the trace of former major volcanic feeding systems, (iii) shallow low-density anomalies in the southern part of Basse-Terre, especially around La Soufrière active volcano, Piton de Bouillante edifice and along the western coast, reflecting the presence of hydrothermal systems and fractured and altered rocks.
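The linear Bayesian inversion described above has a closed-form posterior mean. A minimal sketch, assuming a zero prior mean, a Gaussian spatial covariance with a correlation length, and a toy subsampling forward operator `G` (all assumptions of this example, not the gravimetric kernel of the paper):

```python
import numpy as np

def bayesian_linear_inversion(G, d, Cm, Cd):
    """Posterior mean of a linear Gaussian inverse problem with zero
    prior mean: m = Cm G^T (G Cm G^T + Cd)^(-1) d."""
    S = G @ Cm @ G.T + Cd
    return Cm @ G.T @ np.linalg.solve(S, d)

def gaussian_covariance(x, sigma=1.0, corr_len=1.5):
    """Model covariance encoding a spatial correlation length, which is
    what regularizes the ill-posed problem and favours smooth solutions."""
    dx = x[:, None] - x[None, :]
    return sigma ** 2 * np.exp(-0.5 * (dx / corr_len) ** 2)

x = np.arange(10.0)                  # model nodes
Cm = gaussian_covariance(x)
G = np.eye(10)[::2]                  # observe every other node
d = np.sin(0.5 * x[::2])             # synthetic observations
m = bayesian_linear_inversion(G, d, Cm, 1e-4 * np.eye(5))
```

Because of the prior covariance, the posterior mean interpolates smoothly between the observed nodes; changing `corr_len` effectively performs the multi-scale inversions mentioned in the abstract.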
NASA Astrophysics Data System (ADS)
Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves
2009-03-01
This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the resolution of a large sparse system of linear equations which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, this latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper. The parallel efficiency and the scalability of the code will also be quantified.
Parsimony and goodness-of-fit in multi-dimensional NMR inversion
NASA Astrophysics Data System (ADS)
Babak, Petro; Kryuchkov, Sergey; Kantzas, Apostolos
2017-01-01
Multi-dimensional nuclear magnetic resonance (NMR) experiments are often used to study the molecular structure and dynamics of matter in core analysis and reservoir evaluation. Industrial applications of multi-dimensional NMR involve high-dimensional measurement datasets with complicated correlation structure and require rapid and stable inversion algorithms from the time domain to the relaxation rate and/or diffusion domains. In practice, applying existing inversion algorithms with a large number of parameter values leads to an infinite number of solutions with a reasonable fit to the NMR data. Interpreting the variability of these multiple solutions and selecting the most appropriate one can be a very complex problem. In most cases the characteristics of materials have sparse signatures, and investigators would like to distinguish the most significant relaxation and diffusion values of the materials. To produce an easy-to-interpret and unique NMR distribution with a finite number of principal parameter values, we introduce a new method for NMR inversion. The method is constructed as a trade-off between the conventional goodness-of-fit approach to multivariate data and the principle of parsimony, which guarantees an inversion with the least number of parameter values. We suggest performing the inversion of NMR data using a forward stepwise regression selection algorithm. To account for the trade-off between goodness of fit and parsimony, the objective function is based on the Akaike Information Criterion (AIC). The performance of the developed multi-dimensional NMR inversion method and its comparison with conventional methods are illustrated using real data for samples with bitumen, water and clay.
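One possible reading of the forward stepwise AIC selection is sketched below on a toy one-dimensional two-exponential decay. The candidate T2 grid, the noise level, and the stopping rule details are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def aic(rss, n, k):
    """Akaike Information Criterion for a Gaussian misfit with k terms."""
    return n * np.log(rss / n) + 2 * k

def stepwise_nmr_inversion(t, y, T2_grid, max_terms=4):
    """Forward stepwise selection of relaxation components: each step adds
    the candidate T2 whose decay column most lowers the AIC, and selection
    stops when no candidate improves it (parsimony vs. goodness of fit)."""
    n = len(y)
    D = np.exp(-t[:, None] / T2_grid[None, :])      # dictionary of decays
    chosen, best_aic = [], aic(np.sum(y ** 2), n, 0)
    while len(chosen) < max_terms:
        scores = []
        for j in set(range(len(T2_grid))) - set(chosen):
            cols = chosen + [j]
            amp, *_ = np.linalg.lstsq(D[:, cols], y, rcond=None)
            rss = np.sum((y - D[:, cols] @ amp) ** 2)
            scores.append((aic(rss, n, len(cols)), j))
        score, j = min(scores)
        if score >= best_aic:
            break                                    # parsimony wins
        best_aic = score
        chosen.append(j)
    return np.sort(T2_grid[chosen])

rng = np.random.default_rng(2)
t = np.linspace(0.01, 2.0, 200)
y = np.exp(-t / 0.02) + 0.5 * np.exp(-t / 1.0) + 0.01 * rng.normal(size=t.size)
T2_grid = np.array([0.02, 0.1, 0.5, 1.0, 2.0])
peaks = stepwise_nmr_inversion(t, y, T2_grid)
```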
Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data
Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.
2015-01-01
We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
Comparison of iterative inverse coarse-graining methods
NASA Astrophysics Data System (ADS)
Rosenberger, David; Hanke, Martin; van der Vegt, Nico F. A.
2016-10-01
Deriving potentials for coarse-grained Molecular Dynamics (MD) simulations is frequently done by solving an inverse problem. Methods like Iterative Boltzmann Inversion (IBI) or Inverse Monte Carlo (IMC) have been widely used to solve this problem. The solution obtained by application of these methods guarantees a match in the radial distribution function (RDF) between the underlying fine-grained system and the derived coarse-grained system. However, these methods often fail in reproducing thermodynamic properties. To overcome this deficiency, additional thermodynamic constraints such as pressure or Kirkwood-Buff integrals (KBI) may be added to these methods. In this communication we test the ability of these methods to converge to a known solution of the inverse problem. With this goal in mind we have studied a binary mixture of two simple Lennard-Jones (LJ) fluids, in which no actual coarse-graining is performed. We further discuss whether full convergence is actually needed to achieve thermodynamic representability.
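The IBI update rule mentioned above can be sketched with a deliberately trivial "simulator": in the dilute limit the RDF is just the Boltzmann factor of the pair potential, so the iteration converges immediately. The closure, the grid, and the Lennard-Jones target are assumptions for this toy; a real application runs a coarse-grained MD simulation at every iteration.

```python
import numpy as np

kT = 1.0

def rdf_from_potential(U):
    """Stand-in 'simulation': in the dilute limit the RDF is the
    Boltzmann factor of the pair potential. A real IBI loop would run
    a coarse-grained MD simulation here instead."""
    return np.exp(-U / kT)

def ibi(g_target, n_iter=10):
    """Iterative Boltzmann Inversion: U <- U + kT ln(g_current / g_target),
    starting from the potential of mean force -kT ln(g_target)."""
    eps = 1e-12                     # avoid log(0) where the RDF vanishes
    U = -kT * np.log(np.clip(g_target, eps, None))
    for _ in range(n_iter):
        g = rdf_from_potential(U)
        U = U + kT * np.log(np.clip(g, eps, None) / np.clip(g_target, eps, None))
    return U

r = np.linspace(0.9, 3.0, 120)
# Dilute-limit RDF of a Lennard-Jones fluid as the matching target.
g_target = np.exp(-4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6) / kT)
U_cg = ibi(g_target)
```

Matching the RDF, as the update guarantees, says nothing about pressure or Kirkwood-Buff integrals, which is exactly why the constraints discussed in the abstract are needed.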
NASA Astrophysics Data System (ADS)
Ravenna, Matteo; Lebedev, Sergei; Celli, Nicolas
2017-04-01
We develop a Markov chain Monte Carlo inversion of fundamental and higher mode phase-velocity curves for the radially and azimuthally anisotropic structure of the crust and upper mantle. In the inversions of Rayleigh- and Love-wave dispersion curves for radially anisotropic structure, we obtain probabilistic 1D radially anisotropic shear-velocity profiles of the isotropic average Vs and anisotropy (or Vsv and Vsh) as functions of depth. In the inversions for azimuthal anisotropy, Rayleigh-wave dispersion curves at different azimuths are inverted for the vertically polarized shear-velocity structure (Vsv) and the 2-phi component of azimuthal anisotropy. The strength and originality of the method lie in its fully non-linear approach. Each model realization is computed using exact forward calculations. The uncertainty of the models is part of the output. In the inversions for azimuthal anisotropy, in particular, the computation of the forward problem is performed separately at different azimuths, with no linear approximations on the relation of the Earth's elastic parameters to surface wave phase velocities. The computations are performed in parallel in order to reduce the computing time. We compare inversions of the fundamental mode phase-velocity curves alone with inversions that also include overtones. The addition of higher modes enhances the resolving power of the anisotropic structure of the deep upper mantle. We apply the inversion method to phase-velocity curves in a few regions, including the Hangai dome region in Mongolia. Our models provide constraints on the Moho depth, the lithosphere-asthenosphere boundary, and the alignment of the anisotropic fabric with the direction of current and past flow, from the crust down to the deep asthenosphere.
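A minimal Metropolis sampler conveys the fully non-linear flavour of such an inversion: every proposal is evaluated with an exact forward computation, and the posterior spread is part of the output. The one-parameter `forward` model, the step size, and the noise level are invented for this sketch; real surface-wave forward solvers are far more involved.

```python
import numpy as np

def forward(vs, periods):
    """Toy phase-velocity model (stand-in for an exact surface-wave
    computation): c(T) increases with period toward the half-space vs."""
    return vs * (1.0 - 0.3 * np.exp(-periods / 20.0))

def mcmc(c_obs, periods, sigma, n_steps=20000, step=0.05):
    """Metropolis sampler over a single shear-velocity parameter."""
    rng = np.random.default_rng(3)
    def loglike(v):
        res = c_obs - forward(v, periods)
        return -0.5 * np.sum((res / sigma) ** 2)
    vs, ll = 3.0, None
    ll = loglike(vs)
    samples = []
    for _ in range(n_steps):
        prop = vs + step * rng.normal()             # random-walk proposal
        ll_prop = loglike(prop)                     # exact forward evaluation
        if np.log(rng.uniform()) < ll_prop - ll:    # Metropolis acceptance
            vs, ll = prop, ll_prop
        samples.append(vs)
    return np.array(samples[n_steps // 2:])         # discard burn-in

periods = np.linspace(10.0, 100.0, 20)
c_obs = forward(4.5, periods) + 0.01 * np.random.default_rng(4).normal(size=20)
samples = mcmc(c_obs, periods, sigma=0.01)
```

The histogram of `samples` is the probabilistic profile: its mean is the estimate and its width is the uncertainty that comes for free with the method.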
pyGIMLi: An open-source library for modelling and inversion in geophysics
NASA Astrophysics Data System (ADS)
Rücker, Carsten; Günther, Thomas; Wagner, Florian M.
2017-12-01
Many tasks in applied geosciences cannot be solved by single measurements, but require the integration of geophysical, geotechnical and hydrological methods. Numerical simulation techniques are essential both for planning and interpretation, as well as for the process understanding of modern geophysical methods. These trends encourage open, simple, and modern software architectures aiming at a uniform interface for interdisciplinary and flexible modelling and inversion approaches. We present pyGIMLi (Python Library for Inversion and Modelling in Geophysics), an open-source framework that provides tools for modelling and inversion of various geophysical, but also hydrological, methods. The modelling component supplies discretization management and the numerical basis for finite-element and finite-volume solvers in 1D, 2D and 3D on arbitrarily structured meshes. The generalized inversion framework solves the minimization problem with a Gauss-Newton algorithm for any physical forward operator and provides opportunities for uncertainty and resolution analyses. More general requirements, such as flexible regularization strategies, time-lapse processing and different ways of coupling individual methods, are provided independently of the actual methods used. The usage of pyGIMLi is first demonstrated by solving the steady-state heat equation, followed by a demonstration of more complex capabilities for the combination of different geophysical data sets. A fully coupled hydrogeophysical inversion of electrical resistivity tomography (ERT) data from a simulated tracer experiment is presented, which allows the underlying hydraulic conductivity distribution of the aquifer to be reconstructed directly. Another example demonstrates the improvement gained by jointly inverting ERT and ultrasonic data with respect to saturation, using a new approach that incorporates petrophysical relations in the inversion. 
Potential applications of the presented framework are manifold and include time-lapse, constrained, joint, and coupled inversions of various geophysical and hydrological data sets.
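A generic Gauss-Newton loop with smoothness (first-order Tikhonov) regularization, in the spirit of the generalized inversion framework described above: any forward operator plus a roughness constraint. This is a stand-alone sketch with an invented exponential forward model, not pyGIMLi's API.

```python
import numpy as np

def gauss_newton(forward, jacobian, d_obs, m0, lam=1.0, n_iter=10):
    """Gauss-Newton minimization of ||d - f(m)||^2 + lam*||W m||^2 for an
    arbitrary forward operator f; W penalizes model roughness."""
    m = m0.copy()
    n = len(m0)
    W = np.eye(n - 1, n, 1) - np.eye(n - 1, n)      # first-difference operator
    for _ in range(n_iter):
        J = jacobian(m)
        r = d_obs - forward(m)
        A = J.T @ J + lam * W.T @ W                 # regularized normal matrix
        m = m + np.linalg.solve(A, J.T @ r - lam * W.T @ (W @ m))
    return m

# Toy nonlinear forward operator: data are smoothed exponentials of the model.
K = np.exp(-0.1 * np.abs(np.subtract.outer(np.arange(15), np.arange(15.0))))
fwd = lambda m: K @ np.exp(m)
jac = lambda m: K * np.exp(m)[None, :]              # analytic Jacobian
m_true = np.log(np.linspace(1.0, 2.0, 15))
m_est = gauss_newton(fwd, jac, fwd(m_true), np.zeros(15), lam=1e-3)
```

Swapping `fwd` and `jac` for another physics is all that changes; that separation of forward operator from the minimization machinery is the design idea the framework generalizes.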
Quantifying uncertainties of seismic Bayesian inversion of Northern Great Plains
NASA Astrophysics Data System (ADS)
Gao, C.; Lekic, V.
2017-12-01
Elastic waves excited by earthquakes are the fundamental observations of seismological studies. Seismologists measure information such as travel time, amplitude, and polarization to infer the properties of the earthquake source, seismic wave propagation, and subsurface structure. Across numerous applications, seismic imaging has been able to take advantage of complementary seismic observables to constrain profiles and lateral variations of Earth's elastic properties. Moreover, seismic imaging plays a unique role in multidisciplinary studies of geoscience by providing direct constraints on the unreachable interior of the Earth. Accurate quantification of the uncertainties of inferences made from seismic observations is of paramount importance for interpreting seismic images and testing geological hypotheses. However, such quantification remains challenging and subjective due to the non-linearity and non-uniqueness of geophysical inverse problems. In this project, we apply a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm for a transdimensional Bayesian inversion of continental lithosphere structure. Such an inversion allows us to quantify the uncertainties of the inversion results by inverting for an ensemble solution. It also yields an adaptive parameterization that enables simultaneous inversion of different elastic properties without imposing strong prior information on the relationship between them. We present retrieved profiles of shear velocity (Vs) and radial anisotropy in the Northern Great Plains using measurements from USArray stations. We use both seismic surface wave dispersion and receiver function data because of their complementary constraints on lithosphere structure. Furthermore, we analyze the uncertainties of both individual and joint inversions of those two data types to quantify the benefit of joint inversion. As an application, we infer the variation of Moho depths and crustal layering across the northern Great Plains.
Joint inversion of NMR and SIP data to estimate pore size distribution of geomaterials
NASA Astrophysics Data System (ADS)
Niu, Qifei; Zhang, Chi
2018-03-01
There is growing interest in using geophysical tools to characterize the microstructure of geomaterials because of their non-invasive nature and applicability in the field. In these applications, multiple types of geophysical data sets are usually processed separately, which may be inadequate to constrain the key features of the target variables. Simultaneous processing of multiple data sets could therefore potentially improve the resolution. In this study, we propose a method to estimate pore size distribution by joint inversion of nuclear magnetic resonance (NMR) T2 relaxation and spectral induced polarization (SIP) spectra. The petrophysical relation between NMR T2 relaxation time and SIP relaxation time is incorporated in a nonlinear least squares problem formulation, which is solved using the Gauss-Newton method. The joint inversion scheme is applied to a synthetic sample and a Berea sandstone sample. The jointly estimated pore size distributions are very close to the true model and to results from other experimental methods. Even when knowledge of the petrophysical models of the sample is incomplete, the joint inversion can still capture the main features of the pore size distribution of the samples, including the general shape and the relative peak positions of the distribution curves. It is also found from the numerical example that the surface relaxivity of the sample can be extracted by the joint inversion of NMR and SIP data if the diffusion coefficient of the ions in the electrical double layer is known. Compared with individual inversions, the joint inversion improves the resolution of the estimated pore size distribution because of the addition of extra data sets. The proposed approach might constitute a first step towards a comprehensive joint inversion that can extract the full pore geometry information of a geomaterial from NMR and SIP data.
Efficient computational methods for electromagnetic imaging with applications to 3D magnetotellurics
NASA Astrophysics Data System (ADS)
Kordy, Michal Adam
The motivation for this work is the forward and inverse problem for magnetotellurics, a frequency domain electromagnetic remote-sensing geophysical method used in mineral, geothermal, and groundwater exploration. The dissertation consists of four papers. In the first paper, we prove the existence and uniqueness of a representation of any vector field in H(curl) by a vector lying in H(curl) and H(div). It allows us to represent electric or magnetic fields by another vector field, for which nodal finite element approximation may be used in the case of non-constant electromagnetic properties. With this approach, the system matrix does not become ill-posed for low-frequency. In the second paper, we consider hexahedral finite element approximation of an electric field for the magnetotelluric forward problem. The near-null space of the system matrix for low frequencies makes the numerical solution unstable in the air. We show that the proper solution may obtained by applying a correction on the null space of the curl. It is done by solving a Poisson equation using discrete Helmholtz decomposition. We parallelize the forward code on multicore workstation with large RAM. In the next paper, we use the forward code in the inversion. Regularization of the inversion is done by using the second norm of the logarithm of conductivity. The data space Gauss-Newton approach allows for significant savings in memory and computational time. We show the efficiency of the method by considering a number of synthetic inversions and we apply it to real data collected in Cascade Mountains. The last paper considers a cross-frequency interpolation of the forward response as well as the Jacobian. We consider Pade approximation through model order reduction and rational Krylov subspace. The interpolating frequencies are chosen adaptively in order to minimize the maximum error of interpolation. Two error indicator functions are compared. 
We prove an "almost always lucky failure" theorem for the case of a right-hand side that depends analytically on frequency. The operator's null space is treated by decomposing the solution into a part lying in the null space and a part orthogonal to it.
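The null-space correction above rests on a discrete Helmholtz decomposition: a gradient component is removed from the field by one Poisson-type solve. A minimal sketch of that idea, with the incidence matrix of a 4-node cycle graph standing in for the finite-element gradient operator (an illustrative assumption, not the dissertation's operators):

```python
import numpy as np

# Incidence (discrete gradient) matrix of a 4-node cycle graph; its
# transpose acts as a discrete (negative) divergence on edge fields.
G = np.array([[-1.0, 1.0, 0.0, 0.0],
              [0.0, -1.0, 1.0, 0.0],
              [0.0, 0.0, -1.0, 1.0],
              [1.0, 0.0, 0.0, -1.0]])

u = np.array([1.0, 2.0, -0.5, 0.3])          # arbitrary edge field

# Least squares on G phi = u solves the Poisson problem G^T G phi = G^T u;
# lstsq handles the constant null vector of G^T G via the pseudo-inverse.
phi, *_ = np.linalg.lstsq(G, u, rcond=None)
u_div_free = u - G @ phi                     # remainder orthogonal to all gradients

print(np.allclose(G.T @ u_div_free, 0.0))    # → True
```

The remainder is discretely divergence-free by construction, which is the property the correction step exploits.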
Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data
NASA Astrophysics Data System (ADS)
Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.
2017-10-01
The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor-product kernel and lower-bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements mean windowing and singular value decomposition (SVD) filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least-squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing in greater depth the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
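The SVD compression step for a tensor-product kernel can be sketched as follows; the relaxation grids, kernel shapes, and truncation threshold here are illustrative stand-ins, not the I2DUPEN settings:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical separable kernel K = K1 (x) K2: an inversion-recovery kernel
# in the T1 direction and an exponential (CPMG-like) kernel in the T2 direction.
t1 = np.linspace(1e-3, 3, 80)
T1 = np.logspace(-2, 0.5, 30)
K1 = 1 - 2 * np.exp(-t1[:, None] / T1[None, :])
t2 = np.linspace(1e-4, 0.5, 120)
T2 = np.logspace(-3, 0, 30)
K2 = np.exp(-t2[:, None] / T2[None, :])

# SVD filtering: keep only numerically significant singular directions.
U1, s1, _ = np.linalg.svd(K1, full_matrices=False)
U2, s2, _ = np.linalg.svd(K2, full_matrices=False)
k1 = int(np.sum(s1 / s1[0] > 1e-3))
k2 = int(np.sum(s2 / s2[0] > 1e-3))

D = rng.standard_normal((80, 120))        # stand-in for the measured 2-D data
D_small = U1[:, :k1].T @ D @ U2[:, :k2]   # compressed data, k1 x k2
print(D.shape, "->", D_small.shape)
```

Because both kernels are smooth, only a handful of singular values are significant, so the regularized fit operates on a drastically smaller data matrix.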
Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography
NASA Astrophysics Data System (ADS)
Chu, Pan; Lei, Jing
2017-11-01
Electrical capacitance tomography (ECT) is considered a powerful visualization measurement technique for parametric measurement in multiphase flow systems. The inversion task in ECT is an ill-posed inverse problem, and seeking an efficient numerical method to improve the precision of the reconstructed images is important for practical measurements. Building on the Tikhonov regularization (TR) methodology, this paper puts forward a loss function that emphasizes both the robustness of the estimation and the low-rank property of the imaging targets, converting the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for solving the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets, but also improves robustness.
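For reference, a split Bregman iteration has the following shape. This sketch applies it to a generic l1-regularized least-squares problem with a random forward operator, not to the paper's robust low-rank loss function:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((30, 60))         # hypothetical sensitivity matrix
x_true = np.zeros(60)
x_true[[3, 20]] = (2.0, -1.0)             # sparse "image"
b = A @ x_true

lam, mu = 0.1, 1.0
x = np.zeros(60)
d = np.zeros(60)                          # split variable
bb = np.zeros(60)                         # Bregman variable
M = A.T @ A + mu * np.eye(60)             # system matrix is fixed across iterations
for _ in range(300):
    # x-step: quadratic subproblem (A^T A + mu I) x = A^T b + mu (d - bb)
    x = np.linalg.solve(M, A.T @ b + mu * (d - bb))
    # d-step: soft thresholding (shrinkage)
    d = np.sign(x + bb) * np.maximum(np.abs(x + bb) - lam / mu, 0.0)
    # Bregman update
    bb = bb + x - d

print(np.flatnonzero(np.abs(x) > 0.5))    # recovers the true support {3, 20}
```

The appeal of the scheme is that the non-smooth term is isolated in a closed-form shrinkage step, while the coupled quadratic step reuses one fixed system matrix.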
Termination Proofs for String Rewriting Systems via Inverse Match-Bounds
NASA Technical Reports Server (NTRS)
Butler, Ricky (Technical Monitor); Geser, Alfons; Hofbauer, Dieter; Waldmann, Johannes
2004-01-01
Annotating a letter by a number, one can record information about its history during a reduction. A string rewriting system is called match-bounded if there is a global upper bound on these numbers. In earlier papers we established match-boundedness as a strong sufficient criterion for both termination and preservation of regular languages. We now show that string rewriting systems whose inverses (left- and right-hand sides exchanged) are match-bounded also have exceptional properties, but slightly different ones. Inverse match-bounded systems effectively preserve context-free languages; their sets of normalized strings and their sets of immortal strings are effectively regular. These sets of strings can be used to decide the normalization, termination, and uniform termination problems of inverse match-bounded systems. We also show that the termination problem is decidable in linear time, and that a certain strong reachability problem is decidable, thus solving two open problems of McNaughton's.
The inverse Wiener polarity index problem for chemical trees.
Du, Zhibin; Ali, Akbar
2018-01-01
The Wiener polarity number (nowadays known as the Wiener polarity index and usually denoted by Wp) was devised by the chemist Harold Wiener for predicting the boiling points of alkanes. The index Wp of chemical trees (chemical graphs representing alkanes) is defined as the number of unordered pairs of vertices (carbon atoms) at distance 3. Inverse problems based on some well-known topological indices have already been addressed in the literature. The solution of such inverse problems may help speed up the discovery of lead compounds having the desired properties. This paper is devoted to solving a stronger version of the inverse problem based on the Wiener polarity index for chemical trees. More precisely, it is proved that for every integer t ∈ {n - 3, n - 2,…,3n - 16, 3n - 15}, n ≥ 6, there exists an n-vertex chemical tree T such that Wp(T) = t.
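The quantity Wp is straightforward to compute directly from the definition. A small sketch (depth-limited BFS; not taken from the paper) that also illustrates the lower end t = n - 3 of the theorem's range on a path graph:

```python
from collections import deque

def wiener_polarity(adj):
    """Wp = number of unordered vertex pairs at distance exactly 3."""
    count = 0
    for src in adj:                      # BFS from every vertex, depth-limited
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            if dist[u] == 3:             # no need to search deeper than 3
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        count += sum(1 for d in dist.values() if d == 3)
    return count // 2                    # each unordered pair was counted twice

# Path on 6 vertices (the n-hexane skeleton): pairs at distance 3 are
# (0,3), (1,4), (2,5), so Wp = 3 = n - 3.
path6 = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
print(wiener_polarity(path6))  # → 3
```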
Assimilating data into open ocean tidal models
NASA Astrophysics Data System (ADS)
Kivman, Gennady A.
The problem of deriving tidal fields from observations is ill-posed: because every practically available data set is incomplete and imperfect, an infinitely large number of allowable solutions fit the data within measurement errors. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large-scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that all of them (basis function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as applications of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.
The boundary element method applied to 3D magneto-electro-elastic dynamic problems
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Markov, I. P.; Kuznetsov, Iu A.
2017-11-01
Due to their coupling properties, magneto-electro-elastic materials have a wide range of applications. They exhibit general anisotropic behaviour. Three-dimensional transient analyses of magneto-electro-elastic solids can hardly be found in the literature. A 3D direct boundary element formulation based on the weakly singular boundary integral equations in the Laplace domain is presented in this work for solving dynamic linear magneto-electro-elastic problems. Integral expressions of the three-dimensional fundamental solutions are employed. Spatial discretization is based on a collocation method with mixed boundary elements. The convolution quadrature method is used as a numerical inverse Laplace transform scheme to obtain time-domain solutions. Numerical examples are provided to illustrate the capability of the proposed approach to treat highly dynamic problems.
NASA Astrophysics Data System (ADS)
Yu, Jiang-Bo; Zhao, Yan; Wu, Yu-Qiang
2014-04-01
This article considers the global robust output regulation problem via output feedback for a class of cascaded nonlinear systems with input-to-state stable inverse dynamics. The system uncertainties depend not only on the measured output but also all the unmeasurable states. By introducing an internal model, the output regulation problem is converted into a stabilisation problem for an appropriately augmented system. The designed dynamic controller could achieve the global asymptotic tracking control for a class of time-varying reference signals for the system output while keeping all other closed-loop signals bounded. It is of interest to note that the developed control approach can be applied to the speed tracking control of the fan speed control system. The simulation results demonstrate its effectiveness.
NASA Astrophysics Data System (ADS)
Fargier, Yannick; Dore, Ludovic; Antoine, Raphael; Palma Lopes, Sérgio; Fauchard, Cyrille
2016-04-01
The extraction of subsurface materials is a key element of a nation's economy. However, the natural degradation of underground quarries is a major issue from an economic and public-safety point of view. Consequently, quarry stakeholders require relevant tools to define the hazards associated with these structures. Safety assessment methods for underground quarries are recent and mainly based on rock physical properties. This kind of method relies on an assumption of homogeneity of the pillar's internal properties that can cause an underestimation of the risk. Electrical Resistivity Imaging (ERI) is a widely used method that possesses two advantages for overcoming this limitation. The first is to provide a qualitative understanding for the detection and monitoring of anomalies in the pillar body (e.g. faults). The second is to provide a quantitative description of the electrical resistivity distribution inside the pillar. This quantitative description can be interpreted with constitutive laws to support decision making (water content decreases the mechanical resistance of chalk). However, conventional 2D and 3D imaging techniques are usually applied to flat-surface surveys or to surfaces with moderate topography. A 3D inversion of more complex media (the case of the pillar) requires a full consideration of the geometry, which had never been taken into account before. The photogrammetric technique presents a cost-effective solution to obtain an accurate description of the external geometry of a complex medium. However, this method has never been fully coupled with a geophysical method to improve the inversion process. Consequently, we developed a complete procedure showing that photogrammetric and ERI tools can be efficiently combined to assess a complex 3D structure. This procedure includes, in a first part, a photogrammetric survey, a processing stage with open source software, and a post-processing stage finalizing a 3D surface model.
The second part requires the production of a complete 3D mesh of the previous surface model in order to perform forward modeling of the geo-electrical problem. To solve the inverse problem and obtain a 3D resistivity distribution we use a double-grid method associated with a regularized Gauss-Newton inversion scheme. We applied this procedure to a synthetic case to demonstrate the impact of the geometry on the inversion result. This study shows that geometrical information between electrodes is necessary to finely reconstruct the "true model". Finally, we apply the methodology to a real underground quarry pillar, involving one photogrammetric survey and three ERI surveys. The results show that the procedure can greatly improve the reconstruction and avoid artifacts due to strong geometry variations.
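A regularized Gauss-Newton scheme of the kind mentioned above iterates m ← m + (JᵀJ + λLᵀL)⁻¹(Jᵀ(d − F(m)) − λLᵀLm). A sketch on a toy two-parameter forward model (an illustrative stand-in, not the geo-electrical operator):

```python
import numpy as np

def F(m):                                 # toy nonlinear forward model
    return np.array([m[0] ** 2 + m[1], m[0] * m[1], m[1] ** 2])

def J(m):                                 # its Jacobian, dF/dm
    return np.array([[2 * m[0], 1.0],
                     [m[1], m[0]],
                     [0.0, 2 * m[1]]])

d = F(np.array([1.5, 0.5]))               # synthetic data from true model (1.5, 0.5)
m = np.array([1.0, 1.0])                  # starting model
lam, L = 1e-6, np.eye(2)                  # weak Tikhonov regularization

for _ in range(20):
    r = d - F(m)                          # data residual
    A = J(m).T @ J(m) + lam * L.T @ L     # regularized normal matrix
    m = m + np.linalg.solve(A, J(m).T @ r - lam * L.T @ L @ m)

print(m)  # converges to approximately [1.5, 0.5]
```

In the paper's setting the Jacobian comes from the geo-electrical forward solver and L encodes the model regularization, but the update has this same structure.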
Liu, Chun; Kroll, Andreas
2016-01-01
Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems. It is a constrained combinatorial optimization problem that becomes more complex in the case of cooperative tasks, because they introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm can obtain better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than the other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than the other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
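The four mutation operators compared above act on a task-sequence permutation. A minimal sketch of their conventional definitions (the representations here are assumptions, not the paper's encoding, which also handles robot assignment):

```python
def swap(seq, i, j):                      # exchange two tasks
    s = seq[:]
    s[i], s[j] = s[j], s[i]
    return s

def insertion(seq, i, j):                 # remove task i, reinsert at position j
    s = seq[:]
    s.insert(j, s.pop(i))
    return s

def inversion(seq, i, j):                 # reverse the subsequence i..j
    s = seq[:]
    s[i:j + 1] = reversed(s[i:j + 1])
    return s

def displacement(seq, i, j, k):           # move the block i..j to position k
    s = seq[:]
    block = s[i:j + 1]
    del s[i:j + 1]
    return s[:k] + block + s[k:]

tasks = [0, 1, 2, 3, 4]
print(swap(tasks, 1, 3))            # → [0, 3, 2, 1, 4]
print(inversion(tasks, 1, 3))       # → [0, 3, 2, 1, 4]
print(insertion(tasks, 0, 2))       # → [1, 2, 0, 3, 4]
print(displacement(tasks, 0, 1, 2)) # → [2, 3, 0, 1, 4]
```

In a crossover-free algorithm such as the one described, these operators are the only source of variation, which is why their choice dominates performance.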
NASA Astrophysics Data System (ADS)
Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas
2018-06-01
In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is not available in general, we show that, for a rather general linear inverse problem setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.
Convex blind image deconvolution with inverse filtering
NASA Astrophysics Data System (ADS)
Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong
2018-03-01
Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
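The instability of a raw inverse filter, and the effect of regularizing it, can be sketched in one dimension. This is a generic damped (Wiener-type) inverse filter on a synthetic signal, not the paper's star-norm model:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 256
x = np.zeros(n)
x[60:120] = 1.0                           # simple 1-D "image": a block

h = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
h /= h.sum()                              # Gaussian blur kernel
H = np.fft.fft(np.fft.ifftshift(h))       # centered at 0 so convolution has no shift
y = np.real(np.fft.ifft(np.fft.fft(x) * H)) + 1e-3 * rng.standard_normal(n)

# Raw inverse filter 1/H blows up where |H| is small; damping by eps
# trades incomplete restoration for noise control.
Y = np.fft.fft(y)
eps = 1e-2
x_hat = np.real(np.fft.ifft(Y * np.conj(H) / (np.abs(H) ** 2 + eps)))

print(np.linalg.norm(x_hat - x), "<", np.linalg.norm(y - x))
```

The damped inverse filter recovers a sharper estimate than the blurred data at every frequency, which is the behavior the paper's regularized inverse-filter formulation refines with the star norm and TV terms.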
NASA Astrophysics Data System (ADS)
An, M.; Assumpcao, M.
2003-12-01
The joint inversion of receiver functions and surface waves is an effective way to diminish the influence of the strong trade-off among parameters and the different sensitivities to the model parameters in their respective inversions, but the inversion problem becomes more complex. Multi-objective problems can be much more complicated than single-objective inversion in model selection and optimization. If several conflicting objectives are involved, models can be ordered only partially. In this case, Pareto-optimal preference should be used to select solutions. On the other hand, an inversion that retrieves only a few optimal solutions cannot deal properly with the strong trade-off between parameters, the uncertainties in the observations, the geophysical complexities, and even the shortcomings of the inversion technique. The effective way is to retrieve the geophysical information statistically from many acceptable solutions, which requires more competent global algorithms. Recently proposed competent genetic algorithms are far superior to the conventional genetic algorithm and can solve hard problems quickly, reliably, and accurately. In this work we used one of these competent genetic algorithms, the Bayesian Optimization Algorithm, as the main inverse procedure. This algorithm uses Bayesian networks to extract inherited information and can use Pareto-optimal preference in the inversion. With this algorithm, the lithospheric structure of the Paraná basin is inverted to fit both the observations of inter-station surface wave dispersion and receiver functions.
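Pareto-optimal preference over two misfits can be sketched in a few lines; the misfit pairs below are hypothetical placeholders, not real dispersion or receiver-function values:

```python
def pareto_front(points):
    """Keep points not dominated by any other point (lower is better in both)."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# (receiver-function misfit, dispersion misfit) for five candidate models
misfits = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (4.0, 4.0), (2.5, 2.5)]
print(pareto_front(misfits))  # → [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0)]
```

Only a partial order exists: the three surviving models are mutually incomparable, which is exactly why a statistical treatment of many acceptable solutions is preferred over picking a single "best" model.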
A musculoskeletal shoulder model based on pseudo-inverse and null-space optimization.
Terrier, Alexandre; Aeberhard, Martin; Michellod, Yvan; Mullhaupt, Philippe; Gillet, Denis; Farron, Alain; Pioletti, Dominique P
2010-11-01
The goal of the present work was to assess the feasibility of using a pseudo-inverse and null-space optimization approach in modeling shoulder biomechanics. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted of the arm, and the external forces were the arm weight, 6 scapulo-humeral muscles, and the reaction at the glenohumeral joint, which was considered a spherical joint. Muscle wrapping was considered around the humeral head, which was assumed spherical. The dynamical equations were solved in a Lagrangian approach. The mathematical redundancy of the mechanical system was resolved in two steps: a pseudo-inverse optimization to minimize the square of the muscle stress, and a null-space optimization to restrict the muscle forces to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were efficiently solved by the proposed method. The prediction of muscle moment arms was consistent with cadaveric measurements, and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrated that the developed algorithm has great potential for more complex musculoskeletal modeling of the shoulder joint. In particular, it could be further applied to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa. Copyright © 2010 IPEM. Published by Elsevier Ltd. All rights reserved.
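The two-step redundancy resolution can be illustrated on a toy force-sharing problem (one moment equation, three "muscles"; not the authors' implementation): the pseudo-inverse gives the minimum-norm solution, and any motion within the null space of the constraint matrix leaves the equilibrium untouched, so it can be used to push forces toward physiological limits:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0]])           # one moment-balance equation
b = np.array([3.0])                       # required net moment

# Step 1: pseudo-inverse gives the minimum-norm force distribution.
f_min = np.linalg.pinv(A) @ b             # [1, 1, 1]

# Step 2: project a desired adjustment onto the null space of A, so the
# equality constraint A f = b is preserved exactly.
N = np.eye(3) - np.linalg.pinv(A) @ A     # null-space projector
f_target = np.array([0.0, 1.0, 2.0])      # e.g. respecting per-muscle bounds
f = f_min + N @ (f_target - f_min)

print(np.allclose(A @ f, b))              # → True: equilibrium still satisfied
```

In the shoulder model, A is the matrix of muscle moment arms and the null-space step enforces the physiological force bounds; the structure of the computation is the same.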
Numerical approach for ECT by using boundary element method with Laplace transform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enokizono, M.; Todaka, T.; Shibao, K.
1997-03-01
This paper presents an inverse analysis using the BEM with a Laplace transform. The method is applied to a simple problem in eddy current testing (ECT). Crack shapes in a conductive specimen are estimated from distributions of the transient eddy current on its sensing surface and of the magnetic flux density in the liftoff space. Because the transient behavior includes information on various frequency components, the method is applicable to the shape estimation of comparatively small cracks.
a 2d Model of Ultrasonic Testing for Cracks Near a Nonplanar Surface
NASA Astrophysics Data System (ADS)
Westlund, Jonathan; Boström, Anders
2010-02-01
2D P-SV elastic wave scattering by a crack near a non-planar surface is investigated. The wave scattering problem is solved in the frequency domain using a combination of the boundary element method (BEM) for the back surface displacement and a Fourier series expansion of the crack opening displacement (COD). The model accounts for the action of the transmitting and receiving ultrasonic contact probes, and the time traces are obtained by applying an inverse temporal Fourier transform.
Aspects of the inverse problem for the Toda chain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozlowski, K. K., E-mail: karol.kozlowski@u-bourgogne.fr
We generalize Babelon's approach to equations in dual variables so as to be able to treat new types of operators which we build out of the sub-constituents of the model's monodromy matrix. Further, we also apply Sklyanin's recent monodromy matrix identities so as to obtain equations in dual variables for yet other operators. The schemes discussed in this paper appear to be universal and thus, in principle, applicable to many models solvable through the quantum separation of variables.
Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H
2016-05-01
The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.
An ambiguity of information content and error in an ill-posed satellite inversion
NASA Astrophysics Data System (ADS)
Koner, Prabhat
According to Rodgers (2000, the stochastic approach), the averaging kernel (AK) is the representational matrix for understanding the information content of a stochastic inversion. In the deterministic approach this is referred to as the model resolution matrix (MRM; Menke, 1989). The analysis of the AK/MRM can only give some understanding of how much regularization is imposed on the inverse problem. The trace of the AK/MRM matrix is the so-called degree of freedom for signal (DFS; stochastic) or degree of freedom in retrieval (DFR; deterministic). There is no physical/mathematical explanation in the literature of why the trace of the matrix is a valid form for calculating this quantity. We will present an ambiguity between information and error using a real-life problem: SST retrieval from GOES13. The stochastic information content calculation is based on a linear assumption. The validity of such mathematics in satellite inversion is questionable because the underlying problem involves nonlinear radiative transfer and is an ill-conditioned inverse problem. References: Menke, W., 1989: Geophysical Data Analysis: Discrete Inverse Theory. San Diego: Academic Press. Rodgers, C.D., 2000: Inverse Methods for Atmospheric Sounding: Theory and Practice. Singapore: World Scientific.
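As a concrete reading of the trace quantity discussed above: for a Tikhonov-regularized retrieval x̂ = (KᵀK + λI)⁻¹Kᵀy, the averaging kernel is A = (KᵀK + λI)⁻¹KᵀK and trace(A) is the DFS/DFR. A minimal sketch with a random stand-in forward operator (not a real radiative transfer Jacobian):

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((8, 5))           # 8 channels, 5 state elements

def averaging_kernel(K, lam):
    """A = (K^T K + lam I)^-1 K^T K for Tikhonov-regularized retrieval."""
    KtK = K.T @ K
    return np.linalg.solve(KtK + lam * np.eye(K.shape[1]), KtK)

for lam in (1e-6, 1.0, 1e6):
    print(lam, np.trace(averaging_kernel(K, lam)))
# The trace falls from ~5 (negligible regularization, A ~ I) toward 0 as
# lam grows: stronger regularization leaves fewer degrees of freedom.
```

This makes the point of the abstract concrete: the trace measures how strongly the retrieval is constrained by the data rather than by the regularization, but by itself it says nothing about retrieval error.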
IPDO-2007: Inverse Problems, Design and Optimization Symposium
2007-08-01
International Symposium on Inverse Problems, Design and Optimization (IPDO-2007), eds. Kanevce, G. H., Kanevce, Lj. P., and Mitrevski, V. B. Contributors include Gligor Kanevce, Ljubica Kanevce, Vangelce Mitrevski, Igor Andreevski, and George Dulikravich.
Stochastic Gabor reflectivity and acoustic impedance inversion
NASA Astrophysics Data System (ADS)
Hariri Naghadeh, Diako; Morley, Christopher Keith; Ferguson, Angus John
2018-02-01
To delineate subsurface lithology and estimate the petrophysical properties of a reservoir, it is possible to use acoustic impedance (AI), which is the result of seismic inversion. To convert amplitude to AI, it is vital to remove wavelet effects from the seismic signal in order to obtain a reflection series, and subsequently to transform those reflections to AI. To carry out seismic inversion correctly it is important not to assume that the seismic signal is stationary; however, all stationary deconvolution methods are designed under that assumption. To increase temporal resolution and interpretability, amplitude compensation and phase correction are inevitable; these are pitfalls of stationary reflectivity inversion. Although stationary reflectivity inversion methods try to estimate the reflectivity series, because of incorrect assumptions their estimates will not be correct, though they may still be useful. Converting those reflection series to AI, merged with a low-frequency initial model, can still help. The aim of this study was to apply non-stationary deconvolution to eliminate time-variant wavelet effects from the signal and to convert the estimated reflection series to absolute AI by taking a bias from well logs. To carry out this aim, stochastic Gabor inversion in the time domain was used. The Gabor transform provided the signal's time-frequency analysis and estimated wavelet properties in different windows. Dealing with different time windows gave the ability to create a time-variant kernel matrix, which was used to remove these effects from the seismic data. The result is a reflection series that does not follow the stationarity assumption. The subsequent step was to convert those reflections to AI using well information. Synthetic and real data sets were used to show the ability of the introduced method. The results highlight that the time cost of the seismic inversion is negligible compared with general Gabor inversion in the frequency domain.
Also, obtaining the bias helps the method estimate reliable AI. To assess the effect of random noise on deterministic and stochastic inversion results, a stationary noisy trace with a signal-to-noise ratio equal to 2 was used. The results highlight the inability of deterministic inversion to deal with a noisy data set, even when using a high number of regularization parameters. Also, despite the low signal level, stochastic Gabor inversion not only estimates the wavelet's properties correctly but also, because of the bias from well logs, yields an inversion result very close to the real AI. Comparing deterministic and the introduced inversion results on a real data set shows that the low-resolution results of deterministic inversion, especially in the deeper parts of the seismic sections, create significant reliability problems for seismic prospects; this pitfall is solved completely using stochastic Gabor inversion. The estimated AI using Gabor inversion in the time domain is much better and faster than general Gabor inversion in the frequency domain, owing to the extra number of windows the latter requires to analyze the time-frequency information and to the amount of temporal increment between windows. In contrast, stochastic Gabor inversion can estimate trustworthy physical properties close to the real characteristics. Applied to a real data set, it made it possible to detect the direction of a volcanic intrusion and to delineate the lithology distribution along the fan. Comparing the inversion results highlights the efficiency of stochastic Gabor inversion in delineating lateral lithology changes, because of the improved frequency content and zero phasing of the final inversion volume.
A preprocessing strategy for helioseismic inversions
NASA Astrophysics Data System (ADS)
Christensen-Dalsgaard, J.; Thompson, M. J.
1993-05-01
Helioseismic inversion in general involves considerable computational expense, due to the large number of modes that is typically considered. This is true in particular of the widely used optimally localized averages (OLA) inversion methods, which require the inversion of one or more matrices whose order is the number of modes in the set. However, the number of practically independent pieces of information that a large helioseismic mode set contains is very much less than the number of modes, suggesting that the set might first be reduced before the expensive inversion is performed. We demonstrate with a model problem that by first performing a singular value decomposition the original problem may be transformed into a much smaller one, reducing considerably the cost of the OLA inversion and with no significant loss of information.
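The model-problem idea can be sketched with a redundant kernel matrix; the Gaussian kernels below are hypothetical stand-ins for mode averaging kernels:

```python
import numpy as np

rng = np.random.default_rng(1)
n_modes, n_grid = 400, 60
# Smooth, highly redundant kernels: each row is a shifted Gaussian, so the
# rows carry far fewer independent pieces of information than n_modes.
centers = rng.uniform(0, 1, n_modes)
x = np.linspace(0, 1, n_grid)
G = np.exp(-((x[None, :] - centers[:, None]) / 0.2) ** 2)

U, s, Vt = np.linalg.svd(G, full_matrices=False)
k = int(np.sum(s / s[0] > 1e-8))          # numerically significant values
print(n_modes, "modes ->", k, "effective pieces of information")

# Projecting the data onto the first k left singular vectors yields an
# equivalent inverse problem of order k instead of n_modes, so the OLA
# matrix inversions act on a much smaller system.
```

This is the preprocessing gain described above: the expensive OLA step scales with the reduced order k, not with the size of the original mode set.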
NASA Astrophysics Data System (ADS)
Jiang, Daijun; Li, Zhiyuan; Liu, Yikan; Yamamoto, Masahiro
2017-05-01
In this paper, we first establish a weak unique continuation property for time-fractional diffusion-advection equations. The proof is mainly based on the Laplace transform and the unique continuation properties for elliptic and parabolic equations. The result is weaker than its parabolic counterpart in the sense that we additionally impose the homogeneous boundary condition. As a direct application, we prove the uniqueness for an inverse problem on determining the spatial component in the source term by interior measurements. Numerically, we reformulate our inverse source problem as an optimization problem, and propose an iterative thresholding algorithm. Finally, several numerical experiments are presented to show the accuracy and efficiency of the algorithm.
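An iterative thresholding scheme of the general kind proposed alternates a gradient step on the data misfit with a shrinkage step. This sketch is a plain ISTA on a generic sparse linear problem, not the fractional-diffusion source solver:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((40, 100))        # stand-in for the linearized forward map
x_true = np.zeros(100)
x_true[[5, 37, 80]] = (1.0, -2.0, 1.5)    # sparse "source"
b = A @ x_true                            # interior measurements

lam = 0.05                                # sparsity weight
tau = 1.0 / np.linalg.norm(A, 2) ** 2     # step size 1 / ||A||^2
x = np.zeros(100)
for _ in range(2000):
    z = x + tau * A.T @ (b - A @ x)       # gradient step on the misfit
    x = np.sign(z) * np.maximum(np.abs(z) - tau * lam, 0.0)  # soft threshold

print(np.flatnonzero(np.abs(x) > 0.1))    # recovers the support {5, 37, 80}
```

Each iteration only needs the forward map and its adjoint, which is what makes thresholding algorithms attractive when the forward operator is an expensive PDE solve.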
Comparing multiple statistical methods for inverse prediction in nuclear forensics applications
Lewis, John R.; Zhang, Adah; Anderson-Cook, Christine Michaela
2017-10-29
Forensic science seeks to predict source characteristics using measured observables. Statistically, this objective can be thought of as an inverse problem where interest is in the unknown source characteristics or factors (X) of some underlying causal model producing the observables or responses (Y = g(X) + error). This paper reviews several statistical methods for use in inverse problems and demonstrates that comparing results from multiple methods can be used to assess predictive capability. Motivation for assessing inverse predictions comes from the desired application to historical and future experiments involving nuclear material production for forensics research, in which inverse predictions, along with an assessment of predictive capability, are desired.
2015-12-15
Distinguishing an object of interest from innocuous items is the main problem that the UXO community is currently facing. This inverse problem demands fast and accurate representation of ...
Investigation of Inversion Polymorphisms in the Human Genome Using Principal Components Analysis
Ma, Jianzhong; Amos, Christopher I.
2012-01-01
Despite the significant advances made over the last few years in mapping inversions with the advent of paired-end sequencing approaches, our understanding of the prevalence and spectrum of inversions in the human genome has lagged behind other types of structural variants, mainly due to the lack of a cost-efficient method applicable to large-scale samples. We propose a novel method based on principal components analysis (PCA) to characterize inversion polymorphisms using high-density SNP genotype data. Our method applies to non-recurrent inversions for which recombination between the inverted and non-inverted segments in inversion heterozygotes is suppressed due to the loss of unbalanced gametes. Inside such an inversion region, an effect similar to population substructure is thus created: two distinct "populations" of inversion homozygotes of different orientations and their 1:1 admixture, namely the inversion heterozygotes. This kind of substructure can be readily detected by performing PCA locally in the inversion regions. Using simulations, we demonstrated that the proposed method can be used to detect and genotype inversion polymorphisms using unphased genotype data. We applied our method to the phase III HapMap data and inferred the inversion genotypes of known inversion polymorphisms at 8p23.1 and 17q21.31. These inversion genotypes were validated by comparing with literature results and by checking Mendelian consistency using the family data whenever available. Based on the PCA approach, we also performed a preliminary genome-wide scan for inversions using the HapMap data, which resulted in 2040 candidate inversions, 169 of which overlapped with previously reported inversions. Our method can be readily applied to the abundant SNP data, and is expected to play an important role in developing human genome maps of inversions and exploring associations between inversions and susceptibility to diseases. PMID:22808122
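The substructure effect described above can be reproduced on simulated genotypes (a toy construction, not HapMap data): homozygotes of the two orientations and their heterozygotes form three clusters along the first principal component, with heterozygotes exactly midway:

```python
import numpy as np

rng = np.random.default_rng(2)
n_snps = 50
hapA = rng.integers(0, 2, n_snps)         # haplotype of the standard orientation
hapB = rng.integers(0, 2, n_snps)         # haplotype of the inverted orientation

# 40 AA homozygotes, 40 AB heterozygotes, 40 BB homozygotes (allele counts).
genotypes = np.array(
    [hapA + hapA] * 40 + [hapA + hapB] * 40 + [hapB + hapB] * 40, float)
G = genotypes - genotypes.mean(axis=0)    # center columns

# First principal component via SVD of the centered genotype matrix.
U, s, Vt = np.linalg.svd(G, full_matrices=False)
pc1 = U[:, 0] * s[0]

# Three clusters along PC1: the two homozygote groups at opposite ends,
# heterozygotes in the middle.
print(pc1[:40].mean(), pc1[40:80].mean(), pc1[80:].mean())
```

With equal group sizes the heterozygote rows center to zero exactly, so the 1:1 admixture sits at the origin of PC1; this is the signature used to genotype the inversion.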
Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry
Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.
2014-01-01
Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However, in shales, substantial hydrogen content is associated with both solid and fluid phases, and signals from both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analyzing NMR relaxometry results, is applied to data containing Gaussian decays, it can produce physically unrealistic responses such as signal or porosity overcall and relaxation times too short to be determined with the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and to measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method, and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in materials, medical and food sciences.
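A minimal sketch of fitting Gaussian and exponential decay components simultaneously, on a synthetic two-component signal. The full SGE method inverts for relaxation-time distributions; this toy fits a single pair of components, and all amplitudes, times and noise levels are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def sge(t, a_g, t2g, a_e, t2e):
    """Simultaneous Gaussian (solid-like) + exponential (fluid-like) decay."""
    return a_g * np.exp(-(t / t2g) ** 2) + a_e * np.exp(-t / t2e)

# Synthetic relaxation decay: fast Gaussian solid signal, slower fluid signal
t = np.linspace(0.01, 50.0, 400)                 # acquisition times (ms, assumed)
true = (1.0, 0.5, 0.6, 10.0)                     # a_g, T2g, a_e, T2e
rng = np.random.default_rng(1)
signal = sge(t, *true) + rng.normal(0.0, 0.005, t.size)

# Nonlinear least squares recovers both components in one pass
popt, _ = curve_fit(sge, t, signal, p0=(0.5, 1.0, 0.5, 5.0), bounds=(0, np.inf))
```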
NASA Technical Reports Server (NTRS)
1973-01-01
Research consisted of computations toward the solution of the problem of the current distribution on a cylindrical antenna in a magnetoplasma. The case of an antenna parallel to the applied magnetic field was investigated. A systematic method of asymptotic expansion was found which simplifies the solution in the general case by giving the field of a dipole even at relatively short range. Some useful properties of the dispersion surfaces in a lossy medium have also been found. A laboratory experiment was directed toward evaluating nonlinear effects, such as those due to power level, bias voltage, and electron heating. The problem of reflection and transmission of waves in an electron-heated plasma was treated theoretically. The profile inversion problem has been pursued. Some results are very encouraging; however, the general question of the stability of the solution remains unsolved.
NASA Astrophysics Data System (ADS)
Hetmaniok, Edyta; Hristov, Jordan; Słota, Damian; Zielonka, Adam
2017-05-01
The paper presents a procedure for solving the inverse problem of binary alloy solidification in a two-dimensional space. This is a continuation of previous works of the authors investigating a similar problem in a one-dimensional domain. The goal of the problem is to identify the heat transfer coefficient on the boundary of the region and to reconstruct the temperature distribution inside the considered region when temperature measurements at selected points of the alloy are known. The mathematical model of the problem is based on the heat conduction equation with the substitute thermal capacity and with the liquidus and solidus temperatures varying depending on the concentration of the alloy component. The Scheil model is used to describe this concentration. The investigated procedure also involves a parallelized Ant Colony Optimization (ACO) algorithm applied to minimize a functional expressing the error of the approximate solution.
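A simplified, continuous-domain sketch in the spirit of Ant Colony Optimization (the ACO_R flavour for real-valued parameters). The quadratic misfit below is only a stand-in for the error functional of the solidification problem, and no parallelization is shown:

```python
import numpy as np

def aco_minimize(f, dim, n_ants=20, archive=10, iters=200, q=0.2, xi=0.85, seed=0):
    """ACO_R-style continuous minimizer: sample ants around an elitist archive."""
    rng = np.random.default_rng(seed)
    sols = rng.uniform(-5.0, 5.0, (archive, dim))
    vals = np.array([f(s) for s in sols])
    for _ in range(iters):
        order = np.argsort(vals)
        sols, vals = sols[order], vals[order]
        ranks = np.arange(archive)
        w = np.exp(-ranks ** 2 / (2.0 * (q * archive) ** 2))
        w /= w.sum()                                    # rank-based weights
        ants = np.empty((n_ants, dim))
        for a in range(n_ants):
            k = rng.choice(archive, p=w)                # pick a guiding solution
            sigma = xi * np.abs(sols - sols[k]).mean(axis=0) + 1e-12
            ants[a] = rng.normal(sols[k], sigma)        # sample a new candidate
        avals = np.array([f(s) for s in ants])
        both_s = np.vstack([sols, ants])
        both_v = np.concatenate([vals, avals])
        keep = np.argsort(both_v)[:archive]             # elitist archive update
        sols, vals = both_s[keep], both_v[keep]
    return sols[0], vals[0]

# Stand-in for the error functional: distance to a known optimum
target = np.array([2.0, -1.0])
best, fbest = aco_minimize(lambda x: np.sum((x - target) ** 2), dim=2)
```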
Numerical optimization in Hilbert space using inexact function and gradient evaluations
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
Trust region algorithms provide a robust iterative technique for solving nonconvex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high-accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite-dimensional problems normally seen in the trust region literature. The conditions concerning allowable error are remarkably relaxed: relative errors in the gradient are permissible, and the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.
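A minimal trust-region iteration with Cauchy-point steps and a deliberately inexact gradient (20% relative error), illustrating that the method still converges under such errors. The objective and all constants are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(9)

def f(x):
    """Smooth stand-in objective (an expensive PDE misfit in practice)."""
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 0.5) ** 2

B = np.diag([2.0, 20.0])                   # quadratic model Hessian (exact here)

def grad_inexact(x, rel_err=0.2):
    """Exact gradient corrupted by a bounded *relative* error."""
    g = np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 0.5)])
    e = rng.normal(size=2)
    return g + rel_err * np.linalg.norm(g) * e / np.linalg.norm(e)

x, delta = np.array([4.0, 3.0]), 1.0
for _ in range(300):
    g = grad_inexact(x)
    gBg = g @ B @ g
    tau = min(1.0, np.linalg.norm(g) ** 3 / (delta * gBg))
    p = -tau * delta / np.linalg.norm(g) * g          # Cauchy-point step
    pred = -(g @ p + 0.5 * p @ B @ p)                 # predicted model decrease
    rho = (f(x) - f(x + p)) / pred                    # agreement ratio
    if rho > 0.1:
        x = x + p                                     # accept the step
    if rho < 0.25:
        delta *= 0.5                                  # shrink trust region
    elif rho > 0.75:
        delta = min(2.0 * delta, 10.0)                # expand trust region
```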
Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model
NASA Astrophysics Data System (ADS)
Mejer Hansen, Thomas
2017-04-01
Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, such as that based on geostatistics, and complex nonlinear forward physical models can be considered. In practice, however, these methods can be associated with huge computational costs that limit their application. This is not least due to the computational requirements of solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error, which is quantified probabilistically so that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of cross-hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using nonlinear Monte Carlo sampling techniques.
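The surrogate idea can be sketched with a toy 1D forward function and a tiny network; the forward function, network size and training schedule below are illustrative assumptions, not the paper's full-waveform setup. The key step is the last one: the surrogate's error is characterized probabilistically so it can be folded into the data-noise covariance during sampling:

```python
import numpy as np

rng = np.random.default_rng(2)

def forward(m):
    """Stand-in for an expensive forward model (e.g. FDTD plus traveltime picking)."""
    return m + 0.3 * np.sin(3.0 * m)

# Training pairs from the expensive model (computed once, offline)
m_train = rng.uniform(0.0, 1.0, 200)
d_train = forward(m_train)

# Tiny one-hidden-layer network trained by full-batch gradient descent
H, lr = 20, 0.1
W1 = rng.normal(0.0, 1.0, H); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, H); b2 = 0.0
for _ in range(5000):
    a = np.tanh(np.outer(m_train, W1) + b1)          # hidden activations (200, H)
    err = a @ W2 + b2 - d_train
    dW2 = a.T @ err / err.size; db2 = err.mean()
    da = np.outer(err, W2) * (1.0 - a ** 2)          # backprop through tanh
    dW1 = da.T @ m_train / err.size; db1 = da.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

def surrogate(m):
    return np.tanh(np.outer(np.atleast_1d(m), W1) + b1) @ W2 + b2

# Quantify the modeling error on held-out models; during Monte Carlo
# inversion this variance is added to the data-noise covariance.
m_test = rng.uniform(0.0, 1.0, 500)
resid = surrogate(m_test) - forward(m_test)
mu_model, sigma_model = resid.mean(), resid.std()
```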
Evaluation of concrete cover by surface wave technique: Identification procedure
NASA Astrophysics Data System (ADS)
Piwakowski, Bogdan; Kaczmarek, Mariusz; Safinowski, Paweł
2012-05-01
Concrete cover degradation is induced by aggressive environmental agents such as moisture, chemicals or temperature variations. Due to degradation, a thin surface layer (a few millimeters thick) usually has slightly higher porosity than the deeper, sound material. The non-destructive evaluation of concrete cover is vital to monitor the integrity of concrete structures and prevent their irreversible damage. In this paper, the methodology of the classical technique for ground structure recovery called Multichannel Analysis of Surface Waves is discussed as an NDT tool in the civil engineering domain for characterizing the concrete cover. In order to obtain the velocity as a function of depth, the dispersion of surface waves is used as input for solving the inverse problem. The paper describes the inversion procedure and provides a practical example of the use of the developed system.
A multi-frequency iterative imaging method for discontinuous inverse medium problem
NASA Astrophysics Data System (ADS)
Zhang, Lei; Feng, Lixin
2018-06-01
The inverse medium problem with a discontinuous refractive index is a challenging kind of inverse problem. We employ primal-dual theory and fast solution of integral equations, and propose a new iterative imaging method. The selection criterion for the regularization parameter is given by the method of generalized cross-validation. Based on multi-frequency measurements of the scattered field, a recursive linearization algorithm is presented, proceeding with respect to frequency from low to high. We also discuss the initial-guess selection strategy based on semi-analytical approaches. Numerical experiments are presented to show the effectiveness of the proposed method.
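Generalized cross-validation for picking a regularization parameter can be sketched compactly on a synthetic ill-conditioned linear system, a stand-in for one linearized step of the recursive algorithm; the spectrum, noise level and Tikhonov filter are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ill-conditioned linear system A x = b
n = 50
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = 10.0 ** np.linspace(0, -8, n)                # rapidly decaying spectrum
A = U @ np.diag(s) @ V.T
x_true = V[:, 0] + 0.5 * V[:, 1]
b = A @ x_true + rng.normal(0.0, 1e-6, n)

beta = U.T @ b                                   # data in the SVD basis

def gcv(lam):
    """Generalized cross-validation score for Tikhonov parameter lam."""
    f = s ** 2 / (s ** 2 + lam ** 2)             # Tikhonov filter factors
    resid = np.sum(((1.0 - f) * beta) ** 2)
    trace = n - np.sum(f)                        # trace(I - A A_reg^+)
    return resid / trace ** 2

lams = 10.0 ** np.linspace(-10, 0, 200)
lam_opt = lams[int(np.argmin([gcv(l) for l in lams]))]
f = s ** 2 / (s ** 2 + lam_opt ** 2)
x_reg = V @ (f / s * beta)                       # regularized solution
```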
NASA Astrophysics Data System (ADS)
Stritzel, J.; Melchert, O.; Wollweber, M.; Roth, B.
2017-09-01
The direct problem of optoacoustic signal generation in biological media consists of solving an inhomogeneous three-dimensional (3D) wave equation for an initial acoustic stress profile. In contrast, the more challenging inverse problem requires the reconstruction of the initial stress profile from a proper set of observed signals. In this article, we consider an effectively 1D approach, based on the assumption of a Gaussian transverse irradiation source profile and plane acoustic waves, in which the effects of acoustic diffraction are described in terms of a linear integral equation. The respective inverse problem along the beam axis can be cast into a Volterra integral equation of the second kind, for which we explore efficient numerical schemes in order to reconstruct initial stress profiles from observed signals, constituting methodological progress on the computational aspects of optoacoustics. In this regard, we explore the validity as well as the limits of the inversion scheme via numerical experiments, with parameters geared toward actual optoacoustic problem instances. The considered inversion input consists of synthetic data, obtained in terms of the effectively 1D approach, and, more generally, a solution of the 3D optoacoustic wave equation. Finally, we also analyze the effect of noise and of different detector-to-sample distances on the optoacoustic signal and the reconstructed pressure profiles.
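A Volterra integral equation of the second kind can be solved by a simple marching trapezoidal scheme. The sketch below verifies the solver on a classic textbook case with known solution u(t) = e^t, not on an optoacoustic kernel:

```python
import numpy as np

def solve_volterra2(f, K, t):
    """Trapezoidal solver for u(t) = f(t) + int_0^t K(t,s) u(s) ds on a uniform grid."""
    n = len(t)
    h = t[1] - t[0]
    u = np.empty(n)
    u[0] = f(t[0])                                   # the integral vanishes at t=0
    for i in range(1, n):
        acc = 0.5 * K(t[i], t[0]) * u[0]             # trapezoid endpoint weight
        acc += sum(K(t[i], t[j]) * u[j] for j in range(1, i))
        # Solve the implicit relation for u[i] (second-kind structure)
        u[i] = (f(t[i]) + h * acc) / (1.0 - 0.5 * h * K(t[i], t[i]))
    return u

# Verification: u(t) = 1 + int_0^t u(s) ds has the exact solution u = e^t
t = np.linspace(0.0, 1.0, 201)
u = solve_volterra2(lambda x: 1.0, lambda ti, s: 1.0, t)
err = np.max(np.abs(u - np.exp(t)))
```

The scheme is second-order accurate, so halving the step size should reduce the error by roughly a factor of four.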
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yun; Zhang, Yin
2016-06-08
The mass sensing superiority of a micro/nanomechanical resonator sensor over conventional mass spectrometry has been, or at least is being, firmly established. Because the sensing mechanism of a mechanical resonator sensor is the shift of resonant frequencies, linking the shifts of resonant frequencies with the material properties of an analyte formulates an inverse problem. Besides the analyte/adsorbate mass, many other factors, such as position and axial force, can also cause shifts of the resonant frequencies. The in-situ measurement of the adsorbate position and axial force is extremely difficult, if not impossible, especially when an adsorbate is as small as a molecule or an atom, and extra instruments are also required. In this study, an inverse problem of using three resonant frequencies to determine the mass, position and axial force is formulated and solved. The accuracy of the inverse problem solving method is demonstrated, and how the method can be used in a real application of a nanomechanical resonator is also discussed. Solving the inverse problem benefits the development and application of mechanical resonator sensors in two ways: reducing the need for extra experimental equipment and achieving better mass sensing by accounting for more factors.
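The three-frequencies-to-three-unknowns structure can be illustrated with a deliberately simplified toy forward model (a stretched string, not the paper's resonator equations); the mode shapes, parameter values and bounds below are all assumptions for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in forward model (NOT the paper's beam equations): the n-th
# resonance of a stretched string perturbed by a small point mass m at
# position x (0 < x < 1) and a relative axial tension change T.
def freqs(params, n_modes=3):
    m, x, T = params
    n = np.arange(1, n_modes + 1)
    base = n * np.sqrt(1.0 + T)                         # tension raises all modes
    return base * (1.0 - m * np.sin(n * np.pi * x) ** 2)  # mass lowers each mode

true = np.array([0.02, 0.3, 0.05])                      # mass ratio, position, tension
observed = freqs(true)

# Inverse problem: recover (m, x, T) from the three resonant frequencies.
# The position bound x <= 0.5 removes the mirror ambiguity x <-> 1-x.
fit = least_squares(lambda p: freqs(p) - observed,
                    x0=[0.01, 0.4, 0.0],
                    bounds=([0.0, 0.05, -0.5], [0.2, 0.5, 0.5]))
```

With three modes and three unknowns the system is generically determined, which is why three resonant frequencies suffice in principle.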
Kuo, Chen-Chen; Li, Chi-Yen; Lee, Chi-Hung; Li, Hsiao-Chi; Li, Wen-Hsien
2015-08-25
We report on the design and observation of huge inverse magnetizations pointing in the direction opposite to the applied magnetic field, induced in nano-sized amorphous Ni shells deposited on crystalline Au nanoparticles by turning the applied magnetic field off. The magnitude of the induced inverse magnetization is very sensitive to the field reduction rate as well as to the thermal and field processes before turning the magnetic field off, and can be as high as 54% of the magnetization prior to cutting off the applied magnetic field. Memory effect of the induced inverse magnetization is clearly revealed in the relaxation measurements. The relaxation of the inverse magnetization can be described by an exponential decay profile, with a critical exponent that can be effectively tuned by the wait time right after reaching the designated temperature and before the applied magnetic field is turned off. The key to these effects is to have the induced eddy current running beneath the amorphous Ni shells through Faraday induction.
Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata
Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.
2012-01-01
Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions for Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
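The coarse-grid-then-refine strategy (steps one and two) can be sketched on a simplified single-pole response, a stand-in for the full Laplace pole-zero-gain model; frequencies, noise level and grid ranges are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)

# Simplified stand-in response: a single-pole low-pass, H(f) = g / (1 + i f/fp)
def response(f, fp, g):
    return g / (1.0 + 1j * f / fp)

f = np.logspace(-2, 1, 100)                 # frequencies (Hz, assumed)
fp_true, g_true = 0.8, 2.5
data = response(f, fp_true, g_true)
data += 0.01 * (rng.normal(size=f.size) + 1j * rng.normal(size=f.size))

def misfit(params):
    h = response(f, *params)
    return np.concatenate([(h - data).real, (h - data).imag])

# Step 1: coarse least-squares grid search to avoid local minima
fps = np.linspace(0.1, 5.0, 50)
gs = np.linspace(0.5, 5.0, 50)
grid = [(np.sum(misfit((p, g)) ** 2), p, g) for p in fps for g in gs]
_, fp0, g0 = min(grid)

# Step 2: iterative nonlinear refinement from the grid minimum
fit = least_squares(misfit, x0=[fp0, g0])
```

Step three of the paper would then repeat such fits per frequency band to build confidence intervals on the parameters.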
Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media
NASA Astrophysics Data System (ADS)
Jakobsen, Morten; Tveit, Svenn
2018-05-01
We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which efficient (regularized) solution methods already exist. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the Distorted Born Iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which could be an advantage when performing fast repeat modelling and time-lapse inversion. The T-matrix is also compatible with the use of modern renormalization methods that can potentially help reduce the sensitivity of the CSEM inversion results to the starting model. 
To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data associated with 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBIT method for inversion of CSEM data in anisotropic media is both accurate and efficient.
NASA Astrophysics Data System (ADS)
López-Comino, José Ángel; Stich, Daniel; Ferreira, Ana M. G.; Morales, Jose
2015-09-01
Inversions for the full slip distribution of earthquakes provide detailed models of earthquake sources, but the stability and non-uniqueness of the inversions are a major concern. The problem is underdetermined in any realistic setting, and significantly different slip distributions may translate into fairly similar seismograms. In such circumstances, inverting for a single best model may become overly dependent on the details of the procedure. Instead, we propose to perform extended fault inversion through falsification. We generate a representative set of heterogeneous slip maps, compute their forward predictions, and falsify inappropriate trial models that do not reproduce the data within a reasonable level of mismodelling. The set of surviving trial models forms our set of coequal solutions. The solution set may contain only members with similar slip distributions, or else uncover some fundamental ambiguity such as, for example, different patterns of main slip patches. For a feasibility study, we use teleseismic body wave recordings from the 2012 September 5 Nicoya, Costa Rica earthquake, although the inversion strategy can be applied to any type of seismic, geodetic or tsunami data for which we can handle the forward problem. We generate 10 000 pseudo-random, heterogeneous slip distributions assuming a von Karman autocorrelation function, keeping the rake angle, rupture velocity and slip velocity function fixed. The slip distribution of the 2012 Nicoya earthquake turns out to be relatively well constrained by 50 teleseismic waveforms. Two hundred and fifty-two slip models with normalized L1-fit within 5 per cent of the global minimum form our solution set. They consistently show a single dominant slip patch around the hypocentre. Uncertainties are related to the details of the slip maximum, including the amount of peak slip (2-3.5 m), as well as the characteristics of peripheral slip below 1 m. 
Synthetic tests suggest that slip patterns such as Nicoya may be a fortunate case, while it may be more difficult to unambiguously reconstruct more distributed slip from teleseismic data.
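The falsification workflow can be sketched in 1D: generate correlated random slip maps, forward-model each one, and keep every trial within a misfit threshold of the best. Here a random linear operator stands in for teleseismic Green's functions, and a Gaussian correlation stands in for the von Karman autocorrelation; all sizes and thresholds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40                                     # fault discretized into 40 patches

# Correlated random slip maps (Gaussian correlation as a simple stand-in)
x = np.arange(n)
C = np.exp(-((x[:, None] - x[None, :]) / 6.0) ** 2)
L = np.linalg.cholesky(C + 1e-8 * np.eye(n))

def random_slip():
    s = L @ rng.normal(size=n)
    s -= s.min()                           # non-negative slip
    return s / s.sum()                     # fixed total moment

# Linear forward operator standing in for the Green's functions
G = rng.normal(size=(100, n))
s_true = random_slip()
d_obs = G @ s_true + rng.normal(0.0, 0.01, 100)

# Falsification: keep every trial model whose L1 misfit is within 5% of the best
trials = [random_slip() for _ in range(5000)]
misfits = np.array([np.abs(G @ s - d_obs).mean() for s in trials])
keep = misfits <= misfits.min() * 1.05
survivors = [s for s, k in zip(trials, keep) if k]
```

The spread among `survivors` (rather than a single best model) is what carries the uncertainty information.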
NASA Astrophysics Data System (ADS)
Honarvar, M.; Lobo, J.; Mohareri, O.; Salcudean, S. E.; Rohling, R.
2015-05-01
To produce images of tissue elasticity, the vibro-elastography technique involves applying a steady-state multi-frequency vibration to tissue, estimating displacements from ultrasound echo data, and using the estimated displacements in an inverse elasticity problem with the shear modulus spatial distribution as the unknown. In order to fully solve the inverse problem, all three displacement components are required. However, using ultrasound, the axial component of the displacement is measured much more accurately than the other directions. Therefore, simplifying assumptions must be used in this case. Usually, the equations of motion are transformed into a Helmholtz equation by assuming tissue incompressibility and local homogeneity. The local homogeneity assumption causes significant imaging artifacts in areas of varying elasticity. In this paper, we remove the local homogeneity assumption. In particular, we introduce a new finite element based direct inversion technique in which only the coupling terms in the equation of motion are ignored, so it can be used with only one component of the displacement. Both Cartesian and cylindrical coordinate systems are considered. The use of multi-frequency excitation also allows us to obtain multiple measurements and reduce artifacts in areas where the displacement at one frequency is close to zero. The proposed method was tested in simulations and experiments against a conventional approach in which local homogeneity is assumed. The results show significant improvements in elasticity imaging with the new method compared to previous methods that assume local homogeneity. For example, in simulations, the contrast-to-noise ratio (CNR) for the region with a spherical inclusion increases from an average value of 1.5 to 17 when using the proposed method instead of local inversion with the homogeneity assumption; similarly, in the prostate phantom experiment, the CNR improved from an average value of 1.6 to about 20.
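For reference, one common contrast-to-noise ratio definition is sketched below; conventions vary across the elastography literature (some include squared means or a factor of 2), so this is an assumption about the metric, not necessarily the paper's exact formula:

```python
import numpy as np

def cnr(inclusion, background):
    """One common contrast-to-noise ratio definition for elastograms."""
    mi, mb = np.mean(inclusion), np.mean(background)
    return abs(mi - mb) / np.sqrt(np.var(inclusion) + np.var(background))

# Elasticity estimates (illustrative values) inside and outside an inclusion
inclusion = np.array([10.0, 11.0, 9.0, 10.0])
background = np.array([2.0, 3.0, 1.0, 2.0])
val = cnr(inclusion, background)
```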
A trade-off between model resolution and variance with selected Rayleigh-wave data
Xia, J.; Miller, R.D.; Xu, Y.
2008-01-01
Inversion of multimode surface-wave data is of increasing interest in the near-surface geophysics community. For a given near-surface geophysical problem, it is essential to understand how well the data, calculated according to a layered-earth model, might match the observed data. A data-resolution matrix is a function of the data kernel (determined by a geophysical model and a priori information applied to the problem), not the data. A data-resolution matrix of high-frequency (≥2 Hz) Rayleigh-wave phase velocities, therefore, offers a quantitative tool for designing field surveys and predicting the match between calculated and observed data. First, we employed a data-resolution matrix to select data that would be well predicted and to explain advantages of incorporating higher modes in inversion. The resulting discussion using the data-resolution matrix provides insight into the process of inverting Rayleigh-wave phase velocities with higher mode data to estimate S-wave velocity structure. Discussion also suggested that each near-surface geophysical target can only be resolved using Rayleigh-wave phase velocities within specific frequency ranges, and higher mode data are normally more accurately predicted than fundamental mode data because of restrictions on the data kernel for the inversion system. Second, we obtained an optimal damping vector in a vicinity of an inverted model by the singular value decomposition of a trade-off function of model resolution and variance. At the end of the paper, we used a real-world example to demonstrate that selected data with the data-resolution matrix can provide better inversion results and to explain with the data-resolution matrix why incorporating higher mode data in inversion can provide better results. We also calculated model-resolution matrices of these examples to show the potential of increasing model resolution with selected surface-wave data. 
With the optimal damping vector, we can improve and assess an inverted model obtained by a damped least-squares method.
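The data-resolution matrix of a damped least-squares inversion can be computed directly from the SVD of the data kernel; the kernel below is random and purely illustrative, and the damping value is an assumption:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data kernel: rows = data (phase velocities at frequencies),
# columns = model parameters (layer S-wave velocities).
G = rng.normal(size=(30, 8)) * np.exp(-np.linspace(0, 3, 30))[:, None]

# Damped least-squares generalized inverse via SVD
U, s, Vt = np.linalg.svd(G, full_matrices=False)
damp = 0.1
Ginv = Vt.T @ np.diag(s / (s ** 2 + damp ** 2)) @ U.T

# Data-resolution matrix N = G G^-g: diagonal entries near 1 mark data that
# the inversion system can predict well, guiding data selection and survey design.
N = G @ Ginv
importance = np.diag(N)
```

Because the matrix depends only on the kernel and the damping, not on observed data, it can be evaluated before any field survey.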
NASA Technical Reports Server (NTRS)
Green, M. J.; Nachtsheim, P. R.
1972-01-01
A numerical method for the solution of large systems of nonlinear differential equations of the boundary-layer type is described. The method is a modification of the technique for satisfying asymptotic boundary conditions. The present method employs inverse interpolation instead of the Newton method to adjust the initial conditions of the related initial-value problem. This eliminates the so-called perturbation equations. The elimination of the perturbation equations not only reduces the user's preliminary work in the application of the method, but also reduces the number of time-consuming initial-value problems to be numerically solved at each iteration. For further ease of application, the solution of the overdetermined system for the unknown initial conditions is obtained automatically by applying Golub's linear least-squares algorithm. The relative ease of application of the proposed numerical method increases directly as the order of the differential-equation system increases. Hence, the method is especially attractive for the solution of large-order systems. After the method is described, it is applied to a fifth-order problem from boundary-layer theory.
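Golub's linear least-squares algorithm amounts to a Householder QR factorization followed by a triangular solve. A sketch on a synthetic overdetermined system for the unknown initial conditions (the matrix and right-hand side are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Overdetermined system for unknown initial conditions: A c ~ r
A = rng.normal(size=(12, 4))               # 12 boundary residuals, 4 unknowns
c_true = np.array([1.0, -2.0, 0.5, 3.0])
r = A @ c_true + rng.normal(0.0, 1e-3, 12)

# Golub's approach: reduced (Householder) QR, then back-substitution
Q, R = np.linalg.qr(A)                     # A = Q R with R upper triangular
c = np.linalg.solve(R, Q.T @ r)            # least-squares solution of A c ~ r
```

The QR route avoids forming the normal equations A^T A, whose condition number is the square of that of A.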
Basis set expansion for inverse problems in plasma diagnostic analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, B.; Ruiz, C. L.
A basis set expansion method [V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002)] is applied to recover physical information about plasma radiation sources from instrument data, which has been forward transformed due to the nature of the measurement technique. This method provides a general approach for inverse problems, and we discuss two specific examples relevant to diagnosing fast z pinches on the 20–25 MA Z machine [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, J. R. Woodworth, C. W. Mendel, K. R. Prestwich, R. W. Shoup, D. L. Johnson, J. P. Corley, K. C. Hodge, T. C. Wagoner, and P. E. Wakeland, in Proceedings of the Pulsed Power Plasma Sciences Conference (IEEE, 2007), p. 979]. First, Abel inversion of time-gated, self-emission x-ray images from a wire array implosion is studied. Second, we present an approach for unfolding neutron time-of-flight measurements from a deuterium gas puff z pinch to recover information about emission time history and energy distribution. Through these examples, we discuss how noise in the measured data limits the practical resolution of the inversion, and how the method handles discontinuities in the source function and artifacts in the projected image. We add to the method a propagation of errors calculation for estimating uncertainties in the inverted solution.
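A basis-set-expansion Abel inversion can be sketched as: project each radial basis function forward once, fit the coefficients to the projected data by linear least squares, then evaluate the basis in radial space. Grids, basis widths and the noise level below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)

# Radial grid and a ring-shaped test emission profile
r = np.linspace(0.0, 1.0, 100)
f_true = np.exp(-((r - 0.5) / 0.1) ** 2)

def abel_forward(func, y, umax=2.0, nu=400):
    """Line-of-sight projection P(y) = 2 * int_0^inf f(sqrt(y^2+u^2)) du."""
    u, du = np.linspace(0.0, umax, nu, retstep=True)
    vals = func(np.sqrt(y[:, None] ** 2 + u[None, :] ** 2))
    # Trapezoid rule along u (the substitution removes the Abel singularity)
    return 2.0 * du * (vals.sum(axis=1) - 0.5 * (vals[:, 0] + vals[:, -1]))

# Gaussian basis on the radial axis; project each basis function once
centers = np.linspace(0.0, 1.0, 25)
w = 0.06
y = np.linspace(0.0, 1.0, 80)
Bproj = np.stack([abel_forward(lambda rr, c=c: np.exp(-((rr - c) / w) ** 2), y)
                  for c in centers], axis=1)          # (80, 25) projected basis

# Noisy projected data, then linear least squares for the basis coefficients
P = abel_forward(lambda rr: np.exp(-((rr - 0.5) / 0.1) ** 2), y)
P += rng.normal(0.0, 5e-4, y.size)
coef, *_ = np.linalg.lstsq(Bproj, P, rcond=None)

# Reconstructed radial profile (the Abel inversion of the data)
f_rec = np.exp(-((r[:, None] - centers) / w) ** 2) @ coef
```

Raising the data noise quickly degrades `f_rec`, which is the noise-limited-resolution effect the abstract discusses.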
Two-way WKB Approximation Applied to GPR - COST Action TU1208
NASA Astrophysics Data System (ADS)
Prokopovich, Igor; Popov, Alexei; Marciniak, Marian; Pajewski, Lara
2016-04-01
The main goal of subsurface radio wave probing is the reconstruction of the shape and the electrical properties of buried objects in material media. For this purpose, knowledge of the laws of EM pulse excitation and propagation in a non-uniform subsurface medium is required, as well as methods and algorithms for solving the inverse problem. Two ways of treating this problem exist. On the one hand, one can describe EM wave propagation by solving Maxwell's equations with finite-difference methods implemented in computer codes. However, when solving inverse problems, pure numerical algorithms require a huge amount of computation and, as a consequence, long calculation times. In this respect, analytical approaches are more promising. Here, we apply coupled wave theory (the "two-way WKB" approximation) to the problem of subsurface wave propagation. The derived formulas can be used in GPR design and for fast processing of experimental data. We start from the 1D model problem of GPR probing. The classical WKB method [1] allows one to describe wave propagation through non-uniform media with slowly varying dielectric permittivity. A principal shortcoming of this approximation is that it does not take into account backward reflection from permittivity gradients. Consequently, the WKB method as such cannot be used for the purposes of GPR sounding. An extension of this approximation consists in solving two coupled WKB-type equations by iterations. This approach properly describes backward reflections and provides good accuracy in a wide frequency range [2]. In our previous work [3], a time-domain counterpart of the Bremmer-Brekhovskikh approximation was derived and applied to a 1D inverse problem of subsurface medium probing by an ultra-wideband EM pulse. 
In order to convert this approach into a practical GPR algorithm, a more realistic model is required: 2D or 3D propagation from a localized source with the effects of wave divergence and refraction taken into account. In this work we study bistatic EM pulse probing of a horizontally layered medium in a 2D case. A set of coupled WKB equations describing both forward and backward waves is derived and solved analytically. The comparison of our semi-analytical solutions with numerical calculations by the gprMax software [4] demonstrates good agreement, with the semi-analytical solutions being hundreds of times faster than the latter. Our numerical results explain the protracted return pulses in low-frequency GPR data. As an example, we discuss the experimental data obtained during the GPR mission in search of a big fragment of the Chelyabinsk meteorite under a thick silt layer at the bottom of Lake Chebarkul' [5]. Acknowledgement The Authors are grateful to the European Cooperation in Science and Technology (www.cost.eu) for facilitating this work through a Short-Term Scientific Mission (STSM) within the framework of the Action TU1208 "Civil engineering applications of Ground Penetrating Radar" (www.GPRadar.eu). References 1. H. Bremmer, "Propagation of electromagnetic waves", in Handbuch der Physik, S. Flugge, Ed. Berlin-Goettingen-Heidelberg: Springer, 1958, pp. 423-639. 2. L.M. Brekhovskikh, Waves in Stratified Media (in Russian). Moscow: USSR Academy of Sciences, 1957. 3. V.A. Vinogradov, V.V. Kopeikin, A.V. Popov, "An Approximate Solution of 1D Inverse Problem", in Proc. 10th Internat. Conf. on GPR, 21-24 June 2004, Delft, The Netherlands. 4. A. Giannopoulos, "Modelling ground penetrating radar by GprMax", Construction and Building Materials, vol. 19, no. 10, pp. 755-762, 2005, doi: 10.1016/j.conbuildmat.2005.06.007. 5. V. V. Kopeikin, V. D. Kuznetsov, P. A. Morozov, A. V. 
Popov et al., "Ground penetrating radar investigation of the supposed fall site of a fragment of the Chelyabinsk meteorite in Lake Chebarkul'", Geochemistry International, vol. 51, no. 7, pp. 575-582, 2013, doi: 10.1134/S0016702913070112
The shifting zoom: new possibilities for inverse scattering on electrically large domains
NASA Astrophysics Data System (ADS)
Persico, Raffaele; Ludeno, Giovanni; Soldovieri, Francesco; De Coster, Alberic; Lambot, Sebastien
2017-04-01
Inverse scattering is a subject of great interest in diagnostic problems, which are in turn of interest for many applications, such as investigation of cultural heritage, characterization of foundations or subsurface services, identification of unexploded ordnance, and so on [1-4]. In particular, GPR data are usually focused by means of migration algorithms, essentially based on a linear approximation of the scattering phenomenon. Migration algorithms are popular because they are computationally efficient and require neither the inversion of a matrix nor the calculation of its elements. In fact, they are essentially based on the adjoint of the linearised scattering operator, which ultimately allows the inversion formula to be written as a suitably weighted integral of the data [5]. In particular, this makes a migration algorithm more suitable than a linear microwave tomography inversion algorithm for the reconstruction of an electrically large investigation domain. However, this computational challenge can be overcome by making use of investigation domains joined side by side, as proposed e.g. in ref. [3]. This makes it possible to apply a microwave tomography algorithm even to large investigation domains. However, joining sequential investigation domains side by side introduces a problem of limited (and asymmetric) maximum view angle for targets close to the edges between two adjacent domains, or possibly crossing these edges. The shifting zoom is a method that overcomes this difficulty by means of overlapped investigation and observation domains [6-7]. It requires more sequential inversions than adjacent investigation domains do, but the extra time actually required is minimal because the matrix to be inverted is calculated once and for all, as is its singular value decomposition: what is repeated more times is only a fast matrix-vector multiplication. References [1] M. Pieraccini, L. Noferini, D. Mecatti, C. 
Atzeni, R. Persico, F. Soldovieri, "Advanced Processing Techniques for Step-frequency Continuous-Wave Penetrating Radar: the Case Study of 'Palazzo Vecchio' Walls (Firenze, Italy)", Research on Nondestructive Evaluation, vol. 17, pp. 71-83, 2006. [2] N. Masini, R. Persico, E. Rizzo, A. Calia, M. T. Giannotta, G. Quarta, A. Pagliuca, "Integrated Techniques for Analysis and Monitoring of Historical Monuments: the case of S. Giovanni al Sepolcro in Brindisi (Southern Italy)", Near Surface Geophysics, vol. 8, no. 5, pp. 423-432, 2010. [3] E. Pettinelli, A. Di Matteo, E. Mattei, L. Crocco, F. Soldovieri, J. D. Redman, and A. P. Annan, "GPR response from buried pipes: Measurement on field site and tomographic reconstructions", IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 8, pp. 2639-2645, Aug. 2009. [4] O. Lopera, E. C. Slob, N. Milisavljevic and S. Lambot, "Filtering soil surface and antenna effects from GPR data to enhance landmine detection", IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 3, pp. 707-717, 2007. [5] R. Persico, "Introduction to Ground Penetrating Radar: Inverse Scattering and Data Processing", Wiley, 2014. [6] R. Persico, J. Sala, "The problem of the investigation domain subdivision in 2D linear inversions for large scale GPR data", IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 7, pp. 1215-1219, doi: 10.1109/LGRS.2013.2290008, July 2014. [7] R. Persico, F. Soldovieri, S. Lambot, "Shifting zoom in 2D linear inversions performed on GPR data gathered along an electrically large investigation domain", Proc. 16th International Conference on Ground Penetrating Radar GPR2016, Hong Kong, June 13-16, 2016.
NASA Astrophysics Data System (ADS)
Kelbert, A.; Egbert, G. D.; Sun, J.
2011-12-01
Poleward of 45-50 degrees (geomagnetic), observatory data are influenced significantly by auroral ionospheric current systems, invalidating the simplifying zonal dipole source assumption traditionally used for long-period (T > 2 days) geomagnetic induction studies. Previous efforts to use these data to obtain the global electrical conductivity distribution in Earth's mantle have omitted high-latitude sites (further thinning an already sparse dataset) and/or corrected the affected transfer functions using a highly simplified model of auroral source currents. Although these strategies are partly effective, there remain clear suggestions of source contamination in most recent 3D inverse solutions - specifically, bands of conductive features are found near auroral latitudes. We report on a new approach to this problem, based on adjusting both the external field structure and the 3D Earth conductivity to fit observatory data. As an initial step towards full joint inversion, we are using a two-step procedure. In the first stage, we adopt a simplified conductivity model, with a thin sheet of variable conductance (to represent the oceans) overlying a 1D Earth, to invert observed magnetic fields for external source spatial structure. Input data for this inversion are obtained from frequency-domain principal components (PC) analysis of geomagnetic observatory hourly mean values. To make this (essentially linear) inverse problem well-posed, we regularize using covariances for source field structure that are consistent with well-established properties of auroral ionospheric (and magnetospheric) current systems, and with the basic physics of the EM fields. In the second stage, we use a 3D finite difference inversion code, with source fields estimated from the first stage, to further fit the observatory PC modes. 
We incorporate higher latitude data into the inversion, and maximize the amount of available information by directly inverting the magnetic field components of the PC modes, instead of transfer functions such as the C-responses used previously. Recent improvements in the accuracy and speed of the forward and inverse finite difference codes (a secondary-field formulation and parallelization over frequencies) allow us to use a finer computational grid for inversion, and thus to model finer-scale features, making full use of the expanded data set. Overall, our approach presents an improvement over earlier observatory data interpretation techniques, making better use of the available data and allowing us to explore the trade-offs between complications in source structure and heterogeneities in mantle conductivity. We will also report on progress towards applying the same approach to simultaneous source/conductivity inversion of shorter-period observatory data, focusing especially on the daily variation band.
NASA Astrophysics Data System (ADS)
Świrniak, Grzegorz; Głomb, Grzegorz
2017-06-01
This study reports an application of a fiber-optic LED-based illumination system to solve an inverse problem in optical measurements of the characteristics of a single-mode fiber. The illumination system has the advantages of low temporal coherence, high intensity, collimation, and thermal stability of the emission spectrum. The inverse analysis is used to predict the values of the diameter and refractive index of a single-mode fiber and is applied to the far-field scattering pattern in the vicinity of a polychromatic rainbow. As the possibility of inversion depends considerably on the properties of the incident radiation, a detailed discussion is provided on both the specification of the illumination system and the preliminary characteristics of the produced radiation. The illumination system uses direct coupling between a thermally stabilized LED junction and a plastic optical fiber, which transmits light to an optical collimator. A numerical study of fiber-to-LED coupling efficiency helps to understand the influence of lateral and longitudinal misalignments on the output power.
Sakaguchi, Masayuki; Takano, Tamaki
2016-08-02
Hemolysis related to a kinked prosthetic graft or inner felt strip is a very rare complication after aortic surgery. We describe herein a case of hemolytic anemia that developed due to an aortic flap of the dissection and inversion of an inner felt strip that had been applied at the proximal anastomosis of a replaced ascending aorta 10 years previously. A 74-year-old woman presented with persistent hemolytic anemia 10 years after replacement of the ascending aorta to treat Stanford type A acute aortic dissection. The cause of hemolysis was attributed to mechanical injury of red blood cells at a site of stenosis caused by the aortic flap of the dissection and inversion of the felt strip used for the proximal anastomosis. Resection of the strip and repeat graft replacement of the ascending aorta resolved this problem. We considered that blood flow disrupted by a jet of blood at the site of the proximal inner felt strip was the cause of the severe hemolysis. We thus describe rare hemolytic anemia at the site of an aortic flap and inverted felt strip after replacement of the ascending aorta.
NASA Astrophysics Data System (ADS)
Tian, Yu-Kun; Zhou, Hui; Chen, Han-Ming; Zou, Ya-Ming; Guan, Shou-Jun
2013-12-01
Seismic inversion is a highly ill-posed problem, due to factors such as the limited seismic frequency bandwidth and inappropriate forward modeling. To obtain a unique solution, smoothing constraints such as Tikhonov regularization are usually applied. The Tikhonov method can maintain a globally smooth solution, but it blurs structural edges. In this paper we use a Huber-Markov random-field edge-protection method in the procedure of inverting three parameters: P-velocity, S-velocity, and density. The method avoids blurring structural edges and resists noise. For each parameter to be inverted, the Huber-Markov random field constructs a neighborhood system, which further acts as the vertical and lateral constraints. We use a quadratic Huber edge penalty function within a layer to suppress noise and a linear one at the edges to avoid a blurred result. The effectiveness of our method is demonstrated by inverting synthetic data both with and without noise. The relationship between the adopted constraints and the inversion results is analyzed as well.
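The key ingredient above, a penalty that is quadratic for small parameter differences (smoothing noise within a layer) and linear for large ones (preserving sharp contrasts at layer edges), is the classical Huber function. A minimal sketch (the threshold name `delta` is our choice, not from the paper):

```python
def huber_penalty(t, delta):
    """Huber edge penalty on a parameter difference t:
    quadratic for |t| <= delta (smooths noise),
    linear for |t| > delta (avoids blurring edges).
    The two branches join continuously at |t| = delta."""
    if abs(t) <= delta:
        return t * t
    return 2.0 * delta * abs(t) - delta * delta
```

In an edge-preserving inversion, this penalty is summed over the vertical and lateral neighborhood differences of the model; small differences are penalized quadratically like Tikhonov regularization, while large jumps pay only a linear price and so survive the smoothing.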
An optimization method for the problems of thermal cloaking of material bodies
NASA Astrophysics Data System (ADS)
Alekseev, G. V.; Levin, V. A.
2016-11-01
Inverse heat-transfer problems related to constructing special thermal devices such as cloaking shells, thermal-illusion or thermal-camouflage devices, and heat-flux concentrators are studied. The heat-diffusion equation with a variable heat-conductivity coefficient is used as the initial heat-transfer model. An optimization method is used to reduce the above inverse problems to the corresponding control problem. The solvability of this control problem is proved, an optimality system that describes the necessary extremum conditions is derived, and a numerical algorithm for solving the control problem is proposed.
Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel
ERIC Educational Resources Information Center
El-Gebeily, M.; Yushau, B.
2008-01-01
In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve linear systems of equations, linear programming problems, and matrix inversion problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…
Optical property measurement from layered biological media
NASA Astrophysics Data System (ADS)
Muller, Matthew R.
1998-12-01
Near infrared (NIR) photon reflectance spectroscopy is applied to the measurement of blood concentration and its oxygen saturation within biological tissue. The measurement relies upon the changes in photon absorption of hemoglobin in the tissue as changes occur in the hemoglobin concentration and oxygen content. In the present study, NIR light is introduced at the skin surface and the optical properties (absorption and scattering) within the underlying tissue are determined from the resulting surface reflectance. Typically the tissue is modeled as a homogeneous mixture of bloodless tissue and blood, and the model incorporates the physical relationship between the surface reflectance and the optical properties of the tissue. The skin and underlying tissue, although heterogeneous, have a characteristic layered structure, and these layers can be differentiated optically. The modeling, and the inverse problem of measuring the optical properties in each of the tissue layers from the surface reflectance, have received much attention from a number of investigators. Nonetheless, the quantitative relationship between surface reflectance and the optical properties of layered tissue has been neither well understood nor well described. In the forward problem, tissue optical properties yield surface reflectance profiles (SRPs). SRPs from diffusive media consisting of two layers are calculated here using numerical solutions to the Boltzmann equation. Experimental SRPs are also measured in vitro from a test medium and in vivo from the calf of human subjects. This study provides a new approach to solving the inverse problem of determining optical properties from SRPs. To solve the inverse problem, an effective diffusion constant (Ke) is determined for the layered media: Ke is the diffusion constant of the equivalent homogeneous medium which best fits the SRP of the layered medium. 
The departure from Ke of the SRP for a layered medium is captured concisely, and Ke becomes a tool for describing the layered optical properties. This approach is applied clinically to measure changes in blood concentration and oxygenation in vivo in normal subjects and in patients with peripheral vascular disease. A significant finding from the modeling was the identification of the functional relationship of Ke to the top- and lower-layer diffusion constants and the top-layer thickness. When applied to in vitro measurements from media containing homogeneous layers with known optical properties, this functional relationship predicted Ke within the 95% confidence interval of the measured Ke. For the in vivo measurements, changes in Ke with exercise are consistent with expected exercise physiology. With the incorporation of the known optical absorbance of hemoglobin in the presence of oxygen, the SRPs provide a means to measure the oxygen saturation of a deep tissue layer from the surface light reflectance.
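The idea behind Ke, pick the homogeneous-model parameter whose profile best fits a layered-medium profile, can be sketched with a toy grid search. Note the exponential SRP model and the two-exponential "layered" data below are purely illustrative stand-ins, not the Boltzmann-equation solutions used in the study:

```python
import math

def homogeneous_srp(r, k):
    # toy homogeneous-medium surface reflectance profile:
    # simple exponential decay with effective constant k (illustrative only)
    return math.exp(-r / k)

def fit_ke(radii, measured, k_grid):
    """Grid-search the effective constant Ke: the k whose homogeneous-model
    SRP best fits the layered-medium SRP in the least-squares sense."""
    best_k, best_err = None, float("inf")
    for k in k_grid:
        err = sum((homogeneous_srp(r, k) - m) ** 2
                  for r, m in zip(radii, measured))
        if err < best_err:
            best_k, best_err = k, err
    return best_k

# mimic a two-layer medium with an equal mixture of two decay constants
radii = [i * 0.1 for i in range(1, 30)]
layered = [0.5 * math.exp(-r / 1.0) + 0.5 * math.exp(-r / 3.0) for r in radii]
ke = fit_ke(radii, layered, [0.5 + 0.05 * i for i in range(80)])
# ke lands between the two layer constants (1.0 and 3.0)
```

The fitted Ke falls between the constants of the two layers, which is the qualitative content of the study's finding that Ke is a function of the top- and lower-layer diffusion constants and the top-layer thickness.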
Frechet derivatives for shallow water ocean acoustic inverse problems
NASA Astrophysics Data System (ADS)
Odom, Robert I.
2003-04-01
For any inverse problem, finding a model fitting the data is only half the problem. Most inverse problems of interest in ocean acoustics yield nonunique model solutions and involve inevitable trade-offs between model and data resolution and variance. Problems of uniqueness and of resolution-variance trade-offs can be addressed by examining the Frechet derivatives of the model-data functional with respect to the model variables. Tarantola [Inverse Problem Theory (Elsevier, Amsterdam, 1987), p. 613] published analytical formulas for the basic derivatives, e.g., derivatives of pressure with respect to elastic moduli and density. Other derivatives of interest, such as the derivative of transmission loss with respect to attenuation, can be easily constructed using the chain rule. For a range-independent medium, the analytical formulas involve only the Green's function and the vertical derivative of the Green's function for the medium. A crucial advantage of the analytical formulas for the Frechet derivatives over numerical differencing is that they can be computed with a single pass of any program which supplies the Green's function. Various derivatives of interest in shallow water ocean acoustics are presented and illustrated by an application to the sensitivity of measured pressure to shallow water sediment properties. [Work supported by ONR.]
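As an example of the chain-rule construction mentioned above (standard manipulation, our notation): with transmission loss defined from the complex pressure p, a derivative with respect to an attenuation parameter α follows directly from the basic Frechet derivative ∂p/∂α,

```latex
\mathrm{TL} = -20 \log_{10} |p| = -\frac{20}{\ln 10}\,\ln |p|
\quad\Longrightarrow\quad
\frac{\partial\,\mathrm{TL}}{\partial \alpha}
= -\frac{20}{\ln 10}\;
  \frac{\operatorname{Re}\!\left(\bar{p}\,\dfrac{\partial p}{\partial \alpha}\right)}{|p|^{2}} .
```

Any scalar observable built from p inherits its sensitivity in this way, which is why the basic pressure derivatives from a single Green's-function pass suffice.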
NASA Astrophysics Data System (ADS)
Irving, J.; Koepke, C.; Elsheikh, A. H.
2017-12-01
Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. 
The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
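The projection-based idea, build a local model-error basis from the K nearest dictionary entries and remove that component from the residual before evaluating the likelihood, can be sketched in a few lines. Everything here (the entry layout, Gram-Schmidt orthonormalization, plain Euclidean parameter distance) is our simplification for illustration, not the paper's implementation:

```python
def knn(theta, dictionary, k):
    # dictionary entries are (parameters, error_vector) pairs, where
    # error_vector = detailed_model_output - approximate_model_output
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return sorted(dictionary, key=lambda e: dist(e[0], theta))[:k]

def gram_schmidt(vectors):
    # orthonormal basis for the span of the given error vectors
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            c = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - c * bi for wi, bi in zip(w, b)]
        n = sum(wi * wi for wi in w) ** 0.5
        if n > 1e-12:
            basis.append([wi / n for wi in w])
    return basis

def remove_model_error(residual, theta, dictionary, k=3):
    """Project the residual onto the local model-error basis built from the
    K nearest dictionary entries, and return the residual with that
    component removed (this corrected residual feeds the likelihood)."""
    basis = gram_schmidt([e[1] for e in knn(theta, dictionary, k)])
    out = list(residual)
    for b in basis:
        c = sum(ri * bi for ri, bi in zip(out, b))
        out = [ri - c * bi for ri, bi in zip(out, b)]
    return out

# tiny example: nearby entries have model error along the first data axis
dictionary = [((0.0,), [1.0, 0.0, 0.0]),
              ((0.1,), [2.0, 0.0, 0.0]),
              ((5.0,), [0.0, 0.0, 1.0])]
corrected = remove_model_error([2.0, 3.0, 0.0], (0.05,), dictionary, k=2)
```

In the MCMC setting, the dictionary grows as new (approximate, detailed) model-run pairs are computed, so the local basis improves as the chain explores the posterior.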
NASA Astrophysics Data System (ADS)
Marinoni, Marianna; Delay, Frederick; Ackerer, Philippe; Riva, Monica; Guadagnini, Alberto
2016-08-01
We investigate the effect of considering reciprocal drawdown curves for the characterization of hydraulic properties of aquifer systems through inverse modeling based on interference well testing. Reciprocity implies that the drawdown observed in a well B when pumping takes place from well A should strictly coincide with the drawdown observed in A when pumping in B with the same flow rate as in A. In this context, a critical point in applications of hydraulic tomography is the assessment of the number of available independent drawdown data and their impact on the solution of the inverse problem. The issue arises when inverse modeling relies upon mathematical formulations of the classical single-continuum approach to flow in porous media grounded on Darcy's law. In these cases, introducing reciprocal drawdown curves into the database of an inverse problem is, to a certain extent, equivalent to duplicating some information. We present a theoretical analysis of the way a least-squares objective function and a Levenberg-Marquardt minimization algorithm are affected by the introduction of reciprocal information in the inverse problem. We also investigate the way these reciprocal data, possibly corrupted by measurement errors, influence model parameter identification in terms of: (a) the convergence of the inverse model, (b) the optimal values of the parameter estimates, and (c) the associated estimation uncertainty. Our theoretical findings are exemplified through a suite of computational examples focused on block-heterogeneous systems of increasing complexity. We find that the introduction of noisy reciprocal information in the objective function of the inverse problem has a very limited influence on the optimal parameter estimates. Convergence of the inverse problem improves when adding diverse (nonreciprocal) drawdown series, but does not improve when reciprocal information is added to condition the flow model. 
The uncertainty in the optimal parameter estimates is influenced by the strength of the measurement errors and is neither significantly diminished nor increased by adding noisy reciprocal information.
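The central observation, that exactly reciprocal (i.e., duplicated) data leave the least-squares optimum unchanged, can be seen in a one-parameter toy problem. This is purely illustrative; the study itself analyzes a full Levenberg-Marquardt groundwater inversion:

```python
def lsq_slope(xs, ds):
    # normal-equations least-squares estimate for the model d = a * x
    return sum(x * d for x, d in zip(xs, ds)) / sum(x * x for x in xs)

xs, ds = [1.0, 2.0, 3.0], [2.1, 3.9, 6.2]
a0 = lsq_slope(xs, ds)

# "reciprocal" observations that duplicate existing (x, d) pairs exactly:
# numerator and denominator of the estimate both double, so the
# optimum is unchanged -- the duplicated data carry no new information
a_dup = lsq_slope(xs + xs, ds + ds)
```

With noise, the duplicated observations are no longer bit-identical and act as extra (correlated) data, which is why the study finds that noisy reciprocal information affects estimation uncertainty only through the measurement-error strength.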
NASA Astrophysics Data System (ADS)
West, Michael; Gao, Wei; Grand, Stephen
2004-08-01
Body and surface wave tomography have complementary strengths when applied to regional-scale studies of the upper mantle. We present a straightforward technique for their joint inversion which hinges on treating surface waves as horizontally propagating rays with deep sensitivity kernels. This formulation allows surface wave phase or group measurements to be integrated directly into existing body wave tomography inversions with modest effort. We apply the joint inversion to a synthetic case and to data from the RISTRA project in the southwest U.S. The data variance reductions demonstrate that the joint inversion produces a better fit to the combined dataset, not merely a compromise. For large arrays, this method offers an improvement over augmenting body wave tomography with a one-dimensional model. The joint inversion combines the absolute velocity of a surface wave model with the high resolution afforded by body waves, both qualities that are required to understand regional-scale mantle phenomena.
Resolution analysis of marine seismic full waveform data by Bayesian inversion
NASA Astrophysics Data System (ADS)
Ray, A.; Sekar, A.; Hoversten, G. M.; Albertin, U.
2015-12-01
The Bayesian posterior density function (PDF) of earth models that fit full waveform seismic data conveys information on the uncertainty with which the elastic model parameters are resolved. In this work, we apply the trans-dimensional reversible jump Markov chain Monte Carlo method (RJ-MCMC) to the 1D inversion of noisy synthetic full-waveform seismic data in the frequency-wavenumber domain. While seismic full waveform inversion (FWI) is a powerful method for characterizing subsurface elastic parameters, the uncertainty in the inverted models has remained poorly quantified, if addressed at all, and is highly dependent on the initial model. The Bayesian method we use is trans-dimensional in that the number of model layers is not fixed, and flexible in that the layer boundaries are free to move around. The resulting parameterization does not require regularization to stabilize the inversion. Depth resolution is traded off against the number of layers, providing an estimate of uncertainty in the elastic parameters (compressional and shear velocities Vp and Vs, as well as density) with depth. We find that in the absence of additional constraints, Bayesian inversion can result in a wide range of posterior PDFs on Vp, Vs and density. These PDFs range from being clustered around the true model to containing little resolution of any particular features other than those in the near surface, depending on the particular data and target geometry. We present results for a suite of different frequencies and offset ranges, examining the differences in the posterior model densities thus derived. Though these results are for a 1D earth, they are applicable to areas with simple, layered geology and provide valuable insight into the resolving capabilities of FWI, as well as highlighting the challenges in solving a highly non-linear problem. 
The RJ-MCMC method also presents a tantalizing possibility for extension to 2D and 3D Bayesian inversion of full waveform seismic data in the future, as it objectively tackles the problem of model selection (i.e., the number of layers or cells for parameterization), which could ease the computational burden of evaluating forward models with many parameters.
Ground-Based Microwave Radiometric Remote Sensing of the Tropical Atmosphere
NASA Astrophysics Data System (ADS)
Han, Yong
A partially developed 9-channel ground-based microwave radiometer for the Department of Meteorology at Penn State was completed and tested. Complementary units were added, corrections to both hardware and software were made, and the system software was corrected and upgraded. Measurements from this radiometer were used to infer tropospheric temperature, water vapor, and cloud liquid water. The weighting functions at each of the 9 channels were calculated and analyzed to estimate the sensitivities of the brightness temperatures to the desired atmospheric variables. The mathematical inversion problem, in a linear form, was viewed in terms of the theory of linear algebra, and several methods for solving the inversion problem were reviewed. Radiometric observations were conducted during the 1990 Tropical Cyclone Motion Experiment, with the radiometer installed on the island of Saipan in a tropical region. During this experiment, the radiometer was calibrated by using tipping-curve and radiosonde data as well as measurements of the radiation from a blackbody absorber. A linear statistical method was first applied for the data inversion. The inversion coefficients in the equation were obtained using a large number of radiosonde profiles from Guam and a radiative transfer model. Retrievals were compared with those from local (Saipan) radiosonde measurements. Water vapor profiles, integrated water vapor, and integrated liquid water were retrieved successfully. For temperature profile retrievals, however, it was shown that, in the presence of experimental noise, the radiometric measurements added no more profile information to the inversion than was available from a climatological mean. Although successful retrievals of the geopotential heights were made, it was shown that they were determined mainly by the surface pressure measurements. The reasons why the radiometer did not contribute to the retrievals of temperature profiles and geopotential heights are discussed. 
A method was developed to derive the integrated water vapor and liquid water from combined radiometer and ceilometer measurements. Under certain assumptions, the cloud absorption coefficients and mean radiating temperature, used in the physical or statistical inversion equation, were determined from the measurements. It was shown that significant improvement on radiometric measurements of the integrated liquid water can be gained with this method.
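The linear statistical inversion described above amounts to regression coefficients trained on radiosonde-derived (state, brightness-temperature) pairs. A scalar sketch shows the structure; the toy linear forward model and variable names are ours, not the thesis's:

```python
def train_linear_retrieval(xs, ys):
    """Linear statistical (regression) retrieval coefficient:
    x_hat = mean(x) + c * (y - mean(y)),  c = cov(x, y) / var(y).
    In the full problem x and y are vectors and c becomes a matrix of
    inversion coefficients, but the training principle is the same."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((y - my) ** 2 for y in ys)
    return mx, my, cov / var

# synthetic "radiosonde" training pairs: atmospheric state x (e.g.
# integrated water vapor) and brightness temperature y from a toy
# linear forward model y = 2x + 0.5 (standing in for radiative transfer)
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x + 0.5 for x in xs]
mx, my, c = train_linear_retrieval(xs, ys)
retrieve = lambda y: mx + c * (y - my)
```

When measurement noise is added to y, the regression coefficient shrinks toward zero and the retrieval relaxes toward the climatological mean mx, which is exactly the behavior reported above for the temperature-profile channels.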
An approach to quantum-computational hydrologic inverse analysis
O'Malley, Daniel
2018-05-02
Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.
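Quantum annealers minimize quadratic functions of binary variables (QUBO form), so a least-squares inverse problem must first be rewritten that way. The abstract does not give the actual formulation used on the D-Wave 2X, so the sketch below shows only the generic idea: expand a tiny binary misfit into QUBO coefficients, with exhaustive enumeration standing in for the annealer.

```python
from itertools import product

def qubo_from_lsq(A, d):
    """Expand the misfit ||A q - d||^2 over binary q into QUBO coefficients
    Q[i][j], so that the misfit equals sum_ij Q[i][j] q_i q_j + const.
    The linear term folds into the diagonal because q_i^2 = q_i."""
    n = len(A[0])
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            Q[i][j] += sum(row[i] * row[j] for row in A)   # quadratic terms
        Q[i][i] += -2.0 * sum(row[i] * dk for row, dk in zip(A, d))
    return Q

def brute_force_anneal(Q):
    # exhaustive stand-in for a quantum annealer on a tiny problem
    n = len(Q)
    energy = lambda q: sum(Q[i][j] * q[i] * q[j]
                           for i in range(n) for j in range(n))
    return min(product((0, 1), repeat=n), key=energy)

# toy 1D "aquifer": 3 binary permeability cells, linear forward model,
# data generated from the true configuration q = (1, 0, 1)
A = [[1.0, 0.5, 0.0], [0.0, 1.0, 0.5], [0.5, 0.0, 1.0]]
truth = (1, 0, 1)
d = [sum(a * t for a, t in zip(row, truth)) for row in A]
q_hat = brute_force_anneal(qubo_from_lsq(A, d))
# q_hat recovers the true binary field (1, 0, 1)
```

Real annealing hardware additionally requires embedding the QUBO onto the device's qubit-connectivity graph, which is one of the practical limits on problem size mentioned above.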
Coupling of Large Amplitude Inversion with Other States
NASA Astrophysics Data System (ADS)
Pearson, John; Yu, Shanshan
2016-06-01
The coupling of a large-amplitude motion with a small-amplitude vibration remains one of the least well characterized problems in molecular physics. Molecular inversion poses a few unique and not intuitively obvious challenges to the large-amplitude motion problem. In spite of several decades of theoretical work, numerous challenges persist in the calculation of transition frequencies and, more importantly, intensities. The most challenging aspect of this problem is that the inversion coordinate is a unique function of the overall vibrational state, including both the large- and small-amplitude modes. As a result, the r-axis system and the meaning of the K quantum number in the rotational basis set are unique to each vibrational state of large- or small-amplitude motion. This unfortunate reality has profound consequences for the calculation of intensities and the coupling of nearly degenerate vibrational states. The cases of NH3 inversion and of inversion through a plane of symmetry in alcohols will be examined to find a general path forward.
Marine magnetotelluric inversion with an unstructured tetrahedral mesh
NASA Astrophysics Data System (ADS)
Usui, Yoshiya; Kasaya, Takafumi; Ogawa, Yasuo; Iwamoto, Hisanori
2018-05-01
The finite element method using an unstructured tetrahedral mesh is one of the most effective methods for the three-dimensional modelling of marine magnetotelluric data, which are strongly affected by bathymetry, because it enables us to incorporate both small-scale and regional-scale bathymetry into a computational mesh with a practical number of elements. The authors applied a three-dimensional inversion scheme using a mesh of this type to marine magnetotelluric problems for the first time and verified its applicability. Forward calculations for two bathymetry models demonstrated that the results obtained with an unstructured tetrahedral mesh are close to the reference solutions. To evaluate the forward calculation results, we developed a general TM-mode analytical formulation for a two-dimensional sinusoidal topography. Moreover, synthetic inversion tests confirmed that a three-dimensional inversion scheme with an unstructured tetrahedral mesh enables us to recover the subseafloor resistivity structure properly, even for a model including a land-sea boundary as well as seafloor undulations. The verified inversion scheme was subsequently applied to a set of marine magnetotelluric data observed around the Iheya North Knoll in the middle Okinawa Trough. Three-dimensional modelling using a mesh with precise bathymetry demonstrated that the data observed around the Iheya North Knoll are strongly affected by bathymetry, especially by the sea-depth differences between the depression of the trough and the shallow East China Sea. The estimated resistivity structure under the knoll is characterized by a conductive surface layer underlain by a resistive layer. The conductive layer implies permeable pelagic/hemi-pelagic sediments, which are consistent with a previous seismological study. Furthermore, the conductive layer has a resistive part immediately below the knoll, which is regarded as the consolidated magma intrusion that formed the knoll. 
In addition, at a depth of 10 km, we found that the resistor underneath the knoll extends to the southeast, implying that the subseafloor under the Volcanic Arc Migration Phenomenon (VAMP) area is more resistive than its surroundings due to the presence of consolidated magma.
NASA Astrophysics Data System (ADS)
Khachaturov, R. V.
2014-06-01
A mathematical model of X-ray reflection and scattering by multilayered nanostructures in the quasi-optical approximation is proposed. X-ray propagation and the electric field distribution inside the multilayered structure are considered with allowance for refraction, which is taken into account via the second derivative with respect to the depth of the structure. This model is used to demonstrate the possibility of solving inverse problems in order to determine the characteristics of irregularities not only over the depth (as in the one-dimensional problem) but also over the length of the structure. An approximate combinatorial method for system decomposition and composition is proposed for solving the inverse problems.
A Novel Discrete Optimal Transport Method for Bayesian Inverse Problems
NASA Astrophysics Data System (ADS)
Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.
2017-12-01
We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.
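The AET method computes exact transport maps by solving linear programs with matrix-scaling methods. A compact way to illustrate the matrix-scaling idea is the Sinkhorn iteration for entropy-regularized discrete optimal transport; the sketch below is illustrative only and is not the authors' implementation, and the particle values, weights, and regularization strength `eps` are assumptions.

```python
import numpy as np

def sinkhorn(cost, mu, nu, eps=0.5, iters=500):
    """Entropy-regularized discrete optimal transport via matrix scaling.

    cost : (m, n) pairwise cost matrix between source and target particles
    mu, nu : probability weights of the two particle sets (each sums to 1)
    Returns a transport plan P whose row sums match mu and column sums nu.
    """
    K = np.exp(-cost / eps)               # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)                # scale columns toward nu
        u = mu / (K @ v)                  # scale rows toward mu
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 1))               # "prior" particle locations
y = rng.normal(loc=1.0, size=(6, 1))      # "posterior" particle locations
C = (x - y.T) ** 2                        # squared-distance cost
C /= C.max()                              # normalize costs for a stable kernel
w = np.full(6, 1.0 / 6.0)
P = sinkhorn(C, w, w)                     # rows index prior, columns posterior
```

The resulting plan `P` plays the role of a (regularized) transport map: multiplying prior samples through the row-normalized plan moves them toward the posterior particle set.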
Towards quantifying uncertainty in Greenland's contribution to 21st century sea-level rise
NASA Astrophysics Data System (ADS)
Perego, M.; Tezaur, I.; Price, S. F.; Jakeman, J.; Eldred, M.; Salinger, A.; Hoffman, M. J.
2015-12-01
We present recent work towards developing a methodology for quantifying uncertainty in Greenland's 21st century contribution to sea-level rise. While we focus on uncertainties associated with the optimization and calibration of the basal sliding parameter field, the methodology is largely generic and could be applied to other (or multiple) sets of uncertain model parameter fields. The first step in the workflow is the solution of a large-scale, deterministic inverse problem, which minimizes the mismatch between observed and computed surface velocities by optimizing the two-dimensional coefficient field in a linear-friction sliding law. We then expand the deviation in this coefficient field from its estimated "mean" state using a reduced basis of Karhunen-Loeve Expansion (KLE) vectors. A Bayesian calibration is used to determine the optimal coefficient values for this expansion. The prior for the Bayesian calibration can be computed using the Hessian of the deterministic inversion or using an exponential covariance kernel. The posterior distribution is then obtained using Markov Chain Monte Carlo run on an emulator of the forward model. Finally, the uncertainty in the modeled sea-level rise is obtained by performing an ensemble of forward propagation runs. We present and discuss preliminary results obtained using a moderate-resolution model of the Greenland ice sheet. As demonstrated in previous work, the primary difficulty in applying the complete workflow to realistic, high-resolution problems is that the effective dimension of the parameter space is very large.
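The reduced-basis step described above can be made concrete with a small sketch: build a covariance on a 1-D grid, take its leading eigenvectors as the Karhunen-Loeve basis, and represent a field deviation by a handful of coefficients. The exponential kernel, grid, correlation length, and mode count below are illustrative assumptions, not values from the study.

```python
import numpy as np

# 1-D grid standing in for the basal-sliding coefficient field
x = np.linspace(0.0, 1.0, 200)
ell, sigma2 = 0.2, 1.0                      # correlation length and variance (assumed)
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

# Karhunen-Loeve basis: eigenpairs of the covariance, largest first
vals, vecs = np.linalg.eigh(C)
vals, vecs = vals[::-1], vecs[:, ::-1]

k = 10                                      # retained modes
energy = vals[:k].sum() / vals.sum()        # fraction of variance captured

# One realization of the field deviation from its "mean" state:
# the k coefficients xi are what a Bayesian calibration would sample
rng = np.random.default_rng(1)
xi = rng.normal(size=k)
field = vecs[:, :k] @ (np.sqrt(vals[:k]) * xi)
```

The calibration then works in the k-dimensional coefficient space rather than the 200-dimensional grid space, which is the point of the reduced basis.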
Seismic waveform inversion best practices: regional, global and exploration test cases
NASA Astrophysics Data System (ADS)
Modrak, Ryan; Tromp, Jeroen
2016-09-01
Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Besides questions of one strategy or another, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.
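For reference, the limited-memory BFGS direction the comparison favors can be computed with the standard two-loop recursion. The sketch below applies it to a small quadratic stand-in for a misfit function with an exact line search; the matrix, memory length, and stopping rule are illustrative assumptions, not details from the study.

```python
import numpy as np

def lbfgs_direction(g, s_hist, y_hist):
    """Standard two-loop recursion: apply the implicit L-BFGS
    inverse-Hessian approximation to the gradient g."""
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_hist), reversed(y_hist)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if s_hist:                                  # initial scaling from newest pair
        s, y = s_hist[-1], y_hist[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_hist, y_hist), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return -q                                   # descent direction

# Quadratic misfit stand-in: f(m) = 0.5 m^T A m - b^T m, gradient A m - b
A = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 1.0, 1.0])
m = np.zeros(3)
s_hist, y_hist = [], []
for _ in range(20):
    g = A @ m - b
    if np.linalg.norm(g) < 1e-10:
        break
    p = lbfgs_direction(g, s_hist[-5:], y_hist[-5:])   # memory of 5 pairs
    step = -(g @ p) / (p @ (A @ p))             # exact line search on a quadratic
    m_new = m + step * p
    s_hist.append(m_new - m)
    y_hist.append(A @ (m_new - m))              # gradient change for this pair
    m = m_new
```

On a quadratic with exact line searches this behaves like conjugate gradients and converges in a handful of iterations; in waveform inversion the line search is inexact, which is precisely why the abstract stresses its implementation details.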
Empirical investigation into depth-resolution of Magnetotelluric data
NASA Astrophysics Data System (ADS)
Piana Agostinetti, N.; Ogaya, X.
2017-12-01
We investigate the depth-resolution of MT data by comparing reconstructed 1D resistivity profiles with measured resistivity and lithostratigraphy from borehole data. Inversion of MT data has been widely used to reconstruct the 1D fine-layered resistivity structure beneath an isolated magnetotelluric (MT) station. Uncorrelated noise is generally assumed to be associated with MT data. However, wrong assumptions about error statistics have been shown to strongly bias the results obtained in geophysical inversions; in particular, the number of layers resolved at depth strongly depends on the error statistics. In this study, we applied a trans-dimensional McMC algorithm to reconstruct the 1D resistivity profile near the location of a 1500 m-deep borehole, using MT data. We solve the MT inverse problem imposing different models for the error statistics associated with the MT data. Following a Hierarchical Bayes approach, we also invert for the hyper-parameters associated with each error-statistics model. Preliminary results indicate that assuming uncorrelated noise leads to a larger number of resolved layers than expected from the retrieved lithostratigraphy. Moreover, inversion of synthetic resistivity data computed from the "true" resistivity stratification measured along the borehole shows that a consistent number of resistivity layers can be obtained using a Gaussian model for the error statistics with substantial correlation length.
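The role of the error-statistics model can be illustrated with a small sketch: a Gaussian log-likelihood whose covariance carries an exponential correlation length, the kind of hyper-parameter a Hierarchical Bayes scheme would sample alongside the model. The covariance form, sizes, and values below are assumptions for illustration, not the study's settings.

```python
import numpy as np

def correlated_loglike(residual, sigma, ell, times):
    """Gaussian log-likelihood with exponentially correlated noise.

    sigma : noise standard deviation (hyper-parameter)
    ell   : correlation length, same units as `times` (hyper-parameter)
    """
    dt = np.abs(times[:, None] - times[None, :])
    C = sigma ** 2 * np.exp(-dt / ell)          # noise covariance matrix
    sign, logdet = np.linalg.slogdet(C)
    alpha = np.linalg.solve(C, residual)        # C^{-1} r without explicit inverse
    n = len(residual)
    return -0.5 * (residual @ alpha + logdet + n * np.log(2 * np.pi))

t = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(3)
r = rng.normal(scale=0.1, size=50)              # a white (uncorrelated) residual
ll_white = correlated_loglike(r, 0.1, 1e-6, t)  # ell -> 0 recovers the iid case
ll_corr = correlated_loglike(r, 0.1, 0.3, t)    # substantial correlation length
```

Changing `ell` reweights how much independent information the data carry, which is what drives the different layer counts the abstract reports.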
NASA Astrophysics Data System (ADS)
Zhang, Hua; He, Zhen-Hua; Li, Ya-Lin; Li, Rui; He, Guang-Ming; Li, Zhong
2017-06-01
Multi-wave exploration is an effective means of improving precision in the exploration and development of complex oil and gas reservoirs that are dense and have low permeability. However, converted-wave data are characterized by a low signal-to-noise ratio and low resolution, because conventional deconvolution technology is easily limited by the available frequency band, leaving limited scope for improving its resolution. Spectral inversion techniques can identify thin layers down to λ/8, and this breakthrough beyond the band-range limit has greatly improved seismic resolution. The difficulty with this technology is how to use a stable inversion algorithm to obtain a high-precision reflection coefficient, and then to use this reflection coefficient to reconstruct broadband data for processing. In this paper, we focus on how to improve the vertical resolution of the converted PS-wave for multi-wave data processing. Based on previous research, we propose a least-squares inversion algorithm with a total variation constraint, which uses total variation as a priori information to solve under-determined problems, thereby improving the accuracy and stability of the inversion. We simulate a Gaussian-fitted amplitude spectrum to obtain a broadband wavelet, which we then use to reconstruct a higher-resolution converted wave. We successfully apply the proposed inversion technology to the processing of high-resolution data from the Penglai region to obtain higher-resolution converted-wave data, which we also verify in a theoretical test. Improving the resolution of converted PS-wave data will provide more accurate data for subsequent velocity inversion and the extraction of reservoir reflection information.
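The total-variation-constrained least-squares idea can be sketched as follows, using a smoothed TV penalty minimized by plain gradient descent. The paper's actual algorithm is not specified here; the band-limiting operator, penalty weight, and smoothing parameter are all assumptions of this sketch.

```python
import numpy as np

def tv_ls_invert(G, d, lam=0.05, eps=1e-4, iters=3000):
    """Minimize ||G m - d||^2 + lam * sum_i sqrt((Dm)_i^2 + eps) by gradient descent.

    The smoothed total-variation term promotes a blocky (sparse-gradient)
    reflectivity-like solution to the under-determined system G m = d.
    """
    n = G.shape[1]
    D = np.diff(np.eye(n), axis=0)                  # first-difference operator
    # Step size from a Lipschitz bound: 2||G||^2 plus the smoothed-TV curvature
    step = 1.0 / (2 * np.linalg.norm(G, 2) ** 2 + 4 * lam / np.sqrt(eps))
    m = np.zeros(n)
    for _ in range(iters):
        Dm = D @ m
        grad = 2 * G.T @ (G @ m - d) + lam * D.T @ (Dm / np.sqrt(Dm ** 2 + eps))
        m -= step * grad
    return m

# Blocky "true" reflectivity-like model seen through a smoothing operator
n = 60
m_true = np.zeros(n)
m_true[20:40] = 1.0
idx = np.arange(n)
G = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)   # band-limiting blur
d = G @ m_true
m_rec = tv_ls_invert(G, d, lam=0.05)
```

The TV term is what keeps the recovered model blocky instead of smeared, which is the stability property the abstract claims for the constrained inversion.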
On domain symmetry and its use in homogenization
Barbarosie, Cristian A.; Tortorelli, Daniel A.; Watts, Seth E.
2017-03-08
The present study focuses on solving partial differential equations in domains exhibiting symmetries and periodic boundary conditions for the purpose of homogenization. We show in a systematic manner how the symmetry can be exploited to significantly reduce the complexity of the problem and the computational burden. This is especially relevant in inverse problems, when one needs to solve the partial differential equation (the primal problem) many times in an optimization algorithm. The main motivation of our study is inverse homogenization used to design architected composite materials with novel properties, which are being fabricated at ever increasing rates thanks to recent advances in additive manufacturing. For example, one may optimize the morphology of a two-phase composite unit cell to achieve isotropic homogenized properties with maximal bulk modulus and minimal Poisson ratio. Typically, the isotropy is enforced by applying constraints to the optimization problem. However, in two dimensions, one can alternatively optimize the morphology of an equilateral triangle and then rotate and reflect the triangle to form a space-filling D3-symmetric hexagonal unit cell that necessarily exhibits isotropic homogenized properties. One can further use this D3 symmetry to reduce the computational expense by performing the “unit strain” periodic boundary condition simulations on the single triangle symmetry sector rather than the sixfold larger hexagon. In this paper we use group representation theory to derive the necessary periodic boundary conditions on the symmetry sectors of unit cells. The developments are done in a general setting and specialized to the two-dimensional dihedral symmetries of the abelian D2 (orthotropic) square unit cell and the nonabelian D3 (trigonal) hexagonal unit cell.
We then demonstrate how this theory can be applied by evaluating the homogenized properties of a two-phase planar composite over the triangle symmetry sector of a D3-symmetric hexagonal unit cell.
NASA Astrophysics Data System (ADS)
Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.
2018-06-01
Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov-chain-Monte-Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution; or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
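A minimal sketch of the projection idea, built on synthetic data: model-error realizations fill a dictionary, the K entries nearest the current residual define an orthogonal basis via an SVD, and the residual's component in that basis is treated as model error and removed. The sizes, the subspace construction, and the value of k are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def remove_model_error(residual, dictionary, k=5):
    """Project out the model-error component of an MCMC residual.

    dictionary : (N, n) array of previously computed model-error realizations.
    The K entries nearest the residual define, via an SVD, an orthogonal
    basis; the residual's projection onto that basis is removed.
    """
    dists = np.linalg.norm(dictionary - residual, axis=1)
    nearest = dictionary[np.argsort(dists)[:k]]          # K nearest neighbours
    U, _, _ = np.linalg.svd(nearest.T, full_matrices=False)
    model_error = U @ (U.T @ residual)                   # orthogonal projection
    return residual - model_error

rng = np.random.default_rng(4)
n = 30
basis = rng.normal(size=(3, n))                          # "true" model-error subspace
D = rng.normal(size=(100, 3)) @ basis                    # dictionary of realizations
r = rng.normal(size=3) @ basis + 0.01 * rng.normal(size=n)  # error + small noise
r_clean = remove_model_error(r, D, k=5)
```

In the actual method the dictionary grows during the MCMC run, so the basis adapts as better model-error examples accumulate; here the dictionary is fixed for brevity.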
Control and System Theory, Optimization, Inverse and Ill-Posed Problems
1988-09-14
AFOSR-87-0350, 1987-1988. The report describes a considerable variety of research investigations within the grant areas: control and system theory, optimization, and inverse and ill-posed problems.
A constrained reconstruction technique of hyperelasticity parameters for breast cancer assessment
NASA Astrophysics Data System (ADS)
Mehrabian, Hatef; Campbell, Gordon; Samani, Abbas
2010-12-01
In breast elastography, breast tissue usually undergoes large compression, resulting in significant geometric and structural changes. This implies that breast elastography is associated with nonlinear tissue behavior. In this study, an elastography technique is presented and an inverse problem formulation is proposed to reconstruct parameters characterizing tissue hyperelasticity. Such parameters can potentially be used for tumor classification. This technique can also have other important clinical applications, such as measuring normal tissue hyperelastic parameters in vivo; such parameters are essential in planning and conducting computer-aided interventional procedures. The proposed parameter reconstruction technique uses constrained iterative inversion and can be viewed as an inverse problem; to solve it, we used a nonlinear finite element model of the corresponding forward problem. In this research, we applied the Veronda-Westmann, Yeoh, and polynomial models to describe tissue hyperelasticity. To validate the proposed technique, we conducted studies involving numerical and tissue-mimicking phantoms. The numerical phantom consisted of a hemisphere connected to a cylinder, while we constructed the tissue-mimicking phantom from polyvinyl alcohol subjected to freeze-thaw cycles so that it exhibits nonlinear mechanical behavior. Both phantoms consisted of three soft-tissue types mimicking adipose tissue, fibroglandular tissue, and a tumor. The results of the simulations and experiments show the feasibility of accurate reconstruction of tumor tissue hyperelastic parameters using the proposed method. In the numerical phantom, all hyperelastic parameters corresponding to the three models were reconstructed with less than 2% error. With the tissue-mimicking phantom, we were able to reconstruct the ratios of the hyperelastic parameters reasonably accurately.
Compared to the uniaxial test results, the average errors of the reconstructed parameter ratios for the inclusion relative to the middle and external layers were 13% and 9.6%, respectively. Given that the parameter ratios of abnormal tissues to normal ones range from three times to more than ten times, this accuracy is sufficient for tumor classification.
NASA Astrophysics Data System (ADS)
Babier, Aaron; Boutilier, Justin J.; Sharpe, Michael B.; McNiven, Andrea L.; Chan, Timothy C. Y.
2018-05-01
We developed and evaluated a novel inverse optimization (IO) model to estimate objective function weights from clinical dose-volume histograms (DVHs). These weights were used to solve a treatment planning problem to generate ‘inverse plans’ that had similar DVHs to the original clinical DVHs. Our methodology was applied to 217 clinical head and neck cancer treatment plans that were previously delivered at Princess Margaret Cancer Centre in Canada. Inverse plan DVHs were compared to the clinical DVHs using objective function values, dose-volume differences, and frequency of clinical planning criteria satisfaction. Median differences between the clinical and inverse DVHs were within 1.1 Gy. For most structures, the difference in clinical planning criteria satisfaction between the clinical and inverse plans was at most 1.4%. For structures where the two plans differed by more than 1.4% in planning criteria satisfaction, the difference in average criterion violation was less than 0.5 Gy. Overall, the inverse plans were very similar to the clinical plans. Compared with a previous inverse optimization method from the literature, our new inverse plans typically satisfied the same or more clinical criteria, and had consistently lower fluence heterogeneity. Overall, this paper demonstrates that DVHs, which are essentially summary statistics, provide sufficient information to estimate objective function weights that result in high quality treatment plans. However, as with any summary statistic that compresses three-dimensional dose information, care must be taken to avoid generating plans with undesirable features such as hotspots; our computational results suggest that such undesirable spatial features were uncommon. Our IO-based approach can be integrated into the current clinical planning paradigm to better initialize the planning process and improve planning efficiency. 
It could also be embedded in a knowledge-based planning or adaptive radiation therapy framework to automatically generate a new plan given a predicted or updated target DVH, respectively.
Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow
NASA Astrophysics Data System (ADS)
Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar
2014-09-01
We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially-varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. The adjoint (linearized) Stokes equations, which are characterized by a 4th-order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton’s method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially-varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable.
Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.
Deconvolution using a neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehman, S.K.
1990-11-15
Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.
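The note's comparison can be sketched (without the neural network) by forming the convolution matrix explicitly and deconvolving with both a Moore-Penrose pseudo-inverse and an LMS-style stochastic gradient; the kernel, signal, and noise level below are assumptions of this sketch.

```python
import numpy as np

# Convolution of input x with kernel h written as a matrix-vector product y = H x
n = 40
h = np.array([1.0, 0.6, 0.3])                 # blurring kernel (assumed)
H = np.zeros((n + len(h) - 1, n))
for i in range(n):
    H[i:i + len(h), i] = h                    # each column is a shifted kernel

rng = np.random.default_rng(5)
x_true = np.zeros(n)
x_true[[8, 20, 31]] = [1.0, -0.7, 0.5]        # sparse spike train
y = H @ x_true + 1e-3 * rng.normal(size=n + len(h) - 1)

# Deconvolution 1: Moore-Penrose pseudo-inverse
x_pinv = np.linalg.pinv(H) @ y

# Deconvolution 2: LMS, i.e. stochastic gradient on ||H x - y||^2,
# touching one random equation (row) per step
x_lms = np.zeros(n)
mu = 0.2                                      # LMS step size (assumed)
for _ in range(10000):
    i = rng.integers(len(y))
    err = H[i] @ x_lms - y[i]
    x_lms -= mu * err * H[i]
```

Both estimates recover the spike train because this kernel's spectrum has no zeros; the neural-network variant in the note effectively learns a similar inverse mapping.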
Genetics Home Reference: Koolen-de Vries syndrome
... of Koolen-de Vries syndrome, has undergone an inversion. An inversion involves two breaks in a chromosome; the resulting ... lineage have no health problems related to the inversion. However, genetic material can be lost or duplicated ...
Algorithm for lens calculations in the geometrized Maxwell theory
NASA Astrophysics Data System (ADS)
Kulyabov, Dmitry S.; Korolkova, Anna V.; Sevastianov, Leonid A.; Gevorkyan, Migran N.; Demidova, Anastasia V.
2018-04-01
The geometric approach in optics is nowadays often used to determine media parameters from the propagation paths of rays, because in that setting it is a direct problem. The inverse problem in the framework of geometrized optics, however, is usually given little attention. The aim of this work is to demonstrate the proposed algorithm, within the geometrized approach to optics, for finding the propagation path of electromagnetic radiation as a function of the medium parameters. The methods of differential geometry are used to construct effective metrics for isotropic and anisotropic media. In the effective metric space, ray trajectories are obtained in the form of geodesic curves. The introduced algorithm is applied to well-known objects, the Maxwell and Luneburg lenses. The similarity of results obtained by the classical and geometric approaches is demonstrated.
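As a concrete illustration of ray trajectories as geodesic-like curves, the sketch below integrates the isotropic-medium ray equation r'' = ∇(n²)/2 (with the arc length stretched by n) through a Luneburg lens, where n² = 2 − |r|² inside the unit ball. The RK4 integrator and step counts are choices of this sketch, not the paper's algorithm; the focusing property checked at the end, however, is the lens's classical behavior.

```python
import numpy as np

def trace_ray(r0, u0, n2_grad, sigma_max, steps=2000):
    """Trace a ray by integrating r'' = grad(n^2)/2 with classical RK4.

    sigma is the stretched ray parameter (ds = n * dsigma); in this
    parameter the ray equation for an isotropic medium is second order.
    """
    h = sigma_max / steps
    r, v = np.array(r0, float), np.array(u0, float)
    for _ in range(steps):
        k1r, k1v = v, n2_grad(r) / 2
        k2r, k2v = v + h / 2 * k1v, n2_grad(r + h / 2 * k1r) / 2
        k3r, k3v = v + h / 2 * k2v, n2_grad(r + h / 2 * k2r) / 2
        k4r, k4v = v + h * k3v, n2_grad(r + h * k3r) / 2
        r = r + h / 6 * (k1r + 2 * k2r + 2 * k3r + k4r)
        v = v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return r, v

# Luneburg lens: n^2 = 2 - |r|^2 inside the unit ball, so grad(n^2) = -2r
# and the ray equation reduces to a harmonic oscillator r'' = -r.
luneburg = lambda r: -2 * r

y0 = 0.4                                      # impact parameter of a parallel ray
entry = np.array([-np.sqrt(1 - y0 ** 2), y0]) # entry point on the unit circle
# At entry n = 1, so the stretched initial velocity equals the ray direction
r_exit, _ = trace_ray(entry, np.array([1.0, 0.0]), luneburg, np.pi / 2)
# Every parallel ray should exit through the focal point (1, 0) on the rim
```

In the geometrized picture the same trajectory arises as a geodesic of the effective metric n² δᵢⱼ; the explicit ray equation above is just the coordinate form of that geodesic equation.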
Regularized magnetotelluric inversion based on a minimum support gradient stabilizing functional
NASA Astrophysics Data System (ADS)
Xiang, Yang; Yu, Peng; Zhang, Luolei; Feng, Shaokong; Utada, Hisashi
2017-11-01
Regularization is used to solve the ill-posed problem of magnetotelluric inversion, usually by adding a stabilizing functional to the objective functional so that a stable solution can be obtained. Among the possible stabilizing functionals, smoothing constraints are most commonly used; they produce spatially smooth inversion results. However, in some cases, focused imaging of a sharp electrical boundary is necessary. Although past works have proposed functionals that may be suitable for imaging a sharp boundary, such as the minimum support and minimum gradient support (MGS) functionals, they involve some difficulties and limitations in practice. In this paper, we propose a minimum support gradient (MSG) stabilizing functional as another possible choice of focusing stabilizer. In this approach, we calculate the gradient of the minimum-support model stabilizing functional, which affects both the stability and the sharp-boundary focus of the inversion. We then apply the discrete weighted matrix form of each stabilizing functional to build a unified form of the objective functional, allowing us to perform regularized inversion with a variety of stabilizing functionals in the same framework. By comparing one-dimensional and two-dimensional synthetic inversion results obtained using the MSG stabilizing functional with those obtained using other stabilizing functionals, we demonstrate that the MSG results not only clearly image a sharp geoelectrical interface but are also quite stable and robust. Overall good performance in terms of both data fitting and model recovery suggests that this stabilizing functional is effective and useful in practical applications.
NASA Astrophysics Data System (ADS)
Bobodzhanov, A. A.; Safonov, V. F.
2016-04-01
We consider an algorithm for constructing asymptotic solutions regularized in the sense of Lomov (see [1], [2]). We show that such problems can be reduced to integro-differential equations with inverse time. In contrast to known papers devoted to this topic (see, for example, [3]), we study a fundamentally new case, characterized by the absence in the differential part of a linear operator that isolates, in the asymptotics of the solution, constituents described by boundary functions, and by the fact that the integral operator has a kernel with high-order diagonal degeneration. Furthermore, the spectrum of the regularization operator A(t) may contain purely imaginary eigenvalues, which causes difficulties in applying the methods for constructing asymptotic solutions proposed in the monograph [3]. Based on an analysis of the principal term of the asymptotics, we isolate a class of inhomogeneities and initial data for which the exact solution of the original problem tends to the limit solution (as ε → +0) on the entire time interval under consideration, including the boundary-layer zone (that is, we solve the so-called initialization problem). The paper is of a theoretical nature and is designed to lead to a greater understanding of problems in the theory of singular perturbations. Applications may be found in various applied areas where models described by integro-differential equations are used (for example, in elasticity theory, the theory of electrical circuits, and so on).
Monitoring of Cyclic Steam Stimulation by Inversion of Surface Tilt Measurements
NASA Astrophysics Data System (ADS)
Maharramov, M.; Zoback, M. D.
2014-12-01
Temperature and pressure changes associated with the cyclic steam stimulation (CSS) used in heavy oil production from sands are accompanied by significant deformation. Inversion of geomechanical data may provide a potentially powerful reservoir monitoring tool where geomechanical effects are significant. Induced pore pressure changes can be inverted from measurable surface deformations by solving an inverse problem of poroelasticity. In this work, we apply this approach to estimate pore pressure changes from surface tilt measurements at a heavy oil reservoir undergoing cyclic steam stimulation. Steam was injected from November 2007 through January 2008, and surface tilt measurements were collected from 25 surface tilt stations during this period. The injection ran in two overlapping phases: Phase 1 ran from the beginning of the injection through mid-December, and Phase 2 overlapped with Phase 1 and ran through the beginning of January. During Phase 1 steam was injected in the western part of the reservoir, followed by injection in the eastern part in Phase 2. The pore pressure evolution was inverted from daily tilt measurements using regularized constrained least-squares fitting; the results are shown in the plot. Estimated induced pore pressure change (color scale), observed daily incremental tilts (green arrows) and modeled daily incremental tilts (red arrows) are shown in three panels corresponding to two and five weeks of injection, and to the end of the injection period. DGPS measurements available for a single location were used as an additional inversion constraint. The results indicate that the pore pressure increase in the reservoir follows the same pattern as the steam injection, from west to east. This qualitative behaviour is independent of the amount of regularization, indirectly validating our inversion approach. Patches of lower pressure appear to be stable with regard to regularization and may provide valuable insight into the efficiency of steam injection.
Inversion of pore pressure (and surface deformation) from tilts in this case is non-unique, and the DGPS measurement provided an important additional constraint. The method can be applied to inverting pore pressure changes from InSAR observations, and the latter can be expected to reduce limitations due to noise in tilt measurements.
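The regularized constrained least-squares fit described above can be sketched generically: a linear operator maps patch pressure changes to station tilts, and a Tikhonov-regularized, nonnegativity-constrained problem is solved by projected gradient descent. The operator here is a random stand-in, not the study's poroelastic Green's functions, and all sizes and weights are assumptions.

```python
import numpy as np

def regularized_nnls(A, b, alpha=1e-3, iters=5000):
    """Minimize ||A x - b||^2 + alpha ||x||^2 subject to x >= 0
    by projected gradient descent (a generic stand-in for the
    study's regularized constrained least-squares inversion)."""
    L = 2 * (np.linalg.norm(A, 2) ** 2 + alpha)       # gradient Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2 * (A.T @ (A @ x - b) + alpha * x)
        x = np.maximum(x - grad / L, 0.0)             # project onto x >= 0
    return x

# Synthetic "tilt" operator: each of 25 stations contributes 2 tilt components,
# each sensing every pressure patch (values are illustrative, not physical)
rng = np.random.default_rng(6)
A = rng.normal(size=(50, 12))
x_true = np.maximum(rng.normal(size=12), 0.0)         # nonnegative pressure changes
b = A @ x_true + 0.01 * rng.normal(size=50)           # noisy tilt data
x_est = regularized_nnls(A, b)
```

Extra point constraints such as the DGPS measurement would enter as additional rows of `A` and `b`, which is how a single extra observation can suppress the non-uniqueness noted above.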
A stochastic approach for model reduction and memory function design in hydrogeophysical inversion
NASA Astrophysics Data System (ADS)
Hou, Z.; Kellogg, A.; Terry, N.
2009-12-01
Geophysical (e.g., seismic, electromagnetic, radar) techniques and statistical methods are essential for research related to subsurface characterization, including monitoring subsurface flow and transport processes, oil/gas reservoir identification, etc. For deep subsurface characterization such as reservoir petroleum exploration, seismic methods have been widely used. Recently, electromagnetic (EM) methods have drawn great attention in the area of reservoir characterization. However, considering the enormous computational demand corresponding to seismic and EM forward modeling, it is usually a big problem to have too many unknown parameters in the modeling domain. For shallow subsurface applications, the characterization can be very complicated considering the complexity and nonlinearity of flow and transport processes in the unsaturated zone. It is warranted to reduce the dimension of parameter space to a reasonable level. Another common concern is how to make the best use of time-lapse data with spatial-temporal correlations. This is even more critical when we try to monitor subsurface processes using geophysical data collected at different times. The normal practice is to get the inverse images individually. These images are not necessarily continuous or even reasonably related, because of the non-uniqueness of hydrogeophysical inversion. We propose to use a stochastic framework by integrating minimum-relative-entropy concept, quasi Monto Carlo sampling techniques, and statistical tests. The approach allows efficient and sufficient exploration of all possibilities of model parameters and evaluation of their significances to geophysical responses. The analyses enable us to reduce the parameter space significantly. 
The approach can be combined with Bayesian updating, allowing us to treat the updated ‘posterior’ pdf as a memory function, which stores all information accumulated to date about the distributions of soil/field attributes/properties. The memory function is then taken as a new prior, and samples are generated from it for further updating when more geophysical data become available. We applied this approach to deep oil reservoir characterization and to shallow subsurface flow monitoring. The model reduction approach reliably reduces the joint seismic/EM/radar inversion computational time to reasonable levels. Continuous inversion images are obtained using time-lapse data with the “memory function” applied in the Bayesian inversion.
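The memory-function idea described above can be sketched as sequential Bayesian updating on a discretized parameter grid. The following minimal example (all names and the synthetic survey data are illustrative assumptions, not taken from the paper) shows how each posterior is stored and reused as the prior for the next batch of time-lapse data:

```python
import numpy as np

# Discretized 1-D parameter (e.g., a soil property) on a grid.
grid = np.linspace(0.0, 1.0, 501)

def bayes_update(prior, likelihood):
    """Pointwise Bayes rule on the grid, renormalized to a valid pdf."""
    posterior = prior * likelihood
    return posterior / np.trapz(posterior, grid)

memory = np.ones_like(grid)  # flat prior over [0, 1]

# Two hypothetical time-lapse "surveys": Gaussian likelihoods around
# noisy observations of a true value near 0.6.
for obs, sigma in [(0.55, 0.15), (0.62, 0.10)]:
    like = np.exp(-0.5 * ((grid - obs) / sigma) ** 2)
    memory = bayes_update(memory, like)  # posterior becomes the new prior

estimate = grid[np.argmax(memory)]  # MAP estimate after both updates
```

Because the memory function is itself a pdf, each update sharpens it without reprocessing earlier surveys, which is what keeps successive inversion images consistent with one another.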
NASA Astrophysics Data System (ADS)
Horesh, L.; Haber, E.
2009-09-01
The ℓ1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application to inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
NASA Astrophysics Data System (ADS)
Wu, Jianping; Geng, Xianguo
2017-12-01
The inverse scattering transform of the coupled modified Korteweg-de Vries equation is studied by the Riemann-Hilbert approach. In the direct scattering process, the spectral analysis of the Lax pair is performed, from which a Riemann-Hilbert problem is established for the equation. In the inverse scattering process, by solving Riemann-Hilbert problems corresponding to the reflectionless cases, three types of multi-soliton solutions are obtained. The multi-soliton classification is based on the zero structures of the Riemann-Hilbert problem. In addition, some figures are given to illustrate the soliton characteristics of the coupled modified Korteweg-de Vries equation.
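To make the general structure concrete (this is the standard template of the method, not the specific coupled system of the paper), a matrix Riemann-Hilbert problem in the inverse scattering transform asks for a sectionally analytic matrix function with a prescribed jump across the real axis:

```latex
M_+(x,t;\lambda) = M_-(x,t;\lambda)\, J(x,t;\lambda), \qquad \lambda \in \mathbb{R},
\qquad M(x,t;\lambda) \to I \ \text{as}\ \lambda \to \infty,
```

where $M_\pm$ are the boundary values of $M$ from above and below the axis and $J$ encodes the scattering data. In the reflectionless case the jump is trivial away from the discrete spectrum, and the problem reduces to a finite linear algebraic system whose solutions yield the multi-soliton formulas; the zero structure of the problem (simple zeros, higher-order zeros, or combinations) is what distinguishes the soliton types.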
NASA Astrophysics Data System (ADS)
Bogiatzis, P.; Ishii, M.; Davis, T. A.
2016-12-01
Seismic tomography inverse problems are among the largest high-dimensional parameter estimation tasks in Earth science. We show how combinatorics and graph theory can be used to analyze the structure of such problems and to effectively decompose them into smaller ones that can be solved efficiently by the least-squares method. In combination with recent high-performance direct sparse algorithms, this reduction in dimensionality allows for an efficient computation of the model resolution and covariance matrices using limited resources. Furthermore, we show that a new sparse singular value decomposition method can be used to obtain the complete spectrum of the singular values. This procedure provides the means for more objective regularization and further dimensionality reduction of the problem. We apply this methodology to a moderate-size, non-linear seismic tomography problem to image the structure of the crust and the upper mantle beneath Japan using local deep earthquakes recorded by the High Sensitivity Seismograph Network stations.
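The graph-theoretic decomposition described above can be sketched in a few lines: the sparsity pattern of the normal-equations matrix GᵀG defines a parameter-coupling graph, and each connected component of that graph yields an independent least-squares subproblem. The toy matrix below is an illustrative assumption, chosen to have two decoupled blocks:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from scipy.sparse.linalg import lsqr

# Toy tomographic matrix with two decoupled parameter blocks.
G = csr_matrix(np.array([
    [1., 2., 0., 0.],
    [3., 1., 0., 0.],
    [0., 0., 2., 1.],
    [0., 0., 1., 4.],
]))
d = np.array([5., 7., 8., 13.])

coupling = (G.T @ G) != 0                      # parameter-coupling graph
n_blocks, labels = connected_components(coupling, directed=False)

m = np.zeros(G.shape[1])
for b in range(n_blocks):
    cols = np.where(labels == b)[0]            # parameters in this component
    rows = np.unique(G[:, cols].nonzero()[0])  # data rows touching them
    sub = G[rows][:, cols]
    m[cols] = lsqr(sub, d[rows])[0]            # solve each block independently
```

Each subproblem is far smaller than the original, so resolution and covariance matrices can be assembled block by block with direct sparse solvers instead of factorizing the full system at once.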
NASA Astrophysics Data System (ADS)
Kamynin, V. L.; Bukharova, T. I.
2017-01-01
We prove estimates of stability with respect to perturbations of input data for the solutions of inverse problems for degenerate parabolic equations with unbounded coefficients. An important feature of these estimates is that their constants are written out explicitly in terms of the input data of the problem.