NASA Astrophysics Data System (ADS)
Holman, Benjamin R.
In recent years, revolutionary "hybrid" or "multi-physics" methods of medical imaging have emerged. By combining two or three different types of waves, these methods overcome limitations of classical tomography techniques and deliver otherwise unavailable, potentially life-saving diagnostic information. Thermoacoustic (and photoacoustic) tomography is the most developed multi-physics imaging modality. Thermo- and photoacoustic tomography require reconstructing the initial acoustic pressure in a body from time series of pressure measured on a surface surrounding the body. For the classical case of free-space wave propagation, various reconstruction techniques are well known. However, some novel measurement schemes place the object of interest between reflecting walls that form a de facto resonant cavity. In this case, known methods cannot be used. In chapter 2 we present a fast iterative reconstruction algorithm for measurements made at the walls of a rectangular reverberant cavity with a constant speed of sound. We prove the convergence of the iterations under a certain sufficient condition, and demonstrate the effectiveness and efficiency of the algorithm in numerical simulations. In chapter 3 we consider the more general problem of an arbitrarily shaped resonant cavity with a non-constant speed of sound and present the gradual time reversal method for computing solutions to the inverse source problem. It consists of solving the initial/boundary value problem for the wave equation backwards in time on the interval [0, T], with the Dirichlet boundary data multiplied by a smooth cutoff function. If T is sufficiently large, one obtains a good approximation to the initial pressure; in the limit of large T such an approximation converges (under certain conditions) to the exact solution.
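As a concrete (and purely illustrative) sketch of one ingredient of the gradual time reversal, the smooth cutoff that multiplies the Dirichlet data can be built from the standard C-infinity transition function. The function names and the transition width below are our own choices, not taken from the thesis.

```python
import numpy as np

def smooth_cutoff(t, T, width):
    """C-infinity cutoff: equals 1 for t <= T - width, 0 for t >= T,
    with a smooth monotone transition in between (standard exp-based bump)."""
    t = np.asarray(t, dtype=float)
    s = (t - (T - width)) / width          # 0 at start of transition, 1 at t = T
    s = np.clip(s, 0.0, 1.0)

    def f(x):
        # f(x) = exp(-1/x) for x > 0, and 0 for x <= 0; all derivatives vanish at 0.
        out = np.zeros_like(x)
        pos = x > 0
        out[pos] = np.exp(-1.0 / x[pos])
        return out

    return f(1.0 - s) / (f(1.0 - s) + f(s))

T, width = 2.0, 0.5
t = np.linspace(0.0, T, 201)
chi = smooth_cutoff(t, T, width)           # multiply the recorded boundary data by this
```

Multiplying the measured boundary data by such a cutoff before solving backwards in time avoids the abrupt truncation at t = T that would otherwise inject spurious energy into the reversed solution.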
Generalized emissivity inverse problem.
Ming, DengMing; Wen, Tao; Dai, XianXi; Dai, JiXin; Evenson, William E.
2002-04-01
Inverse problems have recently drawn considerable attention from the physics community due to their potential widespread applications [K. Chadan and P. C. Sabatier, Inverse Problems in Quantum Scattering Theory, 2nd ed. (Springer-Verlag, Berlin, 1989)]. An inverse emissivity problem that determines the emissivity g(nu) from measurements of only the total radiated power J(T) has recently been studied [Tao Wen, DengMing Ming, Xianxi Dai, Jixin Dai, and William E. Evenson, Phys. Rev. E 63, 045601(R) (2001)]. In this paper, a new type of generalized emissivity and transmissivity inverse (GETI) problem is proposed. The present problem differs from our previous work on inverse problems by allowing the unknown (emissivity) function g(nu) to be temperature dependent as well as frequency dependent. Based on published experimental information, we have developed an exact solution formula for this GETI problem. A universal function set suggested for numerical calculation is shown to be robust, making this inversion method practical and convenient for realistic calculations.
Inverse heat conduction problems
NASA Astrophysics Data System (ADS)
Orlande, Helcio Rangel Barreto
We present the solution of the following inverse problems: (1) Inverse Problem of Estimating Interface Conductance Between Periodically Contacting Surfaces; (2) Inverse Problem of Estimating Interface Conductance During Solidification via Conjugate Gradient Method; (3) Determination of the Reaction Function in a Reaction-Diffusion Parabolic Problem; and (4) Simultaneous Estimation of Thermal Diffusivity and Relaxation Time with Hyperbolic Heat Conduction Model. Also, we present the solution of a direct problem entitled: Transient Thermal Constriction Resistance in a Finite Heat Flux Tube. The Conjugate Gradient Method with Adjoint Equation was used in chapters 1-3. The more general function estimation approach was treated in these chapters. In chapter 1, we solve the inverse problem of estimating the timewise variation of the interface conductance between periodically contacting solids, under quasi-steady-state conditions. The present method is found to be more accurate than the B-Spline approach for situations involving small periods, which are the most difficult cases for inverse analysis. In chapter 2, we estimate the timewise variation of the interface conductance between casting and mold during the solidification of aluminum. The experimental apparatus used in this study is described. In chapter 3, we present the estimation of the reaction function in a one-dimensional parabolic problem. A comparison of the present function estimation approach with the parameter estimation technique, using B-Splines to approximate the reaction function, revealed that the use of function estimation reduces the computer time requirements. In chapter 4 we present a finite difference solution for the transient constriction resistance in a cylinder of finite length with a circular contact surface. A numerical grid generation scheme was used to concentrate grid points in the regions of high temperature gradients in order to reduce discretization errors. In chapter 6, we
Aneesur Rahman Prize: The Inverse Ising Problem
NASA Astrophysics Data System (ADS)
Swendsen, Robert
2014-03-01
Many methods are available for carrying out computer simulations of a model Hamiltonian to obtain thermodynamic information by generating a set of configurations. The inverse problem consists of recreating the parameters of the Hamiltonian, given a set of configurations. The problem arises in a variety of contexts, and there has been much interest recently in the inverse Ising problem, in which the configurations consist of Ising spins. I will discuss an efficient method for solving the problem and what it can tell us about the Sherrington-Kirkpatrick model.
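One widely used (approximate) solver for the inverse Ising problem is the naive mean-field inversion, which estimates the couplings from the inverse of the connected correlation matrix. The sketch below applies it to a tiny three-spin model with made-up weak couplings; it illustrates the shape of the problem, not the specific method discussed in the talk.

```python
import itertools
import numpy as np

# Ground-truth couplings of a tiny 3-spin Ising model (weak couplings so
# the mean-field approximation is accurate); no external fields.
J_true = np.array([[0.0,  0.10,  0.05],
                   [0.10, 0.0,  -0.08],
                   [0.05, -0.08, 0.0]])

# Exact Boltzmann statistics by enumerating all 2^3 spin configurations.
states = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
energies = -0.5 * np.einsum('si,ij,sj->s', states, J_true, states)
p = np.exp(-energies)
p /= p.sum()

m = p @ states                                           # magnetizations
C = np.einsum('s,si,sj->ij', p, states, states) - np.outer(m, m)

# Naive mean-field inverse Ising: J_est is -(C^{-1}) off the diagonal.
J_est = -np.linalg.inv(C)
np.fill_diagonal(J_est, 0.0)
```

With exact correlations the naive mean-field estimate recovers weak couplings to second-order accuracy; in practice C would be estimated from sampled configurations, adding statistical error on top.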
Boundary estimation problems arising in thermal tomography
NASA Technical Reports Server (NTRS)
Banks, H. T.; Kojima, Fumio; Winfree, W. P.
1989-01-01
Problems on the identification of two-dimensional spatial domains arising in the detection and characterization of structural flaws in materials are considered. For a thermal diffusion system with external boundary input, observations of the temperature on the surface are used in an output least squares approach. Parameter estimation techniques based on the method of mappings are discussed and approximation schemes are developed based on a finite element Galerkin approach. Theoretical convergence results for computational techniques are given and the results are applied to experimental data for the identification of flaws in the thermal testing of materials.
Inverse Problems of Thermoelectricity
NASA Astrophysics Data System (ADS)
Anatychuk, L. I.; Luste, O. J.; Kuz, R. V.; Strutinsky, M. N.
2011-05-01
Classical thermoelectricity is based on the use of the Seebeck and Thomson effects that occur in the near-contact areas between n- and p-type materials. A conceptually different approach to thermoelectric power converter design that is based on the law of thermoelectric induction of currents is also known. The efficiency of this approach has already been demonstrated by its first applications. More than 10 fundamentally new types of thermoelements were discovered with properties that cannot be achieved by thermocouple power converters. Therefore, further development of this concept is of practical interest. This paper provides a classification and theory for solving the inverse problems of thermoelectricity that form the basis for devising new thermoelement types. Computer methods for their solution for anisotropic and inhomogeneous media are elaborated. Regularities related to thermoelectric current excitation in anisotropic and inhomogeneous media are established. The possibility of obtaining eddy currents of a particular configuration through control of the temperature field and material parameters for the creation of new thermoelement types is demonstrated for three-dimensional (3D) models of anisotropic and inhomogeneous media.
Inverse problem in hydrogeology
NASA Astrophysics Data System (ADS)
Carrera, Jesús; Alcolea, Andrés; Medina, Agustín; Hidalgo, Juan; Slooten, Luit J.
2005-03-01
The state of the groundwater inverse problem is synthesized. Emphasis is placed on aquifer characterization, where modelers have to deal with conceptual model uncertainty (notably spatial and temporal variability), scale dependence, many types of unknown parameters (transmissivity, recharge, boundary conditions, etc.), nonlinearity, and often low sensitivity of state variables (typically heads and concentrations) to aquifer properties. Because of these difficulties, calibration cannot be separated from the modeling process, as is sometimes done in other fields. Instead, it should be viewed as one step in the process of understanding aquifer behavior. In fact, it is shown that current parameter estimation methods do not differ from each other in essence, though they may differ in computational details. It is argued that there is ample room for improvement in groundwater inversion: development of user-friendly codes, accommodation of variability through geostatistics, incorporation of geological information and different types of data (temperature, occurrence and concentration of isotopes, age, etc.), proper accounting of uncertainty, etc. Despite this, even with existing codes, automatic calibration facilitates enormously the task of modeling. Therefore, it is contended that its use should become standard practice.
Inverse scattering problems with multi-frequencies
NASA Astrophysics Data System (ADS)
Bao, Gang; Li, Peijun; Lin, Junshan; Triki, Faouzi
2015-09-01
This paper is concerned with computational approaches and mathematical analysis for solving inverse scattering problems in the frequency domain. The problems arise in a diverse set of scientific areas with significant industrial, medical, and military applications. In addition to nonlinearity, there are two common difficulties associated with the inverse problems: ill-posedness and limited resolution (diffraction limit). Due to the diffraction limit, for a given frequency, only a low spatial frequency part of the desired parameter can be observed from measurements in the far field. The main idea developed here is that if the reconstruction is restricted to only the observable part, then the inversion will become stable. The challenging task is how to design stable numerical methods for solving these inverse scattering problems inspired by the diffraction limit. Recently, novel recursive linearization based algorithms have been presented in an attempt to answer the above question. These methods require multi-frequency scattering data and proceed via a continuation procedure with respect to the frequency from low to high. The objective of this paper is to give a brief review of these methods, their error estimates, and the related mathematical analysis. More attention is paid to the inverse medium and inverse source problems. Numerical experiments are included to illustrate the effectiveness of these methods.
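The stabilizing role of restricting the reconstruction to the observable band can be illustrated with a toy deconvolution problem. The forward operator below is a crude low-pass stand-in for the diffraction limit, and all sizes, cutoffs, and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
m_true = np.sin(x) + 0.5 * np.sin(3 * x)        # unknown "medium"

# Forward operator: a low-pass filter (stand-in for the diffraction
# limit -- high spatial frequencies are barely transmitted).
freqs = np.fft.fftfreq(n, d=1.0 / n)            # integer frequencies
H = np.exp(-0.5 * (freqs / 8.0) ** 2)
data = np.fft.ifft(H * np.fft.fft(m_true)).real
data += 1e-3 * rng.standard_normal(n)           # small measurement noise

D = np.fft.fft(data)
# Naive inversion divides by H at every frequency and amplifies noise wildly.
naive = np.fft.ifft(D / H).real
# Stable inversion inverts H only on the observable band |freq| <= 8.
band = np.abs(freqs) <= 8
stable = np.fft.ifft(np.where(band, D / H, 0.0)).real

err_naive = np.linalg.norm(naive - m_true)
err_stable = np.linalg.norm(stable - m_true)
```

The band-restricted inverse recovers the low-frequency content stably, while the naive inverse is dominated by amplified out-of-band noise; multi-frequency continuation extends the observable band step by step as the probing frequency increases.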
TOPICAL REVIEW: Inverse problems in elasticity
NASA Astrophysics Data System (ADS)
Bonnet, Marc; Constantinescu, Andrei
2005-04-01
This review is devoted to some inverse problems arising in the context of linear elasticity, namely the identification of distributions of elastic moduli, model parameters or buried objects such as cracks. These inverse problems are considered mainly for three-dimensional elastic media under equilibrium or dynamical conditions, and also for thin elastic plates. The main goal is to overview some recent results, in an effort to bridge the gap between studies of a mathematical nature and problems defined from engineering practice. Accordingly, emphasis is given to formulations and solution techniques which are well suited to general-purpose numerical methods for solving elasticity problems on complex configurations, in particular the finite element method and the boundary element method. An underlying thread of the discussion is the fact that useful tools for the formulation, analysis and solution of inverse problems arising in linear elasticity, namely the reciprocity gap and the error in constitutive equation, stem from variational and virtual work principles, i.e., fundamental principles governing the mechanics of deformable solid continua. In addition, the virtual work principle is shown to be instrumental for establishing computationally efficient formulae for parameter or geometrical sensitivity, based on the adjoint solution method. Sensitivity formulae are presented for various situations, especially in connection with contact mechanics, cavity and crack shape perturbations, thus enriching the already extensive known repertoire of such results. Finally, the concept of topological derivative and its implementation for the identification of cavities or inclusions are expounded.
Mathematical problems arising in interfacial electrohydrodynamics
NASA Astrophysics Data System (ADS)
Tseluiko, Dmitri
In this work we consider the nonlinear stability of thin films in the presence of electric fields. We study a perfectly conducting thin film flow down an inclined plane in the presence of an electric field which is uniform in its undisturbed state, and normal to the plate at infinity. In addition, the effect of normal electric fields on films lying above, or hanging from, horizontal substrates is considered. Systematic asymptotic expansions are used to derive fully nonlinear long wave model equations for the scaled interface motion and corresponding flow fields. For the case of an inclined plane, higher order terms need to be retained to regularize the problem in the sense that the long wave approximation remains valid for long times. For the case of a horizontal plane, the fully nonlinear evolution equation, which is derived at leading order, is asymptotically correct and no regularization procedure is required. In both physical situations, the effect of the electric field is to introduce a non-local term which arises from the potential region above the liquid film, and enters through the electric Maxwell stresses at the interface. This term is always linearly destabilizing and produces growth rates proportional to the cube of the wavenumber; surface tension is included and provides a short-wavelength cut-off, that is, all sufficiently short waves are linearly stable. For the case of film flow down an inclined plane, the fully nonlinear equation can produce singular solutions (for certain parameter values) after a finite time, even in the absence of an electric field. This difficulty is avoided at smaller amplitudes where the weakly nonlinear evolution is governed by an extension of the Kuramoto-Sivashinsky (KS) equation. Global existence and uniqueness results are proved, and refined estimates of the radius of the absorbing ball in L2 are obtained in terms of the parameters of the equations for a generalized class of modified KS equations. The
Kapteyn series arising in radiation problems
NASA Astrophysics Data System (ADS)
Lerche, I.; Tautz, R. C.
2008-01-01
In discussing radiation from multiple point charges or magnetic dipoles, moving in circles or ellipses, a variety of Kapteyn series of the second kind arises. Some of the series have been known in closed form for a hundred years or more, others appear not to be available to analytic persuasion. This paper shows how 12 such generic series can be developed to produce either closed analytic expressions or integrals that are not analytically tractable. In addition, the method presented here may be of benefit when one has other Kapteyn series of the second kind to consider, thereby providing an additional reason to consider such series anew.
Computationally efficient Bayesian inference for inverse problems.
Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.
2007-10-01
Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.
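The two-stage idea, an offline surrogate of the forward model followed by MCMC on the cheap surrogate posterior, can be sketched as follows. The forward model, prior, and polynomial surrogate here are toy stand-ins, not the stochastic spectral formulation of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(m):
    """Stand-in for an expensive forward model (hypothetical)."""
    return np.exp(m)

m_true, sigma = 0.6, 0.1
d_obs = forward(m_true)                      # noiseless synthetic data

# Offline stage: fit a cheap polynomial surrogate of the forward model
# over the bulk of the prior (here a standard normal prior on m).
grid = np.linspace(-3, 3, 50)
surrogate = np.poly1d(np.polyfit(grid, forward(grid), deg=8))

def log_post(m):
    """Unnormalized log-posterior evaluated via the surrogate, not `forward`."""
    return -0.5 * ((d_obs - surrogate(m)) / sigma) ** 2 - 0.5 * m ** 2

# Online stage: random-walk Metropolis on the surrogate posterior.
chain, m = [], 0.0
lp = log_post(m)
for _ in range(20000):
    prop = m + 0.3 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        m, lp = prop, lp_prop
    chain.append(m)
post_mean = np.mean(chain[5000:])
```

Every MCMC step costs one polynomial evaluation instead of one forward solve, which is the source of the acceleration; the surrogate's approximation error enters the posterior and must be controlled.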
Uncertainty quantification for ice sheet inverse problems
NASA Astrophysics Data System (ADS)
Petra, N.; Ghattas, O.; Stadler, G.; Zhu, H.
2011-12-01
Modeling the dynamics of polar ice sheets is critical for projections of future sea level rise. Yet, there remain large uncertainties in the basal boundary conditions and in the non-Newtonian constitutive relations employed within ice sheet models. In this presentation, we consider the problem of estimating uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem, i.e., the posterior probability density, is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal slipperiness field and Glen's law exponent field). However, under the assumption of Gaussian noise and prior probability densities, and after linearizing the parameter-to-observable map, the posterior density becomes Gaussian, and can therefore be characterized by its mean and covariance. The mean is given by the solution of a nonlinear least squares optimization problem, which is equivalent to a deterministic inverse problem with appropriate interpretation and weighting of the data misfit and regularization terms. To obtain this mean, we solve a deterministic ice sheet inverse problem; here, we infer parameters arising from discretizations of basal slipperiness and rheological exponent fields. For this purpose, we minimize a regularized misfit functional between observed and modeled surface flow velocities. The resulting least squares minimization problem is solved using an adjoint-based inexact Newton method, which uses first and second derivative information. The posterior covariance matrix is given (in the linear-Gaussian case) by the inverse of the Hessian of the least squares cost functional of the deterministic inverse problem. Direct computation of the Hessian matrix is prohibitive, since it would
Estimating uncertainties in complex joint inverse problems
NASA Astrophysics Data System (ADS)
Afonso, Juan Carlos
2016-04-01
Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should always be conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, ALL UNCERTAINTY ESTIMATES ARE MODEL DEPENDENT, and therefore, besides a thorough characterization of experimental uncertainties, particular care must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes by G. Box and M. Gunzburger, respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related
TOPICAL REVIEW: Inverse problems in systems biology
NASA Astrophysics Data System (ADS)
Engl, Heinz W.; Flamm, Christoph; Kügler, Philipp; Lu, James; Müller, Stefan; Schuster, Peter
2009-12-01
Systems biology is a new discipline built upon the premise that an understanding of how cells and organisms carry out their functions cannot be gained by looking at cellular components in isolation. Instead, consideration of the interplay between the parts of systems is indispensable for analyzing, modeling, and predicting systems' behavior. Studying biological processes under this premise, systems biology combines experimental techniques and computational methods in order to construct predictive models. Both in building and utilizing models of biological systems, inverse problems arise on several occasions, for example, (i) when experimental time series and steady state data are used to construct biochemical reaction networks, (ii) when model parameters are identified that capture underlying mechanisms or (iii) when desired qualitative behavior such as bistability or limit cycle oscillations is engineered by proper choices of parameter combinations. In this paper we review principles of the modeling process in systems biology and illustrate the ill-posedness and regularization of parameter identification problems in that context. Furthermore, we discuss the methodology of qualitative inverse problems and demonstrate how sparsity enforcing regularization allows the determination of key reaction mechanisms underlying the qualitative behavior.
Inverse Problem of Vortex Reconstruction
NASA Astrophysics Data System (ADS)
Protas, Bartosz; Danaila, Ionut
2014-11-01
This study addresses the following question: given incomplete measurements of the velocity field induced by a vortex, can one determine the structure of the vortex? Assuming that the flow is incompressible, inviscid and stationary in the frame of reference moving with the vortex, the "structure" of the vortex is uniquely characterized by the functional relation between the streamfunction and vorticity. To focus attention, 3D axisymmetric vortex rings are considered. We show how this inverse problem can be framed as an optimization problem which can then be efficiently solved using variational techniques. More precisely, we use measurements of the tangential velocity on some contour to reconstruct the function defining the streamfunction-vorticity relation in a continuous setting. Two test cases are presented, involving Hill's and Norbury vortices, in which very good reconstructions are obtained. A key result of this study is the application of our approach to obtain an optimal inviscid vortex model in an actual viscous flow problem based on DNS data which leads to a number of nonintuitive findings.
Minimax approach to inverse problems of geophysics
NASA Astrophysics Data System (ADS)
Balk, P. I.; Dolgal, A. S.; Balk, T. V.; Khristenko, L. A.
2016-03-01
A new approach is suggested for solving the inverse problems that arise in the different fields of applied geophysics (gravity, magnetic, and electrical prospecting, geothermy) and require assessing the spatial region occupied by the anomaly-generating masses in the presence of different types of a priori information. The interpretation which provides the maximum guaranteed proximity of the model field sources to the real perturbing object is treated as the best interpretation. In some fields of science (game theory, economics, operations research), the decision-making principle that lies in minimizing the probable losses which cannot be prevented if the situation develops by the worst-case scenario is referred to as minimax. The minimax criterion of choice is interesting as, instead of being confined to the indirect (and sometimes doubtful) signs of the "optimal" solution, it relies on the actual properties of the information in the results of a particular interpretation. In the hierarchy of the approaches to the solution of the inverse problems of geophysics ordered by the volume and quality of the retrieved information about the sources of the field, the minimax approach should take a special place.
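The minimax criterion itself is simple to state in matrix form: choose the interpretation whose worst-case guaranteed loss over the admissible scenarios is smallest. A minimal sketch with made-up loss values:

```python
import numpy as np

# Rows: candidate interpretations; columns: admissible worst-case scenarios
# for the unknown source geometry. Entries: guaranteed loss (illustrative).
loss = np.array([[3.0, 7.0, 2.0],
                 [4.0, 4.5, 4.0],
                 [1.0, 9.0, 1.5]])

worst_case = loss.max(axis=1)          # guaranteed loss of each interpretation
best = int(np.argmin(worst_case))      # minimax choice
```

Here the middle interpretation wins: it is never the best in any single scenario, but its guaranteed loss (4.5) is the smallest, which is exactly the conservatism the minimax principle buys.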
Inverse problem for Bremsstrahlung radiation
Voss, K.E.; Fisch, N.J.
1991-10-01
For certain predominantly one-dimensional distribution functions, an analytic inversion has been found which yields the velocity distribution of superthermal electrons given their bremsstrahlung radiation.
BOOK REVIEW: Inverse Problems. Activities for Undergraduates
NASA Astrophysics Data System (ADS)
Yamamoto, Masahiro
2003-06-01
This book is a valuable introduction to inverse problems. In particular, from the educational point of view, the author addresses the questions of what constitutes an inverse problem and how and why we should study them. Such an approach has been eagerly awaited for a long time. Professor Groetsch, of the University of Cincinnati, is a world-renowned specialist in inverse problems, in particular the theory of regularization. Moreover, he has made a remarkable contribution to educational activities in the field of inverse problems, which was the subject of his previous book (Groetsch C W 1993 Inverse Problems in the Mathematical Sciences (Braunschweig: Vieweg)). For this reason, he is one of the most qualified to write an introductory book on inverse problems. Without question, inverse problems are important, necessary and appear in various aspects. So it is crucial to introduce students to exercises in inverse problems. However, there are not many introductory books which are directly accessible by students in the first two undergraduate years. As a consequence, students often encounter diverse concrete inverse problems before becoming aware of their general principles. The main purpose of this book is to present activities to allow first-year undergraduates to learn inverse theory. To my knowledge, this book is a rare attempt to do this and, in my opinion, a great success. The author emphasizes that it is very important to teach inverse theory in the early years. He writes: 'If students consider only the direct problem, they are not looking at the problem from all sides .... The habit of always looking at problems from the direct point of view is intellectually limiting ...' (page 21). The book is very carefully organized so that teachers will be able to use it as a textbook. After an introduction in chapter 1, successive chapters deal with inverse problems in precalculus, calculus, differential equations and linear algebra. In order to let one gain some insight
An inverse problem in thermal imaging
NASA Technical Reports Server (NTRS)
Bryan, Kurt; Caudill, Lester F., Jr.
1994-01-01
This paper examines uniqueness and stability results for an inverse problem in thermal imaging. The goal is to identify an unknown boundary of an object by applying a heat flux and measuring the induced temperature on the boundary of the sample. The problem is studied both in the case in which one has data at every point on the boundary of the region and the case in which only finitely many measurements are available. An inversion procedure is developed and used to study the stability of the inverse problem for various experimental configurations.
A scatterometry inverse problem in optical mask metrology
NASA Astrophysics Data System (ADS)
Model, R.; Rathsfeld, A.; Gross, H.; Wurm, M.; Bodermann, B.
2008-11-01
We discuss the solution of the inverse problem in scatterometry, i.e., the determination of periodic surface structures from light diffraction patterns. As the feature sizes of lithography masks shrink, the demands on metrology techniques increase. Scatterometry, a non-imaging indirect optical method, determines critical dimensions (CDs) such as side-wall angles, heights, and top and bottom widths. The numerical simulation of diffraction is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. The inverse operator maps efficiencies of diffracted plane wave modes to the grating parameters. We employ a Newton type iterative method to solve the resulting minimization problem. The reconstruction quality depends on the angles of incidence, on the wavelengths and/or the number of propagating scattered wave modes, and is discussed by means of numerical examples.
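A Newton-type (here Gauss-Newton) iteration for such a reconstruction can be sketched on a toy forward model. The `efficiencies` function below is a hypothetical stand-in for the finite element Helmholtz solver, and the parameter values are invented.

```python
import numpy as np

def efficiencies(p):
    """Hypothetical forward model: maps grating parameters (height h,
    width w) to the efficiencies of a few diffraction orders. A real
    solver would use finite elements for the Helmholtz equation."""
    h, w = p
    k = np.arange(1, 7)
    return np.exp(-k * h) * np.cos(k * w)

p_true = np.array([0.5, 0.3])
data = efficiencies(p_true)             # "measured" efficiencies

def jacobian(p, eps=1e-6):
    """Forward-difference Jacobian of the forward model."""
    J = np.empty((6, 2))
    f0 = efficiencies(p)
    for j in range(2):
        dp = np.zeros(2)
        dp[j] = eps
        J[:, j] = (efficiencies(p + dp) - f0) / eps
    return J

p = np.array([0.45, 0.25])              # initial guess near the truth
for _ in range(20):                     # Gauss-Newton iterations
    r = efficiencies(p) - data
    step, *_ = np.linalg.lstsq(jacobian(p), -r, rcond=None)
    p = p + step
```

With noiseless data and a good initial guess the iteration converges rapidly; in the real problem the choice of incidence angles and wavelengths governs how well conditioned the Jacobian is, and hence the reconstruction quality.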
Analytic solutions of inverse heat conduction problems
NASA Astrophysics Data System (ADS)
Al-Najem, N. M.
A direct analytic approach is systematically developed for solving inverse heat conduction problems in multi-dimensional finite regions. The inverse problems involve the determination of the surface conditions from the knowledge of the time variation of the temperature at an interior point in the region. In the present approach, the unknown surface temperature is represented by a polynomial in time and a splitting-up procedure is employed to develop a rapidly converging inverse solution. The least square technique is then utilized to estimate the unknown parameters associated with the solution. The method is developed first for the analysis of one-dimensional cases, and then it is generalized to handle two- and three-dimensional situations. It provides an efficient, stable and systematic approach for inverse heat conduction problems. The stability and accuracy of the current method of analysis are demonstrated by several numerical examples chosen to provide a very strict test.
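The core idea, a polynomial-in-time representation of the unknown surface condition combined with a linear least-squares fit to interior data, can be sketched on a toy problem. The exponential response kernel below is a hypothetical stand-in for the true conduction response, not the solution developed in the work.

```python
import numpy as np

# Hypothetical toy: the interior sensor responds to the unknown surface
# temperature q(t) through a known causal kernel k(u) = exp(-u) (a crude
# stand-in for the heat-conduction response).
t = np.linspace(0.0, 2.0, 201)
dt = t[1] - t[0]
kernel = np.exp(-t)

def interior_response(q):
    """Discrete causal convolution (k * q)(t) on the time grid."""
    return np.convolve(kernel, q)[: t.size] * dt

# Unknown surface temperature, represented by a polynomial in time.
coeffs_true = np.array([1.0, 2.0, -0.5])          # q(t) = 1 + 2 t - 0.5 t^2
q_true = np.polyval(coeffs_true[::-1], t)
data = interior_response(q_true)                  # simulated interior record

# Least squares: columns are the responses to the monomial basis 1, t, t^2.
A = np.column_stack([interior_response(t ** n) for n in range(3)])
coeffs_est, *_ = np.linalg.lstsq(A, data, rcond=None)
```

Because the forward map is linear in the polynomial coefficients, the inverse problem reduces to a small, well-conditioned least-squares system, which is the source of the stability the abstract claims for the polynomial representation.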
Molecular seismology: an inverse problem in nanobiology.
Hinow, Peter; Boczko, Erik M
2007-05-01
The density profile of an elastic fiber like DNA will change in space and time as ligands associate with it. This observation affords a new direction in single molecule studies provided that density profiles can be measured in space and time. In fact, this is precisely the objective of seismology, where the mathematics of inverse problems have been employed with success. We argue that inverse problems in elastic media can be directly applied to biophysical problems of fiber-ligand association, and demonstrate that robust algorithms exist to perform density reconstruction in the condensed phase.
The Inverse Problem in Jet Acoustics
NASA Technical Reports Server (NTRS)
Woodruff, S. L.; Hussaini, M. Y.
2001-01-01
The inverse problem for jet acoustics, or the determination of noise sources from far-field pressure information, is proposed as a tool for understanding the generation of noise by turbulence and for the improved prediction of jet noise. An idealized version of the problem is investigated first to establish the extent to which information about the noise sources may be determined from far-field pressure data and to determine how a well-posed inverse problem may be set up. Then a version of the industry-standard MGB code is used to predict a jet noise source spectrum from experimental noise data.
The role of nonlinearity in inverse problems
NASA Astrophysics Data System (ADS)
Snieder, Roel
1998-06-01
In many practical inverse problems, one aims to retrieve a model that has infinitely many degrees of freedom from a finite amount of data. It follows from a simple variable count that this cannot be done in a unique way. Therefore, inversion entails more than estimating a model: any inversion is not complete without a description of the class of models that is consistent with the data; this is called the appraisal problem. Nonlinearity makes the appraisal problem particularly difficult. The first reason for this is that nonlinear error propagation is a difficult problem. The second reason is that for some nonlinear problems the model parameters affect the way in which the model is being interrogated by the data. Two examples are given of this, and it is shown how the nonlinearity may make the problem more ill-posed. Finally, three attempts are shown to carry out the model appraisal for nonlinear inverse problems that are based on an analytical approach, a numerical approach and a common sense approach.
Direct and Inverse problems in Electrocardiography
NASA Astrophysics Data System (ADS)
Boulakia, M.; Fernández, M. A.; Gerbeau, J. F.; Zemzemi, N.
2008-09-01
We present numerical results related to the direct and inverse problems in electrocardiography. The electrical activity of the heart is described by the bidomain equations. The electrocardiograms (ECGs) recorded at different points on the body surface are obtained by coupling the bidomain equations to a Laplace equation in the torso. The simulated ECGs are quite satisfactory. As regards the inverse problem, our goal is to estimate the parameters of the bidomain-torso model. Here we present some preliminary results of a parameter estimation for the torso model.
Inverse source problems for eddy current equations
NASA Astrophysics Data System (ADS)
Alonso Rodríguez, Ana; Camaño, Jessika; Valli, Alberto
2012-01-01
We study the inverse source problem for the eddy current approximation of Maxwell equations. As for the full system of Maxwell equations, we show that a volume current source cannot be uniquely identified by knowledge of the tangential components of the electromagnetic fields on the boundary, and we characterize the space of non-radiating sources. On the other hand, we prove that the inverse source problem has a unique solution if the source is supported on the boundary of a subdomain or if it is the sum of a finite number of dipoles. We address the applicability of this result for the localization of brain activity from electroencephalography and magnetoencephalography measurements.
Inverse problem of electro-seismic conversion
NASA Astrophysics Data System (ADS)
Chen, Jie; Yang, Yang
2013-11-01
When a porous rock is saturated with an electrolyte, electrical fields are coupled with seismic waves via the electro-seismic conversion. Pride (1994 Phys. Rev. B 50 15678-96) derived the governing models, in which Maxwell equations are coupled with Biot's equations through the electro-kinetic mobility parameter. The inverse problem of the linearized electro-seismic conversion consists of two steps, namely the inversion of Biot's equations and the inversion of Maxwell equations. We analyze the reconstruction of the conductivity and the electro-kinetic mobility parameter in Maxwell equations with internal measurements, where the internal measurements are provided by the results of the inversion of Biot's equations. We show that knowledge of two internal data sets based on well-chosen boundary conditions uniquely determines these two parameters. Moreover, a Lipschitz-type stability is proved based on the same sets of well-chosen boundary conditions.
A Stochastic Problem Arising in the Storage of Radioactive Waste
Williams, M.M.R.
2004-07-15
Nuclear waste drums can contain a collection of radioactive components of uncertain activity, randomly dispersed in position. This implies that the dose-rate at the surface of different drums in a large assembly of similar drums can vary significantly according to the physical makeup and configuration of the waste components. The present paper addresses this problem by treating the drum, and its waste, as a stochastic medium. It is assumed that the sources in the drum contribute a dose-rate to some external point. The strengths and positions are chosen by random numbers, the dose-rate is calculated and, from several thousand realizations, a probability distribution for the dose-rate is obtained. It is shown that the log-normal distribution is a very close approximation to the dose-rate probability function. This allows some useful statistical indicators, which are of environmental importance, to be calculated with little effort. As an example of a practical situation met in the storage of radioactive waste containers, we study the problem of 'hotspots'. These arise in drums in which most of the activity is concentrated in one radioactive component and hence can lead to the possibility of large surface dose-rates. It is shown how the dose-rate, the variance, and some other statistical indicators depend on the relative activities of the sources. The results highlight the importance of such hotspots and the need to quantify their effect.
MAP estimators and their consistency in Bayesian nonparametric inverse problems
NASA Astrophysics Data System (ADS)
Dashti, M.; Law, K. J. H.; Stuart, A. M.; Voss, J.
2013-09-01
We consider the inverse problem of estimating an unknown function u from noisy measurements y of a known, possibly nonlinear, map G applied to u. We adopt a Bayesian approach to the problem and work in a setting where the prior measure is specified as a Gaussian random field μ0. We work under a natural set of conditions on the likelihood which implies the existence of a well-posed posterior measure, μy. Under these conditions, we show that the maximum a posteriori (MAP) estimator is well defined as the minimizer of an Onsager-Machlup functional defined on the Cameron-Martin space of the prior; thus, we link a problem in probability with a problem in the calculus of variations. We then consider the case where the observational noise vanishes and establish a form of Bayesian posterior consistency for the MAP estimator. We also prove a similar result for the case where the observation of G(u) can be repeated as many times as desired with independent identically distributed noise. The theory is illustrated with examples from an inverse problem for the Navier-Stokes equation, motivated by problems arising in weather forecasting, and from the theory of conditioned diffusions, motivated by problems arising in molecular dynamics.
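In a finite-dimensional discretization, the MAP estimator described above is the minimizer of a Tikhonov-type functional: a data misfit plus the Cameron-Martin (prior precision) penalty. The sketch below uses a toy nonlinear forward map and a squared-exponential prior covariance; the map G = tanh, the grid, and all parameter values are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 20
x = np.linspace(0, 1, n)

# Gaussian prior N(0, C): squared-exponential covariance (assumed), with a
# small jitter on the diagonal for numerical stability.
C = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.1**2) + 1e-6 * np.eye(n)
C_inv = np.linalg.inv(C)

sigma = 0.05                               # observational noise level
u_true = np.sin(2 * np.pi * x)
y = np.tanh(u_true) + sigma * rng.normal(size=n)   # G(u) = tanh(u), a toy map

# Discrete Onsager-Machlup functional: misfit + prior (Cameron-Martin) term.
def J(u):
    r = y - np.tanh(u)
    return 0.5 * r @ r / sigma**2 + 0.5 * u @ C_inv @ u

def grad_J(u):
    r = y - np.tanh(u)
    return -(1 - np.tanh(u) ** 2) * r / sigma**2 + C_inv @ u

u_map = minimize(J, np.zeros(n), jac=grad_J, method="L-BFGS-B").x
```

The minimizer balances fidelity to the noisy data against smoothness enforced by the prior; as the noise level shrinks, the theory in the abstract gives conditions under which this estimate converges to the truth.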
Inverse Problems in Classical and Quantum Physics
NASA Astrophysics Data System (ADS)
Almasy, Andrea A.
2009-12-01
The subject of this thesis is the area of Applied Mathematics known as Inverse Problems. Inverse problems are those where a set of measured data is analysed in order to get as much information as possible about a model which is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data, and the inverse conductivity problem. We use a functional method which allows us to extract, within rather general assumptions, phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of the assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with methods based on QCD sum rules. Two approaches to EIT image reconstruction are also proposed in this thesis. The first is based on reformulating the inverse problem in terms of integral equations; this method uses only a single set of measurements for the reconstruction. The second is an algorithm based on linearisation which uses more than one set of measurements. A promising result is that one can qualitatively reconstruct the conductivity inside the cross-section of a human chest. Even though the human volunteer is neither two-dimensional nor circular, such reconstructions can be useful in medical applications: monitoring for lung problems such as accumulating fluid or a collapsed lung, and noninvasive monitoring of heart function and blood flow.
Numerical linear algebra for reconstruction inverse problems
NASA Astrophysics Data System (ADS)
Nachaoui, Abdeljalil
2004-01-01
Our goal in this paper is to discuss various issues we have encountered in trying to find and implement efficient solvers for a boundary integral equation (BIE) formulation of an iterative method for solving a reconstruction problem. We survey some methods from numerical linear algebra which are relevant for the solution of this class of inverse problems. We motivate the use of our reconstruction algorithm, discuss its implementation, and mention the use of preconditioned Krylov methods.
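Preconditioned Krylov methods of the kind mentioned above can be sketched with SciPy's GMRES on a dense system. The matrix below is a synthetic stand-in (second-kind boundary integral equations typically discretize to an "identity plus smoothing" structure), and the simple diagonal preconditioner is an assumption chosen only to show the mechanics.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(4)
n = 200

# Dense matrix standing in for a discretised boundary integral operator:
# identity plus a smooth, rapidly decaying off-diagonal part.
idx = np.arange(n)
A = np.eye(n) + 0.5 / n * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)
b = rng.normal(size=n)

# Jacobi (diagonal) preconditioner wrapped as a LinearOperator, so GMRES
# never needs the preconditioner in matrix form.
d = np.diag(A)
M = LinearOperator((n, n), matvec=lambda v: v / d)

x, info = gmres(A, b, M=M)   # info == 0 signals convergence
```

For well-conditioned second-kind systems GMRES converges in few iterations; the preconditioning question becomes critical for the nearly singular systems that arise in the inverse-problem iterations the paper discusses.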
Urban surface water pollution problems arising from misconnections.
Revitt, D Michael; Ellis, J Bryan
2016-05-01
The impacts of misconnections on the organic and nutrient loadings to surface waters are assessed using specific household appliance data for two urban sub-catchments located in the London metropolitan region and the city of Swansea. Potential loadings of biochemical oxygen demand (BOD), soluble reactive phosphorus (PO4-P) and ammoniacal nitrogen (NH4-N) due to misconnections are calculated for three different scenarios based on the measured daily flows from specific appliances and either measured daily pollutant concentrations or average pollutant concentrations for relevant greywater and black water sources obtained from an extensive review of the literature. Downstream receiving water concentrations, together with the associated uncertainties, are predicted from derived misconnection discharge concentrations and compared to existing freshwater standards for comparable river types. Consideration of dilution ratios indicates that these would need to be of the order of 50-100:1 to maintain high water quality with respect to BOD and NH4-N following typical misconnection discharges, but only poor quality for PO4-P is likely to be achievable. The main pollutant loading contributions to misconnections arise from toilets (NH4-N and BOD), kitchen sinks (BOD and PO4-P), washing machines (PO4-P and BOD) and, to a lesser extent, dishwashers (PO4-P). By completely eliminating toilet misconnections and ensuring misconnections from all other appliances do not exceed 2%, the potential pollution problems due to BOD and NH4-N discharges would be alleviated, but this would not be the case for PO4-P. In the event of a treatment option being preferred to solve the misconnection problem, it is shown that for an area the size of metropolitan Greater London, a sewage treatment plant with a Population Equivalent value approaching 900,000 would be required to efficiently remove BOD and NH4-N to safely dischargeable levels but such a plant is unlikely to have the capacity to deal
An efficient method for inverse problems
NASA Technical Reports Server (NTRS)
Daripa, Prabir
1987-01-01
A new inverse method for aerodynamic design of subcritical airfoils is presented. The pressure distribution in this method can be prescribed in a natural way, i.e. as a function of arclength of the as yet unknown body. This inverse problem is shown to be mathematically equivalent to solving a single nonlinear boundary value problem subject to known Dirichlet data on the boundary. The solution to this problem determines the airfoil, the free stream Mach number M∞ and the upstream flow direction θ∞. The existence of a solution for any given pressure distribution is discussed. The method is easy to implement and extremely efficient. We present a series of results for which comparisons are made with the known airfoils.
An inverse kinematic problem with internal sources
NASA Astrophysics Data System (ADS)
Pestov, Leonid; Uhlmann, Gunther; Zhou, Hanming
2015-05-01
Given a bounded domain M in ℝⁿ with a conformally Euclidean metric g = ρ dx², we consider the inverse problem of recovering a semigeodesic neighborhood of a domain Γ ⊂ ∂M and the conformal factor ρ in the neighborhood from the travel time data (defined below) and the Cartesian coordinates of Γ. We develop an explicit reconstruction procedure for this problem. The key ingredient is the relation between the reconstruction procedure and a Cauchy problem of the conformal Killing equation.
Inverse problems in biomechanical imaging (Conference Presentation)
NASA Astrophysics Data System (ADS)
Oberai, Assad A.
2016-03-01
It is now well recognized that a host of imaging modalities (a list that includes Ultrasound, MRI, Optical Coherence Tomography, and optical microscopy) can be used to "watch" tissue as it deforms in response to an internal or external excitation. The result is a detailed map of the deformation field in the interior of the tissue. This deformation field can be used in conjunction with a material mechanical response to determine the spatial distribution of material properties of the tissue by solving an inverse problem. Images of material properties thus obtained can be used to quantify the health of the tissue. Recently, they have been used to detect, diagnose and monitor cancerous lesions, detect vulnerable plaque in arteries, diagnose liver cirrhosis, and possibly detect the onset of Alzheimer's disease. In this talk I will describe the mathematical and computational aspects of solving this class of inverse problems, and their applications in biology and medicine. In particular, I will discuss the well-posedness of these problems and quantify the amount of displacement data necessary to obtain a unique property distribution. I will describe an efficient algorithm for solving the resulting inverse problem. I will also describe some recent developments based on Bayesian inference in estimating the variance in the estimates of material properties. I will conclude with the applications of these techniques in diagnosing breast cancer and in characterizing the mechanical properties of cells with sub-cellular resolution.
Riemann–Hilbert problem approach for two-dimensional flow inverse scattering
Agaltsov, A. D.; Novikov, R. G.
2014-10-15
We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.
Inverse scattering problem in turbulent magnetic fluctuations
NASA Astrophysics Data System (ADS)
Treumann, Rudolf A.; Baumjohann, Wolfgang; Narita, Yasuhito
2016-08-01
We apply a particular form of the inverse scattering theory to turbulent magnetic fluctuations in a plasma. In the present note we develop the theory, formulate the magnetic fluctuation problem in terms of its electrodynamic turbulent response function, and reduce it to the solution of a special form of the famous Gelfand-Levitan-Marchenko equation of quantum mechanical scattering theory. The latter applies to transmission and reflection in an active medium. The theory of turbulent magnetic fluctuations does not refer to such quantities and requires a somewhat different formulation. We reduce the theory to the measurement of the low-frequency electromagnetic fluctuation spectrum, which is not the turbulent spectral energy density. The inverse theory in this form enables obtaining information about the turbulent response function of the medium; the dynamic causes of the electromagnetic fluctuations are implicit to it. Thus, it is of vital interest in low-frequency magnetic turbulence. The theory is developed up to the presentation of the equations in a form applicable to observations of turbulent electromagnetic fluctuations as input from measurements. Solution of the final integral equation should be done by standard numerical methods based on iteration. We point to the possibility of treating power-law fluctuation spectra as an example. Formulation of the problem to include observations of spectral power densities in turbulence is not attempted; this leads to severe mathematical problems and requires a reformulation of inverse scattering theory. One particular aspect of the present inverse theory of turbulent fluctuations is that its structure naturally leads to spatial information which is obtained from the temporal information that is inherent to the observation of time series. The Taylor assumption is not needed here. This is a consequence of Maxwell's equations, which couple space and time evolution. The inversion procedure takes advantage of a particular
Network connections that evolve to circumvent the inverse optics problem.
Ng, Cherlyn; Sundararajan, Janani; Hogan, Michael; Purves, Dale
2013-01-01
A fundamental problem in vision science is how useful perceptions and behaviors arise in the absence of information about the physical sources of retinal stimuli (the inverse optics problem). Psychophysical studies show that human observers contend with this problem by using the frequency of occurrence of stimulus patterns in cumulative experience to generate percepts. To begin to understand the neural mechanisms underlying this strategy, we examined the connectivity of simple neural networks evolved to respond according to the cumulative rank of stimulus luminance values. Evolved similarities with the connectivity of early level visual neurons suggests that biological visual circuitry uses the same mechanisms as a means of creating useful perceptions and behaviors without information about the real world.
Direct and inverse problems of infrared tomography.
Sizikov, Valery S; Evseev, Vadim; Fateev, Alexander; Clausen, Sønnik
2016-01-01
The problems of infrared tomography, namely the direct problem (modeling the measured functions) and the inverse problem (reconstructing the parameters of a gaseous medium), are considered with a laboratory burner flame as an example application. Two measurement modes are used: active (ON), with an external IR source, and passive (OFF), without one. Light intensities received at the detectors are modeled in the direct problem or measured in the experiment, whereas integral equations with respect to the absorption coefficient and the Planck function (which yields the temperature profile of the medium) are solved in the inverse problem with (1) modeled and (2) measured received intensities as the input data. The axisymmetric flame and the parallel scanning measurement scheme considered in this work yield singular integral equations, which are solved numerically using the generalized quadrature method, spline smoothing, and Tikhonov regularization. A software package in MATLAB has been developed. Two numerical examples, with modeled and real input data, were solved. The proposed methodology avoids the need for elaborate determination of the absorption coefficient by direct (point) measurements or by calculation using spectroscopic databases (e.g., HITRAN/HITEMP). PMID:26835642
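The combination of quadrature discretization and Tikhonov regularization used above can be sketched on a generic first-kind integral equation. The Gaussian kernel, profile, and all parameter values below are assumptions standing in for the actual tomographic projection; the point is only the structure: discretize by quadrature, then stabilize the ill-posed solve with a penalty term.

```python
import numpy as np

# First-kind integral equation K m = d, discretised by a simple quadrature
# rule (equal weights). The smoothing Gaussian kernel is a stand-in for the
# tomographic projection operator.
n = 80
s = np.linspace(0, 1, n)
w = 1.0 / n                                             # quadrature weight
K = np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.01) * w

# A smooth "absorption" profile to recover, and noisy synthetic data.
m_true = np.exp(-((s - 0.4) ** 2) / 0.005)
d = K @ m_true + 1e-4 * np.random.default_rng(3).normal(size=n)

# Tikhonov regularisation: minimise ||K m - d||^2 + alpha ||m||^2, which
# gives the linear normal equations below. alpha trades stability for bias.
alpha = 1e-6
m_rec = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ d)
```

Without the alpha term the normal equations are numerically singular and the noise is amplified catastrophically; with it, the recovered profile is a stably smoothed version of the truth.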
Management Problems Arising from the Introduction of Automation.
ERIC Educational Resources Information Center
Francis, Simon
1984-01-01
Relates problems prompted by implementation of online circulation and cataloging systems in Polytechnic of North London's Library Service. Conversion of records from former card catalog to computer file and organizational problems (position of central technical services unit, position of Library Service in total information handling structure of…
Inverse problems in statistical mechanics and photonics
NASA Astrophysics Data System (ADS)
Rechtsman, Mikael C.
In an inverse problem, one seeks the nature of the components of a system with known (or targeted) resultant behavior, perhaps opposite to the traditional trajectory of problem solving in physical research. In this thesis, a number of inverse problems in two categories are considered. In the first, in many-body classical systems with isotropic two-body interactions, we target uncharacteristic, technologically relevant thermodynamic behavior. In the second, we consider two problems in electromagnetic scattering and photonics. Increasingly, experimentalists have been able to tailor isotropic interactions between micron-scale colloidal spheres, allowing for the possibility of targeted self-assembly of a desired crystal structure upon freezing. Self-assembly of certain structures, the diamond lattice in particular, has a great deal of technological potential in the fields of optoelectronics and photonics. We present here new computational algorithms that find isotropic interaction potentials that yield targeted ground state crystal structures. These algorithms are applied to find interaction potentials for the honeycomb lattice (the two-dimensional analog of diamond), the square lattice, the simple cubic lattice, the wurtzite lattice, and the diamond lattice. We also present an isotropic interaction potential that gives rise to negative thermal expansion, a macroscopic behavior that has previously been associated with a highly anisotropic microscopic mechanism. Furthermore, we show that systems with only isotropic interactions may exhibit a negative Poisson's ratio, as long as they are under tension. We derive linear constraints involving the derivatives of the pair potential that give rise to this behavior. In a study of electromagnetic scattering in random dielectric two-component composites, we use a strong-contrast perturbation expansion to obtain analytic expressions for the effective dielectric tensor to arbitrary order in the dielectric contrast between
Inverse Variational Problem for Nonstandard Lagrangians
NASA Astrophysics Data System (ADS)
Saha, A.; Talukdar, B.
2014-06-01
In the mathematical physics literature the nonstandard Lagrangians (NSLs) were introduced in an ad hoc fashion rather than being derived from the solution of the inverse problem of variational calculus. We begin with the first integral of the equation of motion and solve the associated inverse problem to obtain some of the existing results for NSLs. In addition, we provide a number of alternative Lagrangian representations. The case studies envisaged by us include (i) the usual modified Emden-type equation, (ii) the Emden-type equation with a dissipative term quadratic in velocity, (iii) the Lotka-Volterra model and (iv) a number of generic equations for dissipative-like dynamical systems. Our method works for nonstandard Lagrangians corresponding to the usual action integral of mechanical systems but requires modification for those associated with modified actions like S = ∫_a^b e^{L(x, ẋ, t)} dt and S = ∫_a^b L^{1-γ}(x, ẋ, t) dt, because in the latter case one cannot construct expressions for the Jacobi integrals.
Data quality for the inverse Ising problem
NASA Astrophysics Data System (ADS)
Decelle, Aurélien; Ricci-Tersenghi, Federico; Zhang, Pan
2016-09-01
There are many methods proposed for inferring parameters of the Ising model from given data, that is, a set of configurations generated according to the model itself. However, little attention has been paid so far to the data themselves, e.g. how the data are generated, or whether the inference error using one set of data could be smaller than using another set. In this paper we discuss the data quality problem in the inverse Ising problem, using the kinetic Ising model as a benchmark. We quantify the quality of data using the effective rank of the correlation matrix, and show that data gathered in an out-of-equilibrium regime have better quality than data gathered in equilibrium for coupling reconstruction. We also propose a matrix-perturbation-based method for tuning the quality of given data and for removing bad-quality (i.e. redundant) configurations from the data.
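The effective rank of a correlation matrix, used above as the data-quality measure, is commonly defined as the exponential of the spectral entropy of the normalized eigenvalue distribution. The sketch below illustrates it on synthetic spin configurations; the specific definition (exponential spectral entropy) and the toy data are assumptions, since the abstract does not spell out the exact formula used.

```python
import numpy as np

def effective_rank(configs):
    """Effective rank = exp(spectral entropy) of the sample correlation
    matrix of the configurations (rows = samples, columns = spins)."""
    C = np.corrcoef(configs, rowvar=False)
    s = np.linalg.svd(C, compute_uv=False)
    p = s / s.sum()
    p = p[p > 1e-12]                     # drop numerically zero modes
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(1)
n_spins, n_samples = 10, 5000

# "Good" data: independent spins -> correlation matrix close to identity,
# effective rank close to the number of spins.
good = rng.choice([-1.0, 1.0], size=(n_samples, n_spins))

# "Bad" (redundant) data: only 2 independent spins, each copied 5 times,
# so the correlation matrix has numerical rank 2.
base = rng.choice([-1.0, 1.0], size=(n_samples, 2))
bad = np.tile(base, (1, 5))
```

Redundant configurations collapse the spectrum onto a few large eigenvalues and drive the effective rank down, which is exactly the signature of poor data for coupling reconstruction.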
An inverse problem by boundary element method
Tran-Cong, T.; Nguyen-Thien, T.; Graham, A.L.
1996-02-01
Boundary Element Methods (BEM) have been established as useful and powerful tools in a wide range of engineering applications, e.g. Brebbia et al. In this paper, we report a particular three-dimensional implementation of a direct boundary integral equation (BIE) formulation and its application to numerical simulations of practical polymer processing operations. In particular, we focus on the application of the present boundary element technology to simulate an inverse problem in plastics processing by extrusion: the design of profile extrusion dies for plastics. The problem is highly non-linear due to material viscoelastic behaviours as well as unknown free surface conditions. As an example, the technique is shown to be effective in obtaining the die profiles corresponding to a square viscoelastic extrudate under different processing conditions. To further illustrate the capability of the method, examples of other non-trivial extrudate profiles and processing conditions are also given.
A local-order regularization for geophysical inverse problems
NASA Astrophysics Data System (ADS)
Gheymasi, H. Mohammadi; Gholami, A.
2013-11-01
Different types of regularization have been developed to obtain stable solutions to linear inverse problems. Among these, total variation (TV) is known as an edge-preserving method, which leads to piecewise constant solutions and has received much attention for solving inverse problems arising in geophysical studies. However, the method exhibits staircase effects and is not suitable for models that include smooth regions. To overcome the staircase effect, we present a method which employs a local-order difference operator in the regularization term. This method is performed in two steps. First, we apply a pre-processing step to find the edge locations in the regularized solution using a properly defined minmod limiter, where the edges are determined by comparing the solutions obtained using different-order regularizations of the TV type. Then, we construct a local-order difference operator based on the information about the edge locations obtained in the pre-processing step, which is subsequently used as the regularization operator in the final sparsity-promoting regularization. Experimental results from synthetic and real seismic traveltime tomography show that the proposed inversion method is able to retain the smooth regions of the regularized solution while preserving the sharp transitions present in it.
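The minmod-based edge detection described above can be sketched in one dimension. Here the two "different-order regularized solutions" are mimicked by moving averages of different widths, which is purely an assumption for illustration; the essential point is that the minmod limiter returns a nonzero value only where both candidate solutions agree there is a jump.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: 0 where the arguments disagree in sign,
    otherwise the one of smaller magnitude."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

# A blocky model with two jumps (at indices 20 and 40).
m = np.r_[np.zeros(20), np.ones(20), 0.3 * np.ones(20)]

# Stand-ins for solutions under two different-order regularizations:
# smoothing with different strengths (assumption, not the paper's solvers).
sol1 = np.convolve(m, np.ones(3) / 3, mode="same")
sol2 = np.convolve(m, np.ones(7) / 7, mode="same")

# Edge indicator: the minmod of the two gradients is large only where
# both regularized solutions show a jump of the same sign.
edge = np.abs(minmod(np.gradient(sol1), np.gradient(sol2)))
edges = np.where(edge > 0.05)[0]    # indices cluster around the true jumps
```

Thresholding this indicator marks the jump locations while the smooth (here constant) regions stay at zero; the detected edges then steer the choice of local difference order in the final regularization.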
PREFACE: International Conference on Inverse Problems 2010
NASA Astrophysics Data System (ADS)
Hon, Yiu-Chung; Ling, Leevan
2011-03-01
Following the first International Conference on Inverse Problems - Recent Theoretical Development and Numerical Approaches held at the City University of Hong Kong in 2002, the fifth International Conference was held again at the City University during December 13-17, 2010. This fifth conference was jointly organized by Professor Yiu-Chung Hon (Co-Chair, City University of Hong Kong, HKSAR), Dr Leevan Ling (Co-Chair, Hong Kong Baptist University, HKSAR), Professor Jin Cheng (Fudan University, China), Professor June-Yub Lee (Ewha Womans University, South Korea), Professor Gui-Rong Liu (University of Cincinnati, USA), Professor Jenn-Nan Wang (National Taiwan University, Taiwan), and Professor Masahiro Yamamoto (The University of Tokyo, Japan). It was agreed to alternate holding the conference among the above places (China, Japan, Korea, Taiwan, and Hong Kong) once every two years. The next conference has been scheduled to be held at the Southeast University (Nanjing, China) in 2012. The purpose of this series of conferences is to establish a strong collaborative link among the universities of the Asian-Pacific regions and worldwide leading researchers in inverse problems. The conference addressed theoretical (mathematical), applied (engineering), and developmental aspects of inverse problems. The conference was intended to nurture Asian-American-European collaborations in the evolving interdisciplinary areas, and it was envisioned that the conference would lead to long-term commitments and collaborations among the participating countries and researchers. There was a total of more than 100 participants. A call for the submission of papers was sent out after the conference, and a total of 19 papers were finally accepted for publication in this proceedings. The papers included in the proceedings cover a wide scope, which reflects the current flourishing theoretical and numerical research into inverse problems. Finally, as the co-chairs of the Inverse Problems
Bayesian inference tools for inverse problems
NASA Astrophysics Data System (ADS)
Mohammad-Djafari, Ali
2013-08-01
In this paper, the basics of Bayesian inference with a parametric model of the data are presented first. The extensions needed when dealing with inverse problems are then given, in particular for linear models such as deconvolution or image reconstruction in computed tomography (CT). The main point of discussion is then the prior modeling of signals and images. A classification of these priors is presented: first into separable and Markovian models, and then into simple models or hierarchical models with hidden variables. For practical applications, we also need to consider the estimation of the hyperparameters. Finally, we see that we have to infer simultaneously on the unknowns, the hidden variables, and the hyperparameters. Very often, the expression of this joint posterior law is too complex to be handled directly; indeed, we can rarely obtain analytical solutions for point estimators such as the maximum a posteriori (MAP) or the posterior mean (PM). Three main tools can then be used: Laplace approximation (LAP), Markov chain Monte Carlo (MCMC) and Bayesian variational approximations (BVA). To illustrate these aspects, we consider a deconvolution problem where we know that the input signal is sparse and propose a Student-t prior for it. To handle the Bayesian computations with this model, we use the property that the Student-t distribution can be modeled as an infinite mixture of Gaussians, thus introducing hidden variables which are the variances. The expression of the joint posterior of the input signal samples, the hidden variables (here the inverse variances of those samples) and the hyperparameters of the problem (for example the variance of the noise) is then given. From this point, we present the joint maximization by alternate optimization and the three possible approximation methods. Finally, the proposed methodology is applied in different applications such as mass spectrometry, spectrum estimation of quasi-periodic biological signals and
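The Gaussian-mixture view of the Student-t prior and the alternate optimization it enables can be sketched on a small sparse deconvolution problem. Everything below is an illustrative assumption: the blur kernel, the fixed hyperparameters a, b, and the fixed noise variance (the paper estimates these jointly); the sketch only shows the alternation between a Gaussian x-step and a closed-form update of the hidden precisions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60

# Sparse input signal and a simple blurring operator H (convolution matrix).
x_true = np.zeros(n)
x_true[[10, 30, 45]] = [2.0, -1.5, 1.0]
kernel = np.array([0.25, 0.5, 0.25])
H = np.array([np.convolve(row, kernel, mode="same") for row in np.eye(n)])
sig2 = 1e-4                                   # fixed noise variance (assumed)
y = H @ x_true + np.sqrt(sig2) * rng.normal(size=n)

# Student-t prior written as an infinite Gaussian mixture:
#   x_i | tau_i ~ N(0, 1/tau_i),   tau_i ~ Gamma(a, b),
# with the precisions tau_i as hidden variables.
a, b = 1.0, 1e-3
tau = np.ones(n)
for _ in range(50):
    # x-step: mode of the Gaussian conditional p(x | tau, y).
    x = np.linalg.solve(H.T @ H / sig2 + np.diag(tau), H.T @ y / sig2)
    # tau-step: posterior mean of the Gamma conditional p(tau_i | x_i),
    # Gamma(a + 1/2, b + x_i^2 / 2).
    tau = (a + 0.5) / (b + 0.5 * x**2)
```

Entries with small amplitude acquire large precisions and are shrunk toward zero, while the true spikes keep small precisions and survive, which is how the heavy-tailed prior produces sparse reconstructions.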
Inverse spectral problems for differential operators on spatial networks
NASA Astrophysics Data System (ADS)
Yurko, V. A.
2016-06-01
A short survey is given of results on inverse spectral problems for ordinary differential operators on spatial networks (geometrical graphs). The focus is on the most important non-linear inverse problems of recovering coefficients of differential equations from spectral characteristics when the structure of the graph is known a priori. The first half of the survey presents results related to inverse Sturm-Liouville problems on arbitrary compact graphs. Results on inverse problems for differential operators of arbitrary order on compact graphs are then presented. In the conclusion the main results on inverse problems on non-compact graphs are given. Bibliography: 55 titles.
Stochastic inverse problems: Models and metrics
Sabbagh, Elias H.; Sabbagh, Harold A.; Murphy, R. Kim; Aldrin, John C.; Annis, Charles; Knopp, Jeremy S.
2015-03-31
In past work, we introduced model-based inverse methods and applied them to problems in which the anomaly could be reasonably modeled by simple canonical shapes, such as rectangular solids. In these cases the parameters to be inverted would be length, width and height, as well as the occasional probe lift-off or rotation. We are now developing a formulation that allows more flexibility in modeling complex flaws. The idea consists of expanding the flaw in a sequence of basis functions and then solving for the expansion coefficients of this sequence, which are modeled as independent random variables, uniformly distributed over their range of values. There are a number of applications of such modeling: 1. Connected cracks and multiple half-moons, which we have noted in a POD set. Ideally we would like to distinguish connected cracks from one long shallow crack. 2. Cracks of irregular profile and shape which have appeared in cold-work holes during bolt-hole eddy-current inspection. One side of such cracks is much deeper than the other. 3. L- or C-shaped crack profiles at the surface, examples of which have been seen in bolt-hole cracks. By formulating problems in a stochastic sense, we are able to leverage the stochastic global optimization algorithms in NLSE, which is resident in VIC-3D®, to answer questions of global minimization and to compute confidence bounds using the sensitivity coefficients that we get from NLSE. We will also address the issue of surrogate functions, which are used during the inversion process, and how they contribute to the quality of the estimation of the bounds.
Inverse Problem in Self-assembly
NASA Astrophysics Data System (ADS)
Tkachenko, Alexei
2012-02-01
By decorating colloids and nanoparticles with DNA, one can introduce highly selective key-lock interactions between them. This leads to a new class of systems and problems in soft condensed matter physics. In particular, it opens the possibility of solving the inverse problem in self-assembly: how to build an arbitrary desired structure with the bottom-up approach? I will present a theoretical and computational analysis of a hierarchical strategy for attacking this problem. It involves self-assembly of particular building blocks (``octopus particles''), which would in turn assemble into the target structure. On a conceptual level, our approach combines elements of three different brands of programmable self-assembly: DNA nanotechnology, nanoparticle-DNA assemblies and patchy colloids. I will discuss the general design principles, theoretical and practical limitations of this approach, and illustrate them with our simulation results. Our crucial result is that not only is it possible to design a system that has a given nanostructure as its ground state, but one can also program and optimize the kinetic pathway for its self-assembly.
Boundary layer problem on a hyperbolic system arising from chemotaxis
NASA Astrophysics Data System (ADS)
Hou, Qianqian; Wang, Zhi-An; Zhao, Kun
2016-11-01
This paper is concerned with the boundary layer problem for a hyperbolic system transformed via a Cole-Hopf type transformation from a repulsive chemotaxis model with logarithmic sensitivity proposed in [23,34], modeling the biological movement of reinforced random walkers which deposit a non-diffusible (or slowly moving) signal that modifies the local environment for succeeding passages. By prescribing Dirichlet boundary conditions for the transformed hyperbolic system on an interval (0, 1), we show that the system has boundary layer solutions as the chemical diffusion coefficient ε → 0, and further use formal asymptotic analysis to show that the boundary layer thickness is ε^{1/2}. Our work justifies the boundary layer phenomenon that was numerically found in the recent work [25]. However, we find that the original chemotaxis system does not possess boundary layer solutions when the results are reverted to the pre-transformed system.
Stability analysis of the inverse transmembrane potential problem in electrocardiography
NASA Astrophysics Data System (ADS)
Burger, Martin; Mardal, Kent-André; Nielsen, Bjørn Fredrik
2010-10-01
In this paper we study some mathematical properties of an inverse problem arising in connection with electrocardiograms (ECGs). More specifically, we analyze the possibility of recovering the transmembrane potential in the heart from ECG recordings, a challenge currently investigated by a growing number of groups. Our approach is based on the bidomain model for the electrical activity in the myocardium, and leads to a parameter identification problem for elliptic partial differential equations (PDEs). It turns out that this challenge can be split into two subproblems: (1) the task of recovering the potential at the heart surface from body surface recordings; (2) the problem of computing the transmembrane potential inside the heart from the potential determined at the heart surface. Problem (1), which can be formulated as the Cauchy problem for an elliptic PDE, has been extensively studied and is well known to be severely ill-posed. The main purpose of this paper is to prove that problem (2) is stable and well posed if a suitable prior is available. Moreover, our theoretical findings are illuminated by a series of numerical experiments. Finally, we discuss some aspects of uniqueness related to the anisotropy in the heart.
The relativistic inverse stellar structure problem
Lindblom, Lee
2014-01-14
The observable macroscopic properties of relativistic stars (whose equations of state are known) can be predicted by solving the stellar structure equations that follow from Einstein's equation. For neutron stars, however, our knowledge of the equation of state is poor, so the direct stellar structure problem cannot be solved without modeling the highest density part of the equation of state in some way. This talk will describe recent work on developing a model-independent approach to determining the high-density neutron-star equation of state by solving an inverse stellar structure problem. This method uses the fact that Einstein's equation provides a deterministic relationship between the equation of state and the macroscopic observables of the stars which are composed of that material. This talk illustrates how this method will be able to determine the high-density part of the neutron-star equation of state to within a few percent accuracy when high-quality measurements of the masses and radii of just two or three neutron stars become available. This talk will also show that this method can be used with measurements of other macroscopic observables, like the masses and tidal deformabilities, which can (in principle) be measured by gravitational wave observations of binary neutron-star mergers.
PREFACE: International Conference on Inverse Problems 2010
NASA Astrophysics Data System (ADS)
Hon, Yiu-Chung; Ling, Leevan
2011-03-01
Following the first International Conference on Inverse Problems - Recent Theoretical Development and Numerical Approaches held at the City University of Hong Kong in 2002, the fifth International Conference was held again at the City University during December 13-17, 2010. This fifth conference was jointly organized by Professor Yiu-Chung Hon (Co-Chair, City University of Hong Kong, HKSAR), Dr Leevan Ling (Co-Chair, Hong Kong Baptist University, HKSAR), Professor Jin Cheng (Fudan University, China), Professor June-Yub Lee (Ewha Womans University, South Korea), Professor Gui-Rong Liu (University of Cincinnati, USA), Professor Jenn-Nan Wang (National Taiwan University, Taiwan), and Professor Masahiro Yamamoto (The University of Tokyo, Japan). It was agreed to alternate holding the conference among the above places (China, Japan, Korea, Taiwan, and Hong Kong) once every two years. The next conference has been scheduled to be held at the Southeast University (Nanjing, China) in 2012. The purpose of this series of conferences is to establish a strong collaborative link among the universities of the Asian-Pacific regions and worldwide leading researchers in inverse problems. The conference addressed the theoretical (mathematical), applied (engineering) and developmental aspects of inverse problems. The conference was intended to nurture Asian-American-European collaborations in the evolving interdisciplinary areas, and it was envisioned that the conference would lead to long-term commitments and collaborations among the participating countries and researchers. There was a total of more than 100 participants. A call for the submission of papers was sent out after the conference, and a total of 19 papers were finally accepted for publication in this proceedings. The papers included in the proceedings cover a wide scope, which reflects the current flourishing theoretical and numerical research into inverse problems. Finally, as the co-chairs of the Inverse Problems
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
ERIC Educational Resources Information Center
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
A Forward Glimpse into Inverse Problems through a Geology Example
ERIC Educational Resources Information Center
Winkel, Brian J.
2012-01-01
This paper describes a forward approach to an inverse problem related to detecting the nature of geological substrata which makes use of optimization techniques in a multivariable calculus setting. The true nature of the related inverse problem is highlighted. (Contains 2 figures.)
Numerical boundary condition procedure for the transonic axisymmetric inverse problem
NASA Technical Reports Server (NTRS)
Shankar, V.
1981-01-01
Two types of boundary condition procedures for the axisymmetric inverse problem are described. One is a Neumann type boundary condition (analogous to the analysis problem) and the other is a Dirichlet type boundary condition; both require special treatments to make the inverse scheme numerically stable. The dummy point concept is utilized in implementing both. Results indicate that the Dirichlet type inverse boundary condition is more robust and conceptually simpler to implement than the Neumann type procedure. A few results demonstrating the powerful capability of the newly developed inverse method, which can handle both shocked and shockless body designs, are included.
Diaz, J. I.; Galiano, G.; Padial, J. F.
1999-01-15
We study the uniqueness of solutions of a semilinear elliptic problem obtained from an inverse formulation when the nonlinear terms of the equation are prescribed in a general class of real functions. The inverse problem arises in the modeling of the magnetic confinement of a plasma in a Stellarator device. The uniqueness proof relies on an L^∞-estimate on the solution of an auxiliary nonlocal problem formulated in terms of the relative rearrangement of a datum with respect to the solution.
Methods for solving of inverse heat conduction problems
NASA Astrophysics Data System (ADS)
Kobilskaya, E.; Lyashenko, V.
2016-10-01
A general mathematical model of the high-temperature thermodiffusion that occurs in a limited environment is considered. Based on this model, a formulation of inverse problems for homogeneous and inhomogeneous parabolic equations is proposed. The inverse problem aims at identifying one or several unknown parameters of the mathematical model. These parameters allow maintaining the required temperature distribution and concentration distribution of a substance in the whole domain or in part of it. For each case (internal heat source, external heat source, or a combination) the appropriate method for solving the inverse problem is proposed.
One-Dimensional Infinite Horizon Nonconcave Optimal Control Problems Arising in Economic Dynamics
Zaslavski, Alexander J.
2011-12-15
We study the existence of optimal solutions for a class of infinite horizon nonconvex autonomous discrete-time optimal control problems. This class contains optimal control problems without discounting arising in economic dynamics which describe a model with a nonconcave utility function.
An inverse problem for a class of conditional probability measure-dependent evolution equations
NASA Astrophysics Data System (ADS)
Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.
2016-09-01
We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by partial differential equation models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach.
Inverse Scattering Problems for Acoustic Waves in AN Inhomogeneous Medium.
NASA Astrophysics Data System (ADS)
Kedzierawski, Andrzej Wladyslaw
1990-01-01
This dissertation considers the inverse scattering problem of determining either the absorption of sound in an inhomogeneous medium or the surface impedance of an obstacle from a knowledge of the far-field patterns of the scattered fields corresponding to many incident time-harmonic plane waves. First, we consider the inverse problem in the case when the scattering object is an inhomogeneous medium with complex refraction index having compact support. Our approach to this problem is the orthogonal projection method of Colton-Monk (cf. The inverse scattering problem for time-harmonic acoustic waves in an inhomogeneous medium, Quart. J. Mech. Appl. Math. 41 (1988), 97-125). After that, we prove the analogue of Karp's Theorem for the scattering of acoustic waves through an inhomogeneous medium with compact support. We then generalize some of these results to the case when the inhomogeneous medium is no longer of compact support. If the acoustic wave penetrates the inhomogeneous medium by only a small amount, then the inverse medium problem leads to the inverse obstacle problem with an impedance boundary condition. We solve the inverse impedance problem of determining the surface impedance of an obstacle of known shape by using both the methods of Kirsch-Kress and Colton-Monk (cf. R. Kress, Linear Integral Equations, Springer-Verlag, New York, 1989).
Inverse modelling problems in linear algebra undergraduate courses
NASA Astrophysics Data System (ADS)
Martinez-Luaces, Victor E.
2013-10-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different presentations will be discussed. Finally, several results will be presented and some conclusions proposed.
Uniqueness theorems for some inverse heat-conduction problems
NASA Astrophysics Data System (ADS)
Muzylev, N. V.
1980-04-01
Heat treatment of metals, involving rapid thermal processes, is an example of situations where the mathematical determination of thermal characteristics requires solving a certain inverse problem, i.e., determining those characteristics from some information on the temperature field obtained from direct measurements. The present paper deals with the uniqueness of inverse problems of this type. Uniqueness theorems are proven for the determination of the coefficients of a nonlinear parabolic equation from the boundary conditions.
Inverse kinematics problem in robotics using neural networks
NASA Technical Reports Server (NTRS)
Choi, Benjamin B.; Lawrence, Charles
1992-01-01
In this paper, multilayer feedforward networks are applied to the robot inverse kinematics problem. The networks are trained with end-effector positions and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary end-effector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way to both model the manipulator inverse kinematics and circumvent the problems associated with algorithmic solution methods.
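A minimal version of this scheme can be sketched for a planar 2-DOF arm (the paper uses a 3-DOF spatial manipulator; the link lengths, network size and training setup below are illustrative assumptions): generate (position, joint-angle) pairs with the forward kinematics, then fit a small feedforward network to the inverse map.

```python
import numpy as np

rng = np.random.default_rng(1)
L1, L2 = 1.0, 0.8                                  # link lengths (assumed)

def forward(theta):
    """Forward kinematics of a planar 2-DOF arm: joint angles -> end-effector (x, y)."""
    t1, t2 = theta[:, 0], theta[:, 1]
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return np.stack([x, y], axis=1)

# training data: restrict theta2 so the inverse map is single-valued (elbow-down only)
theta = np.stack([rng.uniform(0, np.pi / 2, 2000),
                  rng.uniform(0.3, np.pi - 0.3, 2000)], axis=1)
pos = forward(theta)

# one-hidden-layer network trained by full-batch gradient descent on the MSE
W1 = rng.normal(0, 0.5, (2, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.5, (64, 2)); b2 = np.zeros(2)
lr = 0.05
for _ in range(3000):
    h = np.tanh(pos @ W1 + b1)                     # hidden activations
    out = h @ W2 + b2                              # predicted joint angles
    err = out - theta
    gW2 = h.T @ err / len(pos); gb2 = err.mean(0)  # backprop through output layer
    dh = (err @ W2.T) * (1 - h ** 2)               # backprop through tanh
    gW1 = pos.T @ dh / len(pos); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
```

After training, pushing a target position through the network and then through `forward` checks how well the learned inverse composes with the true kinematics.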
The metric-restricted inverse design problem
NASA Astrophysics Data System (ADS)
Acharya, Amit; Lewicka, Marta; Pakzad, Mohammad Reza
2016-06-01
We study a class of design problems in solid mechanics, leading to a variation on the classical question of equi-dimensional embeddability of Riemannian manifolds. In this general new context, we derive a necessary and sufficient existence condition, given through a system of total differential equations, and discuss its integrability. In the classical context, the same approach yields conditions of immersibility of a given metric in terms of the Riemann curvature tensor. In the present situation, the equations do not close in a straightforward manner, and successive differentiation of the compatibility conditions leads to a new algebraic description of integrability. We also recast the problem in a variational setting and analyze the infimum of the appropriate incompatibility energy, resembling non-Euclidean elasticity. We then derive a Γ-convergence result for dimension reduction from 3d to 2d in the Kirchhoff energy scaling regime.
Inverse scattering problems for acoustic waves in an inhomogeneous medium
NASA Astrophysics Data System (ADS)
Kedzierawski, Andrzej Wladyslaw
The inverse scattering problem of determining either the absorption of sound in an inhomogeneous medium or the surface impedance of an obstacle from a knowledge of the far-field patterns of the scattered field corresponding to many incident time-harmonic plane waves is considered. First, the inverse problem is studied in the case when the scattering object is an inhomogeneous medium with complex refractive index having compact support. The approach to this problem is the orthogonal projection method of Colton-Monk (1988). After that, the analogue of Karp's Theorem is proven for the scattering of acoustic waves through an inhomogeneous medium with compact support. Some of these results are then generalized to the case when the inhomogeneous medium is no longer of compact support. If the acoustic wave penetrates the inhomogeneous medium by only a small amount, then the inverse medium problem leads to the inverse obstacle problem with an impedance boundary condition. The inverse impedance problem of determining the surface impedance of an obstacle of known shape is solved by using both the methods of Kirsch-Kress and Colton-Monk (1989).
Solving inverse problems of identification type by optimal control methods
Lenhart, S.; Protopopescu, V.; Jiongmin Yong
1997-06-01
Inverse problems of identification type for nonlinear equations are considered within the framework of optimal control theory. The rigorous solution of any particular problem depends on the functional setting, type of equation, and unknown quantity (or quantities) to be determined. Here the authors present only the general articulations of the formalism. Compared to classical regularization methods (e.g. Tikhonov coupled with optimization schemes), their approach presents several advantages, namely: (i) a systematic procedure to solve inverse problems of identification type; (ii) an explicit expression for the approximations of the solution; and (iii) a convenient numerical solution of these approximations.
Correct averaging in transmission radiography: Analysis of the inverse problem
NASA Astrophysics Data System (ADS)
Wagner, Michael; Hampel, Uwe; Bieberle, Martina
2016-05-01
Transmission radiometry is frequently used in industrial measurement processes as a means to assess the thickness or composition of a material. A common problem encountered in such applications is the so-called dynamic bias error, which results from averaging beam intensities over time while the material distribution changes. We recently reported on a method to overcome the associated measurement error by solving an inverse problem, which in principle restores the exact average attenuation by considering the Poisson statistics of the underlying particle or photon emission process. In this paper we present a detailed analysis of the inverse problem and its optimal regularized numerical solution. As a result we derive an optimal parameter configuration for the inverse problem.
NASA Astrophysics Data System (ADS)
Qiu, Lingyun
Many inverse problems arising in different disciplines, including exploration geophysics, medical imaging and nondestructive evaluation, can be formulated as a nonlinear operator equation, F(x) = y, where F models the corresponding forward problem. Usually, the inverse problem is ill-posed in the sense that a small perturbation in the data can have a significant impact on the reconstruction. In the first part of this dissertation, we focus on the analysis of iterative methods in Banach spaces. We assume certain conditional Hölder or Lipschitz type stability of the inverse problem and prove a linear or sublinear convergence rate for the Landweber iteration and a projected steepest descent iteration. This is a novel viewpoint for the convergence analysis of iterative methods. The second part of this dissertation is concerned with the conditional Lipschitz stability estimate for the inverse boundary value problem for time-harmonic waves. Assuming that the wavespeed (density) is piecewise constant with discontinuities on a finite number of known interfaces, we provide a Lipschitz stability estimate for the inverse problems of acoustic (elastic) waves. In the third part, we study the inverse boundary value problem for acoustic time-harmonic waves: to determine the property of the medium inside a domain from measurements of the displacement and normal stress on its boundary. The governing equation is the Helmholtz equation. A hierarchy algorithm is proposed and analysed for the iterative reconstruction with multi-frequency data. The algorithm is based on a projected steepest descent iteration with stability constraints.
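For the linear case F(x) = Ax, the Landweber iteration analyzed in the first part takes a particularly simple form, x_{k+1} = x_k + w A^T (y - A x_k) with w < 2/||A||^2, and early stopping acts as the regularizer. A hedged sketch (the smoothing kernel, noise level and stopping index are illustrative choices, not from the dissertation):

```python
import numpy as np

rng = np.random.default_rng(2)

# mildly ill-posed linear forward operator: a Gaussian smoothing kernel matrix
n = 50
t = np.linspace(0, 1, n)
A = np.exp(-30 * (t[:, None] - t[None, :]) ** 2)
x_true = np.sin(2 * np.pi * t)
y = A @ x_true + rng.normal(0, 1e-3, n)            # noisy data

# Landweber iteration: x_{k+1} = x_k + w * A^T (y - A x_k), with w < 2 / ||A||^2
w = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):                               # early stopping regularizes
    x = x + w * A.T @ (y - A @ x)
```

Each step moves against the gradient of the residual norm; because high-frequency components of the error shrink very slowly, stopping early keeps the amplified noise small.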
Numerical study of a parametric parabolic equation and a related inverse boundary value problem
NASA Astrophysics Data System (ADS)
Mustonen, Lauri
2016-10-01
We consider a time-dependent linear diffusion equation together with a related inverse boundary value problem. The aim of the inverse problem is to determine, based on observations on the boundary, the nonhomogeneous diffusion coefficient in the interior of an object. The method in this paper relies on solving the forward problem for a whole family of diffusivities by using a spectral Galerkin method in the high-dimensional parameter domain. The evaluation of the parametric solution and its derivatives is then completely independent of spatial and temporal discretizations. In the case of a quadratic approximation for the parameter dependence and a direct solver for linear least squares problems, we show that the evaluation of the parametric solution does not increase the complexity of any linearized subproblem arising from a Gauss-Newton method that is used to minimize a Tikhonov functional. The feasibility of the proposed algorithm is demonstrated by diffusivity reconstructions in two and three spatial dimensions.
From inverse problems in mathematical physiology to quantitative differential diagnoses.
Zenker, Sven; Rubin, Jonathan; Clermont, Gilles
2007-11-01
The improved capacity to acquire quantitative data in a clinical setting has generally failed to improve outcomes in acutely ill patients, suggesting a need for advances in computer-supported data interpretation and decision making. In particular, the application of mathematical models of experimentally elucidated physiological mechanisms could augment the interpretation of quantitative, patient-specific information and help to better target therapy. Yet, such models are typically complex and nonlinear, a reality that often precludes the identification of unique parameters and states of the model that best represent available data. Hypothesizing that this non-uniqueness can convey useful information, we implemented a simplified simulation of a common differential diagnostic process (hypotension in an acute care setting), using a combination of a mathematical model of the cardiovascular system, a stochastic measurement model, and Bayesian inference techniques to quantify parameter and state uncertainty. The output of this procedure is a probability density function on the space of model parameters and initial conditions for a particular patient, based on prior population information together with patient-specific clinical observations. We show that multimodal posterior probability density functions arise naturally, even when unimodal and uninformative priors are used. The peaks of these densities correspond to clinically relevant differential diagnoses and can, in the simplified simulation setting, be constrained to a single diagnosis by assimilating additional observations from dynamical interventions (e.g., fluid challenge). We conclude that the ill-posedness of the inverse problem in quantitative physiology is not merely a technical obstacle, but rather reflects clinical reality and, when addressed adequately in the solution process, provides a novel link between mathematically described physiological knowledge and the clinical concept of differential diagnoses
Variational structure of inverse problems in wave propagation and vibration
Berryman, J.G.
1995-03-01
Practical algorithms for solving realistic inverse problems may often be viewed as problems in nonlinear programming with the data serving as constraints. Such problems are most easily analyzed when it is possible to segment the solution space into regions that are feasible (satisfying all the known constraints) and infeasible (violating some of the constraints). Then, if the feasible set is convex or at least compact, the solution to the problem will normally lie on the boundary of the feasible set. A nonlinear program may seek the solution by systematically exploring the boundary while satisfying progressively more constraints. Examples of inverse problems in wave propagation (traveltime tomography) and vibration (modal analysis) will be presented to illustrate how the variational structure of these problems may be used to create nonlinear programs using implicit variational constraints.
Inverse problems of ultrasound tomography in models with attenuation
NASA Astrophysics Data System (ADS)
Goncharsky, Alexander V.; Romanov, Sergey Y.
2014-04-01
We develop efficient methods for solving inverse problems of ultrasound tomography in models with attenuation. We treat the inverse problem as a coefficient inverse problem for unknown coordinate-dependent functions that characterize both the speed cross section and the coefficients of the wave equation describing attenuation in the diagnosed region. We derive exact formulas for the gradient of the residual functional in models with attenuation, and develop efficient algorithms for minimizing the gradient of the residual by solving the conjugate problem. These algorithms are easy to parallelize when implemented on supercomputers, allowing the computation time to be reduced by a factor of several hundred compared to a PC. The numerical analysis of model problems shows that it is possible to reconstruct not only the speed cross section, but also the properties of the attenuating medium. We investigate the choice of the initial approximation for iterative algorithms used to solve inverse problems. The algorithms considered are primarily meant for the development of ultrasound tomographs for differential diagnosis of breast cancer.
Improved TV-CS Approaches for Inverse Scattering Problem
Bevacqua, M. T.; Di Donato, L.
2015-01-01
Total Variation and Compressive Sensing (TV-CS) techniques represent a very attractive approach to inverse scattering problems. In fact, if the unknown is piecewise constant and so has a sparse gradient, TV-CS approaches allow us to achieve optimal reconstructions, considerably reducing the number of measurements and enforcing sparsity on the gradient of the sought unknowns. In this paper, we introduce two different techniques based on TV-CS that exploit the concept of gradient in different ways in order to improve the solution of inverse scattering problems obtained by the TV-CS approach. Numerical examples are provided to show the effectiveness of the methods. PMID:26495420
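The premise that piecewise-constant unknowns have sparse gradients can be illustrated in one dimension. The sketch below (signal, noise level, and penalty weight are illustrative assumptions; the paper's techniques are considerably more elaborate) denoises a piecewise-constant signal by subgradient descent on a TV-penalized least-squares functional:

```python
import numpy as np

# 1-D total-variation denoising toy: a piecewise-constant signal has a
# sparse finite-difference gradient, so penalizing ||diff(x)||_1 suppresses
# noise while preserving jumps.  Minimize 0.5||x - y||^2 + lam*||diff(x)||_1
# by plain subgradient descent (simple rather than fast).

rng = np.random.default_rng(1)
x_true = np.concatenate([np.zeros(40), np.ones(40), 0.3 * np.ones(40)])
y = x_true + 0.25 * rng.standard_normal(x_true.size)

lam, step = 0.3, 0.05
x = y.copy()
for _ in range(1000):
    # subgradient: (x - y) + lam * D^T sign(D x), with D = forward difference
    g = (x - y) - lam * np.diff(np.sign(np.diff(x)), prepend=0, append=0)
    x -= step * g
```

The denoised `x` ends up closer to the true piecewise-constant signal than the noisy data `y`, which is the effect the TV penalty is designed to produce.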
Methods for solving ill-posed inverse problems
NASA Astrophysics Data System (ADS)
Alifanov, O. M.
1983-11-01
Various approaches to the solution of inverse problems of heat conduction are reviewed, including direct analytical and numerical methods, the method of iterative regularization, and algebraic and numerical methods regularized in accordance with the variational principle. The method of iterative regularization is shown to be the most versatile of the above approaches. The basic principles of this method are briefly examined, and methods are proposed for computing the gradients of discrepancies. An approach is proposed to the iterative solution of inverse problems with a specified order of smoothness.
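Iterative regularization is easiest to see in the linear case. A minimal sketch, assuming a generic smoothing operator rather than a heat-conduction model: Landweber iteration stopped by the discrepancy principle, where the stopping index acts as the regularization parameter.

```python
import numpy as np

# Landweber iteration for an ill-posed linear system A x = b:
#   x <- x + w * A^T (b - A x),
# stopped when ||A x - b|| <= tau * (noise level).  Early stopping is the
# regularization; iterating to convergence would amplify the noise.

rng = np.random.default_rng(0)
n = 50
s = np.linspace(0, 1, n)
A = 0.05 * np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.02)  # smoothing kernel
x_true = np.sin(2 * np.pi * s)
delta = 1e-3                                   # per-sample noise level
b = A @ x_true + delta * rng.standard_normal(n)

w = 1.0 / np.linalg.norm(A, 2) ** 2            # step size, safely < 2/||A||^2
tau = 1.5
x = np.zeros(n)
for _ in range(20000):
    r = b - A @ x
    if np.linalg.norm(r) <= tau * delta * np.sqrt(n):   # discrepancy principle
        break
    x = x + w * A.T @ r
```

The residual norm decreases monotonically, and the loop exits once the residual is at the noise level rather than driving it to zero.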
Solving probabilistic inverse problems rapidly with prior samples
NASA Astrophysics Data System (ADS)
Käufl, Paul; Valentine, Andrew P.; de Wit, Ralph W.; Trampert, Jeannot
2016-06-01
Owing to the increasing availability of computational resources, in recent years the probabilistic solution of non-linear, geophysical inverse problems by means of sampling methods has become increasingly feasible. Nevertheless, we still face situations in which a Monte Carlo approach is not practical. This is particularly true in cases where the evaluation of the forward problem is computationally intensive or where inversions have to be carried out repeatedly or in a timely manner, as in natural hazards monitoring tasks such as earthquake early warning. Here, we present an alternative to Monte Carlo sampling, in which inferences are entirely based on a set of prior samples, that is, samples that have been obtained independently of a particular observed datum. This has the advantage that the computationally expensive sampling stage becomes separated from the inversion stage, and the set of prior samples, once obtained, can be reused for repeated evaluations of the inverse mapping without additional computational effort. This property is useful if the problem is such that repeated inversions of independent data have to be carried out. We formulate the inverse problem in a Bayesian framework and present a practical way to make posterior inferences based on a set of prior samples. We compare the prior-sampling-based approach to a Markov chain Monte Carlo approach that samples from the posterior probability distribution. We show results for both a toy example and a realistic seismological source parameter estimation problem. We find that the posterior uncertainty estimates obtained from prior sampling can be considered conservative estimates of the uncertainties obtained by directly sampling from the posterior distribution.
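The prior-sampling idea can be sketched with importance weighting, one simple way to reuse prior samples (the authors' actual inference scheme may differ). The forward model, prior, and noise level below are illustrative assumptions:

```python
import numpy as np

# Inference from prior samples only: draw samples from the prior ONCE,
# run the expensive forward model on them offline, then estimate the
# posterior mean for any new observation by likelihood weighting,
# with no further forward solves.

rng = np.random.default_rng(0)

def forward(m):                 # toy nonlinear, monotonic forward problem
    return m ** 3 + m

m_prior = rng.normal(0.0, 1.0, size=20000)    # prior ensemble, computed once
d_pred = forward(m_prior)                     # forward solves, done offline

def posterior_mean(d_obs, sigma=0.1):
    w = np.exp(-0.5 * ((d_pred - d_obs) / sigma) ** 2)   # Gaussian likelihood
    return np.sum(w * m_prior) / np.sum(w)

m_true = 0.8
d_obs = forward(m_true) + 0.05
print(posterior_mean(d_obs))    # close to m_true
```

Each new datum costs only a reweighting of the stored ensemble, which is why repeated inversions (as in early-warning settings) become cheap.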
Numerical nonlinear inverse problem of determining wall heat flux
NASA Astrophysics Data System (ADS)
Zueco, J.; Alhama, F.; González Fernández, C. F.
2005-03-01
The inverse problem of determining time-variable surface heat flux in a plane wall, with constant or temperature dependent thermal properties, is numerically studied. Different kinds of incident heat flux, including rectangular waveform, are assumed. The solution is numerically solved as a function estimation problem, so that no a priori information for the functional waveforms of the unknown heat flux is needed. In all cases, a solution in the form of a piece-wise function is used to approach the incident flux. Transient temperature measurements at the boundary, from the solution of the direct problem, served as the simulated experimental data needed as input for the inverse analysis. Both direct and inverse heat conduction problems are solved using the network simulation method. The solution is obtained step-by-step by minimising the classical functional that compares the above input data with those obtained from the solution of the inverse problem. A straight line of variable slope and length is used for each one of the stretches of the desired solution. The influence of random error, number of functional terms and the effect of sensor location are studied. In all cases, the results closely agree with the solution.
A tutorial on inverse problems for anomalous diffusion processes
NASA Astrophysics Data System (ADS)
Jin, Bangti; Rundell, William
2015-03-01
Over the last two decades, anomalous diffusion processes in which the mean square variance grows slower or faster than that in a Gaussian process have found many applications. At a macroscopic level, these processes are adequately described by fractional differential equations, which involve fractional derivatives in time and/or space. The fractional derivatives describe either a history mechanism or long-range interactions of particle motions at a microscopic level. The new physics can change dramatically the behavior of the forward problems. For example, the solution operator of the time fractional diffusion equation has only a limited smoothing property, whereas the solution of the space fractional diffusion equation may contain a weak singularity. Naturally one expects that the new physics will impact related inverse problems in terms of uniqueness, stability, and degree of ill-posedness. The last aspect is especially important from a practical point of view, i.e., stably reconstructing the quantities of interest. In this paper, we employ a formal analytic and numerical way, especially the two-parameter Mittag-Leffler function and singular value decomposition, to examine the degree of ill-posedness of several ‘classical’ inverse problems for fractional differential equations involving a Djrbashian-Caputo fractional derivative in either time or space, which represent the fractional analogues of those for classical integral order differential equations. We discuss four inverse problems, i.e., backward fractional diffusion, sideways problem, inverse source problem and inverse potential problem for time fractional diffusion, and inverse Sturm-Liouville problem, Cauchy problem, backward fractional diffusion and sideways problem for space fractional diffusion. It is found that contrary to the wide belief, the influence of anomalous diffusion on the degree of ill-posedness is not definitive: it can either significantly improve or worsen the conditioning of
A time domain sampling method for inverse acoustic scattering problems
NASA Astrophysics Data System (ADS)
Guo, Yukun; Hömberg, Dietmar; Hu, Guanghui; Li, Jingzhi; Liu, Hongyu
2016-06-01
This work concerns the inverse scattering problems of imaging unknown/inaccessible scatterers by transient acoustic near-field measurements. Based on the analysis of the migration method, we propose efficient and effective sampling schemes for imaging small and extended scatterers from knowledge of time-dependent scattered data due to incident impulsive point sources. Though the inverse scattering problems are known to be nonlinear and ill-posed, the proposed imaging algorithms are totally "direct" involving only integral calculations on the measurement surface. Theoretical justifications are presented and numerical experiments are conducted to demonstrate the effectiveness and robustness of our methods. In particular, the proposed static imaging functionals enhance the performance of the total focusing method (TFM) and the dynamic imaging functionals show analogous behavior to the time reversal inversion but without solving time-dependent wave equations.
Solving the Inverse-Square Problem with Complex Variables
ERIC Educational Resources Information Center
Gauthier, N.
2005-01-01
The equation of motion for a mass that moves under the influence of a central, inverse-square force is formulated and solved as a problem in complex variables. To find the solution, the constancy of angular momentum is first established using complex variables. Next, the complex position coordinate and complex velocity of the particle are assumed…
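The complex-variable formulation is compact enough to check numerically. The sketch below (initial conditions and integrator are illustrative choices, not the article's analytic derivation) verifies that the angular momentum L = Im(conj(z) * z') is conserved for the inverse-square equation of motion z'' = -z/|z|^3:

```python
import numpy as np

# Inverse-square central force in complex form: position z(t) in the plane,
# z'' = -z / |z|^3.  The angular momentum L = Im(conj(z) * z') should be a
# constant of the motion; a short RK4 integration confirms this numerically.

def accel(z):
    return -z / abs(z) ** 3

def rk4_step(z, v, dt):
    k1v, k1z = accel(z), v
    k2v, k2z = accel(z + 0.5 * dt * k1z), v + 0.5 * dt * k1v
    k3v, k3z = accel(z + 0.5 * dt * k2z), v + 0.5 * dt * k2v
    k4v, k4z = accel(z + dt * k3z), v + dt * k3v
    return (z + dt * (k1z + 2 * k2z + 2 * k3z + k4z) / 6,
            v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6)

z, v = 1.0 + 0j, 0.3 + 1.0j              # bound orbit, nonzero L
L0 = (z.conjugate() * v).imag
for _ in range(5000):
    z, v = rk4_step(z, v, 0.002)
L1 = (z.conjugate() * v).imag
print(abs(L1 - L0))                       # ~ 0: angular momentum conserved
```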
Geophysics in Hydrogeological Inverse Problem: Hero or Villain?
NASA Astrophysics Data System (ADS)
Alcolea, A.; Renard, P.; Mariethoz, G.
2007-12-01
The geostatistical inverse problem is a powerful tool to aid decision-makers in aquifer management. During the last few years, existing inverse problem codes have been updated in order to accommodate "non-traditional" types of observations (i.e., data other than heads or concentrations). The potential of exhaustive geophysical data has been shown to be well suited for aquifer characterization. However, limited attention has been devoted to the use of this information in real field hydrogeological inverse problems. In this work, we present an application of the inverse problem to the management of coastal aquifers including different types of data (heads, resistivities and prior information on transmissivity and storage coefficient). Spatial variability is characterized using the regularized pilot points method. The procedure is as follows. First, we obtain a characterization of the transmissivity and storage coefficient fields from calibration data. Second, this characterization is used to design a pumping network by means of a genetic algorithm. Several constraints apply, such as operational costs or environmental side effects. Three cases are presented, depending on the calibration data sets: (1) only resistivities (no calibration is performed), (2) heads and prior information on model parameters, and (3) all of them altogether (resistivities are used as an external drift). Results show that, by themselves, resistivity or head data sets (and prior information) do not suffice to obtain a reliable characterization of the system. However, the consideration of all data at the same time leads to the best characterization of the system among the ones tested.
Computational methods for inverse problems in geophysics: inversion of travel time observations
Pereyra, V.; Keller, H.B.; Lee, W.H.K.
1980-01-01
General ways of solving various inverse problems are studied for given travel time observations between sources and receivers. These problems are separated into three components: (a) the representation of the unknown quantities appearing in the model; (b) the nonlinear least-squares problem; (c) the direct, two-point ray-tracing problem used to compute travel time once the model parameters are given. Novel software is described for (b) and (c), and some ideas given on (a). Numerical results obtained with artificial data and an implementation of the algorithm are also presented. © 1980.
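Component (b) can be illustrated with a classical toy: locating a source and origin time from travel times in a constant-velocity medium by Gauss-Newton (the geometry, velocity, and receiver layout below are illustrative assumptions, far simpler than the paper's ray-traced models):

```python
import numpy as np

# Nonlinear least squares for travel times: find source position (x, y) and
# origin time t0 from arrival times t_i = t0 + ||src - rx_i|| / v at known
# receivers, via Gauss-Newton on the residuals.

v = 2.0
rx = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, -3.0]])
src_true = np.array([3.0, 4.0])
t0_true = 1.0
t_obs = t0_true + np.linalg.norm(rx - src_true, axis=1) / v   # noiseless data

p = np.array([5.0, 5.0, 0.0])          # initial guess (x, y, t0)
for _ in range(30):
    dist = np.linalg.norm(rx - p[:2], axis=1)
    r = p[2] + dist / v - t_obs        # travel time residuals
    J = np.column_stack([(p[0] - rx[:, 0]) / (v * dist),   # dt/dx
                         (p[1] - rx[:, 1]) / (v * dist),   # dt/dy
                         np.ones(len(rx))])                # dt/dt0
    p -= np.linalg.lstsq(J, r, rcond=None)[0]              # Gauss-Newton step

print(p)  # ~ [3.0, 4.0, 1.0]
```

In the paper's setting the straight-line distance is replaced by two-point ray tracing (component (c)), but the outer least-squares iteration looks the same.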
NASA Astrophysics Data System (ADS)
Plestenjak, Bor; Gheorghiu, Călin I.; Hochstenbach, Michiel E.
2015-10-01
In numerous science and engineering applications a partial differential equation has to be solved on some fairly regular domain that allows the use of the method of separation of variables. In several orthogonal coordinate systems separation of variables applied to the Helmholtz, Laplace, or Schrödinger equation leads to a multiparameter eigenvalue problem (MEP); important cases include Mathieu's system, Lamé's system, and a system of spheroidal wave functions. Although multiparameter approaches are exploited occasionally to solve such equations numerically, MEPs remain less well known, and the variety of available numerical methods is not wide. The classical approach of discretizing the equations using standard finite differences leads to algebraic MEPs with large matrices, which are difficult to solve efficiently. The aim of this paper is to change this perspective. We show that by combining spectral collocation methods and new efficient numerical methods for algebraic MEPs it is possible to solve such problems both very efficiently and accurately. We improve on several previous results available in the literature, and also present a MATLAB toolbox for solving a wide range of problems.
Kılıç, Emre; Eibert, Thomas F.
2015-05-01
An approach combining boundary integral and finite element methods is introduced for the solution of three-dimensional inverse electromagnetic medium scattering problems. Based on the equivalence principle, unknown equivalent electric and magnetic surface current densities on a closed surface are utilized to decompose the inverse medium problem into two parts: a linear radiation problem and a nonlinear cavity problem. The first problem is formulated by a boundary integral equation, the computational burden of which is reduced by employing the multilevel fast multipole method (MLFMM). Reconstructed Cauchy data on the surface allows the utilization of the Lorentz reciprocity and Poynting theorems. Exploiting these theorems, the noise level and an initial guess are estimated for the cavity problem. Moreover, it is possible to determine whether the material is lossy or not. In the second problem, the estimated surface currents form inhomogeneous boundary conditions of the cavity problem. The cavity problem is formulated by the finite element technique and solved iteratively by the Gauss–Newton method to reconstruct the properties of the object. Regularization for both the first and the second problems is achieved by a Krylov subspace method. The proposed method is tested against both synthetic and experimental data and promising reconstruction results are obtained.
Analysis of some integrals arising in the atomic three-electron problem
NASA Astrophysics Data System (ADS)
King, Frederick W.
1991-12-01
A detailed analysis is presented for the evaluation of atomic integrals of the form $\int r_1^i r_2^j r_3^k r_{23}^{-2} r_{31}^m r_{12}^n e^{-\alpha r_1 - \beta r_2 - \gamma r_3}\,dr_1\,dr_2\,dr_3$, which arise in several contexts of the three-electron atomic problem. All convergent integrals with $i \ge -2$, $j \ge -2$, $k \ge -2$, $m \ge -1$, and $n \ge -1$ are examined. These integrals are solved by two distinct procedures. A majority of the integrals can be evaluated by a reduction of the three-electron integrals to integrals arising in the atomic two-electron integral problem. A second approach allows all integrals with the aforementioned indices to be evaluated by the use of Sack's expansion [J. Math. Phys. 5, 245 (1964)] of the interelectronic separation, which leads to a reduction of the above nine-dimensional integrals to a set of three-dimensional integrals. A discussion is given for the numerical evaluation of the three-dimensional integrals that arise.
Making use of a partial order in solving inverse problems
NASA Astrophysics Data System (ADS)
Korolev, Yury; Yagola, Anatoly
2013-09-01
In many applications, the concepts of inequality and comparison play an essential role, and the nature of the objects under consideration is better described by means of partial order relations. To reflect this nature, the conventional problem statements in normed spaces have to be modified. There is a need to enrich the structure of the functional spaces employed. In this paper, we consider inverse problems in Banach lattices—functional spaces endowed with both norm and partial order. In this new problem statement, we are able to construct a linearly constrained set that contains all admissible approximate solutions given the approximate data, the approximation errors and prior restrictions that are available. We do not suppose the problem to be either well posed or ill posed since we believe that the concepts and techniques we describe here can be useful in both cases. The range of applications includes medical imaging (CT, PET, SPECT, photo- and thermoacoustics), geophysics (e.g., inverse gravimetry problems), engineering design and other inverse problems.
NASA Astrophysics Data System (ADS)
Uhlmann, Gunther
2008-07-01
This volume represents the proceedings of the fourth Applied Inverse Problems (AIP) international conference and the first congress of the Inverse Problems International Association (IPIA) which was held in Vancouver, Canada, June 25-29, 2007. The organizing committee was formed by Uri Ascher, University of British Columbia, Richard Froese, University of British Columbia, Gary Margrave, University of Calgary, and Gunther Uhlmann, University of Washington, chair. The conference was part of the activities of the Pacific Institute of Mathematical Sciences (PIMS) Collaborative Research Group on inverse problems (http://www.pims.math.ca/scientific/collaborative-research-groups/past-crgs). This event was also supported by grants from NSF and MITACS. Inverse Problems (IP) are problems where causes for a desired or an observed effect are to be determined. They lie at the heart of scientific inquiry and technological development. The enormous increase in computing power and the development of powerful algorithms have made it possible to apply the techniques of IP to real-world problems of growing complexity. Applications include a number of medical as well as other imaging techniques, location of oil and mineral deposits in the earth's substructure, creation of astrophysical images from telescope data, finding cracks and interfaces within materials, shape optimization, model identification in growth processes and, more recently, modelling in the life sciences. The series of Applied Inverse Problems (AIP) Conferences aims to provide a primary international forum for academic and industrial researchers working on all aspects of inverse problems, such as mathematical modelling, functional analytic methods, computational approaches, numerical algorithms etc. The steering committee of the AIP conferences consists of Heinz Engl (Johannes Kepler Universität, Austria), Joyce McLaughlin (RPI, USA), William Rundell (Texas A&M, USA), Erkki Somersalo (Helsinki University of Technology
Application of inverse heat conduction problem on temperature measurement
NASA Astrophysics Data System (ADS)
Zhang, X.; Zhou, G.; Dong, B.; Li, Q.; Liu, L. Q.
2013-09-01
For regenerative cooling devices, such as G-M refrigerators, pulse tube coolers or thermoacoustic coolers, the oscillation of the gas inevitably brings about temperature fluctuations, which are harmful in many applications requiring highly stable temperatures. To find out the oscillating mechanism of the cooling temperature and improve the temperature stability of the cooler, the inner temperature of the cold head has to be measured. However, it is difficult to measure the inner oscillating temperature of the cold head directly because invasive temperature detectors may disturb the oscillating flow. Fortunately, the outer surface temperature of the cold head can be measured accurately by invasive temperature measurement techniques. In this paper, a mathematical model of the inverse heat conduction problem is presented to identify the inner surface oscillating temperature of the cold head from the measured temperature of the outer surface in a G-M cryocooler. The inverse heat conduction problem is solved using a control volume approach: the measured outer surface oscillating temperature serves as the input condition of the inverse problem, from which the inner surface oscillating temperature of the cold head is obtained. A simple uncertainty analysis of the oscillating temperature measurement is also provided.
SIAM conference on inverse problems: Geophysical applications. Final technical report
1995-12-31
This conference was the second in a series devoted to a particular area of inverse problems. The theme of this series is to discuss problems of major scientific importance in a specific area from a mathematical perspective. The theme of this symposium was geophysical applications. In putting together the program we tried to include a wide range of mathematical scientists and to interpret geophysics in as broad a sense as possible. Our speakers came from industry, government laboratories, and diverse departments in academia. We managed to attract a geographically diverse audience with participation from five continents. There were talks devoted to seismology, hydrology, determination of the earth's interior on a global scale, as well as oceanographic and atmospheric inverse problems.
An Inverse Problem Approach for Elasticity Imaging through Vibroacoustics
Aguilo, Miguel A.; Brigham, J. C.; Aquino, W.; Fatemi, M.
2011-01-01
A new methodology for estimating the spatial distribution of elastic moduli using the steady-state dynamic response of solids immersed in fluids is presented. The technique relies on the ensuing acoustic pressure field from a remotely excited solid to inversely estimate the spatial distribution of Young’s modulus. This work proposes the use of Gaussian radial basis functions (GRBF) to represent the spatial variation of elastic moduli. GRBF are shown to possess the advantage of representing smooth functions with quasi-compact support, and can efficiently represent elastic moduli distributions such as those that occur in soft biological tissue in the presence of tumors. The direct problem consists of a coupled acoustic-structure interaction boundary value problem solved in the frequency domain using the finite element method. The inverse problem is cast as an optimization problem in which the objective function is defined as a measure of discrepancy between an experimentally measured response and a finite element representation of the system. Non-gradient based optimization algorithms in combination with a divide and conquer strategy are used to solve the resulting optimization problem. The feasibility of the proposed approach is demonstrated through a series of numerical experiments and a physical experiment. For comparison purposes, the surface velocity response was also used as the measured response for the inverse characterization in place of the acoustic pressure. PMID:20335092
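The GRBF parameterization itself is simple to write down. The sketch below evaluates a modulus field with a single stiff inclusion on a uniform background; the centers, widths, and weights are illustrative placeholders, not values from the paper:

```python
import numpy as np

# Gaussian radial basis function (GRBF) representation of a smooth
# Young's modulus field:
#   E(x) = baseline + sum_k w_k * exp(-||x - c_k||^2 / s_k^2)
# A handful of (center, width, weight) triples parameterizes the unknown
# field, which is what the inverse problem then optimizes over.

def grbf_field(x, centers, widths, weights, baseline=1.0):
    E = np.full(x.shape[0], baseline)
    for c, s, w in zip(centers, widths, weights):
        E += w * np.exp(-np.sum((x - c) ** 2, axis=1) / s ** 2)
    return E

# a stiff, tumor-like inclusion on a uniform background modulus
grid = np.stack(np.meshgrid(np.linspace(0, 1, 50),
                            np.linspace(0, 1, 50)), -1).reshape(-1, 2)
E = grbf_field(grid, centers=[np.array([0.6, 0.4])],
               widths=[0.1], weights=[4.0])
```

The quasi-compact support is visible here: away from the inclusion the field returns to the baseline, so few basis functions are needed to describe a localized stiffness anomaly.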
A non-local free boundary problem arising in a theory of financial bubbles
Berestycki, Henri; Monneau, Regis; Scheinkman, José A.
2014-01-01
We consider an evolution non-local free boundary problem that arises in the modelling of speculative bubbles. The solution of the model is the speculative component in the price of an asset. In the framework of viscosity solutions, we show the existence and uniqueness of the solution. We also show that the solution is convex in space, and establish several monotonicity properties of the solution and of the free boundary with respect to parameters of the problem. To study the free boundary, we use, in particular, the fact that the odd part of the solution solves a more standard obstacle problem. We show that the free boundary is [Formula: see text] and describe the asymptotics of the free boundary as c, the cost of transacting the asset, goes to zero. PMID:25288815
Boundary identification for 2-D parabolic problems arising in thermal testing of materials
NASA Technical Reports Server (NTRS)
Banks, H. T.; Kojima, Fumio
1988-01-01
Problems on the identification of two-dimensional spatial domains arising in the detection and characterization of structural flaws in materials are considered. For a thermal diffusion system with external boundary input, observations of the temperature on the surface are used in an output least square approach. Parameter estimation techniques based on the method of mappings are discussed, and approximation schemes are developed based on a finite-element Galerkin approach. Theoretical convergence results for computational techniques are given, and the results are applied to the identification of two kinds of boundary shapes.
Zelinski, Adam C.; Goyal, Vivek K.; Adalsteinsson, Elfar
2010-01-01
A problem that arises in slice-selective magnetic resonance imaging (MRI) radio-frequency (RF) excitation pulse design is abstracted as a novel linear inverse problem with a simultaneous sparsity constraint. Multiple unknown signal vectors are to be determined, where each passes through a different system matrix and the results are added to yield a single observation vector. Given the matrices and lone observation, the objective is to find a simultaneously sparse set of unknown vectors that approximately solves the system. We refer to this as the multiple-system single-output (MSSO) simultaneous sparse approximation problem. This manuscript contrasts the MSSO problem with other simultaneous sparsity problems and conducts an initial exploration of algorithms with which to solve it. Greedy algorithms and techniques based on convex relaxation are derived and compared empirically. Experiments involve sparsity pattern recovery in noiseless and noisy settings and MRI RF pulse design. PMID:20445814
Two hybrid regularization frameworks for solving the electrocardiography inverse problem
NASA Astrophysics Data System (ADS)
Jiang, Mingfeng; Xia, Ling; Shou, Guofa; Liu, Feng; Crozier, Stuart
2008-09-01
In this paper, two hybrid regularization frameworks, LSQR-Tik and Tik-LSQR, which integrate the properties of the direct regularization method (Tikhonov) and the iterative regularization method (LSQR), have been proposed and investigated for solving ECG inverse problems. The LSQR-Tik method is based on the Lanczos process, which yields a sequence of small bidiagonal systems to approximate the original ill-posed problem, and then the Tikhonov regularization method is applied to stabilize the projected problem. The Tik-LSQR method is formulated as an iterative LSQR inverse, augmented with a Tikhonov-like prior information term. The performances of these two hybrid methods are evaluated using a realistic heart-torso model simulation protocol, in which the heart surface source method is employed to calculate the simulated epicardial potentials (EPs) from the action potentials (APs), and then the acquired EPs are used to calculate simulated body surface potentials (BSPs). The results show that the regularized solutions obtained by the LSQR-Tik method are approximate to those of the Tikhonov method; the computational cost of the LSQR-Tik method, however, is much less than that of the Tikhonov method. Moreover, the Tik-LSQR scheme can reconstruct the epicardial potential distribution more accurately, especially for BSPs with high noise levels. This investigation suggests that hybrid regularization methods may be more effective than separate regularization approaches for ECG inverse problems.
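The two ingredients being hybridized can be contrasted on a small linear system. The sketch below (kernel, noise level, and parameter values are illustrative assumptions; plain CGLS stands in for LSQR) compares a direct Tikhonov solve with an early-stopped Krylov iteration:

```python
import numpy as np

# Direct vs iterative regularization for an ill-posed system A x = b.
# Tikhonov: x = (A^T A + alpha I)^{-1} A^T b, with alpha the regularizer.
# CGLS (a Krylov method minimizing ||A x - b||, here a simple stand-in for
# LSQR): the iteration count plays the role of the regularization parameter.

rng = np.random.default_rng(0)
n = 80
s = np.linspace(0, 1, n)
A = (s[1] - s[0]) * np.exp(-np.abs(s[:, None] - s[None, :]) / 0.05)
x_true = np.maximum(0, 1 - np.abs(s - 0.5) / 0.2)     # triangular pulse
b = A @ x_true + 1e-4 * rng.standard_normal(n)

# direct Tikhonov regularization
alpha = 1e-6
x_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# early-stopped CGLS on the normal equations
x = np.zeros(n)
r = b - A @ x
g = A.T @ r
p = g.copy()
for _ in range(15):                     # early stopping = regularization
    Ap = A @ p
    a_step = (g @ g) / (Ap @ Ap)
    x += a_step * p
    r -= a_step * Ap
    g_new = A.T @ r
    p = g_new + ((g_new @ g_new) / (g @ g)) * p
    g = g_new
```

On large problems the iterative route avoids forming and factoring A^T A, which is the cost advantage the paper observes for its LSQR-Tik variant.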
On the Inverse Problem of Binocular 3D Motion Perception
Lages, Martin; Heron, Suzanne
2010-01-01
It is shown that existing processing schemes of 3D motion perception such as interocular velocity difference, changing disparity over time, as well as joint encoding of motion and disparity, do not offer a general solution to the inverse optics problem of local binocular 3D motion. Instead we suggest that local velocity constraints in combination with binocular disparity and other depth cues provide a more flexible framework for the solution of the inverse problem. In the context of the aperture problem we derive predictions from two plausible default strategies: (1) the vector normal prefers slow motion in 3D whereas (2) the cyclopean average is based on slow motion in 2D. Predicting perceived motion directions for ambiguous line motion provides an opportunity to distinguish between these strategies of 3D motion processing. Our theoretical results suggest that velocity constraints and disparity from feature tracking are needed to solve the inverse problem of 3D motion perception. It seems plausible that motion and disparity input is processed in parallel and integrated late in the visual processing hierarchy. PMID:21124957
Inverse problems for homogeneous transport equations: II. The multidimensional case
NASA Astrophysics Data System (ADS)
Bal, Guillaume
2000-08-01
A companion paper by Bal (Bal G 2000 Inverse Problems 16 997) and this paper are parts I and II of a series dealing with the reconstruction from boundary measurements of the scattering operator of homogeneous linear transport equations. This part II deals with the case of convex bounded domains in dimensions higher than one. We distinguish the analysis of smooth boundaries from that of boundaries with discontinuities such as corners. We propose a reconstruction in the case of degenerate symmetric scattering operators and show the well-posedness of the inverse problem. The proof of well-posedness is based on a decomposition of angular moments of the transport solution into unbounded and bounded components. This decomposition allows us to show the linear independence of a sufficiently large number of angular moments of the transport solution that are used to construct an invertible system for the scattering coefficients to be reconstructed.
Inverse problems in the modeling of vibrations of flexible beams
NASA Technical Reports Server (NTRS)
Banks, H. T.; Powers, R. K.; Rosen, I. G.
1987-01-01
The formulation and solution of inverse problems for the estimation of parameters which describe damping and other dynamic properties in distributed models for the vibration of flexible structures is considered. Motivated by a slewing beam experiment, the identification of a nonlinear velocity dependent term which models air drag damping in the Euler-Bernoulli equation is investigated. Galerkin techniques are used to generate finite dimensional approximations. Convergence estimates and numerical results are given. The modeling of, and related inverse problems for the dynamics of a high pressure hose line feeding a gas thruster actuator at the tip of a cantilevered beam are then considered. Approximation and convergence are discussed and numerical results involving experimental data are presented.
Functional error estimators for the adaptive discretization of inverse problems
NASA Astrophysics Data System (ADS)
Clason, Christian; Kaltenbacher, Barbara; Wachsmuth, Daniel
2016-10-01
So-called functional error estimators provide a valuable tool for reliably estimating the discretization error for a sum of two convex functions. We apply this concept to Tikhonov regularization for the solution of inverse problems for partial differential equations, not only for quadratic Hilbert space regularization terms but also for nonsmooth Banach space penalties. Examples include the measure-space norm (i.e., sparsity regularization) or the indicator function of an L∞ ball (i.e., Ivanov regularization). The error estimators can be written in terms of residuals in the optimality system that can then be estimated by conventional techniques, thus leading to explicit estimators. This is illustrated by means of an elliptic inverse source problem with the above-mentioned penalties, and numerical results are provided for the case of sparsity regularization.
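The Tikhonov regularization underlying these estimators can be sketched in a few lines. The following toy example (a generic Gaussian-blur forward operator and illustrative noise level, not from the paper) shows how the quadratic penalty stabilizes an otherwise noise-dominated least-squares inversion:

```python
import numpy as np

# Toy ill-posed problem: a severely ill-conditioned Gaussian blur operator A,
# a smooth true signal, and small additive noise (all values illustrative).
rng = np.random.default_rng(0)
n = 50
i = np.arange(n)
A = np.exp(-(i[:, None] - i[None, :]) ** 2 / (2 * 3.0 ** 2))
A /= A.sum(axis=1, keepdims=True)          # row-normalized blur
x_true = np.exp(-((i - n / 2) ** 2) / 50.0)
y = A @ x_true + 1e-3 * rng.standard_normal(n)

def tikhonov(A, y, alpha):
    """Minimizer of ||A x - y||^2 + alpha ||x||^2 (quadratic penalty)."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ y)

x_naive = np.linalg.lstsq(A, y, rcond=None)[0]  # noise-amplified solution
x_reg = tikhonov(A, y, 1e-3)                    # stabilized solution
```

Comparing the reconstruction errors of `x_naive` and `x_reg` against `x_true` makes the regularizing effect visible; the choice of alpha here is purely illustrative.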
NASA Astrophysics Data System (ADS)
Roul, Pradip
2016-04-01
The paper deals with a numerical technique for solving nonlinear singular boundary value problems arising in various physical models. First, we convert the original problem to an equivalent integral equation to surmount the singularity and afterward employ the boundary condition to compute the undetermined coefficient. Finally, the integral equation without the undetermined coefficient is treated using the homotopy perturbation method. The present method is implemented on three physical model examples: i) thermal explosions; ii) steady-state oxygen diffusion in a spherical shell; iii) the equilibrium of the isothermal gas sphere. The results obtained by the present method are compared with those obtained using the finite-difference method, the B-spline method and a numerical technique based on the direct integration method; the comparison reveals that the proposed method produces similar results with few solution components and is more computationally efficient than the others.
Diffuse interface methods for inverse problems: case study for an elliptic Cauchy problem
NASA Astrophysics Data System (ADS)
Burger, Martin; Løseth Elvetun, Ole; Schlottbom, Matthias
2015-12-01
Many inverse problems have to deal with complex, evolving and often not exactly known geometries, e.g. as domains of forward problems modeled by partial differential equations. This makes it desirable to use methods which are robust with respect to perturbed or not well resolved domains, and which allow for efficient discretizations not resolving any fine detail of those geometries. For forward problems in partial differential equations, methods based on diffuse interface representations have gained strong attention in recent years, but so far they have not been considered systematically for inverse problems. In this work we introduce a diffuse domain method as a tool for the solution of variational inverse problems. As a particular example we study ECG inversion in further detail. ECG inversion is a linear inverse source problem with boundary measurements governed by an anisotropic diffusion equation, which naturally calls for solutions under changing geometries, namely the beating heart. We formulate a regularization strategy using Tikhonov regularization and, using standard source conditions, we prove convergence rates. A special property of our approach is that not only are operator perturbations introduced by the diffuse domain method but, more importantly, we have to deal with topologies which depend on a parameter ε in the diffuse domain method, i.e. we have to deal with ε-dependent forward operators and ε-dependent norms. In particular the appropriate function spaces for the unknown and the data depend on ε. This prevents the application of some standard convergence techniques for inverse problems; in particular, interpreting the perturbations as data errors in the original problem does not yield suitable results. We consequently develop a novel approach based on saddle-point problems. The numerical solution of the problem is discussed as well and results for several computational experiments are reported. In
Combined approach to the inverse protein folding problem. Final report
Ruben A. Abagyan
2000-06-01
The main scientific contribution of the project ''Combined approach to the inverse protein folding problem'', submitted in 1996 and funded by the Department of Energy in 1997, is the formulation and development of the idea of the multilink recognition method for identification of functional and structural homologues of newly discovered genes. This idea became very popular after it was first announced and used in predicting the threading targets for the CASP2 competition (Critical Assessment of Structure Prediction).
Asynchronous global optimization techniques for medium and large inversion problems
Pereyra, V.; Koshy, M.; Meza, J.C.
1995-04-01
We discuss global optimization procedures adequate for seismic inversion problems. We explain how to save function evaluations (which may involve large scale ray tracing or other expensive operations) by creating a data base of information on what parts of parameter space have already been inspected. It is also shown how a correct parallel implementation using PVM speeds up the process almost linearly with respect to the number of processors, provided that the function evaluations are expensive enough to offset the communication overhead.
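The evaluation-saving idea described here amounts to keeping a database of already-inspected parameter points, so the optimizer never repeats an expensive forward simulation. A minimal sketch (the rounding tolerance and stand-in misfit are illustrative, not from the paper):

```python
class CachedObjective:
    """Cache expensive objective evaluations on a rounded-parameter grid,
    so a global optimizer never repeats (nearly) identical forward runs."""

    def __init__(self, fn, decimals=6):
        self.fn, self.decimals = fn, decimals
        self.db = {}    # the "database" of inspected parameter points
        self.calls = 0  # count of true (expensive) evaluations

    def __call__(self, x):
        key = tuple(round(v, self.decimals) for v in x)
        if key not in self.db:
            self.calls += 1
            self.db[key] = self.fn(x)
        return self.db[key]

def misfit(x):  # stand-in for an expensive ray-tracing misfit function
    return sum((v - 1.0) ** 2 for v in x)

f = CachedObjective(misfit)
for _ in range(3):
    f([0.5, 2.0])   # repeated query: only the first triggers a real evaluation
```

In a parallel (e.g. PVM-style) setting, each worker would consult a shared database of this kind before launching a forward run.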
Introduction to the 30th volume of Inverse Problems
NASA Astrophysics Data System (ADS)
Louis, Alfred K.
2014-01-01
The field of inverse problems is a fast-developing domain of research originating from the practical demand of finding the cause when a result is observed. The woodpecker, searching for insects, probes a tree using sound waves: the information sought is whether there is an insect or not, hence a 0-1 decision. When the result has to contain more information, ad hoc solutions are not at hand and more sophisticated methods have to be developed. Right from its first appearance, the field of inverse problems has been characterized by an interdisciplinary nature: the interpretation of measured data, reinforced by mathematical models that address the questions of observability, stability and resolution, and the development of efficient, stable and accurate algorithms to gain as much information as possible from the input and to feed back into the question of optimal measurement configuration. As is typical for a new area of research, facets of it are separated and studied independently. Hence, fields such as the theory of inverse scattering, tomography in general and regularization methods have developed. However, all aspects have to be reassembled to arrive at the best possible solution to the problem at hand. This development is reflected by the first and still leading journal in the field, Inverse Problems. Founded by pioneers Roy Pike from London and Pierre Sabatier from Montpellier, who enjoyably describes the journal's nascence in his book Rêves et Combats d'un Enseignant-Chercheur, Retour Inverse [1], the journal has developed successfully over the last few decades. Neither the Editors-in-Chief, formerly called Honorary Editors, nor the board or authors could have set the path to success alone. Their fruitful interplay, complemented by the efficient and highly competent publishing team at IOP Publishing, has been fundamental. As such it is my honor and pleasure to follow my renowned colleagues Pierre Sabatier, Mario Bertero, Frank Natterer, Alberto Grünbaum and
Convergence properties of a quadratic approach to the inverse-scattering problem
NASA Astrophysics Data System (ADS)
Persico, Raffaele; Soldovieri, Francesco; Pierri, Rocco
2002-12-01
The local-minima question that arises in the framework of a quadratic approach to inverse-scattering problems is investigated. In particular, a sufficient condition for the absence of local minima is given, and some guidelines to ensure the reliability of the algorithm are outlined for the case of data not belonging to the range of the relevant quadratic operator. This is relevant also when an iterated solution procedure based on a quadratic approximation of the electromagnetic scattering at each step is considered.
NASA Astrophysics Data System (ADS)
Ivanyshyn Yaman, Olha; Le Louër, Frédérique
2016-09-01
This paper deals with the material derivative analysis of the boundary integral operators arising from the scattering theory of time-harmonic electromagnetic waves and its application to inverse problems. We present new results using the Piola transform of the boundary parametrisation to transport the integral operators on a fixed reference boundary. The transported integral operators are infinitely differentiable with respect to the parametrisations, and simplified expressions of the material derivatives are obtained. Using these results, we extend a nonlinear integral equations approach developed for solving acoustic inverse obstacle scattering problems to electromagnetism. The inverse problem is formulated as a pair of nonlinear and ill-posed integral equations for the unknown boundary, representing the boundary condition and the measurements, for which the iteratively regularized Gauss-Newton method can be applied. The algorithm has the interesting feature that it avoids the repeated numerical solution of boundary value problems at each iteration step. Numerical experiments are presented in the special case of star-shaped obstacles.
NASA Astrophysics Data System (ADS)
Miller, Eric L.; Willsky, Alan S.
1996-01-01
In this paper, we present an approach to the nonlinear inverse scattering problem using the extended Born approximation (EBA) on the basis of methods from the fields of multiscale and statistical signal processing. By posing the problem directly in the wavelet transform domain, regularization is provided through the use of a multiscale prior statistical model. Using the maximum a posteriori (MAP) framework, we introduce the relative Cramér-Rao bound (RCRB) as a tool for analyzing the level of detail in a reconstruction supported by a data set as a function of the physics, the source-receiver geometry, and the nature of our prior information. The MAP estimate is determined using a novel implementation of the Levenberg-Marquardt algorithm in which the RCRB is used to achieve a substantial reduction in the effective dimensionality of the inversion problem with minimal degradation in performance. Additional reduction in complexity is achieved by taking advantage of the sparse structure of the matrices defining the EBA in scale space. An inverse electrical conductivity problem arising in geophysical prospecting applications provides the vehicle for demonstrating the analysis and algorithmic techniques developed in this paper.
Analysis and solution of the ill-posed inverse heat conduction problem
Weber, C.F.
1981-01-01
The inverse conduction problem arises when experimental measurements are taken in the interior of a body, and it is desired to calculate temperature and heat flux values on the surface. The problem is shown to be ill-posed, as the solution exhibits unstable dependence on the given data functions. A special solution procedure is developed for the one-dimensional case which replaces the heat conduction equation with an approximating hyperbolic equation. If viewed from a new perspective, where the roles of the spatial and time variables are interchanged, then an initial value problem for the damped wave equation is obtained. Since this formulation is well-posed, both analytic and numerical solution procedures are readily available. Sample calculations confirm that this approach produces consistent, reliable results for both linear and nonlinear problems.
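The instability mentioned here can be seen in a back-of-the-envelope calculation: for a periodic surface signal, the interior measurement is exponentially damped with depth, so recovering the surface value amplifies measurement noise by the reciprocal factor. This is a standard heat-conduction estimate with illustrative numbers, not taken from the paper:

```python
import math

def amplification(depth, omega, alpha):
    """Noise amplification when inverting an interior temperature measurement
    back to the surface, for a periodic signal of angular frequency omega in a
    medium of thermal diffusivity alpha. The interior signal at the given depth
    is damped by exp(-depth * sqrt(omega / (2 alpha))); inversion multiplies
    measurement noise by the reciprocal of that factor."""
    return math.exp(depth * math.sqrt(omega / (2.0 * alpha)))

# Illustrative values: 1 cm sensor depth, diffusivity 1e-5 m^2/s.
a_low = amplification(0.01, 1e2, 1e-5)   # slow surface oscillation
a_high = amplification(0.01, 1e4, 1e-5)  # fast surface oscillation
```

High-frequency components of the data are amplified far more than low-frequency ones, which is exactly the unstable dependence on the data that the abstract describes.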
The inverse problems of wing panel manufacture processes
NASA Astrophysics Data System (ADS)
Oleinikov, A. I.; Bormotin, K. S.
2013-12-01
It is shown that inverse problems of steady-state creep bending of plates in both the geometrically linear and nonlinear formulations can be represented in a variational formulation. Steady-state values of the obtained functionals corresponding to the solutions of the problems of inelastic deformation and springback are determined by applying a finite element procedure to the functionals. Optimal laws of creep deformation are formulated using the criterion of minimizing damage in the functionals of the inverse problems. The formulated problems are reduced to problems solved by the finite element method using MSC.Marc software. Currently, forming of light metals poses tremendous challenges due to their low ductility at room temperature and their unusual deformation characteristics at hot-cold work: strong asymmetry between tensile and compressive behavior, and a very pronounced anisotropy. We used constitutive models of steady-state creep for initially transversely isotropic structural materials whose response depends on the kind of stress state. The paper gives the basics of the developed computer-aided system of design, modeling, and electronic simulation targeting the processes of manufacture of wing integral panels. The modeling results can be used to calculate the die tooling, determine the panel processibility, and control panel rejection in the course of forming.
Inverse problems with Poisson data: statistical regularization theory, applications and algorithms
NASA Astrophysics Data System (ADS)
Hohage, Thorsten; Werner, Frank
2016-09-01
Inverse problems with Poisson data arise in many photonic imaging modalities in medicine, engineering and astronomy. The design of regularization methods and estimators for such problems has been studied intensively over the last two decades. In this review we give an overview of statistical regularization theory for such problems, the most important applications, and the most widely used algorithms. The focus is on variational regularization methods in the form of penalized maximum likelihood estimators, which can be analyzed in a general setup. Complementing a number of recent convergence rate results we will establish consistency results. Moreover, we discuss estimators based on a wavelet-vaguelette decomposition of the (necessarily linear) forward operator. As most prominent applications we briefly introduce Positron emission tomography, inverse problems in fluorescence microscopy, and phase retrieval problems. The computation of a penalized maximum likelihood estimator involves the solution of a (typically convex) minimization problem. We also review several efficient algorithms which have been proposed for such problems over the last five years.
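The penalized maximum likelihood estimators discussed in this review generalize the classical unpenalized Poisson ML problem, whose best-known algorithm is the Richardson-Lucy (EM) iteration. A minimal sketch on an assumed toy system matrix (illustrative only, not a specific estimator from the review):

```python
import numpy as np

def richardson_lucy(A, y, n_iter=1000):
    """EM iteration for unpenalized Poisson maximum likelihood:
    x <- (x / A^T 1) * A^T (y / (A x)).
    Assumes A has nonnegative entries with positive column sums and y >= 0."""
    x = np.ones(A.shape[1])
    col_sums = A.sum(axis=0)
    for _ in range(n_iter):
        x = x / col_sums * (A.T @ (y / (A @ x)))
    return x

# Toy example: a well-conditioned nonnegative system with noiseless data.
A = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
x_true = np.array([1.0, 2.0, 3.0])
y = A @ x_true
x_rec = richardson_lucy(A, y)
```

The iteration preserves nonnegativity and monotonically increases the Poisson likelihood; variational estimators add a penalty term to this likelihood and are computed by the convex solvers the review surveys.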
Inference in infinite-dimensional inverse problems - Discretization and duality
NASA Technical Reports Server (NTRS)
Stark, Philip B.
1992-01-01
Many techniques for solving inverse problems involve approximating the unknown model, a function, by a finite-dimensional 'discretization' or parametric representation. The uncertainty in the computed solution is sometimes taken to be the uncertainty within the parametrization; this can result in unwarranted confidence. The theory of conjugate duality can overcome the limitations of discretization within the 'strict bounds' formalism, a technique for constructing confidence intervals for functionals of the unknown model incorporating certain types of prior information. The usual computational approach to strict bounds approximates the 'primal' problem in a way that the resulting confidence intervals are at most long enough to have the nominal coverage probability. There is another approach based on 'dual' optimization problems that gives confidence intervals with at least the nominal coverage probability. The pair of intervals derived by the two approaches bracket a correct confidence interval. The theory is illustrated with gravimetric, seismic, geomagnetic, and helioseismic problems and a numerical example in seismology.
Application of the hybrid method to inverse heat conduction problems
NASA Astrophysics Data System (ADS)
Chen, Han-Taw; Chang, Shiuh-Ming
1990-04-01
The hybrid method, involving the combined use of the Laplace transform and the finite element method (FEM), is considerably powerful for solving one-dimensional linear heat conduction problems. In the present method, the time-dependent terms are removed from the problem using the Laplace transform, and the FEM is then applied to the space domain. The transformed temperature is inverted numerically to obtain the result in the physical domain. The estimation of the surface heat flux or temperature from transient measured temperatures inside the solid agrees well with the analytical solution of the direct problem, without Beck's sensitivity analysis or a least-squares criterion. Because no time stepping is involved, the present method can calculate the surface conditions of an inverse problem directly, without step-by-step computation in the time domain until the specific time is reached.
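The key numerical ingredient of such hybrid schemes is the inversion of the Laplace-domain solution back to the time domain. A common choice is the Gaver-Stehfest algorithm, sketched below and checked on F(s) = 1/(s+1), whose inverse is e^(-t) (a generic sketch; the paper does not specify which inversion algorithm it uses):

```python
import math

def stehfest_weights(N):
    """Gaver-Stehfest weights V_1..V_N (N must be even)."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) by Gaver-Stehfest:
    f(t) ~ (ln 2 / t) * sum_k V_k F(k ln 2 / t)."""
    ln2 = math.log(2.0)
    V = stehfest_weights(N)
    return (ln2 / t) * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))
```

The method evaluates the transform only at real points, which fits the FEM-in-space setting; it works well for smooth, non-oscillatory time responses such as heat conduction transients.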
Inverse problem of pulsed eddy current field of ferromagnetic plates
NASA Astrophysics Data System (ADS)
Chen, Xing-Le; Lei, Yin-Zhao
2015-03-01
To determine the wall thickness, conductivity and permeability of a ferromagnetic plate, an inverse problem is established with measured and calculated values of the time-domain induced voltage in pulsed eddy current testing on the plate. From time-domain analytical expressions of the partial derivatives of the induced voltage with respect to the parameters, it is deduced that the partial derivatives are approximately linearly dependent. The constraints on these parameters are then obtained by solving a partial linear differential equation. It is indicated that only the product of conductivity and wall thickness, and the product of relative permeability and wall thickness, can be determined accurately through the inverse problem with time-domain induced voltage. In practical testing, supposing the conductivity of the ferromagnetic plate under test is a fixed value, the relative variation of wall thickness between two testing points can be calculated via the ratio of the corresponding inversion results for the product of conductivity and wall thickness. Finally, this method for wall thickness measurement is verified by experimental results from a carbon steel plate. Project supported by the National Defense Basic Technology Research Program of China (Grant No. Z132013T001).
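The thickness-ratio step above is simple arithmetic once the products have been inverted. A sketch with hypothetical inverted values (the numbers are purely illustrative, not from the experiment):

```python
# If only the products (sigma * d) and (mu_r * d) are identifiable, and the
# conductivity sigma is assumed uniform across the plate, the relative wall
# thickness between two test points follows from the ratio of the inverted
# products. Values below are hypothetical, for illustration only.
sigma_d_point1 = 35.0e6   # inverted (sigma * d) at reference point, S
sigma_d_point2 = 31.5e6   # inverted (sigma * d) at inspection point, S

# Since sigma cancels in the ratio, this is d2 / d1 directly.
thickness_ratio = sigma_d_point2 / sigma_d_point1
```

Here the ratio of 0.9 would indicate a 10% wall loss at the inspection point relative to the reference point.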
Inverse problem for in vivo NMR spatial localization
Hasenfeld, A.C.
1985-11-01
The basic physical problem of NMR spatial localization is considered. To study diseased sites, one must solve the problem of adequately localizing the NMR signal. We formulate this as an inverse problem. As the NMR Bloch equations determine the motion of nuclear spins in applied magnetic fields, a theoretical study is undertaken to answer the question of how to design magnetic field configurations to achieve these localized excited spin populations. Because of physical constraints in the production of the relevant radiofrequency fields, the problem factors into a temporal one and a spatial one. We formulate the temporal problem as a nonlinear transformation, called the Bloch Transform, from the rf input to the magnetization response. In trying to invert this transformation, both linear (for the Fourier Transform) and nonlinear (for the Bloch Transform) modes of radiofrequency excitation are constructed. The spatial problem is essentially a statics problem for the Maxwell equations of electromagnetism, as the wavelengths of the radiation considered are on the order of ten meters, and so propagation effects are negligible. In the general case, analytic solutions are unavailable, and so the methods of computer simulation are used to map the rf field spatial profiles. Numerical experiments are also performed to verify the theoretical analysis, and experimental confirmation of the theory is carried out on the 0.5 Tesla IBM/Oxford Imaging Spectrometer at the LBL NMR Medical Imaging Facility. While no explicit inverse is constructed to "solve" this problem, the combined theoretical/numerical analysis is validated experimentally, justifying the approximations made. 56 refs., 31 figs.
Inverse problems in heterogeneous and fractured media using peridynamics
Turner, Daniel Z.; van Bloemen Waanders, Bart G.; Parks, Michael L.
2015-12-10
The following work presents an adjoint-based methodology for solving inverse problems in heterogeneous and fractured media using state-based peridynamics. We show that the inner product involving the peridynamic operators is self-adjoint. The proposed method is illustrated for several numerical examples with constant and spatially varying material parameters as well as in the context of fractures. We also present a framework for obtaining material parameters by integrating digital image correlation (DIC) with inverse analysis. This framework is demonstrated by evaluating the bulk and shear moduli for a sample of nuclear graphite using digital photographs taken during the experiment. The resulting measured values correspond well with other results reported in the literature. Lastly, we show that this framework can be used to determine the load state given observed measurements of a crack opening. Furthermore, this type of analysis has many applications in characterizing subsurface stress-state conditions given fracture patterns in cores of geologic material.
Principal Component Geostatistical Approach for large-dimensional inverse problems
Kitanidis, P K; Lee, J
2014-01-01
The quasi-linear geostatistical approach is designed for weakly nonlinear underdetermined inverse problems, such as hydraulic tomography and electrical resistivity tomography. It provides best estimates as well as measures for uncertainty quantification. However, for its textbook implementation, the approach involves iterations, to reach an optimum, and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for the determination of the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, are high. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m²n, though there are methods to reduce it. In this work, we present an implementation that utilizes a matrix-free (in terms of the Jacobian) Gauss-Newton method and improves the scalability of the geostatistical inverse problem. For each iteration, it is required to perform K runs of the forward problem, where K is not just much smaller than m but can be smaller than n. The computational and storage cost of the inverse procedure scales roughly linearly with m instead of m² as in the textbook approach. For problems of very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best. PMID:25558113
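The matrix-free idea can be illustrated with a directional finite difference: the action of the Jacobian on a vector is obtained from one extra forward run, without ever assembling the Jacobian. This is a generic sketch of the principle (toy forward model, assumed step size), not the paper's PCGA implementation:

```python
import numpy as np

def jvp(forward, x, v, eps=1e-6):
    """Matrix-free Jacobian-vector product: approximate J(x) v by a
    directional finite difference of the forward model, costing one
    additional forward evaluation instead of a full Jacobian assembly."""
    return (forward(x + eps * v) - forward(x)) / eps

# Toy forward model with known Jacobian J = [[2*x0, 0], [x1, x0]].
forward = lambda x: np.array([x[0] ** 2, x[0] * x[1]])
x = np.array([2.0, 3.0])
v = np.array([1.0, 0.0])
out = jvp(forward, x, v)   # analytically J v = [4, 3]
```

A Gauss-Newton iteration built on such products needs only K forward runs per iteration, which is the source of the roughly linear scaling in m described above.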
NASA Technical Reports Server (NTRS)
Fymat, A. L.
1976-01-01
The paper studies the inversion of the radiative transfer equation describing the interaction of electromagnetic radiation with atmospheric aerosols. The interaction can be considered as the propagation in the aerosol medium of two light beams: the direct beam in the line-of-sight attenuated by absorption and scattering, and the diffuse beam arising from scattering into the viewing direction, which propagates more or less in random fashion. The latter beam has single scattering and multiple scattering contributions. In the former case and for single scattering, the problem is reducible to first-kind Fredholm equations, while for multiple scattering it is necessary to invert partial integrodifferential equations. A nonlinear minimization search method, applicable to the solution of both types of problems has been developed, and is applied here to the problem of monitoring aerosol pollution, namely the complex refractive index and size distribution of aerosol particles.
Reconstructing Images in Astrophysics, an Inverse Problem Point of View
NASA Astrophysics Data System (ADS)
Theys, Céline; Aime, Claude
2016-04-01
After a short introduction, a first section provides a brief tutorial on the physics of image formation and its detection in the presence of noise. The rest of the chapter focuses on the resolution of the inverse problem.
Point source reconstruction principle of linear inverse problems
NASA Astrophysics Data System (ADS)
Terazono, Yasushi; Fujimaki, Norio; Murata, Tsutomu; Matani, Ayumu
2010-11-01
Exact point source reconstruction for underdetermined linear inverse problems with a block-wise structure was studied. In a block-wise problem, elements of a source vector are partitioned into blocks. Accordingly, a leadfield matrix, which represents the forward observation process, is also partitioned into blocks. A point source is a source having only one nonzero block. An example of such a problem is current distribution estimation in electroencephalography and magnetoencephalography, where a source vector represents a vector field and a point source represents a single current dipole. In this study, the block-wise norm, a block-wise extension of the ℓp-norm, was defined as the family of cost functions of the inverse method. The main result is that a set of three conditions was found to be necessary and sufficient for block-wise norm minimization to ensure exact point source reconstruction for any leadfield matrix that admits such reconstruction. The block-wise norm that satisfies the conditions is the sum of the cost of all the observations of source blocks, or in other words, the block-wisely extended leadfield-weighted ℓ1-norm. Additional results are that minimization of such a norm always provides block-wisely sparse solutions and that its solutions form cones in source space.
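Block-wise ℓ1 minimization of this kind is, in modern terms, a group-lasso problem. Its computational core is the block soft-thresholding operator, sketched here inside a proximal-gradient (ISTA) loop on an assumed toy problem (an illustration of the general technique, not the paper's method):

```python
import numpy as np

def block_soft_threshold(v, t):
    """Proximal operator of t * ||.||_2 on one block: shrink the block norm."""
    nrm = np.linalg.norm(v)
    return np.zeros_like(v) if nrm <= t else (1.0 - t / nrm) * v

def group_ista(A, y, blocks, lam, n_iter=2000):
    """Minimize 0.5 ||A x - y||^2 + lam * sum_b ||x_b||_2 by proximal gradient."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))   # gradient step on the data term
        for b in blocks:
            x[b] = block_soft_threshold(z[b], step * lam)
    return x

# Toy underdetermined problem: 8 observations, 4 blocks of 3 sources,
# a single nonzero block (a "point source" in the abstract's sense).
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 12))
blocks = [slice(3 * b, 3 * (b + 1)) for b in range(4)]
x_true = np.zeros(12)
x_true[3:6] = [1.0, -0.5, 0.8]
y = A @ x_true
x_hat = group_ista(A, y, blocks, lam=0.01)
```

Minimizers of this cost are block-wise sparse, which is the behavior the abstract's sufficient conditions characterize exactly.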
Inverse problem of quadratic time-dependent Hamiltonians
NASA Astrophysics Data System (ADS)
Guo, Guang-Jie; Meng, Yan; Chang, Hong; Duan, Hui-Zeng; Di, Bing
2015-08-01
Using an algebraic approach, it is possible to obtain the temporal evolution wave function for a Gaussian wave-packet obeying the quadratic time-dependent Hamiltonian (QTDH). However, in general, most of the practical cases are not exactly solvable, for we need general solutions of the Riccati equations, which are not generally known. We therefore bypass directly solving for the temporal evolution wave function, and study its inverse problem. We start with a particular evolution of the wave-packet, and get the required Hamiltonian by using the inverse method. The inverse approach opens up a new way to find new exact solutions to the QTDH. Some typical examples are studied in detail. For a specific time-dependent periodic harmonic oscillator, the Berry phase is obtained exactly. Project supported by the National Natural Science Foundation of China (Grant No. 11347171), the Natural Science Foundation of Hebei Province of China (Grant No. A2012108003), and the Key Project of Educational Commission of Hebei Province of China (Grant No. ZD2014052).
A spatiotemporal dynamic distributed solution to the MEG inverse problem
Lamus, Camilo; Hämäläinen, Matti S.; Temereanca, Simona; Brown, Emery N.; Purdon, Patrick L.
2012-01-01
MEG/EEG are non-invasive imaging techniques that record brain activity with high temporal resolution. However, estimation of brain source currents from surface recordings requires solving an ill-conditioned inverse problem. Converging lines of evidence in neuroscience, from neuronal network models to resting-state imaging and neurophysiology, suggest that cortical activation is a distributed spatiotemporal dynamic process, supported by both local and long-distance neuroanatomic connections. Because spatiotemporal dynamics of this kind are central to brain physiology, inverse solutions could be improved by incorporating models of these dynamics. In this article, we present a model for cortical activity based on nearest-neighbor autoregression that incorporates local spatiotemporal interactions between distributed sources in a manner consistent with neurophysiology and neuroanatomy. We develop a dynamic Maximum a Posteriori Expectation-Maximization (dMAP-EM) source localization algorithm for estimation of cortical sources and model parameters based on the Kalman Filter, the Fixed Interval Smoother, and the EM algorithms. We apply the dMAP-EM algorithm to simulated experiments as well as to human experimental data. Furthermore, we derive expressions to relate our dynamic estimation formulas to those of standard static models, and show how dynamic methods optimally assimilate past and future data. Our results establish the feasibility of spatiotemporal dynamic estimation in large-scale distributed source spaces with several thousand source locations and hundreds of sensors, with resulting inverse solutions that provide substantial performance improvements over static methods. PMID:22155043
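The Kalman filter at the heart of the dMAP-EM algorithm can be illustrated in its simplest scalar form. The sketch below (a generic textbook filter with assumed model parameters, not the paper's distributed-source implementation) shows the predict/update cycle and the shrinking posterior variance:

```python
def kalman_1d(ys, a=0.9, q=0.1, r=0.5, m0=0.0, p0=1.0):
    """Minimal scalar Kalman filter for the model
    x_k = a * x_{k-1} + w_k (var q), y_k = x_k + v_k (var r).
    Returns the filtered means and variances."""
    m, p = m0, p0
    ms, ps = [], []
    for y in ys:
        # predict step: propagate mean and variance through the dynamics
        m, p = a * m, a * a * p + q
        # update step: assimilate the observation with the Kalman gain
        k = p / (p + r)
        m = m + k * (y - m)
        p = (1.0 - k) * p
        ms.append(m)
        ps.append(p)
    return ms, ps

# Feed a constant observation: the estimate settles and variance contracts.
ms, ps = kalman_1d([1.0] * 10)
```

The dMAP-EM algorithm applies this same predict/update logic to a high-dimensional spatiotemporal state, combined with smoothing and EM parameter estimation.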
Negative Compressibility and Inverse Problem for Spinning Gas
Vasily Geyko and Nathaniel J. Fisch
2013-01-11
A spinning ideal gas in a cylinder with a smooth surface is shown to have unusual properties. First, under compression parallel to the axis of rotation, the spinning gas exhibits negative compressibility because energy can be stored in the rotation. Second, the spinning breaks the symmetry under which partial pressures of a mixture of gases simply add proportional to the constituent number densities. Thus, remarkably, in a mixture of spinning gases, an inverse problem can be formulated such that the gas constituents can be determined through external measurements only.
Rank-one inverse scattering problem: Reformulation and analytic solutions
NASA Astrophysics Data System (ADS)
Hartt, K.
1984-03-01
Using the K-matrix formalism, we give a simplified reformulation of the S-wave rank-one inverse scattering problem. The resulting Cauchy integral equation, obtained differently by Gourdin and Martin in their first paper, is tailored to rational representations of F(k) = k cot δ₀. Use of such F(k) permits a simple but general solution without integration, giving analytic form factors having a pole structure like the S matrix that are reducible to rational expressions using Padé approximants. Finally, we show that a bound state pole condition is necessary and makes the form factor unique.
SUSY at the ILC and Solving the LHC Inverse Problem
Gainer, James S. (SLAC)
2008-05-28
Recently a large scale study of points in the MSSM parameter space which are problematic at the Large Hadron Collider (LHC) has been performed. This work was carried out in part to determine whether the proposed International Linear Collider (ILC) could be used to solve the LHC inverse problem. The results suggest that while the ILC will be a valuable tool, an energy upgrade may be crucial to its success, and that, in general, precision studies of the MSSM are more difficult at the ILC than has generally been believed.
Introduction to the 30th volume of Inverse Problems
NASA Astrophysics Data System (ADS)
Louis, Alfred K.
2014-01-01
The field of inverse problems is a fast-developing domain of research originating from the practical demand of finding the cause when a result is observed. The woodpecker, searching for insects, probes a tree using sound waves: the information sought is whether there is an insect or not, hence a 0-1 decision. When the result has to contain more information, ad hoc solutions are not at hand and more sophisticated methods have to be developed. Right from its first appearance, the field of inverse problems has been characterized by an interdisciplinary nature: the interpretation of measured data, reinforced by mathematical models that address the questions of observability, stability and resolution; the development of efficient, stable and accurate algorithms to gain as much information as possible from the input; and feedback into the questions of optimal measurement configuration. As is typical for a new area of research, facets of it are separated and studied independently. Hence, fields such as the theory of inverse scattering, tomography in general and regularization methods have developed. However, all aspects have to be reassembled to arrive at the best possible solution to the problem at hand. This development is reflected by the first and still leading journal in the field, Inverse Problems. Founded by the pioneers Roy Pike from London and Pierre Sabatier from Montpellier, who enjoyably describes the journal's nascence in his book Rêves et Combats d'un Enseignant-Chercheur, Retour Inverse [1], the journal has developed successfully over the last few decades. Neither the Editors-in-Chief, formerly called Honorary Editors, nor the board or authors could have set the path to success alone. Their fruitful interplay, complemented by the efficient and highly competent publishing team at IOP Publishing, has been fundamental. As such it is my honor and pleasure to follow my renowned colleagues Pierre Sabatier, Mario Bertero, Frank Natterer, Alberto Grünbaum and
Topological inversion for solution of geodesy-constrained geophysical problems
NASA Astrophysics Data System (ADS)
Saltogianni, Vasso; Stiros, Stathis
2015-04-01
Geodetic data, mostly GPS observations, make it possible to measure displacements of selected points around activated faults and volcanoes and, on the basis of geophysical models, to model the underlying physical processes. This requires inversion of redundant systems of highly non-linear equations with >3 unknowns, a situation analogous to the adjustment of geodetic networks. However, in geophysical problems inversion cannot be based on conventional least-squares techniques; it relies instead on numerical inversion techniques (a priori fixing of some variables, optimization in steps with the values of two variables regarded as fixed at each step, random search in the vicinity of approximate solutions). Still, these techniques lead to solutions trapped in local minima, to correlated estimates and to solutions with poor error control (usually sampling-based approaches). To overcome these problems, a numerical-topological, grid-search based technique in the R^N space is proposed (N being the number of unknown variables). This technique is in fact a generalization and refinement of techniques used in lighthouse positioning and in some cases of low-accuracy 2-D positioning using Wi-Fi, etc. The basic concept is to assume discrete possible ranges of each variable, and from these ranges to define a grid G in the R^N space, with some of the gridpoints approximating the true solutions of the system. Each point of the hyper-grid G is then tested for whether it satisfies the observations, given their uncertainty level, and successful gridpoints define a sub-space of G containing the true solutions. The optimal (minimal) space containing one or more solutions is obtained using a trial-and-error approach and a single optimization factor. From this essentially deterministic identification of the set of gridpoints satisfying the system of equations, at a following step a stochastic optimal solution is computed, corresponding to the center of gravity of this set of gridpoints. This solution corresponds to a
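The grid-search idea described in the abstract can be made concrete on a toy two-unknown problem. This is an illustrative sketch under assumed names and tolerances, not the authors' implementation: three redundant nonlinear "observations" are tested against every gridpoint, and the accepted set's center of gravity is taken as the estimate.

```python
import numpy as np

# Toy inverse problem: recover two unknowns from three redundant nonlinear
# observations. The forward model and tolerance are hypothetical.
true = np.array([1.2, -0.7])

def forward(p):
    x, y = p
    return np.array([x + 2 * y, x - y, x * y])

obs = forward(true)                    # noise-free "observations"
tol = 0.05                             # assumed observation uncertainty

# Define a grid G in R^2 from discrete possible ranges of each variable.
xs = np.linspace(-2, 2, 161)
ys = np.linspace(-2, 2, 161)
accepted = []
for x in xs:
    for y in ys:
        # Keep gridpoints consistent with every observation within tolerance.
        if np.all(np.abs(forward((x, y)) - obs) <= tol):
            accepted.append((x, y))

accepted = np.array(accepted)
# Stochastic optimal solution: center of gravity of the accepted set.
estimate = accepted.mean(axis=0)
```

Because every gridpoint consistent with all the observations is retained, the accepted set directly exhibits solution multiplicity and the error region, rather than a single local minimum.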
Inverse Problems in Complex Models and Applications to Earth Sciences
NASA Astrophysics Data System (ADS)
Bosch, M. E.
2015-12-01
The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which are graphs that map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied
Inverse spin glass and related maximum entropy problems.
Castellana, Michele; Bialek, William
2014-09-12
If we have a system of binary variables and we measure the pairwise correlations among these variables, then the least structured or maximum entropy model for their joint distribution is an Ising model with pairwise interactions among the spins. Here we consider inhomogeneous systems in which we constrain, for example, not the full matrix of correlations, but only the distribution from which these correlations are drawn. In this sense, what we have constructed is an inverse spin glass: rather than choosing coupling constants at random from a distribution and calculating correlations, we choose the correlations from a distribution and infer the coupling constants. We argue that such models generate a block structure in the space of couplings, which provides an explicit solution of the inverse problem. This allows us to generate a phase diagram in the space of (measurable) moments of the distribution of correlations. We expect that these ideas will be most useful in building models for systems that are nonequilibrium statistical mechanics problems, such as networks of real neurons. PMID:25260004
Using Inverse Problem Methods with Surveillance Data in Pneumococcal Vaccination
Sutton, Karyn L.; Banks, H. T.; Castillo-Chavez, Carlos
2010-01-01
The design and evaluation of epidemiological control strategies is central to public health policy. While inverse problem methods are routinely used in many applications, this remains an area in which their use is relatively rare, although their potential impact is great. We describe methods particularly relevant to epidemiological modeling at the population level. These methods are then applied to the study of pneumococcal vaccination strategies as a relevant example which poses many challenges common to other infectious diseases. We demonstrate that relevant yet typically unknown parameters may be estimated, and show that a calibrated model may be used to assess implemented vaccine policies through the estimation of parameters if vaccine history is recorded along with infection and colonization information. Finally, we show how one might determine an appropriate level of refinement or aggregation in the age-structured model given age-stratified observations. These results illustrate ways in which the collection and analysis of surveillance data can be improved using inverse problem methods. PMID:20209093
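As a minimal illustration of the inverse-problem methodology (not the paper's age-structured pneumococcal model), one can estimate a transmission parameter by a least-squares fit of a toy SIR model to synthetic prevalence data; the model, parameter values, and data below are assumptions for the sketch.

```python
import numpy as np

# Toy SIR model integrated by forward Euler; beta is the unknown parameter.
def simulate(beta, gamma=0.2, days=60, steps_per_day=10):
    dt = 1.0 / steps_per_day
    s, i = 0.99, 0.01                  # susceptible and infectious fractions
    out = []
    for step in range(days * steps_per_day):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s += dt * ds
        i += dt * di
        if (step + 1) % steps_per_day == 0:
            out.append(i)              # daily infectious prevalence
    return np.array(out)

true_beta = 0.5
data = simulate(true_beta)             # synthetic "surveillance" data

# One-parameter inverse problem: scan the residual sum of squares over beta.
betas = np.linspace(0.1, 1.0, 91)
rss = [np.sum((simulate(b) - data) ** 2) for b in betas]
beta_hat = betas[int(np.argmin(rss))]
```

Real surveillance data would add observation noise and demand uncertainty quantification for the estimate, but the structure — forward model, misfit, minimization over parameters — is the same.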
Reducing the Dimensionality of the Inverse Problem in IMRT
NASA Astrophysics Data System (ADS)
Cabal, Gonzalo
2007-11-01
The inverse problem in IMRT (Intensity Modulated Radiation Therapy) consists in finding a set of radiation parameters based on the conditions given by the radiation therapist. The dimensionality of this problem usually depends on the number of bixels into which each radiation field is divided. Recently, efforts have focused on finding arrangements of small segments (subfields) irradiating uniformly. In this paper a deterministic algorithm that finds solutions given a maximal number of segments is proposed. The procedure consists of two parts. In the first part the segments are chosen based on the irradiation geometry defined by the therapist. In the second part, the radiation intensity of the segments is optimized using standard optimization algorithms. Results are presented. Computational times were reduced and the final fluence maps were less complex without significantly sacrificing clinical value.
Nonlocal Separable Solutions of the Inverse Scattering Problem
NASA Astrophysics Data System (ADS)
Gherghetta, Tony; Nambu, Yoichiro
We extend the nonlocal separable potential solutions of Gourdin and Martin for the inverse scattering problem to the case where sin δ0 has more than N zeroes, δ0 being the s-wave scattering phase shift and δ0(0) - δ0(∞) = Nπ. As an example we construct the solution for the particular case of 4He and show how to incorporate a weakly bound state. Using a local square well potential chosen to mimic the real 4He potential, we compare the off-shell extension of the nonlocal potential solution with the exactly solvable square well. We then discuss how a nonlocal potential might be used to simplify the many-body problem of liquid 4He.
Inverse problems and computational cell metabolic models: a statistical approach
NASA Astrophysics Data System (ADS)
Calvetti, D.; Somersalo, E.
2008-07-01
In this article, we give an overview of the Bayesian modelling of metabolic systems at the cellular and subcellular level. The models are based on detailed description of key biochemical reactions occurring in tissue, which may in turn be compartmentalized into cytosol and mitochondria, and of transports between the compartments. The classical deterministic approach which models metabolic systems as dynamical systems with Michaelis-Menten kinetics, is replaced by a stochastic extension where the model parameters are interpreted as random variables with an appropriate probability density. The inverse problem of cell metabolism in this setting consists of estimating the density of the model parameters. After discussing some possible approaches to solving the problem, we address the issue of how to assess the reliability of the predictions of a stochastic model by proposing an output analysis in terms of model uncertainties. Visualization modalities for organizing the large amount of information provided by the Bayesian dynamic sensitivity analysis are also illustrated.
Galerkin approximation for inverse problems for nonautonomous nonlinear distributed systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Reich, Simeon; Rosen, I. G.
1988-01-01
An abstract framework and convergence theory is developed for Galerkin approximation for inverse problems involving the identification of nonautonomous nonlinear distributed parameter systems. A set of relatively easily verified conditions is provided which are sufficient to guarantee the existence of optimal solutions and their approximation by a sequence of solutions to a sequence of approximating finite dimensional identification problems. The approach is based on the theory of monotone operators in Banach spaces and is applicable to a reasonably broad class of nonlinear distributed systems. Operator theoretic and variational techniques are used to establish a fundamental convergence result. An example involving evolution systems with dynamics described by nonstationary quasilinear elliptic operators along with some applications are presented and discussed.
Comparison of optimal design methods in inverse problems
NASA Astrophysics Data System (ADS)
Banks, H. T.; Holm, K.; Kappel, F.
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77 De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68 Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
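The Fisher-information machinery underlying such design criteria can be sketched for a one-parameter model. This is a toy stand-in, not the authors' SE-optimal framework: the exponential model, noise level, and candidate sampling meshes are illustrative assumptions.

```python
import numpy as np

# One-parameter model y(t) = exp(-k*t) observed with i.i.d. noise of
# standard deviation sigma; k and the candidate meshes are illustrative.
k = 0.5

def fisher_info(times, sigma=0.1):
    # Scalar Fisher information: sum of squared sensitivities / sigma^2.
    s = -times * np.exp(-k * times)    # d/dk of exp(-k*t)
    return np.sum(s ** 2) / sigma ** 2

meshes = {
    "uniform":    np.linspace(0.1, 10, 10),
    "early":      np.linspace(0.1, 2, 10),
    "near t=1/k": np.linspace(1.0, 3.0, 10),   # near the sensitivity peak
}
info = {name: fisher_info(t) for name, t in meshes.items()}
best = max(info, key=info.get)
# Asymptotic standard error of the estimate scales like 1/sqrt(FIM).
se = {name: info[name] ** -0.5 for name in info}
```

Sampling where the model output is most sensitive to the parameter maximizes information and minimizes the asymptotic standard error, which is the intuition formalized by D-, E-, and SE-optimal criteria.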
Inverse zombies, anesthesia awareness, and the hard problem of unconsciousness.
Mashour, George A; LaRock, Eric
2008-12-01
Philosophical (p-) zombies are constructs that possess all of the behavioral features and responses of a sentient human being, yet are not conscious. P-zombies are intimately linked to the hard problem of consciousness and have been invoked as arguments against physicalist approaches. But what if we were to invert the characteristics of p-zombies? Such an inverse (i-) zombie would possess all of the behavioral features and responses of an insensate being, yet would nonetheless be conscious. While p-zombies are logically possible but naturally improbable, an approximation of i-zombies actually exists: individuals experiencing what is referred to as "anesthesia awareness." Patients under general anesthesia may be intubated (preventing speech), paralyzed (preventing movement), and narcotized (minimizing response to nociceptive stimuli). Thus, they appear--and typically are--unconscious. In 1-2 cases/1000, however, patients may be aware of intraoperative events, sometimes without any objective indices. Furthermore, a much higher percentage of patients (22% in a recent study) may have the subjective experience of dreaming during general anesthesia. P-zombies confront us with the hard problem of consciousness--how do we explain the presence of qualia? I-zombies present a more practical problem--how do we detect the presence of qualia? The current investigation compares p-zombies to i-zombies and explores the "hard problem" of unconsciousness with a focus on anesthesia awareness. PMID:18635380
Basis set expansion for inverse problems in plasma diagnostic analysis.
Jones, B; Ruiz, C L
2013-07-01
A basis set expansion method [V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002)] is applied to recover physical information about plasma radiation sources from instrument data, which has been forward transformed due to the nature of the measurement technique. This method provides a general approach for inverse problems, and we discuss two specific examples relevant to diagnosing fast z pinches on the 20-25 MA Z machine [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, J. R. Woodworth, C. W. Mendel, K. R. Prestwich, R. W. Shoup, D. L. Johnson, J. P. Corley, K. C. Hodge, T. C. Wagoner, and P. E. Wakeland, in Proceedings of the Pulsed Power Plasma Sciences Conference (IEEE, 2007), p. 979]. First, Abel inversion of time-gated, self-emission x-ray images from a wire array implosion is studied. Second, we present an approach for unfolding neutron time-of-flight measurements from a deuterium gas puff z pinch to recover information about emission time history and energy distribution. Through these examples, we discuss how noise in the measured data limits the practical resolution of the inversion, and how the method handles discontinuities in the source function and artifacts in the projected image. We add to the method a propagation of errors calculation for estimating uncertainties in the inverted solution.
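The basis-set-expansion approach can be sketched for the Abel-inversion example: expand the unknown radial source in basis functions whose forward transforms are known in closed form, reducing the inversion to a linear least-squares fit. The Gaussian basis, widths, and test source below are illustrative assumptions; the data are kept noise-free for clarity, whereas, as the abstract notes, noise in real data limits the practical resolution.

```python
import numpy as np

# Gaussian basis for the radial source; its Abel transform is analytic:
# A[exp(-(r/a)^2)](y) = a*sqrt(pi)*exp(-(y/a)^2).
widths = np.array([0.3, 0.6, 1.0, 1.5])        # illustrative basis widths

def basis_source(r, a):
    return np.exp(-((r / a) ** 2))

def basis_projection(y, a):
    return a * np.sqrt(np.pi) * np.exp(-((y / a) ** 2))

# Synthetic projection of a source equal to one of the basis functions.
y = np.linspace(0, 3, 60)
data = basis_projection(y, 0.6)

# Linear inverse problem data ~ A @ c; columns of A are projected bases.
A = np.column_stack([basis_projection(y, a) for a in widths])
c, *_ = np.linalg.lstsq(A, data, rcond=None)

# Evaluate the reconstructed source and compare with the truth.
r = np.linspace(0, 3, 60)
recon = sum(ci * basis_source(r, a) for ci, a in zip(c, widths))
err = np.max(np.abs(recon - basis_source(r, 0.6)))
```

Fitting in the projected (data) space and evaluating in the source space is what lets the method handle forward-transformed measurements without an explicit numerical inverse transform.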
Theoretical study of Laplacian electrocardiography forward and inverse problem
NASA Astrophysics Data System (ADS)
Wu, Dongsheng
The present study is concerned with a fundamental problem of cardiac electrophysiology, that is, relating in a quantitative way the electrical activity within the heart to the signals recorded over the body surface. By computer simulation, a rigorous evaluation of the performance of the body surface Laplacian electrocardiographic maps in a physiologically reasonable and well-controlled computational setting is provided in this dissertation. The present forward heart-torso model, a three-dimensional ventricular conduction model embedded in a realistically shaped inhomogeneous torso volume conductor model, represents, to date, the most advanced computer model available for studying the Laplacian electrocardiographic fields corresponding to normal and abnormal ventricular conduction processes. Theoretical studies show that one can achieve enhanced spatial resolution of mapping cardiac electrical activity by obtaining the Laplacian ECG over the body surface. The present work demonstrates the excellent performance of the body surface Laplacian electrocardiographic maps in resolving and imaging the underlying regional myocardial electrical activity. The biophysics underlying this is that the Laplacian ECG heavily weights the contributions from the myocardial bioelectric sources that are closest to the recording location, whereas the potential ECG sums up the contributions from a large area of activated myocardial tissue. It is this regional nature of the Laplacian ECG that makes it possible to provide a more localized body surface manifestation of the underlying regional myocardial electrical activity. The feasibility of applying the Laplacian ECG to the inverse problems has also been investigated. Theoretical studies of the Laplacian electrocardiogram based inverse problem by using a homogeneous spherical volume conductor and a realistically shaped volume conductor have been conducted. The present work shows encouraging results which suggest the feasibility
NASA Technical Reports Server (NTRS)
Backus, George
1987-01-01
Let R be the real numbers, R^n the linear space of all real n-tuples, and R^∞ the linear space of all infinite real sequences x = (x_1, x_2, ...). Let P_n: R^∞ → R^n be the projection operator with P_n(x) = (x_1, ..., x_n). Let p_∞ be a probability measure on the smallest sigma-ring of subsets of R^∞ which includes all of the cylinder sets P_n^{-1}(B_n), where B_n is an arbitrary Borel subset of R^n. Let p_n be the marginal distribution of p_∞ on R^n, so p_n(B_n) = p_∞(P_n^{-1}(B_n)) for each B_n. A measure on R^n is isotropic if it is invariant under all orthogonal transformations of R^n. All members of the set of all isotropic probability distributions on R^n are described. The result calls into question both stochastic inversion and Bayesian inference, as currently used in many geophysical inverse problems.
Solution accelerators for large scale 3D electromagnetic inverse problems
Newman, Gregory A.; Boggs, Paul T.
2004-04-05
We provide a framework for preconditioning nonlinear 3D electromagnetic inverse scattering problems using nonlinear conjugate gradient (NLCG) and limited memory (LM) quasi-Newton methods. Key to our approach is the use of an approximate adjoint method that allows for an economical approximation of the Hessian that is updated at each inversion iteration. Using this approximate Hessian as a preconditioner, we show that the preconditioned NLCG iteration converges significantly faster than the non-preconditioned iteration, as well as converging to a data misfit level below that observed for the non-preconditioned method. Similar conclusions are also observed for the LM iteration; preconditioned with the approximate Hessian, the LM iteration converges faster than the non-preconditioned version. At this time, however, we see little difference between the convergence performance of the preconditioned LM scheme and the preconditioned NLCG scheme. A possible reason for this outcome is the behavior of the line search within the LM iteration. It was anticipated that, near convergence, a step size of one would be approached, but what was observed, instead, were step lengths that were nowhere near one. We provide some insights into the reasons for this behavior and suggest further research that may improve the performance of the LM methods.
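The effect of preconditioning an NLCG iteration with an approximate Hessian can be seen on a toy ill-conditioned quadratic misfit. This is an illustrative sketch, not the authors' electromagnetic inversion: the objective, curvatures, and exact line search are assumptions that keep the comparison transparent.

```python
import numpy as np

# Ill-conditioned toy misfit 0.5 * m^T D m with diagonal Hessian D.
d = np.array([1.0, 100.0, 10000.0])    # widely spread curvatures

def grad(m):
    return d * m

def nlcg(m0, precond, tol=1e-6, itmax=500):
    # Polak-Ribiere NLCG with an exact line search for this quadratic.
    m = m0.copy()
    g = grad(m)
    s = precond(g)
    p = -s
    for it in range(itmax):
        if np.linalg.norm(g) < tol:
            break
        alpha = -np.dot(g, p) / np.dot(p, d * p)   # exact step for quadratic
        m = m + alpha * p
        g_new = grad(m)
        s_new = precond(g_new)
        beta = np.dot(g_new - g, s_new) / np.dot(g, s)
        p = -s_new + beta * p
        g, s = g_new, s_new
    return m, it

m0 = np.ones(3)
_, it_plain = nlcg(m0, lambda g: g)        # identity preconditioner
_, it_prec = nlcg(m0, lambda g: g / d)     # approximate inverse Hessian
```

With a good inverse-Hessian preconditioner the effective spectrum is flattened and the iteration count collapses, which is the mechanism behind the faster convergence the abstract reports.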
Inverse problems in heterogeneous and fractured media using peridynamics
Turner, Daniel Z.; van Bloemen Waanders, Bart G.; Parks, Michael L.
2015-12-10
The following work presents an adjoint-based methodology for solving inverse problems in heterogeneous and fractured media using state-based peridynamics. We show that the inner product involving the peridynamic operators is self-adjoint. The proposed method is illustrated for several numerical examples with constant and spatially varying material parameters as well as in the context of fractures. We also present a framework for obtaining material parameters by integrating digital image correlation (DIC) with inverse analysis. This framework is demonstrated by evaluating the bulk and shear moduli for a sample of nuclear graphite using digital photographs taken during the experiment. The resulting measured values correspond well with other results reported in the literature. Lastly, we show that this framework can be used to determine the load state given observed measurements of a crack opening. This type of analysis has many applications in characterizing subsurface stress-state conditions given fracture patterns in cores of geologic material.
The inverse problem for the simple dynamo model
NASA Astrophysics Data System (ADS)
Reshetnyak, Maxim
2016-04-01
The inverse solution of the 1D Parker dynamo equations is considered. The method [1] is based on minimization of a cost function that characterizes the deviation of the model solution's properties from the desired ones. The output is the latitude distribution of the magnetic field generation sources: the α- and ω-effects. Minimization is performed using the Monte-Carlo method. The details of the method are considered, as well as some applications of interest to the broad dynamo community: conditions under which the toroidal part of the magnetic field, invisible to an observer at the planet's surface, is much larger than its poloidal counterpart. It is also demonstrated under what circumstances the magnetic field in the two hemispheres has different properties (the so-called hemispherical dynamo), and a simple physical explanation of this phenomenon is proposed. References [1] Reshetnyak M.Yu. Inverse problem in Parker's dynamo. Russ. J. Earth Sci. 2015. 15. ES4001, doi:10.2205/2015ES000558, arXiv:1511.06243
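The core of the method described above, Monte-Carlo minimization of a cost function over a parameter space, can be sketched in a few lines. This is a generic random-search illustration on a hypothetical quadratic cost, not the dynamo cost function of [1]; the bounds and sample count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_minimize(cost, bounds, n_samples=20000):
    """Draw candidate parameter vectors uniformly inside the bounds and
    keep the one with the smallest cost -- the simplest Monte-Carlo
    search over a cost function."""
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    best_p, best_c = None, np.inf
    for _ in range(n_samples):
        p = lo + (hi - lo) * rng.random(lo.size)
        c = cost(p)
        if c < best_c:
            best_p, best_c = p, c
    return best_p, best_c

# toy cost with a unique minimum at (1, -2), standing in for the
# misfit between model solution properties and the desired ones
cost = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best_p, best_c = monte_carlo_minimize(cost, [(-5, 5), (-5, 5)])
```

In practice, Monte-Carlo search trades efficiency for robustness: it needs many cost evaluations but makes no smoothness assumptions, which suits cost functions built from qualitative properties of dynamo solutions.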
The LHC Inverse Problem, Supersymmetry and the ILC
Berger, C.F.; Gainer, J.S.; Hewett, J.L.; Lillie, B.; Rizzo, T.G.
2007-11-12
We address the question of whether the ILC can resolve the LHC Inverse Problem within the framework of the MSSM. We examine 242 points in the MSSM parameter space which were generated at random and were found to give indistinguishable signatures at the LHC. After a realistic simulation including full Standard Model backgrounds and a fast detector simulation, we find that only roughly one third of these scenarios lead to visible signatures of some kind with a significance ≥ 5 at the ILC with √s = 500 GeV. Furthermore, we examine these points in parameter space pairwise and find that only one third of the pairs are distinguishable at the ILC at 5σ.
Canonically Transformed Detectors Applied to the Classical Inverse Scattering Problem
NASA Astrophysics Data System (ADS)
Jung, C.; Seligman, T. H.; Torres, J. M.
The concept of measurement in classical scattering is interpreted as an overlap of a particle packet with some area in phase space that describes the detector. Considering that usually we record the passage of particles at some point in space, a common detector is described e.g. for one-dimensional systems as a narrow strip in phase space. We generalize this concept allowing this strip to be transformed by some, possibly non-linear, canonical transformation, introducing thus a canonically transformed detector. We show such detectors to be useful in the context of the inverse scattering problem in situations where recently discovered scattering echoes could not be seen without their help. More relevant applications in quantum systems are suggested.
Multi-frequency orthogonality sampling for inverse obstacle scattering problems
NASA Astrophysics Data System (ADS)
Griesmaier, Roland
2011-08-01
We discuss a simple non-iterative method to reconstruct the support of a collection of obstacles from the measurements of far-field patterns of acoustic or electromagnetic waves corresponding to plane-wave incident fields with one or few incident directions at several frequencies. The method is a variant of the orthogonality sampling algorithm recently studied by Potthast (2010 Inverse Problems 26 074015). Our theoretical analysis of the algorithm relies on an asymptotic expansion of the far-field pattern of the scattered field as the size of the scatterers tends to zero with respect to the wavelength of the incident field that holds not only at a single frequency, but also across appropriate frequency bands. This expansion suggests some modifications to the original orthogonality sampling algorithm and yields a theoretical motivation for its multi-frequency version. We illustrate the performance of the reconstruction method by numerical examples.
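The orthogonality sampling indicator described above is simple enough to sketch directly: for each sampling point z one correlates the measured far-field pattern with the test function e^{ik x̂·z} over the observation directions. The following is a minimal single-frequency illustration with a synthetic point-scatterer far field (a Born-type stand-in); the discretization and scatterer location are invented for the demo and are not from the paper.

```python
import numpy as np

def orthogonality_sampling(far_field, obs_dirs, k, grid):
    """Indicator I(z) = | sum_j u_inf(x_j) * exp(i k x_j . z) | over the
    observation directions x_j. Large values of I flag sampling points z
    near the scatterer support."""
    phase = np.exp(1j * k * grid @ obs_dirs.T)   # shape (n_grid, n_dirs)
    return np.abs(phase @ far_field)

# synthetic far field of a point-like scatterer at z0
k = 5.0
z0 = np.array([0.4, -0.2])
angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
obs_dirs = np.column_stack([np.cos(angles), np.sin(angles)])
u_inf = np.exp(-1j * k * obs_dirs @ z0)

xs = np.linspace(-1.0, 1.0, 41)
grid = np.array([[x, y] for y in xs for x in xs])
I = orthogonality_sampling(u_inf, obs_dirs, k, grid)
z_est = grid[np.argmax(I)]
```

The multi-frequency variant analyzed in the paper sums such indicators over a frequency band, which sharpens the peak and suppresses the oscillatory sidelobes visible at a single frequency.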
Nonlocal regularization of inverse problems: a unified variational framework
Yang, Zhili; Jacob, Mathews
2014-01-01
We introduce a unifying energy minimization framework for nonlocal regularization of inverse problems. In contrast to the weighted sum of square differences between image pixels used by current schemes, the proposed functional is an unweighted sum of inter-patch distances. We use robust distance metrics that promote the averaging of similar patches, while discouraging the averaging of dissimilar patches. We show that the first iteration of a majorize-minimize algorithm to minimize the proposed cost function is similar to current non-local methods. The reformulation thus provides a theoretical justification for the heuristic approach of iterating non-local schemes, which re-estimate the weights from the current image estimate. Thanks to the reformulation, we now understand that the widely reported alias amplification associated with iterative non-local methods is caused by convergence to a local minimum of the nonconvex penalty. We introduce an efficient continuation strategy to overcome this problem. The similarity of the proposed criterion to widely used non-quadratic penalties (e.g. total variation and ℓp semi-norms) opens the door to the adaptation of fast algorithms developed in the context of compressive sensing; we introduce several novel algorithms to solve the proposed non-local optimization problem. Thanks to the unifying framework, these fast algorithms are readily applicable for a large class of distance metrics. PMID:23014745
The inverse problem of bimorph mirror tuning on a beamline
Huang, Rong
2013-03-07
One of the challenges of tuning bimorph mirrors with many electrodes is that the calculated focusing voltages on adjacent electrodes can differ by more than the safety limit (such as 500 V for the mirrors used at 17-ID at the Advanced Photon Source). A study of this problem at 17-ID revealed that the inverse problem of the tuning in situ, using X-rays, became ill-conditioned when the number of electrodes was large and the calculated focusing voltages were contaminated with measurement errors. Increasing the number of beamlets during the tuning could reduce the matrix condition number in the problem, but obtaining voltages with variation below the safety limit was still not always guaranteed, and multiple iterations of tuning were often required. Applying Tikhonov regularization and using the L-curve criterion for the determination of the regularization parameter made it straightforward to obtain focusing voltages with well behaved variations. Some characteristics of the tuning results obtained using Tikhonov regularization are given in this paper.
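Tikhonov regularization with L-curve parameter selection, as used above, can be sketched generically: sweep the regularization parameter, trace the curve of log residual norm against log solution norm, and pick the point of maximum curvature. This is a minimal NumPy illustration on a synthetic ill-conditioned (Hilbert-matrix) system, not the beamline code; the discrete curvature estimate is one simple choice among several.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

def l_curve_corner(A, b, lambdas):
    """Pick the lambda nearest the L-curve corner: the point of maximum
    discrete curvature of log residual norm vs log solution norm."""
    sols = [tikhonov_solve(A, b, l) for l in lambdas]
    x = np.log([np.linalg.norm(A @ s - b) for s in sols])
    y = np.log([np.linalg.norm(s) for s in sols])
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
    return lambdas[np.argmax(kappa)]

# ill-conditioned demo: a 12x12 Hilbert matrix with slightly noisy data
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-4 * rng.standard_normal(n)
lam_opt = l_curve_corner(A, b, np.logspace(-6, 1, 40))
```

Larger lambda damps the solution norm, which is exactly the mechanism that keeps adjacent-electrode voltage differences under control in the mirror-tuning application.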
The ultrasound elastography inverse problem and the effective criteria.
Aghajani, Atefeh; Haghpanahi, Mohammad; Nikazad, Touraj
2013-11-01
Elastography (elasticity imaging) is a recent state-of-the-art method for diagnosis of abnormalities in soft tissue. The idea is based on the computation of the tissue elasticity distribution. This leads to the inverse elasticity problem, in which the displacement field and boundary conditions are known and the elasticity distribution of the tissue is to be computed. We treat this problem with the Gauss-Newton method. The problem is ill-posed, and therefore regularization schemes are required to deal with this issue. The impacts of the initial guess for the tissue elasticity distribution, the contrast ratio between the elastic moduli of tumor and normal tissue, and the noise level of the input data on the estimated solutions are investigated via two different regularization methods. The numerical results show that the accuracy and speed of convergence vary when different regularization methods are applied. Also, semi-convergence behavior has been observed and discussed. At the end, we emphasize the necessity of a clever initial guess and intelligent stopping criteria for the iterations. The main purpose here is to highlight some technical factors that influence elasticity image quality and diagnostic accuracy, and we have tried our best to make this article accessible for a broad audience.
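A damped (Tikhonov-regularized) Gauss-Newton iteration of the kind referred to above can be sketched generically. The demo below recovers two parameters of a hypothetical exponential model from noise-free data; it is a stand-in for the elasticity problem, whose forward model is far more involved, and all names and the damping value are illustrative assumptions.

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, lam=1e-6, iters=50):
    """Damped Gauss-Newton: each step solves the Tikhonov-regularized
    normal equations (J^T J + lam I) dp = -J^T r for the update dp."""
    p = np.asarray(p0, float)
    for _ in range(iters):
        r = residual(p)
        J = jacobian(p)
        dp = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        p = p + dp
    return p

# toy parameter-identification problem: recover (a, b) in y = a*exp(-b*t)
t = np.linspace(0.0, 2.0, 50)
y_obs = 2.0 * np.exp(-1.3 * t)
residual = lambda p: p[0] * np.exp(-p[1] * t) - y_obs
jacobian = lambda p: np.column_stack([np.exp(-p[1] * t),
                                      -p[0] * t * np.exp(-p[1] * t)])
p_hat = gauss_newton(residual, jacobian, [1.5, 1.0])
```

With noisy data the iteration exhibits exactly the semi-convergence the abstract mentions: the error first decreases and then grows, which is why a good stopping criterion matters as much as the regularization itself.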
Inverse problems for linear hyperbolic equations using mixed formulations
NASA Astrophysics Data System (ADS)
Cîndea, Nicolae; Münch, Arnaud
2015-07-01
We introduce a direct method for solving inverse problems for linear hyperbolic equations numerically. We first consider the reconstruction of the full solution of the equation posed in Ω × (0,T)—Ω being a bounded subset of {{{R}}N}—from a partial distributed observation. We employ a least-squares technique and minimize the L2-norm of the distance from the observation to any solution. Taking the hyperbolic equation as the main constraint of the problem, the optimality conditions are reduced to a mixed formulation involving both the state to reconstruct and a Lagrange multiplier. Under usual geometric optic conditions, we show the well-posedness of this mixed formulation (in particular the inf-sup condition) and then introduce a numerical approximation based on space-time finite element discretization. We prove the strong convergence of the approximation and then discuss several examples for N = 1 and N = 2. The problem of the reconstruction of both the state and the source terms is also addressed.
The geometry of discombinations and its applications to semi-inverse problems in anelasticity.
Yavari, Arash; Goriely, Alain
2014-09-01
The geometrical formulation of continuum mechanics provides us with a powerful approach to understand and solve problems in anelasticity where an elastic deformation is combined with a non-elastic component arising from defects, thermal stresses, growth effects or other effects leading to residual stresses. The central idea is to assume that the material manifold, prescribing the reference configuration for a body, has an intrinsic, non-Euclidean, geometrical structure. Residual stresses then naturally arise when this configuration is mapped into Euclidean space. Here, we consider the problem of discombinations (a new term that we introduce in this paper), that is, a combined distribution of fields of dislocations, disclinations and point defects. Given a discombination, we compute the geometrical characteristics of the material manifold (curvature, torsion, non-metricity), its Cartan's moving frames and structural equations. This identification provides a powerful algorithm to solve semi-inverse problems with non-elastic components. As an example, we calculate the residual stress field of a cylindrically symmetric distribution of discombinations in an infinite circular cylindrical bar made of an incompressible hyperelastic isotropic elastic solid. PMID:25197257
A regularizing iterative ensemble Kalman method for PDE-constrained inverse problems
NASA Astrophysics Data System (ADS)
Iglesias, Marco A.
2016-02-01
The numerical investigation is carried out with synthetic experiments on two model inverse problems: (i) identification of conductivity in a Darcy flow model and (ii) electrical impedance tomography with the complete electrode model. We further demonstrate the potential application of the method in solving shape identification problems that arise from the aforementioned forward models by means of a level-set approach for the parameterization of unknown geometries.
Methodes entropiques appliquees au probleme inverse en magnetoencephalographie
NASA Astrophysics Data System (ADS)
Lapalme, Ervig
2005-07-01
This thesis is devoted to biomagnetic source localization using magnetoencephalography. This problem is known to have an infinite number of solutions, so methods are required to take into account anatomical and functional information on the solution. The work presented in this thesis uses the maximum entropy on the mean method to constrain the solution. This method originates from statistical mechanics and information theory. This thesis is divided into two main parts containing three chapters each. The first part reviews the magnetoencephalographic inverse problem: the theory needed to understand its context and the hypotheses for simplifying the problem. In the last chapter of this first part, the maximum entropy on the mean method is presented: its origins are explained, as well as how it is applied to our problem. The second part is the original work of this thesis, presenting three articles: one already published and two others submitted for publication. In the first article, a biomagnetic source model is developed and applied in a theoretical context, demonstrating the efficiency of the method. In the second article, we go one step further towards a realistic modelling of the cerebral activation. The main priors are estimated using the magnetoencephalographic data. This method proved to be very efficient in realistic simulations. In the third article, the previous method is extended to deal with time signals, thus exploiting the excellent time resolution offered by magnetoencephalography. Compared with our previous work, the temporal method is applied to real magnetoencephalographic data coming from a somatotopy experiment, and the results agree with previous physiological knowledge about this kind of cognitive process.
An inverse problem approach to modelling coastal effluent plumes
NASA Astrophysics Data System (ADS)
Lam, D. C. L.; Murthy, C. R.; Miners, K. C.
Formulated as an inverse problem, the diffusion parameters associated with length-scale dependent eddy diffusivities can be viewed as the unknowns in the mass conservation equation for coastal zone transport problems. The values of the diffusion parameters can be optimized according to an error function incorporating observed concentration data. Examples are given for the Fickian, shear diffusion and inertial subrange diffusion models. Based on a new set of dye-plume data collected in the coastal zone off Bronte, Lake Ontario, it is shown that the predictions of turbulence closure models can be evaluated for different flow conditions. The choice of computational schemes for this diagnostic approach is based on tests with analytic solutions and observed data. It is found that the optimized shear diffusion model produced a better agreement with observations for both high and low advective flows than, e.g., the unoptimized semi-empirical model, K_y = 0.075 σ_y^1.2, described by Murthy and Kenney.
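Power-law diffusivity relations of the form K_y = a σ_y^b, like the semi-empirical model quoted above, are commonly estimated by linear least squares in log-log space. The sketch below recovers the coefficients from synthetic noise-free data; the sample values of σ_y are invented for illustration and are not the Bronte data set.

```python
import numpy as np

def fit_power_law_diffusivity(sigma_y, K_y):
    """Fit K_y = a * sigma_y**b by linear least squares on
    log K_y = log a + b * log sigma_y."""
    A = np.column_stack([np.ones_like(sigma_y), np.log(sigma_y)])
    coef, *_ = np.linalg.lstsq(A, np.log(K_y), rcond=None)
    return np.exp(coef[0]), coef[1]   # (a, b)

# synthetic plume-width / diffusivity data following K_y = 0.075 sigma_y^1.2
sigma_y = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
K_y = 0.075 * sigma_y ** 1.2
a_hat, b_hat = fit_power_law_diffusivity(sigma_y, K_y)
```

The inverse-problem formulation in the abstract goes further: instead of fitting the diffusivity law directly, it optimizes the diffusion parameters so that the full transport model best matches observed concentrations.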
NASA Astrophysics Data System (ADS)
Zhan, Qin; Yuan, Yuan; Fan, Xiangtao; Huang, Jianyong; Xiong, Chunyang; Yuan, Fan
2016-06-01
Digital image correlation (DIC) is essentially a class of inverse problem. Here, a regularization scheme is developed for the subset-based DIC technique to effectively inhibit the potential ill-posedness that can arise in actual deformation calculations and hence enhance the numerical stability, accuracy and precision of correlation measurement. With the aid of a parameterized two-dimensional Butterworth window, a regularized subpixel registration strategy is established, in which the amount of speckle information introduced to correlation calculations may be weighted through an equivalent subset size constraint. The optimal regularization parameter associated with each individual sampling point is determined in a self-adaptive way by numerically investigating the curve of the 2-norm condition number of the coefficient matrix versus the corresponding equivalent subset size, based on which the regularized solution can eventually be obtained. Numerical results deriving from both synthetic speckle images and actual experimental images demonstrate the feasibility and effectiveness of the set of newly proposed regularized DIC algorithms.
Haber, Eldad
2014-03-17
The focus of research was: developing adaptive mesh for the solution of Maxwell's equations; developing a parallel framework for time dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations in the 0th frequency (DC resistivity); a new inversion code for inverse Maxwell's equations in the low frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results of the research were also applied to the problem of image registration.
Inverse problem for the current loop model: Possibilities and restrictions
NASA Astrophysics Data System (ADS)
Demina, I. M.; Farafonova, Yu. G.
2016-07-01
The possibilities of determining arbitrary current loop parameters based on the spatial structures of the magnetic field components generated by this loop on a sphere with a specified radius have been considered with the use of models. The model parameters were selected such that anomalies created by current loops on a sphere with a radius of 6378 km would be comparable in value with the different-scale anomalies of the observed main geomagnetic field (MGF). The least squares method was used to solve the inverse problem. Estimates close to the specified values were obtained for all current loop parameters except the current strength and radius. The radius determination error can reach ±120 km; at the same time, the magnetic moment value is determined with an accuracy of ±1%. The resolvability of the current strength and radius can be improved to a certain degree by decreasing the observation sphere radius such that the ratio of the source distance to the current loop radius would be at least smaller than eight, which can be difficult to achieve when modeling the MGF.
Solving Inverse Detection Problems Using Passive Radiation Signatures
Favorite, Jeffrey A.; Armstrong, Jerawan C.; Vaquer, Pablo A.
2012-08-15
The ability to reconstruct an unknown radioactive object based on its passive gamma-ray and neutron signatures is very important in homeland security applications. Often in the analysis of unknown radioactive objects, for simplicity or speed or because there is no other information, they are modeled as spherically symmetric regardless of their actual geometry. In this presentation we discuss the accuracy and implications of this approximation for decay gamma rays and for neutron-induced gamma rays. We discuss an extension of spherical raytracing (for uncollided fluxes) that allows it to be used when the exterior shielding is flat or cylindrical. We revisit some early results in boundary perturbation theory, showing that the Roussopoulos estimate is the correct one to use when the quantity of interest is the flux or leakage on the boundary. We apply boundary perturbation theory to problems in which spherically symmetric systems are perturbed in asymmetric nonspherical ways. We apply mesh adaptive direct search (MADS) algorithms to object reconstructions. We present a benchmark test set that may be used to quantitatively evaluate inverse detection methods.
NASA Astrophysics Data System (ADS)
Hohage, Thorsten
1997-10-01
Convergence and logarithmic convergence rates of the iteratively regularized Gauss-Newton method in a Hilbert space setting are proven, provided a logarithmic source condition is satisfied. This method is applied to an inverse potential and an inverse scattering problem, and the source condition is interpreted as a smoothness condition in terms of Sobolev spaces for the case where the domain is a circle. Numerical experiments yield convergence and convergence rates of the form expected by our general convergence theorem.
Bayesian Genomic-Enabled Prediction as an Inverse Problem
Cuevas, Jaime; Pérez-Elizalde, Sergio; Soberanis, Victor; Pérez-Rodríguez, Paulino; Gianola, Daniel; Crossa, José
2014-01-01
Genomic-enabled prediction in plant and animal breeding has become an active area of research. Many prediction models address the collinearity that arises when the number (p) of molecular markers (e.g. single-nucleotide polymorphisms) is larger than the sample size (n). Here we propose four Bayesian approaches to the problem based on commonly used data reduction methods. Specifically, we use a Gaussian linear model for an orthogonal transformation of both the observed data and the matrix of molecular markers. Because shrinkage of estimates is affected by the prior variance of transformed effects, we propose four structures of the prior variance as a way of potentially increasing the prediction accuracy of the models fitted. To evaluate our methods, maize and wheat data previously used with standard Bayesian regression models were employed for measuring prediction accuracy using the proposed models. Results indicate that, for the maize and wheat data sets, our Bayesian models yielded, on average, a prediction accuracy that is 3% greater than that of standard Bayesian regression models, with less computational effort. PMID:25155273
Butler, T.; Graham, L.; Estep, D.; Westerink, J.J.
2015-01-01
The uncertainty in spatially heterogeneous Manning’s n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning’s n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of “condition” for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning’s n parameter and the effect on model predictions is analyzed. PMID:25937695
NASA Astrophysics Data System (ADS)
Butler, T.; Graham, L.; Estep, D.; Dawson, C.; Westerink, J. J.
2015-04-01
The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.
Forward- vs. Inverse Problems in Modeling Seismic Attenuation
NASA Astrophysics Data System (ADS)
Morozov, I. B.
2015-12-01
Seismic attenuation is an important property of wave propagation used in numerous applications. However, the attenuation is also a complex phenomenon, and it is important to differentiate between its two typical uses: 1) in forward problems, to model the amplitudes and spectral contents of waves required for hazard assessment and geotechnical engineering, and 2) in inverse problems, to determine the physical properties of the subsurface. In the forward-problem sense, the attenuation is successfully characterized in terms of empirical parameters of geometric spreading, radiation patterns, scattering amplitudes, t-star, alpha, kappa, or Q. Arguably, the predicted energy losses can be correct even if the underlying attenuation model is phenomenological and not sufficiently based on physics. An example of such a phenomenological model is the viscoelasticity based on the correspondence principle and the Q-factor assigned to the material. By contrast, when used to invert for in situ material properties, models addressing the specific physics are required. In many studies (including in this session), a Q-factor is interpreted as a property of a point within the subsurface; however, this property is only phenomenological and may be physically insufficient or inconsistent. For example, the bulk or shear Q at the same point can be different when evaluated from different wave modes. The cases of frequency-dependent Q are particularly prone to ambiguities such as trade-offs with the assumed background geometric spreading. To rigorously characterize the in situ material properties responsible for seismic-wave attenuation, it is insufficient to only focus on the seismic energy loss. Mechanical models of the material need to be considered. Such models can be constructed by using Lagrangian mechanics. These models should likely contain no Q but will be based on parameters of microstructure such as heterogeneity, fractures, or fluids. I illustrate several such models based on viscosity
Entire nodal solutions to the pure critical exponent problem arising from concentration
NASA Astrophysics Data System (ADS)
Clapp, Mónica
2016-09-01
We obtain new sign-changing solutions to the pure critical exponent problem (℘p). We exhibit solutions which blow up at a single point as p → 2*, developing a peak whose asymptotic profile is a rescaling of a nonradial sign-changing solution to the limit problem (℘∞). We also obtain existence and multiplicity of sign-changing nonradial solutions to the Bahri-Coron problem (℘2*) in annuli.
A numerical solution of a singular boundary value problem arising in boundary layer theory.
Hu, Jiancheng
2016-01-01
In this paper, a second-order nonlinear singular boundary value problem is presented, which is equivalent to the well-known Falkner-Skan equation. The one-dimensional third-order boundary value problem on the interval [Formula: see text] is equivalently transformed into a second-order boundary value problem on the finite interval [Formula: see text]. The finite difference method is utilized to solve the singular boundary value problem, with significantly less computational effort than other numerical methods. The numerical solutions obtained by the finite difference method are in agreement with those obtained by previous authors. PMID:27026894
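The finite difference approach for a two-point boundary value problem can be sketched on a simple linear model: discretize u'' with the standard second-order central difference and solve the resulting linear system. This is a generic illustration with a known exact solution, not the nonlinear singular Falkner-Skan-type problem of the paper, which additionally requires an iteration to handle the nonlinearity.

```python
import numpy as np

def solve_bvp_fd(f, a, b, ua, ub, n=200):
    """Solve u'' = f(x) on [a, b] with u(a)=ua, u(b)=ub using the
    second-order central-difference scheme on a uniform grid."""
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    # tridiagonal system for the interior unknowns u_1 .. u_{n-1}
    A = np.zeros((n - 1, n - 1))
    rhs = f(x[1:-1]) * h ** 2
    for i in range(n - 1):
        A[i, i] = -2.0
        if i > 0:
            A[i, i - 1] = 1.0
        if i < n - 2:
            A[i, i + 1] = 1.0
    rhs[0] -= ua       # fold boundary values into the right-hand side
    rhs[-1] -= ub
    u = np.empty(n + 1)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

# test problem with exact solution u(x) = sin(pi x)
x, u = solve_bvp_fd(lambda x: -(np.pi ** 2) * np.sin(np.pi * x),
                    0.0, 1.0, 0.0, 0.0)
```

The scheme is O(h^2) accurate; with n = 200 the maximum error against sin(πx) is on the order of 10^-4, and a dedicated tridiagonal solver (rather than the dense solve used here for brevity) keeps the cost linear in n.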
Flight control in the hawkmoth Manduca sexta: the inverse problem of hovering.
Hedrick, T L; Daniel, T L
2006-08-01
The inverse problem of hovering flight, that is, the range of wing movements appropriate for sustained flight at a fixed position and orientation, was examined by developing a simulation of the hawkmoth Manduca sexta. Inverse problems arise when one is seeking the parameters that are required to achieve a specified model outcome. In contrast, forward problems explore the outcomes given a specified set of input parameters. The simulation was coupled to a microgenetic algorithm that found specific sequences of wing and body motions, encoded by ten independent kinematic parameters, capable of generating the fixed body position and orientation characteristic of hovering flight. Additionally, we explored the consequences of restricting the number of free kinematic parameters and used this information to assess the importance to flight control of individual parameters and various combinations of them. Output from the simulated moth was compared to kinematic recordings of hovering flight in real hawkmoths; the real and simulated moths performed similarly with respect to their range of variation in position and orientation. The simulated moth also used average wingbeat kinematics (amplitude, stroke plane orientation, etc.) similar to those of the real moths. However, many different subsets of the available kinematic parameters were sufficient for hovering flight, and the available kinematic data from real moths do not include sufficient detail to assess which, if any, of these was consistent with the real moth. This general result, the multiplicity of possible hovering kinematics, shows that the means by which Manduca sexta actually maintains position and orientation may have considerable freedom and therefore may be influenced by many other factors beyond the physical and aerodynamic requirements of hovering flight.
Application of evolution strategies for the solution of an inverse problem in near-field optics.
Macías, Demetrio; Vial, Alexandre; Barchiesi, Dominique
2004-08-01
We introduce an inversion procedure for the characterization of a nanostructure from near-field intensity data. The method proposed is based on heuristic arguments and makes use of evolution strategies for the solution of the inverse problem as a nonlinear constrained-optimization problem. By means of some examples we illustrate the performance of our inversion method. We also discuss its possibilities and potential applications.
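Evolution strategies of the kind used above can be sketched in their simplest form: the (1+1)-ES with the classical 1/5th success rule for step-size adaptation. The demo below minimizes a hypothetical quadratic cost standing in for the misfit between simulated and measured near-field intensities; the cost, start point, and adaptation factors are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def one_plus_one_es(cost, x0, sigma=0.5, iters=800):
    """Minimal (1+1) evolution strategy with the 1/5th success rule:
    mutate the parent, keep the child if it is better, and adapt the
    mutation step size based on success/failure."""
    x = np.asarray(x0, float)
    fx = cost(x)
    for _ in range(iters):
        child = x + sigma * rng.standard_normal(x.size)
        fc = cost(child)
        if fc < fx:
            x, fx = child, fc
            sigma *= 1.22   # expand the step on success
        else:
            sigma *= 0.95   # shrink on failure (~1/5 success at equilibrium)
    return x, fx

# toy misfit with minimum at the "true" nanostructure parameters (1, 2)
cost = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2
x_best, f_best = one_plus_one_es(cost, np.array([5.0, -5.0]))
```

Because the strategy uses only cost-function evaluations, it accommodates the nonlinear constrained misfits that arise in near-field inversion, where gradients of the forward solver are expensive or unavailable.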
ERIC Educational Resources Information Center
Parelius, Robert James
Hypothetical and actual problems in the organizational, professional, collegial, and client relationships of college faculty were studied. A list of hypothetical problems was derived from a systematic literature search, and semi-structured interviews were conducted with 32 faculty of history, biological science, political science, and business…
A variational Bayesian approach for inverse problems with skew-t error distributions
NASA Astrophysics Data System (ADS)
Guha, Nilabja; Wu, Xiaoqing; Efendiev, Yalchin; Jin, Bangti; Mallick, Bani K.
2015-11-01
In this work, we develop a novel robust Bayesian approach to inverse problems with data errors following a skew-t distribution. A hierarchical Bayesian model is developed in the inverse problem setup. The Bayesian approach contains a natural mechanism for regularization in the form of a prior distribution, and a LASSO type prior distribution is used to strongly induce sparseness. We propose a variational type algorithm by minimizing the Kullback-Leibler divergence between the true posterior distribution and a separable approximation. The proposed method is illustrated on several two-dimensional linear and nonlinear inverse problems, e.g. Cauchy problem and permeability estimation problem.
The inverse scattering problem at fixed angular momentum for nonlocal separable interactions
NASA Technical Reports Server (NTRS)
Chadan, K.
1972-01-01
The problem of inverse scattering at fixed angular momentum is considered. The problem is particularized to the case of nonlocal separable interactions. A brief survey of the inverse problem for nonlocal separable interactions is presented. This problem can be solved exactly by integration. It amounts to solving singular integral equations of the Hilbert-Muskhelishvili type, which have been studied extensively in the past and appear in many areas of physics, including the theory of elasticity and dispersion relations in high energy physics.
NASA Astrophysics Data System (ADS)
Oralsyn, Gulaym
2016-08-01
We study an inverse coefficient problem for a model equation of one-dimensional heat transfer with preservation of the medium temperature. Together with the solution, a time-dependent unknown coefficient of the equation must be found. For this inverse problem, the existence of a unique generalized solution is proved. The main difficulty of the considered problems is that the eigenfunction system of the corresponding boundary value problems does not have the basis property.
Variational methods for direct/inverse problems of atmospheric dynamics and chemistry
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena
2013-04-01
We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. It is important that accurate matching of numerical schemes be provided in the chain of objects: direct/adjoint problems - sensitivity relations - inverse problems, including assimilation of all available measurement data. To solve the problems we have developed a new enhanced set of cost-effective algorithms. The matched description of the multi-scale processes is provided by a specific choice of the variational principle functionals for the whole set of integrated models. All functionals of the variational principle are then approximated in space and time by splitting and decomposition methods. This approach allows us to consider separately, for example, the space-time problems of atmospheric chemistry within decomposition schemes for the integral identity sum analogs of the variational principle at each time step and in each 3D finite volume. To enhance efficiency, the set of chemical reactions is divided into subsets related to the operators of production and destruction. The idea of Euler's integrating factors is then applied within the local adjoint problem technique [1]-[3]. The analytical solutions of such adjoint problems play the role of integrating factors for the differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed into an equivalent system of integral equations. As a result we avoid the construction and inversion of preconditioning operators containing the Jacobian matrices which arise in traditional implicit schemes for ODE solution. This is the main advantage of our schemes. At the same time step, but at different stages of the "global" splitting scheme, the system of atmospheric dynamics equations is solved. For the convection-diffusion equations for all state functions in the integrated models we have developed the
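The integrating-factor idea for a single production-destruction reaction can be sketched as follows. For dc/dt = P - D*c, multiplying by exp(D*t) yields an exact one-step update without any implicit solve; the rates P and D below are illustrative, not taken from the paper's chemical mechanism.

```python
import math

def integrating_factor_step(c, P, D, dt):
    """Exact update for dc/dt = P - D*c over one step of length dt."""
    e = math.exp(-D * dt)
    return c * e + (P / D) * (1.0 - e)

# Compare against the closed-form solution c(t) = P/D + (c0 - P/D) * exp(-D*t).
c0, P, D, dt = 2.0, 1.0, 3.0, 0.01
c = c0
for _ in range(100):
    c = integrating_factor_step(c, P, D, dt)
exact = P / D + (c0 - P / D) * math.exp(-D * dt * 100)
assert abs(c - exact) < 1e-12
```

Because the step is exact for constant rates, no Jacobian assembly or inversion is needed, which mirrors the advantage claimed in the abstract.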
Model error estimation and correction by solving an inverse problem
NASA Astrophysics Data System (ADS)
Xue, Haile
2016-04-01
Nowadays, weather forecasts and climate predictions rely increasingly on numerical models. Yet errors inevitably exist in models due to imperfect numerics and parameterizations. From the practical point of view, model correction is an efficient strategy. Despite the differing complexity of forecast error correction algorithms, the general idea is to estimate the forecast errors by considering the NWP as a direct problem. Chou (1974) suggested an alternative view by considering the NWP as an inverse problem. The model error tendency term (ME) due to model deficiency is treated as an unknown term in the NWP model, which can be discretized into short intervals (for example, 6 hours) and considered constant or linear in each interval. Given past re-analyses and the NWP model, the discretized MEs in past intervals can be solved iteratively as a constant or linearly increasing tendency term in each interval. These MEs can be further used as online corrections. In this study, an iterative method for obtaining the MEs in past intervals is presented, and its convergence is confirmed with sets of experiments in the global forecast system of the Global and Regional Assimilation and Prediction System (GRAPES-GFS) for July-August (JA) 2009 and January-February (JF) 2010. These MEs were then used to derive online model corrections based on the systematic errors of GRAPES-GFS for July 2009 and January 2010. The data sets associated with the initial condition and sea surface temperature (SST) used in this study are both based on NCEP final (FNL) data. According to the iterative numerical experiments, the following key conclusions can be drawn: (1) Batches of iteration test results indicated that the 6-hour forecast errors were reduced to 10% of their original value after 20 iterations. (2) Offline comparison of the error corrections estimated by the MEs with the mean forecast errors showed that the patterns of estimated errors agree well with those
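A toy analogue (not GRAPES-GFS; model, interval, and numbers are all illustrative) of the iteration described above: forecast over one interval with the current estimate of the constant model-error tendency ME, compare against the "analysis", and correct.

```python
import numpy as np

dt = 0.1

def imperfect_step(x, me):
    # Imperfect model plus the unknown constant tendency term ME.
    return x + dt * (-0.5 * x + me)

def true_step(x):
    # "Truth" includes a forcing (0.3) that the imperfect model lacks.
    return x + dt * (-0.5 * x + 0.3)

x0 = 1.0
analysis = true_step(x0)      # stands in for a re-analysis at the interval end
me = 0.0
for _ in range(20):
    err = analysis - imperfect_step(x0, me)
    me += err / dt            # iterative correction of the ME estimate

assert abs(me - 0.3) < 1e-10  # recovers the missing tendency
```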
The inverse problem: Ocean tides derived from earth tide observations
NASA Technical Reports Server (NTRS)
Kuo, J. T.
1978-01-01
Indirect mapping of ocean tides by means of land- and island-based tidal gravity measurements is presented. The inverse scheme of linear programming is used for the indirect mapping of ocean tides. Open-ocean tides were computed by numerical integration of Laplace's tidal equations.
ERIC Educational Resources Information Center
Brown, Malcolm
2009-01-01
Inversions are fascinating phenomena. They are reversals of the normal or expected order. They occur across a wide variety of contexts. What do inversions have to do with learning spaces? The author suggests that they are a useful metaphor for the process that is unfolding in higher education with respect to education. On the basis of…
NASA Astrophysics Data System (ADS)
Natarajan, Ramesh
1992-05-01
A method for computing the desired eigenvalues and corresponding eigenvectors of a large-scale, nonsymmetric, complex generalized eigenvalue problem is described. This scheme is primarily intended for the normal mode analysis and stability characterization of the stationary states of parameterized time-dependent partial differential equations, in particular when a finite element method is used for the numerical discretization. The algorithm, which is based on the previous work of Saad, may be succinctly described as a multiple shift-and-invert, restarted Arnoldi procedure which uses reorthogonalization and automatic shift selection to provide stability and convergence while minimizing the overall computational effort. The application and efficiency of the method are illustrated using two representative test problems.
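The shift-and-invert Arnoldi step can be sketched with SciPy's ARPACK wrapper rather than the author's restarted implementation; the operator and mass matrix below are illustrative assumptions, not the paper's finite element matrices.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Hypothetical nonsymmetric discretized operator and (trivial) mass matrix.
A = sp.diags([-1.0, 2.0, -1.2], [-1, 0, 1], shape=(n, n), format="csc")
B = sp.identity(n, format="csc")

sigma = 0.5  # shift chosen near the eigenvalues of interest
# Shift-and-invert Arnoldi: eigenvalues of A x = lambda B x closest to sigma.
vals, vecs = spla.eigs(A, k=4, M=B, sigma=sigma, which="LM")

# Check the generalized eigenpair residuals.
for lam, v in zip(vals, vecs.T):
    assert np.linalg.norm(A @ v - lam * (B @ v)) < 1e-6
```

Internally the solver factorizes A - sigma*B once and runs Arnoldi on the inverted operator, which is the mechanism the abstract's "multiple shift-and-invert" procedure builds on.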
Reinforcement learning solution for HJB equation arising in constrained optimal control problem.
Luo, Biao; Wu, Huai-Ning; Huang, Tingwen; Liu, Derong
2015-11-01
The constrained optimal control problem depends on the solution of the complicated Hamilton-Jacobi-Bellman equation (HJBE). In this paper, a data-based off-policy reinforcement learning (RL) method is proposed, which learns the solution of the HJBE and the optimal control policy from real system data. One important feature of the off-policy RL is that its policy evaluation can be realized with data generated by other behavior policies, not necessarily the target policy, which solves the insufficient exploration problem. The convergence of the off-policy RL is proved by demonstrating its equivalence to the successive approximation approach. Its implementation procedure is based on the actor-critic neural networks structure, where the function approximation is conducted with linearly independent basis functions. Subsequently, the convergence of the implementation procedure with function approximation is also proved. Finally, its effectiveness is verified through computer simulations.
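For an unconstrained linear-quadratic special case, the successive-approximation (policy iteration) view mentioned above reduces to the classical Kleinman/Hewer iteration on the Riccati equation. This sketch is an assumed analogue of that equivalence, not the paper's actor-critic neural-network implementation; all matrices are illustrative.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.array([[1.0, 1.0]])          # initial stabilizing policy
for _ in range(30):
    Ac = A - B @ K
    # Policy evaluation: cost of the current policy (discrete Lyapunov eq.)
    P = solve_discrete_lyapunov(Ac.T, Q + K.T @ R @ K)
    # Policy improvement
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Successive approximation converges to the Riccati (HJB) solution.
P_are = solve_discrete_are(A, B, Q, R)
assert np.allclose(P, P_are, atol=1e-6)
```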
Khan, T.; Ramuhalli, Pradeep; Dass, Sarat
2011-06-30
Flaw profile characterization from NDE measurements is a typical inverse problem. A novel transformation of this inverse problem into a tracking problem, and subsequent application of a sequential Monte Carlo method called particle filtering, has been proposed by the authors in an earlier publication [1]. In this study, the problem of flaw characterization from multi-sensor data is considered. The NDE inverse problem is posed as a statistical inverse problem and particle filtering is modified to handle data from multiple measurement modes. The measurement modes are assumed to be independent of each other with principal component analysis (PCA) used to legitimize the assumption of independence. The proposed particle filter based data fusion algorithm is applied to experimental NDE data to investigate its feasibility.
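A minimal bootstrap particle filter on a scalar state sketches the sequential Monte Carlo machinery the abstract refers to; the dynamics, noise levels, and measurement model here are illustrative assumptions, not the NDE flaw model or the multi-sensor fusion scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 2000, 50
x_true = 0.0
particles = rng.normal(0.0, 1.0, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

for _ in range(n_steps):
    x_true = 0.95 * x_true + rng.normal(0.0, 0.1)   # hidden "state" evolution
    y = x_true + rng.normal(0.0, 0.2)               # noisy measurement

    # Propagate particles through the dynamics, then reweight by likelihood.
    particles = 0.95 * particles + rng.normal(0.0, 0.1, n_particles)
    weights *= np.exp(-0.5 * ((y - particles) / 0.2) ** 2)
    weights /= weights.sum()

    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        weights.fill(1.0 / n_particles)

estimate = np.sum(weights * particles)
assert abs(estimate - x_true) < 1.0
```

Handling multiple measurement modes, as in the paper, amounts to multiplying in one likelihood factor per (assumed independent) mode at the reweighting step.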
Inverse problems in the design, modeling and testing of engineering systems
NASA Technical Reports Server (NTRS)
Alifanov, Oleg M.
1991-01-01
Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.
On t-local solvability of inverse scattering problems in two-dimensional layered media
NASA Astrophysics Data System (ADS)
Baev, A. V.
2015-06-01
The solvability of two-dimensional inverse scattering problems for the Klein-Gordon equation and the Dirac system in a time-local formulation is analyzed in the framework of the Galerkin method. A necessary and sufficient condition for the unique solvability of these problems is obtained in the form of an energy conservation law. It is shown that the inverse problems are solvable only in the class of potentials for which the stationary Navier-Stokes equation is solvable.
A boundary integral method for an inverse problem in thermal imaging
NASA Technical Reports Server (NTRS)
Bryan, Kurt
1992-01-01
An inverse problem in thermal imaging involving the recovery of a void in a material from its surface temperature response to external heating is examined. Uniqueness and continuous dependence results for the inverse problem are demonstrated, and a numerical method for its solution is developed. This method is based on an optimization approach, coupled with a boundary integral equation formulation of the forward heat conduction problem. Some convergence results for the method are proved, and several examples are presented using computationally generated data.
NASA Astrophysics Data System (ADS)
Ashyralyyev, Charyyar; Akyüz, Gulzipa
2016-08-01
In this study, we discuss the well-posedness of a Bitsadze-Samarskii type inverse elliptic problem with Dirichlet conditions. We establish abstract results on stability and coercive stability estimates for the solution of this inverse problem. The abstract results are then applied to three overdetermined problems for the multi-dimensional elliptic equation with different boundary conditions. Stability inequalities for the solutions of these applications are obtained.
Inversion problem for ion-atom differential elastic scattering.
NASA Technical Reports Server (NTRS)
Rich, W. G.; Bobbio, S. M.; Champion, R. L.; Doverspike, L. D.
1971-01-01
The paper describes a practical application of Remler's (1971) method by which one constructs a set of phase shifts from high resolution measurements of the differential elastic scattering of protons by rare-gas atoms. These JWKB phase shifts are then formally inverted to determine the corresponding intermolecular potentials. The validity of the method is demonstrated by comparing an intermolecular potential obtained by direct inversion of experimental data with a fairly accurate calculation by Wolniewicz (1965).
NASA Astrophysics Data System (ADS)
Bürger, Raimund; Kumar, Sarvesh; Ruiz-Baier, Ricardo
2015-10-01
The sedimentation-consolidation and flow processes of a mixture of small particles dispersed in a viscous fluid at low Reynolds numbers can be described by a nonlinear transport equation for the solids concentration coupled with the Stokes problem written in terms of the mixture flow velocity and the pressure field. Here both the viscosity and the forcing term depend on the local solids concentration. A semi-discrete discontinuous finite volume element (DFVE) scheme is proposed for this model. The numerical method is constructed on a baseline finite element family of linear discontinuous elements for the approximation of velocity components and concentration field, whereas the pressure is approximated by piecewise constant elements. The unique solvability of both the nonlinear continuous problem and the semi-discrete DFVE scheme is discussed, and optimal convergence estimates in several spatial norms are derived. Properties of the model and the predicted space accuracy of the proposed formulation are illustrated by detailed numerical examples, including flows under gravity with changing direction, a secondary settling tank in an axisymmetric setting, and batch sedimentation in a tilted cylindrical vessel.
Application of the spectral Lanczos decomposition method to large-scale problems arising in geophysics
Tamarchenko, T.
1996-12-31
This paper presents an application of the Spectral Lanczos Decomposition Method (SLDM) to numerical modeling of electromagnetic diffusion and elastic wave propagation in inhomogeneous media. SLDM approximates the action of a matrix function as a linear combination of basis vectors in a Krylov subspace. I applied the method to model electromagnetic fields in three dimensions and elastic waves in two dimensions. The finite-difference approximation of the spatial part of the differential operator reduces the initial boundary-value problem to a system of ordinary differential equations with respect to time. The solution to this system requires calculating exponential and sine/cosine functions of the stiffness matrices. Large-scale numerical examples are in good agreement with the theoretical error bounds and stability estimates given by Druskin and Knizhnerman (1987).
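The Krylov idea behind SLDM can be sketched for a symmetric matrix: project onto a Lanczos basis and evaluate the function on the small tridiagonal projection. The matrix, time parameter, and subspace size below are illustrative, not from the paper's geophysical models.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal, expm

def lanczos_expm(A, v, t, m=30):
    """Approximate exp(-t*A) @ v for symmetric A with m Lanczos steps."""
    n = len(v)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    # Evaluate f on the tridiagonal projection T = V' A V via its spectrum.
    evals, evecs = eigh_tridiagonal(alpha, beta)
    fT_e1 = evecs @ (np.exp(-t * evals) * evecs[0, :])
    return np.linalg.norm(v) * (V @ fT_e1)

# Illustrative symmetric "stiffness" matrix: a 1D Laplacian.
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
v = np.random.default_rng(1).normal(size=n)

approx = lanczos_expm(A, v, t=0.5)
exact = expm(-0.5 * A) @ v
assert np.linalg.norm(approx - exact) < 1e-6
```

The same projection handles the sine/cosine functions the abstract mentions by swapping the function applied to the Ritz values.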
New problems arising from old drugs: second-generation effects of acetaminophen.
Tiegs, Gisa; Karimi, Khalil; Brune, Kay; Arck, Petra
2014-09-01
Acetaminophen (APAP)/paracetamol is one of the most commonly used over-the-counter drugs taken worldwide for the treatment of pain and fever. Although considered safe when taken in recommended doses of no more than 4 g/day, APAP overdose is currently the most important cause of acute liver failure (ALF). ALF may require liver transplantation and can be fatal. The reasons for APAP-related ALF are mostly intentional (suicidal) or unintentional overdose. However, results from large-scale epidemiological studies provide increasing evidence for second-generation effects of APAP, even when taken in pharmacological doses. Most strikingly, APAP medication during pregnancy has been associated with health problems including neurodevelopmental and behavioral disorders such as attention deficit hyperactivity disorder, as well as an increased risk of wheezing and incidence of asthma among offspring. This article reviews the epidemiological findings and aims to shed light on the molecular and cellular mechanisms responsible for APAP-mediated prenatal risk for asthma. PMID:25075430
Approximate Series Solution of Nonlinear Singular Boundary Value Problems Arising in Physiology
2014-01-01
We introduce an efficient recursive scheme based on Adomian decomposition method (ADM) for solving nonlinear singular boundary value problems. This approach is based on a modification of the ADM; here we use all the boundary conditions to derive an integral equation before establishing the recursive scheme for the solution components. In fact, we develop the recursive scheme without any undetermined coefficients while computing the solution components. Unlike the classical ADM, the proposed method avoids solving a sequence of nonlinear algebraic or transcendental equations for the undetermined coefficients. The approximate solution is obtained in the form of series with easily calculable components. The uniqueness of the solution is discussed. The convergence and error analysis of the proposed method are also established. The accuracy and reliability of the proposed method are examined by four numerical examples. PMID:24707221
NASA Astrophysics Data System (ADS)
Jiang, Mingfeng; Xia, Ling; Shou, Guofa; Tang, Min
2007-03-01
Computing epicardial potentials from body surface potentials constitutes one form of the ill-posed inverse problem of electrocardiography (ECG). To solve this ECG inverse problem, the Tikhonov regularization and truncated singular-value decomposition (TSVD) methods have commonly been used to overcome the ill-posedness by imposing constraints on the magnitudes or derivatives of the computed epicardial potentials. Such direct regularization methods, however, are impractical when the transfer matrix is large. The least-squares QR (LSQR) method, one of the iterative regularization methods based on Lanczos bidiagonalization and QR factorization, has been shown to be numerically more reliable in various circumstances than the other methods considered. This LSQR method, however, to our knowledge, has not been introduced and investigated for the ECG inverse problem. In this paper, the regularization properties of the Krylov subspace iterative LSQR method for solving the ECG inverse problem were investigated. Due to the 'semi-convergence' property of the LSQR method, the L-curve method was used to determine the stopping iteration number. The performance of the LSQR method for solving the ECG inverse problem was also evaluated based on a realistic heart-torso model simulation protocol. The results show that the inverse solutions recovered by the LSQR method were more accurate than those recovered by the Tikhonov and TSVD methods. In addition, by combining LSQR with genetic algorithms (GA), the performance can be improved further. This suggests that their combination may provide a good scheme for solving the ECG inverse problem.
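LSQR's semi-convergence, and the two norms the L-curve balances, can be illustrated on a generic ill-posed problem; the Hilbert test matrix and noise level are illustrative assumptions, not an ECG transfer matrix.

```python
import numpy as np
from scipy.linalg import hilbert
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
n = 32
A = hilbert(n)                                # severely ill-conditioned
x_true = np.ones(n)
b = A @ x_true + 1e-6 * rng.normal(size=n)    # noisy "measurements"

res_norms, sol_norms = [], []
for k in (2, 4, 8, 16):
    # atol=btol=0 and conlim=0 disable the built-in stopping tests, so the
    # iteration count k alone acts as the regularization parameter.
    x_k = lsqr(A, b, atol=0.0, btol=0.0, conlim=0.0, iter_lim=k)[0]
    res_norms.append(np.linalg.norm(A @ x_k - b))
    sol_norms.append(np.linalg.norm(x_k))

# The L-curve trades these off: the residual norm falls with the iteration
# count while the solution norm grows; the "corner" suggests where to stop.
assert all(r1 >= r2 for r1, r2 in zip(res_norms, res_norms[1:]))
assert all(s1 <= s2 for s1, s2 in zip(sol_norms, sol_norms[1:]))
```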
Remark on boundary data for inverse boundary value problems for the Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Imanuvilov, O. Yu; Yamamoto, M.
2015-10-01
In this note, we prove that for the Navier-Stokes equations, a pair of Dirichlet and Neumann data and pressure uniquely correspond to a pair of Dirichlet data and surface stress on the boundary. Hence the two inverse boundary value problems in Imanuvilov and Yamamoto (2015 Inverse Probl. 31 035004) and Lai et al (Arch. Rational Mech. Anal.) are the same.
[Anaesthetic problems arising during the surgical correction of scoliosis (Harrington technique)].
Hack, G; Schraudebach, T; Rommelsheim, K; Freiberger, K U; Picht, U
1976-04-01
Anaesthesia for the surgical correction of scoliosis with the Harrington technique carries serious risks on account of the impaired cardiac and pulmonary function, the length of the operation, the area involved, and the post-operative problems. Based on experience gained with 32 young persons who underwent this operation, the anaesthetic procedure for these cases is described: it comprises detailed pre-operative examination of cardiac and pulmonary function, continuous monitoring during the operation, a careful technique that takes into account the massive blood loss and stress associated with the operation, and close surveillance during the post-operative stage. Controlled hypotension (60 mm Hg) succeeded in reducing the blood loss during the operation to 2,500 ml, compared with 4,500 ml without hypotension. If the pre-operative examinations have established adequate cardiac function, if surgeon and anaesthetist work in close collaboration, and if the heart action, pulse, arterial and venous pressure (catheter), and body temperature are continuously monitored, then controlled hypotension offers a means to reduce the generally massive blood loss during the surgical correction of scoliosis.
Ryoo, Seung-Bum; Oh, Heung-Kwon; Ha, Heon-Kyun; Choe, Eun Kyung; Moon, Sang Hui
2012-01-01
An anorectal foreign body can cause serious complications such as incontinence, rectal perforation, peritonitis, or pelvic abscess, so it should be managed immediately. We experienced two cases of operative treatment for a self-inserted anorectal foreign body. In one, the foreign body could not be removed through the anus because it was completely impacted in the anal canal. A laparotomy was performed and the foreign body was removed through an incision in the rectum. Primary closure and a sigmoid loop colostomy were done. A colostomy take-down was done after three months. The other case was a rectal perforation from anal masturbation with a plastic device. We performed primary repair of the perforated rectosigmoid colon and created a sigmoid loop colostomy. A colostomy take-down was done three months later. Immediate and proper treatment for a self-inserted anorectal foreign body is important to prevent severe complications, and we report successful surgical treatments for problems caused by anorectal foreign bodies. PMID:22413083
NASA Astrophysics Data System (ADS)
Corrado, Cesare; Gerbeau, Jean-Frédéric; Moireau, Philippe
2015-02-01
This work addresses the inverse problem of electrocardiography from a new perspective, by combining electrical and mechanical measurements. Our strategy relies on the definition of a model of the electromechanical contraction which is registered on ECG data but also on measured mechanical displacements of the heart tissue, typically extracted from medical images. In this respect, we establish in this work the convergence of a sequential estimator which combines, for such coupled problems, various state-of-the-art sequential data assimilation methods in a unified, consistent, and efficient framework. Specifically, we aggregate a Luenberger observer for the mechanical state, a Reduced-Order Unscented Kalman Filter applied to the parameters to be identified, and a POD projection of the electrical state. Then, using synthetic data, we show the benefits of our approach for the estimation of the electrical state of the ventricles along the heart beat, compared with more classical strategies which only consider an electrophysiological model with ECG measurements. Our numerical results show that the mechanical measurements improve the identifiability of the electrical problem, allowing the electrical state of the coupled system to be reconstructed more precisely. This work is therefore intended as a first proof of concept, with theoretical justifications and numerical investigations, of the advantage of using available multi-modal observations for the estimation and identification of an electromechanical model of the heart.
Global solution to a hyperbolic problem arising in the modeling of blood flow in circulatory systems
NASA Astrophysics Data System (ADS)
Ruan, Weihua; Clark, M. E.; Zhao, Meide; Curcio, Anthony
2007-07-01
This paper considers a system of first-order, hyperbolic, partial differential equations in the domain of a one-dimensional network. The system models the blood flow in human circulatory systems as an initial-boundary-value problem with boundary conditions of either algebraic or differential type. The differential equations are nonhomogeneous with frictional damping terms and the state variables are coupled at internal junctions. The existence and uniqueness of the local classical solution have been established in our earlier work [W. Ruan, M.E. Clark, M. Zhao, A. Curcio, A hyperbolic system of equations of blood flow in an arterial network, J. Appl. Math. 64 (2) (2003) 637-667; W. Ruan, M.E. Clark, M. Zhao, A. Curcio, Blood flow in a network, Nonlinear Anal. Real World Appl. 5 (2004) 463-485; W. Ruan, M.E. Clark, M. Zhao, A. Curcio, A quasilinear hyperbolic system that models blood flow in a network, in: Charles V. Benton (Ed.), Focus on Mathematical Physics Research, Nova Science Publishers, Inc., New York, 2004, pp. 203-230]. This paper continues the analysis and gives sufficient conditions for the global existence of the classical solution. We prove that the solution exists globally if the boundary data satisfy the dissipative condition (2.3) or (3.2), and the norms of the initial and forcing functions in a certain Sobolev space are sufficiently small. This is only the first step toward establishing the global existence of the solution to physiologically realistic models, because, in general, the chosen dissipative conditions (2.3) and (3.2) do not appear to hold for the originally proposed boundary conditions (1.3)-(1.12).
NASA Astrophysics Data System (ADS)
Cheng, Jin; Hon, Yiu-Chung; Seo, Jin Keun; Yamamoto, Masahiro
2005-01-01
The Second International Conference on Inverse Problems: Recent Theoretical Developments and Numerical Approaches was held at Fudan University, Shanghai from 16-21 June 2004. The first conference in this series was held at the City University of Hong Kong in January 2002 and it was agreed to hold the conference once every two years in a Pan-Pacific Asian country. The next conference is scheduled to be held at Hokkaido University, Sapporo, Japan in July 2006. The purpose of this series of biennial conferences is to establish and develop constant international collaboration, especially among the Pan-Pacific Asian countries. In recent decades, interest in inverse problems has been flourishing all over the globe because of both the theoretical interest and practical requirements. In particular, in Asian countries, one is witnessing remarkable new trends of research in inverse problems as well as the participation of many young talents. Considering these trends, the second conference was organized with the chairperson Professor Li Tat-tsien (Fudan University), in order to provide forums for developing research cooperation and to promote activities in the field of inverse problems. Because solutions to inverse problems are needed in various applied fields, we entertained a total of 92 participants at the second conference and arranged various talks which ranged from mathematical analyses to solutions of concrete inverse problems in the real world. This volume contains 18 selected papers, all of which have undergone peer review. The 18 papers are classified as follows: Surveys: four papers give reviews of specific inverse problems. Theoretical aspects: six papers investigate the uniqueness, stability, and reconstruction schemes. Numerical methods: four papers devise new numerical methods and their applications to inverse problems. Solutions to applied inverse problems: four papers discuss concrete inverse problems such as scattering problems and inverse problems in
Comparing hard and soft prior bounds in geophysical inverse problems
NASA Technical Reports Server (NTRS)
Backus, George E.
1987-01-01
In linear inversion of a finite-dimensional data vector y to estimate a finite-dimensional prediction vector z, prior information about X_E is essential if y is to supply useful limits for z. The one exception occurs when all the prediction functionals are linear combinations of the data functionals. Two forms of prior information are compared: a soft bound on X_E is a probability distribution p_X on X which describes the observer's opinion about where X_E is likely to be in X; a hard bound on X_E is an inequality Q_X(X_E, X_E) ≤ 1, where Q_X is a positive definite quadratic form on X. A hard bound Q_X can be softened to many different probability distributions p_X, but all these p_X's carry much new information about X_E which is absent from Q_X, and some information which contradicts Q_X. Both stochastic inversion (SI) and Bayesian inference (BI) estimate z from y and a soft prior bound p_X. If that probability distribution was obtained by softening a hard prior bound Q_X, rather than by objective statistical inference independent of y, then p_X contains so much unsupported new information absent from Q_X that conclusions about z obtained with SI or BI would seem to be suspect.
Comparing hard and soft prior bounds in geophysical inverse problems
NASA Technical Reports Server (NTRS)
Backus, George E.
1988-01-01
In linear inversion of a finite-dimensional data vector y to estimate a finite-dimensional prediction vector z, prior information about X_E is essential if y is to supply useful limits for z. The one exception occurs when all the prediction functionals are linear combinations of the data functionals. Two forms of prior information are compared: a soft bound on X_E is a probability distribution p_X on X which describes the observer's opinion about where X_E is likely to be in X; a hard bound on X_E is an inequality Q_X(X_E, X_E) ≤ 1, where Q_X is a positive definite quadratic form on X. A hard bound Q_X can be softened to many different probability distributions p_X, but all these p_X's carry much new information about X_E which is absent from Q_X, and some information which contradicts Q_X. Both stochastic inversion (SI) and Bayesian inference (BI) estimate z from y and a soft prior bound p_X. If that probability distribution was obtained by softening a hard prior bound Q_X, rather than by objective statistical inference independent of y, then p_X contains so much unsupported new information absent from Q_X that conclusions about z obtained with SI or BI would seem to be suspect.
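The contrast between a hard quadratic bound and one particular Gaussian softening of it can be sketched on a toy underdetermined linear inversion; the operator, variances, and bound below are illustrative assumptions, not Backus's geophysical setting.

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.normal(size=(5, 10))          # underdetermined forward operator
x_true = 2.0 * np.ones(10)
y = G @ x_true + 0.01 * rng.normal(size=5)

# Soft bound: Gaussian prior x ~ N(0, s2*I), one of many possible softenings
# of the hard bound |x|^2 <= 1. Stochastic-inversion / MAP estimate:
s2, n2 = 0.1, 1e-4                    # assumed prior and noise variances
x_soft = np.linalg.solve(G.T @ G / n2 + np.eye(10) / s2, G.T @ y / n2)

# Hard bound enforced directly: damped least squares, with the damping found
# by bisection so that the constraint |x|^2 <= 1 is respected.
lam_lo, lam_hi = 0.0, 1e6
for _ in range(100):
    lam = 0.5 * (lam_lo + lam_hi)
    x_hard = np.linalg.solve(G.T @ G + lam * np.eye(10), G.T @ y)
    if x_hard @ x_hard > 1.0:
        lam_lo = lam                  # constraint violated: damp harder
    else:
        lam_hi = lam

assert x_hard @ x_hard <= 1.0 + 1e-6
assert np.isfinite(x_soft).all()
```

The two estimates generally differ, illustrating the abstract's point that a softened bound injects information (here, the specific prior variance s2) that the hard bound never contained.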
Variational principles and optimal solutions of the inverse problems of creep bending of plates
NASA Astrophysics Data System (ADS)
Bormotin, K. S.; Oleinikov, A. I.
2012-09-01
It is shown that inverse problems of steady-state creep bending of plates in both the geometrically linear and nonlinear formulations can be represented in a variational formulation. Steady-state values of the obtained functionals corresponding to the solutions of the problems of inelastic deformation and elastic unloading are determined by applying a finite element procedure to the functionals. Optimal laws of creep deformation are formulated using the criterion of minimizing damage in the functionals of the inverse problems. The formulated problems are reduced to the problems solved by the finite element method using MSC.Marc software.
NASA Astrophysics Data System (ADS)
Grigoriev, M.; Babich, L.
2015-09-01
The article surveys the main noninvasive methods of examining the heart's electrical activity, the theoretical basis for solving the inverse problem of electrocardiography, the application of different methods of heart examination in clinical practice, and the achievements in this field worldwide.
NASA Astrophysics Data System (ADS)
Denisov, A. M.; Zakharov, E. V.; Kalinin, A. V.; Kalinin, V. V.
2010-07-01
A numerical method is proposed for solving an inverse electrocardiography problem for a medium with a piecewise constant electrical conductivity. The method is based on boundary integral equations combined with Tikhonov regularization.
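Once the boundary integral equations are discretized, the Tikhonov step is a regularized least-squares solve. A generic sketch with an illustrative smoothing kernel standing in for the discretized integral operator (not the actual ECG transfer operator):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 80
t = np.linspace(0.0, 1.0, n)
# Illustrative smoothing kernel: severely ill-conditioned, like a first-kind
# integral operator.
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.01) / n
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-4 * rng.normal(size=n)     # noisy boundary data

def tikhonov(A, b, lam):
    # Minimize |A x - b|^2 + lam * |x|^2.
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

x_naive = np.linalg.pinv(A, rcond=1e-16) @ b   # unregularized: noise explodes
x_reg = tikhonov(A, b, 1e-6)
assert np.linalg.norm(x_reg - x_true) < np.linalg.norm(x_naive - x_true)
```

The regularization parameter (1e-6 here) is an assumed value; in practice it would be chosen by a criterion such as the discrepancy principle or the L-curve.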
The numerical solution of the boundary inverse problem for a parabolic equation
NASA Astrophysics Data System (ADS)
Vasil'ev, V. V.; Vasilyeva, M. V.; Kardashevsky, A. M.
2016-10-01
Boundary inverse problems occupy an important place among the inverse problems of mathematical physics. They arise in diagnostic problems in which additional measurements on one boundary, or inside the computational domain, are needed to recover the boundary regime on another boundary that is inaccessible to direct measurement. Boundary inverse problems belong to the class of conditionally well-posed problems, and their numerical solution therefore requires the development of special computational algorithms. This paper deals with the solution of the boundary inverse problem for one-dimensional second-order parabolic equations, which consists in recovering the boundary regime from measurements inside the computational domain. For the numerical solution of the inverse problem we propose an analogue of the computational algorithm developed for identifying the right-hand side of parabolic equations in the works of P. N. Vabishchevich and his students, based on a special decomposition of the solution at each time layer. We present and discuss the results of a computational experiment conducted on model problems with quasi-solutions, including cases with random errors in the input data.
Some Inverse Problems in Periodic Homogenization of Hamilton-Jacobi Equations
NASA Astrophysics Data System (ADS)
Luo, Songting; Tran, Hung V.; Yu, Yifeng
2016-09-01
We look at the effective Hamiltonian H̄ associated with the Hamiltonian H(p,x) = H(p) + V(x) in periodic homogenization theory. Our central goal is to understand the relation between V and H̄. We formulate some inverse problems concerning this relation. Such types of inverse problems are, in general, very challenging. In this paper, we discuss several special cases in both convex and nonconvex settings.
Review of the inverse scattering problem at fixed energy in quantum mechanics
NASA Technical Reports Server (NTRS)
Sabatier, P. C.
1972-01-01
Methods of solution of the inverse scattering problem at fixed energy in quantum mechanics are presented. Scattering experiments of a beam of particles at nonrelativistic energy by a target made up of particles are analyzed. The Schroedinger equation is used to develop the quantum mechanical description of the system in terms of one of several functions depending on the relative distance of the particles. The inverse problem is the construction of the potentials from experimental measurements.
Coll-Font, Jaume; Burton, Brett M; Tate, Jess D; Erem, Burak; Swenson, Darrel J; Wang, Dafang; Brooks, Dana H; van Dam, Peter; Macleod, Rob S
2015-01-01
Cardiac electrical imaging often requires the examination of different forward and inverse problem formulations based on mathematical and numerical approximations of the underlying source and the intervening volume conductor that can generate the associated voltages on the surface of the body. If the goal is to recover the source on the heart from body surface potentials, the solution strategy must include numerical techniques that can incorporate appropriate constraints and recover useful solutions, even though the problem is badly posed. Creating complete software solutions to such problems is a daunting undertaking. In order to make such tools more accessible to a broad array of researchers, the Center for Integrative Biomedical Computing (CIBC) has made an ECG forward/inverse toolkit available within the open source SCIRun system. Here we report on three new methods added to the inverse suite of the toolkit. These new algorithms, namely a Total Variation method, a non-decreasing TMP inverse and a spline-based inverse, consist of two inverse methods that take advantage of the temporal structure of the heart potentials and one that leverages the spatial characteristics of the transmembrane potentials. These three methods further expand the possibilities of researchers in cardiology to explore and compare solutions to their particular imaging problem. PMID:26618184
FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)
NASA Astrophysics Data System (ADS)
2014-10-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 and May 2013, (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) Workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the
FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems
NASA Astrophysics Data System (ADS)
Vourc'h, Eric; Rodet, Thomas
2015-11-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods
Hybrid modeling of direct and inverse problems of heat conduction
NASA Astrophysics Data System (ADS)
Matsevityi, Yu. M.
1981-02-01
The article explains the method of solving nonlinear problems of heat conduction with the aid of hybrid computer systems. It examines the possibility of using hybrid systems for realizing the method of optimum dynamic filtration.
NASA Astrophysics Data System (ADS)
Tenorio, L.; Haber, E.; Symes, W. W.; Stark, P. B.; Cox, D.; Ghattas, O.
2008-06-01
In the words of D D Jackson, the data of real-world inverse problems tend to be inaccurate, insufficient and inconsistent (1972 Geophys. J. R. Astron. Soc. 28 97-110). In view of these features, the characterization of solution uncertainty is an essential aspect of the study of inverse problems. The development of computational technology, in particular of multiscale and adaptive methods and robust optimization algorithms, has combined with advances in statistical methods in recent years to create unprecedented opportunities to understand and explore the role of uncertainty in inversion. Following this introductory article, the special section contains 16 papers describing recent statistical and computational advances in a variety of inverse problem settings.
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.; Park, C.B.
2005-01-01
In a set of two papers we study the inverse problem of refraction travel times. The purpose of this work is to use the study as a basis for development of more sophisticated methods for finding more reliable solutions to the inverse problem of refraction travel times, which is known to be nonunique. The first paper, "Types of Geophysical Nonuniqueness through Minimization," emphasizes the existence of different forms of nonuniqueness in the realm of inverse geophysical problems. Each type of nonuniqueness requires a different type and amount of a priori information to acquire a reliable solution. Based on such coupling, a nonuniqueness classification is designed. Therefore, since most inverse geophysical problems are nonunique, each inverse problem must be studied to define what type of nonuniqueness it belongs to and thus determine what type of a priori information is necessary to find a realistic solution. The second paper, "Quantifying Refraction Nonuniqueness Using a Three-layer Model," serves as an example of such an approach. However, its main purpose is to provide a better understanding of the inverse refraction problem by studying the type of nonuniqueness it possesses. An approach for obtaining a realistic solution to the inverse refraction problem is planned to be offered in a third paper that is in preparation. The main goal of this paper is to redefine the existing generalized notion of nonuniqueness and a priori information by offering a classified, discriminate structure. Nonuniqueness is often encountered when trying to solve inverse problems. However, possible nonuniqueness diversity is typically neglected and nonuniqueness is regarded as a whole, as an unpleasant "black box" and is approached in the same manner by applying smoothing constraints, damping constraints with respect to the solution increment and, rarely, damping constraints with respect to some sparse reference information about the true parameters. In practice, when solving geophysical
Hemmelmayr, Vera C.; Cordeau, Jean-François; Crainic, Teodor Gabriel
2012-01-01
In this paper, we propose an adaptive large neighborhood search heuristic for the Two-Echelon Vehicle Routing Problem (2E-VRP) and the Location Routing Problem (LRP). The 2E-VRP arises in two-level transportation systems such as those encountered in the context of city logistics. In such systems, freight arrives at a major terminal and is shipped through intermediate satellite facilities to the final customers. The LRP can be seen as a special case of the 2E-VRP in which vehicle routing is performed only at the second level. We have developed new neighborhood search operators by exploiting the structure of the two problem classes considered and have also adapted existing operators from the literature. The operators are used in a hierarchical scheme reflecting the multi-level nature of the problem. Computational experiments conducted on several sets of instances from the literature show that our algorithm outperforms existing solution methods for the 2E-VRP and achieves excellent results on the LRP. PMID:23483764
Numerical computations on one-dimensional inverse scattering problems
NASA Technical Reports Server (NTRS)
Dunn, M. H.; Hariharan, S. I.
1983-01-01
An approximate method to determine the index of refraction of a dielectric obstacle is presented. For simplicity, one-dimensional models of electromagnetic scattering are treated. The governing equations yield a second-order boundary value problem in which the index of refraction appears as a functional parameter. The availability of reflection coefficients yields two additional boundary conditions. The index of refraction is approximated by a k-th order spline, which can be written as a linear combination of B-splines. For N distinct reflection coefficients, the resulting N boundary value problems yield a system of N nonlinear equations in N unknowns, which are the coefficients of the B-splines.
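The "linear combination of B-splines" representation used above can be made concrete with the Cox-de Boor recursion. A minimal sketch (the uniform knot vector and spline order below are illustrative choices, not those of the paper):

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value at t of the i-th B-spline of order k."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + k - 1] - knots[i]
    if d1 > 0:
        left = (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + k] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

def spline_value(coeffs, k, t, knots):
    """n(t) ~ sum_i c_i B_{i,k}(t): the linear-combination form of the unknown."""
    return sum(c * bspline_basis(i, k, t, knots) for i, c in enumerate(coeffs))
```

In the inverse problem the coefficients c_i are the unknowns; each measured reflection coefficient contributes one nonlinear equation constraining them.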
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions, and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method in which a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
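The overall shape of an MLSL-style search — launch local optimizations from many sampled starting points and keep the distinct minima they reach — can be sketched on a one-dimensional toy objective. This is purely illustrative: the crude step-halving search below stands in for MADS, and the function, sampling counts, and deduplication tolerance are made up:

```python
import random

def local_descent(f, x, step=0.1, tol=1e-6):
    # crude derivative-free local phase (a stand-in for MADS):
    # step greedily while improvement is possible, otherwise halve the step
    while step > tol:
        if f(x + step) < f(x):
            x += step
        elif f(x - step) < f(x):
            x -= step
        else:
            step *= 0.5
    return x

def multistart(f, lo, hi, n_starts=50, seed=0, dedup=0.1):
    # MLSL-flavoured global phase: random starts, collect distinct local minima
    random.seed(seed)
    minima = []
    for _ in range(n_starts):
        x = local_descent(f, random.uniform(lo, hi))
        if all(abs(x - m) > dedup for m in minima):
            minima.append(x)
    return sorted(minima)
```

On f(x) = (x^2 - 1)^2 the scheme recovers both minima x = -1 and x = +1, which is exactly the "multiple solutions" behaviour the paper's algorithm is after (real MLSL also filters starting points to avoid redundant local searches).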
An inverse problem of thickness design for bilayer textile materials under low temperature
NASA Astrophysics Data System (ADS)
Xu, Dinghua; Cheng, Jianxin; Chen, Yuanbo; Ge, Meibao
2011-04-01
The human heat-moisture comfort level is mainly determined by the heat and moisture transfer characteristics of clothing. Based on a model of steady-state heat and moisture transfer through parallel-pore textiles, we propose in this paper an inverse problem of thickness design for bilayer textile materials under low temperature. Adopting the idea of the regularization method, we formulate the inverse problem as a function minimization problem. Combining the finite difference method for ordinary differential equations with a direct search method for one-dimensional minimization, we derive three iteration algorithms for the regularized solution of the inverse problem of thickness design. Numerical simulations verify the efficiency of the proposed methods.
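A standard direct search for one-dimensional minimization of the kind mentioned above is golden-section search. The sketch below is generic (it is not tied to the paper's textile model; the test objective is an arbitrary parabola):

```python
def golden_section_min(f, a, b, tol=1e-6):
    """Derivative-free 1-D minimization of a unimodal f on [a, b]."""
    phi = (5 ** 0.5 - 1) / 2          # inverse golden ratio, ~0.618
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):               # minimum lies in [a, d]
            b, d = d, c
            c = b - phi * (b - a)
        else:                         # minimum lies in [c, b]
            a, c = c, d
            d = a + phi * (b - a)
    return 0.5 * (a + b)
```

In a thickness-design setting, f would be the regularized misfit evaluated by solving the direct heat-moisture problem for a candidate thickness.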
Inverse problems in geographical economics: parameter identification in the spatial Solow model.
Engbers, Ralf; Burger, Martin; Capasso, Vincenzo
2014-11-13
The identification of production functions from data is an important task in the modelling of economic growth. In this paper, we consider a non-parametric approach to this identification problem in the context of the spatial Solow model which allows for rather general production functions, in particular convex-concave ones that have recently been proposed as reasonable shapes. We formulate the inverse problem and apply Tikhonov regularization. The inverse problem is discretized by finite elements and solved iteratively via a preconditioned gradient descent approach. Numerical results for the reconstruction of the production function are given and analysed at the end of this paper.
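The iterative strategy described above — minimize a Tikhonov functional by gradient descent — can be sketched generically. The toy linear system, step size, and iteration count below are illustrative assumptions; no preconditioning or finite-element discretization is included:

```python
def gd_tikhonov(A, b, alpha, step=0.25, iters=200):
    """Minimize ||A x - b||^2 + alpha * ||x||^2 by plain gradient descent."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        # gradient g = 2 A^T r + 2 alpha x
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) + 2.0 * alpha * x[j]
             for j in range(n)]
        x = [x[j] - step * g[j] for j in range(n)]
    return x
```

With alpha = 0 this converges to the least-squares solution; a positive alpha biases the iterates toward small-norm solutions, trading data fit for stability.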
Nonlinear inverse problem for the estimation of time-and-space dependent heat transfer coefficients
NASA Astrophysics Data System (ADS)
Osman, A. M.; Beck, J. V.
1987-01-01
The aim of this paper is to describe a method and an algorithm for the direct estimation of time- and space-dependent heat transfer coefficients from transient temperature data measured at appropriate points inside a heat-conducting solid. This inverse estimation problem is called herein the inverse heat transfer coefficient problem. An application considered in the present paper is the quenching of a solid in a liquid. The solution method used here is an extension of the sequential temperature future-information method introduced by Beck for solving the inverse heat conduction problem. The finite-difference method, based on the control volume approach, was used for the discretization of the direct heat conduction problem. Numerical results show that the proposed method is accurate and efficient.
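As a much-simplified illustration of recovering a heat transfer coefficient from temperature data, consider a lumped-capacitance body cooling in a quench bath: two temperature samples fix the exponential decay rate, and hence h. This toy model is not the sequential future-information method of the paper, and all parameter values below are made up:

```python
import math

def estimate_h(t1, T1, t2, T2, T_inf, C):
    """Lumped-capacitance cooling: T(t) = T_inf + (T0 - T_inf) * exp(-h t / C),
    with C = rho * c * V / A.  Two samples determine the decay rate, hence h."""
    rate = math.log((T1 - T_inf) / (T2 - T_inf)) / (t2 - t1)
    return rate * C

# synthetic check: simulate cooling with h = 100, then recover it
C, T_inf, T0, h_true = 5000.0, 20.0, 100.0, 100.0
T = lambda t: T_inf + (T0 - T_inf) * math.exp(-h_true * t / C)
h_est = estimate_h(10.0, T(10.0), 50.0, T(50.0), T_inf, C)
```

The full problem in the paper is far harder because h varies in time and space and the temperature field obeys a PDE rather than a single ODE, which is what makes sequential estimation with future information necessary.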
Unrealistic parameter estimates in inverse modelling: A problem or a benefit for model calibration?
Poeter, E.P.; Hill, M.C.
1996-01-01
Estimation of unrealistic parameter values by inverse modelling is useful for constructed model discrimination. This utility is demonstrated using the three-dimensional, groundwater flow inverse model MODFLOWP to estimate parameters in a simple synthetic model where the true conditions and character of the errors are completely known. When a poorly constructed model is used, unreasonable parameter values are obtained even when using error free observations and true initial parameter values. This apparent problem is actually a benefit because it differentiates accurately and inaccurately constructed models. The problems seem obvious for a synthetic problem in which the truth is known, but are obscure when working with field data. Situations in which unrealistic parameter estimates indicate constructed model problems are illustrated in applications of inverse modelling to three field sites and to complex synthetic test cases in which it is shown that prediction accuracy also suffers when constructed models are inaccurate.
Moment inversion problem for piecewise D-finite functions
NASA Astrophysics Data System (ADS)
Batenkov, Dmitry
2009-10-01
We consider the problem of exact reconstruction of univariate functions with jump discontinuities at unknown positions from their moments. These functions are assumed to satisfy an a priori unknown linear homogeneous differential equation with polynomial coefficients on each continuity interval. Therefore, they may be specified by a finite amount of information. This reconstruction problem has practical importance in signal processing and other applications. It is somewhat of a 'folklore' that the sequence of the moments of such 'piecewise D-finite' functions satisfies a linear recurrence relation of bounded order and degree. We derive this recurrence relation explicitly. It turns out that the coefficients of the differential operator which annihilates every piece of the function, as well as the locations of the discontinuities, appear in this recurrence in a precisely controlled manner. This leads to the formulation of a generic algorithm for reconstructing a piecewise D-finite function from its moments. We investigate the conditions for solvability of resulting linear systems in the general case, as well as analyse a few particular examples. We provide results of numerical simulations for several types of signals, which test the sensitivity of the proposed algorithm to noise.
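For the very simplest piecewise case — a constant value on each side of a single jump — the first few moments determine the jump location and both values in closed form, which conveys the flavour of such reconstructions. This Prony-type shortcut is illustrative only; it is not the recurrence-based algorithm of the paper:

```python
def moments_one_jump(c1, c2, xi):
    """m_k = integral_0^1 t^k f(t) dt for f = c1 on [0, xi], c2 on [xi, 1]."""
    return [(c1 * xi ** (k + 1) + c2 * (1 - xi ** (k + 1))) / (k + 1)
            for k in range(3)]

def recover_one_jump(m0, m1, m2):
    """Scaled moments M_k = (k+1) m_k satisfy M_k = c2 + (c1 - c2) * xi**(k+1),
    a Prony-type system solvable in closed form."""
    M0, M1, M2 = m0, 2.0 * m1, 3.0 * m2
    c2 = (M0 * M2 - M1 * M1) / (M0 + M2 - 2.0 * M1)
    xi = (M1 - c2) / (M0 - c2)
    c1 = c2 + (M0 - c2) / xi
    return c1, c2, xi
```

Three moments suffice here because the function has three degrees of freedom; the paper's D-finite setting generalizes this idea to pieces that satisfy differential equations with polynomial coefficients.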
The inverse problem in electrocardiography: solutions in terms of epicardial potentials.
Rudy, Y; Messinger-Rapport, B J
1988-01-01
The objective of the inverse problem in electrocardiography is to recover noninvasively regional information about intracardiac electrical events from electrical measurements on the body surface. The choice of epicardial potentials as the solution to the inverse problem is motivated by the availability of a unique epicardial potential solution for each body surface potential distribution, by the ability to verify experimentally the inverse-recovered epicardial potentials, by the proven relationship between epicardial potentials and the details of intracardiac regional events, and by the possibility of using the inverse solution as a supplement or possible replacement to clinical epicardial potential mapping prior to surgical intervention. Although, in principle, the epicardial potential distribution can be recovered from the body surface potential distribution, the inverse problem in terms of potentials is ill-posed, and naive attempts to reconstruct the epicardial potentials result in incorrect solutions which are highly oscillatory. Large deviations from the actual solution may result from inaccuracy of the data measurement, incomplete knowledge of the potential data over the entire torso, and inaccurate description of the inhomogeneous torso volume conductor. This review begins with a mathematical and qualitative description of the inverse problem in terms of epicardial potentials. The ill-posed nature of the problem is demonstrated using a theoretical boundary value problem. Effects of inaccuracies in the body surface potential data (stability estimates) are introduced, and a sensitivity analysis of geometrical and inhomogeneity parameters is presented using an analytical eccentric spheres model. Various computational methods for relating epicardial to body surface potentials, i.e., the computation of the forward transfer matrix, are described and compared. The need for regularization of the inverse recovery of epicardial potentials, resulting from the need to
A direct analytical approach for solving linear inverse heat conduction problems
NASA Astrophysics Data System (ADS)
Ainajem, N. M.; Ozisik, M. N.
1985-08-01
The analytical approach presented for the solution of linear inverse heat conduction problems demonstrates that applied surface conditions involving abrupt changes with time can be effectively accommodated with polynomial representations in time over the entire time domain; the resulting inverse analysis predicts surface conditions accurately. All previous attempts have experienced difficulties in the development of analytic solutions that are applicable over the entire time domain when a polynomial representation is used.
Inverse problems: Fuzzy representation of uncertainty generates a regularization
NASA Technical Reports Server (NTRS)
Kreinovich, V.; Chang, Ching-Chuang; Reznik, L.; Solopchenko, G. N.
1992-01-01
In many applied problems (geophysics, medicine, and astronomy) we cannot directly measure the values x(t) of the desired physical quantity x at different moments of time, so we measure some related quantity y(t) and then try to reconstruct the desired values x(t). This problem is often ill-posed in the sense that two essentially different functions x(t) are consistent with the same measurement results. So, in order to get a reasonable reconstruction, we must have some additional prior information about the desired function x(t). Methods that use this information to choose x(t) from the set of all possible solutions are called regularization methods. In some cases, we know the statistical characteristics both of x(t) and of the measurement errors, so we can apply statistical filtering methods (well developed since the invention of the Wiener filter). In some situations, we know the properties of the desired process, e.g., we know that the derivative of x(t) is limited by some number delta, etc. In this case, we can apply standard regularization techniques (e.g., Tikhonov's regularization). In many cases, however, we have only uncertain knowledge about the values of x(t), about the rate at which the values of x(t) can change, and about the measurement errors. In these cases, usually one of the existing regularization methods is applied. There exist several heuristics for choosing such a method. The problem with these heuristics is that they often lead to choosing different methods, and these methods lead to different functions x(t). Therefore, the results x(t) of applying these heuristic methods are often unreliable. We show that if we use fuzzy logic to describe this uncertainty, then we automatically arrive at a unique regularization method, whose parameters are uniquely determined by the experts' knowledge. Although we start with a fuzzy description, the resulting regularization turns out to be quite crisp.
From Bayes to Tarantola: New insights to understand uncertainty in inverse problems
NASA Astrophysics Data System (ADS)
Fernández-Martínez, J. L.; Fernández-Muñiz, Z.; Pallero, J. L. G.; Pedruelo-González, L. M.
2013-11-01
Anyone working on inverse problems is aware of their ill-posed character. In the case of inverse problems, this concept (ill-posedness), proposed by J. Hadamard in 1902, admits revision, since it is somehow related to their ill-conditioning and to the use of local optimization methods to find their solution. A more general and interesting approach regarding risk analysis and epistemological decision making would consist in analyzing the existence of families of equivalent model parameters that are compatible with the prior information and predict the observed data within the same error bounds. Put differently, the ill-posed character of discrete inverse problems (ill-conditioning) means that their solution is uncertain. Traditionally, nonlinear inverse problems in discrete form have been solved via local optimization methods with regularization, but linear analysis techniques fail to account for the uncertainty in the solution that is adopted. As a result, uncertainty analysis in nonlinear inverse problems has been approached in a probabilistic framework (the Bayesian approach), but these methods are hindered by the curse of dimensionality and by the high computational cost needed to solve the corresponding forward problems. Global optimization techniques are very attractive, but most of the time they are heuristic and have the same limitations as Monte Carlo methods. New research is needed to provide uncertainty estimates, especially in the case of high dimensional nonlinear inverse problems with very costly forward problems. After the discredit of deterministic methods and some initial years of Bayesian fever, the pendulum now seems to swing back, because practitioners are aware that uncertainty analysis in high dimensional nonlinear inverse problems cannot (and should not) be solved via random sampling methodologies. The main reason is that the uncertainty "space" of nonlinear inverse problems has a mathematical structure that is embedded in the
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
NASA Astrophysics Data System (ADS)
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers for structural health monitoring of thin-walled structures. This is because Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and little attenuation, which brings the prospect of monitoring large structures with few sensors/actuators. However, damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. The problem is made more challenging by the confounding factors of statistical variation of the material and geometric properties, and it may also be ill-posed. Due to all these complexities, a direct solution of the problem of damage detection and identification in SHM is impossible. Therefore an indirect method, using the solution of the "forward problem", is popular for solving the "inverse problem". This requires a fast forward-problem solver. Because of the complexities involved in the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are slow and practically impossible to use in structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately, to assist the inverse-problem solver.
Cameron, M.K.; Fomel, S.B.; Sethian, J.A.
2009-01-01
In the present work we derive and study a nonlinear elliptic PDE coming from the problem of estimating the sound speed inside the Earth. The physical setting of the PDE allows us to pose only a Cauchy problem, and hence it is ill-posed. However, we are still able to solve it numerically on a long enough time interval to be of practical use. We used two approaches. The first approach is a finite-difference time-marching numerical scheme inspired by the Lax-Friedrichs method. The key features of this scheme are the Lax-Friedrichs averaging and the wide stencil in space. The second approach is a spectral Chebyshev method with truncated series. We show that our schemes work because of (1) the special input corresponding to a positive finite seismic velocity, (2) special initial conditions corresponding to the image rays, (3) the fact that our finite-difference scheme contains small error terms which damp the high harmonics (and, in the spectral method, the truncation of the Chebyshev series), and (4) the need to compute the solution only for a short interval of time. We test our numerical scheme on a collection of analytic examples and demonstrate a dramatic improvement in accuracy in the estimation of the sound speed inside the Earth in comparison with the conventional Dix inversion. Our test on the Marmousi example confirms the effectiveness of the proposed approach.
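The Lax-Friedrichs idea — replace the centre value by the average of its neighbours, adding stabilizing numerical dissipation — is easiest to see on the model advection equation u_t + a u_x = 0. This generic one-step scheme is illustrative; it is not the paper's elliptic time-marching solver:

```python
def lax_friedrichs_step(u, a, dt, dx):
    """One Lax-Friedrichs update for u_t + a u_x = 0 on a periodic grid:
    neighbour average (the damping term) plus a centred flux difference."""
    n = len(u)
    return [0.5 * (u[(j + 1) % n] + u[(j - 1) % n])
            - a * dt / (2.0 * dx) * (u[(j + 1) % n] - u[(j - 1) % n])
            for j in range(n)]
```

At the CFL limit a*dt/dx = 1 the update reduces to an exact one-cell shift; below it, the averaging damps high harmonics, which is precisely the property the authors exploit to tame the ill-posed Cauchy problem.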
Zabaras, N.; Ganapathysubramanian, B.
2008-04-20
Experimental evidence suggests that the dynamics of many physical phenomena are significantly affected by the underlying uncertainties associated with variations in properties and fluctuations in operating conditions. Recent developments in stochastic analysis have opened the possibility of realistic modeling of such systems in the presence of multiple sources of uncertainties. These advances raise the possibility of solving the corresponding stochastic inverse problem: the problem of designing/estimating the evolution of a system in the presence of multiple sources of uncertainty given limited information. A scalable, parallel methodology for stochastic inverse/design problems is developed in this article. The representation of the underlying uncertainties and the resultant stochastic dependent variables is performed using a sparse grid collocation methodology. A novel stochastic sensitivity method is introduced based on multiple solutions to deterministic sensitivity problems. The stochastic inverse/design problem is transformed to a deterministic optimization problem in a larger-dimensional space that is subsequently solved using deterministic optimization algorithms. The design framework relies entirely on deterministic direct and sensitivity analysis of the continuum systems, thereby significantly enhancing the range of applicability of the framework for the design in the presence of uncertainty of many other systems usually analyzed with legacy codes. Various illustrative examples with multiple sources of uncertainty including inverse heat conduction problems in random heterogeneous media are provided to showcase the developed framework.
Terekhov, Alexander V; Zatsiorsky, Vladimir M
2011-02-01
One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of the infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423-453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907
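As a toy illustration of the linear-additive setting (not the paper's actual methods), consider recovering the weights of an additive quadratic cost min Σ w_i x_i² under a single linear constraint Σ x_i = C. The stationarity condition 2 w_i x_i = λ makes each component of an observed optimum inversely proportional to its weight, so synthetic optima determine the weights up to normalization; the weights below are made up for the sketch:

```python
import numpy as np

# True (unknown) weights of the additive quadratic cost sum_i w_i * x_i**2
w_true = np.array([1.0, 2.0, 4.0])
w_true /= w_true.sum()

def solve_forward(C, w):
    # Minimizer of sum_i w_i x_i^2 subject to sum_i x_i = C:
    # stationarity 2 w_i x_i = lambda gives x_i proportional to 1/w_i.
    inv = 1.0 / w
    return C * inv / inv.sum()

# Synthetic "observed" optimal actions for several task constraints C
Cs = np.array([1.0, 2.0, 3.5])
X = np.array([solve_forward(C, w_true) for C in Cs])

# Inverse optimization: each observed optimum satisfies x_i ∝ C / w_i,
# so estimate w_i ∝ C / x_i (averaged over trials) and normalize.
w_hat = np.mean(Cs[:, None] / X, axis=0)
w_hat /= w_hat.sum()
```

With noisy observations the averaging step becomes a least-squares fit, which is where the uniqueness conditions of the theorem start to matter.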
Application of the Biot model to ultrasound in bone: inverse problem.
Sebaa, N; Fellah, Z A; Fellah, M; Ogam, E; Mitri, F G; Depollier, C; Lauriks, W
2008-07-01
This paper concerns the ultrasonic characterization of human cancellous bone samples by solving the inverse problem using experimentally measured signals. The inverse problem is solved numerically by the least squares method. Five parameters are inverted: porosity, tortuosity, viscous characteristic length, Young's modulus, and Poisson's ratio of the skeletal frame. The minimization of the discrepancy between experiment and theory is performed in the time domain. The ultrasonic propagation in cancellous bone is modelled using the Biot theory modified by the Johnson-Koplik-Dashen model for viscous exchange between fluid and structure. A sensitivity study of Young's modulus and Poisson's ratio of the skeletal frame shows their effect on the fast and slow waveforms. The inverse problem is shown to be well posed, and its solution to be unique. Experimental results for slow and fast waves transmitted through human cancellous bone samples are given and compared with theoretical predictions.
Ultrasonic characterization of human cancellous bone using the Biot theory: inverse problem.
Sebaa, N; Fellah, Z E A; Fellah, M; Ogam, E; Wirgin, A; Mitri, F G; Depollier, C; Lauriks, W
2006-10-01
This paper concerns the ultrasonic characterization of human cancellous bone samples by solving the inverse problem using experimental transmitted signals. The ultrasonic propagation in cancellous bone is modeled using the Biot theory modified by the Johnson et al. model for viscous exchange between fluid and structure. A sensitivity study of Young's modulus and Poisson's ratio of the skeletal frame shows their effect on the fast and slow waveforms. The inverse problem is solved numerically by the least squares method. Five parameters are inverted: the porosity, tortuosity, viscous characteristic length, Young's modulus, and Poisson's ratio of the skeletal frame. The minimization of the discrepancy between experiment and theory is performed in the time domain. The inverse problem is shown to be well posed, and its solution to be unique. Experimental results for slow and fast waves transmitted through human cancellous bone samples are given and compared with theoretical predictions.
A numerical method for solving a stochastic inverse problem for parameters.
Butler, T; Estep, D
2013-02-01
We review recent work (Briedt et al., 2011, 2012) on a new approach to the formulation and solution of the stochastic inverse parameter determination problem, i.e., determine the random variation of input parameters to a map that matches specified random variation in the output of the map, and then apply the various aspects of this method to the interesting Brusselator model. In this approach, the problem is formulated as an inverse problem for an integral equation using the Law of Total Probability. The solution method employs two steps: (1) we construct a systematic method for approximating set-valued inverse solutions and (2) we construct a computational approach to compute a measure-theoretic approximation of the probability measure on the input space imparted by the approximate set-valued inverse that solves the inverse problem. In addition to convergence analysis, we carry out an a posteriori error analysis on the computed probability distribution that takes into account all sources of stochastic and deterministic error. PMID:24347806
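A minimal sample-based version of the two-step procedure can be illustrated on a scalar map Q(λ) = λ² with a uniform observed output density (an illustrative choice, not the Brusselator model): approximate the set-valued inverses of output bins by cells in input space, then impart each bin's observed probability to its inverse set:

```python
import numpy as np

# Map from input parameter lam in [0, 1] to output Q(lam); the observed output
# is described by a probability density on Q's range (here: uniform on [0, 1]).
Q = lambda lam: lam ** 2

# Step 1: cell-based approximation of the set-valued inverses Q^{-1}(I_k)
lam = np.linspace(0.0, 1.0, 100001)[:-1] + 0.5e-5  # cell centers in input space
q = Q(lam)

# Partition the output range and give each output bin its observed probability
edges = np.linspace(0.0, 1.0, 21)
out_prob = np.diff(edges)          # uniform output density -> equal bin masses

# Step 2: impart each bin's probability to its approximate inverse set,
# spread uniformly (w.r.t. Lebesgue measure) over the cells mapping into it
bin_of = np.digitize(q, edges) - 1
p_cell = np.zeros(lam.size)
for k in range(edges.size - 1):
    members = bin_of == k
    p_cell[members] = out_prob[k] / members.sum()

# p_cell is a probability measure on the input cells solving the inverse problem
```

For this map the exact answer is the density of sqrt(U) for uniform U, so mass concentrates toward λ = 1, which the cell measure reproduces.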
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.
2005-01-01
This paper is the second of a set of two papers in which we study the inverse refraction problem. The first paper, "Types of Geophysical Nonuniqueness through Minimization," studies and classifies the types of nonuniqueness that exist when solving inverse problems, depending on the amount of a priori information required to obtain reliable solutions of inverse geophysical problems. In view of that classification, in this paper we study the type of nonuniqueness associated with the inverse refraction problem. An approach for obtaining a realistic solution to the inverse refraction problem is offered in a third paper that is in preparation. Like many other inverse geophysical problems, the inverse refraction problem does not have a unique solution. Conventionally, nonuniqueness is considered to be a result of insufficient data and/or error in the data, for any fixed number of model parameters. This study illustrates that even for overdetermined and error-free data, nonlinear inverse refraction problems exhibit exact-data nonuniqueness, which further complicates the problem of nonuniqueness. By evaluating the nonuniqueness of the inverse refraction problem, this paper targets the improvement of refraction inversion algorithms and, as a result, the achievement of more realistic solutions. The nonuniqueness of the inverse refraction problem is examined initially by using a simple three-layer model. The observations and conclusions of the three-layer model nonuniqueness study are used to evaluate the nonuniqueness of more complicated n-layer models and multi-parameter cell models such as in refraction tomography. For any fixed number of model parameters, the inverse refraction problem exhibits continuous ranges of exact-data nonuniqueness. Such an unfavorable type of nonuniqueness can be uniquely solved only by providing abundant a priori information.
NASA Astrophysics Data System (ADS)
Chmielewski, Arthur B.; Noca, Muriel; Ulvestad, James
2000-03-01
Supermassive black holes are among the most spectacular objects in the Universe, and are laboratories for physics in extreme conditions. Understanding the physics of massive black holes and related phenomena is a primary goal of the ARISE mission. The scientific goals of the mission are described in detail on the ARISE web site http://arise.ipl.nasa.gov and in the ARISE Science Goals document. The following paper, as the title suggests, is not intended to be a comprehensive description of ARISE; it deals only with one aspect of the ARISE mission: the inflatable antenna, which is the key element of the ARISE spacecraft. This spacecraft, due to its extensive reliance on inflatables, may be considered the first-generation Gossamer spacecraft.
Taming the non-linearity problem in GPR full-waveform inversion for high contrast media
NASA Astrophysics Data System (ADS)
Meles, Giovanni; Greenhalgh, Stewart; van der Kruk, Jan; Green, Alan; Maurer, Hansruedi
2012-03-01
We present a new algorithm for the inversion of full-waveform ground-penetrating radar (GPR) data. It is designed to tame the non-linearity issue that afflicts inverse scattering problems, especially in high contrast media. We first investigate the limitations of current full-waveform time-domain inversion schemes for GPR data and then introduce a much-improved approach based on a combined frequency-time-domain analysis. We show by means of several synthetic tests and theoretical considerations that local minima trapping (common in full bandwidth time-domain inversion) can be avoided by starting the inversion with only the low frequency content of the data. Resolution associated with the high frequencies can then be achieved by progressively expanding to wider bandwidths as the iterations proceed. Although based on a frequency analysis of the data, the new method is entirely implemented by means of a time-domain forward solver, thus combining the benefits of both frequency-domain (low frequency inversion conveys stability and avoids convergence to a local minimum; whereas high frequency inversion conveys resolution) and time-domain methods (simplicity of interpretation and recognition of events; ready availability of FDTD simulation tools).
NASA Astrophysics Data System (ADS)
Reiter, D. T.; Rodi, W. L.
2015-12-01
Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
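The alternating scheme described above, separate component solves plus multiplier updates steering the models toward consensus, can be sketched for two linear-Gaussian component problems; the tiny operators below are synthetic placeholders for the body-wave and surface-wave subproblems, and the update is the standard scaled-multiplier (ADMM-style) form of the augmented Lagrangian iteration:

```python
import numpy as np

rng = np.random.default_rng(1)
m_true = np.array([1.0, -2.0, 0.5])

# Two data subsets with their own forward operators (stand-ins for, e.g.,
# body-wave travel times and surface-wave group velocities)
A1, A2 = rng.standard_normal((6, 3)), rng.standard_normal((5, 3))
d1, d2 = A1 @ m_true, A2 @ m_true

rho = 1.0
z = np.zeros(3)                 # common (consensus) model
u1, u2 = np.zeros(3), np.zeros(3)   # scaled Lagrange multipliers
I = np.eye(3)

for _ in range(300):
    # Separate solution of each component problem (augmented least squares)
    m1 = np.linalg.solve(A1.T @ A1 + rho * I, A1.T @ d1 + rho * (z - u1))
    m2 = np.linalg.solve(A2.T @ A2 + rho * I, A2.T @ d2 + rho * (z - u2))
    # Consensus and multiplier updates steer both models toward a common one
    z = 0.5 * (m1 + u1 + m2 + u2)
    u1 = u1 + m1 - z
    u2 = u2 + m2 - z
```

At convergence the consensus model z minimizes the sum of the component objectives, i.e. it solves the full joint inverse problem without ever forming it explicitly.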
Solving the structural inverse gravity problem by the modified gradient methods
NASA Astrophysics Data System (ADS)
Martyshko, P. S.; Akimova, E. N.; Misilov, V. E.
2016-09-01
New methods for solving the three-dimensional inverse gravity problem in the class of contact surfaces are described. Based on the approach previously suggested by the authors, new algorithms are developed. Application of these algorithms significantly reduces the number of iterations and the computing time compared to the previous ones. The algorithms have been numerically implemented on a multicore processor. An example of solving the structural inverse gravity problem for a four-layer medium model (using gravity field measurements) is presented.
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.; Park, C.B.
2006-01-01
We describe a possible solution to the inverse refraction-traveltime problem (IRTP) that reduces the range of possible solutions (nonuniqueness). This approach uses a reference model, derived from surface-wave shear-wave velocity estimates, as a constraint. The application of the joint analysis of refractions with surface waves (JARS) method provided a more realistic solution than the conventional refraction/tomography methods, which did not benefit from a reference model derived from real data. This confirmed our conclusion that the proposed method is an advancement in the IRTP analysis. The unique basic principles of the JARS method might be applicable to other inverse geophysical problems. © 2006 Society of Exploration Geophysicists.
NASA Astrophysics Data System (ADS)
Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara
2012-10-01
Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety
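In the classical Hilbert-space setting, the Landweber method mentioned above is simply a gradient step on the residual of the operator equation, regularized by early stopping. A minimal sketch with a synthetic linear operator and the discrepancy principle as stopping rule (the operator, noise level, and factor 1.1 are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)) / 10.0    # synthetic linear forward operator
x_true = np.zeros(20)
x_true[3], x_true[11] = 1.0, -0.5
delta = 1e-3                                # componentwise noise level
y = A @ x_true + delta * rng.standard_normal(50)

omega = 1.0 / np.linalg.norm(A, 2) ** 2     # step size <= 1 / ||A||^2
x = np.zeros(20)
for k in range(100000):
    residual = A @ x - y
    # Discrepancy principle: stop once the residual reaches the noise level
    if np.linalg.norm(residual) <= 1.1 * delta * np.sqrt(50):
        break
    x = x - omega * (A.T @ residual)        # Landweber step (Hilbert-space case)
```

The Banach-space variants studied in the survey replace the adjoint step by duality mappings, but the iteration-plus-stopping-rule structure is the same.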
NASA Astrophysics Data System (ADS)
Bidaibekov, Yessen Y.; Kornilov, Viktor S.; Kamalova, Guldina B.; Akimzhan, Nagima Sh.
2015-09-01
Methodical aspects of teaching inverse problems for differential equations to natural-science students at higher educational institutions are considered in the article. It is noted that academic knowledge and competence in the field of applied mathematics are formed during such training.
ON THE GEOSTATISTICAL APPROACH TO THE INVERSE PROBLEM. (R825689C037)
The geostatistical approach to the inverse problem is discussed with emphasis on the importance of structural analysis. Although the geostatistical approach is occasionally misconstrued as mere cokriging, in fact it consists of two steps: estimation of statist...
A second degree Newton method for an inverse obstacle scattering problem
NASA Astrophysics Data System (ADS)
Kress, Rainer; Lee, Kuo-Ming
2011-08-01
A regularized second degree Newton method is proposed and implemented for the inverse problem for scattering of time-harmonic acoustic waves from a sound-soft obstacle. It combines ideas due to Johansson and Sleeman [18] and Hettlich and Rundell [8] and reconstructs the obstacle from the far field pattern for scattering of one incident plane wave.
Inverse problem for the Verhulst equation of limited population growth with discrete experiment data
NASA Astrophysics Data System (ADS)
Azimov, Anvar; Kasenov, Syrym; Nurseitov, Daniyar; Serovajsky, Simon
2016-08-01
The Verhulst limited growth model with unknown growth parameters is considered. These parameters are determined from discrete experiment data. The inverse problem is solved using a gradient method, both with and without interpolation of the data; an approximation of the delta function is used in the latter case. As an example, the bacterial population E. coli is considered.
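The parameter identification step can be sketched with the closed-form Verhulst solution and a gradient-based least-squares fit to discrete data (the data values, noise level, and starting guess below are made up; the paper's own gradient method and delta-function treatment are not reproduced):

```python
import numpy as np
from scipy.optimize import least_squares

def verhulst(t, r, K, x0=1.0):
    # Closed-form solution of x' = r x (1 - x/K), x(0) = x0
    e = np.exp(r * t)
    return K * x0 * e / (K + x0 * (e - 1.0))

# Discrete "experiment" data (synthetic stand-in for E. coli counts)
t_obs = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
r_true, K_true = 1.8, 50.0
rng = np.random.default_rng(0)
x_obs = verhulst(t_obs, r_true, K_true) * (1 + 0.01 * rng.standard_normal(t_obs.size))

# Gradient-based least-squares recovery of the growth parameters (r, K)
fit = least_squares(lambda p: verhulst(t_obs, p[0], p[1]) - x_obs,
                    x0=[1.0, 30.0], bounds=([0.0, 1.0], [10.0, 500.0]))
r_hat, K_hat = fit.x
```

Because the data include both the exponential phase and the saturation plateau, r and K are well separated and the fit is stable.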
Integro-differential method of solving the inverse coefficient heat conduction problem
NASA Astrophysics Data System (ADS)
Baranov, V. L.; Zasyad'Ko, A. A.; Frolov, G. A.
2010-03-01
On the basis of differential transformations, a stable integro-differential method of solving the inverse heat conduction problem is suggested. The method has been tested on the example of determining the thermal diffusivity on quasi-stationary fusion and heating of a quartz glazed ceramics specimen.
Global stability for an inverse problem in soil–structure interaction
Alessandrini, G.; Morassi, A.; Rosset, E.; Vessella, S.
2015-01-01
We consider the inverse problem of determining the Winkler subgrade reaction coefficient of a slab foundation modelled as a thin elastic plate clamped at the boundary. The plate is loaded by a concentrated force and its transversal deflection is measured at the interior points. We prove a global Hölder stability estimate under (mild) regularity assumptions on the unknown coefficient. PMID:26345082
Numerical solution of 2D-vector tomography problem using the method of approximate inverse
NASA Astrophysics Data System (ADS)
Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna
2016-08-01
We propose a numerical solution of reconstruction problem of a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good results of reconstruction of vector fields.
Nopens, I; Nere, N; Vanrolleghem, P A; Ramkrishna, D
2007-01-01
Many systems contain populations of individuals. Often, they are regarded as a lumped phase, which might, for some applications, lead to inadequate model predictive power. An alternative framework, Population Balance Models, has been used here to describe such a system, activated sludge flocculation, in which particle size is the property one wants to model. An important problem to solve in population balance modelling is to determine the model structure that adequately describes experimentally obtained data on, for instance, the time evolution of the floc size distribution. In this contribution, an alternative method based on solving the inverse problem is used to recover the model structure from the data. In this respect, the presence of similarity in the data simplifies the problem significantly. Similarity was found and the inverse problem could be solved. A forward simulation then confirmed the quality of the model structure in describing the experimental data.
Variable-permittivity linear inverse problem for the H(sub z)-polarized case
NASA Technical Reports Server (NTRS)
Moghaddam, M.; Chew, W. C.
1993-01-01
The H(sub z)-polarized inverse problem has rarely been studied before due to the complicated way in which the unknown permittivity appears in the wave equation. This problem is equivalent to the acoustic inverse problem with variable density. We have recently reported the solution to the nonlinear variable-permittivity H(sub z)-polarized inverse problem using the Born iterative method. Here, the linear inverse problem is solved for permittivity (epsilon) and permeability (mu) using a different approach which is an extension of the basic ideas of diffraction tomography (DT). The key to solving this problem is to utilize frequency diversity to obtain the required independent measurements. The receivers are assumed to be in the far field of the object, and plane wave incidence is also assumed. It is assumed that the scatterer is weak, so that the Born approximation can be used to arrive at a relationship between the measured pressure field and two terms related to the spatial Fourier transform of the two unknowns, epsilon and mu. The term involving permeability corresponds to monopole scattering and that for permittivity to dipole scattering. Measurements at several frequencies are used and a least squares problem is solved to reconstruct epsilon and mu. It is observed that the low spatial frequencies in the spectra of epsilon and mu produce inaccuracies in the results. Hence, a regularization method is devised to remove this problem. Several results are shown. Low contrast objects for which the above analysis holds are used to show that good reconstructions are obtained for both permittivity and permeability after regularization is applied.
FOREWORD: 3rd International Workshop on New Computational Methods for Inverse Problems (NCMIP 2013)
NASA Astrophysics Data System (ADS)
Blanc-Féraud, Laure; Joubert, Pierre-Yves
2013-10-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 3rd International Workshop on New Computational Methods for Inverse Problems, NCMIP 2013 (http://www.farman.ens-cachan.fr/NCMIP_2013.html). This workshop took place at Ecole Normale Supérieure de Cachan, in Cachan, France, on 22 May 2013, at the initiative of Institut Farman. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of the ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 (http://www.farman.ens-cachan.fr/NCMIP_2012.html). The NCMIP Workshop focused on recent advances in the resolution of inverse problems. Indeed inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational
FOREWORD: 2nd International Workshop on New Computational Methods for Inverse Problems (NCMIP 2012)
NASA Astrophysics Data System (ADS)
Blanc-Féraud, Laure; Joubert, Pierre-Yves
2012-09-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 2nd International Workshop on New Computational Methods for Inverse Problems, (NCMIP 2012). This workshop took place at Ecole Normale Supérieure de Cachan, in Cachan, France, on 15 May 2012, at the initiative of Institut Farman. The first edition of NCMIP also took place in Cachan, France, within the scope of the ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/). The NCMIP Workshop focused on recent advances in the resolution of inverse problems. Indeed inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition
Analysis of forward and inverse problems in chemical dynamics and spectroscopy
Rabitz, H.
1993-12-01
The overall scope of this research concerns the development and application of forward and inverse analysis tools for problems in chemical dynamics and chemical kinetics. The chemical dynamics work is specifically associated with relating features in potential surfaces and resultant dynamical behavior. The analogous inverse research aims to provide stable algorithms for extracting potential surfaces from laboratory data. In the case of chemical kinetics, the focus is on the development of systematic means to reduce the complexity of chemical kinetic models. Recent progress in these directions is summarized below.
On stable finite dimensional approximation of conditionally well-posed inverse problems
NASA Astrophysics Data System (ADS)
Kokurin, M. Yu
2016-10-01
We consider nonlinear conditionally well-posed inverse problems with Hölder-type stability estimates on closed, convex, and bounded subsets of a Hilbert space. A finite dimensional version of Ivanov's quasisolution method is investigated. The method involves minimization of the discrepancy functional over the section of the set of conditional well-posedness by a finite dimensional subspace. For this multiextremal minimization problem, we prove that if its stationary point is located not too far from the desired solution of the original inverse problem, then the mentioned point necessarily belongs to a small neighborhood of the solution. The diameter of the neighborhood is estimated in terms of an error level in input data and properties of approximating finite dimensional subspaces. The results are used in a convergence analysis of the gradient projection method, as applied to the finite dimensional subproblem of Ivanov's method.
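For the linear special case, the gradient projection method analyzed in the paper reduces to minimizing the discrepancy over a closed, convex, bounded set by alternating gradient steps with projections onto the set. A sketch with a synthetic operator and a norm-ball as the set of conditional well-posedness (the ball radius and operator are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
x_true *= 0.9 / np.linalg.norm(x_true)   # exact solution inside the constraint set
y = A @ x_true

R = 1.0                                   # constraint set: closed ball ||x|| <= R

def project(x):
    # Metric projection onto the ball, the closed convex bounded set here
    n = np.linalg.norm(x)
    return x if n <= R else x * (R / n)

step = 1.0 / np.linalg.norm(A, 2) ** 2    # step size <= 1 / ||A||^2
x = np.zeros(10)
for _ in range(2000):
    # Gradient step on the discrepancy ||A x - y||^2, then projection
    x = project(x - step * (A.T @ (A @ x - y)))
```

In Ivanov's quasisolution method the ball would be replaced by the finite-dimensional section of the conditional well-posedness set, but the iteration has the same structure.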
NASA Astrophysics Data System (ADS)
Groh, Andreas; Krebs, Jochen
2012-08-01
In this paper, a population balance equation, originating from applications in chemical engineering, is considered and novel solution techniques for a related inverse problem are presented. This problem consists in the determination of the breakage rate and the daughter drop distribution of an evolving drop size distribution from time-dependent measurements under the assumption of self-similarity. We analyze two established solution methods for this ill-posed problem and improve the two procedures by adapting suitable data fitting and inversion algorithms to the specific situation. In addition, we introduce a novel technique that, compared to the former, does not require certain a priori information. The improved stability properties of the resulting algorithms are substantiated with numerical examples.
NASA Astrophysics Data System (ADS)
Kirsch, Andreas; Rieder, Andreas
2016-08-01
It is common knowledge, based mainly on experience, that parameter identification problems in partial differential equations are ill-posed. Yet, except for some special cases, a mathematically sound argument has been missing. We present a general theory for inverse problems related to abstract evolution equations which not only explains their local ill-posedness but also provides the Fréchet derivative of the corresponding parameter-to-solution map and its adjoint, which are needed, e.g., in Newton-like solvers. Our abstract results are applied to inverse problems related to the following first-order hyperbolic systems: Maxwell’s equations (electromagnetic scattering in conducting media) and the elastic wave equation (seismic imaging).
An inverse time-dependent source problem for a time-fractional diffusion equation
NASA Astrophysics Data System (ADS)
Wei, T.; Li, X. L.; Li, Y. S.
2016-08-01
This paper is devoted to identifying a time-dependent source term in a multi-dimensional time-fractional diffusion equation from boundary Cauchy data. The existence and uniqueness of a strong solution for the corresponding direct problem with homogeneous Neumann boundary condition are first proved. We provide the uniqueness and a stability estimate for the inverse time-dependent source problem. Then we use the Tikhonov regularization method to solve the inverse source problem and propose a conjugate gradient algorithm to find a good approximation to the minimizer of the Tikhonov regularization functional. Numerical examples in one-dimensional and two-dimensional cases are provided to show the effectiveness of the proposed method. This paper was supported by the NSF of China (11371181) and the Fundamental Research Funds for the Central Universities (lzujbky-2013-k02).
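As an illustration of the Tikhonov-plus-conjugate-gradient strategy, the sketch below runs CG on the regularized normal equations of a small, deliberately ill-conditioned linear system. The matrix construction and parameter values are hypothetical; the paper's actual problem involves a time-fractional diffusion operator rather than an explicit matrix.

```python
import numpy as np

def tikhonov_cg(A, b, alpha, iters=200, tol=1e-12):
    """Approximate the minimizer of ||A x - b||^2 + alpha ||x||^2 by running
    the conjugate gradient method on the regularized normal equations
    (A^T A + alpha I) x = A^T b."""
    x = np.zeros(A.shape[1])
    r = A.T @ b                      # residual of the normal equations at x=0
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A.T @ (A @ p) + alpha * p
        a = rs / (p @ Ap)
        x += a * p
        r -= a * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Ill-conditioned toy system with noisy data: the regularized solution
# stays bounded where the naive least-squares solution would blow up.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((30, 30)))
V, _ = np.linalg.qr(rng.standard_normal((10, 10)))
s = np.logspace(0, -6, 10)           # rapidly decaying singular values
A = U[:, :10] @ np.diag(s) @ V.T
x_true = rng.standard_normal(10)
b = A @ x_true + 1e-4 * rng.standard_normal(30)
x_reg = tikhonov_cg(A, b, alpha=1e-6)
print(np.linalg.norm(x_reg))         # moderate norm: no noise blow-up
```

The regularization parameter `alpha` trades data fit against stability; here it is simply fixed, whereas in practice it would be chosen by a discrepancy principle or similar rule.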
Analytical and experimental studies for space boundary and geometry inverse heat conduction problems
NASA Astrophysics Data System (ADS)
Chen, Tzu-Fang
Inverse Heat Conduction Problems (IHCPs) have been widely used in engineering fields in recent decades. Unlike direct heat conduction problems, which are ``well-posed'', IHCPs are inherently ``ill-posed'': a small perturbation in the data can lead to a large error in the reconstructed solution. Predicting an unknown in an IHCP is therefore not an easy task, and an IHCP must also extract the desired information from measurements contaminated by noise, so a stable, accurate, and reliable inversion solver is needed. This dissertation is split into four parts. The first part describes space boundary IHCPs and attempts to use noisy measurement data to predict unknown surface temperatures or heat fluxes. A new algorithm, combining a Kalman filter to suppress the measurement noise with an implicit time-marching finite difference scheme, solves a space boundary IHCP. In the second part, the error in reconstructing the temperature at each boundary of a one-dimensional IHCP is expressed by a simple relation. Each relation contains an unknown coefficient, which can be determined by a single simulation through the inversion solver for a pair of specified sensor locations. The relation can then be used to estimate the remaining recovery errors at the boundary without running the inverse solver. In the third part, an experimental study of the temperature drop between two rough surfaces is conducted; the experimental data are analyzed using an inversion solver developed in this dissertation. In the fourth part, an IHCP with a melting process, using the measured temperature and heat flux at one surface, is solved by a new geometry inversion solver with a heat-flux limiter to reconstruct the melting-front location and the temperature history inside the test domain.
Inverse problems and optimal experiment design in unsteady heat transfer processes identification
NASA Technical Reports Server (NTRS)
Artyukhin, Eugene A.
1991-01-01
Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.
NASA Astrophysics Data System (ADS)
Wang, Youming; Chen, Xuefeng; He, Zhengjia
2011-02-01
Structural eigenvalues have been broadly applied in modal analysis, damage detection, vibration control, etc. In this paper, the interpolating multiwavelets are custom designed based on stable completion method to solve structural eigenvalue problems. The operator-orthogonality of interpolating multiwavelets gives rise to highly sparse multilevel stiffness and mass matrices of structural eigenvalue problems and permits the incremental computation of the eigenvalue solution in an efficient manner. An adaptive inverse iteration algorithm using the interpolating multiwavelets is presented to solve structural eigenvalue problems. Numerical examples validate the accuracy and efficiency of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Gross, Lutz; Altinay, Cihan; Fenwick, Joel; Smith, Troy
2014-05-01
The program package escript has been designed for solving mathematical modeling problems using python, see Gross et al. (2013). Its development and maintenance have been funded by the Australian Commonwealth to provide open source software infrastructure for the Australian Earth Science community (recent funding by the Australian Geophysical Observing System EIF (AGOS) and the AuScope Collaborative Research Infrastructure Scheme (CRIS)). The key concepts of escript are based on the terminology of spatial functions and partial differential equations (PDEs) - an approach providing abstraction from the underlying spatial discretization method (i.e. the finite element method (FEM)). This feature presents a programming environment to the user which is easy to use even for complex models. Because implementations are independent of the underlying data structures, simulations are easily portable across desktop computers and scalable compute clusters without modifications to the program code. escript has been successfully applied in a variety of applications including modeling mantle convection, melting processes, volcanic flow, earthquakes, faulting, multi-phase flow, block caving and mineralization (see Poulet et al. 2013). The recent escript release (see Gross et al. (2013)) provides an open framework for solving joint inversion problems for geophysical data sets (potential field, seismic and electro-magnetic). The strategy is based on the idea of formulating the inversion problem as an optimization problem with PDE constraints, where the cost function is defined by the data defect and the regularization term for the rock properties, see Gross & Kemp (2013). This first-optimize-then-discretize approach avoids assembling the, in general, dense sensitivity matrix used in conventional approaches, where discrete programming techniques are applied to the discretized problem (first-discretize-then-optimize). In this paper we will discuss the mathematical framework for
Investigation of one inverse problem in case of modeling water areas with "liquid" boundaries
NASA Astrophysics Data System (ADS)
Sheloput, Tatiana; Agoshkov, Valery
2015-04-01
In hydrodynamics, the problem of modeling water areas (oceans, seas, rivers, etc.) with "liquid" boundaries often appears. A "liquid" boundary is the set of those parts of the boundary where the impermeability condition is broken (for example, straits, bay borders, estuaries, interfaces between oceans). Frequently such effects are ignored: the same conditions are used for "liquid" boundaries as for "solid" ones, and the "material boundary" approximation is applied [1]. Sometimes it is possible to interpolate the results obtained from models of bigger areas. Moreover, approximate estimates for boundary conditions are often used. However, these approximations are not always valid, and errors in determining the boundary conditions can lead to a significant decrease in the accuracy of the simulation results. In this work one way of addressing the problem described above is presented: an inverse problem of reconstructing the boundary function in the convection-reaction-diffusion equations that describe the transfer of heat and salinity is solved. The work is based on the theory of adjoint equations [2] and optimal control, as well as on the common methodology for investigating inverse problems [3]. The work contains a theoretical investigation and the results of computer simulations for the Baltic Sea. Moreover, the conditions and restrictions that should be satisfied for solvability of the problem are introduced and justified. The submitted work can be applied to the solution of more complicated inverse problems and data assimilation problems in areas with "liquid" boundaries; it is also a step toward developing algorithms for computing the level, speed, temperature and salinity of real objects. References 1. A. E. Gill. Atmosphere-ocean dynamics. // London: Academic Press, 1982. 2. G. I. Marchuk. Adjoint equations. // Moscow: INM RAS, 2000, 175 p. (in Russian). 3. V.I. Agoshkov. The methods of optimal control and adjoint equations in problems of
A hierarchical Bayesian-MAP approach to inverse problems in imaging
NASA Astrophysics Data System (ADS)
Raj, Raghu G.
2016-07-01
We present a novel approach to inverse problems in imaging based on a hierarchical Bayesian-MAP (HB-MAP) formulation. In this paper we specifically focus on the difficult and basic inverse problem of multi-sensor (tomographic) imaging wherein the source object of interest is viewed from multiple directions by independent sensors. Given the measurements recorded by these sensors, the problem is to reconstruct the image (of the object) with a high degree of fidelity. We employ a probabilistic graphical modeling extension of the compound Gaussian distribution as a global image prior into a hierarchical Bayesian inference procedure. Since the prior employed by our HB-MAP algorithm is general enough to subsume a wide class of priors including those typically employed in compressive sensing (CS) algorithms, the HB-MAP algorithm offers a vehicle to extend the capabilities of current CS algorithms to include truly global priors. After rigorously deriving the regression algorithm for solving our inverse problem from first principles, we demonstrate the performance of the HB-MAP algorithm on Monte Carlo trials and on real empirical data (natural scenes). In all cases we find that our algorithm outperforms previous approaches in the literature including filtered back-projection and a variety of state-of-the-art CS algorithms. We conclude with directions of future research emanating from this work.
Group-sparsity regularization for ill-posed subsurface flow inverse problems
NASA Astrophysics Data System (ADS)
Golmohammadi, Azarang; Khaninezhad, Mohammad-Reza M.; Jafarpour, Behnam
2015-10-01
Sparse representations provide a flexible and parsimonious description of high-dimensional model parameters for reconstructing subsurface flow property distributions from limited data. To further constrain ill-posed inverse problems, group-sparsity regularization can take advantage of possible relations among the entries of unknown sparse parameters when: (i) groups of sparse elements are either collectively active or inactive and (ii) only a small subset of the groups is needed to approximate the parameters of interest. Since subsurface properties exhibit strong spatial connectivity patterns they may lead to sparse descriptions that satisfy the above conditions. When these conditions are established, a group-sparsity regularization can be invoked to facilitate the solution of the resulting inverse problem by promoting sparsity across the groups. The proposed regularization penalizes the number of groups that are active without promoting sparsity within each group. Two implementations are presented in this paper: one based on the multiresolution tree structure of Wavelet decomposition, without a need for explicit prior models, and another learned from explicit prior model realizations using sparse principal component analysis (SPCA). In each case, the approach first classifies the parameters of the inverse problem into groups with specific connectivity features, and then takes advantage of the grouped structure to recover the relevant patterns in the solution from the flow data. Several numerical experiments are presented to demonstrate the advantages of additional constraining power of group-sparsity in solving ill-posed subsurface model calibration problems.
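The group-level penalty described above is commonly handled through its proximal operator, block soft-thresholding, which zeroes whole groups without promoting sparsity inside a surviving group. A minimal sketch (the groups and threshold below are made-up values, not the paper's wavelet-tree or SPCA groupings):

```python
import numpy as np

def group_soft_threshold(x, groups, tau):
    """Proximal operator of tau * sum_g ||x_g||_2: each group's norm is
    shrunk by tau, and groups whose norm falls below tau are zeroed as a
    whole; entries inside a surviving group stay dense."""
    out = np.zeros_like(x)
    for g in groups:
        nrm = np.linalg.norm(x[g])
        if nrm > tau:
            out[g] = (1.0 - tau / nrm) * x[g]
    return out

x = np.array([3.0, 4.0, 0.1, -0.1, 0.0, 0.05])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
y = group_soft_threshold(x, groups, tau=1.0)
print(y)  # strong group shrunk to [2.4, 3.2]; both weak groups zeroed
```

Applied inside an iterative-shrinkage loop, this operator penalizes the number of active groups exactly as described: collective activation or deactivation, with no within-group sparsity.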
NASA Astrophysics Data System (ADS)
Rudowicz, Czesław; Karbowiak, Mirosław
2015-01-01
A survey of recent literature has revealed a doubly worrying tendency concerning the treatment of the two distinct types of Hamiltonians, namely, the physical crystal field (CF), or equivalently ligand field (LF), Hamiltonians and the zero-field splitting (ZFS) Hamiltonians, which appear in the effective spin Hamiltonians (SH). The nature and properties of the CF (LF) Hamiltonians have been mixed up in various ways with those of the ZFS Hamiltonians. Such cases have been identified in a rapidly growing number of studies of transition-ion based systems using electron magnetic resonance (EMR), optical spectroscopy, and magnetic measurements. These findings have far-ranging implications since these Hamiltonians are cornerstones for the interpretation of magnetic and spectroscopic properties of single transition ions in various crystals or molecules as well as of exchange coupled systems (ECS) of transition ions, e.g. single molecule magnets (SMM) or single ion magnets (SIM). The seriousness of the consequences of such conceptual problems and related terminological confusions has reached a level that goes far beyond simple semantic issues or misleading keyword classifications of papers in journals and scientific databases. The prevailing confusion, denoted the CF=ZFS confusion, pertains to cases of labeling the true ZFS quantities as purportedly the CF (LF) quantities. Here we consider the inverse confusion between the CF (LF) quantities and the SH (ZFS) ones, denoted the ZFS=CF confusion, which consists in referring to parameters (or Hamiltonians) that are true CF (LF) quantities as purportedly the ZFS (or SH) quantities. Specific cases of the ZFS=CF confusion identified in recent textbooks, reviews and papers, especially SMM- and SIM-related ones, are surveyed and the pertinent misconceptions are clarified. The serious consequences of the terminological confusions include misinterpretation of data from a wide range of experimental techniques and
Variational approach to direct and inverse problems of atmospheric pollution studies
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey
2016-04-01
We present the development of a variational approach for solving interrelated problems of atmospheric hydrodynamics and chemistry concerning air pollution transport and transformations. The proposed approach allows us to carry out complex studies of different-scale physical and chemical processes using the methods of direct and inverse modeling [1-3]. We formulate the problems of risk/vulnerability and uncertainty assessment, sensitivity studies, variational data assimilation procedures [4], etc. A computational technology of constructing consistent mathematical models and methods of their numerical implementation is based on the variational principle in the weak constraint formulation specifically designed to account for uncertainties in models and observations. Algorithms for direct and inverse modeling are designed with the use of global and local adjoint problems. Implementing the idea of adjoint integrating factors provides unconditionally monotone and stable discrete-analytic approximations for convection-diffusion-reaction problems [5,6]. The general framework is applied to the direct and inverse problems for the models of transport and transformation of pollutants in Siberian and Arctic regions. The work has been partially supported by the RFBR grant 14-01-00125 and RAS Presidium Program I.33P. References: 1. V. Penenko, A.Baklanov, E. Tsvetova and A. Mahura . Direct and inverse problems in a variational concept of environmental modeling //Pure and Applied Geoph.(2012) v.169: 447-465. 2. V. V. Penenko, E. A. Tsvetova, and A. V. Penenko Development of variational approach for direct and inverse problems of atmospheric hydrodynamics and chemistry, Izvestiya, Atmospheric and Oceanic Physics, 2015, Vol. 51, No. 3, p. 311-319, DOI: 10.1134/S0001433815030093. 3. V.V. Penenko, E.A. Tsvetova, A.V. Penenko. Methods based on the joint use of models and observational data in the framework of variational approach to forecasting weather and atmospheric composition
A multiple-scale Pascal polynomial for 2D Stokes and inverse Cauchy-Stokes problems
NASA Astrophysics Data System (ADS)
Liu, Chein-Shan; Young, D. L.
2016-05-01
The polynomial expansion method is a useful tool for solving both direct and inverse Stokes problems; together with the pointwise collocation technique, it makes it easy to derive the algebraic equations that satisfy the Stokes differential equations and the specified boundary conditions. In this paper we propose two novel numerical algorithms, based on a third-first order system and a third-third order system, to solve the direct and the inverse Cauchy problems in Stokes flows by developing a multiple-scale Pascal polynomial method, in which the scales are determined a priori by the collocation points. Assessing the performance through numerical experiments, we find that the multiple-scale Pascal polynomial expansion method (MSPEM) is accurate and stable against large noise.
Variable-Precision Arithmetic for Solving Inverse Problems of Electrical Impedance Tomography
Tian, H.; Yamada, S.; Iwahara, M.; Yang, H.
2005-04-09
Electrical Impedance Tomography (EIT) is a nondestructive imaging technique that reconstructs electrical characteristic tomograms from electrical measurements on the periphery of objects. EIT approximates the spatial distribution of impedance (or conductivity) within the detected objects using data on injected electrical currents and boundary electrical potentials. The technique can be used for detecting flaws inside metal materials or producing medical images. In theory, EIT belongs to the class of inverse problems of the low-frequency current field, and its reconstruction calculation suffers from an ill-posed, nonlinear nature. This paper shows that variable-precision arithmetic is effective in improving the precision of the conventional finite-difference Newton's method. Compared with exact symbolic arithmetic and floating-point arithmetic, variable-precision arithmetic achieves a good tradeoff between accuracy and complexity of computing. Simulation results illustrate that variable-precision arithmetic is valid for solving inverse problems of EIT.
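The benefit of variable-precision arithmetic for finite differences can be seen in a tiny example: with a very small step, double precision loses accuracy to subtractive cancellation, while raising the working precision (here via Python's decimal module, as a stand-in for the paper's variable-precision arithmetic) removes that error source. The function and step size are invented for illustration.

```python
from decimal import Decimal, getcontext

def fd_float(f, x, h):
    """Forward difference in ordinary double precision."""
    return (f(x + h) - f(x)) / h

def fd_decimal(f, x, h, digits=50):
    """The same forward difference in variable-precision arithmetic; the
    higher working precision removes the subtractive cancellation that
    dominates the double-precision error for tiny steps."""
    getcontext().prec = digits
    x, h = Decimal(x), Decimal(h)
    return (f(x + h) - f(x)) / h

f = lambda v: v * v * v                  # f'(2) = 12 exactly
err_float = abs(fd_float(f, 2.0, 1e-12) - 12.0)
err_dec = abs(fd_decimal(f, "2", "1e-12") - Decimal(12))
print(err_float)  # ~1e-3: cancellation ruins the double-precision estimate
print(err_dec)    # ~6e-12: only the O(h) truncation error remains
```

The same trade-off the abstract mentions appears here: the decimal version is slower than plain floats but far cheaper than exact symbolic differentiation.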
Solution of non-linear inverse heat conduction problems using the method of lines
NASA Astrophysics Data System (ADS)
Taler, J.; Duda, P.
Two space marching methods for solving one-dimensional nonlinear inverse heat conduction problems are presented. The temperature-dependent thermal properties and the boundary condition on the accessible part of the boundary of the body are known. Additional temperature measurements in time are taken with a sensor located in an arbitrary position within the solid, and the objective is to determine the surface temperature and heat flux on the remaining, unspecified part of the boundary. The methods have the advantage that time derivatives are not replaced by finite differences; their good accuracy results from an appropriate approximation of the first time derivative using smoothing polynomials. The extension of the first method presented in this study to higher-dimensional inverse heat conduction problems is straightforward.
A Solution of the Direct and Inverse Potential Problems for Arbitrary Cascades of Airfoils
NASA Technical Reports Server (NTRS)
Mutterperl, William
1944-01-01
Methods are given for determining the potential flow past an arbitrary cascade of airfoils and for the inverse problem of determining an airfoil having a prescribed velocity distribution in cascade. Results indicated that the Cartesian mapping function method may be satisfactorily extended to include cascades. The numerical work required to compute cascades by the Cartesian mapping function method is considerably greater than for single airfoils, but much less than hitherto required for cascades. Detailed results are presented graphically.
NASA Astrophysics Data System (ADS)
Ponomarenko, Sergey A.; Wolf, Emil
2002-10-01
We investigate the inverse scattering problem for statistically homogeneous, isotropic random media under conditions of strong fluctuations of optical wavefields. We present a method for determining the spectral density of the dielectric constant fluctuations in such media from scattering of partially coherent light. The method may find applications to a wide class of turbulent media such as the turbulent atmosphere and certain turbulent plasmas where backscattering and depolarization effects are negligible.
Metric dependent aspects of inverse problems and functionals based on helicity
NASA Astrophysics Data System (ADS)
Kotiuga, P. R.
1993-05-01
The helicity of a vector field is a metric independent density. Functionals with first order elliptic systems for Euler-Lagrange equations have been constructed from the helicity. The metric invariance is preserved for finite element discretizations involving ``Whitney elements.'' This paper relates differential geometric aspects of inverse problems to helicity based functionals in two contexts. First, the inverse problem of electrical impedance tomography in isotropic media is known to be equivalent to determining a metric within a given conformal class from a given ``Dirichlet to Neumann'' map. This fact is related to the helicity functional and Wexler's algorithm for recovering an isotropic conductivity. Second, Maxwell's equations in ``spinor form'' are shown to be the Euler-Lagrange equations of some complexified time dependent generalization of the helicity functional. In this case metric dependent aspects yield insight into the ``inverse kinematic problem in seismology.'' These two examples illustrate the underlying geometric structure in classes of inverse problems and algorithms for their solution. ``Do you know Grassmann's Ausdehnungslehre? Spottiswood spoke of it in Dublin as something above and beyond quarternions. I have not seen it, but Sir William Hamilton of Edinburgh used to say that the greater the extension the smaller the intention.'' ``May one plough with an ox and an ass together? The like of you may write everything and prove everything in 4nions, but in the transition period the bilingual method may help to introduce and explain the more perfect.'' Excerpt of a letter from J. C. Maxwell to P. G. Tait; A. P. Wills, Vector Analysis with an Introduction to Tensor Analysis (Prentice-Hall, Englewood Cliffs, NJ, 1931), pp. XXV-XXVI.
Uniqueness of the interior plane strain time-harmonic viscoelastic inverse problem
NASA Astrophysics Data System (ADS)
Zhang, Yixiao; Barbone, Paul E.; Harari, Isaac; Oberai, Assad A.
2016-07-01
Elasticity imaging has emerged as a promising medical imaging technique with applications in the detection, diagnosis and treatment monitoring of several types of disease. In elasticity imaging measured displacement fields are used to generate images of elastic parameters of tissue by solving an inverse problem. When the tissue excitation, and the resulting tissue motion is time-harmonic, elasticity imaging can be extended to image the viscoelastic properties of the tissue. This leads to an inverse problem for the complex-valued shear modulus at a given frequency. In this manuscript we have considered the uniqueness of this inverse problem for an incompressible, isotropic linear viscoelastic solid in a state of plane strain. For a single measured displacement field we conclude that the solution is infinite dimensional, and the data required to render it unique is determined by the measured strain field. In contrast, for two independent displacement fields such that the principal directions of the resulting strain fields are different, the space of possible solutions is eight dimensional, and given additional data, like the value of the shear modulus at four locations, or over a calibration region, we may determine the shear modulus everywhere. We have also considered simple analytical examples that verify these results and offer additional insights. The results derived in this paper may be used as guidelines by the practitioners of elasticity imaging in designing more robust and accurate imaging protocols.
NASA Astrophysics Data System (ADS)
Oware, E. K.; Moysey, S. M. J.; Khan, T.
2013-10-01
We introduce a new strategy for integrating hydrologic process information as a constraint within hydrogeophysical imaging problems. The approach uses a basis-constrained inversion where basis vectors are tuned to the hydrologic problem of interest. Tuning is achieved using proper orthogonal decomposition (POD) to extract an optimal basis from synthetic training data generated by Monte Carlo simulations representative of hydrologic processes at a site. A synthetic case study illustrates that the approach performs well relative to other common inversion strategies for imaging a solute plume using an electrical resistivity survey, even when the initial conceptualization of hydrologic processes is incorrect. In two synthetic case studies, we found that the POD approach was able to significantly improve imaging of the plume by reducing the root mean square error of the concentration estimates by a factor of two. More importantly, the POD approach was able to better capture the bimodal nature of the plume in the second case study, even though the prior conceptual model for the POD basis was for a single plume. The ability of the POD inversion to improve concentration estimates exemplifies the importance of integrating process information within geophysical imaging problems. In contrast, the ability to capture the bimodality of the plume in the second example indicates the flexibility of the technique to move away from this prior process constraint when it is inconsistent with the observed ERI data.
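A minimal sketch of the POD step: extract a basis from a snapshot matrix of Monte Carlo realizations via the SVD, then represent a new field in that basis. The "plume" modes and dimensions below are invented for illustration and are not from the case studies.

```python
import numpy as np

def pod_basis(snapshots, k):
    """Leading k POD basis vectors: left singular vectors of the snapshot
    matrix whose columns are Monte Carlo training realizations."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :k]

def project(field, basis):
    """Least-squares representation of a field in the reduced basis."""
    return basis @ (basis.T @ field)

# Invented training set: random combinations of two smooth bump "modes".
t = np.linspace(0.0, 1.0, 50)
modes = np.stack([np.exp(-((t - 0.3) / 0.1) ** 2),
                  np.exp(-((t - 0.7) / 0.1) ** 2)], axis=1)
rng = np.random.default_rng(2)
snapshots = modes @ rng.standard_normal((2, 200))   # 200 realizations
basis = pod_basis(snapshots, k=2)
field = modes @ np.array([1.0, -0.5])               # new field in the same span
err = np.linalg.norm(field - project(field, basis))
print(err)  # near machine zero: the 2-vector basis captures the field
```

In the imaging problem the inversion then searches over coefficients in this reduced basis rather than over the full pixel grid, which is what ties the geophysical estimate to the hydrologic process information.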
The inverse problem of acoustic wave scattering by an air-saturated poroelastic cylinder.
Ogam, Erick; Fellah, Z E A; Baki, Paul
2013-03-01
The efficient use of plastic foams in a diverse range of structural applications like in noise reduction, cushioning, and sleeping mattresses requires detailed characterization of their permeability and deformation (load-bearing) behavior. The elastic moduli and airflow resistance properties of foams are often measured using two separate techniques, one employing mechanical vibration methods and the other, flow rates of fluids based on fluid mechanics technology, respectively. A multi-parameter inverse acoustic scattering problem to recover airflow resistivity (AR) and mechanical properties of an air-saturated foam cylinder is solved. A wave-fluid saturated poroelastic structure interaction model based on the modified Biot theory and plane-wave decomposition using orthogonal cylindrical functions is employed to solve the inverse problem. The solutions to the inverse problem are obtained by constructing the objective functional given by the total square of the difference between predictions from the model and scattered acoustic field data acquired in an anechoic chamber. The value of the recovered AR is in good agreement with that of a slab sample cut from the cylinder and characterized using a method employing low frequency transmitted and reflected acoustic waves in a long waveguide developed by Fellah et al. [Rev. Sci. Instrum. 78(11), 114902 (2007)].
NASA Astrophysics Data System (ADS)
Mohammad khaninezhad, M.; Jafarpour, B.
2012-12-01
Data limitation and heterogeneity of the geologic formations introduce significant uncertainty in predicting the related flow and transport processes in these environments. Fluid flow and displacement behavior in subsurface systems is mainly controlled by the structural connectivity models that create preferential flow pathways (or barriers). The connectivity of extreme geologic features strongly constrains the evolution of the related flow and transport processes in subsurface formations. Therefore, characterization of the geologic continuity and facies connectivity is critical for reliable prediction of the flow and transport behavior. The goal of this study is to develop a robust and geologically consistent framework for solving large-scale nonlinear subsurface characterization inverse problems under uncertainty about geologic continuity and structural connectivity. We formulate a novel inverse modeling approach by adopting a sparse reconstruction perspective, which involves two major components: 1) sparse description of hydraulic property distribution under significant uncertainty in structural connectivity and 2) formulation of an effective sparsity-promoting inversion method that is robust against prior model uncertainty. To account for the significant variability in the structural connectivity, we use, as prior, multiple distinct connectivity models. For sparse/compact representation of high-dimensional hydraulic property maps, we investigate two methods. In one approach, we apply the principal component analysis (PCA) to each prior connectivity model individually and combine the resulting leading components from each model to form a diverse geologic dictionary. Alternatively, we combine many realizations of the hydraulic properties from different prior connectivity models and use them to generate a diverse training dataset. We use the training dataset with a sparsifying transform, such as K-SVD, to construct a sparse geologic dictionary that is robust to
Genetic algorithm for the inverse problem in synthesis of fiber gratings
NASA Astrophysics Data System (ADS)
Skaar, Johannes; Risvik, Knut M.
1998-06-01
A new method for the synthesis of fiber gratings with advanced characteristics is proposed. The method is based on an optimizing genetic algorithm and facilitates the task of weighting the different requirements on the filter spectrum. A classical problem in applied physics and engineering is the inverse problem. An example of such a problem is to determine the fiber grating index modulation profile corresponding to a given reflection spectrum. This is not a trivial problem, and a variety of synthesis algorithms have been proposed. For weak gratings, the synthesis problem of fiber gratings reduces to an inverse Fourier transform of the reflection coefficient. This is known as the first-order Born approximation, and it applies only to gratings for which the reflectivity is small. Another solution to this problem was found by Song and Shin, who solved the coupled Gel'fand-Levitan-Marchenko (GLM) integral equations that appear in the inverse scattering theory of quantum mechanics. Their method is exact, but it is restricted to reflection coefficients that can be expressed as rational functions. An iterative solution to the GLM equations was found by Peral et al., yielding smoother coupling coefficients than the exact method. The algorithm converges relatively fast and gives satisfying results even for high-reflectivity gratings. However, when specifying ideal, unachievable filter responses, it is desirable to have a weighting mechanism that makes it easier to weight the different requirements. For example, when synthesizing an optical bandpass filter, one may be interested in weighting linear phase more than sharp peaks, because the dispersion may be a more critical parameter. The iterative GLM method does not support such a mechanism in a satisfactory way.
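The first-order Born approximation mentioned above can be demonstrated numerically: for a weak grating, the coupling-coefficient profile along the fiber is (up to constants and variable scalings) the inverse Fourier transform of the target reflection coefficient over detuning. The flat-top target below is a toy spectrum, not one from the paper.

```python
import numpy as np

# Born approximation: q(z) ~ inverse Fourier transform of r(delta),
# valid only when the target reflectivity is small.
n = 512
delta = np.fft.fftfreq(n, d=1.0) * 2.0 * np.pi     # detuning grid
r = np.where(np.abs(delta) < 0.5, 0.1, 0.0)        # weak flat-top target
q = np.fft.ifft(r)                                 # ~ coupling profile q(z)
q = np.fft.fftshift(q)                             # center the profile
# A real, even target spectrum gives a real, sinc-like modulation
# envelope peaked at the grating center:
print(np.argmax(np.abs(q)) == n // 2)  # True
```

Note the symmetry: a flat-top (ideal bandpass) reflection spectrum maps to a sinc-like index-modulation envelope, which is why apodization trade-offs arise in grating design.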
Solution of the stationary 2D inverse heat conduction problem by the Trefftz method
NASA Astrophysics Data System (ADS)
Cialkowski, Michael J.; Frąckowiak, Andrzej
2002-05-01
The paper presents an analysis of a solution of the Laplace equation with the use of FEM harmonic basic functions. The essence of the problem is to present an approximate solution based on possibly large finite elements. The introduction of harmonic functions allows one to reduce the order of numerical integration as compared to the classical Finite Element Method. Numerical calculations confirm the good efficiency of basic harmonic functions for resolving direct and inverse problems of stationary heat conduction. The further part of the paper shows the use of basic harmonic functions for solving Poisson's equation and for drawing up a complete system of biharmonic and polyharmonic basic functions.
On parameterization of the inverse problem for estimating aquifer properties using tracer data
Kowalsky, M. B.; Finsterle, Stefan A.; Williams, Kenneth H.; Murray, Christopher J.; Commer, Michael; Newcomer, Darrell R.; Englert, Andreas L.; Steefel, Carl I.; Hubbard, Susan
2012-06-11
We consider a field-scale tracer experiment conducted in 2007 in a shallow uranium-contaminated aquifer at Rifle, Colorado. In developing a reliable approach for inferring hydrological properties at the site through inverse modeling of the tracer data, decisions made on how to parameterize heterogeneity (i.e., how to represent a heterogeneous distribution using a limited number of parameters that are amenable to estimation) are of paramount importance. We present an approach for hydrological inversion of the tracer data and explore, using a 2D synthetic example at first, how parameterization affects the solution, and how additional characterization data could be incorporated to reduce uncertainty. Specifically, we examine sensitivity of the results to the configuration of pilot points used in a geostatistical parameterization, and to the sampling frequency and measurement error of the concentration data. A reliable solution of the inverse problem is found when the pilot point configuration is carefully implemented. In addition, we examine the use of a zonation parameterization, in which the geometry of the geological facies is known (e.g., from geophysical data or core data), to reduce the non-uniqueness of the solution and the number of unknown parameters to be estimated. When zonation information is only available for a limited region, special treatment in the remainder of the model is necessary, such as using a geostatistical parameterization. Finally, inversion of the actual field data is performed using 2D and 3D models, and results are compared with slug test data.
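The pilot-point parameterization discussed above maps a handful of estimable values to a full heterogeneous field through geostatistical interpolation. The sketch below is a hedged stand-in: it uses a simple Gaussian-kernel interpolant rather than the kriging actually used with pilot points, and the locations and log-permeability values are invented for illustration.

```python
import numpy as np

def pilot_point_field(pilot_x, pilot_v, grid_x, corr_len=10.0):
    """Interpolate pilot-point values onto a grid with a Gaussian kernel,
    a stand-in for the geostatistical (kriging) interpolation that turns
    a few estimable parameters into a full heterogeneous field."""
    def k(a, b):
        return np.exp(-((a[:, None] - b[None, :]) / corr_len) ** 2)
    w = np.linalg.solve(k(pilot_x, pilot_x), pilot_v)  # kernel weights
    return k(grid_x, pilot_x) @ w                      # field on the grid

pilot_x = np.array([5.0, 25.0, 45.0, 70.0, 90.0])      # pilot locations
pilot_v = np.array([-1.0, 0.5, 1.2, -0.3, 0.8])        # log-K at pilots
field = pilot_point_field(pilot_x, pilot_v, np.arange(100.0))
print(field.shape)  # (100,)
```

The inverse problem then estimates only the five pilot values, which is exactly why the configuration of the pilot points matters so much for the quality of the solution.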
NASA Astrophysics Data System (ADS)
Dong, Li; Wijesinghe, Philip; Dantuono, James T.; Sampson, David D.; Munro, Peter R. T.; Kennedy, Brendan F.; Oberai, Assad A.
2016-03-01
Quantitative elasticity imaging, which retrieves elastic modulus maps from tissue, is preferred to qualitative strain imaging for acquiring system- and operator-independent images and for longitudinal and multi-site diagnoses. Quantitative elasticity imaging has already been demonstrated in optical elastography by relating surface-acoustic and shear wave speed to Young's modulus via a simple algebraic relationship. Such approaches assume largely homogeneous samples and neglect the effect of boundary conditions. We present a general approach to quantitative elasticity imaging based upon the solution of the inverse elasticity problem using an iterative technique and apply it to compression optical coherence elastography. The inverse problem is one of finding the distribution of Young's modulus within a sample that, in response to an applied load and given displacement and traction boundary conditions, produces a displacement field matching the one measured in experiment. Key to our solution of the inverse elasticity problem is the use of the adjoint equations, which allow very efficient evaluation of the gradient of the objective function to be minimized with respect to the unknown values of Young's modulus within the sample. Although we present the approach for the case of linear elastic, isotropic, incompressible solids, this method can be employed for arbitrarily complex mechanical models. We present the details of the method and quantitative elastograms of phantoms and tissues. We demonstrate that the inverse approach can decouple the true distribution of Young's modulus from the artefacts produced by mechanical tissue heterogeneity, which are often evident in techniques that employ first-order algebraic relationships.
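The computational payoff of the adjoint equations can be shown on a toy discrete analogue. For a spring chain K(k)u = f, the gradient of the misfit J = ½‖u − u*‖² with respect to every stiffness k_j costs just one extra linear solve (the adjoint), instead of one solve per parameter. This is a minimal sketch of the adjoint trick only, not the paper's elasticity solver.

```python
import numpy as np

def assemble(k):
    """Stiffness matrix of a spring chain with the left end fixed."""
    n = len(k)
    K = np.zeros((n, n))
    K[0, 0] = k[0]
    for j in range(1, n):
        K[j - 1, j - 1] += k[j]
        K[j, j] += k[j]
        K[j - 1, j] -= k[j]
        K[j, j - 1] -= k[j]
    return K

def misfit_and_gradient(k, f, u_star):
    """Misfit J = 0.5||u - u*||^2 and its full gradient via ONE adjoint solve."""
    u = np.linalg.solve(assemble(k), f)
    lam = np.linalg.solve(assemble(k), u - u_star)   # adjoint solve (K symmetric)
    grad = np.empty_like(k)
    grad[0] = -lam[0] * u[0]                         # element touching the fixed end
    for j in range(1, len(k)):
        grad[j] = -(lam[j - 1] - lam[j]) * (u[j - 1] - u[j])
    return 0.5 * np.sum((u - u_star) ** 2), grad

k_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
f = np.zeros(5); f[-1] = 1.0                         # pull on the free end
u_star = np.linalg.solve(assemble(k_true), f)        # "measured" displacements
J, g = misfit_and_gradient(np.ones(5), f, u_star)    # gradient at a trial k
```

An iterative inversion then feeds `g` to any gradient-based optimizer; the same structure carries over to the FEM elasticity operator, where the one-adjoint-solve property is what makes the method scale.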
NASA Astrophysics Data System (ADS)
Mu, Xiyu; Cheng, Hao; Liu, Guoqing
2016-04-01
It is often difficult to prescribe the exact boundary condition in practical uses of the variational method, yet the Euler equation derived from the variational method cannot be solved without a boundary condition. However, in some applications, such as the assimilation of remote sensing data, values can easily be obtained in the inner region of the domain. Since the solution of an elliptic partial differential equation depends continuously on its boundary condition, the boundary condition can be retrieved from partial solutions in the inner area. In this paper, the variational problem of remote sensing data assimilation within a circular area is first established. The Klein-Gordon elliptic equation is derived from the Euler equation of the variational problem with an assumed boundary condition. Secondly, a computer-friendly Green function is constructed for the Dirichlet problem of the two-dimensional Klein-Gordon equation, with the formal solution given by Green's formula. Thirdly, boundary values are retrieved by solving an optimal problem constructed from the smoothness of the boundary value function and the best approximation between the formal solutions and high-accuracy measurements in the interior of the domain. Finally, the assimilation problem is solved by substituting the retrieved boundary values into the Klein-Gordon equation. This is a type of inverse problem in mathematics. The advantage of our method is that it needs no assumptions about the boundary condition, thus alleviating the error introduced by artificial boundary conditions in earlier variational data-fusion methods.
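For concreteness, the Dirichlet problem and Green-formula representation referred to above can be written out. Sign conventions vary between authors, so this is one standard form rather than the paper's exact notation; for the Dirichlet problem the kernel G must additionally vanish on the boundary, and the free-space kernel of the 2D Klein-Gordon (modified Helmholtz) operator involves the modified Bessel function K₀.

```latex
% 2D Klein-Gordon (modified Helmholtz) Dirichlet problem on a disk \Omega:
\Delta u(\mathbf{x}) - k^2\, u(\mathbf{x}) = f(\mathbf{x}), \quad \mathbf{x}\in\Omega,
\qquad u = g \ \text{ on } \partial\Omega .

% Formal solution by Green's formula (G is the Dirichlet Green function):
u(\mathbf{x}) = \int_{\Omega} G(\mathbf{x},\mathbf{y})\, f(\mathbf{y})\, d\mathbf{y}
 \;+\; \oint_{\partial\Omega} g(\mathbf{y})\,
 \frac{\partial G}{\partial n_{\mathbf{y}}}(\mathbf{x},\mathbf{y})\, ds_{\mathbf{y}} .

% Free-space kernel from which G is built:
G_0(\mathbf{x},\mathbf{y}) = -\frac{1}{2\pi}\, K_0\!\bigl(k\,|\mathbf{x}-\mathbf{y}|\bigr).
```

The retrieval step then treats the boundary values g as unknowns and fits the formal solution to the high-accuracy interior measurements.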
On the Optimization of the Inverse Problem for Bouguer Gravity Anomalies
NASA Astrophysics Data System (ADS)
Zamora, A.; Velasco, A. A.; Gutierrez, A. E.
2013-12-01
Inverse modeling of gravity data presents a very ill-posed mathematical problem, given that solutions are non-unique and small changes in parameters (position and density contrast of an anomalous body) can strongly affect the resulting Earth model. Although 2- and 3-dimensional gravitational inverse problems can determine the structural composition of the Earth, traditional inverse modeling approaches can be very unstable. A model of the shallow substructure is based on the density contrasts of anomalous bodies -with different densities with respect to a uniform region- or the boundaries between layers in a layered environment. We implement an interior-point constrained optimization technique to improve the 2-D model of the Earth's structure through the use of known density constraints for transitional areas obtained from previous geological observations (e.g., core samples, seismic surveys). The proposed technique is applied to both synthetic data and gravitational data previously obtained from the Rio Grande Rift and the Cooper Flat Mine region located in Sierra County, New Mexico. We find that the models obtained from this optimization scheme improve, since discarding geologically unacceptable models that would otherwise meet the required geophysical properties reduces the solution space.
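The effect of density-contrast bounds can be sketched with a toy linear inversion. The paper uses an interior-point method; the projected-gradient loop below is a simpler stand-in for bound-constrained misfit minimization, and the forward operator and bounds are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(30, 20))           # hypothetical linear forward operator
m_true = np.clip(rng.normal(size=20), -0.5, 0.5)
d = G @ m_true                          # synthetic Bouguer anomaly data
lo, hi = -0.5, 0.5                      # density-contrast bounds (e.g., g/cm^3)

m = np.zeros(20)
step = 1.0 / np.linalg.norm(G, 2) ** 2  # safe step for 0.5*||Gm - d||^2
for _ in range(500):
    m = m - step * G.T @ (G @ m - d)    # gradient step on the data misfit
    m = np.clip(m, lo, hi)              # project onto the bound constraints
```

Each iterate stays inside the geologically admissible box, which is the mechanism by which the constraints shrink the solution space.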
Model-based elastography: a survey of approaches to the inverse elasticity problem
Doyley, M M
2012-01-01
Elastography is emerging as an imaging modality that can distinguish normal from diseased tissues via their biomechanical properties. This article reviews current approaches to elastography in three areas — quasi-static, harmonic, and transient — and describes inversion schemes for each elastographic imaging approach. Approaches include first-order approximation methods; direct and iterative inversion schemes for linear elastic, isotropic materials; and advanced reconstruction methods for recovering parameters that characterize complex mechanical behavior. The paper's objective is to document efforts to develop elastography within the framework of solving an inverse problem, so that elastography may provide reliable estimates of shear modulus and other mechanical parameters. We discuss issues that must be addressed if model-based elastography is to become the prevailing approach to quasi-static, harmonic, and transient elastography: (1) developing practical techniques to transform the ill-posed problem into a well-posed one; (2) devising better forward models to capture the transient behavior of soft tissue; and (3) developing better test procedures to evaluate the performance of modulus elastograms. PMID:22222839
Method of Minimax Optimization in the Coefficient Inverse Heat-Conduction Problem
NASA Astrophysics Data System (ADS)
Diligenskaya, A. N.; Rapoport, É. Ya.
2016-07-01
Consideration has been given to the inverse problem of identifying a temperature-dependent thermal-conductivity coefficient. The problem was formulated in an extremum statement as a search for a quantity considered as the optimum control of an object with distributed parameters, described by a nonlinear homogeneous spatially one-dimensional Fourier partial differential equation with boundary conditions of the second kind. As the optimality criterion, the authors used the error (minimized on the time interval of observation) of uniform approximation of the temperature computed on the object's model, at an assigned point of the segment of variation of the spatial variable, to its directly measured value. Pre-parametrization of the sought control action, which a priori fixes its description up to the assignment of representation parameters in the class of polynomial temperature functions, reduces the problem under study to a problem of parametric optimization. To solve the formulated problem, the authors used an analytical minimax-optimization method that takes account of the alternance properties of the sought optimum solutions, based on which the computation of the optimum values of the sought parameters reduces to a system of equations (closed for these unknowns) fixing the minimax deviations of the calculated temperature values from those observed on the time interval of identification. The obtained results confirm the efficiency of the proposed method for a certain range of applied problems. The authors have also studied the influence of the coordinate of the temperature-measurement point on the accuracy of solution of the inverse problem.
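The minimax (uniform-approximation) criterion above differs from least squares in that it penalizes the worst-case deviation over the observation interval. The toy below is only an illustration of that criterion: it grid-searches the two parameters of a candidate law k(T) = a + bT against synthetic noiseless observations, nothing like the paper's PDE-constrained formulation.

```python
import numpy as np

# Parametric minimax fitting: pick (a, b) minimizing the MAXIMUM absolute
# deviation, rather than the usual sum of squared errors.
T = np.linspace(0.0, 100.0, 21)
k_obs = 2.0 + 0.1 * T                       # synthetic "observed" values
best = min(
    ((a, b) for a in np.linspace(1.5, 2.5, 41)
            for b in np.linspace(0.05, 0.15, 41)),
    key=lambda p: np.max(np.abs(p[0] + p[1] * T - k_obs)),
)
# best is the grid point nearest the true parameters (2.0, 0.1)
```

In the paper's method the same criterion is handled analytically via the alternance properties of the optimum, rather than by brute-force search.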
NASA Astrophysics Data System (ADS)
Lee, J. H.; Kitanidis, P. K.
2014-12-01
The geostatistical approach (GA) to inversion has been applied to many engineering applications to estimate unknown parameter functions and quantify the uncertainty in estimation. Thanks to recent advances in sensor technology, large-scale/joint inversions have become more common, and the implementation of the traditional GA algorithm would require thousands of expensive numerical simulation runs, which would be computationally infeasible. To overcome the computational challenges, we present the Principal Component Geostatistical Approach (PCGA), which makes use of the leading principal components of the prior information to avoid expensive sensitivity computations and obtain an approximate GA solution and its uncertainty with a few hundred numerical simulation runs. As we show in this presentation, the PCGA estimate is close to, even almost the same as, the estimate obtained from the full-model GA implementation, while the computation time can be reduced by an order of 10 or more in most practical cases. Furthermore, our method is "black-box" in the sense that any numerical simulation software can be linked to PCGA to perform the geostatistical inversion. This enables a hassle-free implementation of GA for multi-physics problems and joint inversion with different types of measurements, such as hydrologic, chemical, and geophysical data, obviating the need to explicitly compute the sensitivity of measurements through expensive coupled numerical simulations. Lastly, the PCGA is easily implemented to run the numerical simulations in parallel, thus taking advantage of high-performance computing environments. We show the effectiveness and efficiency of our method with several examples, such as 3-D transient hydraulic tomography, joint inversion of head and tracer data, and geochemical heterogeneity identification.
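The low-rank idea at the heart of PCGA can be shown directly: replace the full prior covariance Q by its leading principal components, so that sensitivities are only needed along a few directions instead of one per unknown. The exponential covariance below is a generic choice for illustration, not the covariance from any of the paper's examples.

```python
import numpy as np

n = 200
x = np.arange(n)
Q = np.exp(-np.abs(x[:, None] - x[None, :]) / 20.0)   # exponential prior covariance
vals, vecs = np.linalg.eigh(Q)                        # eigenvalues in ascending order
k = 20
Zk = vecs[:, -k:] * np.sqrt(vals[-k:])                # rank-k factor: Q ~ Zk @ Zk.T
err = np.linalg.norm(Q - Zk @ Zk.T) / np.linalg.norm(Q)
print(f"rank-{k} relative error: {err:.3f}")          # small: Q is well approximated
```

With Q ≈ ZₖZₖᵀ, the GA cross-covariance computations reduce to k forward runs on the columns of Zₖ, which is where the hundred-fold savings in simulation runs comes from.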
Manipulation with heterogeneity within a species population formulated as an inverse problem
NASA Astrophysics Data System (ADS)
Horváth, D.; Brutovsky, B.; Kočišová, J.; Šprinc, S.
2010-11-01
Dependence of the evolutionary dynamics on the population’s heterogeneity has been reliably recognized and studied within the frame of evolutionary optimization theory. As the causal relation between the heterogeneity and dynamics of environment has been revealed, the possibility to influence convergence rate of evolutionary processes by purposeful manipulation with environment emerges. For the above purposes we formulate the task as the inverse problem meaning that desired population heterogeneity, quantified by Tsallis information entropy, represents the model’s input and dynamics of environment leading to desired population heterogeneity is looked for. Here the presented abstract model of evolutionary motion within the inverse model of replicating species is case-independent and it is relevant for the broad range of phenomena observed at cellular, ecological, economic and social scales. We envision relevance of the model for anticancer therapy, in which the effort is to circumvent heterogeneity as it typically correlates with the therapy efficiency.
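The Tsallis entropy used above as the heterogeneity measure is straightforward to compute; the population fractions below are invented for illustration.

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1); recovers the
    Shannon entropy in the limit q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                               # 0 * log(0) convention
    if abs(q - 1.0) < 1e-12:
        return -np.sum(p * np.log(p))          # Shannon limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# heterogeneous (uniform) vs. homogeneous 4-type population, q = 2
hetero = tsallis_entropy([0.25, 0.25, 0.25, 0.25], q=2.0)   # 0.75
homo = tsallis_entropy([1.0, 0.0, 0.0, 0.0], q=2.0)         # 0.0
```

The inverse formulation then prescribes a target value of S_q and seeks the environmental dynamics that steer the population toward it.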
Solving the Linear Balance Equation on the Globe as a Generalized Inverse Problem
NASA Technical Reports Server (NTRS)
Lu, Huei-Iin; Robertson, Franklin R.
1999-01-01
A generalized (pseudo) inverse technique was developed to facilitate a better understanding of the numerical effects of tropical singularities inherent in the spectral linear balance equation (LBE). Depending upon the truncation, various levels of determinacy are manifest. The traditional fully determined (FD) systems give rise to a strong response, while the under-determined (UD) systems yield a weak response to the tropical singularities. The over-determined (OD) systems result in a modest response and a large residual in the tropics. The FD and OD systems can alternatively be solved by the iterative method. Differences in the solutions of a UD system exist between the inverse technique and the iterative method owing to the non-uniqueness of the problem. A realistic balanced wind was obtained by solving the principal components of the spectral LBE in terms of vorticity at an intermediate resolution. Improved solutions were achieved by including the singular-component solutions that best fit the observed wind data.
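The FD/OD/UD distinction mirrors the three cases of a linear system A x = b, and the pseudo-inverse handles all of them uniformly. A small generic demonstration (random matrices, not the spectral LBE):

```python
import numpy as np

# square A: fully determined; tall A: over-determined (least-squares residual);
# wide A: under-determined (minimum-norm solution via the pseudo-inverse).
rng = np.random.default_rng(2)
A_od = rng.normal(size=(8, 4))             # over-determined
A_ud = rng.normal(size=(4, 8))             # under-determined
b_od = rng.normal(size=8)
b_ud = rng.normal(size=4)

x_od = np.linalg.pinv(A_od) @ b_od         # least-squares solution
x_ud = np.linalg.pinv(A_ud) @ b_ud         # minimum-norm exact solution

r = b_od - A_od @ x_od                     # OD residual lies outside range(A)
print(np.allclose(A_od.T @ r, 0.0, atol=1e-10))   # True: residual orthogonal
print(np.allclose(A_ud @ x_ud, b_ud))             # True: UD case fits exactly
```

The OD residual (here orthogonal to the column space) is the analogue of the "large residual in the tropics," while the minimum-norm UD solution is one of infinitely many, which is why the inverse technique and the iterative method can disagree there.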
Electrostatic point charge fitting as an inverse problem: Revealing the underlying ill-conditioning
Ivanov, Maxim V.; Talipov, Marat R.; Timerghazin, Qadir K.
2015-10-07
The atom-centered point charge (PC) model of molecular electrostatics—a major workhorse of atomistic biomolecular simulations—is usually parameterized by least-squares (LS) fitting of the point charge values to a reference electrostatic potential, a procedure that suffers from numerical instabilities due to the ill-conditioned nature of the LS problem. To reveal the origins of this ill-conditioning, we start with a general treatment of the point charge fitting problem as an inverse problem and construct an analytical model with the point charges spherically arranged according to Lebedev quadrature, which is naturally suited for the inverse electrostatic problem. This analytical model is contrasted with the atom-centered point-charge model, which can be viewed as an irregular quadrature poorly suited for the problem. This analysis shows that the numerical problems of point charge fitting are due to the decay of the curvatures corresponding to the eigenvectors of the LS sum Hessian matrix. In part, this ill-conditioning is intrinsic to the problem and is related to the decreasing electrostatic contribution of the higher multipole moments, which, in the case of the Lebedev grid model, are directly associated with the Hessian eigenvectors. For the atom-centered model, this association breaks down beyond the first few eigenvectors related to the high-curvature monopole and dipole terms; this leads to an even wider spread of the Hessian curvature values. Using these insights, it is possible to alleviate the ill-conditioning of the LS point-charge fitting without introducing external restraints and/or constraints. Also, as the analytical Lebedev grid PC model proposed here can reproduce multipole moments up to a given rank, it may provide a promising alternative to including explicit multipole terms in a force field.
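The multipole origin of the ill-conditioning is easy to reproduce in 2D: when the charge sites sit well inside the surface of reference potential points, the singular values of the design matrix decay roughly like successive multipole contributions, i.e., like powers of the radius ratio. The circular geometry below is a toy, not the Lebedev construction.

```python
import numpy as np

n_q, n_p = 8, 64
tq = 2.0 * np.pi * np.arange(n_q) / n_q
tp = 2.0 * np.pi * np.arange(n_p) / n_p
qx, qy = 0.5 * np.cos(tq), 0.5 * np.sin(tq)      # charge sites, radius 0.5
px, py = 5.0 * np.cos(tp), 5.0 * np.sin(tp)      # potential points, radius 5.0
# design matrix: potential at point i from a unit charge at site j
A = 1.0 / np.sqrt((px[:, None] - qx) ** 2 + (py[:, None] - qy) ** 2)
s = np.linalg.svd(A, compute_uv=False)
print(s[0] / s[-1])   # huge condition number: higher multipoles barely register
```

Each successive singular value is suppressed by roughly another factor of (0.5/5.0), which is the numerical face of the "decreasing electrostatic contribution of the higher multipole moments."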
FOREWORD: 2nd International Workshop on New Computational Methods for Inverse Problems (NCMIP 2012)
NASA Astrophysics Data System (ADS)
Blanc-Féraud, Laure; Joubert, Pierre-Yves
2012-09-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 2nd International Workshop on New Computational Methods for Inverse Problems, (NCMIP 2012). This workshop took place at Ecole Normale Supérieure de Cachan, in Cachan, France, on 15 May 2012, at the initiative of Institut Farman. The first edition of NCMIP also took place in Cachan, France, within the scope of the ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/). The NCMIP Workshop focused on recent advances in the resolution of inverse problems. Indeed inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition
FOREWORD: 3rd International Workshop on New Computational Methods for Inverse Problems (NCMIP 2013)
NASA Astrophysics Data System (ADS)
Blanc-Féraud, Laure; Joubert, Pierre-Yves
2013-10-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 3rd International Workshop on New Computational Methods for Inverse Problems, NCMIP 2013 (http://www.farman.ens-cachan.fr/NCMIP_2013.html). This workshop took place at Ecole Normale Supérieure de Cachan, in Cachan, France, on 22 May 2013, at the initiative of Institut Farman. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of the ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 (http://www.farman.ens-cachan.fr/NCMIP_2012.html). The NCMIP Workshop focused on recent advances in the resolution of inverse problems. Indeed inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational
Eskin, G.
2008-02-15
We consider the inverse boundary value problem for the Schroedinger operator with time-dependent electromagnetic potentials in domains with obstacles. We extend the results of the author's works [Inverse Probl. 19, 49 (2003); 19, 985 (2003); 20, 1497 (2004)] to the case of time-dependent potentials. We relate our results to the Aharonov-Bohm effect caused by magnetic and electric fluxes.
The inverse problem of an impenetrable sound-hard body in acoustic scattering
NASA Astrophysics Data System (ADS)
Olshansky, Yaakov; Turkel, Eli
2008-11-01
We study the inverse problem of recovering the scatterer shape from the far-field pattern (FFP) of the scattered wave in the presence of noise. This problem is ill-posed and is usually addressed via regularization. Instead, we propose a direct approach to denoising the FFP using wavelet techniques. We are interested in methods that handle scatterers of general shape that may be described by a finite number of parameters. To study the effectiveness of the technique we concentrate on simple bodies such as ellipses, for which the analytic solution to the forward scattering problem is known. The shape parameters are found with a least-squares error estimator. Two cases are considered, with the FFP corrupted by Gaussian noise and/or computational error from a finite element method. We also consider the case where only partial data is known in the far field.
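A least-squares shape estimator of the kind mentioned above is particularly simple for an ellipse, because its polar radius satisfies 1/r(θ)² = cos²θ/a² + sin²θ/b², which is linear in (1/a², 1/b²). The noiseless sketch below is only a toy: it fits boundary radii directly rather than working from the far-field pattern.

```python
import numpy as np

a_true, b_true = 2.0, 1.0
theta = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
# polar form of an origin-centered ellipse: r = ab / sqrt(b^2 cos^2 + a^2 sin^2)
r = a_true * b_true / np.sqrt((b_true * np.cos(theta)) ** 2
                              + (a_true * np.sin(theta)) ** 2)
M = np.column_stack([np.cos(theta) ** 2, np.sin(theta) ** 2])
coef, *_ = np.linalg.lstsq(M, 1.0 / r ** 2, rcond=None)   # linear LS fit
a_est, b_est = 1.0 / np.sqrt(coef)                         # recover semi-axes
```

With noisy data the same linear fit is used, and the denoising step (wavelets, in the paper) simply improves the samples fed to it.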
A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem
Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.
2013-01-01
Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy
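The greedy-pursuit family that SPIGH belongs to can be illustrated with plain orthogonal matching pursuit (OMP). This is a toy cousin of the paper's algorithm, not SPIGH itself; the orthonormal dictionary below sidesteps the high spatial correlation that makes the real MEG problem hard, so greedy selection provably recovers the sparse source pattern here.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms, refitting by
    least squares on the selected support after each pick."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))  # best-correlated atom
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)          # refit on support
        residual = y - sub @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
A, _ = np.linalg.qr(rng.normal(size=(32, 32)))   # orthonormal toy dictionary
x_true = np.zeros(32)
x_true[[3, 11, 27]] = [1.0, -2.0, 0.5]           # sparse "source" pattern
x_hat = omp(A, A @ x_true, k=3)
print(np.allclose(x_hat, x_true))   # True: exact recovery in this easy setting
```

Subspace pursuit refines this scheme by selecting and pruning whole candidate supports per iteration, which is what gives it robustness when atoms are strongly correlated, as in MEG lead fields.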
The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions
NASA Astrophysics Data System (ADS)
Loaiciga, Hugo A.; Mariño, Miguel A.
1987-01-01
The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared between themselves and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors as compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and perform hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases, are also discussed. A simulation study is used to evaluate the methods developed in this study.
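The mechanics of the two-stage least squares estimator mentioned above can be shown on a scalar toy problem: regress the endogenous regressor on the instruments first, then run ordinary least squares with the fitted values. The data below are a noiseless invention for illustration, not aquifer data.

```python
import numpy as np

def tsls(y, x, z):
    """Two-stage least squares with one regressor and one instrument."""
    Z = np.column_stack([np.ones_like(z), z])
    # stage 1: project the regressor onto the instrument space
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    X = np.column_stack([np.ones_like(x_hat), x_hat])
    # stage 2: ordinary least squares with the projected regressor
    return np.linalg.lstsq(X, y, rcond=None)[0]

z = np.linspace(0.0, 1.0, 50)          # instrument
x = 2.0 * z + 0.3                      # regressor driven by the instrument
y = 3.0 * x + 1.0                      # structural relation (no noise here)
beta0, beta1 = tsls(y, x, z)
print(round(beta1, 6))  # 3.0 in this noiseless toy
```

In the aquifer setting the projection removes the correlation between regressors (heads) and equation errors; three-stage least squares additionally exploits the cross-equation error covariance.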
SimPEG: An open-source framework for geophysical simulations and inverse problems
NASA Astrophysics Data System (ADS)
Cockett, R.; Kang, S.; Heagy, L.
2014-12-01
Geophysical surveys are powerful tools for obtaining information about the subsurface. Inverse modelling provides a mathematical framework for constructing a model of physical property distributions that are consistent with the data collected by these surveys. The geosciences are increasingly moving towards the integration of geological, geophysical, and hydrological information to better characterize the subsurface. This integration must span disciplines and is not only challenging scientifically, but the inconsistencies between conventions often make implementations complicated, non-reproducible, or inefficient. We have developed an open source software package for Simulation and Parameter Estimation in Geophysics (SimPEG), which provides a generalized framework for solving geophysical forward and inverse problems. SimPEG is written entirely in Python with minimal dependencies in the hopes that it can be used both as a research tool and for education. SimPEG includes finite volume discretizations on structured and unstructured meshes, interfaces to standard numerical solver packages, convex optimization algorithms, model parameterizations, and tailored visualization routines. The framework is modular and object-oriented, which promotes real time experimentation and combination of geophysical problems and inversion methodologies. In this presentation, we will highlight a few geophysical examples, including direct-current resistivity and electromagnetics, and discuss some of the challenges and successes we encountered in developing a flexible and extensible framework. Throughout development of SimPEG we have focused on simplicity, usability, documentation, and extensive testing. By embracing a fully open source development paradigm, we hope to encourage reproducible research, cooperation, and communication to help tackle some of the inherently multidisciplinary problems that face integrated geophysical methods.
A 2D forward and inverse code for streaming potential problems
NASA Astrophysics Data System (ADS)
Soueid Ahmed, A.; Jardani, A.; Revil, A.
2013-12-01
The self-potential method corresponds to the passive measurement of the electrical field in response to the occurrence of natural sources of current in the ground. One of these sources corresponds to the streaming current associated with the flow of the groundwater. We can therefore apply the self-potential method to recover non-intrusively some information regarding the groundwater flow. We first solve the forward problem starting with the solution of the groundwater flow problem, then computing the source current density, and finally solving a Poisson equation for the electrical potential. We use the finite-element method to solve the relevant partial differential equations. In order to reduce the number of (petrophysical) model parameters required to solve the forward problem, we introduced an effective charge density tensor of the pore water, which can be determined directly from the permeability tensor for neutral pore waters. The second aspect of our work concerns the inversion of the self-potential data using Tikhonov regularization with smoothness and weighting depth constraints. This approach accounts for the distribution of the electrical resistivity, which can be independently and approximately determined from electrical resistivity tomography. A numerical code, SP2DINV, has been implemented in Matlab to perform both the forward and inverse modeling. Three synthetic case studies are discussed.
SP2DINV: A 2D forward and inverse code for streaming potential problems
NASA Astrophysics Data System (ADS)
Soueid Ahmed, A.; Jardani, A.; Revil, A.; Dupont, J. P.
2013-09-01
The self-potential method corresponds to the passive measurement of the electrical field in response to the occurrence of natural sources of current in the ground. One of these sources corresponds to the streaming current associated with the flow of the ground water. We can therefore apply the self-potential method to recover non-intrusively some information regarding the ground water flow. We first solve the forward problem starting with the solution of the ground water flow problem, then computing the source current density, and finally solving a Poisson equation for the electrical potential. We use the finite-element method to solve the relevant partial differential equations. In order to reduce the number of (petrophysical) model parameters required to solve the forward problem, we introduced an effective charge density tensor of the pore water, which can be determined directly from the permeability tensor for neutral pore waters. The second aspect of our work concerns the inversion of the self-potential data using Tikhonov regularization with smoothness and weighting depth constraints. This approach accounts for the distribution of the electrical resistivity, which can be independently and approximately determined from electrical resistivity tomography. A numerical code, SP2DINV, has been implemented in Matlab to perform both the forward and inverse modeling. Three synthetic case studies are discussed.
NASA Astrophysics Data System (ADS)
Sommariva, Sara; Sorrentino, Alberto
2014-11-01
We discuss the use of a recent class of sequential Monte Carlo methods for solving inverse problems characterized by a semi-linear structure, i.e. where the data depend linearly on a subset of variables and nonlinearly on the remaining ones. In this type of problem, under proper Gaussian assumptions one can marginalize the linear variables. This means that the Monte Carlo procedure needs only to be applied to the nonlinear variables, while the linear ones can be treated analytically; as a result, the Monte Carlo variance and/or the computational cost decrease. We use this approach to solve the inverse problem of magnetoencephalography, with a multi-dipole model for the sources. Here, data depend nonlinearly on the number of sources and their locations, and depend linearly on their current vectors. The semi-analytic approach enables us to estimate the number of dipoles and their location from a whole time-series, rather than a single time point, while keeping a low computational cost.
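The marginalization of the linear variables can be sketched as follows. This is a minimal illustration under Gaussian assumptions, not the authors' MEG code: the model `y = A x + e` with Gaussian prior on the amplitudes `x` is hypothetical, and in the dipole setting `A` would depend on the nonlinear parameters (number and locations of sources).

```python
import numpy as np

def log_marginal(y, A, sx2, se2):
    """Log-likelihood of data y with the linear amplitudes x integrated out.
    Model (illustrative): y = A x + e, x ~ N(0, sx2*I), e ~ N(0, se2*I),
    so marginally y ~ N(0, sx2 * A A^T + se2 * I)."""
    C = sx2 * (A @ A.T) + se2 * np.eye(len(y))
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + y @ np.linalg.solve(C, y) + len(y) * np.log(2 * np.pi))

# Only the nonlinear parameters (here, the choice of A) need Monte Carlo
# sampling; the amplitudes x never have to be sampled.
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 2))
y = A @ np.array([1.5, -2.0]) + 0.1 * rng.normal(size=30)
```

A particle filter would evaluate `log_marginal` once per candidate nonlinear state, which is what reduces the Monte Carlo variance relative to sampling the linear variables too.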
A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty.
Zhang, Kai; Wang, Zengfei; Zhang, Liming; Yao, Jun; Yan, Xia
2015-01-01
In this paper, we investigate the application of a new method, the Finite Difference and Stochastic Gradient (Hybrid method), for history matching in reservoir models. History matching is a process of solving an inverse problem by calibrating reservoir models to the dynamic behaviour of the reservoir, in which an objective function is formulated based on a Bayesian approach for optimization. The goal of history matching is to identify the minimum value of an objective function that expresses the misfit between the predicted and measured data of a reservoir. To address the optimization problem, we present a novel application using a combination of the stochastic gradient and finite difference methods for solving inverse problems. The optimization is constrained by a linear equation that contains the reservoir parameters. We reformulate the reservoir model's parameters and dynamic data by operating on the objective function, whose approximate gradient can guarantee convergence. At each iteration step, we compare the magnitudes of the components of the stochastic gradient to identify its relatively 'important' elements, substitute those elements with values from the finite difference method to form a new gradient, and then iterate with this new gradient. Through the application of the Hybrid method, we efficiently and accurately optimize the objective function. We present a number of numerical simulations in this paper showing that the method is accurate and computationally efficient.
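The gradient hybridization described above can be sketched roughly as follows. This is a toy illustration, not the authors' implementation: the simultaneous-perturbation estimator used for the stochastic gradient, the refinement count `k`, and the fixed step size are all assumptions.

```python
import numpy as np

def hybrid_gradient(f, x, h=1e-4, k=2, rng=None):
    """Stochastic (simultaneous-perturbation) gradient estimate, with the
    k largest-magnitude components refined by central finite differences."""
    rng = rng or np.random.default_rng(0)
    delta = rng.choice([-1.0, 1.0], size=x.size)
    g = (f(x + h * delta) - f(x - h * delta)) / (2 * h) * delta
    for i in np.argsort(np.abs(g))[-k:]:
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)   # exact-ish partial derivative
    return g

def minimize(f, x0, lr=0.1, iters=200):
    """Plain gradient descent driven by the hybrid gradient estimate."""
    x = x0.copy()
    rng = np.random.default_rng(1)
    for _ in range(iters):
        x -= lr * hybrid_gradient(f, x, rng=rng)
    return x
```

The stochastic estimate needs only two objective evaluations per step, while the finite-difference refinement spends extra evaluations only on the components judged important.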
Bayesian approach to inverse problems for functions with a variable-index Besov prior
NASA Astrophysics Data System (ADS)
Jia, Junxiong; Peng, Jigen; Gao, Jinghuai
2016-08-01
The Bayesian approach has been adopted to solve inverse problems that reconstruct a function from noisy observations. Prior measures play a key role in the Bayesian method. Hence, many probability measures have been proposed, among which total variation (TV) is a well-known prior measure that can preserve sharp edges. However, it has two drawbacks, the staircasing effect and a lack of the discretization-invariant property. The variable-index TV prior has been proposed and analyzed in the area of image analysis for the former, and the Besov prior has been employed recently for the latter. To overcome both issues together, in this paper, we present a variable-index Besov prior measure, which is a non-Gaussian measure. Some useful properties of this new prior measure have been proven for functions defined on a torus. We have also generalized Bayesian inverse theory in infinite dimensions for our new setting. Finally, this theory has been applied to integer- and fractional-order backward diffusion problems. To the best of our knowledge, this is the first time that the Bayesian approach has been used for the fractional-order backward diffusion problem, which provides an opportunity to quantify its uncertainties.
A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty
Zhang, Kai; Wang, Zengfei; Zhang, Liming; Yao, Jun; Yan, Xia
2015-01-01
In this paper, we investigate the application of a new method, the Finite Difference and Stochastic Gradient (Hybrid method), for history matching in reservoir models. History matching is a process of solving an inverse problem by calibrating reservoir models to the dynamic behaviour of the reservoir, in which an objective function is formulated based on a Bayesian approach for optimization. The goal of history matching is to identify the minimum value of an objective function that expresses the misfit between the predicted and measured data of a reservoir. To address the optimization problem, we present a novel application using a combination of the stochastic gradient and finite difference methods for solving inverse problems. The optimization is constrained by a linear equation that contains the reservoir parameters. We reformulate the reservoir model's parameters and dynamic data by operating on the objective function, whose approximate gradient can guarantee convergence. At each iteration step, we compare the magnitudes of the components of the stochastic gradient to identify its relatively 'important' elements, substitute those elements with values from the finite difference method to form a new gradient, and then iterate with this new gradient. Through the application of the Hybrid method, we efficiently and accurately optimize the objective function. We present a number of numerical simulations in this paper showing that the method is accurate and computationally efficient. PMID:26252392
NASA Astrophysics Data System (ADS)
Blackwell, B. F.
1981-06-01
A very efficient numerical technique has been developed to solve the one-dimensional inverse problem of heat conduction. The Gauss elimination algorithm for solving the tridiagonal system of linear algebraic equations associated with most implicit heat conduction codes is specialized to the inverse problem. Compared to the corresponding direct problem, the additional computation time generally does not exceed 27-36%. The technique can be adapted to existing one-dimensional implicit heat conduction codes with minimal effort and applied to difference equations obtained from finite-difference, finite-element, finite control volume, or similar techniques, provided the difference equations are tridiagonal in form. It is also applicable to the nonlinear case in which thermal properties are temperature-dependent and is valid for one-dimensional radial cylindrical and spherical geometries as well as composite bodies. The calculations reported here were done by modifying a one-dimensional implicit (direct) heat conduction code. Program changes consisted of 13 additional lines of FORTRAN coding.
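The tridiagonal Gauss elimination the abstract builds on is, in generic form, the Thomas algorithm. A minimal sketch (not the author's FORTRAN code) for a system with sub-, main, and super-diagonals `a`, `b`, `c`:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system A x = d, where A has sub-diagonal a,
    main diagonal b, and super-diagonal c (a[0] and c[-1] are unused).
    This is the O(n) elimination used by implicit heat-conduction codes."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                     # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For an implicit heat-conduction step the diagonals come from the spatial discretization (e.g. `b = 1 + 2r`, `a = c = -r` for a uniform grid with mesh ratio `r`); the inverse specialization the paper describes modifies this sweep, which is why the overhead stays small.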
An inverse problem solution to the flow of tracers in naturally fractured reservoirs
Jetzabeth Ramirez S.; Fernando Samaniego V.; Fernando Rodriguez; Jesus Rivera R.
1994-01-20
This paper presents a solution for the inverse problem to the flow of tracers in naturally fractured reservoirs. The models considered include linear flow in vertical fractures, radial flow in horizontal fractures, and cubic block matrix-fracture geometry. The Rosenbrock method for nonlinear regression used in this study allowed the estimation of up to six parameters for the cubic block matrix-fracture geometry. The nonlinear regression for the three cases was carefully tested against synthetic tracer concentration responses affected by random noise, with the objective of simulating step-injection field data as closely as possible. Results were obtained within 95 percent confidence limits. The sensitivity of the inverse problem solution to the main parameters that describe this flow problem was investigated. The main features of the nonlinear regression program used in this study are also discussed. The procedure of this study can be applied to interpret tracer tests in naturally fractured reservoirs, allowing the estimation of fracture and matrix parameters of practical interest (longitudinal fracture dispersivity alpha, matrix porosity phi2, fracture half-width w, matrix block size d, matrix diffusion coefficient D2 and the adsorption constant kd). The methodology of this work offers a practical alternative for tracer flow tests interpretation to other techniques.
NASA Technical Reports Server (NTRS)
Liu, Gao-Lian
1991-01-01
Advances in inverse design and optimization theory in engineering fields in China are presented. Two original approaches, the image-space approach and the variational approach, are discussed in terms of turbomachine aerodynamic inverse design. Other areas of research in turbomachine aerodynamic inverse design include the improved mean-streamline (stream surface) method and optimization theory based on optimal control. Among the additional engineering fields discussed are the following: the inverse problem of heat conduction, free-surface flow, variational cogeneration of optimal grid and flow field, and optimal meshing theory of gears.
NASA Astrophysics Data System (ADS)
Mejer Hansen, Thomas; Skou Cordua, Knud; Caroline Looms, Majken; Mosegaard, Klaus
2013-03-01
From a probabilistic point of view, the solution to an inverse problem can be seen as a combination of independent states of information quantified by probability density functions. Typically, these states of information are provided by a set of observed data and some a priori information on the solution. The combined state of information (i.e. the solution to the inverse problem) is a probability density function typically referred to as the a posteriori probability density function. We present a generic toolbox for Matlab and Gnu Octave called SIPPI that implements a number of methods for solving such probabilistically formulated inverse problems by sampling the a posteriori probability density function. In order to describe the a priori probability density function, we consider both simple Gaussian models and more complex (and realistic) a priori models based on higher order statistics. These a priori models can be used with both linear and non-linear inverse problems. For linear inverse Gaussian problems we make use of least-squares and kriging-based methods to describe the a posteriori probability density function directly. For general non-linear (i.e. non-Gaussian) inverse problems, we make use of the extended Metropolis algorithm to sample the a posteriori probability density function. Together with the extended Metropolis algorithm, we use sequential Gibbs sampling, which allows computationally efficient sampling of complex a priori models. The toolbox can be applied to any inverse problem as long as a way of solving the forward problem is provided. Here we demonstrate the methods and algorithms available in SIPPI. An application of SIPPI, to a tomographic cross-borehole inverse problem, is presented in a second part of this paper.
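At its core, sampling the a posteriori density reduces to a Metropolis accept/reject loop. The sketch below is a minimal random-walk Metropolis sampler in Python, not SIPPI's Matlab implementation; the extended Metropolis algorithm it uses additionally draws proposals from a geostatistical prior via sequential Gibbs resimulation rather than a Gaussian random walk.

```python
import numpy as np

def metropolis(log_post, m0, step, n_iter, rng=None):
    """Random-walk Metropolis sampling of a (log) posterior density."""
    rng = rng or np.random.default_rng(0)
    m = np.array(m0, float)
    lp = log_post(m)
    samples = []
    for _ in range(n_iter):
        prop = m + step * rng.normal(size=m.size)   # Gaussian proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept with min(1, ratio)
            m, lp = prop, lp_prop
        samples.append(m.copy())                    # rejected moves repeat m
    return np.array(samples)
```

With a forward solver `g`, the log posterior would be the sum of a data misfit term, e.g. `-0.5 * ||g(m) - d||^2 / s**2`, and a log prior; this is the only problem-specific ingredient.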
Zhang, Yang; Liu, Guoqiang; Tao, Chunjing; Wang, Hao; He, Wenjing
2009-01-01
The analysis of electromagnetic forward and inverse problems is very important in the process of image reconstruction for magnetoacoustic tomography with magnetic induction (MAT-MI). A new analysis method is introduced in this paper. It dispenses with some unrealistic assumptions on which existing methods rely and can effectively improve the spatial resolution of the image. Besides, it avoids rotating the static magnetic field, which is very difficult to realize in practice, and can therefore considerably advance the development of the MAT-MI technique. To test the validity of the new method, two test models were analyzed, and the effectiveness of the method was demonstrated.
Limitations of polynomial chaos expansions in the Bayesian solution of inverse problems
Lu, Fei; Morzfeld, Matthias; Tu, Xuemin; Chorin, Alexandre J.
2015-02-01
Polynomial chaos expansions are used to reduce the computational cost in the Bayesian solutions of inverse problems by creating a surrogate posterior that can be evaluated inexpensively. We show, by analysis and example, that when the data contain significant information beyond what is assumed in the prior, the surrogate posterior can be very different from the posterior, and the resulting estimates become inaccurate. One can improve the accuracy by adaptively increasing the order of the polynomial chaos, but the cost may increase too fast for this to be cost-effective compared to Monte Carlo sampling without a surrogate posterior.
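A one-dimensional caricature of this failure mode can be sketched as follows. Everything here is hypothetical (the forward map, the noise level, the grid): a cubic surrogate is fit where the prior has mass, and the data are generated deep in the prior tail, where the surrogate extrapolates poorly, so the surrogate MAP estimate drifts from the exact one.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(x)                    # hypothetical nonlinear forward map
xs = rng.normal(size=500)                  # samples from the prior N(0, 1)
surr_coef = np.polyfit(xs, f(xs), deg=3)   # cubic surrogate: good near prior mass
surr = lambda x: np.polyval(surr_coef, x)

# Data generated far in the prior tail (x_true = 3.5)
y, sigma = f(3.5), 0.1
grid = np.linspace(-5.0, 6.0, 2201)

def map_estimate(model):
    """Posterior mode on a grid: Gaussian prior times Gaussian likelihood."""
    logpost = -0.5 * grid**2 - 0.5 * ((y - model(grid)) / sigma) ** 2
    return grid[np.argmax(logpost)]

map_exact, map_surr = map_estimate(f), map_estimate(surr)
```

The surrogate is accurate exactly where the prior says the solution should be, which is why informative data outside that region exposes it.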
Inverse problem for extragalactic transport of ultra-high energy cosmic rays
Ptuskin, V.S.; Rogovaya, S.I.; Zirakashvili, V.N. E-mail: rogovaya@izmiran.ru
2015-03-01
The energy spectra and composition of ultra-high energy cosmic rays change in the course of propagation in the expanding Universe filled with background radiation. We developed a numerical code for the solution of the inverse problem for cosmic-ray transport equations that allows the determination of average source spectra of different nuclei from the cosmic ray spectra observed at the Earth. Employing this approach, the injection spectra of protons and iron nuclei in extragalactic sources are found assuming that only these species are accelerated at the source. The data from the Auger experiment and the combined data from the Telescope Array + HiRes experiments are used to illustrate the method.
Poincaré inverse problem and torus construction in phase space
NASA Astrophysics Data System (ADS)
Laakso, Teemu; Kaasalainen, Mikko
2016-02-01
The phase space of an integrable Hamiltonian system is foliated by invariant tori. For an arbitrary Hamiltonian H such a foliation may not exist, but we can artificially construct one through a parameterised family of surfaces, with the intention of finding, in some sense, the closest integrable approximation to H. This is the Poincaré inverse problem (PIP). In this paper, we review the available methods of solving the PIP and present a new iterative approach which works well for the often problematic thin orbits.
A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem
NASA Astrophysics Data System (ADS)
Park, Taehoon; Park, Won-Kwang
2015-09-01
Numerical simulations have shown that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied to limited-view inverse scattering problems. However, the application has remained somewhat heuristic. In this contribution, we identify a necessary condition for MUSIC imaging of a collection of small, perfectly conducting cracks. It is based on the fact that the MUSIC imaging functional can be represented as an infinite series of integer-order Bessel functions of the first kind. Numerical experiments with noisy synthetic data support our investigation.
Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods
Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti
2012-01-01
Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called Minimum Norm Estimates (MNE), promote source estimates with a small ℓ2 norm. Here, we consider a more general class of priors based on mixed-norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as Mixed-Norm Estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ1/ℓ2 mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ1/ℓ2 norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data. PMID:22421459
Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods
NASA Astrophysics Data System (ADS)
Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti
2012-04-01
Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ2 norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ1/ℓ2 mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ1/ℓ2 norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
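The two-level ℓ1/ℓ2 prior admits a closed-form proximal operator (block soft-thresholding of source time courses), which is what makes accelerated first-order schemes practical. The sketch below is a generic FISTA-type solver for the resulting problem, not the authors' code; `G` stands for the forward (gain) matrix and `M` for the sensor data, and the sizes are toy-scale.

```python
import numpy as np

def prox_l21(X, lam):
    """Proximal operator of lam * sum_s ||X[s, :]||_2 (the l1/l2 mixed norm):
    block soft-thresholding that zeroes entire source time courses."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-30), 0.0)
    return X * scale

def fista_mxne(G, M, lam, n_iter=200):
    """Accelerated proximal gradient (FISTA-type) scheme for
    min_X 0.5*||M - G X||_F^2 + lam * sum_s ||X[s, :]||_2."""
    L = np.linalg.norm(G, 2) ** 2              # Lipschitz constant of the gradient
    X = np.zeros((G.shape[1], M.shape[1]))
    Y, t = X.copy(), 1.0
    for _ in range(n_iter):
        X_new = prox_l21(Y - (G.T @ (G @ Y - M)) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Y = X_new + ((t - 1.0) / t_new) * (X_new - X)   # momentum step
        X, t = X_new, t_new
    return X
```

Because the penalty couples all time samples of a source in one ℓ2 block, sources are switched on or off as a whole, giving spatially focal but temporally smooth estimates.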
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy
A theoretical formulation of the electrophysiological inverse problem on the sphere
NASA Astrophysics Data System (ADS)
Riera, Jorge J.; Valdés, Pedro A.; Tanabe, Kunio; Kawashima, Ryuta
2006-04-01
The construction of three-dimensional images of the primary current density (PCD) produced by neuronal activity is a problem of great current interest in the neuroimaging community, though it was initially formulated in the 1970s. There exist even now enthusiastic debates about the authenticity of most of the inverse solutions proposed in the literature, in which low resolution electrical tomography (LORETA) is a focus of attention. However, in our opinion, the capabilities and limitations of the electro- and magnetoencephalographic techniques to determine PCD configurations have not been extensively explored from a theoretical framework, even for simple volume conductor models of the head. In this paper, the electrophysiological inverse problem for the spherical head model is cast in terms of the reproducing kernel Hilbert spaces (RKHS) formalism, which allows us to identify the null spaces of the implicated linear integral operators and also to define their representers. The PCD are described in terms of a continuous basis for the RKHS, which explicitly separates the harmonic and non-harmonic components. The RKHS concept permits us to bring LORETA into the scope of the general smoothing splines theory. A particular way of calculating the general smoothing splines is illustrated, avoiding a premature brute-force discretization. The Bayes information criterion is used to handle dissimilarities in the signal/noise ratios and physical dimensions of the measurement modalities, which could affect the estimation of the amount of smoothness required for that class of inverse solution to be well specified. In order to validate the proposed method, we have estimated the 3D spherical smoothing splines from two data sets: electric potentials obtained from a skull phantom and magnetic fields recorded from subjects performing a human face recognition experiment.
NASA Astrophysics Data System (ADS)
Chyuan, Shiang-Woei; Liao, Yunn-Shiuan; Chen, Jeng-Tzong
2004-09-01
Engineers usually adopt multilayered design for semiconductor and electron devices, and an accurate electrostatic analysis is indispensable in the design stage. For variable design of electron devices, the BEM has become a better method than the domain-type FEM because the BEM can provide a complete solution in terms of boundary values only, with substantial saving in modelling effort. Since the dual BEM still has some advantages over the conventional BEM for the singularity arising from a degenerate boundary, the dual BEM accompanied by the subregion technique, instead of tedious calculation of Fourier-Bessel transforms for the spatial Green's functions, was used to efficiently simulate the electric effect of diverse ratios of permittivity between arbitrarily multilayered domains and the fringing field around the edge of conductors. Results show that different ratios of permittivity significantly affect the electric field, and the values of surface charge density on the edge of conductors are much higher than those on the middle part because of the fringing effect. In addition, if the DBEM is used to model the fringing field around the edge of conductors, the minimum allowable dielectric strength for avoiding dielectric breakdown can be obtained very efficiently.
NASA Astrophysics Data System (ADS)
Al-Ma'shumah, Fathimah; Permana, Dony; Sidarto, Kuntjoro Adji
2015-12-01
Customer Lifetime Value (CLV) is an important and useful concept in marketing. One of its benefits is to help a company budget marketing expenditure for customer acquisition and customer retention. Many mathematical models have been introduced to calculate CLV under a customer retention/migration classification scheme. A fairly new class of these models, described in this paper, uses Markov Chain Models (MCM). This class of models has the major advantage of being flexible enough to be modified for several different cases and classification schemes. In these models, the probabilities of customer retention and acquisition play an important role. As shown by Pfeifer and Carraway (2000), the final CLV formula obtained from an MCM usually contains a nonlinear form of the transition probability matrix. This nonlinearity makes the inverse problem of CLV difficult to solve. This paper aims to solve this inverse problem, yielding approximate transition probabilities for the customers, by applying the Flower Pollination Algorithm, a metaheuristic optimization algorithm developed by Yang (2013). The obtained transition probabilities can then be used to set goals for marketing teams in maintaining the relative frequencies of customer acquisition and customer retention.
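The forward direction of this problem, computing CLV from a transition matrix, has a closed form in the Markov-chain setting: with per-period discount rate i, CLV per starting state is the geometric series sum [I - P/(1+i)]^{-1} R. The sketch below illustrates it on a hypothetical two-state retention chain (the states, probabilities, and rewards are made up for illustration):

```python
import numpy as np

def clv(P, R, i):
    """Expected customer lifetime value per starting state of a
    customer-migration Markov chain:
    CLV = sum_t (1+i)^{-t} P^t R = [I - P/(1+i)]^{-1} R."""
    n = P.shape[0]
    return np.linalg.solve(np.eye(n) - P / (1.0 + i), R)

# hypothetical two-state chain: state 0 = active customer, state 1 = lapsed
P = np.array([[0.7, 0.3],    # retention probability 0.7
              [0.1, 0.9]])   # re-acquisition probability 0.1
R = np.array([50.0, 0.0])    # expected net contribution per period by state
v = clv(P, R, i=0.1)
```

The inverse problem tackled in the paper runs this mapping backwards: given target CLV values, search (here via a metaheuristic) for transition probabilities consistent with them, which is hard precisely because CLV depends on P through the matrix inverse above.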
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.
2010-01-01
The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
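The two calibration strategies compared in the abstract can be sketched side by side. This is a generic illustration with made-up numbers (true intercept 2.0, slope 0.5, small Gaussian noise), not the paper's study design:

```python
import numpy as np

rng = np.random.default_rng(0)
standards = np.linspace(1.0, 10.0, 20)             # known reference values
readings = 2.0 + 0.5 * standards + 0.05 * rng.normal(size=20)

# Classical calibration: regress readings on standards, then invert the fit
b, a = np.polyfit(standards, readings, 1)          # readings ~ a + b*standards
classical = lambda y_new: (y_new - a) / b

# Reverse regression: regress standards on readings, use the fit directly
d, c = np.polyfit(readings, standards, 1)          # standards ~ c + d*readings
reverse = lambda y_new: c + d * y_new
```

The reverse fit avoids the algebraic inversion but treats the error-free standards as the response, which is the violation of regression assumptions the paper examines.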
NASA Astrophysics Data System (ADS)
Lawrence, Chris C.; Febbraro, Michael; Flaska, Marek; Pozzi, Sara A.; Becchetti, F. D.
2016-08-01
Verification of future warhead-dismantlement treaties will require detection of certain warhead attributes without the disclosure of sensitive design information, and this presents an unusual measurement challenge. Neutron spectroscopy, commonly eschewed as an ill-posed inverse problem, may hold special advantages for warhead verification by virtue of its insensitivity to certain neutron-source parameters like plutonium isotopics. In this article, we investigate the usefulness of unfolded neutron spectra obtained from organic-scintillator data for verifying a particular treaty-relevant warhead attribute: the presence of high-explosive and neutron-reflecting materials. Toward this end, several improvements on current unfolding capabilities are demonstrated: deuterated detectors are shown to have superior response-matrix conditioning compared to standard hydrogen-based scintillators; a novel data-discretization scheme is proposed which removes important detector nonlinearities; and a technique is described for re-parameterizing the unfolding problem in order to constrain the parameter space of solutions sought, sidestepping the inverse problem altogether. These improvements are demonstrated with trial measurements and verified using accelerator-based time-of-flight calculation of reference spectra. Then, a demonstration is presented in which the elemental compositions of low-Z neutron-attenuating materials are estimated to within 10%. These techniques could have direct application in verifying the presence of high-explosive materials in a neutron-emitting test item, as well as for other treaty-verification challenges.
A hybrid differential evolution/Levenberg-Marquardt method for solving inverse transport problems
Bledsoe, Keith C; Favorite, Jeffrey A
2010-01-01
Recently, the Differential Evolution (DE) optimization method was applied to solve inverse transport problems in finite cylindrical geometries and was shown to be far superior to the Levenberg-Marquardt optimization method at finding a global optimum for problems with several unknowns. However, while extremely adept at finding a global optimum solution, the DE method often requires a large number (hundreds or thousands) of transport calculations, making it much slower than the Levenberg-Marquardt method. In this paper, a hybridization of the Differential Evolution and Levenberg-Marquardt approaches is presented. This hybrid method takes advantage of the robust search capability of the Differential Evolution method and the speed of the Levenberg-Marquardt technique.
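The hybrid can be sketched with SciPy's stock optimizers on a toy two-parameter exponential model; this is a stand-in for the transport calculation, not the authors' code:

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

# Toy "transport" forward model with two unknowns (hypothetical stand-in
# for an actual transport calculation): predicted detector responses.
def forward(p):
    a, b = p
    x = np.linspace(0.0, 1.0, 10)
    return a * np.exp(-b * x)

observed = forward([2.0, 3.0])         # synthetic noise-free measurements

def residuals(p):
    return forward(p) - observed

# Step 1: Differential Evolution performs a robust global search (coarse
# tolerance, few generations) to land in the basin of the global optimum.
de = differential_evolution(lambda p: np.sum(residuals(p)**2),
                            bounds=[(0.1, 10.0), (0.1, 10.0)],
                            maxiter=30, tol=1e-3, seed=1)

# Step 2: Levenberg-Marquardt polishes the DE estimate quickly and precisely.
lm = least_squares(residuals, de.x, method='lm')
```

The design point is exactly the trade described above: DE spends many cheap function evaluations avoiding local minima, while LM needs only a handful of evaluations once started near the answer.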
NASA Technical Reports Server (NTRS)
Murio, Diego A.
1991-01-01
An explicit and unconditionally stable finite difference method for the solution of the transient inverse heat conduction problem in semi-infinite or finite slab media subject to nonlinear radiation boundary conditions is presented. After measuring two interior temperature histories, the mollification method is used to determine the surface transient heat source if the energy radiation law is known. Alternatively, if the active surface is heated by a source at a rate proportional to a given function, the nonlinear surface radiation law is recovered as a function of the interface temperature when the problem is feasible. Two typical examples, corresponding to Newton's law of cooling and the Stefan-Boltzmann radiation law, respectively, are illustrated. In all cases, the method predicts the surface conditions with an accuracy suitable for many practical purposes.
NASA Astrophysics Data System (ADS)
Egger, Herbert; Engl, Heinz W.
2005-06-01
This paper investigates the stable identification of local volatility surfaces σ(S, t) in the Black-Scholes/Dupire equation from market prices of European vanilla options. Based on the properties of the parameter-to-solution mapping, which assigns option prices to given volatilities, we show stability and convergence of approximations obtained by Tikhonov regularization. In the case of a known term structure of the volatility surface, in particular if the volatility is assumed to be constant in time, we prove convergence rates under simple smoothness and decay conditions on the true volatility. The convergence rate analysis sheds light on the importance of an appropriate a priori guess for the unknown volatility and on the nature of the ill-posedness of the inverse problem, which is caused by the smoothing properties and the nonlinearity of the direct problem. Finally, the theoretical results are illustrated by numerical experiments.
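Tikhonov regularization of this kind is easiest to see on a generic linear smoothing problem; the sketch below is not the Black-Scholes/Dupire setting, only an illustration of how the penalty stabilizes an ill-posed inversion (kernel and parameters invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = np.linspace(0, 1, n)

# Ill-posed forward problem: a Gaussian smoothing kernel K maps the unknown
# u to observable data f; small singular values make naive inversion unstable.
K = np.exp(-100.0 * (x[:, None] - x[None, :])**2)
K /= K.sum(axis=1, keepdims=True)

u_true = np.sin(2 * np.pi * x)
f = K @ u_true + rng.normal(0, 1e-3, n)          # noisy observations

# Tikhonov-regularized solution: minimize ||K u - f||^2 + alpha ||u||^2,
# solved here through the regularized normal equations.
alpha = 1e-4
u_alpha = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ f)
rel_err = np.linalg.norm(u_alpha - u_true) / np.linalg.norm(u_true)
```

The role of the a priori guess stressed in the abstract corresponds here to penalizing the distance to zero; penalizing the distance to a reference solution instead simply replaces ||u||^2 by ||u - u_ref||^2.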
LINPRO: Linear inverse problem library for data contaminated by statistical noise
NASA Astrophysics Data System (ADS)
Magierski, Piotr; Wlazłowski, Gabriel
2012-10-01
The library LINPRO, which provides solutions to linear inverse problems for data contaminated by statistical noise, is presented. The library makes use of two methods: the Maximum Entropy Method and Singular Value Decomposition. As an example, it has been applied to perform an analytic continuation of the imaginary-time propagator obtained within the Quantum Monte Carlo method.
Program summary
Program title: LINPRO v1.0.
Catalogue identifier: AEMT_v1_0.
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland.
Licensing provisions: GNU Lesser General Public Licence.
No. of lines in distributed program, including test data, etc.: 110620.
No. of bytes in distributed program, including test data, etc.: 3208593.
Distribution format: tar.gz.
Programming language: C++.
Computer: LINPRO should compile on any computing system that has a C++ compiler.
Operating system: Linux or Unix.
Classification: 4.9, 4.12, 4.13.
External routines: OPT++: An Object-Oriented Nonlinear Optimization Library [1] (included in the distribution).
Nature of problem: LINPRO solves linear inverse problems with an arbitrary kernel and arbitrary external constraints imposed on the solution.
Solution method: LINPRO implements two complementary methods: the Maximum Entropy Method and the SVD method.
Additional comments: Tested with the GNU (g++) and Intel (icpc) compilers.
Running time: Problem dependent, ranging from seconds to hours. Each of the examples takes less than a minute to run.
References: [1] OPT++: An Object-Oriented Nonlinear Optimization Library, https://software.sandia.gov/opt++/.
NASA Astrophysics Data System (ADS)
Sanan, P.; Schnepp, S. M.; May, D.; Schenk, O.
2014-12-01
Geophysical applications require efficient forward models for non-linear Stokes flow on high resolution spatio-temporal domains. The bottleneck in applying the forward model is solving the linearized, discretized Stokes problem which takes the form of a large, indefinite (saddle point) linear system. Due to the heterogeneity of the effective viscosity in the elliptic operator, devising effective preconditioners for saddle point problems has proven challenging and highly problem-dependent. Nevertheless, at least three approaches show promise for preconditioning these difficult systems in an algorithmically scalable way using multigrid and/or domain decomposition techniques. The first is to work with a hierarchy of coarser or smaller saddle point problems. The second is to use the Schur complement method to decouple and sequentially solve for the pressure and velocity. The third is to use the Schur decomposition to devise preconditioners for the full operator. These involve sub-solves resembling inexact versions of the sequential solve. The choice of approach and sub-methods depends crucially on the motivating physics, the discretization, and available computational resources. Here we examine the performance trade-offs for preconditioning strategies applied to idealized models of mantle convection and lithospheric dynamics, characterized by large viscosity gradients. Due to the arbitrary topological structure of the viscosity field in geodynamical simulations, we utilize low order, inf-sup stable mixed finite element spatial discretizations which are suitable when sharp viscosity variations occur in element interiors. Particular attention is paid to possibilities within the decoupled and approximate Schur complement factorization-based monolithic approaches to leverage recently-developed flexible, communication-avoiding, and communication-hiding Krylov subspace methods in combination with `heavy' smoothers, which require solutions of large per-node sub-problems, well
Baker, J.R.; Budinger, T.F.; Huesman, R.H.
1992-10-01
A major limitation in tomographic inverse problems is inadequate computation speed, which frequently impedes the application of engineering ideas and principles in medical science more than in the physical and engineering sciences. Medical problems are computationally taxing because a minimum description of the system often involves 5 dimensions (3 space, 1 energy, 1 time), with the range of each space coordinate requiring up to 512 samples. The computational tasks for this problem can be simply expressed by posing the problem as one in which the tomograph system response function is spatially invariant, and the noise is additive and Gaussian. Under these assumptions, a number of reconstruction methods have been implemented with generally satisfactory results for general medical imaging purposes. However, if the system response function of the tomograph is assumed more realistically to be spatially variant and the noise to be Poisson, the computational problem becomes much more difficult. Some of the algorithms being studied to compensate for position dependent resolution and statistical fluctuations in the data acquisition process, when expressed in canonical form, are not practical for clinical applications because the number of computations necessary exceeds the capabilities of high performance computer systems currently available. Reconstruction methods based on natural pixels, specifically orthonormal natural pixels, preserve symmetries in the data acquisition process. Fast implementations of orthonormal natural pixel algorithms can achieve orders of magnitude speedup relative to general implementations. Thus, specialized thought in algorithm development can lead to more significant increases in performance than can be achieved through hardware improvements alone.
NASA Astrophysics Data System (ADS)
Barbone, Paul E.; Oberai, Assad A.; Harari, Isaac
2007-12-01
We consider the direct (i.e. non-iterative) solution of the inverse problem of heat conduction for which at least two interior temperature fields are available. The strong form of the problem for the single, unknown, thermal conductivity field is governed by two partial differential equations of pure advective transport. The given temperature fields must satisfy a compatibility condition for the problem to have a solution. We introduce a novel variational formulation, the adjoint-weighted equation (AWE), for solving the two-field problem. In this case, the gradients of two given temperature fields must be linearly independent in the entire domain, a weaker condition than the compatibility required by the strong form. We show that the solution of the AWE formulation is equivalent to that of the strong form when both are well posed. We prove that the Galerkin discretization of the AWE formulation leads to a stable, convergent numerical method that has optimal rates of convergence. We show computational examples that confirm these optimal rates. The AWE formulation shows good numerical performance on problems with both smooth and rough coefficients and solutions.
NASA Astrophysics Data System (ADS)
Bui-Thanh, T.; Girolami, M.
2014-11-01
We consider the Riemann manifold Hamiltonian Monte Carlo (RMHMC) method for solving statistical inverse problems governed by partial differential equations (PDEs). The Bayesian framework is employed to cast the inverse problem into the task of statistical inference, whose solution is the posterior distribution in infinite-dimensional parameter space conditional upon observation data and a Gaussian prior measure. We discretize both the likelihood and the prior using the H1-conforming finite element method together with a matrix transfer technique. The power of the RMHMC method is that it exploits the geometric structure induced by the PDE constraints of the underlying inverse problem. Consequently, each RMHMC posterior sample is almost uncorrelated with the others, providing statistically efficient Markov chain simulation. However, this statistical efficiency comes at a computational cost. This motivates us to consider computationally more efficient strategies for RMHMC. At the heart of our construction is the fact that for Gaussian error structures the Fisher information matrix coincides with the Gauss-Newton Hessian. We exploit this fact in a computationally simplified RMHMC method combining state-of-the-art adjoint techniques with the strengths of RMHMC. Specifically, we first form the Gauss-Newton Hessian at the maximum a posteriori point and then use it as a fixed, constant metric tensor throughout the RMHMC simulation. This eliminates the need for the computationally costly differential-geometric Christoffel symbols, which in turn greatly reduces computational effort at a corresponding loss of sampling efficiency. We further reduce the cost of forming the Fisher information matrix by using a low-rank approximation via a randomized singular value decomposition technique. This is efficient since only a small number of Hessian-vector products are required. The Hessian-vector product in turn requires only two extra PDE solves using the adjoint
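The low-rank step can be sketched in isolation: a randomized range finder needs only a handful of Hessian-vector products, which in the paper's setting cost two extra PDE solves each. The synthetic symmetric matrix below stands in for the Gauss-Newton Hessian:

```python
import numpy as np

# Randomized SVD of a symmetric matrix with rapidly decaying spectrum,
# accessed only through matrix-vector products (as a Hessian would be).
rng = np.random.default_rng(3)
n, r = 200, 10
Q0 = np.linalg.qr(rng.normal(size=(n, n)))[0]
eigs = 10.0 ** -np.arange(n, dtype=float)        # fast spectral decay (invented)
H = (Q0 * eigs) @ Q0.T                           # dense here; only probed below

# Randomized range finder: r + 5 "Hessian-vector product" probes.
Omega = rng.normal(size=(n, r + 5))
Y = H @ Omega
Q, _ = np.linalg.qr(Y)                           # orthonormal basis for the range
B = Q.T @ H @ Q                                  # small projected matrix
w, V = np.linalg.eigh(B)
H_lr = (Q @ V) * w @ (Q @ V).T                   # low-rank approximation of H

rel_err = np.linalg.norm(H_lr - H) / np.linalg.norm(H)
```

Because the spectrum decays quickly, fifteen probes capture the operator almost exactly; this is the regime in which the fixed-metric simplification described above is cheap to set up.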
NASA Astrophysics Data System (ADS)
Rudenko, O. V.; Gurbatov, S. N.
2016-07-01
Inverse problems of nonlinear acoustics have important applied significance. On the one hand, they are necessary for nonlinear diagnostics of media, materials, manufactured articles, building units, and biological and geological structures. On the other hand, they are needed for creating devices that ensure optimal action of acoustic radiation on a target. However, despite the many promising applications, this direction remains underdeveloped, especially for strongly distorted high-intensity waves containing shock fronts. An example of such an inverse problem is synthesis of the spatiotemporal structure of a field in a radiating system that ensures the highest possible energy density in the focal region. This problem is also related to the urgent problems of localizing wave energy and the theory of strongly nonlinear waves. Below we analyze some quite general and simple inverse nonlinear problems.
NASA Technical Reports Server (NTRS)
Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard
1988-01-01
The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in vertical fluid-filled cylinder of finite length which is instantaneously inverted at regular intervals are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.
Model Reduction of a Transient Groundwater-Flow Model for Bayesian Inverse Problems
NASA Astrophysics Data System (ADS)
Boyce, S. E.; Yeh, W. W.
2011-12-01
A Bayesian inverse problem requires many repeated model simulations to characterize an unknown parameter's posterior probability distribution. It is computationally infeasible to solve a Bayesian inverse problem for a discretized groundwater flow model with high-dimensional parameter and state spaces. Model reduction has been shown to reduce the dimension of a groundwater model by several orders of magnitude and is well suited to Bayesian inverse problems. A projection-based model reduction approach is proposed to reduce the parameter and state dimensions of a groundwater model. Previous work has done this by using a greedy algorithm to select parameter vectors that make up a basis, with their corresponding steady-state solutions forming a state basis. The proposed method extends this idea to transient models by sequentially assembling the parameter and state projection bases through the greedy algorithm. The method begins with the parameter basis being a single vector that is equal to one or an accepted series of values. A set of state vectors that are solutions of the groundwater model using this parameter vector at appropriate times is called the parameter snapshot set. The appropriate times for the parameter snapshot set are determined by maximizing the set's minimum singular value. This optimization is similar to those used in experimental design for maximizing information. The two bases are made orthonormal by a QR decomposition and applied to the full groundwater model to form a reduced model. The parameter basis is augmented with a new parameter vector that maximizes the error between the full model and the reduced model at a set of observation times. The new parameter vector represents where the reduced model is least accurate in representing the original full model. The corresponding parameter snapshot set's appropriate times are found using a greedy algorithm. This sequentially chooses times that have maximum error between the full and
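The projection mechanics (snapshots, QR orthonormalization, reduced operator) can be sketched on a toy steady linear system; the greedy selection and the transient dynamics of the paper are omitted, and all numbers are invented:

```python
import numpy as np

# Toy projection-based reduction of a linear system A x = b(p): snapshot
# solutions for a few parameter vectors form a state basis, orthonormalized
# by QR, and the full operator is Galerkin-projected onto that basis.
n = 100
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))      # SPD "groundwater" operator

def solve_full(p):
    b = np.sin(np.linspace(0, np.pi, n)) * p[0] + np.linspace(0, 1, n) * p[1]
    return np.linalg.solve(A, b), b

# Snapshot set from training parameter vectors (selected greedily in the
# paper; fixed here for brevity).
snapshots = np.column_stack([solve_full(p)[0] for p in [(1, 0), (0, 1), (1, 1)]])
Phi, _ = np.linalg.qr(snapshots)                 # orthonormal state basis
A_r = Phi.T @ A @ Phi                            # reduced operator (3 x 3)

p_test = (0.3, 0.7)
x_full, b = solve_full(p_test)
x_red = Phi @ np.linalg.solve(A_r, Phi.T @ b)    # reduced solve, lifted back
err = np.linalg.norm(x_red - x_full) / np.linalg.norm(x_full)
```

Because this toy system is linear in the parameters, two snapshots already span every solution and the reduced model is essentially exact; in the nonlinear transient setting of the paper the basis must instead be grown greedily until the error at the observation times is acceptable.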
NASA Astrophysics Data System (ADS)
Zmywaczyk, J.; Koniorczyk, P.
2009-08-01
The problem of simultaneous identification of the thermal conductivity Λ(T) and the asymmetry parameter g of the Henyey-Greenstein scattering phase function is considered. A one-dimensional configuration in a grey participating medium is adopted, with silica fibers whose thermophysical and optical properties are known from the literature. To find the unknown parameters, it is assumed that the thermal conductivity Λ(T) may be represented in a basis of functions {1, T, T^2, . . ., T^K}, so the inverse problem reduces to determining the set of coefficients {Λ_0, Λ_1, . . ., Λ_K; g}. The solution of the inverse problem is based on minimization of the ordinary squared differences between the measured and model temperatures. Temperature responses, measured or theoretically generated at several distances from the heat source along the x axis of the specimen, are obtained from the numerical solution of the transient coupled heat transfer in a grey participating medium. An implicit finite volume method (FVM) is used for handling the energy equation, while a finite difference method (FDM) is applied to find the sensitivity coefficients with respect to the unknown coefficients. The free model parameters are adjusted iteratively by a Levenberg-Marquardt fitting procedure searching for the best fit between measured and model temperatures. The source term in the governing conservation-of-energy equation, accounting for absorption, emission, and scattering of radiation, is calculated by means of a discrete ordinate method together with an FDM, while the scattering phase function, approximated by the Henyey-Greenstein function, is expanded in a series of Legendre polynomials with coefficients {c_l} = (2l + 1)g^l. The numerical procedure proposed here also allows consideration of some cases of coupled heat
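The Legendre expansion quoted above can be checked numerically: with the usual normalization, the Henyey-Greenstein phase function has coefficients c_l = (2l + 1)g^l, so a truncated series reproduces it. A small self-contained check, with an arbitrarily chosen asymmetry value:

```python
import numpy as np
from numpy.polynomial.legendre import legval

# Henyey-Greenstein phase function and its Legendre series with
# coefficients c_l = (2l + 1) g^l, i.e. p(mu) = sum_l c_l P_l(mu).
g = 0.8
mu = np.linspace(-1, 1, 201)
p_hg = (1 - g**2) / (1 + g**2 - 2 * g * mu)**1.5

L = 80                                            # truncation order
l = np.arange(L + 1)
c = (2 * l + 1) * g**l
p_series = legval(mu, c)                          # evaluate the Legendre series
max_err = np.max(np.abs(p_series - p_hg))
```

The geometric decay g^l of the coefficients is why strongly forward-peaked media (g close to 1) need many more Legendre terms than nearly isotropic ones.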
SU-E-J-161: Inverse Problems for Optical Parameters in Laser Induced Thermal Therapy
Fahrenholtz, SJ; Stafford, RJ; Fuentes, DT
2014-06-01
Purpose: Magnetic resonance-guided laser-induced thermal therapy (MRgLITT) is being investigated in active post-market studies as a neurosurgical intervention for oncological applications throughout the body. Real-time MR temperature imaging is used to monitor ablative thermal delivery in the clinic. Additionally, brain MRgLITT could improve through effective planning of laser fiber placement. Mathematical bioheat models have been extensively investigated but require reliable patient-specific physical parameter data, e.g. optical parameters. This abstract applies an inverse problem algorithm to characterize optical parameter data obtained from previous MRgLITT interventions. Methods: The implemented inverse problem has three primary components: a parameter-space search algorithm, a physics model, and training data. First, the parameter-space search algorithm uses a gradient-based quasi-Newton method to optimize the effective optical attenuation coefficient, μ_eff. A parameter reduction reduces the amount of optical parameter-space the algorithm must search. Second, the physics model is a simplified bioheat model for homogeneous tissue in which closed-form Green's functions represent the exact solution. Third, the training data were temperature imaging data from 23 MRgLITT oncological brain ablations (980 nm wavelength) in seven different patients. Results: To three significant figures, the descriptive statistics for μ_eff were: mean 1470 m^-1, median 1360 m^-1, standard deviation 369 m^-1, minimum 933 m^-1, and maximum 2260 m^-1. The standard deviation normalized by the mean was 25.0%. The inverse problem took <30 minutes to optimize all 23 datasets. Conclusion: As expected, the inferred average is biased by the underlying physics model. However, the standard deviation normalized by the mean is smaller than literature values and indicates an increased precision in the characterization of the optical parameters needed to plan MRg
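The shape of such an inversion can be sketched as a one-parameter fit: a point-source, exponentially attenuated Green's-function-like profile fitted for μ_eff with a quasi-Newton (BFGS) search. Everything below is invented except the 1470 m^-1 mean, reused as the synthetic truth; the clinical pipeline uses a full bioheat model and MR temperature maps:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic "temperature" profile T(r) ~ exp(-mu_eff * r)/r around the fiber.
rng = np.random.default_rng(9)
r = np.linspace(0.002, 0.02, 30)                 # metres from the fiber (made up)
mu_true = 1470.0                                 # m^-1, matching the reported mean
T_obs = np.exp(-mu_true * r) / r * (1 + rng.normal(0, 0.01, r.size))

def misfit(p):
    mu = p[0]
    model = np.exp(-mu * r) / r
    return np.sum((model - T_obs)**2)

# Gradient-based quasi-Newton search for the attenuation coefficient.
res = minimize(misfit, x0=[500.0], method='BFGS')
mu_est = res.x[0]
```

The one-dimensional search converges in a few iterations, which is consistent with the abstract's point that parameter reduction keeps the optimization cheap across many datasets.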
Direct and inverse theorems on approximation by root functions of a regular boundary-value problem
NASA Astrophysics Data System (ADS)
Radzievskii, G. V.
2006-08-01
One considers the spectral problem x^{(n)} + Fx = \lambda x with boundary conditions U_j(x) = 0, j = 1,\dots,n, for functions x on [0,1]. It is assumed that F is a bounded linear operator from the Hölder space C^\gamma, \gamma \in [0, n-1), into L_1, and that the U_j are bounded linear functionals on C^{k_j} with k_j \in \{0,\dots,n-1\}. Let \mathfrak{P}_\zeta be the linear span of the root functions of the problem x^{(n)} + Fx = \lambda x, U_j(x) = 0, j = 1,\dots,n, corresponding to the eigenvalues \lambda_k with |\lambda_k| < \zeta^n, and let \mathscr{E}_\zeta(f)_{W_p^l} := \inf\{\|f - g\|_{W_p^l} : g \in \mathfrak{P}_\zeta\}. An estimate of \mathscr{E}_\zeta(f)_{W_p^l} is obtained in terms of the K-functional K(\zeta^{-m}, f; W_p^l, W_{p,U}^{l+m}) := \inf\{\|f - x\|_{W_p^l} + \zeta^{-m}\|x\|_{W_p^{l+m}} : x \in W_p^{l+m}, \ U_j(x) = 0 for k_j < l + m\} (the direct theorem), and an estimate of this K-functional is obtained in terms of \mathscr{E}_\xi(f)_{W_p^l} for \xi \le \zeta (the inverse theorem). In several cases two-sided bounds on the K-functional are found in terms of appropriate moduli of continuity, and then the direct and inverse theorems are stated in terms of moduli of continuity. For the spectral problem x^{(n)} = \lambda x with periodic boundary conditions these results coincide with Jackson's and Bernstein's direct and inverse theorems on the approximation of functions by the trigonometric system.
Baghani, Ali; Salcudean, Septimiu; Honarvar, Mohammad; Sahebjavaher, Ramin S; Rohling, Robert; Sinkus, Ralph
2011-08-01
In this paper, a novel approach to the problem of elasticity reconstruction is introduced. In this approach, the solution of the wave equation is expanded as a sum of waves travelling in different directions sharing a common wave number. In particular, the solutions for the scalar and vector potentials which are related to the dilatational and shear components of the displacement respectively are expanded as sums of travelling waves. This solution is then used as a model and fitted to the measured displacements. The value of the shear wave number which yields the best fit is then used to find the elasticity at each spatial point. The main advantage of this method over direct inversion methods is that, instead of taking the derivatives of noisy measurement data, the derivatives are taken on the analytical model. This improves the results of the inversion. The dilatational and shear components of the displacement can also be computed as a byproduct of the method, without taking any derivatives. Experimental results show the effectiveness of this technique in magnetic resonance elastography. Comparisons are made with other state-of-the-art techniques.
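A one-dimensional caricature of the fitting idea: for each candidate wave number the travelling-wave amplitudes enter linearly and are fitted in closed form, and the wave number with the smallest residual is kept; elasticity then follows from k. Frequency, density, and noise level below are invented, and the paper works with full 3-D vector fields:

```python
import numpy as np

# Noisy 1-D displacement profile at a single frequency (synthetic).
rng = np.random.default_rng(6)
x = np.linspace(0, 0.1, 200)                               # 10 cm profile
k_true = 400.0                                             # rad/m shear wave number
u = np.cos(k_true * x + 0.3) + rng.normal(0, 0.05, x.size)

def residual(k):
    # For fixed k the model A*cos(kx) + B*sin(kx) is linear in (A, B):
    # fit the amplitudes by least squares and return the misfit norm.
    M = np.column_stack([np.cos(k * x), np.sin(k * x)])
    coef, *_ = np.linalg.lstsq(M, u, rcond=None)
    return np.linalg.norm(M @ coef - u)

ks = np.linspace(100, 800, 1401)
k_best = ks[np.argmin([residual(k) for k in ks])]

rho, omega = 1000.0, 2 * np.pi * 200                       # density, 200 Hz drive
mu = rho * omega**2 / k_best**2                            # shear modulus from k
```

Note that no derivative of the noisy data is ever taken: differentiation happens implicitly on the analytical cos/sin model, which is exactly the advantage the abstract claims over direct inversion.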
Tuan, P.C.; Ju, M.C.
2000-03-01
A novel adaptive and robust input-estimation inverse methodology is presented for estimating the time-varying unknown heat flux (the input) on the two active boundaries of a 2-D inverse heat conduction problem. The algorithm uses the Kalman filter to construct a regression model between the residual innovation and the two unknown boundary fluxes through given 2-D heat-conduction state-space models and a noisy measurement sequence. Based on this regression equation, a recursive least-squares estimator (RLSE) weighted by a forgetting factor is proposed to estimate these unknowns on-line. The adaptive and robust weighting technique is essential since the unknown inputs are time-varying and their changes are unpredictable. The authors provide a bandwidth analysis together with bias and variance tests to construct an efficient and robust forgetting factor as the ratio between the standard deviation of the measurement and the observable bias innovation at each time step. The unknowns are thus robustly and adaptively estimated even when the system involves measurement noise, process error, and unpredictable changes in the time-varying unknowns. The capabilities of the proposed algorithm are demonstrated through comparison with the conventional input-estimation algorithm and validated by two benchmark performance tests in 2-D cases. Results show that the proposed algorithm not only exhibits superior robustness but also enhances estimation performance and greatly facilitates practical implementation.
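A scalar caricature of the RLSE core (the Kalman-filter innovation model, the adaptive forgetting-factor construction, and the 2-D geometry are all omitted; the gain, noise level, and fixed forgetting factor are arbitrary):

```python
import numpy as np

# Recursive least squares with a forgetting factor, tracking a time-varying
# scalar "heat flux" q from noisy observations z_k = h*q_k + v_k.
rng = np.random.default_rng(5)
lam = 0.9            # forgetting factor < 1 discounts old data so steps are tracked
h = 1.0              # observation gain (hypothetical)
P, q_hat = 1e3, 0.0  # large initial covariance: no prior knowledge of q
estimates = []
for k in range(400):
    q_true = 1.0 if k < 200 else 3.0             # abrupt change in the unknown input
    z = h * q_true + rng.normal(0, 0.05)
    # RLSE update weighted by the forgetting factor
    K = P * h / (lam + h * P * h)
    q_hat = q_hat + K * (z - h * q_hat)          # innovation-driven correction
    P = (P - K * h * P) / lam
    estimates.append(q_hat)
```

With lam = 1 the estimator would average all past data and respond sluggishly to the step at k = 200; the forgetting factor keeps the effective data window short, which is the behaviour the adaptive weighting above is designed to tune automatically.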
Hybrid modeling of spatial continuity for application to numerical inverse problems
Friedel, Michael J.; Iwashita, Fabio
2013-01-01
A novel two-step modeling approach is presented to obtain optimal starting values and geostatistical constraints for numerical inverse problems otherwise characterized by spatially-limited field data. First, a type of unsupervised neural network, called the self-organizing map (SOM), is trained to recognize nonlinear relations among environmental variables (covariates) occurring at various scales. The values of these variables are then estimated at random locations across the model domain by iterative minimization of SOM topographic error vectors. Cross-validation is used to ensure unbiasedness and compute prediction uncertainty for select subsets of the data. Second, analytical functions are fit to experimental variograms derived from original plus resampled SOM estimates, producing model variograms. Sequential Gaussian simulation is used to evaluate spatial uncertainty associated with the analytical functions and the probable range for constraining variables. The hybrid modeling of spatial continuity is demonstrated using spatially-limited hydrologic measurements at different scales in Brazil: (1) physical soil properties (sand, silt, clay, hydraulic conductivity) in the 42 km² Vargem de Caldas basin; (2) well yield and electrical conductivity of groundwater in the 132 km² fractured crystalline aquifer; and (3) specific capacity, hydraulic head, and major ions in a 100,000 km² transboundary fractured-basalt aquifer. These results illustrate the benefits of exploiting nonlinear relations among sparse and disparate data sets for modeling spatial continuity, but the actual application of these spatial data to improve numerical inverse modeling requires testing.
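Step two of the workflow, fitting an analytical function to an experimental variogram, can be sketched with a spherical model; the lag distances and semivariances below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Spherical variogram model: rises from the nugget to the sill over the range.
def spherical(h, nugget, sill, vrange):
    return np.where(h < vrange,
                    nugget + (sill - nugget) * (1.5 * h / vrange
                                                - 0.5 * (h / vrange)**3),
                    sill)

# Hypothetical experimental variogram: lags in metres, semivariances perturbed
# around a spherical model with nugget 0.1, sill 1.0, range 900 m.
lags = np.array([50.0, 100.0, 200.0, 400.0, 800.0, 1600.0])
gamma_exp = (spherical(lags, 0.1, 1.0, 900.0)
             + np.array([0.02, -0.03, 0.01, -0.02, 0.03, -0.01]))

popt, _ = curve_fit(spherical, lags, gamma_exp, p0=[0.0, 1.0, 500.0])
nugget_fit, sill_fit, range_fit = popt
```

The fitted analytical model, rather than the raw experimental points, is what feeds sequential Gaussian simulation, since simulation requires a valid (positive-definite) variogram function at every lag.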
Davidson, Susan E; Gillespie, Catherine; Allum, William H; Swarbrick, Edwin
2011-01-01
Background The number of patients with chronic gastrointestinal (GI) symptoms after cancer therapies which have a moderate or severe impact on quality of life is similar to the number diagnosed with inflammatory bowel disease annually. However, in contrast to patients with inflammatory bowel disease, most of these patients are not referred for gastroenterological assessment. Clinicians who do see these patients are often unaware of the benefits of targeted investigation (which differ from those required to exclude recurrent cancer), the range of available treatments and how the pathological processes underlying side effects of cancer treatment differ from those in benign GI disorders. This paper aims to help clinicians become aware of the problem and suggests ways in which the panoply of syndromes can be managed. Methods A multidisciplinary literature review was performed to develop guidance to facilitate clinical management of GI side effects of cancer treatments. Results Different pathological processes within the GI tract may produce identical symptoms. Optimal management requires appropriate investigations and coordinated multidisciplinary working. Lactose intolerance, small bowel bacterial overgrowth and bile acid malabsorption frequently develop during or after chemotherapy. Toxin-negative Clostridium difficile and cytomegalovirus infection may be fulminant in immunosuppressed patients and require rapid diagnosis and treatment. Hepatic side effects include reactivation of viral hepatitis, sinusoidal obstruction syndrome, steatosis and steatohepatitis. Anticancer biological agents have multiple interactions with conventional drugs. Colonoscopy is contraindicated in neutropenic enterocolitis but endoscopy may be life-saving in other patients with GI bleeding. After cancer treatment, simple questions can identify patients who need referral for specialist management of GI symptoms. Other troublesome pelvic problems (eg, urinary, sexual, nutritional) are frequent
NASA Astrophysics Data System (ADS)
Hetmaniok, Edyta
2016-07-01
In this paper the procedure for solving the inverse problem of binary alloy solidification in a casting mould is presented. The proposed approach is based on a mathematical model describing the investigated solidification process, the lever-arm model of macrosegregation, the finite element method for solving the direct problem, and the artificial bee colony algorithm for minimizing a functional expressing the error of the approximate solution. The goal of the inverse problem is the reconstruction of the heat transfer coefficient and the temperature distribution in the investigated region on the basis of known temperature measurements.
Thomas, Edward V.; Stork, Christopher L.; Mattingly, John K.
2015-07-01
Inverse radiation transport focuses on identifying the configuration of an unknown radiation source given its observed radiation signatures. The inverse problem is traditionally solved by finding the set of transport model parameter values that minimizes a weighted sum of the squared differences by channel between the observed signature and the signature predicted by the hypothesized model parameters. The weights are inversely proportional to the sum of the variances of the measurement and model errors at a given channel. The traditional implicit (often inaccurate) assumption is that the errors (differences between the modeled and observed radiation signatures) are independent across channels. Here, an alternative method that accounts for correlated errors between channels is described and illustrated using an inverse problem based on the combination of gamma and neutron multiplicity counting measurements.
The inverse problem of brain energetics: ketone bodies as alternative substrates
NASA Astrophysics Data System (ADS)
Calvetti, D.; Occhipinti, R.; Somersalo, E.
2008-07-01
Little is known about brain energy metabolism under ketosis, although there is evidence that ketone bodies have a neuroprotective role in several neurological disorders. We investigate the inverse problem of estimating reaction fluxes and transport rates in the different cellular compartments of the brain, when the data amounts to a few measured arterial venous concentration differences. By using a recently developed methodology to perform Bayesian Flux Balance Analysis and a new five compartment model of the astrocyte-glutamatergic neuron cellular complex, we are able to identify the preferred biochemical pathways during shortage of glucose and in the presence of ketone bodies in the arterial blood. The analysis is performed in a minimally biased way, therefore revealing the potential of this methodology for hypothesis testing.
Dogrusoz, Yesim Serinagaoglu; Gavgani, Alireza Mazloumi
2013-04-01
In inverse electrocardiography, the goal is to estimate cardiac electrical sources from potential measurements on the body surface. It is by nature an ill-posed problem, and regularization must be employed to obtain reliable solutions. This paper employs the multiple constraint solution approach proposed in Brooks et al. (IEEE Trans Biomed Eng 46(1):3-18, 1999) and extends its practical applicability to include more than two constraints by finding appropriate values for the multiple regularization parameters. Here, we propose the use of real-valued genetic algorithms for the estimation of multiple regularization parameters. Theoretically, it is possible to include as many constraints as necessary and find the corresponding regularization parameters using this approach. We have shown the feasibility of our method using two and three constraints. The results indicate that GA could be a good approach for the estimation of multiple regularization parameters.
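For a linear source model, extending beyond two constraints is mechanically simple: each constraint contributes one regularization term and one parameter to the normal equations, and the genetic algorithm's role is to search the resulting vector of regularization parameters. A hedged sketch with made-up operators, not the authors' torso geometry:

```python
import numpy as np

def multi_constraint_solution(A, y, regs):
    """Solve min ||A x - y||^2 + sum_k lam_k ||R_k x||^2.
    regs is a list of (lam_k, R_k) pairs; a GA would search the lam_k values."""
    M = A.T @ A
    rhs = A.T @ y
    for lam, R in regs:
        M = M + lam * (R.T @ R)
    return np.linalg.solve(M, rhs)

# Two illustrative constraints: amplitude (identity) and smoothness (first difference).
n = 5
A = np.eye(n)
y = np.array([1.0, 4.0, 2.0, 5.0, 3.0])
D = np.diff(np.eye(n), axis=0)          # first-difference operator
x = multi_constraint_solution(A, y, [(0.1, np.eye(n)), (1.0, D)])
```

Each candidate parameter vector the GA proposes costs one linear solve of this form, which is why keeping the per-candidate solve cheap matters for the approach.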
Patch-ordering-based wavelet frame and its use in inverse problems.
Ram, Idan; Cohen, Israel; Elad, Michael
2014-07-01
In our previous work [1] we have introduced a redundant tree-based wavelet transform (RTBWT), originally designed to represent functions defined on high dimensional data clouds and graphs. We have further shown that RTBWT can be used as a highly effective image-adaptive redundant transform that operates on an image using orderings of its overlapped patches. The resulting transform is robust to corruptions in the image, and thus able to efficiently represent the unknown target image even when it is calculated from its corrupted version. In this paper, we utilize this redundant transform as a powerful sparsity-promoting regularizer in inverse problems in image processing. We show that the image representation obtained with this transform is a frame expansion, and derive the analysis and synthesis operators associated with it. We explore the use of these frame operators in image denoising and deblurring, and demonstrate state-of-the-art results in both cases.
Presymplectic current and the inverse problem of the calculus of variations
Khavkine, Igor
2013-11-15
The inverse problem of the calculus of variations asks whether a given system of partial differential equations (PDEs) admits a variational formulation. We show that the existence of a presymplectic form in the variational bicomplex, when horizontally closed on solutions, allows us to construct a variational formulation for a subsystem of the given PDE. No constraints on the differential order or number of dependent or independent variables are assumed. The proof follows a recent observation of Bridges, Hydon, and Lawson [Math. Proc. Cambridge Philos. Soc. 148(01), 159–178 (2010)] and generalizes an older result of Henneaux [Ann. Phys. 140(1), 45–64 (1982)] from ordinary differential equations (ODEs) to PDEs. Uniqueness of the variational formulation is also discussed.
Free-energy functional method for inverse problem of self assembly
NASA Astrophysics Data System (ADS)
Torikai, Masashi
2015-04-01
A new theoretical approach is described for the inverse self-assembly problem, i.e., the reconstruction of the interparticle interaction from a given structure. This theory is based on the variational principle for the functional that is constructed from a free energy functional in combination with Percus's approach [J. Percus, Phys. Rev. Lett. 8, 462 (1962)]. In this theory, the interparticle interaction potential for the given structure is obtained as the function that maximizes the functional. As test cases, the interparticle potentials for two-dimensional crystals, such as square, honeycomb, and kagome lattices, are predicted by this theory. The formation of each target lattice from an initial random particle configuration in Monte Carlo simulations with the predicted interparticle interaction indicates that the theory is successfully applied to the test cases.
Pulse reflectometry as an acoustical inverse problem: Regularization of the bore reconstruction
NASA Astrophysics Data System (ADS)
Forbes, Barbara J.; Sharp, David B.; Kemp, Jonathan A.
2002-11-01
The theoretical basis of acoustic pulse reflectometry, a noninvasive method for the reconstruction of an acoustical duct from the reflections measured in response to an input pulse, is reviewed in terms of the inversion of the central Fredholm equation. It is known that this is an ill-posed problem in the context of finite-bandwidth experimental signals. Recent work by the authors has proposed the truncated singular value decomposition (TSVD) in the regularization of the transient input impulse response, a non-measurable quantity from which the spatial bore reconstruction is derived. In the present paper we further emphasize the relevance of the singular system framework to reflectometry applications, examining for the first time the transient bases of the system. In particular, by varying the truncation point for increasing condition numbers of the system matrix, it is found that the effects of out-of-bandwidth singular functions on the bore reconstruction can be systematically studied.
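The TSVD regularization referred to above can be stated compactly: expand the solution in singular vectors and keep only components whose singular values exceed the truncation point, discarding the out-of-bandwidth directions that amplify noise. A generic sketch, not the authors' impulse-response operator:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution of A x = b: keep only the k largest
    singular values, suppressing noise-amplifying small-s components."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Mildly ill-conditioned example: truncation trades resolution for stability.
A = np.diag([1.0, 1e-2, 1e-6])
b = np.array([1.0, 1.0, 1.0])
x_full = tsvd_solve(A, b, 3)   # uses all singular values: huge third component
x_reg = tsvd_solve(A, b, 2)    # drops the smallest singular value
```

Varying k here plays the same role as varying the truncation point with the condition number of the system matrix in the paper.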
Mean-field theory for the inverse Ising problem at low temperatures.
Nguyen, H Chau; Berg, Johannes
2012-08-01
The large amounts of data from molecular biology and neuroscience have led to a renewed interest in the inverse Ising problem: how to reconstruct parameters of the Ising model (couplings between spins and external fields) from a number of spin configurations sampled from the Boltzmann measure. To invert the relationship between model parameters and observables (magnetizations and correlations), mean-field approximations are often used, allowing the determination of model parameters from data. However, all known mean-field methods fail at low temperatures with the emergence of multiple thermodynamic states. Here, we show how clustering spin configurations can approximate these thermodynamic states and how mean-field methods applied to thermodynamic states allow an efficient reconstruction of Ising models also at low temperatures.
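The baseline mean-field inversion on which such approaches build is compact: the coupling matrix is read off from the inverse of the connected correlation matrix. Below is a sketch of that standard naive mean-field step only; the paper's contribution, applying mean-field inversion per thermodynamic state after clustering, is not reproduced here:

```python
import numpy as np

def naive_mean_field(samples):
    """Naive mean-field inverse Ising: J = -(C^-1) off-diagonal,
    h_i = atanh(m_i) - sum_j J_ij m_j, from +/-1 spin samples."""
    m = samples.mean(axis=0)                 # magnetizations
    C = np.cov(samples, rowvar=False)        # connected correlation matrix
    J = -np.linalg.inv(C)
    np.fill_diagonal(J, 0.0)                 # no self-couplings
    h = np.arctanh(np.clip(m, -0.999, 0.999)) - J @ m
    return J, h

# Sanity check: independent unbiased spins should yield near-zero couplings.
rng = np.random.default_rng(0)
samples = rng.choice([-1.0, 1.0], size=(20000, 5))
J, h = naive_mean_field(samples)
```

At low temperature the sampled configurations mix several states, the correlation matrix becomes dominated by inter-state structure, and this single-state formula breaks down, which is the failure mode the clustering step addresses.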
Regularization strategy for an inverse problem for a 1 + 1 dimensional wave equation
NASA Astrophysics Data System (ADS)
Korpela, Jussi; Lassas, Matti; Oksanen, Lauri
2016-06-01
An inverse boundary value problem for a 1 + 1 dimensional wave equation with a wave speed c(x) is considered. We give a regularization strategy for inverting the map A : c ↦ Λ, where Λ is the hyperbolic Neumann-to-Dirichlet map corresponding to the wave speed c. That is, we consider the case when we are given a perturbation of the Neumann-to-Dirichlet map, Λ' = Λ + E, where E corresponds to the measurement errors, and reconstruct an approximate wave speed c'. We emphasize that Λ' may not be in the range of the map A. We show that the reconstructed wave speed c' satisfies ||c' - c|| ≤ C ||E||^(1/54). Our regularization strategy is based on a new formula to compute c from Λ.
S-Genius, a universal software platform with versatile inverse problem resolution for scatterometry
NASA Astrophysics Data System (ADS)
Fuard, David; Troscompt, Nicolas; El Kalyoubi, Ismael; Soulan, Sébastien; Besacier, Maxime
2013-05-01
S-Genius is a new universal scatterometry platform that gathers all the LTM-CNRS know-how in rigorous electromagnetic computation together with several inverse problem solvers. The platform is built to be a user-friendly, light, swift, accurate, user-oriented scatterometry tool, compatible with any ellipsometric measurements to fit and any type of pattern. It combines a set of inverse problem solver capabilities, via an adapted Levenberg-Marquardt optimization, Kriging, and neural network solutions, that greatly improve the reliability and speed of the solution determination. Furthermore, since the model solution is mainly vulnerable to the materials' optical properties, S-Genius may be coupled with an innovative determination of material refractive indices. This paper focuses in more detail on the modified Levenberg-Marquardt optimization, one of the indirect-method solvers, built up by the author in parallel with the overall S-Genius software coding. This modified Levenberg-Marquardt optimization corresponds to a Newton algorithm with a damping parameter adapted to the definition domains of the optimized parameters. Currently, S-Genius is technically ready for scientific collaboration: python-powered, multi-platform (Windows/Linux/macOS), multi-core, ready for 2D (infinite features along the direction perpendicular to the incidence plane), conical, and 3D feature computation, compatible with all kinds of input data from any possible ellipsometer (angle- or wavelength-resolved) or reflectometer, and widely used in our laboratory for resist trimming studies, characterization of etched features (such as complex stacks), and nano-imprint lithography measurements. The work on the Kriging solver, the neural network solver, and the determination of material refractive indices is done, or nearly done, by other LTM members and is about to be integrated into the S-Genius platform.
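The damped Newton step underlying a Levenberg-Marquardt solver is easy to sketch. The domain-aware damping adaptation mentioned above is specific to S-Genius and not spelled out, so the sketch below uses only the classical accept/reject damping rule, on a made-up exponential fitting problem rather than a scatterometry model:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, n_iter=100, lam=1e-3):
    """Classical Levenberg-Marquardt: damped Gauss-Newton steps, with the
    damping factor lowered after a successful step and raised otherwise."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = jacobian(x)
        H = J.T @ J
        step = np.linalg.solve(H + lam * np.diag(np.diag(H)), -J.T @ r)
        if np.linalg.norm(residual(x + step)) < np.linalg.norm(r):
            x = x + step
            lam *= 0.5      # success: move toward pure Gauss-Newton
        else:
            lam *= 2.0      # failure: take more cautious, gradient-like steps
    return x

# Illustrative fit of y = a * exp(-b t) to noiseless data.
t = np.linspace(0.0, 1.0, 10)
data = 2.0 * np.exp(-3.0 * t)
res = lambda p: p[0] * np.exp(-p[1] * t) - data
jac = lambda p: np.column_stack([np.exp(-p[1] * t),
                                 -p[0] * t * np.exp(-p[1] * t)])
p_hat = levenberg_marquardt(res, jac, [1.0, 1.0])
```

Marquardt's diagonal scaling (the `np.diag(np.diag(H))` term) makes the damping insensitive to parameter units, a practical concern when fitting quantities with very different magnitudes such as widths and refractive indices.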
Massively parallel solution of the inverse scattering problem for integrated circuit quality control
Leland, R.W.; Draper, B.L.; Naqvi, S.; Minhas, B.
1997-09-01
The authors developed and implemented a highly parallel computational algorithm for solution of the inverse scattering problem generated when an integrated circuit is illuminated by laser. The method was used as part of a system to measure diffraction grating line widths on specially fabricated test wafers, and the results of the computational analysis were compared with more traditional line-width measurement techniques. The authors found they were able to measure the line width of singly periodic and doubly periodic diffraction gratings (i.e. 2D and 3D gratings, respectively) with accuracy comparable to the best available experimental techniques. They demonstrated that their parallel code is highly scalable, achieving a scaled parallel efficiency of 90% or more on typical problems running on 1024 processors. They also made substantial improvements to the algorithmics and their original implementation of Rigorous Coupled Wave Analysis, the underlying computational technique. These resulted in computational speedups of two orders of magnitude in some test problems. By combining these algorithmic improvements with parallelism, the authors achieved speedups of between a few thousand and hundreds of thousands over the original engineering code. This made the laser diffraction measurement technique practical.
Double obstacle phase field approach to an inverse problem for a discontinuous diffusion coefficient
NASA Astrophysics Data System (ADS)
Deckelnick, Klaus; Elliott, Charles M.; Styles, Vanessa
2016-04-01
We propose a double obstacle phase field approach to the recovery of piecewise constant diffusion coefficients for elliptic partial differential equations. The approach to this inverse problem is that of optimal control, in which we have a quadratic fidelity term to which we add a perimeter regularization weighted by a parameter σ. This yields a functional which is optimized over a set of diffusion coefficients subject to a state equation, the underlying elliptic PDE. In order to derive a problem amenable to computation, the perimeter functional is relaxed using a gradient energy functional together with an obstacle potential with an interface parameter ε. This phase field approach is justified by proving Γ-convergence to the functional with perimeter regularization as ε → 0. The computational approach is based on a finite element approximation. This discretization is shown to converge in an appropriate way to the solution of the phase field problem. We derive an iterative method which is shown to yield an energy-decreasing sequence converging to a discrete critical point. The efficacy of the approach is illustrated with numerical experiments.
NASA Astrophysics Data System (ADS)
Shojaeefard, M. H.; Goudarzi, K.; Mazidi, M. Sh.
2009-06-01
Problems involving periodically contacting surfaces arise in a variety of practical applications. An inverse heat conduction problem for estimating the periodic Thermal Contact Conductance (TCC) between one-dimensional, constant property contacting solids has been investigated with the conjugate gradient method (CGM) of function estimation. This method converges very rapidly and is not very sensitive to measurement errors. The advantage of the present method is that no a priori information is needed on the variation of the unknown quantities, since the solution automatically determines the functional form over the specified domain. A simple, straightforward technique is utilized to solve the direct, sensitivity, and adjoint problems, in order to overcome the difficulties associated with numerical methods. Two general classes of results are presented: results obtained by applying inexact simulated measured data, and results obtained using data taken from an actual experiment. In addition, an extrapolation method is applied to obtain actual results. In general, the present method effectively recovers the exact TCC when exact and inexact simulated measurements are input to the analysis. Furthermore, the results obtained with the CGM agree with the extrapolation results, and the small deviations are negligible.
Cerebellum-inspired neural network solution of the inverse kinematics problem.
Asadi-Eydivand, Mitra; Ebadzadeh, Mohammad Mehdi; Solati-Hashjin, Mehran; Darlot, Christian; Abu Osman, Noor Azuan
2015-12-01
The demand today for more complex robots that have manipulators with higher degrees of freedom is increasing because of technological advances. Obtaining the precise movement for a desired trajectory or a sequence of arm positions requires the computation of the inverse kinematic (IK) function, which is a major problem in robotics. The solution of the IK problem leads robots to the precise position and orientation of their end-effector. We developed a bioinspired solution, comparable with cerebellar anatomy and function, to solve this problem. The proposed model is stable under all conditions merely by parameter determination, in contrast to recursive model-based solutions, which remain stable only under certain conditions. We modified the proposed model for a simple two-segmented arm to prove the feasibility of the model under a basic condition. A fuzzy neural network, through its learning method, was used to compute the parameters of the system. Simulation results show the practical feasibility and efficiency of the proposed model in robotics. The main advantage of the proposed model is its generalizability and potential use in any robot. PMID:26438095
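For the two-segmented planar arm used as the basic test case, the IK function that any learned model must approximate has a closed form, which makes it a convenient ground truth. A sketch of that analytic solution (elbow-down branch only; the link lengths and target are illustrative, not taken from the paper):

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form IK for a planar 2-link arm (elbow-down branch)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

def forward(q1, q2, l1, l2):
    """Forward kinematics, used to verify an IK solution."""
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))

q1, q2 = two_link_ik(1.0, 1.0, 1.0, 1.0)
fx, fy = forward(q1, q2, 1.0, 1.0)
```

The elbow-up branch is obtained by negating q2; the existence of multiple valid branches is one reason learned IK models need a mechanism for selecting a consistent solution.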
NASA Astrophysics Data System (ADS)
Schuh, A. E.; Denning, A. S.; Corbin, K. D.; Uliasz, M.; Zupanski, D.; Parazoo, N. C.
2007-12-01
Estimation of regional carbon fluxes from sparse atmospheric data by transport inversion is complicated by high-frequency variations in surface fluxes in both space and time. We assume that a forward coupled model of the vegetated land surface and atmosphere (SiB-RAMS) adequately captures most of the high-frequency variations, acting as a 'preprocessor' of input data from remote sensing and large-scale weather. We then use continuous CO2 observations and backward-in-time Lagrangian particle modeling to estimate persistent multiplicative biases in photosynthesis and ecosystem respiration, constraining the temporal pattern of these fluxes with the forward model. With a sparse network of continuous observing sites in North America, the inverse problem is still badly underconstrained for flux biases on the model grid scale. Previous studies have reduced the dimensionality of this problem by using large 'regions' such as biomes or ecoregions, or by seeking a smooth solution in space. This could introduce substantial bias in the solution because the actual flux biases are likely to be quite heterogeneous. We have evaluated the degree to which carbon flux over large regions (500 to 1500 km) can be recovered when the true spatial pattern is not smooth. We performed ensembles of inversions for a 4-month case study in May-August 2004 over North America with synthetic mid-day CO2 observations from a network of 8 towers. A smooth regional field of model biases was superposed with ensembles of various degrees of grid-scale 'noise,' and these were then used to create synthetic concentration data. The pseudodata were then inverted to estimate gridded values of the biases, which were then combined with time-varying model fluxes to create regional maps of sources and sinks. We found that the degree to which corrections in regional fluxes are possible will depend on the relative amount of variance in the regional vs grid scales, but that the system is quite successful in estimating
NASA Astrophysics Data System (ADS)
Ebtehaj, Mohammad
The past decades have witnessed a remarkable emergence of new spaceborne and ground-based sources of multiscale remotely sensed geophysical data. Apart from applications related to the study of short-term climatic shifts, availability of these sources of information has improved dramatically our real-time hydro-meteorological forecast skills. Obtaining improved estimates of hydro-meteorological states from a single or multiple low-resolution observations and assimilating them into the background knowledge of a prognostic model have been a subject of growing research in the past decades. In this thesis, with particular emphasis on precipitation data, the statistical structure of rainfall images has been thoroughly studied in transform domains (i.e., Fourier and Wavelet). It is mainly found that despite different underlying physical structure of storm events, there are general statistical signatures that can be robustly characterized and exploited as a prior knowledge for solving hydro-meteorological inverse problems such as rainfall downscaling, data fusion, retrieval and data assimilation. In particular, it is observed that in the wavelet domain or derivative space, rainfall images are sparse. In other words, a large number of the rainfall expansion coefficients are very close to zero and only a small number of them are significantly non-zero, a manifestation of the non-Gaussian probabilistic structure of rainfall data. To explain this signature, relevant families of probability models including the Generalized Gaussian Density (GGD) and a specific class of conditionally linear Gaussian Scale Mixtures (GSM) are studied. Capitalizing on this important but overlooked property of precipitation, new methodologies are proposed to optimally integrate and improve resolution of spaceborne and ground-based precipitation data. In particular, a unified framework is proposed that ties together the problems of downscaling, data fusion and data assimilation via a regularized variational
NASA Astrophysics Data System (ADS)
Gallovic, F.; Ampuero, J. P.
2015-12-01
Slip inversion methods differ in how the rupture model is parameterized and which regularizations or constraints are applied. However, there is still no consensus about which of the slip inversion methods are preferable and how reliable the inferred source models are due to the non-uniqueness or ill-posedness of the inverse problem. The 'Source Inversion Validation' (SIV) initiative aims to characterize and understand the performance of slip inversion methods (http://equake-rc.info/SIV/). Up to now, four benchmark test cases have been proposed, some of which were even conducted as blind tests. The next step is performing quantitative comparisons of the inverted rupture models. To this aim, we introduce a new comparison technique based on a Singular Value Decomposition (SVD) of the design matrix of the continuum inverse problem. We separate the range and null sub-spaces (representing resolved and unresolved features, respectively) by a selected 'cut-off' singular value, and compare different inverted models to the target (exact) model after projecting them on the range sub-space. This procedure effectively quantifies the ability of an inversion result to reproduce the resolvable features of the source. We find that even with perfect Green's functions the quality of an inverted model deteriorates with decreasing cut-off singular value due to applied regularization (smoothing and positivity constraints). Applying this approach to the inversion results of the SIV2a benchmark from various authors shows that the inferred source images are very similar to the target model when we consider a cut-off at ~1/10 of the largest singular value. Although the truncated model captures the overall rupture propagation, the final slip is biased significantly, showing distinct peaks below the stations lying above the rupture. We also show synthetic experiments to assess the role of station coverage, crustal velocity model, etc. on the conditioning of the slip inversion.
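The projection step described above, comparing models only on the resolved sub-space, reduces to keeping the right singular vectors of the design matrix above the chosen cut-off. A schematic version on a generic design matrix (illustrative sizes, not a real slip-inversion kernel):

```python
import numpy as np

def project_on_range(G, model, cutoff_ratio):
    """Project a model vector on the sub-space resolved by design matrix G:
    keep right singular vectors with s_i >= cutoff_ratio * s_max."""
    _, s, Vt = np.linalg.svd(G, full_matrices=False)
    V_res = Vt[s >= cutoff_ratio * s[0]]
    return V_res.T @ (V_res @ model)

def resolved_difference(G, inverted, target, cutoff_ratio):
    """Distance between two models after projecting both on the range sub-space."""
    return np.linalg.norm(project_on_range(G, inverted, cutoff_ratio)
                          - project_on_range(G, target, cutoff_ratio))

# Rank-deficient toy kernel: the second model parameter lies in the null
# space, so disagreement in it does not count toward the resolved misfit.
G = np.array([[1.0, 0.0],
              [2.0, 0.0]])
d = resolved_difference(G, np.array([3.0, 9.0]), np.array([3.0, -9.0]), 0.1)
```

Lowering `cutoff_ratio` enlarges the resolved sub-space, so more of the unresolved disagreement between inverted and target models is counted, mirroring the deterioration with decreasing cut-off singular value noted above.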
NASA Astrophysics Data System (ADS)
Kanguzhin, Baltabek; Tokmagambetov, Niyaz
2016-08-01
In this work, we study a boundary inverse problem of spectral analysis for a differential operator with integral boundary conditions in the functional space L2(0, b), where b < ∞. A uniqueness theorem for the inverse boundary problem in L2(0, b) is proved. Note that a boundary inverse problem of spectral analysis is the problem of recovering the boundary conditions of the operator from its spectrum and some additional data.
NASA Astrophysics Data System (ADS)
Valentine, A. P.; Atkins, S.; Trampert, J.
2013-12-01
Generally, solving an inverse problem involves finding the global minimum of some 'misfit function', which provides a measure of how well any given set of model parameters explain available data. The misfit function also encapsulates information about the resolution properties and uncertainties associated with any solution, and accessing and understanding this is necessary if results are to be properly interpreted. In seismology -- where we are typically interested in adjusting earth or source models to bring synthetic waveforms (or measurements made thereon) into agreement with recorded data -- a variety of tools have been developed to enable misfit to be evaluated at any point in model-space. However, these calculations are computationally demanding, making it impossible to find best-fitting solutions via brute-force search. One way around this is to linearise the inverse problem, evaluate the gradient of the misfit function at a given location, and then use this information to iteratively step towards a minimum. However, unless the misfit function has a simple form, linearised algorithms may fail to converge to the global minimum. A second class of approaches involve directed random search, using various strategies to preferentially sample low-misfit regions of model space. This is computationally expensive, and may become infeasible as the dimension of the model space increases. We show that it is possible to construct an approximation to the misfit function using a learning algorithm. This assimilates information obtained by evaluating the forward problem, and interpolates between these samples. It is possible to progressively refine the approximation, by using its current state to direct the generation of new samples. Evaluating the approximation at any point in model space is computationally cheap, and it has a well-defined (and differentiable) functional form. The approximation may therefore be substituted for a full evaluation of the misfit in a wide range
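The idea of replacing the expensive misfit with a cheap, differentiable approximation can be caricatured in one dimension with a polynomial surrogate. The paper's learning algorithm and model spaces are far richer; everything below, including the misfit function itself, is an illustrative stand-in:

```python
import numpy as np

# Stand-in for an expensive forward simulation plus misfit evaluation.
def true_misfit(m):
    return (m - 1.3) ** 2 + 0.1 * np.sin(5.0 * m)

rng = np.random.default_rng(1)
samples = rng.uniform(-2.0, 4.0, 15)          # a few "expensive" evaluations
surrogate = np.polynomial.Polynomial.fit(samples, true_misfit(samples), deg=4)

# The surrogate is cheap to evaluate everywhere, so dense search is feasible;
# in the adaptive scheme, new expensive samples would be drawn near this
# minimum to progressively refine the approximation.
grid = np.linspace(-2.0, 4.0, 2001)
m_best = grid[np.argmin(surrogate(grid))]
```

The surrogate also has an explicit functional form, so gradients are available analytically, which is the property that makes such approximations attractive as drop-in replacements for the full misfit.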
NASA Astrophysics Data System (ADS)
Pham, H. V.; Elshall, A. S.; Tsai, F. T.; Yan, L.
2012-12-01
The inverse problem in groundwater modeling deals with a rugged (i.e. ill-conditioned and multimodal), nonseparable and noisy function, since it involves solving second order nonlinear partial differential equations with forcing terms. Derivative-based optimization algorithms may fail to reach a near-global solution due to stagnation at a local minimum. To avoid entrapment in a local optimum and enhance search efficiency, this study introduces the covariance matrix adaptation-evolution strategy (CMA-ES) as a local derivative-free optimization method. In the first part of the study, we compare CMA-ES with five commonly used heuristic methods and the traditional derivative-based Gauss-Newton method on a hypothetical problem. This problem involves four different cases to allow a rigorous assessment against ten criteria: ruggedness in terms of nonsmoothness and multimodality, ruggedness in terms of ill-conditioning and high nonlinearity, nonseparability, high dimensionality, noise, algorithm adaptation, algorithm tuning, performance, consistency, parallelization (scaling with number of cores) and invariance (solution vector and function values). The CMA-ES adapts a covariance matrix representing the pair-wise dependency between decision variables, which approximates the inverse of the Hessian matrix up to a certain factor. The solution is updated with the covariance matrix and an adaptable step size, which are adapted through two conjugates that implement heuristic control terms. The covariance matrix adaptation uses information from the current population of solutions and from the previous search path. Since such an elaborate search mechanism is not common in the other heuristic methods, CMA-ES proves to be more robust than other population-based heuristic methods in terms of reaching a near-optimal solution for a rugged, nonseparable and noisy inverse problem. Other favorable properties that the CMA-ES exhibits are the consistency of the solution for repeated
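The core mechanics of CMA-ES, sampling from an adapted Gaussian and updating its covariance from the selected steps, can be sketched in a stripped-down form. The sketch below omits the evolution paths and the cumulative step-size control that make the real algorithm robust, substituting a simple geometric decay of the step size; it is illustrative only, not a substitute for a full CMA-ES implementation on a real groundwater model:

```python
import numpy as np

def simplified_cma_es(f, x0, sigma=0.5, n_gen=80, lam=16, seed=0):
    """(mu/mu_w, lambda) evolution strategy with a rank-mu covariance update.
    Real CMA-ES adds evolution paths and path-based step-size adaptation."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    mu = lam // 4
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))   # rank-based weights
    w /= w.sum()
    mean, C = np.array(x0, dtype=float), np.eye(n)
    for _ in range(n_gen):
        A = np.linalg.cholesky(C)
        xs = mean + sigma * rng.standard_normal((lam, n)) @ A.T
        best = np.argsort([f(x) for x in xs])[:mu]        # select the mu fittest
        y = (xs[best] - mean) / sigma                      # selected steps
        mean = mean + sigma * (w @ y)
        C = 0.8 * C + 0.2 * (y.T * w) @ y                  # rank-mu update
        sigma *= 0.95                                      # crude step-size decay
    return mean

# Rugged toy objective with its global minimum near (1, 2).
f = lambda x: np.sum((x - np.array([1.0, 2.0])) ** 2) \
    + 0.1 * np.sum(np.sin(20.0 * x) ** 2)
x_hat = simplified_cma_es(f, [0.0, 0.0])
```

Only objective rankings enter the update, never gradients or raw function values, which is why the method tolerates the nonsmooth, noisy misfit surfaces described above.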
Assessment of Tikhonov-type regularization methods for solving atmospheric inverse problems
NASA Astrophysics Data System (ADS)
Xu, Jian; Schreier, Franz; Doicu, Adrian; Trautmann, Thomas
2016-11-01
Inverse problems occurring in atmospheric science aim to estimate state parameters (e.g. temperature or constituent concentration) from observations. To cope with nonlinear ill-posed problems, both direct and iterative Tikhonov-type regularization methods can be used. The major challenge in the framework of direct Tikhonov regularization (TR) concerns the choice of the regularization parameter λ, while iterative regularization methods require an appropriate stopping rule and a flexible λ-sequence. In the framework of TR, a suitable value of the regularization parameter can be generally determined based on a priori, a posteriori, and error-free selection rules. In this study, five practical regularization parameter selection methods, i.e. the expected error estimation (EEE), the discrepancy principle (DP), the generalized cross-validation (GCV), the maximum likelihood estimation (MLE), and the L-curve (LC), have been assessed. As a representative of iterative methods, the iteratively regularized Gauss-Newton (IRGN) algorithm has been compared with TR. This algorithm uses a monotonically decreasing λ-sequence and DP as an a posteriori stopping criterion. Practical implementations pertaining to retrievals of vertically distributed temperature and trace gas profiles from synthetic microwave emission measurements and from real far infrared data, respectively, have been conducted. Our numerical analysis demonstrates that none of the parameter selection methods dedicated to TR appear to be perfect and each has its own advantages and disadvantages. Alternatively, IRGN is capable of producing plausible retrieval results, allowing a more efficient manner for estimating λ.
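Among the a posteriori rules assessed above, the discrepancy principle is the simplest to state: choose the largest λ for which the residual does not exceed the noise level. A grid-search sketch on a generic linear problem (not the radiative-transfer forward model; matrix, data, and noise level are illustrative):

```python
import numpy as np

def tikhonov(A, y, lam):
    """Direct Tikhonov solution of min ||A x - y||^2 + lam ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def discrepancy_principle(A, y, noise_level, lams):
    """Return the largest lambda whose residual stays within the noise level."""
    for lam in sorted(lams, reverse=True):
        x = tikhonov(A, y, lam)
        if np.linalg.norm(A @ x - y) <= noise_level:
            return lam, x
    return min(lams), tikhonov(A, y, min(lams))

A = np.eye(3)
y = np.array([1.0, 1.0, 1.0])
lam, x = discrepancy_principle(A, y, noise_level=0.1, lams=[1e-3, 1e-2, 1e-1, 1.0])
```

The iteratively regularized Gauss-Newton scheme discussed above uses the same residual-versus-noise comparison, but as a stopping criterion along a decreasing λ-sequence rather than as a one-shot selection.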
A 2D inverse problem of predicting boiling heat transfer in a long fin
NASA Astrophysics Data System (ADS)
Orzechowski, Tadeusz
2016-10-01
A method for the determination of local values of the heat transfer coefficient on non-isothermal surfaces was analyzed on the example of a long smooth-surfaced fin made of aluminium. On the basis of the experimental data, two cases were taken into consideration: a one-dimensional model for Bi < 0.1 and a two-dimensional model for thicker elements. In the case when the temperature drop across the thickness could be neglected, the local values of the rejected heat flux were calculated from the integral of the equation describing temperature distribution on the fin. The corresponding boiling curve was plotted on the basis of the temperature gradient distribution as a function of superheat. For thicker specimens, where Bi > 0.1, the problem was modelled using a 2-D heat conduction equation, for which the boundary conditions were posed on the surface observed with a thermovision camera. The ill-conditioned inverse problem was solved using a method of heat polynomials, which required validation.
An inverse problem of determining the implied volatility in option pricing
NASA Astrophysics Data System (ADS)
Deng, Zui-Cha; Yu, Jian-Ning; Yang, Liu
2008-04-01
In the Black-Scholes world there is the important quantity of volatility which cannot be observed directly but has a major impact on the option value. In practice, traders usually work with what is known as implied volatility which is implied by option prices observed in the market. In this paper, we use an optimal control framework to discuss an inverse problem of determining the implied volatility when the average option premium, namely the average value of option premium corresponding with a fixed strike price and all possible maturities from the current time to a chosen future time, is known. The issue is converted into a terminal control problem by Green function method. The existence and uniqueness of the minimum of the control functional are addressed by the optimal control method, and the necessary condition which must be satisfied by the minimum is also given. The results obtained in the paper may be useful for those who engage in risk management or volatility trading.
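The paper recovers a volatility function by optimal control; the far simpler pointwise version of the problem, backing a single implied volatility out of one observed option premium, already shows the inversion flavor and is a useful baseline. A sketch using bisection, which is safe here because the Black-Scholes call price is strictly increasing in σ:

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # normal CDF
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0):
    """Invert price -> sigma by bisection on the monotone price function."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check: price an option at sigma = 0.2, then recover sigma.
premium = bs_call(100.0, 100.0, 1.0, 0.05, 0.2)
sigma_hat = implied_vol(premium, 100.0, 100.0, 1.0, 0.05)
```

The averaged-premium inversion studied in the paper replaces this scalar root-finding with a terminal control problem, because the unknown is a function of the state rather than a single number.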
NASA Astrophysics Data System (ADS)
Ramezanpour, A.
2016-06-01
We study the inverse problem of constructing an appropriate Hamiltonian from a physically reasonable set of orthogonal wave functions for a quantum spin system. Usually, we are given a local Hamiltonian and our goal is to characterize the relevant wave functions and energies (the spectrum) of the system. Here, we take the opposite approach; starting from a reasonable collection of orthogonal wave functions, we try to characterize the associated parent Hamiltonians, to see how the wave functions and the energy values affect the structure of the parent Hamiltonian. Specifically, we obtain (quasi) local Hamiltonians from a complete set of (multilayer) product states and a local mapping of the energy values to the wave functions. On the other hand, a complete set of tree wave functions (having a tree structure) results in nonlocal Hamiltonians and operators that simultaneously flip all the spins in a single branch of the tree graph. We observe that even for a given set of basis states, the energy spectrum can significantly change the nature of interactions in the Hamiltonian. These effects can be exploited in a quantum engineering problem optimizing an objective functional of the Hamiltonian.
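The basic construction — a parent Hamiltonian assembled from a prescribed orthonormal basis and an energy assignment — can be sketched directly. A toy example for two spins-1/2 (the basis, the energies, and the small dimension are invented for illustration; the paper's locality analysis is not reproduced here):

```python
import numpy as np

# Orthonormal basis of the 4-dimensional two-spin Hilbert space (Bell-like states)
psi = np.array([
    [1, 0, 0,  1],
    [1, 0, 0, -1],
    [0, 1, 1,  0],
    [0, 1, -1, 0],
], dtype=float) / np.sqrt(2)

# Assign an energy to each wave function; the parent Hamiltonian is then
# H = sum_i E_i |psi_i><psi_i|, which has exactly this spectrum by construction
E = np.array([0.0, 1.0, 1.0, 2.0])
H = sum(E[i] * np.outer(psi[i], psi[i]) for i in range(4))

# Verify: each basis state is an eigenvector of H with eigenvalue E_i
residual = max(np.linalg.norm(H @ psi[i] - E[i] * psi[i]) for i in range(4))
```

Changing the energy assignment E while keeping the same basis changes the operator content of H, which is the effect the abstract highlights.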
Inverse scattering for an exterior Dirichlet problem [due to metallic cylinder]
NASA Technical Reports Server (NTRS)
Hariharan, S. I.
1982-01-01
Scattering caused by a metallic cylinder in the field of a wire carrying a periodic current is studied, with a view to determining the location and shape of the cylinder from far field measurements between the cylinder and the wire. The associated direct problem is the exterior Dirichlet problem for the Helmholtz equation in two dimensions; an improved low-frequency estimate for its solution by integral equation methods is obtained, and the inverse scattering calculations are shown to be accurate to within this estimate. The far field measurements are related to low-frequency boundary integral equations whose solutions may be expressed in terms of a mapping function for the exterior of the unknown curve onto the exterior of a unit disk. The Laurent expansion coefficients of this conformal transformation can be related to those of the far field; the first coefficient yields the distance between the source and the cylinder, while the remaining coefficients are determined by placing the source in a different location.
Circulation of the Caribbean Sea: a well-resolved inverse problem
Roemmich, D.
1981-09-20
The Caribbean Sea is selected as a region where the large-scale circulation is well determined by historical hydrographic measurements through application of the inverse method. A simple example is used to illustrate the technique and to demonstrate how some physically relevant quantities may be well determined in the formally underdetermined inverse problem. The geostrophic flow field in the Caribbean is found by imposing mass and salt conservation constraints in seven layers separated by surfaces of constant potential density. An unsmoothed solution is displayed that has weak dependence on an initial choice of reference level. In addition, a unique smoothed solution is shown. Above σ_θ = 27.4, the total flow leaving the western Caribbean is estimated to be 29 × 10^6 m^3 s^-1, in agreement with direct measurements. This flow is made up of 22 × 10^6 m^3 s^-1 entering the Caribbean from the east and flowing across the southern half of the basin as the Caribbean Current, and 7 × 10^6 m^3 s^-1 entering from the north through Windward Passage. Both of these currents show small-scale variability that diminishes with distance from the respective passages. The deep flow has no net transport, as required by the shallow exit, but a well organized clockwise recirculation is found in the deep eastern Caribbean.
Gray, D.B.; McMechan, G.A.
1995-06-01
The analytic solution of the Lippmann-Schwinger seismic inverse problem in three spatial dimensions, assuming a point source and a constant-density earth model, is valid in the spatial zero frequency limit. It is expressed as a two-dimensional inverse Fourier transform followed by an inverse Laplace transform. For the case of laterally homogeneous velocity, the analytic solution is correct when applied to a forward solution of the wave equation for a single-interface velocity model. Error surfaces of the non-linear, iterative, least-squares inversions corresponding to multiple, constant-velocity, horizontal layers have an absolute minimum at or near the location of the solution parameters for zero and low frequencies. The error surface for a scattered wavefield dataset generated by 3D finite-difference modeling, combined with a priori constraints, produces nearly correct solutions for a range of low frequencies. Thus, this approach has potential for applicability to field data. 24 refs., 7 figs.
NASA Astrophysics Data System (ADS)
Hampel, Uwe; Freyer, Richard
1996-12-01
We present a reconstruction scheme which solves the inverse linear problem in optical absorption tomography for radially symmetric objects. This is a relevant geometry for optical diagnosis in soft tissues, e.g. breast, testis and even head. The algorithm utilizes an invariance property of the linear imaging operator in homogeneously scattering media. The inverse problem is solved in the Fourier space of the angular component, leading to a considerable dimension reduction which makes it possible to compute the inverse in a direct way using singular value decomposition. There are two major advantages of this approach. First, the inverse operator can be stored in computer memory, and the computation of the inverse problem comprises only a few matrix multiplications. This makes the algorithm very fast and suitable for parallel execution. Second, we obtain the spectrum of the imaging operator, which allows conclusions about reconstruction limits in the presence of noise and gives a termination criterion for image synthesis. To demonstrate the capabilities of this scheme, reconstruction results from synthetic and phantom data are presented.
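The core idea — precompute a truncated SVD inverse of the linear imaging operator once, so that each reconstruction reduces to a matrix multiplication, with the singular spectrum supplying the truncation criterion — can be sketched generically. Here a random matrix stands in for the actual absorption-tomography operator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretized linear imaging operator A (m measurements, n unknowns)
m, n = 80, 60
A = rng.standard_normal((m, n))

# Precompute the regularized inverse via SVD; the spectrum shows which
# components survive noise, and truncation acts as the termination criterion
U, s, Vt = np.linalg.svd(A, full_matrices=False)
keep = s > 1e-8 * s[0]              # discard components below an assumed noise floor
A_pinv = Vt[keep].T @ np.diag(1.0 / s[keep]) @ U[:, keep].T

# Reconstruction is then just one matrix-vector product per data vector
x_true = rng.standard_normal(n)
y = A @ x_true
x_rec = A_pinv @ y
err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

The stored `A_pinv` plays the role of the precomputed inverse operator; in the noisy case the threshold on `s` would be raised to the noise level rather than machine precision.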
NASA Technical Reports Server (NTRS)
Goel, N.; Strebel, D. E.
1983-01-01
An important but relatively uninvestigated problem in remote sensing is the inversion of vegetative canopy reflectance models to obtain agrophysical parameters, given measured reflectances. The problem is here formally defined and its solution outlined. Numerical nonlinear optimization techniques are used to implement this inversion to obtain the leaf area index using Suits' canopy reflectance model. The results for a variety of cases indicate that this can be done successfully using infrared reflectances at different view or azimuth angles, or a combination thereof. The other parameters of the model must be known, although reasonable measurement errors can be tolerated without seriously degrading the accuracy of the inversion. The application of the technique to ground-based remote-sensing experiments is potentially useful, but is limited by the degree to which the canopy reflectance model can accurately predict observed reflectances.
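The inversion strategy — fit a canopy reflectance model to reflectances measured at several view angles by nonlinear optimization — can be sketched with a toy saturating model standing in for Suits' model. The model form and every coefficient below are invented for illustration only:

```python
import numpy as np
from scipy.optimize import least_squares

def toy_reflectance(lai, view_zenith):
    """Toy stand-in for a canopy reflectance model (NOT Suits' model):
    reflectance saturates with leaf area index, modulated by view angle."""
    return 0.45 * (1.0 - np.exp(-0.6 * lai / np.cos(view_zenith)))

# Synthetic "measurements" at several view zenith angles, with small noise
angles = np.radians([0, 15, 30, 45])
lai_true = 2.5
rng = np.random.default_rng(1)
measured = toy_reflectance(lai_true, angles) + 0.002 * rng.standard_normal(4)

# Invert by nonlinear least squares, holding all other model parameters fixed
fit = least_squares(lambda p: toy_reflectance(p[0], angles) - measured,
                    x0=[1.0], bounds=(0.0, 10.0))
lai_est = fit.x[0]
```

Using several view angles, as the abstract notes, over-determines the single unknown and averages down the measurement noise.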
Representation and constraints: the inverse problem and the structure of visual space.
Hatfield, Gary
2003-11-01
Visual space can be distinguished from physical space. The first is found in visual experience, while the second is defined independently of perception. Theorists have wondered about the relation between the two. Some investigators have concluded that visual space is non-Euclidean, and that it does not have a single metric structure. Here it is argued (1) that visual space exhibits contraction in all three dimensions with increasing distance from the observer, (2) that experienced features of this contraction (including the apparent convergence of lines in visual experience that are produced from physically parallel stimuli in ordinary viewing conditions) are not the same as would be the experience of a perspective projection onto a frontoparallel plane, and (3) that such contraction is consistent with size constancy. These properties of visual space are different from those that would be predicted if spatial perception resulted from the successful solution of the inverse problem. They are consistent with the notion that optical constraints have been internalized. More generally, they are also consistent with the notion that visual spatial structures bear a resemblance relation to physical spatial structures. This notion supports a type of representational relation that is distinct from mere causal correspondence. The reticence of some philosophers and psychologists to discuss the structure of phenomenal space is diagnosed in terms of the simple materialism and the functionalism of the 1970s and 1980s.
Rostami, Mahboubeh Rahmati; Wu, Jincheng; Tzanakakis, Emmanuel S.
2015-01-01
The cultivation of stem cells as aggregates in scalable bioreactor cultures is an appealing modality for the large-scale manufacturing of stem cell products. Aggregation phenomena are central to such bioprocesses affecting the viability, proliferation and differentiation trajectory of stem cells but a quantitative framework is currently lacking. A population balance equation (PBE) model was used to describe the temporal evolution of the embryonic stem cell (ESC) cluster size distribution by considering collision-induced aggregation and cell proliferation in a stirred-suspension vessel. For ESC cultures at different agitation rates, the aggregation kernel representing the aggregation dynamics was successfully recovered as a solution of the inverse problem. The rate of change of the average aggregate size was greater at the intermediate rate tested suggesting a trade-off between increased collisions and agitation-induced shear. Results from forward simulation with obtained aggregation kernels were in agreement with transient aggregate size data from experiments. We conclude that the framework presented here can complement mechanistic studies offering insights into relevant stem cell clustering processes. More importantly from a process development standpoint, this strategy can be employed in the design and control of bioreactors for the generation of stem cell derivatives for drug screening, tissue engineering and regenerative medicine. PMID:26036699
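The forward side of such a PBE model — evolving a cluster size distribution under collision-induced aggregation — can be sketched with the discrete Smoluchowski equations and a constant kernel. The paper recovers the kernel by solving the inverse problem; here the kernel value, grid size, and time step are assumptions of the sketch, and cell proliferation is omitted:

```python
import numpy as np

def smoluchowski_step(n, beta, dt):
    """One explicit Euler step of the discrete Smoluchowski (PBE) equations
    with a constant aggregation kernel beta; n[k] is the number density of
    clusters containing k+1 cells."""
    kmax = len(n)
    dn = np.zeros_like(n)
    for i in range(kmax):
        for j in range(kmax):
            rate = beta * n[i] * n[j]
            dn[i] -= rate                    # cluster i lost to coagulation
            if i + j + 1 < kmax:
                dn[i + j + 1] += 0.5 * rate  # merged cluster formed (1/2 avoids double count)
    return n + dt * dn

# Start from single cells only and evolve; the mean aggregate size grows
n = np.zeros(50)
n[0] = 1.0
for _ in range(200):
    n = smoluchowski_step(n, beta=0.1, dt=0.05)

sizes = np.arange(1, 51)
mean_size = (sizes * n).sum() / n.sum()
```

Fitting `beta` (or a size-dependent kernel) so that the simulated distribution matches measured transient size data is the inverse step the abstract describes.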
General Features of Supersymmetric Signals at the ILC: Solving the LHC Inverse Problem
Berger, Carola F.; Gainer, James S.; Hewett, JoAnne L.; Lillie, Ben; Rizzo, Thomas G.
2007-12-19
We examine whether the √s = 500 GeV International Linear Collider with 80% electron beam polarization can be used to solve the LHC Inverse Problem within the framework of the MSSM. We investigate 242 points in the MSSM parameter space, which we term models, that correspond to the 162 pairs of models found by Arkani-Hamed et al. to give indistinguishable signatures at the LHC. We first determine whether the production of the various SUSY particles is visible above the Standard Model background for each of these parameter space points, and then make a detailed comparison of their various signatures. Assuming an integrated luminosity of 500 fb^-1, we find that only 82 out of 242 models lead to visible signatures of some kind with a significance ≥ 5 and that only 57 (63) out of the 162 model pairs are distinguishable at 5 (3) σ. Our analysis includes PYTHIA and CompHEP SUSY signal generation, full matrix element SM backgrounds for all 2 → 2, 2 → 4, and 2 → 6 processes, ISR and beamstrahlung generated via WHIZARD/GuineaPig, and employs the fast SiD detector simulation org.lcsim.
Extraction of skin-friction fields from surface flow visualizations as an inverse problem
NASA Astrophysics Data System (ADS)
Liu, Tianshu
2013-12-01
Extraction of high-resolution skin-friction fields from surface flow visualization images as an inverse problem is discussed from a unified perspective. The surface flow visualizations used in this study are luminescent oil-film visualization and heat-transfer and mass-transfer visualizations with temperature- and pressure-sensitive paints (TSPs and PSPs). The theoretical foundations of these global methods are the thin-oil-film equation and the limiting forms of the energy- and mass-transport equations at a wall, which are projected onto the image plane to provide the relationships between a skin-friction field and the relevant quantities measured by using an imaging system. Since these equations can be re-cast in the same mathematical form as the optical flow equation, they can be solved by using the variational method in the image plane to extract relative or normalized skin-friction fields from images. Furthermore, in terms of instrumentation, essentially the same imaging system for measurements of luminescence can be used in these surface flow visualizations. Examples are given to demonstrate the applications of these methods in global skin-friction diagnostics of complex flows.
Inverse Problem for Color Doppler Ultrasound-Assisted Intracardiac Blood Flow Imaging
Jang, Jaeseong; Ahn, Chi Young; Choi, Jung-Il; Seo, Jin Keun
2016-01-01
For the assessment of the left ventricle (LV), echocardiography has been widely used to visualize and quantify geometrical variations of LV. However, echocardiographic image itself is not sufficient to describe a swirling pattern which is a characteristic blood flow pattern inside LV without any treatment on the image. We propose a mathematical framework based on an inverse problem for three-dimensional (3D) LV blood flow reconstruction. The reconstruction model combines the incompressible Navier-Stokes equations with one-direction velocity component of the synthetic flow data (or color Doppler data) from the forward simulation (or measurement). Moreover, time-varying LV boundaries are extracted from the intensity data to determine boundary conditions of the reconstruction model. Forward simulations of intracardiac blood flow are performed using a fluid-structure interaction model in order to obtain synthetic flow data. The proposed model significantly reduces the local and global errors of the reconstructed flow fields. We demonstrate the feasibility and potential usefulness of the proposed reconstruction model in predicting dynamic swirling patterns inside the LV over a cardiac cycle. PMID:27313657
NASA Astrophysics Data System (ADS)
Bao, Xingxian; Cao, Aixia; Zhang, Jing
2016-07-01
Modal parameter estimation plays an important role in structural health monitoring. Accurately estimating the modal parameters of structures is more challenging when the measured vibration response signals are contaminated with noise. This study develops a mathematical algorithm that solves the partially described inverse singular value problem (PDISVP) combined with the complex exponential (CE) method to estimate the modal parameters. The PDISVP solving method reconstructs an L2-norm optimized (filtered) data matrix from the measured (noisy) data matrix, when the prescribed data constraints are one or several sets of singular triplets of the matrix. The measured data matrix is Hankel structured, constructed from the measured impulse response function (IRF). The reconstructed matrix must maintain the Hankel structure, and be lowered in rank as well. Once the filtered IRF is obtained, the CE method can be applied to extract the modal parameters. Two physical experiments, a steel cantilever beam with 10 accelerometers mounted and a steel plate with 30 accelerometers mounted, each excited by an impulsive load, are investigated to test the applicability of the proposed scheme. In addition, the consistency diagram is proposed to examine the agreement among the modal parameters estimated from those different accelerometers. Results indicate that the PDISVP-CE method can significantly remove noise from measured signals and accurately estimate the modal frequencies and damping ratios.
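The filtering step — reducing the rank of a Hankel matrix built from the measured IRF while restoring its Hankel structure — can be illustrated with a Cadzow-style alternating projection, a simple stand-in for the PDISVP solution (the synthetic two-exponential signal, rank, and iteration count are assumptions of this sketch):

```python
import numpy as np

def hankel_denoise(x, rank, n_iter=20):
    """Alternate between rank reduction (SVD truncation) of the Hankel matrix
    of x and restoration of the Hankel structure by anti-diagonal averaging."""
    L = len(x) // 2
    for _ in range(n_iter):
        H = np.array([x[i:i + L] for i in range(len(x) - L + 1)])
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # low-rank projection
        # anti-diagonal averaging restores the Hankel structure
        y = np.zeros(len(x))
        cnt = np.zeros(len(x))
        for i in range(H.shape[0]):
            y[i:i + L] += H[i]
            cnt[i:i + L] += 1
        x = y / cnt
    return x

# Noisy single-mode impulse response (one damped cosine = two complex
# exponentials = Hankel rank 2)
t = np.arange(200) * 0.01
clean = np.exp(-0.5 * t) * np.cos(2 * np.pi * 12 * t)
rng = np.random.default_rng(2)
noisy = clean + 0.2 * rng.standard_normal(t.size)

filtered = hankel_denoise(noisy, rank=2)
err_noisy = np.linalg.norm(noisy - clean)
err_filt = np.linalg.norm(filtered - clean)
```

The filtered signal would then be passed to a complex exponential fit to extract frequencies and damping ratios, as in the PDISVP-CE pipeline.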
The direct and inverse problems of an air-saturated porous cylinder submitted to acoustic radiation.
Ogam, Erick; Depollier, Claude; Fellah, Z E A
2010-09-01
Gas-saturated porous skeleton materials such as geomaterials, polymeric and metallic foams, or biomaterials are fundamental in a diverse range of applications, from structural materials to energy technologies. Most polymeric foams are used for noise control applications and knowledge of the manner in which the energy of sound waves is dissipated with respect to the intrinsic acoustic properties is important for the design of sound packages. Foams are often employed in the audible, low frequency range where modeling and measurement techniques for the recovery of physical parameters responsible for energy loss are still few. Accurate acoustic methods of characterization of porous media are based on the measurement of the transmitted and/or reflected acoustic waves by platelike specimens at ultrasonic frequencies. In this study we develop an acoustic method for the recovery of the material parameters of a rigid-frame, air-saturated polymeric foam cylinder. A dispersion relation for sound wave propagation in the porous medium is derived from the propagation equations and a model solution is sought based on plane-wave decomposition using orthogonal cylindrical functions. The explicit analytical solution equation of the scattered field shows that it is also dependent on the intrinsic acoustic parameters of the porous cylinder, namely, porosity, tortuosity, and flow resistivity (permeability). The inverse problem of the recovery of the flow resistivity and porosity is solved by seeking the minima of the objective functions consisting of the sum of squared residuals of the differences between the experimental and theoretical scattered field data. PMID:20887001
Sensor Placement by Maximal Projection on Minimum Eigenspace for Linear Inverse Problems
NASA Astrophysics Data System (ADS)
Jiang, Chaoyang; Soh, Yeng Chai; Li, Hua
2016-11-01
This paper presents two new greedy sensor placement algorithms, named minimum nonzero eigenvalue pursuit (MNEP) and maximal projection on minimum eigenspace (MPME), for linear inverse problems, with greater emphasis on the MPME algorithm for performance comparison with existing approaches. We select the sensing locations one-by-one. In this way, the least number of required sensors can be determined by checking whether the estimation accuracy is satisfied after each sensing location is determined. The minimum eigenspace is defined as the eigenspace associated with the minimum eigenvalue of the dual observation matrix. For each sensing location, the projection of its observation vector onto the minimum eigenspace is shown to be monotonically decreasing w.r.t. the worst case error variance (WCEV) of the estimated parameters. We select the sensing location whose observation vector has the maximum projection onto the minimum eigenspace of the current dual observation matrix. The proposed MPME is shown to be one of the most computationally efficient algorithms. Our Monte-Carlo simulations showed that MPME outperforms the convex relaxation method [1], the SparSenSe method [2], and the FrameSense method [3] in terms of WCEV and the mean square error (MSE) of the estimated parameters, especially when the number of available sensor nodes is very limited.
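A simplified reading of the MPME selection rule can be sketched as follows: at each step, form the eigenspace of the minimum eigenvalue of the current dual observation matrix and pick the candidate location whose observation vector has the largest projection onto it. The random candidate matrix and the eigenvalue tolerance are assumptions of this sketch, not details from the paper:

```python
import numpy as np

def mpme_select(A, k):
    """Greedy sensor placement, MPME-style sketch. Rows of A are candidate
    observation vectors; select k locations one by one."""
    m, n = A.shape
    chosen = []
    for _ in range(k):
        rows = A[chosen] if chosen else np.zeros((0, n))
        dual = rows.T @ rows                    # current dual observation matrix
        w, V = np.linalg.eigh(dual)             # eigenvalues ascending
        # eigenspace of the minimum eigenvalue (within a tolerance)
        min_space = V[:, np.isclose(w, w[0], atol=1e-9)]
        proj = np.linalg.norm(A @ min_space, axis=1)
        proj[chosen] = -np.inf                  # never reuse a location
        chosen.append(int(np.argmax(proj)))
    return chosen

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 5))                # 30 candidate sites, 5 parameters
sel = mpme_select(A, 7)
# with at least 5 well-chosen rows the parameters become identifiable
cond = np.linalg.cond(A[sel].T @ A[sel])
```

Selecting one location at a time, as the abstract notes, lets the loop stop as soon as an accuracy criterion on the dual matrix is met.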
Using eigenmodes to perform the inverse problem associated with resonant ultrasound spectroscopy
David Hurley; Farhad Farzbod
2012-11-01
In principle, resonant ultrasonic spectroscopy (RUS) can be used to characterize any parameter that influences the mechanical resonant response of a sample. Examples include the elastic constants, sample dimensions, and crystal orientation. Extracting the parameter of interest involves performing the inverse problem, which typically entails an iterative routine that compares calculated and measured eigenfrequencies. Here, we propose an alternative method based on laser-based resonant ultrasound spectroscopy (LRUS) that uses the eigenmodes. LRUS uses a pulsed laser to thermoelastically excite ultrasound and an interferometer to detect out-of-plane displacement associated with ultrasonic resonances. By raster scanning the probe along the sample surface, an image of the out-of-plane displacement pattern (i.e., eigenmode) is obtained. As an example of this method, we describe a technique to calculate the crystallographic orientation of a single-crystal high-purity copper sample. The crystallographic orientation is computed by comparing theoretical and experimental eigenmodes. The computed angle is shown to be in very good agreement with the angle obtained using electron backscatter diffraction. In addition, a comparison is made between using eigenfrequencies and eigenmodes to calculate the crystallographic orientation. It is found that, for this particular application, the eigenmode method has superior sensitivity to crystal orientation.
NASA Astrophysics Data System (ADS)
Giudici, M.; Baratelli, F.; Comunian, A.; Vassena, C.; Cattaneo, L.
2014-10-01
Numerical modelling of the dynamic evolution of ice sheets and glaciers requires the solution of discrete equations which are based on physical principles (e.g. conservation of mass, linear momentum and energy) and phenomenological constitutive laws (e.g. Glen's and Fourier's laws). These equations must be accompanied by information on the forcing term and by initial and boundary conditions (IBCs) on ice velocity, stress and temperature; on the other hand the constitutive laws involve many physical parameters, some of which depend on the ice thermodynamical state. The proper forecast of the dynamics of ice sheets and glaciers requires a precise knowledge of several quantities which appear in the IBCs, in the forcing terms and in the phenomenological laws. As these quantities cannot be easily measured at the study scale in the field, they are often obtained through model calibration by solving an inverse problem (IP). The objective of this paper is to provide a thorough and rigorous conceptual framework for IPs in cryospheric studies and in particular: to clarify the role of experimental and monitoring data to determine the calibration targets and the values of the parameters that can be considered to be fixed; to define and characterise identifiability, a property related to the solution to the forward problem; to study well-posedness in a correct way, without confusing instability with ill-conditioning or with the properties of the method applied to compute a solution; to cast sensitivity analysis in a general framework and to differentiate between the computation of local sensitivity indicators with a one-at-a-time approach and first-order sensitivity indicators that consider the whole possible variability of the model parameters. The conceptual framework and the relevant properties are illustrated by means of a simple numerical example of isothermal ice flow, based on the shallow-ice approximation.
Inverse Tasks In The Tsunami Problem: Nonlinear Regression With Inaccurate Input Data
NASA Astrophysics Data System (ADS)
Lavrentiev, M.; Shchemel, A.; Simonov, K.
The problem can be formally stated as follows: a distribution of various combinations of observed values should be estimated. The totality of the combinations is represented by the set of variables, and the results of observations determine a sample of outputs. Within this formulation, a continuous (along with its derivatives) homomorphic mapping from the space of hidden parameters to the space of observed parameters should be found. This makes it possible to reconstruct missing information about the inputs when the number of inputs is not less than the number of hidden parameters, and to estimate the distribution when the information is not sufficient for an unambiguous prediction of the unknown inputs. The following approach to building an approximation from the sample is suggested: the sample is supplemented with hidden parameters distributed uniformly in a bounded multidimensional space. One then seeks a correspondence between model and observed outputs such that the best approximation is the most accurate. In the odd iterations, the dependence between hidden inputs and outputs is optimized (as in the conventional problem); the correspondence between tasks is changed when the error decreases while the distribution of inputs remains intact. A special transform is therefore applied to reduce the error at every iteration. If the measure of the distribution is constant, the condition on the transformations is simplified; such transforms are called "canonical" or "volume-invariant" transforms and are well known. This approach is suggested for solving the main inverse task of the tsunami problem: estimating the parameters of the tsunami source from tsunami records at the coast and on the shelf.
Magnetic field topology of τ Scorpii. The uniqueness problem of Stokes V ZDI inversions
NASA Astrophysics Data System (ADS)
Kochukhov, O.; Wade, G. A.
2016-02-01
Context. The early B-type star τ Sco exhibits an unusually complex, relatively weak surface magnetic field. Its topology was previously studied with the Zeeman Doppler imaging (ZDI) modelling of high-resolution circular polarisation (Stokes V) observations. Aims: Here we assess the robustness of the Stokes V ZDI reconstruction of the magnetic field geometry of τ Sco and explore the consequences of using different parameterisations of the surface magnetic maps. Methods: This analysis is based on the archival ESPaDOnS high-resolution Stokes V observations and employs an independent ZDI magnetic inversion code. Results: We succeeded in reproducing previously published magnetic field maps of τ Sco using both general harmonic expansion and a direct, pixel-based representation of the magnetic field. These maps suggest that the field topology of τ Sco is comprised of comparable contributions of the poloidal and toroidal magnetic components. At the same time, we also found that available Stokes V observations can be successfully fitted with restricted harmonic expansions, by either neglecting the toroidal field altogether, or linking the radial and horizontal components of the poloidal field as required by the widely used potential field extrapolation technique. These alternative modelling approaches lead to a stronger and topologically more complex surface field structure. The field distributions, which were recovered with different ZDI options, differ significantly and yield indistinguishable Stokes V profiles but different linear polarisation (Stokes Q and U) signatures. Conclusions: Our investigation underscores the well-known problem of non-uniqueness of the Stokes V ZDI inversions. For the magnetic stars with properties similar to τ Sco (relatively complex field, slow rotation) the outcome of magnetic reconstruction strongly depends on the adopted field parameterisation, rendering photospheric magnetic mapping and determination of the extended magnetospheric
Pesin, Yakov B.; Niu, Xun; Latash, Mark L.
2010-01-01
We consider the problem of what is being optimized in human actions with respect to various aspects of human movements and different motor tasks. From the mathematical point of view this problem consists of finding an unknown objective function given the values at which it reaches its minimum. This problem is called the inverse optimization problem. Until now the main approach to this problem has been the cut-and-try method, which consists of introducing an objective function and checking how well it reflects the experimental data. Using this approach, different objective functions have been proposed for the same motor action. In the current paper we focus on inverse optimization problems with additive objective functions and linear constraints. Such problems are typical in human movement science. The problem of muscle (or finger) force sharing is an example. For such problems we obtain sufficient conditions for uniqueness and propose a method for determining the objective functions. To illustrate our method we analyze the problem of force sharing among the fingers in a grasping task. We estimate the objective function from the experimental data and show that it can predict the force-sharing pattern for a vast range of external forces and torques applied to the grasped object. The resulting objective function is quadratic with essentially non-zero linear terms. PMID:19902213
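For an additive quadratic objective with a linear total-force constraint, the forward (force-sharing) problem has a closed form from the Lagrange conditions, which is what makes the class analytically tractable. A sketch with invented coefficients, not fitted to any experiment:

```python
import numpy as np

def share_forces(a, b, F):
    """Minimize sum_i (a_i f_i^2 + b_i f_i) subject to sum_i f_i = F.
    Stationarity gives 2 a_i f_i + b_i = lam for all i, i.e.
    f_i = (lam - b_i) / (2 a_i), with lam fixed by the constraint."""
    inv2a = 1.0 / (2.0 * a)
    # sum_i (lam - b_i) / (2 a_i) = F  ->  solve for the multiplier lam
    lam = (F + np.sum(b * inv2a)) / np.sum(inv2a)
    return (lam - b) * inv2a

a = np.array([1.0, 1.5, 2.0, 3.0])   # quadratic coefficients per finger (illustrative)
b = np.array([0.2, 0.1, 0.3, 0.0])   # essentially non-zero linear terms
f = share_forces(a, b, F=20.0)       # predicted force-sharing pattern
```

Because the solution is affine in F, a quadratic objective with non-zero linear terms predicts sharing patterns that shift, not merely scale, as the external force changes.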
NASA Astrophysics Data System (ADS)
Kelbert, A.; Schultz, A.; Egbert, G.
2006-12-01
We address the non-linear ill-posed inverse problem of reconstructing the global three-dimensional distribution of electrical conductivity in Earth's mantle. The authors have developed a numerical regularized least-squares inverse solution based on the non-linear conjugate gradients approach. We apply this methodology to the most current low-frequency global observatory data set of Fujii & Schultz (2002), which includes c- and d-responses. We obtain 4-8 layer models satisfying the data. We then describe the features common to all these models and discuss the resolution of our method.
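The regularized least-squares formulation minimized by nonlinear conjugate gradients can be sketched on a toy linear forward operator. The operator, noise level, and regularization weight below are illustrative stand-ins; the actual problem involves 3-D electromagnetic induction modelling:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
G = rng.standard_normal((40, 25))        # hypothetical (linearized) forward operator
m_true = rng.standard_normal(25)
d = G @ m_true + 0.01 * rng.standard_normal(40)

lam = 0.1                                # regularization weight (assumed)

def objective(m):
    r = G @ m - d
    return r @ r + lam * (m @ m)         # data misfit + model-norm penalty

def gradient(m):
    return 2 * G.T @ (G @ m - d) + 2 * lam * m

# Nonlinear conjugate gradients on the regularized functional
res = minimize(objective, np.zeros(25), jac=gradient, method='CG')
m_est = res.x
```

For a truly non-linear forward operator the same loop applies, with `G @ m` replaced by the forward solve and the gradient supplied by an adjoint computation.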
Bakhos, Tania; Saibaba, Arvind K.; Kitanidis, Peter K.
2015-10-15
We consider the problem of estimating parameters in large-scale weakly nonlinear inverse problems for which the underlying governing equation is a linear, time-dependent parabolic partial differential equation. A major challenge in solving these inverse problems using Newton-type methods is the computational cost associated with solving the forward problem and with repeated construction of the Jacobian, which represents the sensitivity of the measurements to the unknown parameters. Forming the Jacobian can be prohibitively expensive because it requires repeated solutions of the forward and adjoint time-dependent parabolic partial differential equations corresponding to multiple sources and receivers. We propose a method based on a Laplace-transform exponential time integrator combined with a flexible Krylov subspace approach that solves the resulting shifted systems of equations efficiently. Our proposed solver speeds up the computation of the forward and adjoint problems, thus yielding significant speedup in total inversion time. We consider an application from Transient Hydraulic Tomography (THT), which is an imaging technique to estimate hydraulic parameters related to the subsurface from pressure measurements obtained by a series of pumping tests. The algorithms discussed are applied to a synthetic example taken from THT to demonstrate the resulting computational gains of this proposed method.
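The key property that makes shifted systems cheap, namely that a single Krylov basis serves every shift because K_m(A, b) = K_m(A + σI, b), can be illustrated with a plain Arnoldi/FOM sketch. This is a stand-in for the flexible Krylov solver described above; the matrix and shifts are arbitrary test data:

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi factorization A @ V[:, :m] = V @ H with orthonormal V."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H, beta

rng = np.random.default_rng(1)
n, m = 120, 40
R = rng.standard_normal((n, n))
A = R @ R.T + n * np.eye(n)                 # well-conditioned SPD test matrix
b = rng.standard_normal(n)
V, H, beta = arnoldi(A, b, m)               # built once, reused for all shifts

# One basis, many shifts: each shifted system (A + sigma I) x = b reduces to
# a small m x m solve against the same V and H.
residuals = []
for sigma in (0.1, 1.0, 10.0):
    y = np.linalg.solve(H[:m, :m] + sigma * np.eye(m), beta * np.eye(m)[:, 0])
    x = V[:, :m] @ y                        # Galerkin (FOM-type) approximation
    res = np.linalg.norm(b - (A + sigma * np.eye(n)) @ x)
    residuals.append(res)
    print(f"sigma = {sigma:5.1f}: residual = {res:.2e}")
```

The expensive matrix-vector products with A are paid once; each extra shift costs only a small dense solve, which is the source of the speedup claimed above.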
Albocher, U.; Barbone, P.E.; Richards, M.S.; Oberai, A.A.; Harari, I.
2014-01-01
We apply the adjoint weighted equation (AWE) method to the direct solution of inverse problems of incompressible plane strain elasticity. We show that based on untreated noisy displacements, the reconstruction of the shear modulus can be very poor. We link this poor performance to loss of coercivity of the weak form when treating problems with discontinuous coefficients. We demonstrate that by smoothing the displacements and appending a regularization term to the AWE formulation, a dramatic improvement in the reconstruction can be achieved. With these improvements, the advantages of the AWE method as a direct solution approach can be extended to a wider range of problems. PMID:25383085
Piskozub, J.
1994-12-31
The multifrequency lidar inverse problem discussed consists of calculating the size distribution of aerosol particles from backscattered lidar data. Sea-water (marine) aerosol is particularly well suited for this kind of study: because its particles are almost spherical and their complex index of refraction is well known, its scattering characteristics can be accurately represented by Mie theory. Here, a solution of the inverse problem of finding the aerosol size distribution for a multifrequency lidar system working on a small number of wavelengths is proposed. The solution involves a best-fit method of finding parameters in a pre-set formula of particle size distribution. A comparison of results calculated with the algorithm from experimental lidar profiles with PMS data collected in the Baltic Sea coastal zone is given.
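A minimal version of such a best-fit inversion might look as follows. The scattering kernel here is a smooth toy function standing in for Mie theory, and the pre-set distribution is taken to be log-normal; both are illustrative assumptions, not the paper's actual choices:

```python
import numpy as np

# Toy scattering kernel standing in for the Mie backscatter efficiency
# (assumption: a real inversion would evaluate Mie theory here).
def kernel(lam, r):
    x = 2 * np.pi * r / lam                 # size parameter
    return np.pi * r**2 * x**2 / (1 + x**2)

r = np.linspace(0.05, 5.0, 400)             # radius grid, micrometres
dr = r[1] - r[0]
lams = np.array([0.355, 0.532, 1.064, 1.55, 2.0])   # wavelengths, micrometres

def lognormal(r, r0, s):
    return np.exp(-np.log(r / r0)**2 / (2 * s**2)) / (r * s * np.sqrt(2 * np.pi))

def backscatter(n_r):
    # beta(lambda) = integral of kernel(lambda, r) n(r) dr (Riemann sum)
    return (kernel(lams[:, None], r[None, :]) * n_r[None, :]).sum(axis=1) * dr

true_r0, true_s = 0.4, 0.6
beta_obs = backscatter(lognormal(r, true_r0, true_s))

# Best-fit search over the pre-set distribution's shape parameters;
# the amplitude N enters linearly and is solved in closed form.
best = None
for r0 in np.linspace(0.1, 1.0, 46):
    for s in np.linspace(0.2, 1.2, 51):
        beta1 = backscatter(lognormal(r, r0, s))
        N = (beta_obs @ beta1) / (beta1 @ beta1)
        err = np.linalg.norm(beta_obs - N * beta1)
        if best is None or err < best[0]:
            best = (err, r0, s, N)
print("recovered (r0, s, N):", best[1:])
```

Restricting the unknown to a few parameters of a pre-set formula is what makes the problem tractable with only a handful of wavelengths.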
NASA Astrophysics Data System (ADS)
Vasquez, M.; Schreier, F.; Gimeno Garcia, S.; Hedelt, P.; Trautmann, T.
2014-04-01
More than one thousand exoplanets have been discovered in the past two decades, with a few dozen of them in their host stars' habitable zones and with sizes and masses similar to Earth's. Furthermore, exoplanet spectra of reasonable quality (resolution and noise) are becoming available, raising the question of remote sensing of a planet's atmosphere. The objective of this sensitivity study is to identify the optimal state vector representing the atmosphere in the inverse problem. Solving the inverse problem will ultimately make it possible to characterize the planet and determine its habitability. Using a high resolution infrared radiative transfer code with a line-by-line molecular absorption model, we calculate synthetic spectra of exoplanets orbiting dwarf stars. Key parameters describing the atmosphere (i.e., molecular abundances, temperature, pressure) are identified and the Jacobians (i.e., partial derivatives of the spectra) are evaluated to investigate the feasibility of retrieving the state of the planetary atmosphere.
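The Jacobian evaluation described above amounts to differentiating the synthetic spectrum with respect to each state-vector element. A toy Beer-Lambert sketch (a single Lorentzian line; the actual study uses a line-by-line radiative transfer code) shows an analytic derivative checked against central finite differences:

```python
import numpy as np

nu = np.linspace(990.0, 1010.0, 200)        # wavenumber grid (cm^-1)

def sigma_abs(nu, nu0=1000.0, gamma=0.5):
    # Lorentzian line shape (illustrative line position and width)
    return gamma / np.pi / ((nu - nu0)**2 + gamma**2)

def spectrum(u):
    """Toy Beer-Lambert transmission for column amount u of one absorber."""
    return np.exp(-u * sigma_abs(nu))

u = 2.0
jac_analytic = -sigma_abs(nu) * spectrum(u)     # dT/du, analytic
h = 1e-6
jac_fd = (spectrum(u + h) - spectrum(u - h)) / (2 * h)   # central difference
diff = np.max(np.abs(jac_fd - jac_analytic))
print("max Jacobian discrepancy:", diff)        # small: the two agree
```

In a retrieval, one such Jacobian column per state-vector element (abundance, temperature, pressure) is what quantifies whether the spectrum constrains that element at all.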
Solving the inverse Ising problem by mean-field methods in a clustered phase space with many states.
Decelle, Aurélien; Ricci-Tersenghi, Federico
2016-07-01
In this work we explain how to properly use mean-field methods to solve the inverse Ising problem when the phase space is clustered, that is, many states are present. The clustering of the phase space can occur for many reasons, e.g., when a system undergoes a phase transition, but also when data are collected in different regimes (e.g., quiescent and spiking regimes in neural networks). Mean-field methods for the inverse Ising problem are typically used without taking into account the eventual clustered structure of the input configurations and may lead to very poor inference (e.g., in the low-temperature phase of the Curie-Weiss model). In this work we explain how to modify mean-field approaches when the phase space is clustered and we illustrate the effectiveness of our method on different clustered structures (low-temperature phases of Curie-Weiss and Hopfield models). PMID:27575082
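The standard naive mean-field inversion, and the failure mode described above, can be sketched as follows. The formula J_ij ≈ -(C^{-1})_ij (i ≠ j) is the usual nMF estimator; the two "clusters" here are simply a configuration set and its spin-flipped copy, a crude stand-in for the low-temperature states discussed in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, M = 8, 200_000
m_true = np.linspace(-0.5, 0.5, n)          # magnetizations of independent spins

# One "state": samples of n independent spins (true couplings are all zero)
s = np.where(rng.random((M, n)) < (1 + m_true) / 2, 1.0, -1.0)

# Naive mean-field (nMF) inversion: J_ij ~ -(C^{-1})_ij for i != j,
# with C the connected correlation matrix of the configurations.
def nmf_couplings(samples):
    J = -np.linalg.inv(np.cov(samples, rowvar=False))
    np.fill_diagonal(J, 0.0)
    return J

J_one = nmf_couplings(s)                    # single cluster: couplings near 0
J_mix = nmf_couplings(np.vstack([s, -s]))   # two clusters pooled: spurious J
J_per = (nmf_couplings(s) + nmf_couplings(-s)) / 2   # per-cluster remedy

print("one state   max|J| =", np.max(np.abs(J_one)))
print("pooled      max|J| =", np.max(np.abs(J_mix)))  # inflated by clustering
print("per-cluster max|J| =", np.max(np.abs(J_per)))
```

Pooling the two states adds the between-cluster term m_i m_j to the correlations, which nMF misreads as couplings; inferring within each cluster and combining removes it, which is the essence of the modification advocated above.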
NASA Astrophysics Data System (ADS)
Cui, Tiangang; Marzouk, Youssef; Willcox, Karen
2016-06-01
Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting-both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high dimensional state and parameters.
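For a linear-Gaussian toy problem the parameter-space part of this construction can be sketched directly: the likelihood-informed directions are the dominant eigenvectors of the (prior-preconditioned) data-misfit Hessian, and the posterior is approximated by truncating that Hessian to rank r while keeping the prior on the complement. The identity prior, scalar noise, and Gaussian-blur forward operator below are simplifying assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 100, 80                          # parameter and data dimensions
x = np.linspace(0, 1, n)
xd = np.linspace(0, 1, d)

# Smoothing forward operator: a classic ill-posed linear model
G = np.exp(-(xd[:, None] - x[None, :])**2 / (2 * 0.05**2))
G /= G.sum(axis=1, keepdims=True)
sigma = 0.01                            # noise std; prior covariance is I

m_true = np.sin(3 * np.pi * x)
y = G @ m_true + sigma * rng.standard_normal(d)

# Data-misfit Hessian (already prior-preconditioned, since the prior is I)
H = G.T @ G / sigma**2
lam, V = np.linalg.eigh(H)
lam, V = lam[::-1], V[:, ::-1]          # sort descending
print("eigenvalues 0, 15, 30:", lam[0], lam[15], lam[30])  # rapid decay

b = G.T @ y / sigma**2
mean_full = np.linalg.solve(H + np.eye(n), b)       # exact posterior mean

# Keep only the r likelihood-informed directions; prior on the complement
r = 30
Vr, lr = V[:, :r], lam[:r]
Pb = Vr @ (Vr.T @ b)
mean_red = Vr @ ((Vr.T @ b) / (1 + lr)) + (b - Pb)
rel = np.linalg.norm(mean_red - mean_full) / np.linalg.norm(mean_full)
print("relative error of reduced posterior mean:", rel)
```

The rapid eigenvalue decay is what makes a 30-dimensional subspace of a 100-dimensional parameter space sufficient here; the nonlinear, PDE-constrained setting of the paper replaces the exact eigendecomposition with a limited number of forward and adjoint solves.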
NASA Astrophysics Data System (ADS)
Barenboim, Gabriela; Park, Wan-Il
2016-08-01
We investigate the gravitational wave background from a first order phase transition in a matter-dominated universe, and show that it has a unique feature from which important information about the properties of the phase transition and thermal history of the universe can be easily extracted. Also, we discuss the inverse problem of such a gravitational wave background in view of the degeneracy among macroscopic parameters governing the signal.
NASA Astrophysics Data System (ADS)
Giudici, Mauro; Baratelli, Fulvia; Vassena, Chiara; Cattaneo, Laura
2014-05-01
Numerical modelling of the dynamic evolution of ice sheets and glaciers requires the solution of discrete equations which are based on physical principles (e.g. conservation of mass, linear momentum and energy) and phenomenological constitutive laws (e.g. Glen's and Fourier's laws). These equations must be accompanied by information on the forcing term and by initial and boundary conditions (IBCs) on ice velocity, stress and temperature; on the other hand, the constitutive laws involve many physical parameters, which possibly depend on the ice thermodynamic state. The proper forecast of the dynamics of ice sheets and glaciers (forward problem, FP) requires a precise knowledge of several quantities which appear in the IBCs, in the forcing terms and in the phenomenological laws and which cannot be easily measured at the study scale in the field. Therefore these quantities can be obtained through model calibration, i.e. by the solution of an inverse problem (IP). Roughly speaking, the IP aims at finding the optimal values of the model parameters that yield the best agreement of the model output with the field observations and data. The practical application of IPs is usually formulated as a generalised least squares approach, which can be cast in the framework of Bayesian inference. IPs are well developed in several areas of science and geophysics and several applications have also been proposed in glaciology. The objective of this paper is to provide a further step towards a thorough and rigorous theoretical framework in cryospheric studies. Although the IP is often claimed to be ill-posed, this is rigorously true for continuous domain models, whereas for numerical models, which require the solution of algebraic equations, the properties of the IP must be analysed with more care. First of all, it is necessary to clarify the role of experimental and monitoring data to determine the calibration targets and the values of the parameters that can be considered to be fixed
NASA Astrophysics Data System (ADS)
Dorn, O.; Lesselier, D.
2010-07-01
Inverse problems in electromagnetics have a long history and have stimulated exciting research over many decades. New applications and solution methods are still emerging, providing a rich source of challenging topics for further investigation. The purpose of this special issue is to combine descriptions of several such developments that have the potential to fuel new research, and to provide an overview of novel methods and applications for electromagnetic inverse problems. There have been several special sections published in Inverse Problems over the last decade addressing fully, or partly, electromagnetic inverse problems. Examples are: Electromagnetic imaging and inversion of the Earth's subsurface (Guest Editors: D Lesselier and T Habashy) October 2000 Testing inversion algorithms against experimental data (Guest Editors: K Belkebir and M Saillard) December 2001 Electromagnetic and ultrasonic nondestructive evaluation (Guest Editors: D Lesselier and J Bowler) December 2002 Electromagnetic characterization of buried obstacles (Guest Editors: D Lesselier and W C Chew) December 2004 Testing inversion algorithms against experimental data: inhomogeneous targets (Guest Editors: K Belkebir and M Saillard) December 2005 Testing inversion algorithms against experimental data: 3D targets (Guest Editors: A Litman and L Crocco) February 2009 In a certain sense, the current issue can be understood as a continuation of this series of special sections on electromagnetic inverse problems. On the other hand, its focus is intended to be more general than previous ones. Instead of trying to cover a well-defined, somewhat specialized research topic as completely as possible, this issue aims to show the broad range of techniques and applications that are relevant to electromagnetic imaging nowadays, which may serve as a source of inspiration and encouragement for all those entering this active and rapidly developing research area. Also, the
PREFACE: 6th International Conference on Inverse Problems in Engineering: Theory and Practice
NASA Astrophysics Data System (ADS)
Bonnet, Marc
2008-07-01
The 6th International Conference on Inverse Problems in Engineering: Theory and Practice (ICIPE 2008) belongs to a successful series of conferences held up to now following a three-year cycle. Previous conferences took place in Palm Coast, Florida, USA (1993), Le Croisic, France (1996), Port Ludlow, Washington, USA (1999), Angra dos Reis, Brazil (2002), and Cambridge, UK (2005). The conference has its roots in the informal seminars organized by Professor J V Beck at Michigan State University, which were initiated in 1987. The organization of this Conference, which took place in Dourdan (Paris) France, 15-19 June 2008, was made possible through a joint effort by four research departments from four different universities: LEMTA (Laboratoire de Mécanique Théorique et Appliquée, Nancy-Université) LMS (Laboratoire de Mécanique des Solides, Ecole Polytechnique, Paris) LMAC (Laboratoire de Mathématiques Appliquées, UTC Compiègne) LTN (Laboratoire de Thermocinétique, Université de Nantes) It received support from three organizations: SFT (Société Française de Thermique: French Heat Transfer Association) ACSM (Association Calcul de Structures et Simulation: Computational Structural Mechanics Association) GdR Ondes - CNRS (`Waves' Network, French National Center for Scientific Research) The objective of the conference was to provide the opportunity for interaction and cross-fertilization between designers of inverse methods and practitioners. The delegates came from very different fields, such as applied mathematics, heat transfer, solid mechanics, and tomography. Consequently the sessions were organised along mostly methodological topics in order to facilitate interaction among participants who might not meet otherwise. The present proceedings, published in the Journal of Physics: Conference Series, gather the four plenary invited lectures and the full-length versions of 103 presentations. The latter have been reviewed by the scientific committee (see
NASA Technical Reports Server (NTRS)
Sidi, Avram; Pennline, James A.
1999-01-01
In this paper we are concerned with high-accuracy quadrature method solutions of nonlinear Fredholm integral equations of the form y(x) = r(x) + ∫₀¹ g(x,t)F(t,y(t)) dt, 0 ≤ x ≤ 1, where the kernel function g(x,t) is continuous, but its partial derivatives have finite jump discontinuities across x = t. Such integral equations arise, e.g., when one applies Green's function techniques to nonlinear two-point boundary value problems of the form y''(x) = f(x,y(x)), 0 ≤ x ≤ 1, with y(0) = y₀ and y(1) = y₁, or other linear boundary conditions. A quadrature method that is especially suitable and that has been employed for such equations is one based on the trapezoidal rule, which has low accuracy. By analyzing the corresponding Euler-Maclaurin expansion, we derive suitable correction terms that we add to the trapezoidal rule, thus obtaining new numerical quadrature formulas of arbitrarily high accuracy that we also use in defining quadrature methods for the integral equations above. We prove an existence and uniqueness theorem for the quadrature method solutions, and show that their accuracy is the same as that of the underlying quadrature formula. The solution of the nonlinear systems resulting from the quadrature methods is achieved through successive approximations whose convergence is also proved. The results are demonstrated with numerical examples.
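The plain trapezoidal-rule quadrature method with successive approximations (without the Euler-Maclaurin correction terms that are the paper's contribution) can be sketched on a manufactured nonlinear example. The kernel g(x,t) = min(x,t)(1 - max(x,t)) is of the Green's-function type described above, with a t-derivative jump across t = x, and F(t,y) = sin y is an illustrative nonlinearity:

```python
import numpy as np

N = 101
x = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / (N - 1))
w[0] = w[-1] = 0.5 / (N - 1)            # trapezoidal weights

# Green's-function-type kernel: continuous, dg/dt jumps across t = x
g = np.minimum(x[:, None], x[None, :]) * (1.0 - np.maximum(x[:, None], x[None, :]))

# Manufactured problem: pick y_true, then define r so the discrete
# equation y_i = r_i + sum_j w_j g(x_i, t_j) F(t_j, y_j) holds exactly.
y_true = np.sin(np.pi * x)
F = np.sin                              # nonlinear F(t, y) = sin(y)
r = y_true - (g * w) @ F(y_true)

# Successive approximations: a contraction here, since
# max_x integral of g(x, t) dt = 1/8 and sin is 1-Lipschitz.
y = np.zeros(N)
for _ in range(60):
    y = r + (g * w) @ F(y)
err = np.max(np.abs(y - y_true))
print("max error at the nodes:", err)
```

The iteration converges geometrically to the quadrature-method solution; the accuracy of that solution relative to the continuous problem is what the corrected quadrature formulas of the paper improve.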
Local sensitivity analysis for inverse problems solved by singular value decomposition
Hill, M.C.; Nolan, B.T.
2010-01-01
Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and/or parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA’s Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by
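The two process-model parameter statistics can be computed directly from a weighted Jacobian. The definitions below follow the forms commonly given for CSS (root-mean-square of dimensionless scaled sensitivities) and PCC (correlations from the fitted-parameter covariance matrix); the Jacobian is synthetic, with one nearly interdependent parameter pair to show |PCC| approaching 1.00:

```python
import numpy as np

def css_pcc(J, b, w):
    """Composite scaled sensitivities and parameter correlation coefficients.

    J: (ND, NP) Jacobian dy_i/db_j; b: parameter values used for scaling;
    w: observation weights. Definitions as commonly stated for these
    statistics (an assumption; exact conventions vary between codes).
    """
    ND = J.shape[0]
    dss = J * b[None, :] * np.sqrt(w)[:, None]   # dimensionless scaled sens.
    css = np.sqrt((dss**2).sum(axis=0) / ND)
    cov = np.linalg.inv(J.T @ (w[:, None] * J))  # parameter covariance (up to s^2)
    d = np.sqrt(np.diag(cov))
    pcc = cov / np.outer(d, d)
    return css, pcc

rng = np.random.default_rng(4)
J = rng.standard_normal((50, 3))
J[:, 2] = 2.0 * J[:, 1] + 0.01 * rng.standard_normal(50)  # interdependent pair
css, pcc = css_pcc(J, b=np.ones(3), w=np.ones(50))
print("CSS:", np.round(css, 3))
print("PCC:\n", np.round(pcc, 3))   # |PCC| near 1.00 for the pair (1, 2)
```

A |PCC| near 1.00 for a pair, as here, signals that the data constrain only a combination of the two parameters, which is exactly the situation SVD parameters are designed to handle.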
HT2DINV: A 2D forward and inverse code for steady-state and transient hydraulic tomography problems
NASA Astrophysics Data System (ADS)
Soueid Ahmed, A.; Jardani, A.; Revil, A.; Dupont, J. P.
2015-12-01
Hydraulic tomography is a technique used to characterize the spatial heterogeneities of storativity and transmissivity fields. The responses of an aquifer to a source of hydraulic stimulations are used to recover the features of the estimated fields using inverse techniques. We developed a 2D free source Matlab package for performing hydraulic tomography analysis in steady state and transient regimes. The package uses the finite elements method to solve the groundwater flow equation for simple or complex geometries accounting for the anisotropy of the material properties. The inverse problem is based on implementing the geostatistical quasi-linear approach of Kitanidis combined with the adjoint-state method to compute the required sensitivity matrices. For underdetermined inverse problems, the adjoint-state method provides a faster and more accurate approach for the evaluation of sensitivity matrices compared with the finite differences method. Our methodology is organized in a way that permits the end-user to activate parallel computing in order to reduce the computational burden. Three case studies are investigated demonstrating the robustness and efficiency of our approach for inverting hydraulic parameters.
Lopez, C.; Koski, J.A.; Razani, A.
2000-01-06
A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with similar dimensions as a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that was then used as input for an inverse heat conduction code. Four different problems were considered including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360°, 180°, and 90° sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux of all four cases. The error analysis was performed by comparing the results from SODDIT and the heat flux calculated based on the temperature results obtained from P/Thermal. Results showed an increase in error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5% whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360°, 180°, and 90° cases, respectively.
Analysis of forward and inverse problems in chemical dynamics and spectroscopy
Rabitz, H.
1992-01-01
The forward aspects of the research were concerned with mapping the relation between input potential surface structure and laboratory dynamical and kinetic observables. The research on inverse analysis complemented the forward analysis studies; the objective was to develop algorithms for inversion of quality laboratory data back to the underlying potential surfaces. 24 items of research in molecular dynamics and chemical kinetics are reported. The following collisions/reactions were studied: H + H₂, He-H₂, He-Xe/C(0001), thermal explosions, CO/H₂/O₂, H₂ + HD, H⁺ + F(²P₁/₂), He⁺ + Ne(2p⁶), Na + I, F + H₂, CO-H₂-O₂.
NASA Astrophysics Data System (ADS)
Jiang, Jun
This dissertation summarizes a procedure to design blades with finite thickness in three dimensions. In this inverse method, the prescribed quantities are the blade pressure loading shape, the inlet and outlet spanwise distributions of swirl, and the blade thickness distributions, and the primary calculated quantity is the blade geometry. The method is formulated in the fully inverse mode for design of three-dimensional blades in rotational and compressible flows whereby the blade shape is determined iteratively using the flow tangency condition along the blade surfaces. This technique is demonstrated here in the first instance for the design of two-dimensional cascaded and three-dimensional blades with finite thickness in inviscid and incompressible flows. In addition, the incoming flow is assumed irrotational so that the only vorticity present in the flowfield is the blade bound and shed vorticities. Design calculations presented for two-dimensional cascaded blades include an inlet guide vane, an impulse turbine blade, and a compressor blade. A consistency check is carried out for these cascaded blade design calculations using a panel analysis method and the analytical solution for the Gostelow profile. Free-vortex design results are also shown for fully three-dimensional blades with finite thickness such as an inlet guide vane, a rotor of axial-flow pumps, and a high-flow-coefficient pump inducer with design parameters typically found in industrial applications. These three-dimensional inverse design results are verified using Adamczyk's inviscid code.
NASA Astrophysics Data System (ADS)
Nielsen, Bjørn Fredrik; Lysaker, Marius; Tveito, Aslak
2007-01-01
The electrical activity in the heart is modeled by a complex, nonlinear, fully coupled system of differential equations. Several scientists have studied how this model, referred to as the bidomain model, can be modified to incorporate the effect of heart infarctions on simulated ECG (electrocardiogram) recordings. We are concerned with the associated inverse problem: how can we use ECG recordings and mathematical models to identify the position, size and shape of heart infarctions? Due to the extreme CPU efforts needed to solve the bidomain equations, this model, in its full complexity, is not well-suited for problems of this kind. In this paper we show how biological knowledge about the resting potential in the heart and level set techniques can be combined to derive a suitable stationary model, expressed in terms of an elliptic PDE, for such applications. This approach leads to a nonlinear ill-posed minimization problem, which we propose to regularize and solve with a simple iterative scheme. Finally, our theoretical findings are illuminated through a series of computer simulations for an experimental setup involving a realistic heart-in-torso geometry. More specifically, experiments with synthetic ECG recordings, produced by solving the bidomain model, indicate that our method manages to identify the physical characteristics of the ischemic region(s) in the heart. Furthermore, the ill-posed nature of this inverse problem is explored, i.e., several quantitative issues of our scheme are investigated.
NASA Technical Reports Server (NTRS)
Devasia, Santosh; Bayo, Eduardo
1993-01-01
This paper addresses the problem of inverse dynamics for articulated flexible structures with both lumped and distributed actuators. This problem arises, for example, in the combined vibration minimization and trajectory control of space robots and structures. A new inverse dynamics scheme for computing the nominal lumped and distributed inputs for tracking a prescribed trajectory is given.
Akcelik, Volkan; Flath, Pearl; Ghattas, Omar; Hill, Judith C; Van Bloemen Waanders, Bart; Wilcox, Lucas
2011-01-01
We consider the problem of estimating the uncertainty in large-scale linear statistical inverse problems with high-dimensional parameter spaces within the framework of Bayesian inference. When the noise and prior probability densities are Gaussian, the solution to the inverse problem is also Gaussian, and is thus characterized by the mean and covariance matrix of the posterior probability density. Unfortunately, explicitly computing the posterior covariance matrix requires as many forward solutions as there are parameters, and is thus prohibitive when the forward problem is expensive and the parameter dimension is large. However, for many ill-posed inverse problems, the Hessian matrix of the data misfit term has a spectrum that collapses rapidly to zero. We present a fast method for computation of an approximation to the posterior covariance that exploits the low-rank structure of the preconditioned (by the prior covariance) Hessian of the data misfit. Analysis of an infinite-dimensional model convection-diffusion problem, and numerical experiments on large-scale 3D convection-diffusion inverse problems with up to 1.5 million parameters, demonstrate that the number of forward PDE solves required for an accurate low-rank approximation is independent of the problem dimension. This permits scalable estimation of the uncertainty in large-scale ill-posed linear inverse problems at a small multiple (independent of the problem dimension) of the cost of solving the forward problem.
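The low-rank posterior-covariance approximation can be sketched for the simplest case of an identity prior (so no prior-preconditioning factors appear; an assumption for brevity). With the misfit Hessian's eigendecomposition H = V Λ Vᵀ, the exact posterior covariance (I + H)^{-1} equals I - V diag(λ/(1+λ)) Vᵀ, and truncating after r terms leaves an error equal to the first discarded term λ_r/(1+λ_r), which is tiny when the spectrum collapses rapidly:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]    # synthetic eigenvectors
lam = 100.0 * 0.5 ** np.arange(n)       # rapidly collapsing misfit spectrum
H = (Q * lam) @ Q.T                     # data-misfit Hessian (prior = I assumed)

post_exact = np.linalg.inv(np.eye(n) + H)           # exact posterior covariance

r = 30                                  # retained rank
Vr, lr = Q[:, :r], lam[:r]
# Low-rank update: (I + H)^{-1} ~ I - Vr diag(lam/(1+lam)) Vr^T
post_lr = np.eye(n) - (Vr * (lr / (1 + lr))) @ Vr.T

err = np.linalg.norm(post_lr - post_exact, 2)
print("rank-30 error:", err, " first discarded term:", lam[r] / (1 + lam[r]))
```

In the large-scale setting the dominant eigenpairs are of course not computed by a dense eigendecomposition but by matrix-free methods, each Hessian-vector product costing one forward and one adjoint solve, which is why the cost stays at a small multiple of a forward solve.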
A linear regularization scheme for inverse problems with unbounded linear operators on Banach spaces
NASA Astrophysics Data System (ADS)
Kohr, Holger
2013-06-01
This paper extends the linear regularization scheme known as the approximate inverse to unbounded linear operators on Banach spaces. The principle of feature reconstruction is adapted from bounded operators to the unbounded scenario and, in addition, a new situation is examined where the data need to be pre-processed to fit into the mathematical model. In all these cases, invariance and regularization properties are surveyed and established for the example of fractional differentiation. Numerical results confirm the derived characteristics of the presented methods.
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.; Kuvshinov, Alexey V.
2016-05-01
This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first apply the state-of-the-art stochastic optimization algorithm known as the Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions are then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring the model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns, and that the algorithm is embarrassingly parallel. We formulate the problem using the generalized Gaussian distribution, which enables us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. The methodology is tested on individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
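A greatly simplified evolution-strategy sketch (rank-mu covariance update only, omitting the evolution paths and step-size adaptation of the full CMAES the authors use) illustrates how low-misfit regions of a model space can be located; all constants here are illustrative:

```python
import numpy as np

def simple_cmaes(f, x0, sigma=0.5, iters=60, lam=12, rng=np.random.default_rng(0)):
    """Greatly simplified CMA-ES (rank-mu covariance update only, no evolution
    paths) for locating low-misfit regions; f is the misfit function."""
    n = len(x0)
    mu = lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                      # positive, normalized weights
    m, C = np.asarray(x0, float), np.eye(n)
    for _ in range(iters):
        A = np.linalg.cholesky(C + 1e-12 * np.eye(n))
        Z = rng.standard_normal((lam, n))
        X = m + sigma * Z @ A.T                       # sample candidate models
        order = np.argsort([f(x) for x in X])[:mu]    # select the mu best
        Y = (X[order] - m) / sigma
        m = m + sigma * (w @ Y)                       # move mean to weighted elite mean
        C = 0.7 * C + 0.3 * sum(wi * np.outer(y, y) for wi, y in zip(w, Y))
        sigma *= 0.97                                 # crude step-size decay
    return m
```

In the methodology above, the models visited in the low-misfit region during such runs would then be resampled to build the ensemble of equivalent models.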
Arnold, Alexander; Bruhns, Otto T; Mosler, Jörn
2011-07-21
A novel finite element formulation suitable for computing efficiently the stiffness distribution in soft biological tissue is presented in this paper. For that purpose, the inverse problem of finite strain hyperelasticity is considered and solved iteratively. In line with Arnold et al (2010 Phys. Med. Biol. 55 2035), the computing time is effectively reduced by using adaptive finite element methods. In sharp contrast to previous approaches, the novel mesh adaption relies on an r-adaption (re-allocation of the nodes within the finite element triangulation). This method allows the detection of material interfaces between healthy and diseased tissue in a very effective manner. The evolution of the nodal positions is canonically driven by the same minimization principle characterizing the inverse problem of hyperelasticity. Consequently, the proposed mesh adaption is variationally consistent. Furthermore, it guarantees that the quality of the numerical solution is improved. Since the proposed r-adaption requires only a relatively coarse triangulation for detecting material interfaces, the underlying finite element spaces are usually not rich enough for predicting the deformation field sufficiently accurately (the forward problem). For this reason, the novel variational r-refinement is combined with the variational h-adaption (Arnold et al 2010) to obtain a variational hr-refinement algorithm. The resulting approach captures material interfaces well (by using r-adaption) and predicts a deformation field in good agreement with that observed experimentally (by using h-adaption).
Baghram, Shant; Rahvar, Sohrab
2009-12-15
We introduce, in the f(R) gravity-Palatini formalism, the method of the inverse problem to extract the action from the expansion history of the Universe. First, we use an ansatz for the scale factor and apply the inverse method to derive an appropriate action for the gravity. In the second step we use the supernova type Ia (SNIa) data set from the Union sample and obtain a smoothed function for the Hubble parameter up to redshift 1.7. We apply the smoothed Hubble parameter in the inverse approach and reconstruct the corresponding action in f(R) gravity. Next, we investigate the viability of the reconstruction method: performing a Monte Carlo simulation, we generate synthetic SNIa data with the quality of the Union sample and show that roughly more than 1500 SNIa data points are needed to reconstruct the correct action. Finally, with enough SNIa data, we propose two diagnostics to distinguish between the ΛCDM model and an alternative theory for the acceleration of the Universe.
NASA Astrophysics Data System (ADS)
Panchenko, Yurii N.; De Maré, George R.
2002-06-01
The peculiarities characterising the traditional approach to vibrational spectroscopy calculations and the approach based on scaled quantum-mechanical force fields are considered. Some results on the determination of the equilibrium geometry of benzene, both in the harmonic approximation and in the approximation taking into account the kinematic and dynamic anharmonicity corrections, obtained by solving the inverse vibrational problem, are discussed. Using the quantum-mechanical force fields of the C2F6 molecule, calculated at three different theoretical levels, as an example, the results of determining scale factors by different mathematical techniques are compared.
NASA Astrophysics Data System (ADS)
Yang, Ying; Wei, Guangsheng
2016-09-01
The inverse spectral and scattering problems for the radial Schrödinger equation on the half-line [0, ∞) are considered for a real-valued, integrable potential having a finite first moment. It is shown that the potential is uniquely determined in terms of the mixed spectral or scattering data, which consist of the partial knowledge of the potential given on the finite interval [0, ɛ] for some ɛ > 0 and either the amplitude or the phase (equivalent to the scattering function) of the Jost function, without bound state data.
Chemyakin, Eduard; Burton, Sharon; Kolgotin, Alexei; Müller, Detlef; Hostetler, Chris; Ferrare, Richard
2016-03-20
We present an investigation of some important mathematical and numerical features related to the retrieval of microphysical parameters [complex refractive index, single-scattering albedo, effective radius, total number, surface area, and volume concentrations] of ambient aerosol particles using multiwavelength Raman or high-spectral-resolution lidar. Using simple examples, we show that the non-uniqueness of the inverse solution is the major source of the retrieval difficulties. Some theoretically possible ways of partially compensating for these difficulties are offered. For instance, an increase in the variety of input data via combination of lidar and certain passive remote sensing instruments will help to reduce the error in estimating the complex refractive index. We also demonstrate a significant interference between Aitken and accumulation aerosol modes in our inversion algorithm, and confirm that the solutions can be better constrained by limiting the particle radii. Applying a combination of an analytical approach and numerical simulations, we explain the statistical behavior of the microphysical size parameters. We reveal and clarify why the total surface area concentration is consistent even in the presence of non-unique solution sets and is, on average, the most stable parameter to estimate, as long as at least one extinction optical coefficient is employed. We find that for selected particle size distributions, the total surface area and volume concentrations can be quickly retrieved with fair precision using only a single extinction coefficient in a simple arithmetical relationship. PMID:27140552
NASA Astrophysics Data System (ADS)
Filippi, Anthony Matthew
For complex systems, sufficient a priori knowledge is often lacking about the mathematical or empirical relationship between cause and effect or between inputs and outputs of a given system. Automated machine learning may offer a useful solution in such cases. Coastal marine optical environments represent such a case, as the optical remote sensing inverse problem remains largely unsolved. A self-organizing, cybernetic mathematical modeling approach known as the group method of data handling (GMDH), a type of statistical learning network (SLN), was used to generate explicit spectral inversion models for optically shallow coastal waters. Optically shallow water light fields represent a particularly difficult challenge in oceanographic remote sensing. Several algorithm-input data treatment combinations were utilized in multiple experiments to automatically generate inverse solutions for various inherent optical property (IOP), bottom optical property (BOP), constituent concentration, and bottom depth estimations. The objective was to identify the optimal remote-sensing reflectance Rrs(lambda) inversion algorithm. GMDH also has the potential for inductive discovery of physical hydro-optical laws. Simulated data were used to develop generalized, quasi-universal relationships. The Hydrolight numerical forward model, based on radiative transfer theory, was used to compute simulated above-water remote-sensing reflectance Rrs(lambda) pseudodata, matching the spectral channels and resolution of the experimental Naval Research Laboratory Ocean PHILLS (Portable Hyperspectral Imager for Low-Light Spectroscopy) sensor. The input-output pairs were used for GMDH and artificial neural network (ANN) model development, the latter of which was used as a baseline, or control, algorithm. Both types of models were applied to in situ and aircraft data. Also, in situ spectroradiometer-derived Rrs(lambda) were used as input to an optimization-based inversion procedure. Target variables
NASA Astrophysics Data System (ADS)
Wyatt, Philip
2009-03-01
The electromagnetic inverse scattering problem suggests that if a homogeneous, non-absorbing object is illuminated with a monochromatic light source and the far-field scattered light intensity is known at sufficiently many scattering angles, then, in principle, one could derive the dielectric structure of the scattering object. In general, this is an ill-posed problem and methods must be developed to regularize the search for unique solutions. An iterative procedure often begins with a model of the scattering object, solves the forward scattering problem using this model, and then compares the calculated results with the measured values. Key to any such solution is instrumentation capable of providing adequate data. To this end, the development of the first laser-based absolute light scattering photometers is described, together with their continuing evolution and some of the remarkable discoveries made with them. For particles much smaller than the wavelength of the incident light (e.g. macromolecules), the inverse scattering problems are easily solved. Among the many results derived with this instrumentation are the in situ structure of bacterial cells, new drug delivery mechanisms, the development of new vaccines and other biologicals, characterization of wines, the possibility of custom chemotherapy, development of new polymeric materials, identification of protein crystallization conditions, and a variety of discoveries concerning protein interactions. A new form of the problem is described to address bioterrorist threats. Over the many years of development and refinement, one element stands out as essential for the successes that followed: the R and D teams were always directed and carried out by physics-trained theorists and experimentalists. Fourteen Ph.D. physicists each made his or her unique contribution to the development of these evolving instruments and the interpretation of their results.
On the numerical solution of a three-dimensional inverse medium scattering problem
NASA Astrophysics Data System (ADS)
Hohage, Thorsten
2001-12-01
We examine the scattering of time-harmonic acoustic waves in inhomogeneous media. The problem is to recover a spatially varying refractive index in a three-dimensional medium from far-field measurements of scattered waves corresponding to incoming waves from all directions. This problem is exponentially ill-posed and large-scale, since a solution of the direct problem corresponds to solving a partial differential equation in R3 for each incident wave. We construct a preconditioner for the conjugate gradient method applied to the normal equation to solve the regularized linearized operator equation in each Newton step. This reduces the number of operator evaluations dramatically compared to standard regularized Newton methods. Our method can also be applied effectively to other exponentially ill-posed problems, for example, in impedance tomography, heat conduction and obstacle scattering. To solve the direct problems, we use an improved fast solver for the Lippmann-Schwinger equation suggested by Vainikko.
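The inner Newton step described above can be illustrated in miniature. This sketch (not Hohage's preconditioned solver) applies plain conjugate gradients to the Tikhonov-regularized normal equation, touching the linearized operator only through matrix-vector products, as one must when each application of the operator is itself a PDE solve:

```python
import numpy as np

def cg_normal_eq(A_mv, AT_mv, b, alpha, n, tol=1e-10, maxit=500):
    """Conjugate gradients on the regularized normal equation
    (A^T A + alpha I) x = A^T b, using only matvecs with A and A^T."""
    x = np.zeros(n)
    r = AT_mv(b)                      # initial residual of the normal equation
    p, rs = r.copy(), r @ r
    for _ in range(maxit):
        Ap = AT_mv(A_mv(p)) + alpha * p
        a = rs / (p @ Ap)
        x += a * p
        r -= a * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

In the exponentially ill-posed setting above, a good preconditioner is what keeps the number of such matvecs (and hence PDE solves) small; the sketch omits that ingredient.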
Farina, Dmytro; Jiang, Y; Dössel, O
2009-12-01
The distributions of transmembrane voltage (TMV) within the cardiac tissue are linearly connected with the patient's body surface potential maps (BSPMs) at every time instant. The matrix describing the relation between the respective distributions is referred to as the transfer matrix. This matrix can be employed to carry out forward calculations in order to find the BSPM for any given distribution of TMV inside the heart. Its inverse can be used to reconstruct the cardiac activity non-invasively, which can be an important diagnostic tool in clinical practice. The computation of this matrix using the finite element method can be quite time-consuming. In this work, a method is proposed that speeds up this process by computing an approximate transfer matrix instead of the precise one. The method is tested on three realistic anatomical models of real patients. It is shown that the computation time can be reduced by 50% without loss of accuracy.
NASA Astrophysics Data System (ADS)
Ruggeri, Paolo; Irving, James; Holliger, Klaus
2015-08-01
We critically examine the performance of sequential geostatistical resampling (SGR) as a model proposal mechanism for Bayesian Markov-chain-Monte-Carlo (MCMC) solutions to near-surface geophysical inverse problems. Focusing on a series of simple yet realistic synthetic crosshole georadar tomographic examples characterized by different numbers of data, levels of data error and degrees of model parameter spatial correlation, we investigate the efficiency of three different resampling strategies with regard to their ability to generate statistically independent realizations from the Bayesian posterior distribution. Quite importantly, our results show that, no matter what resampling strategy is employed, many of the examined test cases require an unreasonably high number of forward model runs to produce independent posterior samples, meaning that the SGR approach as currently implemented will not be computationally feasible for a wide range of problems. Although use of a novel gradual-deformation-based proposal method can help to alleviate these issues, it does not offer a full solution. Further, we find that the nature of the SGR proposals strongly influences MCMC performance; however, no clear rule exists as to what set of inversion parameters and/or overall proposal acceptance rate allows for the most efficient implementation. We conclude that although the SGR methodology is highly attractive, as it allows for the consideration of complex geostatistical priors as well as conditioning to hard and soft data, further developments are necessary in the context of novel or hybrid MCMC approaches for it to be considered generally suitable for near-surface geophysical inversions.
NASA Astrophysics Data System (ADS)
Fernández Martínez, Juan L.; García Gonzalo, Esperanza; Fernández Álvarez, José P.; Kuzma, Heidi A.; Menéndez Pérez, César O.
2010-05-01
PSO is an optimization technique inspired by the social behavior of individuals in nature (swarms) that has been successfully used in many different engineering fields. In addition, the PSO algorithm can be physically interpreted as a stochastic damped mass-spring system. This analogy has served to introduce the PSO continuous model and to deduce a whole family of PSO algorithms using different finite-difference schemes. These algorithms are characterized in terms of convergence by their respective first- and second-order stability regions. The performance of these new algorithms is first checked using synthetic functions showing a degree of ill-posedness similar to that found in many geophysical inverse problems, having their global minimum located on a very narrow flat valley or surrounded by multiple local minima. Finally, we present the application of these PSO algorithms to the analysis and solution of a VES inverse problem associated with a seawater intrusion in a coastal aquifer in southern Spain. The PSO family members are successfully compared to other well-known global optimization algorithms (binary genetic algorithms and simulated annealing) in terms of their respective convergence curves and the seawater intrusion depth posterior histograms.
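A minimal sketch of the standard PSO iteration may help make the mass-spring analogy concrete; the inertia and acceleration constants below are common textbook values, not necessarily those whose stability regions the paper analyzes:

```python
import numpy as np

def pso(f, bounds, n_part=30, iters=100, w=0.72, c1=1.49, c2=1.49,
        rng=np.random.default_rng(0)):
    """Plain particle swarm optimizer; w, c1, c2 are the inertia and
    acceleration constants that control the stability of the iteration."""
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    d = lo.size
    x = rng.uniform(lo, hi, (n_part, d))
    v = np.zeros((n_part, d))
    pbest, pval = x.copy(), np.array([f(xi) for xi in x])
    g = pbest[np.argmin(pval)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n_part, d)), rng.random((n_part, d))
        # damped mass-spring form: inertia + springs toward personal/global bests
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(xi) for xi in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)].copy()
    return g, pval.min()
```

The velocity update is exactly a discretized damped oscillator: `w` damps the motion while the two random "spring" terms pull each particle toward its personal and the global best positions, which is the continuous-model view the paper builds on.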
NASA Astrophysics Data System (ADS)
Montoya-Martínez, Jair; Artés-Rodríguez, Antonio; Pontil, Massimiliano; Hansen, Lars Kai
2014-12-01
We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy electroencephalographic (EEG) measurements, commonly known as the EEG inverse problem. We propose a new method to induce neurophysiologically meaningful solutions, which takes into account the smoothness, structured sparsity, and low rank of the BES matrix. The method is based on the factorization of the BES matrix as a product of a sparse coding matrix and a dense latent source matrix. The sparse-low-rank structure is enforced by minimizing a regularized functional that includes the ℓ2,1-norm of the coding matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth, nonconvex minimization problem. We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios, the performance of our method with that of the Group Lasso and Trace Norm regularizers when they are applied directly to the target matrix.
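The row-wise ℓ2,1 proximal operator and an alternating scheme of the kind described above can be sketched as follows; this is an illustrative simplification (one proximal-gradient step on the coding matrix per outer iteration), not the authors' algorithm, and the lead-field matrix L and all dimensions are hypothetical:

```python
import numpy as np

def l21_prox(C, t):
    """Row-wise soft-thresholding: the proximal operator of t * ||C||_{2,1}."""
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    scale = np.maximum(1 - t / np.maximum(norms, 1e-12), 0)
    return C * scale

def factor_fit(Y, L, k, lam1=0.01, lam2=0.01, iters=300, rng=np.random.default_rng(0)):
    """Alternating minimization for
    min ||Y - L C B||_F^2 + lam1 ||C||_{2,1} + lam2 ||B||_F^2,
    with C a sparse coding matrix and B a dense latent source matrix."""
    n = L.shape[1]
    C = rng.standard_normal((n, k)) * 0.1
    for _ in range(iters):
        # B-step: ridge regression given C (closed form)
        A = L @ C
        B = np.linalg.solve(A.T @ A + lam2 * np.eye(k), A.T @ Y)
        # C-step: one proximal-gradient step on the smooth part
        G = L.T @ (L @ C @ B - Y) @ B.T
        step = 1.0 / (np.linalg.norm(L, 2)**2 * np.linalg.norm(B, 2)**2 + 1e-12)
        C = l21_prox(C - step * G, step * lam1)
    return C, B
```

The ℓ2,1 penalty zeroes entire rows of C, which is what encodes structured sparsity over source locations, while the Frobenius penalty on B keeps the latent time courses dense and smooth in norm.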
Application of Dynamic Logic Algorithm to Inverse Scattering Problems Related to Plasma Diagnostics
NASA Astrophysics Data System (ADS)
Perlovsky, L.; Deming, R. W.; Sotnikov, V.
2010-11-01
In plasma diagnostics, scattering of electromagnetic waves is widely used for identification of density and wave field perturbations. In the present work we use a powerful mathematical approach, dynamic logic (DL), to identify the spectra of scattered electromagnetic (EM) waves produced by the interaction of the incident EM wave with a Langmuir soliton in the presence of noise. The problem is especially difficult since the spectral amplitudes of the noise pattern are comparable with the amplitudes of the scattered waves. In the past, DL has been applied to a number of complex problems in artificial intelligence, pattern recognition, and signal processing, resulting in revolutionary improvements. Here we demonstrate its application to plasma diagnostic problems. Reference: Perlovsky, L. I., 2001. Neural Networks and Intellect: Using Model-Based Concepts. Oxford University Press, New York, NY.
NASA Astrophysics Data System (ADS)
Cordier, G.; Choi, J.; Raguin, L. G.
2008-11-01
Skin microcirculation plays an important role in diseases such as chronic venous insufficiency and diabetes. Magnetic resonance imaging (MRI) can provide quantitative information with a better penetration depth than other noninvasive methods, such as laser Doppler flowmetry or optical coherence tomography. Moreover, successful MRI skin studies have recently been reported. In this article, we investigate three potential inverse models to quantify skin microcirculation using diffusion-weighted MRI (DWI), also known as q-space MRI. The model parameters are estimated based on nonlinear least-squares (NLS). For each of the three models, an optimal DWI sampling scheme is proposed based on D-optimality in order to minimize the size of the confidence region of the NLS estimates and thus the effect of the experimental noise inherent to DWI. The resulting covariance matrices of the NLS estimates are predicted by asymptotic normality and compared to the ones computed by Monte-Carlo simulations. Our numerical results demonstrate the effectiveness of the proposed models and corresponding DWI sampling schemes as compared to conventional approaches.
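The comparison of asymptotically predicted and Monte-Carlo NLS covariances mentioned above can be illustrated with a hypothetical mono-exponential decay model (the paper's three microcirculation models are more elaborate); the model form, q-range, and noise level below are all assumptions made for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical mono-exponential DWI decay S(q) = S0 * exp(-q^2 * D)
def model(q, S0, D):
    return S0 * np.exp(-q**2 * D)

def jac(q, S0, D):
    """Jacobian of the model signal w.r.t. (S0, D), one row per q-sample."""
    e = np.exp(-q**2 * D)
    return np.column_stack([e, -S0 * q**2 * e])

q = np.linspace(0.0, 2.0, 12)          # a candidate q-space sampling scheme
theta = np.array([1.0, 0.8])           # true (S0, D)
sigma = 0.01                           # measurement noise level

# Asymptotic-normality prediction of the NLS covariance: sigma^2 (J^T J)^{-1}
J = jac(q, *theta)
cov_pred = sigma**2 * np.linalg.inv(J.T @ J)

# Monte-Carlo estimate of the same covariance from repeated noisy fits
rng = np.random.default_rng(0)
est = []
for _ in range(400):
    y = model(q, *theta) + sigma * rng.standard_normal(q.size)
    p, _ = curve_fit(model, q, y, p0=theta)
    est.append(p)
cov_mc = np.cov(np.array(est).T)
```

A D-optimal design in this framework would choose the q-samples that maximize det(JᵀJ), shrinking the volume of the confidence ellipsoid that `cov_pred` describes.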
A unified framework for approximation in inverse problems for distributed parameter systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.
1988-01-01
A theoretical framework is presented that can be used to treat approximation techniques for very general classes of parameter estimation problems involving distributed systems that are either first or second order in time. Using the approach developed, one can obtain both convergence and stability (continuous dependence of parameter estimates with respect to the observations) under very weak regularity and compactness assumptions on the set of admissible parameters. This unified theory can be used for many problems found in the recent literature and in many cases offers significant improvements to existing results.
Semenov, Alexander; Zaikin, Oleg
2016-01-01
In this paper we propose an approach for constructing partitionings of hard variants of the Boolean satisfiability problem (SAT). Such partitionings can be used for solving the corresponding SAT instances in parallel. For the same SAT instance one can construct different partitionings, each of which is a set of simplified versions of the original SAT instance. The effectiveness of an arbitrary partitioning is determined by the total time of solving all SAT instances from it. We suggest an approach, based on the Monte Carlo method, for estimating the processing time of an arbitrary partitioning. With each partitioning we associate a point in a special finite search space; the estimated effectiveness of the partitioning is the value of a predictive function at the corresponding point of this space. The search for an effective partitioning can thus be formulated as optimization of the predictive function. We use metaheuristic algorithms (simulated annealing and tabu search) to move from point to point in the search space. In our computational experiments we found partitionings for SAT instances encoding problems of inversion of some cryptographic functions. Several of these SAT instances with realistic predicted solving time were successfully solved on a computing cluster and in the volunteer computing project SAT@home. The solving time agrees well with the estimates obtained by the proposed method. PMID:27190753
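The Monte Carlo estimate of a partitioning's total solving time can be sketched in a few lines; here `solve_time` stands in for actually running a SAT solver on one simplified instance and is purely illustrative:

```python
import random

def estimate_partitioning_time(instances, solve_time, n_samples=100,
                               rng=random.Random(0)):
    """Monte-Carlo estimate of the total solving time of a SAT partitioning:
    sample a subset of the simplified instances, average their solve times,
    and scale to the whole partitioning (the predictive-function value)."""
    sample = rng.sample(instances, min(n_samples, len(instances)))
    mean_t = sum(solve_time(i) for i in sample) / len(sample)
    return mean_t * len(instances)
```

A metaheuristic (simulated annealing, tabu search) would then minimize this estimate over candidate partitionings rather than solving every instance of every candidate.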
Turbomachine blading with splitter blades designed by solving the inverse flow field problem
NASA Astrophysics Data System (ADS)
Luu, T. S.; Viney, B.; Bencherif, L.
1992-04-01
The paper presents an inverse method for turbomachine blading design in incompressible inviscid flow, intended to avoid cavitation, and gives a new approach to the boundary conditions to be imposed in relation to the bound vorticity distribution on the blades. Treating first the 2D cascade design, it shows how the blade must be generated with the given thickness distribution and loaded in order to obtain the desired outlet flow angle. The 3D design is analysed by the two-step S2-S1 approach proposed by Wu [1]. For the meridian flow (S2 approach), the blade thickness is taken into account by modifying the metric tensor in the continuity equation. The governing equation is provided by the hub-to-shroud equilibrium condition, and the meridian stream function is chosen to define the flow field. This step leads to the determination of axisymmetric stream sheets as well as the approximate camber surface of the blades. In the second step, the blade-to-blade flow (S1 approach) is analysed. The governing equation is deduced from the momentum equation, which implies that the vorticity of the absolute velocity must be tangential to the stream sheet. The bound vorticity distribution must be the same as in the S2 approach, and the residual flux crossing over the blade must be conservative (transpiration model). These two relations constitute the boundary conditions for the S1 flow. The detection of this residual flux, due to the normal component of the relative velocity on the blade surface, leads to the rectification of the camber surface. The optimized design of the blading of a centrifugal impeller with splitter blades is presented.
NASA Astrophysics Data System (ADS)
Elizondo, D.; Cappelaere, B.; Faure, Ch.
2002-04-01
Emerging tools for automatic differentiation (AD) of computer programs should be of great benefit for the implementation of many derivative-based numerical methods such as those used for inverse modeling. The Odyssée software, one such tool for Fortran 77 codes, has been tested on a sample model that solves a 2D non-linear diffusion-type equation. Odyssée offers both the forward and the reverse differentiation modes, which produce the tangent and the cotangent models, respectively. The two modes have been implemented on the sample application. A comparison is made with a manually produced differentiated code for this model (MD), obtained by solving the adjoint equations associated with the model's discrete state equations. Following a presentation of the methods and tools and of their relative advantages and drawbacks, the performances of the codes produced by the manual and automatic methods are compared in terms of accuracy and of computing efficiency (CPU and memory needs). The perturbation method (finite-difference approximation of derivatives) is also used as a reference. Based on the Taylor test, the accuracy of the two AD modes proves to be excellent and as high as machine precision permits, a good indication of Odyssée's capability to produce error-free codes. In comparison, the manually produced derivatives (MD) sometimes appear to be slightly biased, likely because a theoretical model (state equations) and a practical model (computer program) do not exactly coincide, while the accuracy of the perturbation method is very uncertain. The MD code largely outperforms all other methods in computing efficiency, a subject of current research for the improvement of AD tools. Yet these tools can already be of considerable help for the computer implementation of many numerical methods, avoiding the tedious task of hand-coding the differentiation of complex algorithms.
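The Taylor test used above to validate derivative code is simple to state: if the gradient is exact, the first-order Taylor remainder decays quadratically in the step size. A generic sketch (not tied to Odyssée or Fortran):

```python
import numpy as np

def taylor_test(f, grad, x, v, eps0=1e-2, n=6):
    """Taylor test for derivative code: if grad is exact, the remainder
    |f(x + eps v) - f(x) - eps <grad(x), v>| shrinks like O(eps^2), so the
    observed convergence rate approaches 2 as eps is repeatedly halved."""
    fx, g = f(x), grad(x)
    errs = []
    for k in range(n):
        eps = eps0 / 2**k
        errs.append(abs(f(x + eps * v) - fx - eps * (g @ v)))
    # rate between consecutive halvings; should tend to 2 for an exact gradient
    return [np.log2(errs[k] / errs[k + 1]) for k in range(n - 1)]
```

A buggy or biased gradient shows up immediately: the observed rate stalls near 1 (first-order remainder) instead of approaching 2.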
NASA Astrophysics Data System (ADS)
Hunziker, J.; Thorbecke, J.; Slob, E. C.
2014-12-01
Commonly, electromagnetic measurements for exploring and monitoring hydrocarbon reservoirs are inverted for the subsurface conductivity distribution by minimizing the difference between the actual data and a forward-modeled dataset. The convergence of the inversion process to the correct solution strongly depends on the shape of the solution space. Since this is a non-linear problem, there exists a multitude of minima, of which only the global one provides the correct conductivity values. To easily find the global minimum we desire it to have a broad cone of attraction, while it should also feature a very narrow bottom in order to obtain the subsurface conductivity with high resolution. In this study, we aim to determine which combination of input data corresponds to a favorable shape of the solution space. Since the solution space is N-dimensional, with N being the number of unknown subsurface parameters, plotting it is out of the question. In our approach, we use a genetic algorithm (Goldberg, 1989) to probe the solution space. Such algorithms have the advantage that every run of the same problem can end up at a different solution. Most of these solutions are expected to lie close to the global minimum. A situation where only few runs end up in the global minimum indicates that the solution space contains many local minima or that the cone of attraction of the global minimum is small. If many runs end up with a similar data misfit but with a large spread of the subsurface medium parameters in one or more directions, it can be concluded that the chosen data input is not sensitive with respect to those directions. Compared to the study of Hunziker et al. 2014, we also allow inversion for subsurface boundaries and include more combinations of input datasets. The results so far suggest that it is essential to include the magnetic field in the inversion process in order to find the anisotropic conductivity values. References: Goldberg, D. E., 1989. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley.
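A bare-bones real-coded genetic algorithm of the kind used to probe a solution space might look as follows; repeated runs from different seeds yield the spread of solutions discussed above (all parameters are illustrative, not those of the study):

```python
import numpy as np

def ga_probe(misfit, bounds, pop=40, gens=80, runs=8, rng=np.random.default_rng):
    """Run a simple real-coded genetic algorithm several times with different
    seeds; the spread of the returned solutions probes the solution space
    (tight clustering suggests a broad cone of attraction of the global minimum)."""
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)

    def run(seed):
        r = rng(seed)
        P = r.uniform(lo, hi, (pop, lo.size))
        for _ in range(gens):
            fit = np.array([misfit(p) for p in P])
            parents = P[np.argsort(fit)[: pop // 2]]                # truncation selection
            kids = 0.5 * (parents + parents[r.permutation(len(parents))])  # crossover
            kids += r.normal(0, 0.05 * (hi - lo), kids.shape)              # mutation
            P = np.vstack([parents, np.clip(kids, lo, hi)])                # elitism
        fit = np.array([misfit(p) for p in P])
        return P[np.argmin(fit)]

    return [run(s) for s in range(runs)]
```

If the returned models cluster tightly in some parameter directions but scatter widely in others, the data are insensitive to the scattered directions, which is exactly the diagnostic the study exploits.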
Stability results for the parameter identification inverse problem in cardiac electrophysiology
NASA Astrophysics Data System (ADS)
Lassoued, Jamila; Mahjoub, Moncef; Zemzemi, Néjib
2016-11-01
In this paper we prove a stability estimate for the parameter identification problem in cardiac electrophysiology modeling. We use the monodomain model, a reaction-diffusion parabolic equation whose reaction term is obtained by solving an ordinary differential equation (ODE). We are interested in proving the stability of the identification of the parameter τ_in, the parameter that multiplies the cubic term in the reaction term. The proof of the result is based on a new Carleman-type estimate for both the partial differential equation (PDE) and ODE problems. As a consequence of the stability result, we prove the uniqueness of the parameter τ_in given some observations of both state variables at a given time t_0 in the whole domain and of the PDE variable in a nonempty open subset ω_0 of the domain.
Binary optimization for source localization in the inverse problem of ECG.
Potyagaylo, Danila; Cortés, Elisenda Gil; Schulze, Walther H W; Dössel, Olaf
2014-09-01
The goal of ECG imaging (ECGI) is to reconstruct heart electrical activity from body surface potential maps. The problem is ill-posed, which means that it is extremely sensitive to measurement and modeling errors. The most commonly used method to tackle this obstacle is Tikhonov regularization, which consists in converting the original problem into a well-posed one by adding a penalty term. Despite all its practical advantages, the method has a serious drawback: the obtained solution is often over-smoothed, which can hinder precise clinical diagnosis and treatment planning. In this paper, we apply a binary optimization approach to the transmembrane voltage (TMV)-based problem. For this, we assume the TMV to take one of two possible values according to the heart abnormality under consideration. In this work, we investigate the localization of simulated ischemic areas and ectopic foci as well as one clinical infarction case. The application affects only the choice of the binary values, while the core of the algorithm remains the same, making the approach easily adjustable to the needs of the application. Two methods were tested: a hybrid metaheuristic approach and the difference-of-convex-functions (DC) algorithm. For this purpose, we performed realistic heart simulations for a complex thorax model and applied the proposed techniques to the obtained ECG signals. Both methods enabled localization of the areas of interest, hence showing their potential for application in ECGI. For the metaheuristic algorithm, it was necessary to subdivide the heart into regions in order to obtain a stable solution insensitive to the errors, while the analytical DC scheme can be efficiently applied to higher-dimensional problems. With the DC method, we also successfully reconstructed the activation pattern and origin of a simulated extrasystole. In addition, the DC algorithm enables iterative adjustment of the binary values, ensuring robust performance.
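The over-smoothing drawback of Tikhonov regularization mentioned above is easy to reproduce on a toy ill-posed linear system. The operator, noise level, and the blocky "TMV-like" target below are synthetic; this is a generic zero-order Tikhonov sketch, not the paper's ECGI setup.

```python
# Minimal Tikhonov sketch: regularize an ill-posed linear system A x = b.
# A is a synthetic smoothing operator; lam trades data fit against stability.
import numpy as np

rng = np.random.default_rng(1)
n = 50
t = np.linspace(0, 1, n)
# severely ill-conditioned forward operator (Gaussian smoothing kernel)
A = np.exp(-((t[:, None] - t[None, :])**2) / (2 * 0.05**2))
x_true = (np.abs(t - 0.5) < 0.1).astype(float)   # blocky two-valued target
b = A @ x_true + 0.01 * rng.normal(size=n)

lam = 1e-2
# zero-order Tikhonov: x = argmin ||Ax - b||^2 + lam * ||x||^2
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# x_tik is stable but smooths the sharp edges of x_true, which is exactly
# what motivates the binary (two-valued) formulation discussed in the text.
print(float(np.max(np.abs(x_tik))))
```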
NASA Astrophysics Data System (ADS)
Yao, Zhewei; Hu, Zixi; Li, Jinglai
2016-07-01
Many scientific and engineering problems require performing Bayesian inference in function spaces, where the unknowns are of infinite dimension. In such problems, choosing an appropriate prior distribution is an important task. In particular, when the function to infer is subject to sharp jumps, the commonly used Gaussian measures become unsuitable. On the other hand, the so-called total variation (TV) prior can only be defined in a finite-dimensional setting and does not lead to a well-defined posterior measure in function spaces. In this work we present a TV-Gaussian (TG) prior to address such problems, where the TV term is used to detect sharp jumps of the function, and the Gaussian distribution is used as a reference measure so that the construction results in a well-defined posterior measure in the function space. We also present an efficient Markov Chain Monte Carlo (MCMC) algorithm to draw samples from the posterior distribution of the TG prior. With numerical examples we demonstrate the performance of the TG prior and the efficiency of the proposed MCMC algorithm.
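A finite-dimensional caricature of sampling under a TV-plus-Gaussian-style prior can be written with plain random-walk Metropolis. This is only a sketch: the paper's construction lives in function space and uses a dedicated MCMC scheme; the identity forward map, noise level, and weights below are invented.

```python
# Sketch: Metropolis sampling with a TV term plus a Gaussian reference term
# (finite-dimensional toy; all weights and the forward map are illustrative).
import numpy as np

rng = np.random.default_rng(2)
n = 30
d = np.where(np.arange(n) < n // 2, 0.0, 1.0) + 0.05 * rng.normal(size=n)  # jumpy data

def log_post(u, lam=5.0, sig=0.1):
    tv = np.sum(np.abs(np.diff(u)))            # TV term favors sharp jumps
    gauss = 0.5 * np.sum(u**2)                 # Gaussian reference term
    like = 0.5 * np.sum((u - d)**2) / sig**2   # identity forward map (toy)
    return -(like + lam * tv + gauss)

u = np.zeros(n)
lp = log_post(u)
samples = []
for it in range(4000):
    prop = u + 0.05 * rng.normal(size=n)       # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        u, lp = prop, lp_prop
    if it >= 2000:                             # keep post-burn-in samples
        samples.append(u.copy())
post_mean = np.mean(samples, axis=0)
print(post_mean[:3], post_mean[-3:])
```

The posterior mean tracks the jump in the data rather than smoothing it away, which is the behavior the TV term is meant to provide.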
FELIX: advances in modeling forward and inverse ice-sheet problems
NASA Astrophysics Data System (ADS)
Gunzburger, Max; Hoffman, Matthew; Leng, Wei; Perego, Mauro; Price, Stephen; Salinger, Andrew; Stadler, Georg; Ju, Lili
2013-04-01
Several models of different complexity and accuracy have been proposed for describing ice-sheet dynamics. We introduce a parallel, finite element framework for implementing these models, which range from the "shallow ice approximation" up through nonlinear Stokes flow. These models make up the land ice dynamical core of FELIX, which is being developed under the Community Ice Sheet Model. We present results from large-scale simulations of the Greenland ice sheet, compare models of differing complexity and accuracy, and explore different solution methods for the resulting linear and nonlinear systems. We also address the problem of finding an optimal initial state for the Greenland ice sheet by estimating the spatially varying linear-friction coefficient at the ice-bedrock interface. The problem, which consists of minimizing the mismatch between a specified and computed surface mass balance and/or the mismatch between observed and modeled surface velocities, is solved as an optimal control problem constrained by the governing model equations.
NASA Astrophysics Data System (ADS)
Nikitenko, N. I.
1981-12-01
The paper develops a difference method for solving inverse geometric problems of heat conduction relating to the determination of the coordinates of a moving boundary with respect to steps along the time axis. At every step, the desired function is expanded in a power series of the time coordinate; and the coefficients of this series are found through the multiple solution of direct heat-conduction problems for systems with moving phase boundaries. Numerical results indicate that the error of the solution of incorrectly stated inverse geometric problems differs only insignificantly from the error of the initial data.
Long, Christopher J.; Purdon, Patrick L.; Temereanca, Simona; Desai, Neil U.; Hämäläinen, Matti S.; Brown, Emery N.
2011-01-01
Determining the magnitude and location of neural sources within the brain that are responsible for generating magnetoencephalography (MEG) signals measured on the surface of the head is a challenging problem in functional neuroimaging. The number of potential sources within the brain exceeds by an order of magnitude the number of recording sites. As a consequence, the estimates for the magnitude and location of the neural sources will be ill-conditioned because of the underdetermined nature of the problem. One well-known technique designed to address this imbalance is the minimum norm estimator (MNE). This approach imposes an L2 regularization constraint that serves to stabilize and condition the source parameter estimates. However, this class of regularizers is static in time and does not consider the temporal constraints inherent to the biophysics of the MEG experiment. In this paper we propose a dynamic state-space model that accounts for both spatial and temporal correlations within and across candidate intra-cortical sources. In our model, the observation model is derived from the steady-state solution to Maxwell's equations while the latent model representing neural dynamics is given by a random walk process. We show that the Kalman filter (KF) and the Kalman smoother [also known as the fixed-interval smoother (FIS)] may be used to solve the ensuing high-dimensional state-estimation problem. Using a well-known relationship between Bayesian estimation and Kalman filtering, we show that the MNE estimates carry a significant bias toward zero. Calculating these high-dimensional state estimates is a computationally challenging task that requires High Performance Computing (HPC) resources. To this end, we employ the NSF TeraGrid Supercomputing Network to compute the source estimates. We demonstrate improvement in performance of the state-space algorithm relative to MNE in analyses of simulated and actual somatosensory MEG experiments. Our findings establish the benefits
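The state-space idea above (random-walk latent sources, linear observation operator) can be sketched with a plain Kalman filter. The lead-field-like matrix G, the dimensions, and the noise covariances below are all invented; a real MEG problem is far larger and G comes from Maxwell's equations.

```python
# Sketch: Kalman filtering a random-walk source model under a linear
# observation operator G, with many more sources than sensors.
import numpy as np

rng = np.random.default_rng(3)
n_src, n_sens, T = 20, 5, 50              # underdetermined: 20 sources, 5 sensors
G = rng.normal(size=(n_sens, n_src))      # stand-in for the MEG lead field
Q, R = 0.01 * np.eye(n_src), 0.1 * np.eye(n_sens)

# simulate latent random walk and observations
x = np.zeros(n_src); xs, ys = [], []
for _ in range(T):
    x = x + rng.multivariate_normal(np.zeros(n_src), Q)
    ys.append(G @ x + rng.multivariate_normal(np.zeros(n_sens), R))
    xs.append(x.copy())

# Kalman filter (state transition F = I for a random walk)
m, P = np.zeros(n_src), np.eye(n_src)
est = []
for y in ys:
    P = P + Q                                         # predict
    S = G @ P @ G.T + R
    K = P @ G.T @ np.linalg.solve(S, np.eye(n_sens))  # Kalman gain
    m = m + K @ (y - G @ m)                           # update
    P = (np.eye(n_src) - K @ G) @ P
    est.append(m.copy())

err_kf = np.mean([(a - b) @ (a - b) for a, b in zip(est, xs)])
print(err_kf)
```

Propagating the state estimate through time is what the static MNE regularizer cannot do; a smoother (FIS) would additionally run a backward pass over `est`.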
Cięszczyk, Sławomir; Kisała, Piotr
2016-02-20
We propose and experimentally demonstrate a method for the detection of steel material defects utilizing a fiber Bragg grating sensor. The considered defects are periodic grooves along the length of the tested steel profile. Direct measurement of the spectral reflectance characteristics of the fiber is performed, and the related inverse problem of indirect defect shape determination is solved. It has been demonstrated that the defect periodicity estimation is 2.5 mm, with an error of less than 0.1. Furthermore, it has been shown that for periodic intervals of the order of 5 mm, the difference between the strain amplitude calculated using our method and the amplitude obtained via the finite element method was 1.4 mϵ. PMID:26906595
di Volo, Matteo; Burioni, Raffaella; Casartelli, Mario; Livi, Roberto; Vezzani, Alessandro
2016-01-01
We study the dynamics of networks with inhibitory and excitatory leaky integrate-and-fire neurons with short-term synaptic plasticity in the presence of depressive and facilitating mechanisms. The dynamics is analyzed by a heterogeneous mean-field approximation, which allows us to keep track of the effects of structural disorder in the network. We describe the complex behavior of different classes of excitatory and inhibitory components, which give rise to a rich dynamical phase diagram as a function of the fraction of inhibitory neurons. Using the same mean-field approach, we study and solve a global inverse problem: reconstructing the degree probability distributions of the inhibitory and excitatory components and the fraction of inhibitory neurons from the knowledge of the average synaptic activity field. This approach unveils new perspectives on the numerical study of neural network dynamics and the possibility of using these models as a test bed for the analysis of experimental data. PMID:26871090
NASA Technical Reports Server (NTRS)
Mutterperl, William
1944-01-01
A method of conformal transformation is developed that maps an airfoil into a straight line, the line being chosen as the extended chord line of the airfoil. The mapping is accomplished by operating directly with the airfoil ordinates. The absence of any preliminary transformation is found to shorten the work substantially over that of previous methods. Use is made of the superposition of solutions to obtain a rigorous counterpart of the approximate methods of thin-airfoil theory. The method is applied to the solution of the direct and inverse problems for arbitrary airfoils and pressure distributions. Numerical examples are given. Applications to more general types of regions, in particular to biplanes and to cascades of airfoils, are indicated.
NASA Astrophysics Data System (ADS)
Usui, Y.; Uehara, M.
2009-12-01
Scanning magnetometry reveals fine-scale magnetic field images over geological samples, offering unique paleomagnetic information. Recent scanning magnetometry achieves high moment sensitivity and high spatial resolution (less than 1 mm), owing to high field-sensitivity sensors (e.g., SQUID, Magneto-Impedance (MI), Giant Magneto-Resistance) and small sample-to-sensor distances. Using an MI sensor driven by a low-noise circuit, we successfully obtained magnetic field images of thin-sectioned geological samples carrying natural remanent magnetization. The main challenge in scanning magnetometry is to invert the obtained field data into a magnetization pattern (Weiss et al., 2007). Since the magnetization pattern is a continuous function of position, the magnetic inverse problem is essentially underdetermined. Consequently, there are always limitations in the resolving power of the estimated magnetization. Assessing the resolving power is essential to properly interpret the solution of inverse problems. Nevertheless, this has not been done for scanning magnetometry. In this study, we used the model resolution to assess the problem. We developed software to calculate and visualize the model resolution for a single target point using the Backus-Gilbert method and an iterative least-squares calculation. Examples using results obtained with our MI scanning magnetometer will be presented. Any solution to a linear inverse problem can be expressed as a linear combination of the data. The corresponding combination of data kernels constructs an averaging kernel. Thus, any solution is a weighted average of the true magnetization pattern of the sample. In other words, the resolving power of the solution, or the model resolution, can be assessed by drawing the averaging kernel. Since scanning magnetometry measures 2-dimensional samples, the averaging kernel for a single target point becomes a 2-dimensional image. A preliminary calculation was performed on a field image
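The averaging-kernel idea above can be sketched in one dimension with the model-resolution matrix R = G⁺G of a linear inverse problem: each row of R is the averaging kernel for one target point. The smoothing forward kernel and geometry below are invented stand-ins for the magnetometer's data kernels.

```python
# Sketch: model-resolution matrix R = pinv(G) @ G for a linear inverse problem.
# Row k of R is the averaging kernel at target point k; the estimate there is
# that weighted average of the true model. The forward kernel is illustrative.
import numpy as np

n_data, n_model = 15, 40
x = np.linspace(0, 1, n_model)
# smoothing forward kernel (schematic field measurement above a magnetized strip)
G = np.array([np.exp(-((x - xc)**2) / (2 * 0.08**2))
              for xc in np.linspace(0, 1, n_data)])

Gp = np.linalg.pinv(G, rcond=1e-6)   # truncated-SVD generalized inverse
R = Gp @ G                           # model-resolution matrix

k = n_model // 2
kernel = R[k]                        # averaging kernel for the central point
print(float(kernel.sum()), int(np.argmax(np.abs(kernel))))
```

A sharply peaked row means that point is well resolved; a broad row shows the estimate there averages over a wide neighborhood of the true magnetization.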
NASA Astrophysics Data System (ADS)
Giudici, Mauro; Casabianca, Davide; Comunian, Alessandro
2015-04-01
The basic classical inverse problem of groundwater hydrology aims at determining aquifer transmissivity (T) from measurements of hydraulic head (h), from estimates or measurements of source terms, and with the least possible knowledge of hydraulic transmissivity. The theory of inverse problems shows that this is an example of an ill-posed problem, for which non-uniqueness and instability (or at least ill-conditioning) might preclude the computation of a physically acceptable solution. One way to reduce the problems of non-uniqueness, ill-conditioning and instability is a tomographic approach, i.e., the use of data corresponding to independent flow situations. The latter might correspond to different hydraulic stimulations of the aquifer, i.e., to different pumping schedules and flux rates. Three inverse methods have been analyzed and tested to profit from the use of multiple data sets: the Differential System Method (DSM), the Comparison Model Method (CMM) and the Double Constraint Method (DCM). DSM and CMM need h over the whole domain, so the first step in their application is the interpolation of the measurements of h at sparse points. Moreover, they also need knowledge of the source terms (aquifer recharge, well pumping rates) over the whole aquifer. DSM is intrinsically based on the use of multiple data sets, which permit writing a first-order partial differential equation for T, whereas CMM and DCM were originally proposed to invert a single data set and have been extended in this work to handle multiple data sets. CMM and DCM are based on Darcy's law, which is used to update an initial guess of the T field with formulas based on a comparison of different hydraulic gradients. In particular, the CMM algorithm corrects the T estimate with the ratio of the observed hydraulic gradient and that obtained with a comparison model, which shares the same boundary conditions and source terms as the model to be calibrated but uses a tentative T field. On the other hand
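A gradient-ratio correction of the CMM type can be sketched in a 1-D steady-flow toy problem. This is only illustrative: the update direction shown (multiplying T by the ratio of model to observed gradients) is one common convention, the setup has no recharge, and the non-uniqueness it exhibits (T recoverable only up to a multiplicative constant from head data alone) is resolved here by anchoring with one assumed known T value.

```python
# Minimal 1-D sketch of a Comparison-Model-Method-style update for
# transmissivity T from hydraulic-head data; the setup is illustrative.
import numpy as np

n = 20
T_true = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(n) / n)  # cell transmissivities

def solve_heads(T, h0=1.0, h1=0.0):
    # steady 1-D flow, no recharge: flux q is uniform; heads follow Darcy's law
    resist = np.mean(1.0 / T)
    q = (h0 - h1) / resist
    h = np.empty(n + 1)
    h[0] = h0
    for i in range(n):
        h[i + 1] = h[i] - q / (n * T[i])
    return h

grad_obs = np.diff(solve_heads(T_true))     # "observed" hydraulic gradients

T = np.ones(n)                              # tentative T field
for _ in range(30):
    grad_mod = np.diff(solve_heads(T))
    T = T * np.abs(grad_mod) / np.abs(grad_obs)  # gradient-ratio correction

# Head data alone fix T only up to a multiplicative constant here, so we
# anchor the field with one "measured" transmissivity value:
T = T * (T_true[0] / T[0])
print(float(np.max(np.abs(T - T_true))))
```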
Inverse eigenvalue problems in vibration absorption: Passive modification and active control
NASA Astrophysics Data System (ADS)
Mottershead, John E.; Ram, Yitshak M.
2006-01-01
The abiding problem of vibration absorption has occupied engineering scientists for over a century, and there remain abundant examples of the need for vibration suppression in many industries. For example, in the automotive industry the resolution of noise, vibration and harshness (NVH) problems is of extreme importance to customer satisfaction. In rotorcraft it is vital to avoid resonance close to the blade passing speed and its harmonics. An objective of the greatest importance, and extremely difficult to achieve, is the isolation of the pilot's seat in a helicopter. It is presently impossible to achieve the objectives of vibration absorption in these industries at the design stage because of limitations inherent in finite element models. Therefore, it is necessary to develop techniques whereby the dynamics of the system (possibly a car or a helicopter) can be adjusted after it has been built. There are two main approaches: structural modification by passive elements and active control. The state of the art of the mathematical theory of vibration absorption is presented and illustrated for the benefit of the reader with numerous simple examples.
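The simplest passive-modification example in this area is the classical tuned vibration absorber: attaching a small mass-spring pair (m_a, k_a) with k_a/m_a equal to the square of the excitation frequency nulls the steady-state response of the main mass at that frequency. The numerical values below are illustrative.

```python
# Classic passive modification: a tuned vibration absorber. Attaching
# (m_a, k_a) with k_a/m_a = w**2 nulls the main mass's steady-state
# response at excitation frequency w. Parameter values are illustrative.
import numpy as np

m1, k1 = 1.0, 100.0      # main mass and spring
w = 12.0                 # excitation frequency to be absorbed (rad/s)
m_a = 0.1                # absorber mass
k_a = m_a * w**2         # tuning rule

def receptance(wf):
    # steady-state amplitude of the main mass under a unit harmonic force
    M = np.array([[m1, 0.0], [0.0, m_a]])
    K = np.array([[k1 + k_a, -k_a], [-k_a, k_a]])
    X = np.linalg.solve(K - wf**2 * M, np.array([1.0, 0.0]))
    return X[0]

print(receptance(w))     # essentially zero: the absorber cancels the response
```

This is the prototype of the inverse eigenvalue problem in the title: choosing a modification so that the assembled system has a prescribed spectral property (here, an antiresonance at w).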
Blakeslee, Barbara; McCourt, Mark E.
2015-01-01
Research in lightness perception centers on understanding the prior assumptions and processing strategies the visual system uses to parse the retinal intensity distribution (the proximal stimulus) into the surface reflectance and illumination components of the scene (the distal stimulus—ground truth). It is agreed that the visual system must compare different regions of the visual image to solve this inverse problem; however, the nature of the comparisons and the mechanisms underlying them are topics of intense debate. Perceptual illusions are of value because they reveal important information about these visual processing mechanisms. We propose a framework for lightness research that resolves confusions and paradoxes in the literature, and provides insight into the mechanisms the visual system employs to tackle the inverse problem. The main idea is that much of the debate and confusion in the literature stems from the fact that lightness, defined as apparent reflectance, is underspecified and refers to three different types of judgments that are not comparable. Under stimulus conditions containing a visible illumination component, such as a shadow boundary, observers can distinguish and match three independent dimensions of achromatic experience: apparent intensity (brightness), apparent local intensity ratio (brightness-contrast), and apparent reflectance (lightness). In the absence of a visible illumination boundary, however, achromatic vision reduces to two dimensions and, depending on stimulus conditions and observer instructions, judgments of lightness are identical to judgments of brightness or brightness-contrast. Furthermore, because lightness judgments are based on different information under different conditions, they can differ greatly in their degree of difficulty and in their accuracy. This may, in part, explain the large variability in lightness constancy across studies. PMID:25954181
NASA Astrophysics Data System (ADS)
Ahn, Chi Young; Jeon, Kiwan; Park, Won-Kwang
2015-06-01
This study analyzes the well-known MUltiple SIgnal Classification (MUSIC) algorithm for identifying the unknown support of a thin penetrable electromagnetic inhomogeneity from scattered field data collected in the so-called multi-static response matrix in limited-view inverse scattering problems. The mathematical theory of MUSIC has been partially established, e.g., in the full-view problem, for an unknown target of dielectric contrast or a perfectly conducting crack with the Dirichlet boundary condition (Transverse Magnetic, TM, polarization), and so on. Hence, we perform further research to analyze the MUSIC-type imaging functional and to explain some well-known but theoretically unexplained phenomena. For this purpose, we establish a relationship between the MUSIC imaging functional and an infinite series of Bessel functions of integer order of the first kind. This relationship is based on a rigorous asymptotic expansion formula in the presence of a thin inhomogeneity with a smooth supporting curve. Various results of numerical simulation are presented in order to support the identified structure of MUSIC. Although a priori information about the target is needed, we suggest a minimal condition on the range of incident and observation directions for applying MUSIC in the limited-view problem.
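The core MUSIC mechanism, projecting a steering vector onto the noise subspace of the multi-static response matrix, can be sketched for point targets under the Born approximation. The wavenumber, array geometry, and Green's-function-like steering vector below are all invented for illustration; the paper treats extended thin inhomogeneities, not points.

```python
# MUSIC sketch for point targets (Born / point-scatterer model; geometry,
# wavenumber and array are invented for illustration).
import numpy as np

rng = np.random.default_rng(5)
k = 2 * np.pi                      # wavenumber
sensors = np.column_stack([np.linspace(-2, 2, 16), np.zeros(16)])
targets = np.array([[0.5, 2.0], [-0.8, 2.5]])

def steering(z):
    r = np.linalg.norm(sensors - z, axis=1)
    return np.exp(1j * k * r) / r  # Green's-function-like illumination vector

# multi-static response matrix (Born approximation, unit reflectivities)
K = sum(np.outer(steering(t), steering(t)) for t in targets)
K = K + 1e-6 * rng.normal(size=K.shape)   # small noise

U, s, _ = np.linalg.svd(K)
Un = U[:, 2:]                      # noise subspace (2 targets -> signal rank 2)

def music(z):
    g = steering(z)
    g = g / np.linalg.norm(g)
    return 1.0 / np.linalg.norm(Un.conj().T @ g)   # imaging functional

# the functional peaks at true target locations and stays small elsewhere
print(music(targets[0]), music(np.array([1.5, 3.0])))
```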
NASA Astrophysics Data System (ADS)
Koepke, C.; Irving, J.
2015-12-01
Bayesian solutions to inverse problems in near-surface geophysics and hydrology have gained increasing popularity as a means of estimating not only subsurface model parameters, but also their corresponding uncertainties that can be used in probabilistic forecasting and risk analysis. In particular, Markov chain Monte Carlo (MCMC) methods have attracted much recent attention as a means of statistically sampling from the Bayesian posterior distribution. In this regard, two approaches are commonly used to improve the computational tractability of the Bayesian-MCMC approach: (i) forward models involving a simplification of the underlying physics are employed, which offer a significant reduction in the time required to calculate data, but generally at the expense of model accuracy, and (ii) the model parameter space is represented using a limited set of spatially correlated basis functions as opposed to a more intuitive high-dimensional pixel-based parameterization. It has become well understood that model inaccuracies resulting from (i) can lead to posterior parameter distributions that are highly biased and overly confident. Further, when performing model reduction as described in (ii), it is not clear how the prior distribution for the basis weights should be defined, because simple (e.g., Gaussian or uniform) priors that may be suitable for a pixel-based parameterization may result in a strong prior bias when used for the weights. To address the issue of model error resulting from known forward model approximations, we generate a set of error training realizations and analyze them with principal component analysis (PCA) in order to generate a sparse basis. The latter is used in the MCMC inversion to remove the main model-error component from the residuals. To improve issues related to prior bias when performing model reduction, we also use a training realization approach, but this time models are simulated from the prior distribution and analyzed using independent
NASA Astrophysics Data System (ADS)
Gurarslan, Gurhan; Karahan, Halil
2015-09-01
In this study, an accurate model was developed for solving problems of groundwater pollution source identification. In the developed model, the numerical simulations of flow and pollutant transport in groundwater were carried out using the MODFLOW and MT3DMS software. The optimization was carried out using a differential evolution algorithm. The performance of the developed model was tested on two hypothetical aquifer models using exact and noisy observation data. In the first model, the release histories of the pollution sources were determined assuming that the numbers, locations and active stress periods of the sources are known. In the second model, the release histories of the pollution sources were determined assuming that there is no information on the sources. The results obtained by the developed model were found to be better than those reported in the literature.
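The optimization step described above can be sketched with a small hand-rolled differential-evolution loop (the DE/rand/1/bin variant) recovering a release history. The "transport model" below is a toy linear operator, not MODFLOW/MT3DMS, and all settings are illustrative.

```python
# Sketch: recovering a pollutant release history with a hand-rolled
# differential-evolution (DE/rand/1/bin) loop; the transport model is a toy.
import numpy as np

rng = np.random.default_rng(6)
s_true = np.array([2.0, 0.5, 1.5])          # releases in 3 stress periods
G = np.array([[0.9, 0.0, 0.0],
              [0.5, 0.9, 0.0],
              [0.2, 0.5, 0.9],
              [0.1, 0.2, 0.5]])             # toy transport/observation operator
c_obs = G @ s_true                          # observed concentrations

def cost(s):
    return np.sum((G @ s - c_obs)**2)

NP, F, CR, lo, hi = 20, 0.7, 0.9, 0.0, 3.0
pop = rng.uniform(lo, hi, (NP, 3))
for _ in range(200):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        mutant = np.clip(a + F * (b - c), lo, hi)     # differential mutation
        cross = rng.uniform(size=3) < CR              # binomial crossover
        cross[rng.integers(3)] = True                 # at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        if cost(trial) <= cost(pop[i]):               # greedy selection
            pop[i] = trial
best = pop[np.argmin([cost(p) for p in pop])]
print(best)
```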
Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim
2014-02-01
A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of remaining computationally feasible. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection for several nonlinear subsurface flow problems.
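The evidence-estimation core of nested sampling can be sketched in one dimension. In the loop below, plain rejection sampling from the prior inside the likelihood constraint stands in for the paper's HMC step; the Gaussian likelihood, uniform prior, and all settings are illustrative.

```python
# Minimal nested-sampling sketch (rejection sampling inside the likelihood
# constraint stands in for the constrained HMC step; 1-D toy problem).
import numpy as np

rng = np.random.default_rng(7)

def loglike(x):   # normalized Gaussian likelihood; uniform prior on [0, 1]
    return -0.5 * ((x - 0.5) / 0.05)**2 - 0.5 * np.log(2 * np.pi * 0.05**2)

n_live, n_iter = 100, 600
live = rng.uniform(0, 1, n_live)
logL = loglike(live)
logZ, logX = -np.inf, 0.0
for i in range(n_iter):
    worst = np.argmin(logL)
    logX_new = -(i + 1) / n_live                   # expected log prior volume
    logw = np.log(np.exp(logX) - np.exp(logX_new)) # shell weight
    logZ = np.logaddexp(logZ, logw + logL[worst])  # accumulate evidence
    Lmin = logL[worst]
    while True:                                    # constrained replacement
        x = rng.uniform(0, 1)
        if loglike(x) > Lmin:
            break
    live[worst], logL[worst] = x, loglike(x)
    logX = logX_new
print(logZ)   # analytic evidence here is ~1, so logZ should be near 0
```

The rejection step becomes hopelessly inefficient as the constraint tightens, which is precisely why the paper replaces it with HMC.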
Borghero, Francesco; Demontis, Francesco
2016-09-01
In the framework of geometrical optics, we consider the following inverse problem: given a two-parameter family of curves (congruence) (i.e., f(x,y,z)=c_{1},g(x,y,z)=c_{2}), construct the refractive-index distribution function n=n(x,y,z) of a 3D continuous transparent inhomogeneous isotropic medium, allowing for the creation of the given congruence as a family of monochromatic light rays. We solve this problem by following two different procedures: 1. By applying Fermat's principle, we establish a system of two first-order linear nonhomogeneous PDEs in the unique unknown function n=n(x,y,z) relating the assigned congruence of rays with all possible refractive-index profiles compatible with this family. Moreover, we furnish analytical proof that the family of rays must be a normal congruence. 2. By applying the eikonal equation, we establish a second system of two first-order linear homogeneous PDEs whose solutions give the equation S(x,y,z)=const. of the geometric wavefronts and, consequently, all pertinent refractive-index distribution functions n=n(x,y,z). Finally, we make a comparison between the two procedures described above, discussing appropriate examples having exact solutions. PMID:27607492
NASA Astrophysics Data System (ADS)
Tian, Wenyi; Yuan, Xiaoming
2016-11-01
Linear inverse problems with total variation regularization can be reformulated as saddle-point problems; the primal and dual variables of such a saddle-point reformulation can be discretized in piecewise affine and constant finite element spaces, respectively. Thus, the well-developed primal-dual approach (a.k.a. the inexact Uzawa method) is conceptually applicable to such a regularized and discretized model. When the primal-dual approach is applied, the resulting subproblems may be highly nontrivial and it is necessary to discuss how to tackle them and thus make the primal-dual approach implementable. In this paper, we suggest linearizing the data-fidelity quadratic term of the hard subproblems so as to obtain easier ones. A linearized primal-dual method is thus proposed. Inspired by the fact that the linearized primal-dual method can be explained as an application of the proximal point algorithm, a relaxed version of the linearized primal-dual method, which can often accelerate the convergence numerically with the same order of computation, is also proposed. The global convergence and worst-case convergence rate measured by the iteration complexity are established for the new algorithms. Their efficiency is verified by some numerical results.
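The linearization idea above (handling the data-fidelity quadratic term through its gradient inside a primal-dual iteration) can be sketched for a 1-D total-variation problem. This is a generic Chambolle-Pock/Condat-Vu-style sketch, not the paper's finite element discretization, and the step sizes, operator, and signal are invented.

```python
# Sketch: a linearized primal-dual iteration for
#   min_x 0.5*||A x - b||^2 + alpha*||D x||_1,
# with D a 1-D finite-difference operator; the fidelity term enters only
# through its gradient (the "linearization"). Setup is illustrative.
import numpy as np

rng = np.random.default_rng(8)
n = 60
x_true = np.where(np.arange(n) < n // 2, 0.0, 1.0)  # piecewise-constant signal
A = np.eye(n)                                       # toy forward operator
b = A @ x_true + 0.05 * rng.normal(size=n)
D = (np.eye(n, k=1) - np.eye(n))[:-1]               # finite differences

alpha, tau, sigma = 0.1, 0.2, 0.2                   # satisfy the step-size rule
x = np.zeros(n); y = np.zeros(n - 1); x_bar = x.copy()
for _ in range(500):
    # dual step: prox of the conjugate of alpha*||.||_1 is a clip
    y = np.clip(y + sigma * (D @ x_bar), -alpha, alpha)
    # primal step: gradient of the fidelity term plus the dual contribution
    x_new = x - tau * (A.T @ (A @ x - b) + D.T @ y)
    x_bar = 2 * x_new - x                           # extrapolation
    x = x_new
print(float(np.max(np.abs(x - x_true))))
```

The recovered signal stays piecewise constant with a single jump, which is the edge-preserving behavior TV regularization is chosen for.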
NASA Astrophysics Data System (ADS)
Lan, Bo; Lowe, Michael J. S.; Dunne, Fionn P. E.
2015-10-01
A new spherical convolution approach has been presented which couples the HCP single-crystal wave speed (the kernel function) with the polycrystal c-axis pole distribution function to give the resultant polycrystal wave speed response. The three functions have been expressed as spherical harmonic expansions, thus enabling the de-convolution technique by which any one of the three can be determined from knowledge of the other two. Hence, the forward problem of determining the polycrystal wave speed from knowledge of the single-crystal wave speed response and the polycrystal pole distribution has been solved for a broad range of experimentally representative HCP polycrystal textures. The technique provides near-perfect representation of the sensitivity of wave speed to polycrystal texture as well as quantitative prediction of polycrystal wave speed. More importantly, a solution to the inverse problem is presented in which texture, as a c-axis distribution function, is determined from knowledge of the kernel function and the polycrystal wave speed response. It has also been explained why it has been widely reported in the literature that only texture coefficients up to 4th degree may be obtained from ultrasonic measurements. Finally, the de-convolution approach presented provides the potential for the measurement of polycrystal texture from ultrasonic wave speed measurements.
NASA Astrophysics Data System (ADS)
Cressie, N.; Wang, R.; Smyth, M.; Miller, C. E.
2016-05-01
Remote sensing of the atmosphere is typically achieved through measurements that are high-resolution radiance spectra. In this article, our goal is to characterize the first-moment and second-moment properties of the errors obtained when solving the regularized inverse problem associated with space-based atmospheric CO2 retrievals, specifically for the dry air mole fraction of CO2 in a column of the atmosphere. The problem of estimating (or retrieving) state variables is usually ill posed, leading to a solution based on regularization that is often called Optimal Estimation (OE). The difference between the estimated state and the true state is defined to be the retrieval error; error analysis for OE uses a linear approximation to the forward model, resulting in a calculation where the first moment of the retrieval error (the bias) is identically zero. This is inherently unrealistic and not seen in real or simulated retrievals. Nonzero bias is expected since the forward model of radiative transfer is strongly nonlinear in the atmospheric state. In this article, we extend and improve OE's error analysis, which is based on a first-order multivariate Taylor series expansion, by including the second-order terms in the expansion. Specifically, we approximate the bias through the second derivative of the forward model, which results in a formula involving the Hessian array. We propose a stable estimate of it, from which we obtain a second-order expression for the bias and the mean square prediction error of the retrieval.
NASA Astrophysics Data System (ADS)
Cipolatti, Rolci; Yamamoto, Masahiro
2011-09-01
We consider a solution u(p, g, a, b) to an initial value/boundary value problem for a wave equation:

∂_t² u(x,t) = Δu(x,t) + p(x)u(x,t), x ∈ Ω, 0 < t < T,
u(x,0) = a(x), ∂_t u(x,0) = b(x), x ∈ Ω,
u(x,t) = g(x,t), x ∈ ∂Ω, 0 < t < T,

and we discuss an inverse problem of determining a coefficient p(x) and a, b from observations of u(p, g, a, b)(x, t) in a neighbourhood ω of ∂Ω over a time interval (0, T) and of ∂_t^i u(p, g, a, b)(x, T_0), x ∈ Ω, i = 0, 1, with T_0 < T. We prove that if T - T_0 and T_0 are larger than the diameter of Ω, then we can choose a finite number of Dirichlet boundary inputs g_1, ..., g_N, so that the mapping {u(p, g_j, a_j, b_j)|_{ω×(0,T)}, ∂_t^i u(p, g_j, a_j, b_j)(·, T_0)}_{i=0,1, 1≤j≤N}
NASA Astrophysics Data System (ADS)
Xue, Haile; Shen, Xueshun; Chou, Jifan
2015-10-01
Errors inevitably exist in numerical weather prediction (NWP) due to imperfect numerics and physical parameterizations. To eliminate these errors, NWP can be treated as an inverse problem: an unknown term in the prediction equations is estimated from past data, which are presumed to represent the imperfection of the NWP model (the model error, denoted ME). In this first paper of a two-part series, an iteration method for obtaining the MEs in past intervals is presented, and the results from testing its convergence in idealized experiments are reported. Moreover, two batches of iteration tests were applied in the global forecast system of the Global and Regional Assimilation and Prediction System (GRAPES-GFS) for July-August 2009 and January-February 2010. The datasets associated with the initial conditions and sea surface temperature (SST) were both based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results showed that 6-h forecast errors were reduced to 10% of their original value after a 20-step iteration. Then, off-line forecast error corrections were estimated linearly based on the 2-month mean MEs and compared with forecast errors. The estimated error corrections agreed well with the forecast errors, but the linear growth rate of the estimation was steeper than that of the forecast error. The advantage of this iteration method is that the MEs can provide the foundation for online correction. A larger proportion of the forecast errors can be expected to be canceled out by properly introducing the model error correction into GRAPES-GFS.
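The iterative ME estimation can be sketched with a toy system (assumed dynamics, not the GRAPES-GFS model): the "true" system carries a constant forcing that the model omits, and each iteration corrects the ME estimate by the remaining forecast error.

```python
# Toy sketch of iterative model-error (ME) estimation (assumed dynamics,
# not GRAPES-GFS): the true system is x' = -x + e_true, the imperfect
# model omits the forcing e, and each iteration updates the ME estimate
# by the remaining forecast error at time T.
e_true, T, dt = 0.5, 1.0, 0.001
steps = int(T / dt)

def forecast(x0, e):
    x = x0
    for _ in range(steps):
        x += dt * (-x + e)     # forward-Euler step of model plus ME term
    return x

x0 = 1.0
x_obs = forecast(x0, e_true)   # "past data" generated by the true system

e = 0.0
for _ in range(20):            # a 20-step iteration, as in the abstract
    e += x_obs - forecast(x0, e)
print(e)                       # converges to e_true = 0.5
```

Since the forecast is linear in e here, each iteration shrinks the ME error by a fixed factor, mirroring the rapid convergence reported for the 20-step iteration.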
NASA Astrophysics Data System (ADS)
Jiminez-Rodriguez, Luis O.; Rodriguez-Diaz, Eladio; Velez-Reyes, Miguel; DiMarzio, Charles A.
2003-05-01
Hyperspectral Remote Sensing has the potential to be used as an effective coral monitoring system from space. The problems to be addressed in hyperspectral imagery of coastal waters are related to the medium, clutter, and the object to be detected. In coastal waters the variability due to the interaction between the coast and the sea can bring significant disparity in the optical properties of the water column and the sea bottom. In terms of the medium, there is high scattering and absorption. Related to clutter we have the ocean floor, dissolved salt and gases, and dissolved organic matter. The object to be detected, in this case the coral reefs, has a weak signal, with temporal and spatial variation. In real scenarios the absorption and backscattering coefficients have spatial variation due to different sources of variability (river discharge, different depths of shallow waters, water currents) and temporal fluctuations. The retrieval of information about an object beneath some medium with high scattering and absorption properties requires the development of mathematical models and processing tools in the area of inversion, image reconstruction and detection. This paper presents the development of algorithms for retrieving information and its application to the recognition and classification of coral reefs under water with particles that provide high absorption and scattering. The data was gathered using a high resolution imaging spectrometer (hyperspectral) sensor. A mathematical model that simplifies the radiative transfer equation was used to quantify the interaction between the object of interest, the medium and the sensor. Tikhonov method of regularization was used in the inversion process to estimate the bottom albedo, ρ, of the ocean floor using a priori information. The a priori information is in the form of measured spectral signatures of objects of interest, such as sand, corals, and sea grass.
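Tikhonov regularization with a priori information, as used for the bottom-albedo retrieval, can be sketched as follows (illustrative Python with assumed sizes; a synthetic linear operator G stands in for the radiative transfer model):

```python
import numpy as np

# Tikhonov-regularized inversion sketch (illustrative; G is a synthetic
# linear operator standing in for the radiative transfer model, and
# rho_prior encodes assumed a priori spectral information):
# minimize ||G @ rho - d||^2 + alpha**2 * ||rho - rho_prior||^2.
rng = np.random.default_rng(1)
n = 20
G = rng.normal(size=(30, n))
rho_true = np.linspace(0.2, 0.8, n)            # "bottom albedo" to recover
d = G @ rho_true + rng.normal(scale=0.01, size=30)

alpha = 0.1                                    # regularization strength
rho_prior = np.full(n, 0.5)                    # a priori albedo guess
A = G.T @ G + alpha**2 * np.eye(n)
b = G.T @ d + alpha**2 * rho_prior
rho_est = np.linalg.solve(A, b)                # normal-equations solution
print(np.max(np.abs(rho_est - rho_true)))      # small recovery error
```

The prior term keeps the solution well behaved when G is poorly conditioned, at the cost of a small bias toward rho_prior controlled by alpha.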
NASA Astrophysics Data System (ADS)
Hansen, Thomas Mejer; Cordua, Knud Skou; Looms, Majken Caroline; Mosegaard, Klaus
2013-03-01
We present an application of the SIPPI Matlab toolbox for obtaining a sample from the a posteriori probability density function of the classical tomographic inversion problem. We consider a number of different forward models, linear and non-linear, such as ray-based forward models that rely on the high-frequency approximation of the wave equation and 'fat' ray-based forward models relying on finite-frequency theory. In order to sample the a posteriori probability density function we make use of both least-squares-based inversion, for linear Gaussian inverse problems, and the extended Metropolis sampler, for non-linear non-Gaussian inverse problems. To illustrate the applicability of the SIPPI toolbox to a tomographic field data set we use a cross-borehole traveltime data set from Arrenæs, Denmark. Both the computer code and the data are released in the public domain under open source and open data licenses. The code has been developed to facilitate inversion of 2D and 3D traveltime tomographic data using a wide range of possible a priori models and choices of forward models.
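For the non-linear, non-Gaussian case, the extended Metropolis idea can be illustrated with a one-parameter toy problem (a hedged sketch, not the SIPPI implementation; the forward model g(m) = m² and all constants are assumptions):

```python
import numpy as np

# Random-walk Metropolis sketch for a non-linear, non-Gaussian inverse
# problem (a toy stand-in, not the SIPPI code): forward model g(m) = m**2,
# one observed datum d, Gaussian prior and likelihood (values assumed).
rng = np.random.default_rng(2)
d, sigma_d = 4.0, 0.2
prior_mu, prior_sigma = 2.0, 1.0

def log_post(m):
    misfit = -0.5 * ((m**2 - d) / sigma_d) ** 2          # log-likelihood
    prior = -0.5 * ((m - prior_mu) / prior_sigma) ** 2   # log-prior
    return misfit + prior

m, samples = prior_mu, []
for _ in range(20_000):
    prop = m + 0.3 * rng.normal()                 # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(m):
        m = prop                                  # accept
    samples.append(m)
print(np.mean(samples[5000:]))   # posterior mean near sqrt(d) = 2
```

The sampler needs only point evaluations of the forward model, which is why Metropolis-type methods suit non-linear tomographic problems where no closed-form posterior exists.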
NASA Astrophysics Data System (ADS)
Zakharova, Natalia; Piskovatsky, Nicolay; Gusev, Anatoly
2014-05-01
Development of Informational-Computational Systems (ICS) for data assimilation procedures is a multidisciplinary problem. To study and solve such problems one needs to apply modern results from different disciplines and recent developments in mathematical modeling, the theory of adjoint equations and optimal control, inverse problems, numerical methods theory, numerical algebra and scientific computing. These problems are studied at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INM RAS) in ICS for personal computers. In this work the results of the special database development for the ICS "INM RAS - Black Sea" are presented. The input information for the ICS is discussed, and some special data processing procedures are described. The results of forecasts using the ICS "INM RAS - Black Sea" with operational observation data assimilation are also presented. This study was supported by the Russian Foundation for Basic Research (project No. 13-01-00753) and by the Presidium Program of the Russian Academy of Sciences (project P-23 "Black Sea as an imitational ocean model").
NASA Astrophysics Data System (ADS)
Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.
2012-04-01
We consider new techniques and methods for earthquake- and tsunami-related problems, in particular inverse problems for the determination of tsunami source parameters, numerical simulation of long-wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issues of database management and destruction scenario visualization. New approaches and strategies, as well as mathematical tools and software, are shown. The long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms are developed for the operational determination of the origin and form of the tsunami source. The TSS system numerically simulates the tsunami and/or earthquake source and includes the possibility of solving both the direct and the inverse problem. It becomes possible to involve advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors, as well as optimum computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use the optimization approach to solve it and SVD analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to
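The SVD-based analysis of ill-posedness and quasi-solutions can be sketched on a small ill-conditioned linear problem (illustrative only; the matrix and truncation rule are assumptions, not the TSS software):

```python
import numpy as np

# Truncated-SVD quasi-solution sketch: discard small singular values so
# noise is not amplified by division by tiny s[i]. The matrix and the
# truncation rule are assumptions for illustration.
rng = np.random.default_rng(3)
A = np.vander(np.linspace(0.0, 1.0, 12), 8)   # ill-conditioned model matrix
x_true = np.ones(8)
d = A @ x_true + rng.normal(scale=1e-4, size=12)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > 1e-3 * s[0]))              # assumed truncation threshold
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])
print(k, np.linalg.norm(A @ x_tsvd - d))      # small residual despite truncation
```

The singular value spectrum itself quantifies the degree of ill-posedness: the faster s decays, the fewer components of the solution the data can reliably constrain.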
Meschiari, Stefano; Laughlin, Gregory P.
2010-07-20
Transit timing variations (TTVs)-deviations from strict periodicity between successive passages of a transiting planet-can be used to probe the structure and dynamics of multiple-planet systems. In this paper, we examine prospects for numerically solving the so-called inverse problem, the determination of the orbital elements of a perturbing body from the TTVs it induces. We assume that the planetary systems under examination have a limited number of Doppler velocity measurements and show that a more extensive radial velocity (RV) characterization with precision comparable to the semi-amplitude of the perturber may remove degeneracies in the solution. We examine several configurations of interest, including (1) a prototypical non-resonant system, modeled after HD 40307 b and c, which contains multiple super-Earth-mass planets, (2) a hypothetical system containing a transiting giant planet with a terrestrial-mass companion trapped in low-order mean motion resonance, and (3) the HAT-P-13 system, in which forced precession by an outer perturbing body that is well characterized by Doppler RV measurements can give insight into the interior structure of a perturbing planet, and for which the determination of mutual inclination between the transiting planet and its perturber is a key issue.
NASA Astrophysics Data System (ADS)
Sarry, L.; Peng, Y. J.; Boire, J. Y.
2002-01-01
In previously published studies, blood flow velocity from x-ray biplane angiography was measured by solving an inverse advection problem, relating velocity to bolus densities summed across sections. Both spatial and temporal velocity variations were recovered through a computationally expensive parameter estimation algorithm. Here we prove the existence and uniqueness of the solution on three sub-domains of the plane defined by the axial position along the vessel and the time of the angiographic sequence. A fast direct scheme was designed in conjunction with a regularization step stemming from the volume flow conservation law applied on consecutive segments. Its accuracy and immunity towards noise were tested on both simulated and real densitometric data. The relative error between the estimated and expected velocities was less than 5% for more than 90% of the points of the spatiotemporal plane with simulated densities normalized to 1.0 and a Gaussian additive noise of standard deviation 0.01. For densities reconstructed from a biplane angiographic sequence, increase in velocity is used as a functional index for the stenosis ratio and to characterize the sharing of flow at bifurcation.
Dixneuf, Sophie; Rachet, Florent; Chrysos, Michael
2015-02-28
Owing in part to the p orbitals of its filled L shell, neon has repeatedly come on stage for its peculiar properties. In the context of collision-induced Raman spectroscopy, in particular, we have shown, in a brief report published a few years ago [M. Chrysos et al., Phys. Rev. A 80, 054701 (2009)], that the room-temperature anisotropic Raman lineshape of Ne-Ne exhibits, in the far wing of the spectrum, a peculiar structure with an aspect other than a smooth wing (on a logarithmic plot) which contrasts with any of the existing studies, and whose explanation lies in the distinct way in which overlap and exchange interactions interfere with the classical electrostatic ones in making the polarizability anisotropy, α∥ − α⊥. Here, we delve deeper into that study by reporting data for that spectrum up to 450 cm⁻¹ and for even- and odd-order spectral moments up to M₆, as well as quantum lineshapes, generated from SCF, CCSD, and CCSD(T) models for α∥ − α⊥, which are critically compared with the experiment. On account of the knowledge of the spectrum over the augmented frequency domain, we show how the inverse scattering problem can be tackled both effectively and economically, and we report an analytic function for the anisotropy whose quantum lineshape faithfully reproduces our observations. PMID:25725726
Juhás, Pavol; Farrow, Christopher L; Yang, Xiaohao; Knox, Kevin R; Billinge, Simon J L
2015-11-01
A strategy is described for regularizing ill-posed structure and nanostructure scattering inverse problems (i.e. structure solution) for complex material structures. This paper describes both the philosophy and strategy of the approach, and a software implementation, the DiffPy Complex Modeling Infrastructure (DiffPy-CMI). PMID:26522405
NASA Technical Reports Server (NTRS)
Kozdoba, L. A.; Krivoshei, F. A.
1985-01-01
The solution of the inverse problem of nonsteady heat conduction is discussed, based on finding the heat conduction coefficient and the specific volumetric heat capacity. These coefficients enter the equation used for the electrical model of this phenomenon.
NASA Astrophysics Data System (ADS)
Zhuang, Qiao; Yu, Bo; Jiang, Xiaoyun
2015-01-01
In this paper, a time-fractional heat conduction problem is mathematically formulated for an experimental heat conduction process in a 3-layer composite medium. A numerical solution to the direct problem is obtained with a finite difference method. For the inverse problem, the optimal order of the Caputo fractional derivative is estimated with the Levenberg-Marquardt method. Compared with the carbon-carbon experimental data, the results show that the time-fractional heat conduction model provides an effective and accurate simulation of the experimental data. The rationality of the proposed time-fractional model and the validity of the Levenberg-Marquardt method in solving the time-fractional inverse heat conduction problem are also demonstrated by the results. A sensitivity analysis further confirms the feasibility of the parameter estimation.
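The Levenberg-Marquardt estimation step can be sketched with SciPy (an illustrative stand-in: a simple stretched-exponential decay replaces the time-fractional forward model; all names and values are assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

# Levenberg-Marquardt sketch for estimating a single model parameter from
# measured data. A stretched-exponential decay stands in for the
# time-fractional forward model; all names and values are assumptions.
t = np.linspace(0.05, 2.0, 40)
alpha_true = 0.8
measured = np.exp(-t**alpha_true)            # synthetic, noise-free "data"

def residual(p):
    return np.exp(-t ** p[0]) - measured     # model minus measurement

fit = least_squares(residual, x0=[0.5], method='lm')   # LM solver
print(fit.x[0])    # recovers alpha_true = 0.8
```

In the paper's setting, `residual` would wrap the finite-difference solution of the time-fractional direct problem, with the Caputo order as the fitted parameter.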
NASA Astrophysics Data System (ADS)
Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.
2014-08-01
The χ² principle generalizes the Morozov discrepancy principle to the augmented residual of the Tikhonov regularized least squares problem. For weighting of the data fidelity by a known Gaussian noise distribution on the measured data, and when the stabilizing (regularization) term is weighted by unknown inverse covariance information on the model parameters, the minimum of the Tikhonov functional becomes a random variable that follows a χ² distribution with m + p − n degrees of freedom for the model matrix G of size m × n, m ≥ n, and regularizer L of size p × n. Then a Newton root-finding algorithm, employing the generalized singular value decomposition (or the singular value decomposition when L = I), can be used to find the regularization parameter α. Here the result and algorithm are extended to the underdetermined case m < n, with m + p ≥ n. Numerical results first contrast and verify the generalized cross validation, unbiased predictive risk estimation and χ² algorithms for m < n, with regularizers L approximating zeroth- to second-order derivatives. The inversion of underdetermined 2D focusing gravity data produces models with non-smooth properties, for which typical solvers in the field use an iterative minimum support stabilizer, with both the regularizer and the regularization parameter updated at each iteration. The χ² principle and the unbiased predictive risk estimator of the regularization parameter are used for the first time in this context. For a simulated underdetermined data set with noise, these regularization parameter estimation methods, as well as the generalized cross validation method, are contrasted with the use of the L-curve and the Morozov discrepancy principle. Experiments demonstrate the efficiency and robustness of the χ² principle and the unbiased predictive risk estimator, moreover showing that the L-curve and Morozov discrepancy principle are outperformed in general
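A minimal sketch of discrepancy-style regularization parameter choice (illustrative; a simple geometric bisection stands in for the paper's Newton/GSVD root-finding, and the target is the realized noise norm, assumed known here):

```python
import numpy as np

# Discrepancy-style choice of the Tikhonov parameter alpha (illustrative;
# geometric bisection stands in for the paper's Newton/GSVD root-finding,
# and the target is the realized noise norm, assumed known here).
rng = np.random.default_rng(4)
m, n = 40, 20
G = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
noise = rng.normal(scale=0.05, size=m)
d = G @ x_true + noise

def residual_norm2(alpha):
    x = np.linalg.solve(G.T @ G + alpha**2 * np.eye(n), G.T @ d)
    return float(np.sum((G @ x - d) ** 2))

target = float(np.sum(noise**2))   # match the noise level ||noise||^2
lo, hi = 1e-6, 1e3                 # residual is monotone increasing in alpha
for _ in range(200):
    mid = np.sqrt(lo * hi)
    lo, hi = (lo, mid) if residual_norm2(mid) > target else (mid, hi)
alpha_star = np.sqrt(lo * hi)
print(alpha_star, residual_norm2(alpha_star))
```

The χ² principle replaces this residual-only target with the augmented Tikhonov functional and its χ² degrees of freedom, but the root-finding structure in α is analogous.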
NASA Astrophysics Data System (ADS)
Goretzki, Nora; Inbar, Nimrod; Siebert, Christian; Möller, Peter; Rosenthal, Eliyahu; Schneider, Michael; Magri, Fabien
2015-04-01
Salty and thermal springs exist along the lakeshore of the Sea of Galilee, which covers most of the Tiberias Basin (TB) in the northern Jordan-Dead Sea Transform, Israel/Jordan. As it is the only freshwater reservoir of the entire area, it is important to study the salinisation processes that pollute the lake. Simulations of thermohaline flow along a 35 km NW-SE profile show that meteoric and relic brines are flushed by the regional flow from the surrounding heights and by thermally induced groundwater flow within the faults (Magri et al., 2015). Several trial-and-error model runs were necessary to calibrate the hydraulic conductivity of both the faults and the major aquifers in order to fit temperature logs and spring salinity. It turned out that the hydraulic conductivity of the faults ranges between 30 and 140 m/yr, whereas the hydraulic conductivity of the Upper Cenomanian aquifer is as high as 200 m/yr. However, large-scale transport processes also depend on other physical parameters, such as thermal conductivity, porosity and the fluid thermal expansion coefficient, which are hardly known. Here, inverse problems (IP) are solved along the NW-SE profile to better constrain the physical parameters: (a) hydraulic conductivity, (b) thermal conductivity and (c) thermal expansion coefficient. The PEST code (Doherty, 2010) is applied via the graphical interface FePEST in FEFLOW (Diersch, 2014). The results show that both the thermal and hydraulic conductivities are consistent with the values determined by the trial-and-error calibrations. Besides being an automatic approach that speeds up the calibration process, the IP covers a wide range of parameter values, providing additional solutions not found with the trial-and-error method. Our study shows that geothermal systems like the TB are more comprehensively understood when inverse models are applied to constrain coupled fluid flow processes over large spatial scales. References: Diersch, H.-J.G., 2014. FEFLOW Finite
NASA Astrophysics Data System (ADS)
Li, Zhenhai; Nie, Chenwei; Yang, Guijun; Xu, Xingang; Jin, Xiuliang; Gu, Xiaohe
2014-10-01
Leaf area index (LAI) and leaf chlorophyll content (LCC), as the two most important crop growth variables, are major considerations in management decisions, agricultural planning and policy making. Estimation of canopy biophysical variables from remote sensing data was investigated using a radiative transfer model. However, the inversion is inherently ill-posed: the solution of the inverse problem is not unique, and measurements and model assumptions carry uncertainty. This study focused on the use of agronomy mechanism knowledge to restrict and remove ill-posed inversion results. For this purpose, the inversion results obtained using the PROSAIL model alone (NAMK) and linked with agronomic mechanism knowledge (AMK) were compared. The results showed that AMK did not significantly improve the accuracy of LAI inversion: LAI was estimated with high accuracy even without it. The validation results for the coefficient of determination (R2) and the corresponding root mean square error (RMSE) between measured and estimated LAI were 0.635 and 1.022 for NAMK, and 0.637 and 0.999 for AMK, respectively. LCC estimation was significantly improved with agronomy mechanism knowledge; the R2 and RMSE values were 0.377 and 14.495 μg cm⁻² for NAMK, and 0.503 and 10.661 μg cm⁻² for AMK, respectively. These comparisons demonstrate the value of agronomy mechanism knowledge in radiative transfer model inversion.
Sanz, E.; Voss, C.I.
2006-01-01
Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only
Paracentric inversions do not normally generate monocentric recombinant chromosomes
Sutherland, G.R.; Callen, D.F.; Gardner, R.J.M.
1995-11-20
Dr. Pettenati et al. recently reported a review of paracentric inversions in humans in which they concluded that carriers of these inversions have a 3.8% risk of viable offspring with recombinant chromosomes. We are of the view that there are serious problems with this estimate, which should be much closer to zero. The only recombinant chromosomes that can be generated by a paracentric inversion undergoing a normal meiotic division are dicentrics and acentric fragments. Only two such cases were found by Pettenati et al. Several of the alleged monocentric recombinants were originally reported as arising from parental insertions (3-break rearrangements), and it is not legitimate to include them in any analysis of paracentric inversions. A monocentric recombinant chromosome can only arise from a paracentric inversion by some abnormal process that must involve chromatid breakage and reunion.
NASA Astrophysics Data System (ADS)
Hart, Vern Philip, II
A methodology is presented for creating tomographic reconstructions from various projection data, and the relevance of the results to applications in atmospheric science and biomedical imaging is analyzed. The fundamental differences between transform and iterative methods are described and the properties of the imaging configurations are addressed. The presented results are particularly suited for highly ill-conditioned inverse problems in which the imaging data are restricted as a result of poor angular coverage, limited detector arrays, or insufficient access to an imaging region. The class of reconstruction algorithms commonly used in sparse tomography, the algebraic reconstruction techniques, is presented, analyzed, and compared. These algorithms are iterative in nature and their accuracy depends significantly on the initialization of the algorithm, the so-called initial guess. A considerable amount of research was conducted into novel initialization techniques as a means of improving the accuracy. The main body of this paper comprises three smaller papers, which describe the application of the presented methods to atmospheric and medical imaging modalities. The first paper details the measurement of mesospheric airglow emissions at two camera sites operated by Utah State University. Reconstructions of vertical airglow emission profiles are presented, including three-dimensional models of the layer formed using a novel fanning technique. The second paper describes the application of the method to the imaging of polar mesospheric clouds (PMCs) by NASA's Aeronomy of Ice in the Mesosphere (AIM) satellite. The contrasting elements of straight-line and diffusive tomography are also discussed in the context of ill-conditioned imaging problems. A number of developing modalities in medical tomography use near-infrared light, which interacts strongly with biological tissue and results in significant optical scattering. In order to perform tomography on the
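The algebraic reconstruction technique (ART) family can be illustrated with a Kaczmarz-style sweep (a toy sketch with an assumed random ray matrix; real sparse-view problems are far more ill-conditioned):

```python
import numpy as np

# Kaczmarz-style ART sketch: project the current estimate onto the
# hyperplane of each ray-sum equation a_i . x = b_i in turn, starting
# from an initial guess x0. The random ray matrix is an assumption;
# real sparse-view problems are far more ill-conditioned.
rng = np.random.default_rng(5)
A = rng.normal(size=(15, 10))    # hypothetical ray-path matrix
x_true = rng.uniform(size=10)
b = A @ x_true                   # consistent, noise-free projections

x = np.zeros(10)                 # the initial guess
for _ in range(5000):            # repeated sweeps over all rays
    for a_i, b_i in zip(A, b):
        x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
print(np.max(np.abs(x - x_true)))   # converges to the unique solution
```

With few angles or noisy data the iterates stagnate far from the truth, which is why the choice of initial guess, emphasized above, matters so much in sparse tomography.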
NASA Astrophysics Data System (ADS)
Klibanov, Michael V.; Timonov, Alexandre
2001-12-01
Some coefficient inverse problems of electromagnetic frequency sounding of inhomogeneous media are considered. Such problems occur in many areas of applied physics, such as the geophysical exploration of gas, oil and mineral deposits, reservoir monitoring, marine acoustics and electromagnetics, optical sensing, and radio physics. Reformulating these problems in terms of nonlinear least squares, also known in the applied literature as matched field processing, often leads to a multiextremal and multidimensional objective function. This makes it extremely difficult to find its global extremum which corresponds to the solution of the original problem. It is shown in this paper that an inverse problem of frequency sounding can first be identically transformed to a certain boundary value problem which does not explicitly contain an unknown coefficient. The nonlinear least squares are then applied to the transformed problem. Using the weight functions associated with the Carleman estimates for the Laplace operator, an objective function is constructed in such a way that it is strictly convex on a certain compact set. The feasibility of the proposed approach is demonstrated in computational experiments with a model problem of magnetotelluric frequency sounding of layered media.
Volkov, Yu. O.; Kozhevnikov, I. V.; Roshchin, B. S.; Filatova, E. O.; Asadchikov, V. E.
2013-01-15
The key features of the inverse problem of X-ray reflectometry (i.e., the reconstruction of the depth profile of the dielectric constant using an experimental angular dependence of reflectivity) are discussed and essential factors leading to the ambiguity of its solution are analyzed. A simple approach to studying the internal structure of HfO₂ films, which is based on the application of a physically reasonable model, is considered. The principles for constructing a film model and the criteria for choosing a minimal number of fitting parameters are discussed. It is shown that the ambiguity of the solution to the inverse problem is retained even for the simplest single-film models. Approaches allowing one to pick out the most realistic solution from several variants are discussed.
NASA Astrophysics Data System (ADS)
Burbidge, Adam S.; Strassburg, Julia A.; Hartmann, Christoph
2008-07-01
We discuss the perception of grittiness in the human mouth from the perspective of continuum mechanics and draw some conclusions about the likely interactions between hydrodynamically arising stress fluctuations and the stimulation of biological mechanoreceptor structures. Two classes of mechanoreceptors exist, responding to either static or dynamic stresses. It is apparent that the static stresses arising from inclusions are very small relative to the background stresses generated by the squeeze flow unless the inclusion is very close to the palate, tongue or free surface. The situation for dynamical stress fluctuations is less clear.
NASA Astrophysics Data System (ADS)
Dorn, Oliver; Lionheart, Bill
2010-11-01
These proceedings combine selected contributions from participants of the Workshop on Electromagnetic Inverse Problems, hosted by the University of Manchester in June 2009. The workshop was organized by the two guest editors of these proceedings and ran in parallel to the 10th International Conference on Electrical Impedance Tomography, which was guided by Bill Lionheart, Richard Bayford, and Eung Je Woo. Both events shared plenary talks and several selected sessions. One reason for combining the two events was the goal of bringing together scientists from various related disciplines who normally might not attend the same conferences, and of enhancing discussions between these different groups. For example, one day of the workshop was dedicated to the broader area of geophysical inverse problems (including inverse problems in petroleum engineering), where participants from the EIT community and from the medical imaging community were also encouraged to participate, with great success. Other sessions concentrated on microwave medical imaging, inverse scattering, and eddy current imaging, with active feedback also from geophysically oriented scientists. Furthermore, several talks addressed such diverse topics as optical tomography, photoacoustic tomography, time reversal, and electrosensing fish. As a result of the workshop, speakers were invited to contribute extended papers to these proceedings. All submissions were thoroughly reviewed and, after thoughtful revision by the authors, combined in these proceedings. The resulting set of six papers, presenting the work of 22 authors from five countries, provides a very interesting overview of several of the themes represented at the workshop. These can be divided into two important categories, namely (i) modelling and (ii) data inversion. The first three papers of this selection, as outlined below, focus more on modelling aspects, being an essential component of
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
1993-01-01
The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.
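The core of each iteration described above is a weighted, damped linear least-squares step. A minimal sketch (not the paper's full scheme) of one such step follows; G, d, w and lam are illustrative stand-ins for the linearized forward operator, data, weights and damping, not quantities from the study.

```python
import numpy as np

# One weighted, damped linear least-squares solve of the kind iterated
# in such inversions (illustrative, synthetic data).
def weighted_damped_lsq(G, d, w, lam):
    """Solve min_m ||W^(1/2)(d - G m)||^2 + lam*||m||^2 in closed form."""
    W = np.diag(w)
    A = G.T @ W @ G + lam * np.eye(G.shape[1])
    return np.linalg.solve(A, G.T @ W @ d)

# Tiny synthetic check: recover a 2-parameter model from 4 weighted data.
rng = np.random.default_rng(0)
G = rng.standard_normal((4, 2))
m_true = np.array([1.0, -2.0])
d = G @ m_true
m_est = weighted_damped_lsq(G, d, np.ones(4), 1e-8)
```

In an iterative non-linear scheme, G would be re-linearized about the current model and this solve repeated until convergence.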
Moebius inversion formula and inverting lattice sums
NASA Astrophysics Data System (ADS)
Millane, Rick P.
2000-11-01
The Möbius inversion formula is an interesting theorem from number theory with applications to a number of inverse problems, particularly lattice problems. Specific inverse problems, however, often require related Möbius inversion formulae that can be derived from the fundamental formula. Such derivations are not easy for the non-specialist. Examples of the kinds of inversion formulae that can be derived, and their application to inverse lattice problems, are described.
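The fundamental formula referred to above can be stated concretely: if g(n) = Σ_{d|n} f(d), then f(n) = Σ_{d|n} μ(d) g(n/d), where μ is the Möbius function. A small self-contained sketch (naive trial-division μ and a simple illustrative choice of f):

```python
def mu(n):
    """Moebius function: mu(1)=1; 0 if n has a squared prime factor;
    (-1)^k if n is a product of k distinct primes."""
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    if n > 1:                     # leftover prime factor
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def direct_sum(f, n):
    """g(n) = sum of f(d) over the divisors d of n."""
    return sum(f(d) for d in divisors(n))

def mobius_invert(g, n):
    """Recover f(n) via f(n) = sum_{d|n} mu(d) * g(n/d)."""
    return sum(mu(d) * g(n // d) for d in divisors(n))

# Round trip: build g from f(n) = n^2, then invert to recover f.
f = lambda n: n * n
g = lambda n: direct_sum(f, n)
recovered = [mobius_invert(g, n) for n in range(1, 13)]
```

The round trip recovers f exactly, which is the basic mechanism the derived lattice-sum formulae generalize.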
Fuentes, D.; Elliott, A.; Weinberg, J. S.; Shetty, A.; Hazle, J. D.; Stafford, R. J.
2012-01-01
Quantification of local variations in the optical properties of tumor tissue introduced by the presence of gold-silica nanoparticles (NP) presents significant opportunities in monitoring and control of NP-mediated laser-induced thermal therapy (LITT) procedures. Finite element methods of inverse parameter recovery constrained by a Pennes bioheat transfer model were applied to estimate the optical parameters. Magnetic resonance temperature imaging (MRTI) acquired during NP-mediated LITT of a canine transmissible venereal tumor in brain was used in the presented statistical inverse problem formulation. The maximum likelihood (ML) values of the optical parameters showed a marked change in the periphery of the tumor, corresponding with the expected location of nanoparticles and the area of selective heating observed on MRTI. Parameter values became increasingly difficult to infer in distal regions of tissue where the photon fluence had been significantly attenuated. Finite element temperature predictions using the ML parameter values obtained from the solution of the inverse problem are able to reproduce the nanoparticles' selective heating within 5°C of MRTI measurements along selected temperature profiles. Results indicate that the maximum likelihood solution is able to reproduce the selectivity of the NP-mediated laser-induced heating and is therefore likely to return useful optical parameters within the region of significant laser fluence. PMID:22918665
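The Pennes bioheat model constraining the recovery balances conduction, blood perfusion toward arterial temperature, and a heat source. A minimal explicit 1-D finite-difference sketch of that forward model follows; every tissue parameter value and the Gaussian laser deposition are illustrative placeholders, not values from the study.

```python
import numpy as np

# 1-D Pennes bioheat step: rho*c dT/dt = k d2T/dx2 - w_b*c_b*(T - T_a) + q.
# All parameter values below are illustrative placeholders.
def pennes_step(T, dt, dx, k=0.5, rho_c=3.6e6, w_b=5.0, c_b=3600.0,
                T_a=37.0, q_laser=None):
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2   # interior Laplacian
    q = np.zeros_like(T) if q_laser is None else q_laser
    T_new = T + dt / rho_c * (k * lap - w_b * c_b * (T - T_a) + q)
    T_new[0], T_new[-1] = T_a, T_a                        # body-temperature BCs
    return T_new

x = np.linspace(0.0, 0.05, 51)                  # 5 cm of tissue
T = np.full_like(x, 37.0)
q = 2e5 * np.exp(-((x - 0.025) / 0.005) ** 2)   # Gaussian "laser" deposition
for _ in range(2000):                           # 20 s of heating
    T = pennes_step(T, dt=0.01, dx=x[1] - x[0], q_laser=q)
```

In an inverse formulation, optical parameters shaping q would be adjusted until such predicted temperatures match the MRTI data.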
NASA Astrophysics Data System (ADS)
Dorn, Oliver; Lionheart, Bill
2010-11-01
These proceedings combine selected contributions from participants of the Workshop on Electromagnetic Inverse Problems, hosted by the University of Manchester in June 2009. The workshop was organized by the two guest editors of these proceedings and ran in parallel to the 10th International Conference on Electrical Impedance Tomography, which was chaired by Bill Lionheart, Richard Bayford, and Eung Je Woo. Both events shared plenary talks and several selected sessions. One reason for combining the two events was the goal of bringing together scientists from various related disciplines who normally might not attend the same conferences, and of enhancing discussion between these different groups. For example, one day of the workshop was dedicated to the broader area of geophysical inverse problems (including inverse problems in petroleum engineering), where participants from the EIT community and from the medical imaging community were also encouraged to participate, with great success. Other sessions concentrated on microwave medical imaging, on inverse scattering, or on eddy current imaging, with active feedback also from geophysically oriented scientists. Furthermore, several talks addressed such diverse topics as optical tomography, photoacoustic tomography, time reversal, and electrosensing fish. As a result of the workshop, speakers were invited to contribute extended papers to these proceedings. All submissions were thoroughly reviewed and, after thoughtful revision by the authors, combined in this volume. The resulting set of six papers, presenting the work of 22 authors in total from 5 different countries, provides a very interesting overview of several of the themes represented at the workshop. These can be divided into two important categories, namely (i) modelling and (ii) data inversion. The first three papers of this selection, as outlined below, focus more on modelling aspects, being an essential component of
Kostin, A B
2013-10-31
We study the inverse problem for a parabolic equation of recovering the source, that is, the right-hand side F(x,t)=h(x,t)f(x), where the function f(x) is unknown. To find f(x), along with the initial and boundary conditions, we also introduce an additional condition of nonlocal observation of the form ∫_0^T u(x,t) dμ(t) = χ(x). We prove the Fredholm property for the problem stated in this way, and obtain sufficient conditions for the existence and uniqueness of a solution. These conditions take the form of readily verifiable inequalities and put no restrictions on the value of T>0 or the diameter of the domain Ω under consideration. The proof uses a priori estimates and the qualitative properties of solutions of initial-boundary value problems for parabolic equations. Bibliography: 40 titles.
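A much-reduced numerical sketch of the setup (not the paper's generality): take u_t = u_xx + f(x) on (0,1) with u = 0 on the boundary and at t = 0, and dμ(t) = dt, so the observation is χ(x) = ∫_0^T u(x,t) dt. The map f ↦ χ is linear, so it can be assembled column-by-column from forward solves and inverted; all discretization choices here are illustrative.

```python
import numpy as np

def forward_time_integral(f, T=1.0, steps=200):
    """Return chi = int_0^T u(.,t) dt for u_t = u_xx + f, u=0 on boundary
    and at t=0 (implicit Euler in time, central differences in space)."""
    n = f.size
    dx = 1.0 / (n + 1)
    dt = T / steps
    L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx**2
    M = np.eye(n) - dt * L                 # implicit Euler system matrix
    u = np.zeros(n)
    chi = np.zeros(n)
    for _ in range(steps):
        u = np.linalg.solve(M, u + dt * f)
        chi += dt * u                      # accumulate the time integral
    return chi

n = 20
x = np.linspace(0, 1, n + 2)[1:-1]
f_true = np.sin(np.pi * x) + 0.5 * np.sin(3 * np.pi * x)
chi = forward_time_integral(f_true)        # synthetic nonlocal observation

# Assemble the linear observation operator and solve A f = chi.
A = np.column_stack([forward_time_integral(e) for e in np.eye(n)])
f_rec = np.linalg.solve(A, chi)
```

With noise-free synthetic data the source factor is recovered essentially exactly, reflecting the uniqueness the paper establishes under its conditions.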
Sarode, Ketan Dinkar; Kumar, V Ravi; Kulkarni, B D
2016-05-01
An efficient inverse-problem approach for parameter estimation and state and structure identification from dynamic data, obtained by embedding training functions in a genetic algorithm methodology (ETFGA), is proposed for nonlinear dynamical biosystems using S-system canonical models. The use of multiple shooting and a decomposition approach as training functions is shown to handle noisy datasets and to improve computational efficiency in studying the inverse problem. The advantages of the methodology are brought out systematically by studying three biochemical model systems of interest. By studying a small-scale gene regulatory system described by an S-system model, the first example demonstrates the use of ETFGA for the multifold aims of the inverse problem. The estimation of a large number of parameters with simultaneous state and network identification is shown by training a generalized S-system canonical model with noisy datasets. The results of this study bring out the superior performance of ETFGA in comparison with other metaheuristic approaches. The second example studies the regulation of cAMP oscillations in Dictyostelium cells, now assuming limited availability of noisy data. Here, the flexibility of the approach to incorporate partial system information in the identification process is shown, and its effect on the accuracy and predictive ability of the estimated model is studied. The third example studies a phenomenological toy model of the regulation of circadian oscillations in Drosophila that follows rate laws different from the S-system power law. For the limited noisy data, using a priori information about properties of the system, we could estimate an alternate S-system model that showed robust oscillatory behavior with predictive ability. PMID:26968929
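The S-system canonical form underlying these models is dX_i/dt = α_i ∏_j X_j^{g_ij} − β_i ∏_j X_j^{h_ij}: a power-law production term minus a power-law degradation term per species. A minimal simulation sketch with an illustrative two-variable parameter set (not one of the cited models):

```python
import numpy as np

# S-system right-hand side: production power law minus degradation power law.
def s_system_rhs(X, alpha, g, beta, h):
    prod_g = np.prod(X[None, :] ** g, axis=1)   # production terms
    prod_h = np.prod(X[None, :] ** h, axis=1)   # degradation terms
    return alpha * prod_g - beta * prod_h

def rk4(X, dt, *args):
    k1 = s_system_rhs(X, *args)
    k2 = s_system_rhs(X + 0.5 * dt * k1, *args)
    k3 = s_system_rhs(X + 0.5 * dt * k2, *args)
    k4 = s_system_rhs(X + dt * k3, *args)
    return X + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Illustrative parameters: X2 represses production of X1, X1 activates X2,
# both degrade at first order.
alpha = np.array([2.0, 1.0])
beta = np.array([1.0, 1.0])
g = np.array([[0.0, -0.5],
              [0.5,  0.0]])
h = np.array([[1.0, 0.0],
              [0.0, 1.0]])
X = np.array([1.0, 1.0])
for _ in range(5000):                # integrate to the stable steady state
    X = rk4(X, 0.01, alpha, g, beta, h)
```

In the inverse problem the entries of alpha, beta, g and h would be the unknowns fitted to time-series data; nonzero exponents in g and h simultaneously encode the network structure.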
A two-model iteration algorithm for solving the inverse boundary-value problem of heat conduction
NASA Astrophysics Data System (ADS)
Balakovskii, S. L.
1987-12-01
A method is proposed for restoring the heat flux density on the boundary of a body; it consists of sequentially solving the direct problem with an adequate complex model and the inverse problem with a simplified heat-transmission model.
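The two-model idea can be illustrated with a scalar toy problem (purely illustrative, not the paper's models): let the "complex" direct model F map flux to a measured quantity with a nonlinear term, and let the simplified model supply a cheap approximate inverse S; iterating q ← q + S(T_meas − F(q)) restores the flux.

```python
# Toy two-model iteration. F: nonlinear "complex" direct model (linear
# conduction plus a radiation-like quartic term); S: inverse of the
# simplified linear model. All numbers are illustrative.
def F(q, a=2.0, b=1e-3):
    return a * q + b * q**4

def S(residual, a=2.0):
    return residual / a

q_true = 3.0
T_meas = F(q_true)                  # synthetic "measurement"
q = 0.0
for _ in range(100):
    q = q + S(T_meas - F(q))        # correct the flux estimate
```

The iteration contracts because the simplified model captures the dominant (linear) physics; the complex model only supplies the residual correction.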
NASA Astrophysics Data System (ADS)
Szyszkiewicz-Warzecha, Krzysztof; Jasielec, Jerzy J.; Fausek, Janusz; Filipek, Robert
2016-08-01
Transport properties of ions have a significant impact on the possibility of rebar corrosion; thus knowledge of the diffusion coefficient is important for reinforced-concrete durability. Numerous tests for the determination of diffusion coefficients have been proposed, but analysis shows that some of these tests are too simplistic or even invalid. Hence, more rigorous models should be employed to calculate the coefficients. Here we propose the Nernst-Planck and Poisson equations, which take into account the concentration and electric potential fields. Based on this model, a special inverse method is presented for the determination of the chloride diffusion coefficient. It requires the measurement of concentration profiles or of the flux on the boundary, and the solution of the NPP model to define the goal function. Finding the global minimum of this function is equivalent to determining the diffusion coefficients. Typical examples of the application of the presented method are given.
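The goal-function idea can be sketched in a reduced setting: in place of the full Nernst-Planck-Poisson model, use the one-species Fickian profile C(x,t) = Cs·erfc(x/(2√(Dt))) as the forward model, define a least-squares misfit against a measured profile, and minimize it over D. All numbers below are illustrative.

```python
import numpy as np
from math import erfc, sqrt

# Simplified Fickian forward model (stand-in for the NPP solver).
def profile(D, x, t=3.15e7, Cs=1.0):        # t ~ one year in seconds
    return np.array([Cs * erfc(xi / (2.0 * sqrt(D * t))) for xi in x])

# Least-squares goal function: misfit between model and measured profile.
def goal(D, x, c_meas):
    return np.sum((profile(D, x) - c_meas) ** 2)

x = np.linspace(0.0, 0.05, 26)              # depths up to 5 cm
D_true = 5e-12                               # m^2/s, typical order for concrete
c_meas = profile(D_true, x)                  # synthetic "measurement"

# Coarse log-spaced scan of the goal function over candidate D values.
Ds = np.logspace(-13, -10, 301)
D_est = Ds[int(np.argmin([goal(D, x, c_meas) for D in Ds]))]
```

With the full NPP model the forward solve is a PDE system rather than a closed-form profile, but the structure of the inversion is the same: the minimizer of the goal function is the estimated diffusion coefficient.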
Kozunov, Vladimir V.; Ossadtchi, Alexei
2015-01-01
Although MEG/EEG signals are highly variable between subjects, they allow characterizing systematic changes of cortical activity in both space and time. Traditionally a two-step procedure is used. The first step is a transition from sensor to source space by means of solving an ill-posed inverse problem for each subject individually. The second is mapping of cortical regions consistently active across subjects. In practice the first step often leads to a set of active cortical regions whose location and timecourses display a great amount of interindividual variability, hindering the subsequent group analysis. We propose Group Analysis Leads to Accuracy (GALA)—a solution that combines the two steps into one. GALA takes advantage of individual variations of cortical geometry and sensor locations. It exploits the ensuing variability in the electromagnetic forward model as a source of additional information. We assume that for different subjects functionally identical cortical regions are located in close proximity and partially overlap, and their timecourses are correlated. This relaxed similarity constraint on the inverse solution can be expressed within a probabilistic framework, allowing for an iterative algorithm solving the inverse problem jointly for all subjects. A systematic simulation study showed that GALA, as compared with the standard min-norm approach, improves accuracy of true activity recovery, when accuracy is assessed both in terms of spatial proximity of the estimated and true activations and correct specification of spatial extent of the activated regions. This improvement, obtained without using any noise normalization techniques for either solution, was preserved for a wide range of between-subject variations in both spatial and temporal features of regional activation. The corresponding activation timecourses exhibit significantly higher similarity across subjects. Similar results were obtained for a real MEG dataset of face-specific evoked responses
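The standard minimum-norm baseline that GALA is compared against has a closed form: sources = Gᵀ(GGᵀ + λI)⁻¹y for a lead-field G with far fewer sensors than candidate sources. A minimal sketch with a random, purely illustrative G:

```python
import numpy as np

# Minimum-norm (Tikhonov-regularized) inverse estimate for an
# underdetermined linear sensor model y = G x.
def min_norm(G, y, lam=1e-2):
    n_sensors = G.shape[0]
    return G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sensors), y)

rng = np.random.default_rng(1)
G = rng.standard_normal((32, 500))     # 32 sensors, 500 cortical sources
x_true = np.zeros(500)
x_true[100:110] = 1.0                  # a small patch of active sources
y = G @ x_true                         # noise-free sensor data
x_hat = min_norm(G, y, lam=1e-6)
```

The estimate fits the data but spreads the patch, which is the kind of blurring that joint multi-subject constraints aim to reduce; joint estimation across subjects is not shown here.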
Romero, V.J.; Eldred, M.S.; Bohnhoff, W.J.; Outka, D.E.
1995-07-01
Thermal optimization procedures have been applied to determine the worst-case heating boundary conditions that a safety device can be credibly subjected to. There are many interesting aspects of this work in the areas of thermal transport, optimization, discrete modeling, and computing. The forward problem involves transient simulations with a nonlinear 3-D finite element model solving a coupled conduction/radiation problem. Coupling to the optimizer requires that boundary conditions in the thermal model be parameterized in terms of the optimization variables. The optimization is carried out over a diverse multi-dimensional parameter space where the forward evaluations are computationally expensive and of unknown duration a priori. The optimization problem is complicated by numerical artifacts resulting from discrete approximation and finite computer precision, as well as theoretical difficulties associated with navigating to a global minimum on a nonconvex objective function having a fold and several local minima. In this paper we report on the solution of the optimization problem, discuss implications of some of the features of this problem on selection of a suitable and efficient optimization algorithm, and share lessons learned, fixes implemented, and research issues identified along the way.
NASA Astrophysics Data System (ADS)
Romero, V. J.; Eldred, M. S.; Bohnhoff, W. J.; Outka, D. E.
1995-05-01
Thermal optimization procedures have been applied to determine the worst-case heating boundary conditions that a safety device can be credibly subjected to. There are many interesting aspects of this work in the areas of thermal transport, optimization, discrete modeling, and computing. The forward problem involves transient simulations with a nonlinear 3-D finite element model solving a coupled conduction/radiation problem. Coupling to the optimizer requires that boundary conditions in the thermal model be parameterized in terms of the optimization variables. The optimization is carried out over a diverse multi-dimensional parameter space where the forward evaluations are computationally expensive and of unknown duration a priori. The optimization problem is complicated by numerical artifacts resulting from discrete approximation and finite computer precision, as well as theoretical difficulties associated with navigating to a global minimum on a nonconvex objective function having a fold and several local minima. In this paper we report on the solution of the optimization problem, discuss implications of some of the features of this problem on selection of a suitable and efficient optimization algorithm, and share lessons learned, fixes implemented, and research issues identified along the way.
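The difficulty of navigating to a global minimum on a nonconvex objective with several local minima can be illustrated with a minimal multi-start sketch on a stand-in scalar function (not the thermal model): a single gradient-descent run can stall in a local minimum, while restarting from a grid of points recovers the global one.

```python
import numpy as np

# Stand-in nonconvex objective with two minima; the global one is near
# x = -1.30, a local one near x = 1.13.
def f(x):
    return x**4 - 3 * x**2 + x

def df(x):
    return 4 * x**3 - 6 * x + 1

def descend(x, lr=0.01, steps=2000):
    """Plain fixed-step gradient descent from a single start point."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

starts = np.linspace(-2.0, 2.0, 9)          # grid of starting points
candidates = [descend(x0) for x0 in starts]
x_best = min(candidates, key=f)             # keep the deepest minimum found
```

Runs started to the right of the central maximum settle in the shallower minimum; only the multi-start comparison identifies the global one, mirroring why algorithm choice matters when each forward evaluation is expensive.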
ERIC Educational Resources Information Center
Iqbal, M.
2002-01-01
In this paper we have converted the Laplace transform into an integral equation of the first kind of convolution type, which is an ill-posed problem, and used a statistical regularization method to solve it. The method is applied to three examples. It gives a good approximation to the true solution and compares well with the method given by…
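The conversion described above discretizes F(s) = ∫_0^∞ e^{-st} f(t) dt into an ill-conditioned linear system K f = F. The sketch below stabilizes it with plain Tikhonov regularization as a stand-in for the paper's statistical regularization; grids and the regularization parameter are illustrative.

```python
import numpy as np

# Discretize the Laplace transform as a first-kind integral equation.
t = np.linspace(0.0, 10.0, 200)
dt = t[1] - t[0]
s = np.linspace(0.1, 5.0, 60)
K = np.exp(-np.outer(s, t)) * dt            # rectangle-rule Laplace kernel

f_true = np.exp(-t)                          # test function f(t) = e^{-t}
F = K @ f_true                               # its discretized transform

# Tikhonov-regularized solution of the ill-conditioned system K f = F.
lam = 1e-6
f_rec = np.linalg.solve(K.T @ K + lam * np.eye(t.size), K.T @ F)
```

Without the λI term the normal equations are numerically singular; with it, the reconstruction trades a small data misfit for stability, the same trade-off any regularization method for this problem must manage.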
NASA Astrophysics Data System (ADS)
Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.
2016-01-01
We have developed an algorithm, which we call HexMT, for 3-D simulation and inversion of magnetotelluric (MT) responses using deformable hexahedral finite elements that permit incorporation of topography. Direct solvers parallelized on symmetric multiprocessor (SMP), single-chassis workstations with large RAM are used throughout, including the forward solution, parameter Jacobians and model parameter update. In Part I, the forward simulator and Jacobian calculations are presented. We use first-order edge elements to represent the secondary electric field (E), yielding accuracy O(h) for E and its curl (magnetic field). For very low frequencies or small material admittivities, the E-field requires divergence correction. With the help of Hodge decomposition, the correction may be applied in one step after the forward solution is calculated. This allows accurate E-field solutions in dielectric air. The system matrix factorization and source vector solutions are computed using the MKL PARDISO library, which shows good scalability through 24 processor cores. The factorized matrix is used to calculate the forward response as well as the Jacobians of electromagnetic (EM) field and MT responses using the reciprocity theorem. Comparison with other codes demonstrates accuracy of our forward calculations. We consider a popular conductive/resistive double brick structure, several synthetic topographic models and the natural topography of Mount Erebus in Antarctica. In particular, the ability of finite elements to represent smooth topographic slopes permits accurate simulation of refraction of EM waves normal to the slopes at high frequencies. Run-time tests of the parallelized algorithm indicate that for meshes as large as 176 × 176 × 70 elements, MT forward responses and Jacobians can be calculated in ˜1.5 hr per frequency. Together with an efficient inversion parameter step described in Part II, MT inversion problems of 200-300 stations are computable with total run times
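The computational pattern described above — factor the system matrix once with a direct solver, then reuse the factorization for many right-hand sides (forward sources and the adjoint sources used for reciprocity-based Jacobians) — can be sketched in miniature. A small dense 1-D Laplacian stands in for the edge-element system, and a Cholesky factorization with hand-written triangular solves stands in for MKL PARDISO; all of this is illustrative, not the HexMT implementation.

```python
import numpy as np

def tri_solve_lower(L, b):
    """Forward substitution for L y = b (L lower triangular)."""
    y = np.zeros_like(b)
    for i in range(b.size):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def tri_solve_upper(U, b):
    """Back substitution for U x = b (U upper triangular)."""
    x = np.zeros_like(b)
    for i in range(b.size - 1, -1, -1):
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

n = 60
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD stand-in system
L = np.linalg.cholesky(A)            # factor once (the expensive step)
solve = lambda b: tri_solve_upper(L.T, tri_solve_lower(L, b))

b_fwd = np.zeros(n); b_fwd[n // 2] = 1.0   # forward source
b_adj = np.zeros(n); b_adj[5] = 1.0        # adjoint/"receiver" source

u = solve(b_fwd)                     # forward field, reusing the factor
v = solve(b_adj)                     # adjoint field, reusing the factor
# Reciprocity for a symmetric system: v.b_fwd equals u.b_adj.
recip_gap = abs(v @ b_fwd - u @ b_adj)
```

The point of the pattern is that each extra right-hand side costs only two triangular solves, which is why Jacobians for many stations remain affordable once the factorization is in hand.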