Sample records for linear inverse modelling

  1. Mixed linear-non-linear inversion of crustal deformation data: Bayesian inference of model, weighting and regularization parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun'ichi; Johnson, Kaj M.

    2010-06-01

    We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
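The core computational trick described above, analytically solving the linear sub-problem inside a Monte Carlo loop over the non-linear parameters, can be sketched in a few lines. The forward model, noise level, and locking-depth-like parameter below are illustrative assumptions, not the authors' elastic Green's functions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixed linear-non-linear inversion: the slip-like parameters m enter the
# data linearly through a design matrix G(theta), while theta (e.g. a locking
# depth) enters non-linearly. For each Monte Carlo sample of theta the linear
# part is solved analytically by least squares.
def design_matrix(theta, x):
    # assumed illustrative kernel, not an elastic dislocation model
    return np.column_stack([np.arctan(x / theta), np.ones_like(x)])

x = np.linspace(0.1, 50.0, 40)
m_true, theta_true, sigma = np.array([2.0, 0.3]), 12.0, 0.01
d = design_matrix(theta_true, x) @ m_true + sigma * rng.standard_normal(x.size)

def log_post(theta):
    """Log-posterior of theta with the linear parameters profiled out."""
    if theta <= 0.0:
        return -np.inf
    G = design_matrix(theta, x)
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    r = d - G @ m
    return -0.5 * (r @ r) / sigma**2

# Metropolis sampling over the single non-linear parameter
theta, chain = 8.0, []
for _ in range(5000):
    prop = theta + 0.5 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

print(np.mean(chain[1000:]))   # posterior mean, close to theta_true = 12
```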

  2. Estimation of biological parameters of marine organisms using linear and nonlinear acoustic scattering model-based inversion methods.

    PubMed

    Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H

    2016-05-01

    The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to simultaneously estimate animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate, first, the abundance and, second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.
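The linear part of such an inversion, and the singular-value check the abstract recommends, can be illustrated with a toy multi-frequency kernel (all numbers below are assumed for illustration, not measured cross-sections):

```python
import numpy as np

# Toy linear acoustic inversion: measured volume backscatter at several
# frequencies is modeled as S = K @ n, where n holds abundances per size class
# and K holds per-animal backscattering cross-sections (illustrative values).
K = np.array([[1.0, 0.8, 0.2],
              [0.9, 1.1, 0.4],
              [0.3, 0.6, 1.2],
              [0.1, 0.5, 1.5]])        # 4 frequencies x 3 size classes
n_true = np.array([50.0, 20.0, 5.0])   # abundances per size class
S = K @ n_true                          # noise-free synthetic measurements

# Examine the singular values before inverting: a large condition number warns
# that small kernel uncertainties will be amplified in the estimated abundances.
U, s, Vt = np.linalg.svd(K, full_matrices=False)
print("condition number:", s[0] / s[-1])

# (Truncated-)SVD pseudo-inverse solution of the linear problem
n_est = Vt.T @ np.diag(1.0 / s) @ U.T @ S
print(n_est)   # recovers n_true in this noise-free case
```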

  3. Informativeness of Wind Data in Linear Madden-Julian Oscillation Prediction

    DTIC Science & Technology

    2016-08-15

    Linear inverse models (LIMs) are used to explore predictability and information content of the Madden–Julian Oscillation (MJO). Hindcast skill for ... mostly at the largest scales, adds 1–2 days of skill. Keywords: linear inverse modeling; Madden–Julian Oscillation; sub-seasonal prediction. ... that may reflect on the MJO's incompletely understood dynamics. Cavanaugh et al. (2014, hereafter C14) explored the skill of linear inverse ...
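A minimal LIM of the kind referred to above can be sketched from lag covariances of a synthetic time series; the two-variable dynamics below are an assumed stand-in for MJO-related fields:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal linear inverse model (LIM): estimate the propagator
# G(tau) = C(tau) @ inv(C(0)) from lag covariances of the data, then forecast
# x(t + tau) = G @ x(t). The true dynamics here are an assumed stable AR(1).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])              # true propagator (illustrative)
x = np.zeros((5000, 2))
for t in range(1, 5000):
    x[t] = A @ x[t - 1] + 0.1 * rng.standard_normal(2)

X0, X1 = x[:-1], x[1:]
C0 = X0.T @ X0 / len(X0)                # lag-0 covariance
C1 = X1.T @ X0 / len(X0)                # lag-1 covariance
G = C1 @ np.linalg.inv(C0)              # estimated LIM propagator

print(G)                                # approximates A
forecast = G @ x[-1]                    # one-step LIM forecast
```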

  4. Inverse Modelling Problems in Linear Algebra Undergraduate Courses

    ERIC Educational Resources Information Center

    Martinez-Luaces, Victor E.

    2013-01-01

    This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…

  5. Fractional Gaussian model in global optimization

    NASA Astrophysics Data System (ADS)

    Dimri, V. P.; Srivastava, R. P.

    2009-12-01

    The Earth system is inherently non-linear, and it can be characterized well only if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterization of the Earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a Gaussian probability distribution. It is now well established, however, that most physical properties of the Earth follow a power law (fractal distribution); thus, selecting the initial model from a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method is demonstrated by inverting band-limited seismic data with well control. We use a fractal-based probability density function, parameterized by the mean, variance and Hurst coefficient of the model space, to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion methods.
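A minimal sketch of drawing a fractal (power-law) initial model with prescribed mean, variance and Hurst coefficient via spectral synthesis is shown below; it is an illustrative sampler, not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def fractal_model(n, hurst, mean, std):
    """Draw a 1-D model realization with a power-law (fractal) spectrum.

    Spectral synthesis: amplitudes ~ k^-(hurst + 0.5) with random phases give
    fractional-Gaussian-like statistics; mean/std then rescale the draw to the
    assumed model-space statistics (e.g. taken from well logs).
    """
    k = np.fft.rfftfreq(n)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** -(hurst + 0.5)          # power-law amplitude spectrum
    phase = np.exp(2j * np.pi * rng.random(k.size))
    m = np.fft.irfft(amp * phase, n)
    m = (m - m.mean()) / m.std()               # standardize
    return mean + std * m                      # impose prescribed mean/variance

# e.g. a velocity-like initial model with Hurst coefficient 0.7
model = fractal_model(256, hurst=0.7, mean=2500.0, std=150.0)
print(model.mean(), model.std())   # 2500.0 and 150.0 by construction
```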

  6. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are often numerous, conventional methods for solving the inverse problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system anew for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by using these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. By comparing with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. 
Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
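The recycling idea rests on the fact that the Krylov subspace of (J^T J, J^T r) is invariant under the damping shift, so one Lanczos basis can serve every damping parameter. A dense toy sketch of this trick (not the MADS/Julia implementation) is:

```python
import numpy as np

rng = np.random.default_rng(3)

# For the LM normal equations (J^T J + lam*I) x = J^T r, the Krylov subspace
# of (J^T J, J^T r) does not depend on lam, so one Lanczos basis can be
# recycled for every damping value; each lam then needs only a k x k solve.
m, n, k = 200, 100, 30
J = rng.standard_normal((m, n))            # illustrative dense Jacobian
r = rng.standard_normal(m)                 # illustrative residual vector
A, b = J.T @ J, J.T @ r

# Lanczos: build orthonormal basis V and tridiagonal T once
V = np.zeros((n, k + 1))
alpha, beta = np.zeros(k), np.zeros(k + 1)
V[:, 0] = b / np.linalg.norm(b)
for j in range(k):
    w = A @ V[:, j] - (beta[j] * V[:, j - 1] if j > 0 else 0)
    alpha[j] = w @ V[:, j]
    w -= alpha[j] * V[:, j]
    w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # full re-orthogonalization
    beta[j + 1] = np.linalg.norm(w)
    V[:, j + 1] = w / beta[j + 1]
T = np.diag(alpha) + np.diag(beta[1:k], 1) + np.diag(beta[1:k], -1)

# Recycle the basis across damping parameters
errs = []
for lam in (1.0, 0.1, 0.01):
    y = np.linalg.solve(T + lam * np.eye(k), np.linalg.norm(b) * np.eye(k)[:, 0])
    x = V[:, :k] @ y                          # projected LM step
    exact = np.linalg.solve(A + lam * np.eye(n), b)
    errs.append(np.linalg.norm(x - exact) / np.linalg.norm(exact))
print(errs)   # small relative errors for every damping value
```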

  7. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.

  8. Neural-Based Compensation of Nonlinearities in an Airplane Longitudinal Model with Dynamic-Inversion Control

    PubMed Central

    Li, YuHui; Jin, FeiTeng

    2017-01-01

    The inversion design approach is a very useful tool for achieving the decoupling control goal in complex multiple-input multiple-output nonlinear systems, such as airplane and spacecraft models. In this work, the flight control law is proposed using the neural-based inversion design method associated with nonlinear compensation for a general longitudinal model of the airplane. First, the nonlinear mathematical model is converted to an equivalent linear model based on feedback linearization theory. Then, the flight control law integrated with this inversion model is developed to stabilize the nonlinear system and relieve the coupling effect. Afterwards, the inversion control combined with the neural network and the nonlinear portion is presented to improve transient performance and attenuate the uncertain effects of both external disturbances and model errors. Finally, the simulation results demonstrate the effectiveness of this controller. PMID:29410680
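The underlying dynamic-inversion (feedback-linearization) step can be shown on a scalar toy plant; the dynamics and gains below are illustrative assumptions, not the paper's airplane model:

```python
import numpy as np

# Feedback-linearization (dynamic-inversion) sketch for a scalar non-linear
# plant xdot = f(x) + g(x)*u: choosing u = (v - f(x)) / g(x) cancels the
# non-linearity, so the closed loop follows the linear command xdot = v.
f = lambda x: -np.sin(x)           # non-linear plant dynamics (illustrative)
g = lambda x: 1.0 + 0.1 * x**2     # input gain, nonzero everywhere

x, x_ref, dt = 0.0, 1.0, 0.01
for _ in range(2000):
    v = 2.0 * (x_ref - x)          # linear outer-loop control law
    u = (v - f(x)) / g(x)          # dynamic inversion of the plant
    x += dt * (f(x) + g(x) * u)    # equals dt * v exactly after cancellation
print(x)                           # converges to x_ref = 1.0
```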

  9. Incorporation of diet information derived from Bayesian stable isotope mixing models into mass-balanced marine ecosystem models: A case study from the Marennes-Oleron Estuary, France

    EPA Science Inventory

    We investigated the use of output from Bayesian stable isotope mixing models as constraints for a linear inverse food web model of a temperate intertidal seagrass system in the Marennes-Oléron Bay, France. Linear inverse modeling (LIM) is a technique that estimates a complete net...

  10. Probability density of spatially distributed soil moisture inferred from crosshole georadar traveltime measurements

    NASA Astrophysics Data System (ADS)

    Linde, N.; Vrugt, J. A.

    2009-04-01

    Geophysical models are increasingly used in hydrological simulations and inversions, where they are typically treated as an artificial data source with known uncorrelated "data errors". The model appraisal problem in classical deterministic linear and non-linear inversion approaches based on linearization is often addressed by calculating model resolution and model covariance matrices. These measures offer only a limited potential to assign a more appropriate "data covariance matrix" for future hydrological applications, simply because the regularization operators used to construct a stable inverse solution bear a strong imprint on such estimates and because the non-linearity of the geophysical inverse problem is not explored. We present a parallelized Markov Chain Monte Carlo (MCMC) scheme to efficiently derive the posterior spatially distributed radar slowness and water content between boreholes given first-arrival traveltimes. This method is called DiffeRential Evolution Adaptive Metropolis (DREAM_ZS) with snooker updater and sampling from past states. Our inverse scheme does not impose any smoothness on the final solution, and uses uniform prior ranges of the parameters. The posterior distribution of radar slowness is converted into spatially distributed soil moisture values using a petrophysical relationship. To benchmark the performance of DREAM_ZS, we first apply our inverse method to a synthetic two-dimensional infiltration experiment using 9421 traveltimes contaminated with Gaussian errors and 80 different model parameters, corresponding to a model discretization of 0.3 m × 0.3 m. After this, the method is applied to field data acquired in the vadose zone during snowmelt. This work demonstrates that fully non-linear stochastic inversion can be applied with few limiting assumptions to a range of common two-dimensional tomographic geophysical problems. 
The main advantage of DREAM_ZS is that it provides a full view of the posterior distribution of spatially distributed soil moisture, which is key to appropriately treat geophysical parameter uncertainty and infer hydrologic models.
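The differential-evolution proposal that distinguishes DREAM-type samplers from classical Metropolis schemes can be sketched on a stand-in two-parameter posterior (the target and tuning below are assumptions, not the radar inversion itself):

```python
import numpy as np

rng = np.random.default_rng(4)

# Differential-evolution MCMC sketch (the proposal style underlying DREAM):
# each chain proposes jumps from the difference of two other chains, which
# self-adapts the proposal scale and orientation to the posterior.
def log_target(x):                 # stand-in posterior: correlated 2-D Gaussian
    C = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
    return -0.5 * x @ np.linalg.solve(C, x)

nchains, ndim = 8, 2
gamma = 2.38 / np.sqrt(2 * ndim)   # standard DE-MC jump scale
X = rng.standard_normal((nchains, ndim))
samples = []
for it in range(4000):
    for i in range(nchains):
        a, b = rng.choice([j for j in range(nchains) if j != i], 2, replace=False)
        prop = X[i] + gamma * (X[a] - X[b]) + 1e-6 * rng.standard_normal(ndim)
        if np.log(rng.random()) < log_target(prop) - log_target(X[i]):
            X[i] = prop
    if it > 1000:                  # discard burn-in
        samples.append(X.copy())
S = np.concatenate(samples)
print(S.mean(axis=0), np.corrcoef(S.T)[0, 1])   # ~[0, 0] and ~0.8
```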

  11. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  12. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  13. A systematic linear space approach to solving partially described inverse eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Hu, Sau-Lon James; Li, Haujun

    2008-06-01

    Most applications of the inverse eigenvalue problem (IEP), which concerns the reconstruction of a matrix from prescribed spectral data, are associated with special classes of structured matrices. Solving the IEP requires one to satisfy both the spectral constraint and the structural constraint. If the spectral constraint consists of only one or few prescribed eigenpairs, this kind of inverse problem has been referred to as the partially described inverse eigenvalue problem (PDIEP). This paper develops an efficient, general and systematic approach to solve the PDIEP. Basically, the approach, applicable to various structured matrices, converts the PDIEP into an ordinary inverse problem that is formulated as a set of simultaneous linear equations. While solving simultaneous linear equations for model parameters, the singular value decomposition method is applied. Because of the conversion to an ordinary inverse problem, other constraints associated with the model parameters can be easily incorporated into the solution procedure. The detailed derivation and numerical examples to implement the newly developed approach to symmetric Toeplitz and quadratic pencil (including mass, damping and stiffness matrices of a linear dynamic system) PDIEPs are presented. Excellent numerical results for both kinds of problem are achieved under the situations that have either unique or infinitely many solutions.
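For a symmetric Toeplitz matrix the conversion described above is particularly transparent: the eigen-equation is linear in the first-row entries, so a prescribed eigenpair yields a set of simultaneous linear equations solvable by SVD/least squares. A small sketch:

```python
import numpy as np

rng = np.random.default_rng(5)

# PDIEP sketch for a symmetric Toeplitz matrix: the constraint T @ v = lam * v
# is linear in the first-row entries t[0..n-1], so one prescribed eigenpair
# gives simultaneous linear equations, solved here via least squares (SVD).
n = 5
v = rng.standard_normal(n)     # prescribed eigenvector (illustrative)
lam = 2.0                      # prescribed eigenvalue

# Coefficient matrix: equation i, unknown t[k] collects v[j] over |i - j| == k
M = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        M[i, abs(i - j)] += v[j]

t, *_ = np.linalg.lstsq(M, lam * v, rcond=None)

# Reconstruct T from its first row and verify the prescribed eigenpair
T = np.empty((n, n))
for i in range(n):
    for j in range(n):
        T[i, j] = t[abs(i - j)]
print(np.linalg.norm(T @ v - lam * v))   # residual, ~0 when M has full rank
```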

  14. [Baseline correction of spectrum for the inversion of chlorophyll-a concentration in turbid water].

    PubMed

    Wei, Yu-Chun; Wang, Guo-Xiang; Cheng, Chun-Mei; Zhang, Jing; Sun, Xiao-Peng

    2012-09-01

    Suspended particulate material is the main factor affecting remote-sensing inversion of chlorophyll-a concentration (Chla) in turbid water. Based on the optical properties of suspended material in water, the present paper proposes a linear baseline correction method to weaken the suspended-particle contribution to the spectrum above the turbid water surface. The linear baseline is defined as the line connecting the reflectance values at 450 and 750 nm, and baseline correction consists of subtracting this baseline from the spectral reflectance. Analysis of in situ field data from Meiliangwan, Taihu Lake, in April 2011 and March 2010 shows that linear baseline correction of the spectrum can improve the inversion precision of Chla and produce better model diagnostics. For the March 2010 data, the RMSE of the band-ratio model built from the original spectrum is 4.11 mg x m(-3), while that built from the baseline-corrected spectrum is 3.58 mg x m(-3). Meanwhile, the residual distribution and homoscedasticity of the model built from the baseline-corrected spectrum are obviously improved. The model RMSE for April 2011 shows a similar result. The authors suggest using linear baseline correction as the spectrum-processing method to improve Chla inversion accuracy in turbid water without an algal bloom.
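The baseline correction itself is a one-line operation; a sketch on a synthetic spectrum (the wavelength grid and reflectance values are illustrative):

```python
import numpy as np

# Linear baseline correction: the baseline is the straight line connecting the
# reflectance at 450 nm and 750 nm, and the corrected spectrum is reflectance
# minus that line. The synthetic spectrum below is illustrative only.
wl = np.arange(400, 901, 1.0)                          # wavelength grid, nm
refl = 0.02 + 0.00005 * (wl - 400)                     # sloping background
refl = refl + 0.01 * np.exp(-0.5 * ((wl - 700) / 15) ** 2)  # reflectance peak

i450, i750 = np.searchsorted(wl, [450, 750])
slope = (refl[i750] - refl[i450]) / (wl[i750] - wl[i450])
baseline = refl[i450] + slope * (wl - wl[i450])        # line through 450/750 nm
corrected = refl - baseline                            # baseline-corrected

print(corrected[i450], corrected[i750])                # both 0 by construction
```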

  15. Frequency-domain full-waveform inversion with non-linear descent directions

    NASA Astrophysics Data System (ADS)

    Geng, Yu; Pan, Wenyong; Innanen, Kristopher A.

    2018-05-01

    Full-waveform inversion (FWI) is a highly non-linear inverse problem, normally solved iteratively, with each iteration involving an update constructed through linear operations on the residuals. Incorporating a flexible degree of non-linearity within each update may have important consequences for convergence rates, determination of low model wavenumbers and discrimination of parameters. We examine one approach for doing so, wherein higher order scattering terms are included within the sensitivity kernel during the construction of the descent direction, adjusting it away from that of the standard Gauss-Newton approach. These scattering terms are naturally admitted when we construct the sensitivity kernel by varying not the current but the to-be-updated model at each iteration. Linear and/or non-linear inverse scattering methodologies allow these additional sensitivity contributions to be computed from the current data residuals within any given update. We show that in the presence of pre-critical reflection data, the error in a second-order non-linear update to a background of s0 is, in our scheme, proportional to at most (Δs/s0)3 in the actual parameter jump Δs causing the reflection. In contrast, the error in a standard Gauss-Newton FWI update is proportional to (Δs/s0)2. For numerical implementation of more complex cases, we introduce a non-linear frequency-domain scheme, with an inner and an outer loop. A perturbation is determined from the data residuals within the inner loop, and a descent direction based on the resulting non-linear sensitivity kernel is computed in the outer loop. We examine the response of this non-linear FWI using acoustic single-parameter synthetics derived from the Marmousi model. 
The inverted results vary depending on data frequency ranges and initial models, but we conclude that the non-linear FWI has the capability to generate high-resolution model estimates in both shallow and deep regions, and to converge rapidly, relative to a benchmark FWI approach involving the standard gradient.

  16. Modeling and Inverse Controller Design for an Unmanned Aerial Vehicle Based on the Self-Organizing Map

    NASA Technical Reports Server (NTRS)

    Cho, Jeongho; Principe, Jose C.; Erdogmus, Deniz; Motter, Mark A.

    2005-01-01

    The next generation of aircraft will have dynamics that vary considerably over the operating regime, and a single controller will have difficulty meeting the design specifications. In this paper, a SOM-based local linear modeling scheme for an unmanned aerial vehicle (UAV) is developed to design a set of inverse controllers. The SOM selects the operating regime depending only on the embedded output space information and avoids normalization of the input data. Each local linear model is associated with a linear controller, which is easy to design. Switching of the controllers is done synchronously with the active local linear model that tracks the different operating conditions. The proposed multiple modeling and control strategy has been successfully tested in a simulator that models the LoFLYTE UAV.

  17. A Non-linear Geodetic Data Inversion Using ABIC for Slip Distribution on a Fault With an Unknown dip Angle

    NASA Astrophysics Data System (ADS)

    Fukahata, Y.; Wright, T. J.

    2006-12-01

    We developed a method of geodetic data inversion for slip distribution on a fault with an unknown dip angle. When the fault geometry is unknown, the problem of geodetic data inversion is non-linear. A common strategy for obtaining a slip distribution is to first determine the fault geometry by minimizing the squared misfit under the assumption of uniform slip on a rectangular fault, and then apply the usual linear inversion technique to estimate a slip distribution on the determined fault. It is not guaranteed, however, that the fault determined under the assumption of uniform slip gives the best fault geometry for a spatially variable slip distribution. In addition, in obtaining a uniform-slip fault model, we have to simultaneously determine the values of nine mutually dependent parameters, which is a highly non-linear, complicated process. Although the inverse problem is non-linear for cases with unknown fault geometries, the non-linearity is actually weak when we can assume the fault surface to be flat. In particular, when a clear fault trace is observed on the Earth's surface after an earthquake, we can precisely estimate the strike and the location of the fault; in this case only the dip angle has large ambiguity. In geodetic data inversion we usually need to introduce smoothness constraints in order to reconcile, in a natural way, the reciprocal requirements of model resolution and estimation errors. Strictly speaking, the inverse problem with smoothness constraints is also non-linear, even if the fault geometry is known. This non-linearity has been resolved by introducing Akaike's Bayesian Information Criterion (ABIC), with which the optimal relative weight of observed data to smoothness constraints is objectively determined. In this study, using ABIC also to determine the optimal dip angle, we resolved the non-linearity of the inverse problem. 
We applied the method to InSAR data of the 1995 Dinar, Turkey earthquake and obtained a much shallower dip angle than previously estimated.

  18. Bayesian Approach to the Joint Inversion of Gravity and Magnetic Data, with Application to the Ismenius Area of Mars

    NASA Technical Reports Server (NTRS)

    Jewell, Jeffrey B.; Raymond, C.; Smrekar, S.; Millbury, C.

    2004-01-01

    This viewgraph presentation reviews a Bayesian approach to the inversion of gravity and magnetic data with specific application to the Ismenius Area of Mars. Many inverse problems encountered in geophysics and planetary science are well known to be non-unique (e.g. inversion of gravity data for the density structure of a body). In hopes of reducing the non-uniqueness of solutions, there has been interest in the joint analysis of data. An example is the joint inversion of gravity and magnetic data, with the assumption that the same physical anomalies generate both the observed magnetic and gravitational anomalies. In this talk, we formulate the joint analysis of different types of data in a Bayesian framework and apply the formalism to the inference of the density and remanent magnetization structure for a local region in the Ismenius area of Mars. The Bayesian approach allows prior information or constraints in the solutions to be incorporated in the inversion, with the "best" solutions being those whose forward predictions most closely match the data while remaining consistent with assumed constraints. The application of this framework to the inversion of gravity and magnetic data on Mars reveals two typical challenges: the forward predictions of the data have a linear dependence on some of the quantities of interest, and a non-linear dependence on others (termed the "linear" and "non-linear" variables, respectively). For observations with Gaussian noise, a Bayesian approach to inversion for "linear" variables reduces to a linear filtering problem, with an explicitly computable "error" matrix. However, for models whose forward predictions have non-linear dependencies, inference is no longer given by such a simple linear problem, and moreover, the uncertainty in the solution is no longer completely specified by a computable "error matrix". 
It is therefore important to develop methods for sampling from the full Bayesian posterior to provide a complete and statistically consistent picture of model uncertainty, and what has been learned from observations. We will discuss advanced numerical techniques, including Monte Carlo Markov

  19. Applying a probabilistic seismic-petrophysical inversion and two different rock-physics models for reservoir characterization in offshore Nile Delta

    NASA Astrophysics Data System (ADS)

    Aleardi, Mattia

    2018-01-01

    We apply a two-step probabilistic seismic-petrophysical inversion for the characterization of a clastic, gas-saturated, reservoir located in offshore Nile Delta. In particular, we discuss and compare the results obtained when two different rock-physics models (RPMs) are employed in the inversion. The first RPM is an empirical, linear model directly derived from the available well log data by means of an optimization procedure. The second RPM is a theoretical, non-linear model based on the Hertz-Mindlin contact theory. The first step of the inversion procedure is a Bayesian linearized amplitude versus angle (AVA) inversion in which the elastic properties, and the associated uncertainties, are inferred from pre-stack seismic data. The estimated elastic properties constitute the input to the second step that is a probabilistic petrophysical inversion in which we account for the noise contaminating the recorded seismic data and the uncertainties affecting both the derived rock-physics models and the estimated elastic parameters. In particular, a Gaussian mixture a-priori distribution is used to properly take into account the facies-dependent behavior of petrophysical properties, related to the different fluid and rock properties of the different litho-fluid classes. In the synthetic and in the field data tests, the very minor differences between the results obtained by employing the two RPMs, and the good match between the estimated properties and well log information, confirm the applicability of the inversion approach and the suitability of the two different RPMs for reservoir characterization in the investigated area.

  20. A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Zuo, B.; Hu, X.; Li, H.

    2011-12-01

    A model-enhancement technique is proposed to sharpen the edges and details of geophysical inversion models without introducing any additional information. First, the theoretical correctness of the proposed geophysical inversion model-enhancement technique is discussed. An inversion MRM (model resolution matrix) convolution approximating the PSF (point spread function) is designed to demonstrate the correctness of the deconvolution model-enhancement method. Then, a total-variation-regularized blind-deconvolution enhancement algorithm for geophysical inversion models is proposed. In previous research, Oldenburg et al. demonstrated the connection between the PSF and the geophysical inverse solution, and Alumbaugh et al. proposed that more information could be provided by the PSF if we return to the idea of it behaving as an averaging or low-pass filter. We consider the PSF as a low-pass filter to enhance the inversion model, based on the theory of the PSF convolution approximation. Both 1D linear and 2D magnetotelluric inversion examples are used to assess the validity of the theory and the algorithm. To test the proposed PSF convolution approximation theory, the 1D linear inversion problem is considered; the relative convolution-approximation error is only 0.15%. A 2D synthetic model-enhancement experiment is also presented. After the deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, and the enhancement result is closer to the actual model than the original inversion model according to the numerical statistical analysis. Moreover, artifacts in the inversion model are suppressed, and the overall precision of the model increases by 75%. All of the experiments show that the structural details and the numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficient between the enhanced inversion model and the actual model is shown in Fig. 1. The figure illustrates that more of the detailed structure of the actual model is recovered by the proposed enhancement algorithm, which can give a clearer insight into inversion results and support better informed decisions.
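The PSF convolution approximation described above can be checked numerically. The following toy 1-D sketch (all operators and numbers are hypothetical, so it will not reproduce the 0.15% figure) builds a Tikhonov model resolution matrix R = (G^T G + a I)^(-1) G^T G, applies it to a boxcar model, and compares the result with convolving the model by a central row of R (the PSF):

```python
import numpy as np

# Toy 1-D check of the PSF convolution approximation: if R is nearly
# shift-invariant, R @ m_true is well approximated by convolving m_true
# with a single central row of R, which justifies treating the PSF as a
# low-pass filter. The Gaussian smoothing operator G is made up.
n = 101
idx = np.arange(n)
G = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 3.0) ** 2)  # smoothing forward operator
R = np.linalg.solve(G.T @ G + 1e-2 * np.eye(n), G.T @ G)       # model resolution matrix
m_true = np.zeros(n)
m_true[40:60] = 1.0                                            # boxcar anomaly
m_rm = R @ m_true                                              # exact R m
psf = R[n // 2]                                                # central row acts as PSF
m_conv = np.convolve(m_true, psf, mode="same")                 # shift-invariant approximation
err = np.linalg.norm(m_conv - m_rm) / np.linalg.norm(m_rm)
print(f"relative convolution-approximation error: {err:.2%}")
```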

  1. Simultaneous source and attenuation reconstruction in SPECT using ballistic and single scattering data

    NASA Astrophysics Data System (ADS)

    Courdurier, M.; Monard, F.; Osses, A.; Romero, F.

    2015-09-01

    In medical single-photon emission computed tomography (SPECT) imaging, we seek to simultaneously obtain the internal radioactive sources and the attenuation map using not only ballistic measurements but also first-order scattering measurements, assuming a very specific scattering regime. The problem is modeled using the radiative transfer equation by means of an explicit non-linear operator that gives the ballistic and scattering measurements as a function of the radioactive source and attenuation distributions. First, by differentiating this non-linear operator we obtain a linearized inverse problem. Then, under regularity hypotheses for the source distribution and attenuation map, and considering small attenuations, we rigorously prove that the linear operator is invertible and compute its inverse explicitly. This allows us to prove local uniqueness for the non-linear inverse problem. Finally, using the previous inversion result for the linear operator, we propose a new type of iterative algorithm for simultaneous source and attenuation recovery for SPECT based on the Neumann series and a Newton-Raphson algorithm.
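A minimal sketch of the Neumann-series idea (generic, not the SPECT operator): when the linearized problem can be written A x = b with A = I - P and ||P|| < 1, which is the role the small-attenuation regime plays, the series x = sum_k P^k b converges to the solution:

```python
import numpy as np

# Generic Neumann-series solve of A x = b with A = I - P, ||P|| < 1.
# The matrix P here is a random small perturbation, not a physical operator.
rng = np.random.default_rng(1)
n = 50
P = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)  # contraction: spectral norm < 1
A = np.eye(n) - P                                   # linearized operator
x_true = rng.standard_normal(n)
b = A @ x_true

x, term = np.zeros(n), b.copy()
for _ in range(200):        # accumulate x = sum_k P^k b
    x += term
    term = P @ term         # next Neumann term
print("Neumann series recovers x_true:", np.allclose(x, x_true))
```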

  2. ANNIT - An Efficient Inversion Algorithm based on Prediction Principles

    NASA Astrophysics Data System (ADS)

    Růžek, B.; Kolář, P.

    2009-04-01

    The solution of inverse problems is a meaningful job in geophysics. The amount of data is continuously increasing, methods of modeling are being improved, and computing facilities are making great technical progress. The development of new and efficient algorithms and computer codes for both forward and inverse modeling therefore remains relevant. ANNIT contributes to this stream as a tool for the efficient solution of a set of non-linear equations. Typical geophysical problems are based on a parametric approach: the system is characterized by a vector of parameters p, and its response by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex; the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally and in general may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D and M are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G exists; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The inverse mapping is approximated by three methods: (a) linear regression, (b) the Radial Basis Function Network technique, and (c) linear prediction (also known as "kriging"). ANNIT also has a built-in archive of already evaluated models; archive models are re-used in a suitable way so that the number of forward evaluations is minimized. ANNIT is implemented in both MATLAB and SCILAB. Numerical tests show good performance of the algorithm. Both versions and the documentation are freely available on the Internet. The goal of this presentation is to offer the algorithm and computer codes to anybody interested in the solution of inverse problems.
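The ANNIT cycle described above can be caricatured in a few lines. In this toy sketch the forward model, the population sizes and the local-regression variant are all assumptions, not the actual code: an affine approximation of the inverse mapping is fitted over the archive members whose data vectors lie closest to the observation (the "subspace"), a candidate is predicted, evaluated once with the forward model, and archived:

```python
import numpy as np

# Toy surrogate-inversion loop in the spirit of ANNIT (details assumed).
rng = np.random.default_rng(2)
F = lambda p: np.array([p[0] + 0.1 * p[1] ** 2, p[1] + 0.1 * np.sin(p[0])])
p_true = np.array([1.0, 2.0])
d_obs = F(p_true)

archive_p = [rng.uniform(-3, 3, 2) for _ in range(20)]   # initial population
archive_d = [F(p) for p in archive_p]
for _ in range(15):
    P, D = np.array(archive_p), np.array(archive_d)
    near = np.argsort(np.linalg.norm(D - d_obs, axis=1))[:10]  # local subspace
    A = np.column_stack([np.ones(near.size), D[near]])
    coef, *_ = np.linalg.lstsq(A, P[near], rcond=None)   # affine inverse mapping G
    p_cand = np.concatenate([[1.0], d_obs]) @ coef       # predicted candidate
    archive_p.append(p_cand)
    archive_d.append(F(p_cand))                          # one forward evaluation per cycle
misfit = np.linalg.norm(archive_d[-1] - d_obs)
print(f"final data misfit: {misfit:.2e}")
```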

  3. 3D CSEM inversion based on goal-oriented adaptive finite element method

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Key, K.

    2016-12-01

    We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of the observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on a goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme in which independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt a dual-grid approach in which the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent of the fine meshing generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multiple scales of structure and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the mapping of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library.
We show the scalability, reliability and the potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled source EM data generated for a complex 3D offshore model with significant seafloor topography.
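A generic regularized Gauss-Newton step of the kind described (a sketch on a made-up forward model, not MARE3DEM) solves (J^T J + lam L^T L) dm = J^T r at each iteration, with J the Jacobian, r the data residual and L a roughness operator:

```python
import numpy as np

# Regularized Gauss-Newton on a toy nonlinear forward d = W exp(m).
# W, the model size and the regularization weight are all made up.
def gauss_newton_step(J, r, L, lam):
    return np.linalg.solve(J.T @ J + lam * (L.T @ L), J.T @ r)

rng = np.random.default_rng(3)
n, nd = 30, 20
W = rng.random((nd, n))
L = (np.eye(n) - np.eye(n, k=1))[:-1]      # first-difference roughness operator
m_true = np.sin(np.linspace(0, np.pi, n))
d_obs = W @ np.exp(m_true)                 # hypothetical nonlinear forward

m = np.zeros(n)
for _ in range(50):
    J = W * np.exp(m)                      # d f_i / d m_j = W_ij exp(m_j)
    m = m + gauss_newton_step(J, d_obs - W @ np.exp(m), L, 1e-2)
misfit = np.linalg.norm(d_obs - W @ np.exp(m)) / np.linalg.norm(d_obs)
print(f"final relative data misfit: {misfit:.2e}")
```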

  4. Iterative algorithms for a non-linear inverse problem in atmospheric lidar

    NASA Astrophysics Data System (ADS)

    Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto

    2017-08-01

    We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative, and the data follow a Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but fail to take the noise statistics into account. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To this end, we consider the non-linear inverse problem with a non-negativity constraint and propose two iterative algorithms derived from the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
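For the simpler linear Poisson model y ~ Poisson(A x) with x >= 0, the Karush-Kuhn-Tucker conditions lead to the classic multiplicative ML-EM (Richardson-Lucy) update; the sketch below (made-up sizes, and not the authors' exponential-of-linear algorithm) illustrates that KKT-derived multiplicative structure:

```python
import numpy as np

# Multiplicative ML-EM update for a linear Poisson inverse problem.
# The update preserves non-negativity and monotonically increases the
# Poisson likelihood; A and the true profile are random stand-ins.
rng = np.random.default_rng(4)
n, m = 30, 120
A = rng.random((m, n))
x_true = rng.gamma(2.0, 1.0, n)
y = rng.poisson(A @ x_true)                     # Poisson-distributed counts

x = np.ones(n)                                  # positive starting point
for _ in range(500):
    x *= A.T @ (y / (A @ x)) / A.sum(axis=0)    # multiplicative KKT update
rel = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"non-negativity preserved: {bool(np.all(x >= 0))}, relative error: {rel:.2f}")
```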

  5. Inverse scattering method and soliton double solution family for the general symplectic gravity model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao Yajun

    A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the method effective and straightforward to apply. As an application, a concrete family of soliton double solutions for the considered theory is obtained.

  6. Steady induction effects in geomagnetism. Part 1B: Geomagnetic estimation of steady surficial core motions: A non-linear inverse problem

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1993-01-01

    The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.

  7. Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.

    2010-12-01

    Almost all geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion with non-approximated physics, solving for the probability distribution functions (pdfs) that describe the solution uncertainty, generally requires sampling-based Monte Carlo methods that are computationally intractable for most large problems. To solve such problems, physical relationships are usually linearized, leading to efficiently solved (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically, incorporating any available prior information, using mixture density networks (driven by banks of neural networks), provided the problem can be decomposed into many small inverse problems. I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully probabilistic global tomography model of the Earth's crust and mantle, and second, inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.

  8. Functional Effects of Parasites on Food Web Properties during the Spring Diatom Bloom in Lake Pavin: A Linear Inverse Modeling Analysis

    PubMed Central

    Niquil, Nathalie; Jobard, Marlène; Saint-Béat, Blanche; Sime-Ngando, Télesphore

    2011-01-01

    This study is the first assessment of the quantitative impact of parasitic chytrids on a planktonic food web. We used a carbon-based food web model of Lake Pavin (Massif Central, France) to investigate the effects of chytrids during the spring diatom bloom by developing models with and without chytrids. Linear inverse modelling procedures were employed to estimate undetermined flows in the lake. The Monte Carlo Markov chain linear inverse modelling procedure provided estimates of the ranges of model-derived fluxes. Model results support recent theories on the probable impact of parasites on food web function. In the lake, during spring, when ‘inedible’ algae (unexploited by planktonic herbivores) were the dominant primary producers, the epidemic growth of chytrids significantly reduced the sedimentation loss of algal carbon to the detritus pool through the production of grazer-exploitable zoospores. We also review some theories about the potential influence of parasites on ecological network properties and argue that parasitism contributes to longer carbon path lengths, higher levels of activity and specialization, and lower recycling. Considering the “structural asymmetry” hypothesis as a stabilizing pattern, chytrids should contribute to the stability of aquatic food webs. PMID:21887240
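A linear inverse model in this spirit can be reduced to its bare bones: non-negative flows constrained by linear mass balances. The toy web below (hypothetical compartments and numbers, solved by non-negative least squares rather than the study's Monte Carlo Markov chain sampling) finds one feasible flow pattern:

```python
import numpy as np
from scipy.optimize import nnls

# Toy LIM: carbon flows x >= 0 satisfying mass-balance constraints A x = b.
# Hypothetical compartments: phytoplankton carbon splits between grazers,
# detritus and chytrids, and infected carbon returns to grazers as zoospores.
# unknowns: x = [phy->grazers, phy->detritus, phy->chytrids, chytrids->grazers]
A = np.array([
    [1.0, 1.0, 1.0, 0.0],    # phytoplankton balance: production fully exported
    [0.0, 0.0, 1.0, -1.0],   # chytrid balance: intake equals zoospore output
])
b = np.array([100.0, 0.0])   # primary production = 100 (arbitrary carbon units)
x, resid = nnls(A, b)        # one feasible non-negative flow solution
print("flows:", x.round(3), "residual:", round(resid, 6))
```

A sampling procedure such as the MCMC used in the study would map out the full range of flows consistent with these constraints instead of a single solution.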

  9. Time-domain induced polarization - an analysis of Cole-Cole parameter resolution and correlation using Markov Chain Monte Carlo inversion

    NASA Astrophysics Data System (ADS)

    Madsen, Line Meldgaard; Fiandaca, Gianluca; Auken, Esben; Christiansen, Anders Vest

    2017-12-01

    The application of time-domain induced polarization (TDIP) is increasing with advances in acquisition techniques, data processing and spectral inversion schemes. An inversion of TDIP data for the spectral Cole-Cole parameters is a non-linear problem, but by applying a 1-D Markov chain Monte Carlo (MCMC) inversion algorithm, a full non-linear uncertainty analysis of the parameters and the parameter correlations can be accessed. This is essential for understanding to what degree the spectral Cole-Cole parameters can be resolved from TDIP data. MCMC inversions of synthetic TDIP data yield bell-shaped probability distributions with a single maximum, showing that the Cole-Cole parameters can be resolved from TDIP data if an acquisition range above two decades in time is applied. Linear correlations between the Cole-Cole parameters are observed, and as the acquisition range decreases, the correlations increase and become non-linear. It is further investigated how the waveform and parameter values influence the resolution of the Cole-Cole parameters. A limiting factor is the value of the frequency exponent, C. As C decreases, the resolution of all the Cole-Cole parameters decreases and the results become increasingly non-linear. While the value of the time constant, τ, must lie within the acquisition range for the parameters to be resolved well, the choice between a 50 per cent and a 100 per cent duty cycle for the current injection does not influence the parameter resolution. The limits of resolution and linearity are also studied in a comparison between the MCMC and a linearized gradient-based inversion approach. The two methods are consistent for resolved models, but the linearized approach tends to underestimate the uncertainties of poorly resolved parameters due to the associated non-linear features. Finally, an MCMC inversion of 1-D field data verifies that spectral Cole-Cole parameters can also be resolved from time-domain field measurements.
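A minimal Metropolis-Hastings sketch (not the authors' TDIP code) shows how such an MCMC inversion exposes marginal uncertainties and parameter correlations; the stretched-exponential decay below is only a loose stand-in for a Cole-Cole response, and all numbers are made up:

```python
import numpy as np

# Metropolis-Hastings sampling of the posterior of a stretched-exponential
# decay d(t) = m0 * exp(-(t / tau)**C) fitted to noisy synthetic data.
rng = np.random.default_rng(5)
t = np.logspace(-3, 1, 30)                         # ~4 decades of gate times
fwd = lambda p: p[0] * np.exp(-(t / p[1]) ** p[2])
p_true = np.array([100.0, 0.1, 0.5])               # m0, tau [s], C
sigma = 1.0
d_obs = fwd(p_true) + sigma * rng.standard_normal(t.size)

def log_post(p):
    if np.any(p <= 0) or p[2] > 1.0:               # flat prior on a box
        return -np.inf
    return -0.5 * np.sum(((d_obs - fwd(p)) / sigma) ** 2)

step = np.array([1.0, 0.01, 0.02])                 # symmetric random-walk steps
p, lp = p_true.copy(), log_post(p_true)            # start at truth for brevity
samples = []
for _ in range(20000):
    q = p + step * rng.standard_normal(3)
    lq = log_post(q)
    if np.log(rng.random()) < lq - lp:             # Metropolis acceptance rule
        p, lp = q, lq
    samples.append(p.copy())
S = np.array(samples[5000:])                       # discard burn-in
print("posterior std (m0, tau, C):", S.std(axis=0).round(4))
print("tau-C correlation:", np.corrcoef(S.T)[1, 2].round(2))
```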

  10. Inverse kinematics of a dual linear actuator pitch/roll heliostat

    NASA Astrophysics Data System (ADS)

    Freeman, Joshua; Shankar, Balakrishnan; Sundaram, Ganesh

    2017-06-01

    This work presents a simple, computationally efficient inverse kinematics solution for a pitch/roll heliostat using two linear actuators. The heliostat design and kinematics have been developed, modeled and tested using computer simulation software. A physical heliostat prototype was fabricated to validate the theoretical computations and data. Pitch/roll heliostats have numerous advantages including reduced cost potential and reduced space requirements, with a primary disadvantage being the significantly more complicated kinematics, which are solved here. Novel methods are applied to simplify the inverse kinematics problem which could be applied to other similar problems.

  11. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a computationally expensive evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically so that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
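The modeling-error quantification can be sketched generically: compare the accurate and approximate forwards on samples from the prior, and fold the bias and covariance of the discrepancy into the likelihood covariance. All operators and sizes below are made up:

```python
import numpy as np

# Probabilistic quantification of the error introduced by a fast
# approximate forward: estimate bias dT and covariance CT of F - F_app
# over prior samples, then use C_total = Cd + CT in the likelihood.
rng = np.random.default_rng(6)
n_model, n_data, n_train = 10, 5, 500
Gm = rng.standard_normal((n_data, n_model))
F = lambda m: Gm @ m + 0.1 * np.sin(Gm @ m)       # "accurate" forward (toy)
F_app = lambda m: Gm @ m                          # fast approximate forward

M = rng.standard_normal((n_train, n_model))       # samples from the prior
D = np.array([F(m) - F_app(m) for m in M])        # modeling-error realizations
dT, CT = D.mean(axis=0), np.cov(D.T)              # bias and covariance
Cd = 0.01 * np.eye(n_data)                        # measurement-noise covariance
C_total = Cd + CT                                 # combined covariance for inversion
print("modeling-error std per datum:", np.sqrt(np.diag(CT)).round(3))
```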

  12. Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model

    NASA Astrophysics Data System (ADS)

    Mejer Hansen, Thomas

    2017-04-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. In practice, however, these methods can be associated with huge computational costs that limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a computationally complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically so that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival travel time inversion of cross hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic travel time picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.

  13. Bayesian inversion of refraction seismic traveltime data

    NASA Astrophysics Data System (ADS)

    Ryberg, T.; Haberland, Ch

    2018-03-01

    We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when the far-offset observations are used, are known for experimental geometries that are very poor, highly ill-posed and far from ideal. As a consequence, the structural resolution quickly degrades with depth. Conventional inversion techniques, based on regularization, potentially suffer from the choice of inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and explore the model space only locally. McMC techniques are used for exhaustive sampling of the model space without prior knowledge (or assumptions) of the inversion parameters, resulting in a large number of models fitting the observations. Statistical analysis of these models allows us to derive an average (reference) solution and its standard deviation, thus providing uncertainty estimates of the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not allow a reference solution and error map to be derived by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution from the posterior values where ray coverage is poor, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant-slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey in Northern Namibia and compared to conventional tomography.
An inversion test for a synthetic data set from a known model is also presented.
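The Voronoi-polygon discretization mentioned above amounts to a nearest-node lookup: each point of the medium takes the constant slowness of its nearest Voronoi node. A minimal sketch with hypothetical node positions and slowness values:

```python
import numpy as np

# Voronoi-cell model parameterization: nearest-node assignment defines
# constant-slowness cells. Node count, extents and values are made up.
rng = np.random.default_rng(7)
nodes = rng.uniform(0, 10, (15, 2))            # Voronoi node positions [km]
slow = rng.uniform(0.2, 0.5, 15)               # slowness per cell [s/km]

xx, zz = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
pts = np.column_stack([xx.ravel(), zz.ravel()])
d2 = ((pts[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
model = slow[d2.argmin(axis=1)].reshape(xx.shape)  # piecewise-constant model
print("gridded model shape:", model.shape)
```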

  14. Mixed linear-nonlinear fault slip inversion: Bayesian inference of model, weighting, and smoothing parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, J.; Johnson, K. M.

    2009-12-01

    Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is no strong a priori information on the geometry of the fault that produced the earthquake, and the data are not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through a combination of analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred; slip likely occurred somewhere in a volume extending 5-10 km in either direction normal to the fault plane.
We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress drop.
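The mixed linear/non-linear strategy, sampling only the non-linear geometry parameter while solving for the linear slip analytically at each draw, can be sketched on a toy problem (the kernel and all numbers are hypothetical, and a grid scan stands in for Monte Carlo sampling):

```python
import numpy as np

# Toy mixed linear/non-linear inversion: d = G(theta) s + noise, with the
# geometry theta non-linear and the slip s linear. For each candidate
# theta, s is obtained analytically by least squares.
rng = np.random.default_rng(8)
x = np.linspace(-10, 10, 40)                       # station coordinates [km]
G = lambda th: 1.0 / (1.0 + ((x[:, None] - th) / 2.0) ** 2)  # hypothetical kernel
theta_true, s_true = 1.5, np.array([2.0])
d = G(theta_true) @ s_true + 0.01 * rng.standard_normal(x.size)

best = None
for theta in np.linspace(-5, 5, 201):              # grid stand-in for sampling
    A = G(theta)
    s, *_ = np.linalg.lstsq(A, d, rcond=None)      # analytical linear solve
    mis = np.linalg.norm(d - A @ s)
    if best is None or mis < best[0]:
        best = (mis, theta, s[0])
print(f"theta estimate: {best[1]:.2f}, slip estimate: {best[2]:.2f}")
```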

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitanidis, Peter

    As large-scale, commercial storage projects become operational, the problem of utilizing information from diverse sources becomes more critically important. In this project, we developed, tested, and applied an advanced joint data inversion system for CO2 storage modeling with large data sets for use in site characterization and real-time monitoring. Emphasis was on the development of advanced and efficient computational algorithms for joint inversion of hydro-geophysical data, coupled with state-of-the-art forward process simulations. The developed system consists of (1) inversion tools using characterization data, such as 3D seismic survey (amplitude images), borehole log and core data, as well as hydraulic, tracer and thermal tests before CO2 injection, (2) joint inversion tools for updating the geologic model with the distribution of rock properties, thus reducing uncertainty, using hydro-geophysical monitoring data, and (3) highly efficient algorithms for directly solving the dense or sparse linear algebra systems derived from the joint inversion. The system combines methods from stochastic analysis, fast linear algebra, and high performance computing. The developed joint inversion tools have been tested through synthetic CO2 storage examples.

  16. Elastic robot control - Nonlinear inversion and linear stabilization

    NASA Technical Reports Server (NTRS)

    Singh, S. N.; Schy, A. A.

    1986-01-01

    An approach to the control of elastic robot systems for space applications using inversion, servocompensation, and feedback stabilization is presented. For simplicity, a robot arm (PUMA type) with three rotational joints is considered. The third link is assumed to be elastic. Using an inversion algorithm, a nonlinear decoupling control law u(d) is derived such that in the closed-loop system independent control of joint angles by the three joint torquers is accomplished. For the stabilization of elastic oscillations, a linear feedback torquer control law u(s) is obtained applying linear quadratic optimization to the linearized arm model augmented with a servocompensator about the terminal state. Simulation results show that in spite of uncertainties in the payload and vehicle angular velocity, good joint angle control and damping of elastic oscillations are obtained with the torquer control law u = u(d) + u(s).

  17. Modelling and Inverse-Modelling: Experiences with O.D.E. Linear Systems in Engineering Courses

    ERIC Educational Resources Information Center

    Martinez-Luaces, Victor

    2009-01-01

    In engineering careers courses, differential equations are widely used to solve problems concerned with modelling. In particular, ordinary differential equations (O.D.E.) linear systems appear regularly in Chemical Engineering, Food Technology Engineering and Environmental Engineering courses, due to the usefulness in modelling chemical kinetics,…

  18. A linear-encoding model explains the variability of the target morphology in regeneration

    PubMed Central

    Lobo, Daniel; Solano, Mauricio; Bubenik, George A.; Levin, Michael

    2014-01-01

    A fundamental assumption of today's molecular genetics paradigm is that complex morphology emerges from the combined activity of low-level processes involving proteins and nucleic acids. An inherent characteristic of such nonlinear encodings is the difficulty of creating the genetic and epigenetic information that will produce a given self-assembling complex morphology. This ‘inverse problem’ is vital not only for understanding the evolution, development and regeneration of bodyplans, but also for synthetic biology efforts that seek to engineer biological shapes. Importantly, the regenerative mechanisms in deer antlers, planarian worms and fiddler crabs can solve an inverse problem: their target morphology can be altered specifically and stably by injuries in particular locations. Here, we discuss the class of models that use pre-specified morphological goal states and propose the existence of a linear encoding of the target morphology, making the inverse problem easy for these organisms to solve. Indeed, many model organisms such as Drosophila, hydra and Xenopus also develop according to nonlinear encodings producing linear encodings of their final morphologies. We propose the development of testable models of regeneration regulation that combine emergence with a top-down specification of shape by linear encodings of target morphology, driving transformative applications in biomedicine and synthetic bioengineering. PMID:24402915

  19. A full potential inverse method based on a density linearization scheme for wing design

    NASA Technical Reports Server (NTRS)

    Shankar, V.

    1982-01-01

    A mixed analysis-inverse procedure based on the full potential equation in conservation form was developed to recontour a given base wing to produce a prescribed pressure distribution, using a density linearization scheme in applying the pressure boundary condition in terms of the velocity potential. The FL030 finite volume analysis code was modified to include the inverse option. The new surface shape information, associated with the modified pressure boundary condition, is calculated at a constant span station based on a mass flux integration. The inverse method is shown to recover the original shape when the analysis pressure is not altered. Inverse calculations for weakening a strong shock system and for a laminar flow control (LFC) pressure distribution are presented. Two methods for a trailing-edge closure model are proposed for further study.

  20. Trimming and procrastination as inversion techniques

    NASA Astrophysics Data System (ADS)

    Backus, George E.

    1996-12-01

    By examining the processes of truncating and approximating the model space (trimming it), and by committing to neither the objectivist nor the subjectivist interpretation of probability (procrastinating), we construct a formal scheme for solving linear and non-linear geophysical inverse problems. The necessary prior information about the correct model xE can be either a collection of inequalities or a probability measure describing where xE was likely to be in the model space X before the data vector y0 was measured. The results of the inversion are (1) a vector z0 that estimates some numerical properties zE of xE; (2) an estimate of the error δz = z0 - zE. As y0 is finite dimensional, so is z0, and hence in principle inversion cannot describe all of xE. The error δz is studied under successively more specialized assumptions about the inverse problem, culminating in a complete analysis of the linear inverse problem with a prior quadratic bound on xE. Our formalism appears to encompass and provide error estimates for many of the inversion schemes current in geomagnetism, and would be equally applicable in geodesy and seismology if adequate prior information were available there. As an idealized example we study the magnetic field at the core-mantle boundary, using satellite measurements of field elements at sites assumed to be almost uniformly distributed on a single spherical surface. Magnetospheric currents are neglected and the crustal field is idealized as a random process with rotationally invariant statistics. We find that an appropriate data compression diagonalizes the variance matrix of the crustal signal and permits an analytic trimming of the idealized problem.

  1. 3-D linear inversion of gravity data: method and application to Basse-Terre volcanic island, Guadeloupe, Lesser Antilles

    NASA Astrophysics Data System (ADS)

    Barnoud, Anne; Coutant, Olivier; Bouligand, Claire; Gunawan, Hendra; Deroussi, Sébastien

    2016-04-01

    We use a Bayesian formalism combined with a grid node discretization for the linear inversion of gravimetric data in terms of 3-D density distribution. The forward modelling and the inversion method are derived from seismological inversion techniques in order to facilitate joint inversion or interpretation of density and seismic velocity models. The Bayesian formulation introduces covariance matrices on model parameters to regularize the ill-posed problem and reduce the non-uniqueness of the solution. This formalism favours smooth solutions and allows us to specify a spatial correlation length and to perform inversions at multiple scales. We also extract resolution parameters from the resolution matrix to discuss how well our density models are resolved. This method is applied to the inversion of data from the volcanic island of Basse-Terre in Guadeloupe, Lesser Antilles. A series of synthetic tests are performed to investigate advantages and limitations of the methodology in this context. This study results in the first 3-D density models of the island of Basse-Terre for which we identify: (i) a southward decrease of densities parallel to the migration of volcanic activity within the island, (ii) three dense anomalies beneath Petite Plaine Valley, Beaugendre Valley and the Grande-Découverte-Carmichaël-Soufrière Complex that may reflect the trace of former major volcanic feeding systems, (iii) shallow low-density anomalies in the southern part of Basse-Terre, especially around La Soufrière active volcano, Piton de Bouillante edifice and along the western coast, reflecting the presence of hydrothermal systems and fractured and altered rocks.
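The Bayesian linear step described above can be sketched as follows; the exponential covariance, the zero prior mean, and all defaults are illustrative assumptions rather than the paper's exact choices:

```python
import numpy as np

def bayesian_linear_inversion(G, d, x, sigma_m=1.0, corr_len=2.0, sigma_d=0.1):
    """Linear Bayesian inversion with an exponential model covariance
    carrying a spatial correlation length (Tarantola-style posterior mean).
    Returns the posterior mean and the model resolution matrix."""
    # Model covariance: favours smooth solutions via spatial correlation
    Cm = sigma_m**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    Cd = sigma_d**2 * np.eye(len(d))
    K = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cd)
    m_post = K @ d              # prior mean assumed zero here
    R = K @ G                   # resolution matrix: m_post = R m_true (noise-free)
    return m_post, R
```

Diagonal entries of R near 1 indicate well-resolved parameters; rows of R spread over many nodes indicate smearing.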

  2. Detection of Natural Fractures from Observed Surface Seismic Data Based on a Linear-Slip Model

    NASA Astrophysics Data System (ADS)

    Chen, Huaizhen; Zhang, Guangzhi

    2018-03-01

    Natural fractures play an important role in the migration of hydrocarbon fluids. Based on a rock-physics effective model, the linear-slip model, which defines fracture parameters (fracture compliances) for quantitatively characterizing the effects of fractures on the total compliance of the rock, we propose a method to detect natural fractures from observed seismic data via inversion for the fracture compliances. We first derive an approximate PP-wave reflection coefficient in terms of fracture compliances. Using the approximate reflection coefficient, we derive the azimuthal elastic impedance as a function of fracture compliances. An inversion method to estimate fracture compliances from seismic data is presented based on a Bayesian framework and azimuthal elastic impedance, implemented as a two-step procedure: a least-squares inversion for azimuthal elastic impedance and an iterative inversion for fracture compliances. We apply the inversion method to synthetic and real data to verify its stability and reliability. Synthetic tests confirm that the method gives a stable estimate of fracture compliances for seismic data with a moderate signal-to-noise ratio (Gaussian noise), and the test on real data shows that reasonable fracture compliances are obtained with the proposed method.

  3. Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media

    NASA Astrophysics Data System (ADS)

    Jakobsen, Morten; Tveit, Svenn

    2018-05-01

    We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which efficient (regularized) solution methods already exist. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the Distorted Born Iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which could be an advantage when performing fast-repeat modelling and time-lapse inversion. The T-matrix is also compatible with modern renormalization methods that can potentially help reduce the sensitivity of the CSEM inversion results to the starting model. 
To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data associated with 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBIT method for inversion of CSEM data in anisotropic media is both accurate and efficient.
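The re-linearize/solve/update structure of a DBI-style scheme can be sketched generically. The toy forward model and regularization below are assumptions; the integral-equation and T-matrix machinery is replaced by user-supplied forward and Jacobian callables, exploiting the abstract's remark that DBI is consistent with Gauss-Newton:

```python
import numpy as np

def dbi_inversion(forward, jacobian, d_obs, m0, n_iter=10, mu=1e-2):
    """Skeleton of a Distorted-Born-Iterative-style scheme: replace the
    nonlinear problem d = F(m) by a sequence of regularized linear steps,
    re-linearizing (i.e. updating the background kernel) at each iteration.
    Generic sketch, equivalent in structure to Gauss-Newton."""
    m = m0.copy()
    for _ in range(n_iter):
        r = d_obs - forward(m)              # data residual in current background
        J = jacobian(m)                     # linearized (distorted-Born) kernel
        # Regularized normal equations for the model update
        dm = np.linalg.solve(J.T @ J + mu * np.eye(len(m)), J.T @ r)
        m = m + dm
    return m
```

In the DBIT method the expensive part of this loop, re-forming the kernel, is what the T-matrix updates make cheap.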

  4. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. 
Data-balancing normalization weights for the joint inversion of two or more data sets encourage the inversion to fit each data type equally well. A synthetic joint inversion of marine CSEM and MT data illustrates the algorithm's performance and parallel scaling on up to 480 processing cores. CSEM inversion of data from the Middle America Trench offshore Nicaragua demonstrates a real-world application. The source code and MATLAB interface tools are freely available at http://mare2dem.ucsd.edu.
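The bounded-parameter transformation can be illustrated with a standard logit-style map; note this generic stand-in differs from MARE2DEM's own transform, which is designed to stay close to the identity inside the bounds:

```python
import numpy as np

def to_unbounded(p, a, b):
    """Map a bounded parameter p in (a, b) to an unbounded variable so the
    optimizer can move freely (logit-style; illustrative, not MARE2DEM's)."""
    return np.log((p - a) / (b - p))

def to_bounded(x, a, b):
    """Inverse map: any real x lands strictly inside (a, b)."""
    return a + (b - a) / (1.0 + np.exp(-x))
```

The inversion iterates on x; whatever step it takes, the physical parameter returned by `to_bounded` never violates its bounds.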

  5. Support Minimized Inversion of Acoustic and Elastic Wave Scattering

    NASA Astrophysics Data System (ADS)

    Safaeinili, Ali

    Inversion of limited data is common in many areas of NDE, such as X-ray Computed Tomography (CT), ultrasonic and eddy current flaw characterization, and imaging. In many applications, it is common to have a bias toward a solution with minimum (L^2)^2 norm without any physical justification. When it is known a priori that objects are compact, as with cracks and voids, choosing a "minimum support" functional instead of the minimum (L^2)^2 norm yields an image that is equally in agreement with the available data, while being more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact, like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using a minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency in the imaging process. The results indicate that by using a minimum support functional, we can accurately size and characterize voids and/or cracks which might otherwise be uncharacterizable. An extremely important feature of support minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without knowledge of the zero-of-time. The main drawback to this type of inversion is its computational cost. 
In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of theoretical formulation of the scattering process for better computation efficiency, and (3) development of better methods for guiding the non-linear inversion. (Abstract shortened by UMI.).
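The minimum-support idea can be sketched in a linearized setting via iteratively reweighted least squares; the stabilizer m_i^2/(m_i^2 + beta^2) and all tuning values below are illustrative assumptions, not the dissertation's formulation:

```python
import numpy as np

def min_support_inversion(G, d, beta=0.05, mu=1e-2, n_iter=20):
    """Sketch of minimum-support inversion via iteratively reweighted
    least squares: the stabilizer sum_i m_i^2 / (m_i^2 + beta^2) favours
    compact (small-volume) models over the minimum-norm solution."""
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        # Reweighting: penalty weight ~ 1 / (m_i^2 + beta^2) per parameter
        W = np.diag(1.0 / (m**2 + beta**2))
        m = np.linalg.solve(G.T @ G + mu * W, G.T @ d)
    return m
```

Parameters that stay near zero keep a heavy penalty and are pushed to zero, while parameters needed to fit the data see their penalty saturate, so support concentrates.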

  6. Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations

    NASA Astrophysics Data System (ADS)

    Zhi, Longxiao; Gu, Hanming

    2018-03-01

    The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though the approximate expressions are concise and convenient to use, they have certain limitations: they apply only when the contrast in elastic parameters between the upper and lower media is small and the incident angle is small, and the inversion for density is not stable. Therefore, we develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the PP-wave reflection coefficient, and in constructing the objective function for inversion we use a Taylor series expansion to linearize the inverse problem. Through the joint AVO inversion of seismic data from the baseline and monitor surveys, we can obtain the P-wave velocity, S-wave velocity and density in the baseline survey, together with their time-lapse changes, simultaneously. We can also estimate the change in oil saturation from the inversion results. Compared with time-lapse difference inversion, the joint inversion does not require certain assumptions, can estimate more parameters simultaneously, and has wider applicability. Meanwhile, by using the generalized linear method, the inversion is easily implemented and computationally cheap. We use the theoretical model to generate synthetic seismic records to test the method and analyze the influence of random noise. The results demonstrate the validity and noise robustness of our method. We also apply the inversion to actual field data and demonstrate the feasibility of our method in a real setting.
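The joint baseline/monitor estimation via Taylor-series linearization can be sketched as a stacked Gauss-Newton iteration. The toy forward model in the test stands in for the exact Zoeppritz equations, and all names below are assumptions:

```python
import numpy as np

def joint_timelapse_inversion(f, jac, d_base, d_mon, p0, n_iter=15):
    """Sketch of a joint time-lapse inversion: solve simultaneously for the
    baseline parameters p and the time-lapse change dp by Taylor
    linearization of the exact forward model f around the current estimate
    (a generalized linear / Gauss-Newton iteration)."""
    p, dp = p0.copy(), np.zeros_like(p0)
    for _ in range(n_iter):
        # Stacked residuals for the baseline and monitor surveys
        r = np.concatenate([d_base - f(p), d_mon - f(p + dp)])
        Jb, Jm = jac(p), jac(p + dp)
        Z = np.zeros_like(Jb)
        # Block Jacobian: baseline data see p only; monitor data see p and dp
        J = np.block([[Jb, Z], [Jm, Jm]])
        upd = np.linalg.lstsq(J, r, rcond=None)[0]
        p, dp = p + upd[:len(p)], dp + upd[len(p):]
    return p, dp
```

Because dp is a parameter in its own right, both surveys constrain the baseline model, which is what distinguishes joint inversion from differencing two separate inversions.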

  7. Kinematic inversion of the 2008 Mw7 Iwate-Miyagi (Japan) earthquake by two independent methods: Sensitivity and resolution analysis

    NASA Astrophysics Data System (ADS)

    Gallovic, Frantisek; Cirella, Antonella; Plicka, Vladimir; Piatanesi, Alessio

    2013-04-01

    On 14 June 2008, UTC 23:43, the border of the Iwate and Miyagi prefectures was hit by an Mw7 reverse-fault crustal earthquake. The event produced the largest ground acceleration observed to date (~4g), recorded at station IWTH25. We analyze observed strong motion data with the objective of imaging the event rupture process and the associated uncertainties. Two different slip inversion approaches are used, differing only in the parameterization of the source model. To minimize mismodeling of the propagation effects, we use a crustal model obtained by full waveform inversion of aftershock records in the frequency range 0.05-0.3 Hz. In the first method, based on a linear formulation, the parameters are samples of slip velocity functions along the (finely discretized) fault in a time window spanning the whole rupture duration. Such a source description is very general, with no prior constraint on the nucleation point, rupture velocity or shape of the velocity function. Thus the inversion can resolve very general (unexpected) features of the rupture evolution, such as multiple rupturing, rupture-propagation reversals, etc. On the other hand, due to the relatively large number of model parameters, the inversion result is highly non-unique, with the possibility of obtaining a biased solution. The second method is a non-linear global inversion technique, where each point on the fault can slip only once, following a prescribed functional form of the source time function. We invert simultaneously for peak slip velocity, slip angle, rise time and rupture time, allowing a given range of variability for each kinematic model parameter. For this reason, unlike the linear inversion approach, the rupture process is retrieved with a smaller number of parameters and is more constrained, with proper control on the allowed range of parameter values. 
In order to test the resolution and reliability of the retrieved models, we present a thorough analysis of the performance of the two inversion approaches. In fact, depending on the inversion strategy and the intrinsic 'non-uniqueness' of the inverse problem, the final slip maps and distribution of rupture onset times are generally different, sometimes even incompatible with each other. Great emphasis is devoted to the uncertainty estimate of both techniques. Thus we do not compare only the best fitting models, but their 'compatibility' in terms of the uncertainty limits.

  8. Stability and uncertainty of finite-fault slip inversions: Application to the 2004 Parkfield, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Liu, P.; Mendoza, C.; Ji, C.; Larson, K.M.

    2007-01-01

    The 2004 Parkfield, California, earthquake is used to investigate stability and uncertainty aspects of the finite-fault slip inversion problem with different a priori model assumptions. We utilize records from 54 strong ground motion stations and 13 continuous, 1-Hz sampled, geodetic instruments. Two inversion procedures are compared: a linear least-squares subfault-based methodology and a nonlinear global search algorithm. These two methods encompass a wide range of the different approaches that have been used to solve the finite-fault slip inversion problem. For the Parkfield earthquake and the inversion of velocity or displacement waveforms, near-surface related site response (top 100 m, frequencies above 1 Hz) is shown to not significantly affect the solution. Results are also insensitive to selection of slip rate functions with similar duration and to subfault size if proper stabilizing constraints are used. The linear and nonlinear formulations yield consistent results when the same limitations in model parameters are in place and the same inversion norm is used. However, the solution is sensitive to the choice of inversion norm, the bounds on model parameters, such as rake and rupture velocity, and the size of the model fault plane. The geodetic data set for Parkfield gives a slip distribution different from that of the strong-motion data, which may be due to the spatial limitation of the geodetic stations and the bandlimited nature of the strong-motion data. Cross validation and the bootstrap method are used to set limits on the upper bound for rupture velocity and to derive mean slip models and standard deviations in model parameters. This analysis shows that slip on the northwestern half of the Parkfield rupture plane from the inversion of strong-motion data is model dependent and has a greater uncertainty than slip near the hypocenter.
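The bootstrap step for deriving mean slip models and standard deviations can be sketched as follows, assuming a linear(ized) forward operator; the damping and sample count are illustrative:

```python
import numpy as np

def bootstrap_slip_uncertainty(G, d, n_boot=200, mu=1e-3, seed=0):
    """Sketch of bootstrap uncertainty estimation for a linear(ized)
    slip inversion d = G m: resample data rows with replacement, invert
    each resample, and report the mean model and per-parameter std."""
    rng = np.random.default_rng(seed)
    n, p = G.shape
    models = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample with replacement
        Gb, db = G[idx], d[idx]
        m = np.linalg.solve(Gb.T @ Gb + mu * np.eye(p), Gb.T @ db)
        models.append(m)
    models = np.asarray(models)
    return models.mean(axis=0), models.std(axis=0)
```

Large per-parameter standard deviations flag the model-dependent portions of the slip distribution, such as the poorly constrained northwestern half noted in the abstract.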

  9. The incomplete inverse and its applications to the linear least squares problem

    NASA Technical Reports Server (NTRS)

    Morduch, G. E.

    1977-01-01

    A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It is proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least squares solution. An answer is provided to the problem that occurs when the data residuals are too large and there are insufficient data to justify augmenting the model.
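The claim that the augmented normal matrix carries all the least-squares quantities can be illustrated with a Schur-complement computation; this sketch uses the ordinary inverse rather than the paper's incomplete inverse:

```python
import numpy as np

def augmented_normal_quantities(A, b):
    """Extract least-squares quantities from the blocks of the augmented
    normal matrix N = [[A^T A, A^T b], [b^T A, b^T b]]: the solution, a
    covariance factor, and the residual sum of squares as the Schur
    complement of A^T A in N."""
    AtA, Atb = A.T @ A, A.T @ b
    x = np.linalg.solve(AtA, Atb)           # least-squares solution
    cov_factor = np.linalg.inv(AtA)         # covariance up to the noise variance
    rss = b @ b - Atb @ x                   # Schur complement of AtA in N
    return x, cov_factor, rss
```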

  10. Peeling linear inversion of upper mantle velocity structure with receiver functions

    NASA Astrophysics Data System (ADS)

    Shen, Xuzhang; Zhou, Huilan

    2012-02-01

    A peeling linear inversion method is presented to study upper mantle (Moho to 800 km depth) velocity structures with receiver functions. The influence of errors in the crustal and upper mantle velocity ratio on the inversion results is analyzed, and three measures are taken to reduce it. The method is tested with the IASP91 and PREM models, and the upper mantle structures beneath the stations GTA, LZH and AXX in northwestern China are then inverted. The results indicate that this inversion method can quantify upper mantle discontinuities, except for those between depths of 3h_M and 5h_M (where h_M denotes the Moho depth), owing to the interference of multiples from the Moho. Smoothing is used to suppress possible false discontinuities arising from the multiples and to ensure the stability of the inversion results, but detailed information in the depth range between 3h_M and 5h_M is sacrificed.

  11. Blocky inversion of multichannel elastic impedance for elastic parameters

    NASA Astrophysics Data System (ADS)

    Mozayan, Davoud Karami; Gholami, Ali; Siahkoohi, Hamid Reza

    2018-04-01

    Petrophysical description of reservoirs requires proper knowledge of elastic parameters like P- and S-wave velocities (Vp and Vs) and density (ρ), which can be retrieved from pre-stack seismic data using the concept of elastic impedance (EI). We propose an inversion algorithm which recovers elastic parameters from pre-stack seismic data in two sequential steps. In the first step, using the multichannel blind seismic inversion method (exploited recently for recovering acoustic impedance from post-stack seismic data), high-resolution blocky EI models are obtained directly from partial angle-stacks. Using an efficient total-variation (TV) regularization, each angle-stack is inverted independently in a multichannel form without prior knowledge of the corresponding wavelet. The second step involves inversion of the resulting EI models for elastic parameters. Mathematically, under some assumptions, the EI's are linearly described by the elastic parameters in the logarithm domain. Thus a linear weighted least squares inversion is employed to perform this step. Accuracy of the concept of elastic impedance in predicting reflection coefficients at low and high angles of incidence is compared with that of exact Zoeppritz elastic impedance and the role of low frequency content in the problem is discussed. The performance of the proposed inversion method is tested using synthetic 2D data sets obtained from the Marmousi model and also 2D field data sets. The results confirm the efficiency and accuracy of the proposed method for inversion of pre-stack seismic data.

  12. Development of the WRF-CO2 4D-Var assimilation system v1.0

    NASA Astrophysics Data System (ADS)

    Zheng, Tao; French, Nancy H. F.; Baxter, Martin

    2018-05-01

    Regional atmospheric CO2 inversions commonly use Lagrangian particle trajectory model simulations to calculate the required influence function, which quantifies the sensitivity of a receptor to flux sources. In this paper, an adjoint-based four-dimensional variational (4D-Var) assimilation system, WRF-CO2 4D-Var, is developed to provide an alternative approach. This system is developed based on the Weather Research and Forecasting (WRF) modeling system, including the system coupled to chemistry (WRF-Chem), with tangent linear and adjoint codes (WRFPLUS), and with data assimilation (WRFDA), all in version 3.6. In WRF-CO2 4D-Var, CO2 is modeled as a tracer and its feedback to meteorology is ignored. This configuration allows most WRF physical parameterizations to be used in the assimilation system without incurring a large amount of code development. WRF-CO2 4D-Var solves for the optimized CO2 flux scaling factors in a Bayesian framework. Two variational optimization schemes are implemented for the system: the first uses the limited memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) minimization algorithm (L-BFGS-B) and the second uses the Lanczos conjugate gradient (CG) in an incremental approach. WRFPLUS forward, tangent linear, and adjoint models are modified to include the physical and dynamical processes involved in the atmospheric transport of CO2. The system is tested by simulations over a domain covering the continental United States at 48 km × 48 km grid spacing. The accuracy of the tangent linear and adjoint models is assessed by comparing against finite difference sensitivity. The system's effectiveness for CO2 inverse modeling is tested using pseudo-observation data. The results of the sensitivity and inverse modeling tests demonstrate the potential usefulness of WRF-CO2 4D-Var for regional CO2 inversions.
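The tangent-linear/adjoint accuracy check against finite-difference sensitivity can be sketched as a standard directional-derivative test (illustrative, not the WRF-CO2 4D-Var test harness):

```python
import numpy as np

def check_tangent_linear(f, grad_f, x0, eps=1e-6):
    """Accuracy test for tangent-linear/adjoint code: compare the analytic
    directional derivative against a central finite difference along a
    random unit perturbation direction; returns the relative mismatch."""
    rng = np.random.default_rng(1)
    dx = rng.standard_normal(x0.shape)
    dx /= np.linalg.norm(dx)
    analytic = grad_f(x0) @ dx                              # adjoint/TL derivative
    fd = (f(x0 + eps * dx) - f(x0 - eps * dx)) / (2 * eps)  # central difference
    return abs(analytic - fd) / max(abs(fd), 1e-30)
```

A relative mismatch near machine-precision levels (limited by the eps^2 truncation error) indicates the adjoint is consistent with the forward model.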

  13. Full-wave Moment Tensor and Tomographic Inversions Based on 3D Strain Green Tensor

    DTIC Science & Technology

    2010-01-31

    propagation in three-dimensional (3D) earth, linearizes the inverse problem by iteratively updating the earth model, and provides an accurate way to...self-consistent FD-SGT databases constructed from finite-difference simulations of wave propagation in full-wave tomographic models can be used to...determine the moment tensors within minutes after a seismic event, making it possible for real-time monitoring using 3D models.

  14. A new frequency domain analytical solution of a cascade of diffusive channels for flood routing

    NASA Astrophysics Data System (ADS)

    Cimorelli, Luigi; Cozzolino, Luca; Della Morte, Renata; Pianese, Domenico; Singh, Vijay P.

    2015-04-01

    Simplified flood propagation models are often employed in practical applications for hydraulic and hydrologic analyses. In this paper, we present a new numerical method for the solution of the Linear Parabolic Approximation (LPA) of the De Saint Venant equations (DSVEs), accounting for the space variation of model parameters and the imposition of appropriate downstream boundary conditions. The new model is based on the analytical solution of a cascade of linear diffusive channels in the Laplace Transform domain. The time domain solutions are obtained using a Fourier series approximation of the Laplace Inversion formula. The new Inverse Laplace Transform Diffusive Flood Routing model (ILTDFR) can be used as a building block for the construction of real-time flood forecasting models or in optimization models, because it is unconditionally stable and allows fast and fairly precise computation.
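A Fourier-series approximation of the Laplace inversion formula can be sketched as a trapezoidal discretization of the Bromwich integral; the contour parameter and truncation below are illustrative, not the tuning used in ILTDFR:

```python
import numpy as np

def ilt_fourier(F, t, A=18.4, n_terms=20000):
    """Fourier-series approximation of the Bromwich inversion integral:
    trapezoidal sampling of the Laplace-domain function F along a shifted
    contour. A controls the discretization error (~exp(-A)); n_terms
    controls the truncation error of the alternating series."""
    k = np.arange(1, n_terms + 1)
    s0 = A / (2.0 * t)                         # contour abscissa
    sk = (A + 2j * np.pi * k) / (2.0 * t)      # sample points on the contour
    total = 0.5 * np.real(F(s0)) + np.sum(((-1.0) ** k) * np.real(F(sk)))
    return np.exp(A / 2.0) / t * total
```

In a flood-routing setting, F would be the Laplace-domain discharge of the diffusive-channel cascade evaluated at the downstream section.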

  15. The Lanchester square-law model extended to a (2,2) conflict

    NASA Astrophysics Data System (ADS)

    Colegrave, R. K.; Hyde, J. M.

    1993-01-01

    A natural extension of the Lanchester (1,1) square-law model is the (M,N) linear model in which M forces oppose N forces with constant attrition rates. The (2,2) model is treated from both direct and inverse viewpoints. The inverse problem means that the model is to be fitted to a minimum number of observed force levels, i.e. the attrition rates are to be found from the initial force levels together with the levels observed at two subsequent times. An approach based on Hamiltonian dynamics has enabled the authors to derive a procedure for solving the inverse problem, which is readily computerized. Conflicts in which participants unexpectedly rally or weaken must be excluded.
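The direct (2,2) linear model can be sketched as a coupled linear ODE system; the RK4 integrator and rate matrices below are illustrative, and the paper's Hamiltonian treatment of the inverse problem is not reproduced:

```python
import numpy as np

def lanchester_levels(A, B, x0, y0, t, n_steps=10000):
    """Direct (2,2) Lanchester linear-law model: force vectors x and y with
    constant attrition-rate matrices A (y attriting x) and B (x attriting y),
    integrated with a simple RK4 scheme: x' = -A y, y' = -B x."""
    def rhs(z):
        x, y = z[:2], z[2:]
        return np.concatenate([-A @ y, -B @ x])
    z = np.concatenate([x0, y0]).astype(float)
    h = t / n_steps
    for _ in range(n_steps):
        k1 = rhs(z)
        k2 = rhs(z + 0.5 * h * k1)
        k3 = rhs(z + 0.5 * h * k2)
        k4 = rhs(z + h * k3)
        z = z + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return z[:2], z[2:]
```

The inverse problem described in the abstract would then amount to fitting A and B so that the integrated levels match the observed force levels at the two observation times.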

  16. Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations

    NASA Astrophysics Data System (ADS)

    Zhi, L.; Gu, H.

    2017-12-01

    The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though the approximate expressions are concise and convenient to use, they have certain limitations: they apply only when the contrast in elastic parameters between the upper and lower media is small and the incident angle is small, and the inversion for density is not stable. Therefore, we develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the PP-wave reflection coefficient, and in constructing the objective function for inversion we use a Taylor expansion to linearize the inverse problem. Through the joint AVO inversion of seismic data from the baseline and monitor surveys, we can obtain the P-wave velocity, S-wave velocity and density in the baseline survey, together with their time-lapse changes, simultaneously. We can also estimate the change in oil saturation from the inversion results. Compared with time-lapse difference inversion, the joint inversion has wider applicability: it does not require certain assumptions and can estimate more parameters simultaneously. Meanwhile, by using the generalized linear method, the inversion is easily implemented and computationally cheap. We use the Marmousi model to generate synthetic seismic records to test the method and analyze the influence of random noise. Without noise, all estimated results are relatively accurate. As the noise increases, the P-wave velocity change and oil saturation change remain stable and are less affected by noise, while the S-wave velocity change is most affected. Finally, we apply the method to actual field data from time-lapse seismic prospecting; the results demonstrate the validity and feasibility of our method in a real setting.

  17. Angle-domain inverse scattering migration/inversion in isotropic media

    NASA Astrophysics Data System (ADS)

    Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan

    2018-07-01

    The classical seismic asymptotic inversion can be transformed into a problem of inversion of a generalized Radon transform (GRT). In such methods, the combined parameters are linearly related to the scattered wavefield by the Born approximation and recovered by applying an inverse GRT operator to the scattered wavefield data. A typical GRT-style true-amplitude inversion procedure contains an amplitude compensation process after the weighted migration, dividing by an illumination-associated matrix whose elements are integrals over scattering angles. It is to some extent intuitive to perform the generalized linear inversion and the inversion of the GRT together through this process for direct inversion. However, such an operation is imprecise when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally removes the external integral term related to illumination that appears in the conventional case. We solve the linearized integral equation for the combined parameters at different fixed scattering-angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. We then solve the over-determined problem for each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method obtains more accurate amplitude information over a wider amplitude-preserved range. Several model tests demonstrate its effectiveness and practicability.

  18. Inverse solutions for electrical impedance tomography based on conjugate gradients methods

    NASA Astrophysics Data System (ADS)

    Wang, M.

    2002-01-01

    A multistep inverse solution for two-dimensional electric field distribution is developed to deal with the nonlinear inverse problem of electric field distribution in relation to its boundary condition and the problem of divergence due to errors introduced by the ill-conditioned sensitivity matrix and the noise produced by electrode modelling and instruments. This solution is based on a normalized linear approximation method where the change in mutual impedance is derived from the sensitivity theorem and a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
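The role of a truncated conjugate-gradients iteration as implicit regularization of a linearized step can be sketched with CG applied to the normal equations (CGNR); this generic sketch omits the normalization and error-vector decomposition of the paper:

```python
import numpy as np

def truncated_cg_inverse_step(J, dV, n_iter=5):
    """One linearized inverse step: solve J dm = dV (sensitivity matrix times
    conductivity update = change in mutual impedance) by conjugate gradients
    on the normal equations. Limiting n_iter controls the error amplified by
    the ill-conditioned sensitivity matrix."""
    m = np.zeros(J.shape[1])
    r = J.T @ (dV - J @ m)          # gradient of the least-squares misfit
    p = r.copy()
    for _ in range(n_iter):
        Jp = J @ p
        alpha = (r @ r) / (Jp @ Jp)
        m = m + alpha * p
        r_new = r - alpha * (J.T @ Jp)
        beta = (r_new @ r_new) / (r @ r)
        p, r = r_new + beta * p, r_new
    return m
```

Early iterations recover the well-conditioned components of the update; stopping before convergence suppresses the noise-dominated components, which is the regularizing effect the abstract describes.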

  19. Tracking Control of a Magnetic Shape Memory Actuator Using an Inverse Preisach Model with Modified Fuzzy Sliding Mode Control.

    PubMed

    Lin, Jhih-Hong; Chiang, Mao-Hsiung

    2016-08-25

    Magnetic shape memory (MSM) alloys are a new class of smart materials with extraordinary strains of up to 12% and operating frequencies in the range of 1 to 2 kHz. The MSM actuator is a potential device which can achieve high-performance electromagnetic actuation by exploiting the properties of MSM alloys. However, significant non-linear hysteresis is a major barrier to controlling the MSM actuator. In this paper, the Preisach model, identified from experiments with different input signals and output responses, was used to model the hysteresis of the MSM actuator, and the inverse Preisach model, as a feedforward control, provided compensation signals to the MSM actuator to linearize the hysteresis non-linearity. The control strategy for path tracking combined the hysteresis compensator with a modified fuzzy sliding mode control (MFSMC) serving as the path controller. The experimental results verify that a tracking error on the order of micrometers was achieved.
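A minimal discrete Preisach model conveys the hysteresis representation; the uniform hysteron weights below are a placeholder, whereas a real MSM-actuator model identifies the weights from measured input/output curves:

```python
import numpy as np

class PreisachModel:
    """Minimal discrete Preisach hysteresis model: a triangular grid of relay
    hysterons with up-switching threshold alpha > down-switching threshold
    beta, and uniform weights for illustration only."""
    def __init__(self, n=20, lo=-1.0, hi=1.0):
        a, b = np.meshgrid(np.linspace(lo, hi, n), np.linspace(lo, hi, n))
        mask = a > b                        # hysterons live on alpha > beta
        self.alpha, self.beta = a[mask], b[mask]
        self.w = np.ones(mask.sum()) / mask.sum()
        self.state = -np.ones_like(self.w)  # all relays start switched down

    def apply(self, u):
        self.state[u >= self.alpha] = 1.0   # switch up when input reaches alpha
        self.state[u <= self.beta] = -1.0   # switch down when input drops to beta
        return float(self.w @ self.state)
```

Driving the input up and then back down visits different branches, so the output at the same input differs with history, which is the behavior the inverse model must compensate.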

  1. Stochastic sediment property inversion in Shallow Water 06.

    PubMed

    Michalopoulou, Zoi-Heleni

    2017-11-01

Received time-series at a short distance from the source allow the identification of distinct paths; four of these are the direct arrival, the surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed along with linearization for the estimation of source range and depth, water column depth, and sound speed in the water. By propagating the densities of the arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because these densities express the uncertainty in the inversion for sediment properties.

  2. A trade-off solution between model resolution and covariance in surface-wave inversion

    USGS Publications Warehouse

    Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.

    2010-01-01

Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
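The trade-off recipe can be sketched as a truncated SVD: plot the singular values, cut where they approach zero, and use the retained part for both the solution and the unit covariance matrix. This is a minimal sketch; `tradeoff_solution` and its `tol` threshold are illustrative names, not the authors' code.

```python
import numpy as np

def tradeoff_solution(G, d, tol=1e-8):
    # Pick the truncation level from the singular-value plot: keep
    # singular values until they approach zero (relative threshold).
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))      # number of retained values
    x = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])
    # unit covariance matrix of the truncated solution, from which
    # error bars at the chosen resolution level can be read off
    cov = Vt[:k].T @ np.diag(1.0 / s[:k] ** 2) @ Vt[:k]
    return x, cov, k
```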

  3. Calculation of earthquake rupture histories using a hybrid global search algorithm: Application to the 1992 Landers, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Liu, P.

    1996-01-01

A method is presented for the simultaneous calculation of slip amplitudes and rupture times for a finite fault using a hybrid global search algorithm. The method we use combines simulated annealing with the downhill simplex method to produce a more efficient search algorithm than either of the two constituent parts. This formulation has advantages over traditional iterative or linearized approaches to the problem because it is able to escape local minima in its search through model space for the global optimum. We apply this global search method to the calculation of the rupture history for the Landers, California, earthquake. The rupture is modeled using three separate finite-fault planes to represent the three main fault segments that failed during this earthquake. Both the slip amplitude and the time of slip are calculated for a grid work of subfaults. The data used consist of digital, teleseismic P and SH body waves. Long-period, broadband, and short-period records are utilized to obtain a wideband characterization of the source. The results of the global search inversion are compared with a more traditional linear-least-squares inversion for only slip amplitudes. We use a multi-time-window linear analysis to relax the constraints on rupture time and rise time in the least-squares inversion. Both inversions produce similar slip distributions, although the linear-least-squares solution has a 10% larger moment (7.3 × 10^26 dyne-cm compared with 6.6 × 10^26 dyne-cm). Both inversions fit the data equally well and point out the importance of (1) using a parameterization with sufficient spatial and temporal flexibility to encompass likely complexities in the rupture process, (2) including suitable physically based constraints on the inversion to reduce instabilities in the solution, and (3) focusing on those robust rupture characteristics that rise above the details of the parameterization and data set.
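A bare-bones simulated annealing loop illustrates how accepting occasional uphill moves lets the search escape local minima; the paper's hybrid additionally interleaves downhill simplex moves, which are omitted in this sketch, and all names and tuning constants below are illustrative.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=2.0, cooling=0.995,
                        n_iter=4000, seed=1):
    # Plain simulated annealing on a 1D objective: uphill moves are
    # accepted with probability exp(-df/T), and T cools geometrically.
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_iter):
        xn = x + rng.uniform(-step, step)   # random trial move
        fn = f(xn)
        if fn < fx or rng.random() < math.exp(-(fn - fx) / t):
            x, fx = xn, fn                  # accept (possibly uphill)
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                        # cool the temperature
    return best, fbest
```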

  4. Imaging the Earth's anisotropic structure with Bayesian Inversion of fundamental and higher mode surface-wave dispersion data

    NASA Astrophysics Data System (ADS)

    Ravenna, Matteo; Lebedev, Sergei; Celli, Nicolas

    2017-04-01

We develop a Markov Chain Monte Carlo inversion of fundamental and higher mode phase-velocity curves for radially and azimuthally anisotropic structure of the crust and upper mantle. In the inversions of Rayleigh- and Love-wave dispersion curves for radially anisotropic structure, we obtain probabilistic 1D radially anisotropic shear-velocity profiles of the isotropic average Vs and anisotropy (or Vsv and Vsh) as functions of depth. In the inversions for azimuthal anisotropy, Rayleigh-wave dispersion curves at different azimuths are inverted for the vertically polarized shear-velocity structure (Vsv) and the 2-phi component of azimuthal anisotropy. The strength and originality of the method lie in its fully non-linear approach. Each model realization is computed using exact forward calculations. The uncertainty of the models is a part of the output. In the inversions for azimuthal anisotropy, in particular, the computation of the forward problem is performed separately at different azimuths, with no linear approximations on the relation of the Earth's elastic parameters to surface wave phase velocities. The computations are performed in parallel in order to reduce the computing time. We compare inversions of the fundamental mode phase-velocity curves alone with inversions that also include overtones. The addition of higher modes enhances the resolving power of the anisotropic structure of the deep upper mantle. We apply the inversion method to phase-velocity curves in a few regions, including the Hangai dome region in Mongolia. Our models provide constraints on the Moho depth, the Lithosphere-Asthenosphere Boundary, and the alignment of the anisotropic fabric with the direction of current and past flow, from the crust down to the deep asthenosphere.
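The sampling core of such an inversion can be sketched with a minimal Metropolis step. In the actual method each posterior evaluation would run an exact forward calculation of the dispersion curves; here `log_post` is just a placeholder density, and all names are illustrative.

```python
import numpy as np

def metropolis(log_post, x0, step, n_samples, seed=0):
    # Minimal random-walk Metropolis sampler: propose a Gaussian
    # perturbation, accept with probability min(1, post_new/post_old).
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = []
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)   # forward model would run here
        if rng.random() < np.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)
```

The retained chain, after discarding burn-in, gives probabilistic profiles and uncertainties directly, as in the abstract.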

  5. Extracting Low-Frequency Information from Time Attenuation in Elastic Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong

    2017-03-01

Low-frequency information is crucial for recovering the background velocity, but the lack of low-frequency information in field data makes inversion impractical without accurate initial models. Laplace-Fourier domain waveform inversion can recover a smooth model from real data without low-frequency information, which can then be used as an ideal starting model for subsequent inversion. In general, it also starts with low frequencies and includes higher frequencies at later inversion stages, the difference being that its ultralow-frequency information comes from the Laplace-Fourier domain. Meanwhile, a direct implementation of the Laplace-transformed wavefield using frequency-domain inversion is also very convenient. However, because broad frequency bands are often used in pure time-domain waveform inversion, it is difficult to extract the wavefields dominated by low frequencies in this case. In this paper, low-frequency components are constructed by introducing time attenuation into the recorded residuals, and the rest of the method is identical to traditional time-domain inversion. Time windowing and frequency filtering are also applied to mitigate the ambiguity of the inverse problem. Therefore, we can start at low frequencies and move to higher frequencies later. The experiment shows that the proposed method can achieve a good inversion result with a linear initial model and records without low-frequency information.
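The central trick, multiplying the recorded residuals by a decaying exponential in time, can be illustrated in a few lines: damping a narrow-band residual spreads its spectral energy toward low frequencies. The toy signal and damping constant below are illustrative, not values from the paper.

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 2.0, dt)
residual = np.sin(2 * np.pi * 30.0 * t)   # 30 Hz recorded residual (toy)
damped = residual * np.exp(-4.0 * t)      # Laplace-type time attenuation

freqs = np.fft.rfftfreq(t.size, dt)

def low_freq_fraction(x, cutoff=10.0):
    # Fraction of spectral energy below the cutoff frequency.
    spec = np.abs(np.fft.rfft(x)) ** 2
    return spec[freqs < cutoff].sum() / spec.sum()
```

The damped residual carries a larger share of its energy below 10 Hz than the raw one, which is what lets the inversion start from an effectively low-frequency objective.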

  6. Seismic waveform inversion using neural networks

    NASA Astrophysics Data System (ADS)

    De Wit, R. W.; Trampert, J.

    2012-12-01

    Full waveform tomography aims to extract all available information on Earth structure and seismic sources from seismograms. The strongly non-linear nature of this inverse problem is often addressed through simplifying assumptions for the physical theory or data selection, thus potentially neglecting valuable information. Furthermore, the assessment of the quality of the inferred model is often lacking. This calls for the development of methods that fully appreciate the non-linear nature of the inverse problem, whilst providing a quantification of the uncertainties in the final model. We propose to invert seismic waveforms in a fully non-linear way by using artificial neural networks. Neural networks can be viewed as powerful and flexible non-linear filters. They are very common in speech, handwriting and pattern recognition. Mixture Density Networks (MDN) allow us to obtain marginal posterior probability density functions (pdfs) of all model parameters, conditioned on the data. An MDN can approximate an arbitrary conditional pdf as a linear combination of Gaussian kernels. Seismograms serve as input, Earth structure parameters are the so-called targets and network training aims to learn the relationship between input and targets. The network is trained on a large synthetic data set, which we construct by drawing many random Earth models from a prior model pdf and solving the forward problem for each of these models, thus generating synthetic seismograms. As a first step, we aim to construct a 1D Earth model. Training sets are constructed using the Mineos package, which computes synthetic seismograms in a spherically symmetric non-rotating Earth by summing normal modes. We train a network on the body waveforms present in these seismograms. Once the network has been trained, it can be presented with new unseen input data, in our case the body waves in real seismograms. 
We thus obtain the posterior pdf which represents our final state of knowledge given the information in the training set and the real data.
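The MDN output described above is a Gaussian mixture conditioned on the data; evaluating such a pdf from network outputs can be sketched as follows. The mixture parameters below are illustrative placeholders for actual network outputs.

```python
import numpy as np

def mdn_pdf(x, weights, means, sigmas):
    # Conditional pdf approximated as a linear combination of Gaussian
    # kernels, the form an MDN produces; weights are assumed to sum to 1.
    norm = 1.0 / (np.sqrt(2.0 * np.pi) * sigmas)
    kernels = norm * np.exp(-0.5 * ((x[:, None] - means) / sigmas) ** 2)
    return kernels @ weights
```

For a trained network, `weights`, `means`, and `sigmas` would be functions of the input seismogram, giving a marginal posterior pdf per model parameter.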

  7. The general linear inverse problem - Implication of surface waves and free oscillations for earth structure.

    NASA Technical Reports Server (NTRS)

    Wiggins, R. A.

    1972-01-01

The discrete general linear inverse problem reduces to a set of m equations in n unknowns. There is generally no unique solution, but we can find k linear combinations of parameters for which restraints are determined. The parameter combinations are given by the eigenvectors of the coefficient matrix. The number k is determined by the ratio of the standard deviations of the observations to the allowable standard deviations in the resulting solution. Various linear combinations of the eigenvectors can be used to determine parameter resolution and information distribution among the observations. Thus we can determine where information comes from among the observations and exactly how it constrains the set of possible models. The application of such analyses to surface-wave and free-oscillation observations indicates that (1) phase, group, and amplitude observations for any particular mode provide basically the same type of information about the model; (2) observations of overtones can enhance the resolution considerably; and (3) the degree of resolution has generally been overestimated for many model determinations made from surface waves.

  8. Full analogue electronic realisation of the Hodgkin-Huxley neuronal dynamics in weak-inversion CMOS.

    PubMed

    Lazaridis, E; Drakakis, E M; Barahona, M

    2007-01-01

    This paper presents a non-linear analog synthesis path towards the modeling and full implementation of the Hodgkin-Huxley neuronal dynamics in silicon. The proposed circuits have been realized in weak-inversion CMOS technology and take advantage of both log-domain and translinear transistor-level techniques.

  9. Feedback control by online learning an inverse model.

    PubMed

    Waegeman, Tim; Wyffels, Francis; Schrauwen, Francis

    2012-10-01

    A model, predictor, or error estimator is often used by a feedback controller to control a plant. Creating such a model is difficult when the plant exhibits nonlinear behavior. In this paper, a novel online learning control framework is proposed that does not require explicit knowledge about the plant. This framework uses two learning modules, one for creating an inverse model, and the other for actually controlling the plant. Except for their inputs, they are identical. The inverse model learns by the exploration performed by the not yet fully trained controller, while the actual controller is based on the currently learned model. The proposed framework allows fast online learning of an accurate controller. The controller can be applied on a broad range of tasks with different dynamic characteristics. We validate this claim by applying our control framework on several control tasks: 1) the heating tank problem (slow nonlinear dynamics); 2) flight pitch control (slow linear dynamics); and 3) the balancing problem of a double inverted pendulum (fast linear and nonlinear dynamics). The results of these experiments show that fast learning and accurate control can be achieved. Furthermore, a comparison is made with some classical control approaches, and observations concerning convergence and stability are made.
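The essence of the framework, learning an inverse model online from exploratory actions and using the current estimate to control, can be illustrated on a trivially simple linear plant. The plant, the learning rule, and all constants below are hypothetical stand-ins for the paper's two learning modules.

```python
import numpy as np

# Toy plant y = 2u + 1 (hypothetical); the true inverse is u = 0.5*y - 0.5.
rng = np.random.default_rng(0)
a, b, lr = 0.0, 0.0, 0.05            # inverse model u_hat = a*y + b

for _ in range(3000):
    u = rng.uniform(-1.0, 1.0)       # exploratory action
    y = 2.0 * u + 1.0                # observed plant response
    err = (a * y + b) - u            # inverse-model prediction error
    a -= lr * err * y                # stochastic gradient step
    b -= lr * err

y_ref = 0.2                          # desired plant output
u_cmd = a * y_ref + b                # controller uses the learned inverse
```

Feeding `u_cmd` to the plant should produce an output near `y_ref` once the inverse model has converged; the paper's framework does the analogous thing for nonlinear plants while learning continues online.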

  10. Adjoint-Based Sensitivity Kernels for Glacial Isostatic Adjustment in a Laterally Varying Earth

    NASA Astrophysics Data System (ADS)

    Crawford, O.; Al-Attar, D.; Tromp, J.; Mitrovica, J. X.; Austermann, J.; Lau, H. C. P.

    2017-12-01

    We consider a new approach to both the forward and inverse problems in glacial isostatic adjustment. We present a method for forward modelling GIA in compressible and laterally heterogeneous earth models with a variety of linear and non-linear rheologies. Instead of using the so-called sea level equation, which must be solved iteratively, the forward theory we present consists of a number of coupled evolution equations that can be straightforwardly numerically integrated. We also apply the adjoint method to the inverse problem in order to calculate the derivatives of measurements of GIA with respect to the viscosity structure of the Earth. Such derivatives quantify the sensitivity of the measurements to the model. The adjoint method enables efficient calculation of continuous and laterally varying derivatives, allowing us to calculate the sensitivity of measurements of glacial isostatic adjustment to the Earth's three-dimensional viscosity structure. The derivatives have a number of applications within the inverse method. Firstly, they can be used within a gradient-based optimisation method to find a model which minimises some data misfit function. The derivatives can also be used to quantify the uncertainty in such a model and hence to provide understanding of which parts of the model are well constrained. Finally, they enable construction of measurements which provide sensitivity to a particular part of the model space. We illustrate both the forward and inverse aspects with numerical examples in a spherically symmetric earth model.

  11. A Synthetic Study on the Resolution of 2D Elastic Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Cui, C.; Wang, Y.

    2017-12-01

Gradient-based full waveform inversion is an effective method in seismic studies: it makes full use of the information contained in seismic records and is capable of providing a more accurate model of the interior of the earth at a relatively low computational cost. However, the strong non-linearity of the problem brings about many difficulties in the assessment of its resolution. Synthetic inversions are therefore helpful before an inversion based on real data is made. The checker-board test is a commonly used method, but it is not always reliable due to the significant difference between a checker-board and the true model. Our study aims to provide a basic understanding of the resolution of 2D elastic inversion by examining three main factors that affect the inversion result: (1) the structural characteristics of the model; (2) the level of similarity between the initial model and the true model; and (3) the spatial distribution of sources and receivers. We performed about 150 synthetic inversions to demonstrate how each factor contributes to the quality of the result, and compared the inversion results with those achieved by checker-board tests. The study can be a useful reference for assessing the resolution of an inversion in addition to regular checker-board tests, or for determining whether the seismic data of a specific region are sufficient for a successful inversion.

  12. Three-dimensional inversion of multisource array electromagnetic data

    NASA Astrophysics Data System (ADS)

    Tartaras, Efthimios

    Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. 
I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.

  13. Use of Linear Prediction Uncertainty Analysis to Guide Conditioning of Models Simulating Surface-Water/Groundwater Interactions

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; White, J.; Doherty, J.

    2011-12-01

Linear prediction uncertainty analysis in a Bayesian framework was applied to guide the conditioning of an integrated surface-water/groundwater model that will be used to predict the effects of groundwater withdrawals on surface-water and groundwater flows. Linear prediction uncertainty analysis is an effective approach for identifying (1) raw and processed data most effective for model conditioning prior to inversion, (2) specific observations and periods of time critically sensitive to specific predictions, and (3) additional observation data that would reduce model uncertainty relative to specific predictions. We present results for a two-dimensional groundwater model of a 2,186 km² area of the Biscayne aquifer in south Florida implicitly coupled to a surface-water routing model of the actively managed canal system. The model domain includes 5 municipal well fields withdrawing more than 1 Mm³/day and 17 operable surface-water control structures that control freshwater releases from the Everglades and freshwater discharges to Biscayne Bay. More than 10 years of daily observation data from 35 groundwater wells and 24 surface-water gages are available to condition model parameters. A dense parameterization was used to fully characterize the contribution of the inversion null space to predictive uncertainty and included bias-correction parameters. This approach allows better resolution of the boundary between the inversion null space and solution space. Bias-correction parameters (e.g., rainfall, potential evapotranspiration, and structure flow multipliers) absorb information that is present in structural noise that may otherwise contaminate the estimation of more physically-based model parameters. This allows greater precision in predictions that are entirely solution-space dependent, and reduces the propensity for bias in predictions that are not.
Results show that application of this analysis is an effective means of identifying those surface-water and groundwater data, both raw and processed, that minimize predictive uncertainty, while simultaneously identifying the maximum solution-space dimensionality of the inverse problem supported by the data.

  14. Next Generation Robots for STEM Education andResearch at Huston Tillotson University

    DTIC Science & Technology

    2017-11-10

    dynamics through the following command: roslaunch mtb_lab6_feedback_linearization gravity_compensation.launch Part B: Gravity Inversion : After...understood the system’s natural dynamics. roslaunch mtb_lab6_feedback_linearization gravity_compensation.launch Part B: Gravity Inversion ...is created using the following command: roslaunch mtb_lab6_feedback_linearization gravity_inversion.launch Gravity inversion is just one

  15. Probing the solar core with low-degree p modes

    NASA Astrophysics Data System (ADS)

    Roxburgh, I. W.; Vorontsov, S. V.

    2002-01-01

We address the question of what could be learned about the solar core structure if the seismic data were limited to low-degree modes only. The results of three different experiments are described. The first is the linearized structural inversion of the p-mode frequencies of a solar model modified slightly in the energy-generating core, using the original (unmodified) model as an initial guess. In the second experiment, we invert the solar p-mode frequencies measured in the 32-month subset of BiSON data (Chaplin et al. 1998), degraded with additional 0.1 μHz random errors, using a model of 2.6 Gyr age from the solar evolutionary sequence as an initial approximation. This second inversion is non-linear. In the third experiment, we compare the same set of BiSON frequencies with a current reference solar model.

  16. Charge-based MOSFET model based on the Hermite interpolation polynomial

    NASA Astrophysics Data System (ADS)

    Colalongo, Luigi; Richelli, Anna; Kovacs, Zsolt

    2017-04-01

An accurate charge-based compact MOSFET model is developed using the third-order Hermite interpolation polynomial to approximate the relation between surface potential and inversion charge in the channel. This new formulation of the drain current retains the simplicity of the most advanced charge-based compact MOSFET models, such as BSIM, ACM and EKV, but is developed without requiring their crude linearization of the inversion charge. Hence, the asymmetry and the non-linearity in the channel are accurately accounted for. Nevertheless, the expression of the drain current can be worked out to be analytically equivalent to BSIM, ACM and EKV. Furthermore, thanks to this new mathematical approach, the slope factor is rigorously defined in all regions of operation and no empirical assumption is required.
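A third-order Hermite interpolation polynomial matches both the function values and the first derivatives at the two interval endpoints. A generic evaluator looks like this; it is a sketch of the interpolation scheme itself, not of the model's surface-potential-to-charge relation.

```python
def hermite3(x0, x1, f0, f1, d0, d1, x):
    # Cubic Hermite interpolant on [x0, x1]: reproduces the values
    # f0, f1 and the derivatives d0, d1 at the endpoints exactly.
    h = x1 - x0
    t = (x - x0) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2   # basis for f0
    h10 = t * (1 - t) ** 2             # basis for d0
    h01 = t * t * (3 - 2 * t)          # basis for f1
    h11 = t * t * (t - 1)              # basis for d1
    return h00 * f0 + h10 * h * d0 + h01 * f1 + h11 * h * d1
```

Because the cubic Hermite interpolant reproduces any cubic exactly, it can follow a non-linear charge-potential relation without the piecewise linearization the abstract criticizes.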

  17. A developed nearly analytic discrete method for forward modeling in the frequency domain

    NASA Astrophysics Data System (ADS)

    Liu, Shaolin; Lang, Chao; Yang, Hui; Wang, Wenshuai

    2018-02-01

High-efficiency forward modeling methods play a fundamental role in full waveform inversion (FWI). In this paper, the developed nearly analytic discrete (DNAD) method is proposed to accelerate frequency-domain forward modeling processes. We first derive the discretization of frequency-domain wave equations via numerical schemes based on the nearly analytic discrete (NAD) method to obtain a linear system. The coefficients of the numerical stencils are optimized to make the linear system easier to solve and to minimize computing time. Wavefield simulation and numerical dispersion analysis are performed to compare the numerical behavior of the DNAD method with that of the conventional NAD method. The results demonstrate the superiority of our proposed method. Finally, the DNAD method is implemented in frequency-domain FWI, and high-resolution inversion results are obtained.

  18. A Nonlinear Dynamic Inversion Predictor-Based Model Reference Adaptive Controller for a Generic Transport Model

    NASA Technical Reports Server (NTRS)

    Campbell, Stefan F.; Kaneshige, John T.

    2010-01-01

Presented here is a Predictor-Based Model Reference Adaptive Control (PMRAC) architecture for a generic transport aircraft. At its core, this architecture features a three-axis, non-linear, dynamic-inversion controller. Command inputs for this baseline controller are provided by pilot roll-rate, pitch-rate, and sideslip commands. This paper first thoroughly presents the baseline controller, followed by a description of the PMRAC adaptive augmentation to this control system. Results are presented via a full-scale, nonlinear simulation of NASA's Generic Transport Model (GTM).

  19. A matched-peak inversion approach for ocean acoustic travel-time tomography

    PubMed

    Skarsoulis

    2000-03-01

A new approach for the inversion of travel-time data is proposed, based on the matching between model arrivals and observed peaks. Using the linearized model relations between sound-speed and arrival-time perturbations about a set of background states, arrival times and associated errors are calculated on a fine grid of model states discretizing the sound-speed parameter space. Each model state can explain (identify) a number of observed peaks in a particular reception lying within the uncertainty intervals of the corresponding predicted arrival times. The model states that explain the maximum number of observed peaks are considered the more likely parametric descriptions of the reception; these model states can be described in terms of mean values and variances, providing a statistical answer (matched-peak solution) to the inversion problem. A basic feature of the matched-peak inversion approach is that each reception can be treated independently, i.e., no constraints are imposed by previous-reception identification or inversion results. Accordingly, there is no need for initialization of the inversion procedure and, furthermore, discontinuous travel-time data can be treated. The matched-peak inversion method is demonstrated by application to 9-month-long travel-time data from the Thetis-2 tomography experiment in the western Mediterranean Sea.
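The peak-matching criterion can be sketched directly: each candidate model state predicts arrival times with uncertainty intervals, and the states explaining the most observed peaks are retained. This is a minimal sketch with illustrative names, not the experiment's actual identification code.

```python
import numpy as np

def matched_peak(model_arrivals, model_sigmas, observed_peaks, n_sigma=2.0):
    # For each candidate model state, count the observed peaks that fall
    # inside the uncertainty interval of some predicted arrival time;
    # the states with the maximum count form the matched-peak solution.
    counts = []
    for times, sig in zip(model_arrivals, model_sigmas):
        lo, hi = times - n_sigma * sig, times + n_sigma * sig
        hit = [np.any((p >= lo) & (p <= hi)) for p in observed_peaks]
        counts.append(sum(hit))
    counts = np.asarray(counts)
    return np.flatnonzero(counts == counts.max()), counts
```

Means and variances over the returned set of best-matching states would then give the statistical answer the abstract describes.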

  20. The Determination of the Large-Scale Circulation of the Pacific Ocean from Satellite Altimetry using Model Green's Functions

    NASA Technical Reports Server (NTRS)

    Stammer, Detlef; Wunsch, Carl

    1996-01-01

A Green's function method for obtaining an estimate of the ocean circulation using both a general circulation model and altimetric data is demonstrated. The fundamental assumption is that the model is so accurate that the differences between the observations and the model-estimated fields obey a linear dynamics. In the present case, the calculations are demonstrated for model/data differences occurring on a very large scale, where the linearization hypothesis appears to be a good one. A semi-automatic linearization of the Bryan/Cox general circulation model is effected by calculating the model response to a series of isolated (in both space and time) geostrophically balanced vortices. These resulting impulse responses or 'Green's functions' then provide the kernels for a linear inverse problem. The method is first demonstrated with a set of 'twin experiments' and then with real data spanning the entire model domain and a year of TOPEX/POSEIDON observations. Our present focus is on the estimate of the time-mean and annual cycle of the model. Residuals of the inversion/assimilation are largest in the western tropical Pacific, and are believed to reflect primarily geoid error. Vertical resolution diminishes with depth with 1 year of data. The model mean is modified such that the subtropical gyre is weakened by about 1 cm/s and the center of the gyre shifted southward by about 10 deg. Corrections to the flow field at the annual cycle suggest that the dynamical response is weak except in the tropics, where the estimated seasonal cycle of the low-latitude current system is of the order of 2 cm/s. The underestimation of observed fluctuations can be related to the inversion on the coarse spatial grid, which does not permit full resolution of the tropical physics. The methodology is easily extended to higher resolution, to the use of spatially correlated errors, and to other data types.

  1. Azimuthal Seismic Amplitude Variation with Offset and Azimuth Inversion in Weakly Anisotropic Media with Orthorhombic Symmetry

    NASA Astrophysics Data System (ADS)

    Pan, Xinpeng; Zhang, Guangzhi; Yin, Xingyao

    2018-01-01

    Seismic amplitude variation with offset and azimuth (AVOaz) inversion is well known as a popular and pragmatic tool for estimating fracture parameters. A single set of vertical fractures aligned along a preferred horizontal direction, embedded in a horizontally layered medium, can be considered an effective long-wavelength orthorhombic medium. Estimation of Thomsen's weak-anisotropy (WA) parameters and fracture weaknesses plays an important role in characterizing the orthorhombic anisotropy in a weakly anisotropic medium. Our goal is to demonstrate an orthorhombic anisotropic AVOaz inversion approach that characterizes the orthorhombic anisotropy using observable wide-azimuth seismic reflection data in a fractured reservoir, under the assumption of orthorhombic symmetry. Combining Thomsen's WA theory and the linear-slip model, we first derive a perturbation in the stiffness matrix of a weakly anisotropic medium with orthorhombic symmetry under the assumption of small WA parameters and fracture weaknesses. Using the perturbation matrix and scattering function, we then derive an expression for the linearized PP-wave reflection coefficient in terms of P- and S-wave moduli, density, Thomsen's WA parameters, and fracture weaknesses in such an orthorhombic medium, which avoids the complicated nonlinear relationship between the orthorhombic anisotropy and azimuthal seismic reflection data. Incorporating azimuthal seismic data and Bayesian inversion theory, the maximum a posteriori solutions for Thomsen's WA parameters and fracture weaknesses are estimated using a Cauchy a priori probability distribution and smooth initial models of the model parameters to enhance the inversion resolution, together with a nonlinear iteratively reweighted least-squares strategy. Synthetic examples containing moderate noise demonstrate the feasibility of the derived orthorhombic anisotropic AVOaz inversion method, and the real data illustrate the stability of the inversion for orthorhombic anisotropy in a fractured reservoir.
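    The pairing of a Cauchy prior with iteratively reweighted least squares (IRLS) mentioned above can be illustrated on a generic linearized problem. A toy numpy sketch with hypothetical sizes (G is random, and the two spikes stand in for localized fracture-weakness anomalies), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Linearized problem d = G m with a Cauchy prior on m, solved by IRLS.
n_d, n_m = 80, 20
G = rng.normal(size=(n_d, n_m))
m_true = np.zeros(n_m)
m_true[[3, 11]] = [2.0, -1.5]                 # sparse spikes (hypothetical)
d = G @ m_true + 0.01 * rng.normal(size=n_d)

sigma_d, sigma_m = 0.01, 1.0
m = np.zeros(n_m)
for _ in range(20):
    # The Cauchy penalty sum(log(1 + m_i^2/sigma_m^2)) has gradient
    # 2 m_i / (sigma_m^2 + m_i^2); IRLS replaces it with a quadratic whose
    # weights are refreshed from the current model at each iteration.
    W = np.diag(1.0 / (sigma_m**2 + m**2))
    m = np.linalg.solve(G.T @ G / sigma_d**2 + W, G.T @ d / sigma_d**2)
print(np.round(m[[3, 11]], 2))
```

    The reweighting is what makes the overall scheme non-linear even though each inner solve is a linear least-squares step.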

  2. Surface wave tomography of the European crust and upper mantle from ambient seismic noise

    NASA Astrophysics Data System (ADS)

    LU, Y.; Stehly, L.; Paul, A.

    2017-12-01

    We present a high-resolution 3-D shear wave velocity model of the European crust and upper mantle derived from ambient seismic noise tomography. In this study, we collect 4 years of continuous vertical-component seismic recordings from 1293 broadband stations across Europe (10W-35E, 30N-75N). We analyze group velocity dispersion from 5 s to 150 s period for cross-correlations of more than 0.8 million virtual source-receiver pairs. 2-D group velocity maps are estimated using adaptive parameterization to accommodate the strong heterogeneity of path coverage. The 3-D velocity model is obtained by merging 1-D models inverted at each pixel through a two-step data-driven inversion algorithm: a non-linear Bayesian Monte Carlo inversion, followed by a linearized inversion. The resulting S-wave velocity model and Moho depth are compared with previous geophysical studies: 1) the crustal model and Moho depth show striking agreement with active seismic imaging results; moreover, the model provides valuable new information, such as a strong difference in the European Moho along two seismic profiles in the Western Alps (Cifalps and ECORS-CROP); 2) the upper mantle model displays strong similarities with published models even at 150 km depth, a range usually imaged using earthquake records.

  3. ITOUGH2(UNIX). Inverse Modeling for TOUGH2 Family of Multiphase Flow Simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, S.

    1999-03-01

    ITOUGH2 provides inverse modeling capabilities for the TOUGH2 family of numerical simulators for non-isothermal multiphase flows in fractured-porous media. ITOUGH2 can be used for estimating parameters by automatic model calibration, for sensitivity analyses, and for uncertainty propagation analyses (linear and Monte Carlo simulations). Any input parameter to the TOUGH2 simulator can be estimated based on any type of observation for which a corresponding TOUGH2 output is calculated. ITOUGH2 solves a non-linear least-squares problem using direct or gradient-based minimization algorithms. A detailed residual and error analysis is performed, which includes the evaluation of model identification criteria. ITOUGH2 can also be run in forward mode, solving subsurface flow problems related to nuclear waste isolation, oil, gas, and geothermal reservoir engineering, and vadose zone hydrology.
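    The calibration task ITOUGH2 automates, fitting simulator parameters to observations by non-linear least squares with a gradient-based algorithm, can be sketched in miniature with SciPy. The forward model and both parameters below are hypothetical stand-ins for a TOUGH2 output curve:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical two-parameter forward model standing in for a simulator output.
t = np.linspace(0.1, 5.0, 25)

def forward(p):
    k, a = p
    return k * (1.0 - np.exp(-a * t))

rng = np.random.default_rng(6)
p_true = np.array([3.0, 0.8])
obs = forward(p_true) + 0.02 * rng.normal(size=t.size)

# Trust-region reflective minimization of the residual vector.
res = least_squares(lambda p: forward(p) - obs, x0=[1.0, 1.0], method="trf")
print(np.round(res.x, 2))
```

    The residual and Jacobian information computed along the way is also what feeds the kind of a posteriori error analysis the abstract describes.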

  4. A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    CUI, C.; Hou, W.

    2017-12-01

    Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time, phase and so on. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion fall easily into local minima. In addition, 3D inversion methods based on the acoustic approximation ignore elastic effects in the real seismic wavefield and make the inversion harder. As a result, the accuracy of the final inversion result relies heavily on the quality of the initial model. To improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequency, is applied. But the absence of very low frequencies (< 3 Hz) in field data is still a bottleneck in FWI. By extracting ultra-low-frequency data from field data, envelope inversion is able to recover the low-wavenumber model with a demodulation operator (envelope operator), even though the low-frequency data do not really exist in the field data. To improve the efficiency and viability of the inversion, in this study we propose a joint method of envelope inversion combined with hybrid-domain FWI. First, we developed 3D elastic envelope inversion, and the misfit function and the corresponding gradient operator were derived. Then we performed hybrid-domain FWI with the envelope inversion result as the initial model, which provides the low-wavenumber component of the model. Here, forward modeling is implemented in the time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques with two levels of parallelism. In the first level, the inversion tasks are decomposed and assigned to each computation node by shot number. In the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrate that the combined envelope inversion + hybrid-domain FWI obtains a much more faithful and accurate result than conventional hybrid-domain FWI, and that CPU/GPU heterogeneous parallel computation substantially improves performance.
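    The demodulation idea is easy to see on a synthetic trace: the envelope (magnitude of the analytic signal) of a band-limited trace recovers amplitude variations far below the lowest recorded frequency. A small sketch with hypothetical frequencies, using the Hilbert transform as the envelope operator:

```python
import numpy as np
from scipy.signal import hilbert

# Hypothetical trace: a 1 Hz "geology" signal amplitude-modulating a 30 Hz
# carrier, so frequencies below 3 Hz are absent from the recorded spectrum.
t = np.linspace(0.0, 2.0, 2000)
low = 1.5 + np.cos(2.0 * np.pi * 1.0 * t)      # hidden low-frequency signal
trace = low * np.sin(2.0 * np.pi * 30.0 * t)   # what is actually "recorded"

# Envelope operator: magnitude of the analytic signal demodulates the trace.
envelope = np.abs(hilbert(trace))

# Away from the edges, the envelope tracks the hidden low-frequency signal.
err = np.max(np.abs(envelope[200:-200] - low[200:-200]))
print(err)
```

    It is this recovered ultra-low-frequency content that lets envelope inversion supply the low-wavenumber starting model for the subsequent FWI stage.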

  5. FAST TRACK PAPER: Non-iterative multiple-attenuation methods: linear inverse solutions to non-linear inverse problems - II. BMG approximation

    NASA Astrophysics Data System (ADS)

    Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing

    2004-12-01

    The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution or F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate. These assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data. We showed in that paper that a linear and non-iterative solution to the problem of attenuating free-surface multiples which is as accurate as iterative non-linear solutions can be constructed for OBS data. We here present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.

  6. Delineating chalk sand distribution of Ekofisk formation using probabilistic neural network (PNN) and stepwise regression (SWR): Case study Danish North Sea field

    NASA Astrophysics Data System (ADS)

    Haris, A.; Nafian, M.; Riyanto, A.

    2017-07-01

    The Danish North Sea fields comprise several formations (Ekofisk, Tor, and Cromer Knoll) ranging in age from Paleocene to Miocene. In this study, seismic and well log data sets are integrated to determine the chalk sand distribution in the Danish North Sea field. The integration is carried out using seismic inversion analysis and seismic multi-attribute analysis. The seismic inversion algorithm used to derive acoustic impedance (AI) is a model-based technique. The derived AI is then used as an external attribute for the input of the multi-attribute analysis. The multi-attribute analysis generates linear and non-linear transformations among well log properties. In the linear case, the transformation is selected by weighted step-wise linear regression (SWR), while the non-linear model is obtained using probabilistic neural networks (PNN). The porosity estimated by PNN fits the well log data better than the SWR result. This is expected, since PNN performs non-linear regression, so the relationship between the attribute data and the predicted log can be optimized. The distribution of chalk sand has been successfully identified and characterized by porosity values ranging from 23% up to 30%.

  7. Iterative updating of model error for Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Calvetti, Daniela; Dunlop, Matthew; Somersalo, Erkki; Stuart, Andrew

    2018-02-01

    In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only limited full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations when the data is finite dimensional. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large particle limit under a simplifying assumption. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.
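    In the linear Gaussian case described above, one pass of the iteration can be written out in a few lines: solve with the reduced model while ignoring the modeling error, estimate the error's mean and covariance from samples drawn around the current estimate, then fold those statistics into the noise model and solve again. A toy numpy sketch (both forward operators and all sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40

# Accurate and reduced (approximate) forward operators -- hypothetical example.
A_full = np.tril(np.ones((n, n))) / n          # fine "integration" model
A_red = A_full.copy()
A_red[:, n // 2:] = 0.0                        # crude reduction: drops half the physics

gamma_pr, gamma_noise = 1.0, 0.01
x_true = np.sin(np.linspace(0.0, np.pi, n))
y = A_full @ x_true + gamma_noise * rng.normal(size=n)

def map_estimate(A, y_obs, err_cov):
    # Posterior mean for y = A x + e, x ~ N(0, gamma_pr^2 I), e ~ N(0, C).
    C = err_cov + gamma_noise**2 * np.eye(n)
    return np.linalg.solve(A.T @ np.linalg.solve(C, A) + np.eye(n) / gamma_pr**2,
                           A.T @ np.linalg.solve(C, y_obs))

# Iteration 0: pretend the reduced model is exact.
x0 = map_estimate(A_red, y, np.zeros((n, n)))

# Iteration 1: sample around the current estimate, estimate the modeling-error
# statistics eps = (A_full - A_red) x, and fold them into the noise model.
samples = x0[None, :] + gamma_pr * rng.normal(size=(200, n))
errs = samples @ (A_full - A_red).T
m, C_me = errs.mean(axis=0), np.cov(errs, rowvar=False)
x1 = map_estimate(A_red, y - m, C_me)

print(np.linalg.norm(x0 - x_true), np.linalg.norm(x1 - x_true))
```

    The error-aware solve down-weights the observations that the reduced model cannot explain, which is the mechanism behind the accuracy gains the paper demonstrates.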

  8. Application of a stochastic inverse to the geophysical inverse problem

    NASA Technical Reports Server (NTRS)

    Jordan, T. H.; Minster, J. B.

    1972-01-01

    The inverse problem for gross earth data can be reduced to an underdetermined linear system of integral equations of the first kind. A theory is discussed for computing particular solutions to this linear system based on the stochastic inverse theory presented by Franklin. The stochastic inverse is derived and related to the generalized inverse of Penrose and Moore. A Backus-Gilbert type tradeoff curve is constructed for the problem of estimating the solution to the linear system in the presence of noise. It is shown that the stochastic inverse represents an optimal point on this tradeoff curve. A useful form of the solution autocorrelation operator as a member of a one-parameter family of smoothing operators is derived.

  9. Finite frequency shear wave splitting tomography: a model space search approach

    NASA Astrophysics Data System (ADS)

    Mondal, P.; Long, M. D.

    2017-12-01

    Observations of seismic anisotropy provide key constraints on past and present mantle deformation. A common method for characterizing upper mantle anisotropy is to measure shear wave splitting parameters (delay time and fast direction). However, the interpretation is not straightforward, because splitting measurements represent an integration of structure along the ray path. A tomographic approach that allows for localization of anisotropy is desirable; however, tomographic inversion for anisotropic structure is a daunting task, since 21 parameters are needed to describe general anisotropy. Such a large parameter space does not allow a straightforward application of tomographic inversion. Building on previous work on finite frequency shear wave splitting tomography, this study aims to develop a framework for SKS splitting tomography with a new parameterization of anisotropy and a model space search approach. We reparameterize the full elastic tensor, reducing the number of parameters to three (a measure of strength based on symmetry considerations for olivine, plus the dip and azimuth of the fast symmetry axis). We compute Born-approximation finite frequency sensitivity kernels relating model perturbations to splitting intensity observations. The strong dependence of the sensitivity kernels on the starting anisotropic model, and thus the strong non-linearity of the inverse problem, makes a linearized inversion infeasible. Therefore, we implement a Markov chain Monte Carlo technique in the inversion procedure. We have performed tests with synthetic data sets to evaluate computational costs and infer the resolving power of our algorithm for synthetic models with multiple anisotropic layers. Our technique can resolve anisotropic parameters on length scales of ~50 km for realistic station and event configurations for dense broadband experiments. We are proceeding towards applications to real data sets, with an initial focus on the High Lava Plains of Oregon.
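    The model-space search at the heart of this approach can be miniaturized: below, a random-walk Metropolis sampler recovers two anisotropy-style parameters (a strength and a fast-axis azimuth) from noisy data generated by a deliberately simplified, hypothetical forward model, with no linearization required. This illustrates only the sampling strategy, not the study's sensitivity kernels:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for splitting-intensity data versus source polarization azimuth.
phis = np.linspace(0.0, np.pi, 30)
true_theta = np.array([0.7, np.pi / 5])     # (strength, fast-axis azimuth)

def forward(theta):
    strength, axis = theta
    return strength * np.sin(2.0 * (phis - axis))

sigma = 0.05
data = forward(true_theta) + sigma * rng.normal(size=phis.size)

def log_post(theta):
    if not (0.0 < theta[0] < 2.0 and 0.0 <= theta[1] < np.pi):
        return -np.inf                      # uniform prior bounds
    r = data - forward(theta)
    return -0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis: explore the model space instead of linearizing.
theta = np.array([1.0, 1.0])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(scale=[0.05, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)

post_mean = np.mean(chain[5000:], axis=0)   # discard burn-in
print(post_mean)
```

    The retained samples approximate the posterior, so parameter uncertainties come for free, which is one motivation for preferring sampling over a single linearized solution.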

  10. A note on convergence of solutions of total variation regularized linear inverse problems

    NASA Astrophysics Data System (ADS)

    Iglesias, José A.; Mercier, Gwenael; Scherzer, Otmar

    2018-05-01

    In a recent paper by Chambolle et al (2017 Inverse Problems 33 015002) it was proven that if the subgradient of the total variation at the noise free data is not empty, the level-sets of the total variation denoised solutions converge to the level-sets of the noise free data with respect to the Hausdorff distance. The condition on the subgradient corresponds to the source condition introduced by Burger and Osher (2007 Multiscale Model. Simul. 6 365–95), who proved convergence-rate results with respect to the Bregman distance under this condition. We generalize the result of Chambolle et al to total variation regularization of general linear inverse problems under such a source condition. As particular applications we present denoising in bounded and unbounded, convex and non-convex domains, deblurring and inversion of the circular Radon transform. In all these examples the convergence result applies. Moreover, we illustrate the convergence behavior through numerical examples.

  11. Inverse dynamics of a 3 degree of freedom spatial flexible manipulator

    NASA Technical Reports Server (NTRS)

    Bayo, Eduardo; Serna, M.

    1989-01-01

    A technique is presented for solving the inverse dynamics and kinematics of a 3-degree-of-freedom spatial flexible manipulator. The proposed method finds the joint torques necessary to produce a specified end effector motion. Since the inverse dynamic problem in elastic manipulators is closely coupled to the inverse kinematic problem, the solution of the first also renders the displacements and rotations at any point of the manipulator, including the joints. Furthermore, the formulation is complete in the sense that it includes all the nonlinear terms due to the large rotation of the links. The Timoshenko beam theory is used to model the elastic characteristics, and the resulting equations of motion are discretized using the finite element method. An iterative solution scheme is proposed that relies on local linearization of the problem. The solution of each linearization is carried out in the frequency domain. The performance and capabilities of this technique are tested through simulation analysis. Results show the potential use of this method for the smooth motion control of space telerobots.

  12. Easy way to determine quantitative spatial resolution distribution for a general inverse problem

    NASA Astrophysics Data System (ADS)

    An, M.; Feng, M.

    2013-12-01

    The computation of the spatial resolution of a solution is nontrivial and more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic studies, almost uniformly neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be suggested via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices can provide resolution length information, and the computation of the resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be determined directly via a simple one-parameter nonlinear inversion performed on limited pairs of random synthetic models and their inverse solutions. The whole procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the inversion method used to obtain the solution. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.
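    For a damped linear inverse problem, the conventional resolution matrix referred to above can be written down explicitly: the estimated model is the true model seen through R. A numpy sketch with an arbitrary random kernel (all sizes hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.normal(size=(60, 30))       # arbitrary linear kernel (hypothetical)
mu = 5.0                            # damping

# Damped least-squares inverse: m_est = (G^T G + mu I)^{-1} G^T d.
# For noise-free data d = G m_true this gives m_est = R m_true, with:
R = np.linalg.solve(G.T @ G + mu * np.eye(30), G.T @ G)

m_true = rng.normal(size=30)
m_est = np.linalg.solve(G.T @ G + mu * np.eye(30), G.T @ (G @ m_true))
print(np.allclose(m_est, R @ m_true))
```

    Row i of R is the averaging kernel for parameter i, and its spread is one quantitative definition of resolution length; the statistical resolution matrix of the abstract replaces this explicit algebra with random-model experiments when forming R directly is infeasible.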

  13. Is 3D true non linear traveltime tomography reasonable ?

    NASA Astrophysics Data System (ADS)

    Herrero, A.; Virieux, J.

    2003-04-01

    The data sets requiring 3D analysis tools in the context of seismic exploration (both onshore and offshore experiments) or natural seismicity (micro-seismicity surveys or post-event measurements) are more and more numerous. Classical linearized tomographies, and also earthquake localisation codes, need an accurate 3D background velocity model. However, if the medium is complex and a priori information is not available, a 1D analysis cannot provide an adequate background velocity image. Moreover, the design of acquisition layouts is often intrinsically 3D and renders even 2D approaches difficult, especially in natural seismicity cases. Thus, the solution relies on the use of a true non-linear 3D approach, which allows exploration of the model space to identify an optimal velocity image. The problem then becomes a practical one, and its feasibility depends on the available computing resources (memory and time). In this presentation, we show that tackling a 3D traveltime tomography problem with an extensive non-linear approach, combining fast traveltime estimators based on level-set methods with optimisation techniques such as a multiscale strategy, is feasible. Moreover, because inhomogeneous inversion parameters are easier to manage in a non-linear approach, we describe how to perform a joint non-linear inversion for the seismic velocities and the source locations.

  14. Assessing non-uniqueness: An algebraic approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasco, Don W.

    Geophysical inverse problems are endowed with a rich mathematical structure. When discretized, most differential and integral equations of interest are algebraic (polynomial) in form. Techniques from algebraic geometry and computational algebra provide a means to address questions of existence and uniqueness for both linear and non-linear inverse problems. In a sense, the methods extend ideas which have proven fruitful in treating linear inverse problems.

  15. Bayesian linearized amplitude-versus-frequency inversion for quality factor and its application

    NASA Astrophysics Data System (ADS)

    Yang, Xinchao; Teng, Long; Li, Jingnan; Cheng, Jiubing

    2018-06-01

    We propose a straightforward attenuation inversion method that utilizes the amplitude-versus-frequency (AVF) characteristics of seismic data. A new linearized approximation of the angle- and frequency-dependent reflectivity in viscoelastic media is derived. We then use the presented equation to implement Bayesian linear AVF inversion. The inversion result includes not only P-wave and S-wave velocities and densities, but also P-wave and S-wave quality factors. Synthetic tests show that AVF inversion surpasses AVA inversion for quality factor estimation. However, a higher signal-to-noise ratio (SNR) is necessary for AVF inversion. To show its feasibility, we apply both the new Bayesian AVF inversion and conventional AVA inversion to tight gas reservoir data from the Sichuan Basin in China. Considering the SNR of the field data, a combination of AVF inversion for attenuation parameters and AVA inversion for elastic parameters is recommended. The result reveals that attenuation estimates can serve as a useful complement to AVA inversion results for the detection of tight gas reservoirs.

  16. Joint inversion of regional and teleseismic earthquake waveforms

    NASA Astrophysics Data System (ADS)

    Baker, Mark R.; Doser, Diane I.

    1988-03-01

    A least squares joint inversion technique for regional and teleseismic waveforms is presented. The mean square error between seismograms and synthetics is minimized using true amplitudes. Matching true amplitudes in modeling requires meaningful estimates of modeling uncertainties and of seismogram signal-to-noise ratios. This also permits calculating linearized uncertainties on the solution based on accuracy and resolution. We use a priori estimates of earthquake parameters to stabilize unresolved parameters, and for comparison with a posteriori uncertainties. We verify the technique on synthetic data, and on the 1983 Borah Peak, Idaho (M = 7.3), earthquake. We demonstrate the inversion on the August 1954 Rainbow Mountain, Nevada (M = 6.8), earthquake and find parameters consistent with previous studies.

  17. Surface wave tomography of Europe from ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Lu, Yang; Stehly, Laurent; Paul, Anne

    2017-04-01

    We present a European-scale high-resolution 3-D shear wave velocity model derived from ambient seismic noise tomography. In this study, we collect 4 years of continuous seismic recordings from 1293 stations across much of the European region (10˚W-35˚E, 30˚N-75˚N), which yields more than 0.8 million virtual station pairs. This data set compiles records from 67 seismic networks, both permanent and temporary, from the EIDA (European Integrated Data Archive). Rayleigh wave group velocities are measured at each station pair using the multiple-filter analysis technique. Group velocity maps are estimated through a linearized tomographic inversion algorithm at periods from 5 s to 100 s. Adaptive parameterization is used to accommodate heterogeneity in data coverage. We then apply a two-step data-driven inversion method to obtain the shear wave velocity model: a Monte Carlo inversion to build the starting model, followed by a linearized inversion for further improvement. Finally, Moho depth and its uncertainty are determined over most of the study region by identifying sharp velocity discontinuities and analysing their sharpness. The resulting velocity model shows good agreement with the main geological features and previous geophysical studies. Moho depth coincides well with that obtained from active seismic experiments. A focus on the Greater Alpine region (covered by the AlpArray seismic network) displays a clear crustal thinning that follows the arcuate shape of the Alps from the southern French Massif Central to southern Germany.

  18. Simultaneous travel time tomography for updating both velocity and reflector geometry in triangular/tetrahedral cell model

    NASA Astrophysics Data System (ADS)

    Bai, Chao-ying; He, Lei-yu; Li, Xing-wang; Sun, Jia-yu

    2018-05-01

    To conduct forward and simultaneous inversion in a complex geological model, including an irregular topography (or irregular reflector or velocity anomaly), in this paper we combine our previous multiphase arrival tracking method in triangular (2D) or tetrahedral (3D) cell models (referred to as the triangular shortest-path method, TSPM) with a linearized inversion solver (a damped minimum-norm and constrained least-squares problem solved using the conjugate gradient method, referred to as DMNCLS-CG) to formulate a simultaneous travel time inversion method for updating both velocity and reflector geometry using multiphase arrival times. In the triangular/tetrahedral cells, we deduce the partial derivative of velocity variation with respect to the depth change of the reflector. The numerical simulation results show that the computational accuracy can be tuned to high precision in forward modeling and that irregular velocity anomalies and reflector geometry can be accurately captured in the simultaneous inversion, because the triangular/tetrahedral cells can easily conform to an irregular topography or subsurface interface.

  20. Parallel three-dimensional magnetotelluric inversion using adaptive finite-element method. Part I: theory and synthetic study

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.

    2015-07-01

    This paper presents a distributed magnetotelluric inversion scheme based on an adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, the meshes for the forward and inverse problems are decoupled. For the calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gains, the EM fields for each frequency are calculated using independent meshes, in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems, based on the linearized model resolution matrix, was developed. To make this algorithm suitable for large-scale problems, a low-rank approximation of the linearized model resolution matrix is used. In order to fill the gap between the initial and true model complexities and to better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighbourhoods of the points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable to resolve features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit a dependency on the initial model guess. Additionally, it is demonstrated that the adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemispherical object with sufficient resolution, starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.
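    The refinement rule described above, compute the spatial variation of the imaged parameter and subdivide the cells where it is largest, can be sketched in one dimension. Everything here (the mesh, the tanh "model", the two-cells-per-pass budget) is a hypothetical stand-in for the paper's 3-D FEM machinery:

```python
import numpy as np

def model(x):
    # Imaged parameter with a sharp feature at x = 0.5 (hypothetical).
    return np.tanh(40.0 * (x - 0.5))

nodes = np.linspace(0.0, 1.0, 9)    # coarse starting mesh

for _ in range(3):
    vals = model(nodes)
    variation = np.abs(np.diff(vals))        # per-cell variation
    worst = np.argsort(variation)[-2:]       # pick the two worst cells
    mids = 0.5 * (nodes[worst] + nodes[worst + 1])
    nodes = np.sort(np.concatenate([nodes, mids]))

# Refinement should cluster nodes near the feature at x = 0.5.
near = int(np.sum(np.abs(nodes - 0.5) < 0.1))
print(len(nodes), near)
```

    In the actual scheme the "variation" would be measured on the inverse mesh and balanced against the goal-oriented error estimator used for the forward meshes.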

  1. A novel post-processing scheme for two-dimensional electrical impedance tomography based on artificial neural networks

    PubMed Central

    2017-01-01

    Objective Electrical Impedance Tomography (EIT) is a powerful non-invasive technique for imaging applications. The goal is to estimate the electrical properties of living tissues by measuring the potential at the boundary of the domain. Being safe with respect to patient health and having no known hazards, EIT is an attractive and promising technology. However, it suffers from a particular technical difficulty: solving a nonlinear inverse problem in real time. Several nonlinear approaches have been proposed as a replacement for the linear solver, but in practice very few are capable of stable, high-quality, and real-time EIT imaging, because of their very low robustness to errors and inaccurate modeling, or because they require considerable computational effort. Methods In this paper, a post-processing technique based on an artificial neural network (ANN) is proposed to obtain a nonlinear solution to the inverse problem, starting from a linear solution. While common reconstruction methods based on ANNs estimate the solution directly from the measured data, the method proposed here enhances the solution obtained from a linear solver. Conclusion Applying a linear reconstruction algorithm before applying an ANN reduces the effects of noise and modeling errors. Hence, this approach significantly reduces the error associated with solving 2D inverse problems using machine-learning-based algorithms. Significance This work presents radical enhancements in the stability of nonlinear methods for biomedical EIT applications. PMID:29206856
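    The post-processing idea, training a network to map a linear solver's output toward the nonlinear solution, can be caricatured with a tiny fully connected network in plain numpy. The "linear reconstructions" and the tanh target below are hypothetical stand-ins, not EIT data or the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical data: linear-solver outputs X and the non-linear targets Y
# they should be mapped to by the post-processing network.
def nonlinear_truth(x):
    return np.tanh(2.0 * x)

X = rng.normal(size=(500, 8))
Y = nonlinear_truth(X)

# One-hidden-layer network trained by full-batch gradient descent on MSE.
W1 = 0.1 * rng.normal(size=(8, 32)); b1 = np.zeros(32)
W2 = 0.1 * rng.normal(size=(32, 8)); b2 = np.zeros(8)
lr = 0.05
losses = []
for _ in range(500):
    H = np.tanh(X @ W1 + b1)                 # forward pass
    P = H @ W2 + b2
    E = P - Y
    losses.append(float(np.mean(E**2)))
    dP = 2.0 * E / X.shape[0]                # backpropagation
    dW2 = H.T @ dP; db2 = dP.sum(axis=0)
    dH = (dP @ W2.T) * (1.0 - H**2)
    dW1 = X.T @ dH; db1 = dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print(losses[0], losses[-1])
```

    Because the network only refines an already-regularized linear reconstruction, its input is far better conditioned than raw boundary measurements, which is the stability argument the paper makes.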

  2. Three-dimensional forward modeling and inversion of marine CSEM data in anisotropic conductivity structures

    NASA Astrophysics Data System (ADS)

    Han, B.; Li, Y.

    2016-12-01

    We present a three-dimensional (3D) forward and inverse modeling code for marine controlled-source electromagnetic (CSEM) surveys in anisotropic media. The forward solution is based on a primary/secondary field approach, in which secondary fields are solved using a staggered finite-volume (FV) method and primary fields are computed analytically for 1D isotropic background models. It is shown that extending the isotropic 3D FV algorithm to a triaxial anisotropic one is rather straightforward, although additional coefficients are required to account for the full conductivity tensor. To solve the linear system resulting from the FV discretization of Maxwell's equations, both iterative Krylov solvers (e.g. BiCGSTAB) and direct solvers (e.g. MUMPS) have been implemented, making the code flexible for different computing platforms and problems. For iterative solutions, the linear system in terms of electromagnetic potentials (A-Phi) is used to precondition the original linear system, transforming the discretized curl-curl equations into discretized Laplace-like equations with much more favorable numerical properties. Numerical experiments suggest that this A-Phi preconditioner can dramatically improve the convergence rate of an iterative solver, and that high accuracy can be achieved without divergence correction even at low frequencies. To efficiently calculate the sensitivities, i.e. the derivatives of CSEM data with respect to the conductivity tensor, the adjoint method is employed. For inverse modeling, triaxial anisotropy is taken into account. Since the number of model parameters to be resolved for triaxial anisotropic media is two or three times that for isotropic media, the data-space version of the Gauss-Newton (GN) minimization method is preferred due to its lower computational cost compared with the traditional model-space GN method. We demonstrate the effectiveness of the code with synthetic examples.

  3. Restart Operator Meta-heuristics for a Problem-Oriented Evolutionary Strategies Algorithm in Inverse Mathematical MISO Modelling Problem Solving

    NASA Astrophysics Data System (ADS)

    Ryzhikov, I. S.; Semenkin, E. S.

    2017-02-01

    This study is focused on solving an inverse mathematical modelling problem for dynamical systems based on observation data and control inputs. The mathematical model is sought in the form of a linear differential equation, which determines a system with multiple inputs and a single output, together with a vector of initial point coordinates. The described problem is complex and multimodal; for this reason, the proposed evolutionary optimization technique, which is oriented towards dynamical system identification problems, was applied. To improve its performance, an algorithm restart operator was implemented.

  4. Inverse problem of HIV cell dynamics using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    González, J. A.; Guzmán, F. S.

    2017-01-01

    In order to describe the cell dynamics of T-cells in a patient infected with HIV, we use a flavour of Perelson's model. This is a non-linear system of ordinary differential equations that describes the evolution of the healthy, latently infected, and infected T-cell concentrations and of the free virus. Different parameters in the equations give different dynamics. Assuming the concentrations of these cell types are known for a particular patient, the inverse problem consists in estimating the parameters of the model. We solve this inverse problem using a Genetic Algorithm (GA) that minimizes the error between the solutions of the model and the data from the patient. These errors depend on the parameters of the GA, such as the mutation rate and population size, although a detailed analysis of this dependence will be described elsewhere.
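The parameter-estimation loop described above can be sketched with a toy three-compartment model and SciPy's differential evolution, a population-based evolutionary optimizer standing in for the authors' GA; the equations and rate constants below are illustrative, not Perelson's calibrated model:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def tcell_model(t, y, beta, c):
    """Toy HIV dynamics: healthy T-cells (T), infected cells (I), free virus (V)."""
    T, I, V = y
    dT = 10.0 - 0.01 * T - beta * T * V    # supply, natural death, infection
    dI = beta * T * V - 0.5 * I            # infection, infected-cell death
    dV = 100.0 * I - c * V                 # virion production, clearance
    return [dT, dI, dV]

t_obs = np.linspace(0, 10, 25)
true_p = (2e-4, 3.0)                       # (infection rate beta, clearance c)
y0 = [1000.0, 0.0, 1e-3]
data = solve_ivp(tcell_model, (0, 10), y0, t_eval=t_obs, args=true_p).y
log_data = np.log1p(np.maximum(data, 0.0))

def misfit(p):
    """Error between model solution and 'patient' data, in log space."""
    sol = solve_ivp(tcell_model, (0, 10), y0, t_eval=t_obs, args=tuple(p))
    if not sol.success or sol.y.shape != data.shape:
        return 1e12                        # penalize failed integrations
    return float(np.mean((np.log1p(np.maximum(sol.y, 0.0)) - log_data) ** 2))

res = differential_evolution(misfit, bounds=[(1e-5, 1e-3), (0.5, 10.0)],
                             seed=1, popsize=10, maxiter=30)
print("estimated (beta, c):", res.x, " misfit:", res.fun)
```

A GA proper would add crossover/mutation operators over a binary or real encoding, but the fitness function (the ODE misfit) is the same in either scheme.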

  5. Towards a new technique to construct a 3D shear-wave velocity model based on converted waves

    NASA Astrophysics Data System (ADS)

    Hetényi, G.; Colavitti, L.

    2017-12-01

    A 3D model is essential in all branches of solid Earth sciences because geological structures can be heterogeneous and change significantly in their lateral dimension. The main target of this research is to build a crustal S-wave velocity structure in 3D. The currently popular methodologies to construct 3D shear-wave velocity models are Ambient Noise Tomography (ANT) and Local Earthquake Tomography (LET). Here we propose a new technique to map Earth discontinuities and velocities at depth based on the analysis of receiver functions. The 3D model is obtained by simultaneously inverting P-to-S converted waveforms recorded at a dense array. The individual velocity models corresponding to each trace are extracted from the 3D initial model along ray paths that are calculated using the shooting method, and the velocity model is updated during the inversion. We consider a spherical approximation of ray propagation using a global velocity model (iasp91, Kennett and Engdahl, 1991) for the teleseismic part, while we adopt Cartesian coordinates and a local velocity model for the crust. During the inversion process we work with a multi-layer crustal model for shear-wave velocity, with a flexible mesh for the depth of the interfaces. The inversion of RFs is a complex problem because the amplitude and the arrival time of different phases depend in a non-linear way on the depth of the interfaces and on the characteristics of the velocity structure. To manage this inversion problem we envisage the stochastic Neighbourhood Algorithm (NA, Sambridge, 1999), whose goal is to find an ensemble of models that sample the good data-fitting regions of a multidimensional parameter space. Depending on the studied area, this method can accommodate independent and complementary geophysical data (gravity, active seismics, LET, ANT, etc.), helping to reduce the non-linearity of the inversion.
Our first focus of application is the Central Alps, where a 20-year dataset of high-quality teleseismic events recorded at 81 stations is available, together with a high-resolution P-wave velocity model (Diehl et al., 2009). We plan to extend the 3D shear-wave velocity inversion method to the entire Alpine domain in the frame of the AlpArray project, and to apply it to other areas with dense networks of broadband seismometers.

  6. SU-C-207B-06: Comparison of Registration Methods for Modeling Pathologic Response of Esophageal Cancer to Chemoradiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riyahi, S; Choi, W; Bhooshan, N

    2016-06-15

    Purpose: To compare linear and deformable registration methods for evaluation of tumor response to chemoradiation therapy (CRT) in patients with esophageal cancer. Methods: Linear and multi-resolution BSpline deformable registration were performed on pre- and post-CRT CT/PET images of 20 patients with esophageal cancer. For both registration methods, CT images were registered using the Mean Square Error (MSE) metric; because PET registration is multi-modality, PET images were warped using the transformation obtained with a Mutual Information (MI) metric from the corresponding CT. Similarity of the warped CT/PET was quantitatively evaluated using normalized mutual information (NMI), and plausibility of the deformation field was assessed using the inverse consistency error. To evaluate tumor response, four groups of tumor features were examined: (1) conventional PET/CT features, e.g. SUV and diameter; (2) clinical parameters, e.g. TNM stage and histology; (3) spatial-temporal PET features that describe the intensity, texture, and geometry of the tumor; and (4) all features combined. Dominant features were identified using 10-fold cross-validation, and a Support Vector Machine (SVM) was deployed for tumor response prediction, with accuracy evaluated by the ROC Area Under the Curve (AUC). Results: The mean and standard deviation of NMI for deformable registration using the MSE metric were 0.2±0.054, versus 0.1±0.026 for linear registration, showing higher NMI for deformable registration. Likewise, for the MI metric, deformable registration yielded 0.13±0.035 compared with 0.12±0.037 for its linear counterpart. The inverse consistency error for the MSE metric was 4.65±2.49 for deformable registration and 1.32±2.3 for linear registration, i.e. smaller for linear registration. The same conclusion was obtained for MI in terms of inverse consistency error. The AUC for both linear and deformable registration was 1, showing no difference in terms of response evaluation.
    Conclusion: Deformable registration showed better NMI than linear registration; however, the inverse consistency error was lower for linear registration. We do not expect to see a significant difference when warping PET images using deformable versus linear registration. This work was supported in part by the National Cancer Institute Grant R01CA172638.

  7. Range of earth structure nonuniqueness implied by body wave observations.

    NASA Technical Reports Server (NTRS)

    Wiggins, R. A.; Mcmechan, G. A.; Toksoz, M. N.

    1973-01-01

    The Herglotz-Wiechert integral for the direct inversion of ray parameter versus distance curves can be manipulated to find the envelope of all possible models consistent with geometrical body wave observations (travel time and ray parameter versus distance). Such an extremal inversion approach has been used to find the uncertainty bounds for the velocity structure in the mantle and core. It is found, for example, that there is an uncertainty of plus or minus 40 km in the radius of the inner core boundary, plus or minus 18 km at the core-mantle boundary, and plus or minus 35 km at the 435-km transition zone. The velocity uncertainty is about plus or minus 0.08 km/sec for P and S waves in the lower mantle and about plus or minus 0.20 km/sec in the core. Experiments with various combinations of ray types in the core indicate that rather crude observations of SKKS-SKS travel times confine the range of possible models far more dramatically than do the most precise estimates of PmKP travel times. Comparisons of results from extremal inversion and linearized perturbation inversions indicate that body wave behavior is too strongly nonlinear for linearized schemes to be effective for predicting uncertainty.

  8. LQS_INVERSION v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiss, Chester J

    FORTRAN90 codes for inversion of electrostatic geophysical data in terms of three subsurface parameters in a single-well, oilfield environment: the linear charge density of the steel well casing (L), the point charge associated with an induced fracture filled with a conductive contrast agent (Q), and the location of said fracture (s). The theory is described in detail in Weiss et al. (Geophysics, 2016). The inversion strategy is to loop over candidate fracture locations and, at each one, minimize the squared Cartesian norm of the data misfit to arrive at L and Q. The solution method is to construct the 2x2 linear system of normal equations and compute L and Q algebraically. Practical application: oilfield environments where observed electrostatic geophysical data can reasonably be represented by a simple L-Q-s model. This may include hydrofracking operations, as postulated in Weiss et al. (2016), but no field validation examples have so far been provided.
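The looped L-Q-s strategy can be sketched on synthetic data; the kernel functions below are illustrative stand-ins, not the electrostatic Green's functions of Weiss et al. (2016):

```python
import numpy as np

# Synthetic observation depths along the well and a "true" model
z = np.linspace(0.0, 100.0, 60)          # observation depths (m)
L_true, Q_true, s_true = 2.0, 5.0, 40.0  # line charge, point charge, fracture depth

def g_line(z):
    """Illustrative kernel for the casing's linear charge density."""
    return 1.0 / (1.0 + 0.02 * z)

def g_point(z, s):
    """Illustrative kernel for a point charge at depth s."""
    return 1.0 / (1.0 + (z - s) ** 2 / 50.0)

d = L_true * g_line(z) + Q_true * g_point(z, s_true)   # noiseless data

# Loop over candidate fracture depths; at each, solve the 2x2 normal equations
best = (np.inf, None)
for s in np.linspace(0.0, 100.0, 101):
    a1, a2 = g_line(z), g_point(z, s)
    A = np.array([[a1 @ a1, a1 @ a2],
                  [a2 @ a1, a2 @ a2]])
    b = np.array([a1 @ d, a2 @ d])
    L_est, Q_est = np.linalg.solve(A, b)               # algebraic L, Q
    misfit = np.sum((L_est * a1 + Q_est * a2 - d) ** 2)
    if misfit < best[0]:
        best = (misfit, (L_est, Q_est, s))

print("best-fit (L, Q, s):", best[1])   # recovers (2.0, 5.0, 40.0)
```

Because the model is linear in L and Q once s is fixed, the outer scan over s plus a closed-form 2x2 solve replaces any iterative nonlinear optimization.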

  9. Shear wave velocity variation across the Taupo Volcanic Zone, New Zealand, from receiver function inversion

    USGS Publications Warehouse

    Bannister, S.; Bryan, C.J.; Bibby, H.M.

    2004-01-01

    The Taupo Volcanic Zone (TVZ), New Zealand is a region characterized by very high magma eruption rates and extremely high heat flow, which is manifest in high-temperature geothermal waters. The shear wave velocity structure across the region is inferred using non-linear inversion of receiver functions, which were derived from teleseismic earthquake data. Results from the non-linear inversion, and from forward synthetic modelling, indicate low S velocities at ~6-16 km depth near the Rotorua and Reporoa calderas. We infer these low-velocity layers to represent the presence of high-level bodies of partial melt associated with the volcanism. Receiver functions at other stations are complicated by reverberations associated with near-surface sedimentary layers. The receiver function data also indicate that the Moho lies between 25 and 30 km, deeper than the 15 ± 2 km depth previously inferred for the crust-mantle boundary beneath the TVZ. © 2004 RAS.

  10. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
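As a minimal illustration of the Gm = d setup with upper- and lower-bound prior information, the sketch below uses SciPy's bounded least squares as a stand-in; it is not the MRE algorithm itself, which derives a full probability density rather than a single bounded fit:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(3)

# Forward operator G maps an unknown source history m to observed data d.
# The exponential factor mimics a smoothing (ill-posed) transport kernel.
n_param, n_data = 20, 15
G = rng.random((n_data, n_param)) * np.exp(
        -0.3 * np.abs(np.arange(n_data)[:, None] - np.arange(n_param)[None, :]))
m_true = np.sin(np.linspace(0, np.pi, n_param))        # smooth source history in [0, 1]
d = G @ m_true                                         # noiseless observations

# Prior information: lower and upper bounds on m (the role bounds play in MRE)
res = lsq_linear(G, d, bounds=(0.0, 1.0))
print("residual norm:", np.linalg.norm(G @ res.x - d))
```

With fewer data than parameters the problem is underdetermined, which is exactly the regime where MRE's prior expected value and bounds select one solution from the infinite feasible set.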

  11. An Inverse Modeling Approach to Estimating Phytoplankton Pigment Concentrations from Phytoplankton Absorption Spectra

    NASA Technical Reports Server (NTRS)

    Moisan, John R.; Moisan, Tiffany A. H.; Linkswiler, Matthew A.

    2011-01-01

    Phytoplankton absorption spectra and High-Performance Liquid Chromatography (HPLC) pigment observations from the Eastern U.S. and global observations from NASA's SeaBASS archive are used in a linear inverse calculation to extract pigment-specific absorption spectra. Using these pigment-specific absorption spectra to reconstruct the phytoplankton absorption spectra results in high correlations at all visible wavelengths (r² from 0.83 to 0.98) and linear regressions with slopes ranging from 0.8 to 1.1. Higher correlations (r² from 0.75 to 1.00) are obtained in the visible portion of the spectra when the total phytoplankton absorption spectra are unpackaged by multiplying the entire spectrum by a factor that sets the total absorption at 675 nm to that expected from absorption spectra reconstructed using measured pigment concentrations and laboratory-derived pigment-specific absorption spectra. The derived pigment-specific absorption spectra were further used with the total phytoplankton absorption spectra in a second linear inverse calculation to estimate the various phytoplankton HPLC pigments. A comparison between the estimated and measured pigment concentrations for the 18 pigment fields showed good correlations (r² greater than 0.5) for 7 pigments and very good correlations (r² greater than 0.7) for chlorophyll a and fucoxanthin. Higher correlations result when the analysis is carried out at more local geographic scales. The ability to estimate phytoplankton pigments using pigment-specific absorption spectra is critical for using hyperspectral inverse models to retrieve phytoplankton pigment concentrations and other Inherent Optical Properties (IOPs) from passive remote sensing observations.
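The second linear inverse calculation (total spectrum to pigment concentrations) can be sketched with nonnegative least squares, using toy Gaussian bands as stand-ins for measured pigment-specific absorption spectra:

```python
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(400, 700, 301)     # wavelength grid (nm)

def band(center, width):
    """Illustrative pigment-specific absorption spectrum (Gaussian band)."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Columns = pigment-specific spectra (toy shapes, e.g. chl a, fucoxanthin, ...)
A = np.column_stack([band(435, 25), band(490, 30), band(675, 15)])
c_true = np.array([1.2, 0.4, 0.8])          # pigment concentrations
total = A @ c_true                          # "measured" total absorption spectrum

# Nonnegative least squares keeps concentrations physically meaningful
c_est, rnorm = nnls(A, total)
print("estimated concentrations:", c_est)   # ≈ [1.2, 0.4, 0.8]
```

The nonnegativity constraint matters here: plain least squares can return negative "concentrations" when the pigment spectra overlap strongly, as they do in the 400-500 nm region.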

  12. On the value of incorporating spatial statistics in large-scale geophysical inversions: the SABRe case

    NASA Astrophysics Data System (ADS)

    Kokkinaki, A.; Sleep, B. E.; Chambers, J. E.; Cirpka, O. A.; Nowak, W.

    2010-12-01

    Electrical Resistance Tomography (ERT) is a popular method for investigating subsurface heterogeneity. The method relies on measuring electrical potential differences and obtaining, through inverse modeling, the underlying electrical conductivity field, which can be related to hydraulic conductivities. The quality of site characterization strongly depends on the utilized inversion technique. Standard ERT inversion methods, though highly computationally efficient, do not consider spatial correlation of soil properties; as a result, they often underestimate the spatial variability observed in earth materials, thereby producing unrealistic subsurface models. Also, these methods do not quantify the uncertainty of the estimated properties, thus limiting their use in subsequent investigations. Geostatistical inverse methods can be used to overcome both these limitations; however, they are computationally expensive, which has hindered their wide use in practice. In this work, we compare a standard Gauss-Newton smoothness constrained least squares inversion method against the quasi-linear geostatistical approach using the three-dimensional ERT dataset of the SABRe (Source Area Bioremediation) project. The two methods are evaluated for their ability to: a) produce physically realistic electrical conductivity fields that agree with the wide range of data available for the SABRe site while being computationally efficient, and b) provide information on the spatial statistics of other parameters of interest, such as hydraulic conductivity. To explore the trade-off between inversion quality and computational efficiency, we also employ a 2.5-D forward model with corrections for boundary conditions and source singularities. The 2.5-D model accelerates the 3-D geostatistical inversion method. New adjoint equations are developed for the 2.5-D forward model for the efficient calculation of sensitivities. 
Our work shows that spatial statistics can be incorporated in large-scale ERT inversions to improve the inversion results without making them computationally prohibitive.

  13. The non-linear response of a muscle in transverse compression: assessment of geometry influence using a finite element model.

    PubMed

    Gras, Laure-Lise; Mitton, David; Crevier-Denoix, Nathalie; Laporte, Sébastien

    2012-01-01

    Most recent finite element models that represent muscles are generic or subject-specific models that use complex constitutive laws. Identification of the parameters of such complex constitutive laws can be an important limitation for subject-specific approaches. The aim of this study was to assess the possibility of modelling muscle behaviour in compression with a parametric model and a simple constitutive law. A quasi-static compression test was performed on the muscles of dogs. A parametric finite element model was designed using a linear elastic constitutive law. A multi-variate analysis was performed to assess the effects of geometry on muscle response. An inverse method was used to identify Young's modulus. The non-linear response of the muscles was reproduced using a subject-specific geometry and a linear elastic law. Thus, a simple muscle model can be used to obtain a bio-faithful biomechanical response.

  14. Adapting Better Interpolation Methods to Model Amphibious MT Data Along the Cascadian Subduction Zone.

    NASA Astrophysics Data System (ADS)

    Parris, B. A.; Egbert, G. D.; Key, K.; Livelybrooks, D.

    2016-12-01

    Magnetotellurics (MT) is an electromagnetic technique used to model the Earth's interior electrical conductivity structure. MT data can be analyzed using iterative, linearized inversion techniques to generate models imaging, in particular, the conductive partial melts and aqueous fluids that play critical roles in subduction zone processes and volcanism. For example, the Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment provides amphibious data useful for imaging subducted fluids from the trench to the mantle wedge corner. When using MOD3DEM (Egbert et al. 2012), a finite difference inversion package, we have encountered problems inverting sea-floor stations in particular, due to the strong conductivity gradients nearby. As a work-around, we have found that denser, finer model grids near the land-sea interface produce better inversions, as characterized by reduced data residuals. This is thought to be partly due to our ability to more accurately capture topography and bathymetry. We are experimenting with improved interpolation schemes that more accurately track EM fields across cell boundaries, with an eye to enhancing the accuracy of the simulated responses and, thus, the inversion results. We are adapting how MOD3DEM interpolates EM fields in two ways. The first seeks to improve the weighting functions for interpolants to better address current continuity across grid boundaries. Electric fields are interpolated using a tri-linear spline technique, in which the eight nearest electric field estimates are each given weights determined by the technique, a kind of weighted average. We are modifying these weights to include cross-boundary conductivity ratios to better model current continuity.
We are also adapting some of the techniques discussed in Shantsev et al. (2014) to enhance the accuracy of the interpolated fields calculated by our forward solver, as well as to better approximate the sensitivities passed to the software's Jacobian, which are used to generate a new forward model during each iteration of the inversion.

  15. On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction.

    PubMed

    Crop, F; Van Rompaye, B; Paelinck, L; Vakaet, L; Thierens, H; De Wagter, C

    2008-07-21

    The purpose of this study was twofold: to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiographic (Gafchromic) film. Sometimes, an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve, with dose as a function of OD (inverse regression) or OD as a function of dose (inverse prediction). The application of a simple linear regression fit is invalid because the heteroscedasticity of the data is not taken into account. This can lead to erroneous results originating from the calibration process itself and thus to lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method for creating calibration curves. We found that the OLS inverse regression method can lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% over a 0-400 cGy range. We also developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to higher accuracy in film dosimetry.
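The heteroscedasticity point can be illustrated with simulated calibration data whose OD noise grows with dose; the response and noise model below are invented for illustration and are not the paper's film data:

```python
import numpy as np

rng = np.random.default_rng(7)
dose = np.linspace(10, 400, 40)                    # delivered doses (cGy)
sigma = 0.001 + 0.0002 * dose                      # OD noise grows with dose
od = 0.05 + 0.004 * dose + rng.normal(0, sigma)    # toy dose -> OD calibration data

X = np.column_stack([np.ones_like(dose), dose])    # design matrix [1, dose]

# Ordinary least squares: ignores the non-constant variance
beta_ols, *_ = np.linalg.lstsq(X, od, rcond=None)

# Weighted least squares: weight each point by 1/sigma^2
W = 1.0 / sigma ** 2
beta_wls = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * od))

# Inverse prediction: read an unknown dose back from a measured OD
od_meas = 0.85                                     # true dose is 200 cGy here
for name, (b0, b1) in [("OLS", beta_ols), ("WLS", beta_wls)]:
    print(f"{name}: dose({od_meas}) = {(od_meas - b0) / b1:.1f} cGy")
```

With heteroscedastic noise the WLS fit down-weights the noisy high-dose points, which is the statistically correct treatment the abstract argues for; OLS gives every point equal influence regardless of its variance.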

  16. Stochastic inversion of cross-borehole radar data for metalliferous vein detection

    NASA Astrophysics Data System (ADS)

    Zeng, Zhaofa; Huai, Nan; Li, Jing; Zhao, Xueyu; Liu, Cai; Hu, Yingsa; Zhang, Ling; Hu, Zuzhi; Yang, Hui

    2017-12-01

    In the exploration and evaluation of metalliferous veins with a cross-borehole radar system, traditional linear inversion methods (e.g. least squares inversion, LSQR) only recover indirect parameters (permittivity, resistivity, or velocity) to estimate the target structure. They cannot accurately reflect the geological parameters of the metalliferous veins' media properties. In order to obtain the intrinsic geological parameters and internal distribution, in this paper we build a metalliferous vein model based on stochastic effective medium theory and carry out stochastic inversion and parameter estimation based on a Monte Carlo sampling algorithm. Compared with conventional LSQR, the stochastic inversion yields higher-resolution permittivity and velocity models of the target body, and we can estimate more accurately the distribution characteristics of the anomaly and the target's internal parameters. This provides a new approach for evaluating the properties of complex target media.
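A minimal Metropolis-style Monte Carlo sampler illustrates the sampling idea, with a one-parameter toy forward model standing in for the radar physics:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy forward model: travel-time-like data depending on one "permittivity" p
x = np.linspace(0, 1, 30)
def forward(p):
    return np.sqrt(p) * (1 + x)          # illustrative, not radar physics

p_true, noise = 6.0, 0.05
d = forward(p_true) + rng.normal(0, noise, x.size)

def log_like(p):
    """Gaussian log-likelihood of the data given parameter p."""
    return -0.5 * np.sum((forward(p) - d) ** 2) / noise ** 2

# Metropolis random walk over p with a uniform prior on [1, 20]
p_cur, ll_cur = 10.0, log_like(10.0)
samples = []
for _ in range(5000):
    p_prop = p_cur + rng.normal(0, 0.2)              # random-walk proposal
    if 1.0 <= p_prop <= 20.0:                        # reject outside the prior
        ll_prop = log_like(p_prop)
        if np.log(rng.random()) < ll_prop - ll_cur:  # Metropolis acceptance
            p_cur, ll_cur = p_prop, ll_prop
    samples.append(p_cur)

post = np.array(samples[1000:])                      # discard burn-in
print("posterior mean and std:", post.mean(), post.std())
```

Unlike a single least-squares point estimate, the retained sample set characterizes the full distribution of plausible parameter values, which is what supports statements about the target's internal property distribution.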

  17. 3D electromagnetic modelling of a TTI medium and TTI effects in inversion

    NASA Astrophysics Data System (ADS)

    Jaysaval, Piyoosh; Shantsev, Daniil; de la Kethulle de Ryhove, Sébastien

    2016-04-01

    We present a numerical algorithm for 3D electromagnetic (EM) forward modelling in conducting media with general electric anisotropy. The algorithm is based on the finite-difference discretization of frequency-domain Maxwell's equations on a Lebedev grid, in which all components of the electric field are collocated but half a spatial step staggered with respect to the magnetic field components, which also are collocated. This leads to a system of linear equations that is solved using a stabilized biconjugate gradient method with a multigrid preconditioner. We validate the accuracy of the numerical results for layered and 3D tilted transverse isotropic (TTI) earth models representing typical scenarios used in the marine controlled-source EM method. It is then demonstrated that not taking into account the full anisotropy of the conductivity tensor can lead to misleading inversion results. For simulation data corresponding to a 3D model with a TTI anticlinal structure, a standard vertical transverse isotropic inversion is not able to image a resistor, while for a 3D model with a TTI synclinal structure the inversion produces a false resistive anomaly. If inversion uses the proposed forward solver that can handle TTI anisotropy, it produces resistivity images consistent with the true models.

  18. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
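The "sketching" step can be illustrated with a Gaussian sketching matrix applied to a tall synthetic least-squares problem; this is a toy stand-in for the RGA/PCGA machinery, not the MADS implementation:

```python
import numpy as np

rng = np.random.default_rng(11)

# Tall least-squares problem: many observations, few parameters
n_obs, n_param = 10000, 30
G = rng.standard_normal((n_obs, n_param))
m_true = rng.standard_normal(n_param)
d = G @ m_true + 0.01 * rng.standard_normal(n_obs)

# Gaussian sketching matrix compresses the observations before inversion
k = 200                                   # sketch size: k << n_obs
S = rng.standard_normal((k, n_obs)) / np.sqrt(k)
m_sketch, *_ = np.linalg.lstsq(S @ G, S @ d, rcond=None)

# Reference: solve the full (expensive) problem for comparison
m_full, *_ = np.linalg.lstsq(G, d, rcond=None)
print("||m_sketch - m_full|| =", np.linalg.norm(m_sketch - m_full))
```

The sketched problem has k rows instead of n_obs, so its cost scales with the retained information content; random projections approximately preserve the least-squares geometry, so the compressed solution stays close to the full one.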

  19. Minimization of model representativity errors in identification of point source emission from atmospheric concentration measurements

    NASA Astrophysics Data System (ADS)

    Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar

    2017-11-01

    Estimation of an unknown atmospheric release from a finite set of concentration measurements is an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. This study highlights the effect of minimizing model representativity errors on source estimation. This is described in an adjoint modelling framework and carried out in three steps. First, an estimation of the point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and the corresponding concentrations predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Source estimation is then carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, renormalization and least squares. The proposed methodology and inversion techniques are evaluated in a real scenario using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement is observed in the retrieved source estimates after minimizing the representativity errors.

  20. Estimation for the Linear Model With Uncertain Covariance Matrices

    NASA Astrophysics Data System (ADS)

    Zachariah, Dave; Shariati, Nafiseh; Bengtsson, Mats; Jansson, Magnus; Chatterjee, Saikat

    2014-03-01

    We derive a maximum a posteriori estimator for the linear observation model, where the signal and noise covariance matrices are both uncertain. The uncertainties are treated probabilistically by modeling the covariance matrices with prior inverse-Wishart distributions. The nonconvex problem of jointly estimating the signal of interest and the covariance matrices is tackled by a computationally efficient fixed-point iteration as well as an approximate variational Bayes solution. The statistical performance of estimators is compared numerically to state-of-the-art estimators from the literature and shown to perform favorably.

  1. Monte Carlo Volcano Seismic Moment Tensors

    NASA Astrophysics Data System (ADS)

    Waite, G. P.; Brill, K. A.; Lanza, F.

    2015-12-01

    Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of the model space. The model space is randomized over centroid location and moment tensor eigenvectors. Point sources densely sample the summit area, and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single-force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well-resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.

  2. Solving ill-posed inverse problems using iterative deep neural networks

    NASA Astrophysics Data System (ADS)

    Adler, Jonas; Öktem, Ozan

    2017-12-01

    We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularisation theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularising functional. The method results in a gradient-like iterative scheme, where the ‘gradient’ component is learned using a convolutional network that includes the gradients of the data discrepancy and regulariser as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Shepp-Logan phantom and a head CT. The outcome is compared against filtered backprojection and total variation reconstruction; the proposed method provides a 5.4 dB PSNR improvement over the total variation reconstruction while being significantly faster, giving reconstructions of 512 × 512 pixel images in about 0.4 s using a single graphics processing unit (GPU).
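The gradient-like iteration has the generic form x_{k+1} = x_k + Λθ(∇ data discrepancy, ∇ regulariser). A minimal NumPy sketch of that structure follows, with the learned convolutional operator Λθ replaced by a fixed, hand-chosen linear combination, so it reduces to plain Tikhonov-regularised gradient descent; the weights `w_data` and `w_reg` stand in for what the network would learn.

```python
import numpy as np

def learned_gradient_scheme(A, y, n_iter=500, w_data=0.9, w_reg=0.1, step=0.01):
    """Gradient-like iteration x_{k+1} = x_k + Lambda(grad_data, grad_reg),
    with the learned operator Lambda replaced by a fixed negative
    linear combination (placeholder for a trained CNN)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g_data = A.T @ (A @ x - y)   # gradient of the data discrepancy ||Ax - y||^2 / 2
        g_reg = x                    # gradient of a Tikhonov regulariser ||x||^2 / 2
        x = x - step * (w_data * g_data + w_reg * g_reg)
    return x
```

In the paper's setting the fixed combination is replaced by a network trained on representative data, which is what allows the scheme to outperform hand-tuned regularisation.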

  3. Solution Methods for 3D Tomographic Inversion Using A Highly Non-Linear Ray Tracer

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Ballard, S.; Young, C. J.; Chang, M.

    2008-12-01

    To develop 3D velocity models to improve nuclear explosion monitoring capability, we have developed a 3D tomographic modeling system that traces rays using an implementation of the Um and Thurber pseudo-bending approach, with full enforcement of Snell's Law in 3D at the major discontinuities. Due to the highly non-linear nature of the ray tracer, however, we are forced to substantially damp the inversion in order to converge on a reasonable model. Unfortunately, the amount of damping is not known a priori and can significantly increase the number of calls to the computationally expensive ray tracer and the least-squares matrix solver. If the damping term is too small, the solution step size either produces an unrealistic model velocity change or places the solution in or near a local minimum from which extrication is nearly impossible. If the damping term is too large, convergence can be very slow or premature convergence can occur. Standard approaches involve running inversions with a suite of damping parameters to find the best model. A better solution methodology is to take advantage of existing non-linear solution techniques such as Levenberg-Marquardt (LM) or quasi-Newton iterative solvers. In particular, the LM algorithm was specifically designed to find the minimum of a multi-variate function that is expressed as the sum of squares of non-linear real-valued functions. It has become a standard technique for solving non-linear least-squares problems and is widely adopted in a broad spectrum of disciplines, including the geosciences. At each iteration, the LM approach dynamically varies the level of damping to optimize convergence. When the current estimate of the solution is far from the ultimate solution, LM behaves as a steepest descent method, but it transitions to Gauss-Newton behavior, with near quadratic convergence, as the estimate approaches the final solution. 
We show typical linear solution techniques and how they can lead to local minima if the damping is set too low. We also describe the LM technique and show how it automatically determines the appropriate damping factor as it iteratively converges on the best solution. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
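A compact sketch of the LM damping logic described above (the standard algorithm, not Sandia's implementation): the diagonal of JᵀJ is damped by a factor λ that is relaxed after accepted steps (Gauss-Newton-like) and inflated after rejected ones (steepest-descent-like). The exponential test problem is illustrative.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-2, nu=10.0, n_iter=100):
    """Minimise sum(r(x)^2) with dynamically adjusted damping:
    large lam -> steepest-descent-like steps, small lam -> Gauss-Newton."""
    x = np.asarray(x0, dtype=float)
    cost = np.sum(residual(x) ** 2)
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        JtJ, g = J.T @ J, J.T @ r
        # damped normal equations (Marquardt scaling of the diagonal)
        step = np.linalg.solve(JtJ + lam * np.diag(np.diag(JtJ)), -g)
        new_cost = np.sum(residual(x + step) ** 2)
        if new_cost < cost:      # accept step: relax damping
            x, cost, lam = x + step, new_cost, lam / nu
        else:                    # reject step: increase damping
            lam *= nu
    return x

# Fit y = a * exp(b * t) to noise-free synthetic data
t = np.linspace(0.0, 2.0, 25)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
p_hat = levenberg_marquardt(res, jac, x0=[1.0, -0.5])
```

The accept/reject rule guarantees the cost never increases, which is exactly what makes the automatic damping robust compared with a fixed damping parameter.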

  4. Inversion for the driving forces of plate tectonics

    NASA Technical Reports Server (NTRS)

    Richardson, R. M.

    1983-01-01

    Inverse modeling techniques have been applied to the problem of determining the roles of various forces that may drive and resist plate tectonic motions. Separate linear inverse problems have been solved to find the best fitting pole of rotation for finite element grid point velocities and to find the best combination of force models to fit the observed relative plate velocities for the earth's twelve major plates using the generalized inverse operator. Variance-covariance data on plate motion have also been included. Results emphasize the relative importance of ridge push forces in the driving mechanism. Convergent margin forces are smaller by at least a factor of two, and perhaps by as much as a factor of twenty. Slab pull, apparently, is poorly transmitted to the surface plate as a driving force. Drag forces at the base of the plate are smaller than ridge push forces, although the sign of the force remains in question.
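The weighted generalized-inverse step at the core of such a force-model combination can be sketched as follows; the matrix G (one column of predicted plate velocities per candidate force model) and the "true" force magnitudes are synthetic placeholders, not values from the study.

```python
import numpy as np

# Columns of G: plate velocities predicted by unit-magnitude force models
# (e.g. ridge push, slab pull, basal drag) -- synthetic numbers for illustration.
rng = np.random.default_rng(1)
G = rng.normal(size=(12, 3))          # 12 plate-velocity observations, 3 force models
m_true = np.array([1.0, 0.4, -0.2])   # hypothetical relative force magnitudes
d = G @ m_true                        # observed relative plate velocities
C = np.eye(12)                        # variance-covariance of the velocity data

# Weighted least-squares / generalized-inverse solution
# m = (G^T C^-1 G)^-1 G^T C^-1 d
Ci = np.linalg.inv(C)
m_hat = np.linalg.solve(G.T @ Ci @ G, G.T @ Ci @ d)
```

With a non-trivial C, the same expression incorporates the variance-covariance data on plate motion mentioned in the abstract.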

  5. Inverse square law isothermal property in relativistic charged static distributions

    NASA Astrophysics Data System (ADS)

    Hansraj, Sudan; Qwabe, Nkululeko

    2017-12-01

    We analyze the impact of the inverse square law fall-off of the energy density in a charged isotropic spherically symmetric fluid. Initially, we impose a linear barotropic equation of state p = αρ, but this leads to an intractable differential equation. Next, we consider the neutral isothermal metric of Saslaw et al. [Phys. Rev. D 13, 471 (1996)] in an electric field, and the usual inverse square law of energy density and pressure results, thus preserving the equation of state. Additionally, we discard a linear equation of state and endeavor to find new classes of solutions with the inverse square law fall-off of density. Certain prescribed forms of the spatial and temporal gravitational potentials result in new exact solutions. An interesting result that emerges is that while isothermal fluid spheres are unbounded in the neutral case, this is not so when charge is involved. Indeed, barotropic equations of state exist, and hypersurfaces of vanishing pressure establish a boundary in practically all models. One model was studied in depth and found to satisfy other elementary requirements for physical admissibility, such as a subluminal sound speed as well as gravitational surface redshifts smaller than 2. The Buchdahl [Acta Phys. Pol. B 10, 673 (1965)], Böhmer and Harko [Gen. Relat. Gravit. 39, 757 (2007)] and Andréasson [Commun. Math. Phys. 198, 507 (2009)] mass-radius bounds were also found to be satisfied. Graphical plots utilizing constants selected from the boundary conditions established that the model displayed characteristics consistent with physically viable models.

  6. Effects of induced stress on seismic forward modelling and inversion

    NASA Astrophysics Data System (ADS)

    Tromp, Jeroen; Trampert, Jeannot

    2018-05-01

    We demonstrate how effects of induced stress may be incorporated in seismic modelling and inversion. Our approach is motivated by the accommodation of pre-stress in global seismology. Induced stress modifies both the equation of motion and the constitutive relationship. The theory predicts that induced pressure linearly affects the unstressed isotropic moduli with a slope determined by their adiabatic pressure derivatives. The induced deviatoric stress produces anisotropic compressional and shear wave speeds; the latter result in shear wave splitting. For forward modelling purposes, we determine the weak form of the equation of motion under induced stress. In the context of the inverse problem, we determine induced stress sensitivity kernels, which may be used for adjoint tomography. The theory is illustrated by considering 2-D propagation of SH waves and related Fréchet derivatives based on a spectral-element method.

  7. Quantitative model of diffuse speckle contrast analysis for flow measurement.

    PubMed

    Liu, Jialin; Zhang, Hongchao; Lu, Jian; Ni, Xiaowu; Shen, Zhonghua

    2017-07-01

    Diffuse speckle contrast analysis (DSCA) is a noninvasive optical technique capable of monitoring deep tissue blood flow. However, a detailed study of the speckle contrast model for DSCA has yet to be presented. We deduced the theoretical relationship between speckle contrast and exposure time and further simplified it to a linear approximation model. The feasibility of this linear model was validated with liquid phantoms, which demonstrated that the slope of the linear approximation can rapidly determine the Brownian diffusion coefficient of the turbid media at multiple distances using multi-exposure speckle imaging. Furthermore, we have theoretically quantified the influence of optical properties on the measurement of the Brownian diffusion coefficient, a consequence of the fact that the slope of the linear approximation equals the inverse of the correlation time of the speckle.
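A sketch of how a linear fit recovers the correlation time, assuming the standard speckle contrast expression K² = β(e^(−2x) − 1 + 2x)/(2x²) with x = T/τc for an exponentially decaying field correlation (a common model, not necessarily the exact one used in the paper): at long exposures 1/K² is nearly linear in T with slope 1/(βτc). The parameter values are illustrative.

```python
import numpy as np

beta, tau_c = 0.5, 1e-3                # coherence factor, correlation time [s]
T = np.linspace(10, 50, 9) * tau_c     # multi-exposure times, T >> tau_c

# Speckle contrast for an exponentially decaying field correlation
x = T / tau_c
K2 = beta * (np.exp(-2 * x) - 1 + 2 * x) / (2 * x ** 2)

# Linear approximation: 1/K^2 ~ T/(beta*tau_c) + const,
# so the fitted slope gives the inverse correlation time
slope, _ = np.polyfit(T, 1.0 / K2, 1)
tau_est = 1.0 / (beta * slope)
```

The intercept absorbs the constant offset of the long-exposure expansion, so the slope alone determines τc.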

  8. Hydrodynamic Modeling of Free Surface Interactions and Implications for P and Rg Waves Recorded on the Source Physics Experiments

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Rougier, E.; Knight, E.; Yang, X.; Patton, H. J.

    2013-12-01

    A goal of the Source Physics Experiments (SPE) is to develop explosion source models expanding monitoring capabilities beyond empirical methods. The SPE project combines field experimentation with numerical modelling. The models take into account non-linear processes occurring from the first moment of the explosion as well as complex linear propagation effects of signals reaching far-field recording stations. The hydrodynamic code CASH is used for modelling the high-strain-rate, non-linear response occurring in the material near the source. Our development efforts focused on incorporating in-situ stress and fracture processes. CASH simulates the material response from the near-source, strong shock zone out to the small-strain and ultimately the elastic regime, where a linear code can take over. We developed an interface with the Spectral Element Method code SPECFEM3D, an efficient parallel implementation of a high-order finite-element method. SPECFEM3D allows accurate modelling of wave propagation to remote monitoring distances at low cost. We will present CASH-SPECFEM3D results for SPE1, which was a chemical detonation of about 85 kg of TNT at 55 m depth in a granitic geologic unit. Spallation was observed for SPE1. Keeping the yield fixed, we vary the depth of the source systematically and compute synthetic seismograms to distances where the P and Rg waves are separated, so that analysis can be performed without concern about interference effects due to overlapping energy. We study the time and frequency characteristics of P and Rg waves and analyse them in regard to the impact of free-surface interactions and rock damage resulting from those interactions. We also perform traditional CMT inversions as well as advanced CMT inversions, developed at LANL to take the damage into account. This will allow us to assess the effect of spallation on CMT solutions as well as to validate our inversion procedure. 
Further work will aim to validate the developed models with the data recorded on SPEs. This long-term goal requires taking into account the 3D structure and thus a comprehensive characterization of the site.

  9. Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow

    NASA Astrophysics Data System (ADS)

    Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar

    2014-09-01

    We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially-varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. Adjoint (linearized) Stokes equations, which are characterized by a 4th-order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton's method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite-element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially-varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. 
Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.

  10. Fast inversion of gravity data using the symmetric successive over-relaxation (SSOR) preconditioned conjugate gradient algorithm

    NASA Astrophysics Data System (ADS)

    Meng, Zhaohai; Li, Fengting; Xu, Xuechun; Huang, Danian; Zhang, Dailei

    2017-02-01

    The subsurface three-dimensional (3D) model of density distribution is obtained by solving an under-determined linear equation that is established by gravity data. Here, we describe a new fast gravity inversion method to recover a 3D density model from gravity data. The subsurface is divided into a large number of rectangular blocks, each with an unknown constant density. The gravity inversion method introduces a stabiliser model norm with a depth weighting function to produce smooth models. The depth weighting function is combined with the model norm to counteract the skin effect of the gravity potential field. As the number of density model parameters is NZ (the number of layers in the vertical subsurface domain) times the number of observed gravity data, the inverse problem is strongly underdetermined. Solving the full set of gravity inversion equations is very time-consuming, and applying a new algorithm to the gravity inversion can significantly reduce the number of iterations and the computational time. In this paper, a new symmetric successive over-relaxation (SSOR) preconditioned conjugate gradient (CG) method is shown to be an appropriate algorithm to solve this Tikhonov cost function (gravity inversion equation). The new, faster method is applied to Gaussian noise-contaminated synthetic data to demonstrate its suitability for 3D gravity inversion. To demonstrate the performance of the new algorithm on actual gravity data, we provide a case study that includes ground-based measurement of residual Bouguer gravity anomalies over the Humble salt dome near Houston, Gulf Coast Basin, off the shore of Louisiana. A 3D distribution of salt rock concentration is used to evaluate the inversion results recovered by the new SSOR iterative method. In the test, the density values in the constructed model coincide with the known location and depth of the salt dome.
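The SSOR-preconditioned CG solver can be sketched for a generic symmetric positive definite system as below (dense solves for brevity; a production gravity inversion would use sparse triangular solves, with the Tikhonov normal-equation matrix playing the role of A).

```python
import numpy as np

def ssor_pcg(A, b, omega=1.5, tol=1e-10, max_iter=500):
    """Conjugate gradients preconditioned with the SSOR splitting
    M = (D + wL) D^-1 (D + wU) / (w (2 - w)) for SPD A = L + D + U."""
    D_diag = np.diag(A)
    DwL = np.diag(D_diag) + omega * np.tril(A, -1)

    def apply_Minv(r):                        # z = M^-1 r via forward/backward sweeps
        u = np.linalg.solve(DwL, r)           # (D + wL) u = r
        z = np.linalg.solve(DwL.T, D_diag * u)  # (D + wU) z = D u   (A symmetric)
        return omega * (2.0 - omega) * z

    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p, rz = z.copy(), r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

The preconditioner is applied with only two triangular sweeps per iteration, which is what makes SSOR attractive for the large, structured systems arising in 3D gravity inversion.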

  11. On recovering distributed IP information from inductive source time domain electromagnetic data

    NASA Astrophysics Data System (ADS)

    Kang, Seogi; Oldenburg, Douglas W.

    2016-10-01

    We develop a procedure to invert time domain induced polarization (IP) data for inductive sources. Our approach is based upon the inversion methodology in conventional electrical IP (EIP), which uses a sensitivity function that is independent of time. However, significant modifications are required for inductive source IP (ISIP) because electric fields in the ground do not achieve a steady state. The time-history for these fields needs to be evaluated and then used to define approximate IP currents. The resultant data, either a magnetic field or its derivative, are evaluated through the Biot-Savart law. This forms the desired linear relationship between data and pseudo-chargeability. Our inversion procedure has three steps: (1) Obtain a 3-D background conductivity model. We advocate, where possible, that this be obtained by inverting early-time data that do not suffer significantly from IP effects. (2) Decouple IP responses embedded in the observations by forward modelling the TEM data due to a background conductivity and subtracting these from the observations. (3) Use the linearized sensitivity function to invert data at each time channel and recover pseudo-chargeability. Post-interpretation of the recovered pseudo-chargeabilities at multiple times allows recovery of intrinsic Cole-Cole parameters such as time constant and chargeability. The procedure is applicable to all inductive source survey geometries but we focus upon airborne time domain EM (ATEM) data with a coincident-loop configuration because of the distinctive negative IP signal that is observed over a chargeable body. Several assumptions are adopted to generate our linearized modelling but we systematically test the capability and accuracy of the linearization for ISIP responses arising from different conductivity structures. 
On test examples we show: (1) our decoupling procedure enhances the ability to extract information about existence and location of chargeable targets directly from the data maps; (2) the horizontal location of a target body can be well recovered through inversion; (3) the overall geometry of a target body might be recovered but for ATEM data a depth weighting is required in the inversion; (4) we can recover estimates of intrinsic τ and η that may be useful for distinguishing between two chargeable targets.

  12. Statistical modeling of the reactions Fe(+) + N2O → FeO(+) + N2 and FeO(+) + CO → Fe(+) + CO2.

    PubMed

    Ushakov, Vladimir G; Troe, Jürgen; Johnson, Ryan S; Guo, Hua; Ard, Shaun G; Melko, Joshua J; Shuman, Nicholas S; Viggiano, Albert A

    2015-08-14

    The rates of the reactions Fe(+) + N2O → FeO(+) + N2 and FeO(+) + CO → Fe(+) + CO2 are modeled by statistical rate theory accounting for energy- and angular momentum-specific rate constants for formation of the primary and secondary cationic adducts and their backward and forward reactions. The reactions are both suggested to proceed on sextet and quartet potential energy surfaces with efficient, but probably not complete, equilibration by spin-inversion of the populations of the sextet and quartet adducts. The influence of spin-inversion on the overall reaction rate is investigated. The differences between the two reaction rates are mostly due to different numbers of entrance states (atom + linear rotor or linear rotor + linear rotor, respectively). The reaction Fe(+) + N2O was studied either with (6)Fe(+) or with (4)Fe(+) reactants. Differences in the rate constants of (6)Fe(+) and (4)Fe(+) reacting with N2O are attributed to different contributions from electronically excited potential energy surfaces, as they originate from the open-electronic-shell reactants.

  13. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
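The gradient-based Langevin MCMC component, reduced to its simplest (unadjusted) form on a toy low-dimensional feature space, might look like the sketch below; the Gaussian target stands in for the KPCA-projected posterior, and the step size and chain length are illustrative.

```python
import numpy as np

def langevin_mcmc(grad_log_post, x0, step=0.05, n_steps=20000, seed=0):
    """Unadjusted Langevin iteration
       x <- x + (step/2) * grad log pi(x) + sqrt(step) * N(0, I),
    producing approximate samples from the posterior pi."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for i in range(n_steps):
        x = x + 0.5 * step * grad_log_post(x) + np.sqrt(step) * rng.normal(size=x.size)
        samples[i] = x
    return samples

# Toy posterior in a 2-D "feature space": N(mu, I)
mu = np.array([1.0, -2.0])
samples = langevin_mcmc(lambda x: -(x - mu), x0=np.zeros(2))
```

A Metropolis correction (MALA) would remove the discretisation bias; in the paper's framework the gradient would come from the adjoint model, evaluated through the KPCA projection.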

  14. Application of neural models as controllers in mobile robot velocity control loop

    NASA Astrophysics Data System (ADS)

    Cerkala, Jakub; Jadlovska, Anna

    2017-01-01

    This paper presents the application of inverse neural models used as controllers, in comparison to classical PI controllers, for the velocity tracking control task in a two-wheel, differentially driven mobile robot. The PI controller synthesis is based on a linear approximation of the actuators with equivalent load. In order to obtain relevant datasets for training of the feed-forward multi-layer perceptron used as the neural model, a mathematical model of the mobile robot that combines its kinematic and dynamic properties, such as chassis dimensions, center of gravity offset, friction and actuator parameters, is used. Neural models are trained off-line to act as the inverse dynamics of the DC motors with a particular load, using data collected in a simulation experiment for motor input voltage step changes within a bounded operating area. The performances of the PI controllers versus the inverse neural models in the mobile robot's internal velocity control loops are demonstrated and compared in a simulation experiment of a navigation control task for line-segment motion in the plane.

  15. THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)

    EPA Science Inventory

    This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...

  16. Time-lapse three-dimensional inversion of complex conductivity data using an active time constrained (ATC) approach

    USGS Publications Warehouse

    Karaoulis, M.; Revil, A.; Werkema, D.D.; Minsley, B.J.; Woodruff, W.F.; Kemna, A.

    2011-01-01

    Induced polarization (more precisely the magnitude and phase of impedance of the subsurface) is measured using a network of electrodes located at the ground surface or in boreholes. This method yields important information related to the distribution of permeability and contaminants in the shallow subsurface. We propose a new time-lapse 3-D modelling and inversion algorithm to image the evolution of complex conductivity over time. We discretize the subsurface using hexahedron cells. Each cell is assigned a complex resistivity or conductivity value. Using the finite-element approach, we model the in-phase and out-of-phase (quadrature) electrical potentials on the 3-D grid, which are then transformed into apparent complex resistivity. Inhomogeneous Dirichlet boundary conditions are used at the boundary of the domain. The calculation of the Jacobian matrix is based on the principles of reciprocity. The goal of time-lapse inversion is to determine the change in the complex resistivity of each cell of the spatial grid as a function of time. Each model along the time axis is called a 'reference space model'. This approach can be simplified into an inverse problem looking for the optimum of several reference space models using the approximation that the material properties vary linearly in time between two subsequent reference models. Regularizations in both space domain and time domain reduce inversion artefacts and improve the stability of the inversion problem. In addition, the use of the time-lapse equations allows the simultaneous inversion of data obtained at different times in just one inversion step (4-D inversion). The advantages of this new inversion algorithm are demonstrated on synthetic time-lapse data resulting from the simulation of a salt tracer test in a heterogeneous random material described by an anisotropic semi-variogram. © 2011 The Authors, Geophysical Journal International © 2011 RAS.

  17. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    PubMed

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-12-01

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.
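The inverse-probability-weighting idea used as a comparator above can be illustrated with a toy missing-at-random dropout simulation (a plain IPW mean with known observation probabilities, not the authors' doubly robust estimating functions):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
x = rng.normal(size=n)                    # baseline covariate
y = 2.0 + 1.0 * x + rng.normal(size=n)    # outcome; true marginal mean is 2.0

# MAR dropout: probability of being observed depends on the covariate only
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + x)))
observed = rng.random(n) < p_obs

# The naive mean over completers is biased upward (high-x subjects stay in);
# IPW reweights the completers by the inverse of their observation probability
naive = y[observed].mean()
ipw = np.sum(observed * y / p_obs) / np.sum(observed / p_obs)
```

A doubly robust estimator additionally uses an outcome model, so it stays consistent if either the dropout model or the outcome model is correct.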

  18. Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel

    ERIC Educational Resources Information Center

    El-Gebeily, M.; Yushau, B.

    2008-01-01

    In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equation, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…

  19. Almost but not quite 2D, Non-linear Bayesian Inversion of CSEM Data

    NASA Astrophysics Data System (ADS)

    Ray, A.; Key, K.; Bodin, T.

    2013-12-01

    The geophysical inverse problem can be elegantly stated in a Bayesian framework where a probability distribution can be viewed as a statement of information regarding a random variable. After all, the goal of geophysical inversion is to provide information on the random variables of interest - physical properties of the earth's subsurface. However, though it may be simple to postulate, a practical difficulty of fully non-linear Bayesian inversion is the computer time required to adequately sample the model space and extract the information we seek. As a consequence, in geophysical problems where evaluation of a full 2D/3D forward model is computationally expensive, such as marine controlled source electromagnetic (CSEM) mapping of the resistivity of seafloor oil and gas reservoirs, Bayesian studies have largely been conducted with 1D forward models. While the 1D approximation is indeed appropriate for exploration targets with planar geometry and geological stratification, it only provides a limited, site-specific idea of uncertainty in resistivity with depth. In this work, we extend our fully non-linear 1D Bayesian inversion to a 2D model framework, without requiring the usual regularization of model resistivities in the horizontal or vertical directions used to stabilize quasi-2D inversions. In our approach, we use the reversible jump Markov-chain Monte-Carlo (RJ-MCMC) or trans-dimensional method and parameterize the subsurface in a 2D plane with Voronoi cells. The method is trans-dimensional in that the number of cells required to parameterize the subsurface is variable, and the cells dynamically move around and multiply or combine as demanded by the data being inverted. This approach allows us to expand our uncertainty analysis of resistivity at depth to more than a single site location, allowing for interactions between model resistivities at different horizontal locations along a traverse over an exploration target. 
While the model is parameterized in 2D, we efficiently evaluate the forward response using 1D profiles extracted from the model at the common-midpoints of the EM source-receiver pairs. Since the 1D approximation is locally valid at different midpoint locations, the computation time is far lower than is required by a full 2D or 3D simulation. We have applied this method to both synthetic and real CSEM survey data from the Scarborough gas field on the Northwest shelf of Australia, resulting in a spatially variable quantification of resistivity and its uncertainty in 2D. This Bayesian approach results in a large database of 2D models that comprise a posterior probability distribution, which we can subset to test various hypotheses about the range of model structures compatible with the data. For example, we can subset the model distributions to examine the hypothesis that a resistive reservoir extends over a certain spatial extent. Depending on how this conditions other parts of the model space, light can be shed on the geological viability of the hypothesis. Since tackling spatially variable uncertainty and trade-offs in 2D and 3D is a challenging research problem, the insights gained from this work may prove valuable for subsequent full 2D and 3D Bayesian inversions.

  20. Development of WRF-CO2 4DVAR Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Zheng, T.; French, N. H. F.

    2016-12-01

    Four-dimensional variational (4DVar) assimilation systems have been widely used for CO2 inverse modeling at global scale. At regional scale, however, 4DVar assimilation systems have been lacking. At present, most regional CO2 inverse models use Lagrangian particle backward trajectory tools to compute the influence function in an analytical/synthesis framework. To provide a 4DVar-based alternative, we developed WRF-CO2 4DVAR based on Weather Research and Forecasting (WRF), its chemistry extension (WRF-Chem), and its data assimilation system (WRFDA/WRFPLUS). Unlike WRFDA, WRF-CO2 4DVAR does not optimize the meteorology initial condition; instead, it solves for the optimized CO2 surface fluxes (sources/sinks) constrained by atmospheric CO2 observations. Based on WRFPLUS, we developed tangent linear and adjoint code for CO2 emission, advection, vertical mixing in the boundary layer, and convective transport. Furthermore, we implemented an incremental algorithm to solve for optimized CO2 emission scaling factors by iteratively minimizing the cost function in a Bayes framework. The model sensitivity (of atmospheric CO2 with respect to the emission scaling factor) calculated by the tangent linear and adjoint models agrees well with that calculated by finite differences, indicating the validity of the newly developed code. The effectiveness of WRF-CO2 4DVar for inverse modeling is tested using forward-model generated pseudo-observation data in two experiments: the first-guess CO2 fluxes are overestimated by 50% in the first case and underestimated by 50% in the second. In both cases, WRF-CO2 4DVar reduces the cost function to less than 10^-4 of its initial value in fewer than 20 iterations and successfully recovers the true values of the emission scaling factors. We expect future applications of WRF-CO2 4DVar with satellite observations will provide insights for CO2 regional inverse modeling, including the impacts of model transport error in vertical mixing.
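The tangent-linear/adjoint validity check described above (comparing the model sensitivity against finite differences, plus the dot-product test) can be sketched on a toy nonlinear operator; the forward model here is an arbitrary stand-in, not the WRF transport code.

```python
import numpy as np

# Toy nonlinear "forward model" and its tangent-linear / adjoint operators
def forward(c):
    return np.sin(c) + 0.5 * c ** 2

def tangent_linear(c, dc):          # J(c) dc
    return (np.cos(c) + c) * dc

def adjoint(c, w):                  # J(c)^T w (the Jacobian is diagonal here)
    return (np.cos(c) + c) * w

rng = np.random.default_rng(3)
c, dc, w = rng.normal(size=5), rng.normal(size=5), rng.normal(size=5)

# Tangent-linear check: finite difference of the forward model vs J dc
eps = 1e-6
fd = (forward(c + eps * dc) - forward(c)) / eps
tl = tangent_linear(c, dc)

# Adjoint (dot-product) check: <J dc, w> must equal <dc, J^T w>
lhs = np.dot(tl, w)
rhs = np.dot(dc, adjoint(c, w))
```

For a real assimilation system the same two tests are run on the full transport operators; the dot-product identity must hold to machine precision for the adjoint to produce correct gradients of the cost function.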

  1. Simultaneous elastic parameter inversion in 2-D/3-D TTI medium combined later arrival times

    NASA Astrophysics Data System (ADS)

    Bai, Chao-ying; Wang, Tao; Yang, Shang-bei; Li, Xing-wang; Huang, Guo-jiao

    2016-04-01

Traditional traveltime inversion for anisotropic media is, in general, based on a "weak" assumption about the anisotropic property, which simplifies both the forward part (ray tracing is performed only once) and the inversion part (a linear inversion solver is possible). For some real applications, however, a general (both "weak" and "strong") anisotropic medium should be considered. In such cases, one has to develop a ray tracing algorithm that can handle the general (including "strong") anisotropic medium and also design a non-linear solver for the subsequent tomographic inversion. Meanwhile, it is instructive to investigate how much the tomographic resolution can be improved by introducing the later arrivals. With this motivation, we combined our newly developed ray tracing algorithm (the multistage irregular shortest-path method) for general anisotropic media with a non-linear inversion solver (a damped minimum-norm, constrained least-squares problem solved with a conjugate gradient approach) to formulate a non-linear traveltime inversion for anisotropic media. This anisotropic traveltime inversion procedure is able to incorporate the later (reflected) arrival times. Both 2-D/3-D synthetic inversion experiments and comparison tests show that (1) the proposed anisotropic traveltime inversion scheme is able to recover high-contrast anomalies and (2) it is possible to improve the tomographic resolution by introducing the later (reflected) arrivals, though not as much as in the isotropic case, because the sensitivities (or derivatives) of the different velocities (qP, qSV and qSH) with respect to the different elastic parameters are not the same and also depend on the inclination angle.

  2. Three-Dimensional Anisotropic Acoustic and Elastic Full-Waveform Seismic Inversion

    NASA Astrophysics Data System (ADS)

    Warner, M.; Morgan, J. V.

    2013-12-01

Three-dimensional full-waveform inversion is a high-resolution, high-fidelity, quantitative, seismic imaging technique that has advanced rapidly within the oil and gas industry. The method involves the iterative improvement of a starting model using a series of local linearized updates to solve the full non-linear inversion problem. During the inversion, forward modeling employs the full two-way three-dimensional heterogeneous anisotropic acoustic or elastic wave equation to predict the observed raw field data, wiggle-for-wiggle, trace-by-trace. The method is computationally demanding; it is highly parallelized, and runs on large multi-core multi-node clusters. Here, we demonstrate what can be achieved by applying this newly practical technique to several high-density 3D seismic datasets that were acquired to image four contrasting sedimentary targets: a gas cloud above an oil reservoir, a radially faulted dome, buried fluvial channels, and collapse structures overlying an evaporite sequence. We show that the resulting anisotropic p-wave velocity models match in situ measurements in deep boreholes, reproduce detailed structure observed independently on high-resolution seismic reflection sections, accurately predict the raw seismic data, simplify and sharpen reverse-time-migrated reflection images of deeper horizons, and flatten Kirchhoff-migrated common-image gathers. We also show that full-elastic 3D full-waveform inversion of pure pressure data can generate a reasonable shear-wave velocity model for one of these datasets. For two of the four datasets, the inclusion of significant transversely isotropic anisotropy with a vertical axis of symmetry was necessary in order to fit the kinematics of the field data properly. For the faulted dome, the full-waveform-inversion p-wave velocity model recovers the detailed structure of every fault that can be seen on coincident seismic reflection data.
Some of the individual faults represent high-velocity zones, some represent low-velocity zones, some have more-complex internal structure, and some are visible merely as offsets between two regions with contrasting velocity. Although this has not yet been demonstrated quantitatively for this dataset, it seems likely that at least some of this fine structure in the recovered velocity model is related to the detailed lithology, strain history and fluid properties within the individual faults. We have here applied this technique to seismic data that were acquired by the extractive industries; however, this inversion scheme is immediately scalable and applicable to a much wider range of problems given sufficient quality and density of observed data. Potential targets range from shallow magma chambers beneath active volcanoes, through whole-crustal sections across plate boundaries, to regional and whole-Earth models.

  3. Bayesian inversion of surface-wave data for radial and azimuthal shear-wave anisotropy, with applications to central Mongolia and west-central Italy

    NASA Astrophysics Data System (ADS)

    Ravenna, Matteo; Lebedev, Sergei

    2018-04-01

Seismic anisotropy provides important information on the deformation history of the Earth's interior. Rayleigh and Love surface waves are sensitive to, and can be used to determine, both radial and azimuthal shear-wave anisotropy at depth, but parameter trade-offs give rise to substantial model non-uniqueness. Here, we explore the trade-offs between isotropic and anisotropic structure parameters and present a suite of methods for the inversion of surface-wave phase-velocity curves for radial and azimuthal anisotropy. One Markov chain Monte Carlo (McMC) implementation inverts Rayleigh and Love dispersion curves for a radially anisotropic shear velocity profile of the crust and upper mantle. Another McMC implementation inverts Rayleigh phase velocities and their azimuthal anisotropy for profiles of vertically polarized shear velocity and its depth-dependent azimuthal anisotropy. The azimuthal anisotropy inversion is fully non-linear, with the forward problem solved numerically at different azimuths for every model realization, which ensures that any linearization biases are avoided. The computations are performed in parallel to reduce computing time. The often challenging issue of data-noise estimation is addressed by means of a hierarchical Bayesian approach, with the variance of the noise treated as an unknown during the radial anisotropy inversion. In addition to the McMC inversions, we also present faster, non-linear gradient-search inversions for the same anisotropic structure. The results of the two approaches are mutually consistent; the advantage of the McMC inversions is that they provide a measure of uncertainty of the models. Applying the method to broad-band data from the Baikal-central Mongolia region, we determine radial anisotropy from the crust down to transition-zone depths.
Robust negative anisotropy (Vsh < Vsv) in the asthenosphere, at 100-300 km depths, presents strong new evidence for a vertical component of asthenospheric flow. This is consistent with an upward flow from below the thick lithosphere of the Siberian Craton to below the thinner lithosphere of central Mongolia, likely to give rise to decompression melting and the scattered, sporadic volcanism observed in the Baikal Rift area, as proposed previously. Inversion of phase-velocity data from west-central Italy for azimuthal anisotropy reveals a clear change in the shear-wave fast-propagation direction at 70-100 km depths, near the lithosphere-asthenosphere boundary. The orientation of the fabric in the lithosphere is roughly E-W, parallel to the direction of stretching over the last 10 m.y. The orientation of the fabric in the asthenosphere is NW-SE, matching the fast directions inferred from shear-wave splitting and probably indicating the direction of the asthenospheric flow.
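The hierarchical treatment of data noise can be illustrated with a minimal Metropolis-Hastings sampler in which the noise level is sampled alongside the model parameters. The two-parameter "dispersion curve" below is a made-up stand-in for the real surface-wave forward problem; all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "phase velocity" observations from a toy two-parameter model.
periods = np.linspace(10.0, 100.0, 30)

def forward(theta):
    a, b = theta
    return a + b * np.sqrt(periods)

true_theta = np.array([3.0, 0.08])
data = forward(true_theta) + rng.normal(0.0, 0.05, periods.size)

def log_posterior(theta, log_sigma):
    # Gaussian likelihood with unknown noise level (hierarchical Bayes):
    # sigma is a sampled parameter, not a fixed input.
    sigma = np.exp(log_sigma)
    r = data - forward(theta)
    return -0.5 * np.sum(r**2) / sigma**2 - periods.size * log_sigma

theta, log_sigma = np.array([2.0, 0.2]), np.log(0.5)   # deliberately poor start
lp = log_posterior(theta, log_sigma)
samples = []
for it in range(30000):
    prop_t = theta + rng.normal(0.0, [0.05, 0.005])    # random-walk proposals
    prop_s = log_sigma + rng.normal(0.0, 0.1)
    lp_new = log_posterior(prop_t, prop_s)
    if np.log(rng.uniform()) < lp_new - lp:            # Metropolis accept/reject
        theta, log_sigma, lp = prop_t, prop_s, lp_new
    if it >= 10000:                                    # discard burn-in
        samples.append(np.r_[theta, np.exp(log_sigma)])

mean = np.mean(samples, axis=0)
print(mean)   # posterior means of (a, b, sigma)
```

The spread of the retained samples is the uncertainty measure highlighted in the abstract, and the recovered sigma is the data-driven noise estimate.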

  4. Recursive partitioned inversion of large (1500 x 1500) symmetric matrices

    NASA Technical Reports Server (NTRS)

    Putney, B. H.; Brownd, J. E.; Gomez, R. A.

    1976-01-01

    A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
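The core idea of partitioned inversion, eliminating one block at a time through the Schur complement so that only blocks (not the whole matrix) need be held in core, can be sketched as a recursion. This is a schematic of the technique under that interpretation, not the SOLVE program's implementation.

```python
import numpy as np

def partitioned_inverse(A, block=64):
    """Invert a symmetric positive definite matrix by recursive 2x2 block
    (Schur complement) elimination; leaf blocks are inverted directly."""
    n = A.shape[0]
    if n <= block:
        return np.linalg.inv(A)
    k = n // 2
    A11, A12, A22 = A[:k, :k], A[:k, k:], A[k:, k:]
    inv11 = partitioned_inverse(A11, block)
    S = A22 - A12.T @ inv11 @ A12               # Schur complement of A11
    invS = partitioned_inverse(S, block)
    B12 = -inv11 @ A12 @ invS
    B11 = inv11 + inv11 @ A12 @ invS @ A12.T @ inv11
    out = np.empty_like(A)
    out[:k, :k], out[:k, k:] = B11, B12
    out[k:, :k], out[k:, k:] = B12.T, invS
    return out

rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)                 # SPD test matrix
err = np.max(np.abs(partitioned_inverse(A, 32) @ A - np.eye(200)))
print(err)                                      # round-off level
```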

  5. Inversion of time-domain induced polarization data based on time-lapse concept

    NASA Astrophysics Data System (ADS)

    Kim, Bitnarae; Nam, Myung Jin; Kim, Hee Joon

    2018-05-01

Induced polarization (IP) surveys, which measure overvoltage phenomena in the medium, are widely and increasingly performed not only for mineral exploration but also for engineering applications. Among the several IP survey methods, such as time-domain, frequency-domain and spectral IP surveys, this study introduces a novel inversion method for time-domain IP data to recover the chargeability structure of the target medium. The inversion method employs the concept of 4D inversion of time-lapse resistivity data sets, exploiting the fact that the voltage measured in a time-domain IP survey is distorted by IP effects, increasing above the instantaneous voltage measured at the moment the source current injection starts. Even though the increase saturates very quickly, we can treat the saturated and instantaneous voltages as a time-lapse data set. The 4D inversion method is one of the most powerful methods for inverting time-lapse resistivity data sets. Using the developed IP inversion algorithm, we invert not only synthetic but also field IP data to show the effectiveness of the proposed method, comparing the recovered chargeability models with those from the linear inversion that was used for the field data in a previous study. Numerical results confirm that the proposed inversion method generates reliable chargeability models even when the anomalous bodies have large IP effects.
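The relation between the instantaneous and saturated voltages that makes the time-lapse interpretation possible can be illustrated with Seigel's classical chargeability model, V_sat = V_inst / (1 - m); the voltages and chargeabilities below are hypothetical.

```python
import numpy as np

# Seigel's chargeability model: the IP overvoltage raises the instantaneous
# voltage V_inst to a saturated value V_sat = V_inst / (1 - m).
m_true = np.array([0.05, 0.20, 0.10])    # intrinsic chargeability (hypothetical)
v_inst = np.array([1.20, 0.85, 0.60])    # instantaneous voltages, V (hypothetical)
v_sat = v_inst / (1.0 - m_true)          # saturated (IP-distorted) voltages

# Treating (v_inst, v_sat) as a two-frame time-lapse data set, chargeability
# falls out of the relative change between the two frames:
m_rec = (v_sat - v_inst) / v_sat
print(m_rec)                             # recovers m_true
```

In the actual method the two frames are inverted with a 4D resistivity algorithm rather than compared point-wise, but the algebra above is what makes the pair informative about chargeability.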

  6. Implications of overestimated anthropogenic CO2 emissions on East Asian and global land CO2 flux inversion

    NASA Astrophysics Data System (ADS)

    Saeki, Tazu; Patra, Prabir K.

    2017-12-01

Measurement and modelling of regional or country-level carbon dioxide (CO2) fluxes are becoming critical for verification of greenhouse gas emission controls. One of the commonly adopted approaches is inverse modelling, where CO2 fluxes (emission: positive flux, sink: negative flux) from the terrestrial ecosystems are estimated by combining atmospheric CO2 measurements with atmospheric transport models. The inverse models assume anthropogenic emissions are known, and thus uncertainties in the emissions introduce systematic bias into the terrestrial (residual) fluxes estimated by inverse modelling. Here we show that the CO2 sink increase estimated by the inverse model over East Asia (China, Japan, Korea and Mongolia), of about 0.26 PgC year^-1 (1 Pg = 10^15 g) during 2001-2010, is likely to be an artifact of the anthropogenic CO2 emissions increasing too quickly in China by 1.41 PgC year^-1. Independent results from methane (CH4) inversion suggested an about 41% lower rate of East Asian CH4 emission increase during 2002-2012. We apply a scaling factor of 0.59, based on the CH4 inversion, to the rate of anthropogenic CO2 emission increase, since the anthropogenic emissions of both CO2 and CH4 increase linearly in the emission inventory. We find no systematic increase in land CO2 uptake over East Asia during 1993-2010 or 2000-2009 when the scaled anthropogenic CO2 emissions are used, and that a higher emission increase rate is needed for 2010-2012 than calculated by the inventory methods. A high bias in anthropogenic CO2 emissions leads to stronger land sinks in the global land-ocean flux partitioning in our inverse model. The corrected anthropogenic CO2 emissions also produce measurable reductions in the rate of global land CO2 sink increase post-2002, leading to better agreement with terrestrial biospheric model simulations that include CO2 fertilization and climate effects.

  7. Recursive inversion of externally defined linear systems

    NASA Technical Reports Server (NTRS)

    Bach, Ralph E., Jr.; Baram, Yoram

    1988-01-01

    The approximate inversion of an internally unknown linear system, given by its impulse response sequence, by an inverse system having a finite impulse response, is considered. The recursive least squares procedure is shown to have an exact initialization, based on the triangular Toeplitz structure of the matrix involved. The proposed approach also suggests solutions to the problems of system identification and compensation.
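A minimal numpy sketch of the idea: build the lower-triangular Toeplitz convolution matrix of the impulse response and solve for an n-tap FIR inverse. Because the matrix is triangular Toeplitz with a nonzero leading coefficient, the system has an exact finite-length solution, which is the structure that gives the recursive least-squares procedure its exact initialization. The impulse response is hypothetical.

```python
import numpy as np

# Impulse response of the internally unknown system (hypothetical, minimum
# phase so that a stable FIR inverse exists).
h = np.array([1.0, 0.6, 0.3, 0.1])
n = 32                                    # number of FIR inverse taps

# Lower-triangular Toeplitz convolution matrix of h, truncated to n taps.
col = np.zeros(n)
col[:h.size] = h
H = np.zeros((n, n))
for i in range(n):
    H[i:, i] = col[:n - i]

# FIR inverse: solve H g = delta (a unit impulse). With h[0] != 0 the
# triangular system is exactly solvable by forward substitution.
delta = np.zeros(n)
delta[0] = 1.0
g = np.linalg.solve(H, delta)

out = np.convolve(h, g)[:n]               # system convolved with its inverse
print(np.max(np.abs(out - delta)))        # round-off: out approximates a unit impulse
```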

  8. Random hopping fermions on bipartite lattices: density of states, inverse participation ratios, and their correlations in a strong disorder regime

    NASA Astrophysics Data System (ADS)

    Yamada, Hiroki; Fukui, Takahiro

    2004-02-01

We study Anderson localization of non-interacting random hopping fermions on bipartite lattices in two dimensions, focusing our attention on the strong-disorder features of the model. We concentrate on specific models with a linear dispersion in the vicinity of the band center, which can be described by a Dirac fermion in the continuum limit. Based on the renormalization group method recently developed by Carpentier and Le Doussal for the XY gauge glass model, we calculate the density of states, inverse participation ratios, and their spatial correlations. It turns out that their behavior is quite different from that expected within naive weak-disorder approaches.

  9. Identification and Control of Non-Linear Time-Varying Dynamical Systems Using Artificial Neural Networks

    DTIC Science & Technology

    1992-09-01

finding an inverse plant such as was done by Bertrand [BD91] and by Levin, Gewirtzman and Inbar in a binary type inverse controller [LGI91], to self tuning... gain robust control. 2) Self oscillating adaptive controller. 3) Gain scheduling. 4) Self tuning. 5) Model-reference adaptive systems. Although the... of multidimensional systems [CS88] as well as aircraft [HG90]. The self oscillating method is also a feedback based mechanism, utilizing a relay in the

  10. An efficient implementation of a high-order filter for a cubed-sphere spectral element model

    NASA Astrophysics Data System (ADS)

    Kang, Hyun-Gyu; Cheong, Hyeong-Bin

    2017-03-01

A parallel-scalable, isotropic, scale-selective spatial filter was developed for the cubed-sphere spectral element model on the sphere. The filter equation is a high-order elliptic (Helmholtz) equation based on the spherical Laplacian operator, which is transformed into cubed-sphere local coordinates. The Laplacian operator is discretized on the computational domain, i.e., on each cell, by the spectral element method with Gauss-Lobatto Lagrange interpolating polynomials (GLLIPs) as the orthogonal basis functions. On the global domain, the discrete filter equation yielded a linear system represented by a highly sparse matrix. The density of this matrix increases quadratically (linearly) with the order of the GLLIPs (order of the filter), and the linear system is solved in only O(Ng) operations, where Ng is the total number of grid points. The solution, obtained by a row-reduction method, demonstrated the typical accuracy and convergence rate of the cubed-sphere spectral element method. To achieve computational efficiency on parallel computers, the linear system was treated by an inverse-matrix method (a sparse matrix-vector multiplication). The density of the inverse matrix was lowered to only a few times that of the original sparse matrix without degrading the accuracy of the solution. For better computational efficiency, a local-domain high-order filter was introduced: the filter equation is applied to multiple cells, and then only the central cell is used to reconstruct the filtered field. The parallel efficiency of applying the inverse-matrix method to the global- and local-domain filters was evaluated by the scalability on a distributed-memory parallel computer. The scale-selective performance of the filter was demonstrated on Earth topography. The usefulness of the filter as a hyper-viscosity for the vorticity equation was also demonstrated.

  11. [Research on the method of interference correction for nondispersive infrared multi-component gas analysis].

    PubMed

    Sun, You-Wen; Liu, Wen-Qing; Wang, Shi-Mei; Huang, Shu-Hua; Yu, Xiao-Man

    2011-10-01

A method of interference correction for nondispersive infrared (NDIR) multi-component gas analysis is described. According to successive-integral gas absorption models and methods, the influence of temperature and air pressure on the integrated line strengths and line shapes was considered, and, based on Lorentzian line shapes, the absorption cross sections and response coefficients of H2O, CO2, CO, and NO on each filter channel were obtained. Four-dimensional linear regression equations for interference correction were established from the response coefficients, and the cross-absorption interference was corrected by solving these multi-dimensional linear regression equations; after interference correction, the pure absorbance signal on each filter channel is controlled only by the concentration of the corresponding target gas. When the sample cell was filled with a gas mixture containing CO, NO and CO2 in a certain concentration proportion, the pure absorbance after interference correction was used for concentration inversion; the inversion concentration error is 2.0% for CO2, 1.6% for CO, and 1.7% for NO. Both theory and experiment show that the proposed interference correction method for NDIR multi-component gas analysis is feasible.
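The correction step amounts to solving a small linear system whose matrix holds the per-channel response coefficients. The coefficients and absorbances below are hypothetical placeholders, not the measured values from the paper.

```python
import numpy as np

# Hypothetical channel-response matrix K: rows = filter channels (targeting
# H2O, CO2, CO, NO), columns = gases. Off-diagonal entries are the
# cross-channel interferences.
K = np.array([
    [1.00, 0.08, 0.02, 0.01],
    [0.05, 1.00, 0.04, 0.02],
    [0.03, 0.06, 1.00, 0.05],
    [0.02, 0.03, 0.04, 1.00],
])

true_abs = np.array([0.30, 0.50, 0.10, 0.08])  # pure per-gas absorbances
measured = K @ true_abs                        # what each channel records

# Interference correction = solving the four-dimensional linear system.
corrected = np.linalg.solve(K, measured)
print(corrected)                               # recovers the pure absorbances
```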

  12. Gravimetric control of active volcanic processes

    NASA Astrophysics Data System (ADS)

    Saltogianni, Vasso; Stiros, Stathis

    2017-04-01

    Volcanic activity includes phases of magma chamber inflation and deflation, produced by movement of magma and/or hydrothermal processes. Such effects usually leave their imprint as deformation of the ground surfaces which can be recorded by GNSS and other methods, on one hand, and on the other hand they can be modeled as elastic deformation processes, with deformation produced by volcanic masses of finite dimensions such as spheres, ellipsoids and parallelograms. Such volumes are modeled on the basis of inversion (non-linear, numerical solution) of systems of equations relating the unknown dimensions and location of magma sources with observations, currently mostly GNSS and INSAR data. Inversion techniques depend on the misfit between model predictions and observations, but because systems of equations are highly non-linear, and because adopted models for the geometry of magma sources is simple, non-unique solutions can be derived, constrained by local extrema. Assessment of derived magma models can be provided by independent observations and models, such as micro-seismicity distribution and changes in geophysical parameters. In the simplest case magmatic intrusions can be modeled as spheres with diameters of at least a few tens of meters at a depth of a few kilometers; hence they are expected to have a gravimetric signature in permanent recording stations on the ground surface, while larger intrusions may also have an imprint in sensors in orbit around the earth or along precisely defined air paths. Identification of such gravimetric signals and separation of the "true" signal from the measurement and ambient noise requires fine forward modeling of the wider areas based on realistic simulation of the ambient gravimetric field, and then modeling of its possible distortion because of magmatic anomalies. Such results are useful to remove ambiguities in inverse modeling of ground deformation, and also to detect magmatic anomalies offshore.

  13. Cooperative inversion of magnetotelluric and seismic data sets

    NASA Astrophysics Data System (ADS)

    Markovic, M.; Santos, F.

    2012-04-01

Inversion of a single geophysical data set has well-known limitations due to the non-linearity of the fields and the non-uniqueness of the model. There is a growing need, in both academia and industry, to use two or more different data sets to constrain the subsurface property distribution; here we deal with magnetotelluric and seismic data sets. In our approach, we are developing an algorithm based on the fuzzy c-means clustering technique for pattern recognition in geophysical data. A separate inversion is performed at every step, with information exchanged for model integration; interrelationships between parameters from the different models are not required in analytical form. We investigate how different numbers of clusters affect the zonation and spatial distribution of parameters. Optimization in the fuzzy c-means clustering (for the magnetotelluric and seismic data) is compared for two cases: alternating optimization alone, and a hybrid method (alternating optimization + quasi-Newton). Acknowledgment: This work is supported by FCT Portugal.
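A minimal fuzzy c-means sketch (alternating optimization only, plain numpy) of the clustering step used for pattern recognition; the two-dimensional "parameter" points stand in for collocated resistivity/velocity values and are synthetic.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Fuzzy c-means by alternating optimization: returns cluster centers
    and the membership matrix U (n_samples x c, rows sum to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U**m                                         # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]     # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard membership update: u_ki = 1 / sum_j (d_ki / d_kj)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :])**(2.0 / (m - 1.0)), axis=2)
    return centers, U

# Two well-separated synthetic "parameter" clouds.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
               rng.normal(3.0, 0.3, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(np.sort(centers[:, 0]))   # cluster centers near 0 and 3
```

In the cooperative scheme, the soft memberships (rather than hard labels) are what allow each model to nudge the other toward a shared zonation.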

  14. Reflection full-waveform inversion using a modified phase misfit function

    NASA Astrophysics Data System (ADS)

    Cui, Chao; Huang, Jian-Ping; Li, Zhen-Chun; Liao, Wen-Yuan; Guan, Zhe

    2017-09-01

Reflection full-waveform inversion (RFWI) updates the low- and high-wavenumber components and yields more accurate initial models than conventional full-waveform inversion (FWI). However, conventional RFWI is strongly non-linear because of the lack of low-frequency data and the complexity of the amplitudes. Separating phase and amplitude information makes RFWI more linear. Traditional phase-calculation methods suffer from severe phase wrapping. To solve this problem, we propose a modified phase-calculation method that uses phase-envelope data to obtain pseudo-phase information. We then establish a pseudo-phase-information-based objective function for RFWI, with the corresponding source and gradient terms. Numerical tests verify that the proposed calculation method using phase-envelope data guarantees the stability and accuracy of the phase information and the convergence of the objective function. Application to a portion of the Sigsbee2A model, and comparison with inversion results of the improved RFWI and conventional FWI methods, verify that the pseudo-phase-based RFWI produces a highly accurate and efficient velocity model. Moreover, the proposed method is robust to noise and high frequencies.

  15. Model-based elastography: a survey of approaches to the inverse elasticity problem

    PubMed Central

    Doyley, M M

    2012-01-01

Elastography is emerging as an imaging modality that can distinguish normal from diseased tissues via their biomechanical properties. This article reviews current approaches to elastography in three areas (quasi-static, harmonic, and transient) and describes inversion schemes for each elastographic imaging approach. Approaches include first-order approximation methods; direct and iterative inversion schemes for linear elastic, isotropic materials; and advanced reconstruction methods for recovering parameters that characterize complex mechanical behavior. The paper's objective is to document efforts to develop elastography within the framework of solving an inverse problem, so that elastography may provide reliable estimates of shear modulus and other mechanical parameters. We discuss issues that must be addressed if model-based elastography is to become the prevailing approach to quasi-static, harmonic, and transient elastography: (1) developing practical techniques to transform the ill-posed problem into a well-posed one; (2) devising better forward models to capture the transient behavior of soft tissue; and (3) developing better test procedures to evaluate the performance of modulus elastograms. PMID:22222839
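Issue (1), turning the ill-posed problem into a well-posed one, is commonly handled by Tikhonov regularization. A toy 1-D sketch with a smoothing forward operator standing in for the elasticity forward model (all numbers hypothetical):

```python
import numpy as np

n = 50
xg = np.linspace(0.0, 1.0, n)
# Smoothing (blurring) forward operator: a row-normalized Gaussian kernel.
# Its small singular values make naive inversion amplify the data noise.
G = np.exp(-(xg[:, None] - xg[None, :])**2 / (2 * 0.04**2))
G /= G.sum(axis=1, keepdims=True)

m_true = np.sin(2 * np.pi * xg)                    # target "modulus" profile
rng = np.random.default_rng(5)
d = G @ m_true + 1e-3 * rng.standard_normal(n)     # data with small noise

m_naive = np.linalg.solve(G, d)                    # naive inversion: noise blows up
alpha = 1e-3                                       # regularization weight
m_tik = np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ d)

e_naive = np.max(np.abs(m_naive - m_true))
e_tik = np.max(np.abs(m_tik - m_true))
print(e_naive, e_tik)                              # regularized error is far smaller
```

The penalty term trades a small bias for stability; choosing alpha (e.g. by the L-curve or discrepancy principle) is part of making the scheme practical.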

  16. Comparison of the Tangent Linear Properties of Tracer Transport Schemes Applied to Geophysical Problems.

    NASA Technical Reports Server (NTRS)

    Kent, James; Holdaway, Daniel

    2015-01-01

    A number of geophysical applications require the use of the linearized version of the full model. One such example is in numerical weather prediction, where the tangent linear and adjoint versions of the atmospheric model are required for the 4DVAR inverse problem. The part of the model that represents the resolved scale processes of the atmosphere is known as the dynamical core. Advection, or transport, is performed by the dynamical core. It is a central process in many geophysical applications and is a process that often has a quasi-linear underlying behavior. However, over the decades since the advent of numerical modelling, significant effort has gone into developing many flavors of high-order, shape preserving, nonoscillatory, positive definite advection schemes. These schemes are excellent in terms of transporting the quantities of interest in the dynamical core, but they introduce nonlinearity through the use of nonlinear limiters. The linearity of the transport schemes used in Goddard Earth Observing System version 5 (GEOS-5), as well as a number of other schemes, is analyzed using a simple 1D setup. The linearized version of GEOS-5 is then tested using a linear third order scheme in the tangent linear version.
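The underlying superposition test is easy to state: a scheme L is linear iff L(a+b) = L(a) + L(b). The schematic 1-D periodic update steps below (first-order upwind versus a minmod-limited upwind step) are illustrative stand-ins for the GEOS-5 schemes, not the schemes themselves.

```python
import numpy as np

def upwind_step(q, c=0.5):
    # First-order upwind for positive velocity on a periodic grid: linear.
    return q - c * (q - np.roll(q, 1))

def limited_step(q, c=0.5):
    # MUSCL-type step with a minmod slope limiter: the limiter is nonlinear.
    dm = q - np.roll(q, 1)                       # backward difference
    dp = np.roll(q, -1) - q                      # forward difference
    slope = np.where(dm * dp > 0.0,
                     np.sign(dm) * np.minimum(np.abs(dm), np.abs(dp)),
                     0.0)
    face = q + 0.5 * (1.0 - c) * slope           # reconstructed face value
    return q - c * (face - np.roll(face, 1))

rng = np.random.default_rng(0)
a = rng.standard_normal(64)
b = rng.standard_normal(64)

lin_err = np.max(np.abs(upwind_step(a + b) - (upwind_step(a) + upwind_step(b))))
nl_err = np.max(np.abs(limited_step(a + b) - (limited_step(a) + limited_step(b))))
print(lin_err)   # round-off only: upwind passes the superposition test
print(nl_err)    # far above round-off: the limiter breaks linearity
```

This is precisely why a linear third-order scheme, rather than a limited one, is the safer choice inside a tangent linear model.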

  17. Nonlinear Waves and Inverse Scattering

    DTIC Science & Technology

    1989-01-01

transform provides a linearization. Well known systems include the Kadomtsev-Petviashvili, Davey-Stewartson and Self-Dual Yang-Mills equations. The d... which employs inverse scattering theory in order to linearize the given nonlinear equation. I.S.T. has led to new developments in both fields: inverse... scattering and nonlinear wave equations. Listed below are some of the problems studied and a short description of results. - Multidimensional

  18. Recursive inversion of externally defined linear systems by FIR filters

    NASA Technical Reports Server (NTRS)

    Bach, Ralph E., Jr.; Baram, Yoram

    1989-01-01

    The approximate inversion of an internally unknown linear system, given by its impulse response sequence, by an inverse system having a finite impulse response, is considered. The recursive least-squares procedure is shown to have an exact initialization, based on the triangular Toeplitz structure of the matrix involved. The proposed approach also suggests solutions to the problem of system identification and compensation.

  19. Guidance for the utility of linear models in meta-analysis of genetic association studies of binary phenotypes.

    PubMed

    Cook, James P; Mahajan, Anubha; Morris, Andrew P

    2017-02-01

    Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
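The two recommended weighting schemes can be sketched directly; the study effect sizes, standard errors and effective sample sizes below are hypothetical numbers for three cohorts.

```python
import numpy as np

# Per-study allelic effect estimates (log-odds scale) and standard errors
# (hypothetical values for three GWAS cohorts).
beta = np.array([0.12, 0.08, 0.15])
se = np.array([0.05, 0.04, 0.08])

# Scheme (ii): inverse-variance weighting of allelic effect sizes.
w = 1.0 / se**2
beta_meta = np.sum(w * beta) / np.sum(w)
se_meta = np.sqrt(1.0 / np.sum(w))
z_ivw = beta_meta / se_meta

# Scheme (i): effective-sample-size weighting of per-study Z-scores.
z = beta / se
n_eff = np.array([4000.0, 6000.0, 1500.0])      # hypothetical effective N
z_meta = np.sum(np.sqrt(n_eff) * z) / np.sqrt(np.sum(n_eff))

print(beta_meta, se_meta, z_ivw, z_meta)
```

Scheme (ii) requires the linear-model effects to be converted onto the log-odds scale first; scheme (i) sidesteps the scale issue entirely by combining Z-scores.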

  20. Inverse Force Determination on a Small Scale Launch Vehicle Model Using a Dynamic Balance

    NASA Technical Reports Server (NTRS)

    Ngo, Christina L.; Powell, Jessica M.; Ross, James C.

    2017-01-01

A launch vehicle can experience large unsteady aerodynamic forces in the transonic regime that, while usually lasting only tens of seconds during launch, could be devastating if structural components and electronic hardware are not designed to account for them. These aerodynamic loads are difficult to measure experimentally and even harder to estimate computationally. The current method for estimating buffet loads is through the use of a few hundred unsteady pressure transducers in a wind tunnel test. Even with a large number of point measurements, the computed integrated load is not an accurate enough representation of the total load caused by buffeting. This paper discusses an attempt at using a dynamic balance to experimentally determine buffet loads on a generic-scale hammerhead launch vehicle model tested at NASA Ames Research Center's 11' x 11' transonic wind tunnel. To use a dynamic balance, the structural characteristics of the model needed to be identified so that the natural modal response could be removed from the aerodynamic forces. A finite element model was created of a simplified version of the model to evaluate the natural modes of the balance flexures, assist in model design, and compare with experimental data. Several modal tests were conducted on the model in two different configurations to check for non-linearity and to estimate the dynamic characteristics of the model. The experimental results were used in an inverse force determination technique with a pseudo-inverse frequency response function. Due to the non-linearity, the model not being axisymmetric, and inconsistent data between the two shake tests with different mounting configurations, it was difficult to create a frequency response matrix that satisfied all input and output conditions for the wind tunnel configuration and accurately predicted the unsteady aerodynamic loads.
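The estimation step itself, recovering forces from measured responses through a pseudo-inverse of the frequency response matrix at each frequency line, can be sketched with hypothetical numbers:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 4x2 frequency response matrix at one frequency line:
# 4 response channels, 2 unknown force inputs (complex-valued).
H = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))

f_true = np.array([2.0 - 1.0j, 0.5 + 0.3j])        # "buffet" forces to recover
x = H @ f_true + 1e-3 * rng.standard_normal(4)     # measured responses + noise

# Inverse force determination: least-squares estimate via the pseudo-inverse.
f_est = np.linalg.pinv(H) @ x
print(np.abs(f_est - f_true))                      # small recovery error
```

The difficulties reported in the abstract correspond to an H that is inconsistent between configurations, which makes this least-squares step ill-conditioned in practice.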

  1. Non-linear Parameter Estimates from Non-stationary MEG Data

    PubMed Central

    Martínez-Vargas, Juan D.; López, Jose D.; Baker, Adam; Castellanos-Dominguez, German; Woolrich, Mark W.; Barnes, Gareth

    2016-01-01

We demonstrate a method to estimate key electrophysiological parameters from resting state data. In this paper, we focus on the estimation of head-position parameters. The recovery of these parameters is especially challenging as they are non-linearly related to the measured field. To do this we use an empirical Bayesian scheme to estimate the cortical current distribution due to a range of laterally shifted head models. We compare different approaches to this problem, from dividing the M/EEG data into stationary sections and performing separate source inversions, to explaining all of the M/EEG data with a single inversion. We demonstrate this through estimation of head position in both simulated and empirical resting state MEG data collected using a head-cast. PMID:27597815

  2. Ground-based microwave radiometric remote sensing of the tropical atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Yong.

    1992-01-01

    A partially developed 9-channel ground-based microwave radiometer for the Department of Meteorology at Penn State was completed and tested. Complementary units were added, corrections to both hardware and software were made, and system software was corrected and upgraded. Measurements from this radiometer were used to infer tropospheric temperature, water vapor and cloud liquid water. The weighting functions at each of the 9 channels were calculated and analyzed to estimate the sensitivities of the brightness temperature to the desired atmospheric variables. The mathematical inversion problem, in a linear form, was viewed in terms of the theory of linear algebra, and several methods for solving the inversion problem were reviewed. Radiometric observations were conducted during the 1990 Tropical Cyclone Motion Experiment, with the radiometer installed on the island of Saipan in a tropical region. The radiometer was calibrated using tipping-curve and radiosonde data as well as measurements of the radiation from a blackbody absorber. A linear statistical method was applied for the data inversion. The inversion coefficients were obtained using a large number of radiosonde profiles from Guam and a radiative transfer model. Retrievals were compared with those from local Saipan radiosonde measurements. Water vapor profiles, integrated water vapor, and integrated liquid water were retrieved successfully. For temperature profile retrievals, however, the noisy radiometric measurements added little profile information to the inversion; the retrieved temperature profiles were determined mainly by the surface pressure measurements. A method was developed to derive the integrated water vapor and liquid water from combined radiometer and ceilometer measurements. Significant improvement in radiometric measurements of the integrated liquid water can be gained with this method.
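
    The linear statistical inversion described above can be illustrated with a toy regression retrieval; the channel count matches the instrument (9), but the training data below are synthetic stand-ins for the radiosonde profiles and radiative transfer model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training set standing in for radiosonde-derived pairs: rows are
# 9-channel brightness temperatures (K); y is integrated water vapor.
n, n_chan = 200, 9
Tb = rng.normal(250.0, 5.0, size=(n, n_chan))
w_true = rng.standard_normal(n_chan)
y = Tb @ w_true + 0.3 + rng.normal(0.0, 0.1, size=n)

# Fit the linear inversion coefficients (with an intercept) by least squares.
A = np.column_stack([np.ones(n), Tb])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Retrieval for a new brightness-temperature observation.
Tb_new = rng.normal(250.0, 5.0, size=n_chan)
retrieved = coef[0] + Tb_new @ coef[1:]
```

    In the study, the coefficients would be trained on Guam radiosonde profiles and a radiative transfer model rather than on synthetic draws.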

  3. Parasitic chytrids sustain zooplankton growth during inedible algal bloom

    PubMed Central

    Rasconi, Serena; Grami, Boutheina; Niquil, Nathalie; Jobard, Marlène; Sime-Ngando, Télesphore

    2014-01-01

    This study assesses the quantitative impact of parasitic chytrids on the planktonic food web of two contrasting freshwater lakes during different algal bloom situations. Carbon-based food web models were used to investigate the effects of chytrids during the spring diatom bloom in Lake Pavin (oligo-mesotrophic) and the autumn cyanobacteria bloom in Lake Aydat (eutrophic). Linear inverse modeling was employed to estimate undetermined flows in both lakes, and the Monte Carlo Markov chain linear inverse modeling procedure provided estimates of the ranges of model-derived fluxes. Model results confirm recent theories on the impact of parasites on food web function through grazers and recyclers. During blooms of “inedible” algae (unexploited by planktonic herbivores), the epidemic growth of chytrids channeled 19–20% of the primary production in both lakes through the production of grazer-exploitable zoospores. The parasitic throughput represented 50% and 57% of the zooplankton diet in the oligo-mesotrophic and the eutrophic lake, respectively. Parasites also affected ecological network properties, producing longer carbon path lengths and greater loop strength, and contributed to increasing the stability of the aquatic food web, notably in the oligo-mesotrophic Lake Pavin. PMID:24904543

  4. Flatness-based control and Kalman filtering for a continuous-time macroeconomic model

    NASA Astrophysics Data System (ADS)

    Rigatos, G.; Siano, P.; Ghosh, T.; Busawon, K.; Binns, R.

    2017-11-01

    The article proposes flatness-based control for a nonlinear macroeconomic model of the UK economy. The differential flatness properties of the model are proven. This makes it possible to introduce a transformation (diffeomorphism) of the system's state variables and to express the state-space description of the model in the linear canonical (Brunovsky) form, in which both the feedback control and the state estimation problem can be solved. For the linearized equivalent model of the macroeconomic system, stabilizing feedback control can be achieved using pole placement methods. Moreover, to implement stabilizing feedback control of the system by measuring only a subset of its state vector elements, the derivative-free nonlinear Kalman filter is used. This consists of the Kalman filter recursion applied to the linearized equivalent model of the financial system and of an inverse transformation that is again based on differential flatness theory. The asymptotic stability properties of the control scheme are confirmed.
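
    The pole-placement step for a model in Brunovsky (controllable canonical) form can be sketched as follows; the third-order integrator chain and the chosen pole locations are illustrative assumptions, not the UK macroeconomic model itself:

```python
import numpy as np
from scipy.signal import place_poles

# Brunovsky form of a hypothetical third-order linearized system:
# x1' = x2, x2' = x3, x3' = u (a chain of integrators).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0]])

# Stabilizing state feedback u = -K x placing the closed-loop poles
# at chosen locations in the left half-plane.
desired_poles = np.array([-1.0, -2.0, -3.0])
K = place_poles(A, B, desired_poles).gain_matrix

closed_loop = np.linalg.eigvals(A - B @ K)
```

    In the canonical form the gain computation is exact, which is precisely why the flatness transformation into this form makes the control design straightforward.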

  5. Towards the mechanical characterization of abdominal wall by inverse analysis.

    PubMed

    Simón-Allué, R; Calvo, B; Oberai, A A; Barbone, P E

    2017-02-01

    The aim of this study is to characterize the passive mechanical behaviour of the abdominal wall in vivo in an animal model using only external cameras and numerical analysis. The main objective is to define a methodology that provides in vivo information for a specific patient without altering mechanical properties; it is demonstrated here in a mechanical study of the abdomen for hernia purposes. The mechanical tests consisted of pneumoperitoneum tests performed on New Zealand rabbits, in which the inner pressure was varied from 0 mmHg to 12 mmHg. Changes in the external abdominal surface were recorded and several points were tracked. Based on their coordinates, we reconstructed a 3D finite element model of the abdominal wall, considering an incompressible hyperelastic material model defined by two parameters. The spatial distributions of these parameters (shear modulus and non-linear parameter) were calculated by inverse analysis, using two different types of regularization: Total Variation Diminishing (TVD) and Tikhonov (H1). After solving the inverse problem, the distributions of the material parameters were obtained along the abdominal surface. Accuracy of the results was evaluated at the last (highest) pressure level. Results revealed a higher value of the shear modulus in a wide stripe along the cranio-caudal direction, associated with the presence of the linea alba in conjunction with the fasciae and rectus abdominis. The non-linear parameter distribution was smoother, and the location of higher values varied with the regularization type. Both regularizations proved to yield an accurate predicted displacement field, but H1 produced a smoother material parameter distribution while TVD included some discontinuities. The methodology presented here was able to characterize in vivo the passive non-linear mechanical response of the abdominal wall. Copyright © 2016 Elsevier Ltd. All rights reserved.
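
    A minimal sketch of a Tikhonov-regularized (H1-type, first-difference penalty) inverse step, assuming a toy one-dimensional smoothing operator in place of the abdominal-wall finite element model (the TVD variant is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ill-posed problem: an integration-like forward operator G blurs the
# model, and the data carry a small amount of noise.
n = 50
G = np.tril(np.ones((n, n))) / n
m_true = np.sin(np.linspace(0.0, np.pi, n))
d = G @ m_true + rng.normal(0.0, 1e-3, n)

# Tikhonov (H1-type) regularization: penalize first differences of the model,
# i.e. minimize ||G m - d||^2 + alpha ||L m||^2 with L the difference operator.
L = np.diff(np.eye(n), axis=0)          # (n-1) x n first-difference operator
alpha = 1e-3
m_est = np.linalg.solve(G.T @ G + alpha * L.T @ L, G.T @ d)

rel_err = np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true)
```

    The difference penalty favours smooth parameter fields, which matches the smoother distributions the H1 regularization produced in the study; a TVD-style penalty instead preserves sharp jumps.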

  6. Variability simulations with a steady, linearized primitive equations model

    NASA Technical Reports Server (NTRS)

    Kinter, J. L., III; Nigam, S.

    1985-01-01

    Solutions of the steady primitive equations on a sphere, linearized about a zonally symmetric basic state, are computed for the purpose of simulating monthly mean variability in the troposphere. The basic states are observed winter monthly mean, zonal means of zonal and meridional velocities, temperatures and surface pressures computed from the 15-year NMC time series. A least-squares fit to a series of Legendre polynomials is used to compute the basic states between 20 H and the equator, and the hemispheres are assumed symmetric. The model is spectral in the zonal direction, and centered differences are employed in the meridional and vertical directions. Since the model is steady and linear, the solution is obtained by inversion of a block penta-diagonal matrix. The model simulates the climatology of the GFDL nine-level spectral general circulation model quite closely, particularly in middle latitudes above the boundary layer. This experiment is an extension of that simulation to examine variability of the steady, linear solution.
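
    The penta-diagonal solve at the heart of such a steady linear model can be illustrated for a scalar banded system; the matrix below is random and diagonally dominant, an invented stand-in for the primitive-equations operator:

```python
import numpy as np
from scipy.linalg import solve_banded

rng = np.random.default_rng(3)
n = 200

# Random penta-diagonal matrix with a dominant main diagonal.
main = 10.0 + rng.random(n)
d1, d2 = rng.standard_normal(n - 1), rng.standard_normal(n - 2)
d3, d4 = rng.standard_normal(n - 1), rng.standard_normal(n - 2)
A = (np.diag(main) + np.diag(d1, 1) + np.diag(d2, 2)
     + np.diag(d3, -1) + np.diag(d4, -2))
b = rng.standard_normal(n)

# Pack the five diagonals into LAPACK banded storage (ab[u+i-j, j] = A[i, j])
# and solve in O(n) instead of O(n^3) for the dense matrix.
ab = np.zeros((5, n))
ab[0, 2:] = d2          # 2nd super-diagonal
ab[1, 1:] = d1          # 1st super-diagonal
ab[2, :] = main         # main diagonal
ab[3, :-1] = d3         # 1st sub-diagonal
ab[4, :-2] = d4         # 2nd sub-diagonal

x = solve_banded((2, 2), ab, b)
```

    A block penta-diagonal system, as in the model above, is solved the same way with the scalar entries replaced by blocks coupling the vertical levels.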

  7. Mixing and evaporation processes in an inverse estuary inferred from δ2H and δ18O

    NASA Astrophysics Data System (ADS)

    Corlis, Nicholas J.; Herbert Veeh, H.; Dighton, John C.; Herczeg, Andrew L.

    2003-05-01

    We have measured δ2H and δ18O in Spencer Gulf, South Australia, an inverse estuary with a salinity gradient from 36‰ near its entrance to about 45‰ at its head. We show that a simple evaporation model of seawater under ambient conditions, aided by its long residence time in Spencer Gulf, can account for the major features of the non-linear distribution pattern of δ2H with respect to salinity, at least in the restricted part of the gulf. In the more exposed part of the gulf, the δ/S pattern appears to be governed primarily by mixing processes between inflowing shelf water and outflowing high-salinity gulf water. These data provide direct support for the oceanographic model of Spencer Gulf previously proposed by other workers. Although the observed δ/S relationship here is non-linear and hence in notable contrast to the linear δ/S relationship in the Red Sea, the slopes of δ2H vs. δ18O are comparable, indicating that the isotopic enrichments in both marginal seas are governed by similar climatic conditions with evaporation exceeding precipitation.

  8. Real-time solution of linear computational problems using databases of parametric reduced-order models with arbitrary underlying meshes

    NASA Astrophysics Data System (ADS)

    Amsallem, David; Tezaur, Radek; Farhat, Charbel

    2016-12-01

    A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled, and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds, thereby computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.

  9. Joint inversion of multiple geophysical and petrophysical data using generalized fuzzy clustering algorithms

    NASA Astrophysics Data System (ADS)

    Sun, Jiajia; Li, Yaoguo

    2017-02-01

    Joint inversion that simultaneously inverts multiple geophysical data sets to recover a common Earth model is increasingly being applied to exploration problems. Petrophysical data can serve as an effective constraint to link different physical property models in such inversions. There are two challenges, among others, associated with the petrophysical approach to joint inversion. One is related to the multimodality of petrophysical data, because there often exists more than one relationship between different physical properties in a region of study. The other challenge arises from the fact that petrophysical relationships have different characteristics and can exhibit point, linear, quadratic, or exponential forms in a crossplot. The fuzzy c-means (FCM) clustering technique is effective in tackling the first challenge and has been applied successfully. We focus on the second challenge in this paper and develop a joint inversion method based on variations of the FCM clustering technique. To account for the specific shapes of petrophysical relationships, we introduce several different fuzzy clustering algorithms that are capable of handling different shapes of petrophysical relationships. We present two synthetic and one field data examples and demonstrate that, by choosing appropriate distance measures for the clustering component in the joint inversion algorithm, the proposed joint inversion method provides an effective means of handling common petrophysical situations we encounter in practice. The jointly inverted models have both enhanced structural similarity and increased petrophysical correlation, and better represent the subsurface in the spatial domain and in the parameter domain of physical properties.
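
    The clustering component can be sketched with a textbook fuzzy c-means iteration on synthetic two-dimensional "physical property" data; this is the standard FCM update with Euclidean distances, not the authors' generalized variants with shape-specific distance measures:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100):
    """Standard fuzzy c-means: returns cluster centers and memberships."""
    # Initialize centers on data points spread across the set.
    centers = X[np.linspace(0, len(X) - 1, c, dtype=int)].astype(float)
    for _ in range(n_iter):
        # Membership update: u_ij proportional to d_ij^(-2/(m-1)).
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0)                 # memberships sum to 1 per point
        # Center update: weighted means with weights u^m.
        um = u ** m
        centers = um @ X / um.sum(axis=1, keepdims=True)
    return centers, u

# Two well-separated synthetic property clusters (e.g. two rock types).
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(3.0, 0.1, (50, 2))])
centers, u = fuzzy_c_means(X, c=2)
```

    Replacing the Euclidean distance in the membership update with a point, line, or curve distance is what adapts the algorithm to the different petrophysical relationship shapes discussed above.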

  10. A Hybrid Seismic Inversion Method for V P/V S Ratio and Its Application to Gas Identification

    NASA Astrophysics Data System (ADS)

    Guo, Qiang; Zhang, Hongbing; Han, Feilong; Xiao, Wei; Shang, Zuoping

    2018-03-01

    The ratio of compressional wave velocity to shear wave velocity (V P/V S ratio) has established itself as one of the most important parameters in identifying gas reservoirs. However, considering that the seismic inversion process is highly non-linear and the geological conditions encountered may be complex, a direct estimation of the V P/V S ratio from pre-stack seismic data remains a challenging task. In this paper, we propose a hybrid seismic inversion method to estimate the V P/V S ratio directly. In this method, post- and pre-stack inversions are combined, with the pre-stack inversion for the V P/V S ratio driven by the post-stack inversion results (i.e., V P and density). In particular, the V P/V S ratio is treated as a model parameter and is directly inverted from the pre-stack data based on the exact Zoeppritz equation. Moreover, an anisotropic Markov random field is employed to regularise the inversion process as well as to preserve information about geological structures (boundaries). Aided by the proposed hybrid inversion strategy, the directional weighting coefficients incorporated in the anisotropic Markov random field neighbourhoods are quantitatively calculated by the anisotropic diffusion method. A synthetic test demonstrates the effectiveness of the proposed inversion method. In particular, given the low quality of the pre-stack data and the high heterogeneity of the target layers in the field data, the proposed inversion method reveals a detailed model of the V P/V S ratio that can successfully identify the gas-bearing zones.

  11. Improved characterisation of measurement errors in electrical resistivity tomography (ERT) surveys

    NASA Astrophysics Data System (ADS)

    Tso, C. H. M.; Binley, A. M.; Kuras, O.; Graham, J.

    2016-12-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe a statistical model of data errors before inversion. Wrongly prescribed error levels can lead to over- or under-fitting of data, yet commonly used models of measurement error are relatively simplistic. With the heightened interest in uncertainty estimation across hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide more reliable estimates of uncertainty. We have analysed two time-lapse electrical resistivity tomography (ERT) datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe, while the other is a year-long cross-borehole survey at a UK nuclear site with over 50,000 daily measurements. Our study included the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and covariance analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used. This agrees with speculation in previous literature that ERT errors could be somewhat correlated. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model fits the observed measurement errors better and yields superior inversion and uncertainty estimates in synthetic examples. It is robust because it groups errors together based on the four electrodes used to make each measurement. The new model can be readily applied to the diagonal data weighting matrix commonly used in classical inversion methods, as well as to the data covariance matrix in the Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
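
    The linear part of such an error model can be sketched as a least-squares fit of error level against transfer resistance; the data below are synthetic (invented coefficients), and the grouping by electrode number is omitted:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic survey: the per-measurement error level grows linearly with the
# magnitude of the transfer resistance R (the proportionality effect).
R = rng.uniform(0.1, 100.0, 500)                    # transfer resistances
true_a, true_b = 0.02, 0.03                         # invented model values
err = (true_a + true_b * R) * rng.normal(1.0, 0.05, 500)

# Classic linear error model: fit error = a + b * R by least squares, e.g.
# from normal-reciprocal misfits.
A = np.column_stack([np.ones_like(R), R])
(a, b), *_ = np.linalg.lstsq(A, err, rcond=None)
```

    The fitted error levels then populate the diagonal data weighting matrix (or the data covariance matrix in a Bayesian setting), down-weighting the noisier large-resistance measurements.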

  12. Aircraft automatic-flight-control system with inversion of the model in the feed-forward path using a Newton-Raphson technique for the inversion

    NASA Technical Reports Server (NTRS)

    Smith, G. A.; Meyer, G.; Nordstrom, M.

    1986-01-01

    A new automatic flight control system concept suitable for aircraft with highly nonlinear aerodynamic and propulsion characteristics, and which must operate over a wide flight envelope, was investigated. This exact model follower inverts a complete nonlinear model of the aircraft as part of the feed-forward path. The inversion is accomplished by a Newton-Raphson trim of the model at each digital computer cycle time of 0.05 seconds. The combination of the inverse model and the actual aircraft in the feed-forward path allows the translational and rotational regulators in the feedback path to be easily designed by linear methods. An explanation of the model inversion procedure is presented. An extensive set of simulation data covering essentially the full flight envelope for a vertical attitude takeoff and landing (VATOL) aircraft is presented. These data demonstrate the successful, smooth, and precise control that can be achieved with this concept. The trajectory includes conventional flight from 200 to 900 ft/sec with path accelerations and decelerations, altitude changes of over 6000 ft, and 2g and 3g turns. Vertical attitude maneuvering as a tail sitter along all axes is demonstrated. A transition trajectory from 200 ft/sec in conventional flight to stationary hover in the vertical attitude includes satisfactory operation through lift-curve slope reversal as attitude goes from horizontal to vertical at constant altitude. A vertical attitude takeoff from stationary hover to conventional flight is also demonstrated.
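
    The Newton-Raphson inversion of a feed-forward model can be sketched for a scalar toy plant; the monotonic nonlinearity below is an invented stand-in for the aircraft model, not the VATOL dynamics:

```python
import numpy as np

def invert_model(f, df, y_target, u0=0.0, tol=1e-10, max_iter=50):
    """Newton-Raphson trim: find the input u such that f(u) = y_target."""
    u = u0
    for _ in range(max_iter):
        residual = f(u) - y_target
        if abs(residual) < tol:
            break
        u -= residual / df(u)          # Newton update
    return u

# Toy monotonic nonlinear plant model and its derivative.
f = lambda u: u + 0.1 * np.tanh(u)
df = lambda u: 1.0 + 0.1 / np.cosh(u) ** 2

# Command that drives the model output to the desired value.
u_cmd = invert_model(f, df, y_target=2.0)
```

    In the concept above, a multivariable version of this iteration runs once per 0.05 s computer cycle, so the feed-forward path continuously tracks the commanded trajectory through the nonlinear model.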

  13. Automated dynamic analytical model improvement for damped structures

    NASA Technical Reports Server (NTRS)

    Fuh, J. S.; Berman, A.

    1985-01-01

    A method is described to improve a linear, non-proportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) the ability to properly treat the complex-valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computational efficiency, achieved without eigensolutions or inversion of a large matrix.

  14. Sky-radiance gradient measurements at narrow bands in the visible.

    PubMed

    Winter, E M; Metcalf, T W; Stotts, L B

    1995-07-01

    Accurate calibrated measurements of the radiance of the daytime sky were made in narrow bands in the visible portion of the spectrum. These measurements were made over several months and were tabulated in a sun-referenced coordinate system. The radiance as a function of wavelength at angles ranging from 5 to 90 deg was plotted. A best-fit inverse power law shows inversely linear behavior of the radiance versus wavelength near the Sun (5 deg) and a slope approaching inverse fourth power far from the Sun (60 deg). This behavior fits a Mie-scattering interpretation near the Sun and a Rayleigh-scattering interpretation away from the Sun. The results are also compared with LOWTRAN models.
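
    The power-law slope estimation can be reproduced as a linear fit in log-log space; the radiance values below are synthetic, with an assumed Rayleigh-like exponent of 4 (the far-from-Sun regime):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic sky radiance following an inverse power law L ~ lambda**(-p).
wavelength = np.linspace(0.4, 0.9, 20)   # micrometres (visible band)
p_true = 4.0                              # assumed Rayleigh-like exponent
radiance = 3.0 * wavelength ** (-p_true) * np.exp(rng.normal(0.0, 0.01, 20))

# Best-fit power law: linear regression of log(L) against log(lambda);
# the slope is the (negative) power-law exponent.
slope, intercept = np.polyfit(np.log(wavelength), np.log(radiance), 1)
```

    Repeating the fit at different scattering angles gives the angle-dependent exponent described above, from roughly -1 near the Sun to roughly -4 far from it.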

  15. A Glimpse in the Third Dimension for Electrical Resistivity Profiles

    NASA Astrophysics Data System (ADS)

    Robbins, A. R.; Plattner, A.

    2017-12-01

    We present an electrode layout strategy designed to enhance the popular two-dimensional electrical resistivity profile. Offsetting electrodes from the traditional linear layout and using 3-D inversion software allows for mapping the three-dimensional electrical resistivity close to the profile plane. We established a series of synthetic tests using simulated data generated from chosen resistivity distributions with a three-dimensional target feature. All inversions and simulations were conducted using the freely available ERT software packages BERT and E4D. Synthetic results demonstrate the effectiveness of the offset electrode approach, whereas the linear layout failed to resolve the three-dimensional character of our subsurface feature. A field survey using trench backfill as a known resistivity contrast confirmed our synthetic tests. As we show, 3-D inversions of linear layouts with starting models lacking previously known structure are futile ventures, because they generate resistivity solutions that are symmetric with respect to the profile plane. This is a consequence of the layout's inherently symmetrical sensitivity patterns. An offset electrode layout is not subject to the same limitation, as the collective measurements do not share a common sensitivity symmetry. For practitioners, this approach presents a low-cost improvement to a traditional geophysical method that is simple to use yet may provide critical information about the three-dimensional structure of the subsurface close to the profile.

  16. Novel Scalable 3-D MT Inverse Solver

    NASA Astrophysics Data System (ADS)

    Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.

    2016-12-01

    We present a new, robust and fast three-dimensional (3-D) magnetotelluric (MT) inverse solver. As a forward modelling engine, the highly scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits an adjoint-sources approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT responses (single-site and/or inter-site), and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of the available computational resources for a given problem setup. To parameterize the inverse domain, a mask approach is implemented, meaning that any subset of forward modelling cells can be merged in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on platforms ranging from modern laptops to high-performance clusters, demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.

  17. Estimation of splitting functions from Earth's normal mode spectra using the neighbourhood algorithm

    NASA Astrophysics Data System (ADS)

    Pachhai, Surya; Tkalčić, Hrvoje; Masters, Guy

    2016-01-01

    The inverse problem for Earth structure from normal mode data is strongly non-linear and can be inherently non-unique. Traditionally, the inversion is linearized by taking partial derivatives of the complex spectra with respect to the model parameters (i.e. structure coefficients) and solved in an iterative fashion. This method requires that the earthquake source model is known. However, the release of energy in the large earthquakes used for the analysis of Earth's normal modes is not simple: a point source approximation is often inadequate, and a more complete account of energy release at the source is required. In addition, many earthquakes are required for the solution to be insensitive to the initial constraints and regularization. In contrast to an iterative approach, the autoregressive linear inversion technique conveniently avoids the need for earthquake source parameters, but it also requires a number of events to achieve full convergence when a single event does not excite all singlets well. To build on previous improvements, we develop a technique to estimate structure coefficients (and consequently the splitting functions) using a derivative-free parameter search known as the neighbourhood algorithm (NA). We implement an efficient forward method derived using the autoregression of receiver strips, and this allows us to search over a multiplicity of structure coefficients in a relatively short time. After demonstrating the feasibility of NA in synthetic cases, we apply it to observations of the inner-core-sensitive mode 13S2. The splitting function of this mode is dominated by spherical harmonic degree-2 axisymmetric structure and is consistent with the results obtained from the autoregressive linear inversion. The sensitivity analysis of multiple events confirms the importance of the 1994 Bolivia earthquake. When this event is used in the analysis, as few as two events are sufficient to constrain the splitting functions of the 13S2 mode. Apart from not requiring knowledge of the earthquake source, the newly developed technique provides an approximate uncertainty measure of the structure coefficients and allows us to control the type of structure solved for, for example to establish whether elastic structure is sufficient.

  18. Linear and nonlinear models for predicting fish bioconcentration factors for pesticides.

    PubMed

    Yuan, Jintao; Xie, Chun; Zhang, Ting; Sun, Jinfang; Yuan, Xuejie; Yu, Shuling; Zhang, Yingbiao; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu

    2016-08-01

    This work is devoted to the application of multiple linear regression (MLR), multilayer perceptron neural networks (MLP NN) and projection pursuit regression (PPR) to quantitative structure-property relationship analysis of the bioconcentration factors (BCFs) of pesticides tested on bluegill (Lepomis macrochirus). Molecular descriptors of a total of 107 pesticides were calculated with the DRAGON software and selected by the inverse enhanced replacement method. Based on the selected DRAGON descriptors, a linear model was built by MLR, and nonlinear models were developed using MLP NN and PPR. The robustness of the obtained models was assessed by cross-validation and by external validation using a test set. Outliers were also examined and deleted to improve predictive power. Comparative results revealed that PPR achieved the most accurate predictions. This study offers useful models and information for BCF prediction, risk assessment, and pesticide formulation. Copyright © 2016 Elsevier Ltd. All rights reserved.
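
    The MLR-with-cross-validation part of such a workflow can be sketched with synthetic data; the descriptor matrix and coefficients below are invented stand-ins, not the DRAGON descriptors of the study:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic descriptors (107 "compounds", 5 selected descriptors) and
# log-BCF values generated from a known linear relationship plus noise.
n, k = 107, 5
X = rng.standard_normal((n, k))
beta = np.array([0.8, -0.5, 0.3, 0.0, 1.2])
y = X @ beta + rng.normal(0.0, 0.1, n)

A = np.column_stack([np.ones(n), X])   # design matrix with intercept

# Leave-one-out cross-validated predictions for the MLR model.
y_cv = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    coef, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
    y_cv[i] = A[i] @ coef

# Cross-validated Q^2, a standard QSPR robustness statistic.
q2 = 1.0 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
```

    A high Q^2 indicates the linear model generalizes beyond the training set; the nonlinear MLP NN and PPR models would be validated with the same held-out scheme.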

  19. Sneutrino dark matter in gauged inverse seesaw models for neutrinos.

    PubMed

    An, Haipeng; Dev, P S Bhupal; Cai, Yi; Mohapatra, R N

    2012-02-24

    Extending the minimal supersymmetric standard model to explain small neutrino masses via the inverse seesaw mechanism can lead to a new light supersymmetric scalar partner which can play the role of inelastic dark matter (IDM). It is a linear combination of the superpartners of the neutral fermions in the theory (the light left-handed neutrino and two heavy standard model singlet neutrinos), which can be very light, with mass in the ~5-20 GeV range, as suggested by some current direct detection experiments. The IDM in this class of models has a keV-scale mass splitting, which is intimately connected to the small Majorana masses of the neutrinos. We predict the differential scattering rate and annual modulation of the IDM signal, which can be tested at future germanium- and xenon-based detectors.

  20. Approximate non-linear multiparameter inversion for multicomponent single and double P-wave scattering in isotropic elastic media

    NASA Astrophysics Data System (ADS)

    Ouyang, Wei; Mao, Weijian

    2018-03-01

    An asymptotic quadratic true-amplitude inversion method for isotropic elastic P waves is proposed to invert medium parameters. The multicomponent P-wave scattered wavefield is computed based on a forward relationship using the second-order Born approximation and corresponding high-frequency ray-theoretical methods. Within the local double-scattering mechanism, the P-wave transmission factors are carefully calculated, so that the radiation pattern for P-wave scattering is a quadratic combination of the density and Lamé moduli perturbation parameters. We further express the elastic P-wave scattered wavefield in the form of a generalized Radon transform (GRT). After introducing classical backprojection operators, we obtain an approximate solution of the inverse problem by solving a quadratic non-linear system. Numerical tests with synthetic data computed by a finite-difference scheme demonstrate that our quadratic inversion can accurately invert perturbation parameters for strong perturbations, compared with the P-wave single-scattering linear inversion method. Although our inversion strategy here is applied only to P-wave scattering, it can be extended to invert multicomponent elastic data containing both P-wave and S-wave information.

  1. Trajectory following and stabilization control of fully actuated AUV using inverse kinematics and self-tuning fuzzy PID.

    PubMed

    Hammad, Mohanad M; Elshenawy, Ahmed K; El Singaby, M I

    2017-01-01

    In this work, a design for a self-tuning non-linear fuzzy proportional-integral-derivative (FPID) controller is presented to control the position and speed of a multiple-input multiple-output (MIMO), fully actuated autonomous underwater vehicle (AUV) so that it follows desired trajectories. The non-linearity that results from the hydrodynamics and the coupled AUV dynamics makes the design of a stable controller a very difficult task. In this study, the control scheme is validated in a simulation environment using dynamic and kinematic equations for the AUV model and hydrodynamic damping equations. An AUV configuration with eight thrusters and an inverse kinematic model from a previous work is utilized in the simulation. In the proposed controller, Mamdani fuzzy rules are used to tune the parameters of the PID. Non-linear fuzzy Gaussian membership functions are selected to give better performance and response in the non-linear system. A control architecture with two feedback loops is designed such that the inner loop is for velocity control and the outer loop is for position control. Several test scenarios are executed to validate the controller performance, including different complex trajectories with and without injection of ocean current disturbances. A comparison between the proposed FPID controller and the conventional PID controller shows that the FPID controller has a faster response to the reference signal and more stable behavior in a disturbed non-linear environment.

  2. Trajectory following and stabilization control of fully actuated AUV using inverse kinematics and self-tuning fuzzy PID

    PubMed Central

    Elshenawy, Ahmed K.; El Singaby, M.I.

    2017-01-01

    In this work, a design for a self-tuning non-linear fuzzy proportional-integral-derivative (FPID) controller is presented to control the position and speed of a multiple-input multiple-output (MIMO), fully actuated autonomous underwater vehicle (AUV) so that it follows desired trajectories. The non-linearity that results from the hydrodynamics and the coupled AUV dynamics makes the design of a stable controller a very difficult task. In this study, the control scheme is validated in a simulation environment using dynamic and kinematic equations for the AUV model and hydrodynamic damping equations. An AUV configuration with eight thrusters and an inverse kinematic model from a previous work is utilized in the simulation. In the proposed controller, Mamdani fuzzy rules are used to tune the parameters of the PID. Non-linear fuzzy Gaussian membership functions are selected to give better performance and response in the non-linear system. A control architecture with two feedback loops is designed such that the inner loop is for velocity control and the outer loop is for position control. Several test scenarios are executed to validate the controller performance, including different complex trajectories with and without injection of ocean current disturbances. A comparison between the proposed FPID controller and the conventional PID controller shows that the FPID controller has a faster response to the reference signal and more stable behavior in a disturbed non-linear environment. PMID:28683071

  3. On the inversion-indel distance

    PubMed Central

    2013-01-01

    Background The inversion distance, that is the distance between two unichromosomal genomes with the same content allowing only inversions of DNA segments, can be computed thanks to a pioneering approach of Hannenhalli and Pevzner in 1995. In 2000, El-Mabrouk extended the inversion model to allow the comparison of unichromosomal genomes with unequal contents, thus insertions and deletions of DNA segments besides inversions. However, an exact algorithm was presented only for the case in which we have insertions alone and no deletion (or vice versa), while a heuristic was provided for the symmetric case, that allows both insertions and deletions and is called the inversion-indel distance. In 2005, Yancopoulos, Attie and Friedberg started a new branch of research by introducing the generic double cut and join (DCJ) operation, that can represent several genome rearrangements (including inversions). Among others, the DCJ model gave rise to two important results. First, it has been shown that the inversion distance can be computed in a simpler way with the help of the DCJ operation. Second, the DCJ operation originated the DCJ-indel distance, that allows the comparison of genomes with unequal contents, considering DCJ, insertions and deletions, and can be computed in linear time. Results In the present work we put these two results together to solve an open problem, showing that, when the graph that represents the relation between the two compared genomes has no bad components, the inversion-indel distance is equal to the DCJ-indel distance. We also give a lower and an upper bound for the inversion-indel distance in the presence of bad components. PMID:24564182

  4. Developing a reversible rapid coordinate transformation model for the cylindrical projection

    NASA Astrophysics Data System (ADS)

    Ye, Si-jing; Yan, Tai-lai; Yue, Yan-li; Lin, Wei-yan; Li, Lin; Yao, Xiao-chuang; Mu, Qin-yun; Li, Yong-qin; Zhu, De-hai

    2016-04-01

Numerical models are widely used for coordinate transformations. However, most numerical models generate polynomials that approximate "true" geographic or plane coordinates, and a single polynomial is rarely appropriate for both forward and inverse transformations. Since a transformation rule links geographic coordinates and plane coordinates, how accurate and efficient is the coordinate transformation if we instead construct polynomials that approximate the transformation rule rather than the "true" coordinates? In addition, how do models built from such polynomials compare with traditional numerical models of even higher degree? Focusing on cylindrical projection, this paper reports on a grid-based rapid numerical transformation model - a linear rule approximation model (LRA-model) that constructs linear polynomials to approximate the transformation rule and uses a graticule to alleviate error propagation. Our experiments on cylindrical projection transformation between the WGS 84 Geographic Coordinate System (EPSG 4326) and the WGS 84 UTM ZONE 50N Plane Coordinate System (EPSG 32650) with simulated data demonstrate that the LRA-model exhibits high efficiency, accuracy, and stability; is simple and easy to use for both forward and inverse transformations; and can be applied to transforming large volumes of data where high calculation efficiency is required. Furthermore, the LRA-model outperforms the widely used hyperbolic transformation model in terms of calculation efficiency, accuracy, and stability.
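The rule-approximation idea is easy to prototype. The sketch below is an illustrative assumption, not the authors' implementation: it fits one linear polynomial per graticule cell to a spherical Mercator northing rule, so each cell's linear rule serves both forward and inverse transformations exactly. The cell count, spherical radius, and tolerances are made-up values for the example.

```python
import math

R = 6378137.0  # WGS 84 semi-major axis (sphere assumed here for simplicity)

def mercator_y(lat_deg):
    """'True' cylindrical (Mercator) northing rule."""
    phi = math.radians(lat_deg)
    return R * math.log(math.tan(math.pi / 4 + phi / 2))

def build_lra(lat_min, lat_max, n_cells):
    """Graticule of cells; on each, fit y ~ a*lat + b through the endpoints."""
    edges = [lat_min + i * (lat_max - lat_min) / n_cells for i in range(n_cells + 1)]
    cells = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        a = (mercator_y(hi) - mercator_y(lo)) / (hi - lo)
        b = mercator_y(lo) - a * lo
        cells.append((lo, hi, a, b))
    return cells

def forward(cells, lat):
    for lo, hi, a, b in cells:
        if lo <= lat <= hi:
            return a * lat + b          # linear rule
    raise ValueError("latitude outside graticule")

def inverse(cells, y):
    for lo, hi, a, b in cells:
        if a * lo + b <= y <= a * hi + b:
            return (y - b) / a          # exact inverse of the same linear rule
    raise ValueError("y outside graticule")

cells = build_lra(0.0, 60.0, 6000)       # 0.01-degree graticule
lat = 39.9042
y = forward(cells, lat)
assert abs(inverse(cells, y) - lat) < 1e-9   # forward/inverse mutually consistent
assert abs(y - mercator_y(lat)) < 0.5        # sub-metre approximation error
```

Because the approximated object is the (smooth) transformation rule rather than scattered coordinate pairs, low-degree polynomials per cell already reach high accuracy, which is the point the abstract makes.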

  5. Zinc oxide inverse opal enzymatic biosensor

    NASA Astrophysics Data System (ADS)

    You, Xueqiu; Pikul, James H.; King, William P.; Pak, James J.

    2013-06-01

We report ZnO inverse opal- and nanowire (NW)-based enzymatic glucose biosensors with extended linear detection ranges. The ZnO inverse opal sensors have a 0.01-18 mM linear detection range, which is 2.5 times greater than that of the ZnO NW sensors and 1.5 times greater than that of other reported ZnO sensors. This larger range results from reduced glucose diffusivity through the inverse opal geometry. The ZnO inverse opal sensors have an average sensitivity of 22.5 μA/(mM cm2), which diminished by only 10% after 35 days, making them more stable than the ZnO NW sensors, whose sensitivity decreased by 10% after just 7 days.

  6. Polynomial compensation, inversion, and approximation of discrete time linear systems

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1987-01-01

    The least-squares transformation of a discrete-time multivariable linear system into a desired one by convolving the first with a polynomial system yields optimal polynomial solutions to the problems of system compensation, inversion, and approximation. The polynomial coefficients are obtained from the solution to a so-called normal linear matrix equation, whose coefficients are shown to be the weighting patterns of certain linear systems. These, in turn, can be used in the recursive solution of the normal equation.
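A toy instance of the idea can be written in a few lines. The sketch below is an assumption-laden illustration, not the paper's formulation: it computes a least-squares polynomial (FIR) inverse of a simple discrete-time system by solving the normal equation built from the convolution matrix; the system h and the filter length are invented for the example.

```python
# Find FIR coefficients g minimizing ||h * g - delta||^2, where h is the
# system's impulse response and * is discrete convolution. The minimizer
# solves the normal equation (A^T A) g = A^T d with A the convolution matrix.

def conv_matrix(h, m, n_out):
    """n_out x m matrix A with (A g)[k] = sum_j h[k-j] g[j]."""
    return [[h[k - j] if 0 <= k - j < len(h) else 0.0 for j in range(m)]
            for k in range(n_out)]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

h = [1.0, 0.5]                      # minimum-phase system: H(z) = 1 + 0.5 z^-1
m, n_out = 8, 9                     # inverse filter length, output length
A = conv_matrix(h, m, n_out)
d = [1.0] + [0.0] * (n_out - 1)     # desired output: unit impulse (inversion)
AtA = [[sum(A[k][i] * A[k][j] for k in range(n_out)) for j in range(m)] for i in range(m)]
Atd = [sum(A[k][i] * d[k] for k in range(n_out)) for i in range(m)]
g = solve(AtA, Atd)

# The exact inverse of 1 + 0.5 z^-1 has coefficients (-0.5)^j, which the
# truncated least-squares solution approaches.
assert abs(g[0] - 1.0) < 0.01 and abs(g[1] + 0.5) < 0.01
```

The matrix AtA here plays the role of the "normal linear matrix equation" coefficients mentioned in the abstract; changing d to another target sequence turns the same machinery into compensation or approximation rather than inversion.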

  7. On a comparison of two schemes in sequential data assimilation

    NASA Astrophysics Data System (ADS)

    Grishina, Anastasiia A.; Penenko, Alexey V.

    2017-11-01

This paper focuses on variational data assimilation as an approach to mathematical modeling. Realization of the approach requires solving a sequence of connected inverse problems with different sets of observational data. Two variational data assimilation schemes, "implicit" and "explicit", are considered in the article. Their equivalence is shown, and numerical results are given for the non-linear Robertson system. To avoid the "inverse problem crime", different schemes were used to produce the synthetic measurements and to solve the data assimilation problem.

  8. Linear functional minimization for inverse modeling

    DOE PAGES

    Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; ...

    2015-06-01

In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
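A one-dimensional caricature of a TV-regularized MAP estimate makes the mechanism concrete. The sketch below is illustrative only (the paper estimates spatially distributed conductivity through a flow model, not direct observations); all constants are assumptions, and the non-smooth TV term is smoothed with a small epsilon so plain gradient descent applies.

```python
import math, random

# MAP estimation with a (smoothed) Total Variation prior, in one dimension:
# minimize J(x) = 0.5*||x - d||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps),
# which favours piecewise-constant fields while preserving jumps.

random.seed(0)
n = 60
truth = [1.0 if i < n // 2 else 3.0 for i in range(n)]   # one sharp interface
d = [t + random.gauss(0.0, 0.3) for t in truth]          # noisy observations

lam, eps, step = 1.0, 1e-2, 0.02
x = d[:]                                                 # start at the data
for _ in range(3000):
    grad = [x[i] - d[i] for i in range(n)]               # data-misfit gradient
    for i in range(n - 1):
        t = (x[i + 1] - x[i]) / math.sqrt((x[i + 1] - x[i]) ** 2 + eps)
        grad[i] -= lam * t                               # smoothed-TV gradient
        grad[i + 1] += lam * t
    x = [x[i] - step * grad[i] for i in range(n)]

rms = math.sqrt(sum((x[i] - truth[i]) ** 2 for i in range(n)) / n)
rms_data = math.sqrt(sum((d[i] - truth[i]) ** 2 for i in range(n)) / n)
assert rms < rms_data    # the TV-regularized estimate beats the raw data
```

The TV term damps the oscillatory noise within each segment while leaving the jump at the interface largely intact, which is exactly the "annihilate oscillations, preserve discontinuities" behaviour the abstract attributes to TV regularization.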

  9. Modern Workflow Full Waveform Inversion Applied to North America and the Northern Atlantic

    NASA Astrophysics Data System (ADS)

    Krischer, Lion; Fichtner, Andreas; Igel, Heiner

    2015-04-01

We present the current state of a new seismic tomography model obtained using full waveform inversion of the crustal and upper mantle structure beneath North America and the Northern Atlantic, including the westernmost part of Europe. The eastern portion of the initial model partly consists of the previous models of Fichtner et al. (2013) and Rickers et al. (2013). The final results of this study will contribute to the 'Comprehensive Earth Model' being developed by the Computational Seismology group at ETH Zurich. Significant challenges include the size of the domain, the uneven event and station coverage, and the strong east-west alignment of seismic ray paths across the North Atlantic. We use as much data as feasible, resulting in several thousand recordings per event depending on the receivers deployed at the earthquakes' origin times. To manage such projects in a reproducible and collaborative manner, we, as tomographers, should abandon ad-hoc scripts and one-time programs and adopt sustainable and reusable solutions. We therefore developed the LArge-scale Seismic Inversion Framework (LASIF - http://lasif.net), an open-source toolbox for managing seismic data in the context of non-linear iterative inversions that greatly reduces the time to research. Information on the applied processing, modelling, iterative model updating, what happened during each iteration, and so on is systematically archived. This yields a provenance record that significantly enhances the reproducibility of the final model. Additionally, tools for automated data download across different data centers, window selection, misfit measurements, parallel data processing, and input file generation for various forward solvers are provided.

  10. On using surface-source downhole-receiver logging to determine seismic slownesses

    USGS Publications Warehouse

    Boore, D.M.; Thompson, E.M.

    2007-01-01

We present a method to solve for slowness models from surface-source downhole-receiver seismic travel times. The method estimates the slownesses in a single inversion of the travel times from all receiver depths and accounts for refractions at layer boundaries. The number and location of layer interfaces in the model can be selected based on lithologic changes or linear trends in the travel-time data. The interfaces based on linear trends in the data can be picked manually or by an automated algorithm. We illustrate the method with example sites for which geologic descriptions of the subsurface materials and independent slowness measurements are available. At each site we present slowness models that result from different interpretations of the data. The examples were carefully selected to address the reliability of interface selection and the ability of the inversion to identify thin layers, large slowness contrasts, and slowness gradients. Additionally, we compare the models in terms of ground-motion amplification. These plots illustrate the sensitivity of site amplifications to the uncertainties in the slowness model. We show that one-dimensional site amplifications are insensitive to thin layers in the slowness models; although slowness is variable over short ranges of depth, this variability has little effect on ground-motion amplification at frequencies up to 5 Hz.
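The layer-slope idea behind interface picking can be illustrated with a toy vertical-incidence example. This is an idealization, not the authors' inversion: real surface-source geometry involves slanted, refracted rays, which their method accounts for; here the source sits directly above the borehole so travel time is just the depth integral of slowness, and each layer's slowness is the local slope of the travel-time curve.

```python
# Toy downhole geometry: with the source directly above the borehole, rays are
# vertical and t(z) = sum over layers of slowness * thickness traversed.
# The layer slownesses are then the slopes of the linear trends in t(z).

interfaces = [0.0, 10.0, 30.0]        # layer tops (m); half-space below 30 m
slowness = [0.004, 0.002, 0.001]      # s/m (250, 500, 1000 m/s)

def travel_time(z):
    t, tops = 0.0, interfaces + [float("inf")]
    for i in range(len(slowness)):
        lo, hi = tops[i], tops[i + 1]
        if z > lo:
            t += slowness[i] * (min(z, hi) - lo)
    return t

depths = [5.0, 10.0, 20.0, 30.0, 40.0, 50.0]
times = [travel_time(z) for z in depths]

# Invert: slope of t(z) between receivers bracketing one layer = its slowness.
s1 = (times[1] - times[0]) / (depths[1] - depths[0])
s2 = (times[3] - times[2]) / (depths[3] - depths[2])
s3 = (times[5] - times[4]) / (depths[5] - depths[4])
assert abs(s1 - 0.004) < 1e-9
assert abs(s2 - 0.002) < 1e-9
assert abs(s3 - 0.001) < 1e-9
```

A change of slope in the t(z) data marks a layer interface, which is why the abstract's interface picking can be automated from linear trends alone.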

  11. Bayesian Travel Time Inversion adopting Gaussian Process Regression

    NASA Astrophysics Data System (ADS)

    Mauerberger, S.; Holschneider, M.

    2017-12-01

A major application in seismology is the determination of seismic velocity models. Travel time measurements put an integral constraint on the velocity between source and receiver. We provide insight into travel time inversion from a correlation-based Bayesian point of view. To that end, the concept of Gaussian process regression is adopted to estimate a velocity model. The non-linear travel time integral is approximated by a first-order Taylor expansion. A heuristic covariance describes correlations amongst observations and the a priori model. That approach enables us to assess a proxy of the Bayesian posterior distribution at ordinary computational costs; no multi-dimensional numerical integration or excessive sampling is necessary. Instead of stacking the data, we suggest progressively building the posterior distribution. Incorporating only a single evidence at a time accounts for the deficit of linearization. As a result, the most probable model is given by the posterior mean, whereas uncertainties are described by the posterior covariance. As a proof of concept, a purely 1-D synthetic model is addressed, with a single source accompanied by multiple receivers on top of a model comprising a discontinuity. We consider travel times of both phases - direct and reflected wave - corrupted by noise. The regions left and right of the interface are assumed independent, with the squared exponential kernel serving as covariance.
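The closed-form Gaussian update at the heart of this approach can be sketched compactly. The toy below is an assumption (it is not the paper's setup): it parameterizes a 1-D profile in slowness rather than velocity, because a travel time is then an exactly linear functional of the unknowns, so the Gaussian posterior needs no Taylor expansion at all; all grid sizes, kernel parameters, and noise levels are invented.

```python
import math

# GP regression with an integral (travel-time) observation: discretize a 1-D
# slowness profile, put a squared-exponential prior on it, and exploit that
# t = sum_i slowness_i * dx is linear in the unknowns, so the posterior is
# Gaussian and available in closed form -- no sampling needed.

n, dx = 20, 50.0                       # 20 cells of 50 m
prior_mean = [2e-3] * n                # 500 m/s background slowness
sigma2, ell = (5e-4) ** 2, 150.0       # prior variance and correlation length

K = [[sigma2 * math.exp(-((i - j) * dx) ** 2 / (2 * ell ** 2)) for j in range(n)]
     for i in range(n)]

H = [dx] * n                           # one observation: total travel time
tau2 = (1e-3) ** 2                     # observation noise variance
true_s = [2e-3 + (1e-3 if 8 <= i < 12 else 0.0) for i in range(n)]  # slow anomaly
t_obs = sum(H[i] * true_s[i] for i in range(n))

# Posterior mean for scalar data: m + K H^T (H K H^T + tau2)^(-1) (t_obs - H m)
HKHt = sum(H[i] * K[i][j] * H[j] for i in range(n) for j in range(n))
resid = t_obs - sum(H[i] * prior_mean[i] for i in range(n))
gain = [sum(K[i][j] * H[j] for j in range(n)) / (HKHt + tau2) for i in range(n)]
post = [prior_mean[i] + gain[i] * resid for i in range(n)]

# The integral constraint pulls the posterior toward the observed travel time.
t_post = sum(H[i] * post[i] for i in range(n))
t_prior = sum(H[i] * prior_mean[i] for i in range(n))
assert abs(t_post - t_obs) < abs(t_prior - t_obs)
```

Adding observations one at a time, as the abstract suggests, simply repeats this update with the previous posterior serving as the new prior.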

  12. Controllable rotational inversion in nanostructures with dual chirality.

    PubMed

    Dai, Lu; Zhu, Ka-Di; Shen, Wenzhong; Huang, Xiaojiang; Zhang, Li; Goriely, Alain

    2018-04-05

    Chiral structures play an important role in natural sciences due to their great variety and potential applications. A perversion connecting two helices with opposite chirality creates a dual-chirality helical structure. In this paper, we develop a novel model to explore quantitatively the mechanical behavior of normal, binormal and transversely isotropic helical structures with dual chirality and apply these ideas to known nanostructures. It is found that both direction and amplitude of rotation can be finely controlled by designing the cross-sectional shape. A peculiar rotational inversion of overwinding followed by unwinding, observed in some gourd and cucumber tendril perversions, not only exists in transversely isotropic dual-chirality helical nanobelts, but also in the binormal/normal ones when the cross-sectional aspect ratio is close to 1. Beyond this rotational inversion region, the binormal and normal dual-chirality helical nanobelts exhibit a fixed directional rotation of unwinding and overwinding, respectively. Moreover, in the binormal case, the rotation of these helical nanobelts is nearly linear, which is promising as a possible design for linear-to-rotary motion converters. The present work suggests new designs for nanoscale devices.

  13. Shear wave velocity structure in North America from large-scale waveform inversions of surface waves

    USGS Publications Warehouse

    Alsina, D.; Woodward, R.L.; Snieder, R.K.

    1996-01-01

A two-step nonlinear and linear inversion is carried out to map the lateral heterogeneity beneath North America using surface wave data. The lateral resolution for most areas of the model is of the order of several hundred kilometers. The most obvious feature in the tomographic images is the rapid transition between low velocities in the tectonically active region west of the Rocky Mountains and high velocities in the stable central and eastern shield of North America. The model also reveals smaller-scale heterogeneous velocity structures. A high-velocity anomaly is imaged beneath the state of Washington that could be explained as the subducting Juan de Fuca plate beneath the Cascades. A large low-velocity structure extends along the coast from the Mendocino to the Rivera triple junction and into the continental interior across the southwestern United States and northwestern Mexico. Its shape changes notably with depth. This anomaly largely coincides with the part of the margin where no lithosphere is consumed because subduction has been replaced by a transform fault. Evidence for a discontinuous subduction of the Cocos plate along the Middle American Trench is found. In central Mexico a transition is visible from low velocities across the Trans-Mexican Volcanic Belt (TMVB) to high velocities beneath the Yucatan Peninsula. Two elongated low-velocity anomalies, beneath the Yellowstone Plateau and the eastern Snake River Plain volcanic system and beneath central Mexico and the TMVB, seem to be associated with magmatism and partial melting. Another low-velocity feature is seen at depths of approximately 200 km beneath Florida and the Atlantic Coastal Plain. The inversion technique used is based on a linear surface wave scattering theory, which gives tomographic images of the relative phase velocity perturbations in four period bands ranging from 40 to 150 s. In order to find a smooth reference model, a nonlinear inversion based on ray theory is first performed. After correcting for the crustal thickness, the phase velocity perturbations obtained from the subsequent linear waveform inversion for the different period bands are converted to a three-layer model of S velocity perturbations (layer 1, 25-100 km; layer 2, 100-200 km; layer 3, 200-300 km). We have applied this method to 275 high-quality Rayleigh waves recorded by a variety of instruments in North America (IRIS/USGS, IRIS/IDA, TERRAscope, RSTN). Sensitivity tests indicate that the lateral resolution is especially good in the densely sampled western continental United States, Mexico, and the Gulf of Mexico.

  14. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; ...

    2015-04-29

Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, and model them using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure which are useful in atmospheric inversions: we have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field, and, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches, and that it reduces the overall computational cost by a factor of 2. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
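The greedy-selection core of matching-pursuit methods fits in a short sketch. The code below is a heavily simplified stand-in for StOMP, not the paper's algorithm: real StOMP thresholds many coefficients per stage and works with wavelet dictionaries, whereas this toy picks one atom per iteration on an orthonormal basis and enforces non-negativity by clipping, to show how sparsity and a positivity constraint combine.

```python
# Greedy matching pursuit with non-negative coefficients (toy StOMP stand-in).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def omp_nonneg(A_cols, y, k):
    """Recover a k-sparse non-negative x with y ~ sum_j x[j] * A_cols[j]."""
    support, x = [], [0.0] * len(A_cols)
    r = y[:]
    for _ in range(k):
        j = max(range(len(A_cols)), key=lambda c: dot(A_cols[c], r))
        if j not in support:
            support.append(j)
        # Coordinate updates on the support (exact here, since the columns are
        # orthonormal), clipped to keep the estimate non-negative.
        for c in support:
            x[c] = max(0.0, x[c] + dot(A_cols[c], r))
            r = [y[i] - sum(x[s] * A_cols[s][i] for s in support)
                 for i in range(len(y))]
    return x

# Orthonormal dictionary: standard basis of R^6; true field is 2-sparse, >= 0.
n = 6
A_cols = [[1.0 if i == j else 0.0 for i in range(n)] for j in range(n)]
x_true = [0.0, 3.0, 0.0, 0.0, 1.5, 0.0]
y = [sum(x_true[j] * A_cols[j][i] for j in range(n)) for i in range(n)]

x_hat = omp_nonneg(A_cols, y, k=2)
assert all(v >= 0.0 for v in x_hat)
assert max(abs(a - b) for a, b in zip(x_hat, x_true)) < 1e-9
```

The data-driven model simplification the abstract describes corresponds to the support-selection step: coefficients that the data cannot support are simply never activated.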

  16. Full Waveform Inversion for Seismic Velocity And Anelastic Losses in Heterogeneous Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Askan, A.; /Carnegie Mellon U.; Akcelik, V.

    2009-04-30

We present a least-squares optimization method for solving the nonlinear full waveform inverse problem of determining the crustal velocity and intrinsic attenuation properties of sedimentary valleys in earthquake-prone regions. Given a known earthquake source and a set of seismograms generated by the source, the inverse problem is to reconstruct the anelastic properties of a heterogeneous medium with possibly discontinuous wave velocities. The inverse problem is formulated as a constrained optimization problem, where the constraints are the partial and ordinary differential equations governing the anelastic wave propagation from the source to the receivers in the time domain. This leads to a variational formulation in terms of the material model plus the state variables and their adjoints. We employ a wave propagation model in which the intrinsic energy-dissipating nature of the soil medium is modeled by a set of standard linear solids. The least-squares optimization approach to inverse wave propagation presents the well-known difficulties of ill-posedness and multiple minima. To overcome ill-posedness, we include a total variation regularization functional in the objective function, which annihilates highly oscillatory material property components while preserving discontinuities in the medium. To treat multiple minima, we use a multilevel algorithm that solves a sequence of subproblems on increasingly finer grids with increasingly higher frequency source components to remain within the basin of attraction of the global minimum. We illustrate the methodology with high-resolution inversions for two-dimensional sedimentary models of the San Fernando Valley, under SH-wave excitation. We perform inversions for both the seismic velocity and the intrinsic attenuation using synthetic waveforms at the observer locations as pseudo-observed data.

  17. Nonlinear inversion of resistivity sounding data for 1-D earth models using the Neighbourhood Algorithm

    NASA Astrophysics Data System (ADS)

    Ojo, A. O.; Xie, Jun; Olorunfemi, M. O.

    2018-01-01

To reduce ambiguity related to nonlinearities in the resistivity model-data relationship, an efficient direct-search scheme employing the Neighbourhood Algorithm (NA) was implemented to solve the 1-D resistivity problem. In addition to finding a range of best-fit models that are more likely to be global minima, this method investigates the entire multi-dimensional model space and provides additional information on the posterior model covariance matrix, the marginal probability density functions and an ensemble of acceptable models. This provides new insight into how well the model parameters are constrained and makes it possible to assess trade-offs between them, thus avoiding some common interpretation pitfalls. The efficacy of the newly developed program is tested by inverting both synthetic (noisy and noise-free) data and field data previously interpreted by other authors using different inversion methods, so as to provide a good basis for performance comparison. In all cases, the inverted model parameters were in good agreement with the true model parameters and those recovered by other methods, and correlate remarkably well with the available borehole litho-log and known geology for the field dataset. The NA method has proven useful when a good starting model is not available, and the reduced number of unknowns in the 1-D resistivity inverse problem makes it an attractive alternative to linearized methods. Hence, the newly developed program offers an excellent complementary tool for the global inversion of layered resistivity structure.
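The flavour of such a derivative-free direct search can be conveyed with a toy. The sketch below is only schematic and in the spirit of the NA: the true algorithm resamples uniformly within Voronoi cells of the best models, which is simplified here to shrinking boxes around them, and the "forward model" is an invented apparent-resistivity curve, not a real 1-D resistivity solution.

```python
import math, random

# Schematic NA-style direct search: keep an ensemble, concentrate new samples
# around the current best models, and shrink the sampling neighbourhoods.

random.seed(1)

def forward(rho1, h1, spacings, rho2=100.0):
    # Toy transition from rho1 (shallow) to rho2 (deep) controlled by h1.
    return [rho1 + (rho2 - rho1) * (1.0 - math.exp(-r / h1)) for r in spacings]

spacings = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0]
d_obs = forward(20.0, 8.0, spacings)            # "true" model: rho1=20, h1=8

def misfit(m):
    pred = forward(m[0], m[1], spacings)
    return sum((p - o) ** 2 for p, o in zip(pred, d_obs))

bounds = [(1.0, 100.0), (0.5, 50.0)]
width = [hi - lo for lo, hi in bounds]
ens = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(50)]
for it in range(30):
    ens.sort(key=misfit)
    best = ens[:5]                               # elitist: best models survive
    scale = 0.7 ** it                            # shrinking neighbourhoods
    new = []
    for b in best:
        for _ in range(10):
            new.append([min(max(b[k] + random.uniform(-1, 1) * scale * width[k],
                                bounds[k][0]), bounds[k][1]) for k in range(2)])
    ens = best + new

ens.sort(key=misfit)
assert misfit(ens[0]) < 10.0                     # converged near the optimum
```

Because the whole ensemble is retained and ranked, the final population doubles as the "ensemble of acceptable models" the abstract uses for appraisal, rather than a single best-fit point.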

  18. Using a pseudo-dynamic source inversion approach to improve earthquake source imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Song, S. G.; Dalguer, L. A.; Clinton, J. F.

    2014-12-01

    Imaging a high-resolution spatio-temporal slip distribution of an earthquake rupture is a core research goal in seismology. In general we expect to obtain a higher quality source image by improving the observational input data (e.g. using more higher quality near-source stations). However, recent studies show that increasing the surface station density alone does not significantly improve source inversion results (Custodio et al. 2005; Zhang et al. 2014). We introduce correlation structures between the kinematic source parameters: slip, rupture velocity, and peak slip velocity (Song et al. 2009; Song and Dalguer 2013) in the non-linear source inversion. The correlation structures are physical constraints derived from rupture dynamics that effectively regularize the model space and may improve source imaging. We name this approach pseudo-dynamic source inversion. We investigate the effectiveness of this pseudo-dynamic source inversion method by inverting low frequency velocity waveforms from a synthetic dynamic rupture model of a buried vertical strike-slip event (Mw 6.5) in a homogeneous half space. In the inversion, we use a genetic algorithm in a Bayesian framework (Moneli et al. 2008), and a dynamically consistent regularized Yoffe function (Tinti, et al. 2005) was used for a single-window slip velocity function. We search for local rupture velocity directly in the inversion, and calculate the rupture time using a ray-tracing technique. We implement both auto- and cross-correlation of slip, rupture velocity, and peak slip velocity in the prior distribution. Our results suggest that kinematic source model estimates capture the major features of the target dynamic model. The estimated rupture velocity closely matches the target distribution from the dynamic rupture model, and the derived rupture time is smoother than the one we searched directly. 
By implementing both auto- and cross-correlation of kinematic source parameters, in comparison to traditional smoothing constraints, we are in effect regularizing the model space in a more physics-based manner without losing resolution of the source image. Further investigation is needed to tune the related parameters of pseudo-dynamic source inversion and the relative weighting between the prior and the likelihood function in the Bayesian inversion.

  19. Errors in Tsunami Source Estimation from Tide Gauges

    NASA Astrophysics Data System (ADS)

    Arcas, D.

    2012-12-01

Linearity of tsunami waves in deep water can be assessed by comparing the flow speed u to the wave propagation speed √(gh). In real tsunami scenarios this evaluation becomes impractical due to the absence of observational data on tsunami flow velocities in shallow water. Consequently, the extent of validity of the linear regime in the ocean is unclear. Linearity is the fundamental assumption behind tsunami source inversion processes based on linear combinations of unit propagation runs from a deep-water propagation database (Gica et al., 2008). The primary tsunami elevation data for such inversions are usually provided by National Oceanic and Atmospheric Administration (NOAA) deep-water tsunami detection systems known as DART. The use of tide gauge data for such inversions is more controversial due to the uncertainty of wave linearity at the depth of the tide gauge site. This study demonstrates the inaccuracies incurred in source estimation using tide gauge data in conjunction with a linear combination procedure for tsunami source estimation.
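The quoted linearity criterion reduces to a simple amplitude-to-depth ratio. The numbers below are illustrative assumptions, not values from the study: for linear long waves the particle speed is u ≈ η√(g/h), so u/√(gh) = η/h, which is tiny at DART depths but much larger at tide-gauge depths.

```python
import math

# Linearity criterion u / sqrt(g*h) for linear long waves, where the particle
# speed is u = eta * sqrt(g/h); the ratio reduces to eta/h (amplitude/depth).

def linearity_ratio(eta, h):
    g = 9.81
    u = eta * math.sqrt(g / h)          # linear long-wave particle speed
    return u / math.sqrt(g * h)         # algebraically equal to eta / h

# Deep ocean (DART-like): 0.1 m amplitude in 4000 m of water -> very linear.
deep = linearity_ratio(0.1, 4000.0)
# Tide-gauge depth: 0.5 m amplitude in 10 m of water -> ratio 0.05.
shallow = linearity_ratio(0.5, 10.0)

assert abs(deep - 0.1 / 4000.0) < 1e-12
assert shallow > 100 * deep             # linearity is far more questionable
```

This is why unit-source superposition is well justified against deep-water DART records but becomes controversial for tide gauges, as the abstract argues.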

  20. On uncertainty quantification in hydrogeology and hydrogeophysics

    NASA Astrophysics Data System (ADS)

    Linde, Niklas; Ginsbourger, David; Irving, James; Nobile, Fabio; Doucet, Arnaud

    2017-12-01

    Recent advances in sensor technologies, field methodologies, numerical modeling, and inversion approaches have contributed to unprecedented imaging of hydrogeological properties and detailed predictions at multiple temporal and spatial scales. Nevertheless, imaging results and predictions will always remain imprecise, which calls for appropriate uncertainty quantification (UQ). In this paper, we outline selected methodological developments together with pioneering UQ applications in hydrogeology and hydrogeophysics. The applied mathematics and statistics literature is not easy to penetrate and this review aims at helping hydrogeologists and hydrogeophysicists to identify suitable approaches for UQ that can be applied and further developed to their specific needs. To bypass the tremendous computational costs associated with forward UQ based on full-physics simulations, we discuss proxy-modeling strategies and multi-resolution (Multi-level Monte Carlo) methods. We consider Bayesian inversion for non-linear and non-Gaussian state-space problems and discuss how Sequential Monte Carlo may become a practical alternative. We also describe strategies to account for forward modeling errors in Bayesian inversion. Finally, we consider hydrogeophysical inversion, where petrophysical uncertainty is often ignored leading to overconfident parameter estimation. The high parameter and data dimensions encountered in hydrogeological and geophysical problems make UQ a complicated and important challenge that has only been partially addressed to date.
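The multi-level Monte Carlo idea mentioned above can be demonstrated with a two-level toy. Everything below is an invented stand-in for a full-physics simulator: "fine" and "coarse" are two step sizes of an Euler integration of a scalar decay ODE with a random rate, and the telescoping identity E[f_fine] = E[f_coarse] + E[f_fine - f_coarse] lets many cheap coarse runs carry most of the sampling burden.

```python
import random, math

# Two-level Monte Carlo: many cheap coarse samples + few coupled corrections.

random.seed(2)

def euler(a, n_steps):
    """Euler integration of dy/dt = -a*y on [0, 1], y(0) = 1."""
    y, dt = 1.0, 1.0 / n_steps
    for _ in range(n_steps):
        y += dt * (-a * y)
    return y

def sample_a():
    return random.uniform(0.5, 1.5)     # uncertain decay rate

# Level 0: 20000 coarse samples (4 steps each).
coarse = [euler(sample_a(), 4) for _ in range(20000)]

# Level 1: 500 coupled corrections -- the SAME random a drives both the
# 64-step and the 4-step run, so the difference has small variance.
corr = []
for _ in range(500):
    a = sample_a()
    corr.append(euler(a, 64) - euler(a, 4))

est = sum(coarse) / len(coarse) + sum(corr) / len(corr)

# Reference: E[exp(-a)] for a ~ U(0.5, 1.5) is exp(-0.5) - exp(-1.5).
ref = math.exp(-0.5) - math.exp(-1.5)
assert abs(est - ref) < 0.01
```

The coupling of levels through a shared random input is the essential trick: it keeps the correction variance small, so only a handful of expensive fine-level runs are needed, which is precisely the cost-bypassing role the review assigns to multi-resolution methods.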

  1. A Parameterized Inversion Model for Soil Moisture and Biomass from Polarimetric Backscattering Coefficients

    NASA Technical Reports Server (NTRS)

    Truong-Loi, My-Linh; Saatchi, Sassan; Jaruwatanadilok, Sermsak

    2012-01-01

A semi-empirical algorithm for the retrieval of soil moisture, root mean square (RMS) height and biomass from polarimetric SAR data is explained and analyzed in this paper. The algorithm is a simplification of the distorted Born model. It takes into account the physical scattering phenomenon and has three major components: volume, double-bounce and surface. This simplified model uses the three backscattering coefficients (σHH, σHV and σVV) at low frequency (P-band). The inversion process uses the Levenberg-Marquardt non-linear least-squares method to estimate the structural parameters. The estimation process is explained in full, from initialization of the unknowns to the retrievals. A sensitivity analysis is also performed in which the initial values of the inversion are varied randomly. The results show that the inversion process is not very sensitive to the initial values, and most retrievals have a root-mean-square error lower than 5% for soil moisture, 24 Mg/ha for biomass and 0.49 cm for roughness, for a soil moisture of 40%, a roughness of 3 cm and biomass varying from 0 to 500 Mg/ha with a mean of 161 Mg/ha.
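The Levenberg-Marquardt iteration named in the abstract is straightforward to sketch on a toy problem. The example below is an assumption throughout: a one-parameter model y = exp(-b·x) stands in for the paper's three-component backscatter model, and the damping schedule is a common textbook choice, not the authors'.

```python
import math

# Toy Levenberg-Marquardt: damped Gauss-Newton steps with an accept/reject
# rule that shrinks the damping on success and grows it on failure.

xs = [0.0, 0.5, 1.0, 2.0, 4.0]
b_true = 0.8
ys = [math.exp(-b_true * x) for x in xs]        # noise-free synthetic data

def residuals(b):
    return [math.exp(-b * x) - y for x, y in zip(xs, ys)]

b, lam = 3.0, 1e-2                              # deliberately poor initial guess
for _ in range(50):
    r = residuals(b)
    J = [-x * math.exp(-b * x) for x in xs]     # d r_i / d b
    JtJ = sum(j * j for j in J)
    Jtr = sum(j * ri for j, ri in zip(J, r))
    step = -Jtr / (JtJ + lam * JtJ)             # damped normal equation
    if sum(ri * ri for ri in residuals(b + step)) < sum(ri * ri for ri in r):
        b += step
        lam *= 0.5                              # accept: trust the model more
    else:
        lam *= 10.0                             # reject: damp more strongly

assert abs(b - b_true) < 1e-6
```

The damping term is what makes the method robust to the poor initial guesses probed by the paper's sensitivity analysis: far from the optimum it behaves like cautious gradient descent, near it like fast Gauss-Newton.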

  2. Coseismic and post-seismic activity associated with the 2008 Mw 6.3 Damxung earthquake, Tibet, constrained by InSAR

    NASA Astrophysics Data System (ADS)

    Bie, Lidong; Ryder, Isabelle; Nippress, Stuart E. J.; Bürgmann, Roland

    2014-02-01

    The 2008 Mw 6.3 Damxung earthquake on the Tibetan Plateau is investigated to (i) derive a coseismic slip model in a layered elastic Earth; (ii) reveal the relationship between coseismic slip, afterslip and aftershocks and (iii) place a lower bound on mid/lower crustal viscosity. The fault parameters and coseismic slip model were derived by inversion of Envisat InSAR data. We developed an improved non-linear inversion scheme to find an optimal rupture geometry and slip distribution on a fault in a layered elastic crust. Although the InSAR data for this event cannot distinguish between homogeneous and layered crustal models, the maximum slip of the latter model is smaller and deeper, while the moment release calculated from both models is similar. A ˜1.6 yr post-seismic deformation time-series starting 20 d after the main shock reveals localized deformation at the southern part of the fault. Inversions for afterslip indicate three localized slip patches, and the cumulative afterslip moment after 615 d is at least ˜11 per cent of the coseismic moment. The afterslip patches are distributed at different depths along the fault, showing no obvious systematic depth-dependence. The deepest of the three patches, however, shows a slight tendency to migrate to greater depth over time. No linear correlation is found for the temporal evolution of afterslip and aftershocks. Finally, modelling of viscoelastic relaxation in a Maxwell half-space yields a lower bound of 1 × 1018 Pa s on the viscosity of the mid/lower crust. This is consistent with viscosity estimates in other studies of post-seismic deformation across the Tibetan Plateau.

  3. A Reduced-Order Successive Linear Estimator for Geostatistical Inversion and its Application in Hydraulic Tomography

    NASA Astrophysics Data System (ADS)

    Zha, Yuanyuan; Yeh, Tian-Chyi J.; Illman, Walter A.; Zeng, Wenzhi; Zhang, Yonggen; Sun, Fangqiang; Shi, Liangsheng

    2018-03-01

    Hydraulic tomography (HT) is a recently developed technology for characterizing high-resolution, site-specific heterogeneity using hydraulic data (nd) from a series of cross-hole pumping tests. To properly account for the subsurface heterogeneity and to flexibly incorporate additional information, geostatistical inverse models, which permit a large number of spatially correlated unknowns (ny), are frequently used to interpret the collected data. However, the memory storage requirements for the covariance of the unknowns (ny × ny) in these models are prodigious for large-scale 3-D problems. Moreover, the sensitivity evaluation is often computationally intensive using the traditional difference method (ny forward runs). Although employment of the adjoint method can reduce the cost to nd forward runs, the adjoint model requires intrusive coding effort. In order to resolve these issues, this paper presents a Reduced-Order Successive Linear Estimator (ROSLE) for analyzing HT data. This new estimator approximates the covariance of the unknowns using a Karhunen-Loeve Expansion (KLE) truncated to order nkl, and it calculates the directional sensitivities (in the directions of the nkl eigenvectors) to form the covariance and cross-covariance used in the Successive Linear Estimator (SLE). In addition, the covariance of the unknowns is updated at every iteration by updating the eigenvalues and eigenfunctions. The computational advantages of the proposed algorithm are demonstrated through numerical experiments and a 3-D transient HT analysis of data from a highly heterogeneous field site.
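
    The storage saving behind the KLE truncation can be illustrated on a small 1-D example. The exponential covariance and grid below are assumptions chosen for illustration, standing in for the ny × ny covariance of a 3-D conductivity field:

```python
import numpy as np

# Assumed exponential covariance on a 1-D grid (illustrative stand-in
# for the ny x ny covariance of a 3-D hydraulic conductivity field).
ny, nkl = 200, 20
x = np.linspace(0.0, 1.0, ny)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # full ny x ny covariance

w, V = np.linalg.eigh(C)                             # eigenpairs, ascending order
w_k = w[-nkl:][::-1]                                 # nkl leading eigenvalues
V_k = V[:, -nkl:][:, ::-1]                           # corresponding eigenvectors

C_k = (V_k * w_k) @ V_k.T                            # rank-nkl KLE approximation
rel_err = np.linalg.norm(C - C_k) / np.linalg.norm(C)
# storage drops from ny*ny values to ny*nkl + nkl values
```

    Because the eigenvalues of smooth covariance kernels decay rapidly, a small nkl captures almost all of the variance, which is what makes the nkl directional-sensitivity runs in the ROSLE affordable compared with ny forward runs.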

  4. Sensitivity-based virtual fields for the non-linear virtual fields method

    NASA Astrophysics Data System (ADS)

    Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice

    2017-09-01

    The virtual fields method is an approach to inversely identify material parameters using full-field deformation data. In this manuscript, a new set of automatically-defined virtual fields for non-linear constitutive models has been proposed. These new sensitivity-based virtual fields reduce the influence of noise on the parameter identification. The sensitivity-based virtual fields were applied to a numerical example involving small strain plasticity; however, the general formulation derived for these virtual fields is applicable to any non-linear constitutive model. To quantify the improvement offered by these new virtual fields, they were compared with stiffness-based and manually defined virtual fields. The proposed sensitivity-based virtual fields were consistently able to identify plastic model parameters and outperform the stiffness-based and manually defined virtual fields when the data was corrupted by noise.

  5. Association of Brain-Derived Neurotrophic Factor and Vitamin D with Depression and Obesity: A Population-Based Study.

    PubMed

    Goltz, Annemarie; Janowitz, Deborah; Hannemann, Anke; Nauck, Matthias; Hoffmann, Johanna; Seyfart, Tom; Völzke, Henry; Terock, Jan; Grabe, Hans Jörgen

    2018-06-19

    Depression and obesity are widespread and closely linked. Brain-derived neurotrophic factor (BDNF) and vitamin D are both assumed to be associated with depression and obesity. Little is known about the interplay between vitamin D and BDNF. We explored the putative associations and interactions between serum BDNF and vitamin D levels with depressive symptoms and abdominal obesity in a large population-based cohort. Data were obtained from the population-based Study of Health in Pomerania (SHIP)-Trend (n = 3,926). The associations of serum BDNF and vitamin D levels with depressive symptoms (measured using the Patient Health Questionnaire) were assessed with binary and multinomial logistic regression models. The associations of serum BDNF and vitamin D levels with obesity (measured by the waist-to-hip ratio [WHR]) were assessed with binary logistic and linear regression models with restricted cubic splines. Logistic regression models revealed inverse associations of vitamin D with depression (OR = 0.966; 95% CI 0.951-0.981) and obesity (OR = 0.976; 95% CI 0.967-0.985). No linear association of serum BDNF with depression or obesity was found. However, linear regression models revealed a U-shaped association of BDNF with WHR (p < 0.001). Vitamin D was inversely associated with depression and obesity. BDNF was associated with abdominal obesity, but not with depression. At the population level, our results support the relevant roles of vitamin D and BDNF in mental and physical health-related outcomes.

  6. A sparse reconstruction method for the estimation of multiresolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; ...

    2014-08-20

    We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
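
    A minimal greedy sparse-reconstruction sketch in the spirit of the abstract: plain orthogonal matching pursuit rather than the stagewise StOMP variant, and without the prior-information, non-negativity and domain-limiting extensions the paper develops. The sensing matrix and sparse vector are synthetic.

```python
import numpy as np

def omp(A, y, k):
    """Plain orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit on the selected support."""
    resid, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ resid)))   # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ coef          # orthogonal residual update
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)  # underdetermined sensing matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, 1.5, 3.0]             # 3-sparse, non-negative "emissions"
x_hat = omp(A, A @ x_true, k=6)                   # a few extra iterations for safety
```

    StOMP differs by thresholding and admitting a whole stage of columns per iteration instead of one, which is what makes it fast enough for the large emission-field problems described above.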

  7. A posteriori error estimates in voice source recovery

    NASA Astrophysics Data System (ADS)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of recovering the voice source pulse from a segment of a speech signal is considered. A special mathematical model relating these quantities is used for the solution. A variational method of solving the inverse problem of voice source recovery is proposed for a new parametric class of sources, namely piecewise-linear sources (PWL-sources). A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. A posteriori error estimates can also serve as a quality criterion for the obtained voice source pulses in application to speaker recognition.

  8. Fast Geostatistical Inversion using Randomized Matrix Decompositions and Sketchings for Heterogeneous Aquifer Characterization

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Le, E. B.; Vesselinov, V. V.

    2015-12-01

    We present a fast, scalable, and highly-implementable stochastic inverse method for characterization of aquifer heterogeneity. The method utilizes recent advances in randomized matrix algebra and exploits the structure of the Quasi-Linear Geostatistical Approach (QLGA), without requiring a structured grid like Fast-Fourier Transform (FFT) methods. The QLGA framework is a more stable version of Gauss-Newton iterates for a large number of unknown model parameters, and provides unbiased estimates. The methods are matrix-free and do not require derivatives or adjoints, and are thus ideal for complex models and black-box implementation. We also incorporate randomized least-squares solvers and data-reduction methods, which speed up computation and simulate missing data points. The new inverse methodology is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Inversion results based on a series of synthetic problems with steady-state and transient calibration data are presented.
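
    The flavor of randomized matrix algebra the abstract leans on can be illustrated with a generic Halko-style randomized SVD. This is an illustrative sketch (in Python for brevity, not the MADS/Julia implementation), with an assumed low-rank test matrix:

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, seed=None):
    """Halko-style randomized SVD: sketch the range of A with a random
    test matrix, then do a small deterministic SVD in that subspace."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + n_oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                         # orthonormal range sketch
    B = Q.T @ A                                            # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :rank], s[:rank], Vt[:rank]

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 40)) @ rng.standard_normal((40, 200))  # exact rank 40
U, s, Vt = randomized_svd(A, rank=40)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

    Only matrix-vector products with A are needed, which is why such sketching composes naturally with the matrix-free, black-box philosophy described above.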

  9. On estimation of linear transformation models with nested case–control sampling

    PubMed Central

    Liu, Mengling

    2011-01-01

    Nested case–control (NCC) sampling is widely used in large epidemiological cohort studies for its cost effectiveness, but its data analysis primarily relies on the Cox proportional hazards model. In this paper, we consider a family of linear transformation models for analyzing NCC data and propose an inverse selection probability weighted estimating equation method for inference. Consistency and asymptotic normality of our estimators for regression coefficients are established. We show that the asymptotic variance has a closed analytic form and can be easily estimated. Numerical studies are conducted to support the theory and an application to the Wilms’ Tumor Study is also given to illustrate the methodology. PMID:21912975

  10. Towards the Early Detection of Breast Cancer in Young Women

    DTIC Science & Technology

    2006-10-01

    approach. 4. Poroelastic model for tissue deformation: We have implemented the model of Netti et al. in a finite element program in order to simulate...changes would not be expected. Interstitial Fluid Flow 5. Conclusions A poroelastic model that includes the effects of fluid flow and the possibility of...images to produce a displacement field. Using this displacement field, and an assumed linear elastic model for the tissue, an inverse problem is solved

  11. Standard and inverse bond percolation of straight rigid rods on square lattices

    NASA Astrophysics Data System (ADS)

    Ramirez, L. S.; Centres, P. M.; Ramirez-Pastor, A. J.

    2018-04-01

    Numerical simulations and finite-size scaling analysis have been carried out to study standard and inverse bond percolation of straight rigid rods on square lattices. In the case of standard percolation, the lattice is initially empty. Then, linear bond k-mers (sets of k linear nearest-neighbor bonds) are randomly and sequentially deposited on the lattice. The jamming coverage p_{j,k} and percolation threshold p_{c,k} are determined for a wide range of k (1 ≤ k ≤ 120). p_{j,k} and p_{c,k} exhibit a decreasing behavior with increasing k, with p_{j,k→∞} = 0.7476(1) and p_{c,k→∞} = 0.0033(9) being the limit values for large k-mer sizes. p_{j,k} is always greater than p_{c,k}, and consequently the percolation phase transition occurs for all values of k. In the case of inverse percolation, the process starts with an initial configuration where all lattice bonds are occupied and, given that periodic boundary conditions are used, the opposite sides of the lattice are connected by nearest-neighbor occupied bonds. Then, the system is diluted by randomly removing linear bond k-mers from the lattice. The central idea here is based on finding the maximum concentration of occupied bonds (minimum concentration of empty bonds) for which connectivity disappears. This particular value of concentration is called the inverse percolation threshold p^i_{c,k}, and determines a geometrical phase transition in the system. On the other hand, the inverse jamming coverage p^i_{j,k} is the coverage of the limit state, in which no more objects can be removed from the lattice due to the absence of linear clusters of nearest-neighbor bonds of appropriate size. It is easy to understand that p^i_{j,k} = 1 - p_{j,k}. The obtained results for p^i_{c,k} show that the inverse percolation threshold is a decreasing function of k in the range 1 ≤ k ≤ 18. For k > 18, all jammed configurations are percolating states, and consequently there is no non-percolating phase. In other words, the lattice remains connected even when the highest allowed concentration of removed bonds p^i_{j,k} is reached. In terms of network attacks, this striking behavior indicates that random attacks on single nodes (k = 1) are much more effective than correlated attacks on groups of close nodes (large k's). Finally, the accurate determination of critical exponents reveals that standard and inverse bond percolation models on square lattices belong to the same universality class as random percolation, regardless of the size k considered.

  12. Transurethral Ultrasound Diffraction Tomography

    DTIC Science & Technology

    2007-03-01

    the covariance matrix was derived. The covariance reduced to that of the X-ray CT under the assumptions of linear operator and real data.[5] The...the covariance matrix in the linear X-ray computed tomography is a special case of the inverse scattering matrix derived in this paper. The matrix was...is derived in Sec. IV, and its relation to that of the linear X-ray computed tomography appears in Sec. V. In Sec. VI, the inverse scattering

  13. Computer Model Inversion and Uncertainty Quantification in the Geosciences

    NASA Astrophysics Data System (ADS)

    White, Jeremy T.

    The subject of this dissertation is use of computer models as data analysis tools in several different geoscience settings, including integrated surface water/groundwater modeling, tephra fallout modeling, geophysical inversion, and hydrothermal groundwater modeling. The dissertation is organized into three chapters, which correspond to three individual publication manuscripts. In the first chapter, a linear framework is developed to identify and estimate the potential predictive consequences of using a simple computer model as a data analysis tool. The framework is applied to a complex integrated surface-water/groundwater numerical model with thousands of parameters. Several types of predictions are evaluated, including particle travel time and surface-water/groundwater exchange volume. The analysis suggests that model simplifications have the potential to corrupt many types of predictions. The implementation of the inversion, including how the objective function is formulated, what minimum of the objective function value is acceptable, and how expert knowledge is enforced on parameters, can greatly influence the manifestation of model simplification. Depending on the prediction, failure to specifically address each of these important issues during inversion is shown to degrade the reliability of some predictions. In some instances, inversion is shown to increase, rather than decrease, the uncertainty of a prediction, which defeats the purpose of using a model as a data analysis tool. In the second chapter, an efficient inversion and uncertainty quantification approach is applied to a computer model of volcanic tephra transport and deposition. The computer model simulates many physical processes related to tephra transport and fallout. The utility of the approach is demonstrated for two eruption events. 
    In both cases, the importance of uncertainty quantification is highlighted by exposing the variability in the conditioning provided by the observations used for inversion. The worth of different types of tephra data to reduce parameter uncertainty is evaluated, as is the importance of different observation error models. The analyses reveal the importance of using tephra granulometry data for inversion, which results in reduced uncertainty for most eruption parameters. In the third chapter, geophysical inversion is combined with hydrothermal modeling to evaluate the enthalpy of an undeveloped geothermal resource in a pull-apart basin located in southeastern Armenia. A high-dimensional gravity inversion is used to define the depth to the contact between the lower-density valley fill sediments and the higher-density surrounding host rock. The inverted basin depth distribution was used to define the hydrostratigraphy for the coupled groundwater-flow and heat-transport model that simulates the circulation of hydrothermal fluids in the system. Evaluation of several different geothermal system configurations indicates that the most likely system configuration is a low-enthalpy, liquid-dominated geothermal system.

  14. High-resolution surface wave tomography of the European crust and uppermost mantle from ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Lu, Yang; Stehly, Laurent; Paul, Anne; AlpArray Working Group

    2018-05-01

    Taking advantage of the large number of seismic stations installed in Europe, in particular in the greater Alpine region with the AlpArray experiment, we derive a new high-resolution 3-D shear-wave velocity model of the European crust and uppermost mantle from ambient noise tomography. The correlation of up to four years of continuous vertical-component seismic recordings from 1293 broadband stations (10° W-35° E, 30° N-75° N) provides Rayleigh wave group velocity dispersion data in the period band 5-150 s at more than 0.8 million virtual source-receiver pairs. Two-dimensional Rayleigh wave group velocity maps are estimated using adaptive parameterization to accommodate the strong heterogeneity of path coverage. A probabilistic 3-D shear-wave velocity model, including probability densities for the depth of layer boundaries and S-wave velocity values, is obtained by non-linear Bayesian inversion. A weighted average of the probabilistic model is then used as starting model for the linear inversion step, providing the final Vs model. The resulting S-wave velocity model and Moho depth are validated by comparison with previous geophysical studies. Although surface-wave tomography is weakly sensitive to layer boundaries, vertical cross-sections through our Vs model and the associated probability of presence of interfaces display striking similarities with reference controlled-source (CSS) and receiver-function sections across the Alpine belt. Our model even provides new structural information such as a ˜8 km Moho jump along the CSS ECORS-CROP profile that was not imaged by reflection data due to poor penetration across a heterogeneous upper crust. Our probabilistic and final shear wave velocity models have the potential to become new reference models of the European crust, both for crustal structure probing and geophysical studies including waveform modeling or full waveform inversion.

  15. Inverse Theory for Petroleum Reservoir Characterization and History Matching

    NASA Astrophysics Data System (ADS)

    Oliver, Dean S.; Reynolds, Albert C.; Liu, Ning

    This book is a guide to the use of inverse theory for estimation and conditional simulation of flow and transport parameters in porous media. It describes the theory and practice of estimating properties of underground petroleum reservoirs from measurements of flow in wells, and it explains how to characterize the uncertainty in such estimates. Early chapters present the reader with the necessary background in inverse theory, probability and spatial statistics. The book demonstrates how to calculate sensitivity coefficients and the linearized relationship between models and production data. It also shows how to develop iterative methods for generating estimates and conditional realizations. The text is written for researchers and graduates in petroleum engineering and groundwater hydrology and can be used as a textbook for advanced courses on inverse theory in petroleum engineering. It includes many worked examples to demonstrate the methodologies and a selection of exercises.

  16. The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method

    NASA Astrophysics Data System (ADS)

    Voronina, T. A.; Romanenko, A. A.

    2016-12-01

    Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. This methodology is based on the inversion of remote measurements of water-level data. The wave propagation is considered within the scope of linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least-squares inversion using the truncated Singular Value Decomposition method. As a result of the numerical process, an r-solution is obtained. The method proposed allows one to control the instability of the numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. Applying this methodology to the reconstruction of the initial waveform of the 2013 Solomon Islands tsunami confirms the conclusions drawn from synthetic data and a model tsunami source: the inversion result depends strongly on the noisiness of the data and on the azimuthal and temporal coverage of the recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of the available recording stations used in the inversion process.
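
    The truncated-SVD regularization behind the r-solution can be sketched generically. The operator below is a random matrix with an imposed rapidly decaying spectrum, an assumed stand-in for the shallow-water propagation matrix:

```python
import numpy as np

# Synthetic ill-posed linear problem G m = d with rapidly decaying
# singular values (illustrative; not a shallow-water operator).
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((60, 60)))
V, _ = np.linalg.qr(rng.standard_normal((30, 30)))
s = 10.0 ** -np.arange(30, dtype=float)          # singular spectrum 1, 0.1, ...
G = (U[:, :30] * s) @ V.T

m_true = V[:, 0] + 0.5 * V[:, 1]                 # source lives in leading modes
d = G @ m_true + 1e-8 * rng.standard_normal(60)  # noisy "water-level" data

def r_solution(G, d, r):
    """Least-squares solution restricted to the r leading singular vectors."""
    Ur, sr, Vtr = np.linalg.svd(G, full_matrices=False)
    return Vtr[:r].T @ ((Ur[:, :r].T @ d) / sr[:r])

m_r = r_solution(G, d, r=5)          # stable: noise divided by large s only
m_naive = np.linalg.pinv(G) @ d      # unstable: tiny s amplify the noise
```

    Discarding the small singular values is exactly what controls the instability mentioned in the abstract: the price is losing the model components that the data could not constrain anyway.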

  17. A non-linear induced polarization effect on transient electromagnetic soundings

    NASA Astrophysics Data System (ADS)

    Hallbauer-Zadorozhnaya, Valeriya Yu.; Santarato, Giovanni; Abu Zeid, Nasser; Bignardi, Samuel

    2016-10-01

    In a TEM survey conducted to characterize the subsurface for geothermal purposes, a strong induced polarization effect was recorded in all collected data. Surprisingly, anomalous decay curves were obtained at some of the sites, whose shape depended on the repetition frequency of the exciting square waveform, i.e. on the current pulse length. The Cole-Cole model, besides not being directly related to physical parameters of rocks, was found inappropriate for modelling the observed induced-polarization distortion, because it is linear, i.e. it cannot fit any dependence on the current pulse length. This phenomenon was investigated and explained as due to the presence of membrane polarization linked to the constrictivity of (fresh) water-saturated pores. An algorithm for mathematical modelling of TEM data was then developed to fit this behavior. The case history is then discussed: 1-D inversion accommodating the non-linear effects produced models that agree quite satisfactorily with the resistivity and chargeability models obtained from an electrical resistivity tomography carried out for comparison.

  18. Simulation of the modulation transfer function dependent on the partial Fourier fraction in dynamic contrast enhancement magnetic resonance imaging.

    PubMed

    Takatsu, Yasuo; Ueyama, Tsuyoshi; Miyati, Tosiaki; Yamamura, Kenichirou

    2016-12-01

    The image characteristics in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) depend on the partial Fourier fraction and the contrast medium concentration. These characteristics were assessed and the modulation transfer function (MTF) was calculated by computer simulation. A digital phantom was created from signal intensity data acquired at different contrast medium concentrations on a breast model. The frequency images [created by fast Fourier transform (FFT)] were divided into 512 parts and rearranged to form a new image. The inverse FFT of this image yielded the MTF. From the reference data, three linear models (low, medium, and high) and three exponential models (slow, medium, and rapid) of the signal intensity were created. Smaller partial Fourier fractions, and higher gradients in the linear models, corresponded to faster MTF decline. The MTF decreased more gradually in the exponential models than in the linear models. The MTF, which reflects the image characteristics in DCE-MRI, became more degraded as the partial Fourier fraction decreased.
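
    A 1-D toy of the k-space truncation idea shows reconstruction error growing as the kept fraction shrinks. Note the simplification: this is a symmetric low-pass truncation, not the asymmetric partial-Fourier sampling actually used in MRI, and the step signal is an invented test object:

```python
import numpy as np

def truncate_kspace(sig, fraction):
    """Keep only the |frequency| <= 0.5*fraction band of the FFT, then invert."""
    k = np.fft.fft(sig)
    f = np.fft.fftfreq(sig.size)
    return np.real(np.fft.ifft(k * (np.abs(f) <= 0.5 * fraction)))

edge = np.repeat([0.0, 1.0], 64)            # sharp edge, like a lesion border
errs = [np.linalg.norm(edge - truncate_kspace(edge, fr))
        for fr in (1.0, 0.5, 0.25)]         # error grows as fraction shrinks
```

    Because truncation is an orthogonal projection in k-space, the reconstruction error is exactly the energy of the discarded coefficients, so it increases monotonically as the fraction decreases, mirroring the MTF degradation reported above.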

  19. Solution of underdetermined systems of equations with gridded a priori constraints.

    PubMed

    Stiros, Stathis C; Saltogianni, Vasso

    2014-01-01

    The TOPINV, Topological Inversion algorithm (or TGS, Topological Grid Search), initially developed for the inversion of highly non-linear redundant systems of equations, can solve a wide range of underdetermined systems of non-linear equations. This approach generalizes a previous conclusion that the algorithm can be used for the solution of certain integer ambiguity problems in geodesy. The overall approach is based on additional (a priori) information for the unknown variables. In the past, such information was used either to linearize equations around approximate solutions, or to expand systems of observation equations solved on the basis of generalized inverses. In the proposed algorithm, the a priori additional information is used in a third way, as topological constraints on the n unknown variables, leading to a grid in R^n containing an approximation of the real solution. The TOPINV algorithm does not focus on point solutions, but exploits the structural and topological constraints in each system of underdetermined equations in order to identify an optimal closed region of R^n containing the real solution. The centre of gravity of the grid points defining this region corresponds to global, minimum-norm solutions. The rationale and validity of the overall approach are demonstrated through examples and case studies, including fault modelling, in comparison with SVD solutions and true (reference) values, in an accuracy-oriented approach.
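
    The grid-scan idea can be sketched on a 2-D toy problem. The two observation equations, grid bounds and tolerance below are invented for illustration; this is not the authors' implementation:

```python
import numpy as np

def topinv_sketch(equations, grids, tol):
    """Keep the grid points satisfying every |equation| < tol (the topological
    constraint set) and return their centre of gravity."""
    pts = np.stack(np.meshgrid(*grids, indexing="ij"), axis=-1)
    pts = pts.reshape(-1, len(grids))
    keep = np.all([np.abs(eq(pts)) < tol for eq in equations], axis=0)
    return pts[keep].mean(axis=0) if keep.any() else None

# Two observation equations with a single solution (1, 2) inside the grid:
eqs = [lambda p: p[:, 0] + 2.0 * p[:, 1] - 5.0,
       lambda p: p[:, 0] * p[:, 1] - 2.0]
grid = [np.linspace(0.0, 3.0, 301)] * 2       # a priori bounds on each unknown
sol = topinv_sketch(eqs, grid, tol=0.05)
```

    Rather than following a descent path that may diverge or stall, the scan characterizes the whole feasible region at once, which is why the approach tolerates underdetermined and highly non-linear systems.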

  20. High-resolution near-surface velocity model building using full-waveform inversion—a case study from southwest Sweden

    NASA Astrophysics Data System (ADS)

    Adamczyk, A.; Malinowski, M.; Malehmir, A.

    2014-06-01

    Full-waveform inversion (FWI) is an iterative optimization technique that provides high-resolution models of subsurface properties. Frequency-domain, acoustic FWI was applied to seismic data acquired over a known quick-clay landslide scar in southwest Sweden. We inverted data from three 2-D seismic profiles, 261-572 m long, two of them shot with small charges of dynamite and one with a sledgehammer. To the best of our knowledge, this is the first published application of FWI to sledgehammer data. Both sources provided data suitable for waveform inversion, with the sledgehammer data containing an even wider frequency spectrum. Inversion was performed for frequency groups between 27.5 and 43.1 Hz for the explosive data and 27.5-51.0 Hz for the sledgehammer data. The lowest inverted frequency was limited by the resonance frequency of the standard 28-Hz geophones used in the survey. High-velocity granitic bedrock in the area is undulating and very shallow (15-100 m below the surface), and exhibits a large P-wave velocity contrast with the overlying normally consolidated sediments. In order to mitigate the non-linearity of the inverse problem, we designed a multiscale layer-stripping inversion strategy. The obtained P-wave velocity models allowed us to delineate the top of the bedrock and revealed distinct layers within the overlying sediments of clays and coarse-grained materials. The models were verified through an extensive set of validation procedures and used for pre-stack depth migration, which confirmed their robustness.

  1. Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.

    2008-05-01

    Estimating the strength of an area pollution source is a relevant issue for the atmospheric environment, and it constitutes an inverse problem in atmospheric pollutant dispersion. In the inverse analysis, an area source domain is considered in which the strength of the area source term is assumed unknown. The inverse problem is solved using a supervised artificial neural network, a multi-layer perceptron, whose connection weights are computed by the delta-rule learning process. The neural-network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization problem whose objective function is given by the squared difference between the measured pollutant concentrations and the model predictions, combined with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse-problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
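
    As a minimal illustration of the neural-network branch of this record, the sketch below trains a single-layer network with a batch form of the delta (LMS) rule to invert a toy source-receptor model; the matrix sizes, the orthonormalized transition matrix and the learning rate are illustrative assumptions, not taken from the study.

```python
import numpy as np

# Toy sketch (assumed setup): a single-layer network learns to map receptor
# concentrations back to area-source strengths via delta-rule updates.
rng = np.random.default_rng(0)
n_src, n_rec = 4, 6
# Toy transition matrix; orthonormalized so the problem is well-conditioned
K = np.linalg.qr(rng.normal(size=(n_rec, n_src)))[0]
S = rng.uniform(0.0, 2.5, (200, n_src))    # training source fields
C = S @ K.T                                # modelled receptor concentrations
W = np.zeros((n_src, n_rec))               # network connection weights
eta = 0.1                                  # learning rate (assumed)
for _ in range(1000):                      # delta-rule (LMS) epochs, batch form
    err = S - C @ W.T                      # prediction error on training pairs
    W += eta * err.T @ C / len(S)
s_true = np.array([1.0, 0.5, 2.0, 1.5])    # "unknown" source strengths
s_est = W @ (K @ s_true)                   # invert noiseless synthetic data
```

    On this noiseless toy problem the trained network recovers the source strengths almost exactly; real applications would, as in the record, compare against a regularized inversion.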

  2. The novel high-performance 3-D MT inverse solver

    NASA Astrophysics Data System (ADS)

    Kruglyakov, Mikhail; Geraskin, Alexey; Kuvshinov, Alexey

    2016-04-01

    We present a novel, robust, scalable, and fast 3-D magnetotelluric (MT) inverse solver. The solver is written in a multi-language paradigm to make it as efficient, readable and maintainable as possible. The separation-of-concerns and single-responsibility principles run through the implementation of the solver. As a forward-modelling engine, the modern scalable solver extrEMe, based on a contracting integral equation approach, is used. An iterative gradient-type (quasi-Newton) optimization scheme is invoked to search for the (regularized) inverse problem solution, and an adjoint-source approach is used to calculate the gradient of the misfit efficiently. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT responses, and supports massive parallelization. Moreover, different parallelization strategies implemented in the code allow optimal usage of available computational resources for a given problem statement. To parameterize an inverse domain, the so-called mask parameterization is implemented, which means that one can merge any subset of forward-modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on platforms ranging from modern laptops to the HPC Piz Daint (the 6th-ranked supercomputer in the world), demonstrate practically linear scalability of the code up to thousands of nodes.
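
    The mask parameterization mentioned above can be sketched for a generic linear sensitivity matrix as follows; the cell-to-parameter mapping and the matrix sizes are invented for illustration and are not the solver's actual data structures.

```python
import numpy as np

# Illustrative sketch: merge subsets of forward-modelling cells into single
# inversion parameters via a projection matrix P, so m_fine = P @ m_coarse.
n_fine = 12
mask = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 3, 3, 3])  # cell -> parameter id
n_coarse = mask.max() + 1
P = np.zeros((n_fine, n_coarse))
P[np.arange(n_fine), mask] = 1.0            # each fine cell maps to one parameter
G = np.random.default_rng(1).normal(size=(8, n_fine))  # fine-grid sensitivities
G_coarse = G @ P                            # chain rule: merged sensitivities
m_coarse = np.array([1.0, 2.0, 0.5, 1.5])
d = G_coarse @ m_coarse                     # forward response via coarse model
```

    The inversion then updates only the merged parameters, while the forward engine still runs on the full fine grid.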

  3. Improved characterisation and modelling of measurement errors in electrical resistivity tomography (ERT) surveys

    NASA Astrophysics Data System (ADS)

    Tso, Chak-Hau Michael; Kuras, Oliver; Wilkinson, Paul B.; Uhlemann, Sebastian; Chambers, Jonathan E.; Meldrum, Philip I.; Graham, James; Sherlock, Emma F.; Binley, Andrew

    2017-11-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe or assume a statistical model of data errors before inversion. Wrongly prescribed errors can lead to over- or under-fitting of the data; however, the derivation of models of data errors is often neglected. With the heightened interest in uncertainty estimation within hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide improved image appraisal. Here we focus on the role of measurement errors in electrical resistivity tomography (ERT). We have analysed two time-lapse ERT datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe; the other is a two-year-long cross-borehole survey at a UK nuclear site with 246 sets of over 50,000 measurements. Our study includes the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and correlation coefficient analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used, i.e. errors may not be uncorrelated as is often assumed. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model explains the observed measurement errors better and yields superior inversion results and uncertainty estimates in synthetic examples. It is robust because it groups errors together based on the electrodes used to make the measurements. The new model can be readily applied to the diagonal data-weighting matrix widely used in common inversion methods, as well as to the data covariance matrix in a Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
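
    A minimal sketch of the kind of error model described here, assuming (for illustration only) that reciprocal-error magnitudes grow linearly with transfer resistance and fitting the linear model separately within hypothetical electrode-based groups:

```python
import numpy as np

# Assumed error-model form: error = a + b * R, fitted per electrode group.
rng = np.random.default_rng(2)
R = rng.uniform(1.0, 100.0, 500)          # transfer resistances (ohm)
groups = rng.integers(0, 4, 500)          # electrode-based grouping (assumed)
a_true, b_true = 0.02, 0.03
err = a_true + b_true * R + rng.normal(0, 0.05, 500)  # synthetic reciprocal errors
coef = {}
for g in range(4):
    sel = groups == g
    A = np.column_stack([np.ones(sel.sum()), R[sel]])  # design matrix [1, R]
    coef[g], *_ = np.linalg.lstsq(A, err[sel], rcond=None)
```

    The fitted per-group intercepts and slopes would then populate a diagonal data-weighting matrix, or a data covariance matrix in a Bayesian framework.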

  4. Linearly exact parallel closures for slab geometry

    NASA Astrophysics Data System (ADS)

    Ji, Jeong-Young; Held, Eric D.; Jhang, Hogun

    2013-08-01

    Parallel closures are obtained by solving a linearized kinetic equation with a model collision operator using the Fourier transform method. The closures expressed in wave number space are exact for time-dependent linear problems to within the limits of the model collision operator. In the adiabatic, collisionless limit, an inverse Fourier transform is performed to obtain integral (nonlocal) parallel closures in real space; parallel heat flow and viscosity closures for density, temperature, and flow velocity equations replace Braginskii's parallel closure relations, and parallel flow velocity and heat flow closures for density and temperature equations replace Spitzer's parallel transport relations. It is verified that the closures reproduce the exact linear response function of Hammett and Perkins [Phys. Rev. Lett. 64, 3019 (1990)] for Landau damping given a temperature gradient. In contrast to their approximate closures where the vanishing viscosity coefficient numerically gives an exact response, our closures relate the heat flow and nonvanishing viscosity to temperature and flow velocity (gradients).

  5. An ambiguity of information content and error in an ill-posed satellite inversion

    NASA Astrophysics Data System (ADS)

    Koner, Prabhat

    According to Rodgers (2000, stochastic approach), the averaging kernel (AK) is the representational matrix for understanding the information content in a stochastic inversion. In the deterministic approach, on the other hand, this is referred to as the model resolution matrix (MRM, Menke 1989). The analysis of the AK/MRM can only give some understanding of how much regularization is imposed on the inverse problem. The trace of the AK/MRM matrix gives the so-called degrees of freedom for signal (DFS; stochastic) or degrees of freedom in retrieval (DFR; deterministic). There is no physical/mathematical explanation in the literature of why the trace of the matrix is a valid way to calculate this quantity. We will present an ambiguity between information and error using a real-life problem of SST retrieval from GOES13. The stochastic information content calculation is based on a linear assumption. The validity of such mathematics in satellite inversion will be questioned because satellite retrieval involves nonlinear radiative transfer and ill-conditioned inverse problems. References: Menke, W., 1989: Geophysical data analysis: discrete inverse theory. San Diego: Academic Press. Rodgers, C.D., 2000: Inverse methods for atmospheric sounding: theory and practice. Singapore: World Scientific.
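
    For a concrete handle on the quantity under discussion, here is a minimal sketch of the averaging kernel / model resolution matrix and its trace for a generic Tikhonov-regularized linear retrieval; the Jacobian and the regularization strength are arbitrary illustrations, not the GOES13 problem.

```python
import numpy as np

# For x_hat = (K^T K + lam I)^-1 K^T y, the averaging kernel is
# A = (K^T K + lam I)^-1 K^T K and DFS = trace(A).
rng = np.random.default_rng(3)
K = rng.normal(size=(10, 5))              # Jacobian of the forward model (toy)
lam = 0.5                                 # regularization strength (assumed)
A = np.linalg.solve(K.T @ K + lam * np.eye(5), K.T @ K)  # averaging kernel
dfs = np.trace(A)                         # degrees of freedom for signal
```

    With lam = 0 the kernel becomes the identity (DFS equals the number of parameters); increasing lam pushes DFS toward zero, which is exactly the sense in which the trace measures imposed regularization.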

  6. Three-dimensional Gravity Inversion with a New Gradient Scheme on Unstructured Grids

    NASA Astrophysics Data System (ADS)

    Sun, S.; Yin, C.; Gao, X.; Liu, Y.; Zhang, B.

    2017-12-01

    Stabilized gradient-based methods have proved efficient for inverse problems. In these methods, driving the gradient toward zero effectively minimizes the objective function, so the gradient of the objective function determines the inversion results. By analyzing the cause of the poor depth resolution of gradient-based gravity inversion methods, we find that imposing a depth-weighting functional on the conventional gradient can improve the depth resolution to some extent. However, the improvement is affected by the regularization parameter, and the effect of the regularization term becomes smaller with increasing depth (shown in Figure 1 (a)). In this paper, we propose a new gradient scheme for gravity inversion by introducing a weighted model vector. The new gradient improves the depth resolution more efficiently; it is independent of the regularization parameter, and the effect of the regularization term is not weakened as depth increases. Besides, a fuzzy c-means clustering method and a smoothing operator are both used as regularization terms to yield an internally continuous inverse model with sharp boundaries (Sun and Li, 2015). We have tested our new gradient scheme with unstructured grids on synthetic data to illustrate the effectiveness of the algorithm. Gravity forward modeling with unstructured grids is based on the algorithm proposed by Okabe (1979). We use a linear conjugate gradient inversion scheme to solve the inversion problem. The numerical experiments show a great improvement in depth resolution compared with the regular gradient scheme, and the inverse model is compact at all depths (shown in Figure 1 (b)). Acknowledgements: This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900). References: Sun J, Li Y. 2015. Multidomain petrophysically constrained inversion and geology differentiation using guided fuzzy c-means clustering. Geophysics, 80(4): ID1-ID18. Okabe M. 1979. Analytical expressions for gravity anomalies due to homogeneous polyhedral bodies and translations into magnetic anomalies. Geophysics, 44(4), 730-741.
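
    The depth-weighting idea discussed in this record is commonly implemented as a power law of depth; the toy sketch below (the constants, the w(z) form and the toy gradient are assumptions, not the paper's scheme) shows how such a weighting boosts the relative contribution of deep cells to the model update.

```python
import numpy as np

# Assumed standard-practice weighting w(z) = (z + z0)^(-beta/2), which
# counteracts the rapid decay of gravity kernels with depth.
z = np.linspace(10.0, 1000.0, 50)      # cell depths in metres (illustrative)
z0, beta = 5.0, 2.0                    # tuning constants (assumed)
w = (z + z0) ** (-beta / 2.0)
grad_plain = np.exp(-z / 100.0)        # toy gradient decaying with depth
grad_weighted = grad_plain / w**2      # weighting boosts deep contributions
```

    The deep-to-shallow ratio of the weighted gradient is larger than that of the plain gradient, which is the mechanism by which depth weighting fights the tendency to concentrate all structure near the surface.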

  7. Feedback control laws for highly maneuverable aircraft

    NASA Technical Reports Server (NTRS)

    Garrard, William L.; Balas, Gary J.

    1994-01-01

    During the first half of the year, the investigators concentrated their efforts on completing the design of control laws for the longitudinal axis of the HARV. During the second half of the year, they concentrated on the synthesis of control laws for the lateral-directional axes. The longitudinal control law design efforts can be briefly summarized as follows. Longitudinal control laws were developed for the HARV using mu-synthesis design techniques coupled with dynamic inversion. An inner-loop dynamic inversion controller was used to simplify the system dynamics by eliminating the aerodynamic nonlinearities and inertial cross coupling. Models of the errors resulting from uncertainties in the principal longitudinal aerodynamic terms were developed and included in the model of the HARV with the inner-loop dynamic inversion controller. This resulted in an inner-loop transfer function model which was an integrator with the modeling errors characterized as uncertainties in gain and phase. Outer-loop controllers were then designed using mu synthesis to provide robustness to these modeling errors and give the desired response to pilot inputs. Both pitch-rate and angle-of-attack command following systems were designed. The following tasks have been accomplished for the lateral-directional controllers: inner- and outer-loop dynamic inversion controllers have been designed; an error model based on a linearized perturbation model of the inner-loop system was derived; controllers for the inner-loop system have been designed, using classical techniques, that control roll rate and Dutch roll response; the inner-loop dynamic inversion and classical controllers have been implemented in the six-degree-of-freedom simulation; and a lateral-directional control allocation scheme has been developed based on minimizing required control effort.
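
    The inner-loop dynamic inversion idea can be sketched on a scalar toy system (the nonlinearities below are invented, not the HARV aerodynamics): the control input cancels the known nonlinear terms, so the outer loop sees simple integrator dynamics it can shape at will.

```python
import numpy as np

# Toy plant: xdot = f(x) + g(x)*u, with assumed nonlinearities.
def f(x): return -0.5 * x**3           # assumed aerodynamic-style nonlinearity
def g(x): return 1.0 + 0.1 * x**2      # assumed control effectiveness

def inverse_control(x, v):
    # Choose u so that f(x) + g(x)*u = v, i.e. the inner loop becomes xdot = v.
    return (v - f(x)) / g(x)

# Outer loop commands the integrator with a simple proportional law v = -(x - x_ref).
x, x_ref, dt = 2.0, 1.0, 0.01
for _ in range(2000):
    v = -(x - x_ref)
    u = inverse_control(x, v)
    x += dt * (f(x) + g(x) * u)        # true dynamics under inversion control
```

    On the real aircraft the cancellation is imperfect, which is exactly why the record wraps the inversion with mu-synthesis outer loops robust to gain and phase errors.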

  8. Identifying Flow Networks in a Karstified Aquifer by Application of the Cellular Automata-Based Deterministic Inversion Method (Lez Aquifer, France)

    NASA Astrophysics Data System (ADS)

    Fischer, P.; Jardani, A.; Wang, X.; Jourde, H.; Lecoq, N.

    2017-12-01

    The distributed modeling of flow paths within karstic and fractured fields remains a complex task because of the high dependence of the hydraulic responses on the relative locations between observational boreholes and the interconnected fractures and karstic conduits that control the main flow of the hydrosystem. The inverse problem in a distributed model is one alternative approach to interpret hydraulic test data by mapping the karstic networks and fractured areas. In this work, we developed a Bayesian inversion approach, the Cellular Automata-based Deterministic Inversion (CADI) algorithm, to infer the spatial distribution of hydraulic properties in a structurally constrained model. This method distributes hydraulic properties along linear structures (i.e., flow conduits) and iteratively modifies the structural geometry of this conduit network to progressively match the observed hydraulic data to the modeled ones. As a result, this method produces a conductivity model that is composed of a discrete conduit network embedded in the background matrix, capable of producing the same flow behavior as the investigated hydrologic system. The method is applied to invert a set of multiborehole hydraulic tests collected from a hydraulic tomography experiment conducted at the Terrieu field site in the Lez aquifer, Southern France. The emergent model shows high consistency with field observations of hydraulic connections between boreholes. Furthermore, it provides a geologically realistic pattern of flow conduits. This method is therefore of considerable value toward enhanced distributed modeling of fractured and karstified aquifers.

  9. The Modularized Software Package ASKI - Full Waveform Inversion Based on Waveform Sensitivity Kernels Utilizing External Seismic Wave Propagation Codes

    NASA Astrophysics Data System (ADS)

    Schumacher, F.; Friederich, W.

    2015-12-01

    We present the modularized software package ASKI, a flexible and extendable toolbox for seismic full waveform inversion (FWI) as well as for sensitivity or resolution analysis operating on the sensitivity matrix. It utilizes established wave propagation codes for solving the forward problem and offers an alternative to the monolithic, inflexible and hard-to-modify codes that have typically been written for solving inverse problems. It is available under the GPL at www.rub.de/aski. The Gauss-Newton FWI method for 3D-heterogeneous elastic earth models is based on waveform sensitivity kernels and can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. The kernels are derived in the frequency domain from Born scattering theory as the Fréchet derivatives of linearized full waveform data functionals, quantifying the influence of elastic earth model parameters on particular waveform data values. As an important innovation, we keep two independent spatial descriptions of the earth model - one for solving the forward problem and one representing the inverted model updates. Thereby we account for the independent spatial-resolution needs of the forward and inverse problems, respectively. Due to pre-integration of the kernels over the (in general much coarser) inversion grid, storage requirements for the sensitivity kernels are dramatically reduced. ASKI can be flexibly extended to other forward codes by providing it with specific interface routines that contain knowledge about forward-code-specific file formats and auxiliary information provided by the new forward code. In order to sustain flexibility, the ASKI tools communicate via file output/input, thus large storage capacities need to be accessible in a convenient way. 
Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion.

  10. Collective Socialization and Child Conduct Problems: A Multilevel Analysis with an African American Sample

    ERIC Educational Resources Information Center

    Simons, Leslie Gordon; Simons, Ronald L.; Conger, Rand D.; Brody, Gene H.

    2004-01-01

    This article uses hierarchical linear modeling with a sample of African American children and their primary caregivers to examine the association between various community factors and child conduct problems. The analysis revealed a rather strong inverse association between level of collective socialization and conduct problems. This relationship…

  11. Non-linear feedback control of the p53 protein-mdm2 inhibitor system using the derivative-free non-linear Kalman filter.

    PubMed

    Rigatos, Gerasimos G

    2016-06-01

    It is proven that the model of the p53-mdm2 protein synthesis loop is differentially flat, and using a diffeomorphism (change of state variables) proposed by differential flatness theory it is shown that the protein synthesis model can be transformed into the canonical (Brunovsky) form. This enables the design of a feedback control law that maintains the concentration of the p53 protein at the desired levels. To estimate the non-measurable elements of the state vector describing the p53-mdm2 system dynamics, the derivative-free non-linear Kalman filter is used. Moreover, to compensate for modelling uncertainties and external disturbances that affect the p53-mdm2 system, the derivative-free non-linear Kalman filter is redesigned as a disturbance observer. The derivative-free non-linear Kalman filter consists of the Kalman filter recursion applied to the linearised equivalent of the protein synthesis model, together with an inverse transformation based on differential flatness theory that enables retrieval of estimates of the state variables of the initial non-linear model. The proposed non-linear feedback control and perturbation compensation method for the p53-mdm2 system can result in more efficient chemotherapy schemes in which the infusion of medication is better administered.

  12. Anomalous Nernst and thermal Hall effects in tilted Weyl semimetals

    NASA Astrophysics Data System (ADS)

    Ferreiros, Yago; Zyuzin, A. A.; Bardarson, Jens H.

    2017-09-01

    We study the anomalous Nernst and thermal Hall effects in a linearized low-energy model of a tilted Weyl semimetal, with two Weyl nodes separated in momentum space. For inversion-symmetric tilt, we give analytic expressions in two opposite limits: for a small tilt, corresponding to a type-I Weyl semimetal, the Nernst conductivity is finite and independent of the Fermi level; for a large tilt, corresponding to a type-II Weyl semimetal, it acquires a contribution depending logarithmically on the Fermi energy. This result is in sharp contrast to the non-tilted case, where the Nernst response is known to be zero in the linear model. The thermal Hall conductivity similarly acquires Fermi-surface contributions, which add to the Fermi-level-independent, zero-tilt result, and is suppressed as one over the tilt parameter at half filling in the type-II phase. In the case of inversion-breaking tilt, with the tilting vector of equal modulus in the two Weyl cones, all Fermi-surface contributions to both anomalous responses cancel out, resulting in zero Nernst conductivity. We discuss two possible experimental setups, representing open and closed thermoelectric circuits.

  13. Linear regression analysis of survival data with missing censoring indicators.

    PubMed

    Wang, Qihua; Dinse, Gregg E

    2011-04-01

    Linear regression analysis has been studied extensively in a random censorship setting, but typically all of the censoring indicators are assumed to be observed. In this paper, we develop synthetic data methods for estimating regression parameters in a linear model when some censoring indicators are missing. We define estimators based on regression calibration, imputation, and inverse probability weighting techniques, and we prove all three estimators are asymptotically normal. The finite-sample performance of each estimator is evaluated via simulation. We illustrate our methods by assessing the effects of sex and age on the time to non-ambulatory progression for patients in a brain cancer clinical trial.
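
    As a generic illustration of the inverse-probability-weighting ingredient (not the paper's synthetic-data estimator), the sketch below reweights complete cases by an assumed-known observation probability to correct a selection effect that depends on the response:

```python
import numpy as np

# Assumed setup: y follows a linear model, but y is only observed with a
# probability that depends on y itself; IPW restores consistency.
rng = np.random.default_rng(4)
n = 20000
x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.2, n)      # true model: beta = (1, 2)
p_obs = np.clip(0.2 + 0.25 * y, 0.05, 0.95)    # observation prob. (assumed known)
obs = rng.uniform(size=n) < p_obs
w = 1.0 / p_obs[obs]                           # IPW weights for complete cases
A = np.column_stack([np.ones(obs.sum()), x[obs]])
# Weighted least squares on the observed subset
beta = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y[obs]))
```

    In the censoring context of the record, the analogous weights come from a model of the probability that a censoring indicator is observed.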

  14. Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada

    NASA Astrophysics Data System (ADS)

    Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.

    2015-08-01

    Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study evaluates the errors in a regional flux inversion model for different provinces of Canada: Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiments examined the impacts of the errors from the Bayesian optimisation method, the prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4, respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %), respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that estimation errors are dominated by the transport model error, can partially cancel each other, and propagate to the flux estimates non-linearly. In addition, it is possible for the posterior flux estimates to differ more from the target fluxes than the prior estimates do, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target. The systematic evaluation of the different components of the inversion model can help in understanding the posterior estimates and percentage errors. Stable and realistic sub-regional and monthly flux estimates can be obtained for the western region of AB/SK, but not for the eastern region of ON. This indicates that a real observation-based inversion for the annual provincial emissions is likely to work for the western region, whereas improvements to the current inversion setup are needed before a real inversion is performed for the eastern region.
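
    A minimal Metropolis-Hastings sketch of the scaling-factor estimation idea follows; the toy transport operator, the Gaussian prior, the noise level and the proposal step size are assumptions, not the study's configuration.

```python
import numpy as np

# Toy setup: observed concentrations c = H @ (s * f_prior) + noise;
# sample the posterior of the regional scaling factors s by random-walk MH.
rng = np.random.default_rng(5)
n_obs, n_reg = 30, 3
H = rng.uniform(0.5, 1.5, (n_obs, n_reg))     # toy transport operator
f_prior = np.ones(n_reg)                      # prior fluxes
s_true = np.array([1.2, 0.8, 1.5])
c_obs = H @ (s_true * f_prior) + rng.normal(0, 0.1, n_obs)

def log_post(s):
    r = c_obs - H @ (s * f_prior)
    # Gaussian likelihood (sigma = 0.1) plus N(1, 1) prior on each factor
    return -0.5 * np.sum(r**2) / 0.1**2 - 0.5 * np.sum((s - 1.0)**2)

s = np.ones(n_reg)
lp = log_post(s)
chain = []
for _ in range(20000):
    prop = s + rng.normal(0, 0.05, n_reg)     # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        s, lp = prop, lp_prop
    chain.append(s.copy())
s_hat = np.mean(chain[5000:], axis=0)         # posterior mean after burn-in
```

    A CFM-style estimate would instead minimize -log_post directly; as the record notes, the two approaches weight the mismatch and prior terms differently in practice.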

  15. Bessel smoothing filter for spectral-element mesh

    NASA Astrophysics Data System (ADS)

    Trinh, P. T.; Brossier, R.; Métivier, L.; Virieux, J.; Wellington, P.

    2017-06-01

    Smoothing filters are extremely important tools in seismic imaging and inversion, such as for traveltime tomography, migration and waveform inversion. For efficiency, and as they can be used a number of times during inversion, it is important that these filters can easily incorporate prior information on the geological structure of the investigated medium, through variable coherent lengths and orientation. In this study, we promote the use of the Bessel filter to achieve these purposes. Instead of considering the direct application of the filter, we demonstrate that we can rely on the equation associated with its inverse filter, which amounts to the solution of an elliptic partial differential equation. This enhances the efficiency of the filter application, and also its flexibility. We apply this strategy within a spectral-element-based elastic full waveform inversion framework. Taking advantage of this formulation, we apply the Bessel filter by solving the associated partial differential equation directly on the spectral-element mesh through the standard weak formulation. This avoids cumbersome projection operators between the spectral-element mesh and a regular Cartesian grid, or expensive explicit windowed convolution on the finite-element mesh, which is often used for applying smoothing operators. The associated linear system is solved efficiently through a parallel conjugate gradient algorithm, in which the matrix vector product is factorized and highly optimized with vectorized computation. Significant scaling behaviour is obtained when comparing this strategy with the explicit convolution method. The theoretical numerical complexity of this approach increases linearly with the coherent length, whereas a sublinear relationship is observed practically. Numerical illustrations are provided here for schematic examples, and for a more realistic elastic full waveform inversion gradient smoothing on the SEAM II benchmark model. 
These examples illustrate well the efficiency and flexibility of the approach proposed.
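
    The inverse-filter strategy can be illustrated in 1-D: rather than convolving with the Bessel kernel, one solves the elliptic equation (I - L^2 d^2/dx^2) s = f with a matrix-free conjugate gradient. The grid size, coherent length and zero-boundary treatment below are illustrative assumptions, not the spectral-element implementation.

```python
import numpy as np

n, L, h = 200, 5.0, 1.0                  # grid points, coherent length, spacing
f = np.zeros(n); f[100] = 1.0            # spike to be smoothed

def apply_A(u):
    # Matrix-free A = I - L^2 * Laplacian, zero (Dirichlet) ghost cells
    up = np.pad(u, 1)
    lap = (up[2:] - 2 * u + up[:-2]) / h**2
    return u - L**2 * lap

# Conjugate gradient: A is symmetric positive definite
s = np.zeros(n)
r = f - apply_A(s)
p = r.copy()
rs = r @ r
for _ in range(500):
    Ap = apply_A(p)
    alpha = rs / (p @ Ap)
    s += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    if rs_new < 1e-16:
        break
    p = r + (rs_new / rs) * p
    rs = rs_new
```

    The solution s is the spike smoothed over a length scale L; on a spectral-element mesh the same elliptic solve is assembled through the standard weak formulation, avoiding any projection to a Cartesian grid.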

  16. Linear bubble plume model for hypolimnetic oxygenation: Full-scale validation and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Singleton, V. L.; Gantzer, P.; Little, J. C.

    2007-02-01

    An existing linear bubble plume model was improved, and data collected from a full-scale diffuser installed in Spring Hollow Reservoir, Virginia, were used to validate the model. The depth of maximum plume rise was simulated well for two of the three diffuser tests. Temperature predictions deviated from measured profiles near the maximum plume rise height, but predicted dissolved oxygen profiles compared very well with observations. A sensitivity analysis was performed. The gas flow rate had the greatest effect on predicted plume rise height and induced water flow rate, both of which were directly proportional to gas flow rate. Oxygen transfer within the hypolimnion was independent of all parameters except initial bubble radius and was inversely proportional for radii greater than approximately 1 mm. The results of this work suggest that plume dynamics and oxygen transfer can successfully be predicted for linear bubble plumes using the discrete-bubble approach.

  17. Classifying the Sizes of Explosive Eruptions using Tephra Deposits: The Advantages of a Numerical Inversion Approach

    NASA Astrophysics Data System (ADS)

    Connor, C.; Connor, L.; White, J.

    2015-12-01

    Explosive volcanic eruptions are often classified by deposit mass and eruption column height. How well are these eruption parameters determined in older deposits, and how well can we reduce uncertainty using robust numerical and statistical methods? We describe an efficient and effective inversion and uncertainty quantification approach for estimating eruption parameters given a dataset of tephra deposit thickness and granulometry. The inversion and uncertainty quantification are implemented using the open-source PEST++ code. Inversion with PEST++ can be used with a variety of forward models and is applied here using Tephra2, a code that simulates advective and dispersive tephra transport and deposition. The Levenberg-Marquardt algorithm is combined with formal Tikhonov and subspace regularization to invert eruption parameters; a linear equation for conditional uncertainty propagation is used to estimate posterior parameter uncertainty. Both the inversion and the uncertainty analysis support simultaneous analysis of the full eruption and wind-field parameterization. The combined inversion/uncertainty-quantification approach is applied to the 1992 Cerro Negro (Nicaragua), 2011 Kirishima-Shinmoedake (Japan), and 1913 Colima (Mexico) eruptions. These examples show that although eruption mass uncertainty is reduced by inversion against tephra isomass data, considerable uncertainty remains for many eruption and wind-field parameters, such as eruption column height. Supplementing the inversion dataset with tephra granulometry data is shown to further reduce the uncertainty of most eruption and wind-field parameters. We think the use of such robust models provides a better understanding of uncertainty in eruption parameters, and hence eruption classification, than is possible with more qualitative methods that are widely used.

  18. Source encoding in multi-parameter full waveform inversion

    NASA Astrophysics Data System (ADS)

    Matharu, Gian; Sacchi, Mauricio D.

    2018-04-01

    Source encoding techniques alleviate the computational burden of sequential-source full waveform inversion (FWI) by considering multiple sources simultaneously rather than independently. The reduced data volume requires fewer forward/adjoint simulations per non-linear iteration. Applications of source-encoded full waveform inversion (SEFWI) have thus far focused on monoparameter acoustic inversion. We extend SEFWI to the multi-parameter case with applications presented for elastic isotropic inversion. Estimating multiple parameters can be challenging as perturbations in different parameters can prompt similar responses in the data. We investigate the relationship between source encoding and parameter trade-off by examining the multi-parameter source-encoded Hessian. Probing of the Hessian demonstrates the convergence of the expected source-encoded Hessian to that of conventional FWI. The convergence implies that the parameter trade-off in SEFWI is comparable to that observed in FWI. A series of synthetic inversions are conducted to establish the feasibility of source-encoded multi-parameter FWI. We demonstrate that SEFWI requires fewer overall simulations than FWI to achieve a target model error for a range of first-order optimization methods. An inversion for spatially inconsistent P-wave (α) and S-wave (β) velocity models corroborates the expectation of comparable parameter trade-off in SEFWI and FWI. The final example demonstrates a shortcoming of SEFWI when confronted with time-windowing in data-driven inversion schemes. The limitation is a consequence of the implicit fixed-spread acquisition assumption in SEFWI. Alternative objective functions, namely the normalized cross-correlation and L1 waveform misfit, do not enable SEFWI to overcome this limitation.
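
    The core property exploited by source encoding can be sketched for a toy linear forward problem: with random ±1 encodings, the expected gradient of the encoded "supershot" misfit equals the sum of the sequential-source gradients. The sizes and the ±1 encoding below are assumptions for illustration.

```python
import numpy as np

# Toy per-source linear forward operators G_s and a misfit at m0 against
# "observed" data generated by model m.
rng = np.random.default_rng(6)
n_src, n_rec, n_par = 8, 10, 5
G = [rng.normal(size=(n_rec, n_par)) for _ in range(n_src)]
m = rng.normal(size=n_par)
m0 = np.zeros(n_par)
# Sequential gradient of 0.5 * sum_s ||G_s m0 - G_s m||^2 at m0
grad_seq = sum(Gs.T @ (Gs @ m0 - Gs @ m) for Gs in G)
# Encoded gradient, averaged over many random +/-1 encodings
acc = np.zeros(n_par)
n_real = 2000
for _ in range(n_real):
    e = rng.choice([-1.0, 1.0], n_src)            # random encoding
    Gsum = sum(ei * Gs for ei, Gs in zip(e, G))   # encoded supershot operator
    acc += Gsum.T @ (Gsum @ m0 - Gsum @ m)        # one supershot gradient
grad_enc = acc / n_real
```

    The cross terms between different sources average to zero because E[e_i e_j] = 0 for i != j; the same argument applied at second order is why the expected source-encoded Hessian converges to the conventional one.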

  19. Disorder-dominated linear magnetoresistance in topological insulator Bi2Se3 thin films

    NASA Astrophysics Data System (ADS)

    Wang, Wen Jie; Gao, Kuang Hong; Li, Qiu Lin; Li, Zhi-Qing

    2017-12-01

    The linear magnetoresistance (MR) effect is an interesting topic due to its potential applications. In topological insulator Bi2Se3, this effect has been reported to be dominated by the carrier mobility (μ) and hence has a classical origin. Here, we study the magnetotransport properties of Bi2Se3 thin films and observe the linear MR effect, which cannot be attributed to the quantum model. Unexpectedly, the linear MR does not show the linear dependence on μ, in conflict with the reported results. However, we find that the observed linear MR is dominated by the inverse disorder parameter 1 /kFl , where kF and l are the Fermi wave vector and the mean free path, respectively. This suggests that its origin is also classical and that no μ-dominated linear MR effect is observed which may be due to the very small μ values in our samples.

  20. Ground-Based Microwave Radiometric Remote Sensing of the Tropical Atmosphere

    NASA Astrophysics Data System (ADS)

    Han, Yong

    A partially developed 9-channel ground-based microwave radiometer for the Department of Meteorology at Penn State was completed and tested. Complementary units were added, corrections to both hardware and software were made, and the system software was corrected and upgraded. Measurements from this radiometer were used to infer tropospheric temperature, water vapor and cloud liquid water. The weighting functions at each of the 9 channels were calculated and analyzed to estimate the sensitivities of the brightness temperatures to the desired atmospheric variables. The mathematical inversion problem, in a linear form, was viewed in terms of the theory of linear algebra, and several methods for solving the inversion problem were reviewed. Radiometric observations were conducted during the 1990 Tropical Cyclone Motion Experiment. The radiometer was installed on the island of Saipan in a tropical region. During this experiment, the radiometer was calibrated by using tipping-curve and radiosonde data as well as measurements of the radiation from a blackbody absorber. A linear statistical method was first applied for the data inversion. The inversion coefficients in the equation were obtained using a large number of radiosonde profiles from Guam and a radiative transfer model. Retrievals were compared with those from local (Saipan) radiosonde measurements. Water vapor profiles, integrated water vapor, and integrated liquid water were retrieved successfully. For temperature profile retrievals, however, it was shown that the radiometric measurements with experimental noise added no more profile information to the inversion than was available from a climatological mean. Although successful retrievals of the geopotential heights were made, it was shown that they were determined mainly by the surface pressure measurements. The reasons why the radiometer did not contribute to the retrievals of temperature profiles and geopotential heights were discussed. 
A method was developed to derive the integrated water vapor and liquid water from combined radiometer and ceilometer measurements. Under certain assumptions, the cloud absorption coefficients and mean radiating temperature, used in the physical or statistical inversion equation, were determined from the measurements. It was shown that a significant improvement in radiometric measurements of the integrated liquid water can be gained with this method.
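As a rough illustration of the linear statistical retrieval described above, the sketch below derives regression coefficients from a synthetic training ensemble and applies them to independent profiles. All matrices, sizes and noise levels are hypothetical stand-ins, not the radiometer's actual weighting functions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: profiles of an atmospheric variable x with smooth
# vertical correlation, observed through a linear forward model A (the
# weighting functions) at 9 channels with radiometric noise.
nx, nch, noise = 20, 9, 0.3
K = np.exp(-np.abs(np.subtract.outer(np.arange(nx), np.arange(nx))) / 5.0)
L = np.linalg.cholesky(K)

def draw(n):
    # Sample n profiles from the synthetic "climatology" (zero mean).
    return rng.normal(size=(n, nx)) @ L.T

A = rng.normal(size=(nch, nx))        # stand-in weighting-function matrix

x_train = draw(500)
y_train = x_train @ A.T + noise * rng.normal(size=(500, nch))

# Linear statistical inversion: regression coefficients from the training
# ensemble, C = Cov(x, y) Cov(y, y)^{-1} (minimum-variance linear estimator).
C = (x_train.T @ y_train) @ np.linalg.inv(y_train.T @ y_train)

# Apply the coefficients to independent noisy observations.
x_test = draw(200)
y_test = x_test @ A.T + noise * rng.normal(size=(200, nch))
x_ret = y_test @ C.T

mse_ret = np.mean((x_ret - x_test) ** 2)
mse_clim = np.mean(x_test ** 2)       # error of using the climatological mean
```

With 9 channels and 20 levels the problem is underdetermined, so the regression recovers only the profile components the channels are sensitive to, echoing the finding above that for some variables the measurements add little beyond climatology.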

  1. A three-step maximum a posteriori probability method for InSAR data inversion of coseismic rupture with application to the 14 April 2010 Mw 6.9 Yushu, China, earthquake

    NASA Astrophysics Data System (ADS)

    Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei

    2013-08-01

    We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the posterior probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second step approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, taking all parameters obtained in the first step as the initial solution. Slip artifacts are then eliminated from the slip models in the third step using the same procedure as the second step, with the fault geometry parameters held fixed. We first design a fault model with a 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China, earthquake. Our preferred slip model is composed of three segments, with most of the slip occurring within 15 km depth; the maximum slip reaches 1.38 m at the surface. The seismic moment released is estimated to be 2.32e+19 Nm, consistent with the seismic estimate of 2.50e+19 Nm.
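The flavor of the first (mode-search) step can be sketched with a plain, non-adaptive simulated annealing search; the two-parameter "posterior" below is a hypothetical stand-in, not the authors' rupture model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy posterior over two model parameters: Gaussian with
# correlated components, whose mode sits at `true`.
true = np.array([2.0, -1.0])
Cinv = np.linalg.inv(np.array([[1.0, 0.6], [0.6, 1.0]]))

def neg_log_post(m):
    r = m - true
    return 0.5 * r @ Cinv @ r

# Simulated annealing: random-walk proposals accepted by the Metropolis
# criterion while the temperature is lowered geometrically, so the search
# explores widely at first and settles onto the mode (the MAP) at the end.
m = np.zeros(2)
e = neg_log_post(m)
T, cooling = 1.0, 0.999
for _ in range(8000):
    cand = m + 0.5 * np.sqrt(T) * rng.normal(size=2)
    ec = neg_log_post(cand)
    if ec < e or rng.random() < np.exp((e - ec) / T):
        m, e = cand, ec
    T *= cooling
```

Adaptive simulated annealing differs in how it tunes the temperature and proposal scales per parameter; the acceptance rule and cooling idea are the same.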

  2. Microwave tomography for GPR data processing in archaeology and cultural heritages diagnostics

    NASA Astrophysics Data System (ADS)

    Soldovieri, F.

    2009-04-01

    Ground Penetrating Radar (GPR) is one of the most practical and accessible instruments for detecting buried remains and for the diagnostics of archaeological structures, with the aim of detecting hidden features (defects, voids, construction typology, etc.). The GPR technique allows measurements over large areas to be performed very quickly thanks to portable instrumentation. Despite the widespread use of GPR as a data acquisition system, many difficulties arise in processing GPR data so as to obtain images that are reliable and easily interpretable by end-users. This difficulty is exacerbated when no a priori information is available, as for example in the case of historical heritage structures for which knowledge of the construction methods and materials may be missing entirely. A possible answer to these difficulties resides in the development and exploitation of microwave tomography algorithms [1, 2], based on more refined electromagnetic scattering models than those usually adopted in the classical radar approach. By exploiting the microwave tomographic approach, it is possible to obtain accurate and reliable "images" of the investigated structure in order to detect, localize and possibly determine the extent and geometrical features of the embedded objects. In this framework, the adoption of simplified models of the electromagnetic scattering is very convenient for practical and theoretical reasons. First, linear inversion algorithms are numerically efficient, allowing domains that are large in terms of the probing wavelength to be investigated in quasi-real time, even in the 3D case, by adopting schemes based on the combination of 2D reconstructions [3]. In addition, the solution approaches are very robust against uncertainties in the parameters of the measurement configuration and of the investigated scenario. 
From a theoretical point of view, linear models offer further advantages: the absence of false solutions (an issue that arises in non-linear inverse problems); the availability of well-known regularization tools for achieving a stable solution of the problem; and the possibility of analyzing the reconstruction performance of the algorithm once the measurement configuration and the properties of the host medium are known. Here, we present the main features and reconstruction results of a linear inversion algorithm based on the Born approximation in realistic applications in archaeology and cultural heritage diagnostics. The Born model is useful when penetrable objects are under investigation. As is well known, the Born approximation is used to solve the forward problem, that is, the determination of the scattered field from a known object under the hypothesis of a weak scatterer, i.e. an object whose dielectric permittivity differs only slightly from that of the host medium and whose extent is small in terms of the probing wavelength. For the inverse scattering problem, by contrast, these hypotheses can be relaxed at the cost of renouncing a "quantitative reconstruction" of the object. In fact, as already shown by results in realistic conditions [4, 5], the adoption of a Born-model inversion scheme allows the detection, localization and geometrical characterization of objects even when they are not weak scatterers.
[1] R. Persico, R. Bernini, F. Soldovieri, "The role of the measurement configuration in inverse scattering from buried objects under the Born approximation", IEEE Trans. Antennas and Propagation, vol. 53, no. 6, pp. 1875-1887, June 2005.
[2] F. Soldovieri, J. Hugenschmidt, R. Persico, G. Leone, "A linear inverse scattering algorithm for realistic GPR applications", Near Surface Geophysics, vol. 5, no. 1, pp. 29-42, February 2007.
[3] R. Solimene, F. Soldovieri, G. Prisco, R. Pierri, "Three-Dimensional Microwave Tomography by a 2-D Slice-Based Reconstruction Algorithm", IEEE Geoscience and Remote Sensing Letters, vol. 4, no. 4, pp. 556-560, Oct. 2007.
[4] L. Orlando, F. Soldovieri, "Two different approaches for georadar data processing: a case study in archaeological prospecting", Journal of Applied Geophysics, vol. 64, pp. 1-13, March 2008.
[5] F. Soldovieri, M. Bavusi, L. Crocco, S. Piscitelli, A. Giocoli, F. Vallianatos, S. Pantellis, A. Sarris, "A comparison between two GPR data processing techniques for fracture detection and characterization", Proc. of 70th EAGE Conference & Exhibition, Rome, Italy, 9-12 June 2008.
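A minimal illustration of such a linearized (Born-type) inversion is the 1-D toy problem below, which regularizes an ill-conditioned linear system with a truncated SVD, one of the standard regularization tools mentioned above. The kernel, contrast profile and truncation threshold are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linearized scattering problem: a contrast x on a 1-D grid observed
# through a smoothing, ill-conditioned linear kernel G, with additive noise.
nx, ny = 60, 40
xs = np.linspace(0, 1, nx)
ys = np.linspace(0, 1, ny)
G = np.exp(-((ys[:, None] - xs[None, :]) ** 2) / (2 * 0.05 ** 2))

x_true = np.zeros(nx)
x_true[20:30] = 1.0                      # a compact "buried object"
y = G @ x_true + 0.01 * rng.normal(size=ny)

# Truncated-SVD regularized inversion: keep only singular values above a
# threshold tied to the noise level, discarding the unstable components.
U, s, Vt = np.linalg.svd(G, full_matrices=False)
k = int(np.sum(s > 1e-2 * s[0]))         # truncation level (assumed)
x_est = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])
```

As the abstract notes, such a scheme cannot give a fully quantitative contrast for strong scatterers, but it detects and localizes the object reliably.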

  3. Imaging the complex geometry of a magma reservoir using FEM-based linear inverse modeling of InSAR data: application to Rabaul Caldera, Papua New Guinea

    NASA Astrophysics Data System (ADS)

    Ronchin, Erika; Masterlark, Timothy; Dawson, John; Saunders, Steve; Martì Molist, Joan

    2017-06-01

    We test an innovative inversion scheme using Green's functions from an array of pressure sources embedded in finite-element method (FEM) models to image, without assuming an a-priori geometry, the composite and complex shape of a volcano deformation source. We invert interferometric synthetic aperture radar (InSAR) data to estimate the pressurization and shape of the magma reservoir of Rabaul caldera, Papua New Guinea. The results image the extended shallow magmatic system responsible for a broad and long-term subsidence of the caldera between 2007 February and 2010 December. Elastic FEM solutions are integrated into the regularized linear inversion of InSAR data of volcano surface displacements in order to obtain a 3-D image of the source of deformation. The Green's function matrix is constructed from a library of forward line-of-sight displacement solutions for a grid of cubic elementary deformation sources. Each source is sequentially generated by removing the corresponding cubic elements from a common meshed domain and simulating the injection of a fluid mass flux into the cavity, which results in a pressurization and volumetric change of the fluid-filled cavity. The use of a single mesh for the generation of all FEM models avoids the computationally expensive process of non-linear inversion and remeshing a variable geometry domain. Without assuming an a-priori source geometry other than the configuration of the 3-D grid that generates the library of Green's functions, the geodetic data dictate the geometry of the magma reservoir as a 3-D distribution of pressure (or flux of magma) within the source array. The inversion of InSAR data of Rabaul caldera shows a distribution of interconnected sources forming an amorphous, shallow magmatic system elongated under two opposite sides of the caldera. 
The marginal areas at the sides of the imaged magmatic system are the possible feeding reservoirs of the ongoing Tavurvur volcano eruption of andesitic products on the east side and of the past Vulcan volcano eruptions of more evolved materials on the west side. The interconnection and spatial distributions of sources correspond to the petrography of the volcanic products described in the literature and to the dynamics of the single and twin eruptions that characterize the caldera. The ability to image the complex geometry of deformation sources in both space and time can improve our ability to monitor active volcanoes, widen our understanding of the dynamics of active volcanic systems and improve the predictions of eruptions.
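The linear step of such a scheme can be sketched as a damped least-squares inversion over a library of unit-source responses. Here the Green's-function columns are random stand-ins for the FEM solutions, and the sizes and damping weight are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical toy: n_src elementary pressure sources, n_obs line-of-sight
# displacements. Each column of G is the surface response to one
# unit-pressurized source (random columns stand in for FEM solutions).
n_obs, n_src = 120, 30
G = rng.normal(size=(n_obs, n_src))
p_true = np.zeros(n_src)
p_true[10:14] = 1.0                      # a compact pressurized region
d = G @ p_true + 0.05 * rng.normal(size=n_obs)

# Zeroth-order Tikhonov (damped) linear inversion, solved by stacking the
# damping equations beneath the data equations and calling one lstsq:
lam = 0.5                                # damping weight (assumed)
Gaug = np.vstack([G, lam * np.eye(n_src)])
daug = np.concatenate([d, np.zeros(n_src)])
p_est, *_ = np.linalg.lstsq(Gaug, daug, rcond=None)
```

Because every column of G is precomputed once, changing the regularization or the data requires only this cheap linear solve, which is the advantage over remeshing and non-linear inversion noted above.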

  4. Probabilistic estimation of splitting coefficients of normal modes of the Earth, and their uncertainties, using an autoregressive technique

    NASA Astrophysics Data System (ADS)

    Pachhai, S.; Masters, G.; Laske, G.

    2017-12-01

    Earth's normal-mode spectra are crucial to studying the long-wavelength structure of the Earth. Such observations have been used extensively to estimate "splitting coefficients" which, in turn, can be used to determine the three-dimensional velocity and density structure. Most past studies apply a non-linear iterative inversion to estimate the splitting coefficients, which requires that the earthquake source be known. However, it is challenging to know the source details, particularly for the large events used in normal-mode analyses. Additionally, the final solution of the non-linear inversion can depend on the choice of damping parameter and starting model. To circumvent the need to know the source, a two-step linear inversion has been developed and successfully applied to many mantle- and core-sensitive modes. The first step takes combinations of the data from a single event to produce spectra known as "receiver strips". The autoregressive nature of the receiver strips can then be used to estimate the structure coefficients without the need to know the source. Based on this approach, we recently employed a neighborhood algorithm to measure the splitting coefficients for an isolated inner-core-sensitive mode (13S2). This approach explores the parameter space efficiently without any need for regularization and finds the structure coefficients that best fit the observed strips. Here, we implement a Bayesian approach applied to data collected from earthquakes since early 2000. This approach combines the data (through the likelihood) and prior information to provide rigorous parameter values and their uncertainties for both isolated and coupled modes. The likelihood function is derived from the inferred errors of the receiver strips, which allows us to retrieve proper uncertainties. 
Finally, we apply model selection criteria that balance the trade-offs between fit (likelihood) and model complexity to investigate the degree and type of structure (elastic and anelastic) required to explain the data.
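The Bayesian step can be illustrated with a bare-bones Metropolis sampler on a hypothetical one-coefficient problem, where a Gaussian likelihood plays the role of the receiver-strip misfit; unlike a regularized point estimate, the chain yields the coefficient and its uncertainty together:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical setup: one "structure coefficient" c generates data strips
# s = c * g + noise; the likelihood follows from the known strip errors.
g = rng.normal(size=50)
c_true, sigma = 0.7, 0.2
s = c_true * g + sigma * rng.normal(size=50)

def log_post(c):                 # flat prior, Gaussian likelihood
    r = s - c * g
    return -0.5 * np.sum(r ** 2) / sigma ** 2

# Metropolis sampling of the posterior.
chain = []
c, lp = 0.0, log_post(0.0)
for _ in range(20000):
    cand = c + 0.05 * rng.normal()
    lpc = log_post(cand)
    if np.log(rng.random()) < lpc - lp:
        c, lp = cand, lpc
    chain.append(c)

post = np.array(chain[5000:])    # discard burn-in
c_mean, c_std = post.mean(), post.std()
```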

  5. A genetic meta-algorithm-assisted inversion approach: hydrogeological study for the determination of volumetric rock properties and matrix and fluid parameters in unsaturated formations

    NASA Astrophysics Data System (ADS)

    Szabó, Norbert Péter

    2018-03-01

    An evolutionary inversion approach is suggested for the interpretation of nuclear and resistivity logs measured by direct-push tools in shallow unsaturated sediments. The efficiency of formation evaluation is improved by estimating simultaneously (1) the petrophysical properties that vary rapidly along a drill hole with depth and (2) the zone parameters that can be treated as constant, in one inversion procedure. In the workflow, the fractional volumes of water, air, matrix and clay are estimated in adjacent depths by linearized inversion, whereas the clay and matrix properties are updated using a float-encoded genetic meta-algorithm. The proposed inversion method provides an objective estimate of the zone parameters that appear in the tool response equations applied to solve the forward problem, which can significantly increase the reliability of the petrophysical model as opposed to setting these parameters arbitrarily. The global optimization meta-algorithm not only assures the best fit between the measured and calculated data but also gives a reliable solution, practically independent of the initial model, as laboratory data are unnecessary in the inversion procedure. The feasibility test uses engineering geophysical sounding logs observed in an unsaturated loessy-sandy formation in Hungary. The multi-borehole extension of the inversion technique is developed to determine the petrophysical properties and their estimation errors along a profile of drill holes. The genetic meta-algorithmic inversion method is recommended for hydrogeophysical logging applications of various kinds to automatically extract the volumetric ratios of rock and fluid constituents as well as the most important zone parameters in a reliable inversion procedure.
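A minimal float-encoded genetic algorithm of the kind used to update the zone parameters might look like the sketch below; the two-parameter misfit function is a hypothetical stand-in for the tool-response misfit:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy misfit for two "zone parameters", minimized at (1.5, 0.3) (hypothetical).
target = np.array([1.5, 0.3])
def misfit(z):
    return np.sum((z - target) ** 2, axis=-1)

# Minimal float-encoded genetic algorithm: tournament selection, arithmetic
# (blend) crossover and Gaussian mutation on real-valued chromosomes.
pop = rng.uniform(-2, 2, size=(40, 2))
for gen in range(60):
    f = misfit(pop)
    # tournament selection: the fitter of two random individuals survives
    i, j = rng.integers(0, 40, size=(2, 40))
    parents = np.where((f[i] < f[j])[:, None], pop[i], pop[j])
    # blend crossover between consecutive parents
    a = rng.random((40, 1))
    children = a * parents + (1 - a) * np.roll(parents, 1, axis=0)
    # Gaussian mutation
    children += 0.05 * rng.normal(size=children.shape)
    # elitism: carry the best previous individual over unchanged
    children[0] = pop[np.argmin(f)]
    pop = children

best = pop[np.argmin(misfit(pop))]
```

In the paper's workflow this outer evolutionary loop updates the (constant) zone parameters while an inner linearized inversion updates the depth-varying fractional volumes.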

  6. Total variation regularization for seismic waveform inversion using an adaptive primal dual hybrid gradient method

    NASA Astrophysics Data System (ADS)

    Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan

    2018-04-01

    Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and the velocity perturbation is strongly nonlinear, so salt inversion can easily get trapped in local minima. Since the velocity of salt is nearly constant, we can exploit this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total-variation-norm constrained convex set, so that the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are first inverted and the perturbations are gradually obtained by successively relaxing the total variation norm constraints. A numerical experiment projecting the BP model onto the intersection of the total variation norm and box constraints demonstrates the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and invert the complex salt velocity layer by layer.
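The flavor of the primal dual hybrid gradient iteration can be seen in a 1-D toy: a fixed-step (non-adaptive) Chambolle-Pock scheme applying a total variation penalty to a noisy piecewise-constant "velocity" profile. All values are hypothetical, and the penalized (rather than constrained) formulation is used for brevity:

```python
import numpy as np

rng = np.random.default_rng(5)

# Piecewise-constant profile plus noise (toy 1-D analogue of a salt body).
x_true = np.concatenate([np.full(50, 1.5), np.full(50, 4.5), np.full(50, 2.0)])
y = x_true + 0.3 * rng.normal(size=x_true.size)

# Primal dual hybrid gradient (Chambolle-Pock) for
#   min_x 0.5*||x - y||^2 + lam*||D x||_1,   D = forward difference.
lam, tau, sig = 1.0, 0.25, 0.9        # tau*sig*||D||^2 <= 1 (||D||^2 <= 4)
n = y.size
x = y.copy()
xbar = x.copy()
p = np.zeros(n - 1)
for _ in range(500):
    # dual ascent, then projection onto the l-inf ball of radius lam
    p = np.clip(p + sig * np.diff(xbar), -lam, lam)
    # primal descent: proximal step of 0.5*||x - y||^2 at x - tau*D^T p
    Dtp = np.concatenate([[-p[0]], -np.diff(p), [p[-1]]])
    x_new = (x - tau * Dtp + tau * y) / (1 + tau)
    xbar = 2 * x_new - x
    x = x_new
```

The TV term drives the solution toward piecewise-constant profiles, which is exactly why it suits nearly constant-velocity salt.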

  7. Inversion of Crater Morphometric Data to Gain Insight on the Cratering Process

    NASA Technical Reports Server (NTRS)

    Herrick, Robert R.; Lyons, Suzane N.

    1998-01-01

    In recent years, morphometric data for Venus and several outer planet satellites have been collected, so we now have observational data for complex craters formed across a large range of target properties. We present general inversion techniques that can utilize the morphometric data to quantitatively test various models of complex crater formation. The morphometric data we use in this paper are the depth of a complex crater, the diameter at which the depth-diameter ratio changes, and onset diameters for central peaks, terraces, and peak rings. We tested the roles of impactor velocities and hydrostatic pressure vs. crustal strength, and we tested the specific models of acoustic fluidization (Melosh, 1982) and nonproportional growth (Schultz, 1988). Neither the acoustic fluidization model nor the nonproportional growth model, in their published formulations, is able to successfully reproduce the data. No dependence on impactor velocity is evident from our inversions. Most of the morphometric data are consistent with a linear dependence on the ratio of crustal strength to hydrostatic pressure on a planet, i.e. the factor c/ρg.

  8. Computing Generalized Matrix Inverse on Spiking Neural Substrate.

    PubMed

    Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen

    2018-01-01

    Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines.
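The appeal of such substrates is that the generalized inverse can be reached through matrix multiplications alone. The sketch below shows this with the classical Newton-Schulz iteration converging to the Moore-Penrose inverse; it is illustrative of the multiply-only style, not the TrueNorth implementation from the paper:

```python
import numpy as np

# Newton-Schulz iteration for the Moore-Penrose generalized inverse: built
# entirely from matrix multiplications, the kind of primitive that maps
# naturally onto neural or accelerator hardware.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# X0 = alpha * A^T with alpha <= 1 / (||A||_1 * ||A||_inf) guarantees
# convergence, since then alpha * sigma_max(A)^2 < 2.
alpha = 1.0 / (np.abs(A).sum(axis=0).max() * np.abs(A).sum(axis=1).max())
X = alpha * A.T
for _ in range(100):
    X = X @ (2 * np.eye(3) - A @ X)   # quadratically convergent update
```

On fixed-point hardware the interesting part, as the abstract stresses, is keeping every intermediate product within the representable range; here `alpha` plays the role of that normalization.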

  9. Thermal Diagnostics with the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory: A Validated Method for Differential Emission Measure Inversions

    NASA Astrophysics Data System (ADS)

    Cheung, Mark C. M.; Boerner, P.; Schrijver, C. J.; Testa, P.; Chen, F.; Peter, H.; Malanushenko, A.

    2015-07-01

    We present a new method for performing differential emission measure (DEM) inversions on narrow-band EUV images from the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory. The method yields positive definite DEM solutions by solving a linear program. This method has been validated against a diverse set of thermal models of varying complexity and realism. These include (1) idealized Gaussian DEM distributions, (2) 3D models of NOAA Active Region 11158 comprising quasi-steady loop atmospheres in a nonlinear force-free field, and (3) thermodynamic models from a fully compressible, 3D MHD simulation of active region (AR) corona formation following magnetic flux emergence. We then present results from the application of the method to AIA observations of Active Region 11158, comparing the region's thermal structure on two successive solar rotations. Additionally, we show how the DEM inversion method can be adapted to simultaneously invert AIA and Hinode X-ray Telescope data, and how supplementing AIA data with the latter improves the inversion result. The speed of the method allows for routine production of DEM maps, thus facilitating science studies that require tracking of the thermal structure of the solar corona in time and space.
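The positivity-via-linear-programming idea can be sketched on a toy problem with `scipy.optimize.linprog`; the response matrix, channel count and tolerance below are hypothetical stand-ins, not the AIA temperature response functions:

```python
import numpy as np
from scipy.optimize import linprog

# Toy temperature-response matrix R: each of 6 "channels" responds to
# emission over a range of temperature bins (all values hypothetical).
n_ch, n_T = 6, 20
T = np.linspace(0, 1, n_T)
centers = np.linspace(0.1, 0.9, n_ch)
R = np.exp(-((T[None, :] - centers[:, None]) ** 2) / (2 * 0.15 ** 2))

dem_true = np.exp(-((T - 0.5) ** 2) / (2 * 0.1 ** 2))   # Gaussian DEM
y = R @ dem_true

# Linear program: the smallest-total nonnegative DEM whose predicted counts
# match the observations within a tolerance tol:
#   min 1^T d   s.t.   |R d - y| <= tol,   d >= 0.
tol = 1e-3 * np.abs(y).max()
A_ub = np.vstack([R, -R])
b_ub = np.concatenate([y + tol, -(y - tol)])
res = linprog(c=np.ones(n_T), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * n_T, method="highs")
dem_est = res.x
```

Because LP solvers are fast and the positivity constraint is built in, this style of inversion scales to producing full DEM maps, as noted above.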

  10. Limited-memory BFGS based least-squares pre-stack Kirchhoff depth migration

    NASA Astrophysics Data System (ADS)

    Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu

    2015-08-01

    Least-squares migration (LSM) is a linearized inversion technique for subsurface reflectivity estimation. Compared to conventional migration algorithms, it can improve spatial resolution significantly with a few iterative calculations. There are three key steps in LSM, (1) calculate data residuals between observed data and demigrated data using the inverted reflectivity model; (2) migrate data residuals to form reflectivity gradient and (3) update reflectivity model using optimization methods. In order to obtain an accurate and high-resolution inversion result, the good estimation of inverse Hessian matrix plays a crucial role. However, due to the large size of Hessian matrix, the inverse matrix calculation is always a tough task. The limited-memory BFGS (L-BFGS) method can evaluate the Hessian matrix indirectly using a limited amount of computer memory which only maintains a history of the past m gradients (often m < 10). We combine the L-BFGS method with least-squares pre-stack Kirchhoff depth migration. Then, we validate the introduced approach by the 2-D Marmousi synthetic data set and a 2-D marine data set. The results show that the introduced method can effectively obtain reflectivity model and has a faster convergence rate with two comparison gradient methods. It might be significant for general complex subsurface imaging.
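Step (3) can be sketched with SciPy's limited-memory BFGS on a toy linearized problem; the operator `L` below is a random stand-in for the demigration operator, and the `maxcor` option plays the role of the memory length m:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Toy linearized modelling operator L (stand-in for demigration) and observed
# data d; the LSM objective is 0.5*||L m - d||^2 over the reflectivity m.
n_d, n_m = 200, 50
L = rng.normal(size=(n_d, n_m))
m_true = rng.normal(size=n_m)
d = L @ m_true

def obj(m):
    r = L @ m - d
    return 0.5 * r @ r, L.T @ r          # objective value and gradient

# L-BFGS keeps only the last `maxcor` gradient pairs (often < 10), building
# an implicit inverse-Hessian estimate without ever forming the Hessian.
res = minimize(obj, np.zeros(n_m), jac=True, method="L-BFGS-B",
               options={"maxcor": 8, "maxiter": 500})
```

The gradient here corresponds to migrating the data residual, i.e. step (2) in the abstract's workflow.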

  11. THERMAL DIAGNOSTICS WITH THE ATMOSPHERIC IMAGING ASSEMBLY ON BOARD THE SOLAR DYNAMICS OBSERVATORY: A VALIDATED METHOD FOR DIFFERENTIAL EMISSION MEASURE INVERSIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Mark C. M.; Boerner, P.; Schrijver, C. J.

    We present a new method for performing differential emission measure (DEM) inversions on narrow-band EUV images from the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory. The method yields positive definite DEM solutions by solving a linear program. This method has been validated against a diverse set of thermal models of varying complexity and realism. These include (1) idealized Gaussian DEM distributions, (2) 3D models of NOAA Active Region 11158 comprising quasi-steady loop atmospheres in a nonlinear force-free field, and (3) thermodynamic models from a fully compressible, 3D MHD simulation of active region (AR) corona formation following magnetic flux emergence. We then present results from the application of the method to AIA observations of Active Region 11158, comparing the region's thermal structure on two successive solar rotations. Additionally, we show how the DEM inversion method can be adapted to simultaneously invert AIA and Hinode X-ray Telescope data, and how supplementing AIA data with the latter improves the inversion result. The speed of the method allows for routine production of DEM maps, thus facilitating science studies that require tracking of the thermal structure of the solar corona in time and space.

  12. Reduction of a linear complex model for respiratory system during Airflow Interruption.

    PubMed

    Jablonski, Ireneusz; Mroczka, Janusz

    2010-01-01

    The paper presents a methodology for reducing a complex model to a simpler, identifiable inverse model. Its main tool is a numerical procedure of sensitivity analysis (structural and parametric) applied to the forward linear equivalent designed for the conditions of the interrupter experiment. The final result, a reduced analog for the interrupter technique, is especially worthy of note, as it fills a major gap in occlusional measurements, which typically use simple one- or two-element physical representations. The proposed reduced electrical circuit, a structural combination of resistive, inertial and elastic properties, can be regarded as a candidate for reliable reconstruction and quantification (in the time and frequency domains) of the dynamical behavior of the respiratory system in response to a quasi-step excitation by valve closure.

  13. Flow throughout the Earth's core inverted from geomagnetic observations and numerical dynamo models

    NASA Astrophysics Data System (ADS)

    Aubert, Julien

    2013-02-01

    This paper introduces inverse geodynamo modelling, a framework imaging flow throughout the Earth's core from observations of the geomagnetic field and its secular variation. The necessary prior information is provided by statistics from 3-D and self-consistent numerical simulations of the geodynamo. The core method is a linear estimation (or Kalman filtering) procedure, combined with standard frozen-flux core surface flow inversions in order to handle the non-linearity of the problem. The inversion scheme is successfully validated using synthetic test experiments. A set of four numerical dynamo models of increasing physical complexity and similarity to the geomagnetic field is then used to invert for flows at single epochs within the period 1970-2010, using data from the geomagnetic field models CM4 and gufm-sat-Q3. The resulting core surface flows generally provide satisfactory fits to the secular variation within the level of modelled errors, and robustly reproduce the most commonly observed patterns while additionally presenting a high degree of equatorial symmetry. The corresponding deep flows present a robust, highly columnar structure once rotational constraints are enforced to a high level in the prior models, with patterns strikingly similar to the results of quasi-geostrophic inversions. In particular, the presence of a persistent planetary scale, eccentric westward columnar gyre circling around the inner core is confirmed. The strength of the approach is to uniquely determine the trade-off between fit to the data and complexity of the solution by clearly connecting it to first principle physics; statistical deviations observed between the inverted flows and the standard model behaviour can then be used to quantitatively assess the shortcomings of the physical modelling. Such deviations include the (i) westwards and (ii) hemispherical character of the eccentric gyre. 
A prior model with angular momentum conservation of the core-mantle inner-core system, and gravitational coupling of reasonable strength between the mantle and the inner core, is shown to produce enough westward drift to resolve statistical deviation (i). Deviation (ii) is resolved by a prior with an hemispherical buoyancy release at the inner-core boundary, with excess buoyancy below Asia. This latter result suggests that the recently proposed inner-core translational instability presently transports the solid inner-core material westwards, opposite to the seismologically inferred long-term trend but consistently with the eccentricity of the geomagnetic dipole in recent times.

  14. Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-07-01

    Although the surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent this ill-posedness is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework in which the a priori information about the sought parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations. Posterior mean and covariance can also be efficiently derived. I show that the maximum a posteriori (MAP) model can be obtained using a non-negative least-squares algorithm for the single truncated case, or the bounded-variable least-squares algorithm for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. 
The numerical equivalence to Bayesian inversions using Monte Carlo Markov chain (MCMC) sampling is shown for a synthetic example and a real case for interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need of computer power is largely reduced. Second, unlike Bayesian MCMC-based approach, marginal pdf, mean, variance or covariance are obtained independently one from each other. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
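For the single-truncation case, the MAP-by-NNLS observation can be demonstrated directly with `scipy.optimize.nnls` on a toy slip problem; the Green's functions and noise level below are hypothetical:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(9)

# Toy slip inversion: surface displacements d are a linear combination G s
# of slip on fault patches, with positivity required on s.
n_obs, n_patch = 80, 25
G = rng.normal(size=(n_obs, n_patch))
s_true = np.maximum(rng.normal(size=n_patch), 0.0)   # nonnegative true slip
d = G @ s_true + 0.02 * rng.normal(size=n_obs)

# With a flat prior truncated at zero, the maximum a posteriori slip model
# is the non-negative least-squares solution, as noted above.
s_map, rnorm = nnls(G, d)
```

For the doubly truncated case one would swap in a bounded-variable least-squares solver; the point is that the MAP is reached by a deterministic solve rather than MCMC sampling.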

  15. Error analysis in inverse scatterometry. I. Modeling.

    PubMed

    Al-Assaad, Rayan M; Byrne, Dale M

    2007-02-01

    Scatterometry is an optical technique that has been studied and tested in recent years in semiconductor fabrication metrology for critical dimensions. Previous work presented an iterative linearized method to retrieve surface-relief profile parameters from reflectance measurements upon diffraction. With the iterative linear solution model in this work, rigorous models are developed to represent the random and deterministic or offset errors in scatterometric measurements. The propagation of different types of error from the measurement data to the profile parameter estimates is then presented. The improvement in solution accuracies is then demonstrated with theoretical and experimental data by adjusting for the offset errors. In a companion paper (in process) an improved optimization method is presented to account for unknown offset errors in the measurements based on the offset error model.

  16. State of charge estimation in Ni-MH rechargeable batteries

    NASA Astrophysics Data System (ADS)

    Milocco, R. H.; Castro, B. E.

    In this work we estimate the state of charge (SOC) of Ni-MH rechargeable batteries using the Kalman filter based on a simplified electrochemical model. First, we derive the complete electrochemical model of the battery which includes diffusional processes and kinetic reactions in both Ni and MH electrodes. The full model is further reduced in a cascade of two parts, a linear time invariant dynamical sub-model followed by a static nonlinearity. Both parts are identified using the current and potential measured at the terminals of the battery with a simple 1-D minimization procedure. The inverse of the static nonlinearity together with a Kalman filter provide the SOC estimation as a linear estimation problem. Experimental results with commercial batteries are provided to illustrate the estimation procedure and to show the performance.
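A scalar toy version of this cascade (coulomb-counting dynamics followed by a linearized static nonlinearity, all parameter values hypothetical) shows how the Kalman filter turns SOC estimation into a linear problem:

```python
import numpy as np

rng = np.random.default_rng(10)

# Toy scalar model: SOC evolves by coulomb counting,
#   soc[k+1] = soc[k] - (dt/Q) * I[k],
# and a linearized voltage measurement v = a*soc + b + noise stands in for
# the inverted static nonlinearity of the full electrochemical model.
Q, dt, a, b = 3600.0, 1.0, 0.5, 3.2
q_proc, r_meas = 1e-6, 1e-4

steps = 2000
current = 0.5 + 0.1 * rng.normal(size=steps)         # discharge current (A)
soc = np.empty(steps)
soc[0] = 0.9
for k in range(1, steps):
    soc[k] = soc[k - 1] - dt / Q * current[k - 1]
v = a * soc + b + np.sqrt(r_meas) * rng.normal(size=steps)

# Scalar Kalman filter: predict with coulomb counting, correct with voltage.
x, P = 0.5, 1.0                                      # poor initial SOC guess
est = np.empty(steps)
est[0] = x
for k in range(1, steps):
    x = x - dt / Q * current[k - 1]                  # predict
    P = P + q_proc
    K = P * a / (a * a * P + r_meas)                 # Kalman gain
    x = x + K * (v[k] - (a * x + b))                 # correct
    P = (1 - K * a) * P
    est[k] = x
```

The voltage correction is what removes the drift that pure coulomb counting accumulates, even when the initial SOC guess is badly wrong.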

  17. School social cohesion, student-school connectedness, and bullying in Colombian adolescents.

    PubMed

    Springer, Andrew E; Cuevas Jaramillo, Maria Clara; Ortiz Gómez, Yamileth; Case, Katie; Wilkinson, Anna

    2016-12-01

    Student-school connectedness is inversely associated with multiple health risk behaviors, yet research is limited on the relative contributions of a student's connectedness with school and an overall context of school social cohesion to peer victimization/bullying. We examined associations of perceived school cohesion and student-school connectedness with physical victimization, verbal victimization, and social exclusion in the past six months in adolescents in grades 6-11 (N = 774) attending 11 public and private urban schools in Colombia. Cross-sectional data were collected via a self-administered questionnaire and analyzed using mixed-effects linear regression models. Higher perceived school cohesion was inversely related to exposure to all three bullying types examined (p < 0.05); student-school connectedness was negatively related to verbal victimization among girls only (p < 0.01). In full models, school cohesion maintained inverse associations with all three bullying types after controlling for student-school connectedness (p ≤ 0.05). Enhancing school cohesion may hold benefits for bullying prevention beyond a student's individual school connectedness. © The Author(s) 2015.

  18. Optical orientation of the homogeneous nonequilibrium Bose-Einstein condensate of exciton polaritons

    NASA Astrophysics Data System (ADS)

    Korenev, V. L.

    2012-07-01

    A simple model, describing the steady state of the nonequilibrium polarization of a homogeneous Bose-Einstein condensate of exciton polaritons, is considered. It explains the suppression of spin splitting of a nonequilibrium polariton condensate in an external magnetic field, the linear polarization, the linear-to-circular polarization conversion, and the unexpected sign of the circular polarization of the condensate all on equal footing. It is shown that inverse effects are possible, to wit, spontaneous circular polarization and the enhancement of spin splitting of a nonequilibrium condensate of polaritons.

  19. Reverse Flood Routing with the Lag-and-Route Storage Model

    NASA Astrophysics Data System (ADS)

    Mazi, K.; Koussis, A. D.

    2010-09-01

    This work presents a method for reverse routing of flood waves in open channels, which is an inverse problem of the signal identification type. Inflow determination from outflow measurements is useful in hydrologic forensics and in optimal reservoir control, but has seldom been studied. Such problems are ill posed and their solution is sensitive to small perturbations present in the data, or to any related uncertainty. Therefore, the major difficulty in solving this inverse problem consists in controlling the amplification of errors that inevitably befall flow measurements, from which the inflow signal is to be determined. The lag-and-route model offers a convenient framework for reverse routing, because formal deconvolution is not required and reverse routing proceeds through a single linear reservoir. In addition, this inversion degenerates to calculating the intermediate inflow (prior to the lag step) simply as the sum of the outflow and its time derivative multiplied by the reservoir’s time constant. The remaining time shifting (lag) of the intermediate, reversed flow presents no complications, as pure translation causes no error amplification. Note that reverse routing with the inverted Muskingum scheme (Koussis et al., submitted to the 12th Plinius Conference) fails when that scheme is specialised to the Kalinin-Miljukov model (linear reservoirs in series). The principal functioning of the reverse routing procedure was verified first with perfect field data (outflow hydrograph generated by forward routing of a known inflow hydrograph). The field data were then seeded with random error. To smooth the oscillations caused by the imperfect (measured) outflow data, we applied a multipoint Savitzky-Golay low-pass filter. The combination of reverse routing and filtering recovered the inflow signal effectively and efficiently.
Specifically, we compared the reverse routing results of the inverted lag-and-route model and of the inverted Kalinin-Miljukov model. The latter applies the lag-and-route model’s single-reservoir inversion scheme sequentially to its cascade of linear reservoirs, the number of which is related to the stream's hydromorphology. For this purpose, we used the example of Bruen & Dooge (2007), who back-routed flow hydrographs in a 100-km long prismatic channel using a scheme for the reverse solution of the St. Venant equations of flood wave motion. The lag-and-route reverse routing model recovered the inflow hydrograph with comparable accuracy to that of the multi-reservoir, inverted Kalinin-Miljukov model, both performing as well as the box-scheme for reverse routing with the St. Venant equations. In conclusion, the successful recovery of the inflow signal by the devised single-reservoir reverse routing procedure with multipoint low-pass filtering can be attributed to its simple computational structure, which endows it with remarkable robustness and efficiency.
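
The single-reservoir inversion described above, I(t) = Q(t) + K dQ/dt, combined with Savitzky-Golay smoothing of the noisy outflow, can be sketched on a synthetic channel (the time constant K and all hydrograph numbers are illustrative):

```python
import numpy as np
from scipy.signal import savgol_filter

K = 2.0                      # reservoir time constant [h] (illustrative)
dt = 0.1
t = np.arange(0, 30, dt)
inflow = 10 + 40 * np.exp(-((t - 8) / 3.0) ** 2)   # synthetic inflow wave

# Forward routing through one linear reservoir: dQ/dt = (I - Q) / K
Q = np.empty_like(t)
Q[0] = inflow[0]
for k in range(len(t) - 1):
    Q[k + 1] = Q[k] + dt * (inflow[k] - Q[k]) / K

# Seed the 'measured' outflow with random error, then low-pass filter it
rng = np.random.default_rng(1)
Q_meas = Q + rng.normal(0, 0.5, Q.shape)
Q_smooth = savgol_filter(Q_meas, 21, 3)
dQdt = savgol_filter(Q_meas, 21, 3, deriv=1, delta=dt)

# Reverse routing: I = Q + K dQ/dt (no formal deconvolution needed)
I_rec = Q_smooth + K * dQdt

err = float(np.max(np.abs(I_rec[30:-30] - inflow[30:-30])))
print(round(err, 2))
```

Without the filtering step, the time derivative amplifies the measurement noise; with it, the inflow wave is recovered closely away from the record edges.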

  20. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data [Regularized maximum likelihood estimation of ground reflectivity from stripmap SAR data]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

    In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB bound for synthetically generated data.

  1. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data [Regularized maximum likelihood estimation of ground reflectivity from stripmap SAR data]

    DOE PAGES

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

    2016-12-01

    In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB bound for synthetically generated data.
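
The two-step ML estimate described above can be sketched for a generic linear model under additive white Gaussian noise, with a random complex matrix standing in for the stripmap SAR forward operator (an assumption for illustration): step one is the cross-correlation A^H y (the back-projection-like image), step two the full system inversion, and the CRLB follows from the diagonal of sigma^2 (A^H A)^{-1}.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 200, 40
# Random complex matrix standing in for the SAR forward operator (assumption)
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2 * m)
x_true = rng.normal(size=n)                 # 'ground reflectivity'
sigma = 0.01
noise = sigma * (rng.normal(size=m) + 1j * rng.normal(size=m)) / np.sqrt(2)
y = A @ x_true + noise

# Step 1: cross-correlate data with the model (back-projection-like image)
x_bp = A.conj().T @ y

# Step 2: full system inversion mitigates the sidelobes of A^H A
AhA = A.conj().T @ A
x_ml = np.linalg.solve(AhA, x_bp)

# CRLB for the ML estimates under AWGN: diagonal of sigma^2 (A^H A)^{-1}
crlb = sigma ** 2 * np.real(np.diag(np.linalg.inv(AhA)))

print(round(float(np.abs(x_ml - x_true).max()), 3))
```

Note that step 1 alone (x_bp) still carries the sidelobes of A^H A; only the full inversion in step 2 removes them.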

  2. Deformed coset models from gauged WZW actions

    NASA Astrophysics Data System (ADS)

    Park, Q.-Han

    1994-06-01

    A general Lagrangian formulation of integrably deformed G/H-coset models is given. We consider the G/H-coset model in terms of the gauged Wess-Zumino-Witten action and obtain an integrable deformation by adding a potential energy term Tr(g T g^{-1} T̄), where the algebra elements T and T̄ belong to the center of the algebra h associated with the subgroup H. We show that the classical equation of motion of the deformed coset model can be identified with the integrability condition of certain linear equations which makes the use of the inverse scattering method possible. Using the linear equation, we give a systematic way to construct infinitely many conserved currents as well as soliton solutions. In the case of the parafermionic SU(2)/U(1)-coset model, we derive n-solitons and conserved currents explicitly.

  3. Surface and Atmospheric Parameter Retrieval From AVIRIS Data: The Importance of Non-Linear Effects

    NASA Technical Reports Server (NTRS)

    Green, Robert O.; Moreno, Jose F.

    1996-01-01

    AVIRIS data represent a new and important approach for the retrieval of atmospheric and surface parameters from optical remote sensing data. Not only as a test for future space systems, but also as an operational airborne remote sensing system, the development of algorithms to retrieve information from AVIRIS data is an important step toward these new approaches and capabilities. Many things have been learned since AVIRIS became operational, and the successive technical improvements in the hardware and the more sophisticated calibration techniques employed have increased the quality of the data to the point of almost meeting optimum user requirements. However, the potential capabilities of imaging spectrometry over the standard multispectral techniques have still not been fully demonstrated. Reasons for this are the technical difficulties in handling the data, the critical aspect of calibration for advanced retrieval methods, and the lack of proper models with which to invert the measured AVIRIS radiances in all the spectral channels. To achieve the potential of imaging spectrometry, these issues must be addressed. In this paper, an algorithm to retrieve information about both atmospheric and surface parameters from AVIRIS data, by using model inversion techniques, is described. Emphasis is placed on the derivation of the model itself as well as on proper inversion techniques, robust both to noise in the data and to the model's limited ability to describe natural variability in the data. The problem of non-linear effects is addressed, as it has been demonstrated to be a major source of error in the numerical values retrieved by simpler, linear approaches. Non-linear effects are especially critical for the retrieval of surface parameters where both scattering and absorption effects are coupled, as well as in the cases of significant multiple-scattering contributions.
However, sophisticated modeling approaches can handle such non-linear effects, which are especially important over vegetated surfaces. All the data used in this study were acquired during the 1991 Multisensor Airborne Campaign (MAC-Europe), as part of the European Field Experiment on a Desertification-threatened Area (EFEDA), carried out in Spain in June-July 1991.

  4. The impact of approximations and arbitrary choices on geophysical images

    NASA Astrophysics Data System (ADS)

    Valentine, Andrew P.; Trampert, Jeannot

    2016-01-01

    Whenever a geophysical image is to be constructed, a variety of choices must be made. Some, such as those governing data selection and processing, or model parametrization, are somewhat arbitrary: there may be little reason to prefer one choice over another. Others, such as defining the theoretical framework within which the data are to be explained, may be more straightforward: typically, an `exact' theory exists, but various approximations may need to be adopted in order to make the imaging problem computationally tractable. Differences between any two images of the same system can be explained in terms of differences between these choices. Understanding the impact of each particular decision is essential if images are to be interpreted properly, but little progress has been made towards a quantitative treatment of this effect. In this paper, we consider a general linearized inverse problem, applicable to a wide range of imaging situations. We write down an expression for the difference between two images produced using similar inversion strategies, but where different choices have been made. This provides a framework within which inversion algorithms may be analysed, and allows us to consider how image effects may arise. In this paper, we take a general view, and do not specialize our discussion to any specific imaging problem or setup (beyond the restrictions implied by the use of linearized inversion techniques). In particular, we look at the concept of `hybrid inversion', in which highly accurate synthetic data (typically the result of an expensive numerical simulation) are combined with an inverse operator constructed based on theoretical approximations. It is generally supposed that this offers the benefits of using the more complete theory, without the full computational costs. We argue that the inverse operator is as important as the forward calculation in determining the accuracy of results.
We illustrate this using a simple example, based on imaging the density structure of a vibrating string.

  5. A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization

    NASA Astrophysics Data System (ADS)

    Liu, Shuang; Hu, Xiangyun; Liu, Tianyou

    2014-07-01

    Simulating the foraging behavior of natural ants, the ant colony optimization (ACO) algorithm performs excellently on combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, the ACO has seldom been used to invert gravity and magnetic data. On the basis of a continuous, multi-dimensional objective function for potential field inversion, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes, and ants tour the nodes directionally according to transition probabilities. We update the pheromone trails using a Gaussian mapping between the objective function value and the quantity of pheromone. The algorithm can analyze the search results in real time and improves the rate of convergence and the precision of inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method on synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.

  6. Rigorous Approach in Investigation of Seismic Structure and Source Characteristicsin Northeast Asia: Hierarchical and Trans-dimensional Bayesian Inversion

    NASA Astrophysics Data System (ADS)

    Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.

    2015-12-01

    Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for involved error statistics and model parameterizations, and, in turn, allow more rigorous estimations of the same. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia including east China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. As for the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partition. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in northeast Asia.

  7. Linear Inverse Modeling and Scaling Analysis of Drainage Inventories.

    NASA Astrophysics Data System (ADS)

    O'Malley, C.; White, N. J.

    2016-12-01

    It is widely accepted that the stream power law can be used to describe the evolution of longitudinal river profiles. Over the last 5 years, this phenomenological law has been used to develop non-linear and linear inversion algorithms that enable uplift rate histories to be calculated by minimizing the misfit between observed and calculated river profiles. Substantial, continent-wide inventories of river profiles have been successfully inverted to yield uplift as a function of time and space. Erosional parameters can be determined by independent geological calibration. Our results help to illuminate empirical scaling laws that are well known to the geomorphological community. Here we present an analysis of river profiles from Asia. The timing and magnitude of uplift events across Asia, including the Himalayas and Tibet, have long been debated. River profile analyses have played an important role in clarifying the timing of uplift events. However, no attempt has yet been made to invert a comprehensive database of river profiles from the entire region. Asian rivers contain information which allows us to investigate putative uplift events quantitatively and to determine a cumulative uplift history for Asia. Long wavelength shapes of river profiles are governed by regional uplift and moderated by erosional processes. These processes are parameterised using the stream power law in the form of an advective-diffusive equation. Our non-negative, least-squares inversion scheme was applied to an inventory of 3722 Asian river profiles. We calibrate the key erosional parameters by predicting solid sedimentary flux for a set of Asian rivers and by comparing the flux predictions against published depositional histories for major river deltas. The resultant cumulative uplift history is compared with a range of published geological constraints for uplift and palaeoelevation. We have found good agreement for many regions across Asia. 
Surprisingly, single values of erosional constants can be shown to produce reliable uplift histories. However, these erosional constants appear to vary from continent to continent. Future work will investigate the global relationship between our inversion results, scaling laws, climate models, lithological variation and sedimentary flux.
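
A toy version of such a linear inversion can be sketched by assuming (for illustration only, far simpler than the advective-diffusive scheme above) pure advection with a constant knickpoint retreat speed, so that elevation at each point along the profile integrates the uplift-rate history up to the retreat time; the resulting linear system is solved by non-negative least squares.

```python
import numpy as np
from scipy.optimize import nnls

# Toy linear stream-power inversion: pure advection with constant
# knickpoint retreat speed v, so elevation at distance x integrates the
# uplift-rate history U(t) over the retreat time tau = x / v.
v = 2.0                      # retreat speed [km/Myr] (illustrative)
dt = 0.5                     # time step [Myr]
nt = 40
t = np.arange(nt) * dt
U_true = np.where((t > 5) & (t < 12), 0.3, 0.05)   # two-stage uplift [km/Myr]

x = np.linspace(0.1, v * nt * dt, 120)             # samples along the profile
tau = x / v
G = np.array([[dt if t[j] < tau_i else 0.0 for j in range(nt)]
              for tau_i in tau])

rng = np.random.default_rng(3)
z = G @ U_true + rng.normal(0, 0.01, x.size)       # noisy river elevations

U_est, resid = nnls(G, z)                          # non-negative least squares
print(round(float(np.abs(U_est - U_true).mean()), 3))
```

The non-negativity constraint plays the role it does in the abstract's scheme: uplift rates cannot be negative, which also regularizes the differentiation implicit in the inversion.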

  8. Relative sensitivity of depth discrimination for ankle inversion and plantar flexion movements.

    PubMed

    Black, Georgia; Waddington, Gordon; Adams, Roger

    2014-02-01

    Twenty-five participants (20 women, 5 men) were tested for sensitivity in discrimination between sets of six movements centered on 8 degrees, 11 degrees, and 14 degrees, and separated by 0.3 degrees. Both inversion and plantar flexion movements were tested. Discrimination of the extent of inversion movement was observed to decline linearly with increasing depth; however, for plantar flexion, the discrimination function for movement extent was found to be non-linear. The relatively better discrimination of plantar flexion movements than inversion movements at around 11 degrees from horizontal is interpreted as an effect arising from differential amounts of practice through use, because this position is associated with the plantar flexion movement made in normal walking. The fact that plantar flexion movements are discriminated better than inversion at one region but not others argues against accounts of superior proprioceptive sensitivity for plantar flexion compared to inversion that are based on general properties of plantar flexion such as the number of muscle fibres on stretch.

  9. A method of gravity and seismic sequential inversion and its GPU implementation

    NASA Astrophysics Data System (ADS)

    Liu, G.; Meng, X.

    2011-12-01

    In this abstract, we introduce a gravity and seismic sequential inversion method to invert for density and velocity together. For the gravity inversion, we use an iterative method based on a correlation imaging algorithm; for the seismic inversion, we use full waveform inversion. The link between density and velocity is an empirical formula, the Gardner equation, and for large volumes of data we use the GPU to accelerate the computation. The gravity inversion is an iterative procedure: first we calculate the correlation imaging of the observed gravity anomaly, which takes values between -1 and +1, and multiply it by a small density increment to form the initial density model. We then compute the forward response of this initial model, calculate the correlation imaging of the misfit between the observed and forward data, again multiply it by a small density increment, and add the result to the current model; repeating this procedure yields the final inverted density model. The seismic inversion method is based on the linearity of the acoustic wave equation written in the frequency domain; starting from an initial velocity model, we can obtain a good velocity result. The sequential inversion of gravity and seismic data requires a link formula to convert between density and velocity; in our method, we use the Gardner equation. Driven by the insatiable market demand for real-time, high-definition 3D images, the programmable NVIDIA Graphics Processing Unit (GPU) has been developed as a co-processor of the CPU for high-performance computing. Compute Unified Device Architecture (CUDA) is a parallel programming model and software environment provided by NVIDIA, designed to overcome the challenge of traditional general-purpose GPU programming while maintaining a low learning curve for programmers familiar with standard languages such as C.
In our inversion processing, we use the GPU to accelerate both the gravity and the seismic inversion. Taking the gravity inversion as an example, its kernels are the gravity forward simulation and the correlation imaging; after parallelization on the GPU in the 3D case, the original five CPU loops are reduced to three in the inversion module and to two in the forward module. Acknowledgments: we acknowledge the financial support of the Sinoprobe project (201011039 and 201011049-03), the Fundamental Research Funds for the Central Universities (2010ZY26 and 2011PY0183), the National Natural Science Foundation of China (41074095), and the Open Project of the State Key Laboratory of Geological Processes and Mineral Resources (GPMR0945).
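
The Gardner link between velocity and density mentioned above is commonly written rho = a * Vp^b, with a ≈ 0.31 and b ≈ 0.25 for Vp in m/s and rho in g/cm³; whether the authors used these exact coefficients is an assumption here. A minimal helper pair:

```python
def gardner_density(vp_ms, a=0.31, b=0.25):
    """Gardner's relation rho = a * Vp**b (Vp in m/s, rho in g/cm^3).

    The default coefficients are the commonly quoted ones, not
    necessarily those used in the abstract above.
    """
    return a * vp_ms ** b

def gardner_velocity(rho_gcc, a=0.31, b=0.25):
    """Inverse of Gardner's relation: map density back to velocity."""
    return (rho_gcc / a) ** (1.0 / b)

vp = 3000.0                        # m/s
rho = gardner_density(vp)          # ~2.29 g/cm^3
vp_back = gardner_velocity(rho)    # round-trips to 3000 m/s
print(round(rho, 2), round(vp_back, 1))
```

In a sequential scheme of the kind described, the first function converts an inverted velocity model into a density starting model, and the second maps an inverted density model back to velocity.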

  10. Linear Approximation to Optimal Control Allocation for Rocket Nozzles with Elliptical Constraints

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.; Wall, John W.

    2011-01-01

    In this paper we present a straightforward technique for assessing and realizing the maximum control moment effectiveness for a launch vehicle with multiple constrained rocket nozzles, where elliptical deflection limits in gimbal axes are expressed as an ensemble of independent quadratic constraints. A direct method of determining an approximating ellipsoid that inscribes the set of attainable angular accelerations is derived. In the case of a parameterized linear generalized inverse, the geometry of the attainable set is computationally expensive to obtain but can be approximated to a high degree of accuracy with the proposed method. A linear inverse can then be optimized to maximize the volume of the true attainable set by maximizing the volume of the approximating ellipsoid. The use of a linear inverse does not preclude the use of linear methods for stability analysis and control design, preferred in practice for assessing the stability characteristics of the inertial and servoelastic coupling appearing in large boosters. The present techniques are demonstrated via application to the control allocation scheme for a concept heavy-lift launch vehicle.
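
A hedged sketch of allocation through a linear generalized inverse with elliptical deflection limits: the effectiveness matrix and limits below are made up for illustration, and the uniform scale-back into the per-nozzle ellipse is a simple stand-in for the paper's volume-optimized inverse.

```python
import numpy as np

# Illustrative allocator: 3 angular-acceleration commands from 4 gimbal
# deflections (2 nozzles x 2 axes); effectiveness values are made up.
B = np.array([[1.0,  0.0,  1.0,  0.0],
              [0.0,  1.0,  0.0,  1.0],
              [0.5, -0.2, -0.5,  0.2]])
limits = [(6.0, 4.0), (6.0, 4.0)]          # ellipse semi-axes [deg] per nozzle

def allocate(a_cmd):
    """Linear generalized inverse, then uniform scaling into the ellipses."""
    u = np.linalg.pinv(B) @ a_cmd          # minimum-norm deflection solution
    scale = 1.0
    for i, (ap, ay) in enumerate(limits):
        r = np.hypot(u[2 * i] / ap, u[2 * i + 1] / ay)  # >1 means outside
        if r > 1.0:
            scale = min(scale, 1.0 / r)
    return scale * u

u = allocate(np.array([3.0, 2.0, 0.5]))
ok = all(np.hypot(u[2 * i] / ap, u[2 * i + 1] / ay) <= 1.0 + 1e-9
         for i, (ap, ay) in enumerate(limits))
print(ok, u.round(3))
```

Scaling the whole vector preserves the direction of the commanded acceleration, at the cost of magnitude, which is the usual trade in constrained allocation.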

  11. Resolution analysis of marine seismic full waveform data by Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Ray, A.; Sekar, A.; Hoversten, G. M.; Albertin, U.

    2015-12-01

    The Bayesian posterior density function (PDF) of earth models that fit full waveform seismic data conveys information on the uncertainty with which the elastic model parameters are resolved. In this work, we apply the trans-dimensional reversible jump Markov Chain Monte Carlo method (RJ-MCMC) for the 1D inversion of noisy synthetic full-waveform seismic data in the frequency-wavenumber domain. While seismic full waveform inversion (FWI) is a powerful method for characterizing subsurface elastic parameters, the uncertainty in the inverted models has remained poorly known, if quantified at all, and results are highly dependent on the initial model. The Bayesian method we use is trans-dimensional in that the number of model layers is not fixed, and flexible in that the layer boundaries are free to move. The resulting parameterization does not require regularization to stabilize the inversion. Depth resolution is traded off with the number of layers, providing an estimate of uncertainty in elastic parameters (compressional and shear velocities Vp and Vs as well as density) with depth. We find that in the absence of additional constraints, Bayesian inversion can result in a wide range of posterior PDFs on Vp, Vs and density. These PDFs range from being clustered around the true model to containing little resolution of any particular features other than those in the near surface, depending on the particular data and target geometry. We present results for a suite of different frequencies and offset ranges, examining the differences in the posterior model densities thus derived. Though these results are for a 1D earth, they are applicable to areas with simple, layered geology and provide valuable insight into the resolving capabilities of FWI, as well as highlight the challenges in solving a highly non-linear problem.
The RJ-MCMC method also presents a tantalizing possibility for extension to 2D and 3D Bayesian inversion of full waveform seismic data in the future, as it objectively tackles the problem of model selection (i.e., the number of layers or cells for parameterization), which could ease the computational burden of evaluating forward models with many parameters.

  12. Scarp degraded by linear diffusion: inverse solution for age.

    USGS Publications Warehouse

    Andrews, D.J.; Hanks, T.C.

    1985-01-01

    Under the assumption that landforms unaffected by drainage channels are degraded according to the linear diffusion equation, a procedure is developed to invert a scarp profile to find its 'diffusion age'. The inverse procedure applied to synthetic data yields the following rules of thumb. Evidence of initial scarp shape has been lost when apparent age reaches twice its initial value. A scarp that appears to have been formed by one event may have been formed by two with an interval between them as large as apparent age. The simplicity of scarp profile measurement and this inversion makes profile analysis attractive. -from Authors
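
The forward model behind such an inversion is the classical diffusion solution for an initially vertical scarp, h(x, t) = a * erf(x / (2 * sqrt(kappa*t))); a minimal grid-search inversion for the diffusion age kappa*t on synthetic data follows (all numbers illustrative, and the simple erf profile assumes no far-field slope).

```python
import numpy as np
from scipy.special import erf

# A vertical scarp of half-height a degrading by linear diffusion has the
# profile h(x, t) = a * erf(x / (2 sqrt(kappa t))); the product kappa*t
# is the 'diffusion age' recoverable from a measured profile.
a = 1.0                                  # scarp half-height [m]
kt_true = 40.0                           # kappa * t [m^2] (illustrative)
x = np.linspace(-30, 30, 61)
rng = np.random.default_rng(4)
h_obs = a * erf(x / (2 * np.sqrt(kt_true))) + rng.normal(0, 0.02, x.size)

# Grid-search inversion for the diffusion age
kt_grid = np.linspace(1, 100, 991)
misfit = [np.sum((h_obs - a * erf(x / (2 * np.sqrt(kt)))) ** 2)
          for kt in kt_grid]
kt_est = float(kt_grid[int(np.argmin(misfit))])
print(round(kt_est, 1))
```

Note that only the product kappa*t is recoverable from the profile shape, which is exactly why an independent estimate of the diffusivity kappa is needed to turn a 'diffusion age' into a calendar age.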

  13. Mapping the landscape of metabolic goals of a cell

    DOE PAGES

    Zhao, Qi; Stettner, Arion I.; Reznik, Ed; ...

    2016-05-23

    Here, genome-scale flux balance models of metabolism provide testable predictions of all metabolic rates in an organism, by assuming that the cell is optimizing a metabolic goal known as the objective function. We introduce an efficient inverse flux balance analysis (invFBA) approach, based on linear programming duality, to characterize the space of possible objective functions compatible with measured fluxes. After testing our algorithm on simulated E. coli data and time-dependent S. oneidensis fluxes inferred from gene expression, we apply our inverse approach to flux measurements in long-term evolved E. coli strains, revealing objective functions that provide insight into metabolic adaptation trajectories.
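
invFBA itself characterizes objective functions via linear-programming duality; as background, the forward flux balance problem it inverts is a linear program of the following form. This is a toy three-reaction network, not a genome-scale model, and the stoichiometry is invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy three-reaction network (not a genome-scale model):
# R1: uptake -> A,   R2: A -> B,   R3: B -> biomass export
S = np.array([[1, -1,  0],     # steady-state balance of metabolite A
              [0,  1, -1]])    # steady-state balance of metabolite B
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10
c = [0, 0, -1]                 # linprog minimizes, so maximize v3 via -v3

res = linprog(c, A_eq=S, b_eq=[0, 0], bounds=bounds)
print(res.x.round(1))          # optimal steady-state flux distribution
```

The dual variables of this LP are what an inverse approach can exploit: given observed fluxes, optimality (primal-dual) conditions constrain which objective coefficient vectors c could have produced them.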

  14. Linear models for assessing mechanisms of sperm competition: the trouble with transformations.

    PubMed

    Eggert, Anne-Katrin; Reinhardt, Klaus; Sakaluk, Scott K

    2003-01-01

    Although sperm competition is a pervasive selective force shaping the reproductive tactics of males, the mechanisms underlying different patterns of sperm precedence remain obscure. Parker et al. (1990) developed a series of linear models designed to identify two of the more basic mechanisms: sperm lotteries and sperm displacement; the models can be tested experimentally by manipulating the relative numbers of sperm transferred by rival males and determining the paternity of offspring. Here we show that tests of the model derived for sperm lotteries can result in misleading inferences about the underlying mechanism of sperm precedence because the required inverse transformations may lead to a violation of fundamental assumptions of linear regression. We show that this problem can be remedied by reformulating the model using the actual numbers of offspring sired by each male, and log-transforming both sides of the resultant equation. Reassessment of data from a previous study (Sakaluk and Eggert 1996) using the corrected version of the model revealed that we should not have excluded a simple sperm lottery as a possible mechanism of sperm competition in decorated crickets, Gryllodes sigillatus.
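
The proposed remedy, regressing log-transformed offspring ratios on log-transformed sperm-number ratios, can be illustrated on synthetic lottery data: under a fair raffle the paternity odds equal the sperm ratio, so the slope should be near 1. The numbers below are simulated, not the cricket data.

```python
import numpy as np

rng = np.random.default_rng(5)
n_pairs = 200
s1 = rng.integers(50, 500, n_pairs).astype(float)   # sperm from male 1
s2 = rng.integers(50, 500, n_pairs).astype(float)   # sperm from male 2

# Fair lottery: paternity is binomial with p equal to male 1's sperm share
brood = 100
o1 = rng.binomial(brood, s1 / (s1 + s2))            # offspring sired by male 1
o2 = brood - o1
keep = (o1 > 0) & (o2 > 0)                          # logs need nonzero counts

slope, intercept = np.polyfit(np.log(s1 / s2)[keep],
                              np.log(o1 / o2)[keep], 1)
print(round(float(slope), 2))
```

Because both sides are log-transformed counts rather than inverse-transformed proportions, the residual structure stays compatible with ordinary linear regression, which is the point the abstract makes.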

  15. Isotropic-resolution linear-array-based photoacoustic computed tomography through inverse Radon transform

    NASA Astrophysics Data System (ADS)

    Li, Guo; Xia, Jun; Li, Lei; Wang, Lidai; Wang, Lihong V.

    2015-03-01

    Linear transducer arrays are readily available for ultrasonic detection in photoacoustic computed tomography. They offer low cost, hand-held convenience, and conventional ultrasonic imaging. However, the elevational resolution of linear transducer arrays, which is usually determined by the weak focus of the cylindrical acoustic lens, is about one order of magnitude worse than the in-plane axial and lateral spatial resolutions. Therefore, conventional linear scanning along the elevational direction cannot provide high-quality three-dimensional photoacoustic images due to the anisotropic spatial resolutions. Here we propose an innovative method to achieve isotropic resolutions for three-dimensional photoacoustic images through combined linear and rotational scanning. In each scan step, we first scan the linear transducer array elevationally, then rotate it about its center in small steps, and scan again until 180 degrees have been covered. To reconstruct isotropic three-dimensional images from the multiple-directional scanning dataset, we use the standard inverse Radon transform originating from X-ray CT. We acquired a three-dimensional microsphere phantom image through the inverse Radon transform method and compared it with a single-elevational-scan three-dimensional image. The comparison shows that our method improves the elevational resolution by up to one order of magnitude, approaching the in-plane lateral-direction resolution. In vivo rat images were also acquired.
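
The standard inverse Radon transform borrowed from X-ray CT amounts to filtered backprojection. A minimal sketch on a toy numeric phantom (not photoacoustic data; `scipy.ndimage.rotate` performs the rotations, and the geometry below is an assumption for illustration):

```python
import numpy as np
from scipy.ndimage import rotate

def radon(img, angles_deg):
    """Forward Radon transform: rotate the image, then integrate along rows."""
    return np.stack([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def iradon_fbp(sino, angles_deg):
    """Filtered backprojection with a ramp filter applied in the Fourier domain."""
    n = sino.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))
    filt = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))
    recon = np.zeros((n, n))
    for proj, a in zip(filt, angles_deg):
        # smear each filtered projection back across the image, then rotate back
        recon += rotate(np.tile(proj, (n, 1)), -a, reshape=False, order=1)
    return recon * np.pi / (2 * len(angles_deg))

n = 64
phantom = np.zeros((n, n))
phantom[38:42, 24:28] = 1.0              # small off-centre block
angles = np.arange(0.0, 180.0, 3.0)      # scan over 180 degrees, as in the paper
recon = iradon_fbp(radon(phantom, angles), angles)
peak = np.unravel_index(np.argmax(recon), recon.shape)
print(peak)  # near the block at rows 38-41, cols 24-27
```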

  16. Alternative model of space-charge-limited thermionic current flow through a plasma

    NASA Astrophysics Data System (ADS)

    Campanell, M. D.

    2018-04-01

    It is widely assumed that thermionic current flow through a plasma is limited by a "space-charge-limited" (SCL) cathode sheath that consumes the hot cathode's negative bias and accelerates upstream ions into the cathode. Here, we formulate a fundamentally different current-limited mode. In the "inverse" mode, the potentials of both electrodes are above the plasma potential, so that the plasma ions are confined. The bias is consumed by the anode sheath. There is no potential gradient in the neutral plasma region from resistivity or presheath. The inverse cathode sheath pulls some thermoelectrons back to the cathode, thereby limiting the circuit current. Thermoelectrons entering the zero-field plasma region that undergo collisions may also be sent back to the cathode, further attenuating the circuit current. In planar geometry, the plasma density is shown to vary linearly across the electrode gap. A continuum kinetic planar plasma diode simulation model is set up to compare the properties of current modes with classical, conventional SCL, and inverse cathode sheaths. SCL modes can exist only if charge-exchange collisions are turned off in the potential well of the virtual cathode to prevent ion trapping. With the collisions, the current-limited equilibrium must be inverse. Inverse operating modes should therefore be present or possible in many plasma devices that rely on hot cathodes. Evidence from past experiments is discussed. The inverse mode may offer opportunities to minimize sputtering and power consumption that were not previously explored due to the common assumption of SCL sheaths.

  17. Using informative priors in facies inversion: The case of C-ISR method

    NASA Astrophysics Data System (ADS)

    Valakas, G.; Modis, K.

    2016-08-01

    Inverse problems involving the characterization of hydraulic properties of groundwater flow systems by conditioning on observations of the state variables are mathematically ill-posed because they have multiple solutions and are sensitive to small changes in the data. In the framework of McMC methods for nonlinear optimization and under an iterative spatial resampling transition kernel, we present an algorithm for narrowing the prior and thus producing improved proposal realizations. To achieve this goal, we cosimulate the facies distribution conditionally to facies observations and normal scores transformed hydrologic response measurements, assuming a linear coregionalization model. The approach works by creating an importance sampling effect that steers the process to selected areas of the prior. The effectiveness of our approach is demonstrated by an example application on a synthetic underdetermined inverse problem in aquifer characterization.

  18. Bayesian inference in geomagnetism

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1988-01-01

    The inverse problem in empirical geomagnetic modeling is investigated, with critical examination of recently published studies. Particular attention is given to the use of Bayesian inference (BI) to select the damping parameter lambda in the uniqueness portion of the inverse problem. The mathematical bases of BI and stochastic inversion are explored, with consideration of bound-softening problems and resolution in linear Gaussian BI. The problem of estimating the radial magnetic field B(r) at the earth core-mantle boundary from surface and satellite measurements is then analyzed in detail, with specific attention to the selection of lambda in the studies of Gubbins (1983) and Gubbins and Bloxham (1985). It is argued that the selection method is inappropriate and leads to lambda values much larger than those that would result if a reasonable bound on the heat flow at the CMB were assumed.

  19. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons

    NASA Astrophysics Data System (ADS)

    Sanchez-Parcerisa, D.; Cortés-Giraldo, M. A.; Dolney, D.; Kondrla, M.; Fager, M.; Carabe, A.

    2016-02-01

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm-1) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.

  1. Cerebellar-inspired algorithm for adaptive control of nonlinear dielectric elastomer-based artificial muscle

    PubMed Central

    Assaf, Tareq; Rossiter, Jonathan M.; Porrill, John

    2016-01-01

    Electroactive polymer actuators are important for soft robotics, but can be difficult to control because of compliance, creep and nonlinearities. Because biological control mechanisms have evolved to deal with such problems, we investigated whether a control scheme based on the cerebellum would be useful for controlling a nonlinear dielectric elastomer actuator, a class of artificial muscle. The cerebellum was represented by the adaptive filter model, and acted in parallel with a brainstem, an approximate inverse plant model. The recurrent connections between the two allowed for direct use of sensory error to adjust motor commands. Accurate tracking of a displacement command in the actuator's nonlinear range was achieved by either semi-linear basis functions in the cerebellar model or semi-linear functions in the brainstem corresponding to recruitment in biological muscle. In addition, allowing transfer of training between cerebellum and brainstem as has been observed in the vestibulo-ocular reflex prevented the steady increase in cerebellar output otherwise required to deal with creep. The extensibility and relative simplicity of the cerebellar-based adaptive-inverse control scheme suggests that it is a plausible candidate for controlling this type of actuator. Moreover, its performance highlights important features of biological control, particularly nonlinear basis functions, recruitment and transfer of training. PMID:27655667
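
A stripped-down version of the adaptive-inverse idea can be sketched with an LMS-trained radial-basis "cerebellum" correcting a fixed "brainstem" command; the tanh plant, basis functions, and gains below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def plant(u):
    return np.tanh(u)  # hypothetical stand-in for the nonlinear actuator

centers = np.linspace(-1.0, 1.0, 9)
def basis(x):
    return np.exp(-((x - centers) ** 2) / 0.08)  # Gaussian basis functions

w = np.zeros_like(centers)   # adaptive ("cerebellar") weights
gain, eta = 1.0, 0.1         # fixed "brainstem" inverse gain, learning rate

errs = []
for step in range(3000):
    target = rng.uniform(-0.9, 0.9)
    u = gain * target + w @ basis(target)  # feedforward motor command
    e = target - plant(u)                  # sensory error
    w += eta * e * basis(target)           # LMS update driven by sensory error
    errs.append(abs(e))

print(np.mean(errs[:100]), np.mean(errs[-100:]))  # tracking error shrinks with training
```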

  2. Frequency dependent Qα and Qβ in the Umbria-Marche (Italy) region using a quadratic approximation of the coda-normalization method

    NASA Astrophysics Data System (ADS)

    de Lorenzo, Salvatore; Bianco, Francesca; Del Pezzo, Edoardo

    2013-06-01

    The coda normalization method is one of the most widely used methods for inferring the attenuation parameters Qα and Qβ. Since the geometrical spreading exponent γ is an unknown model parameter in this method, most studies assume a fixed γ, generally equal to 1. However, γ and Q can also be jointly inferred from the non-linear inversion of coda-normalized logarithms of amplitudes, although the trade-off between γ and Q can give rise to unreasonable values of these parameters. To minimize this trade-off, an inversion method based on a parabolic expression of the coda-normalization equation has been developed. The method has been applied to the waveforms recorded during the 1997 Umbria-Marche seismic crisis. The Akaike criterion has been used to compare results of the parabolic model with those of the linear model, corresponding to γ = 1. A small deviation from spherical geometrical spreading has been inferred, but it is accompanied by a significant variation of the Qα and Qβ values. For almost all the considered stations, Qα smaller than Qβ has been inferred, confirming that seismic attenuation in the Umbria-Marche region is controlled by crustal pore fluids.
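
The γ–Q trade-off can be seen in a toy joint inversion: taking logs of A(r) = A0 r^(-γ) exp(-π f r / (Q v)) makes the problem linear in (ln A0, γ, π f/(Q v)), so both parameters can be fit together. The synthetic values below are hypothetical, and this is not the authors' parabolic method:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic coda-normalized amplitudes (hypothetical values):
# ln A(r) = ln A0 - gamma * ln r - (pi * f / (Q * v)) * r
f, v, gamma_true, Q_true = 10.0, 3.5, 1.0, 200.0    # Hz, km/s
r = np.linspace(20.0, 150.0, 30)                    # hypocentral distances, km
lnA = 5.0 - gamma_true * np.log(r) - np.pi * f * r / (Q_true * v)
lnA += rng.normal(0.0, 0.02, r.size)                # measurement noise

# Joint linear inversion for (ln A0, gamma, b) with b = pi * f / (Q * v)
G = np.column_stack([np.ones_like(r), -np.log(r), -r])
(lnA0, gamma, b), *_ = np.linalg.lstsq(G, lnA, rcond=None)
Q = np.pi * f / (b * v)
print(np.round(gamma, 2), np.round(Q, 1))  # recovers gamma ~ 1 and Q ~ 200
```

The near-collinearity of ln r and r over a limited distance range is exactly the trade-off the abstract describes: noisier data inflate the joint uncertainty of γ and Q.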

  3. Sampling-free Bayesian inversion with adaptive hierarchical tensor representations

    NASA Astrophysics Data System (ADS)

    Eigel, Martin; Marschall, Manuel; Schneider, Reinhold

    2018-03-01

    A sampling-free approach to Bayesian inversion with an explicit polynomial representation of the parameter densities is developed, based on an affine-parametric representation of a linear forward model. This becomes feasible due to the complete treatment in function spaces, which requires an efficient model reduction technique for numerical computations. The advocated perspective yields the crucial benefit that error bounds can be derived for all occurring approximations, leading to provable convergence subject to the discretization parameters. Moreover, it enables a fully adaptive a posteriori control with automatic problem-dependent adjustments of the employed discretizations. The method is discussed in the context of modern hierarchical tensor representations, which are used for the evaluation of a random PDE (the forward model) and the subsequent high-dimensional quadrature of the log-likelihood, alleviating the ‘curse of dimensionality’. Numerical experiments demonstrate the performance and confirm the theoretical results.

  4. Numerical modelling of instantaneous plate tectonics

    NASA Technical Reports Server (NTRS)

    Minster, J. B.; Haines, E.; Jordan, T. H.; Molnar, P.

    1974-01-01

    Assuming lithospheric plates to be rigid, 68 spreading rates, 62 fracture zone trends, and 106 earthquake slip vectors are systematically inverted to obtain a self-consistent model of instantaneous relative motions for eleven major plates. The inverse problem is linearized and solved iteratively by a maximum-likelihood procedure. Because the uncertainties in the data are small, Gaussian statistics are shown to be adequate. The use of a linear theory permits (1) the calculation of the uncertainties in the various angular velocity vectors caused by uncertainties in the data, and (2) quantitative examination of the distribution of information within the data set. The existence of a self-consistent model satisfying all the data is strong justification of the rigid plate assumption. Slow movement between North and South America is shown to be resolvable.
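
The core linearity exploited here is that a rigid rotation gives surface velocities v = ω × r, which is linear in the angular velocity vector ω. A synthetic least-squares sketch (site positions and ω below are hypothetical, and real plate-motion data are rates and azimuths rather than full vectors):

```python
import numpy as np

rng = np.random.default_rng(3)

def skew(r):
    """Cross-product matrix: skew(r) @ w == np.cross(r, w)."""
    return np.array([[0, -r[2], r[1]],
                     [r[2], 0, -r[0]],
                     [-r[1], r[0], 0]])

# Hypothetical rigid-plate example: surface velocity v = omega x r at each site.
omega_true = np.array([0.2, -0.5, 0.8])                # angular velocity (arbitrary units)
sites = rng.normal(size=(12, 3))
sites /= np.linalg.norm(sites, axis=1, keepdims=True)  # unit position vectors

v_obs = np.cross(omega_true, sites) + rng.normal(0, 0.01, (12, 3))

# v_i = omega x r_i = -skew(r_i) @ omega  ->  stack into one linear system
A = np.vstack([-skew(r) for r in sites])
omega_est, *_ = np.linalg.lstsq(A, v_obs.ravel(), rcond=None)
print(np.round(omega_est, 2))  # close to omega_true
```

With a linear model, the covariance of ω follows directly from the data covariance via (AᵀA)⁻¹, which is point (1) in the abstract.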

  5. Linear sampling method applied to non destructive testing of an elastic waveguide: theory, numerics and experiments

    NASA Astrophysics Data System (ADS)

    Baronian, Vahan; Bourgeois, Laurent; Chapuis, Bastien; Recoquillay, Arnaud

    2018-07-01

    This paper presents an application of the linear sampling method to ultrasonic non destructive testing of an elastic waveguide. In particular, the NDT context implies that both the solicitations and the measurements are located on the surface of the waveguide and are given in the time domain. Our strategy consists in using a modal formulation of the linear sampling method at multiple frequencies, such modal formulation being justified theoretically in Bourgeois et al (2011 Inverse Problems 27 055001) for rigid obstacles and in Bourgeois and Lunéville (2013 Inverse Problems 29 025017) for cracks. Our strategy requires the inversion of some emission and reception matrices which deserve some special attention due to potential ill-conditioning. The feasibility of our method is proved with the help of artificial data as well as real data.

  6. Revisiting the Impact of NCLB High-Stakes School Accountability, Capacity, and Resources: State NAEP 1990-2009 Reading and Math Achievement Gaps and Trends

    ERIC Educational Resources Information Center

    Lee, Jaekyung; Reeves, Todd

    2012-01-01

    This study examines the impact of high-stakes school accountability, capacity, and resources under NCLB on reading and math achievement outcomes through comparative interrupted time-series analyses of 1990-2009 NAEP state assessment data. Through hierarchical linear modeling latent variable regression with inverse probability of treatment…

  7. Box spaces in pictorial space: linear perspective versus templates

    NASA Astrophysics Data System (ADS)

    de Ridder, Huib; Pont, Sylvia C.

    2012-03-01

    In the past decades perceptual (or perceived) image quality has been one of the most important criteria for evaluating digitally processed image and video content. With the growing popularity of new media like stereoscopic displays there is a tendency to replace image quality with viewing experience as the ultimate criterion. Adopting such a high-level psychological criterion calls for a rethinking of the premises underlying human judgment. One premise is that perception is about accurately reconstructing the physical world in front of you ("inverse optics"). That is, human vision is striving for veridicality. The present study investigated one of its consequences, namely, that linear perspective will always yield the correct description of the perceived 3D geometry in 2D images. To this end, human observers adjusted the frontal view of a wireframe box on a television screen so as to look equally deep and wide (i.e. to look like a cube) or twice as deep as wide. In a number of stimulus configurations, the results showed huge deviations from veridicality suggesting that the inverse optics model fails. Instead, the results seem to be more in line with a model of "vision as optical interface".

  8. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

    Estimating the rupture pattern for large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainty can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude and a long-period character, the W phase is regularly used to estimate point source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of 3 component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization and regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating the matrix based on the residual errors of consecutive inversions. Then, a covariance matrix for the parameters is computed using a Bayesian approach. 
The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of their slip distributions. Also, reliable solutions are generally obtained with data in a 30-minute window following the origin time, suggesting that a real-time system could obtain solutions in less than one hour following the origin time.
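
The regularization machinery described above, Tikhonov damping with the parameter chosen by the discrepancy principle on a grid, can be sketched on a generic smoothing problem. The Gaussian kernel and noise level below are hypothetical, not W-phase data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Ill-posed toy problem (hypothetical): Gaussian smoothing of a spiky model.
n = 60
x = np.arange(n)
G = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 4.0) ** 2)
m_true = np.zeros(n)
m_true[[15, 40]] = 1.0
sigma = 0.05
d = G @ m_true + rng.normal(0, sigma, n)

L = np.eye(n)                 # zeroth-order Tikhonov regularization
target = sigma * np.sqrt(n)   # discrepancy principle: ||G m - d|| ~ noise level

def solve(lam):
    """Regularized normal equations: (G'G + lam^2 L'L) m = G' d."""
    return np.linalg.solve(G.T @ G + lam**2 * (L.T @ L), G.T @ d)

# Grid search over the regularization parameter
lams = np.logspace(-4, 2, 61)
misfits = np.array([np.linalg.norm(G @ solve(l) - d) for l in lams])
best = lams[np.argmin(np.abs(misfits - target))]
print(best, np.linalg.norm(G @ solve(best) - d), target)
```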

  9. Bilinear Inverse Problems: Theory, Algorithms, and Applications

    NASA Astrophysics Data System (ADS)

    Ling, Shuyang

    We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts, self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of the joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both the simple linear least squares approach and the SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. 
Explicit theoretical guarantees and stability theory are derived, and the sampling complexity is nearly optimal (up to a poly-log factor). Applications in imaging sciences and signal processing are discussed, and numerical simulations are presented to demonstrate the effectiveness and efficiency of our approach.

  10. Analytically based forward and inverse models of fluvial landscape evolution during temporally continuous climatic and tectonic variations

    NASA Astrophysics Data System (ADS)

    Goren, Liran; Petit, Carole

    2017-04-01

    Fluvial channels respond to changing tectonic and climatic conditions by adjusting their patterns of erosion and relief. It is therefore expected that by examining these patterns, we can infer the tectonic and climatic conditions that shaped the channels. However, the potential interference between climatic and tectonic signals complicates this inference. Within the framework of the stream power model that describes incision rate of mountainous bedrock rivers, climate variability has two effects: it influences the erosive power of the river, causing local slope change, and it changes the fluvial response time that controls the rate at which tectonically and climatically induced slope breaks are communicated upstream. Because of this dual role, the fluvial response time during continuous climate change has so far been elusive, which hinders our understanding of environmental signal propagation and preservation in the fluvial topography. An analytic solution of the stream power model during general tectonic and climatic histories gives rise to a new definition of the fluvial response time. The analytic solution offers accurate predictions for landscape evolution that are hard to achieve with classical numerical schemes and thus can be used to validate and evaluate the accuracy of numerical landscape evolution models. The analytic solution together with the new definition of the fluvial response time allow inferring either the tectonic history or the climatic history from river long profiles by using simple linear inversion schemes. Analytic study of landscape evolution during periodic climate change reveals that high frequency (10-100 kyr) climatic oscillations with respect to the response time, such as Milankovitch cycles, are not expected to leave significant fingerprints in the upstream reaches of fluvial channels. 
Linear inversion schemes are applied to the Tinee river tributaries in the southern French Alps, where tributary long profiles are used to recover the incision rate history of the Tinee main trunk. Inversion results show periodic, high incision rate pulses, which are correlated with interglacial episodes. Similar incision rate histories are recovered for the past 100 kyr when assuming constant climatic conditions or periodic climatic oscillations, in agreement with theoretical predictions.

  11. Computing the Moore-Penrose Inverse of a Matrix with a Computer Algebra System

    ERIC Educational Resources Information Center

    Schmidt, Karsten

    2008-01-01

    In this paper "Derive" functions are provided for the computation of the Moore-Penrose inverse of a matrix, as well as for solving systems of linear equations by means of the Moore-Penrose inverse. Making it possible to compute the Moore-Penrose inverse easily with one of the most commonly used Computer Algebra Systems--and to have the blueprint…
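
The same computations translate directly to other systems; the sketch below uses Python/NumPy rather than the Derive CAS the paper is about, verifying the four Penrose conditions and solving an overdetermined linear system:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 5.0]])          # more equations than unknowns, full column rank
b = np.array([1.0, 2.0, 2.0])

A_pinv = np.linalg.pinv(A)          # Moore-Penrose inverse via the SVD
x = A_pinv @ b                      # least-squares solution of A x ~ b

# The four Penrose conditions characterize the pseudoinverse:
assert np.allclose(A @ A_pinv @ A, A)
assert np.allclose(A_pinv @ A @ A_pinv, A_pinv)
assert np.allclose((A @ A_pinv).T, A @ A_pinv)
assert np.allclose((A_pinv @ A).T, A_pinv @ A)

# It agrees with NumPy's least-squares solver:
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_lstsq))  # True
```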

  12. Assigning uncertainties in the inversion of NMR relaxation data.

    PubMed

    Parker, Robert L; Song, Yi-Qiao

    2005-06-01

    Recovering the relaxation-time density function (or distribution) from NMR decay records requires inverting a Laplace transform based on noisy data, an ill-posed inverse problem. An important objective in the face of the consequent ambiguity in the solutions is to establish what reliable information is contained in the measurements. To this end we describe how upper and lower bounds on linear functionals of the density function, and on ratios of linear functionals, can be calculated using optimization theory. The bounded quantities cover most of those commonly used in geophysical NMR, such as porosity, T2 log-mean, and bound fluid volume fraction, and include averages of the density function itself over any finite interval. In the theory presented, statistical considerations enter to account for the presence of significant noise in the signal, but not through a prior characterization of density models. Our characterization of the uncertainties is conservative and informative; it will have wide application in geophysical NMR and elsewhere.
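
Bounding a linear functional of a nonnegative density subject to the data reduces to a pair of linear programs when, for simplicity, the fit constraint is taken as exact equality (the paper instead constrains the misfit of noisy data). The kernel and numbers below are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Toy ill-posed problem (hypothetical numbers): a Laplace-type kernel
# maps a nonnegative density f to a few noiseless data values d = K f.
t = np.linspace(0.1, 1.0, 40)            # relaxation-time grid
s = np.array([0.5, 1.0, 2.0, 4.0])       # measurement "times"
K = np.exp(-np.outer(s, t))
f_true = np.exp(-0.5 * ((t - 0.4) / 0.1) ** 2)
d = K @ f_true

c = (t < 0.5).astype(float)              # functional: mass below t = 0.5

# Bound c @ f over all nonnegative densities f that fit the data exactly
lo = linprog(c, A_eq=K, b_eq=d, bounds=[(0, None)] * t.size, method="highs")
hi = linprog(-c, A_eq=K, b_eq=d, bounds=[(0, None)] * t.size, method="highs")
print(lo.fun, -hi.fun, c @ f_true)  # lower bound <= true value <= upper bound
```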

  13. The advantages of logarithmically scaled data for electromagnetic inversion

    NASA Astrophysics Data System (ADS)

    Wheelock, Brent; Constable, Steven; Key, Kerry

    2015-06-01

    Non-linear inversion algorithms traverse a data misfit space over multiple iterations of trial models in search of either a global minimum or some target misfit contour. The success of the algorithm in reaching that objective depends upon the smoothness and predictability of the misfit space. For any given observation, there is no absolute form a datum must take, and therefore no absolute definition for the misfit space; in fact, there are many alternatives. However, not all misfit spaces are equal in terms of promoting the success of inversion. In this work, we appraise three common forms that complex data take in electromagnetic geophysical methods: real and imaginary components, a power of amplitude and phase, and logarithmic amplitude and phase. We find that the optimal form is logarithmic amplitude and phase. Single-parameter misfit curves of log-amplitude and phase data for both magnetotelluric and controlled-source electromagnetic methods are the smoothest of the three data forms and do not exhibit flattening at low model resistivities. Synthetic, multiparameter, 2-D inversions illustrate that log-amplitude and phase is the most robust data form, converging to the target misfit contour in the fewest steps regardless of starting model and the amount of noise added to the data; inversions using the other two data forms run slower or fail under various starting models and proportions of noise. It is observed that inversion with log-amplitude and phase data is nearly two times faster in converging to a solution than with other data types. We also assess the statistical consequences of transforming data in the ways discussed in this paper. With the exception of real and imaginary components, which are assumed to be Gaussian, all other data types do not produce an expected mean-squared misfit value of 1.00 at the true model (a common assumption) as the errors in the complex data become large. 
We recommend that real and imaginary data with errors larger than 10 per cent of the complex amplitude be withheld from a log-amplitude and phase inversion rather than retaining them with large error-bars.

  14. Bayesian seismic inversion based on rock-physics prior modeling for the joint estimation of acoustic impedance, porosity and lithofacies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio

    We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the well data multimodal distribution and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
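
For a single Gaussian prior and a linearized forward operator, the posterior is indeed available in closed form. A generic linear-Gaussian sketch (dimensions and covariances below are hypothetical, not the seismic model):

```python
import numpy as np

rng = np.random.default_rng(5)

# Minimal linear-Gaussian sketch: with a linearized forward operator G,
# Gaussian prior N(mu_pr, C_pr) and noise N(0, C_e), the posterior is
# Gaussian with a closed-form mean and covariance.
n, k = 8, 5
G = rng.normal(size=(k, n))
C_pr = np.eye(n)
C_e = 0.1 * np.eye(k)
mu_pr = np.zeros(n)

m_true = rng.normal(size=n)
d = G @ m_true + rng.multivariate_normal(np.zeros(k), C_e)

S = G @ C_pr @ G.T + C_e                 # data-space covariance
Kg = C_pr @ G.T @ np.linalg.inv(S)       # "Kalman gain"
mu_post = mu_pr + Kg @ (d - G @ mu_pr)
C_post = C_pr - Kg @ G @ C_pr

print(np.trace(C_post) < np.trace(C_pr))  # conditioning always reduces uncertainty: True
```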

  15. A statistical assessment of seismic models of the U.S. continental crust using Bayesian inversion of ambient noise surface wave dispersion data

    NASA Astrophysics Data System (ADS)

    Olugboji, T. M.; Lekic, V.; McDonough, W.

    2017-07-01

    We present a new approach for evaluating existing crustal models using ambient noise data sets and its associated uncertainties. We use a transdimensional hierarchical Bayesian inversion approach to invert ambient noise surface wave phase dispersion maps for Love and Rayleigh waves using measurements obtained from Ekström (2014). Spatiospectral analysis shows that our results are comparable to a linear least squares inverse approach (except at higher harmonic degrees), but the procedure has additional advantages: (1) it yields an autoadaptive parameterization that follows Earth structure without making restricting assumptions on model resolution (regularization or damping) and data errors; (2) it can recover non-Gaussian phase velocity probability distributions while quantifying the sources of uncertainties in the data measurements and modeling procedure; and (3) it enables statistical assessments of different crustal models (e.g., CRUST1.0, LITHO1.0, and NACr14) using variable resolution residual and standard deviation maps estimated from the ensemble. These assessments show that in the stable old crust of the Archean, the misfits are statistically negligible, requiring no significant update to crustal models from the ambient noise data set. In other regions of the U.S., significant updates to regionalization and crustal structure are expected especially in the shallow sedimentary basins and the tectonically active regions, where the differences between model predictions and data are statistically significant.

  16. Highway traffic estimation of improved precision using the derivative-free nonlinear Kalman Filter

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Siano, Pierluigi; Zervos, Nikolaos; Melkikh, Alexey

    2015-12-01

    The paper proves that the PDE dynamic model of highway traffic is differentially flat and, by applying spatial discretization, shows that the model can be transformed into an equivalent linear canonical state-space form. For the latter representation of the traffic's dynamics, state estimation is performed with the use of the Derivative-free nonlinear Kalman Filter. The proposed filter consists of the Kalman Filter recursion applied on the transformed state-space model of the highway traffic. Moreover, it makes use of an inverse transformation, based again on differential flatness theory, which makes it possible to obtain estimates of the state variables of the initial nonlinear PDE model. By avoiding approximate linearizations and the truncation of nonlinear terms from the PDE model of the traffic's dynamics, the proposed filtering method outperforms, in terms of accuracy, other nonlinear estimators such as the Extended Kalman Filter. The article's theoretical findings are confirmed through simulation experiments.
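Once the model is in linear canonical form, the filter above is the standard Kalman recursion. A generic predict/update cycle, sketched for an arbitrary linear state-space model (names and matrices hypothetical, not the paper's traffic model):

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update cycle of the linear Kalman filter.

    State model: x' = A x + w, w ~ N(0, Q); measurement: z = C x + v, v ~ N(0, R).
    """
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = C @ P_pred @ C.T + R             # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

In the derivative-free scheme this recursion runs on the flatness-transformed coordinates, and the inverse flatness transformation maps the estimates back to the original PDE state variables.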

  17. Effect of step width manipulation on tibial stress during running.

    PubMed

    Meardon, Stacey A; Derrick, Timothy R

    2014-08-22

    Narrow step width has been linked to variables associated with tibial stress fracture. The purpose of this study was to evaluate the effect of step width on bone stresses using a standardized model of the tibia. 15 runners ran at their preferred 5k running velocity in three running conditions, preferred step width (PSW) and PSW±5% of leg length. 10 successful trials of force and 3-D motion data were collected. A combination of inverse dynamics, musculoskeletal modeling and beam theory was used to estimate stresses applied to the tibia using subject-specific anthropometrics and motion data. The tibia was modeled as a hollow ellipse. Multivariate analysis revealed that tibial stresses at the distal 1/3 of the tibia differed with step width manipulation (p=0.002). Compression on the posterior and medial aspect of the tibia was inversely related to step width such that as step width increased, compression on the surface of the tibia decreased (linear trend p=0.036 and 0.003). Similarly, tension on the anterior surface of the tibia decreased as step width increased (linear trend p=0.029). Widening step width linearly reduced shear stress at all 4 sites (p<0.001 for all). The data from this study suggest that stresses experienced by the tibia during running were influenced by step width when using a standardized model of the tibia. Wider step widths were generally associated with reduced loading of the tibia and may benefit runners at risk of or experiencing stress injury at the tibia, especially if they present with a crossover running style. Copyright © 2014 Elsevier Ltd. All rights reserved.
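The hollow-ellipse beam model used above boils down to the textbook normal-stress formula sigma = F/A + M*y/I with the section properties of a hollow elliptical cross-section. A sketch under that assumption (all dimensions hypothetical, not the study's subject-specific anthropometrics):

```python
import numpy as np

def hollow_ellipse_stress(F_axial, M_bend, a, b, ai, bi, y):
    """Normal stress (Pa) at distance y from the neutral axis of a hollow
    elliptical beam section: sigma = F/A + M*y/I (classic beam theory).

    a, b  : outer semi-axes (m);  ai, bi : inner semi-axes (m).
    Bending is taken about the centroidal axis parallel to semi-axis a.
    """
    A = np.pi * (a * b - ai * bi)               # cross-sectional area
    I = np.pi / 4.0 * (a * b**3 - ai * bi**3)   # second moment of area
    return F_axial / A + M_bend * y / I
```

With ai = bi = 0 this reduces to the solid ellipse, and with a = b to the familiar circular-section formulas, which makes the section properties easy to sanity-check.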

  18. Model Parameterization and P-wave AVA Direct Inversion for Young's Impedance

    NASA Astrophysics Data System (ADS)

    Zong, Zhaoyun; Yin, Xingyao

    2017-05-01

    AVA inversion is an important tool for elastic parameters estimation to guide the lithology prediction and "sweet spot" identification of hydrocarbon reservoirs. The product of the Young's modulus and density (named as Young's impedance in this study) is known as an effective lithology and brittleness indicator of unconventional hydrocarbon reservoirs. Density is difficult to predict from seismic data, which renders the estimation of the Young's impedance inaccurate in conventional approaches. In this study, a pragmatic seismic AVA inversion approach with only P-wave pre-stack seismic data is proposed to estimate the Young's impedance to avoid the uncertainty brought by density. First, based on the linearized P-wave approximate reflectivity equation in terms of P-wave and S-wave moduli, the P-wave approximate reflectivity equation in terms of the Young's impedance is derived according to the relationship between P-wave modulus, S-wave modulus, Young's modulus and Poisson ratio. This equation is further compared to the exact Zoeppritz equation and the linearized P-wave approximate reflectivity equation in terms of P- and S-wave velocities and density, which illustrates that this equation is accurate enough to be used for AVA inversion when the incident angle is within the critical angle. Parameter sensitivity analysis illustrates that the high correlation between the Young's impedance and density renders the estimation of the Young's impedance difficult. Therefore, a de-correlation scheme is used in the pragmatic AVA inversion with Bayesian inference to estimate Young's impedance only with pre-stack P-wave seismic data. Synthetic examples demonstrate that the proposed approach is able to predict the Young's impedance stably even with moderate noise and the field data examples verify the effectiveness of the proposed approach in Young's impedance estimation and "sweet spots" evaluation.

  19. Field theory of the inverse cascade in two-dimensional turbulence

    NASA Astrophysics Data System (ADS)

    Mayo, Jackson R.

    2005-11-01

    A two-dimensional fluid, stirred at high wave numbers and damped by both viscosity and linear friction, is modeled by a statistical field theory. The fluid’s long-distance behavior is studied using renormalization-group (RG) methods, as begun by Forster, Nelson, and Stephen [Phys. Rev. A 16, 732 (1977)]. With friction, which dissipates energy at low wave numbers, one expects a stationary inverse energy cascade for strong enough stirring. While such developed turbulence is beyond the quantitative reach of perturbation theory, a combination of exact and perturbative results suggests a coherent picture of the inverse cascade. The zero-friction fluctuation-dissipation theorem (FDT) is derived from a generalized time-reversal symmetry and implies zero anomalous dimension for the velocity even when friction is present. Thus the Kolmogorov scaling of the inverse cascade cannot be explained by any RG fixed point. The β function for the dimensionless coupling ĝ is computed through two loops; the ĝ3 term is positive, as already known, but the ĝ5 term is negative. An ideal cascade requires a linear β function for large ĝ , consistent with a Padé approximant to the Borel transform. The conjecture that the Kolmogorov spectrum arises from an RG flow through large ĝ is compatible with other results, but the accurate k-5/3 scaling is not explained and the Kolmogorov constant is not estimated. The lack of scale invariance should produce intermittency in high-order structure functions, as observed in some but not all numerical simulations of the inverse cascade. When analogous RG methods are applied to the one-dimensional Burgers equation using an FDT-preserving dimensional continuation, equipartition is obtained instead of a cascade—in agreement with simulations.

  20. Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.

    NASA Astrophysics Data System (ADS)

    Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.

    2016-12-01

    Image reconstruction algorithms derived from electrical resistivity tomography (ERT) are highly non-linear, sparse, and ill-posed. The inverse problem is much more severe when dealing with 3-D datasets that result in large matrices. Conventional gradient-based techniques using L2 norm minimization with some form of regularization can impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that takes advantage of the inherent sparsity of the parameter space in one form or another. If favorable conditions are met, CS has been shown to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open source 3-D resistivity inversion tool using the CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007) with CS as the inverse code. A discrete cosine transformation (DCT) function was used to induce model sparsity in orthogonal form. Two CS-based algorithms, viz. the interior point method and two-step IST, were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS effectively reconstructed the sub-surface image with less computational cost. This was observed by a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.
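The iterative soft-thresholding family evaluated above can be illustrated with basic ISTA for the sparse least-squares problem min ||Ax - d||^2/2 + lam*||x||_1. This is a generic sketch of the technique, not the paper's two-step IST or its ERT operators (all names and sizes hypothetical):

```python
import numpy as np

def ista(A, d, lam, n_iter=200):
    """Iterative soft-thresholding for min 0.5*||A x - d||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - d) / L      # gradient step on the data misfit
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x
```

The shrinkage step is what promotes sparsity (here directly in x; in the paper the sparsifying basis is a DCT, so the shrinkage would act on DCT coefficients instead).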

  1. The association of trajectories of protein intake and age-specific protein intakes from 2 to 22 years with BMI in early adulthood.

    PubMed

    Wright, Melecia; Sotres-Alvarez, Daniela; Mendez, Michelle A; Adair, Linda

    2017-03-01

    No study has analysed how protein intake from early childhood to young adulthood relates to adult BMI in a single cohort. To estimate the association of protein intake at 2, 11, 15, 19 and 22 years with age- and sex-standardised BMI at 22 years (early adulthood), we used linear regression models with dietary and anthropometric data from a Filipino birth cohort (1985-2005, n 2586). We used latent growth curve analysis to identify trajectories of protein intake relative to age-specific recommended daily allowance (intake in g/kg body weight) from 2 to 22 years, then related trajectory membership to early adulthood BMI using linear regression models. Lean mass and fat mass were secondary outcomes. Regression models included socioeconomic, dietary and anthropometric confounders from early life and adulthood. Protein intake relative to needs at age 2 years was positively associated with BMI and lean mass at age 22 years, but intakes at ages 11, 15 and 22 years were inversely associated with early adulthood BMI. Individuals were classified into four mutually exclusive trajectories: (i) normal consumers (referent trajectory, 58 % of cohort), (ii) high protein consumers in infancy (20 %), (iii) usually high consumers (18 %) and (iv) always high consumers (5 %). Compared with the normal consumers, 'usually high' consumption was inversely associated with BMI, lean mass and fat mass at age 22 years whereas 'always high' consumption was inversely associated with lean mass in males. Proximal protein intakes were more important contributors to early adult BMI than early-childhood protein intake; protein intake history was differentially associated with adulthood body size.

  2. Manifestation of remote response over the equatorial Pacific in a climate model

    NASA Astrophysics Data System (ADS)

    Misra, Vasubandhu; Marx, L.

    2007-10-01

    In this paper we examine the simulations over the tropical Pacific Ocean from long-term simulations of two different versions of the Center for Ocean-Land-Atmosphere Studies (COLA) coupled climate model that have a different global distribution of the inversion clouds. We find that subtle changes made to the numerics of an empirical parameterization of the inversion clouds can result in a significant change in the coupled climate of the equatorial Pacific Ocean. In one coupled simulation of this study we enforce a simple linear spatial filtering of the diagnostic inversion clouds to ameliorate its spatial incoherency (as a result of the Gibbs effect) while in the other we conduct no such filtering. It is found from the comparison of these two simulations that changing the distribution of the shallow inversion clouds prevalent in the subsidence region of the subtropical high over the eastern oceans in this manner has a direct bearing on the surface wind stress through surface pressure modifications. The SST in the warm pool region responds to this modulation of the wind stress, thus affecting the convective activity over the warm pool region and also the large-scale Walker and Hadley circulation. The interannual variability of SST in the eastern equatorial Pacific Ocean is also modulated by this change to the inversion clouds. Consequently, this sensitivity has a bearing on the midlatitude height response. The same set of two experiments were conducted with the respective versions of the atmosphere general circulation model uncoupled from the ocean general circulation model but forced with observed SST to demonstrate that this sensitivity of the mean climate of the equatorial Pacific Ocean is unique to the coupled climate model where atmosphere, ocean and land interact. Therefore a strong case is made for adopting a coupled ocean-land-atmosphere framework to develop climate models, as against the usual practice of developing component models independently of each other.

  3. A python framework for environmental model uncertainty analysis

    USGS Publications Warehouse

    White, Jeremy; Fienen, Michael N.; Doherty, John E.

    2016-01-01

    We have developed pyEMU, a python framework for Environmental Modeling Uncertainty analyses, an open-source tool that is non-intrusive, easy-to-use, computationally efficient, and scalable to highly-parameterized inverse problems. The framework implements several types of linear (first-order, second-moment (FOSM)) and non-linear uncertainty analyses. The FOSM-based analyses can also be completed prior to parameter estimation to help inform important modeling decisions, such as parameterization and objective function formulation. Complete workflows for several types of FOSM-based and non-linear analyses are documented in example notebooks implemented using Jupyter that are available in the online pyEMU repository. Example workflows include basic parameter and forecast analyses, data worth analyses, and error-variance analyses, as well as usage of parameter ensemble generation and management capabilities. These workflows document the necessary steps and provide insights into the results, with the goal of educating users not only in how to apply pyEMU, but also in the underlying theory of applied uncertainty quantification.
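The core FOSM calculation behind such analyses is a linear propagation of prior parameter uncertainty through a Jacobian, conditioned on the observations. A minimal sketch of that calculation (generic linear algebra under the FOSM assumptions, not pyEMU's API; all names hypothetical):

```python
import numpy as np

def fosm_forecast_variance(J, Sig_theta, Sig_eps, y):
    """First-order second-moment (FOSM) forecast uncertainty.

    J        : Jacobian of observations w.r.t. parameters
    Sig_theta: prior parameter covariance
    Sig_eps  : observation noise covariance
    y        : sensitivity vector of the forecast to the parameters
    Returns (prior, posterior) forecast variances.
    """
    prior = y @ Sig_theta @ y
    S = J @ Sig_theta @ J.T + Sig_eps
    # Schur-complement update of the parameter covariance by the data
    Sig_post = Sig_theta - Sig_theta @ J.T @ np.linalg.solve(S, J @ Sig_theta)
    return prior, y @ Sig_post @ y
```

Because the update only subtracts a positive semi-definite term, the posterior forecast variance can never exceed the prior one, which is why FOSM "data worth" analyses can rank observations by how much they would reduce forecast uncertainty before any parameter estimation is run.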

  4. Mathematical programming models for the economic design and assessment of wind energy conversion systems

    NASA Astrophysics Data System (ADS)

    Reinert, K. A.

    The use of linear decision rules (LDR) and chance constrained programming (CCP) to optimize the performance of wind energy conversion clusters coupled to storage systems is described. Storage is modelled by LDR and output by CCP. The linear allocation rule and linear release rule prescribe the size of, and optimize, a storage facility with a bypass. Chance constraints are introduced to explicitly treat reliability in terms of an appropriate value from an inverse cumulative distribution function. Details of the deterministic programming structure and a sample problem involving a 500 kW and a 1.5 MW WECS are provided, considering an installed cost of $1/kW. Four demand patterns and three levels of reliability are analyzed for optimizing the generator choice and the storage configuration for base load and peak operating conditions. Deficiencies in the ability to predict reliability and to account for serial correlations are noted in the model, which is nonetheless concluded to be useful for narrowing WECS design options.

  5. Linearized inversion of multiple scattering seismic energy

    NASA Astrophysics Data System (ADS)

    Aldawood, Ali; Hoteit, Ibrahim; Zuberi, Mohammad

    2014-05-01

    Internal multiples deteriorate the quality of the migrated image obtained conventionally by imaging single scattering energy. So, imaging seismic data with the single-scattering assumption does not locate multiple-bounce events in their actual subsurface positions. However, imaging internal multiples properly has the potential to enhance the migrated image because they illuminate zones in the subsurface that are poorly illuminated by single scattering energy, such as nearly vertical faults. Standard migration of these multiples provides subsurface reflectivity distributions with low spatial resolution and migration artifacts due to the limited recording aperture, coarse source and receiver sampling, and the band-limited nature of the source wavelet. The resultant image obtained by the adjoint operator is a smoothed depiction of the true subsurface reflectivity model and is heavily masked by migration artifacts and the source wavelet fingerprint that needs to be properly deconvolved. Hence, we proposed a linearized least-square inversion scheme to mitigate the effect of the migration artifacts, enhance the spatial resolution, and provide more accurate amplitude information when imaging internal multiples. The proposed algorithm uses the least-square image based on the single-scattering assumption as a constraint to invert for the part of the image that is illuminated by internal scattering energy. Then, we posed the problem of imaging double-scattering energy as a least-square minimization problem that requires solving the normal equation of the following form: G^T G v = G^T d, (1) where G is a linearized forward modeling operator that predicts double-scattered seismic data, and G^T is the corresponding linearized adjoint operator that images double-scattered seismic data. Gradient-based optimization algorithms solve this linear system. Hence, we used a quasi-Newton optimization technique to find the least-square minimizer.
In this approach, an estimate of the Hessian matrix that contains curvature information is modified at every iteration by a low-rank update based on gradient changes at every step. At each iteration, the data residual is imaged using G^T to determine the model update. Application of the linearized inversion to synthetic data to image a vertical fault plane demonstrates the effectiveness of this methodology to properly delineate the vertical fault plane and give better amplitude information than the standard image migrated with the adjoint operator that takes internal multiples into account. Thus, least-square imaging of multiple scattering enhances the spatial resolution of the events illuminated by internal scattering energy. It also deconvolves the source signature and helps remove the fingerprint of the acquisition geometry. The final image is obtained by the superposition of the least-square solution based on the single scattering assumption and the least-square solution based on the double scattering assumption.
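Equation (1) only requires products with G and G^T, so any matrix-free solver applies. As a simple stand-in for the quasi-Newton solver the authors use, here is conjugate gradient on the normal equations (CGNR); the operator G is represented by a dense matrix purely for illustration:

```python
import numpy as np

def cgnr(G, d, n_iter=50, tol=1e-10):
    """Conjugate gradient on the normal equations G^T G v = G^T d.

    Only matrix-vector products with G and G^T are needed, matching the
    operator-based setting where G is a linearized forward-modeling operator.
    """
    v = np.zeros(G.shape[1])
    r = G.T @ (d - G @ v)        # residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Gp = G @ p
        alpha = rs / (Gp @ Gp)
        v += alpha * p
        r -= alpha * (G.T @ Gp)
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return v
```

In a migration context the `G @ p` product would be a demigration (forward modeling) of double-scattered data and `G.T @ …` the corresponding migration, exactly as in the abstract's residual-imaging step.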

  6. Computing Generalized Matrix Inverse on Spiking Neural Substrate

    PubMed Central

    Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen

    2018-01-01

    Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines. PMID:29593483
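On conventional hardware, a multiplication-only route to the Moore-Penrose inverse is the Ben-Israel and Cohen (Newton-Schulz) iteration, which, like the Hopfield-network formulation above, avoids explicit factorization; this sketch is a generic analogue, not the paper's TrueNorth implementation:

```python
import numpy as np

def pinv_iterative(A, n_iter=60):
    """Ben-Israel & Cohen iteration for the Moore-Penrose inverse.

    X_{k+1} = X_k (2 I - A X_k), started from X_0 = alpha * A^T with
    0 < alpha < 2 / sigma_max(A)^2, converges to pinv(A) using only
    matrix multiplications.
    """
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step: alpha * sigma_max^2 = 1
    X = alpha * A.T
    I = np.eye(A.shape[0])
    for _ in range(n_iter):
        X = X @ (2 * I - A @ X)
    return X
```

Because the update is pure matrix multiplication, it maps naturally onto substrates that only support multiply-accumulate operations, though (as the paper stresses) limited range and precision then become the dominant engineering concern.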

  7. Data fitting and image fine-tuning approach to solve the inverse problem in fluorescence molecular imaging

    NASA Astrophysics Data System (ADS)

    Gorpas, Dimitris; Politopoulos, Kostas; Yova, Dido; Andersson-Engels, Stefan

    2008-02-01

    One of the most challenging problems in medical imaging is to "see" a tumour embedded into tissue, which is a turbid medium, by using fluorescent probes for tumour labeling. This problem, despite the efforts made during the last years, has not been fully solved yet, due to the non-linear nature of the inverse problem and the convergence failures of many optimization techniques. This paper describes a robust solution of the inverse problem, based on data fitting and image fine-tuning techniques. As a forward solver, the coupled radiative transfer equation and diffusion approximation model is proposed and solved via a finite element method, enhanced with adaptive multi-grids for faster and more accurate convergence. A database is constructed by application of the forward model on virtual tumours with known geometry, and thus fluorophore distribution, embedded into simulated tissues. The fitting procedure produces the best matching between the real and virtual data, and thus provides the initial estimation of the fluorophore distribution. Using this information, the coupled radiative transfer equation and diffusion approximation model has the required initial values for a computationally reasonable and successful convergence during the image fine-tuning application.

  8. The inverse problems of wing panel manufacture processes

    NASA Astrophysics Data System (ADS)

    Oleinikov, A. I.; Bormotin, K. S.

    2013-12-01

    It is shown that inverse problems of steady-state creep bending of plates in both the geometrically linear and nonlinear formulations can be represented in a variational formulation. Steady-state values of the obtained functionals corresponding to the solutions of the problems of inelastic deformation and springback are determined by applying a finite element procedure to the functionals. Optimal laws of creep deformation are formulated using the criterion of minimizing damage in the functionals of the inverse problems. The formulated problems are reduced to problems solved by the finite element method using MSC.Marc software. Currently, forming of light metals poses tremendous challenges due to their low ductility at room temperature and their unusual deformation characteristics during hot-cold work: strong asymmetry between tensile and compressive behavior, and a very pronounced anisotropy. We used constitutive models of steady-state creep for initially transversely isotropic structural materials in which the kind of stress state has an influence. The paper gives the basics of the developed computer-aided system of design, modeling, and electronic simulation targeting the processes of manufacture of wing integral panels. The modeling results can be used to calculate the die tooling, determine the panel processibility, and control panel rejection in the course of forming.

  9. An evolutive real-time source inversion based on a linear inverse formulation

    NASA Astrophysics Data System (ADS)

    Sanchez Reyes, H. S.; Tago, J.; Cruz-Atienza, V. M.; Metivier, L.; Contreras Zazueta, M. A.; Virieux, J.

    2016-12-01

    Finite source inversion is a stepping stone toward unveiling earthquake rupture. It is used in ground motion prediction and its results shed light on the seismic cycle for better tectonic understanding. It is not yet used for quasi-real-time analysis. Nowadays, significant progress has been made on approaches regarding earthquake imaging, thanks to new data acquisition and methodological advances. However, most of these techniques are posterior procedures once seismograms are available. Incorporating source parameter estimation into early warning systems would require updating the source build-up while recording data. In order to go toward this dynamic estimation, we developed a kinematic source inversion formulated in the time-domain, for which seismograms are linearly related to the slip distribution on the fault through convolutions with Green's functions previously estimated and stored (Perton et al., 2016). These convolutions are performed in the time-domain as we progressively increase the time window of records at each station specifically. Selected unknowns are the spatio-temporal slip-rate distribution to keep the linearity of the forward problem with respect to unknowns, as promoted by Fan and Shearer (2014). Through the spatial extension of the expected rupture zone, we progressively build-up the slip-rate when adding new data by assuming rupture causality. This formulation is based on the adjoint-state method for efficiency (Plessix, 2006). The inverse problem is non-unique and, in most cases, underdetermined. While standard regularization terms are used for stabilizing the inversion, we avoid strategies based on parameter reduction leading to an unwanted non-linear relationship between parameters and seismograms for our progressive build-up. Rise time, rupture velocity and other quantities can be extracted later on as attributes from the slip-rate inversion we perform.
Satisfactory results are obtained on a synthetic example (Figure 1) proposed by the Source Inversion Validation project (Mai et al. 2011). A real case application is currently being explored. Our specific formulation, combined with simple prior information, as well as numerical results obtained so far, yields interesting perspectives for a real-time implementation.

  10. Reconstructing paleoclimate fields using online data assimilation with a linear inverse model

    NASA Astrophysics Data System (ADS)

    Perkins, Walter A.; Hakim, Gregory J.

    2017-05-01

    We examine the skill of a new approach to climate field reconstructions (CFRs) using an online paleoclimate data assimilation (PDA) method. Several recent studies have foregone climate model forecasts during assimilation due to the computational expense of running coupled global climate models (CGCMs) and the relatively low skill of these forecasts on longer timescales. Here we greatly diminish the computational cost by employing an empirical forecast model (linear inverse model, LIM), which has been shown to have skill comparable to CGCMs for forecasting annual-to-decadal surface temperature anomalies. We reconstruct annual-average 2 m air temperature over the instrumental period (1850-2000) using proxy records from the PAGES 2k Consortium Phase 1 database; proxy models for estimating proxy observations are calibrated on GISTEMP surface temperature analyses. We compare results for LIMs calibrated using observational (Berkeley Earth), reanalysis (20th Century Reanalysis), and CMIP5 climate model (CCSM4 and MPI) data relative to a control offline reconstruction method. Generally, we find that the usage of LIM forecasts for online PDA increases reconstruction agreement with the instrumental record for both spatial fields and global mean temperature (GMT). Specifically, the coefficient of efficiency (CE) skill metric for detrended GMT increases by an average of 57 % over the offline benchmark. LIM experiments display a common pattern of skill improvement in the spatial fields over Northern Hemisphere land areas and in the high-latitude North Atlantic-Barents Sea corridor. Experiments for non-CGCM-calibrated LIMs reveal region-specific reductions in spatial skill compared to the offline control, likely due to aspects of the LIM calibration process. Overall, the CGCM-calibrated LIMs have the best performance when considering both spatial fields and GMT. 
A comparison with the persistence forecast experiment suggests that improvements are associated with the linear dynamical constraints of the forecast and not simply persistence of temperature anomalies.
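A linear inverse model of the kind used above is typically calibrated from lag-covariance statistics of the training data: the propagator over lag tau is G_tau = C(tau) C(0)^{-1}, and the forecast is x(t+tau) ≈ G_tau x(t). A minimal sketch of that estimation step on hypothetical toy data (not the paper's calibration code):

```python
import numpy as np

def lim_propagator(X, lag):
    """Estimate the linear inverse model propagator G_lag from a
    multivariate anomaly time series X (rows = time, columns = state).

    G = C(lag) C(0)^{-1}, where C(lag) is the lag-covariance matrix,
    so the LIM forecast is x(t+lag) ~= G @ x(t).
    """
    X = X - X.mean(axis=0)          # work with anomalies
    X0, X1 = X[:-lag], X[lag:]
    C0 = X0.T @ X0 / len(X0)        # contemporaneous covariance
    Ct = X1.T @ X0 / len(X0)        # lag covariance
    return Ct @ np.linalg.inv(C0)
```

In practice the state would be a truncated EOF space of surface temperature anomalies; the estimator above is equivalent to least-squares regression of the lagged state on the current state.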

  11. The attitude inversion method of geostationary satellites based on unscented particle filter

    NASA Astrophysics Data System (ADS)

    Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao

    2018-04-01

    The attitude information of geostationary satellites is difficult to obtain since they appear as non-resolved images on ground-based observation equipment in space object surveillance. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The UPF-based inversion algorithm is proposed to address the strongly non-linear character of the photometric-data inversion for satellite attitude, and it combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The update method improves particle selection by using the idea of the UKF to redesign the importance density function. Moreover, it uses the RMS-UKF to partially correct the prediction covariance matrix, which improves on the limited applicability of UKF-based attitude inversion and on the particle degradation and dilution of PF-based attitude inversion. This paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability and applicability of the method are verified by simulation and scaling experiments. The results show that the proposed method effectively solves the problem of particle degradation and depletion in PF-based attitude inversion, as well as the unsuitability of the UKF for strongly non-linear attitude inversion. The inversion accuracy is clearly superior to that of the UKF and PF; in addition, in the case of large attitude error, the method can invert the attitude with few particles and high precision.

  12. Velocity structure of a bottom simulating reflector offshore Peru: Results from full waveform inversion

    USGS Publications Warehouse

    Pecher, I.A.; Minshull, T.A.; Singh, S.C.; von Huene, Roland E.

    1996-01-01

    Much of our knowledge of the worldwide distribution of submarine gas hydrates comes from seismic observations of Bottom Simulating Reflectors (BSRs). Full waveform inversion has proven to be a reliable technique for studying the fine structure of BSRs using the compressional wave velocity. We applied a non-linear full waveform inversion technique to a BSR at a location offshore Peru. We first determined the large-scale features of seismic velocity variations using a statistical inversion technique to maximise coherent energy along travel-time curves. These velocities were used as a starting velocity model for the full waveform inversion, which yielded a detailed velocity/depth model in the vicinity of the BSR. We found that the data are best fit by a model in which the BSR consists of a thin, low-velocity layer. The compressional wave velocity drops from 2.15 km/s down to an average of 1.70 km/s in an 18 m thick interval, with a minimum velocity of 1.62 km/s in a 6 m interval. The resulting compressional wave velocity was used to estimate gas content in the sediments. Our results suggest that the low velocity layer is a 6-18 m thick zone containing a few percent of free gas in the pore space. The presence of the BSR coincides with a region of vertical uplift. Therefore, we suggest that gas at this BSR is formed by dissociation of hydrates at the base of the hydrate stability zone due to uplift and a subsequent decrease in pressure.

  13. Spectral likelihood expansions for Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nagel, Joseph B.; Sudret, Bruno

    2016-03-01

    A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
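The semi-analytic step above can be illustrated in the simplest setting: one parameter, a uniform prior on [-1, 1], and a Legendre expansion of the likelihood, L(x) ≈ Σ c_k P_k(x). Then the evidence is Z = c_0 and the posterior mean is c_1 / (3 c_0), both read directly off the coefficients. This 1-D specialization is an illustrative assumption, not the paper's general multi-dimensional construction:

```python
import numpy as np
from numpy.polynomial import legendre

def spectral_posterior(likelihood, deg=40):
    """Spectral Bayesian inference in 1-D with a uniform prior on [-1, 1].

    Fits L(x) ~= sum_k c_k P_k(x) by least squares on Chebyshev nodes;
    orthogonality of the Legendre polynomials then gives the evidence
    Z = c_0 and the posterior mean E[x | d] = c_1 / (3 c_0).
    """
    x = np.cos(np.pi * (np.arange(200) + 0.5) / 200)   # Chebyshev nodes
    c = legendre.legfit(x, likelihood(x), deg)
    return c[0], c[1] / (3.0 * c[0])
```

Higher posterior moments follow the same pattern from higher coefficients, which is the sense in which the expansion replaces Markov chain Monte Carlo with linear least squares.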

  14. Enhancing Autonomy of Aerial Systems Via Integration of Visual Sensors into Their Avionics Suite

    DTIC Science & Technology

    2016-09-01

    aerial platform for subsequent visual sensor integration. SUBJECT TERMS: autonomous system, quadrotors, direct method, inverse dynamics. Table-of-contents fragments: Controller Architecture; Inverse Dynamics in the Virtual Domain. Glossary fragments: ...control station; GPS, Global-Positioning System; IDVD, inverse dynamics in the virtual domain; ILP, integer linear program; INS, inertial-navigation system

  15. Experimental evaluation of model predictive control and inverse dynamics control for spacecraft proximity and docking maneuvers

    NASA Astrophysics Data System (ADS)

    Virgili-Llop, Josep; Zagaris, Costantinos; Park, Hyeongjun; Zappulla, Richard; Romano, Marcello

    2018-03-01

    An experimental campaign has been conducted to evaluate the performance of two different guidance and control algorithms on a multi-constrained docking maneuver. The evaluated algorithms are model predictive control (MPC) and inverse dynamics in the virtual domain (IDVD). A linear-quadratic approach with a quadratic programming solver is used for the MPC approach. A nonconvex optimization problem results from the IDVD approach, and a nonlinear programming solver is used. The docking scenario is constrained by the presence of a keep-out zone, an entry cone, and by the chaser's maximum actuation level. The performance metrics for the experiments and numerical simulations include the required control effort and time to dock. The experiments have been conducted in a ground-based air-bearing test bed, using spacecraft simulators that float over a granite table.
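In the unconstrained case, the MPC branch of such a comparison reduces to a finite-horizon linear-quadratic problem solvable by backward Riccati recursion. A toy sketch under assumed double-integrator chaser dynamics and illustrative cost weights (not the paper's experimental setup):

```python
import numpy as np

# Double-integrator chaser (position, velocity); dt and weights are assumptions.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5*dt**2], [dt]])
Q = np.diag([10.0, 1.0])          # penalize miss distance and residual speed
R = np.array([[0.1]])             # penalize control (thruster) effort

# Backward Riccati recursion yields the finite-horizon feedback gains.
N = 100
P = Q.copy()
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()                   # gains[k] is the gain applied at step k

# Simulate the approach from 5 m out, starting at rest.
x = np.array([5.0, 0.0])
for K in gains:
    u = -K @ x                    # optimal feedback control
    x = A @ x + B @ u             # B @ u: (2,1) @ (1,) -> (2,)
final_miss = np.linalg.norm(x)
```

With constraints such as a keep-out zone, an entry cone, or actuation limits, each MPC step instead solves a quadratic program, as in the paper.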

  16. Confidence set inference with a prior quadratic bound

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1989-01-01

    In the uniqueness part of a geophysical inverse problem, the observer wants to predict all likely values of P unknown numerical properties z = (z_1, ..., z_P) of the earth from measurement of D other numerical properties y^0 = (y_1^0, ..., y_D^0), using full or partial knowledge of the statistical distribution of the random errors in y^0. The data space Y containing y^0 is D-dimensional, so when the model space X is infinite-dimensional the linear uniqueness problem usually is insoluble without prior information about the correct earth model x. If that information is a quadratic bound on x, Bayesian inference (BI) and stochastic inversion (SI) inject spurious structure into x, implied by neither the data nor the quadratic bound. Confidence set inference (CSI) provides an alternative inversion technique free of this objection. Confidence set inference is illustrated in the problem of estimating the geomagnetic field B at the core-mantle boundary (CMB) from components of B measured on or above the earth's surface.

  17. Towards a universal master curve in magnetorheology

    NASA Astrophysics Data System (ADS)

    Ruiz-López, José Antonio; Hidalgo-Alvarez, Roque; de Vicente, Juan

    2017-05-01

    We demonstrate that inverse ferrofluids behave as model magnetorheological fluids. A universal master curve is proposed, using a reduced Mason number, under the frame of a structural viscosity model where the magnetic field strength dependence is solely contained in the Mason number and the particle concentration is solely contained in the critical Mason number (i.e. the yield stress). A linear dependence of the critical Mason number with the particle concentration is observed that is in good agreement with a mean (average) magnetization approximation, particle level dynamic simulations and micromechanical models available in the literature.

  18. Use of LANDSAT images of vegetation cover to estimate effective hydraulic properties of soils

    NASA Technical Reports Server (NTRS)

    Eagleson, Peter S.; Jasinski, Michael F.

    1988-01-01

    This work focuses on the characterization of natural, spatially variable, semivegetated landscapes using a linear, stochastic, canopy-soil reflectance model. A first application of the model was the investigation of the effects of subpixel and regional variability of scenes on the shape and structure of red-infrared scattergrams. Additionally, the model was used to investigate the inverse problem, the estimation of subpixel vegetation cover, given only the scattergrams of simulated satellite scale multispectral scenes. The major aspects of that work, including recent field investigations, are summarized.
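The subpixel cover estimation described above is, at its simplest, a linear unmixing problem: each band's reflectance is a cover-weighted combination of canopy and soil endmembers. A minimal least-squares sketch with made-up red/near-infrared endmember values (the paper's stochastic canopy-soil reflectance model is richer than this):

```python
import numpy as np

# Assumed two-band (red, near-infrared) endmember reflectances -- illustrative.
veg = np.array([0.05, 0.50])      # full canopy
soil = np.array([0.25, 0.30])     # bare soil

def unmix(pixel):
    """Least-squares estimate of fractional vegetation cover f from
    pixel = f*veg + (1 - f)*soil  (a single linear unknown)."""
    a = veg - soil                           # pixel - soil = f * (veg - soil)
    f = np.dot(a, pixel - soil) / np.dot(a, a)
    return float(np.clip(f, 0.0, 1.0))

true_f = 0.37
pixel = true_f*veg + (1.0 - true_f)*soil     # synthetic noiseless mixed pixel
f_est = unmix(pixel)
```

With noiseless synthetic data the fraction is recovered exactly; with real imagery the residual of this fit is one diagnostic of endmember quality.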

  19. Bukhvostov-Lipatov model and quantum-classical duality

    NASA Astrophysics Data System (ADS)

    Bazhanov, Vladimir V.; Lukyanov, Sergei L.; Runov, Boris A.

    2018-02-01

    The Bukhvostov-Lipatov model is an exactly soluble model of two interacting Dirac fermions in 1 + 1 dimensions. The model describes weakly interacting instantons and anti-instantons in the O(3) non-linear sigma model. In our previous work [arXiv:1607.04839] we have proposed an exact formula for the vacuum energy of the Bukhvostov-Lipatov model in terms of special solutions of the classical sinh-Gordon equation, which can be viewed as an example of a remarkable duality between integrable quantum field theories and integrable classical field theories in two dimensions. Here we present a complete derivation of this duality based on the classical inverse scattering transform method, traditional Bethe ansatz techniques and analytic theory of ordinary differential equations. In particular, we show that the Bethe ansatz equations defining the vacuum state of the quantum theory also define connection coefficients of an auxiliary linear problem for the classical sinh-Gordon equation. Moreover, we also present details of the derivation of the non-linear integral equations determining the vacuum energy and other spectral characteristics of the model in the case when the vacuum state is filled by 2-string solutions of the Bethe ansatz equations.

  20. Solving geosteering inverse problems by stochastic Hybrid Monte Carlo method

    DOE PAGES

    Shen, Qiuyang; Wu, Xuqing; Chen, Jiefu; ...

    2017-11-20

    Inverse problems arise in almost all fields of science in which real-world parameters are extracted from a set of measured data. Geosteering inversion plays an essential role in accurately predicting oncoming strata and in reliably guiding adjustments to the borehole position on the fly so as to reach one or more geological targets. The mathematical problem is not easy to solve: it requires finding an optimum solution in a large solution space, especially when the problem is non-linear and non-convex. Nowadays, a new generation of logging-while-drilling (LWD) tools has emerged on the market. These so-called azimuthal resistivity LWD tools have azimuthal sensitivity and a large depth of investigation. Hence, the associated inverse problems become much more difficult, since the earth model to be inverted has more detailed structure. Conventional deterministic methods are ill-suited to such a complicated inverse problem, as they suffer from entrapment in local minima. Alternatively, stochastic optimizations are in general better at finding globally optimal solutions and at handling uncertainty quantification. In this article, we investigate the Hybrid Monte Carlo (HMC) based statistical inversion approach and suggest that HMC-based inference is more efficient in dealing with the increased complexity and uncertainty faced by geosteering problems.
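Hybrid (Hamiltonian) Monte Carlo augments the model parameters with auxiliary momenta and integrates Hamilton's equations with a leapfrog scheme, letting the chain take long, informed steps. A generic sketch on a toy 2-D Gaussian posterior (the target, step size and trajectory length are illustrative assumptions, not the geosteering earth model):

```python
import numpy as np

def logp(x):             # toy 2-D Gaussian posterior (illustrative target)
    return -0.5*np.dot(x, x)

def grad_logp(x):
    return -x

def hmc_step(x, rng, eps=0.15, n_leap=25):
    """One Hybrid Monte Carlo transition: leapfrog + Metropolis correction."""
    p0 = rng.standard_normal(x.size)            # resample momentum
    x_new, p = x.copy(), p0 + 0.5*eps*grad_logp(x)
    for i in range(n_leap):
        x_new = x_new + eps*p                   # full position step
        if i < n_leap - 1:
            p = p + eps*grad_logp(x_new)        # full momentum step
    p = p + 0.5*eps*grad_logp(x_new)            # final half momentum step
    h_old = -logp(x) + 0.5*np.dot(p0, p0)       # Hamiltonian before ...
    h_new = -logp(x_new) + 0.5*np.dot(p, p)     # ... and after the trajectory
    if np.log(rng.random()) < h_old - h_new:    # accept/reject
        return x_new, True
    return x, False

rng = np.random.default_rng(0)
x = np.array([3.0, -3.0])
samples, n_acc = [], 0
for _ in range(3000):
    x, acc = hmc_step(x, rng)
    n_acc += acc
    samples.append(x)
samples = np.array(samples)[500:]               # discard burn-in
```

Because the leapfrog integrator nearly conserves the Hamiltonian, acceptance stays high even for these long trajectories, which is the efficiency advantage the abstract alludes to.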

  1. Bayesian inversion of seismic and electromagnetic data for marine gas reservoir characterization using multi-chain Markov chain Monte Carlo sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Huiying; Ray, Jaideep; Hou, Zhangshuan

    In this study we developed an efficient Bayesian inversion framework for interpreting marine seismic amplitude versus angle (AVA) and controlled source electromagnetic (CSEM) data for marine reservoir characterization. The framework uses a multi-chain Markov-chain Monte Carlo (MCMC) sampler, which is a hybrid of DiffeRential Evolution Adaptive Metropolis (DREAM) and Adaptive Metropolis (AM) samplers. The inversion framework is tested by estimating reservoir-fluid saturations and porosity based on marine seismic and CSEM data. The multi-chain MCMC is scalable in terms of the number of chains, and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. As a demonstration, the approach is used to efficiently and accurately estimate the porosity and saturations in a representative layered synthetic reservoir. The results indicate that the seismic AVA and CSEM joint inversion provides better estimation of reservoir saturations than the seismic AVA-only inversion, especially for the parameters in deep layers. The performance of the inversion approach for various levels of noise in observational data was evaluated – reasonable estimates can be obtained with noise levels up to 25%. Sampling efficiency due to the use of multiple chains was also checked and was found to have almost linear scalability.

  2. Bayesian inversion of seismic and electromagnetic data for marine gas reservoir characterization using multi-chain Markov chain Monte Carlo sampling

    NASA Astrophysics Data System (ADS)

    Ren, Huiying; Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Bao, Jie; Swiler, Laura

    2017-12-01

    In this study we developed an efficient Bayesian inversion framework for interpreting marine seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data for marine reservoir characterization. The framework uses a multi-chain Markov-chain Monte Carlo sampler, which is a hybrid of DiffeRential Evolution Adaptive Metropolis and Adaptive Metropolis samplers. The inversion framework is tested by estimating reservoir-fluid saturations and porosity based on marine seismic and Controlled-Source Electromagnetic data. The multi-chain Markov-chain Monte Carlo is scalable in terms of the number of chains, and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. As a demonstration, the approach is used to efficiently and accurately estimate the porosity and saturations in a representative layered synthetic reservoir. The results indicate that the seismic Amplitude Versus Angle and Controlled-Source Electromagnetic joint inversion provides better estimation of reservoir saturations than the seismic Amplitude Versus Angle only inversion, especially for the parameters in deep layers. The performance of the inversion approach for various levels of noise in observational data was evaluated - reasonable estimates can be obtained with noise levels up to 25%. Sampling efficiency due to the use of multiple chains was also checked and was found to have almost linear scalability.
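Both ingredients of the hybrid sampler described in these two records build their proposal from the chain history. A stripped-down illustration of the Adaptive Metropolis ingredient, run as several independent chains on a toy 1-D Gaussian posterior (target and tuning constants are assumptions for illustration; DREAM's differential-evolution proposals are not reproduced here):

```python
import numpy as np

def logp(theta):                  # toy 1-D posterior (illustrative)
    return -0.5*(theta - 2.0)**2

def run_chain(rng, n_iter=4000, burn=1000):
    """Random-walk Metropolis whose proposal scale adapts to the chain
    history -- the Adaptive Metropolis ingredient of the hybrid sampler."""
    theta, lp = 0.0, logp(0.0)
    scale = 1.0
    hist = []
    for i in range(n_iter):
        prop = theta + scale*rng.standard_normal()
        lp_prop = logp(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        hist.append(theta)
        if i > 100:                              # adapt from the sample spread;
            scale = 2.4*np.std(hist) + 1e-6      # 2.4*sigma is the classic 1-D choice

    return np.array(hist[burn:])

rng = np.random.default_rng(1)
chains = [run_chain(rng) for _ in range(4)]   # multiple independent chains
pooled = np.concatenate(chains)               # pooled posterior sample
```

Running several chains, as in the paper, both parallelizes the sampling and allows between-chain convergence diagnostics on the pooled output.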

  3. Minimal-Inversion Feedforward-And-Feedback Control System

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1990-01-01

    Recent developments in the theory of control systems support the concept of a minimal-inversion feedforward-and-feedback control system consisting of three independently designable control subsystems. Applicable to the control of a linear, time-invariant plant.

  4. The 2-D magnetotelluric inverse problem solved with optimization

    NASA Astrophysics Data System (ADS)

    van Beusekom, Ashley E.; Parker, Robert L.; Bank, Randolph E.; Gill, Philip E.; Constable, Steven

    2011-02-01

    The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on using PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell's equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert penalizing size and roughness giving solutions that are similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.

  5. Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-04-01

    Although surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints, in terms of smoothing and/or damping, so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework in which the a priori information about the sought parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional or n-dimensional marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). The posterior mean and covariance can also be efficiently derived. I show that the Maximum Posterior (MAP) can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the singly truncated case, or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the doubly truncated case.
I show that the case of independent uniform priors can be approximated using the TMVN. Numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and for a real interseismic-modeling case in Central Peru. The TMVN method overcomes several limitations of the MCMC-based Bayesian approach. First, the need for computing power is greatly reduced. Second, unlike the MCMC-based approach, the marginal pdf, mean, variance and covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained at any density of points. Finally, determining the MAP is extremely fast.
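The singly truncated MAP problem named in the abstract, argmin ||G s − d||² subject to s ≥ 0, can be sketched with a simple projected-gradient stand-in (the operator and data below are synthetic; Lawson & Hanson's NNLS is the algorithm the paper actually uses and converges far faster):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((20, 5))   # synthetic elastic Green's-function matrix
d = rng.standard_normal(20)        # synthetic geodetic displacement data

# MAP of the singly truncated Gaussian posterior = argmin ||G s - d||^2, s >= 0.
step = 1.0/np.linalg.norm(G, 2)**2         # 1/L, L = Lipschitz const. of the gradient
s = np.zeros(G.shape[1])
for _ in range(20000):
    grad = G.T @ (G @ s - d)
    s = np.maximum(0.0, s - step*grad)     # project onto the slip >= 0 constraint
```

At convergence the Karush-Kuhn-Tucker conditions hold: the gradient is zero on the positive components and non-negative on the components pinned at zero, which is exactly what NNLS enforces by construction.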

  6. Travelling wave solutions of the homogeneous one-dimensional FREFLO model

    NASA Astrophysics Data System (ADS)

    Huang, B.; Hong, J. Y.; Jing, G. Q.; Niu, W.; Fang, L.

    2018-01-01

    At present there are rather few analytical studies of traffic flows, owing to the non-linearity of the governing equations. In the present paper we introduce travelling wave solutions for the homogeneous one-dimensional FREFLO model, expressed in the form of series, which describe the process in which vehicles/pedestrians move with a negative velocity and decelerate until rest, then accelerate in the opposite direction to positive velocities. This method is expected to be extended to more complex situations in the future.

  7. SCIPUFF Tangent-Linear/Adjoint Model for Release Source Location from Observational Data

    DTIC Science & Technology

    2011-01-18

    magnitudes, times? Inverse model based on SCIPUFF (AIMS). 1/17/2011, Aerodyne Research, Inc., W911NF-06-C-0161. ...(1981), Daescu and Carmichael (2003)... "...Defined Choice-Situations," The Journal of the Operational Research Society, Vol. 32, No. 2 (1981); Daescu, D. N., and Carmichael, G. R., "An Adjoint..."; ...Artificial Intelligence Applications to Environmental Sciences at AMS Annual Meeting, Atlanta, GA, Jan 17-21 (2010); N. Platt, S. Warner and S. M. Nunes, "Plan for...

  8. Bayesian inversion of marine CSEM data from the Scarborough gas field using a transdimensional 2-D parametrization

    NASA Astrophysics Data System (ADS)

    Ray, Anandaroop; Key, Kerry; Bodin, Thomas; Myer, David; Constable, Steven

    2014-12-01

    We apply a reversible-jump Markov chain Monte Carlo method to sample the Bayesian posterior model probability density function of 2-D seafloor resistivity as constrained by marine controlled source electromagnetic data. This density function of earth models conveys information on which parts of the model space are illuminated by the data. Whereas conventional gradient-based inversion approaches require subjective regularization choices to stabilize this highly non-linear and non-unique inverse problem and provide only a single solution with no model uncertainty information, the method we use entirely avoids model regularization. The result of our approach is an ensemble of models that can be visualized and queried to provide meaningful information about the sensitivity of the data to the subsurface, and the level of resolution of model parameters. We represent models in 2-D using a Voronoi cell parametrization. To make the 2-D problem practical, we use a source-receiver common midpoint approximation with 1-D forward modelling. Our algorithm is transdimensional and self-parametrizing where the number of resistivity cells within a 2-D depth section is variable, as are their positions and geometries. Two synthetic studies demonstrate the algorithm's use in the appraisal of a thin, segmented, resistive reservoir which makes for a challenging exploration target. As a demonstration example, we apply our method to survey data collected over the Scarborough gas field on the Northwest Australian shelf.

  9. FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems

    NASA Astrophysics Data System (ADS)

    Vourc'h, Eric; Rodet, Thomas

    2015-11-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. 
    The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2015 attracted around 70 attendees. Each submitted paper was reviewed by two reviewers, and 15 papers were accepted. In addition, three invited international speakers presented longer talks. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks: GDR ISIS, GDR MIA, GDR MOA and GDR Ondes. The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA and SATIE.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, Luis; Stanier, Adam John

    Here, we demonstrate a scalable fully implicit algorithm for the two-field low-β extended MHD model. This reduced model describes plasma behavior in the presence of strong guide fields, and is of significant practical impact both in nature and in laboratory plasmas. The model displays strong hyperbolic behavior, as manifested by the presence of fast dispersive waves, which make a fully implicit treatment very challenging. In this study, we employ a Jacobian-free Newton–Krylov nonlinear solver, for which we propose a physics-based preconditioner that renders the linearized set of equations suitable for inversion with multigrid methods. As a result, the algorithm is shown to scale both algorithmically (i.e., the iteration count is insensitive to grid refinement and timestep size) and in parallel in a weak-scaling sense, with the wall-clock time scaling weakly with the number of cores for up to 4096 cores. For a 4096 × 4096 mesh, we demonstrate a wall-clock-time speedup of ~6700 with respect to explicit algorithms. The model is validated linearly (against linear theory predictions) and nonlinearly (against fully kinetic simulations), demonstrating excellent agreement.

  11. Optimal design of focused experiments and surveys

    NASA Astrophysics Data System (ADS)

    Curtis, Andrew

    1999-10-01

    Experiments and surveys are often performed to obtain data that constrain some previously underconstrained model. Often, constraints are most desired in a particular subspace of model space. Experiment design optimization requires that the quality of any particular design can be both quantified and then maximized. This study shows how the quality can be defined such that it depends on the amount of information that is focused in the particular subspace of interest. In addition, algorithms are presented which allow one particular focused quality measure (from the class of focused measures) to be evaluated efficiently. A subclass of focused quality measures is also related to the standard variance and resolution measures from linearized inverse theory. The theory presented here requires that the relationship between model parameters and data can be linearized around a reference model without significant loss of information. Physical and financial constraints define the space of possible experiment designs. Cross-well tomographic examples are presented, plus a strategy for survey design to maximize information about linear combinations of parameters such as bulk modulus, κ = λ + 2μ/3.
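In the linearized setting d = G m + noise, one concrete focused quality measure is the posterior variance projected onto the subspace of interest, to be minimized over candidate designs. A toy numpy comparison of two designs for their ability to constrain only the first model parameter (the matrices, unit data noise and isotropic prior are illustrative assumptions):

```python
import numpy as np

def focused_quality(G, U, prior_var=1.0):
    """Posterior variance focused on the subspace spanned by the columns of U
    (smaller is better).  Assumes unit data noise and an isotropic prior."""
    n = G.shape[1]
    post_cov = np.linalg.inv(G.T @ G + np.eye(n)/prior_var)
    P = U @ U.T                                  # orthogonal projector
    return float(np.trace(P @ post_cov @ P))

U = np.array([[1.0], [0.0], [0.0]])   # we only care about model parameter 1

# Two candidate surveys with the same number of measurements:
G_A = np.array([[1.0, 0.0, 0.0],      # design A measures parameter 1 twice
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
G_B = np.array([[0.0, 0.0, 1.0],      # design B spends its budget on parameter 3
                [0.0, 0.0, 1.0],
                [0.0, 1.0, 0.0]])

q_A = focused_quality(G_A, U)   # parameter 1 well constrained by the data
q_B = focused_quality(G_B, U)   # parameter 1 constrained only by the prior
```

Maximizing design quality then amounts to choosing the design with the smallest projected posterior variance, here design A.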

  12. Sheared Layers in the Continental Crust: Nonlinear and Linearized inversion for Ps receiver functions

    NASA Astrophysics Data System (ADS)

    Park, J. J.

    2017-12-01

    The interpretation of seismic receiver functions (RFs) in terms of isotropic and anisotropic layered structure can be complex. The relationship between structure and body-wave scattering is nonlinear. The anisotropy can involve more parameters than the observations can readily constrain. Finally, reflectivity-predicted layer reverberations are often not prominent in data, so that nonlinear waveform inversion can search in vain to match ghost signals. Multiple-taper correlation (MTC) receiver functions have uncertainties in the frequency domain that follow Gaussian statistics [Park and Levin, 2016a], so grid-searches for the best-fitting collections of interfaces can be performed rapidly to minimize weighted misfit variance. Tests for layer reverberations can be performed in the frequency domain without reflectivity calculations, allowing flexible modelling of weak, but nonzero, reverberations. Park and Levin [2016b] linearized the hybridization of P and S body waves in an anisotropic layer to predict first-order Ps conversion amplitudes at crust and mantle interfaces. In an anisotropic layer, the P wave acquires small SV and SH components. To ensure continuity of displacement and traction at the top and bottom boundaries of the layer, shear waves are generated. Assuming hexagonal symmetry with an arbitrary symmetry axis, theory confirms the empirical stacking trick of phase-shifting transverse RFs by 90 degrees in back-azimuth [Shiomi and Park, 2008; Schulte-Pelkum and Mahan, 2014] to enhance 2-lobed and 4-lobed harmonic variation. Ps scattering is generated by sharp interfaces, so that RFs resemble the first derivative of the model. MTC RFs in the frequency domain can be manipulated to obtain a first-order reconstruction of the layered anisotropy, under the above modeling constraints and neglecting reverberations. 
Examples from long-running continental stations will be discussed. Park, J., and V. Levin, 2016a. doi:10.1093/gji/ggw291. Park, J., and V. Levin, 2016b. doi:10.1093/gji/ggw323. Schulte-Pelkum, V., and Mahan, K. H., 2014. doi:10.1007/s00024-014-0853-4. Shiomi, K., & Park, J., 2008. doi:10.1029/2007JB005535.

  13. Serum bilirubin levels are inversely associated with PAI-1 and fibrinogen in Korean subjects.

    PubMed

    Cho, Hyun Sun; Lee, Sung Won; Kim, Eun Sook; Shin, Juyoung; Moon, Sung Dae; Han, Je Ho; Cha, Bong Yun

    2016-01-01

    Oxidative stress may contribute to atherosclerosis and increased activation of the coagulation pathway. Bilirubin may reduce activation of the hemostatic system to inhibit oxidative stress, which would explain its cardioprotective properties shown in many epidemiological studies. This study investigated the association of serum bilirubin with fibrinogen and with plasminogen activator inhibitor-1 (PAI-1). A cross-sectional analysis was performed on 968 subjects (mean age, 56.0 ± 11.2 years; 61.1% men) undergoing a general health checkup. Serum biochemistry was analyzed including bilirubin subtypes, insulin resistance (using the homeostasis model assessment [HOMA]), C-reactive protein (CRP), fibrinogen, and PAI-1. Compared with subjects with a total bilirubin (TB) concentration of <10.0 μmol/L, those with a TB concentration of >17.1 μmol/L had a smaller waist circumference, a lower triglyceride level, a lower prevalence of metabolic syndrome, and decreased HOMA-IR and CRP levels. Correlation analysis revealed linear relationships of fibrinogen with TB and direct bilirubin (DB), whereas PAI-1 was correlated with DB. After adjustment for confounding factors, bilirubin levels were inversely associated with fibrinogen and PAI-1 levels. Multivariate regression models showed a negative linear relationship between all types of bilirubin and fibrinogen, whereas there was a significant linear relationship between PAI-1 and DB. High bilirubin concentrations were independently associated with low levels of fibrinogen and PAI-1. The association between TB and PAI-1 was confined to the highest TB concentration category, whereas DB showed a linear association with PAI-1. Bilirubin may protect against the development of atherothrombosis by reducing the hemostatic response.

  14. A Novel Fractional Order Model for the Dynamic Hysteresis of Piezoelectrically Actuated Fast Tool Servo

    PubMed Central

    Zhu, Zhiwei; Zhou, Xiaoqin

    2012-01-01

    The main contribution of this paper is the development of a linearized model for describing the dynamic hysteresis behaviors of a piezoelectrically actuated fast tool servo (FTS). A linearized hysteresis force model is proposed and mathematically described by a fractional order differential equation. Combining this with the dynamic modeling of the FTS mechanism, a linearized fractional order dynamic hysteresis (LFDH) model for the piezoelectrically actuated FTS is established. The unique features of the LFDH model can be summarized as follows: (a) it describes rate-dependent hysteresis well, owing to its intrinsic characteristics of frequency-dependent nonlinear phase shifts and amplitude modulations; (b) the linearization scheme of the LFDH model makes it easier to implement inverse dynamic control on piezoelectrically actuated micro-systems. To verify the effectiveness of the proposed model, a series of experiments was conducted. The toolpaths of the FTS for creating two typical micro-functional surfaces, involving various harmonic components with different frequencies and amplitudes, were scaled and employed as command signals for the piezoelectric actuator. The modeling errors in the steady state are less than ±2.5% over the full span range, much smaller than those of certain state-of-the-art modeling methods, demonstrating the efficiency and superiority of the proposed model for modeling dynamic hysteresis effects. Moreover, this indicates that piezoelectrically actuated micro-systems may be more suitably described as fractional order dynamic systems.
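The fractional order derivative underlying the LFDH model can be evaluated numerically with the Grünwald-Letnikov approximation, D^α f(t) ≈ h^(−α) Σ_k w_k f(t − kh), with w_0 = 1 and w_k = w_{k−1}(k − 1 − α)/k. A short self-check against the closed form D^α t = t^(1−α)/Γ(2 − α) (the test function is an assumption; the paper's hysteresis force model is not reproduced here):

```python
import numpy as np
from math import gamma

def gl_frac_deriv(f, t, alpha, h=1e-3):
    """Grunwald-Letnikov fractional derivative of order alpha at time t:
    D^alpha f(t) ~ h**(-alpha) * sum_k w_k * f(t - k*h),
    with w_0 = 1 and w_k = w_{k-1} * (k - 1 - alpha) / k."""
    n = int(round(t/h))
    k = np.arange(1, n + 1)
    w = np.concatenate(([1.0], np.cumprod((k - 1.0 - alpha)/k)))
    return float(np.sum(w * f(t - np.arange(n + 1)*h)) / h**alpha)

# Check against the closed form D^alpha t = t**(1-alpha) / Gamma(2-alpha).
alpha = 0.5
num = gl_frac_deriv(lambda s: s, 1.0, alpha)
exact = 1.0/gamma(2.0 - alpha)      # half-derivative of t at t = 1
```

The scheme is first-order accurate in h, which is usually sufficient for simulating rate-dependent hysteresis terms of this kind.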

  15. Modularized seismic full waveform inversion based on waveform sensitivity kernels - The software package ASKI

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang; Lamara, Samir; Gutt, Phillip; Paffrath, Marcel

    2015-04-01

    We present a seismic full waveform inversion concept for applications ranging from seismological to engineering contexts, based on sensitivity kernels for full waveforms. The kernels are derived from Born scattering theory as the Fréchet derivatives of linearized frequency-domain full waveform data functionals, quantifying the influence of elastic earth model parameters and density on the data values. For a specific source-receiver combination, the kernel is computed from the displacement and strain field spectrum originating from the source evaluated throughout the inversion domain, as well as the Green function spectrum and its strains originating from the receiver. By storing the wavefield spectra of specific sources/receivers, they can be re-used for kernel computation for different specific source-receiver combinations, optimizing the total number of required forward simulations. In the iterative inversion procedure, the solution of the forward problem, the computation of sensitivity kernels and the derivation of a model update are kept completely separate. In particular, the model description for the forward problem and the description of the inverted model update are kept independent. Hence, the resolution of the inverted model as well as the complexity of solving the forward problem can be iteratively increased (with increasing frequency content of the inverted data subset). This may regularize the overall inverse problem and optimize the computational effort of both solving the forward problem and computing the model update. The required interconnection of arbitrary unstructured volume and point grids is realized by generalized high-order integration rules and 3D-unstructured interpolation methods. The model update is inferred by solving a minimization problem in a least-squares sense, resulting in Gauss-Newton convergence of the overall inversion process. 
The inversion method was implemented in the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion), which provides a generalized interface to arbitrary external forward modelling codes. So far, the 3D spectral-element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995), in both Cartesian and spherical frameworks, are supported. The creation of interfaces to further forward codes is planned in the near future. ASKI is freely available under the terms of the GPL at www.rub.de/aski . Since the independent modules of ASKI must communicate via file output/input, large storage capacities need to be conveniently accessible. Storing the complete sensitivity matrix to file, however, gives the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion. In the presentation, we will show some aspects of the theory behind the full waveform inversion method and its practical realization by the software package ASKI, as well as synthetic and real-data applications from different scales and geometries.
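As a schematic illustration of the least-squares model update mentioned above, the toy Gauss-Newton iteration below (our own single-parameter Python example, not ASKI code) fits a decay rate to synthetic data, with the matrix J standing in for the sensitivity-kernel matrix:

```python
import numpy as np

# Hypothetical one-parameter example: recover the decay rate a in d(x) = exp(-a*x)
# by Gauss-Newton iterations on the least-squares misfit. J plays the role of
# the sensitivity-kernel (Frechet derivative) matrix described in the abstract.
x = np.linspace(0.0, 2.0, 20)
d_obs = np.exp(-1.5 * x)            # synthetic "observed" data, true a = 1.5

a = 0.5                             # starting model
for _ in range(50):
    d_pred = np.exp(-a * x)
    r = d_obs - d_pred              # data residual
    J = (-x * d_pred)[:, None]      # Frechet derivative of d_pred w.r.t. a
    # Gauss-Newton model update: solve the normal equations (J^T J) da = J^T r
    da = np.linalg.solve(J.T @ J, J.T @ r)
    a += da.item()
```

Each iteration linearizes the forward problem about the current model and solves the resulting normal equations, which is the Gauss-Newton convergence behaviour the abstract refers to.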

  16. Anisotropic k-essence cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chimento, Luis P.; Forte, Monica

We investigate a Bianchi type-I cosmology with k-essence and find the set of models which dissipate the initial anisotropy. There are cosmological models with extended tachyon fields and k-essence having a constant barotropic index. We obtain the conditions leading to a regular bounce of the average geometry and the residual anisotropy on the bounce. For constant potential, we develop purely kinetic k-essence models which are dust dominated in their early stages, dissipate the initial anisotropy, and end in a stable de Sitter accelerated expansion scenario. We show that linear k-field and polynomial kinetic function models evolve asymptotically to Friedmann-Robertson-Walker cosmologies. The linear case is compatible with an asymptotic potential interpolating between V_l ∝ φ^(-γ_l) in the shear-dominated regime and V_l ∝ φ^(-2) at late time. In the polynomial case, the general solution contains cosmological models with an oscillatory average geometry. For linear k-essence, we find the general solution in the Bianchi type-I cosmology when the k field is driven by an inverse square potential. This model shares the same geometry as a quintessence field driven by an exponential potential.

  17. Fully anisotropic 3-D EM modelling on a Lebedev grid with a multigrid pre-conditioner

    NASA Astrophysics Data System (ADS)

    Jaysaval, Piyoosh; Shantsev, Daniil V.; de la Kethulle de Ryhove, Sébastien; Bratteland, Tarjei

    2016-12-01

    We present a numerical algorithm for 3-D electromagnetic (EM) simulations in conducting media with general electric anisotropy. The algorithm is based on the finite-difference discretization of frequency-domain Maxwell's equations on a Lebedev grid, in which all components of the electric field are collocated but half a spatial step staggered with respect to the magnetic field components, which also are collocated. This leads to a system of linear equations that is solved using a stabilized biconjugate gradient method with a multigrid preconditioner. We validate the accuracy of the numerical results for layered and 3-D tilted transverse isotropic (TTI) earth models representing typical scenarios used in the marine controlled-source EM method. It is then demonstrated that not taking into account the full anisotropy of the conductivity tensor can lead to misleading inversion results. For synthetic data corresponding to a 3-D model with a TTI anticlinal structure, a standard vertical transverse isotropic (VTI) inversion is not able to image a resistor, while for a 3-D model with a TTI synclinal structure it produces a false resistive anomaly. However, if the VTI forward solver used in the inversion is replaced by the proposed TTI solver with perfect knowledge of the strike and dip of the dipping structures, the resulting resistivity images become consistent with the true models.
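A bare-bones version of the stabilized biconjugate gradient (BiCGSTAB) solver named above can be sketched as follows. This is a generic textbook implementation in numpy, not the authors' code; the multigrid preconditioner is omitted, and the small system stands in for the discretized Maxwell operator:

```python
import numpy as np

def bicgstab(A, b, x0=None, tol=1e-10, maxiter=200):
    """Minimal unpreconditioned BiCGSTAB (van der Vorst) for A x = b."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    rhat = r.copy()                      # fixed shadow residual
    rho = alpha = omega = 1.0
    v = np.zeros(n)
    p = np.zeros(n)
    for _ in range(maxiter):
        rho_new = rhat @ r
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho_new / (rhat @ v)
        s = r - alpha * v                # intermediate residual
        t = A @ s
        omega = (t @ s) / (t @ t)        # stabilization step
        x = x + alpha * p + omega * s
        r = s - omega * t
        rho = rho_new
        if np.linalg.norm(r) < tol:
            break
    return x

# Small nonsymmetric test system standing in for the FD Maxwell matrix.
A = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 2.0],
              [0.0, 1.0, 6.0]])
x_true = np.array([1.0, -2.0, 0.5])
x = bicgstab(A, A @ x_true)
```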

  18. Sliding-mode control combined with improved adaptive feedforward for wafer scanner

    NASA Astrophysics Data System (ADS)

    Li, Xiaojie; Wang, Yiguang

    2018-03-01

In this paper, a sliding-mode control method combined with improved adaptive feedforward is proposed for wafer scanners to improve the tracking performance of the closed-loop system. In particular, in addition to the inverse model, the nonlinear force ripple effect, which may degrade the tracking accuracy of the permanent magnet linear motor (PMLM), is considered in the proposed method. The dominant position periodicity of the force ripple is determined by fast Fourier transform (FFT) analysis of experimental data, and the improved feedforward control is achieved by online recursive least-squares (RLS) estimation of the inverse model and the force ripple. The improved adaptive feedforward is given in the general form of an nth-order model with force ripple effect. This proposed method is motivated by the motion controller design of the long-stroke PMLM and short-stroke voice coil motor for wafer scanners. The stability of the closed-loop control system and the convergence of the motion tracking are guaranteed theoretically by the proposed sliding-mode feedback and adaptive feedforward methods. Comparative experiments on a precision linear motion platform verify the correctness and effectiveness of the proposed method. The experimental results show that, compared to the traditional method, the proposed one is faster and more robust, especially for high-speed motion trajectories, and improves both tracking accuracy and settling time.
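The online recursive least-squares (RLS) idea can be sketched as follows. This is a generic illustration, not the paper's controller: the regressor (acceleration, velocity, and sinusoidal force-ripple terms at an assumed known spatial frequency, as would come from the FFT analysis) is hypothetical:

```python
import numpy as np

# Illustrative RLS sketch: estimate feedforward parameters of
# y = m*acc + c*vel + A*sin(w*pos) + B*cos(w*pos),
# where the sinusoidal terms stand in for the position-periodic force ripple.
rng = np.random.default_rng(0)
w = 2 * np.pi / 0.05                 # ripple spatial frequency (assumed known)
theta_true = np.array([1.2, 0.4, 0.05, -0.03])

theta = np.zeros(4)                  # parameter estimate
P = np.eye(4) * 1e3                  # covariance; large => weak prior
lam = 1.0                            # forgetting factor (1 = ordinary RLS)
for _ in range(500):
    acc, vel, pos = rng.standard_normal(3)
    phi = np.array([acc, vel, np.sin(w * pos), np.cos(w * pos)])
    y = theta_true @ phi             # noiseless "measured" force
    k = P @ phi / (lam + phi @ P @ phi)        # RLS gain
    theta = theta + k * (y - phi @ theta)      # innovation update
    P = (P - np.outer(k, phi @ P)) / lam       # covariance update
```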

  19. LS-APC v1.0: a tuning-free method for the linear inverse problem and its application to source-term determination

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas

    2016-11-01

Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where the advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
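A minimal sketch of the contrast the abstract draws, using our own toy problem rather than the LS-APC algorithm: the conventional approach fixes the regularization weight by hand, whereas a Bayesian treatment can select it from the data. Here that selection is mimicked by maximizing the Gaussian marginal likelihood (evidence) over a grid of candidate weights:

```python
import numpy as np

# Toy linear inverse problem y = M x + noise, with M a stand-in SRS matrix.
rng = np.random.default_rng(1)
n_obs, n_src = 40, 10
M = rng.standard_normal((n_obs, n_src))
x_true = np.maximum(rng.standard_normal(n_src), 0.0)   # nonnegative source term
y = M @ x_true + 0.05 * rng.standard_normal(n_obs)

def evidence(alpha, sigma2=0.05**2):
    # Log marginal likelihood of y with prior x ~ N(0, (1/alpha) I)
    # and noise ~ N(0, sigma2 I): cov(y) = sigma2 I + (1/alpha) M M^T.
    C = sigma2 * np.eye(n_obs) + (M @ M.T) / alpha
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + y @ np.linalg.solve(C, y))

# "Tuning-free": pick the regularization weight by evidence maximization,
# then solve the usual Tikhonov-regularized normal equations with it.
alphas = np.logspace(-3, 3, 61)
alpha = alphas[np.argmax([evidence(a) for a in alphas])]
x_hat = np.linalg.solve(M.T @ M + alpha * np.eye(n_src), M.T @ y)
```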

  20. VP Structure of Mount St. Helens, Washington, USA, imaged with local earthquake tomography

    USGS Publications Warehouse

    Waite, G.P.; Moran, S.C.

    2009-01-01

We present a new P-wave velocity model for Mount St. Helens using local earthquake data recorded by the Pacific Northwest Seismograph Stations and Cascades Volcano Observatory since the 18 May 1980 eruption. These data were augmented with records from a dense array of 19 temporary stations deployed during the second half of 2005. Because the distribution of earthquakes in the study area is concentrated beneath the volcano and within two nearly linear trends, we used a graded inversion scheme to compute a coarse-grid model that focused on the regional structure, followed by a fine-grid inversion to improve spatial resolution directly beneath the volcanic edifice. The coarse-grid model results are largely consistent with earlier geophysical studies of the area; we find high-velocity anomalies NW and NE of the edifice that correspond with igneous intrusions and a prominent low-velocity zone NNW of the edifice that corresponds with the linear zone of high seismicity known as the St. Helens Seismic Zone. This low-velocity zone may continue past Mount St. Helens to the south at depths below 5 km. Directly beneath the edifice, the fine-grid model images a low-velocity zone between about 2 and 3.5 km below sea level that may correspond to a shallow magma storage zone. Although the model resolution is poor below about 6 km, we found low velocities that correspond with the aseismic zone between about 5.5 and 8 km that has previously been modeled as the location of a large magma storage volume. © 2009 Elsevier B.V.

  1. Regolith thermal property inversion in the LUNAR-A heat-flow experiment

    NASA Astrophysics Data System (ADS)

    Hagermann, A.; Tanaka, S.; Yoshida, S.; Fujimura, A.; Mizutani, H.

    2001-11-01

In 2003, two penetrators of the LUNAR-A mission of ISAS will investigate the internal structure of the Moon by conducting seismic and heat-flow experiments. Heat flow is the product of the thermal gradient ∂T/∂z and the thermal conductivity λ of the lunar regolith. For measuring the thermal conductivity (or diffusivity), each penetrator will carry five thermal property sensors consisting of small disc heaters. The thermal response Ts(t) of the heater itself to the constant known power supply of approx. 50 mW serves as the data for the subsequent interpretation. Horai et al. (1991) found a forward analytical solution to the problem of determining the thermal inertia λρc of the regolith for constant thermal properties and a simplified geometry. In the inversion, the problem of deriving the unknown thermal properties of a medium from known heat sources and temperatures is an Identification Heat Conduction Problem (IDHCP), an ill-posed inverse problem. Assuming that the thermal conductivity λ and heat capacity ρc are linear functions of temperature (which is reasonable in most cases), one can apply a Kirchhoff transformation to linearize the heat conduction equation, which minimizes computing time. Then the error functional, i.e. the difference between the measured temperature response of the heater and the predicted temperature response, can be minimized, thus solving for the thermal diffusivity κ = λ/(ρc), which completes the set of parameters needed for a detailed description of the thermal properties of the lunar regolith. Results of model calculations will be presented, in which synthetic data and calibration data are used to invert the unknown thermal diffusivity of the medium by means of a modified Newton method. Due to the ill-posedness of the problem, the number of parameters to be solved for should be limited. As the model calculations reveal, a homogeneous regolith allows for a fast and accurate inversion.
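The Kirchhoff transformation mentioned above can be illustrated with a toy slab problem (our own example with an assumed conductivity law, not the LUNAR-A analysis): for λ(T) = λ0(1 + bT), the transform θ(T) = λ0(T + bT²/2) turns the nonlinear steady conduction equation d/dz(λ dT/dz) = 0 into the linear d²θ/dz² = 0, so θ varies linearly across the slab and T is recovered by inverting the transform:

```python
import numpy as np

# Assumed regolith-like conductivity law lambda(T) = lam0*(1 + b*T).
lam0, b = 0.01, 0.002
T1, T2 = 100.0, 250.0            # boundary temperatures (K)

theta = lambda T: lam0 * (T + 0.5 * b * T**2)              # Kirchhoff transform
T_of_theta = lambda th: (np.sqrt(1.0 + 2.0 * b * th / lam0) - 1.0) / b  # inverse

z = np.linspace(0.0, 1.0, 101)
th = theta(T1) + (theta(T2) - theta(T1)) * z   # theta is linear in z
T = T_of_theta(th)                             # nonlinear temperature profile

# The transformed gradient d(theta)/dz equals lambda(T)*dT/dz, i.e. the heat
# flux, which must be constant in steady state.
flux = np.diff(theta(T)) / np.diff(z)
```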

  2. Influence of the Atmospheric Model on Hanle Diagnostics

    NASA Astrophysics Data System (ADS)

    Ishikawa, Ryohko; Uitenbroek, Han; Goto, Motoshi; Iida, Yusuke; Tsuneta, Saku

    2018-05-01

We clarify the uncertainty in the inferred magnetic field vector via the Hanle diagnostics of the hydrogen Lyman-α line when the stratification of the underlying atmosphere is unknown. We calculate the anisotropy of the radiation field with plane-parallel semi-empirical models under the nonlocal thermal equilibrium condition and derive linear polarization signals for all possible parameters of magnetic field vectors based on an analytical solution of the atomic polarization and Hanle effect. We find that the semi-empirical models of the inter-network region (FAL-A) and network region (FAL-F) show similar degrees of anisotropy in the radiation field, and this similarity results in an acceptable inversion error (e.g., ~40 G instead of 50 G in field strength and ~100° instead of 90° in inclination) when FAL-A and FAL-F are swapped. However, the semi-empirical models of FAL-C (averaged quiet-Sun model including both inter-network and network regions) and FAL-P (plage regions) yield an atomic polarization that deviates from all other models, which makes it difficult to precisely determine the magnetic field vector if the correct atmospheric model is not known (e.g., the inversion error is much larger than 40% of the field strength; > 70 G instead of 50 G). These results clearly demonstrate that the choice of model atmosphere is important for Hanle diagnostics. As is well known, one way to constrain the average atmospheric stratification is to measure the center-to-limb variation of the linear polarization signals. The dependence of the center-to-limb variations on the atmospheric model is also presented in this paper.

  3. Eikonal-Based Inversion of GPR Data from the Vaucluse Karst Aquifer

    NASA Astrophysics Data System (ADS)

    Yedlin, M. J.; van Vorst, D.; Guglielmi, Y.; Cappa, F.; Gaffet, S.

    2009-12-01

In this paper, we present an easy-to-implement eikonal-based travel time inversion algorithm and apply it to borehole GPR measurement data obtained from a karst aquifer located in the Vaucluse in Provence. The boreholes are situated within a fault zone deep inside the aquifer, in the Laboratoire Souterrain à Bas Bruit (LSBB). The measurements were made using 250 MHz MALA RAMAC borehole GPR antennas. The inversion formulation is unique in its application of a fast-sweeping eikonal solver (Zhao [1]) to the minimization of an objective functional that is composed of a travel time misfit and a model-based regularization [2]. The solver is robust in the presence of large velocity contrasts, efficient, easy to implement, and does not require the use of a sorting algorithm. The computation of sensitivities, which are required for the inversion process, is achieved by tracing rays backward from receiver to source following the gradient of the travel time field [2]. A user wishing to implement this algorithm can opt to avoid the ray tracing step and simply perturb the model to obtain the required sensitivities. Despite the obvious computational inefficiency of such an approach, it is acceptable for 2D problems. The relationship between travel time and the velocity profile is non-linear, requiring an iterative approach. At each iteration, a set of matrix equations is solved to determine the model update. As the inversion continues, the weighting of the regularization parameter is adjusted until an appropriate data misfit is obtained. The inversion results, shown in the attached image, are consistent with previously obtained geological structure. Future work will look at improving inversion resolution and incorporating other measurement methodologies, with the goal of providing useful data for groundwater analysis. References: [1] H. Zhao, “A fast sweeping method for Eikonal equations,” Mathematics of Computation, vol. 74, no. 250, pp. 603-627, 2004. [2] D. Aldridge and D. Oldenburg, “Two-dimensional tomographic inversion with finite-difference traveltimes,” Journal of Seismic Exploration, vol. 2, pp. 257-274, 1993. [Figure: recovered permittivity profiles]
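The fast-sweeping scheme of [1] is compact enough to sketch directly. The toy solver below is our own numpy implementation of the standard Godunov-upwind sweep (not the authors' code), verified on a homogeneous slowness model where travel times should approximate Euclidean distance:

```python
import numpy as np

def fast_sweep(s, h, src, n_sweeps=8):
    """Godunov upwind fast-sweeping solver for |grad T| = s on a uniform grid."""
    ny, nx = s.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    # Four alternating sweep orderings cover all characteristic directions.
    orders = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for sweep in range(n_sweeps):
        rows, cols = orders[sweep % 4]
        for i in rows:
            for j in cols:
                if (i, j) == src:
                    continue
                a = min(T[i - 1, j] if i > 0 else np.inf,
                        T[i + 1, j] if i < ny - 1 else np.inf)
                b = min(T[i, j - 1] if j > 0 else np.inf,
                        T[i, j + 1] if j < nx - 1 else np.inf)
                if a > b:
                    a, b = b, a              # ensure a <= b
                if not np.isfinite(a):
                    continue                 # no upwind information yet
                f = s[i, j] * h
                if b - a >= f:               # one-sided update
                    t_new = a + f
                else:                        # two-sided (quadratic) update
                    t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                T[i, j] = min(T[i, j], t_new)
    return T

# Homogeneous unit slowness, corner source: exact along the grid axes,
# first-order accurate along the diagonal.
n, h = 41, 1.0 / 40
T = fast_sweep(np.ones((n, n)), h, (0, 0))
```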

  4. Characterization of human passive muscles for impact loads using genetic algorithm and inverse finite element methods.

    PubMed

    Chawla, A; Mukherjee, S; Karthikeyan, B

    2009-02-01

The objective of this study is to identify the dynamic material properties of human passive muscle tissues at the strain rates relevant to automobile crashes. A novel methodology involving a genetic algorithm (GA) and the finite element method is implemented to estimate the material parameters by inverse mapping the impact test data. Isolated unconfined impact tests for average strain rates ranging from 136 s(-1) to 262 s(-1) are performed on muscle tissues. Passive muscle tissues are modelled as an isotropic, linear and viscoelastic material using the three-element Zener model available in the PAMCRASH(TM) explicit finite element software. In the GA-based identification process, fitness values are calculated by comparing the estimated finite element forces with the measured experimental forces. Linear viscoelastic material parameters (bulk modulus, short-term shear modulus and long-term shear modulus) are thus identified at strain rates of 136 s(-1), 183 s(-1) and 262 s(-1) for modelling muscles. The optimal parameters extracted in this study are comparable with parameters reported in the literature. Bulk modulus and short-term shear modulus are found to be more influential than long-term shear modulus in predicting the stress-strain response at the considered strain rates. Variations within the sets of parameters identified at different strain rates indicate the need for a new or improved material model capable of capturing the strain rate dependency of the passive muscle response with a single set of material parameters over a wide range of strain rates.
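The GA-based identification loop can be sketched generically. This is our own real-coded GA on a stand-in two-parameter forward model, not the PAMCRASH workflow, and the "measured" forces are synthetic:

```python
import numpy as np

# Stand-in viscoelastic-style response with two unknown moduli (hypothetical).
rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 50)
true = np.array([5.0, 2.0])
force = lambda p: p[0] * np.exp(-t) + p[1] * (1 - np.exp(-t))
f_meas = force(true)                     # synthetic "experimental" forces

def fitness(pop):
    # Fitness = negative sum-of-squares misfit between simulated and measured forces.
    return -np.array([np.sum((force(p) - f_meas) ** 2) for p in pop])

pop = rng.uniform(0.0, 10.0, (40, 2))    # initial random population
for _ in range(100):
    fit = fitness(pop)
    order = np.argsort(fit)[::-1]        # best first
    parents = pop[order[:20]]            # truncation selection
    i, j = rng.integers(0, 20, (2, 40))
    alpha = rng.uniform(size=(40, 1))
    children = alpha * parents[i] + (1 - alpha) * parents[j]  # blend crossover
    children += rng.normal(0.0, 0.1, children.shape)          # Gaussian mutation
    children[0] = parents[0]             # elitism: keep the best unchanged
    pop = children
best = pop[np.argmax(fitness(pop))]
```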

  5. 1r2dinv: A finite-difference model for inverse analysis of two dimensional linear or radial groundwater flow

    USGS Publications Warehouse

    Bohling, Geoffrey C.; Butler, J.J.

    2001-01-01

We have developed a program for inverse analysis of two-dimensional linear or radial groundwater flow problems. The program, 1r2dinv, uses standard finite difference techniques to solve the groundwater flow equation for a horizontal or vertical plane with heterogeneous properties. In radial mode, the program simulates flow to a well in a vertical plane, transforming the radial flow equation into an equivalent problem in Cartesian coordinates. The physical parameters in the model are horizontal or x-direction hydraulic conductivity, anisotropy ratio (vertical to horizontal conductivity in a vertical model, y-direction to x-direction in a horizontal model), and specific storage. The program allows the user to specify arbitrary and independent zonations of these three parameters and also to specify which zonal parameter values are known and which are unknown. The Levenberg-Marquardt algorithm is used to estimate parameters from observed head values. Particularly powerful features of the program are the ability to perform simultaneous analysis of heads from different tests and the inclusion of the wellbore in the radial mode. These capabilities allow the program to be used for analysis of suites of well tests, such as multilevel slug tests or pumping tests in a tomographic format. The combination of information from tests stressing different vertical levels in an aquifer provides the means for accurately estimating vertical variations in conductivity, a factor profoundly influencing contaminant transport in the subsurface. © 2001 Elsevier Science Ltd. All rights reserved.
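The Levenberg-Marquardt step used by such parameter-estimation codes can be illustrated on a toy problem. This is our own sketch with a stand-in drawdown-style forward model, not the 1r2dinv code:

```python
import numpy as np

# Synthetic "head" observations from a hypothetical two-parameter model
# h(t) = theta0 * (1 - exp(-theta1 * t)).
t = np.linspace(0.1, 5.0, 30)
theta_true = np.array([2.0, 0.8])
h_obs = theta_true[0] * (1 - np.exp(-theta_true[1] * t))

def forward(th):
    return th[0] * (1 - np.exp(-th[1] * t))

def jac(th):
    e = np.exp(-th[1] * t)
    return np.column_stack([1 - e, th[0] * t * e])

theta = np.array([1.0, 0.1])                 # initial guess
mu = 1e-2                                    # LM damping parameter
for _ in range(100):
    r = h_obs - forward(theta)
    J = jac(theta)
    # Damped normal equations: (J^T J + mu I) step = J^T r
    step = np.linalg.solve(J.T @ J + mu * np.eye(2), J.T @ r)
    if np.sum((h_obs - forward(theta + step)) ** 2) < np.sum(r ** 2):
        theta = theta + step
        mu *= 0.5                            # accepted: move toward Gauss-Newton
    else:
        mu *= 2.0                            # rejected: move toward gradient descent
```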

  6. Offset-electrode profile acquisition strategy for electrical resistivity tomography

    NASA Astrophysics Data System (ADS)

    Robbins, Austin R.; Plattner, Alain

    2018-04-01

    We present an electrode layout strategy that allows electrical resistivity profiles to image the third dimension close to the profile plane. This "offset-electrode profile" approach involves laterally displacing electrodes away from the profile line in an alternating fashion and then inverting the resulting data using three-dimensional electrical resistivity tomography software. In our synthetic and field surveys, the offset-electrode method succeeds in revealing three-dimensional structures in the vicinity of the profile plane, which we could not achieve using three-dimensional inversions of linear profiles. We confirm and explain the limits of linear electrode profiles through a discussion of the three-dimensional sensitivity patterns: For a homogeneous starting model together with a linear electrode layout, all sensitivities remain symmetric with respect to the profile plane through each inversion step. This limitation can be overcome with offset-electrode layouts by breaking the symmetry pattern among the sensitivities. Thanks to freely available powerful three-dimensional resistivity tomography software and cheap modern computing power, the requirement for full three-dimensional calculations does not create a significant burden and renders the offset-electrode approach a cost-effective method. By offsetting the electrodes in an alternating pattern, as opposed to laying the profile out in a U-shape, we minimize shortening the profile length.

  7. Iterative methods for mixed finite element equations

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.

    1985-01-01

Iterative strategies for the solution of indefinite systems of equations arising from the mixed finite element method are investigated in this paper, with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived, which is then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented: constant-metric iterations, which do not involve updating the preconditioner; and variable-metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance with application to linear and nonlinear model problems.

  8. Object-based inversion of crosswell radar tomography data to monitor vegetable oil injection experiments

    USGS Publications Warehouse

    Lane, John W.; Day-Lewis, Frederick D.; Versteeg, Roelof J.; Casey, Clifton C.

    2004-01-01

Crosswell radar methods can be used to dynamically image ground-water flow and mass transport associated with tracer tests, hydraulic tests, and natural physical processes, for improved characterization of preferential flow paths and complex aquifer heterogeneity. Unfortunately, because the raypath coverage of the interwell region is limited by the borehole geometry, the tomographic inverse problem is typically underdetermined, and tomograms may contain artifacts such as spurious blurring or streaking that confuse interpretation. We implement object-based inversion (using a constrained, non-linear, least-squares algorithm) to improve results from pixel-based inversion approaches that utilize regularization criteria, such as damping or smoothness. Our approach requires pre- and post-injection travel-time data. Parameterization of the image plane comprises a small number of objects rather than a large number of pixels, resulting in an overdetermined problem that reduces the need for prior information. The nature and geometry of the objects are based on hydrologic insight into aquifer characteristics, the nature of the experiment, and the planned use of the geophysical results. The object-based inversion is demonstrated using synthetic and crosswell radar field data acquired during vegetable-oil injection experiments at a site in Fridley, Minnesota. The region where oil has displaced ground water is discretized as a stack of rectangles of variable horizontal extents. The inversion provides the geometry of the affected region and an estimate of the radar slowness change for each rectangle. Applying petrophysical models to these results and porosity from neutron logs, we estimate the vegetable-oil emulsion saturation in various layers. Using synthetic- and field-data examples, object-based inversion is shown to be an effective strategy for inverting crosswell radar tomography data acquired to monitor the emplacement of vegetable-oil emulsions. A principal advantage of object-based inversion is that it yields images that hydrologists and engineers can easily interpret and use for model calibration.

  9. Including geological information in the inverse problem of palaeothermal reconstruction

    NASA Astrophysics Data System (ADS)

    Trautner, S.; Nielsen, S. B.

    2003-04-01

A reliable reconstruction of sediment thermal history is of central importance to the assessment of hydrocarbon potential and the understanding of basin evolution. However, only rarely do sedimentation history and borehole data in the form of present-day temperatures and vitrinite reflectance constrain the past thermal evolution to a useful level of accuracy (Gallagher and Sambridge, 1992; Nielsen, 1998; Trautner and Nielsen, 2003). This is reflected in the inverse solutions to the problem of determining heat flow history from borehole data: the recent heat flow is constrained by data, while older values are governed by the chosen a priori heat flow. In this paper we reduce this problem by including geological information in the inverse problem. Through a careful analysis of geological and geophysical data, the timing of the tectonic processes which may influence heat flow can be inferred. The heat flow history is then parameterised to allow for the temporal variations characteristic of the different tectonic events. The inversion scheme applies a Markov chain Monte Carlo (MCMC) approach (Nielsen and Gallagher, 1999; Ferrero and Gallagher, 2002), which efficiently explores the model space and furthermore samples the posterior probability distribution of the model. The technique is demonstrated on wells in the northern North Sea with emphasis on the stretching event in the Late Jurassic. The wells are characterised by maximum sediment temperature at the present day, which is the worst case for resolution of the past thermal history because vitrinite reflectance is determined mainly by the maximum temperature. Including geological information significantly improves the thermal resolution. Ferrero, C. and Gallagher, K., 2002. Stochastic thermal history modelling. 1. Constraining heat flow histories and their uncertainty. Marine and Petroleum Geology, 19, 633-648. Gallagher, K. and Sambridge, M., 1992. The resolution of past heat flow in sedimentary basins from non-linear inversion of geochemical data: the smoothest model approach, with synthetic examples. Geophysical Journal International, 109, 78-95. Nielsen, S.B., 1998. Inversion and sensitivity analysis in basin modelling. Geoscience 98, Keele University, UK, Abstract Volume, 56. Nielsen, S.B. and Gallagher, K., 1999. Efficient sampling of 3-D basin modelling scenarios. Extended Abstracts Volume, 1999 AAPG International Conference & Exhibition, Birmingham, England, September 12-15, 1999, p. 369-372. Trautner, S. and Nielsen, S.B., 2003. 2-D inverse thermal modelling in the Norwegian shelf using Fast Approximate Forward (FAF) solutions. In: Marzi, R. and Duppenbecker, S. (Eds.), Multi-Dimensional Basin Modeling, AAPG, in press.
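The MCMC machinery can be illustrated with a deliberately simple random-walk Metropolis sampler. This is our own one-parameter toy with a stand-in linear forward model, not the authors' basin-modelling sampler:

```python
import numpy as np

# Toy posterior sampling for a single "heat flow" parameter q given noisy
# linear data d = g*q + noise; g is a hypothetical sensitivity vector.
rng = np.random.default_rng(3)
g = np.linspace(0.5, 1.5, 25)
q_true, sigma = 60.0, 2.0
d = g * q_true + sigma * rng.standard_normal(25)

def log_post(q):
    # Flat prior, Gaussian likelihood.
    return -0.5 * np.sum((d - g * q) ** 2) / sigma**2

q, lp = 40.0, log_post(40.0)          # deliberately poor starting value
samples = []
for _ in range(20000):
    q_prop = q + rng.normal(0.0, 2.0)            # random-walk proposal
    lp_prop = log_post(q_prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept rule
        q, lp = q_prop, lp_prop
    samples.append(q)
post = np.array(samples[5000:])                  # discard burn-in
```

The retained samples approximate the posterior distribution of q, which is the "sampling of the posterior probability distribution" referred to in the abstract.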

  10. Optimization, Monotonicity and the Determination of Nash Equilibria — An Algorithmic Analysis

    NASA Astrophysics Data System (ADS)

    Lozovanu, D.; Pickl, S. W.; Weber, G.-W.

    2004-08-01

    This paper is concerned with the optimization of a nonlinear time-discrete model exploiting the special structure of the underlying cost game and the property of inverse matrices. The costs are interlinked by a system of linear inequalities. It is shown that, if the players cooperate, i.e., minimize the sum of all the costs, they achieve a Nash equilibrium. In order to determine Nash equilibria, the simplex method can be applied with respect to the dual problem. An introduction into the TEM model and its relationship to an economic Joint Implementation program is given. The equivalence problem is presented. The construction of the emission cost game and the allocation problem is explained. The assumption of inverse monotony for the matrices leads to a new result in the area of such allocation problems. A generalization of such problems is presented.

  11. Shape-based ultrasound tomography using a Born model with application to high intensity focused ultrasound therapy.

    PubMed

    Ulker Karbeyaz, Başak; Miller, Eric L; Cleveland, Robin O

    2008-05-01

A shape-based ultrasound tomography method is proposed to reconstruct ellipsoidal objects using a linearized scattering model. The method is motivated by the desire to detect the presence of lesions created by high intensity focused ultrasound (HIFU) in applications of cancer therapy. The computational size and limited-view nature of the relevant three-dimensional inverse problem render impractical the use of traditional pixel-based reconstruction methods. However, by employing a shape-based parametrization it is only necessary to estimate a small number of unknowns describing the geometry of the lesion, assumed in this paper to be ellipsoidal. The details of the shape-based nonlinear inversion method are provided. Results obtained from a commercial ultrasound scanner and a tissue phantom containing a HIFU-like lesion demonstrate the feasibility of the approach, where a 20 mm x 5 mm x 6 mm ellipsoidal inclusion was detected with an accuracy of around 5%.

  12. Self-organizing radial basis function networks for adaptive flight control and aircraft engine state estimation

    NASA Astrophysics Data System (ADS)

    Shankar, Praveen

The performance of nonlinear control algorithms such as feedback linearization and dynamic inversion is heavily dependent on the fidelity of the dynamic model being inverted. Incomplete or incorrect knowledge of the dynamics results in reduced performance and may lead to instability. Augmenting the baseline controller with approximators that utilize a parametrization structure adapted online reduces the effect of this error between the design model and the actual dynamics. However, currently existing parameterizations employ a fixed set of basis functions that do not guarantee arbitrary tracking error performance. To address this problem, we develop a self-organizing parametrization structure that is proven to be stable and can guarantee arbitrary tracking error performance. The training algorithm to grow the network and adapt the parameters is derived from Lyapunov theory. In addition to growing the network of basis functions, a pruning strategy is incorporated to keep the size of the network as small as possible. This algorithm is implemented on a high-performance flight vehicle such as the F-15 military aircraft. The baseline dynamic inversion controller is augmented with a Self-Organizing Radial Basis Function Network (SORBFN) to minimize the effect of the inversion error which may occur due to imperfect modeling, approximate inversion or sudden changes in aircraft dynamics. The dynamic inversion controller is simulated for different situations including control surface failures, modeling errors and external disturbances, with and without the adaptive network. A performance measure of maximum tracking error is specified for both controllers a priori. Excellent tracking error minimization to a pre-specified level was achieved using the adaptive approximation-based controller, while the baseline dynamic inversion controller failed to meet this performance specification. The performance of the SORBFN-based controller is also compared to a fixed RBF network based adaptive controller. While the fixed RBF network based controller, tuned to compensate for control surface failures, fails to achieve the same performance under modeling uncertainty and disturbances, the SORBFN achieves good tracking convergence under all error conditions.

  13. On the optimization of electromagnetic geophysical data: Application of the PSO algorithm

    NASA Astrophysics Data System (ADS)

    Godio, A.; Santilano, A.

    2018-01-01

    The Particle Swarm Optimization (PSO) algorithm solves constrained multi-parameter problems and is suitable for the simultaneous optimization of linear and nonlinear problems, provided the forward modeling rests on a good understanding of the ill-posed geophysical inverse problem. We apply PSO to the geophysical inverse problem of inferring an Earth model, i.e. the electrical resistivity at depth, consistent with the observed geophysical data. The method does not require an initial model and can easily be constrained according to external information for each single sounding. The optimization process for estimating the model parameters from the electromagnetic soundings focuses on the choice of the objective function to be minimized. We discuss the possibility of introducing vertical and lateral constraints into the objective function, with an Occam-like regularization. A sensitivity analysis allowed us to check the performance of the algorithm. The reliability of the approach is tested on synthetic and real Audio-Magnetotelluric (AMT) and Long Period MT data. The method is able to solve complex problems and allows us to estimate the a posteriori distribution of the model parameters.
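As a sketch of how PSO drives such an inversion, the following minimal implementation minimizes a misfit within box constraints. The swarm parameters and the quadratic toy objective are illustrative assumptions; the paper's actual objective involves an electromagnetic forward model and Occam-like regularization.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize objective(m) over a box; bounds is a (dim, 2) array."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))   # positions
    v = np.zeros_like(x)                                    # velocities
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                          # box constraints
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Toy misfit: recover a hypothetical two-layer resistivity model (ohm-m).
truth = np.array([100.0, 10.0])
obj = lambda m: float(np.sum((m - truth) ** 2))
best, best_f = pso_minimize(obj, np.array([[1.0, 1000.0], [1.0, 1000.0]]))
```

No initial model is required, only bounds, which mirrors the property the abstract highlights.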

  14. Extensions of the Ferry shear wave model for active linear and nonlinear microrheology

    PubMed Central

    Mitran, Sorin M.; Forest, M. Gregory; Yao, Lingxing; Lindley, Brandon; Hill, David B.

    2009-01-01

    The classical oscillatory shear wave model of Ferry et al. [J. Polym. Sci. 2:593-611, (1947)] is extended for active linear and nonlinear microrheology. In the Ferry protocol, oscillation and attenuation lengths of the shear wave measured from strobe photographs determine storage and loss moduli at each frequency of plate oscillation. The microliter volumes typical in biology require modifications of experimental method and theory. Microbead tracking replaces strobe photographs. Reflection from the top boundary yields counterpropagating modes which are modeled here for linear and nonlinear viscoelastic constitutive laws. Furthermore, bulk imposed strain is easily controlled, and we explore the onset of normal stress generation and shear thinning using nonlinear viscoelastic models. For this paper, we present the theory, exact linear and nonlinear solutions where possible, and simulation tools more generally. We then illustrate errors in inverse characterization by application of the Ferry formulas, due to both suppression of wave reflection and nonlinearity, even if there were no experimental error. This shear wave method presents an active and nonlinear analog of the two-point microrheology of Crocker et al. [Phys. Rev. Lett. 85: 888 - 891 (2000)]. Nonlocal (spatially extended) deformations and stresses are propagated through a small volume sample, on wavelengths long relative to bead size. The setup is ideal for exploration of nonlinear threshold behavior. PMID:20011614
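The Ferry-type recovery of moduli from a measured wavelength and attenuation length can be illustrated with the standard single-wave relation G* = rho*omega^2 / k*^2 for a complex wavenumber k* = k - i*alpha; as the abstract notes, reflections and nonlinearity bias exactly this kind of estimate. A minimal sketch, with illustrative input values:

```python
import numpy as np

def ferry_moduli(rho, omega, wavelength, atten_length):
    """Storage and loss moduli from one attenuating shear wave.

    rho: density (kg/m^3); omega: angular frequency (rad/s);
    wavelength, atten_length: measured lengths (m).
    """
    k = 2 * np.pi / wavelength          # real wavenumber (1/m)
    alpha = 1.0 / atten_length          # attenuation coefficient (1/m)
    k_c = k - 1j * alpha                # complex wavenumber
    G = rho * omega ** 2 / k_c ** 2     # complex shear modulus (Pa)
    return G.real, G.imag               # G' (storage), G'' (loss)

# Sanity check on a purely elastic medium (attenuation length -> infinity):
# G' reduces to rho*(omega/k)^2 and G'' vanishes.
Gp, Gpp = ferry_moduli(rho=1000.0, omega=2 * np.pi * 10,
                       wavelength=0.01, atten_length=1e12)
```

This single-wave formula is what breaks down in microliter volumes, where the reflected counterpropagating mode modeled in the paper contaminates the measurement.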

  15. Integration of Visual and Joint Information to Enable Linear Reaching Motions

    NASA Astrophysics Data System (ADS)

    Eberle, Henry; Nasuto, Slawomir J.; Hayashi, Yoshikatsu

    2017-01-01

    A new dynamics-driven control law was developed for a robot arm, based on a feedback control law that uses a linear transformation directly from work space to joint space. This was validated using a simulation of a two-joint planar robot arm, and an optimisation algorithm was used to find the optimum matrix for generating straight end-effector trajectories in the work space. We found that this linear matrix can be decomposed into the rotation matrix representing the orientation of the goal direction and the joint relation matrix (MJRM) representing the joint response to errors in the Cartesian work space. The decomposition of the linear matrix indicates the separation of path planning in terms of the direction of the reaching motion and the synergies of joint coordination. Once the MJRM is numerically obtained, feedforward planning of the reaching direction allows us to produce asymptotically stable, linear trajectories in the entire work space through rotational transformation, completely avoiding the use of inverse kinematics. Our dynamics-driven control law suggests an interesting framework for interpreting human reaching motion control, alternative to the dominant inverse-method based explanations, that avoids the expensive computation of inverse kinematics and of point-to-point control along desired trajectories.

  16. Linear distributed source modeling of local field potentials recorded with intra-cortical electrode arrays.

    PubMed

    Hindriks, Rikkert; Schmiedt, Joscha; Arsiwalla, Xerxes D; Peter, Alina; Verschure, Paul F M J; Fries, Pascal; Schmid, Michael C; Deco, Gustavo

    2017-01-01

    Planar intra-cortical electrode (Utah) arrays provide a unique window into the spatial organization of cortical activity. Reconstruction of the current source density (CSD) underlying such recordings, however, requires "inverting" Poisson's equation. For inter-laminar recordings, this is commonly done by the CSD method, which consists in taking the second-order spatial derivative of the recorded local field potentials (LFPs). Although the CSD method has been tremendously successful in mapping the current generators underlying inter-laminar LFPs, its application to planar recordings is more challenging. While for inter-laminar recordings the CSD method seems reasonably robust against violations of its assumptions, it is unclear to what extent this holds for planar recordings. One of the objectives of this study is to characterize the conditions under which the CSD method can be successfully applied to Utah array data. Using forward modeling, we find that for spatially coherent CSDs, the CSD method yields inaccurate reconstructions due to volume-conducted contamination from currents in deeper cortical layers. An alternative approach is to "invert" a constructed forward model. The advantage of this approach is that any a priori knowledge about the geometrical and electrical properties of the tissue can be taken into account. Although several inverse methods have been proposed for LFP data, the applicability of existing electroencephalographic (EEG) and magnetoencephalographic (MEG) inverse methods to LFP data is largely unexplored. Another objective of our study, therefore, is to assess the applicability of the most commonly used EEG/MEG inverse methods to Utah array data. Our main conclusion is that these inverse methods provide more accurate CSD reconstructions than the CSD method. We illustrate the inverse methods using event-related potentials recorded from primary visual cortex of a macaque monkey during a motion discrimination task.
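For reference, the inter-laminar CSD method mentioned above amounts to a discrete second spatial derivative of the LFP depth profile, scaled by tissue conductivity. A minimal sketch, with an assumed, hypothetical conductivity value and no smoothing or boundary handling:

```python
import numpy as np

def csd_second_derivative(lfp, h, sigma=0.3):
    """CSD estimate at interior contacts.

    lfp: potentials (V) at equally spaced depths; h: contact spacing (m);
    sigma: assumed tissue conductivity (S/m). Returns CSD in A/m^3.
    """
    # Central second difference along the depth axis
    d2 = (lfp[:-2] - 2 * lfp[1:-1] + lfp[2:]) / h ** 2
    return -sigma * d2

# A quadratic potential profile has a constant second derivative, so the
# recovered CSD is constant across the six interior contacts.
z = np.arange(8) * 100e-6              # 8 contacts, 100 um spacing
lfp = 2.0 * z ** 2                     # d2V/dz2 = 4.0 V/m^2 everywhere
csd = csd_second_derivative(lfp, 100e-6)
```

The study's point is that this simple operator, adequate between laminae, is contaminated by volume conduction when applied across a planar array, motivating the model-based inverse methods instead.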

  17. Linear distributed source modeling of local field potentials recorded with intra-cortical electrode arrays

    PubMed Central

    Schmiedt, Joscha; Arsiwalla, Xerxes D.; Peter, Alina; Verschure, Paul F. M. J.; Fries, Pascal; Schmid, Michael C.; Deco, Gustavo

    2017-01-01

    Planar intra-cortical electrode (Utah) arrays provide a unique window into the spatial organization of cortical activity. Reconstruction of the current source density (CSD) underlying such recordings, however, requires “inverting” Poisson’s equation. For inter-laminar recordings, this is commonly done by the CSD method, which consists in taking the second-order spatial derivative of the recorded local field potentials (LFPs). Although the CSD method has been tremendously successful in mapping the current generators underlying inter-laminar LFPs, its application to planar recordings is more challenging. While for inter-laminar recordings the CSD method seems reasonably robust against violations of its assumptions, it is unclear to what extent this holds for planar recordings. One of the objectives of this study is to characterize the conditions under which the CSD method can be successfully applied to Utah array data. Using forward modeling, we find that for spatially coherent CSDs, the CSD method yields inaccurate reconstructions due to volume-conducted contamination from currents in deeper cortical layers. An alternative approach is to “invert” a constructed forward model. The advantage of this approach is that any a priori knowledge about the geometrical and electrical properties of the tissue can be taken into account. Although several inverse methods have been proposed for LFP data, the applicability of existing electroencephalographic (EEG) and magnetoencephalographic (MEG) inverse methods to LFP data is largely unexplored. Another objective of our study, therefore, is to assess the applicability of the most commonly used EEG/MEG inverse methods to Utah array data. Our main conclusion is that these inverse methods provide more accurate CSD reconstructions than the CSD method. We illustrate the inverse methods using event-related potentials recorded from primary visual cortex of a macaque monkey during a motion discrimination task. PMID:29253006

  18. Passive acoustic measurement of bedload grain size distribution using self-generated noise

    NASA Astrophysics Data System (ADS)

    Petrut, Teodor; Geay, Thomas; Gervaise, Cédric; Belleudy, Philippe; Zanker, Sebastien

    2018-01-01

    Monitoring sediment transport processes in rivers is of particular interest to engineers and scientists to assess the stability of rivers and hydraulic structures. Various methods for describing sediment transport processes have been proposed, using conventional or surrogate measurement techniques. This paper addresses the topic of the passive acoustic monitoring of bedload transport in rivers and especially the estimation of the bedload grain size distribution from self-generated noise. It discusses the feasibility of linking the acoustic signal spectrum shape to the bedload grain sizes involved in elastic impacts with the river bed, treated as a massive slab. The bedload grain size distribution is estimated by a regularized algebraic inversion scheme fed with the power spectral density of river noise estimated from one hydrophone. The inversion methodology relies upon a physical model that predicts the acoustic field generated by the collision between rigid bodies. Here we propose an analytic model of the acoustic energy spectrum generated by the impacts between a sphere and a slab. The proposed model computes the power spectral density of bedload noise as a linear combination of analytic energy spectra weighted by the grain size distribution. The algebraic system of equations is then solved by least squares optimization and solution regularization methods. The result of the inversion leads directly to the estimation of the bedload grain size distribution. The inversion method was applied to real acoustic data from passive acoustics experiments carried out on the Isère River, in France. The inversion of in situ measured spectra yields good estimates of the grain size distribution, fairly close to those obtained by physical sampling instruments. These results illustrate the potential of the hydrophone technique to be used as a standalone method that could ensure high spatial and temporal resolution measurements of sediment transport in rivers.
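The regularized algebraic inversion step can be sketched as damped (Tikhonov-style) least squares on A x = d, where the columns of A would hold the analytic impact spectra and x the grain-size fractions. The matrix below is synthetic, not the paper's physical model, and the damping weight is an assumed value:

```python
import numpy as np

def tikhonov_solve(A, d, alpha):
    """Minimize ||A x - d||^2 + alpha^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha ** 2 * np.eye(n), A.T @ d)

rng = np.random.default_rng(1)
A = rng.random((50, 5))                 # synthetic "spectra": 50 freqs x 5 size classes
x_true = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # grain-size fractions (sum to 1)
d = A @ x_true + 1e-4 * rng.standard_normal(50)  # noisy observed spectrum
x_est = tikhonov_solve(A, d, alpha=1e-3)
```

The regularization term is what keeps the recovered distribution stable when the impact spectra of neighboring size classes are nearly collinear.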

  19. Influence of Gridded Standoff Measurement Resolution on Numerical Bathymetric Inversion

    NASA Astrophysics Data System (ADS)

    Hesser, T.; Farthing, M. W.; Brodie, K.

    2016-02-01

    The bathymetry from the surfzone to the shoreline undergoes frequent, active change due to wave energy interacting with the seafloor. Methodologies to measure bathymetry range from point-source in-situ instruments, vessel-mounted single-beam or multi-beam sonar surveys and airborne bathymetric lidar to inversion techniques applied to standoff measurements of wave processes from video or radar imagery. Each type of measurement has unique sources of error and spatial and temporal resolution and availability. Numerical bathymetry estimation frameworks can use these disparate data types in combination with model-based inversion techniques to produce a "best estimate of bathymetry" at a given time. Understanding how the sources of error and the varying spatial or temporal resolution of each data type affect the end result is critical for determining best practices and, in turn, increasing the accuracy of bathymetry estimation techniques. In this work, we consider an initial step in the development of a complete framework for estimating bathymetry in the nearshore by focusing on gridded standoff measurements and in-situ point observations in model-based inversion at the U.S. Army Corps of Engineers Field Research Facility in Duck, NC. The standoff measurement methods return wave parameters computed using linear wave theory from the direct measurements. These gridded datasets can have temporal and spatial resolutions that do not match the desired model parameters and could therefore reduce the accuracy of these methods. Specifically, we investigate the effect of numerical resolution on the accuracy of an Ensemble Kalman Filter bathymetric inversion technique in relation to the spatial and temporal resolution of the gridded standoff measurements. The accuracies of the bathymetric estimates are compared with both high-resolution Real Time Kinematic (RTK) single-beam surveys as well as alternative direct in-situ measurements using sonic altimeters.
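A single Ensemble Kalman Filter analysis step of the kind used in such bathymetric inversion can be sketched as follows. The shallow-water observation operator c = sqrt(g*h) and all ensemble sizes and noise levels are illustrative assumptions, not the framework's actual wave model:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_std, h_op, rng):
    """Scalar-state EnKF analysis with perturbed observations.

    ensemble: (N,) prior depth samples; obs: one observed wave parameter.
    """
    hx = h_op(ensemble)                      # predicted observations
    cov_xy = np.cov(ensemble, hx)[0, 1]      # sample state-obs covariance
    var_y = np.var(hx, ddof=1) + obs_std ** 2
    gain = cov_xy / var_y                    # scalar Kalman gain
    perturbed = obs + obs_std * rng.standard_normal(ensemble.shape)
    return ensemble + gain * (perturbed - hx)

g = 9.81
h_op = lambda h: np.sqrt(g * np.clip(h, 0.01, None))   # phase speed (m/s)
rng = np.random.default_rng(2)
prior = 4.0 + 1.0 * rng.standard_normal(200)           # prior depth ensemble (m)
truth = 5.0
obs = float(h_op(np.array([truth]))[0])                # synthetic observation
post = enkf_update(prior, obs, obs_std=0.05, h_op=h_op, rng=rng)
```

Because the gain is built from ensemble statistics, the same update applies unchanged whether the observation comes from a sparse altimeter or a gridded remote-sensing product, which is what makes the filter attractive for fusing the disparate data types discussed above.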

  20. Analysis of Interval Changes on Mammograms for Computer Aided Diagnosis

    DTIC Science & Technology

    2000-05-01

    [Fragmentary OCR excerpt] The digitizer was calibrated so that the gray values were linearly and inversely proportional to the optical density (OD) within the range 0-4 OD ... average pixel values in the template and ROI, respectively ... alignment of the breast regions ... inversely proportional to the radial distance r from the nipple.

  1. Application of linearized inverse scattering methods for the inspection in steel plates embedded in concrete structures

    NASA Astrophysics Data System (ADS)

    Tsunoda, Takaya; Suzuki, Keigo; Saitoh, Takahiro

    2018-04-01

    This study develops a method to visualize the state of the steel-concrete interface with ultrasonic testing (UT). Scattered waves are obtained in the UT pitch-catch mode from the surface of the concrete. A discrete wavelet transform is applied in order to extract echoes scattered from the steel-concrete interface. Linearized inverse scattering methods (LISM) are then used for imaging the interface. The results show that LISM with the Born and Kirchhoff approximations provide clear images of the target.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechant, Lawrence J.

    Wave packet analysis provides a connection between linear small-disturbance theory and subsequent nonlinear turbulent spot flow behavior. The traditional association between linear stability analysis and nonlinear wave form is developed via the method of stationary phase, whereby asymptotic (simplified) mean flow solutions are used to estimate dispersion behavior and stationary phase approximations are used to invert the associated Fourier transform. The resulting process typically requires inverting nonlinear algebraic equations, which is best performed numerically and partially diminishes the value of the approximation compared to more complete approaches, e.g. DNS or linear/nonlinear adjoint methods. To obtain a simpler, closed-form analytical result, the complete packet solution is modeled via approximate amplitude (linear convected kinematic wave initial value problem) and local sinusoidal (wave equation) expressions. Significantly, the initial value for the kinematic wave transport expression follows from a separable variable coefficient approximation to the linearized pressure fluctuation Poisson expression. The resulting amplitude solution, while approximate in nature, nonetheless appears to mimic many of the global features, e.g. transitional flow intermittency and pressure fluctuation magnitude behavior. A low-wave-number wave packet model also recovers meaningful auto-correlation and low-frequency spectral behaviors.

  3. A nonlinear Kalman filtering approach to embedded control of turbocharged diesel engines

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Siano, Pierluigi; Arsie, Ivan

    2014-10-01

    The development of efficient embedded control for turbocharged Diesel engines requires the programming of elaborate nonlinear control and filtering methods. To this end, in this paper nonlinear control for turbocharged Diesel engines is developed with the use of differential flatness theory and the Derivative-free nonlinear Kalman Filter. It is shown that the dynamic model of the turbocharged Diesel engine is differentially flat and admits dynamic feedback linearization. It is also shown that the dynamic model can be written in the linear Brunovsky canonical form, for which a state feedback controller can easily be designed. To compensate for modeling errors and external disturbances, the Derivative-free nonlinear Kalman Filter is used and redesigned as a disturbance observer. The filter consists of the Kalman Filter recursion on the linearized equivalent of the Diesel engine model and of an inverse transformation based on differential flatness theory, which makes it possible to obtain estimates of the state variables of the initial nonlinear model. Once the disturbance variables are identified, it is possible to compensate for them by including an additional control term in the feedback loop. The efficiency of the proposed control method is tested through simulation experiments.
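The disturbance-observer idea, a Kalman filter run on the linearized (Brunovsky-form) model augmented with a disturbance state, can be sketched for a double integrator x1' = x2, x2' = u + d with unknown constant d. The Euler discretization and the noise levels below are illustrative assumptions, not the engine model of the paper:

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt, 0.0],
              [0.0, 1.0, dt],      # disturbance d drives the velocity state
              [0.0, 0.0, 1.0]])    # d modeled as (nearly) constant
B = np.array([0.0, dt, 0.0])
H = np.array([[1.0, 0.0, 0.0]])    # only position is measured
Q = np.diag([1e-8, 1e-8, 1e-6])    # small process noise keeps d adaptable
R = np.array([[1e-6]])             # matches the simulated sensor noise

def kf_step(x, P, u, z):
    x = F @ x + B * u                          # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()          # update
    P = (np.eye(3) - K @ H) @ P
    return x, P

rng = np.random.default_rng(4)
d_true, u = 0.5, 0.0                           # unknown constant disturbance
xt = np.zeros(2)                               # true position, velocity
x, P = np.zeros(3), np.eye(3)
for _ in range(5000):
    xt = np.array([xt[0] + dt * xt[1], xt[1] + dt * (u + d_true)])
    z = xt[0] + 1e-3 * rng.standard_normal(1)  # noisy position measurement
    x, P = kf_step(x, P, u, z)
```

The converged third state estimate is the disturbance, which a controller can then cancel with an additional feedback term, as described in the abstract.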

  4. A scalable, fully implicit algorithm for the reduced two-field low-β extended MHD model

    DOE PAGES

    Chacon, Luis; Stanier, Adam John

    2016-12-01

    Here, we demonstrate a scalable fully implicit algorithm for the two-field low-β extended MHD model. This reduced model describes plasma behavior in the presence of strong guide fields, and is of significant practical impact both in nature and in laboratory plasmas. The model displays strong hyperbolic behavior, as manifested by the presence of fast dispersive waves, which make a fully implicit treatment very challenging. In this study, we employ a Jacobian-free Newton–Krylov nonlinear solver, for which we propose a physics-based preconditioner that renders the linearized set of equations suitable for inversion with multigrid methods. As a result, the algorithm is shown to scale both algorithmically (i.e., the iteration count is insensitive to grid refinement and timestep size) and in parallel in a weak-scaling sense, with the wall-clock time scaling weakly with the number of cores for up to 4096 cores. For a 4096 × 4096 mesh, we demonstrate a wall-clock-time speedup of ~6700 with respect to explicit algorithms. The model is validated linearly (against linear theory predictions) and nonlinearly (against fully kinetic simulations), demonstrating excellent agreement.

  5. Robust Adaptive Flight Control Design of Air-breathing Hypersonic Vehicles

    DTIC Science & Technology

    2016-12-07

    dynamic inversion controller design for a non -minimum phase hypersonic vehicle is derived by Kuipers et al. [2008]. Moreover, integrated guidance and...stabilization time for inner loop variables is lesser than the intermediate loop variables because of the three-loop-control design methodology . The control...adaptive design . Control Engineering Practice, 2016. Michael A Bolender and David B Doman. A non -linear model for the longitudinal dynamics of a

  6. How to Detect the Location and Time of a Covert Chemical Attack: A Bayesian Approach

    DTIC Science & Technology

    2009-12-01

    Inverse Problems, Design and Optimization Symposium 2004. Rio de Janeiro , Brazil. Chan, R., and Yee, E. (1997). A simple model for the probability...sensor interpretation applications and has been successfully applied, for example, to estimate the source strength of pollutant releases in multi...coagulation, and second-order pollutant diffusion in sorption- desorption, are not linear. Furthermore, wide uncertainty bounds exist for several of

  7. Topics in Statistical Calibration

    DTIC Science & Technology

    2014-03-27

    on a parametric bootstrap where, instead of sampling directly from the residuals , samples are drawn from a normal distribution. This procedure will...addition to centering them (Davison and Hinkley, 1997). When there are outliers in the residuals , the bootstrap distribution of x̂0 can become skewed or...based and inversion methods using the linear mixed-effects model. Then, a simple parametric bootstrap algorithm is proposed that can be used to either

  8. Dissipative particle dynamics: Effects of thermostating schemes on nano-colloid electrophoresis

    NASA Astrophysics Data System (ADS)

    Hassanzadeh Afrouzi, Hamid; Moshfegh, Abouzar; Farhadi, Mousa; Sedighi, Kurosh

    2018-05-01

    A novel fully explicit approach using the dissipative particle dynamics (DPD) method is introduced in the present study to model the electrophoretic transport of nano-colloids in an electrolyte solution. A Slater-type charge smearing function included in the 3D Ewald summation method is employed to treat the electrostatic interactions. The performance of various thermostats is challenged to control the system temperature and to study the dynamic response of the colloidal electrophoretic mobility under practical ranges of external electric field (0.072 < E < 0.361 V/nm), covering the linear to non-linear response regime, and of ionic salt concentration (0.049 < SC < 0.69 M), covering weak to strong Debye screening of the colloid. System temperature and electrophoretic mobility show direct and inverse relationships, respectively, with electric field and colloidal repulsion, and direct and inverse trends, respectively, with salt concentration under the various thermostats. The Nosé-Hoover-Lowe-Andersen and Lowe-Andersen thermostats are found to function more effectively under high electric fields (E > 0.145 V/nm) while thermal equilibrium is maintained. Reasonable agreement is achieved by benchmarking the system radial distribution function against available EW3D modellings, as well as by comparing the reduced mobility against the conventional Smoluchowski and Hückel theories and a numerical solution of the Poisson-Boltzmann equation.

  9. Coupled channels description of the α-decay fine structure

    NASA Astrophysics Data System (ADS)

    Delion, D. S.; Ren, Zhongzhou; Dumitrescu, A.; Ni, Dongdong

    2018-05-01

    We review the coupled channels approach of α transitions to excited states. The α-decaying states are identified as narrow outgoing Gamow resonances in an α-daughter potential. The real part of the eigenvalue corresponds to the Q-value, while the imaginary part determines half of the total α-decay width. We first review the calculations describing transitions to rotational states treated by the rigid rotator model, in even–even, odd-mass and odd–odd nuclei. It is found that the semiclassical method overestimates the branching ratios to excited 4+ states of some even–even α-emitters and fails to explain the unexpected inversion of branching ratios of some odd-mass nuclei, while the coupled-channels results show good agreement with the experimental data. Then, we review the coupled channels method for α-transitions to 2+ vibrational and transitional states. We present the results of the Coherent State Model that describes in a unified way the spectra of vibrational, transitional and rotational nuclei. We evidence general features of the α-decay fine structure, namely the linear dependence between α-intensities and excitation energy, the linear correlation between the strength of the α-core interaction and the spectroscopic factor, and the inverse correlation between the nuclear collectivity, given by electromagnetic transitions, and α-clustering.

  10. The culmination of an inverse cascade: Mean flow and fluctuations

    NASA Astrophysics Data System (ADS)

    Frishman, Anna

    2017-12-01

    Two dimensional turbulence has a remarkable tendency to self-organize into large, coherent structures, forming a mean flow. The purpose of this paper is to elucidate how these structures are sustained and what determines them and the fluctuations around them. A recent theory for the mean flow will be reviewed. The theory assumes that turbulence is excited by a forcing supported on small scales and uses a linear shear model to relate the turbulent momentum flux to the mean shear rate. Extending the theory, it will be shown here that the relation between the momentum flux and mean shear is valid, and the momentum flux is non-zero, for both an isotropic forcing and an anisotropic forcing, independent of the dissipation mechanism at small scales. This conclusion requires taking into account that the linear shear model is an approximation to the real system. The proportionality between the momentum flux and the inverse of the shear can then be inferred most simply on dimensional grounds. Moreover, for a homogeneous pumping, the proportionality constant can be determined by symmetry considerations, recovering the result of the original theory. The regime of applicability of the theory, its compatibility with observations from simulations, a formula for the momentum flux for an inhomogeneous pumping, and results for the statistics of fluctuations will also be discussed.

  11. Optimal control of a coupled partial and ordinary differential equations system for the assimilation of polarimetry Stokes vector measurements in tokamak free-boundary equilibrium reconstruction with application to ITER

    NASA Astrophysics Data System (ADS)

    Faugeras, Blaise; Blum, Jacques; Heumann, Holger; Boulbe, Cédric

    2017-08-01

    The modelization of polarimetry Faraday rotation measurements commonly used in tokamak plasma equilibrium reconstruction codes is an approximation to the Stokes model. This approximation is not valid for the foreseen ITER scenarios where high current and electron density plasma regimes are expected. In this work a method enabling the consistent resolution of the inverse equilibrium reconstruction problem in the framework of non-linear free-boundary equilibrium coupled to the Stokes model equation for polarimetry is provided. Using optimal control theory we derive the optimality system for this inverse problem. A sequential quadratic programming (SQP) method is proposed for its numerical resolution. Numerical experiments with noisy synthetic measurements in the ITER tokamak configuration for two test cases, the second of which is an H-mode plasma, show that the method is efficient and that the accuracy of the identification of the unknown profile functions is improved compared to the use of classical Faraday measurements.

  12. Mapping the spatial distribution and time evolution of snow water equivalent with passive microwave measurements

    USGS Publications Warehouse

    Guo, J.; Tsang, L.; Josberger, E.G.; Wood, A.W.; Hwang, J.-N.; Lettenmaier, D.P.

    2003-01-01

    This paper presents an algorithm that estimates the spatial distribution and temporal evolution of snow water equivalent and snow depth based on passive remote sensing measurements. It combines the inversion of passive microwave remote sensing measurements via dense media radiative transfer modeling results with snow accumulation and melt model predictions to yield improved estimates of snow depth and snow water equivalent, at a pixel resolution of 5 arc-min. In the inversion, snow grain size evolution is constrained based on pattern matching by using the local snow temperature history. This algorithm is applied to produce spatial snow maps of Upper Rio Grande River basin in Colorado. The simulation results are compared with that of the snow accumulation and melt model and a linear regression method. The quantitative comparison with the ground truth measurements from four Snowpack Telemetry (SNOTEL) sites in the basin shows that this algorithm is able to improve the estimation of snow parameters.

  13. A framework for estimating stratospheric wind speeds from unknown sources and application to the 2010 December 25 bolide

    NASA Astrophysics Data System (ADS)

    Arrowsmith, Stephen J.; Marcillo, Omar; Drob, Douglas P.

    2013-10-01

    We present a methodology for infrasonic remote sensing of winds in the stratosphere that does not require discrete ground-truth events. Our method uses measured time delays between arrays of sensors to provide group velocities (referred to here as celerities) and then minimizes the difference between observed and predicted celerities by perturbing an initial atmospheric specification. Because we focus on interarray propagation effects, it is not necessary to simulate the full propagation path from source to receiver. This feature allows us to use a relatively simple forward model that is applicable over short-regional distances. By focusing on stratospheric returns, we show that our non-linear inversion scheme converges much better if the starting model contains a strong stratospheric duct. Using the Horizontal Wind Model (HWM)/Mass Spectrometer Incoherent Scatter (MSISE) empirical climatology as a starting model, we demonstrate that the inversion scheme is robust to large uncertainties in backazimuth, but that uncertainties in the measured trace velocity and celerity require the use of prior constraints to ensure suitable convergence. The inversion of synthetic data, using realistic estimates of measurement error, shows that our scheme will nevertheless improve upon a starting model under most scenarios. The inversion scheme is applied to infrasound data recorded from a large event on 2010 December 25, which is presumed to be a bolide, using data from a nine-element infrasound network in Utah. We show that our recorded data require a stronger zonal wind speed in the stratosphere than is present in the HWM profile, and are more consistent with the Ground-to-Space (G2S) profile.

  14. A three-dimensional gravity inversion applied to São Miguel Island (Azores)

    NASA Astrophysics Data System (ADS)

    Camacho, A. G.; Montesinos, F. G.; Vieira, R.

    1997-04-01

    Gravimetric studies are becoming more and more widely acknowledged as a useful tool for studying and modeling the distributions of subsurface masses that are associated with volcanic activity. In this paper, new gravimetric data for the volcanic island of São Miguel (Azores) were analyzed and interpreted by a stabilized linear inversion methodology. An inversion model of higher resolution was calculated for the Caldera of Furnas, which has a higher density of data. In order to filter out the noncorrelatable anomalies, least squares prediction was used, resulting in a correlated gravimetric signal model with an accuracy of the order of 0.9 mGal. The gravimetric inversion technique is based on the adjustment of a three-dimensional (3-D) model of cubes of unknown density that represents the island's subsurface. The problem of non-uniqueness is solved by minimization with appropriate covariance matrices of the data (resulting from the least squares prediction) and of the unknowns. We also propose a criterion for choosing a balance between the data fit (which in this case corresponds to residuals with an rms of the order of 0.6 mGal) and the smoothness of the solution. The global model of the island includes a low-density zone in a WNW-ESE direction and a depth of the order of 20 km, associated with the Terceira rift spreading center. The minima located at a depth of 4 km may be associated with shallow magmatic chambers beneath the main volcanoes of the island. The main high-density area is related to the Nordeste basaltic shield. With regard to the Caldera Furnas, in addition to the minimum that can be associated with a magmatic chamber, there are other shallow minima that correspond to eruptive processes.
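A stabilized linear inversion of this kind minimizes a covariance-weighted data misfit plus a model norm, min (d - Gm)^T Cd^-1 (d - Gm) + m^T Cm^-1 m, whose closed-form solution can be sketched as follows. The kernel G is a synthetic stand-in for the actual cube model; only the 0.6 mGal figure from the abstract is reused as the assumed data standard deviation:

```python
import numpy as np

def stabilized_inverse(G, d, Cd, Cm):
    """Minimum of the covariance-weighted misfit plus model-norm penalty."""
    Cd_i, Cm_i = np.linalg.inv(Cd), np.linalg.inv(Cm)
    lhs = G.T @ Cd_i @ G + Cm_i          # normal-equation matrix
    return np.linalg.solve(lhs, G.T @ Cd_i @ d)

rng = np.random.default_rng(3)
G = rng.standard_normal((40, 10))        # synthetic kernel: 40 stations, 10 cubes
m_true = np.zeros(10)
m_true[3] = -300.0                       # one low-density cube (kg/m^3 contrast)
d = G @ m_true                           # noiseless synthetic gravity data
m_est = stabilized_inverse(G, d,
                           Cd=0.36 * np.eye(40),   # (0.6 mGal)^2 data variance
                           Cm=1e6 * np.eye(10))    # broad model prior
```

Shrinking Cm tightens the model toward zero (smoother, more biased solutions), which is exactly the data-fit versus smoothness trade-off the paper proposes a criterion for.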

  15. Geodynamic modelling of the rift-drift transition: Application to the Red Sea

    NASA Astrophysics Data System (ADS)

    Fierro, E.; Schettino, A.; Capitanio, F. A.; Ranalli, G.

    2017-12-01

    The onset of oceanic accretion after a rifting phase is generally accompanied by an initial fast pulse of spreading in the case of volcanic margins, such that the effective spreading rate exceeds the relative far-field velocity between the two plates for a short time interval. This pulse has been attributed to edge-driven convection (EDC), although our numerical modelling shows that the shear stress at the base of the lithosphere cannot exceed 1 MPa. More generally, we have developed a 2D numerical model of the mantle instabilities during the rifting phase, in order to determine the geodynamic conditions at the rift-drift transition. The model was implemented using the Underworld II software, with variable rheological parameters and a temperature- and stress-dependent viscosity. Our results show an increase of strain rates at the top of the lithosphere with the lithosphere thickness, as well as with the initial width of the margin up to 300 km; beyond this value, the influence of the initial rift width can be neglected. An interesting outcome of the numerical model is the existence of an axial zone characterized by higher strain rates, which is flanked by two low-strain stripes. As a consequence, the model suggests the existence of an area of syn-rift compression within the rift valley. Regarding the post-rift phase, we propose that at the onset of seafloor spreading a phase of transient creep allows the release of the strain energy accumulated in the mantle lithosphere during the rifting phase, through anelastic relaxation. The conjugate margins would then be subject to post-rift contraction and eventually to tectonic inversion of the rift structures. To explore the tenability of this model, we introduce an anelastic component in the lithosphere rheology, assuming both the classical linear Kelvin-Voigt rheology and a non-linear Kelvin model. The non-linear model predicts viable relaxation times (~1-2 Myr) to explain the post-rift tectonic inversion observed along the Arabian continental margin and the episodic initial fast seafloor spreading in the central Red Sea, where the role of EDC has been invoked.
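For reference, the linear Kelvin-Voigt element mentioned above has a closed-form creep response under constant stress. A small sketch with invented parameter values, chosen only so the characteristic time falls in the quoted 1-2 Myr window:

```python
import numpy as np

# Linear Kelvin-Voigt element under constant stress sigma0:
#   eps(t) = (sigma0 / E) * (1 - exp(-t / tau)),   tau = eta / E.
# E, eta, sigma0 below are illustrative, not the study's calibrated values.
E = 5e10          # Pa, effective elastic modulus
eta = 2.4e24      # Pa s, viscosity giving tau of order 1.5 Myr
sigma0 = 10e6     # Pa, constant post-rift stress

SEC_PER_MYR = 3.15576e13
tau = eta / E                         # characteristic (retardation) time, s
tau_myr = tau / SEC_PER_MYR

t = np.linspace(0, 10, 1000) * SEC_PER_MYR   # 0-10 Myr
eps = (sigma0 / E) * (1 - np.exp(-t / tau))  # creep strain history
```

The strain rises monotonically toward the elastic limit `sigma0 / E`, with most of the accumulated energy released within a few multiples of `tau`.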

  16. Serum Folate Shows an Inverse Association with Blood Pressure in a Cohort of Chinese Women of Childbearing Age: A Cross-Sectional Study

    PubMed Central

    Shen, Minxue; Tan, Hongzhuan; Zhou, Shujin; Retnakaran, Ravi; Smith, Graeme N.; Davidge, Sandra T.; Trasler, Jacquetta; Walker, Mark C.; Wen, Shi Wu

    2016-01-01

    Background It has been reported that higher folate intake from food and supplementation is associated with decreased blood pressure (BP). The association between serum folate concentration and BP has been examined in only a few studies. We aimed to examine the association between serum folate and BP levels in a cohort of young Chinese women. Methods We used the baseline data from a pre-conception cohort of women of childbearing age in Liuyang, China, for this study. Demographic data were collected by structured interview. Serum folate concentration was measured by immunoassay, and homocysteine, blood glucose, triglyceride and total cholesterol were measured through standardized clinical procedures. Multiple linear regression and principal component regression models were applied in the analysis. Results A total of 1,532 healthy normotensive non-pregnant women were included in the final analysis. The mean concentration of serum folate was 7.5 ± 5.4 nmol/L and 55% of the women presented with folate deficiency (< 6.8 nmol/L). Multiple linear regression and principal component regression showed that serum folate levels were inversely associated with systolic and diastolic BP, after adjusting for demographic, anthropometric, and biochemical factors. Conclusions Serum folate is inversely associated with BP in non-pregnant women of childbearing age with a high prevalence of folate deficiency. PMID:27182603

  17. Inverse odds ratio-weighted estimation for causal mediation analysis.

    PubMed

    Tchetgen Tchetgen, Eric J

    2013-11-20

    An important scientific goal of studies in the health and social sciences is increasingly to determine to what extent the total effect of a point exposure is mediated by an intermediate variable on the causal pathway between the exposure and the outcome. A causal framework has recently been proposed for mediation analysis, which gives rise to new definitions, formal identification results and novel estimators of direct and indirect effects. In the present paper, the author describes a new inverse odds ratio-weighted approach to estimate so-called natural direct and indirect effects. The approach, which uses as a weight the inverse of an estimate of the odds ratio function relating the exposure and the mediator, is universal in that it can be used to decompose total effects in a number of regression models commonly used in practice. Specifically, the approach may be used for effect decomposition in generalized linear models with a nonlinear link function, and in a number of other commonly used models such as the Cox proportional hazards regression for a survival outcome. The approach is simple and can be implemented in standard software provided a weight can be specified for each observation. An additional advantage of the method is that it easily incorporates multiple mediators of a categorical, discrete or continuous nature. Copyright © 2013 John Wiley & Sons, Ltd.
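The weighting scheme can be illustrated end-to-end on simulated data. The sketch below uses invented variable names and a deliberately simple linear outcome model (not the paper's applications): it fits a logistic regression of exposure on mediator by Newton-Raphson, weights each subject by the inverse of the fitted odds ratio, and reads the natural direct effect off a weighted regression of outcome on exposure.

```python
import numpy as np

# Illustrative inverse odds ratio-weighting (IORW) on simulated data.
rng = np.random.default_rng(2)
n = 5000
A = rng.binomial(1, 0.5, n).astype(float)      # point exposure
M = 0.7 * A + rng.normal(0, 1, n)              # mediator on the causal pathway
Y = 1.0 * A + 0.8 * M + rng.normal(0, 1, n)    # outcome; direct effect = 1.0

# Step 1: logistic regression of A on M (Newton-Raphson) gives the
# exposure-mediator odds ratio function OR(a, m) = exp(b1 * a * m).
X = np.column_stack([np.ones(n), M])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1 - p)                            # IRLS / Newton weights
    beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (A - p))

# Step 2: weight by the inverse odds ratio (weight 1 for the unexposed).
w = np.exp(-beta[1] * A * M)

# Step 3: weighted regression of Y on A; the coefficient on A estimates
# the natural direct effect (1.0 in this simulation).
Z = np.column_stack([np.ones(n), A])
nde = np.linalg.solve(Z.T @ (Z * w[:, None]), Z.T @ (Y * w))[1]
```

As the abstract notes, the same weights can be carried into any standard outcome regression, which is what makes the approach easy to implement in off-the-shelf software.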

  18. Magnesium and the Risk of Cardiovascular Events: A Meta-Analysis of Prospective Cohort Studies

    PubMed Central

    Hao, Yongqiang; Li, Huiwu; Tang, Tingting; Wang, Hao; Yan, Weili; Dai, Kerong

    2013-01-01

    Background Prospective studies that have examined the association between dietary magnesium intake and serum magnesium concentrations and the risk of cardiovascular disease (CVD) events have reported conflicting findings. We undertook a meta-analysis to evaluate the association between dietary magnesium intake and serum magnesium concentrations and the risk of total CVD events. Methodology/Principal Findings We performed systematic searches on MEDLINE, EMBASE, and OVID up to February 1, 2012 without limits. Categorical, linear, and nonlinear dose-response, heterogeneity, publication bias, subgroup, and meta-regression analyses were performed. The analysis included 532,979 participants from 19 studies (11 studies on dietary magnesium intake, 6 studies on serum magnesium concentrations, and 2 studies on both) with 19,926 CVD events. The pooled relative risks of total CVD events for the highest vs. lowest category of dietary magnesium intake and serum magnesium concentrations were 0.85 (95% confidence interval 0.78 to 0.92) and 0.77 (0.66 to 0.87), respectively. In the linear dose-response analysis, only serum magnesium concentrations ranging from 1.44 to 1.8 mEq/L were significantly associated with total CVD events risk (0.91, 0.85 to 0.97) per 0.1 mEq/L (Pnonlinearity = 0.465). However, significant inverse associations emerged in nonlinear models for dietary magnesium intake (Pnonlinearity = 0.024). The greatest risk reduction occurred when intake increased from 150 to 400 mg/d. There was no evidence of publication bias. Conclusions/Significance There is a statistically significant nonlinear inverse association between dietary magnesium intake and total CVD events risk. Serum magnesium concentrations are linearly and inversely associated with the risk of total CVD events. PMID:23520480
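Pooled relative risks like the 0.85 (0.78-0.92) quoted above come from inverse-variance weighting of study-level log relative risks. A fixed-effect sketch with made-up study inputs (three hypothetical RRs and 95% CIs, not the meta-analysis data):

```python
import math

# Fixed-effect inverse-variance pooling of relative risks.
# Each tuple is (RR, CI lower, CI upper); values are invented for the demo.
studies = [(0.80, 0.70, 0.91), (0.90, 0.78, 1.04), (0.84, 0.72, 0.98)]

log_rrs, weights = [], []
for r, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from CI width
    log_rrs.append(math.log(r))
    weights.append(1.0 / se**2)                      # inverse-variance weight

pooled_log = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5
rr = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se),
      math.exp(pooled_log + 1.96 * pooled_se))
```

A random-effects analysis would add a between-study variance component to each weight before pooling.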

  19. Intereruptive deformation at Three Sisters volcano, Oregon, USA: a strategy for tracking volume changes through coupled hydraulic-viscoelastic modeling

    NASA Astrophysics Data System (ADS)

    Charco, M.; Rodriguez Molina, S.; Gonzalez, P. J.; Negredo, A. M.; Poland, M. P.; Schmidt, D. A.

    2017-12-01

    The Three Sisters volcanic region, Oregon (USA), is one of the most active volcanic areas in the Cascade Range and is densely populated with eruptive vents. An extensive area just west of South Sister volcano has been actively uplifting since about 1998. InSAR data from 1992 through 2001 showed an uplift rate in the area of 3-4 cm/yr; the deformation rate then decreased considerably between 2004 and 2006, as shown by both InSAR and continuous GPS measurements. Once the magmatic system geometry and location are determined, a linear inversion of all available GPS and InSAR data is performed in order to estimate the volume changes of the source over the analyzed time interval. To do so, we applied a technique based on the Truncated Singular Value Decomposition (TSVD) of the Green's function matrix representing the linear inversion. Here, we develop a strategy to provide a cut-off for truncation that removes the smallest singular values without losing too much data resolution, balanced against the stability of the method. Furthermore, the strategy gives us a quantification of the uncertainty of the volume-change time series. The strength of the methodology resides in allowing the joint inversion of InSAR measurements from multiple tracks with different look angles and three-component GPS measurements from multiple sites. Finally, we analyze the temporal behavior of the source volume changes using a new analytical model that describes the process of injecting magma into a reservoir surrounded by a viscoelastic shell. This dynamic model is based on Hagen-Poiseuille flow through a vertical conduit that leads to an increase in pressure within a spherical reservoir and time-dependent surface deformation. The volume time series are compared to predictions from the dynamic model to constrain model parameters, namely the characteristic Poiseuille and Maxwell time scales, inlet and outlet injection pressure, and source and shell geometries. The modeling approach used here could be used to develop a mathematically rigorous strategy for including time-series deformation data in the interpretation of volcanic unrest.
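The TSVD step described above amounts to discarding singular values below a cut-off before applying the pseudo-inverse. A minimal sketch with a random stand-in for the Green's-function matrix (the cut-off level and the near-duplicate column are illustrative choices):

```python
import numpy as np

# TSVD inversion of d = G v with a singular-value cut-off, plus a simple
# data-variance propagation as an uncertainty proxy.
rng = np.random.default_rng(3)
G = rng.normal(size=(40, 10))
G[:, -1] = G[:, -2] + 1e-8 * rng.normal(size=40)   # make G nearly rank-deficient
v_true = rng.normal(size=10)
d = G @ v_true + rng.normal(0, 0.01, 40)

U, s, Vt = np.linalg.svd(G, full_matrices=False)
cutoff = 1e-3 * s[0]                # truncation level (a tuning choice)
k = int(np.sum(s > cutoff))         # number of retained singular values
v_tsvd = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])

# diagonal of the linearized covariance, assuming independent data errors
var_v = (Vt[:k].T ** 2 @ (1.0 / s[:k] ** 2)) * 0.01**2
```

Sweeping the cut-off and monitoring both the residual and `var_v` reproduces the resolution-versus-stability trade-off the abstract describes.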

  20. Regularized wave equation migration for imaging and data reconstruction

    NASA Astrophysics Data System (ADS)

    Kaplan, Sam T.

    The reflection seismic experiment results in a measurement (reflection seismic data) of the seismic wavefield. The linear Born approximation to the seismic wavefield leads to a forward modelling operator that we use to approximate reflection seismic data in terms of a scattering potential. We consider approximations to the scattering potential using two methods: the adjoint of the forward modelling operator (migration), and regularized numerical inversion using the forward and adjoint operators. We implement two parameterizations of the forward modelling and migration operators: source-receiver and shot-profile. For both parameterizations, we find requisite Green's function using the split-step approximation. We first develop the forward modelling operator, and then find the adjoint (migration) operator by recognizing a Fredholm integral equation of the first kind. The resulting numerical system is generally under-determined, requiring prior information to find a solution. In source-receiver migration, the parameterization of the scattering potential is understood using the migration imaging condition, and this encourages us to apply sparse prior models to the scattering potential. To that end, we use both a Cauchy prior and a mixed Cauchy-Gaussian prior, finding better resolved estimates of the scattering potential than are given by the adjoint. In shot-profile migration, the parameterization of the scattering potential has its redundancy in multiple active energy sources (i.e. shots). We find that a smallest model regularized inverse representation of the scattering potential gives a more resolved picture of the earth, as compared to the simpler adjoint representation. The shot-profile parameterization allows us to introduce a joint inversion to further improve the estimate of the scattering potential. Moreover, it allows us to introduce a novel data reconstruction algorithm so that limited data can be interpolated/extrapolated. 
The linearized operators are expensive, encouraging their parallel implementation. For the source-receiver parameterization of the scattering potential this parallelization is non-trivial. Seismic data is typically corrupted by various types of noise. Sparse coding can be used to suppress noise prior to migration. It is a method that stems from information theory and that we apply to noise suppression in seismic data.
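The Cauchy-prior regularization used in the source-receiver case is commonly implemented as iteratively reweighted least squares (IRLS). A small sketch on a toy under-determined system; the sizes, `lam`, and `s2` are illustrative choices, not the thesis operators or settings:

```python
import numpy as np

# IRLS for the Cauchy-prior objective ||d - G m||^2 + lam * sum log(1 + m_i^2/s2):
# at each iteration the prior contributes diagonal weights 1 / (s2 + m_i^2).
rng = np.random.default_rng(4)
n_data, n_model = 30, 80                 # under-determined, as in migration
G = rng.normal(size=(n_data, n_model))
m_true = np.zeros(n_model)
m_true[[10, 45, 70]] = [2.0, -1.5, 1.0]  # sparse scattering potential
d = G @ m_true + rng.normal(0, 0.05, n_data)

lam, s2 = 0.1, 1e-4
m = np.zeros(n_model)
for _ in range(50):
    w = 1.0 / (s2 + m**2)                # Cauchy prior -> IRLS weights
    m = np.linalg.solve(G.T @ G + lam * np.diag(w), G.T @ d)
```

Small model components are penalized heavily while large ones are left nearly free, which is what produces the better-resolved, sparse estimates relative to the plain adjoint.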

  1. In situ phytoplankton absorption, fluorescence emission, and particulate backscattering spectra determined from reflectance

    NASA Technical Reports Server (NTRS)

    Roesler, Collin S.; Perry, Mary Jane

    1995-01-01

    An inverse model was developed to extract the absorption and scattering (elastic and inelastic) properties of oceanic constituents from surface spectral reflectance measurements. In particular, phytoplankton spectral absorption coefficients, solar-stimulated chlorophyll a fluorescence spectra, and particle backscattering spectra were modeled. The model was tested on 35 reflectance spectra obtained from irradiance measurements in optically diverse ocean waters (0.07 to 25.35 mg/cu m range in surface chlorophyll a concentrations). The universality of the model was demonstrated by the accurate estimation of the spectral phytoplankton absorption coefficients over a range of 3 orders of magnitude (rho = 0.94 at 500 nm). Under most oceanic conditions (chlorophyll a less than 3 mg/cu m) the percent difference between measured and modeled phytoplankton absorption coefficients was less than 35%. Spectral variations in measured phytoplankton absorption spectra were well predicted by the inverse model. Modeled volume fluorescence was weakly correlated with measured chl a; fluorescence quantum yield varied from 0.008 to 0.09 as a function of environment and incident irradiance. Modeled particle backscattering coefficients were linearly related to total particle cross section over a twentyfold range in backscattering coefficients (rho = 0.996, n = 12).

  2. Dynamic data integration and stochastic inversion of a confined aquifer

    NASA Astrophysics Data System (ADS)

    Wang, D.; Zhang, Y.; Irsa, J.; Huang, H.; Wang, L.

    2013-12-01

    Much work has been done in developing and applying inverse methods to aquifer modeling. The scope of this paper is to investigate the applicability of a new direct method for large inversion problems and to incorporate uncertainty measures in the inversion outcomes (Wang et al., 2013). The problem considered is a two-dimensional inverse model (50×50 grid) of steady-state flow for a heterogeneous ground truth model (500×500 grid) with two hydrofacies. From the ground truth model, decreasing numbers of wells (12, 6, 3) were sampled for facies types, from which experimental indicator histograms and directional variograms were computed. These parameters and models were used by Sequential Indicator Simulation to generate 100 realizations of hydrofacies patterns on a 100×100 (geostatistical) grid, conditioned to the facies measurements at the wells. These realizations were smoothed with Simulated Annealing and coarsened to the 50×50 inverse grid, before they were conditioned with the direct method to the dynamic data, i.e., the observed heads and groundwater fluxes at the same sampled wells. A set of realizations of estimated hydraulic conductivities (Ks), flow fields, and boundary conditions was created, which centered on the 'true' solutions from solving the ground truth model. Both hydrofacies conductivities were computed with an estimation accuracy within ±10% (12 wells), ±20% (6 wells), and ±35% (3 wells) of the true values. For boundary condition estimation, the accuracy was within ±15% (12 wells), ±30% (6 wells), and ±50% (3 wells) of the true values. The inversion system of equations was solved with LSQR (Paige and Saunders, 1982), for which a coordinate transform and a matrix-scaling preprocessor were used to improve the condition number (CN) of the coefficient matrix. However, when the inverse grid was refined to 100×100, Gaussian Noise Perturbation was used to limit the growth of the CN before the matrix solve. To scale the inverse problem up (i.e., without the smoothing and coarsening, and thereby reducing the associated estimation uncertainty), a parallel LSQR solver was written and verified. For the 50×50 grid, the parallel solver sped up the serial solution time by 14× using 4 CPUs (research on parallel performance and scaling is ongoing). A sensitivity analysis was conducted to examine the relation between the observed data and the inversion outcomes, where measurement errors of increasing magnitudes (i.e., ±1, 2, 5, 10% of the total head variation and up to ±2% of the total flux variation) were imposed on the observed data. Inversion results were stable, but the accuracy of the Ks and boundary estimation degraded with increasing errors, as expected. In particular, the quality of the observed heads is critical to hydraulic head recovery, while the quality of the observed fluxes plays a dominant role in K estimation. References: Wang, D., Y. Zhang, J. Irsa, H. Huang, and L. Wang (2013), Data integration and stochastic inversion of a confined aquifer with high performance computing, Advances in Water Resources, in preparation. Paige, C. C., and M. A. Saunders (1982), LSQR: an algorithm for sparse linear equations and sparse least squares, ACM Transactions on Mathematical Software, 8(1), 43-71.
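The LSQR-plus-preconditioning workflow above can be sketched with SciPy's `lsqr` and a simple column-norm scaling. The badly scaled random system below is a stand-in for the inversion equations, not the aquifer model:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# Diagonal (column-norm) scaling as a preconditioner for LSQR: solve
# (A D) y = b for y, then recover x = D y.
rng = np.random.default_rng(5)
A = rng.normal(size=(200, 50))
A[:, :10] *= 1e3                           # badly scaled columns inflate the CN
x_true = rng.normal(size=50)
b = A @ x_true

scale = 1.0 / np.linalg.norm(A, axis=0)    # D = diag(1 / column norms)
As = csr_matrix(A * scale)                 # scaled sparse system A D
sol = lsqr(As, b, atol=1e-12, btol=1e-12, iter_lim=5000)
x = sol[0] * scale                         # undo the scaling
```

After scaling, the columns have unit norm and LSQR converges quickly; without it, the mixed column magnitudes slow convergence markedly.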

  3. Semi-empirical formulation of multiple scattering for the Gaussian beam model of heavy charged particles stopping in tissue-like matter.

    PubMed

    Kanematsu, Nobuyuki

    2009-03-07

    Dose calculation for radiotherapy with protons and heavier ions deals with a large volume of path integrals involving the scattering power of body tissue. This work provides a simple model for such demanding applications. There is an approximate linearity between the RMS end-point displacement and the range of incident particles in water, found empirically in measurements and detailed calculations. This fact was translated into a simple linear formula, from which a scattering power that is simply inversely proportional to the residual range was derived. The simplicity enabled an analytical formulation for ions stopping in water, which was designed to be equivalent to the extended Highland model and agreed with measurements within 2% or 0.02 cm in RMS displacement. The simplicity will also improve the efficiency of numerical path integrals in the presence of heterogeneity.
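The claimed linearity can be checked numerically: with a scattering power inversely proportional to the residual range, T(z) = k/(R - z), the displacement variance is the integral of T(z)(R - z)^2 over the path, which evaluates to kR^2/2, so the RMS end-point displacement grows linearly with range. A short verification with an arbitrary constant k:

```python
import numpy as np

# RMS end-point displacement for T(z) = k / (R - z):
#   y_rms(R)^2 = int_0^R T(z) * (R - z)^2 dz = int_0^R k * (R - z) dz = k R^2 / 2,
# so y_rms should scale linearly with the range R.
k = 1e-3

def y_rms(R, n=200000):
    dz = R / n
    z = np.arange(n) * dz                    # left endpoints, z < R
    integrand = k / (R - z) * (R - z) ** 2   # simplifies to k * (R - z)
    return np.sqrt(integrand.sum() * dz)     # Riemann-sum quadrature

ratio = y_rms(20.0) / y_rms(10.0)            # ~2 if displacement is linear in R
```

The apparent singularity of T at the end of range is harmless here because the lever arm (R - z)^2 vanishes faster.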

  4. Photonic band gap structure simulator

    DOEpatents

    Chen, Chiping; Shapiro, Michael A.; Smirnova, Evgenya I.; Temkin, Richard J.; Sirigiri, Jagadishwar R.

    2006-10-03

    A system and method for designing photonic band gap structures. The system and method provide a user with the capability to produce a model of a two-dimensional array of conductors corresponding to a unit cell. The model involves a linear equation. Boundary conditions representative of conditions at the boundary of the unit cell are applied to a solution of the Helmholtz equation defined for the unit cell. The linear equation can be approximated by a Hermitian matrix. An eigenvalue of the Helmholtz equation is calculated. One computational approach involves calculating finite differences. The model can include a symmetry element, such as a center of inversion, a rotation axis, or a mirror plane. A graphical user interface is provided for the user's convenience. A display presents to the user the calculated eigenvalue, corresponding to a photonic energy level in the Brillouin zone of the unit cell.
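The finite-difference eigenvalue computation can be illustrated on the simplest possible case: an empty square unit cell with periodic boundary conditions (the k = 0 point of the band structure), where the Helmholtz/Laplacian eigenvalues are known analytically as |2*pi*(n, m)/a|^2. A real photonic band gap solver would add the conductor geometry and scan the Brillouin zone:

```python
import numpy as np

# Eigenvalues of -Laplacian on an empty periodic square unit cell, by
# second-order finite differences; the operator is Hermitian, as in the text.
n, a = 32, 1.0
h = a / n

# 1D periodic second-difference matrix
D = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
D[0, -1] = D[-1, 0] = 1.0          # wrap-around (periodic boundary) terms
D /= h**2
I = np.eye(n)

# 2D Laplacian as a Kronecker sum; -L is Hermitian positive semi-definite
L = np.kron(D, I) + np.kron(I, D)
eigs = np.linalg.eigvalsh(-L)      # sorted ascending
```

The smallest eigenvalue is zero (the constant mode), and the next level approximates (2*pi/a)^2 with the usual O(h^2) discretization error.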

  5. Stochastic series expansion simulation of the t -V model

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Liu, Ye-Hua; Troyer, Matthias

    2016-04-01

    We present an algorithm for the efficient simulation of the half-filled spinless t -V model on bipartite lattices, which combines the stochastic series expansion method with determinantal quantum Monte Carlo techniques widely used in fermionic simulations. The algorithm scales linearly in the inverse temperature, cubically with the system size, and is free from the time-discretization error. We use it to map out the finite-temperature phase diagram of the spinless t -V model on the honeycomb lattice and observe a suppression of the critical temperature of the charge-density-wave phase in the vicinity of a fermionic quantum critical point.

  6. On some stochastic formulations and related statistical moments of pharmacokinetic models.

    PubMed

    Matis, J H; Wehrly, T E; Metzler, C M

    1983-02-01

    This paper presents the deterministic and stochastic model for a linear compartment system with constant coefficients, and it develops expressions for the mean residence times (MRT) and the variances of the residence times (VRT) for the stochastic model. The expressions are relatively simple computationally, involving primarily matrix inversion, and they are elegant mathematically, in avoiding eigenvalue analysis and the complex domain. The MRT and VRT provide a set of new meaningful response measures for pharmacokinetic analysis and they give added insight into the system kinetics. The new analysis is illustrated with an example involving the cholesterol turnover in rats.
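The matrix-inversion route to the MRT can be shown in a few lines: for a linear compartment system dx/dt = Ax with constant coefficients, the expected time a particle starting in compartment j spends in compartment i is (-A^{-1})[i, j]. The rates below are illustrative, not the cholesterol-turnover values from the example:

```python
import numpy as np

# Two-compartment model: central compartment 1 (with elimination k10)
# exchanging with peripheral compartment 2.
k12, k21, k10 = 0.3, 0.2, 0.1          # 1/h transfer and elimination rates
A = np.array([[-(k12 + k10), k21],
              [k12,          -k21]])

T = -np.linalg.inv(A)                  # occupancy-time matrix
mrt = T.sum(axis=0)                    # total MRT in the system, by starting compartment
```

As a sanity check: starting in compartment 1, each visit there lasts 1/(k12 + k10) = 2.5 h on average and 4 visits are expected before elimination, giving T[0, 0] = 10 h; in the special case of a single compartment the residence time is exponential, so the VRT equals the MRT squared.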

  7. Uncertainty Estimation in Tsunami Initial Condition From Rapid Bayesian Finite Fault Modeling

    NASA Astrophysics Data System (ADS)

    Benavente, R. F.; Dettmer, J.; Cummins, P. R.; Urrutia, A.; Cienfuegos, R.

    2017-12-01

    It is well known that kinematic rupture models for a given earthquake can present discrepancies even when similar datasets are employed in the inversion process. While quantifying this variability can be critical when making early estimates of the earthquake and triggered tsunami impact, "most likely models" are normally used for this purpose. In this work, we quantify the uncertainty of the tsunami initial condition for the great Illapel earthquake (Mw = 8.3, 2015, Chile). We focus on utilizing data and inversion methods that are suitable to rapid source characterization yet provide meaningful and robust results. Rupture models from teleseismic body and surface waves as well as W-phase are derived and accompanied by Bayesian uncertainty estimates from linearized inversion under positivity constraints. We show that robust and consistent features about the rupture kinematics appear when working within this probabilistic framework. Moreover, by using static dislocation theory, we translate the probabilistic slip distributions into seafloor deformation which we interpret as a tsunami initial condition. After considering uncertainty, our probabilistic seafloor deformation models obtained from different data types appear consistent with each other providing meaningful results. We also show that selecting just a single "representative" solution from the ensemble of initial conditions for tsunami propagation may lead to overestimating information content in the data. Our results suggest that rapid, probabilistic rupture models can play a significant role during emergency response by providing robust information about the extent of the disaster.
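The deterministic core of a linearized slip inversion under positivity constraints is a non-negative least-squares solve. A sketch using SciPy's `nnls`, with a random stand-in for the elastostatic Green's functions (the Bayesian treatment above samples around this constrained optimum):

```python
import numpy as np
from scipy.optimize import nnls

# Solve min ||G s - d||_2 subject to s >= 0 (slip positivity).
rng = np.random.default_rng(6)
G = rng.normal(size=(60, 20))                        # stand-in Green's functions
s_true = np.maximum(rng.normal(1.0, 1.0, 20), 0.0)   # non-negative slip
d = G @ s_true + rng.normal(0, 0.05, 60)             # noisy seismic/geodetic data

s_hat, misfit = nnls(G, d)                           # misfit = ||G s_hat - d||_2
```

Propagating data covariance through this linearized, constrained problem is what yields the slip-uncertainty estimates the abstract converts into seafloor-deformation uncertainty.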

  8. An Efficient Spectral Method for Ordinary Differential Equations with Rational Function Coefficients

    NASA Technical Reports Server (NTRS)

    Coutsias, Evangelos A.; Torres, David; Hagstrom, Thomas

    1994-01-01

    We present some relations that allow the efficient approximate inversion of linear differential operators with rational function coefficients. We employ expansions in terms of a large class of orthogonal polynomial families, including all the classical orthogonal polynomials. These families obey a simple three-term recurrence relation for differentiation, which implies that on an appropriately restricted domain the differentiation operator has a unique banded inverse. The inverse is an integration operator for the family, and it is simply the tridiagonal coefficient matrix for the recurrence. Since in these families convolution operators (i.e. matrix representations of multiplication by a function) are banded for polynomials, we are able to obtain a banded representation for linear differential operators with rational coefficients. This leads to a method of solution of initial or boundary value problems that, besides having an operation count that scales linearly with the order of truncation N, is computationally well conditioned. Among the applications considered is the use of rational maps for the resolution of sharp interior layers.
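The banded-inverse property is easy to verify numerically for the Chebyshev family: building the antiderivative (integration) operator column by column shows that, apart from the row fixed by the integration constant, it has bandwidth one, reflecting the three-term recurrence:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Matrix of the Chebyshev antiderivative operator, column by column:
# column j holds the coefficients of the antiderivative of T_j
# (with the integration constant fixed so the antiderivative vanishes at 0).
N = 10
B = np.zeros((N + 1, N))
for j in range(N):
    e = np.zeros(N)
    e[j] = 1.0
    B[:, j] = C.chebint(e)
```

Rows 1 and above of `B` are nonzero only on the first sub- and super-diagonal, matching the recurrence; only row 0, which absorbs the integration constants, is dense. Multiplying such banded operators by the (also banded) convolution matrices of rational coefficients is what keeps the whole method O(N).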

  9. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. A parallel processing architecture is an effective way to reduce computation time. Such an architecture is developed here for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.

  10. Two-Port Representation of a Linear Transmission Line in the Time Domain.

    DTIC Science & Technology

    1980-01-01

    ...which is a rational function. To use the Prony procedure it is necessary to inverse transform the admittance functions. For the transmission line, most...impulse is a constant, the inverse transform of Y0(s) contains an impulse. Therefore, if we were to numerically inverse transform Y0(s), we...would remove this impulse and inverse transform the remainder (equation (23)). The Prony procedure would then be applied to the result. Of course, an impulse

  11. Porosity Estimation By Artificial Neural Networks Inversion . Application to Algerian South Field

    NASA Astrophysics Data System (ADS)

    Eladj, Said; Aliouane, Leila; Ouadfeul, Sid-Ali

    2017-04-01

    One of the current main challenges for geophysicists is the discovery and study of stratigraphic traps, which is a difficult task and requires a very fine analysis of the seismic data. Seismic data inversion provides lithological and stratigraphic information for reservoir characterization. However, when solving the inverse problem we encounter difficulties such as non-existence and non-uniqueness of the solution, as well as instability of the processing algorithm. Therefore, uncertainties in the data and the non-linearity of the relationship between the data and the parameters must be taken seriously. In this case, artificial intelligence techniques such as Artificial Neural Networks (ANN) are used to resolve this ambiguity; this can be done by integrating different physical-property data, which requires supervised learning methods. In this work, we invert the 3D seismic cube for acoustic impedance using the colored inversion method; then, introducing the resulting acoustic impedance volume as an input to a model-based inversion allows us to calculate the porosity volume using a Multilayer Perceptron Artificial Neural Network. Application to an Algerian South hydrocarbon field clearly demonstrates the power of the proposed processing technique to predict porosity from seismic data; the results can be used for reserve estimation, permeability prediction, recovery factor estimation, and reservoir monitoring. Keywords: Artificial Neural Networks, inversion, non-uniqueness, nonlinear, 3D porosity volume, reservoir characterization.
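In the spirit of the MLP step described above, a self-contained toy (numpy only, with an invented impedance-porosity relation standing in for the field data) trains a one-hidden-layer perceptron by full-batch gradient descent:

```python
import numpy as np

# Toy multilayer perceptron mapping acoustic impedance to porosity.
# The synthetic phi(z) relation and all sizes are invented for the demo.
rng = np.random.default_rng(7)
z = rng.uniform(4.0, 9.0, (200, 1))                       # acoustic impedance
phi = 0.4 - 0.04 * z + 0.01 * rng.normal(size=(200, 1))   # synthetic porosity

zs = (z - z.mean()) / z.std()              # standardize inputs
W1, b1 = rng.normal(0, 0.5, (1, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lr, losses = 0.05, []
for _ in range(2000):
    h = np.tanh(zs @ W1 + b1)              # hidden layer
    pred = h @ W2 + b2                     # linear output
    err = pred - phi
    losses.append(float(np.mean(err**2)))
    gW2 = h.T @ err / len(z)               # backpropagation
    gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1 - h**2)
    gW1 = zs.T @ gh / len(z)
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

A production workflow would of course train on well-log porosity against inverted impedance traces and hold out wells for validation.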

  12. Bayesian inversion of seismic and electromagnetic data for marine gas reservoir characterization using multi-chain Markov chain Monte Carlo sampling

    DOE PAGES

    Ren, Huiying; Ray, Jaideep; Hou, Zhangshuan; ...

    2017-10-17

    In this paper we developed an efficient Bayesian inversion framework for interpreting marine seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data for marine reservoir characterization. The framework uses a multi-chain Markov-chain Monte Carlo sampler, which is a hybrid of DiffeRential Evolution Adaptive Metropolis and Adaptive Metropolis samplers. The inversion framework is tested by estimating reservoir-fluid saturations and porosity based on marine seismic and Controlled-Source Electromagnetic data. The multi-chain Markov-chain Monte Carlo is scalable in terms of the number of chains, and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. As a demonstration, the approach is used to efficiently and accurately estimate the porosity and saturations in a representative layered synthetic reservoir. The results indicate that the seismic Amplitude Versus Angle and Controlled-Source Electromagnetic joint inversion provides better estimation of reservoir saturations than the seismic Amplitude Versus Angle only inversion, especially for the parameters in deep layers. The performance of the inversion approach for various levels of noise in observational data was evaluated; reasonable estimates can be obtained with noise levels up to 25%. Sampling efficiency due to the use of multiple chains was also checked and was found to have almost linear scalability.

  13. Bayesian inversion of seismic and electromagnetic data for marine gas reservoir characterization using multi-chain Markov chain Monte Carlo sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Huiying; Ray, Jaideep; Hou, Zhangshuan

    In this paper we developed an efficient Bayesian inversion framework for interpreting marine seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data for marine reservoir characterization. The framework uses a multi-chain Markov-chain Monte Carlo sampler, which is a hybrid of DiffeRential Evolution Adaptive Metropolis and Adaptive Metropolis samplers. The inversion framework is tested by estimating reservoir-fluid saturations and porosity based on marine seismic and Controlled-Source Electromagnetic data. The multi-chain Markov-chain Monte Carlo is scalable in terms of the number of chains, and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. As a demonstration, the approach is used to efficiently and accurately estimate the porosity and saturations in a representative layered synthetic reservoir. The results indicate that the seismic Amplitude Versus Angle and Controlled-Source Electromagnetic joint inversion provides better estimation of reservoir saturations than the seismic Amplitude Versus Angle only inversion, especially for the parameters in deep layers. The performance of the inversion approach for various levels of noise in observational data was evaluated; reasonable estimates can be obtained with noise levels up to 25%. Sampling efficiency due to the use of multiple chains was also checked and was found to have almost linear scalability.

  14. Minimum mean squared error (MSE) adjustment and the optimal Tykhonov-Phillips regularization parameter via reproducing best invariant quadratic uniformly unbiased estimates (repro-BIQUUE)

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard

    2008-02-01

    In a linear Gauss-Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov-Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.
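The MSE argument can be illustrated by Monte Carlo: for a nearly rank-deficient design, a Tykhonov-Phillips (ridge-type) estimate beats the unbiased least-squares solution in mean squared error, trading a small bias for a large variance reduction. All numbers below are illustrative:

```python
import numpy as np

# Monte-Carlo comparison of OLS (BLUUE) vs a ridge-type regularized estimate
# on an ill-posed design with singular values 1 and 1e-3.
rng = np.random.default_rng(8)
U, _ = np.linalg.qr(rng.normal(size=(20, 2)))
V, _ = np.linalg.qr(rng.normal(size=(2, 2)))
Gm = U @ np.diag([1.0, 1e-3]) @ V.T          # nearly rank-deficient design
x_true = np.array([1.0, -0.5])
alpha, sigma, trials = 1e-2, 0.1, 500

mse_ols = mse_ridge = 0.0
for _ in range(trials):
    y = Gm @ x_true + rng.normal(0, sigma, 20)
    x_ols = np.linalg.lstsq(Gm, y, rcond=None)[0]
    x_r = np.linalg.solve(Gm.T @ Gm + alpha * np.eye(2), Gm.T @ y)
    mse_ols += np.sum((x_ols - x_true) ** 2) / trials
    mse_ridge += np.sum((x_r - x_true) ** 2) / trials
```

The variance of OLS along the weak singular direction scales as sigma^2 / s_min^2, which the regularizing term caps; the repro-BIQUUE machinery in the paper is, in effect, a principled way of choosing `alpha`.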

  15. On the modeling of the bottom particles segregation with non-linear diffusion equations: application to the marine sand ripples

    NASA Astrophysics Data System (ADS)

    Tiguercha, Djlalli; Bennis, Anne-claire; Ezersky, Alexander

    2015-04-01

    The elliptical motion in surface waves causes an oscillating motion of the sand grains, leading to the formation of ripple patterns on the bottom. Investigating how grains with different properties are distributed inside the ripples is a difficult task because of particle segregation. The work of Fernandez et al. (2003) was extended from the one-dimensional to the two-dimensional case. A new numerical model, based on these non-linear diffusion equations, was developed to simulate the grain distribution inside marine sand ripples. The one- and two-dimensional models are validated on several test cases where segregation appears. Starting from a homogeneous mixture of grains, the two-dimensional simulations demonstrate different segregation patterns: a) formation of zones with high concentrations of light and heavy particles, b) formation of «cat's eye» patterns, c) appearance of the inverse Brazil nut effect. Comparisons of numerical results with a new set of field data and wave flume experiments show that the two-dimensional non-linear diffusion equations allow us to reproduce qualitatively the experimental results on particle segregation.
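
    A generic conservative finite-difference step for a 1D non-linear diffusion equation, dc/dt = d/dx(D(c) dc/dx), can be sketched as follows; the diffusivity D(c) here is ad hoc for illustration and is not the Fernandez et al. segregation closure:

```python
import numpy as np

def step(c, dx, dt, D):
    # Diffusive flux at cell interfaces, F = D(c) dc/dx.
    F = D(0.5 * (c[:-1] + c[1:])) * np.diff(c) / dx
    F = np.concatenate(([0.0], F, [0.0]))     # no-flux boundaries
    return c + dt * np.diff(F) / dx           # conservative update

x = np.linspace(-1.0, 1.0, 101)
c = np.where(np.abs(x) < 0.2, 1.0, 0.0)      # initial concentration blob
mass0 = c.sum()
for _ in range(200):                         # explicit, dt < dx^2 / (2 D_max)
    c = step(c, dx=0.02, dt=5e-5, D=lambda u: 0.5 + u)
```

    The flux form guarantees that the total grain mass is conserved exactly while the blob spreads, which is the property that makes segregation patterns in the concentration field physically meaningful.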

  16. Probabilistic dual heuristic programming-based adaptive critic

    NASA Astrophysics Data System (ADS)

    Herzallah, Randa

    2010-02-01

    Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. In contrast to current approaches, the proposed probabilistic DHP AC method takes the uncertainties of the forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.

  17. Capturing Intuition Through Interactive Inverse Methods: Examples Drawn From Mechanical Non-Linearities in Structural Geology

    NASA Astrophysics Data System (ADS)

    Moresi, L.; May, D.; Peachey, T.; Enticott, C.; Abramson, D.; Robinson, T.

    2004-12-01

    Can you teach intuition? Obviously we think that this is possible (though it's still just a hunch). People undoubtedly develop intuition for non-linear systems through painstaking repetition of complex tasks until they have sufficient feedback to begin to "see" the emergent behaviour. The better the exploration of the system can be exposed, the quicker the potential for developing an intuitive understanding. We have spent some time considering how to incorporate the intuitive knowledge of field geologists into mechanical modeling of geological processes. Our solution has been to allow an expert geologist to steer (via a GUI) a genetic algorithm inversion of a mechanical forward model towards "structures" or patterns which are plausible in nature. The expert knowledge is then captured by analysis of the individual model parameters which are constrained by the steering (and by analysis of those which are unconstrained). The same system can also be used in reverse to expose the influence of individual parameters to the non-expert who is trying to learn just what makes a good match between model and observation. The "distance" between models preferred by experts and those preferred by an individual can be shown graphically to provide feedback. The examples we choose are from numerical models of extensional basins. We will first give each person some background information on the scientific problem from the poster and then let them loose on the numerical modeling tools with specific tasks to achieve. This will be an experiment in progress - we will later analyse how people use the GUI and whether there is really any significant difference between so-called experts and self-styled novices.

  18. Numerical Study of Frictional Properties and the Role of Cohesive End-Zones in Large Strike- Slip Earthquakes

    NASA Astrophysics Data System (ADS)

    Lovely, P. J.; Mutlu, O.; Pollard, D. D.

    2007-12-01

    Cohesive end-zones (CEZs) are regions of increased frictional strength and/or cohesion near the peripheries of faults that cause slip distributions to taper toward the fault-tip. Laboratory results, field observations, and theoretical models suggest an important role for CEZs in small-scale fractures and faults; however, their role in crustal-scale faulting and associated large earthquakes is less thoroughly understood. We present a numerical study of the potential role of CEZs on slip distributions in large, multi-segmented, strike-slip earthquake ruptures including the 1992 Landers Earthquake (Mw 7.2) and 1999 Hector Mine Earthquake (Mw 7.1). Displacement discontinuity is calculated using a quasi-static, 2D plane-strain boundary element (BEM) code for a homogeneous, isotropic, linear-elastic material. Friction is implemented by enforcing principles of complementarity. Model results with and without CEZs are compared with slip distributions measured by combined inversion of geodetic, strong ground motion, and teleseismic data. Stepwise and linear distributions of increasing frictional strength within CEZs are considered. The incorporation of CEZs in our model enables an improved match to slip distributions measured by inversion, suggesting that CEZs play a role in governing slip in large, strike-slip earthquakes. Additionally, we present a parametric study highlighting the very great sensitivity of modeled slip magnitude to small variations of the coefficient of friction. This result suggests that, provided a sufficiently well-constrained stress tensor and elastic moduli for the surrounding rock, relatively simple models could provide precise estimates of the magnitude of frictional strength. These results are verified by comparison with geometrically comparable finite element (FEM) models using the commercial code ABAQUS. In FEM models, friction is implemented by use of both Lagrange multipliers and penalty methods.

  19. New methodology for mechanical characterization of human superficial facial tissue anisotropic behaviour in vivo.

    PubMed

    Then, C; Stassen, B; Depta, K; Silber, G

    2017-07-01

    Mechanical characterization of human superficial facial tissue has important applications in biomedical science, computer assisted forensics, graphics, and consumer goods development. Specifically, the latter may include facial hair removal devices. Predictive accuracy of numerical models and their ability to elucidate biomechanically relevant questions depends on the acquisition of experimental data and mechanical tissue behavior representation. Anisotropic viscoelastic behavioral characterization of human facial tissue, deformed in vivo with finite strain, however, is sparse. Employing an experimental-numerical approach, a procedure is presented to evaluate multidirectional tensile properties of superficial tissue layers of the face in vivo. Specifically, in addition to stress relaxation, displacement-controlled multi-step ramp-and-hold protocols were performed to separate elastic from inelastic properties. For numerical representation, an anisotropic hyperelastic material model in conjunction with a time domain linear viscoelasticity formulation with Prony series was employed. Model parameters were inversely derived, employing finite element models, using multi-criteria optimization. The methodology provides insight into mechanical superficial facial tissue properties. Experimental data shows pronounced anisotropy, especially with large strain. The stress relaxation rate does not depend on the loading direction, but is strain-dependent. Preconditioning eliminates equilibrium hysteresis effects and leads to stress-strain repeatability. In the preconditioned state tissue stiffness and hysteresis insensitivity to strain rate in the applied range is evident. The employed material model fits the nonlinear anisotropic elastic results and the viscoelasticity model reasonably reproduces time-dependent results. 
The inversely deduced maximum anisotropic long-term shear modulus of linear elasticity is G_∞,max^aniso = 2.43 kPa, and the instantaneous initial shear modulus at the applied rate of ramp loading is G_0,max^aniso = 15.38 kPa. The derived mechanical model parameters constitute a basis for complex skin interaction simulation. Copyright © 2017. Published by Elsevier Ltd.

  20. The Linearized Bregman Method for Frugal Full-waveform Inversion with Compressive Sensing and Sparsity-promoting

    NASA Astrophysics Data System (ADS)

    Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong

    2018-03-01

    Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) reduces the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that as long as the difference between the initial and true models permits a sparse representation, ℓ1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral-projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ1-norm constrained optimization problems and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably and even superiorly to the complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
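
    The LB iteration for min ||x||_1 subject to Ax = b alternates a gradient-type update with soft-thresholding. A toy compressive-sensing recovery sketch (made-up data, ad hoc mu and step size; not the full SPFWI workflow, where A would be the Born modelling operator):

```python
import numpy as np

def shrink(v, mu):
    # Soft-thresholding: the proximal operator of mu * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

rng = np.random.default_rng(4)
m, n = 50, 100
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.choice([-1.0, 1.0], 5)
b = A @ x_true                            # noiseless measurements

mu = 10.0                                 # sparsity weight (ad hoc)
delta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size for convergence
x, v = np.zeros(n), np.zeros(n)
for _ in range(5000):
    v += A.T @ (b - A @ x)                # linearized Bregman update
    x = delta * shrink(v, mu)             # thresholded model estimate
```

    The appeal noted in the abstract is visible here: two lines per iteration, one matrix-vector product pair and one thresholding, with no line search or subproblem solves.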

  1. Rayleigh wave dispersion curve inversion by using particle swarm optimization and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Buyuk, Ersin; Zor, Ekrem; Karaman, Abdullah

    2017-04-01

    Inversion of surface wave dispersion curves, with its highly nonlinear nature, poses some difficulties for traditional linear inverse methods due to the need for, and strong dependence on, the initial model, the possibility of trapping in local minima, and the evaluation of partial derivatives. There are some modern global optimization methods to overcome these difficulties in surface wave analysis, such as the genetic algorithm (GA) and particle swarm optimization (PSO). GA is based on biological evolution, consisting of reproduction, crossover and mutation operations, while the PSO algorithm, developed after GA, is inspired by the social behaviour of swarms of birds or fish. The utility of these methods requires a plausible convergence rate, acceptable relative error and optimum computation cost, which are important for modelling studies. Even though the PSO and GA processes are similar in appearance, the crossover operation of GA is not used in PSO, and the mutation operation in GA is a stochastic process changing the genes within chromosomes. Unlike GA, the particles in the PSO algorithm change their positions with velocities updated according to each particle's own experience and the swarm's experience. In this study, we applied the PSO algorithm to estimate S wave velocities and thicknesses of a layered earth model by using the Rayleigh wave dispersion curve, compared these results with GA, and emphasize the advantage of using the PSO algorithm for geophysical modelling studies considering its rapid convergence, low misfit error and computation cost.
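
    The PSO velocity update described above can be sketched generically. The misfit below is a made-up quadratic stand-in; for dispersion inversion it would compare observed and predicted Rayleigh-wave phase velocities for a candidate layered model:

```python
import numpy as np

def pso(misfit, bounds, n_particles=30, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    # Standard global-best PSO: w is inertia, c1/c2 weight the pull
    # towards each particle's best (pbest) and the swarm's best (g).
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([misfit(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1 = rng.uniform(size=x.shape)
        r2 = rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)            # keep particles in bounds
        val = np.array([misfit(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g

target = np.array([2.5, 1.0])                 # "true" model parameters
best = pso(lambda m: np.sum((m - target) ** 2),
           (np.array([0.0, 0.0]), np.array([5.0, 5.0])))
```

    Note that no derivatives of the misfit are evaluated anywhere, which is exactly why such methods sidestep the difficulties of linearized dispersion-curve inversion.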

  2. Constraining DALECv2 using multiple data streams and ecological constraints: analysis and application

    DOE PAGES

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2017-07-10

    We use a variational method to assimilate multiple data streams into the terrestrial ecosystem carbon cycle model DALECv2 (Data Assimilation Linked Ecosystem Carbon). Ecological and dynamical constraints have recently been introduced to constrain unresolved components of this otherwise ill-posed problem. We recast these constraints as a multivariate Gaussian distribution to incorporate them into the variational framework and we demonstrate their advantage through a linear analysis. By using an adjoint method we study a linear approximation of the inverse problem: firstly we perform a sensitivity analysis of the different outputs under consideration, and secondly we use the concept of resolution matrices to diagnose the nature of the ill-posedness and evaluate regularisation strategies. We then study the non-linear problem with an application to real data. Finally, we propose a modification to the model: introducing a spin-up period provides us with a built-in formulation of some ecological constraints which facilitates the variational approach.
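
    The resolution-matrix diagnostic can be illustrated in its generic Tikhonov form (made-up Jacobian, not the DALECv2 operators): for a linearized problem y = J m, the damped solution satisfies m_est = (JᵀJ + λI)⁻¹Jᵀy = R m_true, so R = (JᵀJ + λI)⁻¹JᵀJ maps true parameters to recovered ones, and its diagonal flags which parameters the data actually resolve:

```python
import numpy as np

rng = np.random.default_rng(2)
J = rng.normal(size=(100, 5))              # linearized observation operator
J[:, 4] = 0.01 * rng.normal(size=100)      # one poorly observed parameter
lam = 1.0                                  # regularisation weight

# Model resolution matrix of the damped least-squares inverse.
R = np.linalg.solve(J.T @ J + lam * np.eye(5), J.T @ J)
resolution = np.diag(R)   # ~1: well resolved by the data, ~0: set by the prior
```

    Diagonal entries near zero identify the unresolved components that the ecological constraints (recast as Gaussian priors) must supply.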

  4. Quasi-closed phase forward-backward linear prediction analysis of speech for accurate formant detection and estimation.

    PubMed

    Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo

    2017-09-01

    Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis which belongs to the family of temporally weighted linear prediction (WLP) methods uses the conventional forward type of sample prediction. This may not be the best choice especially in computing WLP models with a hard-limiting weighting function. A sample selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples thereby utilizing the available number of samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach as well as natural speech utterances show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
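
    The forward-backward idea can be sketched for ordinary (unweighted) linear prediction: each sample is regressed on its p predecessors and, in a second set of equations sharing the same coefficient vector, on its p successors, doubling the number of usable equations per frame. A minimal sketch of plain FB linear prediction (not the weighted QCP-FB estimator):

```python
import numpy as np

def fb_lp(x, p):
    # Stack forward equations x[i] ~ a . [x[i-1], ..., x[i-p]] and
    # backward equations x[i] ~ a . [x[i+1], ..., x[i+p]], then solve
    # one least-squares problem for the shared coefficients a.
    n = len(x)
    rows, y = [], []
    for i in range(p, n):                 # forward prediction equations
        rows.append(x[i - 1::-1][:p])
        y.append(x[i])
    for i in range(n - p):                # backward prediction equations
        rows.append(x[i + 1:i + 1 + p])
        y.append(x[i])
    a, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
    return a

# A pure cosine obeys the 2-tap recursion x[n] = 2 cos(w) x[n-1] - x[n-2]
# in both time directions, so FB-LP recovers the coefficients exactly.
t = np.arange(200)
x = np.cos(0.3 * t)
a = fb_lp(x, p=2)
```

    For formant analysis the roots of the recovered prediction polynomial give the resonance frequencies; the doubling of equations is what keeps the estimate stable when a weighting function discards many samples.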

  5. Integrated Analytic and Linearized Inverse Kinematics for Precise Full Body Interactions

    NASA Astrophysics Data System (ADS)

    Boulic, Ronan; Raunhardt, Daniel

    Despite the large success of games grounded in movement-based interactions, the current state of full body motion capture technologies still prevents the exploitation of precise interactions with complex environments. This paper focuses on ensuring a precise spatial correspondence between the user and the avatar. We build upon our past effort in human postural control with a Prioritized Inverse Kinematics framework. One of its key advantages is to ease the dynamic combination of postural and collision avoidance constraints. However, its reliance on a linearized approximation of the problem makes it vulnerable to the well-known full extension singularity of the limbs. In such a context the tracking performance is reduced and/or less believable intermediate postural solutions are produced. We address this issue by introducing a new type of analytic constraint that smoothly integrates within the prioritized Inverse Kinematics framework. The paper first recalls the background of full body 3D interactions and the advantages and drawbacks of the linearized IK solution. Then the Flexion-EXTension constraint (FLEXT in short) is introduced for the partial position control of limb-like articulated structures. Comparative results illustrate the interest of this new type of integrated analytical and linearized IK control.
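
    A linearized IK step for a 2-link planar limb can be sketched with damped least squares, a standard way to tame the full-extension singularity of the Jacobian (this is a generic remedy, not the paper's FLEXT analytic constraint):

```python
import numpy as np

L1 = L2 = 1.0   # assumed link lengths of a 2-link planar limb

def fk(q):
    # Forward kinematics: end-effector position from joint angles.
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik(target, q, damping=0.1, iters=100):
    for _ in range(iters):
        e = target - fk(q)
        J = jacobian(q)
        # Damped pseudo-inverse step: (J^T J + lambda^2 I)^{-1} J^T e.
        # The damping bounds the step near the extension singularity,
        # where J loses rank and a plain pseudo-inverse would explode.
        q = q + np.linalg.solve(J.T @ J + damping ** 2 * np.eye(2), J.T @ e)
    return q

q = ik(np.array([1.2, 0.8]), q=np.array([0.3, 0.5]))
```

    The trade-off the abstract alludes to is visible here: damping keeps intermediate postures believable near full extension, at the cost of slower tracking, which is what motivates replacing the linearized step with an analytic constraint for the limb.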

  6. Reversed inverse regression for the univariate linear calibration and its statistical properties derived using a new methodology

    NASA Astrophysics Data System (ADS)

    Kang, Pilsang; Koo, Changhoi; Roh, Hokyu

    2017-11-01

    Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.

  7. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agaltsov, A. D., E-mail: agalets@gmail.com; Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr; IEPT RAS, 117997 Moscow

    2014-10-15

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.

  8. FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)

    NASA Astrophysics Data System (ADS)

    2014-10-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 and May 2013, (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) Workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. 
The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2014 was a one-day workshop held in May 2014 which attracted around sixty attendees. Each of the submitted papers has been reviewed by two reviewers. There have been nine accepted papers. In addition, three international speakers were invited to present a longer talk. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks (GDR ISIS, GDR MIA, GDR MOA, GDR Ondes). The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA, SATIE. Eric Vourc'h and Thomas Rodet

  9. New Global 3D Upper to Mid-mantle Electrical Conductivity Model Based on Observatory Data with Realistic Auroral Sources

    NASA Astrophysics Data System (ADS)

    Kelbert, A.; Egbert, G. D.; Sun, J.

    2011-12-01

    Poleward of 45-50 degrees (geomagnetic), observatory data are influenced significantly by auroral ionospheric current systems, invalidating the simplifying zonal dipole source assumption traditionally used for long period (T > 2 days) geomagnetic induction studies. Previous efforts to use these data to obtain the global electrical conductivity distribution in Earth's mantle have omitted high-latitude sites (further thinning an already sparse dataset) and/or corrected the affected transfer functions using a highly simplified model of auroral source currents. Although these strategies are partly effective, there remain clear suggestions of source contamination in most recent 3D inverse solutions - specifically, bands of conductive features are found near auroral latitudes. We report on a new approach to this problem, based on adjusting both external field structure and 3D Earth conductivity to fit observatory data. As an initial step towards full joint inversion we are using a two step procedure. In the first stage, we adopt a simplified conductivity model, with a thin-sheet of variable conductance (to represent the oceans) overlying a 1D Earth, to invert observed magnetic fields for external source spatial structure. Input data for this inversion are obtained from frequency domain principal components (PC) analysis of geomagnetic observatory hourly mean values. To make this (essentially linear) inverse problem well-posed we regularize using covariances for source field structure that are consistent with well-established properties of auroral ionospheric (and magnetospheric) current systems, and the basic physics of the EM fields. In the second stage, we use a 3D finite difference inversion code, with source fields estimated from the first stage, to further fit the observatory PC modes.
We incorporate higher latitude data into the inversion, and maximize the amount of available information by directly inverting the magnetic field components of the PC modes, instead of transfer functions such as the C-responses used previously. Recent improvements in the accuracy and speed of the forward and inverse finite difference codes (a secondary field formulation and parallelization over frequencies) allow us to use a finer computational grid for inversion, and thus to model finer scale features, making full use of the expanded data set. Overall, our approach presents an improvement over earlier observatory data interpretation techniques, making better use of the available data and allowing us to explore the trade-offs between complications in source structure and heterogeneities in mantle conductivity. We will also report on progress towards applying the same approach to simultaneous source/conductivity inversion of shorter period observatory data, focusing especially on the daily variation band.

  10. Decoupling control of vehicle chassis system based on neural network inverse system

    NASA Astrophysics Data System (ADS)

    Wang, Chunyan; Zhao, Wanzhong; Luan, Zhongkai; Gao, Qi; Deng, Ke

    2018-06-01

    Steering and suspension are two important subsystems affecting the handling stability and riding comfort of the chassis system. In order to avoid the interference and coupling of the control channels between the active front steering (AFS) and active suspension subsystems (ASS), this paper presents a composite decoupling control method, which consists of a neural network inverse system and a robust controller. The neural network inverse system is composed of a static neural network with several integrators and state feedback of the original chassis system to approximate the inverse system of the nonlinear system. The existence of the inverse system for the chassis system is proved by the reversibility derivation of the Interactor algorithm. The robust controller is based on internal model control (IMC) and is designed to improve the robustness and anti-interference of the decoupled system by adding a pre-compensation controller to the pseudo linear system. The results of the simulation and vehicle test show that the proposed decoupling controller has excellent decoupling performance, which can transform the multivariable system into a number of single input and single output systems, and eliminate the mutual influence and interference. Furthermore, it has satisfactory tracking capability and robust performance, which can improve the comprehensive performance of the chassis system.

  11. Nonlinear reduction in risk for colorectal cancer by fruit and vegetable intake based on meta-analysis of prospective studies.

    PubMed

    Aune, Dagfinn; Lau, Rosa; Chan, Doris S M; Vieira, Rui; Greenwood, Darren C; Kampman, Ellen; Norat, Teresa

    2011-07-01

    The association between fruit and vegetable intake and colorectal cancer risk has been investigated by many studies but is controversial because of inconsistent results and weak observed associations. We summarized the evidence from cohort studies in categorical, linear, and nonlinear dose-response meta-analyses. We searched PubMed for studies of fruit and vegetable intake and colorectal cancer risk that were published until the end of May 2010. We included 19 prospective studies that reported relative risk estimates and 95% confidence intervals (CIs) of colorectal cancer associated with fruit and vegetable intake. Random effects models were used to estimate summary relative risks. The summary relative risk for the highest vs the lowest intake was 0.92 (95% CI: 0.86-0.99) for fruit and vegetables combined, 0.90 (95% CI: 0.83-0.98) for fruit, and 0.91 (95% CI: 0.86-0.96) for vegetables (P for heterogeneity = .24, .05, and .54, respectively). The inverse associations appeared to be restricted to colon cancer. In linear dose-response analysis, only intake of vegetables was significantly associated with colorectal cancer risk (summary relative risk = 0.98; 95% CI: 0.97-0.99, per 100 g/d). However, significant inverse associations emerged in nonlinear models for fruit (P for nonlinearity < .001) and vegetables (P for nonlinearity = .001). The greatest risk reduction was observed when intake increased from very low levels. There was generally little evidence of heterogeneity in the analyses and there was no evidence of small-study bias. Based on this meta-analysis of prospective studies, there is a weak but statistically significant nonlinear inverse association between fruit and vegetable intake and colorectal cancer risk. Copyright © 2011 AGA Institute. Published by Elsevier Inc. All rights reserved.
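
    The random-effects summary relative risk is an inverse-variance weighted mean on the log scale, with between-study heterogeneity estimated by the standard DerSimonian-Laird method. A sketch on made-up study inputs (the paper's actual study data are not reproduced here):

```python
import numpy as np

def random_effects(rr, ci_lo, ci_hi):
    # DerSimonian-Laird random-effects pooling of relative risks.
    y = np.log(rr)                                    # log relative risks
    se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)  # SE from 95% CI width
    w = 1.0 / se ** 2                                 # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)   # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    ws = 1.0 / (se ** 2 + tau2)                       # random-effects weights
    mu = np.sum(ws * y) / np.sum(ws)
    se_mu = 1.0 / np.sqrt(np.sum(ws))
    return (np.exp(mu),
            np.exp(mu - 1.96 * se_mu),
            np.exp(mu + 1.96 * se_mu))

# Hypothetical per-study relative risks with 95% CIs.
rr, lo, hi = random_effects(np.array([0.90, 0.85, 1.02]),
                            np.array([0.80, 0.70, 0.90]),
                            np.array([1.01, 1.03, 1.15]))
```

    When tau2 = 0 this reduces to the fixed-effect estimate; positive heterogeneity widens the summary CI, which is why the pooled associations reported above remain weak despite narrow per-study intervals.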

  12. Joint inversion of shear wave travel time residuals and geoid and depth anomalies for long-wavelength variations in upper mantle temperature and composition along the Mid-Atlantic Ridge

    NASA Technical Reports Server (NTRS)

    Sheehan, Anne F.; Solomon, Sean C.

    1991-01-01

    Measurements were carried out for SS-S differential travel time residuals for nearly 500 paths crossing the northern Mid-Atlantic Ridge, assuming that the residuals are dominated by contributions from the upper mantle near the surface bounce point of the reflected phase SS. Results indicate that the SS-S travel time residuals decrease linearly with square root of age, to an age of 80-100 Ma, in general agreement with the plate cooling model. A joint inversion was formulated of travel time residuals and geoid and bathymetric anomalies for lateral variation in the upper mantle temperature and composition. The preferred inversion solutions were found to have variations in upper mantle temperature along the Mid-Atlantic Ridge of about 100 K. It was calculated that, for a constant bulk composition, such a temperature variation would produce about a 7-km variation in crustal thickness, larger than is generally observed.

  13. X-38 Experimental Controls Laws

    NASA Technical Reports Server (NTRS)

    Munday, Steve; Estes, Jay; Bordano, Aldo J.

    2000-01-01

    X-38 is a NASA JSC/DFRC experimental flight test program developing a series of prototypes for an International Space Station (ISS) Crew Return Vehicle, often called an ISS "lifeboat." X-38 Vehicle 132 Free Flight 3, currently scheduled for the end of this month, will be the first flight test of a modern FCS architecture called Multi-Application Control-Honeywell (MACH), originally developed by the Honeywell Technology Center. MACH wraps classical P&I outer attitude loops around a modern dynamic inversion attitude rate loop. The dynamic inversion process requires that the flight computer have an onboard aircraft model of expected vehicle dynamics based upon the aerodynamic database. Dynamic inversion is computationally intensive, so some timing modifications were made to implement MACH on the slower flight computers of the subsonic test vehicles. In addition to linear stability margin analyses and high-fidelity 6-DOF simulation, hardware-in-the-loop testing is used to verify the implementation of MACH and its robustness to aerodynamic and environmental uncertainties and disturbances.
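
    The dynamic inversion step can be illustrated on a linear rate model ẋ = Ax + Bu: the inner loop solves Bu = ν − Ax for the control u, where ν is the rate commanded by the outer P&I loops. The matrices below are invented placeholders, not the X-38 aerodynamic database:

```python
import numpy as np

def dynamic_inversion(f_x, g_x, nu):
    """Invert the rate dynamics: solve g(x) u = nu - f(x) for the control u."""
    return np.linalg.solve(g_x, nu - f_x)

# toy attitude-rate model x_dot = A x + B u (hypothetical numbers)
A = np.array([[-0.5, 0.1], [0.0, -0.2]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])
x = np.array([0.1, -0.05])        # current body rates
nu = np.array([0.0, 0.0])         # rates commanded by the outer P&I loops
u = dynamic_inversion(A @ x, B, nu)
```

    Substituting u back into the model recovers the commanded rate exactly, which is the point of the inversion; robustness issues arise when the onboard model differs from the true dynamics.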

  14. The Prediction Properties of Inverse and Reverse Regression for the Simple Linear Calibration Problem

    NASA Technical Reports Server (NTRS)

    Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.

    2010-01-01

    The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
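
    As an illustration of the two approaches, the sketch below calibrates a toy instrument with true response y = 2x + 1 (all numbers synthetic; nothing here comes from the paper) and compares the inverted classical fit with the direct reverse fit:

```python
import numpy as np

rng = np.random.default_rng(0)
standards = np.linspace(1.0, 10.0, 20)                     # known reference values
measured = 2.0 * standards + 1.0 + rng.normal(0, 0.1, 20)  # instrument readings

# Classical calibration: regress measured on standards, then invert the fit
b1, b0 = np.polyfit(standards, measured, 1)

def classical(y):
    return (y - b0) / b1

# Reverse calibration: regress standards on measured, use the fit directly
c1, c0 = np.polyfit(measured, standards, 1)

def reverse(y):
    return c1 * y + c0
```

    With low measurement noise the two calibrations agree closely; the paper's point is about their differing statistical properties, which this sketch does not capture.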

  15. Bayesian inversions of a dynamic vegetation model in four European grassland sites

    NASA Astrophysics Data System (ADS)

    Minet, J.; Laloy, E.; Tychon, B.; François, L.

    2015-01-01

    Eddy covariance data from four European grassland sites are used to probabilistically invert the CARAIB dynamic vegetation model (DVM) with ten unknown parameters, using the DREAM(ZS) Markov chain Monte Carlo (MCMC) sampler. We compare model inversions considering both homoscedastic and heteroscedastic eddy covariance residual errors, with variances either fixed a priori or jointly inferred with the model parameters. Agreements between measured and simulated data during calibration are comparable with previous studies, with root-mean-square errors (RMSE) of simulated daily gross primary productivity (GPP), ecosystem respiration (RECO) and evapotranspiration (ET) ranging from 1.73 to 2.19 g C m-2 day-1, 1.04 to 1.56 g C m-2 day-1, and 0.50 to 1.28 mm day-1, respectively. In validation, mismatches between measured and simulated data are larger, but still with Nash-Sutcliffe efficiency scores above 0.5 for three out of the four sites. Although measurement errors associated with eddy covariance data are known to be heteroscedastic, we show that assuming a classical linear heteroscedastic model of the residual errors in the inversion does not fully remove heteroscedasticity from the residuals. Since the employed heteroscedastic error model allows for larger deviations between simulated and measured data as the magnitude of the measured data increases, this error model expectedly leads to poorer data fitting compared to inversions considering a constant variance of the residual errors. Furthermore, sampling the residual error variances along with the model parameters results in overall similar model parameter posterior distributions as those obtained by fixing these variances beforehand, while slightly improving model performance. Although the calibrated model is generally capable of fitting the data within measurement errors, systematic biases in the model simulations are observed. These are likely due to model inadequacies such as shortcomings in the photosynthesis modelling. Besides model behaviour, differences between the model parameter posterior distributions among the four grassland sites are also investigated. It is shown that the marginal distributions of the specific leaf area and characteristic mortality time parameters can be explained by site-specific ecophysiological characteristics. Lastly, the possibility of finding a common set of parameters among the four experimental sites is discussed.
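
    The heteroscedasticity argument can be made concrete with a minimal Gaussian log-likelihood in which the residual standard deviation grows linearly with the simulated magnitude (parameter values illustrative, not CARAIB's):

```python
import numpy as np

def gaussian_loglik(obs, sim, a, b):
    """Gaussian log-likelihood with linearly heteroscedastic errors:
    sigma_i = a + b * |sim_i| (a > 0, b >= 0; illustrative error model)."""
    sigma = a + b * np.abs(sim)
    resid = obs - sim
    return np.sum(-0.5 * np.log(2 * np.pi * sigma ** 2)
                  - 0.5 * (resid / sigma) ** 2)

sim = np.array([1.0, 2.0, 4.0])    # simulated fluxes (arbitrary units)
obs = np.array([1.1, 1.9, 4.5])    # "measured" fluxes with a large high-end misfit
```

    A model with b > 0 tolerates the larger misfit at high magnitudes, which is exactly why it fits less tightly than a constant-variance model when residuals are minimized.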

  16. Combining experimental techniques with non-linear numerical models to assess the sorption of pesticides on soils

    NASA Astrophysics Data System (ADS)

    Magga, Zoi; Tzovolou, Dimitra N.; Theodoropoulou, Maria A.; Tsakiroglou, Christos D.

    2012-03-01

    The risk assessment of groundwater pollution by pesticides may be based on pesticide sorption and biodegradation kinetic parameters estimated with inverse modeling of datasets from either batch or continuous flow soil column experiments. In the present work, a chemical non-equilibrium and non-linear 2-site sorption model is incorporated into solute transport models to invert the datasets of batch and soil column experiments, and estimate the kinetic sorption parameters for two pesticides: N-phosphonomethyl glycine (glyphosate) and 2,4-dichlorophenoxy-acetic acid (2,4-D). When coupling the 2-site sorption model with the 2-region transport model, the soil column datasets enable us to estimate, in addition to the kinetic sorption parameters, the mass-transfer coefficients associated with solute diffusion between mobile and immobile regions. In order to improve the reliability of the models and kinetic parameter values, a stepwise strategy is required that combines batch and continuous flow tests with adequate true-to-the-mechanism analytical or numerical models, and decouples the kinetics of the purely reactive steps of sorption from physical mass-transfer processes.
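
    A minimal sketch of the kinetic half of such a model: first-order approach of the kinetic sites to a non-linear (Freundlich) isotherm at constant aqueous concentration, as in a batch test (all parameter values hypothetical):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters of a non-linear (Freundlich) two-site model
f_eq, kd, n, alpha = 0.4, 2.0, 0.8, 0.5   # equil. fraction, coeff, exponent, rate

def kinetic_site(t, s2, c):
    """ds2/dt: first-order approach of kinetic sites to the Freundlich isotherm."""
    return alpha * ((1.0 - f_eq) * kd * c ** n - s2)

c = 1.0                                      # constant aqueous concentration (batch)
sol = solve_ivp(kinetic_site, (0.0, 50.0), [0.0], args=(c,), rtol=1e-8)
s2_final = sol.y[0, -1]
s_equilibrium = (1.0 - f_eq) * kd * c ** n   # long-time limit of kinetic sites
```

    In the full transport problem this kinetic equation is coupled to advection-dispersion in the mobile region, which is where the mobile-immobile mass-transfer coefficients enter.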

  17. The Islamic State Battle Plan: Press Release Natural Language Processing

    DTIC Science & Technology

    2016-06-01

    Keywords: Natural Language Processing, text mining, corpus, generalized linear model, cascade, R Shiny, leaflet, data visualization. Acronyms: START (Study of Terrorism and Responses to Terrorism), TDM (Term Document Matrix), TF (Term Frequency), TF-IDF (Term Frequency-Inverse Document Frequency), tm (text mining package for R). Reference: Feinerer I, Hornik K (2015) Text Mining Package "tm," Version 0.6-2. https://cran.r-project.org/web/packages/tm/tm.pdf

  18. jInv: A Modular and Scalable Framework for Electromagnetic Inverse Problems

    NASA Astrophysics Data System (ADS)

    Belliveau, P. T.; Haber, E.

    2016-12-01

    Inversion is a key tool in the interpretation of geophysical electromagnetic (EM) data. Three-dimensional (3D) EM inversion is very computationally expensive and practical software for inverting large 3D EM surveys must be able to take advantage of high performance computing (HPC) resources. It has traditionally been difficult to achieve those goals in a high level dynamic programming environment that allows rapid development and testing of new algorithms, which is important in a research setting. With those goals in mind, we have developed jInv, a framework for PDE constrained parameter estimation problems. jInv provides optimization and regularization routines, a framework for user defined forward problems, and interfaces to several direct and iterative solvers for sparse linear systems. The forward modeling framework provides finite volume discretizations of differential operators on rectangular tensor product meshes and tetrahedral unstructured meshes that can be used to easily construct forward modeling and sensitivity routines for forward problems described by partial differential equations. jInv is written in the emerging programming language Julia. Julia is a dynamic language targeted at the computational science community with a focus on high performance and native support for parallel programming. We have developed frequency and time-domain EM forward modeling and sensitivity routines for jInv. We will illustrate its capabilities and performance with two synthetic time-domain EM inversion examples. First, in airborne surveys, which use many sources, we achieve distributed memory parallelism by decoupling the forward and inverse meshes and performing forward modeling for each source on small, locally refined meshes. Secondly, we invert grounded source time-domain data from a gradient array style induced polarization survey using a novel time-stepping technique that allows us to compute data from different time-steps in parallel. 
    These examples both show that it is possible to invert large-scale 3D time-domain EM datasets within a modular, extensible framework written in a high-level, easy-to-use programming language.

  19. Alternative model of space-charge-limited thermionic current flow through a plasma

    DOE PAGES

    Campanell, M. D.

    2018-04-19

    It is widely assumed that thermionic current flow through a plasma is limited by a “space-charge-limited” (SCL) cathode sheath that consumes the hot cathode's negative bias and accelerates upstream ions into the cathode. In this paper, we formulate a fundamentally different current-limited mode. In the “inverse” mode, the potentials of both electrodes are above the plasma potential, so that the plasma ions are confined. The bias is consumed by the anode sheath. There is no potential gradient in the neutral plasma region from resistivity or presheath. The inverse cathode sheath pulls some thermoelectrons back to the cathode, thereby limiting the circuit current. Thermoelectrons entering the zero-field plasma region that undergo collisions may also be sent back to the cathode, further attenuating the circuit current. In planar geometry, the plasma density is shown to vary linearly across the electrode gap. A continuum kinetic planar plasma diode simulation model is set up to compare the properties of current modes with classical, conventional SCL, and inverse cathode sheaths. SCL modes can exist only if charge-exchange collisions are turned off in the potential well of the virtual cathode to prevent ion trapping. With the collisions, the current-limited equilibrium must be inverse. Inverse operating modes should therefore be present or possible in many plasma devices that rely on hot cathodes. Evidence from past experiments is discussed. Finally, the inverse mode may offer opportunities to minimize sputtering and power consumption that were not previously explored due to the common assumption of SCL sheaths.

  1. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    NASA Astrophysics Data System (ADS)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posterior probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo Inversion (MCI) technique and all parameters obtained from step one as the initial solution. Slip artifacts are then eliminated from the slip models in the third-step MAP inversion, with the fault geometry parameters fixed. We first used a designed model with a 45-degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of the method in earthquake studies and a number of its advantages over other methods. Details will be reported at the meeting.

  2. Modeling the 16 September 2015 Chile tsunami source with the inversion of deep-ocean tsunami records by means of the r - solution method

    NASA Astrophysics Data System (ADS)

    Voronina, Tatyana; Romanenko, Alexey; Loskutov, Artem

    2017-04-01

    The key point in state-of-the-art tsunami forecasting is constructing a reliable tsunami source. In this study, we present an application of an original numerical inversion technique to modeling the source of the 16 September 2015 Chile tsunami. The problem of recovering a tsunami source from remote measurements of the incoming wave at deep-water tsunameters is considered as an inverse problem of mathematical physics in the class of ill-posed problems. The approach is based on the least squares and truncated singular value decomposition techniques. Tsunami wave propagation is considered within the scope of the linear shallow-water theory. As in the inverse seismic problem, the numerical solutions obtained by mathematical methods become unstable due to the presence of noise in real data. The method of r-solutions makes it possible to avoid instability in the solution of the ill-posed problem under study. This method is attractive from the computational point of view since the main effort is required only once, for calculating the matrix whose columns consist of computed waveforms for each harmonic used as a source (the unknown tsunami source is represented as a truncated series of spatial harmonics in the source area). Furthermore, by analyzing the singular spectrum of this matrix, one can estimate in advance the inversion achievable with a given observational system, which allows offering a more effective disposition of the tsunameters with the help of precomputations. In other words, the results obtained allow improving the inversion by selecting the most informative set of available recording stations. The case study of the 6 February 2013 Solomon Islands tsunami highlights the critical role of the arrangement of deep-water tsunameters in the inversion results. Implementation of the proposed methodology for the 16 September 2015 Chile tsunami successfully produced a tsunami source model. The function recovered by the proposed method can find practical applications both as an initial condition for various optimization approaches and for computing tsunami wave propagation.
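
    The truncated SVD at the core of the r-solution approach can be sketched as follows, using a random synthetic matrix in place of the waveform matrix:

```python
import numpy as np

def tsvd_solve(A, b, r):
    """Least-squares solution keeping only the r largest singular values."""
    u, s, vt = np.linalg.svd(A, full_matrices=False)
    return vt[:r].T @ ((u[:, :r].T @ b) / s[:r])

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 10))              # stand-in for the waveform matrix
x_true = rng.normal(size=10)               # "source" harmonic coefficients
b = A @ x_true + 1e-3 * rng.normal(size=40)

x_full = tsvd_solve(A, b, 10)              # r = full rank: ordinary least squares
```

    Choosing r below full rank discards the smallest singular values; for noisy, ill-posed problems this truncation is what stabilizes the r-solution.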

  3. Lithospheric structure of Taiwan from gravity modelling and sequential inversion of seismological and gravity data

    NASA Astrophysics Data System (ADS)

    Masson, F.; Mouyen, M.; Hwang, C.; Wu, Y.-M.; Ponton, F.; Lehujeur, M.; Dorbath, C.

    2012-11-01

    Using a Bouguer anomaly map and a dense seismic data set, we have performed two studies in order to improve our knowledge of the deep structure of Taiwan. First, we model the Bouguer anomaly along a profile crossing the island using simple forward modelling. The modelling is 2D, under the hypothesis of cylindrical symmetry. Second, we present a joint analysis of gravity anomaly and seismic arrival time data recorded in Taiwan. An initial velocity model was obtained by local earthquake tomography (LET) of the seismological data. The LET velocity model was used to construct an initial 3D gravity model, using a linear velocity-density relationship (Birch's law). The synthetic Bouguer anomaly calculated for this model has the same shape and wavelength as the observed anomaly. However, some characteristics of the anomaly map are not retrieved. To derive a crustal velocity/density model that accounts for both types of observations, we performed a sequential inversion of seismological and gravity data. The variance reduction of the arrival time data for the final sequential model was comparable to the variance reduction obtained by simple LET. Moreover, the sequential model explained about 80% of the observed gravity anomaly. A new 3D model of the Taiwan lithosphere is presented.
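
    The linear velocity-density relationship (Birch's law) used to seed the gravity model is a one-line mapping; the coefficients below are placeholders, not those calibrated for Taiwan:

```python
import numpy as np

# Birch's law: rho = a + b * Vp. Hypothetical coefficients for illustration,
# not those used in the study.
A_COEF = 0.77   # g/cm^3, hypothetical intercept
B_COEF = 0.30   # (g/cm^3)/(km/s), hypothetical slope

def density_from_vp(vp_kms):
    """Map a P-wave velocity model to a density model via Birch's law."""
    return A_COEF + B_COEF * np.asarray(vp_kms)

vp_model = np.array([5.5, 6.2, 7.0, 8.1])   # km/s, illustrative crustal values
rho = density_from_vp(vp_model)
```

    Applying such a mapping cell by cell converts the LET velocity volume into the initial density volume whose synthetic Bouguer anomaly is then compared with the observations.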

  4. Female Literacy Rate is a Better Predictor of Birth Rate and Infant Mortality Rate in India.

    PubMed

    Saurabh, Suman; Sarkar, Sonali; Pandey, Dhruv K

    2013-01-01

    Educated women are known to take informed reproductive and healthcare decisions. These result in population stabilization and better infant care, reflected by lower birth rates and infant mortality rates (IMRs), respectively. Our objective was to study the relationship of male and female literacy rates with the crude birth rates (CBRs) and IMRs of the states and union territories (UTs) of India. The data were analyzed using linear regression. CBR and IMR were taken as the dependent variables, while the overall, male, and female literacy rates were the independent variables. CBRs were inversely related to literacy rates (slope parameter = -0.402, P < 0.001). On multiple linear regression with male and female literacy rates, a significant inverse relationship emerged between female literacy rate and CBR (slope = -0.363, P < 0.001), while male literacy rate was not significantly related to CBR (P = 0.674). IMRs of the states were also inversely related to their literacy rates (slope = -1.254, P < 0.001). Multiple linear regression revealed a significant inverse relationship between IMR and female literacy (slope = -0.816, P = 0.031), whereas male literacy rate was not significantly related (P = 0.630). Female literacy is thus particularly important for both population stabilization and better infant health.
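
    The multiple-regression step can be mimicked on synthetic state-level data with a built-in inverse female-literacy effect (the numbers are invented, not Indian census data):

```python
import numpy as np

# Hypothetical state-level data with a built-in inverse female-literacy effect
rng = np.random.default_rng(3)
female = rng.uniform(50, 95, 30)                       # female literacy (%)
male = female + rng.uniform(0, 15, 30)                 # male literacy (%), correlated
cbr = 40.0 - 0.36 * female + rng.normal(0, 0.5, 30)    # crude birth rate

# multiple linear regression: CBR on intercept, male and female literacy
X = np.column_stack([np.ones(30), male, female])
coef, *_ = np.linalg.lstsq(X, cbr, rcond=None)
```

    Despite the collinearity between male and female literacy, least squares attributes the effect to the variable that actually drives the response in this synthetic setup, mirroring the paper's finding.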

  5. Eating frequency is inversely associated with BMI, waist circumference and the proportion of body fat in Korean adults when diet quality is high, but not when it is low: analysis of the Fourth Korea National Health and Nutrition Examination Survey (KNHANES IV).

    PubMed

    Kim, Sunmi; Yang, Jeong Hee; Park, Gyeong-Hun

    2018-04-01

    The role of eating frequency (EF) in obesity development has been debated, and few studies have investigated Asian populations. Diet quality might affect the association between EF and obesity. Therefore, we investigated the association between EF and obesity indicators in a representative sample of Korean adults with consideration to diet quality. This cross-sectional study used data of 6951 participants aged 19-93 years (male 49·8 %, female 50·2 %) from the Fourth Korean National Health and Nutrition Examination Survey. EF was assessed using a questionnaire, and diet quality was defined as mean adequacy ratio (MAR). To explore the association between EF and obesity indicators, we used multiple linear regression analyses with and without interaction terms between diet quality and EF. EF was inversely associated with each obesity indicator, including body fat percentage (BF%), BMI and waist circumference (WC), showing a significant linear trend (P<0·001 for BF%, WC and BMI). In addition, the association between EF and each obesity indicator was significantly altered according to diet quality (P value of the interaction term EF×diet quality=0·008 in the regression model for BF%, <0·001 for BMI and 0·043 for WC). In the stratified analyses according to diet quality, EF had a significant inverse association with BF%, WC and BMI in the high diet quality groups, but not in the low diet quality groups. This study suggests that EF is inversely associated with the obesity indicators when diet quality is high, but not when it is low in Korean adults.

  6. Imaging electrical conductivity, permeability, and/or permittivity contrasts using the Born Scattering Inversion (BSI)

    NASA Astrophysics Data System (ADS)

    Darrh, A.; Downs, C. M.; Poppeliers, C.

    2017-12-01

    Born Scattering Inversion (BSI) of electromagnetic (EM) data is a geophysical imaging methodology for mapping weak conductivity, permeability, and/or permittivity contrasts in the subsurface. The high computational cost of full waveform inversion is reduced by adopting the First Born Approximation for scattered EM fields. This linearizes the inverse problem in terms of Born scattering amplitudes for a set of effective EM body sources within a 3D imaging volume. Estimation of scatterer amplitudes is subsequently achieved by solving the normal equations. Our present BSI numerical experiments entail Fourier transforming real-valued synthetic EM data to the frequency-domain, and minimizing the L2 residual between complex-valued observed and predicted data. We are testing the ability of BSI to resolve simple scattering models. For our initial experiments, synthetic data are acquired by three-component (3C) electric field receivers distributed on a plane above a single point electric dipole within a homogeneous and isotropic wholespace. To suppress artifacts, candidate Born scatterer locations are confined to a volume beneath the receiver array. Also, we explore two different numerical linear algebra algorithms for solving the normal equations: Damped Least Squares (DLS), and Non-Negative Least Squares (NNLS). Results from NNLS accurately recover the source location only for a large dense 3C receiver array, but fail when the array is decimated, or is restricted to horizontal component data. Using all receiver stations and all components per station, NNLS results are relatively insensitive to a sub-sampled frequency spectrum, suggesting that coarse frequency-domain sampling may be adequate for good target resolution. Results from DLS are insensitive to diminishing array density, but contain spatially oscillatory structure. DLS-generated images are consistently centered at the known point source location, despite an abundance of surrounding structure.
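
    The two solvers compared in the abstract can be contrasted on a small noiseless toy problem with a sparse non-negative model (random synthetic matrix, not actual EM sensitivities):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
G = rng.normal(size=(30, 8))               # hypothetical sensitivity matrix
m_true = np.array([0.0, 0.0, 1.5, 0.0, 0.7, 0.0, 0.0, 0.0])  # sparse, >= 0
d = G @ m_true                             # noiseless synthetic data

# Damped least squares: solve (G^T G + eps I) m = G^T d
eps = 1e-6
m_dls = np.linalg.solve(G.T @ G + eps * np.eye(8), G.T @ d)

# Non-negative least squares enforces m >= 0 directly
m_nnls, _ = nnls(G, d)
```

    On clean, well-posed data both recover the model; the abstract's point concerns their differing behaviour under array decimation and ill-posedness, which this sketch does not reproduce.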

  7. Testing the existence of optical linear polarization in young brown dwarfs

    NASA Astrophysics Data System (ADS)

    Manjavacas, E.; Miles-Páez, P. A.; Zapatero-Osorio, M. R.; Goldman, B.; Buenzli, E.; Henning, T.; Pallé, E.; Fang, M.

    2017-07-01

    Linear polarization can be used as a probe of the existence of atmospheric condensates in ultracool dwarfs. Models predict that the observed linear polarization increases with the degree of oblateness, which is inversely proportional to the surface gravity. We aimed to test the existence of optical linear polarization in a sample of bright young brown dwarfs, with spectral types between M6 and L2, observable from the Calar Alto Observatory, and cataloged previously as low-gravity objects using spectroscopy. Linear polarimetric images were collected in the I and R bands using CAFOS at the 2.2-m telescope of the Calar Alto Observatory (Spain). The flux ratio method was employed to determine the linear polarization degrees. With a confidence of 3σ, our data indicate that all targets have an average linear polarization degree below 0.69 per cent in the I band and below 1.0 per cent in the R band at the time they were observed. We detected significant (i.e., P/σ ≥ 3) linear polarization for the young M6 dwarf 2MASS J04221413+1530525 in the R band, with a degree of p* = 0.81 ± 0.17 per cent.

  8. Ambient noise adjoint tomography for a linear array in North China

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Yao, H.; Liu, Q.; Yuan, Y. O.; Zhang, P.; Feng, J.; Fang, L.

    2017-12-01

    Ambient noise tomography based on dispersion data and ray theory has been widely utilized for imaging crustal structures. In order to improve the inversion accuracy, ambient noise tomography based on the 3D adjoint approach, or full waveform inversion, has been developed recently; however, its computational cost is tremendous. In this study we present 2D ambient noise adjoint tomography for a linear array in North China, with significant computational savings compared to 3D ambient noise adjoint tomography. During the preprocessing, we first convert the observed data in 3D media, i.e., surface-wave empirical Green's functions (EGFs) from ambient noise cross-correlation, to reconstructed EGFs in 2D media using a 3D/2D transformation scheme. Unlike the conventional steps of measuring phase dispersion, the 2D adjoint tomography refines 2D shear wave speeds along the profile directly from the reconstructed Rayleigh wave EGFs in the period band 6-35 s. Starting from a 2D initial model extracted from the 3D model of traditional ambient noise tomography, adjoint tomography updates the model by minimizing the frequency-dependent Rayleigh wave traveltime misfits between the reconstructed EGFs and synthetic Green's functions (SGFs) in 2D media generated by the spectral-element method (SEM), using a preconditioned conjugate gradient method. The multitaper traveltime difference measurement is applied in four period bands during the inversion: 20-35 s, 15-30 s, 10-20 s and 6-15 s. The recovered model shows more detailed crustal structures than the initial model, with a pronounced low-velocity anomaly in the mid-lower crust beneath the junction of the Taihang Mountains and the Yin-Yan Mountains. This low-velocity structure may imply intense crust-mantle interactions, probably associated with magmatic underplating during the Mesozoic to Cenozoic evolution of the region. To our knowledge, this is the first time that ambient noise adjoint tomography has been implemented in 2D media. Considering the intensive computational cost and storage of 3D adjoint tomography, 2D ambient noise adjoint tomography offers the potential to obtain high-resolution 2D crustal structures with limited computational resources.

  9. Anisotropic microseismic focal mechanism inversion by waveform imaging matching

    NASA Astrophysics Data System (ADS)

    Wang, L.; Chang, X.; Wang, Y.; Xue, Z.

    2016-12-01

    The focal mechanism is one of the most important parameters in source inversion, for both natural earthquakes and human-induced seismic events. It has been reported to be useful for understanding stress distribution and evaluating the fracturing effect. The conventional focal mechanism inversion method picks the first-arrival waveform of the P wave. This method assumes a Double Couple (DC) source type and isotropic media, which is usually not the case for induced seismic focal mechanism inversion. For induced seismic events, an inappropriate source or media model in the inversion processing will seriously reduce the inversion effectiveness by introducing ambiguity or strong simulation errors. First, the focal mechanism contains a significant non-DC source component. In general, the source contains three components: DC, isotropic (ISO) and compensated linear vector dipole (CLVD), which makes focal mechanisms more complicated. Second, the anisotropy of the media affects travel times and waveforms, generating inversion bias. The common way to formulate focal mechanism inversion is moment tensor (MT) inversion, where the MT can be decomposed into a combination of DC, ISO and CLVD components. There are two ways to achieve MT inversion. The wave-field migration method can be applied to achieve moment tensor imaging. This method can construct element-wise imaging of the MT in 3D space without picking first arrivals, but the retrieved MT values are influenced by the imaging resolution. Alternatively, full waveform inversion can be employed to retrieve the MT. In this method, the source position and MT can be reconstructed simultaneously. However, it requires extensive numerical computation, and the source position and MT also influence each other in the inversion process. In this paper, the waveform imaging matching (WIM) method is proposed, which combines source imaging with waveform inversion for seismic focal mechanism inversion. Our method uses the 3D tilted transverse isotropic (TTI) elastic wave equation to approximate wave propagation in anisotropic media. First, a source imaging procedure is employed to obtain the source position. Second, we refine a waveform inversion algorithm to retrieve the MT. We also use a microseismic data set recorded with a surface acquisition array to test our method.
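
    The DC/ISO/CLVD split referred to above can be computed from the eigenvalues of the moment tensor; the sketch below follows one common eigenvalue-based convention (several exist in the literature):

```python
import numpy as np

def decompose_mt(m):
    """Split a symmetric moment tensor into ISO, DC and CLVD fractions
    (eigenvalue-based decomposition, one of several conventions)."""
    eig = np.linalg.eigvalsh(m)
    m_iso = eig.sum() / 3.0
    dev = eig - m_iso                            # deviatoric eigenvalues
    dev_sorted = dev[np.argsort(np.abs(dev))]    # sort by absolute value
    eps = dev_sorted[0] / abs(dev_sorted[2]) if dev_sorted[2] != 0 else 0.0
    p_iso = abs(m_iso) / (abs(m_iso) + abs(dev_sorted[2]))
    p_clvd = 2.0 * abs(eps) * (1.0 - p_iso)
    p_dc = (1.0 - 2.0 * abs(eps)) * (1.0 - p_iso)
    return p_iso, p_dc, p_clvd

# pure double-couple source: no ISO and no CLVD component expected
p_iso, p_dc, p_clvd = decompose_mt(np.diag([1.0, 0.0, -1.0]))
```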

  10. Finite-frequency tomography using adjoint methods-Methodology and examples using membrane surface waves

    NASA Astrophysics Data System (ADS)

    Tape, Carl; Liu, Qinya; Tromp, Jeroen

    2007-03-01

    We employ adjoint methods in a series of synthetic seismic tomography experiments to recover surface wave phase-speed models of southern California. Our approach involves computing the Fréchet derivative for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an `adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a 2-D spectral-element method (SEM) and a phase-speed model for southern California. A `target' phase-speed model is used to generate the `data' at the receivers. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the remaining differences between data and synthetics are time-reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. An event kernel may be thought of as a weighted sum of phase-specific (e.g. P) banana-doughnut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, that is, the Fréchet derivative. A non-linear conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. We illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions and joint source-structure inversions. Finally, we draw connections between classical Hessian-based tomography and gradient-based adjoint tomography.
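
    A toy stand-in for the gradient-based model update: a quadratic misfit minimized with nonlinear conjugate gradients, where the analytic gradient plays the role of the Fréchet derivative assembled from event kernels (entirely illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Toy quadratic misfit in two "model parameters"; its analytic gradient is the
# stand-in for the Frechet derivative assembled from event kernels
def misfit(m):
    return 0.5 * ((m[0] - 3.0) ** 2 + 4.0 * (m[1] + 1.0) ** 2)

def gradient(m):
    return np.array([m[0] - 3.0, 4.0 * (m[1] + 1.0)])

res = minimize(misfit, x0=np.zeros(2), jac=gradient, method="CG")
```

    In the real problem, evaluating `gradient` costs one forward plus one adjoint simulation per earthquake, which is why the adjoint approach scales with the number of events rather than the number of model parameters.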

  11. Review: Optimization methods for groundwater modeling and management

    NASA Astrophysics Data System (ADS)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
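The conjunctive-use formulation mentioned above can be made concrete with a toy linear program (illustrative numbers, not from the review): minimize the cost of meeting a water demand from a surface-water source and a groundwater source, each with a capacity limit.

```python
from scipy.optimize import linprog

# Toy conjunctive-use LP: meet a demand of 100 units at minimum cost from
# surface water (cost 2/unit, capacity 60) and groundwater (cost 5/unit,
# capacity 80). All numbers are illustrative.
c = [2.0, 5.0]                        # unit costs: [surface, ground]
A_ub = [[-1.0, -1.0]]                 # -(sw + gw) <= -demand, i.e. sw + gw >= 100
b_ub = [-100.0]
bounds = [(0.0, 60.0), (0.0, 80.0)]   # source capacities

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
```

The optimum uses the cheaper surface source to capacity (60 units) and covers the remaining 40 units from groundwater.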

  12. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models

    PubMed Central

    Coelho, Antonio Augusto Rodrigues

    2016-01-01

    This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system in which membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Because membership functions act as interpolation kernels, the choice of membership function determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the problem of modeling static nonlinearities, since it is capable of modeling both a function and its inverse. Three case studies from the literature are presented: a single-input single-output (SISO) system, a MISO system and a MIMO system. Good results are obtained with regard to performance metrics such as set-point tracking, control variation and robustness. The results demonstrate the applicability of the proposed method to modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
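A minimal 1-D sketch of the kernel-interpolation idea (illustrative, not the authors' implementation): triangular membership functions on a partition of the unit interval act as interpolation kernels, and their normalized, value-weighted combination reduces to linear interpolation, one of the behaviors the abstract lists.

```python
import numpy as np

# 1-D FLHI-style kernel interpolator: triangular membership functions centred
# on a regular grid in the unit interval. Nodes and sampled values are assumed.
nodes = np.linspace(0.0, 1.0, 5)       # rule centres (1-D slice of the hypercube)
values = nodes ** 2                     # sampled static nonlinearity, assumed

def tri_membership(x, centers):
    # triangular kernels with support equal to twice the grid spacing
    h = centers[1] - centers[0]
    mu = np.maximum(0.0, 1.0 - np.abs(x - centers) / h)
    return mu / mu.sum()               # normalized: a partition of unity

def flhi_interp(x):
    # Takagi-Sugeno style weighted combination of node values
    return tri_membership(x, nodes) @ values
```

With triangular kernels the interpolator passes exactly through the nodes and is piecewise linear between them; swapping the kernel changes the interpolation character, which is the point of the FLHI construction.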

  13. The inverse problem: Ocean tides derived from earth tide observations

    NASA Technical Reports Server (NTRS)

    Kuo, J. T.

    1978-01-01

    Indirect mapping of ocean tides by means of land- and island-based tidal gravity measurements is presented. The inverse scheme of linear programming is used for the indirect mapping of ocean tides. Open-ocean tides were computed by numerical integration of Laplace's tidal equations.

  14. A large inversion in the linear chromosome of Streptomyces griseus caused by replicative transposition of a new Tn3 family transposon.

    PubMed

    Murata, M; Uchida, T; Yang, Y; Lezhava, A; Kinashi, H

    2011-04-01

    We have comprehensively analyzed the linear chromosomes of Streptomyces griseus mutants constructed and kept in our laboratory. During this study, macrorestriction analysis of AseI and DraI fragments of mutant 402-2 suggested a large chromosomal inversion. The junctions of the chromosomal inversion were cloned, sequenced, and compared with the corresponding target sequences in the parent strain 2247. Consequently, a transposon-involved mechanism was revealed. Namely, a transposon originally located at the left target site was replicatively transposed to the right target site in an inverted direction, which generated a second copy and at the same time caused a 2.5-Mb chromosomal inversion. The transposon involved, named TnSGR, was grouped into a new subfamily of the resolvase-encoding Tn3 family transposons based on its gene organization. Finally, the terminal diversity of S. griseus chromosomes is discussed by comparing the sequences of strains 2247 and IFO13350.

  15. Analyzing industrial energy use through ordinary least squares regression models

    NASA Astrophysics Data System (ADS)

    Golden, Allyson Katherine

    Extensive research has been performed using regression analysis and calibrated simulations to create baseline energy consumption models for residential buildings and commercial institutions. However, few attempts have been made to discuss the applicability of these methodologies to establish baseline energy consumption models for industrial manufacturing facilities. In the few studies of industrial facilities, the presented linear change-point and degree-day regression analyses illustrate ideal cases. It follows that there is a need in the established literature to discuss the methodologies and to determine their applicability for establishing baseline energy consumption models of industrial manufacturing facilities. This thesis determines the effectiveness of simple inverse linear statistical regression models when establishing baseline energy consumption models for industrial manufacturing facilities. Ordinary least squares change-point and degree-day regression methods are used to create baseline energy consumption models for nine different case studies of industrial manufacturing facilities located in the southeastern United States. The influence of ambient dry-bulb temperature and production on total facility energy consumption is observed. The energy consumption behavior of industrial manufacturing facilities is only sometimes sufficiently explained by temperature, production, or a combination of the two variables. This thesis also provides methods for generating baseline energy models that are straightforward and accessible to anyone in the industrial manufacturing community. The methods outlined in this thesis may be easily replicated by anyone who possesses basic spreadsheet software and general knowledge of the relationship between energy consumption and weather, production, or other influential variables.
With the help of simple inverse linear regression models, industrial manufacturing facilities may better understand their energy consumption and production behavior, and identify opportunities for energy and cost savings. This thesis study also utilizes change-point and degree-day baseline energy models to disaggregate facility annual energy consumption into separate industrial end-user categories. The baseline energy model provides a suitable and economical alternative to sub-metering individual manufacturing equipment. One case study describes the conjoined use of baseline energy models and facility information gathered during a one-day onsite visit to perform an end-point energy analysis of an injection molding facility conducted by the Alabama Industrial Assessment Center. Applying baseline regression model results to the end-point energy analysis allowed the AIAC to better approximate the annual energy consumption of the facility's HVAC system.
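The change-point regression described above can be sketched on synthetic data (assumed cooling-season form E = b0 + b1·max(T − Tcp, 0); not the thesis's datasets): grid-search the change point and fit the coefficients by ordinary least squares at each candidate.

```python
import numpy as np

# Synthetic daily data for a hypothetical facility: energy use is flat below a
# change-point temperature Tcp and rises linearly above it (cooling regime).
rng = np.random.default_rng(1)
T = rng.uniform(0.0, 35.0, 200)                     # dry-bulb temperature, degC
E = 50.0 + 3.0 * np.maximum(T - 18.0, 0.0) + rng.normal(0.0, 1.0, 200)

# Grid search over the change point; OLS for the coefficients at each candidate
best = None
for Tcp in np.arange(5.0, 30.0, 0.25):
    X = np.column_stack([np.ones_like(T), np.maximum(T - Tcp, 0.0)])
    beta, *_ = np.linalg.lstsq(X, E, rcond=None)
    sse = np.sum((X @ beta - E) ** 2)               # sum of squared errors
    if best is None or sse < best[0]:
        best = (sse, Tcp, beta)

sse, Tcp_hat, (b0_hat, b1_hat) = best
```

This three-parameter model (baseline load, change point, slope) is the kind of spreadsheet-replicable baseline the thesis advocates.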

  16. Analysis of pumping tests of partially penetrating wells in an unconfined aquifer using inverse numerical optimization

    NASA Astrophysics Data System (ADS)

    Hvilshøj, S.; Jensen, K. H.; Barlebo, H. C.; Madsen, B.

    1999-08-01

    Inverse numerical modeling was applied to analyze pumping tests of partially penetrating wells carried out in three wells established in an unconfined aquifer in Vejen, Denmark, where extensive field investigations had previously been carried out, including tracer tests, mini-slug tests, and other hydraulic tests. Drawdown data from multiple piezometers located at various horizontal and vertical distances from the pumping well were included in the optimization. Horizontal and vertical hydraulic conductivities, specific storage, and specific yield were estimated, assuming that the aquifer was either a homogeneous system with vertical anisotropy or composed of two or three layers of different hydraulic properties. In two out of three cases, a more accurate interpretation was obtained for a multi-layer model defined on the basis of lithostratigraphic information obtained from geological descriptions of sediment samples, gammalogs, and flow-meter tests. Analysis of the pumping tests resulted in values for horizontal hydraulic conductivities that are in good accordance with those obtained from slug tests and mini-slug tests. Besides the horizontal hydraulic conductivity, it is possible to determine the vertical hydraulic conductivity, specific yield, and specific storage based on a pumping test of a partially penetrating well. The study demonstrates that pumping tests of partially penetrating wells can be analyzed using inverse numerical models. The model used in the study was a finite-element flow model combined with a non-linear regression model. Such a model can accommodate more geological information and complex boundary conditions, and the parameter-estimation procedure can be formalized to obtain optimum estimates of hydraulic parameters and their standard deviations.

  17. More green space is related to less antidepressant prescription rates in the Netherlands: A Bayesian geoadditive quantile regression approach.

    PubMed

    Helbich, Marco; Klein, Nadja; Roberts, Hannah; Hagedoorn, Paulien; Groenewegen, Peter P

    2018-06-20

    Exposure to green space seems to be beneficial for self-reported mental health. In this study we used an objective health indicator, namely antidepressant prescription rates. Current studies rely exclusively upon mean regression models assuming linear associations. It is, however, plausible that the presence of green space is non-linearly related to different quantiles of the outcome, antidepressant prescription rates. These restrictions may contribute to inconsistent findings. Our aims were: a) to assess antidepressant prescription rates in relation to green space, and b) to analyze how the relationship varies non-linearly across different quantiles of antidepressant prescription rates. We used cross-sectional data for the year 2014 at the municipality level in the Netherlands. Ecological Bayesian geoadditive quantile regressions were fitted for the 15%, 50%, and 85% quantiles to estimate green space-prescription rate correlations, controlling for physical activity levels, socio-demographics, urbanicity, etc. The results suggested that green space was overall inversely and non-linearly associated with antidepressant prescription rates. More importantly, the associations differed across the quantiles, although the variation was modest. Significant non-linearities were apparent: the associations were slightly positive in the lower quantile and strongly negative in the upper one. Our findings imply that an increased availability of green space within a municipality may contribute to a reduction in the number of antidepressant prescriptions dispensed. Green space is thus a central health and community asset, although a minimum level of 28% appears necessary for health gains. The highest effectiveness occurred at a municipality surface percentage higher than 79%. This inverse dose-dependent relation has important implications for setting future community-level health and planning policies. Copyright © 2018 Elsevier Inc. All rights reserved.
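The building block of any quantile regression, including the geoadditive variant used here, is the pinball (check) loss, whose minimizer over a constant is the sample quantile. A location-only sketch on synthetic data (not the study's prescription data):

```python
import numpy as np

# Pinball loss for quantile level q: rho_q(u) = u * (q - 1{u < 0}).
# Minimizing its mean over a constant recovers the sample q-quantile; full
# quantile regression replaces the constant with a covariate-dependent term.
rng = np.random.default_rng(4)
y = rng.normal(10.0, 2.0, 5000)        # synthetic outcome (e.g. prescription rates)

def pinball(u, q):
    return np.where(u >= 0, q * u, (q - 1.0) * u)

def fit_quantile(y, q, grid):
    # brute-force minimization of the mean pinball loss over a grid of constants
    losses = [pinball(y - c, q).mean() for c in grid]
    return grid[int(np.argmin(losses))]

grid = np.linspace(y.min(), y.max(), 2001)
q85 = fit_quantile(y, 0.85, grid)      # matches the upper quantile fitted in the study
```

Fitting separate models at the 15%, 50% and 85% levels, as the study does, is what lets the green-space association differ between low- and high-prescription municipalities.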

  18. Point-source inversion techniques

    NASA Astrophysics Data System (ADS)

    Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.

    1982-11-01

    A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
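Because synthetic seismograms are linear in the six independent moment-tensor components, the generalized inversion reduces to least squares, d = G m. A minimal numeric sketch (random stand-in excitation kernels, not real waveform data or Green's functions):

```python
import numpy as np

# d = G m: waveform samples are linear combinations of the six moment-tensor
# components through Green's-function excitation kernels. G here is a random
# stand-in so the linear-algebra step can be shown in isolation.
rng = np.random.default_rng(2)
n_samples, n_mt = 300, 6
G = rng.normal(size=(n_samples, n_mt))                # assumed excitation kernels
m_true = np.array([1.0, -0.5, -0.5, 0.2, 0.0, 0.3])   # hypothetical (deviatoric) tensor
d = G @ m_true + rng.normal(0.0, 0.01, n_samples)     # noisy "observed" waveforms

# Generalized (least-squares) inversion for the moment tensor
m_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
```

The abstract's caveats live outside this step: source finiteness and inadequate Green's functions bias G itself, which no amount of least-squares fitting can repair.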

  19. Modelisations et inversions tri-dimensionnelles en prospections gravimetrique et electrique [Three-dimensional modelling and inversion in gravity and electrical prospecting]

    NASA Astrophysics Data System (ADS)

    Boulanger, Olivier

    The aim of this thesis is the application of gravity and resistivity methods to mining prospecting. The objectives of the present study are: (1) to build a fast gravity inversion method to interpret surface data; (2) to develop a tool for modelling the electrical potential acquired at the surface and in boreholes when the resistivity distribution is heterogeneous; and (3) to define and implement a stochastic inversion scheme allowing the estimation of subsurface resistivity from electrical data. The first technique concerns the elaboration of a three-dimensional (3D) inversion program allowing the interpretation of gravity data using a selection of constraints such as minimum distance, flatness, smoothness and compactness. These constraints are integrated in a Lagrangian formulation. A multi-grid technique is also implemented to resolve large and short gravity wavelengths separately. The subsurface in the survey area is divided into juxtaposed rectangular prismatic blocks. The problem is solved by calculating the model parameters, i.e. the densities of each block. Weights are given to each block depending on depth, a priori information on density, and the density range allowed for the region under investigation. The code is tested on synthetic data, and the advantages and behaviour of each method are compared in the 3D reconstruction. Recovery of the geometry (depth, size) and density distribution of the original model depends on the set of constraints used; of the combinations tested, flatness together with minimum volume appears to work best for multiple bodies. The inversion method is also tested on real gravity data. The second tool developed in this thesis is a three-dimensional electrical resistivity modelling code to interpret surface and subsurface data. 
    Based on the integral equation, it calculates the charge density caused by conductivity gradients at each interface of the mesh, allowing an exact estimation of the potential. Modelling generates a huge matrix of Green's functions, which is stored using pyramidal compression. The third method interprets electrical potential measurements through a non-linear geostatistical approach including new constraints. This method estimates an analytical covariance model for the resistivity parameters from the potential data. (Abstract shortened by UMI.)

  20. Algorithms for solving large sparse systems of simultaneous linear equations on vector processors

    NASA Technical Reports Server (NTRS)

    David, R. E.

    1984-01-01

    Very efficient algorithms for solving large sparse systems of simultaneous linear equations have been developed for serial processing computers. These involve a reordering of matrix rows and columns in order to obtain a near triangular pattern of nonzero elements. Then an LU factorization is developed to represent the matrix inverse in terms of a sequence of elementary Gaussian eliminations, or pivots. In this paper it is shown how these algorithms are adapted for efficient implementation on vector processors. Results obtained on the CYBER 200 Model 205 are presented for a series of large test problems which show the comparative advantages of the triangularization and vector processing algorithms.
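Modern sparse direct solvers follow the same recipe the paper describes: reorder to limit fill-in, then represent the matrix inverse through LU factors applied by triangular substitution, never forming the inverse explicitly. A minimal sketch using SciPy's SuperLU interface on a small stand-in matrix:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Small stand-in sparse system A x = b (a real problem would be much larger
# and sparser; the solver workflow is the same).
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 1.0],
                         [0.0, 1.0, 2.0]]))
b = np.array([1.0, 2.0, 3.0])

lu = splu(A)       # SuperLU: column reordering + sequence of Gaussian-elimination pivots
x = lu.solve(b)    # forward/back substitution with the stored L and U factors
```

Once factored, additional right-hand sides cost only the two triangular solves, which is the property the paper's vectorized pivot sequences exploit.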

  1. Inverse estimation of critical factors for controlling over-prediction of summertime tropospheric O3 over East Asia based on the combination of DDM sensitivity analysis and a modeled Green's function method

    NASA Astrophysics Data System (ADS)

    Itahashi, S.; Yumimoto, K.; Uno, I.; Kim, S.

    2012-12-01

    Air quality studies based on chemical transport models have provided many important results that advance our understanding of air pollution phenomena; however, discrepancies between model results and observations remain an important issue. One such issue is the over-prediction of summertime tropospheric ozone over remote areas of Japan, which has been pointed out in model intercomparison studies at both the regional scale (e.g., MICS-Asia) and the global scale (e.g., TF-HTAP). Possible causes include (i) the modeled reproducibility of the penetration of clean oceanic air masses, (ii) the accuracy of anthropogenic NOx / VOC emission estimates over East Asia, and (iii) the chemical reaction scheme used in the model simulation. In this study, we attempt an inverse estimation of some important chemical reaction constants by combining DDM (decoupled direct method) sensitivity analysis with a modeled Green's function approach. DDM is an efficient and accurate way of performing sensitivity analysis with respect to model inputs: it calculates sensitivity coefficients representing the responsiveness of atmospheric chemical concentrations to perturbations in a model input or parameter. The inverse solutions with Green's functions are given by a linear least-squares method but remain robust against nonlinearities. To construct the response matrix (i.e., the Green's functions), we can directly use the results of the DDM sensitivity analysis. Reaction constants with relatively large uncertainties are determined under the constraint of observed ozone concentrations over remote areas of Japan. Our inverse estimation demonstrated that the rate constant of the HNO3-producing reaction (NO2 + OH + M → HNO3 + M) in the SAPRC99 chemical scheme is underestimated; the inversion indicated a +29.0 % increment to this reaction. 
    This estimate agrees well with the corresponding CB4, CB5 and SAPRC07 values. For the NO2 photolysis rate, a 49.4 % reduction was indicated, which points to the importance of incorporating the effect of heavy aerosol loading on photolysis rates in numerical studies.
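The Green's-function step described above can be sketched as ordinary linear least squares: DDM sensitivity coefficients form a response matrix S, and the scaling adjustments x solve S x = (observed − modeled O3). The numbers below are purely illustrative, chosen so the solution increases the first reaction and reduces the second; they are not the study's values.

```python
import numpy as np

# Illustrative response matrix: each column holds DDM sensitivities of ozone
# (at 4 remote sites) to a fractional scaling of one reaction constant,
# ordered as (HNO3 formation, NO2 photolysis). Values are made up.
S = np.array([[-0.8, 0.3],
              [-0.6, 0.5],
              [-0.9, 0.2],
              [-0.7, 0.4]])
# "Observed minus modeled" ozone, constructed to be consistent with
# adjustments of +0.3 (HNO3 formation) and -0.5 (photolysis).
residual = np.array([-0.39, -0.43, -0.37, -0.41])

# Linear least-squares inversion for the fractional scaling adjustments
x, *_ = np.linalg.lstsq(S, residual, rcond=None)
```

The same machinery scales to many sites and reactions; robustness then comes from the overdetermined least-squares fit rather than from the toy exactness used here.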

  2. Analytical flow duration curves for summer streamflow in Switzerland

    NASA Astrophysics Data System (ADS)

    Santos, Ana Clara; Portela, Maria Manuela; Rinaldo, Andrea; Schaefli, Bettina

    2018-04-01

    This paper proposes a systematic assessment of the performance of an analytical modeling framework for streamflow probability distributions for a set of 25 Swiss catchments. These catchments show a wide range of hydroclimatic regimes, including, notably, snow-influenced streamflows. The model parameters are calculated from a spatially averaged gridded daily precipitation data set and from observed daily discharge time series, both in a forward estimation mode (direct parameter calculation from observed data) and in an inverse estimation mode (maximum likelihood estimation). The performance of the linear and the nonlinear model versions is assessed in terms of reproducing observed flow duration curves and their natural variability. Overall, the nonlinear model version outperforms the linear model for all regimes, but the linear model shows a notable performance increase with catchment elevation. More importantly, the obtained results demonstrate that the analytical model performs well for summer discharge for all analyzed streamflow regimes, ranging from rainfall-driven regimes with summer low flow to snow and glacier regimes with summer high flow. These results suggest that the model's encoding of discharge-generating events based on stochastic soil moisture dynamics is more flexible than previously thought. As shown in this paper, the presence of snowmelt or ice melt is accommodated by a relative increase in the discharge-generating frequency, a key parameter of the model. Explicit quantification of this frequency increase as a function of mean catchment meteorological conditions is left for future research.

  3. A New Comprehensive Model for Crustal and Upper Mantle Structure of the European Plate

    NASA Astrophysics Data System (ADS)

    Morelli, A.; Danecek, P.; Molinari, I.; Postpischl, L.; Schivardi, R.; Serretti, P.; Tondi, M. R.

    2009-12-01

    We present a new comprehensive model of crustal and upper mantle structure of the whole European Plate — from the North Atlantic ridge to the Urals, and from North Africa to the North Pole — describing seismic speeds (P and S) and density. Our description of crustal structure merges information from previous studies: large-scale compilations, seismic prospection, receiver functions, inversion of surface wave dispersion measurements and Green functions from noise correlation. We use a simple description of crustal structure, with laterally varying sediment and crystalline layer thicknesses and seismic parameters. Most of the original information refers to P-wave speed, from which we derive S speed and density via scaling relations. This a priori crustal model by itself improves the overall fit to observed Bouguer anomaly maps, as derived from GRACE satellite data, over CRUST2.0. The new crustal model is then used as a constraint in the inversion for mantle shear wave speed, based on fitting Love and Rayleigh surface wave dispersion. In the inversion for transversely isotropic mantle structure, we use group speed measurements made on European event-to-station paths, and use a global a priori model (S20RTS) to ensure fair rendition of earth structure at depth and in border areas with little coverage from our data. The new mantle model noticeably improves over global S models in the imaging of shallow asthenospheric (slow) anomalies beneath the Alpine mobile belt, and of fast lithospheric signatures under the two main Mediterranean subduction systems (Aegean and Tyrrhenian). We map compressional wave speed by inverting ISC travel times (reprocessed by Engdahl et al.) with a non-linear inversion scheme making use of finite-difference travel time calculations. The inversion is based on an a priori model obtained by scaling the 3D mantle S-wave speed to P. 
    The new model substantially confirms images of descending lithospheric slabs and back-arc shallow asthenospheric regions, shown in other more local high-resolution tomographic studies, but covers the whole extent of the European Plate. We also obtain three-dimensional mantle density structure by inversion of GRACE Bouguer anomalies, locally adjusting density and the scaling relation between seismic wave speeds and density. We validate the new comprehensive model through comparison of recorded seismograms with numerical simulations based on SPECFEM3D. This work is a contribution towards the definition of a reference earth model for Europe. To this end, in order to improve model dissemination and comparison, we propose the adoption of a common exchange format for tomographic earth models based on JSON, a lightweight data-interchange format supported by most high-level programming languages. We provide tools for manipulating and visualising models, described in this standard format, in Google Earth and GEON IDV.

  4. 3D linear inversion of magnetic susceptibility data acquired by frequency domain EMI

    NASA Astrophysics Data System (ADS)

    Thiesson, J.; Tabbagh, A.; Simon, F.-X.; Dabas, M.

    2017-01-01

    Low induction number EMI instruments are able to simultaneously measure a soil's apparent magnetic susceptibility and electrical conductivity. This family of dual measurement instruments is highly useful for the analysis of soils and archeological sites. However, the electromagnetic properties of soils are found to vary over considerably different ranges: whereas their electrical conductivity varies from ≤ 0.1 to ≥ 100 mS/m, their relative magnetic permeability remains within a very small range, between 1.0001 and 1.01 SI. Consequently, although apparent conductivity measurements need to be inverted using non-linear processes, the variations of the apparent magnetic susceptibility can be approximated through the use of linear processes, as in the case of the magnetic prospection technique. Our proposed 3D inversion algorithm starts from apparent susceptibility data sets, acquired using different instruments over a given area. A reference vertical profile is defined by considering the mode of the vertical distributions of both the electrical resistivity and of the magnetic susceptibility. At each point of the mapped area, the reference vertical profile response is subtracted to obtain the apparent susceptibility variation dataset. A 2D horizontal Fourier transform is applied to these variation datasets and to the dipole (impulse) response of each instrument, a (vertical) 1D inversion is performed at each point in the spectral domain, and finally the resulting dataset is inverse transformed to restore the apparent 3D susceptibility variations. It has been shown that when applied to synthetic results, this method is able to correct the apparent deformations of a buried object resulting from the geometry of the instrument, and to restore reliable quantitative susceptibility contrasts. It also allows the thin layer solution, similar to that used in magnetic prospection, to be implemented. 
    When applied to field data, it delivers a level of contrast comparable to that obtained with a non-linear 3D inversion. Over four different sites, this method is able to produce, within an acceptably short computation time, realistic values for the lateral and vertical variations in susceptibility, which differ significantly from those given by a point-by-point 1D inversion.
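The spectral-domain step described above can be illustrated with a toy 2-D analogue (synthetic kernel and susceptibility map, not the instrument's actual dipole response): convolution with the instrument response becomes a product after a 2-D FFT, so a damped division in the spectral domain inverts it wavenumber by wavenumber.

```python
import numpy as np

# Synthetic susceptibility map with one buried body (SI units, made up)
nx = ny = 32
kappa = np.zeros((ny, nx))
kappa[12:18, 10:20] = 1e-3

# Assumed instrument impulse response: a normalized Gaussian stand-in
yy, xx = np.mgrid[-ny // 2:ny // 2, -nx // 2:nx // 2]
kernel = np.exp(-(xx ** 2 + yy ** 2) / 8.0)
kernel /= kernel.sum()
kernel = np.fft.ifftshift(kernel)        # put the kernel origin at index (0, 0)

# Forward model: apparent susceptibility = true map convolved with the response
data = np.real(np.fft.ifft2(np.fft.fft2(kappa) * np.fft.fft2(kernel)))

# Inverse: damped spectral division (a 1-D inversion at each wavenumber)
K = np.fft.fft2(kernel)
eps = 1e-3                               # damping against noise blow-up
kappa_hat = np.real(np.fft.ifft2(
    np.fft.fft2(data) * np.conj(K) / (np.abs(K) ** 2 + eps ** 2)))
```

The damping term caps the amplification at wavenumbers where the instrument response is weak; the recovered map is sharper than the raw apparent data while remaining stable.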

  5. Evaluation of the inverse dispersion modelling method for estimating ammonia multi-source emissions using low-cost long time averaging sensor

    NASA Astrophysics Data System (ADS)

    Loubet, Benjamin; Carozzi, Marco

    2015-04-01

    Tropospheric ammonia (NH3) is a key player in atmospheric chemistry, and its deposition is a threat to the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most global NH3 emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emissions from a homogeneous source of known geometry. When the emission derives from different sources inside the measured footprint, it should be treated as a multi-source problem. This work aims to determine whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments, composed of small fields (typically squares 25 m on a side) located close to each other, using low-cost NH3 measurements (diffusion samplers). To that end, a numerical experiment was designed with a combination of 3 x 3 square field sources (625 m2 each) and a set of sensors placed at the centre of each field at several heights, as well as 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian stochastic (WindTrax) and a Gaussian-like (FIDES) dispersion model. The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the behaviour of diffusion samplers under several sampling strategies. The sources were then inferred by inverse modelling using the averaged concentrations and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of NH3 emissions to surface temperature. A combination of emission patterns (constant, linearly decreasing, exponentially decreasing and Gaussian) and strengths was used to evaluate the uncertainty of the inversion method. 
    Each numerical experiment covered a period of 28 days. The meteorological dataset of the fluxnet FR-Gri site (Grignon, FR) in 2008 was employed. Several sensor heights were tested, from 0.25 m to 2 m. The multi-source inverse problem was solved under several sampling and field-trial strategies: considering 1 or 2 heights over each field, considering the background concentration as known or unknown, and considering block repetitions in the field set-up (3 repetitions). The inverse modelling approach proved suitable for discriminating large differences in NH3 emissions from small agronomic plots using integrating sensors. The method is sensitive to sensor height. The uncertainties and systematic biases are evaluated and discussed.
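With time-averaged concentrations, the multi-source step reduces to a linear system: sensor concentrations are linear in the constant source strengths through model-computed dispersion coefficients. A minimal sketch with arbitrary stand-in coefficients (not WindTrax or FIDES output):

```python
import numpy as np

# C = D S + Cb: time-averaged concentrations C at the sensors are linear in
# the source strengths S through a dispersion matrix D that a forward model
# would supply. D here is an arbitrary positive stand-in.
rng = np.random.default_rng(3)
n_sensors, n_fields = 12, 9                  # e.g. 3 x 3 plots, sensors at centres + ring
D = rng.uniform(0.05, 1.0, (n_sensors, n_fields))   # assumed coefficients, s m-1
S_true = rng.uniform(0.0, 5.0, n_fields)     # NH3 strengths, ug m-2 s-1 (synthetic)
C_b = 1.0                                    # background concentration, assumed known
C = D @ S_true + C_b + rng.normal(0.0, 0.01, n_sensors)  # sampler readings + noise

# Backward (inverse) step: least-squares solve for the source strengths
S_hat, *_ = np.linalg.lstsq(D, C - C_b, rcond=None)
```

Treating the background as unknown, as one of the paper's strategies does, simply appends a column of ones to D and one more unknown to S.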

  6. Age and neurodegeneration imaging biomarkers in persons with Alzheimer disease dementia

    PubMed Central

    Jack, Clifford R.; Wiste, Heather J.; Weigand, Stephen D.; Vemuri, Prashanthi; Lowe, Val J.; Kantarci, Kejal; Gunter, Jeffrey L.; Senjem, Matthew L.; Mielke, Michelle M.; Machulda, Mary M.; Roberts, Rosebud O.; Boeve, Bradley F.; Jones, David T.; Petersen, Ronald C.

    2016-01-01

    Objective: To examine neurodegenerative imaging biomarkers in Alzheimer disease (AD) dementia from middle to old age. Methods: Persons with AD dementia and elevated brain β-amyloid with Pittsburgh compound B (PiB)-PET imaging underwent [18F]-fluorodeoxyglucose (FDG)-PET and structural MRI. We evaluated 3 AD-related neurodegeneration biomarkers: hippocampal volume adjusted for total intracranial volume (HVa), FDG standardized uptake value ratio (SUVR) in regions of interest linked to AD, and cortical thickness in AD-related regions of interest. We examined associations of each biomarker with age and evaluated age effects on cutpoints defined by the 90th percentile in AD dementia. We assembled an age-, sex-, and intracranial volume-matched group of 194 similarly imaged clinically normal (CN) persons. Results: The 97 participants with AD dementia (aged 49–93 years) had PiB SUVR ≥1.8. A nonlinear (inverted-U) relationship between FDG SUVR and age was seen in the AD group but an inverse linear relationship with age was seen in the CN group. Cortical thickness had an inverse linear relationship with age in AD but a nonlinear (flat, then inverse linear) relationship in the CN group. HVa showed an inverse linear relationship with age in both AD and CN groups. Age effects on 90th percentile cutpoints were small for FDG SUVR and cortical thickness, but larger for HVa. Conclusions: In persons with AD dementia with elevated PiB SUVR, values of each neurodegeneration biomarker were associated with age. Cortical thickness had the smallest differences in 90th percentile cutpoints from middle to old age, and HVa the largest differences. PMID:27421543

  7. Age and neurodegeneration imaging biomarkers in persons with Alzheimer disease dementia.

    PubMed

    Knopman, David S; Jack, Clifford R; Wiste, Heather J; Weigand, Stephen D; Vemuri, Prashanthi; Lowe, Val J; Kantarci, Kejal; Gunter, Jeffrey L; Senjem, Matthew L; Mielke, Michelle M; Machulda, Mary M; Roberts, Rosebud O; Boeve, Bradley F; Jones, David T; Petersen, Ronald C

    2016-08-16

    To examine neurodegenerative imaging biomarkers in Alzheimer disease (AD) dementia from middle to old age. Persons with AD dementia and elevated brain β-amyloid with Pittsburgh compound B (PiB)-PET imaging underwent [(18)F]-fluorodeoxyglucose (FDG)-PET and structural MRI. We evaluated 3 AD-related neurodegeneration biomarkers: hippocampal volume adjusted for total intracranial volume (HVa), FDG standardized uptake value ratio (SUVR) in regions of interest linked to AD, and cortical thickness in AD-related regions of interest. We examined associations of each biomarker with age and evaluated age effects on cutpoints defined by the 90th percentile in AD dementia. We assembled an age-, sex-, and intracranial volume-matched group of 194 similarly imaged clinically normal (CN) persons. The 97 participants with AD dementia (aged 49-93 years) had PiB SUVR ≥1.8. A nonlinear (inverted-U) relationship between FDG SUVR and age was seen in the AD group but an inverse linear relationship with age was seen in the CN group. Cortical thickness had an inverse linear relationship with age in AD but a nonlinear (flat, then inverse linear) relationship in the CN group. HVa showed an inverse linear relationship with age in both AD and CN groups. Age effects on 90th percentile cutpoints were small for FDG SUVR and cortical thickness, but larger for HVa. In persons with AD dementia with elevated PiB SUVR, values of each neurodegeneration biomarker were associated with age. Cortical thickness had the smallest differences in 90th percentile cutpoints from middle to old age, and HVa the largest differences. © 2016 American Academy of Neurology.

  8. FWT2D: A massively parallel program for frequency-domain full-waveform tomography of wide-aperture seismic data—Part 1: Algorithm

    NASA Astrophysics Data System (ADS)

    Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves

    2009-03-01

    This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the solution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the solution of a large sparse system of linear equations, which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, the latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion, ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper, where the parallel efficiency and the scalability of the code will also be quantified.

  9. Analytical and numerical analysis of inverse optimization problems: conditions of uniqueness and computational methods

    PubMed Central

    Zatsiorsky, Vladimir M.

    2011-01-01

    One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of the infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907

  10. A comprehensive inversion approach for feedforward compensation of piezoactuator system at high frequency

    NASA Astrophysics Data System (ADS)

    Tian, Lizhi; Xiong, Zhenhua; Wu, Jianhua; Ding, Han

    2016-09-01

    Motion control of the piezoactuator system over broadband frequencies is limited due to its inherent hysteresis and system dynamics. One of the suggested ways is to use a feedforward controller to linearize the input-output relationship of the piezoactuator system. Although there have been many feedforward approaches, it is still a challenge to develop a feedforward controller for the piezoactuator system at high frequency. Hence, this paper presents a comprehensive inversion approach that accounts for the coupling of hysteresis and dynamics. In this work, the influence of dynamics compensation on the input-output relationship of the piezoactuator system is investigated first. With system dynamics compensation, the input-output relationship of the piezoactuator system is further represented as a rate-dependent nonlinearity due to the inevitable dynamics compensation error, especially at high frequency. Based on this result, a feedforward controller composed of a cascade of a linear dynamics inversion and a rate-dependent nonlinearity inversion is developed. Then, the system identification of the comprehensive inversion approach is proposed. Finally, experimental results show that the proposed approach improves tracking of both periodic and non-periodic trajectories at medium and high frequency compared with conventional feedforward approaches.

  11. Shear Wave Splitting Inversion in a Complex Crust

    NASA Astrophysics Data System (ADS)

    Lucas, A.

    2015-12-01

    Shear wave splitting (SWS) inversion provides a method for interrogating the upper crust for fracture density. SWS occurs when a shear wave traverses an area of anisotropy and splits in two, with each wave travelling at a different velocity, resulting in an observable separation in arrival times. A SWS observation consists of the first-arrival polarization direction and the time delay. Given the large amount of data common in SWS studies, manual inspection for polarization and time delay is considered prohibitively time intensive, but all automated techniques can produce a high proportion of observations falsely interpreted as SWS, introducing error into the interpretation. The technique often used for removing these false observations is to manually inspect all SWS observations defined as high quality by the automated routine and remove false identifications. We investigate the nature of events falsely identified compared with those correctly identified. Once this identification is complete, we conduct an inversion for crack density from the SWS time delays. The current body of work on linear SWS inversion utilizes an equation that defines the time delay between arriving shear waves with respect to fracture density. This equation assumes that no fluid flow occurs as a result of the passing shear wave, a phenomenon called squirt flow. We show that this assumption is not applicable in all geological situations, and that where it does not hold, its use in an inversion degrades the result. This is shown to be the case for a test set of 6894 SWS observations gathered in a small area at Puna geothermal field, Hawaii. To rectify this situation, a series of new time delay formulae, applicable to linear inversion, are derived from velocity equations presented in the literature. The new formulae use a 'fluid influence parameter' which indicates the degree to which squirt flow influences the SWS. It is found that accounting for squirt flow better fits the data and is more applicable, and the fluid influence factor that best describes the data can be identified prior to solving the inversion. Implementing this formula in a linear inversion yields a significantly improved fit to the time delay observations compared with current methods.
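
A linear crack-density inversion of this general kind can be sketched as a damped least-squares problem. The delay model (delay = path length per cell times crack density), the cell geometry, and the regularization weight below are illustrative assumptions, not the formulae derived in this record:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical geometry: 200 ray paths crossing a 10-cell crack model.
# L[i, j] = path length of ray i in cell j; dt = observed SWS time delays.
n_rays, n_cells = 200, 10
L = rng.uniform(0.0, 2.0, size=(n_rays, n_cells))
true_density = rng.uniform(0.01, 0.05, size=n_cells)
dt = L @ true_density + rng.normal(0.0, 1e-3, size=n_rays)  # noisy delays

# Damped (Tikhonov) linear least squares for the crack densities:
# augment the system with mu * I rows pulling the solution toward zero.
mu = 1e-2  # illustrative damping weight
A = np.vstack([L, mu * np.eye(n_cells)])
b = np.concatenate([dt, np.zeros(n_cells)])
density_est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With many rays per cell and small noise, the damped solution recovers the synthetic densities closely; the damping mainly guards against poorly sampled cells.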

  12. Retrieving rupture history using waveform inversions in time sequence

    NASA Astrophysics Data System (ADS)

    Yi, L.; Xu, C.; Zhang, X.

    2017-12-01

    The rupture history of a large earthquake is generally reconstructed by waveform inversion of seismological waveform records. In the waveform inversion, based on the superposition principle, the rupture process is linearly parameterized. After discretizing the fault plane into sub-faults, the local source time function of each sub-fault is usually parameterized using the multi-time-window method, e.g., mutually overlapping triangular functions. The forward waveform of each sub-fault is then synthesized by convolving its source time function with its Green function. According to the superposition principle, the forward waveforms generated over the fault plane sum into the recorded waveforms after aligning the arrival times, and the slip history is retrieved by inverting the superposition of all forward waveforms against each corresponding seismological record. Apart from the isolation of the forward waveforms generated by each sub-fault, we also recognize that these waveforms are gradually and sequentially superimposed in the recorded waveforms. We therefore propose the idea that the rupture model may be separable into sequential rupture times. According to the constrained waveform length method emphasized in our previous work, the length of the waveforms used in the inversion is objectively constrained by the rupture velocity and rise time, and one essential prior condition is the predetermined fault plane that limits the duration of the rupture, which means the waveform inversion is restricted to a pre-set rupture duration. We therefore propose a strategy of inverting for the rupture process sequentially, using progressively shifted rupture times as the rupture front expands across the fault plane. We have designed a synthetic inversion to test the feasibility of the method; the result shows the promise of this idea, which requires further investigation.
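
The linear parameterization described here — forward waveforms built by convolving overlapping triangular source time functions with sub-fault Green functions, then summed and inverted — can be sketched as follows. The random Green functions, window spacing, and noise-free data are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
nt = 128  # samples per record

# Hypothetical Green's functions for 3 sub-faults at one station.
greens = rng.normal(size=(3, nt))

# Multi-time-window parameterization: 4 mutually overlapping triangles,
# each shifted by half its width.
tri = np.bartlett(11)
windows = []
for k in range(4):
    w = np.zeros(nt)
    w[5 * k:5 * k + 11] = tri
    windows.append(w)

# Each column of G is one time window convolved with one Green function;
# the recorded waveform is their linear superposition.
columns = [np.convolve(g, w)[:nt] for g in greens for w in windows]
G = np.column_stack(columns)

true_slip = rng.uniform(0.0, 1.0, size=G.shape[1])
d = G @ true_slip                       # noise-free synthetic record

# Linear least-squares retrieval of the slip amplitudes.
slip, *_ = np.linalg.lstsq(G, d, rcond=None)
```

Because the synthetic record lies exactly in the column space of G, the least-squares solution reproduces it to machine precision.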

  13. Linear single-step image reconstruction in the presence of nonscattering regions.

    PubMed

    Dehghani, H; Delpy, D T

    2002-06-01

    There is growing interest in the use of near-infrared spectroscopy for the noninvasive determination of the oxygenation level within biological tissue. Stemming from this application, there has been further research in using this technique for obtaining tomographic images of the neonatal head, with the view of determining the level of oxygenated and deoxygenated blood within the brain. Because of computational complexity, methods used for numerical modeling of photon transfer within tissue have usually been limited to the diffusion approximation of the Boltzmann transport equation. The diffusion approximation, however, is not valid in regions of low scatter, such as the cerebrospinal fluid. Methods have been proposed for dealing with nonscattering regions within diffusing materials through the use of a radiosity-diffusion model. Currently, this new model assumes prior knowledge of the void region; therefore it is instructive to examine the errors introduced in applying a simple diffusion-based reconstruction scheme in cases where a nonscattering region exists. We present reconstructed images, using linear algorithms, of models that contain a nonscattering region within a diffusing material. The forward data are calculated by using the radiosity-diffusion model, and the inverse problem is solved by using either the radiosity-diffusion model or the diffusion-only model. When using data from a model containing a clear layer and reconstructing with the correct model, one can reconstruct the anomaly, but the quantitative accuracy and the position of the reconstructed anomaly depend on the size and the position of the clear regions. If the inverse model has no information about the clear regions (i.e., it is a purely diffusing model), an anomaly can be reconstructed, but the resulting image has very poor quantitative accuracy and poor localization of the anomaly. The errors in quantitative and localization accuracies depend on the size and location of the clear regions.

  14. Linear single-step image reconstruction in the presence of nonscattering regions

    NASA Astrophysics Data System (ADS)

    Dehghani, H.; Delpy, D. T.

    2002-06-01

    There is growing interest in the use of near-infrared spectroscopy for the noninvasive determination of the oxygenation level within biological tissue. Stemming from this application, there has been further research in using this technique for obtaining tomographic images of the neonatal head, with the view of determining the level of oxygenated and deoxygenated blood within the brain. Because of computational complexity, methods used for numerical modeling of photon transfer within tissue have usually been limited to the diffusion approximation of the Boltzmann transport equation. The diffusion approximation, however, is not valid in regions of low scatter, such as the cerebrospinal fluid. Methods have been proposed for dealing with nonscattering regions within diffusing materials through the use of a radiosity-diffusion model. Currently, this new model assumes prior knowledge of the void region; therefore it is instructive to examine the errors introduced in applying a simple diffusion-based reconstruction scheme in cases where a nonscattering region exists. We present reconstructed images, using linear algorithms, of models that contain a nonscattering region within a diffusing material. The forward data are calculated by using the radiosity-diffusion model, and the inverse problem is solved by using either the radiosity-diffusion model or the diffusion-only model. When using data from a model containing a clear layer and reconstructing with the correct model, one can reconstruct the anomaly, but the quantitative accuracy and the position of the reconstructed anomaly depend on the size and the position of the clear regions. If the inverse model has no information about the clear regions (i.e., it is a purely diffusing model), an anomaly can be reconstructed, but the resulting image has very poor quantitative accuracy and poor localization of the anomaly. The errors in quantitative and localization accuracies depend on the size and location of the clear regions.

  15. Research In Nonlinear Flight Control for Tiltrotor Aircraft Operating in the Terminal Area

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Rysdyk, R.

    1996-01-01

    The research during the first year of the effort focused on the implementation of the recently developed combination of neural network adaptive control and feedback linearization. At the core of this research is the comprehensive simulation code Generic Tiltrotor Simulator (GTRS) of the XV-15 tilt rotor aircraft. For this research the GTRS code has been ported to a Fortran environment for use on a PC. The emphasis of the research is on terminal area approach procedures, including conversion from aircraft to helicopter configuration. This report focuses on longitudinal control, which is the more challenging case for augmentation. Therefore, an attitude command attitude hold (ACAH) control augmentation is considered, which is typically used for the pitch channel during approach procedures. To evaluate the performance of the neural network adaptive control architecture it was necessary to develop a set of low order pilot models capable of performing such tasks as following desired altitude profiles, following desired speed profiles, operating on both sides of the power curve, converting (including flap as well as mast angle changes), and operating with different stability and control augmentation system (SCAS) modes. The pilot models are divided into two sets, one for the back side of the power curve and one for the front side; the two sets are linearly blended with speed. The mast angle is also scheduled with speed. Different aspects of the proposed architecture for the neural network (NNW) augmented model inversion were also demonstrated. The demonstration involved implementation of a NNW architecture using linearized models from GTRS, including rotor states, to represent the XV-15 at various operating points. The dynamics used for the model inversion were based on the XV-15 operating at 30 kt, with residualized rotor dynamics, and did not include cross coupling between translational and rotational states. The neural network demonstrated ACAH control under various circumstances. Future efforts will include the implementation into the Fortran environment of GTRS of pilot modeling and NNW augmentation for the lateral channels. These efforts should lead to the development of architectures that will provide for fully automated approach using similar strategies.

  16. Transect-scale imaging of root zone electrical conductivity by inversion of multiple-height EMI measurements under different salinity conditions

    NASA Astrophysics Data System (ADS)

    Deidda, Gian Piero; Coppola, Antonio; Dragonetti, Giovanna; Comegna, Alessandro; Rodriguez, Giuseppe; Vignoli, Giulio

    2017-04-01

    The ability to determine the effects of salts on soils and plants is of great importance to agriculture. To control its harmful effects, soil salinity needs to be monitored in space and time, which requires knowledge of its magnitude, temporal dynamics, and spatial variability. Soil salinity can be evaluated by measuring the bulk electrical conductivity (σb) in the field. Measurements of σb can be made with either in situ or remote devices (Rhoades and Oster, 1986; Rhoades and Corwin, 1990; Rhoades and Miyamoto, 1990). Time Domain Reflectometry (TDR) sensors allow simultaneous measurements of water content, θ, and σb, and may be calibrated in the laboratory for estimating the electrical conductivity of the soil solution (σw). However, they have a relatively small observation volume and thus only provide local-scale measurements: the spatial range of the sensors is limited to tens of centimeters, and extension of the information to a large area can be problematic. Also, information on the vertical distribution of the σb soil profile can only be obtained by installing sensors at different depths; in this sense, TDR may be considered an invasive technique. Compared to TDR, non-invasive electromagnetic induction (EMI) techniques can be used for extensively mapping the bulk electrical conductivity in the field. The problem is that all these techniques give depth-weighted apparent electrical conductivity (ECa) measurements, which depend on the specific depth distribution of σb as well as on the depth response function of the sensor used. In order to deduce the actual distribution of local σb in the soil profile, one may invert the signal coming from EMI sensors. Most studies use the linear model proposed by McNeill (1980), describing the relative depth response of the ground conductivity meter. Using the forward linear model of McNeill, Borchers et al. (1997) implemented a least squares inverse procedure with second order Tikhonov regularization to estimate the σb vertical distribution from EMI field data. More recent studies (Hendrickx et al., 2002; Deidda et al., 2003; Deidda et al., 2014, among others) extended the approach to a more complicated non-linear model of the response of a ground conductivity meter to changes of σb with depth. Notably, these inversion procedures are based only on electromagnetic physics: they rely solely on ECa readings, possibly taken with both the horizontal and vertical configurations and with the sensor at different heights above the ground, and do not require any further field calibration. Nevertheless, as discussed by Hendrickx et al. (2002), important issues for inverse approaches are: i) the applicability to heterogeneous field soils of physical equations originally developed for the electromagnetic response of homogeneous media, and ii) the non-uniqueness and instability problems inherent to inverse procedures, even after Tikhonov regularization. Besides, as discussed by Cook and Walker (1992), these mathematical inversion procedures using layered-earth models were originally designed for interpreting porous systems with distinct layering; where subsurface layers are not sharply defined, this type of inversion may be subject to considerable error. With these premises, the main aim of this study is to estimate the vertical σb distribution from ECa measured using ground surface EMI methods under different salinity conditions, using TDR data as ground truth for validation of the inversion procedure. The latter is based on a regularized 1D inversion procedure designed to swiftly manage nonlinear multiple EMI-depth responses (Deidda et al., 2014). It couples the damped Gauss-Newton method with either the truncated singular value decomposition (TSVD) or the truncated generalized singular value decomposition (TGSVD), and it implements an explicit (exact) representation of the Jacobian to solve the nonlinear inverse problem. The experimental field (30 m x 15.6 m, for a total area of 468 m2) was divided into three transects 30 m long and 4.2 m wide, cultivated with green bean and irrigated with three different salinity levels (1 dS/m, 3 dS/m, and 6 dS/m). Each transect consisted of seven rows equipped with a sprinkler irrigation system, which supplied a water flux of 2 l/h. As for the salt application, CaCl2 was dissolved in tap water and subsequently siphoned into the irrigation system. For each transect, 24 regularly spaced monitoring sites (1 m apart) were selected for soil measurements, using different equipment: i) a TDR100, ii) a Geonics EM-38, iii). Overall, fifteen measurement campaigns were carried out.
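
A minimal sketch of a TSVD inversion of multi-height ECa readings for a layered σb profile, assuming a McNeill-style vertical-dipole sensitivity function. The discretization, sensor heights, profile shape, and truncation level below are illustrative, not those of Deidda et al. (2014):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical discretization: each ECa reading at a sensor height is a
# depth-weighted average of the layer conductivities sigma_b.
n_meas, n_layers = 8, 20
z = np.linspace(0.05, 2.0, n_layers)        # layer mid-depths (m)
heights = np.linspace(0.0, 1.75, n_meas)    # sensor heights above ground (m)

def phi(u):
    # McNeill-style vertical-dipole relative sensitivity vs. depth.
    return 4.0 * u / (4.0 * u ** 2 + 1.0) ** 1.5

K = np.array([phi(z + h) for h in heights])
K /= K.sum(axis=1, keepdims=True)           # each row averages the profile

sigma_true = 0.2 + 0.3 * np.exp(-((z - 0.8) ** 2) / 0.1)   # dS/m profile
eca = K @ sigma_true + rng.normal(0.0, 1e-4, n_meas)       # noisy readings

# Truncated SVD: keep only the k largest singular values to stabilize
# the ill-conditioned kernel (the regularizing idea behind TSVD/TGSVD).
U, s, Vt = np.linalg.svd(K, full_matrices=False)
k = 4
sigma_est = Vt[:k].T @ (U[:, :k].T @ eca / s[:k])
```

Raising k fits the data more closely but amplifies noise; the truncation level plays the role of the regularization parameter.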

  17. Reconstructing source terms from atmospheric concentration measurements: Optimality analysis of an inversion technique

    NASA Astrophysics Data System (ADS)

    Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre

    2014-12-01

    In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
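
The minimum weighted-norm solution mentioned above, computed with a generalized inverse, can be sketched for a discrete underdetermined source-term problem. The sensitivity matrix and the weight matrix here are random stand-ins, not the renormalized weights of Issartel (2005):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical underdetermined STE problem: 5 sensors, 50 grid cells.
n_obs, n_cells = 5, 50
H = rng.normal(size=(n_obs, n_cells))     # sensor sensitivity rows
x_true = np.zeros(n_cells)
x_true[17] = 2.0                          # single point source
d = H @ x_true                            # concentration measurements

# Minimum weighted-norm solution: minimize x^T W x subject to H x = d,
# i.e. x = W^-1 H^T (H W^-1 H^T)^-1 d (generalized inverse form).
W = np.diag(rng.uniform(0.5, 2.0, n_cells))   # stand-in weight matrix
Winv = np.linalg.inv(W)
x_mwn = Winv @ H.T @ np.linalg.solve(H @ Winv @ H.T, d)
```

Any weight matrix of this form yields an exact data fit; the choice of W (the renormalization condition, in the record's terminology) controls which of the infinitely many exact solutions is selected.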

  18. 3D non-linear inversion of magnetic anomalies caused by prismatic bodies using differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Balkaya, Çağlayan; Ekinci, Yunus Levent; Göktürkler, Gökhan; Turan, Seçil

    2017-01-01

    3D non-linear inversion of total field magnetic anomalies caused by vertical-sided prismatic bodies has been achieved by differential evolution (DE), which is one of the population-based evolutionary algorithms. We have demonstrated the efficiency of the algorithm on both synthetic and field magnetic anomalies by estimating horizontal distances from the origin in both north and east directions, depths to the top and bottom of the bodies, inclination and declination angles of the magnetization, and intensity of magnetization of the causative bodies. In the synthetic anomaly case, we have considered both noise-free and noisy data sets due to two vertical-sided prismatic bodies in a non-magnetic medium. For the field case, airborne magnetic anomalies originating from intrusive granitoids in the eastern part of the Biga Peninsula (NW Turkey), which is composed of various kinds of sedimentary, metamorphic and igneous rocks, have been inverted and interpreted. Since the granitoids are the outcropped rocks in the field, the estimations for the top depths of the two prisms representing the magnetic bodies were excluded during the inversion studies. Estimated bottom depths are in good agreement with the ones obtained by a different approach based on 3D modelling of pseudogravity anomalies. Accuracy of the estimated parameters from both cases has also been investigated via probability density functions. Based on the tests in the present study, it can be concluded that DE is a useful tool for the parameter estimation of source bodies using magnetic anomalies.
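
A compact illustration of the DE/rand/1/bin scheme applied to a parameter-estimation misfit. The two-parameter buried-line-source forward model below is a toy stand-in for the prism model used in this study, and the population size, mutation factor F, and crossover rate CR are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy forward model (hypothetical): anomaly of a buried line source at
# depth z with strength m, observed along a profile x.
x = np.linspace(-50.0, 50.0, 101)

def forward(z, m):
    return m * z / (x ** 2 + z ** 2)

z_true, m_true = 12.0, 300.0
data = forward(z_true, m_true)

def misfit(p):
    return np.sum((forward(p[0], p[1]) - data) ** 2)

# DE/rand/1/bin: mutate three random members, binomial crossover,
# greedy selection (for brevity, donor indices may coincide with i).
bounds = np.array([[1.0, 50.0], [10.0, 1000.0]])
npop, F, CR = 20, 0.7, 0.9
pop = bounds[:, 0] + rng.random((npop, 2)) * (bounds[:, 1] - bounds[:, 0])
cost = np.array([misfit(p) for p in pop])
for _ in range(300):
    for i in range(npop):
        a, b, c = pop[rng.choice(npop, size=3, replace=False)]
        trial = np.where(rng.random(2) < CR, a + F * (b - c), pop[i])
        trial = np.clip(trial, bounds[:, 0], bounds[:, 1])
        trial_cost = misfit(trial)
        if trial_cost <= cost[i]:
            pop[i], cost[i] = trial, trial_cost
best = pop[np.argmin(cost)]
```

Because DE needs only misfit evaluations, no derivatives of the forward model are required, which is what makes it attractive for non-linear magnetic inversion.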

  19. Coronal temperatures of selected active cool stars as derived from low resolution Einstein observations

    NASA Technical Reports Server (NTRS)

    Vilhu, Osmi; Linsky, Jeffrey L.

    1990-01-01

    Mean coronal temperatures of some active G-K stars were derived from Rev1-processed Einstein Observatory IPC spectra. The combined X-ray and transition region emission line data are in rough agreement with static coronal loop models. Although the sample is too small to support statistically significant conclusions, it suggests that the mean coronal temperature depends linearly on the inverse Rossby number, with saturation at short rotation periods.

  20. Least squares restoration of multi-channel images

    NASA Technical Reports Server (NTRS)

    Chin, Roland T.; Galatsanos, Nikolas P.

    1989-01-01

    In this paper, a least squares filter for the restoration of multichannel imagery is presented. The restoration filter is based on a linear, space-invariant imaging model and makes use of an iterative matrix inversion algorithm. The restoration utilizes both within-channel (spatial) and cross-channel information as constraints. Experiments using color images (three-channel imagery with red, green, and blue components) were performed to evaluate the filter's performance and to compare it with other monochrome and multichannel filters.
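
The iterative least-squares restoration idea can be sketched in one dimension with a Landweber iteration, which iteratively solves the normal equations of a known space-invariant blur. The kernel, signal, and iteration count are illustrative assumptions; the paper's multichannel, cross-channel formulation is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 1-D analogue of the restoration problem: a known
# space-invariant blur matrix H applied to a signal f_true.
n = 64
h = np.array([0.25, 0.5, 0.25])                  # blur kernel
H = sum(np.eye(n, k=k) * w for k, w in zip((-1, 0, 1), h))
f_true = rng.random(n)
g = H @ f_true                                   # blurred observation

# Landweber iteration: f <- f + tau * H^T (g - H f), an iterative
# solution of the normal equations H^T H f = H^T g that avoids forming
# an explicit matrix inverse.
tau = 1.0 / np.linalg.norm(H, 2) ** 2            # step for stability
f = np.zeros(n)
for _ in range(5000):
    f = f + tau * H.T @ (g - H @ f)
```

Truncating the iteration early acts as regularization when the observation is noisy, which is one reason iterative schemes are preferred over direct matrix inversion for restoration.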

  1. Tensorial Calibration. 2. Second Order Tensorial Calibration.

    DTIC Science & Technology

    1987-10-12

    …if an index is repeated more than once in only one side of an equation, it implies a summation over the index's valid range. To avoid confusion of terms… For a higher order tensor, the rank can be higher than the maximum dimensionality. …LINEAR SECOND ORDER TENSORIAL CALIBRATION MODEL: From… these equations are valid only if all the elements of the diagonal matrix B3 are non-zero, because its inverse must be computed. This implies that M…

  2. Chirped or time modulated excitation compared to short pulses for photoacoustic imaging in acoustic attenuating media

    NASA Astrophysics Data System (ADS)

    Burgholzer, P.; Motz, C.; Lang, O.; Berer, T.; Huemer, M.

    2018-02-01

    In photoacoustic imaging, optically generated acoustic waves transport information about embedded structures to the sample surface. Usually, short laser pulses are used for the acoustic excitation. Acoustic attenuation increases for higher frequencies, which reduces the bandwidth and limits the spatial resolution. One could think of more efficient waveforms than single short pulses, such as pseudo-noise codes, chirped, or harmonic excitation, which could enable a higher information transfer from the sample's interior to its surface by acoustic waves. We used a linear state space model to discretize the wave equation, in this case the Stokes equation, but the method could be used for any other linear wave equation. Linear estimators and a non-linear function inversion were applied to the measured surface data for one-dimensional image reconstruction. The proposed estimation method allows optimizing the temporal modulation of the excitation laser such that the accuracy and spatial resolution of the reconstructed image are maximized. We have restricted ourselves to one-dimensional models, as for higher dimensions the one-dimensional reconstruction, which corresponds to the acoustic wave without attenuation, can be used as input for any ultrasound imaging method, such as the back-projection or time-reversal method.

  3. Modeling the dynamics of a phreatic eruption based on a tilt observation: Barrier breakage leading to the 2014 eruption of Mount Ontake, Japan

    NASA Astrophysics Data System (ADS)

    Maeda, Yuta; Kato, Aitaro; Yamanaka, Yoshiko

    2017-02-01

    Although phreatic eruptions are common volcanic phenomena that sometimes result in significant disasters, their dynamics are poorly understood. In this study, we address the dynamics of the phreatic eruption of Mount Ontake, Japan, in 2014 based on analyses of a tilt change observed immediately (450 s) before the eruption onset. We conducted two sets of analysis: a waveform inversion and a modified phase-space analysis. Our waveform inversion of the tilt signal points to a vertical tensile crack at a depth of 1100 m. Our modified phase-space analysis suggests that the tilt change was at first a linear function in time that then switched to exponential growth. We constructed simple analytical models to explain these temporal functions. The linear function was explained by the boiling of underground water controlled by a constant heat supply from a greater depth. The exponential function was explained by the decompression-induced boiling of water and the upward Darcy flow of the water vapor through a permeable region of small cracks that were newly created in response to ongoing boiling. We interpret that this region was intact prior to the start of the tilt change, and thus, it has acted as a permeability barrier for the upward migration of fluids; it was a breakage of this barrier that led to the eruption.

  4. Multiscale 2D Inversions of Active-source First-arrival Times in Taiwan

    NASA Astrophysics Data System (ADS)

    Lin, Y. P.; Zhao, L.; Hung, S. H.

    2015-12-01

    In this study, we make use of the active-source records collected by the TAIGER (TAiwan Integrated GEodynamics Research) project in 2008 at nearly 1400 locations on the island of Taiwan and the surrounding ocean bottom. We manually picked the first-arrival times from the waveform records to obtain a set of highly accurate P-wave traveltimes. Among the 1400 receivers, more than 1000 were deployed along four almost linear cross-island profiles with inter-seismometer spacing down to 200 m. This ground-truth dataset provides strong constraints on the structure between the exactly known active sources and densely distributed receivers, which can be used to calibrate the seismic structure in the upper crust of Taiwan. In this study, we use this dataset to image the two-dimensional P-wave structure along the four linear profiles. A wavelet parameterization of the model is adopted to achieve an objective and data-adaptive multiscale resolution of the 2D structures. Rigorous estimations of resolution lengths were also conducted to quantify the spatial resolution of the tomographic inversions. The resulting 2D models yield first-arrival time predictions that are in excellent agreement with the observations. The seismic structures along the 2D profiles display strong lateral variations (up to 80% relative to the regional average), with more realistic amplitudes of velocity perturbations and spatial patterns consistent with the geological zonations of Taiwan.

  5. An equivalent unbalance identification method for the balancing of nonlinear squeeze-film damped rotordynamic systems

    NASA Astrophysics Data System (ADS)

    Torres Cedillo, Sergio G.; Bonello, Philip

    2016-01-01

    The high pressure (HP) rotor in an aero-engine assembly cannot be accessed under operational conditions because of the restricted space for instrumentation and high temperatures. This motivates the development of a non-invasive inverse problem approach for unbalance identification and balancing, requiring prior knowledge of the structure. Most such methods in the literature necessitate linear bearing models, making them unsuitable for aero-engine applications which use nonlinear squeeze-film damper (SFD) bearings. A previously proposed inverse method for nonlinear rotating systems was highly limited in its application (e.g. assumed circular centered SFD orbits). The methodology proposed in this paper overcomes such limitations. It uses the Receptance Harmonic Balance Method (RHBM) to generate the backward operator using measurements of the vibration at the engine casing, provided there is at least one linear connection between rotor and casing, apart from the nonlinear connections. A least-squares solution yields the equivalent unbalance distribution in prescribed planes of the rotor, which is consequently used to balance it. The method is validated on distinct rotordynamic systems using simulated casing vibration readings. The method is shown to provide effective balancing under hitherto unconsidered practical conditions. The repeatability of the method, as well as its robustness to noise, model uncertainty and balancing errors, are satisfactorily demonstrated and the limitations of the process discussed.
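
    The final solve described above reduces to ordinary least squares once a backward operator is available. The sketch below uses a random complex matrix H as a stand-in for the RHBM-generated backward operator and synthetic casing readings y; it illustrates only the least-squares step, not the RHBM itself.

```python
import numpy as np

# Synthetic stand-ins: H maps unbalance in 3 prescribed rotor planes to
# 12 casing vibration readings (complex, to carry magnitude and phase).
rng = np.random.default_rng(1)
H = rng.standard_normal((12, 3)) + 1j * rng.standard_normal((12, 3))
u_true = np.array([0.5 + 0.2j, -0.3j, 0.1])   # "true" equivalent unbalances
y = H @ u_true                                 # simulated casing response

# least-squares solution: the equivalent unbalance distribution
u_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
```

    In practice the recovered u_hat (with sign reversed) gives the correction weights used to balance the rotor.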

  6. Analysis of Operating Principles with S-system Models

    PubMed Central

    Lee, Yun; Chen, Po-Wei; Voit, Eberhard O.

    2011-01-01

    Operating principles address general questions regarding the response dynamics of biological systems as we observe or hypothesize them, in comparison to a priori equally valid alternatives. In analogy to design principles, the question arises: Why are some operating strategies encountered more frequently than others, and in what sense might they be superior? It is at this point impossible to study operating principles in complete generality, but the work here discusses the important situation where a biological system must shift operation from its normal steady state to a new steady state. This situation is quite common and includes many stress responses. We present two distinct methods for determining different solutions to this task of achieving a new target steady state. Both methods utilize the property of S-system models within Biochemical Systems Theory (BST) that steady states can be explicitly represented as systems of linear algebraic equations. The first method uses matrix inversion, a pseudo-inverse, or regression to characterize the entire admissible solution space. Operations on the basis of the solution space permit modest alterations of the transients toward the target steady state. The second method uses standard or mixed-integer linear programming to determine admissible solutions that satisfy criteria of functional effectiveness specified beforehand. As an illustration, we use both methods to characterize alternative response patterns of yeast subjected to heat stress, and compare them with observations from the literature. PMID:21377479
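
    The first method can be sketched with a toy underdetermined linear steady-state system (the matrix and right-hand side below are arbitrary stand-ins, not the yeast model): a pseudo-inverse gives a particular solution, and an SVD-derived null-space basis characterizes the entire admissible solution space.

```python
import numpy as np

# In BST, S-system steady states are linear in the log-concentrations y,
# so the admissible set of an underdetermined system A y = b is a
# particular solution plus the null space of A.
A = np.array([[1.0, -0.5,  0.0, 0.2],
              [0.0,  1.0, -1.0, 0.0]])   # 2 steady-state equations, 4 unknowns
b = np.array([0.3, -0.1])

y_part = np.linalg.pinv(A) @ b           # minimum-norm particular solution

# null-space basis from the SVD (A has full row rank here): directions
# along which the steady state can shift without violating the balances
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[len(s):].T               # shape (4, 2)

# any admissible steady state: y_part + null_basis @ coefficients
y_alt = y_part + null_basis @ np.array([1.0, -2.0])
```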

  7. Youth and young adult physical activity and body composition of young adult women: findings from the dietary intervention study in children.

    PubMed

    Hodge, Melissa G; Hovinga, Mary; Shepherd, John A; Egleston, Brian; Gabriel, Kelley; Van Horn, Linda; Robson, Alan; Snetselaar, Linda; Stevens, Victor K; Jung, Seungyoun; Dorgan, Joanne

    2015-02-01

    This study prospectively investigates associations between youth moderate-to-vigorous-intensity physical activity (MVPA) and body composition in young adult women, using data from the Dietary Intervention Study in Children (DISC) and the DISC06 Follow-Up Study. MVPA was assessed by questionnaire on 5 occasions between the ages of 8 and 18 years and at age 25-29 years in 215 female DISC participants. Using whole-body dual-energy x-ray absorptiometry (DXA), overall adiposity and body fat distribution were assessed at age 25-29 years by percent body fat (%fat) and the android-to-gynoid (A:G) fat ratio, respectively. Linear mixed effects models and generalized linear latent and mixed models were used to assess associations of youth MVPA with both outcomes. Young adult MVPA, adjusted for other young adult characteristics, was significantly inversely associated with young adult %fat (%fat decreased from 37.4% in the lowest MVPA quartile to 32.8% in the highest; p-trend = 0.02). Adjusted for youth and young adult characteristics including young adult MVPA, youth MVPA also was significantly inversely associated with young adult %fat (β = -0.40 per 10 MET-h/wk, p = 0.02). No significant associations between MVPA and the A:G fat ratio were observed. Results suggest that youth and young adult MVPA are important independent predictors of adiposity in young women.

  8. Rethinking moment tensor inversion methods to retrieve the source mechanisms of low-frequency seismic events

    NASA Astrophysics Data System (ADS)

    Karl, S.; Neuberg, J.

    2011-12-01

    Volcanoes exhibit a variety of seismic signals. One specific type, the so-called long-period (LP) or low-frequency event, has proven to be crucial for understanding the internal dynamics of the volcanic system. These long-period (LP) seismic events have been observed at many volcanoes around the world and are thought to be associated with resonating fluid-filled conduits or fluid movements (Chouet, 1996; Neuberg et al., 2006). While the seismic wavefield of these events is well established, the actual trigger mechanism is still poorly understood. Neuberg et al. (2006) proposed a conceptual model for the trigger of LP events at Montserrat involving the brittle failure of magma in the glass transition in response to the upward movement of magma. In an attempt to gain a better quantitative understanding of the driving forces of LPs, inversions for the physical source mechanisms have become increasingly common. Previous studies have assumed a point source for waveform inversion. Applying a point source model to synthetic seismograms representing an extended source process does not yield the real source mechanism; it can, however, still lead to apparent moment tensor elements which can then be compared to previous results in the literature. Therefore, this study follows the concepts proposed by Neuberg et al. (2006), modelling the extended LP source as an octagonal arrangement of double couples approximating a circular ringfault bounding the circumference of the volcanic conduit. Synthetic seismograms were inverted for the physical source mechanisms of LPs using the moment tensor inversion code TDMTISO_INVC by Dreger (2003). Here, we present the effects of changing the source parameters on the apparent moment tensor elements. First results show that, due to negative interference, the amplitude of the seismic signals of a ringfault structure is greatly reduced compared to a single double-couple source. Furthermore, the best inversion results yield a solution comprised of positive isotropic and compensated linear vector dipole components. Thus, the physical source mechanisms of volcano seismic signals may be misinterpreted as opening shear or tensile cracks when a point source is wrongly assumed. In order to approach the real physical sources with our models, inversions based on higher-order tensors might have to be considered in the future. An inversion technique in which the point source is replaced by a so-called moment tensor density would allow inversions of volcano seismic signals for sources that are extended in time and space.

  9. On the sensitivity of teleseismic full-waveform inversion to earth parametrization, initial model and acquisition design

    NASA Astrophysics Data System (ADS)

    Beller, S.; Monteiller, V.; Combe, L.; Operto, S.; Nolet, G.

    2018-02-01

    Full-waveform inversion (FWI) is not yet a mature imaging technology for lithospheric imaging from teleseismic data. Therefore, its promise and pitfalls need to be assessed more accurately according to the specifications of teleseismic experiments. Three important issues are related to (1) the choice of the lithospheric parametrization for optimization and visualization, (2) the initial model and (3) the acquisition design, in particular in terms of receiver spread and sampling. These three issues are investigated with a realistic synthetic example inspired by the CIFALPS experiment in the Western Alps. Isotropic elastic FWI is implemented with an adjoint-state formalism and aims to update three parameter classes by minimization of a classical least-squares difference-based misfit function. Three different subsurface parametrizations, combining density (ρ) with P and S wave speeds (Vp and Vs), P and S impedances (Ip and Is), or elastic moduli (λ and μ), are first discussed based on their radiation patterns before their assessment by FWI. We conclude that the (ρ, λ, μ) parametrization provides the FWI models that best correlate with the true ones after recombining a posteriori the (ρ, λ, μ) optimization parameters into Ip and Is. Owing to the low frequency content of teleseismic data, 1-D reference global models such as PREM provide sufficiently accurate initial models for FWI after the smoothing necessary to remove the imprint of the layering. Two kinds of station deployments are assessed: a coarse areal geometry versus a dense linear one. We unambiguously conclude that a coarse areal geometry should be favoured, as it dramatically increases the depth penetration of the imaging as well as the horizontal resolution. This is because the areal geometry significantly increases the local wavenumber coverage, through a broader sampling of the scattering and dip angles, compared to a linear deployment.
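
    The a posteriori recombination mentioned above follows from the standard elastic relations Vp = sqrt((λ + 2μ)/ρ), Vs = sqrt(μ/ρ), Ip = ρVp and Is = ρVs; the two-layer values below are arbitrary examples.

```python
import numpy as np

# Convert (rho, lambda, mu) model parameters into wave speeds and then
# into the P and S impedances used for the final model comparison.
rho = np.array([2700.0, 3300.0])      # density, kg/m^3 (example values)
lam = np.array([4.0e10, 7.0e10])      # Lame parameter lambda, Pa
mu = np.array([3.0e10, 6.0e10])       # shear modulus, Pa

vp = np.sqrt((lam + 2.0 * mu) / rho)  # P-wave speed
vs = np.sqrt(mu / rho)                # S-wave speed
ip = rho * vp                         # P impedance
i_s = rho * vs                        # S impedance
```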

  10. Integrating conventional and inverse representation for face recognition.

    PubMed

    Xu, Yong; Li, Xuelong; Yang, Jian; Lai, Zhihui; Zhang, David

    2014-10-01

    Representation-based classification methods are all constructed on the basis of the conventional representation, which first expresses the test sample as a linear combination of the training samples and then exploits the deviation between the test sample and the representation result of every class to perform classification. However, this deviation does not always reflect well the difference between the test sample and each class. In this paper, we propose a novel representation-based classification method for face recognition. This method integrates conventional and inverse representation-based classification for better face recognition. It first produces the conventional representation of the test sample, i.e., uses a linear combination of the training samples to represent the test sample. Then it obtains the inverse representation, i.e., provides an approximate representation of each training sample of a subject by exploiting the test sample and the training samples of the other subjects. Finally, the proposed method exploits the conventional and inverse representations to generate two kinds of scores of the test sample with respect to each class and combines them to recognize the face. The paper presents the theoretical foundation and rationale of the proposed method. Moreover, this paper shows for the first time that a basic property of the human face, namely its symmetry, can be exploited to generate new training and test samples. As these new samples reflect possible appearances of the face, using them enables higher accuracy. The experiments show that the proposed conventional and inverse representation-based linear regression classification (CIRLRC), an improvement to linear regression classification (LRC), can obtain very high accuracy and greatly outperforms naive LRC and other state-of-the-art conventional representation-based face recognition methods. The accuracy of CIRLRC can be 10% greater than that of LRC.
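
    The naive LRC baseline that CIRLRC improves on can be sketched as follows (toy data; this shows only the conventional representation step, without the inverse representation or score fusion):

```python
import numpy as np

def lrc_predict(train, labels, y):
    """Naive linear regression classification (LRC): represent the test
    sample y with each class's training samples (as columns) and assign
    the class with the smallest reconstruction residual."""
    best_class, best_err = None, np.inf
    for c in np.unique(labels):
        Xc = train[:, labels == c]                     # class-specific dictionary
        beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)  # least-squares coefficients
        err = np.linalg.norm(y - Xc @ beta)            # deviation from the class
        if err < best_err:
            best_class, best_err = c, err
    return best_class

# toy 5-D "faces": class 0 spans the first two axes, class 1 the next two
train = np.zeros((5, 4))
train[0, 0] = train[1, 1] = train[2, 2] = train[3, 3] = 1.0
labels = np.array([0, 0, 1, 1])

pred0 = lrc_predict(train, labels, np.array([2.0, -1.0, 0.0, 0.0, 0.0]))
pred1 = lrc_predict(train, labels, np.array([0.0, 0.0, 1.0, 3.0, 0.0]))
```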

  11. Anisotropic S-wave velocity structure from joint inversion of surface wave group velocity dispersion: A case study from India

    NASA Astrophysics Data System (ADS)

    Mitra, S.; Dey, S.; Siddartha, G.; Bhattacharya, S.

    2016-12-01

    We estimate 1-dimensional path-average fundamental mode group velocity dispersion curves from regional Rayleigh and Love waves sampling the Indian subcontinent. The path-average measurements are combined through a tomographic inversion to obtain 2-dimensional group velocity variation maps between periods of 10 and 80 s. The region of study is parametrised as triangular grids with 1° sides for the tomographic inversion. Rayleigh and Love wave dispersion curves are subsequently extracted at each node point and jointly inverted to obtain a radially anisotropic shear wave velocity model through global optimisation using a Genetic Algorithm. The model space is parametrised using three crustal layers and four mantle layers over a half-space, with varying VpH, VsV and VsH. The anisotropic parameter (η) is calculated from empirical relations, and the densities of the layers are taken from PREM. Misfit for a model is calculated as an error-weighted sum over the Rayleigh and Love dispersion curves. The 1-dimensional anisotropic shear wave velocity models at the node points are combined using linear interpolation to obtain the 3-dimensional structure beneath the region. Synthetic tests are performed to estimate the resolution of the tomographic maps, which will be presented with our results. We plan to extend this work to a larger dataset in the near future to obtain a high-resolution anisotropic shear wave velocity structure beneath India, the Himalaya and Tibet.

  12. iTOUGH2 V6.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, Stefan A.

    2010-11-01

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. It performs sensitivity analysis, parameter estimation, and uncertainty propagation analysis in geosciences, reservoir engineering, and other application areas. It supports a number of different combinations of fluids and components [equation-of-state (EOS) modules]. In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files. This link is achieved by means of the PEST application programming interface. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analysis. A detailed residual and error analysis is provided. This upgrade includes new EOS modules (specifically EOS7c, ECO2N, and TMVOC), hysteretic relative permeability and capillary pressure functions, and the PEST API. More details can be found at http://esd.lbl.gov/iTOUGH2 and the publications cited there. Hardware Req.: Multi-platform; Related/auxiliary software: PVM (if running in parallel).

  13. iTOUGH2 v7.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, Stefan; Jung, Yoojin; Kowalsky, Michael

    2016-09-15

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences, reservoir engineering, and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files, via the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.
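
    The inverse problem both iTOUGH2 entries describe, minimizing a non-linear objective of weighted differences between model output and observations, can be sketched with a toy forward model (the exponential "simulator" and synthetic data below are illustrative assumptions, not TOUGH2 physics):

```python
import numpy as np
from scipy.optimize import least_squares

# toy "simulator": exponential decline controlled by two parameters
def forward(p, t):
    return p[0] * np.exp(-p[1] * t)

t_obs = np.linspace(0.0, 5.0, 20)
p_true = np.array([2.0, 0.7])
d_obs = forward(p_true, t_obs)          # synthetic, noise-free observations
sigma = 0.05 * np.ones_like(d_obs)      # assumed observation standard deviations

def weighted_residuals(p):
    # weighted differences between model output and observations
    return (forward(p, t_obs) - d_obs) / sigma

# local gradient-based minimization of the weighted least-squares objective
fit = least_squares(weighted_residuals, x0=[1.0, 1.0])
```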

  14. Source process of the 2016 Kumamoto earthquake (Mj7.3) inferred from kinematic inversion of strong-motion records

    NASA Astrophysics Data System (ADS)

    Yoshida, Kunikazu; Miyakoshi, Ken; Somei, Kazuhiro; Irikura, Kojiro

    2017-05-01

    In this study, we estimated the source process of the 2016 Kumamoto earthquake from strong-motion data by using the multiple-time window linear kinematic waveform inversion method, in order to discuss the generation of strong motions and to explain the crustal deformation pattern with a seismic source inversion model. A four-segment fault model was assumed based on the aftershock distribution, active fault traces, and interferometric synthetic aperture radar data. The three western segments were set to be northwest-dipping planes, and the easternmost segment under the Aso caldera was tested as a southeast-dipping plane. The velocity structure models used in this study were estimated by waveform modeling of moderate earthquakes that occurred in the source region. We applied a two-step inversion to 20 strong-motion datasets observed by K-NET and KiK-net, using band-pass-filtered strong-motion data at 0.05-0.5 Hz and then at 0.05-1.0 Hz. The rupture area of the fault plane was determined by applying the criterion of Somerville et al. (Seismol Res Lett 70:59-80, 1999) to the inverted slip distribution. From the first-step inversion, the fault length was trimmed from 52 to 44 km, whereas the fault width was kept at 18 km. The trimmed rupture area was not changed in the second-step inversion. The source model obtained from the two-step approach indicated a total moment release of 4.7 × 10^19 Nm and an average slip of 1.8 m over the entire fault, with a rupture area of 792 km^2. Large slip areas were estimated in the seismogenic zone and in the shallow part corresponding to the surface rupture that occurred during the Mj7.3 mainshock. The areas of high peak moment rate correlated roughly with those of large slip; however, the moment rate functions near the Earth's surface are low-peaked, bell-shaped, and of long duration. These subfaults with long-duration moment release are expected to cause weak short-period ground motions. We confirmed that a southeast dip of the easternmost segment is more plausible than a northwest dip, based on the observed subsidence around the central cones of the Aso volcano.

  15. Comparison of various methods for mathematical analysis of the Foucault knife edge test pattern to determine optical imperfections

    NASA Technical Reports Server (NTRS)

    Gatewood, B. E.

    1971-01-01

    The linearized integral equation for the Foucault test of a solid mirror was solved by various methods: power series, Fourier series, collocation, iteration, and inversion integral. The case of the Cassegrain mirror was solved by a particular power series method, collocation, and inversion integral. The inversion integral method appears to be the best overall method for both the solid and Cassegrain mirrors. Certain particular types of power series and Fourier series are satisfactory for the Cassegrain mirror. Numerical integration of the nonlinear equation for selected surface imperfections showed that results start to deviate from those given by the linearized equation at a surface deviation of about 3 percent of the wavelength of light. Several possible procedures for calibrating and scaling the input data for the integral equation are described.
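
    As a generic illustration of inverting a linearized integral equation numerically (the kernel below is a smooth test kernel, not the Foucault kernel): discretize the Fredholm equation g(x) = ∫ K(x, y) f(y) dy on a grid and solve by truncated SVD, discarding the small singular values that would otherwise amplify noise.

```python
import numpy as np

n = 80
y = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / n)                            # crude quadrature weights
K = np.exp(-10.0 * (y[:, None] - y[None, :])**2)   # smooth test kernel (assumed)
f_true = np.sin(2.0 * np.pi * y)                   # surface-error profile to recover
g = (K * w) @ f_true                               # synthetic test-pattern data

# truncated-SVD solve of the discretized integral operator
U, s, Vt = np.linalg.svd(K * w)
k = int(np.sum(s > 1e-8 * s[0]))                   # keep well-resolved modes only
f_rec = Vt[:k].T @ ((U[:, :k].T @ g) / s[:k])
```

    The truncation level is the discrete counterpart of the sensitivity limit the abstract reports: surface deviations whose contribution falls below the retained singular values cannot be distinguished in the data.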

  16. Biophysical and spectral modeling for crop identification and assessment

    NASA Technical Reports Server (NTRS)

    Goel, N. S. (Principal Investigator)

    1984-01-01

    The development of a technique for estimating all canopy parameters occurring in a canopy reflectance model from measured canopy reflectance data is summarized. The Suits and SAIL models for a uniform and homogeneous crop canopy were used to determine whether the leaf area index and the leaf angle distribution could be estimated. Optimal solar/view angles for measuring CR were also investigated. The use of CR in many wavelengths or spectral bands and of linear and nonlinear transforms of CRs for various solar/view angles and various spectral bands is discussed, as well as the inversion of radiance data inside the canopy, angle transforms for filtering out terrain slope effects, and modification of one-dimensional models.

  17. Equivalent uniform dose concept evaluated by theoretical dose volume histograms for thoracic irradiation.

    PubMed

    Dumas, J L; Lorchel, F; Perrot, Y; Aletti, P; Noel, A; Wolf, D; Courvoisier, P; Bosset, J F

    2007-03-01

    The goal of our study was to quantify the limits of the EUD models for use in score functions in inverse planning software and for clinical application. We focused on oesophagus cancer irradiation. Our evaluation was based on theoretical dose volume histograms (DVHs), which we analyzed using the volumetric and linear-quadratic EUD models, average and maximum dose concepts, the linear-quadratic model, and the differential area between each DVH. We evaluated our models using theoretical and more complex DVHs for the above regions of interest. We studied three types of DVH for the target volume: the first followed the ICRU dose homogeneity recommendations; the second was built from the first's requirements, with the same average dose built in for all cases; the third was truncated by a small dose hole. We also built theoretical DVHs for the organs at risk, in order to evaluate the limits of, and the ways to use, both the EUD(1) and EUD/LQ models, comparing them with the traditional ways of scoring a treatment plan. For each volume of interest we built theoretical treatment plans with differences in fractionation. We conclude that both the volumetric and linear-quadratic EUDs should be used. Volumetric EUD(1) takes into account neither hot-cold spot compensation nor differences in fractionation, but it is more sensitive to an increase of the irradiated volume. With linear-quadratic EUD/LQ, a volumetric analysis of the effect of fractionation variation can be performed.
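
    For reference, the generalized EUD of a differential DVH follows Niemierko's power-law form, with EUD(1) (a = 1) reducing to the mean dose; the DVH bins below are toy numbers, not the study's plans.

```python
import numpy as np

def eud(dose_bins, vol_fracs, a):
    """Generalized EUD of a differential DVH: (sum_i v_i * D_i^a)^(1/a).
    a = 1 gives the mean dose, which is the EUD(1) used above."""
    vol = np.asarray(vol_fracs, float)
    vol = vol / vol.sum()                       # normalize volume fractions
    return float((vol @ np.asarray(dose_bins, float) ** a) ** (1.0 / a))

dose = [60.0, 62.0, 58.0]   # Gy, toy DVH dose bins
vol = [0.5, 0.3, 0.2]       # toy volume fractions

eud_mean = eud(dose, vol, a=1.0)   # equals the dose-volume-weighted mean dose
```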

  18. A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications

    NASA Astrophysics Data System (ADS)

    Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.

    2018-04-01

    Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Application of smoothness-constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces separating hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative that reconstructs image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least-squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior point method. Applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least-squares method in recovering the model parameters with much less data, yet preserving the sharp resistivity fronts separated by geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS resulted in a more efficient (an increase in R2 from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and faster-converging (run time decreased by about 25%) solution.
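
    The l1-regularized least-squares subproblem can be sketched with a generic solver. The paper solves the equivalent quadratic program with a primal-dual interior-point method; the sketch below instead uses accelerated proximal gradient (FISTA) on a toy underdetermined system, which recovers the same kind of sparse solution.

```python
import numpy as np

def fista_l1(A, b, lam, n_iter=1000):
    """FISTA for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
    (a generic sparse solver, not the paper's interior-point method)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        g = A.T @ (A @ z - b)                # gradient of the smooth term at z
        w = z - g / L
        x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft-threshold
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum step
        x, t = x_new, t_new
    return x

# toy underdetermined system (30 measurements, 100 unknowns) with a
# sparse true model, mimicking a sparse transform-domain update vector
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.5, -2.0, 1.0]
b = A @ x_true                               # noise-free "measurements"

x_hat = fista_l1(A, b, lam=0.05)
```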

  19. Relation of raw and cooked vegetable consumption to blood pressure: the INTERMAP Study.

    PubMed

    Chan, Q; Stamler, J; Brown, I J; Daviglus, M L; Van Horn, L; Dyer, A R; Oude Griep, L M; Miura, K; Ueshima, H; Zhao, L; Nicholson, J K; Holmes, E; Elliott, P

    2014-06-01

    Inverse associations have been reported of overall vegetable intake with blood pressure (BP); whether such relations prevail for both raw and cooked vegetables has not been examined. Here we report cross-sectional associations of vegetable intakes with BP for 2195 Americans aged 40-59 in the International Study of Macro/Micronutrients and Blood Pressure (INTERMAP), using four standardized multi-pass 24-h dietary recalls and eight BP measurements. Relations to BP of raw and cooked vegetable consumption, and of their main individual constituents, were assessed by multiple linear regression. Intakes of both total raw and total cooked vegetables, considered separately, were inversely related to BP in multivariate-adjusted models. Estimated average systolic BP differences associated with two s.d. differences in raw vegetable intake (68 g per 1000 kcal) and cooked vegetable intake (92 g per 1000 kcal) were -1.9 mm Hg (95% confidence interval (CI): -3.1, -0.8; P=0.001) and -1.3 mm Hg (95% CI: -2.5, -0.2; P=0.03) without body mass index (BMI) in the full model; -1.3 mm Hg (95% CI: -2.4, -0.2; P=0.02) and -0.9 mm Hg (95% CI: -2.0, 0.2; P=0.1) with additional adjustment for BMI. Among commonly consumed individual raw vegetables, tomatoes, carrots, and scallions were significantly inversely related to BP. Among commonly eaten cooked vegetables, tomatoes, peas, celery, and scallions were significantly inversely related to BP.

  20. Uncertain Representations of Sub-Grid Pollutant Transport in Chemistry-Transport Models and Impacts on Long-Range Transport and Global Composition

    NASA Technical Reports Server (NTRS)

    Pawson, Steven; Zhu, Z.; Ott, L. E.; Molod, A.; Duncan, B. N.; Nielsen, J. E.

    2009-01-01

    Sub-grid transport, by convection and turbulence, is known to play an important role in lofting pollutants from their source regions. Consequently, the long-range transport and climatology of simulated atmospheric composition are impacted. This study uses the Goddard Earth Observing System, Version 5 (GEOS-5) atmospheric model to study pollutant transport. The baseline model uses a Relaxed Arakawa-Schubert (RAS) scheme that represents convection through a sequence of linearly entraining cloud plumes characterized by unique detrainment levels. Thermodynamics, moisture and trace gases are transported in the same manner. Various approximate forms of trace-gas transport are implemented, in which the box-averaged cloud mass fluxes from RAS are used with different numerical approaches. Substantial impacts on forward-model simulations of CO (using a linearized chemistry) are evident. In particular, some aspects of simulations using a diffusive form of sub-grid transport bear more resemblance to space-based CO observations than do the baseline simulations with RAS transport. Implications for transport in the real atmosphere will be discussed. Another issue of importance is that many adjoint/inversion computations use simplified representations of sub-grid transport that may be inconsistent with the forward models; implications will be discussed. Finally, simulations using a complex chemistry model in GEOS-5 (in place of the linearized CO model) are underway; noteworthy results from this simulation will be mentioned.

  1. Constraining convective regions with asteroseismic linear structural inversions

    NASA Astrophysics Data System (ADS)

    Buldgen, G.; Reese, D. R.; Dupret, M. A.

    2018-01-01

    Context. Convective regions in stellar models are always associated with uncertainties, for example, due to extra-mixing or the possible inaccurate position of the transition from convective to radiative transport of energy. Such inaccuracies have a strong impact on stellar models and the fundamental parameters we derive from them. The most promising method to reduce these uncertainties is to use asteroseismology to derive appropriate diagnostics probing the structural characteristics of these regions. Aims: We wish to use custom-made integrated quantities to improve the capabilities of seismology to probe convective regions in stellar interiors. By doing so, we hope to increase the number of indicators obtained with structural seismic inversions to provide additional constraints on stellar models and the fundamental parameters we determine from theoretical modeling. Methods: First, we present new kernels associated with a proxy of the entropy in stellar interiors. We then show how these kernels can be used to build custom-made integrated quantities probing convective regions inside stellar models. We present two indicators suited to probe convective cores and envelopes, respectively, and test them on artificial data. Results: We show that it is possible to probe both convective cores and envelopes using appropriate indicators obtained with structural inversion techniques. These indicators provide direct constraints on a proxy of the entropy of the stellar plasma, sensitive to the characteristics of convective regions. These constraints can then be used to improve the modeling of solar-like stars by providing an additional degree of selection of models obtained from classical forward modeling approaches. We also show that in order to obtain very accurate indicators, we need ℓ = 3 modes for the envelope but that the core-conditions indicator is more flexible in terms of the seismic data required for its use.

  2. [Determinants of pride and shame: outcome, expected success and attribution].

    PubMed

    Schützwohl, A

    1991-01-01

In two experiments we investigated the relationship between the subjective probability of success and pride and shame. According to Atkinson (1957), pride (the incentive value of success) is an inverse linear function of the probability of success, while shame (the incentive value of failure) is a negative linear function of it. Attribution theory, by contrast, predicts an inverse U-shaped relationship between the subjective probability of success and both pride and shame. The results presented here are at variance with both theories: pride and shame do not vary with the subjective probability of success. However, pride and shame are systematically correlated with internal attributions of the action outcome.

  3. Pilots Rate Augmented Generalized Predictive Control for Reconfiguration

    NASA Technical Reports Server (NTRS)

    Soloway, Don; Haley, Pam

    2004-01-01

The objective of this paper is to report the results of research being conducted in reconfigurable flight controls at NASA Ames. A study was conducted with three NASA Dryden test pilots to evaluate two approaches to reconfiguring an aircraft's control system when failures occur in the control surfaces and engine. NASA Ames is investigating both a Neural Generalized Predictive Control scheme and a Neural Network based Dynamic Inverse controller. This paper highlights the Predictive Control scheme, where a simple augmentation to reduce the steady-state error to zero made the neural network predictor model redundant for the task. Instead of a neural network predictor model, a nominal single-point linear model was used and augmented with an error corrector. This paper shows that the Generalized Predictive Controller and the Dynamic Inverse Neural Network controller perform equally well at reconfiguration, but with lower rate requirements from the actuators. Also presented are the pilot ratings for each controller for various failure scenarios and two samples of the required control actuation during reconfiguration. Finally, the paper concludes by stepping through the Generalized Predictive Control's reconfiguration process for an elevator failure.

  4. Estimating the electron energy distribution during ionospheric modification from spectrographic airglow measurements

    NASA Astrophysics Data System (ADS)

    Hysell, D. L.; Varney, R. H.; Vlasov, M. N.; Nossa, E.; Watkins, B.; Pedersen, T.; Huba, J. D.

    2012-02-01

    The electron energy distribution during an F region ionospheric modification experiment at the HAARP facility near Gakona, Alaska, is inferred from spectrographic airglow emission data. Emission lines at 630.0, 557.7, and 844.6 nm are considered along with the absence of detectable emissions at 427.8 nm. Estimating the electron energy distribution function from the airglow data is a problem in classical linear inverse theory. We describe an augmented version of the method of Backus and Gilbert which we use to invert the data. The method optimizes the model resolution, the precision of the mapping between the actual electron energy distribution and its estimate. Here, the method has also been augmented so as to limit the model prediction error. Model estimates of the suprathermal electron energy distribution versus energy and altitude are incorporated in the inverse problem formulation as representer functions. Our methodology indicates a heater-induced electron energy distribution with a broad peak near 5 eV that decreases approximately exponentially by 30 dB between 5-50 eV.
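As a rough illustration of the kind of linear inverse problem described above (and not the augmented Backus and Gilbert scheme the authors actually use), the sketch below sets up a generic damped (Tikhonov) inversion: an invented forward operator G maps an energy distribution f(E) to a handful of band-integrated emission intensities, and a damped least-squares solve recovers a stabilized estimate of f.

```python
import numpy as np

# Illustrative sketch only: the paper uses an augmented Backus-Gilbert method;
# here a generic damped (Tikhonov) linear inversion stands in for it. The
# forward operator G and all numbers below are invented for the demonstration.
rng = np.random.default_rng(0)
E = np.linspace(1.0, 50.0, 100)               # energy grid (eV)
f_true = np.exp(-((E - 5.0) ** 2) / 8.0)      # broad peak near 5 eV, as in the text

# Four broad, overlapping excitation kernels standing in for the emission lines.
centers = [3.0, 8.0, 20.0, 40.0]
G = np.array([np.exp(-((E - c) ** 2) / (2 * 6.0 ** 2)) for c in centers])
d = G @ f_true + 0.01 * rng.standard_normal(len(centers))

# Damped least squares: minimize ||G f - d||^2 + lam ||f||^2. With only four
# data the problem is badly underdetermined; lam trades resolution for stability.
lam = 1e-2
f_est = np.linalg.solve(G.T @ G + lam * np.eye(len(E)), G.T @ d)
```

The regularization parameter lam plays the role that the model-prediction-error limit plays in the augmented method: it stabilizes the estimate at the cost of model resolution.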

  5. Uniform apparent contrast noise: A picture of the noise of the visual contrast detection system

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J., Jr.; Watson, A. B.

    1984-01-01

A picture which is a sample of random contrast noise is generated. The noise amplitude spectrum in each region of the picture is inversely proportional to the spatial frequency contrast sensitivity for that region, assuming the observer fixates the center of the picture and is the appropriate distance from it. In this case, the picture appears to have approximately the same contrast everywhere. To the extent that contrast detection thresholds are determined by visual system noise, this picture can be regarded as a picture of the noise of that system. There is evidence that, at different eccentricities, contrast sensitivity functions differ only by a magnification factor. The picture was generated by filtering a sample of white noise with a filter whose frequency response is inversely proportional to foveal contrast sensitivity. It was then stretched by a space-varying magnification function. The picture summarizes a linear noise model of detection and discrimination of contrast signals by referring the model noise to the input picture domain.

  6. On seismological moments and magnitudes

    USGS Publications Warehouse

    Bolt, B. A.

    1991-01-01

My approach to seismology over the years has always been from the point of view of applied mathematics, as exemplified broadly by the work of the late Sir Harold Jeffreys and Professor K. E. Bullen. Both stressed the development of mathematics in the context of physical systems and of modeling, with an eye always on the side of inference. Seismology provided for them, and still provides today, the almost perfect paradigm; the problem is the resolution of the detailed constitution of the Earth and its geologically short-term dynamics. The latter part includes, of course, seismic-risk estimation. The last 20 years have seen the construction of a brilliant theoretical formalism for linear inverse problems in seismology, although, oddly enough, the current popular Earth models do not take account of it. It is interesting too that the narrow opinion, prevalent a decade ago, to the effect that the traditional seismic body-wave approaches to structural definition were superseded, has been largely abandoned under today's banner of tomography, as though the Oldham-Jeffreys-Gutenberg inversions were not tomography.

  7. Cruciferous vegetable intake is inversely associated with lung cancer risk among smokers: a case-control study

    PubMed Central

    2010-01-01

Background Inverse associations between cruciferous vegetable intake and lung cancer risk have been consistently reported. However, associations within smoking status subgroups have not been consistently addressed. Methods We conducted a hospital-based case-control study with lung cancer cases and controls matched on smoking status, and further adjusted for smoking status, duration, and intensity in the multivariate models. A total of 948 cases and 1743 controls were included in the analysis. Results Inverse linear trends were observed between intake of fruits, total vegetables, and cruciferous vegetables and risk of lung cancer (ORs ranged from 0.53 to 0.70, with P for trend < 0.05). Interestingly, significant associations were observed for intake of fruits and total vegetables with lung cancer among never smokers. Conversely, significant inverse associations with cruciferous vegetable intake were observed primarily among smokers, in particular former smokers, although significant interactions were not detected between smoking and intake of any food group. Of four lung cancer histological subtypes, significant inverse associations were observed primarily among patients with squamous or small cell carcinoma, the two subtypes more strongly associated with heavy smoking. Conclusions Our findings are consistent with the smoking-related carcinogen-modulating effect of isothiocyanates, a group of phytochemicals uniquely present in cruciferous vegetables. Our data support the hypothesis that consumption of a diet rich in cruciferous vegetables may reduce the risk of lung cancer among smokers. PMID:20423504

  8. Female Literacy Rate is a Better Predictor of Birth Rate and Infant Mortality Rate in India

    PubMed Central

    Saurabh, Suman; Sarkar, Sonali; Pandey, Dhruv K.

    2013-01-01

Background: Educated women are known to take informed reproductive and healthcare decisions. These result in population stabilization and better infant care, reflected by lower birth rates and infant mortality rates (IMRs), respectively. Materials and Methods: Our objective was to study the relationship of male and female literacy rates with crude birth rates (CBRs) and IMRs of the states and union territories (UTs) of India. The data were analyzed using linear regression. CBR and IMR were taken as the dependent variables, while the overall, male, and female literacy rates were the independent variables. Results: CBRs were inversely related to literacy rates (slope parameter = −0.402, P < 0.001). On multiple linear regression with male and female literacy rates, a significant inverse relationship emerged between female literacy rate and CBR (slope = −0.363, P < 0.001), while male literacy rate was not significantly related to CBR (P = 0.674). IMRs of the states were also inversely related to their literacy rates (slope = −1.254, P < 0.001). Multiple linear regression revealed a significant inverse relationship between IMR and female literacy (slope = −0.816, P = 0.031), whereas male literacy rate was not significantly related (P = 0.630). Conclusion: Female literacy is relatively more important for both population stabilization and better infant health. PMID:26664840
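The multiple-regression setup described above can be sketched with ordinary least squares. The data below are simulated (female literacy alone drives the outcome), not the Census of India figures the authors used; all numbers are invented for illustration.

```python
import numpy as np

# Synthetic illustration of the paper's analysis: multiple linear regression of
# crude birth rate (CBR) on male and female literacy rates. Data are simulated,
# with female literacy alone driving CBR; coefficients are made up.
rng = np.random.default_rng(1)
n = 35                                  # roughly the number of states and UTs
female = rng.uniform(50, 95, n)         # female literacy rate (%)
male = female + rng.uniform(0, 15, n)   # male literacy tends to run higher
cbr = 40.0 - 0.36 * female + rng.normal(0, 1.0, n)

# Design matrix with an intercept column; ordinary least squares fit.
X = np.column_stack([np.ones(n), male, female])
beta, *_ = np.linalg.lstsq(X, cbr, rcond=None)
intercept, b_male, b_female = beta
```

Under this setup the fitted female coefficient comes out close to −0.36 while the male coefficient hovers near zero, mirroring the pattern the paper reports: once both rates are in the model, only female literacy predicts CBR.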

  9. Studying the Transient Thermal Contact Conductance Between the Exhaust Valve and Its Seat Using the Inverse Method

    NASA Astrophysics Data System (ADS)

    Nezhad, Mohsen Motahari; Shojaeefard, Mohammad Hassan; Shahraki, Saeid

    2016-02-01

In this study, experiments were conducted to thermally analyze the exhaust valve of an air-cooled internal combustion engine and to estimate the thermal contact conductance in fixed and periodic contacts. Due to the nature of internal combustion engines, the duration of contact between the valve and its seat is very short, and much time is needed to reach the quasi-steady state in the periodic contact between the exhaust valve and its seat. Using linear extrapolation and the inverse solution, the surface contact temperatures and the fixed and periodic thermal contact conductances were calculated. The results of the linear extrapolation and inverse methods have similar trends and, based on the error analysis, are accurate enough to estimate the thermal contact conductance; of the two, the linear extrapolation method based on the inverse ratio is preferred. The effects of pressure, contact frequency, heat flux, and cooling air speed on thermal contact conductance have been investigated. The results show that increasing the contact pressure increases the thermal contact conductance substantially, whereas increasing the engine speed decreases it. Likewise, boosting the air speed increases the thermal contact conductance, while raising the heat flux reduces it. The average calculated error equals 12.9%.

  10. Deformation Time-Series of the Lost-Hills Oil Field using a Multi-Baseline Interferometric SAR Inversion Algorithm with Finite Difference Smoothing Constraints

    NASA Astrophysics Data System (ADS)

    Werner, C. L.; Wegmüller, U.; Strozzi, T.

    2012-12-01

The Lost-Hills oil field, located in Kern County, California, ranks sixth in total remaining reserves in California. The field is characterized by hundreds of densely packed wells, with one well every 5000 to 20000 square meters. Subsidence due to oil extraction can be greater than 10 cm/year and is highly variable both in space and time. The RADARSAT-1 SAR satellite collected data over this area with a 24-day repeat during a 2-year period spanning 2002-2004. Relatively high interferometric correlation makes this an excellent region for the development and testing of deformation time-series inversion algorithms. Errors in deformation time series derived from a stack of differential interferograms are primarily due to errors in the digital terrain model, interferometric baselines, variability in tropospheric delay, thermal noise, and phase unwrapping errors. Particularly challenging is the separation of non-linear deformation from variations in tropospheric delay and phase unwrapping errors. In our algorithm, a subset of interferometric pairs is selected from a set of N radar acquisitions based on criteria of connectivity, time interval, and perpendicular baseline. When possible, the subset consists of temporally connected interferograms; otherwise, the different groups of interferograms are selected to overlap in time. The maximum time interval is constrained to be less than a threshold value to minimize phase gradients due to deformation as well as to minimize temporal decorrelation. Large baselines are also avoided to minimize the effect of DEM errors on the interferometric phase. Based on an extension of the SVD-based inversion described by Lee et al. (USGS Professional Paper 1769), Schmidt and Burgmann (JGR, 2003), and the earlier work of Berardino (TGRS, 2002), our algorithm combines estimation of the DEM height error with a set of finite difference smoothing constraints.
A set of linear equations is formulated for each spatial point as a function of the deformation velocities during the time intervals spanned by the interferograms and a DEM height correction. The sensitivity of the phase to the height correction depends on the length of the perpendicular baseline of each interferogram. This design matrix is augmented with a set of additional weighted constraints on the acceleration that penalize rapid velocity variations. The weighting factor γ can be varied from 0 (no smoothing) to large values (> 10) that yield an essentially linear time-series solution. The factor can be tuned to take into account a priori knowledge of the non-linearity of the deformation. The difference between the constrained time-series solution and the unconstrained time series can be interpreted as a combination of tropospheric path delay and baseline error. Spatial smoothing of the residual phase leads to an improved atmospheric model that can be fed back into the inversion and iterated. Our analysis shows non-linear deformation related to changes in oil extraction, as well as local height corrections that improve on the low-resolution 3 arc-sec SRTM DEM.
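The core of this kind of time-series inversion can be sketched for a single pixel: interval velocities are solved from a set of interferograms, with weighted finite-difference rows appended to the design matrix to penalize rapid velocity variations. The DEM height-correction column is omitted and all numbers below are invented.

```python
import numpy as np

# Single-pixel sketch of a smoothed time-series inversion. Acquisition times,
# the interferogram pair list, velocities and gamma are all made up.
t = np.array([0.0, 24.0, 48.0, 72.0, 96.0])   # acquisition times (days)
pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3), (2, 4)]
dt = np.diff(t)                               # interval lengths
v_true = np.array([0.10, 0.30, 0.30, 0.05])   # interval velocities (mm/day)

# Each interferogram observes the displacement accumulated over its intervals.
A = np.zeros((len(pairs), len(dt)))
for row, (i, j) in enumerate(pairs):
    A[row, i:j] = dt[i:j]
obs = A @ v_true

# Append weighted acceleration constraints gamma * (v[k+1] - v[k]) ~ 0;
# gamma = 0 means no smoothing, large gamma forces a near-linear time series.
gamma = 0.1
D = gamma * (np.eye(len(dt) - 1, len(dt), k=1) - np.eye(len(dt) - 1, len(dt)))
v_est, *_ = np.linalg.lstsq(np.vstack([A, D]),
                            np.concatenate([obs, np.zeros(len(dt) - 1)]),
                            rcond=None)
```

With noise-free synthetic data the smoothing rows barely bias the solution; on real interferograms, raising gamma trades fidelity to rapid velocity changes for robustness against unwrapping and atmospheric errors, as described above.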

  11. Quantum description of light propagation in generalized media

    NASA Astrophysics Data System (ADS)

    Häyrynen, Teppo; Oksanen, Jani

    2016-02-01

Linear quantum input-output relation based models are widely applied to describe light propagation in a lossy medium. The details of the interaction and the associated added noise depend on whether the device is configured to operate as an amplifier or an attenuator. Using the traveling wave (TW) approach, we generalize the linear material model to simultaneously account for both the emission and absorption processes and to have point-wise defined noise field statistics and intensity-dependent interaction strengths. Thus, our approach describes the quantum input-output relations of linear media with net attenuation, amplification, or transparency without pre-selection of the operating point. The TW approach is then applied to investigate materials at thermal equilibrium, inverted materials, the transparency limit where losses are compensated, and saturating amplifiers. We also apply the approach to media in nonuniform states, which can result, e.g., from a temperature gradient across the medium or a position-dependent inversion of the amplifier. Furthermore, using the generalized model we investigate devices with intensity-dependent interactions and show how an initial thermal field acquires coherent statistics due to gain saturation.

  12. An approach to multivariable control of manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    The paper presents simple schemes for multivariable control of multiple-joint robot manipulators in joint and Cartesian coordinates. The joint control scheme consists of two independent multivariable feedforward and feedback controllers. The feedforward controller is the minimal inverse of the linearized model of robot dynamics and contains only proportional-double-derivative (PD2) terms - implying feedforward from the desired position, velocity and acceleration. This controller ensures that the manipulator joint angles track any reference trajectories. The feedback controller is of proportional-integral-derivative (PID) type and is designed to achieve pole placement. This controller reduces any initial tracking error to zero as desired and also ensures that robust steady-state tracking of step-plus-exponential trajectories is achieved by the joint angles. Simple and explicit expressions of computation of the feedforward and feedback gains are obtained based on the linearized model of robot dynamics. This leads to computationally efficient schemes for either on-line gain computation or off-line gain scheduling to account for variations in the linearized robot model due to changes in the operating point. The joint control scheme is extended to direct control of the end-effector motion in Cartesian space. Simulation results are given for illustration.

  13. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods

    PubMed Central

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-01-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, a numerical optimization method for analyzing LR-NMR data by including non-negativity constraints and L1 regularization and by applying a convex optimization solver PDCO, a primal-dual interior method for convex objectives, that allows general linear constraints to be treated as linear operators is presented. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452
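The combination of non-negativity and L1 regularization described above can be illustrated with a proximal-gradient (ISTA) solver rather than the PDCO interior-point method the article actually uses. Everything below (the relaxation-time grid, the two-component test signal, lam) is invented for the demonstration.

```python
import numpy as np

# Hedged sketch: non-negative, L1-regularized inversion of a discretized
# Laplace transform via proximal gradient descent (ISTA). This illustrates
# the sparsity + non-negativity idea of the article, not the PDCO solver.
t = np.linspace(0.001, 3.0, 300)              # acquisition times (s)
T2 = np.logspace(-2, 1, 80)                   # candidate relaxation times (s)
K = np.exp(-t[:, None] / T2[None, :])         # discretized Laplace kernel

x_true = np.zeros(len(T2))
x_true[[25, 55]] = [1.0, 0.6]                 # two discrete relaxation components
y = K @ x_true                                # noiseless relaxation signal

# ISTA: gradient step on ||Kx - y||^2, then soft-threshold and clip at zero
# (for x >= 0 the proximal map of lam*||x||_1 is max(x - step*lam, 0)).
lam = 1e-3
step = 1.0 / np.linalg.norm(K, 2) ** 2        # inverse Lipschitz constant
x = np.zeros(len(T2))
for _ in range(10000):
    x = np.maximum(x - step * (K.T @ (K @ x - y)) - step * lam, 0.0)
```

The clipped soft-threshold step is what enforces both constraints at once; the ill-posedness of the Laplace kernel shows up as the slow convergence of the fine structure of x.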

  14. SFDBSI_GLS v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poppeliers, Christian

Matlab code for inversion of frequency-domain, electrostatic geophysical data in terms of scalar scattering amplitudes in the subsurface. The data are assumed to be the difference between two measurements: electric field measurements prior to the injection of an electrically conductive proppant, and the electric field measurements after proppant injection. The proppant is injected into the subsurface via a well, and its purpose is to prop open fractures created by hydraulic fracturing. In both cases the illuminating electric field is assumed to be a vertically incident plane wave. The inversion strategy is to solve a linear system of equations, where each equation defines the amplitude of a candidate scattering volume. The model space is defined by M potential scattering locations, and the frequency-domain data (of which there are k frequencies) are recorded on N receivers. The solution thus solves a kN x M system of linear equations for M scalar amplitudes within the user-defined solution space. Practical Application: Oilfield environments where observed electrostatic geophysical data can reasonably be assumed to be scattered by subsurface proppant volumes. No field validation examples have so far been provided.

  15. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods.

    PubMed

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-05-01

Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, a numerical optimization method for analyzing LR-NMR data by including non-negativity constraints and L1 regularization and by applying a convex optimization solver PDCO, a primal-dual interior method for convex objectives, that allows general linear constraints to be treated as linear operators is presented. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72-88, 2013.

  16. Deconvolving regional and fault-driven uplift in Calabria using drainage inversion techniques and field observations

    NASA Astrophysics Data System (ADS)

    Quye-Sawyer, Jennifer; Whittaker, Alexander; Roberts, Gareth; Rood, Dylan

    2017-04-01

A key challenge in the Earth Sciences is to understand the timing and extent of the coupling between geodynamics, tectonics, and surface processes. In principle, the landscape adjusts to surface uplift or tectonic events, and present-day topography records a convolution of these processes. The inverse problem, the ability to find the 'best fit' theoretical scenario to match present-day observations, is particularly desirable as it makes use of real data, encompasses the complexity of natural systems, and quantifies model uncertainty through misfit. The region of Calabria, Italy, is known to have experienced geologically rapid uplift (~1 mm/yr) since the Early Pleistocene, inferred from widespread marine terraces (ca. 1 Myr old) at elevations greater than 1 km. In addition, this is a tectonically active area of normal faulting with several highly destructive earthquakes in recent centuries. Since there has been some debate about the relative magnitudes of the uplift caused by regional processes or by faulting, the ability to model these effects on a regional scale may help resolve this problem. Therefore, Calabria is both a suitable and important site for modeling large-magnitude recent geomorphic change. 1368 river longitudinal profiles have been generated from satellite digital elevation models (DEMs). These longitudinal profiles were compared to aerial photography to confirm the accuracy of this automated process. The longitudinal profiles contain numerous non-lithologically controlled knickpoints. Field observations support the presence of knickpoints extracted from the DEM, and measurements of pebble imbrication from fluvial terraces suggest the planform stability of the drainage network over the last 1 Myr. By assuming that fluvial erosion obeys stream power laws with an exponent of upstream area of 0.5 ± 0.1, the evolution of the landscape is computed using a linearized joint inversion of the longitudinal profiles.
This has produced a spatially and temporally continuous model of cumulative uplift for the Calabria region. We have used independently-collated stratigraphic data to provide absolute ages for the inversion model. In particular, uplift rates of well-dated marine terraces constrain the inversion near the coastline and we are using cosmogenic isotope isochron burial dating to refine the timing of the onset of uplift. Preliminary inversion results show the initiation of uplift at approximately 1.9 Ma. The model output is consistent with field observations of regional uplift, later combined with fault related extension. Furthermore, these results are consistent with an increase in regional uplift rate prior to fault initiation.

  17. Observer-dependent sign inversions of polarization singularities.

    PubMed

    Freund, Isaac

    2014-10-15

    We describe observer-dependent sign inversions of the topological charges of vector field polarization singularities: C points (points of circular polarization), L points (points of linear polarization), and two virtually unknown singularities we call γ(C) and α(L) points. In all cases, the sign of the charge seen by an observer can change as she changes the direction from which she views the singularity. Analytic formulas are given for all C and all L point sign inversions.

  18. A Higher Order Iterative Method for Computing the Drazin Inverse

    PubMed Central

    Soleymani, F.; Stanimirović, Predrag S.

    2013-01-01

A method with a high convergence rate for finding approximate inverses of nonsingular matrices is suggested and established analytically. An extension of the introduced computational scheme to general square matrices is defined. The extended method can be used for finding the Drazin inverse. The application of the scheme to large sparse test matrices, alongside its use in preconditioning of linear systems of equations, will be presented to clarify the contribution of the paper. PMID:24222747
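For background, the simplest member of this family of iterative inverse finders is the classic second-order Newton-Schulz iteration X_{k+1} = X_k (2I − A X_k); the paper's higher-order variant is not reproduced here, and the test matrix below is arbitrary.

```python
import numpy as np

# Second-order Newton-Schulz iteration for approximating A^{-1}.
# The higher-order scheme of the paper is not reproduced here.
rng = np.random.default_rng(2)
n = 5
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well-conditioned test matrix
I = np.eye(n)

# Standard safe start: X0 = A^T / (||A||_1 ||A||_inf) guarantees convergence
# for nonsingular A, since the spectrum of A @ X0 then lies in (0, 1].
X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
for _ in range(20):
    X = X @ (2 * I - A @ X)       # quadratic convergence toward A^{-1}
```

Each sweep roughly doubles the number of correct digits, which is why such iterations are attractive for building approximate-inverse preconditioners from sparse matrix-matrix products.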

  19. A study on characterization of stratospheric aerosol and gas parameters with the spacecraft solar occultation experiment

    NASA Technical Reports Server (NTRS)

    Chu, W. P.

    1977-01-01

Spacecraft remote sensing of stratospheric aerosol and ozone vertical profiles using the solar occultation experiment has been analyzed. A computer algorithm has been developed in which a two-step inversion of the simulated data is performed. The radiometric data are first inverted into a vertical extinction profile using a linear inversion algorithm. The multiwavelength extinction profiles are then solved with a nonlinear least-squares algorithm to produce aerosol and ozone vertical profiles. Examples of inversion results are shown, illustrating the resolution and noise sensitivity of the inversion algorithms.

  20. tomo3d: a new 3-D joint refraction and reflection travel-time tomography code for active-source seismic data

    NASA Astrophysics Data System (ADS)

    Meléndez, A.; Korenaga, J.; Sallares, V.; Ranero, C. R.

    2012-12-01

    We present the development state of tomo3d, a code for three-dimensional refraction and reflection travel-time tomography of wide-angle seismic data based on the previous two-dimensional version of the code, tomo2d. The core of both forward and inverse problems is inherited from the 2-D version. The ray tracing is performed by a hybrid method combining the graph and bending methods. The graph method finds an ordered array of discrete model nodes, which satisfies Fermat's principle, that is, whose corresponding travel time is a global minimum within the space of discrete nodal connections. The bending method is then applied to produce a more accurate ray path by using the nodes as support points for an interpolation with beta-splines. Travel time tomography is formulated as an iterative linearized inversion, and each step is solved using an LSQR algorithm. In order to avoid the singularity of the sensitivity kernel and to reduce the instability of inversion, regularization parameters are introduced in the inversion in the form of smoothing and damping constraints. Velocity models are built as 3-D meshes, and velocity values at intermediate locations are obtained by trilinear interpolation within the corresponding pseudo-cubic cell. Meshes are sheared to account for topographic relief. A floating reflector is represented by a 2-D grid, and depths at intermediate locations are calculated by bilinear interpolation within the corresponding square cell. The trade-off between the resolution of the final model and the associated computational cost is controlled by the relation between the selected forward star for the graph method (i.e. the number of nodes that each node considers as its neighbors) and the refinement of the velocity mesh. Including reflected phases is advantageous because it provides a better coverage and allows us to define the geometry of those geological interfaces with velocity contrasts sharp enough to be observed on record sections. 
The code also offers the possibility of including water-layer multiples in the modeling, which is useful whenever these phases can be followed to greater offsets than the primary ones. This increases the amount of information available from the data, yielding more extensive and better-constrained velocity and geometry models. We will present synthetic results from benchmark tests for the forward and inverse problems, as well as from more complex inversion tests for different inversion possibilities, such as one with travel times from refracted waves only (i.e. first arrivals) and one with travel times from both refracted and reflected waves. In addition, we will show some preliminary results for the inversion of real 3-D OBS data acquired offshore Ecuador and Colombia.
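The linearized inversion step described above, solving a regularized linear system with LSQR, can be sketched as follows. The sensitivity matrix, model, and damping value are invented stand-ins, not real travel-time kernels from tomo3d.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Sketch of one linearized tomographic iteration: solve G m = d with LSQR.
# G, m_true and the damping value are fabricated for illustration only.
rng = np.random.default_rng(3)
G = rng.random((200, 50))            # fake travel-time sensitivity matrix
m_true = rng.standard_normal(50)     # fake slowness-perturbation model
d = G @ m_true                       # synthetic travel-time residuals

# LSQR iteratively minimizes ||G m - d||^2 + damp^2 ||m||^2; in a real code,
# smoothing constraints would be appended to G as extra weighted rows, as in
# the damping and smoothing regularization described above.
m_est = lsqr(G, d, damp=1e-6, atol=1e-12, btol=1e-12, iter_lim=5000)[0]
```

In production tomography G is large and sparse (each ray touches few cells), which is exactly the regime LSQR is designed for: it needs only matrix-vector products with G and its transpose.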
