Neural Network method for Inverse Modeling of Material Deformation
Allen, J.D., Jr.; Ivezic, N.D.; Zacharia, T.
1999-07-10
A method is described for inverse modeling of material deformation in applications of importance to the sheet metal forming industry. The method was developed in order to assess the feasibility of utilizing empirical data in the early stages of the design process as an alternative to conventional prototyping methods. Because properly prepared and employed artificial neural networks (ANN) were known to be capable of codifying and generalizing large bodies of empirical data, they were the natural choice for the application. The product of the work described here is a desktop ANN system that can produce in one pass an accurate die design for a user-specified part shape.
Asteroid spin and shape modelling using two lightcurve inversion methods
NASA Astrophysics Data System (ADS)
Marciniak, Anna; Bartczak, Przemyslaw; Konstanciak, Izabella; Dudzinski, Grzegorz; Mueller, Thomas G.; Duffard, Rene
2016-10-01
We are conducting an observing campaign to counteract strong selection effects in photometric studies of asteroids. Our targets are long-period (P>12 hours) and low-amplitude (a_max<0.25 mag) asteroids, which, although numerous, have poor lightcurve datasets (Marciniak et al. 2015, PSS 118, 256). As a result, such asteroids are very poorly studied in terms of their spins and shapes. Our campaign targets a sample of around 100 bright (H<11 mag) main belt asteroids sharing both of these features, resulting in a few tens of new composite lightcurves each year. At present, the data gathered so far have allowed us to construct detailed spin and shape models for about ten targets. In this study we perform spin and shape modelling using two lightcurve inversion methods: convex inversion (Kaasalainen et al. 2001, Icarus, 153, 37) and the nonconvex SAGE modelling algorithm (Shaping Asteroids with Genetic Evolution, Bartczak et al. 2014, MNRAS, 443, 1802). These two methods are independent of each other and are based on different assumptions about the shape. Thus, the results obtained on the same datasets provide a cross-check of both the methods and the resulting spin and shape models. The spin solutions are highly consistent, and the shape models are similar, though those from the SAGE algorithm provide more details of the surface features. Nonconvex shapes produced by SAGE have been compared with direct spacecraft images, and the first results for targets like Eros or Lutetia (Bartczak et al. 2014, ACM conf. 29B) show a high level of agreement. Another way of validation is comparison of the shape models with asteroid shape contours obtained using different techniques (such as stellar occultation timings or adaptive optics imaging) or against data in the thermal infrared range gathered by ground- and space-based observatories. The thermal data can constrain size and albedo, and can also help to resolve spin-pole ambiguities. In special cases, the
Gao Yajun
2008-08-15
A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the method straightforward to apply and effective. As an application, a concrete family of soliton double solutions for the considered theory is obtained.
Dynamic inversion method based on the time-staggered stereo-modeling scheme and its acceleration
NASA Astrophysics Data System (ADS)
Jing, Hao; Yang, Dinghui; Wu, Hao
2016-12-01
A set of second-order differential equations describing the space-time behaviour of derivatives of displacement with respect to model parameters (i.e. waveform sensitivities) is obtained by taking the derivative of the original wave equations. The dynamic inversion method obtains sensitivities of the seismic displacement field with respect to earth properties directly, by solving differential equations for them instead of constructing sensitivities from the displacement field itself. In this study, we have taken a new perspective on the dynamic inversion method and used acceleration approaches to reduce the computational time and memory usage, improving its ability to perform high-resolution imaging. The dynamic inversion method, which can simultaneously use different waves and multicomponent observation data, is appropriate for directly inverting elastic parameters, medium density or wave velocities. Full wavefield information is utilized as much as possible, at the expense of a larger amount of computation. To mitigate the computational burden, two accelerations are proposed from a computer-implementation point of view. One is source encoding, which uses a linear combination of all shots, and the other is to reduce the cost of the forward modeling. We applied a new finite-difference (FD) method to the dynamic inversion to improve the computational accuracy and speed up the computation. Numerical experiments indicated that the new FD method can effectively suppress the numerical dispersion caused by the discretization of the wave equations, resulting in enhanced computational efficiency with less memory cost for seismic modeling and inversion based on the full wave equations. We present inversion results for both checkerboard and Marmousi models to demonstrate the validity of the method, showing that it converges even when the initial model deviates significantly from the true model. Besides, parallel calculations can be easily
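The source-encoding acceleration mentioned above rests on the linearity of the forward modeling operator: one simulation of an encoded supershot reproduces the encoded sum of the individual shot wavefields. A minimal sketch of that property (the convolution below is an arbitrary linear map standing in for the FD wave solver; all names and numbers are illustrative):

```python
import random

random.seed(0)

def forward(source, kernel):
    """Stand-in linear forward operator: discrete convolution of a source
    time function with a fixed kernel. A real FD wave solver would be used
    here; only its linearity matters for source encoding."""
    n, m = len(source), len(kernel)
    out = [0.0] * (n + m - 1)
    for i, s in enumerate(source):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

kernel = [1.0, -0.5, 0.25]
shots = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]
codes = [random.choice([-1.0, 1.0]) for _ in shots]  # random polarity encoding

# One modeling run on the encoded supershot...
supershot = [sum(c * s[i] for c, s in zip(codes, shots)) for i in range(8)]
encoded_field = forward(supershot, kernel)

# ...equals the encoded sum of per-shot modeling runs, by linearity.
fields = [forward(s, kernel) for s in shots]
summed = [sum(c * f[i] for c, f in zip(codes, fields))
          for i in range(len(encoded_field))]
assert all(abs(a - b) < 1e-9 for a, b in zip(encoded_field, summed))
```

The practical gain is that the cost of one inversion iteration drops from one forward solve per shot to one solve per supershot, at the price of crosstalk noise between shots.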
Odor emission rate estimation of indoor industrial sources using a modified inverse modeling method.
Li, Xiang; Wang, Tingting; Sattayatewa, Chakkrid; Venkatesan, Dhesikan; Noll, Kenneth E; Pagilla, Krishna R; Moschandreas, Demetrios J
2011-08-01
Odor emission rates are commonly measured in the laboratory or occasionally estimated with inverse modeling techniques. A modified inverse modeling approach is used to estimate source emission rates inside a postdigestion centrifuge building of a water reclamation plant. Conventionally, inverse modeling methods divide an indoor environment into zones on the basis of structural design and estimate source emission rates using models that assume a homogeneous distribution of agent concentrations within a zone, with experimentally determined link functions to simulate airflows among zones. The modified approach segregates zones as a function of agent distribution rather than building design and identifies near and far fields. Near-field agent concentrations do not satisfy the assumption of homogeneous odor concentrations; far-field concentrations satisfy this assumption and are the only ones used to estimate emission rates. The predictive ability of the modified inverse modeling approach was validated against measured emission rate values; the difference between corresponding estimated and measured odor emission rates is not statistically significant. Similarly, the difference between measured and estimated hydrogen sulfide emission rates is also not statistically significant. The modified inverse modeling approach is easy to perform because it uses odor and odorant field measurements instead of complex chamber emission rate measurements.
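In its simplest steady-state form, the far-field mass balance behind this kind of inverse estimate reduces to one line: the emission rate equals the ventilation flow times the concentration rise across the space. A hedged sketch with invented numbers (the flow rate and concentrations below are illustrative, not taken from the study):

```python
def emission_rate(airflow_m3_per_h, c_far_field, c_supply):
    """Steady-state, well-mixed mass balance for a single zone:
    E = Q * (C_far_field - C_supply).
    Valid only where the homogeneous-concentration assumption holds,
    i.e. in the far field, as the modified approach requires."""
    return airflow_m3_per_h * (c_far_field - c_supply)

# Illustrative values: airflow in m3/h, odor concentrations in OU/m3
E = emission_rate(10000.0, 120.0, 5.0)
print(E)  # -> 1150000.0 OU/h
```

The modified approach differs from this single-zone picture only in how the zone boundaries are drawn: by measured concentration homogeneity rather than by walls.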
Inversion of tsunami sources by the adjoint method in the presence of observational and model errors
NASA Astrophysics Data System (ADS)
Pires, C.; Miranda, P. M. A.
2003-04-01
The adjoint method is applied to the inversion of tsunami sources from tide-gauge observations in both idealized and realistic setups, with emphasis on the effects of observational, bathymetric and other model errors on the quality of the inversion. The method is developed in a way that allows for the direct optimization of seismic focal parameters, in the case of seismic tsunamis, through a 4-step inversion procedure that can be fully automated, consisting of (i) source area delimitation by adjoint backward ray-tracing, (ii) adjoint optimization of the initial sea state from a vanishing first guess, (iii) non-linear adjustment of the fault model and (iv) final adjoint optimization in the fault parameter space. The methodology is systematically tested with synthetic data, showing its flexibility and robustness in the presence of significant amounts of error.
Proximal point methods for the inverse problem of identifying parameters in beam models
NASA Astrophysics Data System (ADS)
Jadamba, B.; Khan, A. A.; Paulhamus, M.; Sama, M.
2012-07-01
This paper studies the nonlinear inverse problem of identifying certain material parameters in the fourth-order boundary value problem representing the beam model. The inverse problem is solved by posing a convex optimization problem whose solution approximates the sought parameters. The optimization problem is solved by gradient-based approaches, and in this setting the most challenging aspect is the computation of the gradient of the objective functional. We present a detailed treatment of the adjoint stiffness-matrix-based approach for the gradient computation. We employ the recently proposed self-adaptive inexact proximal point methods of Hager and Zhang [6] to solve the inverse problem. It is known that the regularization features of proximal point methods are quite different from those of Tikhonov regularization. We present a comparative analysis of the numerical efficiency of the proximal point methods used, without Tikhonov regularization.
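For readers unfamiliar with the proximal point iteration, each step minimizes the objective plus a quadratic penalty tying the iterate to its predecessor, x_{k+1} = argmin_x f(x) + (1/(2*lam)) * ||x - x_k||^2; the penalty itself supplies the regularization. A toy one-dimensional sketch (f and lam here are illustrative, not the beam-model functional):

```python
def prox_step(x_k, lam):
    """One exact proximal step for the toy objective f(x) = (x - 3)**2.
    Setting the derivative of f(x) + (x - x_k)**2 / (2*lam) to zero gives
    2*(x - 3) + (x - x_k)/lam = 0  =>  x = (6*lam + x_k) / (2*lam + 1)."""
    return (6.0 * lam + x_k) / (2.0 * lam + 1.0)

x = 10.0
for _ in range(50):
    x = prox_step(x, lam=0.5)  # each step halves the distance to x* = 3

print(round(x, 6))  # -> 3.0
```

In the inexact variants cited above, the inner minimization is itself solved only approximately, with the accuracy adapted per iteration; the closed-form step here is the exact special case.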
Studies of Trace Gas Chemical Cycles Using Inverse Methods and Global Chemical Transport Models
NASA Technical Reports Server (NTRS)
Prinn, Ronald G.
2003-01-01
We report progress in the first year, and summarize proposed work for the second year of the three-year dynamical-chemical modeling project devoted to: (a) development, testing, and refining of inverse methods for determining regional and global transient source and sink strengths for long lived gases important in ozone depletion and climate forcing, (b) utilization of inverse methods to determine these source/sink strengths using either MATCH (Model for Atmospheric Transport and Chemistry) which is based on analyzed observed wind fields or back-trajectories computed from these wind fields, (c) determination of global (and perhaps regional) average hydroxyl radical concentrations using inverse methods with multiple titrating gases, and (d) computation of the lifetimes and spatially resolved destruction rates of trace gases using 3D models. Important goals include determination of regional source strengths of methane, nitrous oxide, methyl bromide, and other climatically and chemically important biogenic/anthropogenic trace gases and also of halocarbons restricted by the Montreal protocol and its follow-on agreements and hydrohalocarbons now used as alternatives to the restricted halocarbons.
Interpretation of Trace Gas Data Using Inverse Methods and Global Chemical Transport Models
NASA Technical Reports Server (NTRS)
Prinn, Ronald G.
1997-01-01
This is a theoretical research project aimed at: (1) development, testing, and refining of inverse methods for determining regional and global transient source and sink strengths for long lived gases important in ozone depletion and climate forcing, (2) utilization of inverse methods to determine these source/sink strengths which use the NCAR/Boulder CCM2-T42 3-D model and a global 3-D Model for Atmospheric Transport and Chemistry (MATCH) which is based on analyzed observed wind fields (developed in collaboration by MIT and NCAR/Boulder), (3) determination of global (and perhaps regional) average hydroxyl radical concentrations using inverse methods with multiple titrating gases, and, (4) computation of the lifetimes and spatially resolved destruction rates of trace gases using 3-D models. Important goals include determination of regional source strengths of methane, nitrous oxide, and other climatically and chemically important biogenic trace gases and also of halocarbons restricted by the Montreal Protocol and its follow-on agreements and hydrohalocarbons used as alternatives to the restricted halocarbons.
NASA Astrophysics Data System (ADS)
Hendricks Franssen, H.-J.; Alcolea, A.; Riva, M.; Bakr, M.; van de Wiel, N.; Stauffer, F.; Guadagnini, A.
2009-04-01
While several inverse modeling methods for groundwater flow have been developed during the last decades, hardly any comparisons among them have been published. We present a comparison of the performance of seven inverse methods: the Regularized Pilot Points Method, in both its classical estimation (RPPM-CE) and Monte Carlo (MC) simulation (RPPM-CS) variants, the Monte Carlo variant of the Representer Method (RM), the Sequential Self-Calibration method (SSC), the Zonation Method (ZM), the Moment Equations Method (MEM) and a recently developed Semi-Analytical Method (SAM). The aforementioned methods are applied to a two-dimensional synthetic set-up depicting steady-state groundwater flow around an extraction well in the presence of distributed recharge. Their relative performances were assessed in terms of the characterization of (a) the log-transmissivity field, (b) the hydraulic head distribution and (c) the well catchment delineation with respect to the reference scenario. Simulations were performed for a mildly and a strongly heterogeneous transmissivity field. Adopted comparison measures include the absolute mean error, the root mean square error and the average ensemble standard deviation (whenever a method allows evaluating it) of the log-transmissivity and hydraulic head distributions. In addition, the estimated median and reference well catchments were compared and the uncertainty associated with the estimated catchment was evaluated. We found that the MC-based methods (RPPM-CS, RM and SSC) yield very similar results in all tested scenarios, even though they use different parameterization schemes and different objective functions. The linear correlation coefficient between the estimates obtained by the different MC methods increases with the number of stochastic realizations adopted and attains values up to 0.99 for 500 stochastic realizations. For the mildly heterogeneous case, the other inverse methods (i.e., non-MC) yielded results which were consistent with
Global inverse modeling of CH4 sources and sinks: an overview of methods
NASA Astrophysics Data System (ADS)
Houweling, Sander; Bergamaschi, Peter; Chevallier, Frederic; Heimann, Martin; Kaminski, Thomas; Krol, Maarten; Michalak, Anna M.; Patra, Prabir
2017-01-01
The aim of this paper is to present an overview of inverse modeling methods that have been developed over the years for estimating the global sources and sinks of CH4. It provides insight into how techniques and estimates have evolved over time and what the remaining shortcomings are. As such, it serves the didactic purpose of introducing newcomers to the field, but it also takes stock of developments so far and reflects on promising new directions. The main focus is on methodological aspects that are particularly relevant for CH4, such as its atmospheric oxidation, the use of methane isotopologues, and specific challenges in atmospheric transport modeling of CH4. The use of satellite retrievals receives special attention, as it is an active field of methodological development with special requirements on the sampling of the model and the treatment of data uncertainty. Regional-scale flux estimation and attribution remains a grand challenge, which calls for new methods capable of combining information from multiple data streams of different measured parameters. A process-model representation of sources and sinks in atmospheric transport inversion schemes allows the integrated use of such data. These new developments are needed not only to improve our understanding of the main processes driving the observed global trend but also to support international efforts to reduce greenhouse gas emissions.
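A common building block of the inversion schemes surveyed here is the linear Bayesian update: given prior fluxes x_a with error covariance B, observations y with error covariance R, and a transport operator H, the posterior estimate is x = x_a + B H^T (H B H^T + R)^(-1) (y - H x_a). A minimal sketch for two sources and one observation, so the matrix inverse collapses to a scalar (all numbers are invented for illustration):

```python
# Prior flux estimates for two hypothetical CH4 source regions
x_a = [1.0, 1.0]   # prior fluxes (arbitrary units)
b   = [0.25, 0.25] # diagonal of the prior error covariance B
h   = [1.0, 1.0]   # transport operator row: the obs sees the flux sum
r   = 0.1          # observation error variance R
y   = 3.0          # observation-derived constraint

# Innovation and its scalar covariance S = H B H^T + R
innovation = y - sum(hi * xi for hi, xi in zip(h, x_a))
s = sum(hi * hi * bi for hi, bi in zip(h, b)) + r

# Kalman-type gain K = B H^T / S, posterior x = x_a + K * innovation
gain = [bi * hi / s for bi, hi in zip(b, h)]
x_post = [xa + k * innovation for xa, k in zip(x_a, gain)]
```

With equal priors and equal sensitivities the single observation cannot separate the two sources, so the correction is split evenly between them, which is exactly the attribution problem the overview identifies at regional scales.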
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational cost and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, thus achieving a better correlation between simulation and test. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and of the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
Inverse Monte Carlo method in a multilayered tissue model for diffuse reflectance spectroscopy.
Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas
2012-04-01
Model based data analysis of diffuse reflectance spectroscopy data enables the estimation of optical and structural tissue parameters. The aim of this study was to present an inverse Monte Carlo method based on spectra from two source-detector distances (0.4 and 1.2 mm), using a multilayered tissue model. The tissue model variables include geometrical properties, light scattering properties, tissue chromophores such as melanin and hemoglobin, oxygen saturation and average vessel diameter. The method utilizes a small set of presimulated Monte Carlo data for combinations of different levels of epidermal thickness and tissue scattering. The path length distributions in the different layers are stored and the effect of the other parameters is added in the post-processing. The accuracy of the method was evaluated using Monte Carlo simulations of tissue-like models containing discrete blood vessels, evaluating blood tissue fraction and oxygenation. It was also compared to a homogeneous model. The multilayer model performed better than the homogeneous model and all tissue parameters significantly improved spectral fitting. Recorded in vivo spectra were fitted well at both distances, which we previously found was not possible with a homogeneous model. No absolute intensity calibration is needed and the algorithm is fast enough for real-time processing.
NASA Astrophysics Data System (ADS)
Goncharsky, Alexander V.; Romanov, Sergey Y.
2017-02-01
We develop efficient iterative methods for solving inverse problems of wave tomography in models incorporating both diffraction effects and attenuation. In the inverse problem the aim is to reconstruct the velocity structure and the function that characterizes the distribution of attenuation properties in the object studied. We prove mathematically and rigorously the differentiability of the residual functional in normed spaces, and derive the corresponding formula for the Fréchet derivative. The computation of the Fréchet derivative includes solving both the direct problem with the Neumann boundary condition and the reversed-time conjugate problem. We develop efficient methods for numerical computations where the approximate solution is found using the detector measurements of the wave field and its normal derivative. The wave field derivative values at detector locations are found by solving the exterior boundary value problem with the Dirichlet boundary conditions. We illustrate the efficiency of this approach by applying it to model problems. The algorithms developed are highly parallelizable and designed to be run on supercomputers. Among the most promising medical applications of our results is the development of ultrasonic tomographs for differential diagnosis of breast cancer.
NASA Astrophysics Data System (ADS)
Mehl, S.; Foglia, L.; Hill, M. C.
2009-12-01
Methods for analyzing inverse modeling results can be separated into two categories: (1) linear methods, such as Cook's D, which are computationally frugal and do not require additional model runs, and (2) nonlinear methods, such as cross validation, which are computationally more expensive because they generally require additional model runs. Depending on the type of nonlinear analysis performed, the additional runs can be the difference between tens of runs and thousands of runs. For example, cross-validation studies require the model to be recalibrated (the regression repeated) for each observation or set of observations analyzed. This can be computationally prohibitive if many observations or sets of observations are investigated and/or the model has many estimated parameters. A tradeoff exists between linear and nonlinear methods, with linear methods being computationally efficient but their results being questioned when models are nonlinear. The tradeoffs between computational efficiency and accuracy are investigated by comparing results from several linear measures of observation importance (for example, Cook's D and DFBETAs) to their nonlinear counterparts based on cross validation. Examples from ground water models of the Maggia Valley in southern Switzerland are used to make comparisons. The models include representation of the stream-aquifer interaction and range from simple to complex, with the associated modified Beale's measure ranging from mildly nonlinear to highly nonlinear, respectively. These results demonstrate the applicability and limitations of linear methods over a range of model complexity and linearity and can be used to better understand when the additional computational burden of nonlinear methods may be necessary.
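As a concrete illustration of the computationally frugal end of that spectrum, Cook's D for simple linear regression needs only the residuals and leverages from a single fit: D_i = (e_i^2 / (p s^2)) * h_ii / (1 - h_ii)^2. A self-contained sketch on synthetic data (the last observation is deliberately discrepant; this is not the Maggia Valley model):

```python
def cooks_d(xs, ys):
    """Cook's D for simple linear regression y = b0 + b1*x (p = 2 parameters)."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    b0 = ybar - b1 * xbar
    resid = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
    p = 2
    s2 = sum(e * e for e in resid) / (n - p)             # residual variance
    lev = [1.0 / n + (x - xbar) ** 2 / sxx for x in xs]  # leverages h_ii
    return [(e * e / (p * s2)) * (h / (1.0 - h) ** 2)
            for e, h in zip(resid, lev)]

xs = list(range(10))
ys = [2.0 * x for x in xs]
ys[9] += 10.0                    # one influential outlier at a high-leverage x
d = cooks_d(xs, ys)
print(max(range(10), key=lambda i: d[i]))  # -> 9: the outlier dominates
```

The entire diagnostic costs one regression, which is why such measures scale to models where each calibration run is expensive; the cross-validation counterpart would refit the model ten times here.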
NASA Astrophysics Data System (ADS)
Olsen, Scott Charles
In this dissertation, new inverse scattering algorithms are derived for the Helmholtz equation using the Extended Born field model (eikonal rescattered field) and the angular spectrum (parabolic) layered field model. These two field models performed best of all the field models evaluated. The algorithms are solved with conjugate gradient methods. An advanced ultrasonic data acquisition system is also designed. Many different field models for use in a reconstruction algorithm are investigated. 'Layered' field models, which mathematically partition the field calculation into layers in space, possess the advantage that the field in layer n is calculated from the field in layer n - 1. Several of the 'layered' field models are investigated in terms of accuracy and computational complexity. Field model accuracy using field rescattering is also tested. The models investigated are the eikonal field model, the angular spectrum (AS) field model, and the parabolic field models known as the Split-Step Fast Fourier Transform and Crank-Nicolson algorithms. All of the 'layered' field models can be referred to as Extended Born field models, since they are more accurate than the Born-approximated total field. The Rescattered Extended Born (eikonal rescattered field) Transmission Mode (REBTM) algorithm with the AS field model and the Nonrescattered AS Reconstruction (NASR) algorithm are tested with several types of objects: a single-layer cylinder, double-layer cylinders, two double-layer cylinders and a breast model. Both algorithms, REBTM and NASR, work well; however, the NASR algorithm is faster and more accurate than the REBTM algorithm, and is well matched to the requirements of breast model reconstructions. A major purpose of the new scanner development is to collect both transmission and reflection data from multiple ultrasonic transducer arrays to test the next generation of reconstruction algorithms. The data acquisition system advanced
NASA Astrophysics Data System (ADS)
Pham, H. V.; Elshall, A. S.; Tsai, F. T.; Yan, L.
2012-12-01
The inverse problem in groundwater modeling deals with a rugged (i.e. ill-conditioned and multimodal), nonseparable and noisy function, since it involves solving second-order nonlinear partial differential equations with forcing terms. Derivative-based optimization algorithms may fail to reach a near-global solution due to stagnation at a local minimum. To avoid entrapment in a local optimum and enhance search efficiency, this study introduces the covariance matrix adaptation evolution strategy (CMA-ES) as a local derivative-free optimization method. In the first part of the study, we compare CMA-ES with five commonly used heuristic methods and the traditional derivative-based Gauss-Newton method on a hypothetical problem. This problem involves four different cases to allow a rigorous assessment against ten criteria: ruggedness in terms of nonsmoothness and multimodality, ruggedness in terms of ill-conditioning and high nonlinearity, nonseparability, high dimensionality, noise, algorithm adaptation, algorithm tuning, performance, consistency, parallelization (scaling with the number of cores) and invariance (of solution vector and function values). The CMA-ES adapts a covariance matrix representing the pair-wise dependency between decision variables, which approximates the inverse of the Hessian matrix up to a certain factor. The solution is updated with the covariance matrix and an adaptable step size, which are adapted through two evolution paths that implement heuristic control terms. The covariance matrix adaptation uses information from the current population of solutions and from the previous search path. Since such an elaborate search mechanism is not common in the other heuristic methods, CMA-ES proves to be more robust than other population-based heuristic methods in terms of reaching a near-optimal solution for a rugged, nonseparable and noisy inverse problem. Other favorable properties that the CMA-ES exhibits are the consistency of the solution for repeated
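A full CMA-ES implementation is beyond the scope of this note, but the family resemblance can be shown with its simplest ancestor, the (1+1) evolution strategy with the 1/5th-success step-size rule: sample a Gaussian mutation, keep it only if it improves the objective, and adapt the step size toward a roughly 20% success rate. CMA-ES extends this by additionally adapting a full covariance matrix via the two evolution paths described above. A hedged sketch on a toy sphere function (not the groundwater misfit):

```python
import random

random.seed(1)

def sphere(x):
    """Toy objective; a rugged groundwater misfit would replace this."""
    return sum(v * v for v in x)

x = [5.0, 5.0]    # starting point
sigma = 1.0       # global step size
fx = sphere(x)
for _ in range(2000):
    cand = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
    fcand = sphere(cand)
    if fcand < fx:            # (1+1) selection: keep only improvements
        x, fx = cand, fcand
        sigma *= 1.22         # expand step on success...
    else:
        sigma *= 0.95         # ...shrink on failure (1.22 * 0.95**4 ~ 1,
                              # so sigma is stationary at ~20% success)
assert fx < 1e-3              # converged close to the optimum at the origin
```

On a multimodal, noisy misfit this isotropic scheme would stall where CMA-ES does not, precisely because it lacks the covariance and path adaptation that the study credits for CMA-ES's robustness.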
NASA Astrophysics Data System (ADS)
Hendricks Franssen, Harrie-Jan; Brunner, Philip; Eugster, Martin; Bauer, Peter; Kinzelbach, Wolfgang
The study area is the Chobe Enclave region in semi-arid Northern Botswana. Growing water demand in the local villages led to the development of different water supply scenarios, one of which uses groundwater from a nearby aquifer. A regional groundwater flow model was established within both a stochastic and a deterministic approach. In principle, recharge can be derived from a surface water balance. The input data for the water balance, evapotranspiration and precipitation, were calculated using remotely sensed data. The calculation of evapotranspiration is based on the surface energy balance using multi-channel images from the Advanced Very High Resolution Radiometer (AVHRR). For several days of the year, actual ET is calculated and compared to station potential ET to yield crop coefficients. The crop coefficients are interpolated in time. Finally, long-term ET is calculated by multiplying the crop coefficients with station potential ET. Precipitation is taken from station data and precipitation maps prepared by USAID using Meteosat images. As surface runoff is small in most of the area, subtracting evapotranspiration from precipitation yields recharge maps for the period 1990-2000. However, the values thus calculated are very inaccurate, as the errors in both the precipitation and evapotranspiration estimates are large. Still, zones of different recharge and probable errors can be identified. The absolute value of the recharge flux in each zone is derived from the chloride method. Alternatively, the recharge flux was also estimated by the sequential self-calibrated method, a stochastic inverse modelling approach based on observed heads and pumping test data. Recharge values and transmissivities are estimated jointly in this method. The recharge zones derived from the water balance, together with their stochastic properties, are used as prior information. The method generates multiple equally likely solutions to the estimation problem and allows assessment of the uncertainty
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Oneida, Erin K.; Shell, Eric B.; Sabbagh, Harold A.; Sabbagh, Elias; Murphy, R. Kim; Mazdiyasni, Siamack; Lindgren, Eric A.; Mooers, Ryan D.
2017-02-01
A model-based calibration process is introduced that estimates the state of the eddy current probe. First, a carefully designed surrogate model was built using VIC-3D® simulations covering the critical range of probe rotation angles, tilt in two directions, and probe offset (liftoff) for both transverse and longitudinal flaw orientations. Some approximations and numerical compromises were made in the model to represent tilt in two directions and to reduce simulation time; however, in experimental verification studies this surrogate model was found to represent well the key trends in the eddy current response for each of the four probe properties. Next, this model was incorporated into an iterative inversion scheme during the calibration process, to estimate the probe state while also addressing the amplitude/phase fit and centering the calibration notch indication. Results are presented showing several examples of the blind estimation of tilt and rotation angle for known experimental cases, with reasonable agreement. Once the probe state is estimated, the final step is to transform the base crack-inversion surrogate model and apply it for crack characterization. Using this process, results are presented demonstrating improved crack inversion performance for extreme probe states.
Wang, Fei; Lin, Qi-zhong; Wang, Qin-jun; Li, Shuai
2011-05-01
The rapid identification of minerals in the field is crucial in remote sensing geology studies and mineral exploration. Characteristic spectrum linear inversion modeling can obtain mineral information quickly in field studies. However, the authors found that there were significant differences among the results of the model when using different kinds of spectra of the same sample. The present paper mainly studied the continuum-based fast Fourier transform (CFFT) processing method and characteristic spectrum linear inversion modeling (CSLM). On one hand, the authors determined the optimal settings of the CFFT method when applying it to rock samples: a low-pass cutoff of 150 Hz. On the other hand, through evaluation of the CSLM results obtained with different spectra, the authors found that ASD spectra denoised with the CFFT method provided better results when used to extract mineral information in the field.
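The CFFT idea, low-pass filtering a spectrum in the Fourier domain to suppress noise before inversion, can be sketched with a naive DFT (stdlib only; the 150 Hz cutoff and the ASD instrument specifics of the paper are not reproduced, and the cutoff below is in bins, purely illustrative):

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (an FFT is used in practice)."""
    n = len(x)
    sign = 1.0 if inverse else -1.0
    out = [sum(x[t] * cmath.exp(sign * 2j * math.pi * k * t / n)
               for t in range(n))
           for k in range(n)]
    return [v / n for v in out] if inverse else out

def lowpass(signal, cutoff_bins):
    """Zero all DFT bins above the cutoff, keeping conjugate symmetry."""
    n = len(signal)
    spec = dft(signal)
    kept = [s if min(k, n - k) <= cutoff_bins else 0.0
            for k, s in enumerate(spec)]
    return [v.real for v in dft(kept, inverse=True)]

n = 64
slow = [math.cos(2 * math.pi * 2 * t / n) for t in range(n)]   # smooth feature
noisy = [s + 0.5 * math.cos(2 * math.pi * 30 * t / n)          # high-freq noise
         for t, s in zip(range(n), slow)]
smoothed = lowpass(noisy, cutoff_bins=5)
assert max(abs(a - b) for a, b in zip(smoothed, slow)) < 1e-9
```

Because the noise here sits entirely above the cutoff, the smooth absorption-like feature is recovered essentially exactly; real reflectance spectra overlap in frequency, which is why the cutoff choice mattered in the study.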
NASA Astrophysics Data System (ADS)
Bellet, Michel; Massoni, Elisabeth; Boude, Serge
2004-06-01
Superplastic forming is a thermoforming-like process commonly applied to titanium and aluminum alloys at high temperature under specific conditions. This paper presents the application of an inverse analysis technique to the identification of rheological and tribological parameters. The method consists of two steps. First, two different kinds of forming tests were carried out for rheological and tribological identification, using specific mold shapes. Accurate instrumentation and measurements were used to build an experimental database (values of appropriate observables). In the second step, an inverse method was developed. It consists of the minimization of an objective function representative of the distance, in a least squares sense, between measured and calculated values of the observables. The algorithm, which is coupled with the finite element model FORGE2®, is based on a Gauss-Newton method, including a sensitivity matrix calculated by the semi-analytical method.
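The Gauss-Newton step used in such identification loops solves the linearized normal equations (J^T J) dp = -J^T r at each iteration, with J the sensitivity matrix of the residuals r with respect to the parameters. A self-contained sketch on a toy two-parameter model y = a*exp(b*t) (the observables, the rheological model and FORGE2® are not reproduced; this only illustrates the iteration):

```python
import math

def gauss_newton(ts, ys, a, b, iters=50):
    """Fit y = a*exp(b*t) by Gauss-Newton on residuals r_i = y_i - a*exp(b*t_i)."""
    for _ in range(iters):
        r = [y - a * math.exp(b * t) for t, y in zip(ts, ys)]
        # Sensitivity matrix: derivatives of r w.r.t. (a, b)
        ja = [-math.exp(b * t) for t in ts]
        jb = [-a * t * math.exp(b * t) for t in ts]
        # Normal equations (J^T J) delta = -J^T r, solved by Cramer's rule (2x2)
        a11 = sum(v * v for v in ja)
        a12 = sum(u * v for u, v in zip(ja, jb))
        a22 = sum(v * v for v in jb)
        g1 = -sum(u * v for u, v in zip(ja, r))
        g2 = -sum(u * v for u, v in zip(jb, r))
        det = a11 * a22 - a12 * a12
        a += (g1 * a22 - a12 * g2) / det
        b += (a11 * g2 - a12 * g1) / det
    return a, b

ts = [0.5 * i for i in range(9)]
ys = [2.0 * math.exp(-0.5 * t) for t in ts]    # synthetic noiseless data
a, b = gauss_newton(ts, ys, a=1.8, b=-0.45)    # start near the solution
assert abs(a - 2.0) < 1e-8 and abs(b + 0.5) < 1e-8
```

In the paper the residual evaluation is a full finite element simulation and J comes from the semi-analytical sensitivity computation, but the outer loop has exactly this shape.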
NASA Astrophysics Data System (ADS)
Shin, Wae-Gyeong; Lee, Soo-Hong
Reliability of automotive parts has been one of the most active fields in the automotive industry. Small DC motors in particular have drawn attention because of their increasing adoption for passenger safety and convenience. This study was performed to develop an accelerated life test method for small DC motors using the inverse power law model. The dominant failure mode of the small DC motor is brush wear-out. The inverse power law model is applied effectively to electronic components to reduce testing time and to establish accelerated test conditions. The accelerated life test was designed to induce brush wear-out by increasing the motor voltage. The life distribution of the small DC motor was assumed to follow a Weibull distribution, and the life test time was calculated under the conditions of B10 life and a 90% confidence level.
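Under the inverse power law, life at stress V is L(V) = A / V^n, so the acceleration factor between the use voltage and an elevated test voltage follows directly. The exponent n below is a hypothetical value; in practice it is estimated from the test data:

```python
def ipl_acceleration_factor(v_use, v_accel, n):
    """Inverse power law: life L(V) = A / V**n, so the acceleration
    factor between use stress and test stress is (V_accel / V_use)**n.
    (Sketch; the exponent n for brush wear must come from test data.)"""
    return (v_accel / v_use) ** n

# e.g. a motor rated at 12 V, tested at 16 V, with an assumed n = 3
af = ipl_acceleration_factor(12.0, 16.0, 3)
equivalent_use_hours = 500 * af    # 500 test hours at 16 V
```

The same relation is what lets an elevated-voltage test compress the time needed to observe brush wear-out at the rated voltage.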
A model-assisted radio occultation data inversion method based on data ingestion into NeQuick
NASA Astrophysics Data System (ADS)
Shaikh, M. M.; Nava, B.; Kashcheyev, A.
2017-01-01
The inverse Abel transform is the most common method to invert radio occultation (RO) data in the ionosphere, and it is based on the assumption of spherical symmetry of the electron density distribution in the vicinity of an occultation event. It is understood that this 'spherical symmetry hypothesis' can fail, above all, in the presence of strong horizontal electron density gradients. As a consequence, erroneous electron density profiles can be obtained in some cases. In this work, in order to incorporate knowledge of horizontal gradients, we suggest an inversion technique based on the adaptation of the empirical ionospheric model NeQuick2 to RO-derived TEC. The method relies on the minimization of a cost function involving experimental and model-derived TEC data to determine NeQuick2 input parameters (effective local ionization parameters) at specific locations and times. These parameters are then used to obtain the electron density profile along the tangent point (TP) positions associated with the relevant RO event using NeQuick2. The main focus of our research has been the mitigation of spherical symmetry effects in RO data inversion without using external data such as global ionospheric maps (GIM). Using RO data from the Constellation Observing System for Meteorology, Ionosphere and Climate (FORMOSAT-3/COSMIC) mission and manually scaled peak density data from a network of ionosondes along the Asian and American longitudinal sectors, we obtained a global improvement of 5%, and of 7% in the Asian longitudinal sector (considering the data used in this work), in the retrieval of peak electron density (NmF2) with model-assisted inversion as compared to the Abel inversion. Mean errors of NmF2 in the Asian longitudinal sector are calculated to be much higher than in the American sector.
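The cost-function step can be illustrated with a minimal grid search for the effective ionization parameter. The linear "model" below is a toy stand-in: the real NeQuick2 dependence of TEC on its drivers is nonlinear and location-dependent:

```python
import numpy as np

def fit_effective_ionization(model_tec, obs_tec, az_grid):
    """Pick the model driver Az minimizing the TEC misfit
    J(Az) = sum((TEC_model(Az) - TEC_obs)^2). A grid search stands in
    for the minimization used with NeQuick2 (hypothetical sketch)."""
    costs = [np.sum((model_tec(az) - obs_tec) ** 2) for az in az_grid]
    return az_grid[int(np.argmin(costs))]

# toy 'model': TEC along three ray paths proportional to Az
obs = np.array([20.0, 22.0, 25.0])
best_az = fit_effective_ionization(
    lambda az: az * np.array([1.0, 1.1, 1.25]),
    obs,
    np.linspace(10.0, 30.0, 201),
)
```

The fitted driver is then what the method feeds back into the model to generate electron density profiles along the tangent-point positions.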
Bledsoe, Keith C.
2015-04-01
The DiffeRential Evolution Adaptive Metropolis (DREAM) method is a powerful optimization/uncertainty quantification tool used to solve inverse transport problems in Los Alamos National Laboratory’s INVERSE code system. The DREAM method has been shown to be adept at accurate uncertainty quantification, but it can be very computationally demanding. Previously, the DREAM method in INVERSE performed a user-defined number of particle transport calculations. This placed a burden on the user to guess the number of calculations that would be required to accurately solve any given problem. This report discusses a new approach that has been implemented into INVERSE, the Gelman-Rubin convergence metric. This metric automatically detects when an appropriate number of transport calculations have been completed and the uncertainty in the inverse problem has been accurately calculated. In a test problem with a spherical geometry, this method was found to decrease the number of transport calculations (and thus time required) to solve a problem by an average of over 90%. In a cylindrical test geometry, a 75% decrease was obtained.
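A minimal sketch of the Gelman-Rubin metric for m parallel chains (the standard formula, not the INVERSE implementation):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for m chains of length n.
    Values near 1 indicate convergence; a common stopping rule is
    R-hat < 1.1 for every estimated parameter."""
    chains = np.asarray(chains)               # shape (m, n)
    m, n = chains.shape
    means = chains.mean(axis=1)
    B = n * means.var(ddof=1)                 # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return float(np.sqrt(var_hat / W))

rng = np.random.default_rng(1)
mixed_chains = rng.normal(0.0, 1.0, size=(4, 1000))  # well-mixed chains
r_hat = gelman_rubin(mixed_chains)
```

For chains sampling the same distribution, R-hat is close to 1; stuck or disagreeing chains inflate the between-chain term B and push R-hat well above 1, signaling that more transport calculations are needed.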
NASA Astrophysics Data System (ADS)
Agata, R.; Ichimura, T.; Hirahara, K.; Hori, T.; Hyodo, M.; Hori, M.
2013-12-01
Many studies have focused on geodetic inversion analysis methods for coseismic slip distribution that combine observation data of coseismic crustal deformation on the ground with simplified crustal models, such as the analytical solution in an elastic half-space (Okada, 1985). On the other hand, displacements on the seafloor or near trench axes due to actual earthquakes have been observed by seafloor observatories (e.g., for the 2011 Tohoku-oki Earthquake (Tohoku Earthquake): Sato et al., 2011; Kido et al., 2011). Also, some studies on tsunamis due to the Tohoku Earthquake indicate that large fault slips near the trench axis may have occurred. These facts suggest that crustal models considering complex geometry and heterogeneity of the material property near the trench axis should be used for geodetic inversion analysis. Therefore, our group has developed a mesh generation method for higher-fidelity finite element models of the Japanese Islands and a fast crustal deformation analysis method for those models. The degree of freedom of the models generated by this method is about 150 million. In this research, the method is extended to inversion analyses of coseismic slip distribution. Since inversion analyses require computation of hundreds of slip response functions due to a unit fault slip assigned to each divided cell on the fault, a parallel computing environment is used. Multiple crustal deformation analyses are run simultaneously in a Message Passing Interface (MPI) job. In the job, dynamic load balancing is implemented so that better parallel efficiency is obtained. Submitting the necessary number of serial jobs of our previous method is also possible, but the proposed method needs less computation time, places less stress on file systems, and allows simpler job management. A method for considering fault slip right near the trench axis is also developed. As the displacement distribution of unit fault slip for computing response function, 3rd order B
NASA Astrophysics Data System (ADS)
Haley, Craig; McLinden, Chris; Sioris, Christopher; Brohede, Samuel
Key to the retrieval of stratospheric minor species information from limb-scatter measurements are the selections of a radiative transfer model (RTM) and inversion method (solver). Here we assess the impact of choice of RTM and solver on the retrievals of stratospheric ozone and nitrogen dioxide from the OSIRIS instrument using the ‘Ozone Triplet' and Differential Optical Absorption Spectroscopy (DOAS) techniques that are used in the operational Level 2 processing algorithms. The RTMs assessed are LIMBTRAN, VECTOR, SCIARAYS, and SASKTRAN. The solvers studied include the Maximum A Posteriori (MAP), Maximum Likelihood (ML), Iterative Least Squares (ILS), and Chahine methods.
NASA Astrophysics Data System (ADS)
Gillet-Chaulet, F.; Gagliardini, O.; Nodet, M.; Ritz, C.; Durand, G.; Zwinger, T.; Seddik, H.; Greve, R.
2010-12-01
About a third of the current sea level rise is attributed to the release of Greenland and Antarctic ice, and their respective contributions have been increasing continuously since the acceleration of their coastal outlet glaciers was first diagnosed a decade ago. Because of the related societal implications, reliable scenarios of ice sheet evolution are needed to constrain the sea level rise forecast for the coming centuries. The quality of the model predictions depends primarily on a good description of the physical processes involved and on a good initial state reproducing the main present-day observations (geometry, surface velocities and, ideally, the trend in elevation change). We model ice dynamics on the whole Greenland ice sheet using the full-Stokes finite element code Elmer. The finite element mesh is generated using the anisotropic mesh adaptation tool YAMS, and shows a high density around the major ice streams. For the initial state, we use an iterative procedure to compute the ice velocities, the temperature field, and the basal sliding coefficient field. The basal sliding coefficient is obtained with an inverse method by minimizing a cost function that measures the misfit between the present-day surface velocities and the modelled surface velocities. We use two inverse methods for this: an inverse Robin problem recently proposed by Arthern and Gudmundsson (J. Glaciol. 2010), and a control method taking advantage of the fact that the Stokes equations are self-adjoint in the particular case of a Newtonian rheology. From the initial states obtained by these two methods, we run transient simulations to evaluate the impact of the initial state of the Greenland ice sheet on its contribution to sea level rise over the next centuries.
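As a toy illustration of the misfit-minimization idea (not the Robin or control methods coupled to the full-Stokes model): with a linear sliding law u = tau_b / beta, the nodewise cost J(beta) = (u(beta) - u_obs)^2 is minimized exactly by beta = tau_b / u_obs:

```python
import numpy as np

def invert_sliding_coefficient(tau_b, u_obs):
    """Toy inversion for the basal sliding coefficient: with a linear
    sliding law u = tau_b / beta, the misfit to observed velocities is
    minimized exactly by beta = tau_b / u_obs at each node. (Sketch
    only; the text's methods solve this through the Stokes equations.)"""
    return np.asarray(tau_b, dtype=float) / np.asarray(u_obs, dtype=float)

# two nodes: slow interior ice vs. a fast outlet glacier
beta = invert_sliding_coefficient([1.0e5, 0.8e5],   # basal stress, Pa
                                  [100.0, 400.0])   # observed speed, m/yr
```

Fast-flowing regions get a low sliding coefficient (weak bed), slow regions a high one, mirroring the spatial pattern such inversions recover around ice streams.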
NASA Astrophysics Data System (ADS)
Stukel, Michael R.; Landry, Michael R.; Ohman, Mark D.; Goericke, Ralf; Samo, Ty; Benitez-Nelson, Claudia R.
2012-03-01
Despite the increasing use of linear inverse modeling techniques to elucidate fluxes in undersampled marine ecosystems, the accuracy with which they estimate food web flows has not been resolved. New Markov Chain Monte Carlo (MCMC) solution methods have also called into question the biases of the commonly used L2 minimum norm (L2MN) solution technique. Here, we test the abilities of MCMC and L2MN methods to recover field-measured ecosystem rates that are sequentially excluded from the model input. For data, we use experimental measurements from process cruises of the California Current Ecosystem (CCE-LTER) Program that include rate estimates of phytoplankton and bacterial production, micro- and mesozooplankton grazing, and carbon export from eight study sites varying from rich coastal upwelling to offshore oligotrophic conditions. Both the MCMC and L2MN methods predicted well-constrained rates of protozoan and mesozooplankton grazing with reasonable accuracy, but the MCMC method overestimated primary production. The MCMC method more accurately predicted the poorly constrained rate of vertical carbon export than the L2MN method, which consistently overestimated export. Results involving DOC and bacterial production were equivocal. Overall, when primary production is provided as model input, the MCMC method gives a robust depiction of ecosystem processes. Uncertainty in inverse ecosystem models is large and arises primarily from solution under-determinacy. We thus suggest that experimental programs focusing on food web fluxes expand the range of experimental measurements to include the nature and fate of detrital pools, which play large roles in the model.
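The L2MN idea can be shown on a tiny underdetermined balance system. The matrix below is illustrative only, not the CCE food-web model; `numpy.linalg.lstsq` returns the minimum-norm solution for underdetermined systems, which is exactly the L2MN estimate, whereas an MCMC approach would sample the whole feasible solution space:

```python
import numpy as np

# Two mass-balance constraints on three unknown food-web flows:
# an underdetermined system A x = b (illustrative numbers).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([10.0, 6.0])

# Minimum-norm (L2MN) solution: the feasible x with smallest ||x||.
x_l2mn, *_ = np.linalg.lstsq(A, b, rcond=None)
balanced = bool(np.allclose(A @ x_l2mn, b))
```

Infinitely many flow vectors satisfy these constraints; L2MN picks the single smallest-norm one, which is one source of the bias the MCMC comparison probes.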
Variational Bayesian Approximation methods for inverse problems
NASA Astrophysics Data System (ADS)
Mohammad-Djafari, Ali
2012-09-01
Variational Bayesian Approximation (VBA) methods are recent tools for effective Bayesian computations. In this paper, these tools are used for inverse problems where the prior models include hidden variables and where the estimation of the hyperparameters also has to be addressed. In particular, two specific prior models (Student-t and mixture-of-Gaussians models) are considered and details of the algorithms are given.
Multiphase inverse modeling: An Overview
Finsterle, S.
1998-03-01
Inverse modeling is a technique to derive model-related parameters from a variety of observations made on hydrogeologic systems, from small-scale laboratory experiments to field tests to long-term geothermal reservoir responses. If properly chosen, these observations contain information about the system behavior that is relevant to the performance of a geothermal field. Estimating model-related parameters and reducing their uncertainty is an important step in model development, because errors in the parameters constitute a major source of prediction errors. This paper contains an overview of inverse modeling applications using the ITOUGH2 code, demonstrating the possibilities and limitations of a formalized approach to the parameter estimation problem.
A Bayesian method for microseismic source inversion
NASA Astrophysics Data System (ADS)
Pugh, D. J.; White, R. S.; Christie, P. A. F.
2016-08-01
Earthquake source inversion is highly dependent on location determination and velocity models. Uncertainties in both the model parameters and the observations need to be rigorously incorporated into an inversion approach. Here, we show a probabilistic Bayesian method that allows formal inclusion of the uncertainties in the moment tensor inversion. This method allows the combination of different sets of far-field observations, such as P-wave and S-wave polarities and amplitude ratios, into one inversion. Additional observations can be included by deriving a suitable likelihood function from the uncertainties. This inversion produces samples from the source posterior probability distribution, including a best-fitting solution for the source mechanism and associated probability. The inversion can be constrained to the double-couple space or allowed to explore the gamut of moment tensor solutions, allowing volumetric and other non-double-couple components. The posterior probability of the double-couple and full moment tensor source models can be evaluated from the Bayesian evidence, using samples from the likelihood distributions for the two source models, producing an estimate of whether or not a source is double-couple. Such an approach is ideally suited to microseismic studies where there are many sources of uncertainty and it is often difficult to produce reliability estimates of the source mechanism, although this can be true of many other cases. Using full-waveform synthetic seismograms, we also show the effects of noise, location, network distribution and velocity model uncertainty on the source probability density function. The noise has the largest effect on the results, especially as it can affect other parts of the event processing. This uncertainty can lead to erroneous non-double-couple source probability distributions, even when no other uncertainties exist. Although including amplitude ratios can improve the constraint on the source probability
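The evidence-based comparison of the double-couple and full moment tensor models can be sketched with a simple Monte Carlo evidence estimate over likelihood samples. The numbers below are synthetic, not real seismic data:

```python
import numpy as np

def log_evidence(log_likelihoods):
    """Monte Carlo estimate of the model evidence from samples drawn
    from the prior: Z ~= mean(likelihood), computed in log space for
    numerical stability (a sketch of the comparison described above)."""
    ll = np.asarray(log_likelihoods, dtype=float)
    return float(np.logaddexp.reduce(ll) - np.log(ll.size))

rng = np.random.default_rng(2)
ll_dc = rng.normal(-10.0, 0.1, 5000)    # hypothetical double-couple fits
ll_full = rng.normal(-12.0, 0.1, 5000)  # hypothetical full-MT fits

# Bayes factor > 1 favors the double-couple source model here.
bayes_factor = float(np.exp(log_evidence(ll_dc) - log_evidence(ll_full)))
```

Note that the evidence automatically penalizes the extra parameters of the full moment tensor: only a genuinely better fit overcomes the larger prior volume.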
NASA Astrophysics Data System (ADS)
Shonkwiler, K. B.; Ham, J. M.; Williams, C.
2012-12-01
Panorama of a weather station (left) utilizing micrometeorological methods to aid in estimating emissions of methane and ammonia from an anaerobic livestock lagoon (center) at a commercial dairy in Northern Colorado, USA.
Relative risk regression models with inverse polynomials.
Ning, Yang; Woodward, Mark
2013-08-30
The proportional hazards model assumes that the log hazard ratio is a linear function of parameters. In the current paper, we model the log relative risk as an inverse polynomial, which is particularly suitable for modeling bounded and asymmetric functions. The parameters estimated by maximizing the partial likelihood are consistent and asymptotically normal. The advantages of the inverse polynomial model over the ordinary polynomial model and the fractional polynomial model for fitting various asymmetric log relative risk functions are shown by simulation. The utility of the method is further supported by analyzing two real data sets, addressing the specific question of the location of the minimum risk threshold.
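A first-order inverse polynomial for the log relative risk has the form g(x) = x / (beta1 + beta2 x), which is bounded (approaching 1/beta2 for large x) and asymmetric, unlike an ordinary polynomial. The parameter values below are hypothetical:

```python
import numpy as np

def log_relative_risk(x, beta1, beta2):
    """First-order inverse polynomial for the log relative risk,
    g(x) = x / (beta1 + beta2 * x): zero at x = 0, monotone, and
    saturating at 1 / beta2 (hypothetical parameter values below)."""
    x = np.asarray(x, dtype=float)
    return x / (beta1 + beta2 * x)

x = np.linspace(0.0, 50.0, 6)
g = log_relative_risk(x, 2.0, 0.5)          # asymptote at 1 / 0.5 = 2
is_monotone = bool(np.all(np.diff(g) > 0))
stays_bounded = bool(np.all(g < 2.0))
```

This bounded, saturating shape is what makes the family suited to asymmetric dose-response curves with a plateau, where ordinary polynomials diverge.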
Tsunami waveform inversion by adjoint methods
NASA Astrophysics Data System (ADS)
Pires, Carlos; Miranda, Pedro M. A.
2001-09-01
An adjoint method for tsunami waveform inversion is proposed, as an alternative to the technique based on Green's functions of the linear long wave model. The method has the advantage of being able to use the nonlinear shallow water equations, or other appropriate equation sets, and to optimize an initial state given as a linear or nonlinear function of any set of free parameters. This last facility is used to perform explicit optimization of the focal fault parameters, characterizing the initial sea surface displacement of tsunamigenic earthquakes. The proposed methodology is validated with experiments using synthetic data, showing the possibility of recovering all relevant details of a tsunami source from tide gauge observations, provided that the adjoint method is constrained in an appropriate manner. It is found, as in other methods, that the inversion skill of tsunami sources increases with the azimuthal and temporal coverage of assimilated tide gauge stations; furthermore, it is shown that the eigenvalue analysis of the Hessian matrix of the cost function provides a consistent and useful methodology to choose the subset of independent parameters that can be inverted with a given dataset of observations and to evaluate the error of the inversion process. The method is also applied to real tide gauge series, from the tsunami of the February 28, 1969, Gorringe Bank earthquake, suggesting some reasonable changes to the assumed focal parameters of that event. It is suggested that the method proposed may be able to deal with transient tsunami sources such as those generated by submarine landslides.
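The Hessian eigenvalue analysis mentioned above can be sketched with a toy sensitivity matrix (values chosen purely for illustration). Directions with large eigenvalues are well constrained by the data; near-zero eigenvalues mark parameter combinations the inversion cannot resolve:

```python
import numpy as np

# Toy sensitivity matrix J of observations w.r.t. three parameters;
# the Gauss-Newton approximation of the cost-function Hessian is J^T J.
# Columns 1 and 2 are nearly proportional (a degenerate parameter pair),
# and column 3 barely affects the data.
J = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0],
              [0.0, 0.0, 1e-4]])
H = J.T @ J
eigvals, eigvecs = np.linalg.eigh(H)

# Keep only modes whose eigenvalue is a meaningful fraction of the largest.
resolvable = eigvals > 1e-6 * eigvals.max()
n_resolvable = int(resolvable.sum())
```

Here only one combination of the three parameters is invertible with the given data, which is precisely the kind of diagnosis the eigenvalue analysis provides for the fault-parameter subset selection.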
Inversion methods for interpretation of asteroid lightcurves
NASA Technical Reports Server (NTRS)
Kaasalainen, Mikko; Lamberg, L.; Lumme, K.
1992-01-01
We have developed methods of inversion that can be used in the determination of the three-dimensional shape or the albedo distribution of the surface of a body from disk-integrated photometry, assuming the shape to be strictly convex. In addition to the theory of inversion methods, we have studied the practical aspects of the inversion problem and applied our methods to lightcurve data of 39 Laetitia and 16 Psyche.
Error handling strategies in multiphase inverse modeling
Finsterle, S.; Zhang, Y.
2010-12-01
Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
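One common mitigation for non-normal error structures is a robust re-weighting of residuals; the Huber scheme below is a generic sketch of such a strategy, not iTOUGH2's actual implementation:

```python
import numpy as np

def huber_weights(residuals, k=1.345):
    """Robust weights that downweight outliers: the implied loss is
    quadratic for |r| <= k and linear beyond, so large systematic or
    non-normal errors do not dominate the least-squares objective.
    (Generic sketch of one error-handling strategy.)"""
    r = np.abs(np.asarray(residuals, dtype=float))
    w = np.ones_like(r)
    mask = r > k
    w[mask] = k / r[mask]
    return w

# standardized residuals: two well-behaved points and one gross outlier
w = huber_weights([0.1, -0.5, 8.0])
```

In an iteratively reweighted least-squares loop, these weights multiply each residual's contribution, limiting the bias a few bad data points can induce in the estimated parameters.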
Oh, Geok Lian; Brunskog, Jonas
2014-08-01
Techniques have been studied for the localization of an underground source with seismic interrogation signals. Much of the work has involved fitting either a P-wave acoustic model or a dispersive surface-wave model to the received signal and applying time-delay processing and frequency-wavenumber processing to determine the location of the underground tunnel. For the problem of locating an underground tunnel, this paper proposes two physical models, an acoustic-approximation ray-tracing model and a finite-difference time-domain (FDTD) three-dimensional (3D) elastic wave model, to represent the received seismic signal. Two localization algorithms, beamforming and Bayesian inversion, are developed for each physical model. The beamforming algorithms implemented are the modified time-and-delay beamformer and the F-K beamformer. Inversion is posed as an optimization problem that estimates the unknown position variable using the physical forward models described. The four proposed methodologies are demonstrated and compared using seismic signals, generated by a surface seismic excitation, recorded by geophones set up on the ground surface. The examples show that, for field data, inversion-based localization is most advantageous when the forward model completely describes all the elastic wave components, as is the case for the FDTD 3D elastic model.
An exact inverse method for subsonic flows
NASA Technical Reports Server (NTRS)
Daripa, Prabir
1988-01-01
A new inverse method for the aerodynamic design of airfoils is presented for subcritical flows. The pressure distribution in this method can be prescribed as a function of the arclength of the still unknown body. It is shown that this inverse problem is mathematically equivalent to solving only one nonlinear boundary value problem subject to known Dirichlet data on the boundary.
FNAS/Rapid Spectral Inversion Methods
NASA Technical Reports Server (NTRS)
Poularikas, Alexander
1997-01-01
The purpose of this investigation was to study methods and ways for rapid inversion programs involving the correlated k-method, and to study the infrared observations of Saturn from the Cassini orbiter.
Model Selection for Spectropolarimetric Inversions
Asensio Ramos, A.; Manso Sainz, R.; Martinez Gonzalez, M. J.; Socas-Navarro, H.; Viticchie, B.
2012-04-01
Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles) but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has, sometimes, led to opposing views of the solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.
Modelling and inversion -progress, problems, and challenges
NASA Astrophysics Data System (ADS)
Raiche, Art
1994-03-01
Researchers in the field of electromagnetic modelling and inversion have taken advantage of the impressive improvements of new computer hardware to explore exciting new initiatives and solid extensions of older ideas. Finite-difference time-stepping methods have been successfully applied to full-domain 3D models. Another new method combines time-stepping with spatial frequency solutions. The 2D model 3D source (2.5D) problem is also receiving fresh attention both for continental and sea floor applications. The 3D inversion problem is being attacked by several researchers using distorted Born approximation methods. Q-domain inversions using transformation to pseudo-wave field and travel time tomography have also been successfully tested for low contrast problems. Subspace methods have been successful in dramatically reducing the computational burden of the under-determined style of inversion. Static magnetic field interpretation methods are proving useful for delineating the position of closely-spaced multiple targets. Novel (“appeals to nature”) methods are also being investigated. Neural net algorithms have been tested for determining the depth and offset of buried pipes from EM ellipticity data. Genetic algorithms and simulated annealing have been tested for extremal model construction. The failure of researchers to take adequate account of the properties of the mathematical transformation from algorithms to the number domain represented by the computing process remains a major stumbling block. Structured programming, functional languages, and other software tools and methods are presented as an essential part of the serial process leading from EM theory to geological interpretation.
Methodology Using Inverse Methods for Pit Characterization in Multilayer Structures
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Sabbagh, Harold A.; Sabbagh, Elias H.; Murphy, R. Kim; Concordia, Michael; Judd, David R.; Lindgren, Eric; Knopp, Jeremy
2006-03-01
This paper presents a methodology incorporating ultrasonic and eddy current data and NDE models to characterize pits in first and second layers. Approaches such as equivalent pit dimensions, approximate probe models, and iterative inversion schemes were designed to improve the reliability and speed of inverse methods for second layer pit characterization. A novel clutter removal algorithm was developed to compensate for coherent background noise. Validation was achieved using artificial and real pitting corrosion samples.
An inversion method for cometary atmospheres
NASA Astrophysics Data System (ADS)
Hubert, B.; Opitom, C.; Hutsemékers, D.; Jehin, E.; Munhoven, G.; Manfroid, J.; Bisikalo, D. V.; Shematovich, V. I.
2016-10-01
Remote observation of cometary atmospheres produces a measurement of the cometary emissions integrated along the line of sight. This integration is the so-called Abel transform of the local emission rate. The observation is generally interpreted under the hypothesis of spherical symmetry of the coma. Under that hypothesis, the Abel transform can be inverted. We derive a numerical inversion method adapted to cometary atmospheres using both analytical results and least squares fitting techniques. This method, derived under the usual hypothesis of spherical symmetry, allows us to retrieve the radial distribution of the emission rate of any unabsorbed emission, which is the fundamental, physically meaningful quantity governing the observation. A Tikhonov regularization technique is also applied to reduce the possibly deleterious effects of the noise present in the observation and to warrant that the problem remains well posed. Standard error propagation techniques are included in order to estimate the uncertainties affecting the retrieved emission rate. Several theoretical tests of the inversion techniques are carried out to show its validity and robustness. In particular, we show that the Abel inversion of real data is only weakly sensitive to an offset applied to the input flux, which implies that the method, applied to the study of a cometary atmosphere, is only weakly dependent on uncertainties on the sky background which has to be subtracted from the raw observations of the coma. We apply the method to observations of three different comets observed using the TRAPPIST telescope: 103P/Hartley 2, F6/Lemmon and A1/Siding Spring. We show that the method retrieves realistic emission rates, and that characteristic lengths and production rates can be derived from the emission rate for both CN and C2 molecules. We show that the retrieved characteristic lengths can differ from those obtained from a direct least squares fitting over the observed flux of radiation, and
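A minimal numerical sketch of a regularized Abel inversion, using a simple onion-peeling discretization of the forward transform. This is an illustrative stand-in, not the paper's combination of analytical results and weighted least-squares fitting:

```python
import numpy as np

def abel_matrix(r):
    """Onion-peeling discretization of the Abel transform for a
    spherically symmetric emission rate sampled on radii r: row i gives
    the line-of-sight integral along a chord at impact parameter r[i],
    assuming the emission rate is constant within each shell."""
    n = r.size
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n - 1):
            # chord length through the shell r[j]..r[j+1] at impact r[i]
            A[i, j] = 2.0 * (np.sqrt(r[j + 1] ** 2 - r[i] ** 2)
                             - np.sqrt(r[j] ** 2 - r[i] ** 2))
    return A

def tikhonov_invert(A, y, lam):
    """Regularized solve of A x = y: minimize ||Ax - y||^2 + lam ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

r = np.linspace(1.0, 10.0, 40)
eps_true = np.exp(-r / 3.0)                 # toy emission-rate profile
y = abel_matrix(r) @ eps_true               # synthetic observed fluxes
eps_rec = tikhonov_invert(abel_matrix(r), y, 1e-8)

# outermost shell is unconstrained by this discretization; compare the rest
max_err = float(np.max(np.abs(eps_rec[:-1] - eps_true[:-1])))
```

The regularization term keeps the solve well posed even though the discretized operator is singular at the outer boundary, mirroring the role Tikhonov regularization plays against noise in the real data.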
Wake Vortex Inverse Model User's Guide
NASA Technical Reports Server (NTRS)
Lai, David; Delisi, Donald
2008-01-01
NorthWest Research Associates (NWRA) has developed an inverse model for inverting landing aircraft vortex data. The data used for the inversion are the time evolution of the lateral transport position and vertical position of both the port and starboard vortices. The inverse model performs iterative forward model runs using various estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Forward model predictions of lateral transport and altitude are then compared with the observed data. Differences between the data and model predictions guide the choice of vortex parameter values, crosswind profile and circulation evolution in the next iteration. Iterations are performed until a user-defined criterion is satisfied. Currently, the inverse model is set to stop when the improvement in the rms deviation between the data and model predictions is less than 1 percent for two consecutive iterations. The forward model used in this inverse model is a modified version of the Shear-APA model. A detailed description of this forward model, the inverse model, and its validation are presented in a different report (Lai, Mellman, Robins, and Delisi, 2007). This document is a User's Guide for the Wake Vortex Inverse Model. Section 2 presents an overview of the inverse model program. Execution of the inverse model is described in Section 3. When executing the inverse model, a user is requested to provide the name of an input file which contains the inverse model parameters, the various datasets, and directories needed for the inversion. A detailed description of the list of parameters in the inversion input file is presented in Section 4. A user has an option to save the inversion results of each lidar track in a mat-file (a condensed data file in Matlab format). These saved mat-files can be used for post-inversion analysis. A description of the contents of the saved files is given in Section 5. An example of an inversion input
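The stopping criterion described (improvement in the rms deviation below 1 percent for two consecutive iterations) can be expressed as a small helper; this is a sketch of the rule as stated, not NWRA's code:

```python
def should_stop(rms_history, tol=0.01, patience=2):
    """Stopping rule mirroring the one described: halt when the relative
    improvement in rms misfit is below tol for `patience` consecutive
    iterations (sketch of the documented criterion)."""
    if len(rms_history) < patience + 1:
        return False
    recent = rms_history[-(patience + 1):]
    return all((recent[i] - recent[i + 1]) / recent[i] < tol
               for i in range(patience))

stop_now = should_stop([10.0, 5.0, 4.0, 3.98, 3.97])  # two flat steps
keep_going = should_stop([10.0, 5.0, 4.0])            # still improving
```

Making the rule depend on two consecutive iterations guards against halting on a single accidentally flat step.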
NASA Astrophysics Data System (ADS)
Healy, D.; Kusznir, N.
2004-05-01
Recent discoveries of depth-dependent stretching and mantle exhumation at rifted continental margins require new models of margin formation. A two-dimensional coupled fluid mechanics/thermal kinematic model of sea-floor spreading initiation has been developed to predict the deformational and thermal evolution of rifted continental margins through time. The model can also include the effects of pre-breakup pure-shear stretching of continental lithosphere. Rifted margin lithosphere thinning and thermal evolution are dependent on ocean-ridge spreading rate (Vx), the mantle upwelling velocity beneath the ridge axis (Vz), and the pre-breakup lithosphere stretching factor (a). The model predicts the thinning of the upper crust, lower crust and lithospheric mantle of the continental margin, and the history of rifted margin subsidence, water depths and top basement heat-flow. We apply inverse methods to this new forward model of rifted margin formation to explore how successfully model input parameters may be extracted from observational data at rifted margins. The ability of the inverse method to find a unique solution has been established using synthetic data from forward modelling. Output parameters from the inversion are the horizontal and vertical velocities of sea-floor spreading, their variation with time, and the initial pre-breakup lithosphere stretching factor. Initial inversion tests used forward model predictions of the stretching of the upper crust, the whole crust and the whole lithosphere. These model predictions control the variation of crustal thickness and lithosphere temperature beneath the thinned continental margin and adjacent ocean, which in turn control margin subsidence and gravity anomaly. For application of the inversion procedure to observed data on rifted margins, the input data used are measured bathymetry, sediment thickness, gravity anomaly and upper crustal stretching. The forward problem is characterised by a non-linear relationship between
Walter, Donald A.; LeBlanc, Denis R.
2008-01-01
Historical weapons testing and disposal activities at Camp Edwards, which is located on the Massachusetts Military Reservation, western Cape Cod, have resulted in the release of contaminants into an underlying sand and gravel aquifer that is the sole source of potable water to surrounding communities. Ground-water models have been used at the site to simulate advective transport in the aquifer in support of field investigations. Reasonable models developed by different groups and calibrated by trial and error often yield different predictions of advective transport, and the predictions lack quantitative measures of uncertainty. A recently (2004) developed regional model of western Cape Cod, modified to include the sensitivity and parameter-estimation capabilities of MODFLOW-2000, was used in this report to evaluate the utility of inverse (statistical) methods to (1) improve model calibration and (2) assess model-prediction uncertainty. Simulated heads and flows were most sensitive to recharge and to the horizontal hydraulic conductivity of the Buzzards Bay and Sandwich Moraines and the Buzzards Bay and northern parts of the Mashpee outwash plains. Conversely, simulated heads and flows were much less sensitive to vertical hydraulic conductivity. Parameter estimation (inverse calibration) improved the match to observed heads and flows; the absolute mean residual for heads improved by 0.32 feet and the absolute mean residual for streamflows improved by about 0.2 cubic feet per second. Advective-transport predictions in Camp Edwards generally were most sensitive to the parameters with the highest precision (lowest coefficients of variation), indicating that the numerical model is adequate for evaluating prediction uncertainties in and around Camp Edwards. The incorporation of an advective-transport observation, representing the leading edge of a contaminant plume that had been difficult to match by using trial-and-error calibration, improved the match between an
Abel inversion method for cometary atmospheres.
NASA Astrophysics Data System (ADS)
Hubert, Benoit; Opitom, Cyrielle; Hutsemekers, Damien; Jehin, Emmanuel; Munhoven, Guy; Manfroid, Jean; Bisikalo, Dmitry V.; Shematovich, Valery I.
2016-04-01
Remote observation of cometary atmospheres produces a measurement of the cometary emissions integrated along the line of sight joining the observing instrument and the gas of the coma. This integration is the so-called Abel transform of the local emission rate. We develop a method specifically adapted to the inversion of the Abel transform of cometary emissions, which retrieves the radial profile of the emission rate of any unabsorbed emission under the hypothesis of spherical symmetry of the coma. The method uses weighted least squares fitting and analytical results. A Tikhonov regularization technique is applied to reduce the possible effects of noise and ill-conditioning, and standard error propagation techniques are implemented. Several theoretical tests of the inversion technique are carried out to show its validity and robustness, and show that the method is only weakly dependent on any constant offset added to the data, which reduces the dependence of the retrieved emission rate on the background subtraction. We apply the method to observations of three comets observed with the TRAPPIST instrument: 103P/Hartley 2, F6/Lemmon and A1/Siding Spring. We show that the method retrieves realistic emission rates, and that characteristic lengths and production rates can be derived from the emission rate for both CN and C2 molecules. We show that the emission rates derived from the observed flux of the CN emission at 387 nm and of the C2 emission at 514.1 nm of comet Siding Spring both present an easily identifiable shoulder that corresponds to the separation between pre- and post-outburst gas. As a general result, we show that diagnosing properties and features of the coma using the emission rate is easier than directly using the observed flux. We also determine the parameters of a Haser model fitting the inverted data and fitting the line-of-sight integrated observation, for which we provide the exact analytical expression of the line-of-sight integration
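The core of such a procedure, discretize the forward Abel transform and then solve a Tikhonov-regularized least-squares problem, can be sketched as follows. The radial grid, the exponential test profile, the noise level and the regularization weight are all illustrative choices, not the paper's (which uses weighted least squares and analytical kernels):

```python
import numpy as np

def abel_matrix(r):
    """Midpoint-rule discretization of the forward Abel transform
    F(y_i) = 2 * integral_{y_i}^{inf} f(r) r / sqrt(r^2 - y_i^2) dr."""
    n, dr = len(r), r[1] - r[0]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):              # only r_j > y_i contributes
            A[i, j] = 2.0 * r[j] / np.sqrt(r[j] ** 2 - r[i] ** 2) * dr
    return A

def tikhonov_invert(A, b, lam):
    """Regularized least squares: argmin ||A x - b||^2 + lam ||x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

r = np.linspace(0.0, 10.0, 200)
f_true = np.exp(-r)                            # hypothetical radial emission rate
A = abel_matrix(r)
b = A @ f_true                                 # line-of-sight integrated "observation"
b += 0.005 * np.random.default_rng(0).normal(size=b.size)   # measurement noise
f_rec = tikhonov_invert(A, b, lam=1e-3)
```

Without the regularization term the solve becomes badly noise-amplifying, which is the ill-conditioning the abstract refers to.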
NASA Astrophysics Data System (ADS)
Loubet, Benjamin; Carozzi, Marco
2015-04-01
Tropospheric ammonia (NH3) is a key player in atmospheric chemistry and its deposition is a threat to the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most global NH3 emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emissions from a homogeneous source of known geometry. When the emission derives from several sources inside the measured footprint, it should be treated as a multi-source problem. This work aims at estimating whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments, composed of small fields (typically squares of 25 m side) located close to each other, using low-cost NH3 measurements (diffusion samplers). To that end, a numerical experiment was designed with a combination of 3 x 3 square field sources (625 m2) and a set of sensors placed at the centre of each field at several heights, as well as at 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian stochastic (WindTrax) and a Gaussian-like (FIDES) dispersion model. The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the diffusion sampler behaviour under several sampling strategies. The sources were then inferred by inverse modelling using the averaged concentrations and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of the NH3 emissions to surface temperature. A combination of emission patterns (constant, linear decreasing, exponential decreasing and Gaussian type) and strengths was used to evaluate the uncertainty of the inversion method. Each numerical experiment covered a period of 28
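Once the dispersion model has been run forward, the multi-source inference step reduces to a linear system: measured concentrations = dispersion matrix x emissions. A minimal numerical sketch, with a random stand-in for the WindTrax/FIDES dispersion matrix and illustrative sensor counts:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dispersion matrix: D[i, j] = time-averaged concentration at
# sensor i per unit emission from field j.  In practice each entry comes from
# a forward run of a Lagrangian stochastic (WindTrax) or Gaussian (FIDES) model.
n_sensors, n_sources = 13, 9        # 9 field-centre sensors + 4 distant masts
D = rng.uniform(0.1, 1.0, size=(n_sensors, n_sources))

e_true = rng.uniform(0.0, 5.0, size=n_sources)   # unknown emission strengths
c = D @ e_true                                   # time-averaged concentrations
c += 1e-3 * rng.normal(size=n_sensors)           # diffusion-sampler noise

# Multi-source inversion: least-squares solution of the overdetermined system
e_est, *_ = np.linalg.lstsq(D, c, rcond=None)
```

Having more sensors than sources keeps the system overdetermined, which is what makes the least-squares inversion stable against sampler noise.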
NASA Technical Reports Server (NTRS)
Fleming, H. E.
1977-01-01
Linear numerical inversion methods applied to atmospheric remote sounding generally can be categorized in two ways: (1) iterative, and (2) inverse matrix methods. However, these two categories are not unrelated; a duality exists between them. In other words, given an iterative scheme, a corresponding inverse matrix method exists, and conversely. This duality concept is developed for the more familiar linear methods. The iterative duals are compared with the classical linear iterative approaches and their differences analyzed. The importance of the initial profile in all methods is stressed. Calculations using simulated data are made to compare accuracies and to examine the dependence of the solution on the initial profile.
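The duality can be demonstrated on the simplest linear scheme: k steps of a Landweber-type iteration started from a zero profile are algebraically identical to applying one explicit "inverse matrix" to the data. A small sketch with a random test matrix (not an atmospheric weighting-function kernel):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 5))                 # stand-in linear forward operator
b = rng.normal(size=8)                      # stand-in measurements
omega = 1.0 / np.linalg.norm(A, 2) ** 2     # step size guaranteeing convergence
k = 25

# Iterative form: Landweber iteration starting from the zero initial profile
x = np.zeros(5)
for _ in range(k):
    x = x + omega * A.T @ (b - A @ x)

# Dual inverse-matrix form: the same k steps collapsed into one matrix M
I = np.eye(5)
M = omega * sum(np.linalg.matrix_power(I - omega * A.T @ A, j)
                for j in range(k)) @ A.T
x_dual = M @ b
```

The two results agree to machine precision, which is the duality in miniature; with a nonzero initial profile the explicit form simply gains an extra filtered term, which is why the initial profile matters in both families of methods.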
NASA Inverse Methods/Data Assimilation
NASA Technical Reports Server (NTRS)
Bennett, Andrew
2003-01-01
An overview of NASA's Third International Summer School on Inverse Methods and Data Assimilation which was conducted at Oregon State University from July 22 to August 2, 2002, is presented. Items listed include: a roster of attendees, a description of course content and talks given.
NASA Astrophysics Data System (ADS)
Xue, Haile; Shen, Xueshun; Chou, Jifan
2015-11-01
An online systematic error correction is presented and examined as a technique to improve the accuracy of real-time numerical weather prediction, based on a dataset of model errors (MEs) over past intervals. Given the analyses, the ME in each interval (6 h) between two analyses can be iteratively obtained by introducing an unknown tendency term into the prediction equation, as shown in Part I of this two-paper series. In this part, after analyzing the 5-year (2001-2005) GRAPES-GFS (Global Forecast System of the Global and Regional Assimilation and Prediction System) error patterns and their evolution, a systematic model error correction is derived with a least-squares approach using the past MEs. To test the correction, we applied the approach in GRAPES-GFS for July 2009 and January 2010. The initial condition and SST datasets used in this study were based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results indicated that the systematically underestimated equator-to-pole geopotential gradient and westerly winds of GRAPES-GFS in the Northern Hemisphere were largely corrected, and the biases of temperature and wind in the tropics were strongly reduced. The correction therefore yields a more skillful forecast, with lower mean bias and root-mean-square error and a higher anomaly correlation coefficient.
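For a constant systematic component, the least-squares estimate over the archive of past MEs reduces to the archive mean, which is then removed from the raw forecast. A toy sketch with synthetic errors; all sizes, seeds and noise levels are illustrative, not GRAPES-GFS values:

```python
import numpy as np

rng = np.random.default_rng(2)
n_past, n_grid = 200, 50

# Archive of past 6-h model errors: an unknown systematic component plus
# flow-dependent "random" error (all numbers are made up for illustration).
sys_err_true = rng.normal(size=n_grid)
past_errors = sys_err_true + 0.3 * rng.normal(size=(n_past, n_grid))

# Least-squares estimate of the constant systematic tendency:
# argmin_b sum_t ||e_t - b||^2 is solved exactly by the archive mean.
sys_err_est = past_errors.mean(axis=0)

# Online correction: subtract the estimated systematic error from a forecast
raw_forecast = rng.normal(size=n_grid)
corrected_forecast = raw_forecast - sys_err_est
```

The random part of the error averages out at a rate of 1/sqrt(n_past), so the longer the archive, the cleaner the systematic estimate.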
Inverse hydrochemical models of aqueous extracts tests
Zheng, L.; Samper, J.; Montenegro, L.
2008-10-10
The aqueous extract test is a laboratory technique commonly used to measure the amount of soluble salts in a soil sample after adding a known mass of distilled water. Measured aqueous extract data have to be re-interpreted in order to infer the porewater chemical composition of the sample, because the porewater chemistry changes significantly due to dilution and to chemical reactions taking place during extraction. Here we present an inverse hydrochemical model to estimate porewater chemical composition from measured water content, aqueous extract, and mineralogical data. The model accounts for acid-base, redox, aqueous complexation, mineral dissolution/precipitation, gas dissolution/exsolution, cation exchange and surface complexation reactions, all of which are assumed to take place at local equilibrium. It has been solved with INVERSE-CORE{sup 2D} and tested with bentonite samples taken from the FEBEX (Full-scale Engineered Barrier EXperiment) in situ test. The inverse model reproduces most of the measured aqueous data except bicarbonate and provides an effective, flexible and comprehensive method to estimate the porewater chemical composition of clays. The main uncertainties are related to kinetic calcite dissolution and variations in CO2(g) pressure.
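For a conservative species that does not react during extraction, the re-interpretation reduces to a simple dilution mass balance; reactive species (such as the bicarbonate mentioned above) need the full inverse geochemical model. An illustrative back-calculation with made-up numbers:

```python
# Dilution mass balance for a conservative (non-reacting) species:
# the moles in the extract equal the moles originally in the porewater.

def porewater_conc(c_extract, water_content, m_dry, m_water_added):
    """c_extract: extract concentration (mol per kg of water);
    water_content: g of porewater per g of dry soil (gravimetric);
    m_dry: dry sample mass (g); m_water_added: distilled water added (g)."""
    m_pore = water_content * m_dry                    # porewater mass in sample
    dilution = (m_pore + m_water_added) / m_pore      # dilution factor
    return c_extract * dilution

# Hypothetical sample: 20 g dry mass at 14 % water content, extracted
# with 80 g of distilled water, chloride measured at 0.01 mol/kg.
c_pore = porewater_conc(c_extract=0.01, water_content=0.14,
                        m_dry=20.0, m_water_added=80.0)
```

The large dilution factor (here about 30) is exactly why the raw extract data cannot be read as porewater composition directly.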
Geoacoustic model inversion using artificial neural networks
NASA Astrophysics Data System (ADS)
Benson, Jeremy; Chapman, N. Ross; Antoniou, Andreas
2000-12-01
An inversion technique using artificial neural networks (ANNs) is described for estimating geoacoustic model parameters of the ocean bottom and information about the sound source from acoustic field data. The method is applied to transmission loss data from the TRIAL SABLE experiment that was carried out in shallow water off Nova Scotia. The inversion is designed to incorporate the a priori information available for the site in order to improve the estimation accuracy. The inversion scheme involves training feedforward ANNs to estimate the geoacoustic and geometric parameters using simulated input/output training pairs generated with a forward acoustic propagation model. The inputs to the ANNs are the spectral components of the transmission loss at each sensor of a vertical hydrophone array for the two lowest frequencies that were transmitted in the experiment, 35 and 55 Hz. The output is the set of environmental model parameters, both geometric and geoacoustic, corresponding to the received field. In order to decrease the training time, a separate network was trained for each parameter. The errors for the parallel estimation are 10% lower than those obtained using a single network to estimate all the parameters simultaneously, and the training time is decreased by a factor of six. When the experimental data are presented to the ANNs, the geometric parameters, such as source range and depth, are estimated with high accuracy. Geoacoustic parameters, such as the compressional speed in the sediment and the sediment thickness, are found with moderate accuracy.
Geophysical Inversion With Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin
2016-04-01
We are investigating the use of Pareto multi-objective global optimization (PMOGO) methods to solve numerically complicated geophysical inverse problems. PMOGO methods can be applied to highly nonlinear inverse problems, to those where derivatives are discontinuous or simply not obtainable, and to those where multiple minima exist in the problem space. PMOGO methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. This allows a more complete assessment of the possibilities and provides opportunities to calculate statistics regarding the likelihood of particular model features. We are applying PMOGO methods to four classes of inverse problems. The first are discrete-body problems where the inversion determines values of several parameters that define the location, orientation, size and physical properties of an anomalous body represented by a simple shape, for example a sphere, ellipsoid, cylinder or cuboid. A PMOGO approach can determine not only the optimal shape parameters for the anomalous body but also the optimal shape itself. Furthermore, when one expects several anomalous bodies in the subsurface, a PMOGO inversion approach can determine an optimal number of parameterized bodies. The second class of inverse problems are standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The third class of problems are lithological inversions, which are also mesh-based but cells can only take discrete physical property values corresponding to known or assumed rock units. In the fourth class, surface geometry inversions, we consider a fundamentally different type of problem in which a model comprises wireframe surfaces representing contacts between rock units. The physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. Surface geometry inversion can be
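The Pareto-optimal filtering at the heart of any PMOGO method can be illustrated compactly: given a population of candidate models scored on several objectives, keep those dominated by no other candidate. A minimal sketch on random (data misfit, regularization) scores:

```python
import numpy as np

def pareto_front(objectives):
    """Boolean mask of Pareto-optimal rows (all objectives to be minimized).
    Row i is kept unless some other row is <= in every objective and
    strictly < in at least one."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        dominates_i = np.all(objectives <= objectives[i], axis=1) & \
                      np.any(objectives < objectives[i], axis=1)
        keep[i] = not dominates_i.any()
    return keep

# Toy suite of candidate models scored on (data misfit, regularization term)
rng = np.random.default_rng(3)
scores = rng.uniform(size=(200, 2))
front = scores[pareto_front(scores)]
```

In two dimensions the front traces the familiar trade-off curve between fitting the data and keeping the model simple; a PMOGO algorithm evolves the population toward that curve rather than collapsing it to a single weighted-sum solution.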
Current methods of radio occultation data inversion
NASA Technical Reports Server (NTRS)
Kliore, A. J.
1972-01-01
The methods of Abel integral transform and ray-tracing inversion have been applied to data received from radio occultation experiments as a means of obtaining refractive index profiles of the ionospheres and atmospheres of Mars and Venus. In the case of Mars, certain simplifications are introduced by the assumption of small refractive bending in the atmosphere. General inversion methods, independent of the thin atmosphere approximation, have been used to invert the data obtained from the radio occultation of Mariner 5 by Venus; similar methods will be used to analyze data obtained from Jupiter with Pioneers F and G, as well as from the other outer planets in the Outer Planet Grand Tour Missions.
An efficient method for inverse problems
NASA Technical Reports Server (NTRS)
Daripa, Prabir
1987-01-01
A new inverse method for the aerodynamic design of subcritical airfoils is presented. The pressure distribution in this method can be prescribed in a natural way, i.e., as a function of arclength of the as yet unknown body. This inverse problem is shown to be mathematically equivalent to solving a single nonlinear boundary value problem subject to known Dirichlet data on the boundary. The solution to this problem determines the airfoil, the free-stream Mach number M∞, and the upstream flow direction θ∞. The existence of a solution for any given pressure distribution is discussed. The method is easy to implement and extremely efficient. We present a series of results for which comparisons are made with known airfoils.
NASA Astrophysics Data System (ADS)
Martin, Roland; Chevrot, Sébastien; Komatitsch, Dimitri; Seoane, Lucia; Spangenberg, Hannah; Wang, Yi; Dufréchou, Grégory; Bonvalot, Sylvain; Bruinsma, Sean
2017-01-01
We image the internal density structure of the Pyrenees by inverting gravity data using an a priori density model derived by scaling a Vp model obtained by full waveform inversion of teleseismic P-waves. Gravity anomalies are computed via a 3D high-order finite-element integration in the same high-order spectral-element grid as the one used to solve the wave equation and thus to obtain the velocity model. The curvature of the Earth and surface topography are taken into account in order to obtain a density model as accurate as possible. The method is validated through comparisons with exact semi-analytical solutions. We show that the spectral element method drastically accelerates the computations when compared to other more classical methods. Different scaling relations between compressional velocity and density are tested, and the Nafe-Drake relation is the one that leads to the best agreement between computed and observed gravity anomalies. Gravity data inversion is then performed and the results allow us to put more constraints on the density structure of the shallow crust and on the deep architecture of the mountain range.
Regeneration of stochastic processes: an inverse method
NASA Astrophysics Data System (ADS)
Ghasemi, F.; Peinke, J.; Sahimi, M.; Rahimi Tabar, M. R.
2005-10-01
We propose a novel inverse method that utilizes a set of data to construct a simple equation governing the stochastic process for which the data have been measured, hence enabling us to reconstruct the stochastic process. As an example, we analyze the stochasticity in the beat-to-beat fluctuations in the heart rates of healthy subjects as well as those with congestive heart failure. The inverse method characterizes the two classes of subjects in terms of drift and diffusion coefficients, which behave completely differently for the two classes, hence potentially providing a novel diagnostic tool for distinguishing healthy subjects from those with congestive heart failure, even at the early stages of the disease.
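The reconstruction idea, estimating drift and diffusion coefficients from conditional moments of the measured increments (the Kramers-Moyal coefficients), can be illustrated on a synthetic Langevin series; the process, binning and tolerances below are illustrative stand-ins, not the heart-rate data of the study:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic Langevin series dx = -x dt + sqrt(2) dW (Euler-Maruyama),
# standing in for a measured beat-to-beat time series.
dt, n = 1e-3, 500_000
steps = rng.normal(size=n - 1) * np.sqrt(2.0 * dt)
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] - x[i] * dt + steps[i]

# Kramers-Moyal estimates of the drift D1(x) and diffusion D2(x) in bins of x
dx = np.diff(x)
edges = np.linspace(-2.0, 2.0, 21)
idx = np.digitize(x[:-1], edges)
centers, d1, d2 = [], [], []
for b in range(1, len(edges)):
    sel = idx == b
    if sel.sum() > 1000:                               # require enough samples
        centers.append(0.5 * (edges[b - 1] + edges[b]))
        d1.append(dx[sel].mean() / dt)                 # drift estimate
        d2.append((dx[sel] ** 2).mean() / (2.0 * dt))  # diffusion estimate
centers, d1, d2 = map(np.array, (centers, d1, d2))
drift_slope = np.polyfit(centers, d1, 1)[0]            # expect about -1
```

The recovered drift is linear with slope close to -1 and the diffusion close to constant, i.e. the estimator reconstructs the generating equation from the data alone; applied to heart-rate series, it is the differing shapes of these two functions that separate the subject classes.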
Improved hybrid iterative optimization method for seismic full waveform inversion
NASA Astrophysics Data System (ADS)
Wang, Yi; Dong, Liang-Guo; Liu, Yu-Zhu
2013-06-01
In full waveform inversion (FWI), Hessian information of the misfit function is of vital importance for accelerating the convergence of the inversion; however, it usually is not feasible to directly calculate the Hessian matrix and its inverse. Although the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) and Hessian-free inexact Newton (HFN) methods are able to use approximate Hessian information, the information they collect is limited. The two methods can be interlaced because they are able to provide Hessian information for each other; however, the performance of the hybrid iterative method depends on an effective switch between the two methods. We have designed a new scheme to realize a dynamic switch between the two methods based on the decrease ratio (DR) of the misfit function (objective function), and we propose a modified hybrid iterative optimization method. In the new scheme, we compare the DR of the two methods for a given computational cost and choose the method with the faster DR. In this way, the modified method always implements the more efficient of the two. Tests on the Marmousi and overthrust models indicate that convergence with our modified method is significantly faster than with the L-BFGS method, with no loss of inversion quality. Moreover, our modified method outperforms the enriched method with a modest speedup of the convergence, and it also exhibits better efficiency than the HFN method.
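The switching rule can be illustrated on a toy quadratic misfit, with steepest descent and an exact Newton step standing in for L-BFGS and HFN: at each stage both candidates are run for the same budget and the one with the larger decrease ratio is kept. This is only a sketch of the selection logic, not the FWI implementation:

```python
import numpy as np

# Toy misfit: an ill-conditioned quadratic standing in for the FWI objective
H = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ H @ x
grad = lambda x: H @ x

def gd_step(x):        # steepest-descent step (stand-in for L-BFGS)
    return x - 0.008 * grad(x)

def newton_step(x):    # Newton step (stand-in for Hessian-free inexact Newton)
    return x - np.linalg.solve(H, grad(x))

def hybrid(x, n_outer=5, budget=3):
    """At each stage, try both methods for the same budget of steps and keep
    the one with the larger decrease ratio DR = (f_old - f_new) / f_old."""
    for _ in range(n_outer):
        if f(x) < 1e-15:                    # already converged
            break
        trials = []
        for step in (gd_step, newton_step):
            y = x.copy()
            for _ in range(budget):
                y = step(y)
            trials.append(((f(x) - f(y)) / f(x), y))
        x = max(trials, key=lambda t: t[0])[1]   # keep the faster-decreasing one
    return x

x_final = hybrid(np.array([1.0, 1.0]))
```

On this quadratic the Newton branch wins the DR comparison immediately; in a realistic FWI misfit the winner changes between stages, which is exactly what the dynamic switch exploits.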
Radiation Source Mapping with Bayesian Inverse Methods
NASA Astrophysics Data System (ADS)
Hykes, Joshua Michael
We present a method to map the spectral and spatial distributions of radioactive sources using a small number of detectors. Locating and identifying radioactive materials is important for border monitoring, accounting for special nuclear material in processing facilities, and clean-up operations. Most methods for analyzing these problems make restrictive assumptions about the distribution of the source. In contrast, the source-mapping method presented here allows an arbitrary three-dimensional distribution in space and a flexible group and gamma-peak distribution in energy. To apply the method, the system's geometry and materials must be known. A probabilistic Bayesian approach is used to solve the resulting inverse problem (IP), since the system of equations is ill-posed. The probabilistic approach also provides estimates of the confidence in the final source-map prediction. A set of adjoint-flux discrete ordinates solutions, obtained in this work with the Denovo code, is required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes are then used to form the linear model that maps the state space to the response space. The method is tested by simultaneously locating a set of 137Cs and 60Co gamma sources in an empty room. This test problem is solved using synthetic measurements generated by a Monte Carlo (MCNP) model and using experimental measurements that we collected for this purpose. With the synthetic data, the predicted source distributions identified the locations of the sources to within tens of centimeters, in a room with an approximately four-by-four meter floor plan. Most of the predicted source intensities were within a factor of ten of their true values. The chi-square value of the predicted source was within a factor of five of the value expected from the number of measurements employed. With a favorable uniform initial guess, the predicted source map was nearly identical to the true distribution
Tissue elasticity measurement method using forward and inversion algorithms
NASA Astrophysics Data System (ADS)
Lee, Jong-Ha; Won, Chang-Hee; Park, Hee-Jun; Ku, Jeonghun; Heo, Yun Seok; Kim, Yoon-Nyun
2013-03-01
Elasticity is an important indicator of tissue health, with increased stiffness pointing to an increased risk of cancer. We investigated a tissue elasticity measurement method using forward and inversion algorithms for the application of early breast tumor identification. An optics-based elasticity measurement system was developed to capture images of embedded lesions using the total internal reflection principle. From the elasticity images, we developed a novel method to estimate the elasticity of the embedded lesion using a 3-D finite-element-model-based forward algorithm and a neural-network-based inversion algorithm. The experimental results showed that the proposed characterization method can differentiate benign and malignant breast lesions.
Geostatistical joint inversion of seismic and potential field methods
NASA Astrophysics Data System (ADS)
Shamsipour, Pejman; Chouteau, Michel; Giroux, Bernard
2016-04-01
Interpretation of geophysical data needs to integrate different types of information to make the proposed model geologically realistic. Multiple data sets can reduce the uncertainty and non-uniqueness present in separate geophysical data inversions. Seismic data can play an important role in mineral exploration; however, processing and interpretation of seismic data are difficult due to the complexity of hard-rock geology. On the other hand, the model recovered from potential field methods is affected by an inherent non-uniqueness caused by the nature of the physics and by the underdetermination of the problem. Joint inversion of seismic and potential field data can mitigate the weaknesses of the separate inversions. A stochastic joint inversion method based on geostatistical techniques is applied to estimate density and velocity distributions from gravity and travel-time data. The method fully integrates the physical relations between density and gravity, on one hand, and slowness and travel time, on the other. As a consequence, when the data are considered noise-free, the responses from the inverted slowness and density exactly reproduce the observed data. The required density and velocity auto- and cross-covariances are assumed to follow a linear model of coregionalization (LCM); the recent development of nonlinear models of coregionalization could also be applied if needed. The kernel function for the gravity method is obtained in closed form. For ray tracing, we use the shortest-path method (SPM) to calculate the operator matrix. The joint inversion is performed on a structured grid; however, it is possible to extend it to unstructured grids. The method is tested on two synthetic models: a model consisting of two objects buried in a homogeneous background, and a model with a stochastic distribution of parameters. The results illustrate the capability of the method to improve the inverted model compared to the separately inverted models with either gravity
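The noise-free data-reproduction property follows from the form of the cokriging-type estimate: the conditional mean lies in the range of C G^T and fits the data exactly. A minimal sketch with a random stand-in for the stacked gravity/travel-time kernels and an identity prior covariance in place of the LCM:

```python
import numpy as np

rng = np.random.default_rng(5)

# Random stand-in for the stacked linear kernels (gravity rows + ray-path rows)
n_cells, n_data = 30, 10
G = rng.normal(size=(n_data, n_cells))
C = np.eye(n_cells)      # prior model covariance; the LCM auto- and cross-
                         # covariances of density and slowness would go here
m_true = rng.normal(size=n_cells)
d = G @ m_true           # noise-free observations

# Cokriging-type estimate: conditional mean of the model given the data
m_hat = C @ G.T @ np.linalg.solve(G @ C @ G.T, d)
```

Because G applied to this estimate returns d identically, the inverted slowness and density reproduce the observations exactly in the noise-free case, as stated in the abstract; with noisy data a data-covariance term is added inside the inverted matrix and the fit relaxes accordingly.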
Jinnai, H; Nishikawa, Y; Chen, S H; Koizumi, S; Hashimoto, T
2000-06-01
A method is proposed to determine the spectral function of the clipped-random-wave (CRW) model directly from scattering data. The spectral function f(k) (k is a wave number) gives the distribution of the magnitude of wave vectors of the sinusoidal waves that describes the essential features of the two-phase morphology. The proposed method involves "inverse clipping" of a correlation function to obtain f(k) and does not require any a priori assumptions for f(k). A critical test of the applicability of the inverse-clipping method was carried out by using three-component bicontinuous microemulsions. The method was then used to determine f(k) of the bicontinuous structure of a phase-separating polymer blend. f(k) for the polymer blend turned out to be a multipeaked function, while f(k) for the microemulsions exhibits a single broad maximum representing periodicity of the morphology. These results indicate the presence of the long-range regularity in the morphology of the polymer blend. Three-dimensional (3D) morphology corresponding to the scattering data of the polymer blend was generated using the CRW model together with the multipeaked f(k). Interface curvatures of the 3D morphology calculated from f(k) were measured and compared with those experimentally determined directly from the laser scanning confocal microscopy in the same blend.
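A CRW morphology is generated by superposing many sinusoidal waves whose wave-number magnitudes are drawn from the spectral function f(k), then clipping the resulting Gaussian field at its mean. A sketch with a hypothetical single-peaked, narrow-band f(k) standing in for the microemulsion spectrum (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# Draw wave-number magnitudes from a hypothetical single-peaked f(k)
N, k0 = 500, 2.0 * np.pi
k_mag = rng.normal(k0, 0.1 * k0, size=N)
phase = rng.uniform(0.0, 2.0 * np.pi, size=N)
direc = rng.normal(size=(N, 3))
direc /= np.linalg.norm(direc, axis=1, keepdims=True)   # isotropic directions

# Superpose the waves on a 2D slice and normalize to a unit-variance field
x = np.linspace(0.0, 16.0, 256)
X, Y = np.meshgrid(x, x)
psi = np.zeros_like(X)
for i in range(N):
    psi += np.cos(k_mag[i] * (direc[i, 0] * X + direc[i, 1] * Y) + phase[i])
psi *= np.sqrt(2.0 / N)

morphology = psi > 0.0        # clipping: the two-phase structure
```

Clipping at zero yields a bicontinuous structure with roughly equal volume fractions; the "inverse clipping" of the paper runs this construction backwards, recovering f(k) from the measured correlation function instead of assuming it.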
Forward and inverse modelling of post-seismic deformation
NASA Astrophysics Data System (ADS)
Crawford, Ophelia; Al-Attar, David; Tromp, Jeroen; Mitrovica, Jerry X.
2016-11-01
We consider a new approach to both the forward and inverse problems in post-seismic deformation. We present a method for forward modelling post-seismic deformation in a self-gravitating, heterogeneous and compressible earth with a variety of linear and non-linear rheologies. We further demonstrate how the adjoint method can be applied to the inverse problem both to invert for rheological structure and to calculate the sensitivity of a given surface measurement to changes in rheology or time-dependence of the source. Both the forward and inverse aspects are illustrated with several numerical examples implemented in a spherically symmetric earth model.
Putz, Ana-Maria; Putz, Mihai V.
2012-01-01
The present work advances the inverse quantum (IQ) structural criterion for ordering and characterizing the porosity of mesosystems. The criterion is based on the recently advanced ratio of the particle-to-wave nature of quantum objects within the extended Heisenberg uncertainty relationship, employing the quantum fluctuation, for both free and observed quantum scattering information, as computed upon spectral identification of the wave-numbers specific to the maximum of the absorption intensity record and to the left-, right- and full-width at half maximum (FWHM) of the concerned bands of a given compound. It furnishes a hierarchy for classifying mesoporous systems from more particle-like (porous, tight or ionic bindings) to more wave-like (free or covalent bindings) behavior. This so-called spectral inverse quantum (Spectral-IQ) particle-to-wave assignment was illustrated on FT-IR (bonding) band assignments for samples synthesized in different basic environments and with different thermal treatments on mesoporous materials obtained by the sol-gel technique with n-dodecyl trimethyl ammonium bromide (DTAB), cetyltrimethylammonium bromide (CTAB), and their combination as cosolvents. The results were analyzed in the light of the so-called residual inverse quantum information, accounting for the free binding potency of the analyzed samples at the drying temperature, and were cross-validated against thermal decomposition techniques by endo-exo thermal correlations at higher temperature. PMID:23443102
The method of common search direction of joint inversion
NASA Astrophysics Data System (ADS)
Zhao, C.; Tang, R.
2013-12-01
In geophysical inversion, the first step is to construct an objective function. The second step is to minimize the objective function with an optimization algorithm, such as the gradient method or the conjugate gradient method. Compared with the former, the conjugate gradient method can find a better direction that makes the error decrease faster, and it has been widely used for a long time. At present, joint inversion generally uses the conjugate gradient method. The key step in joint inversion is to construct the partial derivative matrix with respect to the different physical properties. The constraints among the different physical properties must then be added to the integrated matrix, and the cross gradient is also used as a constraint in the joint inversion. There are two ways to apply the cross gradient in the inverse process: it can be added to the data function or to the model function. When it is added to the data function, the partial derivative matrix doubles in size, and the cross gradient must be calculated for each grid cell, which brings a large computational cost.
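The cross-gradient function itself is cheap to evaluate on a grid: it is the cross product of the two model gradients, and it vanishes wherever the two models vary in the same spatial direction. A small sketch with synthetic 2D models (all fields illustrative):

```python
import numpy as np

def cross_gradient(m1, m2, spacing=1.0):
    """Cross-gradient function t = grad(m1) x grad(m2) on a 2D grid
    (z-component).  t = 0 wherever the two models change in the same
    spatial direction, i.e. where their structures are aligned."""
    g1y, g1x = np.gradient(m1, spacing)
    g2y, g2x = np.gradient(m2, spacing)
    return g1x * g2y - g1y * g2x

y, x = np.mgrid[0:32, 0:32]
m1 = np.sin(0.2 * x)              # property 1: structure varying along x only
m2 = 2.0 * np.sin(0.2 * x) + 1.0  # property 2: different values, same structure
m3 = np.sin(0.2 * y)              # property 3: structure varying along y

t_aligned = cross_gradient(m1, m2)   # structurally consistent pair
t_crossed = cross_gradient(m1, m3)   # structurally inconsistent pair
```

Penalizing the magnitude of this function in the objective is what forces the jointly inverted models toward structural similarity without tying their property values together.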
Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems
Benzi, M.; Tuma, M.
1996-12-31
A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.
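The general idea, replace A^{-1} by a cheap sparse approximation M and iterate on the preconditioned system, can be sketched with a banded SPAI-style construction that minimizes ||AM - I||_F column by column. Note this dense toy is only a stand-in: the paper's method instead builds M in factorized form by an incomplete biconjugation process, and the preconditioner would be used inside a Krylov method rather than the simple Richardson iteration shown here:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 60

# Nonsymmetric, diagonally dominant test matrix
A = np.diag(rng.uniform(1.0, 10.0, n)) + 0.1 * rng.normal(size=(n, n)) / np.sqrt(n)
b = rng.normal(size=n)

def spai(A, bw=1):
    """Approximate inverse minimizing ||A M - I||_F column by column, with each
    column of M restricted to a banded sparsity pattern of half-width bw."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        cols = np.arange(max(0, j - bw), min(n, j + bw + 1))
        mj, *_ = np.linalg.lstsq(A[:, cols], np.eye(n)[:, j], rcond=None)
        M[cols, j] = mj
    return M

def richardson_residual(P, iters=20):
    """Residual norm after `iters` preconditioned Richardson steps."""
    x = np.zeros(n)
    for _ in range(iters):
        x = x + P @ (b - A @ x)
    return np.linalg.norm(b - A @ x)

res_plain = richardson_residual(0.15 * np.eye(n))   # damped, unpreconditioned
res_spai = richardson_residual(spai(A))             # approximate-inverse precond.
```

Because M is applied by a matrix-vector product, no triangular solves are needed, which is the practical appeal of approximate-inverse preconditioners on parallel hardware.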
Inverse polynomial reconstruction method in DCT domain
NASA Astrophysics Data System (ADS)
Dadkhahi, Hamid; Gotchev, Atanas; Egiazarian, Karen
2012-12-01
The discrete cosine transform (DCT) offers superior energy compaction properties for a large class of functions and has been employed as a standard tool in many signal and image processing applications. However, it suffers from spurious behavior in the vicinity of edge discontinuities in piecewise smooth signals. To leverage the sparse representation provided by the DCT, in this article we derive a framework for inverse polynomial reconstruction in the DCT expansion. It yields the expansion of a piecewise smooth signal in terms of polynomial coefficients obtained from the DCT representation of the same signal. Taking advantage of this framework, we show that it is feasible to recover piecewise smooth signals from a relatively small number of DCT coefficients with high accuracy. Furthermore, automatic methods based on the minimum description length principle and on cross-validation are devised to select the polynomial orders, as required by the inverse polynomial reconstruction method in practical applications. The developed framework can considerably enhance the performance of the DCT in the sparse representation of piecewise smooth signals. Numerical results show that denoising and image approximation algorithms based on the proposed framework achieve significant improvements over their wavelet counterparts for this class of signals.
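The mapping from DCT coefficients to polynomial coefficients is a small linear system: the columns of the design matrix are the DCT-II spectra of the monomial basis. A sketch recovering a single cubic piece exactly from its first few DCT coefficients; the grid size, degree and truncation length are illustrative, not the paper's choices:

```python
import numpy as np

N = 64
n = np.arange(N)
t = (n + 0.5) / N                          # sample points in (0, 1)

# Unnormalized DCT-II matrix: X = C @ x, with X_k = sum_n x_n cos(pi(n+1/2)k/N)
C = np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)

# Signal: one cubic polynomial (a single smooth piece of a piecewise signal)
a_true = np.array([1.0, -2.0, 0.5, 3.0])
V = np.vander(t, 4, increasing=True)       # monomial basis at the sample points
x = V @ a_true
X = C @ x                                  # its DCT-II coefficients

# Inverse polynomial reconstruction: map the first K DCT coefficients back to
# polynomial coefficients through the small least-squares system (C V) a = X
K = 8
a_rec, *_ = np.linalg.lstsq((C @ V)[:K], X[:K], rcond=None)
x_rec = V @ a_rec
```

With the polynomial order fixed, only K >= degree + 1 leading DCT coefficients are needed; on piecewise signals the method is applied per smooth piece, which is why the order-selection rules (MDL, cross-validation) matter in practice.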
Joint Geophysical Inversion With Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelievre, P. G.; Bijani, R.; Farquharson, C. G.
2015-12-01
Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class comprises standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class is also mesh-based, but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods, including the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used, but these can be ameliorated using parallelization and problem dimension reduction strategies.
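The Pareto-optimal suite of solutions described above is simply the nondominated subset of candidate models. A minimal sketch of extracting that subset from a table of objective values (illustrative only; the sample points are invented):

```python
import numpy as np

def pareto_front(objectives):
    """Return a boolean mask of Pareto-optimal rows (minimization):
    a row is kept unless some other row is <= in every objective
    and strictly < in at least one."""
    obj = np.asarray(objectives, dtype=float)
    mask = np.ones(len(obj), dtype=bool)
    for i in range(len(obj)):
        dominated = (np.all(obj <= obj[i], axis=1) &
                     np.any(obj < obj[i], axis=1))
        if dominated.any():
            mask[i] = False
    return mask

# Columns: (data misfit, regularization term) for five candidate models.
scores = [(1, 5), (2, 4), (3, 3), (4, 4), (5, 1)]
front = pareto_front(scores)
```

Here model (4, 4) is dominated by (3, 3) and is excluded; the remaining four models form the trade-off curve a user would inspect.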
Stochastic inverse problems: Models and metrics
Sabbagh, Elias H.; Sabbagh, Harold A.; Murphy, R. Kim; Aldrin, John C.; Annis, Charles; Knopp, Jeremy S.
2015-03-31
In past work, we introduced model-based inverse methods, and applied them to problems in which the anomaly could be reasonably modeled by simple canonical shapes, such as rectangular solids. In these cases the parameters to be inverted would be length, width and height, as well as the occasional probe lift-off or rotation. We are now developing a formulation that allows more flexibility in modeling complex flaws. The idea consists of expanding the flaw in a sequence of basis functions, and then solving for the expansion coefficients of this sequence, which are modeled as independent random variables, uniformly distributed over their range of values. There are a number of applications of such modeling: 1. Connected cracks and multiple half-moons, which we have noted in a POD set. Ideally we would like to distinguish connected cracks from one long shallow crack. 2. Cracks of irregular profile and shape which have appeared in cold work holes during bolt-hole eddy-current inspection. One side of such cracks is much deeper than the other. 3. L or C shaped crack profiles at the surface, examples of which have been seen in bolt-hole cracks. By formulating problems in a stochastic sense, we are able to leverage the stochastic global optimization algorithms in NLSE, which is resident in VIC-3D®, to answer questions of global minimization and to compute confidence bounds using the sensitivity coefficients that we get from NLSE. We will also address the issue of surrogate functions which are used during the inversion process, and how they contribute to the quality of the estimation of the bounds.
Inverse groundwater modeling with emphasis on model parameterization
NASA Astrophysics Data System (ADS)
Kourakos, George; Mantoglou, Aristotelis
2012-05-01
This study develops an inverse method aiming to circumvent the subjective decision regarding model parameterization and complexity in inverse groundwater modeling. The number of parameters is included as a decision variable along with parameter values. A parameterization based on B-spline surfaces (BSS) is selected to approximate transmissivity, and genetic algorithms were selected to perform error minimization. A transform based on linear least squares (LLS) is developed, so that different parameterizations may be combined by standard genetic algorithm operators. First, three applications, with isotropic, anisotropic, and zoned aquifer parameters, are examined in a single objective optimization problem and the estimated transmissivity is found to be near the true one. Interestingly, in the anisotropic case, the algorithm converged to a solution with an anisotropic distribution of control points. Next, a single objective optimization with regularization, penalizing complex models, is considered, and finally, the problem is expressed in a multiobjective optimization framework (MOO), where the goals are simultaneous minimization of calibration error and model complexity. The result of MOO is a Pareto set of potential solutions where the user can examine the tradeoffs between calibration error and model complexity and select the most suitable model. By comparing calibration with prediction errors, it appears that the most promising models are the ones near the region where the rate of decrease of calibration error with increasing model complexity drops (the bend of the error curve). This is a useful result of practical interest in real inverse modeling applications.
NASA Astrophysics Data System (ADS)
Jang, Hangilro; Kim, Hee Joon
2015-12-01
In transient electromagnetic (TEM) measurements, secondary fields that contain information on conductive targets such as hydrothermal mineral deposits in the seafloor can be measured in the absence of strong primary fields. A TEM system using a loop source is useful to the development of compact, autonomous instruments, which are well suited to submersible-based surveys. In this paper, we investigate the possibility of applying an in-loop TEM system to the detection of marine hydrothermal deposits through a one-dimensional modeling and inversion study. We examine step-off responses for a layered model and compare the characteristics of horizontal and vertical loop systems for detecting hydrothermal deposits. The feasibility study shows that TEM responses are very sensitive to a highly conductive layer. Time-domain target responses are larger and appear earlier in horizontal magnetic fields than in vertical ones, although the vertical field has 2-3 times larger magnitude than the horizontal one. An inverse problem is formulated with the Gauss-Newton method and solved with the damped and smoothness-constrained least-squares approach. The test example for a marine hydrothermal TEM survey demonstrated that the depth extent, conductivity and thickness of the highly conductive layer are well resolved.
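The damped (Gauss-Newton) least-squares formulation mentioned above can be sketched on a toy decay-curve problem. This is a generic Levenberg-Marquardt-style iteration, not the authors' marine TEM code; the two-parameter exponential model and all names are invented for illustration:

```python
import numpy as np

def damped_gauss_newton(residual, jacobian, m0, lam=1e-2, n_iter=50):
    """Damped Gauss-Newton: at each step solve the damped normal
    equations (J^T J + lam I) dm = -J^T r and update m <- m + dm."""
    m = np.asarray(m0, dtype=float)
    for _ in range(n_iter):
        r = residual(m)
        J = jacobian(m)
        dm = np.linalg.solve(J.T @ J + lam * np.eye(len(m)), -J.T @ r)
        m = m + dm
    return m

# Toy TEM-like transient: d(t) = m0 * exp(-m1 * t), noiseless data.
t = np.linspace(0.1, 2.0, 20)
m_true = np.array([3.0, 1.5])
d_obs = m_true[0] * np.exp(-m_true[1] * t)

res = lambda m: m[0] * np.exp(-m[1] * t) - d_obs
jac = lambda m: np.column_stack([np.exp(-m[1] * t),
                                 -m[0] * t * np.exp(-m[1] * t)])
m_est = damped_gauss_newton(res, jac, m0=[2.0, 1.2])
```

In a real inversion the identity damping would typically be replaced by a smoothness-constraint matrix, as in the abstract's smoothness-constrained approach.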
An inverse problem by boundary element method
Tran-Cong, T.; Nguyen-Thien, T.; Graham, A.L.
1996-02-01
Boundary Element Methods (BEM) have been established as useful and powerful tools in a wide range of engineering applications, e.g. Brebbia et al. In this paper, we report a particular three dimensional implementation of a direct boundary integral equation (BIE) formulation and its application to numerical simulations of practical polymer processing operations. In particular, we will focus on the application of the present boundary element technology to simulate an inverse problem in plastics processing by extrusion. The task is to design profile extrusion dies for plastics. The problem is highly non-linear due to material viscoelastic behaviours as well as unknown free surface conditions. As an example, the technique is shown to be effective in obtaining the die profiles corresponding to a square viscoelastic extrudate under different processing conditions. To further illustrate the capability of the method, examples of other non-trivial extrudate profiles and processing conditions are also given.
A reduced basis Landweber method for nonlinear inverse problems
NASA Astrophysics Data System (ADS)
Garmatter, Dominik; Haasdonk, Bernard; Harrach, Bastian
2016-03-01
We consider parameter identification problems in parametrized partial differential equations (PDEs). These lead to nonlinear ill-posed inverse problems. One way of solving them is using iterative regularization methods, which typically require a large number of forward solutions during the solution process. In this article we consider the nonlinear Landweber method and couple it with the reduced basis method as a model order reduction technique in order to reduce the overall computational time. In particular, we consider PDEs with a high-dimensional parameter space, which are known to pose difficulties in the context of reduced basis methods. We present a new method that is able to handle such high-dimensional parameter spaces by combining the nonlinear Landweber method with adaptive online reduced basis updates. It is then applied to the inverse problem of reconstructing the conductivity in the stationary heat equation.
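For context, the classical Landweber iteration (here in its linear form, not the nonlinear variant of the paper) is a gradient-type fixed-point scheme in which the iteration count itself acts as the regularization parameter. A minimal sketch with invented names:

```python
import numpy as np

def landweber(A, d, omega=None, n_iter=500):
    """Linear Landweber iteration m_{k+1} = m_k + omega A^T (d - A m_k).
    Converges for 0 < omega < 2 / ||A||^2; for noisy data one stops
    early, so n_iter plays the role of the regularization parameter."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2
    m = np.zeros(A.shape[1])
    for _ in range(n_iter):
        m = m + omega * A.T @ (d - A @ m)
    return m
```

In the nonlinear case, A and A^T are replaced by the forward operator and the adjoint of its linearization, which is exactly where many forward solves arise and where the paper's reduced basis surrogate saves time.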
A system model and inversion for synthetic aperture radar imaging.
Soumekh, M
1992-01-01
A system model and its corresponding inversion for synthetic aperture radar (SAR) imaging are presented. The system model incorporates the spherical nature of a radar's radiation pattern at far field. The inverse method based on this model performs a spatial Fourier transform (Doppler processing) on the recorded signals with respect to the available coordinates of a translational radar (SAR) or target (inverse SAR). It is shown that the transformed data provide samples of the spatial Fourier transform of the target's reflectivity function. The inverse method can be modified to incorporate deviations of the radar's motion from its prescribed straight line path. The effects of finite aperture on resolution, reconstruction, and sampling constraints for the imaging problem are discussed.
A statistical mechanical model for inverse melting
NASA Astrophysics Data System (ADS)
Feeney, Melissa R.; Debenedetti, Pablo G.; Stillinger, Frank H.
2003-08-01
Inverse melting is the situation in which a liquid freezes when it is heated isobarically. Both helium isotopes exhibit intervals of inverse melting at low temperature, and published data suggest that isotactic poly(4-methylpentene-1) also displays this unusual phase behavior. Here we propose a statistical mechanical model for inverse melting. It is a decorated modification of the Gaussian core model, in which particles possess a spectrum of thermally activated internal states. Excitation leads to a change in a particle's Gaussian interaction parameters, and this can result in a spatially periodic crystal possessing a higher entropy than the fluid with which it coexists. Numerical solution of the model, using integral equations and the hypernetted chain closure for the fluid phase, and the Einstein model for the solid phases, identifies two types of inverse melting. One mimics the behavior of the helium isotopes, for which the higher-entropy crystal is denser than the liquid. The other corresponds to inverse melting in poly(4-methylpentene-1), where the high-entropy crystal is less dense than the liquid with which it coexists.
A simple inverse design method for pump turbine
NASA Astrophysics Data System (ADS)
Yin, Junlian; Li, Jingjing; Wang, Dezhong; Wei, Xianzhu
2014-03-01
In this paper, a simple inverse design method is proposed for pump turbines. The main point of this method is that the blade loading distribution is first extracted from an existing model and then applied in the new design. As an example, the blade loading distribution of a runner designed for a 200 m head was analyzed. The extracted blade loading was then combined with a meridional passage suitable for a 500 m head to design a new runner. CFD analysis and model tests show that the new runner performs very well in terms of efficiency and cavitation. Therefore, as an alternative, the inverse design method can be extended to other design applications.
Inverse Modeling of Coastal Tides
1999-09-30
data in the tidal band. We have concluded that understanding this discrepancy and developing assimilation methods for baroclinic tides will require...Alexandre Kurapov to develop practical assimilation methods for coastal HF radar data. REFERENCES Bennett, A.F., B.S. Chua, and L.M. Leslie, Generalized
Inverse Modeling of Coastal Tides
1998-01-01
produced. We are also working with Profs. J. Allen and R. Miller on developing practical assimilation methods for the coastal problem. REFERENCES...40, 81--108, 1997. Egbert, G.D. and A.F. Bennett, Data assimilation methods for ocean tides, in Modern approaches to data assimilation in ocean
Matrix methods for reflective inverse diffusion
NASA Astrophysics Data System (ADS)
Burgi, Kenneth W.; Marciniak, Michael A.; Nauyoks, Stephen E.; Oxley, Mark E.
2016-09-01
Reflective inverse diffusion is a method of refocusing light scattered by a rough surface. An SLM is used to shape the wavefront of a HeNe laser at 632.8-nm wavelength to produce a converging phase front after reflection. Iterative methods previously demonstrated intensity enhancements of the focused spot over 100 times greater than the surrounding background speckle. This proof-of-concept method was very time consuming and the algorithm started over each time the desired location of the focus spot in the observation plane was moved. Transmission matrices have been developed to control light scattered by transmission through a turbid media. Time varying phase maps are applied to an SLM and used to interrogate the phase scattering properties of the material. For each phase map, the resultant speckle intensity pattern is recorded less than 1 mm from the material surface and represents an observation plane of less than 0.02 mm2. Fourier transforms are used to extract the phase scattering properties of the material from the intensity measurements. We investigate the effectiveness of this method for constructing the reflection matrix (RM) of a diffuse reflecting medium where the propagation distances and observation plane are almost 1,000 times greater than the previous work based on transmissive scatter. The RM performance is based on its ability to refocus reflectively scattered light to a single focused spot or multiple foci in the observation plane. Diffraction-based simulations are used to corroborate experimental results.
Inverse modeling of human contrast response.
Katkov, Mikhail; Tsodyks, Misha; Sagi, Dov
2007-10-01
Mathematical singularities found in the Signal Detection Theory (SDT) based analysis of the 2-Alternative-Forced-Choice (2AFC) method [Katkov, M., Tsodyks, M., & Sagi, D. (2006a). Analysis of two-alternative force-choice Signal Detection Theory model. Journal of Mathematical Psychology, 50, 411-420; Katkov, M., Tsodyks, M., & Sagi, D. (2006b). Singularities in the inverse modeling of 2AFC contrast discrimination data. Vision Research, 46, 256-266; Katkov, M., Tsodyks, M., & Sagi, D. (2007). Singularities explained: Response to Klein. Vision Research, doi:10.1016/j.visres.2006.10.030] imply that contrast discrimination data obtained with the 2AFC method cannot always be used to reliably estimate the parameters of the underlying model (internal response and noise functions) with a reasonable number of trials. Here we bypass this problem with the Identification Task (IT) where observers identify one of N contrasts. We have found that identification data varies significantly between experimental sessions. Stable estimates using individual session data showed Contrast Response Functions (CRF) with high gain in the low contrast regime and low gain in the high contrast regime. Noise Amplitudes (NA) followed a decreasing function of contrast at low contrast levels, and were practically constant above some contrast level. The transition between these two regimes corresponded approximately to the position of the dipper in the Threshold versus Contrast (TvC) curves that were computed using the estimated parameters and independently measured using 2AFC.
NASA Astrophysics Data System (ADS)
Saunier, O.; Mathieu, A.; Didier, D.; Tombette, M.; Quélo, D.; Winiarek, V.; Bocquet, M.
2013-06-01
The Chernobyl nuclear accident and more recently the Fukushima accident highlighted that the largest source of error in consequence assessment is the source term, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modeling methods, which combine environmental measurements and atmospheric dispersion models, have proven efficient in assessing source term due to an accidental situation (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2012a; Winiarek et al., 2012). Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012) and some of them also use deposition measurements (Stohl et al., 2012a; Winiarek et al., 2013), but none of them uses dose rate measurements. However, dose rate is the most widespread measurement system, and in the event of a nuclear accident, these data constitute the main source of measurements of the plume and radioactive fallout during releases. This paper proposes a method to use dose rate measurements as part of an inverse modeling approach to assess source terms. The method is proven efficient and reliable when applied to the accident at the Fukushima Daiichi nuclear power plant (FD-NPP). The emissions for the eight main isotopes 133Xe, 134Cs, 136Cs, 137Cs, 137mBa, 131I, 132I and 132Te have been assessed. Accordingly, 103 PBq of 131I, 35.5 PBq of 132I, 15.5 PBq of 137Cs and 12 100 PBq of noble gases were released. The events at FD-NPP (such as venting, explosions, etc.) known to have caused atmospheric releases are well identified in the retrieved source term. The estimated source term is validated by comparing simulations of atmospheric dispersion and deposition with environmental observations. Overall, the model-measurement agreement is good: 80% of the simulated dose rates are within a factor of 2 of the observed values. Changes in dose rates over time have been overall properly reconstructed, especially
Significant uncertainty exists in the magnitude and variability of ammonia (NH3) emissions. NH3 emissions are needed as input for air quality modeling of aerosols and deposition of nitrogen compounds. Approximately 85% of NH3 emissions are estimated to come from agricultural ...
Jonsson, Ulf; Lindahl, Olof; Andersson, Britt
2014-12-01
To gain an understanding of the high-frequency elastic properties of silicone rubber, a finite element model of a cylindrical piezoelectric element, in contact with a silicone rubber disk, was constructed. The frequency-dependent elastic modulus of the silicone rubber was modeled by a four-parameter fractional derivative viscoelastic model in the 100 to 250 kHz frequency range. The calculations were carried out in the range of the first radial resonance frequency of the sensor. At the resonance, the hyperelastic effect of the silicone rubber was modeled by a hyperelastic compensating function. The calculated response was matched to the measured response by using the transitional peaks in the impedance spectrum that originate from the switching of standing Lamb wave modes in the silicone rubber. To validate the results, the impedance responses of three 5-mm-thick silicone rubber disks, with different radial lengths, were measured. The calculated and measured transitional frequencies have been compared in detail. The comparison showed very good agreement, with average relative differences of 0.7%, 0.6%, and 0.7% for the silicone rubber samples with radial lengths of 38.0, 21.4, and 11.0 mm, respectively. The average complex elastic moduli of the samples were (0.97 + 0.009i) GPa at 100 kHz and (0.97 + 0.005i) GPa at 250 kHz.
Yao, Jie; Lesage, Anne-Cécile; Hussain, Fazle; Bodmann, Bernhard G.; Kouri, Donald J.
2014-12-15
The reversion of the Born-Neumann series of the Lippmann-Schwinger equation is one of the standard ways to solve the inverse acoustic scattering problem. One limitation of the current inversion methods based on the reversion of the Born-Neumann series is that the velocity potential should have compact support. However, this assumption cannot be satisfied in certain cases, especially in seismic inversion. Based on the idea of distorted wave scattering, we explore an inverse scattering method for velocity potentials without compact support. The strategy is to decompose the actual medium as a known single interface reference medium, which has the same asymptotic form as the actual medium, and a perturbative scattering potential with compact support. After introducing the method to calculate the Green's function for the known reference potential, the inverse scattering series and Volterra inverse scattering series are derived for the perturbative potential. Analytical and numerical examples demonstrate the feasibility and effectiveness of this method. In addition, to ensure stability of the numerical computation, the Lanczos averaging method is employed as a filter to reduce the Gibbs oscillations for the truncated discrete inverse Fourier transform of each order. Our method provides a rigorous mathematical framework for inverse acoustic scattering with a non-compact support velocity potential.
Xia, J.; Miller, R.D.; Xu, Y.
2008-01-01
Inversion of multimode surface-wave data is of increasing interest in the near-surface geophysics community. For a given near-surface geophysical problem, it is essential to understand how well the data, calculated according to a layered-earth model, might match the observed data. A data-resolution matrix is a function of the data kernel (determined by a geophysical model and a priori information applied to the problem), not the data. A data-resolution matrix of high-frequency (>2 Hz) Rayleigh-wave phase velocities, therefore, offers a quantitative tool for designing field surveys and predicting the match between calculated and observed data. We employed a data-resolution matrix to select data that would be well predicted, and found that there are advantages to incorporating higher modes in inversion. The resulting discussion using the data-resolution matrix provides insight into the process of inverting Rayleigh-wave phase velocities with higher-mode data to estimate S-wave velocity structure. Discussion also suggested that each near-surface geophysical target can only be resolved using Rayleigh-wave phase velocities within specific frequency ranges, and higher-mode data are normally more accurately predicted than fundamental-mode data because of restrictions on the data kernel for the inversion system. We used synthetic and real-world examples to demonstrate that data selected with the data-resolution matrix can provide better inversion results and to explain with the data-resolution matrix why incorporating higher-mode data in inversion can provide better results. We also calculated model-resolution matrices in these examples to show the potential of increasing model resolution with selected surface-wave data. © Birkhäuser 2008.
An Efficient Inverse Aerodynamic Design Method For Subsonic Flows
NASA Technical Reports Server (NTRS)
Milholen, William E., II
2000-01-01
Computational Fluid Dynamics based design methods are maturing to the point that they are beginning to be used in the aircraft design process. Many design methods however have demonstrated deficiencies in the leading edge region of airfoil sections. The objective of the present research is to develop an efficient inverse design method which is valid in the leading edge region. The new design method is a streamline curvature method, and a new technique is presented for modeling the variation of the streamline curvature normal to the surface. The new design method allows the surface coordinates to move normal to the surface, and has been incorporated into the Constrained Direct Iterative Surface Curvature (CDISC) design method. The accuracy and efficiency of the design method is demonstrated using both two-dimensional and three-dimensional design cases.
Inverse method for estimating respiration rates from decay time series
NASA Astrophysics Data System (ADS)
Forney, D. C.; Rothman, D. H.
2012-09-01
Long-term organic matter decomposition experiments typically measure the mass lost from decaying organic matter as a function of time. These experiments can provide information about the dynamics of carbon dioxide input to the atmosphere and controls on natural respiration processes. Decay slows down with time, suggesting that organic matter is composed of components (pools) with varied lability. Yet it is unclear how the appropriate rates, sizes, and number of pools vary with organic matter type, climate, and ecosystem. To better understand these relations, it is necessary to properly extract the decay rates from decomposition data. Here we present a regularized inverse method to identify an optimally-fitting distribution of decay rates associated with a decay time series. We motivate our study by first evaluating a standard, direct inversion of the data. The direct inversion identifies a discrete distribution of decay rates, where mass is concentrated in just a small number of discrete pools. It is consistent with identifying the best fitting "multi-pool" model, without prior assumption of the number of pools. However we find these multi-pool solutions are not robust to noise and are over-parametrized. We therefore introduce a method of regularized inversion, which identifies the solution which best fits the data but not the noise. This method shows that the data are described by a continuous distribution of rates, which we find is well approximated by a lognormal distribution, and consistent with the idea that decomposition results from a continuum of processes at different rates. The ubiquity of the lognormal distribution suggests that decay may be simply described by just two parameters: a mean and a variance of log rates. We conclude by describing a procedure that estimates these two lognormal parameters from decay data. Matlab codes for all numerical methods and procedures are provided.
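The basic inverse problem here is linear: the decay curve is a superposition of exponentials, and one solves for a nonnegative distribution over a grid of candidate rates with a regularization penalty. A minimal sketch using ridge-regularized nonnegative least squares (the authors provide Matlab codes; this Python version, its rate grid, and its penalty are illustrative only):

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic decay series from two "pools" with rates 0.2 and 2.0.
t = np.linspace(0.0, 10.0, 40)
g_obs = 0.6 * np.exp(-0.2 * t) + 0.4 * np.exp(-2.0 * t)

# Candidate rates on a log grid; column j of A is exp(-k_j * t).
k = np.logspace(-2, 1, 60)
A = np.exp(-np.outer(t, k))

# Regularized, nonnegative inversion:
# minimize ||A p - g||^2 + lam ||p||^2  subject to  p >= 0,
# via nnls on the augmented system [A; sqrt(lam) I] p = [g; 0].
lam = 1e-5
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(len(k))])
g_aug = np.concatenate([g_obs, np.zeros(len(k))])
p, _ = nnls(A_aug, g_aug)
g_fit = A @ p
```

The recovered p is a (regularized) distribution of decay rates; with noisy data the choice of lam controls how much of the noise, rather than the signal, is fit.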
Inverse method for estimating respiration rates from decay time series
NASA Astrophysics Data System (ADS)
Forney, D. C.; Rothman, D. H.
2012-03-01
Long-term organic matter decomposition experiments typically measure the mass lost from decaying organic matter as a function of time. These experiments can provide information about the dynamics of carbon dioxide input to the atmosphere and controls on natural respiration processes. Decay slows down with time, suggesting that organic matter is composed of components (pools) with varied lability. Yet it is unclear how the appropriate rates, sizes, and number of pools vary with organic matter type, climate, and ecosystem. To better understand these relations, it is necessary to properly extract the decay rates from decomposition data. Here we present a regularized inverse method to identify an optimally-fitting distribution of decay rates associated with a decay time series. We motivate our study by first evaluating a standard, direct inversion of the data. The direct inversion identifies a discrete distribution of decay rates, where mass is concentrated in just a small number of discrete pools. It is consistent with identifying the best fitting "multi-pool" model, without prior assumption of the number of pools. However we find these multi-pool solutions are not robust to noise and are over-parametrized. We therefore introduce a method of regularized inversion, which identifies the solution which best fits the data but not the noise. This method shows that the data are described by a continuous distribution of rates, which we find is well approximated by a lognormal distribution, and consistent with the idea that decomposition results from a continuum of processes at different rates. The ubiquity of the lognormal distribution suggests that decay may be simply described by just two parameters: a mean and a variance of log rates. We conclude by describing a procedure that estimates these two lognormal parameters from decay data. Matlab codes for all numerical methods and procedures are provided.
Linear functional minimization for inverse modeling
Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; Tartakovsky, Daniel M.
2015-06-01
In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
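The structure of the MAP objective above (data misfit plus a Total Variation prior on a piecewise-continuous parameter field) can be illustrated on a 1-D denoising toy problem. This is a generic smoothed-TV sketch with an off-the-shelf minimizer, not the authors' nonquadratic minimization algorithm; the profile, weight, and smoothing constant are invented:

```python
import numpy as np
from scipy.optimize import minimize

# Piecewise-constant "conductivity" profile observed with noise.
rng = np.random.default_rng(0)
m_true = np.concatenate([np.ones(20), 3.0 * np.ones(20)])
d = m_true + 0.2 * rng.standard_normal(m_true.size)

def objective(m, lam=2.0, eps=1e-4):
    misfit = 0.5 * np.sum((m - d) ** 2)          # data misfit
    tv = np.sum(np.sqrt(np.diff(m) ** 2 + eps))  # smoothed TV penalty
    return misfit + lam * tv

m_map = minimize(objective, d, method="L-BFGS-B").x
```

Unlike a quadratic (smoothness) penalty, the TV term suppresses small oscillations while preserving the sharp jump between the two regions, which is why it suits intrusion-like conductivity structures.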
Hybrid Adaptive Flight Control with Model Inversion Adaptation
NASA Technical Reports Server (NTRS)
Nguyen, Nhan
2011-01-01
This study investigates a hybrid adaptive flight control method as a design possibility for a flight control system that can enable an effective adaptation strategy to deal with off-nominal flight conditions. The hybrid adaptive control blends both direct and indirect adaptive control in a model inversion flight control architecture. The blending of both direct and indirect adaptive control provides a much more flexible and effective adaptive flight control architecture than that with either direct or indirect adaptive control alone. The indirect adaptive control is used to update the model inversion controller by an on-line parameter estimation of uncertain plant dynamics based on two methods. The first parameter estimation method is an indirect adaptive law based on the Lyapunov theory, and the second method is a recursive least-squares indirect adaptive law. The model inversion controller is therefore made to adapt to changes in the plant dynamics due to uncertainty. As a result, the modeling error is reduced that directly leads to a decrease in the tracking error. In conjunction with the indirect adaptive control that updates the model inversion controller, a direct adaptive control is implemented as an augmented command to further reduce any residual tracking error that is not entirely eliminated by the indirect adaptive control.
Nonlinear inversion of pre-stack seismic data using variable metric method
NASA Astrophysics Data System (ADS)
Zhang, Fanchang; Dai, Ronghuo
2016-06-01
At present, the routine method to perform AVA (Amplitude Variation with incident Angle) inversion is based on the assumption that the ratio of S-wave velocity to P-wave velocity γ is a constant. However, this simplified assumption does not always hold, and it is necessary to use a nonlinear inversion method to solve the problem. Based on Bayesian theory, the objective function for nonlinear AVA inversion is established and γ is considered as an unknown model parameter. Then, the variable metric method with a strategy of periodically varying the starting point is used to solve the nonlinear AVA inverse problem. The proposed method keeps the inverted reservoir parameters close to the actual solution and has been tested on both synthetic and real data. The inversion results suggest that the proposed method can solve the nonlinear inverse problem and obtain accurate solutions even without knowledge of γ.
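"Variable metric" is the classical name for quasi-Newton methods such as BFGS, which build an approximate inverse Hessian from successive gradients instead of computing it exactly. A minimal sketch on a standard nonconvex test function standing in for the AVA misfit (illustrative only; not the authors' objective or restart strategy):

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function as a stand-in for a nonlinear inversion objective.
def objective(m):
    return (1.0 - m[0]) ** 2 + 100.0 * (m[1] - m[0] ** 2) ** 2

# BFGS is the classic variable metric method: the search metric
# (approximate inverse Hessian) is updated at every iteration.
result = minimize(objective, x0=np.array([-1.2, 1.0]), method="BFGS")
```

The paper's periodic restarting of the starting point is one common way to reduce the risk of such a local method stalling far from the global minimum.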
NASA Astrophysics Data System (ADS)
Saunier, O.; Mathieu, A.; Didier, D.; Tombette, M.; Quélo, D.; Winiarek, V.; Bocquet, M.
2013-11-01
The Chernobyl nuclear accident, and more recently the Fukushima accident, highlighted that the largest source of error in consequence assessment is the source term, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modeling methods, which combine environmental measurements and atmospheric dispersion models, have proven efficient in assessing the source term of an accidental release (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2012a; Winiarek et al., 2012). Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012) and some of them also use deposition measurements (Stohl et al., 2012a; Winiarek et al., 2014). Some studies have used dose rate measurements (Duranova et al., 1999; Astrup et al., 2004; Drews et al., 2004; Tsiouri et al., 2012), but none of the methods developed was applied to assess the complex source term of a real accident situation like the Fukushima accident. However, dose rate measurements are generated by the most widespread measurement system, and in the event of a nuclear accident these data constitute the main source of measurements of the plume and radioactive fallout during releases. This paper proposes a method to use dose rate measurements as part of an inverse modeling approach to assess source terms. The method is proven efficient and reliable when applied to the accident at the Fukushima Daiichi Nuclear Power Plant (FD-NPP). The emissions for the eight main isotopes 133Xe, 134Cs, 136Cs, 137Cs, 137mBa, 131I, 132I and 132Te have been assessed. Accordingly, 105.9 PBq of 131I, 35.8 PBq of 132I, 15.5 PBq of 137Cs and 12,134 PBq of noble gases were released. The events at FD-NPP (such as venting, explosions, etc.) known to have caused atmospheric releases are well identified in the retrieved source term. The estimated source term is validated by comparing simulations of atmospheric dispersion and deposition with
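The source-term estimation problem described here is, at its core, linear: predicted measurements equal a source-receptor sensitivity matrix times the vector of release rates. The following toy sketch illustrates that structure under strong simplifying assumptions, using nonnegative least squares to respect the physical positivity of release rates. The sensitivity matrix here is random noise standing in for dispersion-model output; nothing below reproduces the paper's actual operator or data.

```python
import numpy as np
from scipy.optimize import nnls

# toy linear source-term inversion: y = G @ q, where G would hold
# dispersion-model sensitivities of each station/time to each release interval
rng = np.random.default_rng(2)
G = np.abs(rng.standard_normal((40, 6)))             # hypothetical sensitivities
q_true = np.array([0.0, 3.0, 8.0, 0.0, 1.5, 0.0])    # release rates per interval
y = G @ q_true                                       # synthetic dose-rate "data"

# nonnegative least squares: release rates cannot be negative
q_est, resid_norm = nnls(G, y)
```

With noiseless data and a full-rank sensitivity matrix, the nonnegative solution recovers the release history exactly, including the zero-release intervals; real applications add noise models and regularization on top of this skeleton.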
Kong, Jude D; Jin, Chaochao; Wang, Hao
2015-12-01
In this paper, we improve the classic SEIR model by separating the juvenile group and the adult group to better describe the dynamics of childhood infectious diseases. We perform stability analysis to study the asymptotic dynamics of the new model, and perform sensitivity analysis to uncover the relative importance of the parameters on infection. The transmission rate is a key parameter in controlling the spread of an infectious disease as it directly determines the disease incidence. However, it is essentially impossible to measure the transmission rate for certain infectious diseases. We introduce an inverse method for our new model, which can extract the time-dependent transmission rate from either prevalence data or incidence data in existing open databases. Pre- and post-vaccination measles data sets from Liverpool and London are applied to estimate the time-varying transmission rate. From the Fourier transform of the transmission rate of Liverpool and London, we observe two spectral peaks with frequencies 1/year and 3/year. These dominant frequencies are robust with respect to different initial values. The dominant 1/year frequency is consistent with common belief that measles is driven by seasonal factors such as environmental changes and immune system changes and the 3/year frequency indicates the superiority of school contacts in driving measles transmission over other seasonal factors. Our results show that in coastal cities, the main modulator of the transmission of measles virus, paramyxovirus, is school seasons. On the other hand, in landlocked cities, both weather and school seasons have almost the same influence on paramyxovirus transmission.
Full Waveform Inversion Using the Adjoint Method for Earthquake Kinematics Inversion
NASA Astrophysics Data System (ADS)
Tago Pacheco, J.; Metivier, L.; Brossier, R.; Virieux, J.
2014-12-01
Extracting the information contained in seismograms for a better description of the Earth's structure and evolution is often based on only selected attributes of these signals. Full Waveform Inversion, which exploits the entire seismogram and is based on an adjoint estimation of the gradient and Hessian operators, has been recognized as a high-resolution imaging technique. Most earthquake kinematics inversions are still based on estimating Fréchet derivatives for the gradient operator computation in linearized optimization. One may wonder about the benefit of the adjoint formulation, which avoids the estimation of these derivatives. Recently, Somala et al. (submitted) have detailed the adjoint method for earthquake kinematics inversion starting from the second-order wave equation in 3D media, using a conjugate gradient method for the optimization procedure. We explore a similar adjoint formulation based on the first-order wave equations while using different optimization schemes. Indeed, for earthquake kinematics inversion, the model space is the slip-rate spatio-temporal history over the fault. Seismograms obtained from a dislocation rupture simulation are linearly linked to this slip-rate distribution. Therefore, we introduce a simple systematic procedure based on a Lagrangian formulation of the adjoint method in the linear problem of earthquake kinematics. We have developed both the gradient estimation using the adjoint formulation and the Hessian influence using the second-order adjoint formulation (Metivier et al., 2013, 2014). Since earthquake kinematics is a linear problem, the minimization problem is quadratic; hence, only one solution of the Newton equations is needed when the Hessian is taken into account. Moreover, a formal uncertainty estimation over the slip-rate distribution can be deduced from this Hessian analysis. On simple synthetic examples for antiplane kinematic rupture configurations in a 2D medium, we illustrate the properties of
Comparative study of inversion methods of three-dimensional NMR and sensitivity to fluids
NASA Astrophysics Data System (ADS)
Tan, Maojin; Wang, Peng; Mao, Keyu
2014-04-01
Three-dimensional nuclear magnetic resonance (3D NMR) logging can simultaneously measure transverse relaxation time (T2), longitudinal relaxation time (T1), and diffusion coefficient (D). These parameters can be used to distinguish fluids in porous reservoirs. For 3D NMR logging, the relaxation mechanism and the mathematical model, a Fredholm equation, are introduced, and the inversion methods, including Singular Value Decomposition (SVD), Butler-Reeds-Dawson (BRD), and Global Inversion (GI), are studied in detail. In a simulation test, a multi-echo CPMG activation sequence is first designed, echo trains of ideal fluid models are synthesized, an inversion algorithm is then carried out on these synthetic echo trains, and finally a T2-T1-D map is built. Furthermore, the SVD, BRD, and GI methods are each applied to the same fluid model, and their computing speed and inversion accuracy are compared and analyzed. When the optimal inversion method and matrix dimension are applied, the inversion results are in good agreement with the assumed fluid model, which indicates that the inversion method of 3D NMR is applicable to fluid typing of oil and gas reservoirs. Additionally, forward modeling and inversion tests are made in oil-water and gas-water models, respectively, and the sensitivity to the fluids in different magnetic field gradients is examined in detail. The effect of the magnetic gradient on fluid typing in 3D NMR logging is studied and the optimal magnetic gradient is chosen.
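The inversion of a Fredholm relaxation kernel can be sketched in one dimension with truncated-SVD regularization, the same ingredient used by the SVD method discussed above. The kernel, T2 grid, relaxation distribution, and noise level below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# toy 1D analogue of NMR T2 inversion: echo(t) = sum_j f(T2_j) * exp(-t/T2_j)
t = np.linspace(0.001, 2.0, 200)          # echo times (s), hypothetical spacing
T2 = np.logspace(-3, 0.5, 40)             # T2 bins (s)
K = np.exp(-t[:, None] / T2[None, :])     # Fredholm kernel, ill-conditioned

# assumed single log-normal T2 peak as the "true" relaxation distribution
f_true = np.exp(-0.5*((np.log10(T2) + 1.0)/0.15)**2)
rng = np.random.default_rng(0)
echo = K @ f_true + 1e-3*rng.standard_normal(len(t))   # noisy synthetic echo

# truncated SVD regularization: discard singular values below a cutoff
U, s, Vt = np.linalg.svd(K, full_matrices=False)
r = int(np.sum(s > 1e-2 * s[0]))          # effective rank at the noise level
f_est = Vt[:r].T @ ((U[:, :r].T @ echo) / s[:r])

resid = np.linalg.norm(K @ f_est - echo) / np.linalg.norm(echo)
```

Keeping only the leading singular triplets stabilizes the inversion against the exponential kernel's tiny singular values; the truncated solution still reproduces the measured echo train to within the noise level.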
An inverse dynamic method yielding flexible manipulator state trajectories
NASA Technical Reports Server (NTRS)
Kwon, Dong-Soo; Book, Wayne J.
1990-01-01
An inverse dynamic equation for a flexible manipulator is derived in state form. By dividing the inverse system into a causal part and an anticausal part, the torque for a given end-point trajectory is calculated in the time domain, along with the trajectories of all state variables. Open-loop control using the inverse dynamic method shows excellent results in simulation. For practical applications, a control strategy that adds feedback tracking control to the inverse dynamic feedforward control is illustrated, and good experimental results are presented.
Inverse method for estimating shear stress in machining
NASA Astrophysics Data System (ADS)
Burns, T. J.; Mates, S. P.; Rhorer, R. L.; Whitenton, E. P.; Basak, D.
2016-01-01
An inverse method is presented for estimating shear stress in the work material in the region of chip-tool contact along the rake face of the tool during orthogonal machining. The method is motivated by a model of heat generation in the chip, which is based on a two-zone contact model for friction along the rake face, and an estimate of the steady-state flow of heat into the cutting tool. Given an experimentally determined discrete set of steady-state temperature measurements along the rake face of the tool, it is shown how to estimate the corresponding shear stress distribution on the rake face, even when no friction model is specified.
Determination of transient fluid temperature using the inverse method
NASA Astrophysics Data System (ADS)
Jaremkiewicz, Magdalena
2014-03-01
This paper proposes an inverse method to obtain accurate measurements of the transient temperature of a fluid; the method is presented for a unit step and a linear rise of temperature. For this purpose, the thermometer housing is modelled as a full cylindrical element (with no inner hole), divided into four control volumes. Using the control volume method, heat balance equations can be written for the nodes of each control volume. Thus, for a known temperature in the middle of the cylindrical element, the temperature distribution in three nodes and the heat flux at the outer surface are obtained. For a known value of the heat transfer coefficient, the temperature of the fluid can be calculated using the boundary condition. Additionally, results of experimental research are presented. The research was carried out during the start-up of an experimental installation, which comprises a steam generator unit, an installation for boiler feed water treatment, a tray-type deaerator, a blow-down flash vessel for heat recovery, a steam pressure reduction station, a boiler control system and a steam header made of martensitic high-alloy P91 steel. Based on temperature measurements made in the steam header, the inverse method yielded accurate measurements of the transient steam temperature. The results of the calculations are compared with the real temperature of the steam, which can be determined for a known pressure and enthalpy.
An inverse method with regularity condition for transonic airfoil design
NASA Technical Reports Server (NTRS)
Zhu, Ziqiang; Xia, Zhixun; Wu, Liyi
1991-01-01
It is known from Lighthill's exact solution of the incompressible inverse problem that in the inverse design problem, the surface pressure distribution and the free stream speed cannot both be prescribed independently. This implies the existence of a constraint on the prescribed pressure distribution. The same constraint exists at compressible speeds. Presented here is an inverse design method for transonic airfoils. In this method, the target pressure distribution contains a free parameter that is adjusted during the computation to satisfy the regularity condition. Some design results are presented in order to demonstrate the capabilities of the method.
Evaluation of simplified evaporation duct refractivity models for inversion problems
NASA Astrophysics Data System (ADS)
Saeger, J. T.; Grimes, N. G.; Rickard, H. E.; Hackett, E. E.
2015-10-01
To assess a radar system's instantaneous performance on any given day, detailed knowledge of the meteorological conditions is required because atmospheric refractivity depends on thermodynamic properties such as temperature, water vapor, and pressure. Because of the significant challenges involved in obtaining these data, recent efforts have focused on developing methods to obtain the refractivity structure inversely, using radar measurements and radar wave propagation models. Such inversion techniques generally use simplified refractivity models in order to reduce the parameter space of the solution. Here the accuracy of three simple refractivity models is examined for the case of an evaporation duct. The models utilize the basic log-linear shape classically associated with evaporation ducts, but each model depends on various parameters that affect different aspects of the profile, such as its shape and duct height. The model parameters are optimized using radiosonde data, and their performance is compared to these atmospheric measurements. The optimized models and data are also used to predict propagation using a parabolic equation code with the refractivity prescribed by the models and the measured data, and the resulting propagation patterns are compared. The results of this study suggest that the best log-linear model formulation for an inversion problem is a two-layer model that contains at least three parameters: duct height, duct curvature, and mixed-layer slope. This functional form permits a reasonably accurate fit to atmospheric measurements and embodies the key features of the profile required for correct propagation prediction with as few parameters as possible.
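The classic log-linear evaporation-duct shape can be written down in a few lines; the duct height then appears as the height of the minimum in modified refractivity. A sketch with commonly quoted constants (the neutral lapse value 0.125 M-units/m, the roughness length, and the surface value are assumptions for illustration, not values from this study):

```python
import numpy as np

def evap_duct_M(z, hd, c0=0.125, z0=1.5e-4, M0=330.0):
    """Log-linear evaporation-duct modified refractivity (M-units).
    hd: duct height (m); c0: neutral lapse (M-units/m); z0: roughness (m)."""
    return M0 + c0*(z - hd*np.log((z + z0)/z0))

z = np.linspace(0.01, 40.0, 4000)   # heights above the sea surface (m)
hd = 12.0                            # assumed duct height (m)
M = evap_duct_M(z, hd)
z_min = z[np.argmin(M)]              # the M-minimum marks the duct height
```

Setting dM/dz = 0 gives the minimum at z = hd - z0, essentially the duct height itself; a two-layer variant of the kind recommended above would splice a linear mixed-layer slope onto this profile above the duct region.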
Comparison of iterative inverse coarse-graining methods
NASA Astrophysics Data System (ADS)
Rosenberger, David; Hanke, Martin; van der Vegt, Nico F. A.
2016-10-01
Deriving potentials for coarse-grained Molecular Dynamics (MD) simulations is frequently done by solving an inverse problem. Methods like Iterative Boltzmann Inversion (IBI) or Inverse Monte Carlo (IMC) have been widely used to solve this problem. The solution obtained by application of these methods guarantees a match in the radial distribution function (RDF) between the underlying fine-grained system and the derived coarse-grained system. However, these methods often fail in reproducing thermodynamic properties. To overcome this deficiency, additional thermodynamic constraints such as pressure or Kirkwood-Buff integrals (KBI) may be added to these methods. In this communication we test the ability of these methods to converge to a known solution of the inverse problem. With this goal in mind we have studied a binary mixture of two simple Lennard-Jones (LJ) fluids, in which no actual coarse-graining is performed. We further discuss whether full convergence is actually needed to achieve thermodynamic representability.
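The core of Iterative Boltzmann Inversion is a pointwise potential update driven by the mismatch between the simulated and target RDFs, V_{k+1}(r) = V_k(r) + kT ln[g_k(r)/g*(r)]. A minimal sketch of that update follows; the RDFs are toy curves and no actual MD is run, so this only illustrates the update rule and its fixed point, not the full iterative workflow:

```python
import numpy as np

kT = 1.0   # energies in units of kT

def ibi_update(V, g_sim, g_target, alpha=1.0):
    """One Iterative Boltzmann Inversion step:
       V_{k+1}(r) = V_k(r) + alpha * kT * ln(g_k(r) / g*(r))."""
    mask = (g_sim > 1e-8) & (g_target > 1e-8)   # avoid log(0) in the core
    dV = np.zeros_like(V)
    dV[mask] = alpha * kT * np.log(g_sim[mask] / g_target[mask])
    return V + dV

# toy target RDF with one coordination peak, and the usual PMF initial guess
r = np.linspace(0.8, 3.0, 200)
g_target = 1.0 + 0.5*np.exp(-(r - 1.1)**2/0.02)
V0 = -kT*np.log(g_target)            # potential of mean force as V_0

# if the simulated RDF overshoots the target, the update raises the potential
g_sim = 1.2*g_target
V1 = ibi_update(V0, g_sim, g_target)
```

Two properties are visible directly: the target RDF is a fixed point (no update when g_k = g*), and overstructured regions become more repulsive. Convergence to a *thermodynamically* representative potential is exactly the issue the abstract raises, and requires the additional pressure or KBI constraints it mentions.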
Comparison of Optimal Design Methods in Inverse Problems.
Banks, H T; Holm, Kathleen; Kappel, Franz
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criterion based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate the ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29].
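The role of the FIM in optimal design can be illustrated on the Verhulst-Pearl logistic model cited above: sensitivities of the model output with respect to the parameters form the FIM, and a D-optimal-style criterion (the determinant of the FIM) separates informative sampling grids from uninformative ones. A sketch with assumed parameter values and finite-difference sensitivities (this illustrates only the FIM-based criterion, not the paper's Prohorov-metric framework):

```python
import numpy as np

def logistic(t, r, K, x0=1.0):
    """Verhulst-Pearl logistic growth curve."""
    return K / (1.0 + (K/x0 - 1.0)*np.exp(-r*t))

def fim(times, r=0.5, K=100.0, sigma=1.0, h=1e-6):
    """Fisher Information Matrix for theta = (r, K), with sensitivities
    approximated by central finite differences."""
    S = np.column_stack([
        (logistic(times, r+h, K) - logistic(times, r-h, K)) / (2*h),
        (logistic(times, r, K+h) - logistic(times, r, K-h)) / (2*h)])
    return S.T @ S / sigma**2

spread = np.linspace(1.0, 15.0, 8)    # samples across the growth phase
lumped = np.full(8, 30.0)             # all samples after saturation
d_spread = np.linalg.det(fim(spread))
d_lumped = np.linalg.det(fim(lumped))
```

Sampling across the growth phase yields a well-conditioned FIM (large determinant, hence small asymptotic standard errors), whereas repeated sampling after saturation makes the FIM essentially rank-one: the growth rate r is then nearly unidentifiable.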
Linear functional minimization for inverse modeling
Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; ...
2015-06-01
In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
Geological realism in hydrogeological and geophysical inverse modeling: A review
NASA Astrophysics Data System (ADS)
Linde, Niklas; Renard, Philippe; Mukerji, Tapan; Caers, Jef
2015-12-01
Scientific curiosity, exploration of georesources and environmental concerns are pushing the geoscientific research community toward subsurface investigations of ever-increasing complexity. This review explores various approaches to formulate and solve inverse problems in ways that effectively integrate geological concepts with geophysical and hydrogeological data. Modern geostatistical simulation algorithms can produce multiple subsurface realizations that are in agreement with conceptual geological models and statistical rock physics can be used to map these realizations into physical properties that are sensed by the geophysical or hydrogeological data. The inverse problem consists of finding one or an ensemble of such subsurface realizations that are in agreement with the data. The most general inversion frameworks are presently often computationally intractable when applied to large-scale problems and it is necessary to better understand the implications of simplifying (1) the conceptual geological model (e.g., using model compression); (2) the physical forward problem (e.g., using proxy models); and (3) the algorithm used to solve the inverse problem (e.g., Markov chain Monte Carlo or local optimization methods) to reach practical and robust solutions given today's computer resources and knowledge. We also highlight the need to not only use geophysical and hydrogeological data for parameter estimation purposes, but also to use them to falsify or corroborate alternative geological scenarios.
An adaptive subspace trust-region method for frequency-domain seismic full waveform inversion
NASA Astrophysics Data System (ADS)
Zhang, Huan; Li, Xiaofan; Song, Hanjie; Liu, Shaolin
2015-05-01
Full waveform inversion is currently considered a promising seismic imaging method for obtaining high-resolution, quantitative images of the subsurface. It is a nonlinear, ill-posed inverse problem, and the main difficulty preventing its widespread application to real data is sensitivity to incorrect initial models and noisy data. Local optimization approaches, including Newton's method and gradient methods, tend to converge to local minima, while global optimization algorithms such as simulated annealing are computationally costly. To address this issue, in this paper we investigate the possibility of applying the trust-region method to the full waveform inversion problem. Unlike line search methods, trust-region methods force the new trial step to lie within a certain neighborhood of the current iterate. Theoretically, trust-region methods are reliable and robust, with very strong convergence properties. The capability of this inversion technique is tested with the synthetic Marmousi velocity model and the SEG/EAGE Salt model. Numerical examples demonstrate that the adaptive subspace trust-region method can provide solutions closer to the global minimum than the conventional approximate Hessian approach and the L-BFGS method, with a higher convergence rate. In addition, the match between the inverted model and the true model remains excellent even when the initial model deviates far from the true model. Inversion results with noisy data also exhibit the remarkable capability of the adaptive subspace trust-region method for low signal-to-noise data. These promising numerical results suggest that the adaptive subspace trust-region method is suitable for full waveform inversion, as it has stronger convergence and a higher convergence rate.
Matrix-inversion method: Applications to Möbius inversion and deconvolution
NASA Astrophysics Data System (ADS)
Xie, Qian; Chen, Nan-Xian
1995-12-01
The purpose of this paper is threefold. The first is to show the matrix inversion method as a joint basis for the inversion of two important transforms: the Möbius and Laplace transforms. It is found that the Möbius transform is related to a multiplicative operator while the Laplace transform is related to an additive operator. The second is to show that the matrix inversion method is a useful tool for inverse problems not only in statistical physics but also in applied physics, by means of adding two other applications: the derivation of the Fuoss-Kirkwood formulas for relaxation spectra in studies of anelasticity and dielectrics, and the reconstruction of a real signal in signal processing. The third is to indicate the potential of the matrix inversion method as a rough algorithm for numerical solution of the convolution integral equation. The numerical examples given include the inversion of the Laplace transform and signal reconstruction with a Gaussian point spread kernel. (c) 1995 The American Physical Society
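The matrix view of Möbius inversion is easy to demonstrate numerically: the divisor-sum transform g(n) = Σ_{d|n} f(d) is a lower-triangular 0-1 matrix acting on f, and the entries of its matrix inverse are μ(n/d). A small sketch of this correspondence (the truncation size N is arbitrary):

```python
import numpy as np

N = 12
# summation matrix of the divisor (Mobius) transform: g = A f, A[n,d] = 1 if d|n
A = np.array([[1 if n % d == 0 else 0 for d in range(1, N+1)]
              for n in range(1, N+1)], dtype=float)

# lower triangular with unit diagonal, so the inverse has exact integer entries
Ainv = np.linalg.inv(A)

def mobius(n):
    """Classical Mobius function via trial division."""
    mu, k, m = 1, 2, n
    while k*k <= m:
        if m % k == 0:
            m //= k
            if m % k == 0:
                return 0          # squared prime factor
            mu = -mu
        k += 1
    return -mu if m > 1 else mu

# column d = 1 of the inverse matrix reproduces mu(n)
recovered = [int(round(Ainv[n-1, 0])) for n in range(1, N+1)]
expected = [mobius(n) for n in range(1, N+1)]
```

Inverting the transform matrix thus *is* Möbius inversion, which is the sense in which the matrix inversion method unifies it with the (additive-operator) Laplace case.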
Fast 3D inversion of airborne gravity-gradiometry data using Lanczos bidiagonalization method
NASA Astrophysics Data System (ADS)
Meng, Zhaohai; Li, Fengting; Zhang, Dailei; Xu, Xuechun; Huang, Danian
2016-09-01
We developed a new fast inversion method to process and interpret airborne gravity gradiometry data, based on the Lanczos bidiagonalization algorithm. Here, we describe the application of this new 3D gravity gradiometry inversion method to recover a subsurface density distribution model from airborne measured gravity gradiometry anomalies. For this purpose, the survey area is divided into a large number of rectangular cells, each possessing a constant unknown density. The solution of the large linear gravity gradiometry system is an ill-posed problem, and conventional smooth inversion methods are considerably time-consuming. We demonstrate that Lanczos bidiagonalization is an appropriate algorithm for solving the Tikhonov-regularized system at low computational cost. Lanczos bidiagonalization replaces the very large gravity gradiometry forward-modeling matrices with low-rank approximations, which considerably reduces the running time of the inversion. We also use a weighted generalized cross validation method to choose an appropriate Tikhonov parameter and improve the inversion results. The inversion incorporates a model norm that allows us to control the smoothness and depth of the solution; in addition, the model norm counteracts the natural decay of the kernels, which concentrate at shallow depths. The method is applied to noise-contaminated synthetic gravity gradiometry data to demonstrate its suitability for large 3D gravity gradiometry data inversion. The airborne gravity gradiometry data from the Vinton Salt Dome, USA, were considered as a case study. The validity of the new method on real data is discussed with reference to the Vinton Dome inversion result. The intermediate density values in the constructed model coincide well with previous results and geological information, demonstrating the validity of the gravity gradiometry inversion method.
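Golub-Kahan (Lanczos) bidiagonalization reduces a matrix A to a small lower-bidiagonal B with A V = U B, so a Tikhonov problem can be solved cheaply in the projected space. A dense toy sketch with full reorthogonalization follows; the dimensions and regularization parameter are illustrative, and with as many steps as columns the projected solution must match the directly computed Tikhonov solution:

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan (Lanczos) bidiagonalization started from b,
    with full reorthogonalization; returns U (m,k+1), B (k+1,k), V (n,k)
    satisfying A @ V = U @ B."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    beta = 0.0
    for j in range(k):
        w = A.T @ U[:, j] - (beta * V[:, j - 1] if j > 0 else 0.0)
        w -= V[:, :j] @ (V[:, :j].T @ w)          # reorthogonalize against V
        alpha = np.linalg.norm(w); V[:, j] = w / alpha
        p = A @ V[:, j] - alpha * U[:, j]
        p -= U[:, :j + 1] @ (U[:, :j + 1].T @ p)  # reorthogonalize against U
        beta = np.linalg.norm(p); U[:, j + 1] = p / beta
        B[j, j], B[j + 1, j] = alpha, beta
    return U, B, V

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 8)); b = rng.standard_normal(20); lam = 0.1

U, B, V = golub_kahan(A, b, 8)
rhs = np.zeros(9); rhs[0] = np.linalg.norm(b)     # U^T b = ||b|| e_1
y = np.linalg.solve(B.T @ B + lam**2 * np.eye(8), B.T @ rhs)
x_lanczos = V @ y                                  # solution in the Krylov basis
x_direct = np.linalg.solve(A.T @ A + lam**2 * np.eye(8), A.T @ b)
```

In practice the point is to stop after k much smaller than the number of cells: B stays tiny while capturing the dominant (low-rank) part of the forward operator, which is the source of the speed-up claimed above.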
Non-cavitating propeller noise modeling and inversion
NASA Astrophysics Data System (ADS)
Kim, Dongho; Lee, Keunhwa; Seong, Woojae
2014-12-01
The marine propeller is the dominant exciter of the hull surface above it, causing high levels of noise and vibration in the ship structure. Recent successful developments have led to non-cavitating propeller designs, so the present focus is on the non-cavitating characteristics of the propeller, such as hydrodynamic noise and the hull excitation it induces. In this paper, an analytic source model of propeller non-cavitating noise, described by longitudinal quadrupoles and dipoles, is suggested based on the propeller hydrodynamics. To find the unknown source parameters, a multi-parameter inversion technique is adopted, using pressure data obtained from a model-scale experiment and pressure field replicas calculated by the boundary element method. The inversion results show that the proposed source model is appropriate for modeling non-cavitating propeller noise. The results of this study can be utilized in the prediction of propeller non-cavitating noise and hull excitation at various stages of design and analysis.
The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method
NASA Astrophysics Data System (ADS)
Voronina, T. A.; Romanenko, A. A.
2016-12-01
Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. The methodology is based on the inversion of remote measurements of water-level data, with wave propagation considered within the scope of linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least-squares inversion using the truncated Singular Value Decomposition method; the result of the numerical process is an r-solution. The proposed method allows one to control the instability of the numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. Applying this methodology to reconstruct the initial waveform of the 2013 Solomon Islands tsunami validates the theoretical conclusions drawn for synthetic data and a model tsunami source: the inversion result depends strongly on data noisiness and on the azimuthal and temporal coverage of recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of available recording stations to use in the inversion process.
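The stabilizing effect of a truncated-SVD r-solution can be seen on a classic ill-posed toy problem: keeping only the leading r singular triplets suppresses the noise amplification that ruins the naive least-squares solution. The Hilbert matrix, truncation level, and noise level below are illustrative stand-ins for the tsunami propagation operator and sea-level data:

```python
import numpy as np

# ill-posed toy problem: the Hilbert matrix is notoriously ill-conditioned
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(3)
b = A @ x_true + 1e-8 * rng.standard_normal(n)   # slightly noisy "data"

U, s, Vt = np.linalg.svd(A)

def r_solution(r):
    """Least-squares solution restricted to the first r singular triplets."""
    return Vt[:r].T @ ((U[:, :r].T @ b) / s[:r])

err_full = np.linalg.norm(r_solution(n) - x_true)   # naive pseudo-inverse
err_r = np.linalg.norm(r_solution(7) - x_true)      # truncated r-solution
```

Even at a noise level of 1e-8, dividing by the smallest singular values blows the naive solution up by many orders of magnitude, while the r-solution stays near the true model; choosing r against the noise floor is exactly the instability control described in the abstract.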
Inverse problems of ultrasound tomography in models with attenuation.
Goncharsky, Alexander V; Romanov, Sergey Y
2014-04-21
We develop efficient methods for solving inverse problems of ultrasound tomography in models with attenuation. We treat the inverse problem as a coefficient inverse problem for unknown coordinate-dependent functions that characterize both the speed cross section and the coefficients of the wave equation describing attenuation in the diagnosed region. We derive exact formulas for the gradient of the residual functional in models with attenuation, and develop efficient algorithms for minimizing the residual functional, with its gradient computed by solving the conjugate problem. These algorithms are easy to parallelize when implemented on supercomputers, allowing the computation time to be reduced by a factor of several hundred compared to a PC. The numerical analysis of model problems shows that it is possible to reconstruct not only the speed cross section, but also the properties of the attenuating medium. We investigate the choice of the initial approximation for the iterative algorithms used to solve the inverse problems. The algorithms considered are primarily meant for the development of ultrasound tomographs for differential diagnosis of breast cancer.
Simple method for the synthesis of inverse patchy colloids
NASA Astrophysics Data System (ADS)
van Oostrum, P. D. J.; Hejazifar, M.; Niedermayer, C.; Reimhult, E.
2015-06-01
Inverse patchy colloids (IPCs) have recently been introduced as a conceptually simple model to study the phase behavior of heterogeneously charged units. This class of patchy particles is called inverse to highlight that the patches repel each other, in contrast to the attractive interactions of conventional patches. IPCs exhibit a complex interplay between attractions and repulsions that depends on their patch size and charge, their relative orientations, and the charge of the substrate below; the resulting wide array of aggregate types that can be formed motivates their fabrication and use as a model system. We present a novel method, not reliant on clean-room facilities and easily scalable, to modify the surface of colloidal particles so as to create two polar regions with a charge opposite to that of the equatorial region. The patch size is characterized by electron microscopy, and the patches are fluorescently labeled so that confocal microscopy can be used to study the phase behavior. We show that the pH can be used to tune the charges of the IPCs, offering a tool to steer their self-assembly.
Methodology for comparison of inverse heat conduction methods
NASA Astrophysics Data System (ADS)
Raynaud, M.; Beck, J. V.
1988-02-01
The inverse heat conduction problem involves the calculation of the surface heat flux from transient measured temperatures inside solids. The deviation of the estimated heat flux from the true heat flux due to stabilization procedures is called the deterministic bias. This paper defines two test problems that show the tradeoff between deterministic bias and sensitivity to measurement errors of inverse methods. For a linear problem, with the statistical assumptions of additive and uncorrelated errors having constant variance and zero mean, the second test case gives the standard deviation of the estimated heat flux. A methodology for the quantitative comparison of deterministic bias and standard deviation of inverse methods is proposed. Four numerical inverse methods are compared.
ERIC Educational Resources Information Center
Brown, Malcolm
2009-01-01
Inversions are fascinating phenomena. They are reversals of the normal or expected order. They occur across a wide variety of contexts. What do inversions have to do with learning spaces? The author suggests that they are a useful metaphor for the process that is unfolding in higher education with respect to education. On the basis of…
Inversion is the Solution to Dispersion: Modeling Tephra Fallout
NASA Astrophysics Data System (ADS)
Connor, C.
2005-12-01
Volcanologists increasingly rely on numerical simulations to understand the dynamics of erupting volcanoes. Mathematical models are often used to explain the geologic processes responsible for eruption deposits found in the geologic record, and to better characterize possible hazards from future volcanic activity. We wish to estimate parameters related to the dynamics of volcanic activity directly from field observations. For example, how well can we estimate the magnitude of an eruption from measurements of tephra deposits? One solution lies in coupling our numerical simulations of volcanic eruption phenomena to inversion methods that search for an optimal set of parameters that explains our observations. Here we use observations of tephra thickness and granulometry from the 1992 eruption of Cerro Negro volcano, Nicaragua, to test the performance of a numerical simulation of tephra fallout. The downhill simplex inversion method is used to search for optimal parameters, including the eruption column height, eruption mass, and wind velocity as a function of elevation above the volcanic vent, that produce deposits best fitting the thickness and grainsize variations observed in the tephra deposit. The computational efficiency of the model is greatly enhanced by parallelizing the numerical model. Through inversion, we estimate the column height and total mass of the eruption as 6500 m ± 750 m and 3.1 × 10^10 kg ± 2.9 × 10^9 kg, respectively. These parameter ranges agree well with observations made during the 1992 Cerro Negro eruption: 7000-7500 m maximum column height and 2.3 × 10^10 kg mass erupted. Parameter uncertainty, reported as one standard deviation from the mean, is estimated using a Monte Carlo method. Inversion techniques such as the downhill simplex method provide an unbiased means of utilizing volcanological observations to evaluate and improve numerical simulations of volcanic activity. Such an approach is essential for evaluating numerical models used
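The downhill simplex (Nelder-Mead) search can be sketched on a deliberately simplified fallout model: a two-parameter exponential thinning law fitted to synthetic thickness "observations". The thinning law, distances, and parameter values are all illustrative, not the advection-diffusion model actually used for Cerro Negro:

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical thinning relation standing in for the full fallout model:
# deposit thickness decays exponentially with distance from the vent
def forward(params, x):
    t0, length = params      # thickness at the vent (cm), e-folding distance (km)
    return t0 * np.exp(-x / length)

x_obs = np.linspace(1.0, 20.0, 15)       # sample sites (km from vent)
true = np.array([50.0, 6.0])             # "unknown" eruption parameters
t_obs = forward(true, x_obs)             # synthetic thickness observations

# sum-of-squares misfit between simulated and observed thicknesses
misfit = lambda p: np.sum((forward(p, x_obs) - t_obs)**2)

# downhill simplex search from a deliberately poor starting guess
res = minimize(misfit, x0=[10.0, 2.0], method='Nelder-Mead',
               options={'maxiter': 2000, 'xatol': 1e-10, 'fatol': 1e-12})
```

The simplex recovers both parameters without any gradient information, which is the property that makes the method attractive when the forward model is a black-box eruption simulation; wrapping the same search in a Monte Carlo loop over perturbed observations yields the uncertainty estimates described above.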
Inverse Modelling of the Kawerau Geothermal Reservoir, NZ
White, S.P.
1995-01-01
In this paper we describe an existing model of the Kawerau geothermal field and attempt to improve this model using inverse modeling techniques. A match of model results to natural-state temperatures and pressures at three reference depths is presented. These are used to form an "objective function" to be minimized by inverse modeling.
Noncoherent matrix inversion methods for Scansar processing
NASA Astrophysics Data System (ADS)
Dendal, Didier
1995-11-01
The aim of this work is to develop algebraic reconstruction techniques for low-resolution SAR imagery, as in the Scansar or QUICKLOOK imaging modes. The traditional reconstruction algorithms are not well suited to low-resolution purposes, since Fourier constraints impose a computational load of the same order as that of the usual SAR azimuthal resolution. Furthermore, range migration balancing is superfluous, as the migration does not cover a tenth of the resolution cell even in the least favorable situations. There are several possibilities for using matrices in the azimuthal direction. The most direct alternative leads to a matrix inversion. Unfortunately, the numerical conditioning of the problem is far from excellent, since each line of the matrix is an image of the antenna radiation pattern, with a shift between two successive lines corresponding to the distance covered by the SAR between two pulse transmissions (a few meters for the ERS1 satellite). We show how it is possible to turn a very ill-conditioned problem into an equivalent one without any risk of divergence, by a technique of successive decimation by two (resolving power doubled at each step). This technique leads to very small square matrices (two lines and two columns), whose good numerical conditioning is certified by a well-known theorem of numerical analysis. The convergence rate of the process depends on the circumstances (mainly the distance between two pulse transmissions) and on the required accuracy, but five or six iterations already give excellent results. The process is applicable at four or five levels (numbers of decimations), which corresponds to initial matrices of 16 by 16 or 32 by 32. The azimuth processing is performed on the basis of the projection-function concept (a tomographic analogy of radar principles). This integrated information results from classical coherent range compression. The aperture synthesis is obtained by non-coherent processing
Inverse models: A necessary next step in ground-water modeling
Poeter, E.P.; Hill, M.C.
1997-01-01
Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best-fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
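A runnable miniature of this workflow (an illustrative log-drawdown model, not the paper's ground-water problem) shows two of the listed benefits: best-fit parameters from nonlinear least-squares regression, and confidence limits from the linearized covariance sigma^2 (J^T J)^-1:

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative sketch: fit drawdown s(t) = a*log(t) + b by nonlinear least
# squares and derive approximate confidence limits from the Jacobian, as
# inverse models do for calibrated parameters.
rng = np.random.default_rng(1)
t = np.linspace(1.0, 100.0, 40)
a_true, b_true = 2.5, 1.0
s_obs = a_true * np.log(t) + b_true + 0.05 * rng.standard_normal(t.size)

def residuals(p):
    a, b = p
    return a * np.log(t) + b - s_obs

fit = least_squares(residuals, x0=[1.0, 0.0])

# Linearized covariance: sigma^2 * (J^T J)^-1 gives parameter variances,
# hence approximate confidence limits on the estimates.
dof = t.size - fit.x.size
sigma2 = np.sum(fit.fun ** 2) / dof
cov = sigma2 * np.linalg.inv(fit.jac.T @ fit.jac)
std_err = np.sqrt(np.diag(cov))
```

The standard errors quantify calibration quality exactly in the sense of benefit (2) above.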
Updated Results for the Wake Vortex Inverse Model
NASA Technical Reports Server (NTRS)
Robins, Robert E.; Lai, David Y.; Delisi, Donald P.; Mellman, George R.
2008-01-01
NorthWest Research Associates (NWRA) has developed an Inverse Model for inverting aircraft wake vortex data. The objective of the inverse modeling is to obtain estimates of the vortex circulation decay and crosswind vertical profiles, using time history measurements of the lateral and vertical position of aircraft vortices. The Inverse Model performs iterative forward model runs using estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Iterations are performed until a user-defined criterion is satisfied. Outputs from an Inverse Model run are the best estimates of the time history of the vortex circulation derived from the observed data, the vertical crosswind profile, and several vortex parameters. The forward model, named SHRAPA, used in this inverse modeling is a modified version of the Shear-APA model, and it is described in Section 2 of this document. Details of the Inverse Model are presented in Section 3. The Inverse Model was applied to lidar-observed vortex data at three airports: FAA acquired data from San Francisco International Airport (SFO) and Denver International Airport (DEN), and NASA acquired data from Memphis International Airport (MEM). The results are compared with observed data. This Inverse Model validation is documented in Section 4. A summary is given in Section 5. A user's guide for the inverse wake vortex model is presented in a separate NorthWest Research Associates technical report (Lai and Delisi, 2007a).
Stress inversion method and analysis of GPS array data
NASA Astrophysics Data System (ADS)
Hori, Muneo; Iinuma, Takeshi; Kato, Teruyuki
2008-01-01
The stress inversion method is developed to find a stress field which satisfies the equation of equilibrium for a body in a state of plane stress. When one stress-strain relation is known and data on the strain distribution on the body and traction along the boundary are provided, the method solves a well-posed problem, which is a linear boundary value problem for Airy's stress function, with the governing equation being the Poisson equation and the boundary conditions being of the Neumann type. The stress inversion method is applied to the Global Positioning System (GPS) array data of the Japanese Islands. The stress increment distribution, which is associated with the displacement increment measured by the GPS array, is computed, and it is found that the distribution is not uniform over the islands and that some regions have a relatively large increment. The elasticity inversion method is developed as an alternative to the stress inversion method; it is based on the assumption of linear elastic deformation with unknown elastic moduli and does not need boundary traction data, which are usually difficult to measure. This method is applied to the GPS array data of a small region in Japan to which the stress inversion method is not applicable. To cite this article: M. Hori et al., C. R. Mecanique 336 (2008).
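The core numerical step in the abstract is a linear boundary value problem: a Poisson equation for an Airy-type stress function. A minimal finite-difference sketch is below; for brevity it uses zero Dirichlet boundary values and a hypothetical right-hand side, whereas the paper's formulation uses Neumann conditions built from boundary tractions:

```python
import numpy as np

# Solve the discrete Poisson equation  Lap(phi) = f  on the unit square
# with zero boundary values, via the standard five-point Laplacian.
n = 20                       # interior grid points per side (small demo grid)
h = 1.0 / (n + 1)
f = np.ones((n, n))          # hypothetical source term

# Assemble the 2-D five-point Laplacian as a dense matrix (fine at this size).
I = np.eye(n)
T = -4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
S = np.eye(n, k=1) + np.eye(n, k=-1)
A = (np.kron(I, T) + np.kron(S, I)) / h**2

phi = np.linalg.solve(A, f.ravel()).reshape(n, n)
```

A production solver would use a sparse factorization, but the structure (governing equation plus boundary data in, stress function out) is the same.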
Comparison of optimal design methods in inverse problems
NASA Astrophysics Data System (ADS)
Banks, H. T.; Holm, K.; Kappel, F.
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
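The link between sampling distribution, Fisher information, and standard errors can be shown on a one-parameter toy model. The exponential-growth model, times, and noise level below are illustrative, not the paper's Verhulst-Pearl example:

```python
import numpy as np

# Fisher information for a scalar-parameter model y(t) = exp(theta*t) with
# i.i.d. Gaussian noise of standard deviation sigma, observed at a chosen
# set of sampling times.
theta = 0.3
sigma = 0.1

def fisher_information(times):
    # Sensitivity dy/dtheta = t * exp(theta * t); scalar FIM = sum(sens^2)/sigma^2
    sens = times * np.exp(theta * times)
    return np.sum(sens ** 2) / sigma ** 2

late = np.linspace(3.0, 5.0, 5)     # sample late, where sensitivity is high
early = np.linspace(0.0, 2.0, 5)    # sample early, where sensitivity is low

# For one parameter, D-optimality (maximize det FIM) reduces to maximizing
# the FIM itself, and the asymptotic standard error is 1/sqrt(FIM).
se_late = 1.0 / np.sqrt(fisher_information(late))
se_early = 1.0 / np.sqrt(fisher_information(early))
```

The design with more informative sampling times yields the smaller standard error, which is exactly the quantity SE-optimal design targets directly.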
Estimating surface acoustic impedance with the inverse method.
Piechowicz, Janusz
2011-01-01
Sound field parameters are predicted with numerical methods in sound control systems, in the acoustic design of buildings, and in sound field simulations. Those methods define the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques were developed; one of them uses two microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary element method, in which estimating the acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily-shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics.
Parallel full-waveform inversion in the frequency domain by the Gauss-Newton method
NASA Astrophysics Data System (ADS)
Zhang, Wensheng; Zhuang, Yuan
2016-06-01
In this paper, we investigate the full-waveform inversion in the frequency domain. We first test the inversion ability of three numerical optimization methods, i.e., the steepest-descent method, the Newton-CG method and the Gauss-Newton method, for a simple model. The results show that the Gauss-Newton method performs well and efficiently. Then numerical computations for a benchmark model named Marmousi model by the Gauss-Newton method are implemented. Parallel algorithm based on message passing interface (MPI) is applied as the inversion is a typical large-scale computational problem. Numerical computations show that the Gauss-Newton method has good ability to reconstruct the complex model.
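The optimizer the abstract finds most efficient can be written in a few lines for a generic nonlinear least-squares problem. The two-parameter exponential model here is a toy stand-in for a waveform misfit, used only to show the Gauss-Newton update:

```python
import numpy as np

# Gauss-Newton for min ||r(p)||^2 with model y = p0 * exp(-p1 * t).
rng = np.random.default_rng(2)
t = np.linspace(0.0, 4.0, 50)
p_true = np.array([2.0, 0.8])
y = p_true[0] * np.exp(-p_true[1] * t) + 0.01 * rng.standard_normal(t.size)

def residual(p):
    return p[0] * np.exp(-p[1] * t) - y

def jacobian(p):
    e = np.exp(-p[1] * t)
    return np.column_stack([e, -p[0] * t * e])

p = np.array([1.0, 0.5])
for _ in range(20):
    r, J = residual(p), jacobian(p)
    # Gauss-Newton step: solve the normal equations  J^T J dp = -J^T r
    dp = np.linalg.solve(J.T @ J, -J.T @ r)
    p = p + dp
    if np.linalg.norm(dp) < 1e-10:
        break
```

Unlike steepest descent, each step uses the approximate Hessian J^T J, which is what gives the method its fast convergence on well-scaled problems.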
The inverse gravimetric problem in gravity modelling
NASA Technical Reports Server (NTRS)
Sanso, F.; Tscherning, C. C.
1989-01-01
One of the main purposes of geodesy is to determine the gravity field of the Earth in the space outside its physical surface. This purpose can be pursued without any particular knowledge of the internal density, even if the exact shape of the physical surface of the Earth is not known, though this seems to entangle the two domains, as in the old Stokes theory before the appearance of Molodensky's approach. Nevertheless, even when large, dense and homogeneous data sets are available, it was always recognized that subtracting from the gravity field the effect of the outer layer of the masses (topographic effect) yields a much smoother field. This is obviously more important when the data set is sparse, so that any smoothing of the gravity field helps in interpolating between the data without raising the modeling error. This approach is generally followed because it has become very cheap in terms of computing time since the appearance of spectral techniques. The mathematical description of the Inverse Gravimetric Problem (IGP) is dominated mainly by two principles, which in loose terms can be formulated as follows: the knowledge of the external gravity field determines mainly the lateral variations of the density; and the deeper the density anomaly giving rise to a gravity anomaly, the more improperly posed is the problem of recovering the former from the latter. The statistical relation between rho and n (and its inverse) is also investigated in its general form, proving that degree cross-covariances have to be introduced to describe the behavior of rho. The problem of the simultaneous estimate of a spherical anomalous potential and of the external, topographic masses is addressed, criticizing the choice of the mixed collocation approach.
Parameter Selection Methods in Inverse Problem Formulation
2010-11-03
therapy levels, with u(t) = 0 for fully off and u(t) = 1, for fully on. Although HIV treatment is nearly always administered as combination therapy...Davidian, and E.S. Rosenberg, Model fitting and predic- tion with HIV treatment interruption data, CRSC-TR05-40, NCSU, October 2005; Bull. Math
Voxel inversion of airborne electromagnetic data for improved model integration
NASA Astrophysics Data System (ADS)
Fiandaca, Gianluca; Auken, Esben; Kirkegaard, Casper; Vest Christiansen, Anders
2014-05-01
Inversion of electromagnetic data has migrated from single-site interpretations to inversions of entire surveys using spatial constraints to obtain geologically reasonable results. However, the model space is usually linked to the actual observation points. For airborne electromagnetic (AEM) surveys, the spatial discretization of the model space reflects the flight lines. On the contrary, geological and groundwater models most often refer to a regular voxel grid, not correlated to the geophysical model space, and the geophysical information has to be relocated for integration in (hydro)geological models. We have developed a new geophysical inversion algorithm working directly in a voxel grid disconnected from the actual measuring points, which then allows for directly informing geological/hydrogeological models. The new voxel model space defines the soil properties (like resistivity) on a set of nodes, and the distribution of the soil properties is computed everywhere by means of an interpolation function (e.g. inverse distance or kriging). Given this definition of the voxel model space, the 1D forward responses of the AEM data are computed as follows: 1) a 1D model subdivision, in terms of model thicknesses, is defined for each 1D data set, creating "virtual" layers; 2) the "virtual" 1D models at the sounding positions are finalized by interpolating the soil properties (the resistivity) at the center of the "virtual" layers; 3) the forward response is computed in 1D for each "virtual" model. We tested the new inversion scheme on an AEM survey carried out with the SkyTEM system close to Odder, Denmark. The survey comprises 106054 dual-mode AEM soundings and covers an area of approximately 13 km x 16 km. The voxel inversion was carried out on a structured grid of 260 x 325 x 29 xyz nodes (50 m xy spacing), for a total of 2450500 inversion parameters. A classical spatially constrained inversion (SCI) was carried out on the same data set, using 106054
Computer modeling of inversion layer MOS solar cells and arrays
NASA Technical Reports Server (NTRS)
Ho, Fat Duen
1991-01-01
A two-dimensional numerical model of the inversion layer metal insulator semiconductor (IL/MIS) solar cell is proposed by using the finite element method. The two-dimensional current flow in the device is taken into account in this model. The electrostatic potential distribution, the electron concentration distribution, and the hole concentration distribution for different terminal voltages are simulated. The results of a simple calculation are presented. The existing problems for this model are addressed. Future work is proposed. The MIS structures are studied and some of the results are reported.
Forward and inverse modeling for jovian seismology
NASA Astrophysics Data System (ADS)
Jackiewicz, Jason; Nettelmann, Nadine; Marley, Mark; Fortney, Jonathan
2012-08-01
Jupiter is expected to pulsate in a spectrum of acoustic modes and recent re-analysis of a spectroscopic time series has identified a regular pattern in the spacing of the frequencies (Gaulme, P., Schmider, F.-X., Gay, J., Guillot, T., Jacob, C. [2011]. Astron. Astrophys. 531, A104). This exciting result can provide constraints on gross jovian properties and warrants a more in-depth theoretical study of the seismic structure of Jupiter. With current instrumentation, such as the SYMPA instrument (Schmider, F.X. [2007]. Astron. Astrophys. 474, 1073-1080) used for the Gaulme et al. (Gaulme, P., Schmider, F.-X., Gay, J., Guillot, T., Jacob, C. [2011]. Astron. Astrophys. 531, A104) analysis, we assume that, at minimum, a set of global frequencies extending up to angular degree ℓ=25 could be observed. In order to identify which modes would best constrain models of Jupiter's interior and thus help motivate the next generation of observations, we explore the sensitivity of derived parameters to this mode set. Three different models of the jovian interior are computed and the theoretical pulsation spectrum from these models for ℓ⩽25 is obtained. We compute sensitivity kernels and perform linear inversions to infer details of the expected discontinuities in the profiles in the jovian interior. We find that the amplitude of the sound-speed jump of a few percent in the inner/outer envelope boundary seen in two of the applied models should be reasonably inferred with these particular modes. Near the core boundary where models predict large density discontinuities, the location of such features can be accurately measured, while their amplitudes have more uncertainty. These results suggest that this mode set would be sufficient to infer the radial location and strength of expected discontinuities in Jupiter's interior, and place strong constraints on the core size and mass. We encourage new observations to detect these jovian oscillations.
Saturation-inversion-recovery: A method for T1 measurement
NASA Astrophysics Data System (ADS)
Wang, Hongzhi; Zhao, Ming; Ackerman, Jerome L.; Song, Yiqiao
2017-01-01
Spin-lattice relaxation (T1) has always been measured by inversion-recovery (IR), saturation-recovery (SR), or related methods. These existing methods share a common behavior in that the function describing T1 sensitivity is the exponential, e.g., exp(-τ/T1), where τ is the recovery time. In this paper, we describe a saturation-inversion-recovery (SIR) sequence for T1 measurement with considerably sharper T1-dependence than those of the IR and SR sequences, and demonstrate it experimentally. The SIR method could be useful in improving the contrast between regions of differing T1 in T1-weighted MRI.
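The τ-dependence the abstract refers to can be seen in a standard inversion-recovery fit; the SIR sequence sharpens this dependence, but the estimation step is the same. The signal equation and parameter values below are a generic textbook IR example, not the paper's experiment:

```python
import numpy as np
from scipy.optimize import curve_fit

# Inversion-recovery signal S(tau) = M0 * (1 - 2*exp(-tau/T1)); fit T1 from
# synthetic noisy recovery data.
def ir_signal(tau, m0, t1):
    return m0 * (1.0 - 2.0 * np.exp(-tau / t1))

rng = np.random.default_rng(3)
t1_true, m0_true = 0.9, 1.0          # seconds, arbitrary signal units
tau = np.linspace(0.05, 4.0, 25)
data = ir_signal(tau, m0_true, t1_true) + 0.01 * rng.standard_normal(tau.size)

popt, pcov = curve_fit(ir_signal, tau, data, p0=[0.8, 0.5])
m0_est, t1_est = popt
```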
INVERSE MODEL ESTIMATION AND EVALUATION OF SEASONAL NH3 EMISSIONS
The presentation topic is inverse modeling for the estimation and evaluation of emissions. The case study presented is the need for seasonal estimates of NH3 emissions for air quality modeling. The inverse modeling application approach is first described, and then the NH
NASA Astrophysics Data System (ADS)
Liu, B.; Li, S. C.; Nie, L. C.; Wang, J.; L, X.; Zhang, Q. S.
2012-12-01
Traditional inversion is the most commonly used procedure for three-dimensional (3D) resistivity inversion; it usually linearizes the problem and solves it by iterations. However, its accuracy often depends on the initial model, which can make the inversion become trapped in local optima and even produce a bad result. Non-linear methods are a feasible way to eliminate the dependence on the initial model. However, for large problems such as 3D resistivity inversion with more than a thousand inversion parameters, the main challenges of non-linear methods are premature convergence and quite low search efficiency. To deal with these problems, we present an improved Genetic Algorithm (GA) method. In the improved GA method, a smoothness constraint and an inequality constraint are both applied to the objective function, by which the degree of non-uniqueness and ill-conditioning is decreased. Some measures are adopted from the literature to maintain the diversity and stability of the GA, e.g. a real-coded representation and the adaptive adjustment of crossover and mutation probabilities. Then a generation method for an approximately uniform initial population is proposed in this paper, with which a uniformly distributed initial generation can be produced and the dependence on the initial model can be eliminated. Further, a mutation direction control method is presented based on a joint algorithm, in which the linearization method is embedded in the GA. The update vector produced by the linearization method is used as the mutation increment to maintain a better search direction compared with the traditional GA with uncontrolled mutation. By this method, the mutation direction is optimized and the search efficiency is improved greatly. The performance of the improved GA is evaluated by comparing with traditional inversion results in a synthetic example, or with drilling columnar sections in a practical example. The synthetic and practical examples illustrate that with the improved GA method we can eliminate
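A bare-bones real-coded GA in the spirit of the abstract (truncation selection, arithmetic crossover, a decaying mutation scale, and elitism) can be sketched as below. It omits the paper's smoothness/inequality constraints and linearized mutation control, and the quadratic test objective is only illustrative; the 3D resistivity objective is far harder:

```python
import numpy as np

rng = np.random.default_rng(4)

def objective(x):
    # Simple quadratic test function; minimum 0 at the origin.
    return np.sum(x ** 2, axis=-1)

pop_size, n_dim, n_gen = 60, 4, 120
lo, hi = -5.0, 5.0
pop = rng.uniform(lo, hi, size=(pop_size, n_dim))   # uniform initial population

for gen in range(n_gen):
    fitness = objective(pop)
    parents = pop[np.argsort(fitness)[: pop_size // 2]]   # truncation selection
    a = parents[rng.integers(0, len(parents), pop_size)]
    b = parents[rng.integers(0, len(parents), pop_size)]
    w = rng.uniform(0.0, 1.0, size=(pop_size, 1))
    children = w * a + (1.0 - w) * b                      # arithmetic crossover
    scale = 0.5 * (1.0 - gen / n_gen)                     # decaying mutation scale
    mutate = rng.uniform(size=children.shape) < 0.2
    children += mutate * rng.normal(0.0, scale, size=children.shape)
    pop = np.clip(children, lo, hi)
    pop[0] = parents[0]                                   # elitism: keep the best

best = pop[np.argmin(objective(pop))]
```

The paper's mutation-direction control replaces the random Gaussian increment above with the update vector from a linearized inversion step.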
Accuracy evaluation of both Wallace-Bott and BEM-based paleostress inversion methods
NASA Astrophysics Data System (ADS)
Lejri, Mostfa; Maerten, Frantz; Maerten, Laurent; Soliva, Roger
2017-01-01
Four decades after their introduction, the validity of fault slip inversion methods based on the Wallace (1951) and Bott (1959) hypothesis, which states that the slip on each fault surface has the same direction and sense as the maximum resolved shear stress, is still a subject of debate. According to some authors, this hypothesis is questionable since fault mechanical interactions induce slip reorientations, as confirmed by geomechanical models. This leads us to ask to what extent the Wallace-Bott simplifications are reliable as a basis hypothesis for stress inversion from fault slip data. In this paper, we compare two inversion methods; the first is based on the Wallace-Bott hypothesis, and the second relies on geomechanics and mechanical effects on fault heterogeneous slip distribution. In that context, a multi-parametric stress inversion study covering (i) the friction coefficients (μ), (ii) the full range of Andersonian states of stress and (iii) slip data sampling along the faults is performed. For each tested parameter, the results of the mechanical stress inversion and the Wallace-Bott (WB) based stress inversion for slip are compared in order to understand their respective effects. The predicted discrepancy between the solutions of both stress inversion methods (based on WB and mechanics) will then be used to explain the stress inversion results for the Chimney Rock case study. It is shown that a high solution discrepancy is not always correlated with the misfit angle (ω) and can be found under specific configurations (R-, θ, μ, geometry) invalidating the WB solutions. We conclude that in most cases the mechanical stress inversion and the WB based stress inversion are both valid and complementary depending on the fault friction. Some exceptions (i.e. low fault friction, simple fault geometry and pure regimes) that may lead to wrong WB based stress inversion solutions are highlighted.
Magnetic interface forward and inversion method based on Padé approximation
NASA Astrophysics Data System (ADS)
Zhang, Chong; Huang, Da-Nian; Zhang, Kai; Pu, Yi-Tao; Yu, Ping
2016-12-01
The magnetic interface forward and inversion method is realized using the Taylor series expansion to linearize the Fourier transform of the exponential function. With a large expansion step and an unbounded neighborhood, the Taylor series is not convergent, and therefore, this paper presents a magnetic interface forward and inversion method based on Padé approximation instead of the Taylor series expansion. Compared with the Taylor series, the Padé expansion's convergence is more stable and its approximation more accurate. Model tests show the validity of the magnetic forward modeling and inversion based on Padé approximation proposed in the paper, and when this inversion method is applied to measured data from the Matagami area in Canada, a stable and reasonable distribution of the underground interface is obtained.
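The advantage of Padé over Taylor that motivates the method can be checked directly on the exponential function itself. The orders chosen here ([2/2] Padé against a fourth-order Taylor polynomial) and the evaluation point are illustrative, not the paper's expansion:

```python
import numpy as np

# Fourth-order Taylor polynomial of exp(x) about 0.
def taylor4(x):
    return 1.0 + x + x**2 / 2.0 + x**3 / 6.0 + x**4 / 24.0

# Diagonal [2/2] Pade approximant of exp(x):
# (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12)
def pade22(x):
    num = 1.0 + x / 2.0 + x**2 / 12.0
    den = 1.0 - x / 2.0 + x**2 / 12.0
    return num / den

x = 1.5                      # a "large step" for a 4th-order expansion
err_taylor = abs(np.exp(x) - taylor4(x))
err_pade = abs(np.exp(x) - pade22(x))
```

Both use the same number of series coefficients, but the rational form tracks the exponential further from the expansion point, which is the stability gain the abstract exploits.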
Towards an optimal inversion method for remote atmospheric sensing
NASA Technical Reports Server (NTRS)
King, J. I. F.
1969-01-01
The inference of atmospheric structure from satellite radiometric observations requires an inversion algorithm. A variety of techniques was spawned to meet these demands. One class, the nonlinear inversion methods, copes with the problem of data noise. Unlike linear techniques which require a priori data smoothing, the nonlinear method can be applied directly to raw data. The algorithm discriminates the noise input by resolving the inferences into two types of solution, associating the real roots with atmospheric structure while ascribing the imaginary roots to noise.
Linearized Functional Minimization for Inverse Modeling
Wohlberg, Brendt; Tartakovsky, Daniel M.; Dentz, Marco
2012-06-21
Heterogeneous aquifers typically consist of multiple lithofacies, whose spatial arrangement significantly affects flow and transport. The estimation of these lithofacies is complicated by the scarcity of data and by the lack of a clear correlation between identifiable geologic indicators and attributes. We introduce a new inverse-modeling approach to estimate both the spatial extent of hydrofacies and their properties from sparse measurements of hydraulic conductivity and hydraulic head. Our approach is to minimize a functional defined on the vectors of values of hydraulic conductivity and hydraulic head fields defined on regular grids at a user-determined resolution. This functional is constructed to (i) enforce the relationship between conductivity and heads provided by the groundwater flow equation, (ii) penalize deviations of the reconstructed fields from measurements where they are available, and (iii) penalize reconstructed fields that are not piece-wise smooth. We develop an iterative solver for this functional that exploits a local linearization of the mapping from conductivity to head. This approach provides a computationally efficient algorithm that rapidly converges to a solution. A series of numerical experiments demonstrates the robustness of our approach.
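The penalty structure in the abstract (data fit plus a roughness penalty) can be shown in its simplest linear form. The sketch below reconstructs a 1-D field from sparse noisy samples by minimizing ||S x - d||^2 + lam * ||D x||^2; the closed-form normal equations stand in for the paper's iterative linearized solver, and the piecewise-constant field is an illustrative analogue of two lithofacies:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
x_true = np.where(np.arange(n) < 25, 1.0, 3.0)    # two-facies field

idx = rng.choice(n, size=12, replace=False)        # sparse measurement sites
S = np.zeros((idx.size, n))
S[np.arange(idx.size), idx] = 1.0                  # sampling operator
d = x_true[idx] + 0.05 * rng.standard_normal(idx.size)

D = np.diff(np.eye(n), axis=0)                     # first-difference (roughness) operator
lam = 0.1

# Minimizer of ||S x - d||^2 + lam ||D x||^2 from the normal equations.
x_hat = np.linalg.solve(S.T @ S + lam * D.T @ D, S.T @ d)
```

The functional in the paper adds a flow-equation constraint and a piecewise-smooth (rather than globally smooth) penalty, but the trade-off between honoring data and penalizing roughness is the same.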
A method of gravity and seismic sequential inversion and its GPU implementation
NASA Astrophysics Data System (ADS)
Liu, G.; Meng, X.
2011-12-01
In this abstract, we introduce a gravity and seismic sequential inversion method to invert for density and velocity together. For the gravity inversion, we use an iterative method based on a correlation imaging algorithm; for the seismic inversion, we use full waveform inversion. The link between density and velocity is an empirical formula called the Gardner equation; for large volumes of data, we use the GPU to accelerate the computation. The gravity inversion method is iterative: first we calculate the correlation imaging of the observed gravity anomaly, which takes values between -1 and +1, and multiply this value by a small density to obtain the initial density model. We compute a forward result with this initial model, calculate the correlation imaging of the misfit between the observed data and the forward data, again multiply the correlation imaging result by a small density and add it to the initial model, and then repeat the same procedure; at last, we obtain an inverted density model. For the seismic inversion method, we use a method based on the linearity of the acoustic wave equation written in the frequency domain; with an initial velocity model, we can get a good velocity result. In the sequential inversion of gravity and seismic data, we need a link formula to convert between density and velocity; in our method, we use the Gardner equation. Driven by the insatiable market demand for real-time, high-definition 3D images, the programmable NVIDIA Graphics Processing Unit (GPU) has been developed as a co-processor of the CPU for high-performance computing. Compute Unified Device Architecture (CUDA) is a parallel programming model and software environment provided by NVIDIA, designed to overcome the challenge of traditional general-purpose GPU programming while maintaining a low learning curve for programmers familiar with standard programming languages such as C. In our inversion processing
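Gardner's relation, the density-velocity link used above to couple the two inversions, is a one-line power law. The coefficients below (alpha = 0.31 for Vp in m/s, density in g/cm^3, beta = 0.25) are the standard textbook values, not necessarily the calibration the authors used:

```python
# Gardner's relation: rho = alpha * Vp**beta, and its inverse.
def gardner_density(vp_m_s, alpha=0.31, beta=0.25):
    """Density in g/cm^3 from P-wave velocity in m/s."""
    return alpha * vp_m_s ** beta

def gardner_velocity(rho_g_cc, alpha=0.31, beta=0.25):
    """P-wave velocity in m/s from density in g/cm^3 (inverse relation)."""
    return (rho_g_cc / alpha) ** (1.0 / beta)

rho = gardner_density(3000.0)   # a typical sedimentary-rock Vp
```

In a sequential scheme, each updated velocity model is mapped to density (and back) through exactly this pair of functions.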
Adiabatic approximation for the Rabi model with broken inversion symmetry
NASA Astrophysics Data System (ADS)
Shen, Li-Tuo; Yang, Zhen-Biao; Wu, Huai-Zhi
2017-01-01
We study the properties and behavior of the Rabi model with broken inversion symmetry. Using an adiabatic approximation approach, we explore the high-frequency qubit and oscillator regimes, and obtain analytical solutions for the qubit-oscillator system. We demonstrate that, due to broken inversion symmetry, the positions of two potentials and zero-point energies in the oscillators become asymmetric and have a quadratic dependence on the mean dipole moments within the high-frequency oscillator regime. Furthermore, we find that there is a critical point above which the qubit-oscillator system becomes unstable, and the position of this critical point has a quadratic dependence on the mean dipole moments within the high-frequency qubit regime. Finally, we verify this critical point based on the method of semiclassical approximation.
Inverting geodetic time series with a principal component analysis-based inversion method
NASA Astrophysics Data System (ADS)
Kositsky, A. P.; Avouac, J.-P.
2010-03-01
The Global Positioning System (GPS) now makes it possible to monitor deformation of the Earth's surface along plate boundaries with unprecedented accuracy. In theory, the spatiotemporal evolution of slip on the plate boundary at depth, associated with either seismic or aseismic slip, can be inferred from these measurements through some inversion procedure based on the theory of dislocations in an elastic half-space. We describe and test a principal component analysis-based inversion method (PCAIM), an inversion strategy that relies on principal component analysis of the surface displacement time series. We prove that the fault slip history can be recovered from the inversion of each principal component. Because PCAIM does not require externally imposed temporal filtering, it can deal with any kind of time variation of fault slip. We test the approach by applying the technique to synthetic geodetic time series to show that a complicated slip history combining coseismic, postseismic, and nonstationary interseismic slip can be retrieved from this approach. PCAIM produces slip models comparable to those obtained from standard inversion techniques with less computational complexity. We also compare an afterslip model derived from the PCAIM inversion of postseismic displacements following the 2005 Mw 8.6 Nias earthquake with another solution obtained from the extended network inversion filter (ENIF). We introduce several extensions of the algorithm to allow statistically rigorous integration of multiple data sources (e.g., both GPS and interferometric synthetic aperture radar time series) over multiple timescales. PCAIM can be generalized to any linear inversion algorithm.
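The decomposition step behind this strategy is an SVD of the (stations x epochs) displacement matrix: keep the leading principal components, then invert each spatial component for slip separately. The synthetic two-source data below are illustrative; no fault Green's functions or slip inversion are included:

```python
import numpy as np

rng = np.random.default_rng(6)
n_sta, n_t = 30, 100
t = np.linspace(0.0, 1.0, n_t)

# Two synthetic "sources": steady interseismic motion plus a
# postseismic-like exponential decay, each with its own spatial pattern.
spatial1 = rng.standard_normal(n_sta)
spatial2 = rng.standard_normal(n_sta)
X = np.outer(spatial1, t) + np.outer(spatial2, 1.0 - np.exp(-5.0 * t))
X += 0.01 * rng.standard_normal(X.shape)

# Principal components via the SVD of the displacement time series.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
rank2_var = (s[:2] ** 2).sum() / (s ** 2).sum()   # variance in the first 2 PCs

# Rank-2 reconstruction; in a PCAIM-style scheme each spatial pattern
# (column of U) would then be inverted for slip with a linear inversion.
X2 = (U[:, :2] * s[:2]) @ Vt[:2]
```

Because nearly all the variance lives in two components, the inversion problem collapses from n_t epochs to two component inversions, which is the source of the computational saving the abstract reports.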
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
ERIC Educational Resources Information Center
Martinez-Luaces, Victor E.
2013-01-01
This paper offers a theoretical analysis of mathematical modelling, applications, and inverse problems of both the causation and specification types. Inverse modelling problems provide an opportunity to establish connections between theory and practice; to show this, a simple linear algebra example in two different…
TOPEX/POSEIDON tides estimated using a global inverse model
NASA Technical Reports Server (NTRS)
Egbert, Gary D.; Bennett, Andrew F.; Foreman, Michael G. G.
1994-01-01
Altimetric data from the TOPEX/POSEIDON mission will be used for studies of global ocean circulation and marine geophysics. However, it is first necessary to remove the ocean tides, which are aliased in the raw data. The tides are constrained by two distinct types of information: the hydrodynamic equations which the tidal fields of elevations and velocities must satisfy, and direct observational data from tide gauges and satellite altimetry. Here we develop and apply a generalized inverse method, which allows us to rationally combine all of this information into global tidal fields best fitting both the data and the dynamics, in a least squares sense. The resulting inverse solution is a sum of the direct solution to the astronomically forced Laplace tidal equations and a linear combination of the representers for the data functionals. The representer functions (one for each datum) are determined by the dynamical equations and by our prior estimates of the statistics of the errors in these equations. Our major task is a direct numerical calculation of these representers. This task is computationally intensive, but well suited to massively parallel processing. By calculating the representers we reduce the full (infinite-dimensional) problem to a relatively low-dimensional problem at the outset, allowing full control over the conditioning and hence the stability of the inverse solution. With the representers calculated we can easily update our model as additional TOPEX/POSEIDON data become available. As an initial illustration we invert harmonic constants from a set of 80 open-ocean tide gauges. We then present a practical scheme for direct inversion of TOPEX/POSEIDON crossover data. We apply this method to 38 cycles of geophysical data records (GDR) data, computing preliminary global estimates of the four principal tidal constituents, M(sub 2), S(sub 2), K(sub 1) and O(sub 1). The inverse solution yields tidal fields which are simultaneously smoother, and in better
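In a finite-dimensional linear setting, the representer construction reduces to a solve in data space: the inverse estimate is the prior (dynamically forced) solution plus a linear combination of one representer per datum. A minimal sketch under that assumption follows; the function name and the covariance parameterization are illustrative, not the paper's tidal formulation.

```python
import numpy as np

def representer_inverse(H, P, Cd, d, u_prior):
    """Generalized-inverse estimate as prior + representer combination (linear toy).

    H  : (n_data, n_state) measurement operator
    P  : (n_state, n_state) prior (dynamics-error) covariance
    Cd : (n_data, n_data) data-error covariance
    d  : data vector; u_prior : prior, dynamically forced solution
    """
    # One representer per datum: the columns of P @ H.T
    R = P @ H.T
    # Representer coefficients are found in data space, whose dimension is
    # the number of data -- this is the dimension reduction noted above
    b = np.linalg.solve(H @ R + Cd, d - H @ u_prior)
    return u_prior + R @ b
```

The data-space system `H @ R + Cd` has dimension equal to the number of data, so conditioning can be controlled there regardless of how large the state is.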
The application of the pilot points in groundwater numerical inversion model
NASA Astrophysics Data System (ADS)
Hu, Bin; Teng, Yanguo; Cheng, Lirong
2015-04-01
Numerical inversion has been widely applied in groundwater simulation. Compared to traditional forward modeling, inverse modeling offers more scope for study. Zonation and cell-by-cell inversion are the conventional approaches; the pilot-point method lies between them. The traditional zonation approach typically uses software to divide the model into a few zones with a small number of parameters to be inverted; however, the resulting distribution is usually too simple, and the simulation deviates from reality. Cell-by-cell inversion would in theory recover the most realistic parameter distribution, but it greatly increases computational complexity and requires a large quantity of survey data for geostatistical simulation of the area. By contrast, the pilot-point method distributes a set of points throughout the model domains for parameter estimation; property values are assigned to model cells by Kriging, preserving parameter heterogeneity within geological units. This reduces the geostatistical data requirements for the simulation area and bridges the gap between the two methods above. Pilot points can reduce calculation time, improve the goodness of fit, and lessen the numerical instability caused by large numbers of parameters, among other advantages. In this paper, we apply pilot points to a field site whose structural heterogeneity and hydraulic parameters were unknown, and compare the inversion results of the zonation and pilot-point methods. Through comparative analysis, we explore the characteristics of pilot points in groundwater inverse modeling. First, the modeler generates an initial spatially correlated field from a geostatistical model based on the description of the case site, using the software Groundwater Vistas 6. Kriging is used to obtain the values of the field functions over the model domain on the basis of their values at measurement and pilot-point locations (hydraulic conductivity); we then assign pilot points to the interpolated field, which has been divided into 4
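The pilot-point workflow — estimate parameters at a handful of points, then spread them to every model cell by Kriging — can be illustrated with a toy interpolation step. The Gaussian covariance model, the zero-mean simple-Kriging form, and all parameter values below are assumptions for illustration, not the setup used in Groundwater Vistas.

```python
import numpy as np

def gaussian_cov(x1, x2, sill=1.0, length=50.0):
    """Assumed Gaussian covariance between point sets of shape (n, 2) and (m, 2)."""
    d = np.linalg.norm(x1[:, None, :] - x2[None, :, :], axis=-1)
    return sill * np.exp(-(d / length) ** 2)

def krige_field(pilot_xy, pilot_vals, cell_xy, nugget=1e-8):
    """Spread pilot-point values (e.g. log hydraulic conductivity) onto model cells."""
    # Covariance among pilot points, with a small nugget for numerical stability
    K = gaussian_cov(pilot_xy, pilot_xy) + nugget * np.eye(len(pilot_xy))
    k = gaussian_cov(cell_xy, pilot_xy)        # cell-to-pilot covariances
    weights = np.linalg.solve(K, pilot_vals)   # simple (zero-mean) Kriging
    return k @ weights
```

During inversion, only the few values at the pilot points are adjusted; the Kriging step regenerates a smooth, heterogeneous field over every cell at each iteration.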
NASA Astrophysics Data System (ADS)
Itahashi, S.; Yumimoto, K.; Uno, I.; Kim, S.
2012-12-01
Air quality studies based on chemical transport models have provided many important results that advance our understanding of air pollution phenomena; however, discrepancies between modeling results and observational data remain an important issue to overcome. One issue of concern is the over-prediction of summertime tropospheric ozone in remote areas of Japan. This problem has been pointed out in model comparison studies at both the regional scale (e.g., MICS-Asia) and the global scale (e.g., TF-HTAP). Several possible reasons can be listed: (i) the modeled reproducibility of the penetration of clean oceanic air masses, (ii) the correct estimation of anthropogenic NOx/VOC emissions over East Asia, and (iii) the chemical reaction scheme used in the model simulation. In this study, we attempt an inverse estimation of several important chemical reaction constants by combining DDM (decoupled direct method) sensitivity analysis with a modeled Green's function approach. The DDM is an efficient and accurate way of performing sensitivity analysis on model inputs; it calculates sensitivity coefficients representing the responsiveness of atmospheric chemical concentrations to perturbations in a model input or parameter. The inverse solutions with the Green's functions are given by a linear least-squares method but remain robust against nonlinearities. To construct the response matrix (i.e., the Green's functions), we can directly use the results of the DDM sensitivity analysis. The chemical reaction constants that have relatively large uncertainties are determined with constraints from observed ozone concentration data over remote areas of Japan. Our inverse estimation revealed an underestimation of the reaction constant for HNO3 production (NO2 + OH + M → HNO3 + M) in the SAPRC99 chemical scheme, and indicated a +29.0% increment to this reaction. This estimation shows good agreement when compared
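Once DDM sensitivity coefficients are available, the Green's-function step reduces to a linear least-squares fit of reaction-constant scalings to the observed-minus-modeled ozone. A schematic of that step follows; the matrix shapes and the optional observational weighting are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def estimate_rate_scalings(G, obs, base, sigma=None):
    """Least-squares adjustment of uncertain reaction constants (toy Green's-function step).

    G    : (n_obs, n_reactions) DDM sensitivities, d(ozone)/d(scaling)
    obs  : observed ozone concentrations
    base : base-case modeled ozone at the same points
    Returns fractional adjustments to each reaction constant (e.g. +0.29 = +29 %).
    """
    resid = obs - base
    if sigma is not None:                 # optional weighting by observation error
        G = G / sigma[:, None]
        resid = resid / sigma
    x, *_ = np.linalg.lstsq(G, resid, rcond=None)
    return x
```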
A method of inversion of satellite magnetic anomaly data
NASA Technical Reports Server (NTRS)
Mayhew, M. A.
1977-01-01
A method of finding a first approximation to a crustal magnetization distribution from inversion of satellite magnetic anomaly data is described. Magnetization is expressed as a Fourier Series in a segment of spherical shell. Input to this procedure is an equivalent source representation of the observed anomaly field. Instability of the inversion occurs when high frequency noise is present in the input data, or when the series is carried to an excessively high wave number. Preliminary results are given for the United States and adjacent areas.
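The instability noted above — high-frequency noise in the input, or a series carried to excessively high wavenumber — is the classic ill-conditioning of linear inversion. One standard stabilization, not necessarily the one used in this paper, is to truncate the small singular values of the forward operator:

```python
import numpy as np

def truncated_svd_solve(A, b, k):
    """Solve A m ~= b keeping only the k largest singular values.

    Small singular values amplify high-frequency noise in b; discarding
    them stabilizes the inversion at the cost of some resolution.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    inv_s = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)
    return Vt.T @ (inv_s * (U.T @ b))
```

Choosing k plays the same role as limiting the maximum wavenumber of the Fourier series: both cap the model's sensitivity to noise in the equivalent-source input.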
Indium oxide inverse opal films synthesized by structure replication method
NASA Astrophysics Data System (ADS)
Amrehn, Sabrina; Berghoff, Daniel; Nikitin, Andreas; Reichelt, Matthias; Wu, Xia; Meier, Torsten; Wagner, Thorsten
2016-04-01
We present the synthesis of indium oxide (In2O3) inverse opal films with photonic stop bands in the visible range by a structure replication method. Artificial opal films made of poly(methyl methacrylate) (PMMA) spheres are utilized as template. The opal films are deposited via sedimentation facilitated by ultrasonication, and then impregnated by indium nitrate solution, which is thermally converted to In2O3 after drying. The quality of the resulting inverse opal film depends on many parameters; in this study the water content of the indium nitrate/PMMA composite after drying is investigated. Comparison of the reflectance spectra recorded by vis-spectroscopy with simulated data shows a good agreement between the peak position and calculated stop band positions for the inverse opals. This synthesis is less complex and highly efficient compared to most other techniques and is suitable for use in many applications.
FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)
NASA Astrophysics Data System (ADS)
2014-10-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 and May 2013, (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) Workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the
FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems
NASA Astrophysics Data System (ADS)
Vourc'h, Eric; Rodet, Thomas
2015-11-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods
Nonlinear inversion for arbitrarily-oriented anisotropic models: Synthetic testing
NASA Astrophysics Data System (ADS)
Bremner, P. M.; Panning, M. P.
2010-12-01
We present an implementation of new 3-D finite-frequency kernels, based on the Born approximation, for inversion of a synthetic surface wave dataset. The kernels are formulated based on a hexagonal symmetry with an arbitrary orientation. Numerical tests are performed to achieve a robust inversion scheme. Nonlinear inversion schemes are examined for adequate recovery of three input models to include: isotropic, anisotropic, and both anisotropic and isotropic input models. Output models from inversions of calculated synthetic data are compared against these input models to test for accurate reproduction of input model features, and the resolution of those features. The focus of this study is on inverting for structure beneath western North America. The synthetic dataset consists of collected seismic waveforms of 128 earthquake mechanisms, of magnitude 6-7 from Dec 2006 to Feb 2009, from the IRIS database. Events were selected to correlate with USArray deployments, and to have as complete an azimuthal coverage as possible. The events occurred within a circular region of radius 150° centered about 44° lat, -110° lon (an arbitrary location within USArray coverage). The seismograms have been calculated within a simplified version of PREM in which the crust and 220 km discontinuity have been removed, dubbed PREM LIGHT, utilizing a spectral element code (SEM) coupled to a normal mode solution. The mesh consists of a 3-D heterogeneous outer shell, representing the upper mantle above 400 km depth, coupled to a spherically symmetric inner sphere. The SEM solves the weak formulation of the seismic wave equation in the outer shell, and uses normal mode summation methods for the inner sphere. To validate the results of the SEM, seismograms are benchmarked against seismograms calculated with a 1-D normal mode summation. From the synthetic dataset, multi-taper fundamental mode surface wave phase delay measurements are taken. The orthogonal 2.5π spheroidal wave function
Dispersion analysis with inverse dielectric function modelling.
Mayerhöfer, Thomas G; Ivanovski, Vladimir; Popp, Jürgen
2016-11-05
We investigate how dispersion analysis can profit from the use of a Lorentz-type description of the inverse dielectric function. In particular at higher angles of incidence, reflectance spectra using p-polarized light are dominated by bands from modes that have their transition moments perpendicular to the surface. Accordingly, the spectra increasingly resemble inverse dielectric functions. A corresponding description can therefore eliminate the complex dependencies of the dispersion parameters, allow their determination and facilitate a more accurate description of the optical properties of single crystals.
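A Lorentz-type description written directly for the inverse dielectric function can be sketched by analogy with the conventional oscillator sum for the dielectric function itself. The exact functional form and all parameter values below are illustrative assumptions, not the parameterization of the paper.

```python
import numpy as np

def inverse_dielectric(nu, eps_inf_inv, strengths, nu0, gamma):
    """Lorentz-type oscillator sum written for 1/eps(nu) (illustrative form).

    nu          : wavenumbers (array)
    eps_inf_inv : high-frequency limit of 1/eps
    strengths, nu0, gamma : per-oscillator strength, resonance wavenumber, damping
    """
    nu = np.asarray(nu, dtype=complex)
    total = np.full(nu.shape, eps_inf_inv, dtype=complex)
    for s, n0, g in zip(strengths, nu0, gamma):
        total += s * n0**2 / (n0**2 - nu**2 - 1j * g * nu)
    return total
```

Fitting such a model directly to p-polarized, high-incidence-angle spectra is the sense in which the spectra "resemble inverse dielectric functions" in the abstract.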
Aerosol Models for the CALIPSO Lidar Inversion Algorithms
NASA Technical Reports Server (NTRS)
Omar, Ali H.; Winker, David M.; Won, Jae-Gwang
2003-01-01
We use measurements and models to develop aerosol models for use in the inversion algorithms for the Cloud Aerosol Lidar and Imager Pathfinder Spaceborne Observations (CALIPSO). Radiance measurements and inversions of the AErosol RObotic NETwork (AERONET) [1, 2] are used to group global atmospheric aerosols using optical and microphysical parameters. This study uses more than 10^5 records of radiance measurements, aerosol size distributions, and complex refractive indices to generate the optical properties of the aerosol at more than 200 sites worldwide. These properties together with the radiance measurements are then classified using classical clustering methods to group the sites according to the type of aerosol with the greatest frequency of occurrence at each site. Six significant clusters are identified: desert dust, biomass burning, urban industrial pollution, rural background, marine, and dirty pollution. Three of these are used in the CALIPSO aerosol models to characterize desert dust, biomass burning, and polluted continental aerosols. The CALIPSO aerosol model also uses the coarse mode of desert dust and the fine mode of biomass burning to build a polluted dust model. For marine aerosol, the CALIPSO aerosol model uses measurements from the SEAS experiment [3]. In addition to categorizing the aerosol types, the cluster analysis provides all the column optical and microphysical properties for each cluster.
Balancing aggregation and smoothing errors in inverse models
Turner, A. J.; Jacob, D. J.
2015-06-30
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
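The simplest of the three reductions, grid coarsening, amounts to an aggregation matrix that averages adjacent native-resolution elements; aggregation error arises exactly because the elements within a block are forced to move together. A minimal sketch (the block-averaging convention is an assumption; the paper does not prescribe one):

```python
import numpy as np

def coarsen_operator(n_native, factor):
    """Aggregation matrix mapping a native state vector to a coarsened one.

    Adjacent blocks of `factor` elements are merged by averaging, so prior
    relationships within each block are imposed rather than optimized.
    """
    n_coarse = n_native // factor
    Gamma = np.zeros((n_coarse, n_native))
    for i in range(n_coarse):
        Gamma[i, i * factor:(i + 1) * factor] = 1.0 / factor
    return Gamma
```

The reduced Jacobian of the inverse problem is then the native Jacobian composed with this operator's pseudoinverse, which is what makes an analytical solution with full error characterization tractable at low state dimension.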
NASA Astrophysics Data System (ADS)
Pan, Qi; Liu, De-Jun; Guo, Zhi-Yong; Fang, Hua-Feng; Feng, Mu-Qun
2016-06-01
In the model of a horizontal straight pipeline of finite length, the segmentation of the pipeline elements is a significant factor in the accuracy and rapidity of the forward modeling and inversion processes, but the existing pipeline segmentation method is very time-consuming. This paper proposes a section segmentation method to study the characteristics of pipeline magnetic anomalies—and the effect of model parameters on these magnetic anomalies—as a way to enhance computational performance and accelerate the convergence process of the inversion. Forward models using the piece segmentation method and section segmentation method based on magnetic dipole reconstruction (MDR) are established for comparison. The results show that the magnetic anomalies calculated by these two segmentation methods are almost the same regardless of different measuring heights and variations of the inclination and declination of the pipeline. In the optimized inversion procedure the results of the simulation data calculated by these two methods agree with the synthetic data from the original model, and the inversion accuracies of the burial depths of the two methods are approximately equal. The proposed method is more computationally efficient than the piece segmentation method—in other words, the section segmentation method can meet the requirements for precision in the detection of pipelines by magnetic anomalies and reduce the computation time of the whole process.
Kılıç, Emre; Eibert, Thomas F.
2015-05-01
An approach combining boundary integral and finite element methods is introduced for the solution of three-dimensional inverse electromagnetic medium scattering problems. Based on the equivalence principle, unknown equivalent electric and magnetic surface current densities on a closed surface are utilized to decompose the inverse medium problem into two parts: a linear radiation problem and a nonlinear cavity problem. The first problem is formulated by a boundary integral equation, the computational burden of which is reduced by employing the multilevel fast multipole method (MLFMM). Reconstructed Cauchy data on the surface allows the utilization of the Lorentz reciprocity and the Poynting's theorems. Exploiting these theorems, the noise level and an initial guess are estimated for the cavity problem. Moreover, it is possible to determine whether the material is lossy or not. In the second problem, the estimated surface currents form inhomogeneous boundary conditions of the cavity problem. The cavity problem is formulated by the finite element technique and solved iteratively by the Gauss–Newton method to reconstruct the properties of the object. Regularization for both the first and the second problems is achieved by a Krylov subspace method. The proposed method is tested against both synthetic and experimental data and promising reconstruction results are obtained.
Inviscid transonic wing design using inverse methods in curvilinear coordinates
NASA Technical Reports Server (NTRS)
Gally, Thomas A.; Carlson, Leland A.
1987-01-01
An inverse wing design method has been developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver, FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; wing and fuselage fitted, curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse methods as an extension of previous methods existing for design in Cartesian coordinates is presented. Results are shown for inviscid wing design cases in super-critical flow regimes. The test cases selected also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.
Propeller sheet cavitation noise source modeling and inversion
NASA Astrophysics Data System (ADS)
Lee, Keunhwa; Lee, Jaehyuk; Kim, Dongho; Kim, Kyungseop; Seong, Woojae
2014-02-01
Propeller sheet cavitation is the main contributor to high levels of noise and vibration in the after body of a ship. Full measurement of the cavitation-induced hull pressure over the entire surface of the affected area is desired but not practical. Therefore, using a few measurements on the outer hull above the propeller in a cavitation tunnel, empirical or semi-empirical techniques based on physical models have been used to predict the hull-induced pressure (or hull-induced force). In this paper, with an analytic source model for sheet cavitation, a multi-parameter inversion scheme is suggested to find the positions of the noise sources and their strengths. The inversion is posed as a nonlinear optimization problem, solved by an adaptive simplex simulated annealing algorithm. The resulting hull pressure can then be modeled with the boundary element method from the inverted cavitation noise sources. The suggested approach is applied to hull pressure data measured in a cavitation tunnel of Samsung Heavy Industries. Two monopole sources are adequate to model the propeller sheet cavitation noise. The inverted source information is consistent with the cavitation dynamics of the propeller, and the modeled hull pressure shows good agreement with the cavitation tunnel experimental data.
Gu, Guo-Ying; Yang, Mei-Ju; Zhu, Li-Min
2012-06-01
This paper presents a novel real-time inverse hysteresis compensation method for piezoelectric actuators exhibiting asymmetric hysteresis effect. The proposed method directly utilizes a modified Prandtl-Ishlinskii hysteresis model to characterize the inverse hysteresis effect of piezoelectric actuators. The hysteresis model is then cascaded in the feedforward path for hysteresis cancellation. It avoids the complex and difficult mathematical procedure for constructing an inversion of the hysteresis model. For the purpose of validation, an experimental platform is established. To identify the model parameters, an adaptive particle swarm optimization algorithm is adopted. Based on the identified model parameters, a real-time feedforward controller is implemented for fast hysteresis compensation. Finally, tests are conducted with various kinds of trajectories. The experimental results show that the tracking errors caused by the hysteresis effect are reduced by about 90%, which clearly demonstrates the effectiveness of the proposed inverse compensation method with the modified Prandtl-Ishlinskii model.
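For background, the classical Prandtl-Ishlinskii model on which the paper's modified version builds is a weighted superposition of play (backlash) operators. A minimal sketch of the classical, symmetric form follows; the modified model's envelope functions for asymmetric hysteresis are not shown, and the thresholds and weights are illustrative.

```python
import numpy as np

def pi_hysteresis(u, radii, weights):
    """Classical Prandtl-Ishlinskii model: weighted sum of play operators.

    u       : 1-D input trajectory (e.g. the desired displacement)
    radii   : play thresholds r_i >= 0, one per elementary operator
    weights : gains applied to each operator's output
    """
    states = np.zeros(len(radii))
    out = np.empty(len(u), dtype=float)
    for t, v in enumerate(u):
        # Play (backlash) update: each state tracks the input within a band +-r
        states = np.clip(states, v - radii, v + radii)
        out[t] = weights @ states
    return out
```

In an inverse-compensation scheme of the kind the paper describes, a model of this family is fitted to the actuator's inverse characteristic and placed in the feedforward path, so that model-then-plant approximates the identity.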
Model selection in cognitive science as an inverse problem
NASA Astrophysics Data System (ADS)
Myung, Jay I.; Pitt, Mark A.; Navarro, Daniel J.
2005-03-01
How should we decide among competing explanations (models) of a cognitive phenomenon? This problem of model selection is at the heart of the scientific enterprise. Ideally, we would like to identify the model that actually generated the data at hand. However, this is an unachievable goal, as the problem is fundamentally ill-posed. Information in a finite data sample is seldom sufficient to point to a single model. Multiple models may provide equally good descriptions of the data, a problem that is exacerbated by the presence of random error in the data. In fact, model selection bears a striking similarity to perception, in that both require solving an inverse problem. Just as perceptual ambiguity can be addressed only by introducing external constraints on the interpretation of visual images, the ill-posedness of the model selection problem requires us to introduce external constraints on the choice of the most appropriate model. Model selection methods differ in how these external constraints are conceptualized and formalized. In this review we discuss the development of the various approaches, the differences between them, and why the methods perform as they do. An application example of selection methods in cognitive modeling is also discussed.
Inverse Problems in Complex Models and Applications to Earth Sciences
NASA Astrophysics Data System (ADS)
Bosch, M. E.
2015-12-01
The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied
A full potential inverse method based on a density linearization scheme for wing design
NASA Technical Reports Server (NTRS)
Shankar, V.
1982-01-01
A mixed analysis-inverse procedure based on the full potential equation in conservation form was developed to recontour a given base wing to produce a prescribed pressure distribution. The method uses a density linearization scheme in applying the pressure boundary condition in terms of the velocity potential. The FLO30 finite volume analysis code was modified to include the inverse option. The new surface shape information, associated with the modified pressure boundary condition, is calculated at a constant span station based on a mass flux integration. The inverse method is shown to recover the original shape when the analysis pressure is not altered. Inverse calculations for weakening of a strong shock system and for a laminar flow control (LFC) pressure distribution are presented. Two methods for a trailing edge closure model are proposed for further study.
Printer model inversion by constrained optimization
NASA Astrophysics Data System (ADS)
Cholewo, Tomasz J.
1999-12-01
This paper describes a novel method, based on constrained optimization, for finding the colorant amounts for which a printer will produce a requested color appearance. An error function defines the gamut mapping method and the black replacement method. The constraints limit the feasible solution region to the device gamut and prevent exceeding the maximum total area coverage. Colorant values corresponding to in-gamut colors are found with precision limited only by the accuracy of the device model. Out-of-gamut colors are mapped to colors within the boundary of the device gamut. This general approach, used in conjunction with different types of color difference equations, can perform a wide range of out-of-gamut mappings, such as chroma clipping, or can find colors on the gamut boundary having specified properties. We present an application of this method to the creation of PostScript color rendering dictionaries and ICC profiles.
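The structure of the problem — minimize a color-difference error over colorant amounts, subject to per-ink bounds and a total-area-coverage (TAC) cap — can be sketched with a toy optimizer. Everything below is an assumption for illustration: the paper presumably uses a proper gradient-based constrained optimizer and a real ΔE formula, whereas this sketch uses Euclidean distance and a simple annealed random search with feasibility projection.

```python
import numpy as np

def invert_printer(forward, target_lab, n_inks=3, max_tac=2.4, iters=4000, seed=0):
    """Invert a printer model by constrained search (toy illustration).

    forward : maps a colorant vector (each component in [0, 1]) to a Lab triple
    max_tac : maximum total area coverage, sum(colorants) <= max_tac
    Returns (best colorants, residual color error).
    """
    rng = np.random.default_rng(seed)

    def feasible(c):
        c = np.clip(c, 0.0, 1.0)            # per-ink bounds
        s = c.sum()
        return c * (max_tac / s) if s > max_tac else c   # project onto TAC limit

    best = feasible(np.full(n_inks, 0.5))
    best_err = np.linalg.norm(forward(best) - target_lab)  # Euclidean color error
    step = 0.5
    for _ in range(iters):
        cand = feasible(best + step * rng.normal(size=n_inks))
        err = np.linalg.norm(forward(cand) - target_lab)
        if err < best_err:
            best, best_err = cand, err
        step = max(0.005, step * 0.998)      # slowly shrink the search step
    return best, best_err
```

For an out-of-gamut target, the same loop returns the feasible colorant vector whose predicted color is nearest the request, which is the clipping behavior the error function controls.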
Constructing inverse probability weights for continuous exposures: a comparison of methods.
Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S
2014-03-01
Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
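The first and simplest of the six approaches, a normal density with constant variance, can be sketched directly: stabilized weights divide the marginal density of the exposure by its conditional density given covariates. The linear mean model and function names below are illustrative assumptions, not the authors' simulation code.

```python
import numpy as np

def normal_density(x, mean, sd):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def stabilized_ipw(exposure, covariates):
    """Stabilized inverse probability weights for a continuous exposure,
    normal-density approach with homoscedastic conditional variance.
    """
    X = np.column_stack([np.ones(len(exposure)), covariates])
    beta, *_ = np.linalg.lstsq(X, exposure, rcond=None)  # E[A | covariates]
    fitted = X @ beta
    sd_cond = np.std(exposure - fitted, ddof=X.shape[1])
    num = normal_density(exposure, exposure.mean(), exposure.std(ddof=1))
    den = normal_density(exposure, fitted, sd_cond)
    return num / den
```

The heteroscedastic, truncated, t, gamma, and quantile-binning variants change only the density (or binning) used for `num` and `den`; under correct specification the stabilized weights have mean approximately one.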
An equivalent source inversion method for imaging complex structures
NASA Astrophysics Data System (ADS)
Munk, Jens
Accurate subsurface imaging is of interest to geophysicists, with applications in geological mapping, underground void detection, ground contaminant mapping and land mine detection. The mathematical framework necessary to generate images of the subsurface from measurements of these fields describes the inverse problem, which is generally ill-posed and non-linear. Target scattering from an electromagnetic excitation results in a non-linear formulation, which is usually linearized using a weak scattering approximation. The equivalent source inversion method, in contrast, does not rely on a weak scattering approximation. The method combines the unknown total field and permittivity contrast into a single unknown distribution of "equivalent sources". Once determined, these sources are used to obtain an estimate of the total fields within the target or scatterer. The final step in the inversion is to use these fields to obtain the desired physical property. Excellent reconstructions are obtained when the target is illuminated using multiple look angles and frequencies. Target reconstructions are further enhanced using various iterative algorithms. The general formulation of the method allows it to be used in conjunction with a number of geophysical applications. Specifically, the method can be applied to any geophysical technique incorporating a measured response to a known induced input. This is illustrated by formulating the method within electrical resistivity prospecting.
Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin
2016-04-01
surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one cannot apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues, including parallelization and problem dimension reduction.
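The Pareto-optimality criterion that underlies PMOGO methods can be made concrete with a minimal dominance filter. This sketch is generic (minimizing two abstract objectives) and is not the authors' implementation:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

# Toy objective vectors, e.g. (data misfit, regularization term) for four models.
pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(pareto_front(pts))  # (3.0, 3.0) is dominated by (2.0, 2.0) and is dropped
```

A PMOGO run returns such a front directly, so the modeller inspects the trade-off curve instead of committing to one weighting of misfit against regularization.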
Efficiency of Pareto joint inversion of 2D geophysical data using global optimization methods
NASA Astrophysics Data System (ADS)
Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek
2016-04-01
Pareto joint inversion of two or more sets of data is a promising new tool of modern geophysical exploration. In the first stage of our investigation we created software enabling execution of forward solvers for two geophysical methods (2D magnetotellurics and gravity), as well as inversion with the possibility of constraining the solution with seismic data. In the MT forward solver, Helmholtz's equations were solved with the finite element method under Dirichlet boundary conditions. The gravity forward solver was based on Talwani's algorithm. To limit the dimensionality of the solution space we decided to describe the model as sets of polygons, using the Sharp Boundary Interface (SBI) approach. The main inversion engine was created using a Particle Swarm Optimization (PSO) algorithm adapted to handle two or more target functions and to prevent acceptance of solutions that are unrealistic or incompatible with the Pareto scheme. Each inversion run generates a single Pareto solution, which can be added to the Pareto front. The PSO inversion engine was parallelized using the OpenMP standard, which enables execution with a practically unlimited number of threads at once; the computing time of the inversion process was thereby significantly decreased. Furthermore, computing efficiency increases with the number of PSO iterations. In this contribution we analyze the efficiency of the created software, taking into consideration the details of the chosen global optimization engine used as the main joint minimization engine. Additionally, we study the possible decrease in computational time afforded by different methods of parallelization applied to both the forward solvers and the inversion algorithm. All tests were done for 2D magnetotelluric and gravity data based on real geological media. The results show that, even on relatively simple mid-range computational infrastructure, the proposed inversion solution can be applied in practice and used for real-life problems of geophysical inversion and interpretation.
Inverse design of airfoils using a flexible membrane method
NASA Astrophysics Data System (ADS)
Thinsurat, Kamon
The Modified Garabedian-McFadden (MGM) method is used to inversely design airfoils. A Finite Difference Method (FDM) for non-uniform grids was developed to discretize the MGM equation for numerical solution; this discretization has the advantage of being applicable to airfoils with unstructured grids. The commercial software FLUENT is used as the flow solver. Several conditions are set in FLUENT, such as subsonic inviscid flow, subsonic viscous flow, transonic inviscid flow, and transonic viscous flow, to test the inverse design code for each condition. A moving grid program is used to create a mesh for each new airfoil prior to importing the mesh into FLUENT for flow analysis. For validation, an iterative process is used so that the Cp distribution of the initial airfoil, the NACA0011, achieves the Cp distribution of the target airfoil, the NACA2315, for the subsonic inviscid case at M=0.2. Three other cases were carried out to validate the code. After the code validations, the inverse design method was used to design a shock-free airfoil in the transonic condition and a separation-free airfoil at a high angle of attack in the subsonic condition.
NASA Astrophysics Data System (ADS)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, conventional inverse modeling methods can be computationally expensive because the number of measurements is often large and the model parameters are numerous. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations, which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
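A minimal sketch of a damped Levenberg-Marquardt iteration that solves the normal equations with a Krylov method, here plain conjugate gradients on an invented two-parameter exponential-decay problem. The subspace recycling and parallelism described in the abstract are omitted; this is not the MADS implementation.

```python
import numpy as np
from scipy.sparse.linalg import cg

# Toy nonlinear model: y = p1 * exp(-p0 * t); recover (p0, p1) from noiseless data.
t = np.linspace(0.0, 2.0, 50)
true = np.array([1.5, 2.0])
y = true[1] * np.exp(-true[0] * t)

def resid(p):
    return y - p[1] * np.exp(-p[0] * t)

def jac(p):
    # Analytic Jacobian of the residual with respect to (p0, p1).
    e = np.exp(-p[0] * t)
    return np.column_stack([p[1] * t * e, -e])

p = np.array([0.5, 1.0])
lam = 1e-2
for _ in range(50):
    J, r = jac(p), resid(p)
    # Solve (J^T J + lam*I) delta = -J^T r with a Krylov solver (CG), standing in
    # for the projected-subspace solve described in the abstract.
    A = J.T @ J + lam * np.eye(2)
    delta, _ = cg(A, -J.T @ r)
    if np.linalg.norm(resid(p + delta)) < np.linalg.norm(r):
        p, lam = p + delta, lam * 0.5   # accept step, relax damping
    else:
        lam *= 2.0                      # reject step, increase damping

print(np.round(p, 3))  # approaches the true parameters [1.5, 2.0]
```

For a highly parameterized field, the dense normal-equations matrix is never formed; the Krylov solver needs only Jacobian-vector products, which is what makes the projection and recycling strategy pay off.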
Stochastic reduced order models for inverse problems under uncertainty
Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.
2014-01-01
This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using an SROM - a low-dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115
Stochastic reduced order models for inverse problems under uncertainty.
Warner, James E; Aquino, Wilkins; Grigoriu, Mircea D
2015-03-01
This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using an SROM - a low-dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well.
Optimised spectral merge of the background model in seismic inversion
NASA Astrophysics Data System (ADS)
White, Roy; Zabihi Naeini, Ehsan
2017-01-01
The inversion of seismic reflection data to absolute impedance generates low-frequency deviations around the true impedance if the frequency content of the background impedance model does not merge seamlessly into the spectrum of the inverted seismic data. We present a systematic method of selecting a background model that minimises the mismatch between the background model and the relative impedance obtained by inverting the seismic data at wells. At each well a set of well-log relative impedances is formed by passing the impedance log through a set of zero-phase high-pass filters. The corresponding background models are constructed by passing the impedance log through the complementary zero-phase low-pass filters, and a set of seismic relative impedances is computed by inverting the seismic data using these background models. If the inverted seismic data is to merge perfectly with the background model, it should correspond at the well to the well-log relative impedance. This correspondence is the basis of a procedure for finding the optimum combination of background model and inverted seismic data. It is difficult to predict the low-frequency content of inverted seismic data. These low frequencies are affected by the uncertainties in (1) measuring the low-frequency response of the seismic wavelet and (2) knowing how inversion protects the signal-to-noise ratio at low frequencies. Uncertainty (1) becomes acute for broadband seismic data; the low-frequency phase is especially difficult to estimate. Moreover, we show that a mismatch of low-frequency phase is a serious source of inversion artefacts. We also show that relative impedance can estimate the low-frequency phase where a well tie cannot. Consequently, we include a low-frequency phase shift, applied to the seismic relative impedances, in the search for the best spectral merge. The background models are specified by a low-cut corner frequency and the phase shifts by a phase intercept at zero frequency. A scan of
Inverse problems in the design, modeling and testing of engineering systems
NASA Technical Reports Server (NTRS)
Alifanov, Oleg M.
1991-01-01
Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.
The inversion method in measuring noise emitted by machines in opencast mines of rock material.
Pleban, Dariusz; Piechowicz, Janusz; Kosała, Krzysztof
2013-01-01
The inversion method was used to test vibroacoustic processes in large-size machines used in opencast mines of rock material. With this method, the tested machine is replaced with a set of substitute sources whose acoustic parameters are determined on the basis of sound pressure levels and phase shift angles of acoustic signals measured with an array of 24 microphones. This article presents test results for a combined unit comprising a crusher and a vibrating sieve, for which an acoustic model of 7 substitute sources was developed with the inversion method.
Express method of construction of accurate inverse pole figures
NASA Astrophysics Data System (ADS)
Perlovich, Yu; Isaenkova, M.; Fesenko, V.
2016-04-01
With regard to metallic materials with FCC and BCC crystal lattices, a new method for constructing X-ray texture inverse pole figures (IPF) from the tilt curves of a spinning sample is proposed, characterized by high accuracy and rapidity (hence "express"). In contrast to the currently widespread method of constructing IPF from the orientation distribution function (ODF), synthesized from several partial direct pole figures, the proposed method is based on a simple geometrical interpretation of the measurement procedure and requires minimal operating time on the X-ray diffractometer.
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
Inversion of magnetotelluric data in a sparse model domain
NASA Astrophysics Data System (ADS)
Nittinger, Christian G.; Becken, Michael
2016-08-01
The inversion of magnetotelluric data into subsurface electrical conductivity poses an ill-posed problem. Smoothing constraints are widely employed to estimate a regularized solution. Here, we present an alternative inversion scheme that estimates a sparse representation of the model in a wavelet basis. The objective of the inversion is to determine the few non-zero wavelet coefficients which are required to fit the data. This approach falls into the class of sparsity constrained inversion schemes and minimizes the combination of the data misfit in a least-squares ℓ2 sense and of a model coefficient norm in an ℓ1 sense (ℓ2-ℓ1 minimization). The ℓ1 coefficient norm renders the solution sparse in a suitable representation such as the multiresolution wavelet basis, but does not impose explicit structural penalties on the model as it is the case for ℓ2 regularization. The presented numerical algorithm solves the mixed ℓ2-ℓ1 norm minimization problem for the nonlinear magnetotelluric inverse problem. We demonstrate the feasibility of our algorithm on synthetic 2-D MT data as well as on a real data example. We found that sparse models can be estimated by inversion and that the spatial distribution of non-vanishing coefficients indicates regions in the model which are resolved.
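The mixed ℓ2-ℓ1 objective described above is commonly minimized by iterative soft thresholding. The sketch below applies plain ISTA to a generic sparse linear problem; the matrix, signal and regularization weight are invented for illustration, and this is not the authors' nonlinear MT algorithm.

```python
import numpy as np

def soft_threshold(x, thr):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def ista(A, y, lam, n_iter=2000):
    """Minimize 0.5*||A m - y||_2^2 + lam*||m||_1 by iterative soft thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    m = np.zeros(A.shape[1])
    for _ in range(n_iter):
        m = soft_threshold(m - step * A.T @ (A @ m - y), step * lam)
    return m

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))       # underdetermined forward operator
m_true = np.zeros(100)
m_true[[5, 37, 80]] = [3.0, -2.0, 1.5]   # sparse "wavelet coefficients"
y = A @ m_true
m_hat = ista(A, y, lam=0.1)
print(np.sort(np.argsort(np.abs(m_hat))[-3:]))  # dominant coefficients
```

The ℓ1 penalty drives most coefficients exactly to zero, so the few surviving entries play the role of the non-vanishing wavelet coefficients that indicate resolved regions of the model.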
Ellipsoidal head model for fetal magnetoencephalography: forward and inverse solutions
NASA Astrophysics Data System (ADS)
Gutiérrez, David; Nehorai, Arye; Preissl, Hubert
2005-05-01
Fetal magnetoencephalography (fMEG) is a non-invasive technique where measurements of the magnetic field outside the maternal abdomen are used to infer the source location and signals of the fetus' neural activity. There are a number of aspects related to fMEG modelling that must be addressed, such as the conductor volume, fetal position and orientation, gestation period, etc. We propose a solution to the forward problem of fMEG based on an ellipsoidal head geometry. This model has the advantage of highlighting special characteristics of the field that are inherent to the anisotropy of the human head, such as the spread and orientation of the field in relationship with the localization and position of the fetal head. Our forward solution is presented in the form of a kernel matrix that facilitates the solution of the inverse problem through decoupling of the dipole localization parameters from the source signals. Then, we use this model and the maximum likelihood technique to solve the inverse problem assuming the availability of measurements from multiple trials. The applicability and performance of our methods are illustrated through numerical examples based on a real 151-channel SQUID fMEG measurement system (SARA). SARA is an MEG system especially designed for fetal assessment and is currently used for heart and brain studies. Finally, since our model requires knowledge of the best-fitting ellipsoid's centre location and semiaxes lengths, we propose a method for estimating these parameters through a least-squares fit on anatomical information obtained from three-dimensional ultrasound images.
Computer Model Inversion and Uncertainty Quantification in the Geosciences
NASA Astrophysics Data System (ADS)
White, Jeremy T.
The subject of this dissertation is use of computer models as data analysis tools in several different geoscience settings, including integrated surface water/groundwater modeling, tephra fallout modeling, geophysical inversion, and hydrothermal groundwater modeling. The dissertation is organized into three chapters, which correspond to three individual publication manuscripts. In the first chapter, a linear framework is developed to identify and estimate the potential predictive consequences of using a simple computer model as a data analysis tool. The framework is applied to a complex integrated surface-water/groundwater numerical model with thousands of parameters. Several types of predictions are evaluated, including particle travel time and surface-water/groundwater exchange volume. The analysis suggests that model simplifications have the potential to corrupt many types of predictions. The implementation of the inversion, including how the objective function is formulated, what minimum of the objective function value is acceptable, and how expert knowledge is enforced on parameters, can greatly influence the manifestation of model simplification. Depending on the prediction, failure to specifically address each of these important issues during inversion is shown to degrade the reliability of some predictions. In some instances, inversion is shown to increase, rather than decrease, the uncertainty of a prediction, which defeats the purpose of using a model as a data analysis tool. In the second chapter, an efficient inversion and uncertainty quantification approach is applied to a computer model of volcanic tephra transport and deposition. The computer model simulates many physical processes related to tephra transport and fallout. The utility of the approach is demonstrated for two eruption events. In both cases, the importance of uncertainty quantification is highlighted by exposing the variability in the conditioning provided by the observations used for
Diffuse interface methods for inverse problems: case study for an elliptic Cauchy problem
NASA Astrophysics Data System (ADS)
Burger, Martin; Løseth Elvetun, Ole; Schlottbom, Matthias
2015-12-01
Many inverse problems have to deal with complex, evolving and often not exactly known geometries, e.g. as domains of forward problems modeled by partial differential equations. This makes it desirable to use methods which are robust with respect to perturbed or not well resolved domains, and which allow for efficient discretizations not resolving any fine detail of those geometries. For forward problems in partial differential equations, methods based on diffuse interface representations have gained strong attention in recent years, but so far they have not been considered systematically for inverse problems. In this work we introduce a diffuse domain method as a tool for the solution of variational inverse problems. As a particular example we study ECG inversion in further detail. ECG inversion is a linear inverse source problem with boundary measurements governed by an anisotropic diffusion equation, which naturally cries for solutions under changing geometries, namely the beating heart. We formulate a regularization strategy using Tikhonov regularization and, using standard source conditions, we prove convergence rates. A special property of our approach is that not only are operator perturbations introduced by the diffuse domain method, but, more importantly, we have to deal with topologies which depend on a parameter ε in the diffuse domain method, i.e. we have to deal with ε-dependent forward operators and ε-dependent norms. In particular, the appropriate function spaces for the unknown and the data depend on ε. This prevents the application of some standard convergence techniques for inverse problems; in particular, interpreting the perturbations as data errors in the original problem does not yield suitable results. We consequently develop a novel approach based on saddle-point problems. The numerical solution of the problem is discussed as well and results for several computational experiments are reported. In
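As background, the Tikhonov regularization strategy the authors build on can be illustrated on a generic mildly ill-conditioned linear problem; the smoothing operator, true model and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
# A discrete Gaussian smoothing operator: mildly ill-conditioned forward map.
A = np.array([[np.exp(-0.5 * (i - j) ** 2) for j in range(n)] for i in range(n)])
x_true = np.sin(np.linspace(0, np.pi, n))
y = A @ x_true + rng.normal(scale=1e-3, size=n)   # noisy data

def tikhonov(A, y, alpha):
    """x_alpha = argmin ||A x - y||^2 + alpha ||x||^2, via the normal equations."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ y)

x_reg = tikhonov(A, y, alpha=1e-3)
rel_err = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
print(round(float(rel_err), 3))  # small relative error despite the noise
```

In the paper's setting, both the operator A and the norms themselves additionally depend on the diffuse-interface parameter ε, which is what breaks the standard convergence arguments.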
Ray, J.; Lee, J.; Yadav, V.; ...
2015-04-29
Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying and model them using a multivariate Gaussian. When the field being estimated is spatially rough, multivariate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also
Inverse Kinematic Analysis of Human Hand Thumb Model
NASA Astrophysics Data System (ADS)
Toth-Tascau, Mirela; Pater, Flavius; Stoia, Dan Ioan; Menyhardt, Karoly; Rosu, Serban; Rusu, Lucian; Vigaru, Cosmina
2011-09-01
This paper deals with a kinematic model of the thumb of the human hand. The proposed model has 3 degrees of freedom and is able to model the movements of the thumb tip with respect to the wrist joint centre. The kinematic equations are derived based on the Denavit-Hartenberg convention and solved in both the direct and inverse directions. Inverse kinematic analysis of the human hand thumb model reveals multiple and connected solutions, which are characteristic of nonlinear systems when the number of equations is greater than the number of unknowns, and which correspond to natural movements of the finger.
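The multiplicity of inverse kinematic solutions noted above is easy to see in a planar 2-link analogue, a simplification of the 3-degree-of-freedom thumb model; link lengths and the target point below are illustrative.

```python
import numpy as np

def ik_2link(x, y, l1=1.0, l2=1.0):
    """Closed-form inverse kinematics of a planar 2-link arm.
    Returns both solutions (elbow-up and elbow-down) as (theta1, theta2)."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    sols = []
    for s2 in (np.sqrt(1 - c2**2), -np.sqrt(1 - c2**2)):
        t2 = np.arctan2(s2, c2)
        t1 = np.arctan2(y, x) - np.arctan2(l2 * s2, l1 + l2 * c2)
        sols.append((t1, t2))
    return sols

def fk_2link(t1, t2, l1=1.0, l2=1.0):
    """Forward (direct) kinematics: tip position from joint angles."""
    return (l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
            l1 * np.sin(t1) + l2 * np.sin(t1 + t2))

# Both inverse solutions reproduce the requested tip position.
for t1, t2 in ik_2link(1.2, 0.8):
    x, y = fk_2link(t1, t2)
    print(round(x, 6), round(y, 6))  # 1.2 0.8 for each solution
```

With a third joint, as in the thumb model, the solution set becomes a continuous (connected) family rather than two isolated configurations.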
Inverse estimation of parameters for an estuarine eutrophication model
Shen, J.; Kuo, A.Y.
1996-11-01
An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate parameter values of the eutrophication model by assimilating concentration data for these state variables. The inverse model, which uses the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to be applicable as an aid to model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. Numerical experiments with short-period model simulations using different hypothetical data sets, and long-period model simulations using limited hypothetical data sets, demonstrated that the inverse model can satisfactorily estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful for addressing important questions such as the uniqueness of the parameter estimation and the data requirements for model calibration. Because of the complexity of the eutrophication system, the speed of convergence may degrade. Two major factors that cause this degradation are cross effects among parameters and the multiple scales involved in the parameter system.
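In miniature, parameter estimation by minimizing a model-data misfit captures the essence of this variational approach, though the paper's adjoint machinery is omitted. The first-order decay "water-quality" model and data below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical analogue: recover the decay rate k of dC/dt = -k*C
# from noisy concentration observations C(t) = c0 * exp(-k*t).
t = np.linspace(0.0, 10.0, 25)
k_true, c0 = 0.3, 8.0
rng = np.random.default_rng(2)
obs = c0 * np.exp(-k_true * t) + rng.normal(scale=0.05, size=t.size)

def cost(k):
    # Least-squares mismatch between model prediction and observations.
    return np.sum((c0 * np.exp(-k[0] * t) - obs) ** 2)

res = minimize(cost, x0=[1.0], method="Nelder-Mead")
print(round(float(res.x[0]), 2))  # recovered decay rate, close to 0.3
```

A variational inverse model does the same minimization for many coupled parameters at once, using the adjoint of the forward model to supply the misfit gradient; the cross-parameter effects mentioned above appear as strong correlations in that joint misfit surface.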
Inverse modeling with RZWQM2 to predict water quality
Technology Transfer Automated Retrieval System (TEKTRAN)
Agricultural systems models such as RZWQM2 are complex and have numerous parameters that are unknown and difficult to estimate. Inverse modeling provides an objective statistical basis for calibration that involves simultaneous adjustment of model parameters and yields parameter confidence intervals...
Towards inverse modeling of intratumor heterogeneity
NASA Astrophysics Data System (ADS)
Brutovsky, Branislav; Horvath, Denis
2015-08-01
Development of resistance limits the efficiency of present anticancer therapies, and preventing it remains a major challenge in cancer research. It is accepted, at the intuitive level, that resistance emerges as a consequence of the heterogeneity of cancer cells at the molecular, genetic and cellular levels. Produced by many sources, tumor heterogeneity is an extremely complex, time-dependent statistical characteristic that may be quantified by measures defined in many different ways, most of them coming from statistical mechanics. In this paper, we apply the Markovian framework to relate population heterogeneity to the statistics of the environment. As, from an evolutionary viewpoint, therapy corresponds to a purposeful modification of the cells' fitness landscape, we assume that understanding the general relationship between the spatiotemporal statistics of a tumor microenvironment and intratumor heterogeneity will allow us to conceive of the therapy as an inverse problem and to solve it by optimization techniques. To account for the inherent stochasticity of biological processes at the cellular scale, a generalized distance-based concept is applied to express distances between probabilistically described cell states and environmental conditions, respectively.
The inversion method of Matrix mineral bulk modulus based on Gassmann equation
NASA Astrophysics Data System (ADS)
Kai, L.; He, X.; Zhang, Z. H.
2015-12-01
In recent years, seismic rock physics has played an important role in oil and gas exploration. Seismic rock physics models can quantitatively describe reservoir characteristics such as lithologic association, pore structure and geological processes. However, the classic rock physics models require a background parameter, the matrix mineral bulk modulus, and inaccurate inputs greatly reduce prediction reliability. By introducing different rock physics parameters, the Gassmann equation is used to derive a reasonable modification, and two inversion methods for the matrix mineral bulk modulus are proposed: a linear regression method and a self-adapting inversion method. They effectively determine the matrix mineral bulk modulus under different combinations of available parameters. On laboratory test data, the linear regression method is simpler and more accurate than the conventional method, while the self-adapting inversion method also achieves high precision when a rich set of rock physics parameters is known. The resulting modulus values were applied to reservoir fluid substitution, porosity inversion and S-wave velocity prediction. Introducing the matrix mineral modulus based on the Gassmann equation can effectively improve the reliability of fluid-effect prediction as well as computational efficiency.
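The Gassmann relation underlying the method, and the inversion direction for the matrix mineral modulus, can be sketched as follows (a generic bisection stands in for the paper's regression and self-adapting schemes; all moduli, in GPa, and the porosity are illustrative):

```python
def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Gassmann's equation: saturated bulk modulus from the dry-rock
    modulus k_dry, matrix mineral modulus k_min, fluid modulus k_fl,
    and porosity phi."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

def invert_kmin(k_dry, k_fl, phi, k_sat, lo=15.0, hi=200.0):
    """Recover the matrix mineral modulus from a measured K_sat by
    bisection; K_sat increases with k_min on the bracket (lo above
    k_dry keeps the expression physical and monotone)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if gassmann_ksat(k_dry, mid, k_fl, phi) > k_sat:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Forward: quartz-like matrix (37), brine-like fluid (2.25), 20% porosity
k_sat = gassmann_ksat(k_dry=12.0, k_min=37.0, k_fl=2.25, phi=0.20)
# Inverse: recover the matrix modulus from the "measured" K_sat
k_min_est = invert_kmin(12.0, 2.25, 0.20, k_sat)
```

The round trip recovers the matrix modulus that was fed into the forward equation, which is the value problem the paper's two methods address under richer parameter combinations.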
NASA Astrophysics Data System (ADS)
Sahin, O. K.; Asci, M.
2014-12-01
In this study, the determination of theoretical parameters for the inversion of the Trabzon-Sürmene-Kutlular ore bed anomalies is examined. Deciding which model equation to use is the most important first step of an inversion, as it offers the chance to obtain more accurate results. Sections were therefore evaluated with a sphere-cylinder nomogram, and the same sections were then analyzed with a cylinder-dike nomogram to determine the theoretical parameters for the inversion under each model equation. Comparison of the results showed that only one of the models was close to the parameters obtained from the nomogram evaluations; the other inversion result parameters differed from their nomogram parameters.
Effects of experimental and modeling errors on electrocardiographic inverse formulations.
Cheng, Leo K; Bodley, John M; Pullan, Andrew J
2003-01-01
The inverse problem of electrocardiology aims to reconstruct the electrical activity occurring within the heart using information obtained noninvasively on the body surface. Potentials obtained on the torso surface can be used as input for the inverse problem and an electrical image of the heart obtained. A number of different inverse algorithms are currently used to produce electrical images of the heart, but their relative performances are largely unknown. Although there have been many simulation studies investigating the accuracy of each of these algorithms, to date there has been no comprehensive study comparing a wide variety of inverse methods. By performing a detailed simulation study, we compare the performances of epicardial potential-based [Tikhonov, truncated singular value decomposition (TSVD), and Greensite] and myocardial activation-based (critical point) inverse simulations, along with different methods of choosing the appropriate level of regularization (optimal, L-curve, composite residual and smoothing operator, zero-crossing) for each of these inverse methods. We also examine the effects of a variety of signal error, material property error, geometric error and a combination of these errors on each of the electrocardiographic inverse algorithms. Results from the simulation study show that the activation-based method is able to produce solutions which are more accurate and stable than potential-based methods, especially in the presence of correlated errors such as geometric uncertainty. In general, the Greensite-Tikhonov method produced the most realistic potential-based solutions, while the zero-crossing and L-curve were the preferred methods for determining the regularization parameter. The presence of signal or material property error has little effect on the inverse solutions when compared with the large errors which resulted from the presence of any geometric error. In the presence of combined
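The Tikhonov and TSVD regularization schemes compared in the study can be sketched on a generic ill-conditioned linear system (a Hilbert matrix stands in for a real torso transfer matrix; the noise level is illustrative):

```python
import numpy as np

# Ill-conditioned forward problem A x = b, a stand-in for the
# heart-to-torso transfer operator; cond(A) is about 1e10.
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-4 * rng.standard_normal(n)   # noisy "torso potentials"

U, s, Vt = np.linalg.svd(A)

def tikhonov(lam):
    """Damp every singular component by s / (s^2 + lam^2)."""
    f = s / (s ** 2 + lam ** 2)
    return Vt.T @ (f * (U.T @ b))

def tsvd(k):
    """Keep only the k largest singular components."""
    f = np.r_[1.0 / s[:k], np.zeros(n - k)]
    return Vt.T @ (f * (U.T @ b))

naive = np.linalg.solve(A, b)     # unregularized: noise is amplified hugely
x_tik = tikhonov(1e-4)
x_tsvd = tsvd(4)
```

Both regularized solutions stay near the true source while the naive inverse is destroyed by the measurement noise, which is why the choice of regularization level (L-curve, zero-crossing, etc.) matters so much in these comparisons.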
A Test of Maxwell's Z Model Using Inverse Modeling
NASA Technical Reports Server (NTRS)
Anderson, J. L. B.; Schultz, P. H.; Heineck, T.
2003-01-01
In modeling impact craters a small region of energy and momentum deposition, commonly called a "point source", is often assumed. This assumption implies that an impact is the same as an explosion at some depth below the surface. Maxwell's Z Model, an empirical point-source model derived from explosion cratering, has previously been compared with numerical impact craters with vertical incidence angles, leading to two main inferences. First, the flow-field center of the Z Model must be placed below the target surface in order to replicate numerical impact craters. Second, for vertical impacts, the flow-field center cannot be stationary if the value of Z is held constant; rather, the flow-field center migrates downward as the crater grows. The work presented here evaluates the utility of the Z Model for reproducing both vertical and oblique experimental impact data obtained at the NASA Ames Vertical Gun Range (AVGR). Specifically, ejection angle data obtained through Three-Dimensional Particle Image Velocimetry (3D PIV) are used to constrain the parameters of Maxwell's Z Model, including the value of Z and the depth and position of the flow-field center via inverse modeling.
UCODE, a computer code for universal inverse modeling
Poeter, E.P.; Hill, M.C.
1999-01-01
This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text-only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced; simulated equivalent values are calculated from values that appear in the application model output files and can be manipulated with additive and multiplicative functions if necessary. Prior, or direct, information on estimated parameters can also be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model-simulated values. UCODE is intended for use on any computer operating
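The core loop just described, Gauss-Newton on a weighted least-squares objective with sensitivities from forward differences, can be sketched as follows (the "application model" here is a toy exponential, not a real simulator, and the starting values are hypothetical):

```python
import numpy as np

def model(p, t):
    """Stand-in black-box application model: p[0] * exp(-p[1] * t)."""
    return p[0] * np.exp(-p[1] * t)

t = np.linspace(0.0, 5.0, 12)
p_true = np.array([10.0, 0.8])
obs = model(p_true, t)                 # observations to be matched
w = np.ones_like(t)                    # observation weights

p = np.array([8.0, 0.6])               # starting parameter values
for _ in range(20):
    r = obs - model(p, t)
    # sensitivity matrix by forward differences, one column per parameter
    J = np.empty((t.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = 1e-6 * max(abs(p[j]), 1.0)
        J[:, j] = (model(p + dp, t) - model(p, t)) / dp[j]
    # Gauss-Newton step from the weighted normal equations
    p = p + np.linalg.solve(J.T @ (w[:, None] * J), J.T @ (w * r))
```

On noise-free data the iteration recovers the true parameters; UCODE's modified Gauss-Newton adds damping and convergence safeguards around this basic step.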
Comparison of carbon dioxide uptake between inverse and bottom-up models over the Mountain West
NASA Astrophysics Data System (ADS)
Brooks, B.; Desai, A. R.; Stephens, B. B.
2010-12-01
An essential objective of the North American Carbon Program (NACP) has been to constrain carbon cycle sources and sinks in particular through land surface model intercomparison. Many of these bottom-up models estimate fluxes of carbon dioxide using remotely sensed satellite products such as fraction of Photosynthetically Active Radiation (fPAR) and Leaf Area Index (LAI), which are difficult to calibrate over the complex terrain and heterogeneous land cover of the United States Mountain West. Inverse methods that retrieve fluxes by assimilating in situ CO2 concentrations offer a different approach for estimating carbon dioxide exchange. In this study we compare CO2 fluxes between several models that participated in the NACP Regional and Continental Interim Synthesis and CarbonTracker, a nested grid tracer transport inverse model, over a domain that encompasses the Regional Atmospheric Continuous CO2 Network in the Rocky Mountains (RACCOON). An inverse to bottom-up model comparison over the RACCOON domain allows us to address several key questions: 'How do inverse and bottom-up models differ in CO2 uptake?', 'Do the inverse model - bottom-up model mismatches exceed error estimates?', and 'Does filtering-out CO2 observations representing local flows before assimilation by the inverse model reduce such discrepancies?'
Nonlinear inversion for arbitrarily-oriented anisotropic models II: Inversion techniques
NASA Astrophysics Data System (ADS)
Bremner, P. M.; Panning, M. P.
2011-12-01
We present output models from inversion of a synthetic surface wave dataset. We implement new 3-D finite-frequency kernels, based on the Born approximation, to invert for upper mantle structure beneath western North America. The kernels are formulated based on a hexagonal symmetry with an arbitrary orientation. Numerical tests were performed to achieve a robust inversion scheme. Four synthetic input models were created, to include: isotropic, constant strength anisotropic, variable strength anisotropic, and both anisotropic and isotropic together. The reference model was a simplified version of PREM (dubbed PREM LIGHT) in which the crust and 220 km discontinuity have been removed. Output models from inversions of calculated synthetic data are compared against these input models to test for accurate reproduction of input model features, and the resolution of those features. The object of this phase of the study was to determine appropriate nonlinear inversion schemes that adequately recover the input models. The synthetic dataset consists of collected seismic waveforms of 126 earthquake mechanisms, of magnitude 6-7 from Dec 2006 to Feb 2009, from the IRIS database. Events were selected to correlate with USArray deployments, and to have as complete an azimuthal coverage as possible. The events occurred within a circular region of radius 150° centered about 44° lat, -110° lon (an arbitrary location within USArray coverage). Synthetic data were calculated utilizing a spectral element code (SEM) coupled to a normal mode solution. The mesh consists of a 3-D heterogeneous outer shell, representing the upper mantle above 450 km depth, coupled to a spherically symmetric inner sphere. From the synthetic dataset, multi-taper fundamental mode surface wave phase delay measurements are taken. The orthogonal 2.5π-prolate spheroidal wave function eigentapers (Slepian tapers) reduce noise biasing, and can provide error estimates in phase delay measurements. This study is a
Gravity Inversion with Geological Modeling Constraint and Its Application in the Okinawa Trough
NASA Astrophysics Data System (ADS)
Zhang, S.
2014-12-01
Satellite altimetry gravity data are used to recover the 3D density distribution of the oceanic lithosphere in the Okinawa Trough and its neighboring region. It is difficult to invert for complex geological structure and density distribution using gravity data alone with a 3D gravity inversion method. To improve the vertical resolution of the density inversion result, a 3D geological modeling method is used to build a structural model for the inversion, and prior constraints are applied to alleviate the non-uniqueness problem. In the Okinawa Trough, earthquake data show that the Philippine plate subducts beneath the trough, resulting in upwelling of mantle material and thinning of the crust. The Benioff zone clearly shows the plate's subduction parameters, such as direction, dip and transformation. Therefore, a structural subduction model is created by the geological modeling method and serves both as the initial model and as a constraint in the gravity inversion. The 3D gravity inversion result and seismological CMT data are used together to explain the oceanic lithosphere structure in the Okinawa Trough. The inversion result shows a high-density anomaly under the Okinawa Trough. Affected by small-scale mantle convection, the continental lithosphere is separated, which results in the spreading of the back-arc basin and the formation of the Okinawa Trough.
NASA Astrophysics Data System (ADS)
Rezaie, Mohammad; Moradzadeh, Ali; Kalate, Ali Nejati; Aghajani, Hamid
2017-01-01
Inversion of gravity data is one of the important steps in the interpretation of practical data. One of the most interesting geological frameworks for gravity data inversion is the detection of sharp boundaries between an orebody and the host rocks. Focusing inversion is able to reconstruct a sharp image of the geological target, and this technique can be efficiently applied for the quantitative interpretation of gravity data. In this study, a new reweighted regularized method for 3D focusing inversion, based on the Lanczos bidiagonalization method, is developed. The inversion results for synthetic data show that the new method is faster than the common reweighted regularized conjugate gradient method at producing an acceptable solution to the focusing inverse problem. The newly developed inversion scheme is also applied to the gravity data collected over the San Nicolas Cu-Zn orebody in Zacatecas State, Mexico. The inversion results show a remarkable correlation with the true structure of the orebody derived from drilling data.
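The damped subproblem at the heart of such a focusing inversion, plus one reweighting pass, can be sketched as follows (a dense `lstsq` replaces the Lanczos-bidiagonalization solver for clarity, and the geometry, damping and weighting are illustrative, not the paper's exact scheme):

```python
import numpy as np

# Underdetermined linear problem d = A m with a compact, sharp-boundary
# "orebody"; A stands in for a gravity sensitivity matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 100))
m_true = np.zeros(100)
m_true[40:45] = 1.0
d = A @ m_true
damp = 0.1

def damped_solve(G, rhs):
    """Solve min ||G m - rhs||^2 + damp^2 ||m||^2 via an augmented system.
    Large focusing codes solve this same subproblem iteratively with
    Lanczos bidiagonalization (LSQR-style) instead of a dense factorization."""
    n = G.shape[1]
    Gd = np.vstack([G, damp * np.eye(n)])
    return np.linalg.lstsq(Gd, np.r_[rhs, np.zeros(n)], rcond=None)[0]

m0 = damped_solve(A, d)                 # smooth minimum-norm image

# One minimum-support-style reweight: where |m0| is small the penalty
# grows, pushing the image toward a compact, focused body.
eps = 1e-3
w = 1.0 / np.sqrt(m0 ** 2 + eps ** 2)
z = damped_solve(A / w, d)              # solve in the scaled variable z = w * m
m1 = z / w
```

The reweighted model fits the data while concentrating amplitude inside the true body; iterating the reweight-and-solve cycle is what makes the method "reweighted regularized".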
Coupled inverse geochemical and microbial reactive transport models in porous media
NASA Astrophysics Data System (ADS)
Samper, J.; Yang, C.
2007-12-01
Microbial processes play a major role in controlling geochemical conditions in subsurface systems. Various laboratory and in situ experiments have been performed to evaluate the relevance of microbial processes and derive key microbial parameters. Such experiments are often interpreted by suboptimal trial-and-error curve fitting. Here we present an inverse model for coupled flow, reactive solute transport, geochemical and microbial processes which overcomes the limitations of trial-and-error methods by making data interpretation systematic, objective and efficient. It extends the capabilities of existing inverse models, which deal mostly with flow and chemically reactive solute transport. Our inverse model relies on the microbial reactive transport code BIOCORE of Samper et al. (2006a) and improves the inverse reactive transport model INVERSE-CORE of Dai and Samper (2004) by allowing the simultaneous estimation of geochemical and microbial parameters. The inverse model has been implemented in a finite element code, INVERSE-BIOCORE2D, and its capabilities have been verified and tested with a synthetic experiment involving equilibrium speciation, kinetic sorption/desorption and kinetic biodegradation reactions. Model results indicate that both chemical and microbial parameters can be estimated accurately from error-free data. Estimation errors of microbial parameters are larger than those of kinetic sorption parameters and generally increase with increasing standard deviation of the data noise. The estimation error of the yield coefficient is the smallest among all microbial parameters and does not depend on data noise. The inverse model has also been used to estimate microbial parameters of a laboratory experiment involving sucrose fermentation by yeast. Inverse estimation significantly improves the fit to the measured data.
A method for determining void arrangements in inverse opals.
Blanford, C F; Carter, C B; Stein, A
2004-12-01
The periodic arrangement of voids in ceramic materials templated by colloidal crystal arrays (inverse opals) has been analysed by transmission electron microscopy. Individual particles consisting of an approximately spherical array of at least 100 voids were tilted through 90 degrees along a single axis within the transmission electron microscope. The bright-field images of these particles at high-symmetry points, their diffractograms calculated by fast Fourier transforms, and the transmission electron microscope goniometer angles were compared with model face-centred cubic, body-centred cubic, hexagonal close-packed, and simple cubic lattices in real and reciprocal space. The spatial periodicities were calculated for two-dimensional projections. The systematic absences in these diffractograms differed from those found in diffraction patterns from three-dimensional objects. The experimental data matched only the model face-centred cubic lattice, so it was concluded that the packing of the voids (and, thus, the polymer spheres that composed the original colloidal crystals) was face-centred cubic. In face-centred cubic structures, the stacking-fault displacement vector is a/6⟨211⟩. No stacking faults were observed when viewing the inverse opal structure along the orthogonal ⟨110⟩-type directions, eliminating the possibility of a random hexagonally close-packed structure for the particles observed. This technique complements synchrotron X-ray scattering work on colloidal crystals by allowing both real-space and reciprocal-space analysis to be carried out on a smaller cross-sectional area.
Inverse modeling for heat conduction problem in human abdominal phantom.
Huang, Ming; Chen, Wenxi
2011-01-01
Noninvasive methods for deep body temperature measurement are based, in principle, on heat equilibrium between the thermal sensor and the target location. However, the measurement position cannot be definitively determined. In this study, a two-dimensional mathematical model was built, based on some assumptions about the physiological condition of a human abdomen phantom, and we evaluated the feasibility of estimating the internal organ temperature distribution from the readings of temperature sensors arranged on the skin surface. This is a typical inverse heat conduction problem (IHCP) and is usually mathematically ill-posed. By integrating physical and physiological a priori information, we invoked the quasi-linear (QL) method to reconstruct the internal temperature distribution. The solutions were improved by increasing the accuracy of the sensors and adjusting their arrangement on the outer surface, eventually converging to an accurate reconstruction. This study suggests that the QL method is able to reconstruct the internal temperature distribution in this phantom and is worth further study with an anatomically based model.
NASA Astrophysics Data System (ADS)
Henderson, Laura S.; Subbarao, Kamesh
2016-12-01
This work presents a case wherein the selection of models when producing synthetic light curves affects the estimation of the size of unresolved space objects. Through this case, "inverse crime" (using the same model for the generation of synthetic data and for data inversion) is illustrated. This is done by using two models to produce the synthetic light curve and later invert it. It is shown here that the choice of model indeed affects the estimation of the shape/size parameters. When a higher-fidelity model (henceforth, the one that results in the smaller error residuals after the crime is committed) is used both to create and to invert the light curve, the estimates of the shape/size parameters are significantly better than those obtained when a comparatively lower-fidelity model is used for the estimation. It is therefore of utmost importance to consider the choice of models when producing synthetic data that will later be inverted, as the results might be misleadingly optimistic.
Estimates of tropical bromoform emissions using an inversion method
NASA Astrophysics Data System (ADS)
Ashfold, M. J.; Harris, N. R. P.; Manning, A. J.; Robinson, A. D.; Warwick, N. J.; Pyle, J. A.
2014-01-01
Bromine plays an important role in ozone chemistry in both the troposphere and stratosphere. When measured by mass, bromoform (CHBr3) is thought to be the largest organic source of bromine to the atmosphere. While seaweed and phytoplankton are known to be dominant sources, the size and the geographical distribution of CHBr3 emissions remains uncertain. Particularly little is known about emissions from the Maritime Continent, which have usually been assumed to be large, and which appear to be especially likely to reach the stratosphere. In this study we aim to reduce this uncertainty by combining the first multi-annual set of CHBr3 measurements from this region, and an inversion process, to investigate systematically the distribution and magnitude of CHBr3 emissions. The novelty of our approach lies in the application of the inversion method to CHBr3. We find that local measurements of a short-lived gas like CHBr3 can be used to constrain emissions from only a relatively small, sub-regional domain. We then obtain detailed estimates of CHBr3 emissions within this area, which appear to be relatively insensitive to the assumptions inherent in the inversion process. We extrapolate this information to produce estimated emissions for the entire tropics (defined as 20° S-20° N) of 225 Gg CHBr3 yr-1. The ocean in the area we base our extrapolations upon is typically somewhat shallower, and more biologically productive, than the tropical average. Despite this, our tropical estimate is lower than most other recent studies, and suggests that CHBr3 emissions in the coastline-rich Maritime Continent may not be stronger than emissions in other parts of the tropics.
NASA Astrophysics Data System (ADS)
Lohman, R. B.; Simons, M.
2004-12-01
We examine inversions of geodetic data for fault slip and discuss how inferred results are affected by choices of regularization. The final goal of any slip inversion is to enhance our understanding of the dynamics governing fault zone processes through kinematic descriptions of fault zone behavior at various temporal and spatial scales. Important kinematic observations include ascertaining whether fault slip is correlated with topographic and gravitational anomalies, whether coseismic and postseismic slip occur on complementary or overlapping regions of the fault plane, and how aftershock distributions compare with areas of coseismic and postseismic slip. Fault slip inversions are generally poorly-determined inverse problems requiring some sort of regularization. Attempts to place inversion results in the context of understanding fault zone processes should be accompanied by careful treatment of how the applied regularization affects characteristics of the inferred slip model. Most regularization techniques involve defining a metric that quantifies the solution "simplicity". A frequently employed method defines a "simple" slip distribution as one that is spatially smooth, balancing the fit to the data vs. the spatial complexity of the slip distribution. One problem related to the use of smoothing constraints is the "smearing" of fault slip into poorly-resolved areas on the fault plane. In addition, even if the data is fit well by a point source, the fact that a point source is spatially "rough" will force the inversion to choose a smoother model with slip over a broader area. Therefore, when we interpret the area of inferred slip we must ask whether the slipping area is truly constrained by the data, or whether it could be fit equally well by a more spatially compact source with larger amplitudes of slip. We introduce an alternate regularization technique for fault slip inversions, where we seek an end member model that is the smallest region of fault slip that
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian; Wilson, John L.
2000-09-01
Inverse methods can be used to reconstruct the release history of a known source of groundwater contamination from concentration data describing the present-day spatial distribution of the contaminant plume. Using hypothetical release history functions and contaminant plumes, we evaluate the relative effectiveness of two proposed inverse methods, Tikhonov regularization (TR) and minimum relative entropy (MRE) inversion, in reconstructing the release history of a conservative contaminant in a one-dimensional domain [Skaggs and Kabala, 1994; Woodbury and Ulrych, 1996]. We also address issues of reproducibility of the solution and the appropriateness of models for simulating random measurement error. The results show that if error-free plume concentration data are available, both methods perform well in reconstructing a smooth source history function. With error-free data the MRE method is more robust than TR in reconstructing a nonsmooth source history function; however, the TR method is more robust if the data contain measurement error. Two error models were evaluated in this study, and we found that the particular error model does not affect the reliability of the solutions. The results for the TR method have somewhat greater reproducibility because, in some cases, its input parameters are less subjective than those of the MRE method; however, the MRE solution can identify regions where the data give little or no information about the source history function, while the TR solution cannot.
Seismic imaging and inversion based on spectral-element and adjoint methods
NASA Astrophysics Data System (ADS)
Luo, Yang
One of the most important topics in seismology is the construction of detailed tomographic images beneath the surface, which can be interpreted geologically and geochemically to understand geodynamic processes happening in the interior of the Earth. Classically, these images are produced from linearized traveltime anomalies involving several particular seismic phases, whereas nonlinear inversion fitting synthetic seismograms to recorded signals based upon the adjoint method is becoming increasingly favored. Adjoint tomography, also referred to as waveform inversion, is advantageous over classical techniques in several aspects, such as better resolution, but it also has several drawbacks, e.g., slow convergence and the lack of quantitative resolution analysis. In this dissertation, we focus on solving these remaining issues in adjoint tomography from a theoretical perspective and based upon synthetic examples. To make the thesis self-contained and easy to follow, we start from the development of the spectral-element method, a wave-equation solver that provides accurate synthetic seismograms for an arbitrary Earth model, and of the adjoint method, which provides Fréchet derivatives, also known as sensitivity kernels, of a given misfit function. Then the sensitivity kernels for waveform misfit functions are illustrated using examples from exploration seismology, in other words, for migration purposes. Next, we show step by step how these gradients may be utilized in minimizing the misfit function, leading to iterative refinements of the Earth model. Strategies needed to speed up the inversion, ensure convergence and improve resolution, e.g., preconditioning, quasi-Newton methods, multi-scale measurements and the combination of traveltime and waveform misfit functions, are discussed. Through comparisons between adjoint tomography and classical tomography, we address the resolution issue by calculating the point-spread function, the
NASA Astrophysics Data System (ADS)
Palmer, Paul I.; Barnett, J. J.; Eyre, J. R.; Healy, S. B.
2000-07-01
An optimal estimation inverse method is presented which can be used to retrieve simultaneously vertical profiles of temperature and specific humidity, in addition to surface pressure, from satellite-to-satellite radio occultation observations of the Earth's atmosphere. The method is a nonlinear, maximum a posteriori technique which can accommodate most aspects of the real radio occultation problem and is found to be stable and to converge rapidly in most cases. The optimal estimation inverse method has two distinct advantages over the analytic inverse method: it accounts for some of the effects of horizontal gradients, and it retrieves temperature and humidity optimally and simultaneously from the observations. It is also able to account for observation noise and other sources of error. Combined, these advantages ensure a realistic retrieval of atmospheric quantities. A complete error analysis emerges naturally from the optimal estimation theory, allowing a full characterization of the solution. Using this analysis, a quality control scheme is implemented which allows anomalous retrieval conditions to be recognized and removed, thus preventing gross retrieval errors. The inverse method presented in this paper has been implemented for bending angle measurements derived from GPS/MET radio occultation observations of the Earth. Preliminary results from simulated data suggest that these observations have the potential to improve numerical weather prediction model analyses significantly throughout their vertical range.
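The linear maximum a posteriori retrieval step on which such a method iterates can be sketched as follows (dimensions, covariances and the random forward operator are illustrative, not GPS/MET values):

```python
import numpy as np

# Linear optimal-estimation (MAP) retrieval:
#   x_hat = x_a + (K' Se^-1 K + Sa^-1)^-1 K' Se^-1 (y - K x_a)
rng = np.random.default_rng(2)
n, m = 10, 6                         # state (profile) and observation sizes
K = rng.standard_normal((m, n))      # linearized forward model
x_a = np.zeros(n)                    # a priori profile
Sa = np.eye(n)                       # a priori covariance
Se = 0.01 * np.eye(m)                # observation-error covariance

x_true = rng.standard_normal(n)
y = K @ x_true + 0.1 * rng.standard_normal(m)

Se_inv = np.linalg.inv(Se)
S_hat = np.linalg.inv(K.T @ Se_inv @ K + np.linalg.inv(Sa))  # retrieval covariance
x_hat = x_a + S_hat @ K.T @ Se_inv @ (y - K @ x_a)
A_kernel = S_hat @ K.T @ Se_inv @ K  # averaging kernel: sensitivity to truth
```

The retrieval covariance `S_hat` and averaging kernel fall out of the same algebra, which is the "complete error analysis" the abstract refers to: the posterior variance is always reduced relative to the prior, and the trace of the averaging kernel measures how many independent pieces of information the observations contribute.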
A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Liu, Shuang; Hu, Xiangyun; Liu, Tianyou
2014-07-01
Simulating natural ants' foraging behavior, the ant colony optimization (ACO) algorithm performs excellently in combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, ACO has seldom been used to invert gravity and magnetic data. Starting from the continuous, multi-dimensional objective function of potential field inversion, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and for recovery of physical property distributions of models of complicated shape. We divide the continuous variables into discrete nodes, and ants directionally tour the nodes according to transition probabilities. We update the pheromone trails using a Gaussian mapping between the objective function value and the quantity of pheromone; this allows the search results to be analyzed in real time and promotes the rate of convergence and the precision of the inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method on synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. ACO has good optimization capability and several attractive characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
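The node-partition idea, discrete nodes per continuous variable, pheromone-weighted tours, and a Gaussian-style mapping from misfit to pheromone deposit, can be sketched as follows (the objective and update rules are a generic ACO recipe, not the paper's exact NP-ACO formulation):

```python
import numpy as np

rng = np.random.default_rng(3)
n_var, n_nodes, n_ants, n_iter = 2, 50, 20, 80
lo, hi = -5.0, 5.0
grid = np.linspace(lo, hi, n_nodes)      # discretized nodes per variable
tau = np.ones((n_var, n_nodes))          # pheromone trails

def objective(x):
    """Stand-in misfit with a known minimum at x = (1.5, 1.5)."""
    return np.sum((x - 1.5) ** 2)

best_x, best_f = None, np.inf
for _ in range(n_iter):
    # each ant tours one node per variable, chosen by pheromone weight
    probs = tau / tau.sum(axis=1, keepdims=True)
    idx = np.array([[rng.choice(n_nodes, p=probs[v]) for v in range(n_var)]
                    for _ in range(n_ants)])
    xs = grid[idx]
    fs = np.array([objective(x) for x in xs])
    if fs.min() < best_f:
        best_f, best_x = fs.min(), xs[fs.argmin()]
    # evaporation, then a Gaussian-style mapping of misfit to deposit,
    # so low-misfit ants reinforce their nodes much more strongly
    tau *= 0.9
    dep = np.exp(-fs / (fs.mean() + 1e-12))
    for ant in range(n_ants):
        for v in range(n_var):
            tau[v, idx[ant, v]] += dep[ant]
```

The colony concentrates pheromone on the nodes nearest the minimum, recovering it to within the grid spacing; a uniform (ant-cycle-style) deposit would blur this distinction between ants, which is the premature-convergence issue the paper addresses.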
A self-constrained inversion of magnetic data based on correlation method
NASA Astrophysics Data System (ADS)
Sun, Shida; Chen, Chao
2016-12-01
Geologically-constrained inversion is a powerful method for producing geologically reasonable solutions in geophysical exploration problems. In many cases, however, geological information beyond the observed geophysical data to be inverted is too scarce to improve the reliability of the recovered models. To deal with these situations, self-constraints extracted by preprocessing the observed data have been applied to constrain the inversion. In this paper, we present a self-constrained inversion method based on the correlation method. In our approach the correlation results are first obtained by calculating the cross-correlation between theoretical data and horizontal gradients of the observed data. Subsequently, we propose two specific strategies to extract the spatial variation from the correlation results and translate it into spatial weighting functions. Incorporating the spatial weighting functions into the model objective function, we obtain self-constrained solutions with higher reliability. We present two synthetic examples and one field magnetic data example to test the validity of the method. All results demonstrate that the solutions from our self-constrained inversion delineate the geological bodies with clearer boundaries and more concentrated physical property distributions.
AFRL-RX-WP-TP-2012-0397: Inverse Problem for Electromagnetic Propagation in a Dielectric Medium Using Markov Chain Monte Carlo Method (Preprint)
2012-08-01
… a stochastic inverse methodology arising in electromagnetic imaging. Nondestructive testing using guided microwaves covers a wide range of
Multiresolution subspace-based optimization method for inverse scattering problems.
Oliveri, Giacomo; Zhong, Yu; Chen, Xudong; Massa, Andrea
2011-10-01
This paper investigates an approach to inverse scattering problems based on the integration of the subspace-based optimization method (SOM) within a multifocusing scheme in the framework of the contrast source formulation. The scattering equations are solved by a nested three-step procedure composed of (a) an outer multiresolution loop dealing with the identification of the regions of interest within the investigation domain through an iterative information-acquisition process, (b) a spectrum analysis step devoted to the reconstruction of the deterministic components of the contrast sources, and (c) an inner optimization loop aimed at retrieving the ambiguous components of the contrast sources through a conjugate gradient minimization of a suitable objective function. A set of representative reconstruction results is discussed to provide numerical evidence of the effectiveness of the proposed algorithmic approach as well as to assess the features and potentialities of the multifocusing integration in comparison with the state-of-the-art SOM implementation.
Inverse methods for stellarator error-fields and emission
NASA Astrophysics Data System (ADS)
Hammond, K. C.; Anichowski, A.; Brenner, P. W.; Diaz-Pacheco, R.; Volpe, F. A.; Wei, Y.; Kornbluth, Y.; Pedersen, T. S.; Raftopoulos, S.; Traverso, P.
2016-10-01
Work at the CNT stellarator at Columbia University has resulted in the development of two inverse diagnosis techniques that infer difficult-to-measure properties from simpler measurements. First, CNT's error-field is determined using a Newton-Raphson algorithm to infer coil misalignments based on measurements of flux surfaces. This is obtained by reconciling the computed flux surfaces (a function of coil misalignments) with the measured flux surfaces. Second, the plasma emissivity profile is determined based on a single CCD camera image using an onion-peeling method. This approach posits a system of linear equations relating pixel brightness to emission from a discrete set of plasma layers bounded by flux surfaces. Results for both of these techniques as applied to CNT will be shown, and their applicability to large modular coil stellarators will be discussed.
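For an idealized geometry of nested circular flux surfaces, the onion-peeling step reduces to a triangular linear system. The sketch below is a simplified stand-in for the camera geometry described above, with invented names and discretization: pixel i views a chord at impact parameter r_i, and shell j (between radii r_j and r_{j+1}) has uniform emissivity:

```python
import numpy as np

def chord_matrix(radii):
    """L[i, j] = length of view chord i inside shell j (nested circles)."""
    n = len(radii) - 1
    L = np.zeros((n, n))
    for i in range(n):
        p = radii[i]                       # impact parameter of chord i
        for j in range(i, n):              # chord only crosses shells j >= i
            outer = np.sqrt(radii[j + 1] ** 2 - p ** 2)
            inner = np.sqrt(max(radii[j] ** 2 - p ** 2, 0.0))
            L[i, j] = 2.0 * (outer - inner)
    return L

radii = np.linspace(0.0, 1.0, 6)           # 5 nested shells
L = chord_matrix(radii)                    # upper triangular by construction
eps_true = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
b = L @ eps_true                           # synthetic pixel brightnesses

# "Peeling": solve the triangular system from the outermost chord inward.
eps = np.linalg.solve(L, b)
```

Solving the triangular system from the outermost chord inward is exactly the peeling order; with noisy data a regularized solve would replace the direct one.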
A comparison of lidar inversion methods for cirrus applications
NASA Technical Reports Server (NTRS)
Elouragini, Salem; Flamant, Pierre H.
1992-01-01
Several methods for inverting the lidar equation are suggested as means to derive the cirrus optical properties (beta backscatter, alpha extinction coefficients, and delta optical depth) at one wavelength. The lidar equation can be inverted in a linear or logarithmic form; either solution assumes a linear relationship beta = kappa * alpha, where kappa is the lidar ratio. A number of problems prevent us from calculating alpha (or beta) with good accuracy. Some of these are as follows: (1) the multiple scattering effect (most authors neglect it); (2) an absolute calibration of the lidar system (difficult and sometimes not possible); (3) lack of accuracy in the lidar ratio kappa (taken as constant, but in fact it varies with range and cloud species); and (4) the determination of the boundary condition for the logarithmic solution, which depends on the signal-to-noise ratio (SNR) at cloud top. An inversion in linear form needs an absolute calibration of the system. In practice one uses molecular backscattering below the cloud to calibrate the system; this calibration does not hold permanently because the turbidity of the lower atmosphere is variable. For a logarithmic solution, a reference extinction coefficient (alpha(sub f)) at cloud top is required. Several methods to determine alpha(sub f) were suggested. We tested these methods at low SNR, which led us to propose two new methods, referenced as S1 and S2.
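A backward (Klett-type) logarithmic solution of the kind discussed above can be sketched and checked on a homogeneous test atmosphere. This assumes beta = kappa * alpha with constant kappa and is only an illustration; it is not the S1/S2 boundary-condition schemes proposed in the paper:

```python
import numpy as np

def klett_backward(r, P, alpha_f):
    """Backward inversion of range-corrected signal for extinction alpha(r)."""
    S = np.log(P * r ** 2)                 # logarithmic range-corrected signal
    e = np.exp(S - S[-1])                  # referenced to the far boundary
    dr = np.diff(r)
    seg = (e[:-1] + e[1:]) / 2.0 * dr      # trapezoid segments
    # I[i] = integral of e from r[i] to the far end r[-1]
    I = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])
    return e / (1.0 / alpha_f + 2.0 * I)

# Synthetic homogeneous layer: constant extinction, constant lidar ratio.
r = np.linspace(1.0, 2.0, 201)             # range gates (km), invented values
alpha_true = 0.5
beta = alpha_true / 50.0                   # beta = kappa * alpha, kappa = 1/50
P = beta * np.exp(-2.0 * alpha_true * r) / r ** 2
alpha = klett_backward(r, P, alpha_f=alpha_true)
```

Here the reference extinction `alpha_f` at the far boundary is supplied exactly; the paper's point is precisely that estimating it at low SNR is the hard part.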
Study of TEC fluctuation via stochastic models and Bayesian inversion
NASA Astrophysics Data System (ADS)
Bires, A.; Roininen, L.; Damtie, B.; Nigussie, M.; Vanhamäki, H.
2016-11-01
We propose stochastic processes to model the total electron content (TEC) observation. Based on this, we model the rate of change of TEC (ROT) variation during ionospheric quiet conditions with stationary processes. During ionospheric disturbed conditions, for example when irregularity in the ionospheric electron density distribution occurs, the stationarity assumption over long time periods is no longer valid. In these cases, we make the parameter estimation for short time scales, during which we can assume stationarity. We show the relationship between the new method and the commonly used TEC characterization parameters ROT and the ROT Index (ROTI). We construct our parametric model within the framework of Bayesian statistical inverse problems and hence give the solution as an a posteriori probability distribution. The Bayesian framework allows us to model measurement errors systematically. Similarly, we mitigate variation of TEC due to factors which are not of ionospheric origin, such as the motion of satellites relative to the receiver, by incorporating a priori knowledge in the Bayesian model. In practical computations, we draw the so-called maximum a posteriori estimates, which are our ROT and ROTI estimates, from the posterior distribution. Because the algorithm allows estimation of ROTI at each observation time, the estimator does not depend on the period of time chosen for ROTI computation. We verify the method by analyzing TEC data recorded by a GPS receiver located in Ethiopia (11.6°N, 37.4°E). The results indicate that the TEC fluctuations caused by ionospheric irregularity can be effectively detected and quantified from the estimated ROT and ROTI values.
Inversion of canopy reflectance models for estimation of vegetation parameters
NASA Technical Reports Server (NTRS)
Goel, Narendra S.
1987-01-01
One of the keys to successful remote sensing of vegetation is to be able to estimate important agronomic parameters like leaf area index (LAI) and biomass (BM) from the bidirectional canopy reflectance (CR) data obtained by a space-shuttle or satellite borne sensor. One approach for such an estimation is through inversion of CR models which relate these parameters to CR. The feasibility of this approach was shown. The overall objective of the research carried out was to address heretofore uninvestigated but important fundamental issues, develop the inversion technique further, and delineate its strengths and limitations.
Inverse Methods. Interdisciplinary Elements of Methodology, Computation, and Applications
NASA Astrophysics Data System (ADS)
Jacobsen, Bo Holm; Mosegaard, Klaus; Sibani, Paolo
Over the last few decades inversion concepts have become an integral part of experimental data interpretation in several branches of science. In numerous cases similar inversion-like techniques were developed independently in separate disciplines, sometimes based on different lines of reasoning, but not always to the same level of sophistication. This book is based on the Interdisciplinary Inversion Conference held at the University of Aarhus, Denmark. For scientists and graduate students in geophysics, astronomy, oceanography, petroleum geology, and geodesy, the book offers a wide variety of examples and theoretical background in the field of inversion techniques.
Localization of incipient tip vortex cavitation using ray based matched field inversion method
NASA Astrophysics Data System (ADS)
Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon
2015-10-01
Cavitation of a marine propeller is one of the main contributors to broadband radiated ship noise. In this research, an algorithm for the source localization of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and the matched-field inversion method is applied to find the source position by comparing the spatial correlation between measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified through a known virtual source and a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the suggested algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure measurements on the outer hull above the propeller, and that it is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.
Efficient inversion and uncertainty quantification of a tephra fallout model
NASA Astrophysics Data System (ADS)
White, J. T.; Connor, C. B.; Connor, L.; Hasenaka, T.
2017-01-01
An efficient and effective inversion and uncertainty quantification approach is proposed for estimating eruption parameters given a data set collected from a tephra deposit. The approach is model independent and here is applied using Tephra2, a code that simulates advective and dispersive tephra transport and deposition. The Levenberg-Marquardt algorithm is combined with formal Tikhonov and subspace regularization to invert eruption parameters; a linear equation for conditional uncertainty propagation is used to estimate posterior parameter uncertainty. Both the inversion and uncertainty analysis support simultaneous analysis of the full eruption and wind field parameterization. The combined inversion/uncertainty quantification approach is applied to the 1992 eruption of Cerro Negro and the 2011 Kirishima-Shinmoedake eruption. While eruption mass uncertainty is reduced by inversion against tephra isomass data, considerable uncertainty remains for many eruption and wind field parameters, such as plume height. Supplementing the inversion data set with tephra granulometry data is shown to further reduce the uncertainty of most eruption and wind field parameters. The eruption mass of the 2011 Kirishima-Shinmoedake eruption is 0.82 × 10^10 kg to 2.6 × 10^10 kg, with 95% confidence; total eruption mass for the 1992 Cerro Negro eruption is 4.2 × 10^10 kg to 7.3 × 10^10 kg, with 95% confidence. These results indicate that eruption classification and characterization of eruption parameters can be significantly improved through this uncertainty quantification approach.
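The damped, regularized update at the heart of such a scheme can be sketched on a toy two-parameter model. This is an illustrative stand-in, not the Tephra2 parameterization; the Tikhonov term here simply penalizes departure from a preferred parameter set:

```python
import numpy as np

def forward(p, x):
    a, b = p
    return a * np.exp(-b * x)              # invented two-parameter model

def jacobian(p, x):
    a, b = p
    return np.column_stack([np.exp(-b * x), -a * x * np.exp(-b * x)])

def cost(p, x, y, p_pref, mu):
    return np.sum((y - forward(p, x)) ** 2) + mu * np.sum((p - p_pref) ** 2)

def lm_tikhonov(x, y, p0, p_pref, mu=1e-6, lam=1e-2, n_iter=100):
    """Levenberg-Marquardt with a simple Tikhonov penalty toward p_pref."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y - forward(p, x)
        J = jacobian(p, x)
        A = J.T @ J + (lam + mu) * np.eye(p.size)   # LM damping + Tikhonov
        g = J.T @ r - mu * (p - p_pref)
        step = np.linalg.solve(A, g)
        if cost(p + step, x, y, p_pref, mu) < cost(p, x, y, p_pref, mu):
            p, lam = p + step, lam * 0.7            # accept, relax damping
        else:
            lam *= 2.0                              # reject, increase damping
    return p

x = np.linspace(0.0, 5.0, 50)
p_true = np.array([3.0, 0.8])
y = forward(p_true, x)                              # noise-free synthetic data
p_est = lm_tikhonov(x, y, p0=[1.0, 0.3], p_pref=np.zeros(2))
```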
An evolution equation modeling inversion of tulip flames
Dold, J.W.; Joulin, G.
1995-02-01
The authors attempt to reduce the number of physical ingredients needed to model the phenomenon of tulip-flame inversion to a bare minimum. This is achieved by synthesizing the nonlinear, first-order Michelson-Sivashinsky (MS) equation with the second-order linear dispersion relation of Landau and Darrieus, which adds only one extra term to the MS equation without changing any of its stationary behavior and without changing its dynamics in the limit of small density change, when the MS equation is asymptotically valid. However, as demonstrated by spectral numerical solutions, the resulting second-order nonlinear evolution equation is found to describe the inversion of tulip flames in good qualitative agreement with classical experiments on the phenomenon. This shows that the combined influences of front curvature, geometric nonlinearity, and hydrodynamic instability (including its second-order, or inertial, effects, which are an essential result of vorticity production at the flame front) are sufficient to reproduce the inversion process.
Inverse modelling of NOx emissions over eastern China: uncertainties due to chemical non-linearity
NASA Astrophysics Data System (ADS)
Gu, Dasa; Wang, Yuhang; Yin, Ran; Zhang, Yuzhong; Smeltzer, Charles
2016-10-01
Satellite observations of nitrogen dioxide (NO2) have often been used to derive nitrogen oxides (NOx = NO + NO2) emissions. A widely used inversion method was developed by Martin et al. (2003). Refinements of this method were subsequently developed. In the context of this inversion method, we show that the local derivative (of a first-order Taylor expansion) is more appropriate than the "bulk ratio" (ratio of emission to column) used in the original formulation for polluted regions. Using the bulk ratio can lead to biases in regions of high NOx emissions such as eastern China due to chemical non-linearity. Inverse modelling using the local derivative method is applied to both GOME-2 and OMI satellite measurements to estimate anthropogenic NOx emissions over eastern China. Compared with the traditional method using bulk ratio, the local derivative method produces more consistent NOx emission estimates between the inversion results using GOME-2 and OMI measurements. The results also show significant changes in the spatial distribution of NOx emissions, especially over high emission regions of eastern China. We further discuss a potential pitfall of using the difference of two satellite measurements to derive NOx emissions. Our analysis suggests that chemical non-linearity needs to be accounted for and that a careful bias analysis is required in order to use the satellite differential method in inverse modelling of NOx emissions.
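The distinction between the two updates can be illustrated with a hypothetical sublinear column-emission relation (invented for illustration, not a chemical transport model): if the NO2 column responds as Omega(E) = E**0.7 in a polluted regime, the bulk-ratio update scales the prior emission by Omega_obs/Omega_model, while the local-derivative update applies a first-order Taylor correction:

```python
# Hypothetical sublinear response of NO2 column to NOx emissions,
# mimicking chemical non-linearity in a polluted regime.
def column(E):
    return E ** 0.7

def dcolumn_dE(E):
    return 0.7 * E ** (-0.3)               # local derivative of the response

E_true = 10.0                              # "true" emission (arbitrary units)
omega_obs = column(E_true)                 # observed column
E_a = 5.0                                  # a priori emission estimate
omega_a = column(E_a)                      # modeled column for the prior

E_bulk = omega_obs * (E_a / omega_a)                     # bulk-ratio update
E_local = E_a + (omega_obs - omega_a) / dcolumn_dE(E_a)  # local-derivative update
```

After one iteration the local-derivative estimate lands closer to the true emission than the bulk-ratio estimate, which is biased low whenever the response is sublinear, i.e. in high-emission regions.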
NASA Astrophysics Data System (ADS)
Agata, R.; Ichimura, T.; Hori, T.; Hirahara, K.; Hori, M.
2012-12-01
preconditioner, which uses a solution obtained by a lower-resolution model generated in the same area to improve the convergence of the iterative solver. Also, the preconditioner is solved in single precision. As a result, the computation time is significantly reduced. Our method is verified by comparing the results with the analytical solution in a half-space (Okada, 1985). As an application example, we performed an inversion analysis of fault slip in the 2011 Tohoku earthquake using Northeast Japan models generated by both our method and a conventional method (corresponding to Okada's analytical solution). Our model has more than 150 million DOF and 4 layers with complex shape and different material properties. A significant difference was seen between the results of the two models, indicating the importance of introducing the layer shape of the crust and the heterogeneity of material into the models. The total computation time for the Green's functions is reduced to almost 1/7 because of the improvement of the computation method. We expect that this method will become a core technique of crustal deformation analysis. Our next plan is to take the ambiguity of the shape and material properties of the crust into consideration. We would also like to introduce viscoelasticity into the models.
The Wing-Body Aeroelastic Analyses Using the Inverse Design Method
NASA Astrophysics Data System (ADS)
Lee, Seung Jun; Im, Dong-Kyun; Lee, In; Kwon, Jang-Hyuk
Flutter is one of the most dangerous phenomena in aeroelasticity. When it occurs, the aircraft structure can fail in a few seconds. In recent aeroelastic research, computational fluid dynamics (CFD) techniques have become important means to predict aeroelastic unstable responses accurately. Among various flow equations, such as the Navier-Stokes, Euler, and full potential equations, the transonic small disturbance (TSD) theory is widely recognized as one of the most efficient. However, the small disturbance assumption limits the applicable range of the TSD theory to thin wings. For a missile, which usually has small-aspect-ratio wings, the influence of body aerodynamics on the wing surface may be significant. Thus, the flutter stability including the body effect should be verified. In this research an inverse design method is used to complement the aerodynamic deficiency caused by the fuselage. The MGM (modified Garabedian-McFadden) inverse design method is used to optimize the aerodynamic field of a full aircraft model. Furthermore, the present TSD aeroelastic analyses do not require a grid regeneration process. The MGM inverse design method converges faster than other conventional aerodynamic theories. Consequently, the inverse-designed aeroelastic analyses show that the flutter stability is lowered by the body effect.
Research on inverse methods and optimization in Italy
NASA Technical Reports Server (NTRS)
Larocca, Francesco
1991-01-01
The research activities in Italy on inverse design and optimization are reviewed. The review is focused on aerodynamic aspects in turbomachinery and wing section design. Inverse design of blade rows and ducts of turbomachinery in the subsonic and transonic regimes is illustrated by work at the Politecnico di Torino and in the turbomachinery industry (FIAT AVIO).
Numerical methods for problems involving the Drazin inverse
NASA Technical Reports Server (NTRS)
Meyer, C. D., Jr.
1979-01-01
The objective was to try to develop a useful numerical algorithm for the Drazin inverse and to analyze the numerical aspects of the applications of the Drazin inverse relating to the study of homogeneous Markov chains and systems of linear differential equations with singular coefficient matrices. It is felt that all objectives were accomplished with a measurable degree of success.
NASA Technical Reports Server (NTRS)
Vazquez, Sixto L.; Tessler, Alexander; Quach, Cuong C.; Cooper, Eric G.; Parks, Jeffrey; Spangler, Jan L.
2005-01-01
In an effort to mitigate accidents due to system and component failure, NASA's Aviation Safety program has partnered with industry, academia, and other governmental organizations to develop real-time, on-board monitoring capabilities and system performance models for early detection of airframe structure degradation. NASA Langley is investigating a structural health monitoring capability that uses a distributed fiber-optic strain system and an inverse finite element method for measuring and modeling structural deformations. This report describes the constituent systems that enable this structural monitoring function and discusses results from laboratory tests using the fiber strain sensor system and the inverse finite element method to demonstrate structural deformation estimation on an instrumented test article.
Mirror Writing: Learning, Transfer, and Implications for Internal Inverse Models.
Latash, Mark L.
1999-06-01
In a study of the effects of practicing mirror writing, the effects of transfer to the nonpracticed hand and to nonpracticed phrases were assessed in 185 students. Large transfer effects were observed. An interpretation of those effects is based on a suggestion that the learning led to the creation of a new internal inverse model (or a modification of a pre-existent model) mapping the space of task variables onto the space of internal variables.
NASA Astrophysics Data System (ADS)
Rizzuti, G.; Gisolf, A.
2017-03-01
We study a reconstruction algorithm for the general inverse scattering problem based on the estimate of not only medium properties, as in more conventional approaches, but also wavefields propagating inside the computational domain. This extended set of unknowns is justified as a way to prevent local minimum stagnation, which is a common issue for standard methods. At each iteration of the algorithm, (i) the model parameters are obtained by solution of a convex problem, formulated from a special bilinear relationship of the data with respect to properties and wavefields (where the wavefield is kept fixed), and (ii) a better estimate of the wavefield is calculated, based on the previously reconstructed properties. The resulting scheme is computationally convenient since step (i) can greatly benefit from parallelization and the wavefield update (ii) requires modeling only in the known background model, which can be sped up considerably by factorization-based direct methods. The inversion method is successfully tested on synthetic elastic datasets.
Numerical study of the inverse problem for the diffusion-reaction equation using optimization method
NASA Astrophysics Data System (ADS)
Soboleva, O. V.; Brizitskii, R. V.
2016-04-01
A model of substance transfer with mixed boundary conditions is considered. The inverse extremum problem of identifying the main coefficient in a nonstationary diffusion-reaction equation is formulated. A numerical algorithm based on the Newton method of nonlinear optimization and finite difference discretization for solving this extremum problem is developed and implemented. The results of numerical experiments are discussed.
Parallelized Three-Dimensional Resistivity Inversion Using Finite Elements And Adjoint State Methods
NASA Astrophysics Data System (ADS)
Schaa, Ralf; Gross, Lutz; Du Plessis, Jaco
2015-04-01
The resistivity method is one of the oldest geophysical exploration methods. It employs one pair of electrodes to inject current into the ground and one or more pairs of electrodes to measure the electrical potential difference. The potential difference is a non-linear function of the subsurface resistivity distribution described by an elliptic partial differential equation (PDE) of the Poisson type. Inversion of measured potentials solves for the subsurface resistivity represented by the PDE coefficients. With increasing advances in multichannel resistivity acquisition systems (systems with more than 60 channels and full waveform recording are now emerging), inversion software requires efficient storage and solver algorithms. We developed the finite element solver Escript, which provides a user-friendly programming environment in Python to solve large-scale PDE-based problems (see https://launchpad.net/escript-finley). Using finite elements, highly irregularly shaped geology and topography can readily be taken into account. For the 3D resistivity problem, we have implemented the secondary potential approach, where the PDE is decomposed into a primary potential caused by the source current and a secondary potential caused by changes in subsurface resistivity. The primary potential is calculated analytically, and the boundary value problem for the secondary potential is solved using nodal finite elements. This approach removes the singularity caused by the source currents and provides more accurate 3D resistivity models. To solve the inversion problem we apply a 'first optimize, then discretize' approach using a quasi-Newton scheme in the form of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method (see Gross & Kemp 2013). The evaluation of the cost function requires the solution of the secondary potential PDE for each source current and the solution of the corresponding adjoint-state PDE for the cost function gradients with respect to the subsurface
Dynamic inverse models in human-cyber-physical systems
NASA Astrophysics Data System (ADS)
Robinson, Ryan M.; Scobee, Dexter R. R.; Burden, Samuel A.; Sastry, S. Shankar
2016-05-01
Human interaction with the physical world is increasingly mediated by automation. This interaction is characterized by dynamic coupling between robotic (i.e. cyber) and neuromechanical (i.e. human) decision-making agents. Guaranteeing performance of such human-cyber-physical systems will require predictive mathematical models of this dynamic coupling. Toward this end, we propose a rapprochement between robotics and neuromechanics premised on the existence of internal forward and inverse models in the human agent. We hypothesize that, in tele-robotic applications of interest, a human operator learns to invert automation dynamics, directly translating from desired task to required control input. By formulating the model inversion problem in the context of a tracking task for a nonlinear control system in control-affine form, we derive criteria for exponential tracking and show that the resulting dynamic inverse model generally renders a portion of the physical system state (i.e., the internal dynamics) unobservable from the human operator's perspective. Under stability conditions, we show that the human can achieve exponential tracking without formulating an estimate of the system's state so long as they possess an accurate model of the system's dynamics. These theoretical results are illustrated using a planar quadrotor example. We then demonstrate that the automation can intervene to improve performance of the tracking task by solving an optimal control problem. Performance is guaranteed to improve under the assumption that the human learns and inverts the dynamic model of the altered system. We conclude with a discussion of practical limitations that may hinder exact dynamic model inversion.
The generalized Phillips-Twomey method for NMR relaxation time inversion
NASA Astrophysics Data System (ADS)
Gao, Yang; Xiao, Lizhi; Zhang, Yi; Xie, Qingming
2016-10-01
The inversion of NMR relaxation time involves the Fredholm integral equation of the first kind. Due to its ill-posedness, numerical solutions to this type of equation are often much less accurate than desired and bear little resemblance to the true solution. There has been strong interest in finding a well-posed method for this ill-posed problem since the 1950s. In this paper, we prove the existence, uniqueness, stability and convergence of the generalized Phillips-Twomey regularization method for solving this type of equation. Numerical simulations and core analyses arising from NMR transverse relaxation time inversion are conducted to show the effectiveness of the generalized Phillips-Twomey method. Both the simulation results and the core analyses agree well with the model and with reality.
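The classical Phillips-Twomey idea (damped least squares with a second-difference smoothing matrix) can be sketched for a discretized first-kind Fredholm problem of the T2-inversion type. The grids, kernel, and regularization weight below are invented for illustration and are not the paper's generalized scheme:

```python
import numpy as np

# Discretized Fredholm kernel for transverse relaxation: d(t) = K f(T2).
t = np.linspace(0.01, 3.0, 100)            # echo times (invented units)
T2 = np.linspace(0.05, 2.0, 60)            # relaxation-time grid
K = np.exp(-t[:, None] / T2[None, :])

f_true = np.exp(-0.5 * ((T2 - 0.8) / 0.2) ** 2)   # smooth T2 distribution
d = K @ f_true                                     # noise-free synthetic decay

# Second-difference operator: Phillips' smoothing matrix H = D^T D.
n = T2.size
D = np.zeros((n - 2, n))
for i in range(n - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

# Phillips-Twomey solve: (K^T K + gamma D^T D) f = K^T d.
gamma = 1e-3                               # regularization weight, tuned by hand
f_rec = np.linalg.solve(K.T @ K + gamma * (D.T @ D), K.T @ d)
```

Without the gamma term this system is numerically singular and the solution oscillates wildly; the smoothing term restores a stable, physically plausible distribution.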
New inverse method of centrifugal pump blade based on free form deformation
NASA Astrophysics Data System (ADS)
Zhang, R. H.; Guo, M.; Yang, J. H.; Liu, Y.; Li, R. N.
2013-12-01
In this research, a new inverse method for centrifugal pump blades based on free form deformation (FFD) is proposed, in which free form deformation is used to parameterize the pump blade. The blade is embedded in a trivariate control volume subdivided by an equally spaced control lattice. The control volume is deformed by moving the control lattice, and the embedded blade deforms with it. The flow in the pump is solved using a three-dimensional turbulence model. The lattice deformation function is constructed according to the gradient distribution of fluid energy along the blade and its objective distribution. The blade shape is deformed repeatedly according to the flow solution until the objective blade shape is obtained. A calculation case shows that the proposed inverse method based on FFD is effective.
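The trivariate Bernstein form of FFD that such a parameterization rests on can be sketched as follows. This is a generic FFD illustration with an arbitrary 3x3x3 lattice, not the authors' blade parameterization:

```python
import numpy as np
from math import comb

def bernstein(n, i, x):
    """Bernstein basis polynomial B_{i,n}(x)."""
    return comb(n, i) * x ** i * (1.0 - x) ** (n - i)

def ffd(points, lattice):
    """Trivariate FFD: map points in [0,1]^3 through a Bernstein control lattice."""
    l, m, n, _ = lattice.shape
    out = np.zeros_like(points)
    for idx, (s, t, u) in enumerate(points):
        for i in range(l):
            for j in range(m):
                for k in range(n):
                    w = (bernstein(l - 1, i, s) * bernstein(m - 1, j, t)
                         * bernstein(n - 1, k, u))
                    out[idx] += w * lattice[i, j, k]
    return out

# A lattice whose control points sit at their parametric positions is the
# identity map (linear precision of the Bernstein basis).
g = np.linspace(0.0, 1.0, 3)
lattice0 = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
pts = np.array([[0.5, 0.5, 0.5], [0.25, 0.75, 0.1]])
undeformed = ffd(pts, lattice0)

# Moving one interior control point deforms the embedded points smoothly.
lattice1 = lattice0.copy()
lattice1[1, 1, 1, 2] += 0.2                # lift the central node in z
deformed = ffd(pts, lattice1)
```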
The genetic algorithm: A robust method for stress inversion
NASA Astrophysics Data System (ADS)
Thakur, Prithvi; Srivastava, Deepak C.; Gupta, Pravin K.
2017-01-01
The stress inversion of geological or geophysical observations is a nonlinear problem. In most existing methods, it is solved by linearization under certain assumptions. These linear algorithms not only oversimplify the problem but are also vulnerable to entrapment of the solution in a local optimum. We propose the use of a nonlinear heuristic technique, the genetic algorithm, which searches for the global optimum without making any linearizing assumption or simplification. The algorithm mimics the natural evolutionary processes of selection, crossover and mutation, and minimizes a composite misfit function in searching for the global optimum, the fittest stress tensor. The validity and efficacy of the algorithm are demonstrated by a series of tests on synthetic and natural fault-slip observations in different tectonic settings, and also in situations where the observations are noisy. It is shown that the genetic algorithm is superior to other commonly practised methods, in particular in those tectonic settings where none of the principal stresses is directed vertically and/or the given data set is noisy.
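A minimal real-coded genetic algorithm with selection, blend crossover and Gaussian mutation might look as follows. The misfit function is an invented quadratic stand-in for the composite fault-slip misfit, and the parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def misfit(x):
    # Stand-in composite misfit; a stress inversion would instead compare
    # predicted and observed slip directions on each fault plane.
    return np.sum((x - np.array([0.3, -0.7, 1.5])) ** 2)

def genetic_minimize(n_gen=150, pop_size=60, n_var=3, p_mut=0.2,
                     lo=-2.0, hi=2.0):
    pop = rng.uniform(lo, hi, (pop_size, n_var))
    for _ in range(n_gen):
        fit = np.array([misfit(ind) for ind in pop])
        order = np.argsort(fit)
        parents = pop[order[: pop_size // 2]]          # selection: fittest half
        n_child = pop_size - len(parents)
        # Crossover: blend random pairs of parents.
        pair = rng.integers(0, len(parents), (n_child, 2))
        w = rng.uniform(0.0, 1.0, (n_child, 1))
        children = w * parents[pair[:, 0]] + (1 - w) * parents[pair[:, 1]]
        # Mutation: occasional Gaussian perturbation of child genes.
        mask = rng.uniform(size=children.shape) < p_mut
        children = children + mask * rng.normal(0.0, 0.1, children.shape)
        pop = np.vstack([parents, children])
    fit = np.array([misfit(ind) for ind in pop])
    return pop[np.argmin(fit)], fit.min()

best, best_f = genetic_minimize()
```

Because the fittest half survives unchanged each generation, the best misfit is non-increasing, while mutation keeps the search from collapsing prematurely onto a local optimum.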
Modelling and genetic algorithm based optimisation of inverse supply chain
NASA Astrophysics Data System (ADS)
Bányai, T.
2009-04-01
(Recycling of household appliances with emphasis on reuse options.) The purpose of this paper is to present a possible method for avoiding unnecessary environmental risk and landscape use caused by needlessly large collection-system supply chains in recycling processes. In the first part of the paper the author presents the mathematical model of recycling-related collection systems (applied especially to waste electric and electronic products), and in the second part a genetic algorithm based optimisation method is demonstrated, by means of which it is possible to determine the optimal structure of the inverse supply chain from the point of view of economic, ecological and logistic objective functions. The model of the inverse supply chain is based on a multi-level, hierarchical collection system. In this static model it is assumed that technical conditions are permanent. The total costs consist of three parts: total infrastructure costs, total material handling costs and environmental risk costs. The infrastructure-related costs depend only on the specific fixed costs and the specific unit costs of the operation points (collection, pre-treatment, treatment, recycling and reuse plants). The costs of warehousing and transportation are represented by the material handling related costs. The most important factors determining the level of environmental risk cost are the number of products recycled (treated or reused) out of time, the number of supply chain objects and the length of transportation routes. The objective function is the minimization of the total cost, taking the constraints into consideration. Although much research work has discussed the design of supply chains [8], most of it concentrates on linear cost functions; in this model, non-linear cost functions were used.
The non-linear cost functions and the possibly high number of objects in the inverse supply chain led to the problem of choosing a
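As a sketch of the optimisation described in the abstract above, the loop below applies a minimal genetic algorithm (truncation selection, one-point crossover, bit-flip mutation) to a binary facility-opening vector under a non-linear total-cost function. All cost terms and coefficients are invented for illustration and are not the paper's model.

```python
import random

random.seed(0)

N = 12                                               # candidate sites (hypothetical)
FIXED = [random.uniform(5, 15) for _ in range(N)]    # fixed infrastructure costs
DEMAND = 100.0                                       # total collected volume

def total_cost(genome):
    """Non-linear total cost: infrastructure + material handling + risk."""
    n_open = sum(genome)
    if n_open == 0:
        return float("inf")                          # infeasible: nothing open
    infra = sum(f for f, g in zip(FIXED, genome) if g)
    handling = DEMAND ** 1.3 / n_open                # non-linear economies of scale
    risk = 0.5 * n_open ** 2                         # risk grows with chain size
    return infra + handling + risk

def evolve(pop_size=40, generations=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_cost)
        parents = pop[: pop_size // 2]               # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)
            child = a[:cut] + b[cut:]                # one-point crossover
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return min(pop, key=total_cost)

best = evolve()
print(best, total_cost(best))
```

Because the handling term falls with the number of open sites while infrastructure and risk rise, the optimum is an intermediate chain size, which is what makes the non-linear formulation interesting.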
Neural network fusion and inversion model for NDIR sensor measurement
NASA Astrophysics Data System (ADS)
Cieszczyk, Sławomir; Komada, Paweł
2015-12-01
This article presents the problem of the impact of environmental disturbances on the determination of information from measurements. As an example, an NDIR sensor is studied, which can measure industrial or environmental gases of varying temperature. The problem of changing influence-quantity values appears in many industrial measurements, and developing algorithms that are robust to changing conditions is a key challenge. Additional input variables appear in the resulting mathematical model of the inverse problem. Because of the difficulty of describing the inverse model mathematically, neural networks have been applied: they do not require initial assumptions about the structure of the created model, and they provide correction of sensor non-linearity as well as correction of the influence of the interfering quantity. The analyzed problem requires an additional measurement of the disturbing quantity and its combination with the measurement of the primary quantity. Combining this information with the use of neural networks belongs to the class of sensor-fusion algorithms.
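A minimal sketch of the fusion idea, assuming an invented saturating sensor characteristic disturbed multiplicatively by gas temperature (none of the constants come from the paper): a one-hidden-layer network in NumPy, trained by full-batch gradient descent, learns the inverse model from the fused inputs (raw reading, measured temperature).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical NDIR characteristic: non-linear in concentration c,
# perturbed by the influence quantity T (gas temperature).
def sensor_reading(c, T):
    return (1.0 - np.exp(-0.5 * c)) * (1.0 + 0.02 * (T - 25.0))

# Training data: fuse the raw reading with temperature to recover c.
c = rng.uniform(0.1, 4.0, size=(500, 1))
T = rng.uniform(15.0, 60.0, size=(500, 1))
X = np.hstack([sensor_reading(c, T), (T - 25.0) / 35.0])  # fused, scaled inputs
y = c

# One-hidden-layer network, trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

losses, lr = [], 0.05
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2) - y
    losses.append(float(np.mean(err ** 2)))
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)                      # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print(f"MSE: {losses[0]:.3f} -> {losses[-1]:.4f}")
```

The network simultaneously linearises the sensor response and compensates the temperature disturbance, which is the essence of the fusion-and-inversion approach described above.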
Inverse problems in the modeling of vibrations of flexible beams
NASA Technical Reports Server (NTRS)
Banks, H. T.; Powers, R. K.; Rosen, I. G.
1987-01-01
The formulation and solution of inverse problems for the estimation of parameters which describe damping and other dynamic properties in distributed models for the vibration of flexible structures is considered. Motivated by a slewing beam experiment, the identification of a nonlinear velocity dependent term which models air drag damping in the Euler-Bernoulli equation is investigated. Galerkin techniques are used to generate finite dimensional approximations. Convergence estimates and numerical results are given. The modeling of, and related inverse problems for the dynamics of a high pressure hose line feeding a gas thruster actuator at the tip of a cantilevered beam are then considered. Approximation and convergence are discussed and numerical results involving experimental data are presented.
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.
2005-01-01
Insufficient a priori information during the inversion is the reason why refraction methods often may not produce desired results or even fail. This work also demonstrates that the application of smoothing constraints, typical when solving ill-posed inverse problems, has a dual and contradictory role when applied to the ill-posed inverse problem of refraction travel times. This observation indicates that smoothing constraints may play such a two-fold role when applied to other inverse problems. Other factors that contribute to inverse-refraction-problem nonuniqueness are also considered, including indeterminacy, statistical data-error distribution, numerical error and instability, finite data, and model parameters. © Birkhäuser Verlag, Basel, 2005.
Why does inverse modeling of drainage inventories work?
NASA Astrophysics Data System (ADS)
White, Nicky; Roberts, Gareth
2016-04-01
We describe and apply a linear inverse model which calculates spatial and temporal patterns of uplift rate by minimizing the misfit between inventories of observed and predicted longitudinal river profiles. This approach builds upon a more general, non-linear, optimization model, which suggests that shapes of river profiles are dominantly controlled by upstream advection of kinematic waves of incision produced by spatial and temporal changes in regional uplift rate. We have tested both algorithms by inverting thousands of river profiles from Africa, Eurasia, the Americas, and Australia. For each continent, the drainage network was constructed from a digital elevation model and the fidelity of river profiles extracted from this network was carefully checked using satellite imagery. Spatial and temporal patterns of both uplift rate and cumulative uplift were calibrated using independent geologic and geophysical observations. Inverse modeling of these substantial inventories of river profiles suggests that drainage networks contain coherent signals that record the regional growth of elevation. In the second part of this presentation, we use spectral analysis of river profiles to suggest why drainage networks behave in a coherent, albeit non-linear, fashion. Our analysis implies that large-scale topographic signals injected into landscapes generate spectral slopes that are usually red (i.e. Brownian). At wavelengths shorter than tens of km, spectral slopes whiten which suggests that coherent topographic signals cease to exist at these shorter length scales. Our results suggest that inverse modeling of drainage networks can reveal useful information about landscape growth through space and time.
Chen, Yu; Gao, Kai; Huang, Lianjie; Sabin, Andrew
2016-03-31
Accurate imaging and characterization of fracture zones is crucial for geothermal energy exploration. Aligned fractures within fracture zones behave as anisotropic media for seismic-wave propagation. The anisotropic properties of fracture zones introduce extra difficulties for seismic imaging and waveform inversion. We have recently developed a new anisotropic elastic-waveform inversion method using a modified total-variation regularization scheme and a wave-energy-based preconditioning technique. Our new inversion method uses the parameterization of elasticity constants to describe anisotropic media, and hence it can properly handle arbitrary anisotropy. We apply our new inversion method to 2D-line seismic data acquired at Eleven-Mile Canyon, located in the southern Dixie Valley in Nevada, for geothermal energy exploration. Our inversion results show that anisotropic elastic-waveform inversion has the potential to reconstruct subsurface anisotropic elastic parameters for imaging and characterization of fracture zones.
2.5D complex resistivity modeling and inversion using unstructured grids
NASA Astrophysics Data System (ADS)
Xu, Kaijun; Sun, Jie
2016-04-01
The complex resistivity behaviour of rock and ore has long been recognized. The Cole-Cole model (CCM) is generally used to describe complex resistivity. It has been proved that the electrical anomaly of a geologic body can be quantitatively estimated from CCM parameters such as direct resistivity (ρ0), chargeability (m), time constant (τ) and frequency dependence (c). Thus it is very important to obtain the complex parameters of a geologic body. It is difficult to approximate complex structures and terrain using a traditional rectangular grid. In order to enhance the numerical accuracy and rationality of modeling and inversion, we use an adaptive finite-element algorithm for forward modeling of frequency-domain 2.5D complex resistivity and implement the conjugate gradient algorithm in the inversion of 2.5D complex resistivity. An adaptive finite-element method is applied for solving the 2.5D complex resistivity forward modeling of a horizontal electric dipole source. First, the CCM is introduced into Maxwell's equations to calculate the complex resistivity electromagnetic fields. Next, the pseudo-delta function is used to distribute the electric dipole source. Then the electromagnetic fields can be expressed in terms of the primary fields caused by the layered structure and the secondary fields caused by the anomalous conductivity of inhomogeneities. Finally, we calculate the electromagnetic field response of complex geoelectric structures such as anticlines, synclines and faults. The modeling results show that adaptive finite-element methods can automatically improve mesh generation and simulate complex geoelectric models using unstructured grids. The 2.5D complex resistivity inversion is implemented based on the conjugate gradient algorithm. The conjugate gradient algorithm does not need to form the sensitivity matrix explicitly; it only computes products of the sensitivity matrix, or its transpose, with a vector. In addition, the inversion target zones are
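The CCM referred to above can be written down directly. The snippet below evaluates the complex resistivity ρ(ω) = ρ0[1 − m(1 − 1/(1 + (iωτ)^c))] with illustrative parameter values (not taken from the paper) and checks its low- and high-frequency limits, ρ0 and ρ0(1 − m).

```python
import numpy as np

def cole_cole(omega, rho0, m, tau, c):
    """Complex resistivity of the Cole-Cole model:
       rho(omega) = rho0 * (1 - m * (1 - 1 / (1 + (i*omega*tau)**c)))."""
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

# Illustrative parameter values (not from the paper):
rho0, m, tau, c = 100.0, 0.25, 0.01, 0.5

w = np.logspace(-3, 6, 10)          # angular frequencies
rho = cole_cole(w, rho0, m, tau, c)
print(abs(rho[0]), abs(rho[-1]))    # approaches rho0 at DC, rho0*(1-m) at high f
```

The chargeability m sets the total dispersion, τ the frequency at which it occurs, and c how sharply the transition happens, which is why these four parameters characterise the electrical anomaly of a geologic body.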
NASA Astrophysics Data System (ADS)
Ji, Hongzhu; Chen, Siying; Zhang, Yinchao; Chen, He; Guo, Pan; Chen, Hao
2017-02-01
A calibration method is proposed to invert the extinction coefficient for Fernald and Klett inversion, using the particle backscattering coefficient retrieved from Raman and elastic return signals. The calibration method is analyzed theoretically and experimentally. The inversion accuracy can be improved by removing the dependence on reference altitudes and intervals present in conventional calibration methods, owing to the introduction of a backscattering coefficient of relatively higher accuracy obtained by the Raman-Mie inversion method. The standard deviation of this new calibration method can be reduced by a factor of about 20 compared with that of the conventional calibration methods of Fernald and Klett inversion. In addition, a more stable effective inversion range can be obtained with the new calibration method by removing the dimple artifact at cloud positions.
3D CSEM data inversion using Newton and Halley class methods
NASA Astrophysics Data System (ADS)
Amaya, M.; Hansen, K. R.; Morten, J. P.
2016-05-01
For the first time in 3D controlled-source electromagnetic data inversion, we explore the use of the Newton and the Halley optimization methods, which may show their potential when the cost function has a complex topology. The inversion is formulated as a constrained nonlinear least-squares problem which is solved by iterative optimization. These methods require the derivatives up to second order of the residuals with respect to model parameters. We show how Green's functions determine the high-order derivatives, and develop a diagrammatical representation of the residual derivatives. The Green's functions are efficiently calculated on-the-fly, making use of a finite-difference frequency-domain forward modelling code based on a multi-frontal sparse direct solver. This allows us to build the second-order derivatives of the residuals while keeping the memory cost of the same order as in a Gauss-Newton (GN) scheme. Model updates are computed with a trust-region based conjugate-gradient solver which does not require the computation of a stabilizer. We present inversion results for a synthetic survey and compare the GN, Newton, and super-Halley optimization schemes, and consider two different approaches to set the initial trust-region radius. Our analysis shows that the Newton and super-Halley schemes, using the same regularization configuration, add significant information to the inversion, so that convergence is reached by different paths. In our simple resistivity model examples, the convergence speed of the Newton and the super-Halley schemes is either similar or slightly superior to that of the GN scheme close to the minimum of the cost function. Due to the current noise levels and other measurement inaccuracies in geophysical investigations, this advantageous behaviour is at present of low consequence, but may, with the further improvement of geophysical data acquisition, become an argument for more accurate higher-order methods like those
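For a scalar intuition of why Halley-class updates, which exploit second derivatives, can reach convergence in fewer iterations than Newton-class updates, the toy root-finding comparison below contrasts the two step formulas on an invented function; it is not the CSEM cost function.

```python
def newton(f, df, x, tol=1e-12, max_iter=100):
    # Second-order (quadratic) convergence: step = -f / f'
    for k in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, k
        x -= fx / df(x)
    return x, max_iter

def halley(f, df, d2f, x, tol=1e-12, max_iter=100):
    # Third-order (cubic) convergence; the extra curvature information f''
    # plays the role of the second-order residual derivatives in the abstract.
    for k in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, k
        dfx, d2fx = df(x), d2f(x)
        x -= 2.0 * fx * dfx / (2.0 * dfx ** 2 - fx * d2fx)
    return x, max_iter

f = lambda x: x ** 3 - 2.0       # root is 2**(1/3)
df = lambda x: 3.0 * x ** 2
d2f = lambda x: 6.0 * x

xn, kn = newton(f, df, 10.0)
xh, kh = halley(f, df, d2f, 10.0)
print(kn, kh)                    # Halley needs no more iterations than Newton
```

The trade-off mirrored in the abstract: each Halley iteration is more expensive (it needs second derivatives), but fewer iterations are required, which only pays off when the extra per-step accuracy matters.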
Large scale 3-D modeling by integration of resistivity models and borehole data through inversion
NASA Astrophysics Data System (ADS)
Foged, N.; Marker, P. A.; Christansen, A. V.; Bauer-Gottwein, P.; Jørgensen, F.; Høyer, A.-S.; Auken, E.
2014-02-01
We present an automatic method for parameterization of a 3-D model of the subsurface, integrating lithological information from boreholes with resistivity models through an inverse optimization, with the objective of further detailing of geological models or as direct input to groundwater models. The parameter of interest is the clay fraction, expressed as the relative length of clay units in a depth interval. The clay fraction is obtained from lithological logs, and the clay fraction from the resistivity is obtained by establishing a simple petrophysical relationship, a translator function, between resistivity and the clay fraction. Through inversion we use the lithological data and the resistivity data to determine the optimum spatially distributed translator function. Applying the translator function we get a 3-D clay fraction model, which holds information from the resistivity dataset and the borehole dataset in one variable. Finally, we use k-means clustering to generate a 3-D model of the subsurface structures. We apply the concept to the Norsminde survey in Denmark, integrating approximately 700 boreholes and more than 100 000 resistivity models from an airborne survey in the parameterization of the 3-D model covering 156 km². The final five-cluster 3-D model differentiates between clay materials and different highly resistive materials from information held in the resistivity models and borehole observations, respectively.
Large-scale 3-D modeling by integration of resistivity models and borehole data through inversion
NASA Astrophysics Data System (ADS)
Foged, N.; Marker, P. A.; Christansen, A. V.; Bauer-Gottwein, P.; Jørgensen, F.; Høyer, A.-S.; Auken, E.
2014-11-01
We present an automatic method for parameterization of a 3-D model of the subsurface, integrating lithological information from boreholes with resistivity models through an inverse optimization, with the objective of further detailing of geological models, or as direct input into groundwater models. The parameter of interest is the clay fraction, expressed as the relative length of clay units in a depth interval. The clay fraction is obtained from lithological logs and the clay fraction from the resistivity is obtained by establishing a simple petrophysical relationship, a translator function, between resistivity and the clay fraction. Through inversion we use the lithological data and the resistivity data to determine the optimum spatially distributed translator function. Applying the translator function we get a 3-D clay fraction model, which holds information from the resistivity data set and the borehole data set in one variable. Finally, we use k-means clustering to generate a 3-D model of the subsurface structures. We apply the procedure to the Norsminde survey in Denmark, integrating approximately 700 boreholes and more than 100 000 resistivity models from an airborne survey in the parameterization of the 3-D model covering 156 km². The final five-cluster 3-D model differentiates between clay materials and different high-resistivity materials from information held in the resistivity model and borehole observations, respectively.
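The clustering step of the two records above can be sketched with a minimal k-means implementation; the (log-resistivity, clay-fraction) samples and the three lithology centres below are synthetic stand-ins, not Norsminde data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic (log-resistivity, clay-fraction) samples around three lithologies
centers = np.array([[1.2, 0.80], [2.2, 0.30], [3.0, 0.05]])   # invented values
X = np.vstack([c + 0.08 * rng.standard_normal((200, 2)) for c in centers])

def kmeans(X, k, iters=50):
    # Initialise centroids from random data points, then alternate
    # assignment and centroid-update steps (Lloyd's algorithm).
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(X, k=3)
print(np.sort(centroids[:, 0]))
```

In the papers' workflow each sample would be a model cell carrying both the resistivity-derived and borehole-derived clay fraction, and the cluster labels form the final structural model.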
Full Waveform Inversion Methods for Source and Media Characterization before and after SPE5
NASA Astrophysics Data System (ADS)
Phillips-Alonge, K. E.; Knox, H. A.; Ober, C.; Abbott, R. E.
2015-12-01
The Source Physics Experiment (SPE) was designed to advance our understanding of explosion-source phenomenology and subsequent wave propagation through the development of innovative physics-based models. Ultimately, these models will be used for characterizing explosions, which can occur with a variety of yields and depths of burial, and in complex media. To accomplish this, controlled chemical explosions were conducted in a granite outcrop at the Nevada National Security Site. These explosions were monitored with extensive seismic and infrasound instrumentation in both the near and far field. Utilizing these data, we calculate predictions before the explosions occur and iteratively improve our models after each explosion. Specifically, we use an adjoint-based full waveform inversion code that employs discontinuous Galerkin techniques to predict waveforms at station locations prior to the fifth explosion in the series (SPE5). The full-waveform inversions are performed using a realistic geophysical model based on local 3D tomography and inversions for media properties using previous shot data. The code has capabilities such as unstructured meshes that align with material interfaces, local polynomial refinement, and support for various physics and methods for implicit and explicit time-integration. The inversion results we show here evaluate these different techniques, which allows for model fidelity assessment (acoustic versus elastic versus anelastic, etc.). In addition, the accuracy and efficiency of several time-integration methods can be determined. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Theoretical study on the inverse modeling of deep body temperature measurement.
Huang, Ming; Chen, Wenxi
2012-03-01
We evaluated the theoretical aspects of monitoring the deep body temperature distribution with the inverse modeling method. A two-dimensional model was built based on anatomical structure to simulate the human abdomen. By integrating biophysical and physiological information, the deep body temperature distribution was estimated from cutaneous surface temperature measurements using an inverse quasilinear method. Simulations were conducted with and without the heat effect of blood perfusion in the muscle and skin layers. The results of the simulations showed consistently that the noise characteristics and arrangement of the temperature sensors were the major factors affecting the accuracy of the inverse solution. With temperature sensors of 0.05 °C systematic error and an optimized 16-sensor arrangement, the inverse method could estimate the deep body temperature distribution with an average absolute error of less than 0.20 °C. The results of this theoretical study suggest that it is possible to reconstruct the deep body temperature distribution with the inverse method and that this approach merits further investigation.
The appropriateness of ignorance in the inverse kinetic Ising model
NASA Astrophysics Data System (ADS)
Dunn, Benjamin; Battistin, Claudia
2017-03-01
We develop efficient ways to consider and correct for the effects of hidden units for the paradigmatic case of the inverse kinetic Ising model with fully asymmetric couplings. We identify two sources of error in reconstructing the connectivity among the observed units while ignoring part of the network. One leads to a systematic bias in the inferred parameters, whereas the other involves correlations between the visible and hidden populations and has a magnitude that depends on the coupling strength. We estimate these two terms using a mean field approach and derive self-consistent equations for the couplings accounting for the systematic bias. Through application of these methods on simple networks of varying relative population size and connectivity strength, we assess how and under what conditions the hidden portion can influence inference and to what degree it can be crudely estimated. We find that for weak to moderately coupled systems, the effect of the hidden units is a simple rotation that can be easily corrected for. For strongly coupled systems, the non-systematic term becomes large and can no longer be safely ignored, further highlighting the importance of understanding the average strength of couplings for a given system of interest.
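A naive mean-field reconstruction of fully asymmetric kinetic Ising couplings, J ≈ A⁻¹DC⁻¹ built from time-lagged and equal-time correlations, is a standard weak-coupling estimate (not the authors' hidden-unit-corrected equations) and can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 5, 20000
J = 0.3 / np.sqrt(N) * rng.standard_normal((N, N))   # true asymmetric couplings

# Parallel Glauber dynamics: P(s_i(t+1) = +1) = 1 / (1 + exp(-2 h_i(t)))
s = np.empty((T, N))
s[0] = np.sign(rng.standard_normal(N))
for t in range(T - 1):
    h = J @ s[t]
    s[t + 1] = np.where(rng.random(N) < 1.0 / (1.0 + np.exp(-2.0 * h)), 1.0, -1.0)

# Naive mean-field estimate: D = A J C  =>  J_est = A^{-1} D C^{-1}
m = s.mean(axis=0)
ds = s - m
C = ds.T @ ds / T                          # equal-time covariance
D = ds[1:].T @ ds[:-1] / (T - 1)           # one-step time-lagged covariance
A = np.diag(1.0 - m ** 2)
J_est = np.linalg.solve(A, D) @ np.linalg.inv(C)

corr = np.corrcoef(J.ravel(), J_est.ravel())[0, 1]
print(f"correlation(true, inferred) = {corr:.3f}")
```

Here all units are observed; the abstract's point is precisely that when some units are hidden, this reconstruction acquires a systematic bias plus a coupling-strength-dependent term that must be corrected.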
An optimal constrained linear inverse method for magnetic source imaging
Hughett, P.
1993-09-01
Magnetic source imaging is the reconstruction of the current distribution inside an inaccessible volume from magnetic field measurements made outside the volume. If the unknown current distribution is expressed as a linear combination of elementary current distributions in fixed positions, then the magnetic field measurements are linear in the unknown source amplitudes and both the least square and minimum mean square reconstructions are linear problems. This offers several advantages: The problem is well understood theoretically and there is only a single, global minimum. Efficient and reliable software for numerical linear algebra is readily available. If the sources are localized and statistically uncorrelated, then a map of expected power dissipation is equivalent to the source covariance matrix. Prior geological or physiological knowledge can be used to determine such an expected power map and thus the source covariance matrix. The optimal constrained linear inverse method (OCLIM) derived in this paper uses this prior knowledge to obtain a minimum mean square error estimate of the current distribution. OCLIM can be efficiently computed using the Cholesky decomposition, taking about a second on a workstation-class computer for a problem with 64 sources and 144 detectors. Any source and detector configuration is allowed as long as their positions are fixed a priori. Correlations among source and noise amplitudes are permitted. OCLIM reduces to the optimally weighted pseudoinverse method of Shim and Cho if the source amplitudes are independent and identically distributed and to the minimum-norm least squares estimate in the limit of no measurement noise or no prior knowledge of the source amplitudes. In the general case, OCLIM has better mean square error than either previous method. OCLIM appears well suited to magnetic imaging, since it exploits prior information, provides the minimum reconstruction error, and is inexpensive to compute.
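A minimal sketch of an OCLIM-type minimum mean-square estimate, using the Cholesky factorization mentioned in the abstract; the forward matrix and covariances are invented toy values (6 sources and 12 detectors rather than the paper's 64 and 144).

```python
import numpy as np
from numpy.linalg import cholesky, solve

rng = np.random.default_rng(3)

n_src, n_det = 6, 12                          # toy sizes
A = rng.standard_normal((n_det, n_src))       # forward matrix, fixed positions
C_s = np.diag(rng.uniform(0.5, 2.0, n_src))   # prior source covariance (power map)
C_n = 1e-8 * np.eye(n_det)                    # measurement-noise covariance

x_true = cholesky(C_s) @ rng.standard_normal(n_src)
y = A @ x_true                                # near-noiseless data for the check

# Minimum mean-square estimate: x_hat = C_s A^T (A C_s A^T + C_n)^{-1} y,
# computed via the Cholesky factor of the Gram matrix (two triangular solves).
G = A @ C_s @ A.T + C_n
Lc = cholesky(G)
w = solve(Lc.T, solve(Lc, y))
x_hat = C_s @ A.T @ w

print(np.max(np.abs(x_hat - x_true)))
```

With negligible noise and more detectors than sources the estimate collapses to the least-squares solution, consistent with the limiting cases stated in the abstract; with realistic C_n it shrinks toward the prior.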
A method to calculate tunneling leakage currents in silicon inversion layers
NASA Astrophysics Data System (ADS)
Lujan, Guilherme S.; Sorée, Bart; Magnus, Wim; De Meyer, Kristin
2006-08-01
This paper proposes a quantum mechanical model for the calculation of tunneling leakage currents in a metal-oxide-semiconductor structure. The model incorporates both variational calculus and the transfer matrix method to compute the subband energies and the lifetimes of the inversion layer states. The use of variational calculus simplifies the subband energy calculation due to the analytical form of the wave functions, which offers an attractive perspective towards the calculation of the electron mobility in the channel. The model can be extended to high-k dielectrics with several layers. Good agreement between experimental data and simulation results is obtained for metal gate capacitors.
Inverse hydrological modelling of spatio-temporal rainfall patterns
NASA Astrophysics Data System (ADS)
Grundmann, Jens; Hörning, Sebastian; Bárdossy, András
2016-04-01
Distributed hydrological models are commonly used for simulating the non-linear response of a watershed to rainfall events, addressing different hydrological properties of the landscape. Such models are driven by spatial rainfall patterns for consecutive time steps, which are normally generated from point measurements using spatial interpolation methods. However, such methods fail to reproduce the true spatio-temporal rainfall patterns, especially in data-scarce regions with poorly gauged catchments or for highly dynamic, small-scale rainstorms which are not well recorded by existing monitoring networks. Consequently, uncertainties are associated with the poorly identified spatio-temporal rainfall distribution in distributed rainfall-runoff modelling, since the amount of rainfall received by a catchment as well as the dynamics of the runoff generation of flood waves are underestimated. To address these challenges, a novel methodology for inverse hydrological modelling is proposed using a Markov chain Monte Carlo framework. Potential candidates for spatio-temporal rainfall patterns are generated and selected according to their ability to best reproduce the observed surface runoff at the catchment outlet for a given transfer function. The methodology combines the concept of random mixing of spatial random fields with a grid-based, spatially distributed rainfall-runoff model. The conditional target rainfall field is obtained as a linear combination of unconditional spatial random fields. The corresponding weights of the linear combination are selected such that the spatial variability of the rainfall amounts as well as the actually observed rainfall values are reproduced. The functionality of the methodology is demonstrated on a synthetic example, in which the known spatio-temporal distribution of rainfall is reproduced for a given number of point observations of rainfall and the integral catchment response at the catchment outlet for a synthetic catchment
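The linear-combination step can be sketched in one dimension: weights of several unconditional Gaussian random fields are chosen so that the combination honours point observations. Full Random Mixing additionally constrains the weight norm to preserve the covariance structure; here only the minimum-norm exact-fit solution is shown, and the observation values are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

nx = 40                            # 1-D "field" for brevity; real case is 2-D + time
K = 25                             # number of unconditional fields
obs_idx = np.array([5, 14, 27, 33])
obs_val = np.array([1.2, -0.4, 0.8, 0.1])   # invented point rainfall anomalies

# Unconditional correlated Gaussian fields (squared-exponential covariance)
x = np.arange(nx)
cov = np.exp(-((x[:, None] - x[None, :]) / 6.0) ** 2)
L = np.linalg.cholesky(cov + 1e-10 * np.eye(nx))
fields = L @ rng.standard_normal((nx, K))   # columns: unconditional fields

# Weights of the linear combination chosen to honour the observations
# (underdetermined system -> lstsq returns the minimum-norm exact fit).
B = fields[obs_idx, :]                      # each field evaluated at obs points
w, *_ = np.linalg.lstsq(B, obs_val, rcond=None)
field_c = fields @ w                        # conditional field

print(field_c[obs_idx])                     # matches obs_val
```

In the inverse-modelling framework above, candidate fields of this kind are then fed through the rainfall-runoff model, and the MCMC machinery retains those whose simulated hydrograph matches the observed outlet runoff.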
Group-theoretic models of the inversion process in bacterial genomes.
Egri-Nagy, Attila; Gebhardt, Volker; Tanaka, Mark M; Francis, Andrew R
2014-07-01
The variation in genome arrangements among bacterial taxa is largely due to the process of inversion. Recent studies indicate that not all inversions are equally probable, suggesting, for instance, that shorter inversions are more frequent than longer ones, and that those that move the terminus of replication are less probable than those that do not. Current methods for establishing the inversion distance between two bacterial genomes are unable to incorporate such information. In this paper we suggest a group-theoretic framework that in principle can take these constraints into account. In particular, we show that by lifting the problem from circular permutations to the affine symmetric group, the inversion distance can be found in polynomial time for a model in which inversions are restricted to acting on two regions. This requires the proof of new results in group theory, and suggests a vein of new combinatorial problems concerning permutation groups on which group theorists will be needed to collaborate with biologists. We apply the new method to inferring distances and phylogenies for published Yersinia pestis data.
Sediment Acoustics: Wideband Model, Reflection Loss and Ambient Noise Inversion
2009-09-30
between 1 and 10 kHz. The model is also capable of explaining the apparent discrepancy between the data and the Kramers-Kronig relationship (K-K...of in-situ measurements of sediment sound speed and attenuation from SAX99, SAX04 and SW06 with the commonly used Kramers-Kronig equation (black...inverse quality factor. The data is overlaid by the Kramers-Kronig estimate of sound speed from measured attenuation, by both the commonly used equation
Three-dimensional electromagnetic modeling and inversion on massively parallel computers
Newman, G.A.; Alumbaugh, D.L.
1996-03-01
This report has demonstrated techniques that can be used to construct solutions to the 3-D electromagnetic inverse problem using full wave equation modeling. To this point great progress has been made in developing an inverse solution using the method of conjugate gradients, which employs a 3-D finite difference solver to construct model sensitivities and predicted data. The forward modeling code has been developed to incorporate absorbing boundary conditions for high frequency solutions (radar), as well as complex electrical properties, including electrical conductivity, dielectric permittivity and magnetic permeability. In addition, both forward and inverse codes have been ported to a massively parallel computer architecture, which allows for more realistic solutions than can be achieved with serial machines. While the inversion code has been demonstrated on field data collected at the Richmond field site, techniques for appraising the quality of the reconstructions still need to be developed. Here it is suggested that rather than employing direct matrix inversion to construct the model covariance matrix, which would be impossible because of the size of the problem, one can linearize about the 3-D model achieved in the inverse and use Monte Carlo simulations to construct it. Using these appraisal and construction tools, it is now necessary to demonstrate 3-D inversion for a variety of EM data sets that span the frequency range from induction sounding to radar: below 100 kHz to 100 MHz. Appraised 3-D images of the earth's electrical properties can provide researchers opportunities to infer the flow paths, flow rates and perhaps the chemistry of fluids in geologic media. It also offers a means to study the frequency-dependent behavior of the properties in situ. This is of significant relevance to the Department of Energy, being paramount to the characterization and monitoring of environmental waste sites and to oil and gas exploration.
Prediction and assimilation of surf-zone processes using a Bayesian network: Part II: Inverse models
Plant, Nathaniel G.; Holland, K. Todd
2011-01-01
A Bayesian network model has been developed to simulate a relatively simple problem of wave propagation in the surf zone (detailed in Part I). Here, we demonstrate that this Bayesian model can provide both inverse modeling and data-assimilation solutions for predicting offshore wave heights and depth estimates given limited wave-height and depth information from an onshore location. The inverse method is extended to allow data assimilation using observational inputs that are not compatible with deterministic solutions of the problem. These inputs include sand bar positions (instead of bathymetry) and estimates of the intensity of wave breaking (instead of wave-height observations). Our results indicate that wave breaking information is essential to reduce prediction errors. In many practical situations, this information could be provided from a shore-based observer or from remote-sensing systems. We show that various combinations of the assimilated inputs significantly reduce the uncertainty in the estimates of water depths and wave heights in the model domain. Application of the Bayesian network model to new field data demonstrated significant predictive skill (R² = 0.7) for the inverse estimate of a month-long time series of offshore wave heights. The Bayesian inverse results include uncertainty estimates that were shown to be most accurate when given uncertainty in the inputs (e.g., depth and tuning parameters). Furthermore, the inverse modeling was extended to directly estimate tuning parameters associated with the underlying wave-process model. The inverse estimates of the model parameters not only showed an offshore wave height dependence consistent with results of previous studies but the uncertainty estimates of the tuning parameters also explain previously reported variations in the model parameters.
Inverse Modelling to Obtain Head Movement Controller Signal
NASA Technical Reports Server (NTRS)
Kim, W. S.; Lee, S. H.; Hannaford, B.; Stark, L.
1984-01-01
Experimentally obtained dynamics of time-optimal, horizontal head rotations have previously been simulated by a sixth-order, nonlinear model driven by rectangular control signals. Electromyography (EMG) recordings have aspects which differ in detail from the theoretical rectangular pulsed control signal. Control signals for time-optimal as well as sub-optimal horizontal head rotations were obtained by means of an inverse modelling procedure. With experimentally measured dynamical data serving as the input, this procedure inverts the model to produce the neurological control signals driving muscles and plant. The relationships between these controller signals and EMG records should contribute to the understanding of the neurological control of movements.
NASA Astrophysics Data System (ADS)
Ren, Tao; Modest, Michael F.; Fateev, Alexander; Clausen, Sønnik
2015-01-01
In this study, we present an inverse calculation model based on the Levenberg-Marquardt optimization method to reconstruct temperature and species concentration from measured line-of-sight spectral transmissivity data for homogeneous gaseous media. The high temperature gas property database HITEMP 2010 (Rothman et al. (2010) [1]), which contains line-by-line (LBL) information for several combustion gas species, such as CO2 and H2O, was used to predict gas spectral transmissivities. The model was validated by retrieving temperatures and species concentrations from experimental CO2 and H2O transmissivity measurements. Optimal wavenumber ranges for CO2 and H2O transmissivity measured across a wide range of temperatures and concentrations were determined according to the performance of inverse calculations. Results indicate that the inverse radiation model shows good feasibility for measurements of temperature and gas concentration.
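A hand-rolled Levenberg-Marquardt loop illustrates the retrieval idea on a toy Beer-Lambert-type transmissivity model with an invented temperature-dependent line shape (not HITEMP line-by-line data): temperature and mole fraction are recovered from noiseless synthetic transmissivities.

```python
import numpy as np

# Toy transmissivity model: tau(eta) = exp(-x * L * k(eta, T)),
# with an invented line shape standing in for the LBL database.
eta = np.linspace(2200.0, 2400.0, 80)        # wavenumbers [1/cm], illustrative
L_path = 10.0                                # optical path length [cm]

def k_abs(eta, T):
    width = 40.0 * np.sqrt(T / 300.0)        # hypothetical thermal broadening
    return (300.0 / T) * np.exp(-((eta - 2300.0) / width) ** 2)

def model(params):
    T, x = params                            # temperature [K], mole fraction
    return np.exp(-x * L_path * k_abs(eta, T))

truth = np.array([1200.0, 0.15])
data = model(truth)                          # noiseless synthetic measurement

# Levenberg-Marquardt with a central finite-difference Jacobian.
p = np.array([900.0, 0.30])                  # initial guess
lam = 1e-3
for _ in range(100):
    r = model(p) - data
    J = np.empty((eta.size, 2))
    for j in range(2):
        dp = np.zeros(2); dp[j] = 1e-6 * max(1.0, abs(p[j]))
        J[:, j] = (model(p + dp) - model(p - dp)) / (2.0 * dp[j])
    H = J.T @ J
    step = np.linalg.solve(H + lam * np.diag(np.diag(H)), -J.T @ r)
    if np.sum((model(p + step) - data) ** 2) < np.sum(r ** 2):
        p, lam = p + step, 0.5 * lam         # accept: move toward Gauss-Newton
    else:
        lam *= 10.0                          # reject: increase damping

print(p)                                     # recovered (T, x)
```

The adaptive damping parameter is what makes LM robust far from the solution while still converging quickly near it, which is why it suits transmissivity retrievals whose residual surfaces can be poorly scaled across wavenumber ranges.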
Inversion of submesoscale patterns from a high-resolution Solomon Sea model: Feasibility assessment
NASA Astrophysics Data System (ADS)
Gaultier, Lucile; Djath, Bughsin'; Verron, Jacques; Brankart, Jean-Michel; Brasseur, Pierre; Melet, Angelique
2014-07-01
A high-resolution realistic numerical model of the Solomon Sea, which exhibits a high level of variability at mesoscales and submesoscales, is used to explore new avenues for data assimilation. Image data assimilation represents a powerful methodology to integrate information from high-resolution observations such as satellite sea surface temperature or chlorophyll, or high-resolution altimetric sea surface height that will be observed in the forthcoming SWOT mission. The present study investigates the feasibility and accuracy of the inversion of the dynamical submesoscale information contained in high-resolution images of sea surface temperature (SST) or salinity (SSS) to improve the estimation of oceanic surface currents. The inversion method is tested in the context of twin experiments, with SST and SSS data provided by a model of the Solomon Sea. For that purpose, synthetic tracer images are built by binarizing the norm of the gradient of SST, SSS or spiciness. The binarized tracer images are compared to the dynamical image which is derived from the Finite-Size Lyapunov Exponents. The adjustment of the dynamical image to the tracer image provides the optimal correction to be applied on the surface velocity field. The method is evaluated by comparing the result of the inversion to the reference model solution. The feasibility of the inversion of various images (SST, SSS, both SST and SSS or spiciness) is explored on two small areas of the Solomon Sea. We show that errors in the surface velocity field can be substantially reduced through the inversion of tracer images.
Advanced model of eddy-current NDE inverse problem with sparse grid algorithm
NASA Astrophysics Data System (ADS)
Zhou, Liming; Sabbagh, Harold A.; Sabbagh, Elias H.; Murphy, R. Kim; Bernacchi, William
2017-02-01
In model-based inverse problems, some unknown parameters need to be estimated. These parameters characterize not only the physical properties of cracks but also the position of the probes (such as lift-off and angles) in the calibration. Accounting for the probe position in the inverse problem improves the accuracy of the inverse result. As the number of parameters in the inverse problem increases, the computational burden of the traditional full-grid method grows exponentially. We therefore used the sparse grid algorithm introduced by Sergey A. Smolyak, which yields a powerful interpolation method requiring significantly fewer support nodes than conventional interpolation on a full grid. In this work, we combined the sparse grid toolbox TASMANIAN, produced by Oak Ridge National Laboratory, with the professional eddy-current NDE software VIC-3D® to solve a specific inverse problem. An advanced model based on our previous one is used to estimate the length and depth of the crack, the lift-off, and two angles describing the position of the probes. To represent the calibration process, pseudorandom noise is included in the model and its statistical behavior is discussed.
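The node-count advantage of the Smolyak construction over a full tensor grid can be illustrated directly. This is a generic sketch with nested Clenshaw-Curtis points, not the TASMANIAN toolbox itself:

```python
import itertools
import numpy as np

def cc_points(i):
    # Nested Clenshaw-Curtis abscissas on [-1, 1]: 1 point at level 0,
    # 2^i + 1 points at level i >= 1.
    if i == 0:
        return np.array([0.0])
    n = 2**i + 1
    return np.cos(np.pi * np.arange(n) / (n - 1))

def sparse_grid(dim, level):
    # Smolyak construction: union of tensor grids whose level
    # multi-indices sum to at most `level` (rounding merges nested points).
    pts = set()
    for idx in itertools.product(range(level + 1), repeat=dim):
        if sum(idx) <= level:
            for p in itertools.product(*(cc_points(i) for i in idx)):
                pts.add(tuple(round(x, 12) for x in p))
    return pts

dim, level = 5, 3                    # e.g. five crack/probe parameters
n_sparse = len(sparse_grid(dim, level))
n_full = (2**level + 1) ** dim       # full tensor grid at the same resolution
```

For five parameters at level 3 the sparse grid needs only a few hundred support nodes, while the full tensor grid at the same one-dimensional resolution needs 9^5 = 59049.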
Han, Jijun; Yang, Deqiang; Sun, Houjun; Xin, Sherman Xuegang
2017-01-01
The inverse method is inherently suitable for calculating the distribution of source current density associated with an irregularly structured electromagnetic target field. However, the present form of the inverse method cannot account for complex field-tissue interactions. A novel hybrid inverse/finite-difference time domain (FDTD) method is proposed that can calculate the complex field-tissue interactions for the inverse design of the source current density associated with an irregularly structured electromagnetic target field. A Huygens' equivalent surface is established as a bridge between the inverse and FDTD methods. The distribution of the radiofrequency (RF) magnetic field on the Huygens' equivalent surface is obtained using the FDTD method, which accounts for the complex field-tissue interactions within the human body model. The magnetic field obtained on the Huygens' equivalent surface is then regarded as the new target, and the current density on the designated source surface is derived using the inverse method. The homogeneity of the target magnetic field and the specific energy absorption rate are calculated to verify the proposed method.
Inverse modeling of biomass smoke emissions using the TOMS AI
NASA Astrophysics Data System (ADS)
Zhang, S. Y.; Penner, J. E.; Torres, O.
2003-12-01
Results of inverse modeling of biomass smoke emissions using the TOMS AI and a three-dimensional transport model are presented. The IMPACT model with DAO meteorology data for 1997 is utilized to obtain aerosol spatial and temporal distributions. Two absorbing aerosol types are considered: biomass smoke and mineral dust. First, a radiative transfer model is applied to generate the modeled AI. Then a Bayesian inverse technique is applied to optimize the difference between the modeled AI and the EP TOMS AI for the same period by adjusting monthly a priori biomass smoke emissions, while the dust emissions are held fixed. The modeled AI with a posteriori emissions is generally in better agreement with the EP TOMS AI. The annual global a posteriori source increases by about 13% for the year 1997 (6.31 Tg/yr BC) in the base scenario, with larger adjustments of monthly regional emissions. Five sensitivity scenarios are carried out, addressing the a priori uncertainties, the height of the smoke layer, the cloud screening criteria of the daily EP TOMS AI, the adjustment of emissions in a lumped region outside of the major biomass burning regions, and the covariances between observations. Results suggest that a posteriori annual global emissions in the sensitivity scenarios are within 15% of those of the base scenario. However, the difference in annual a posteriori emissions between the sensitivity scenarios and the base scenario can be as large as 50% on a regional scale. We are also applying the inverse technique to the year 2000 to compare with biomass emissions deduced from an analysis based on burned areas.
Efficient inversion of three-dimensional finite element models of volcano deformation
NASA Astrophysics Data System (ADS)
Charco, M.; Galán del Sastre, P.
2014-03-01
Numerical techniques such as the finite element method allow for the inclusion of features such as topography and/or mechanical heterogeneities in the interpretation of volcanic deformation. However, models based on these numerical techniques are usually unsuitable for non-linear estimations of source parameters based on explorative optimization schemes, because they require a new numerical computation for every evaluation of the misfit function. We present a procedure for finite element (FE) models that can be combined with explorative inversion schemes. The methodology is based on including in the model formulation a body force term, representing an infinitesimal source, that is responsible for pressure (volume) changes in the medium. This provides significant savings in both the time required for mesh generation and the actual computational time of the numerical approach. Furthermore, we develop an inversion algorithm to estimate the parameters that characterize the changes in location and pressure (volume) of deformation sources. Together these provide FE inversions in a single step, avoiding remeshing, reassembly of the linear system of algebraic equations that defines the numerical approach, and automatic mesh generation. After providing the theoretical basis for the model, the numerical approach, and the inversion algorithm, we test the methodology on a synthetic example in a stratovolcano. Our results suggest that the FE inversion methodology is suitable for efficient quantitative interpretations of volcano deformation.
Inverse modeling of geochemical and mechanical compaction in sedimentary basins
NASA Astrophysics Data System (ADS)
Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto
2015-04-01
We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. The processes we consider are mechanical compaction of the host rock and geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty of the model input parameters on the model output and (ii) the application of an inverse modeling technique to field-scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantifying a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build-up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbon withdrawal, and (e) formation of ore deposits. The main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system's dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena. We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model
Haynes, Mark; Verweij, Sacha A. M.; Moghaddam, Mahta; Carson, Paul L.
2014-01-01
A self-contained source characterization method for commercial ultrasound probes in transmission acoustic inverse scattering is derived and experimentally tested. The method is based on modified scattered field volume integral equations that are linked to the source-scattering transducer model. The source-scattering parameters are estimated via pair-wise transducer measurements and the nonlinear inversion of an acoustic propagation model that is derived. This combination creates a formal link between the transducer characterization and the inverse scattering algorithm. The method is tested with two commercial ultrasound probes in a transmission geometry including provisions for estimating the probe locations and aligning a robotic rotator. The transducer characterization results show that the nonlinear inversion fit the measured data well. The transducer calibration and inverse scattering algorithm are tested on simple targets. Initial images show that the recovered contrasts are physically consistent with expected values. PMID:24569251
A regularizing iterative ensemble Kalman method for PDE-constrained inverse problems
NASA Astrophysics Data System (ADS)
Iglesias, Marco A.
2016-02-01
We introduce a derivative-free computational framework for approximating solutions to nonlinear PDE-constrained inverse problems. The general aim is to merge ideas from iterative regularization with ensemble Kalman methods from Bayesian inference to develop a derivative-free, stable method that is easy to implement in applications where the PDE (forward) model is only accessible as a black box (e.g. with commercial software). The proposed regularizing ensemble Kalman method can be derived as an approximation of the regularizing Levenberg-Marquardt (LM) scheme (Hanke 1997 Inverse Problems 13 79-95) in which the derivative of the forward operator and its adjoint are replaced with empirical covariances from an ensemble of elements from the admissible space of solutions. The resulting ensemble method consists of an update formula that is applied to each ensemble member and that has a regularization parameter selected in a similar fashion to the one in the LM scheme. Moreover, an early termination of the scheme is proposed according to a discrepancy-principle-type criterion. The proposed method can also be viewed as a regularizing version of standard Kalman approaches, which are often unstable unless ad hoc fixes, such as covariance localization, are implemented. The aim of this paper is to provide a detailed numerical investigation of the regularizing and convergence properties of the proposed regularizing ensemble Kalman scheme; the proof of these properties is an open problem. By means of numerical experiments, we investigate the conditions under which the proposed method inherits the regularizing properties of the LM scheme of (Hanke 1997 Inverse Problems 13 79-95) and is thus stable and suitable for application in problems where the computation of the Fréchet derivative is not computationally feasible. More concretely, we study the effect of ensemble size, number of measurements, selection of initial ensemble and tunable parameters on the performance of the method.
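A minimal sketch of the ensemble update, assuming a linear "black-box" forward model and a fixed regularization parameter (the paper selects it adaptively, in the spirit of the LM scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward model standing in for a black-box PDE solver
A = rng.normal(size=(10, 4))
u_true = np.array([1.0, -2.0, 0.5, 3.0])
y = A @ u_true                               # noise-free synthetic data
Gamma = 0.01 * np.eye(10)                    # observation-noise covariance

def eki_step(U, alpha):
    # Derivatives of the forward map are replaced by empirical
    # cross-covariances of the ensemble (the derivative-free idea).
    W = U @ A.T                              # forward evaluation of each member
    du, dw = U - U.mean(axis=0), W - W.mean(axis=0)
    Cuw = du.T @ dw / (len(U) - 1)
    Cww = dw.T @ dw / (len(U) - 1)
    S = Cww + alpha * Gamma                  # regularized innovation covariance
    return U + (y - W) @ np.linalg.solve(S, Cuw.T)

U = rng.normal(size=(50, 4))                 # initial ensemble of candidates
for _ in range(15):
    U = eki_step(U, alpha=1.0)
u_est = U.mean(axis=0)
```

Each member is nudged toward the data through the same Kalman-type gain, so no adjoint or Fréchet derivative of the forward model is ever needed.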
NASA Technical Reports Server (NTRS)
Gherlone, Marco; Cerracchio, Priscilla; Mattone, Massimiliano; Di Sciuva, Marco; Tessler, Alexander
2011-01-01
A robust and efficient computational method for reconstructing the three-dimensional displacement field of truss, beam, and frame structures, using measured surface-strain data, is presented. Known as shape sensing, this inverse problem has important implications for real-time actuation and control of smart structures and for monitoring of structural integrity. The present formulation, based on the inverse Finite Element Method (iFEM), uses a least-squares variational principle involving strain measures of Timoshenko theory for stretching, torsion, bending, and transverse shear. Two inverse-frame finite elements are derived using interdependent interpolations whose interior degrees-of-freedom are condensed out at the element level. In addition, relationships between the order of kinematic-element interpolations and the number of required strain gauges are established. As an example problem, a thin-walled, circular cross-section cantilevered beam subjected to harmonic excitations in the presence of structural damping is modeled using iFEM, where a high-fidelity MSC/NASTRAN shell finite element model is used to simulate strain-gauge values and to provide reference displacements. Examples of low- and high-frequency dynamic motion are analyzed, and the solution accuracy is examined with respect to various levels of discretization and the number of strain gauges.
The application of inverse methods to spatially-distributed acoustic sources
NASA Astrophysics Data System (ADS)
Holland, K. R.; Nelson, P. A.
2013-10-01
Acoustic inverse methods, based on the output of an array of microphones, can be readily applied to the characterisation of acoustic sources that can be adequately modelled as a number of discrete monopoles. However, there are many situations, particularly in the fields of vibroacoustics and aeroacoustics, where the sources are distributed continuously in space over a finite area (or volume). This paper is concerned with the practical problem of applying inverse methods to such distributed source regions via the process of spatial sampling. The problem is first tackled using computer simulations of the errors associated with the application of spatial sampling to a wide range of source distributions. It is found that the spatial sampling criterion for minimising the errors in the radiated far-field reconstructed from the discretised source distributions is strongly dependent on acoustic wavelength but is only weakly dependent on the details of the source field itself. The results of the computer simulations are verified experimentally through the application of the inverse method to the sound field radiated by a ducted fan. The un-baffled fan source with the associated flow field is modelled as a set of equivalent monopole sources positioned on the baffled duct exit along with a matrix of complementary non-flow Green functions. Successful application of the spatial sampling criterion involves careful frequency-dependent selection of source spacing, and results in the accurate reconstruction of the radiated sound field. The conditioning of the Green function matrix to be inverted is also discussed, and it is shown that the spatial sampling criterion may be relaxed if conditioning techniques, such as regularisation, are applied to this matrix prior to inversion.
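The basic inversion step can be sketched as follows, with free-field monopole Green functions standing in for the baffled, non-flow Green functions of the paper, and Tikhonov regularisation conditioning the matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 2 * np.pi * 1000 / 343.0                 # wavenumber at 1 kHz in air

def green(src, mic):
    # Free-field monopole Green function (an assumption for illustration;
    # the paper uses baffled, non-flow Green functions at the duct exit).
    r = np.linalg.norm(mic - src)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

# Spatially sampled source plane (5 x 5 monopoles) and a microphone arc
sources = np.array([[x, y, 0.0] for x in np.linspace(-0.1, 0.1, 5)
                                for y in np.linspace(-0.1, 0.1, 5)])
mics = np.array([[np.sin(t), 0.0, np.cos(t)]
                 for t in np.linspace(-1.0, 1.0, 40)])

G = np.array([[green(s, m) for s in sources] for m in mics])
q_true = rng.normal(size=25) + 1j * rng.normal(size=25)   # source strengths
p = G @ q_true                               # simulated microphone pressures

# Tikhonov-regularised inversion conditions the Green-function matrix
lam = 1e-6 * np.linalg.norm(G, 2) ** 2
q_est = np.linalg.solve(G.conj().T @ G + lam * np.eye(25), G.conj().T @ p)
```

Even when the individual source strengths are poorly resolved, the regularised solution reproduces the radiated field accurately, which mirrors the paper's finding that regularisation relaxes the spatial sampling criterion.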
GARCH modelling of covariance in dynamical estimation of inverse solutions
NASA Astrophysics Data System (ADS)
Galka, Andreas; Yamashita, Okito; Ozaki, Tohru
2004-12-01
The problem of estimating unobserved states of spatially extended dynamical systems poses an inverse problem, which can be solved approximately by a recently developed variant of Kalman filtering; in order to provide the model of the dynamics with more flexibility with respect to space and time, we suggest combining the concept of GARCH modelling of covariance, well known in econometrics, with Kalman filtering. We formulate this algorithm for spatiotemporal systems governed by stochastic diffusion equations and demonstrate its feasibility by presenting a numerical simulation designed to imitate the situation of the generation of electroencephalographic recordings by the human cortex.
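The combination can be sketched for a scalar state, with a GARCH(1,1) recursion supplying the time-varying process-noise variance inside an otherwise standard Kalman filter (the parameters are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a scalar AR(1) state whose process-noise variance follows a
# GARCH(1,1) recursion (hypothetical parameters).
a, r = 0.9, 0.5                     # state transition, observation-noise var
omega, alpha, beta = 0.05, 0.2, 0.7
T = 500
x, q = np.zeros(T), np.zeros(T)
w_prev, q_prev = 0.0, omega / (1 - alpha - beta)
for t in range(1, T):
    q[t] = omega + alpha * w_prev**2 + beta * q_prev
    w_prev, q_prev = rng.normal(scale=np.sqrt(q[t])), q[t]
    x[t] = a * x[t - 1] + w_prev
y = x + rng.normal(scale=np.sqrt(r), size=T)   # noisy observations

# Kalman filter whose process-noise variance is updated by the same
# GARCH recursion, driven by the filtered innovations.
xf, P, qf = 0.0, 1.0, omega / (1 - alpha - beta)
est = np.zeros(T)
for t in range(1, T):
    xp, Pp = a * xf, a**2 * P + qf             # predict
    K = Pp / (Pp + r)                          # Kalman gain
    innov = y[t] - xp
    xf, P = xp + K * innov, (1 - K) * Pp       # update
    qf = omega + alpha * (K * innov)**2 + beta * qf   # GARCH variance update
    est[t] = xf
```

The filter's process-noise variance thus adapts to the recent innovation history rather than staying fixed, which is the extra spatiotemporal flexibility the abstract refers to (here reduced to the temporal, scalar case).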
Shinnery oak bidirectional reflectance properties and canopy model inversion
NASA Technical Reports Server (NTRS)
Deering, Donald W.; Eck, Thomas F.; Grier, Toby
1992-01-01
Field measurements are presented, together with the results of a 3D canopy-model inversion for sand shinnery oak community in western Texas. The spectral bidirectional radiance measurements were in three spectral channels encompassing both the complete land surface and sky hemispheres. The changes in canopy reflectance that occur with variations in solar zenith angle and view direction for two seasons of the year were evaluated, and the 3D radiation-interaction model was inverted to estimate the oak leaf area index and canopy density from the reflectance data.
Developing seasonal ammonia emission estimates with an inverse modeling technique.
Gilliland, A B; Dennis, R L; Roselle, S J; Pierce, T E; Bender, L E
2001-11-21
Significant uncertainty exists in the magnitude and variability of ammonia (NH3) emissions, which are needed for air quality modeling of aerosols and deposition of nitrogen compounds. Approximately 85% of NH3 emissions are estimated to come from agricultural nonpoint sources. We suspect a strong seasonal pattern in NH3 emissions; however, current NH3 emission inventories lack intra-annual variability. Annually averaged NH3 emissions could significantly affect model-predicted concentrations and wet and dry deposition of nitrogen-containing compounds. We apply a Kalman filter inverse modeling technique to deduce monthly NH3 emissions for the eastern U.S. Final products of this research will include monthly emissions estimates for each season. Results for January and June 1990 are currently available and are presented here. The U.S. Environmental Protection Agency (USEPA) Community Multiscale Air Quality (CMAQ) model and ammonium (NH4+) wet concentration data from the National Atmospheric Deposition Program (NADP) network are used. The inverse modeling technique estimates the emission adjustments that provide optimal modeled results with respect to wet NH4+ concentrations, observational data error, and emission uncertainty. Our results suggest that annual average NH3 emissions estimates should be decreased by 64% for January 1990 and increased by 25% for June 1990. These results illustrate the strong seasonal differences that are anticipated for NH3 emissions.
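The essence of the technique can be sketched as a scalar Kalman-filter update of an emission estimate, assuming the modeled wet concentration responds linearly to the emission rate; the numbers are illustrative, not the CMAQ/NADP setup:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed linear sensitivity of modeled wet concentration to emissions
h = 0.8
e_prior, var_e = 100.0, 40.0**2      # a priori emissions and uncertainty
e_true = 125.0
var_obs = 2.0**2                     # observational data error (variance)
obs = h * e_true + rng.normal(scale=2.0, size=12)   # 12 monthly observations

# Sequential Kalman-filter update of the emission estimate: each
# observation pulls the estimate by a gain that weighs emission
# uncertainty against observational error.
e, v = e_prior, var_e
for c in obs:
    K = v * h / (h**2 * v + var_obs)
    e = e + K * (c - h * e)          # adjust emissions toward the data
    v = (1 - K * h) * v              # posterior uncertainty shrinks
```

The final `e` plays the role of the a posteriori emission estimate; the same machinery, applied per month and per region with a full transport model, yields the seasonal adjustments reported in the abstract.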
Inverse-model-based cuffless blood pressure estimation using a single photoplethysmography sensor.
Suzuki, Arata
2015-07-01
This paper proposes an inverse-model-based cuffless method for estimating blood pressure using a single photoplethysmography sensor. The proposed method, which is based on the relationship between blood pressure and the features of pulse waves, employs inverse estimation with the blood pressure as the explanatory variable. Blood pressure can thus be estimated with high accuracy even when the pulse wave features are scattered, because the method uses the dynamic signal-to-noise ratio of the Taguchi method. To verify its effectiveness, we employed the proposed method to measure systolic blood pressure and confirmed that its estimation accuracy is higher than that of similar methods.
Inverse modeling of unsaturated flow using clusters of soil texture and pedotransfer functions
NASA Astrophysics Data System (ADS)
Zhang, Yonggen; Schaap, Marcel G.; Guadagnini, Alberto; Neuman, Shlomo P.
2016-10-01
Characterization of heterogeneous soil hydraulic parameters of deep vadose zones is often difficult and expensive, making it necessary to rely on other sources of information. Pedotransfer functions (PTFs) based on soil texture data constitute a simple alternative to inverse hydraulic parameter estimation, but their accuracy is often modest. Inverse modeling entails a compromise between detailed description of subsurface heterogeneity and the need to restrict the number of parameters. We propose two methods of parameterizing vadose zone hydraulic properties using a combination of k-means clustering of kriged soil texture data, PTFs, and model inversion. One approach entails homogeneous and the other heterogeneous clusters. Clusters may include subdomains of the computational grid that need not be contiguous in space. The first approach homogenizes within-cluster variability into initial hydraulic parameter estimates that are subsequently optimized by inversion. The second approach maintains heterogeneity through multiplication of each spatially varying initial hydraulic parameter by a scale factor, estimated a posteriori through inversion. This allows preserving heterogeneity without introducing a large number of adjustable parameters. We use each approach to simulate a 95 day infiltration experiment in unsaturated layered sediments at a semiarid site near Phoenix, Arizona, over an area of 50 × 50 m2 down to a depth of 14.5 m. Results show that both clustering approaches improve simulated moisture contents considerably in comparison to those based solely on PTF estimates. Our calibrated models are validated against data from a subsequent 295 day infiltration experiment at the site.
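The clustering step can be sketched with a minimal k-means on synthetic texture fractions; the farthest-point seeding and the log-linear PTF below are simplifying assumptions for illustration, not the authors' actual procedure:

```python
import numpy as np

def kmeans(X, k, n_iter=100):
    # Minimal Lloyd's algorithm with deterministic farthest-point seeding
    # (a sketch; production codes use library implementations, k-means++,
    # and multiple restarts).
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.linalg.norm(X[:, None, :] - np.asarray(centers)[None, :, :],
                           axis=2).min(axis=1)
        centers.append(X[d.argmax()])          # farthest point from chosen set
    centers = np.asarray(centers)
    for _ in range(n_iter):
        labels = np.linalg.norm(X[:, None, :] - centers[None, :, :],
                                axis=2).argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Synthetic (sand, clay) fractions drawn from three textural groups
rng = np.random.default_rng(4)
X = np.vstack([rng.normal([0.8, 0.1], 0.03, (50, 2)),
               rng.normal([0.4, 0.3], 0.03, (50, 2)),
               rng.normal([0.2, 0.6], 0.03, (50, 2))])
labels, centers = kmeans(X, 3)

# A hypothetical log-linear PTF assigns each cluster an initial K_s that
# the subsequent model inversion would refine (homogeneous-cluster variant).
Ks = 10.0 ** (centers[:, 0] * 2 - 1)
```

Each cluster then carries one adjustable parameter set (or one scale factor in the heterogeneous variant), keeping the inverse problem low-dimensional.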
Using informative priors in facies inversion: The case of C-ISR method
NASA Astrophysics Data System (ADS)
Valakas, G.; Modis, K.
2016-08-01
Inverse problems involving the characterization of hydraulic properties of groundwater flow systems by conditioning on observations of the state variables are mathematically ill-posed because they have multiple solutions and are sensitive to small changes in the data. In the framework of McMC methods for nonlinear optimization and under an iterative spatial resampling transition kernel, we present an algorithm for narrowing the prior and thus producing improved proposal realizations. To achieve this goal, we cosimulate the facies distribution conditionally to facies observations and normal scores transformed hydrologic response measurements, assuming a linear coregionalization model. The approach works by creating an importance sampling effect that steers the process to selected areas of the prior. The effectiveness of our approach is demonstrated by an example application on a synthetic underdetermined inverse problem in aquifer characterization.
Analysis of Inverse Modelling Procedures For The Estimation of Parameters Controlling Macropore Flow
NASA Astrophysics Data System (ADS)
Roulier, S.; Jarvis, N.
Because they are objective, reproducible, and unambiguous, inverse modelling procedures are increasingly used to identify water flow and solute transport parameters. This study focused on the development and testing of inverse methods to estimate transfer parameters in simulation models which account for rapid non-equilibrium flow in soil macropores. The dual-porosity/dual-permeability model of water flow and solute transport MACRO was linked with the inverse modelling package SUFI. The Bayesian approach followed by SUFI is stable and converging, and is not affected by the usual issues of initial values and local minima. A theoretical study was carried out using the combined tool SUFI/MACRO to assess data requirements for robust parameter estimation in macropore flow models. Generated "dummy" data were used for this purpose, representing transient-state leaching experiments for tracers and pesticides in small soil columns (20 cm height). General issues related to inverse modelling, such as internal correlation and sensitivity, were investigated. Attention was also focused on the significance of experimental and model errors, the degree of macropore flow in the system, and the availability of resident and flux concentrations. The study showed reliable results, especially in the case of strong macropore flow, but both resident and flux concentrations were needed. Errors (up to 30% for the pesticide concentrations) did not affect the robustness of the tool. SUFI linked to MACRO thus appeared to be well suited for global optimisation of the system parameters in soils affected by macropore flow.
NASA Astrophysics Data System (ADS)
Wang, M. C.; Niu, X. F.; Chen, S. B.; Guo, P. J.; Yang, Q.; Wang, Z. J.
2014-03-01
Chlorophyll content, the most important pigment for photosynthesis, is a key parameter of vegetation growth. The continuous spectral characteristics of ground objects can be captured by hyperspectral remotely sensed data. In this study, based on a coniferous forest radiative transfer model, chlorophyll contents were inverted using hyperspectral CHRIS data over the coniferous forest cover of the Changbai Mountain area. In addition, the sensitivity of the LIBERTY model was analyzed. The experimental results confirmed that the simulated reflectance for different chlorophyll contents coincided with the field measurements, and that hyperspectral vegetation indices applied to the quantitative inversion of chlorophyll contents were feasible and accurate. This study presents a sound method of chlorophyll inversion for coniferous forest, improves inversion precision, and is of significance for coniferous forest monitoring.
Parallel Infrastructure Modeling and Inversion Module for E4D
2014-10-09
Electrical resistivity tomography (ERT) is a method of imaging the electrical conductivity of the subsurface. Electrical conductivity is a useful metric for understanding the subsurface because it is governed by the geomechanical and geochemical properties that drive subsurface systems. ERT works by injecting current into the subsurface across a pair of electrodes and measuring the corresponding electrical potential response across another pair of electrodes. Many such measurements are strategically taken across an array of electrodes to produce an ERT data set. These data are then processed through a computationally demanding process known as inversion to produce an image of the subsurface conductivity structure that gave rise to the measurements. Data can be inverted to provide 2D images, 3D images, or, in the case of time-lapse 3D imaging, 4D images. ERT is generally not well suited for environments with buried electrically conductive infrastructure such as pipes, tanks, or well casings, because these features tend to dominate and degrade ERT images. This reduces or eliminates the utility of ERT imaging where it would otherwise be highly useful, for example, for imaging fluid migration from leaking pipes, imaging soil contamination beneath leaking subsurface tanks, and monitoring contaminant migration in locations with a dense network of metal-cased monitoring wells. The location and dimension of buried metallic infrastructure are often known. If so, the effects of the infrastructure can be explicitly modeled within the ERT imaging algorithm and thereby removed from the corresponding ERT image. However, there are a number of obstacles limiting this application. 1) Metallic infrastructure cannot be accurately modeled with standard codes because of the large contrast in conductivity between the metal and the host material. 2) Modeling infrastructure in true dimension requires the computational mesh to be highly refined near the metal inclusions, which increases
Goal Directed Model Inversion: A Study of Dynamic Behavior
NASA Technical Reports Server (NTRS)
Colombano, Silvano P.; Compton, Michael; Raghavan, Bharathi; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
Goal Directed Model Inversion (GDMI) is an algorithm designed to generalize supervised learning to the case where target outputs are not available to the learning system. The output of the learning system becomes the input to some external device or transformation, and only the output of this device or transformation can be compared to a desired target. The fundamental driving mechanism of GDMI is to learn from success. Given that a wrong outcome is achieved, one notes that the action that produced that outcome "would have been right if the outcome had been the desired one." The algorithm then proceeds as follows: (1) store the action that produced the wrong outcome as a "target"; (2) redefine the wrong outcome as a desired goal; (3) submit the new desired goal to the system; (4) compare the new action with the target action and modify the system using a suitable credit-assignment algorithm (backpropagation in our example); (5) resubmit the original goal. Prior publications by our group in this area focused on demonstrating empirical results based on the inverse kinematic problem for a simulated robotic arm. In this paper we apply the inversion process to much simpler analytic functions in order to elucidate the dynamic behavior of the system and to determine the sensitivity of the learning process to various parameters. This understanding will be necessary for the acceptance of GDMI as a practical tool.
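The five steps can be sketched numerically with linear stand-ins for both the external transformation and the inverse model; the exploration noise on the action is an added assumption to avoid a degenerate start, and a single linear unit trained by gradient descent replaces the paper's network and backpropagation:

```python
import numpy as np

rng = np.random.default_rng(5)

f = lambda a: 2.0 * a + 1.0          # external transformation (hypothetical)
w, b, lr = 0.0, 0.0, 0.02            # inverse model g(y) = w*y + b

for _ in range(3000):
    goal = rng.uniform(-3.0, 3.0)    # a desired outcome is submitted
    # Current response to the goal, plus exploration noise (our addition)
    action = w * goal + b + rng.normal(scale=0.5)
    outcome = f(action)              # usually not the desired goal
    # Learn from success: `action` would have been correct had `outcome`
    # been the goal, so the pair (outcome, action) is a valid training
    # example for the inverse model (steps 1-4); goals keep being
    # resubmitted on the next pass (step 5).
    err = (w * outcome + b) - action
    w -= lr * err * outcome
    b -= lr * err
```

Because every pair (outcome, action) lies exactly on the true inverse relation a = (y - 1)/2, the unit converges to w = 0.5, b = -0.5, i.e. g approximates f^{-1} without ever being shown a correct target.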
Reconstruction of multiple gastric electrical wave fronts using potential-based inverse methods.
Kim, J H K; Pullan, A J; Cheng, L K
2012-08-21
One approach for non-invasively characterizing gastric electrical activity, commonly used in the field of electrocardiography, involves solving an inverse problem whereby electrical potentials on the stomach surface are directly reconstructed from dense potential measurements on the skin surface. To investigate this problem, an anatomically realistic torso model and an electrical stomach model were used to simulate potentials on stomach and skin surfaces arising from normal gastric electrical activity. The effectiveness of the Greensite-Tikhonov or the Tikhonov inverse methods were compared under the presence of 10% Gaussian noise with either 84 or 204 body surface electrodes. The stability and accuracy of the Greensite-Tikhonov method were further investigated by introducing varying levels of Gaussian signal noise or by increasing or decreasing the size of the stomach by 10%. Results showed that the reconstructed solutions were able to represent the presence of propagating multiple wave fronts and the Greensite-Tikhonov method with 204 electrodes performed best (correlation coefficients of activation time: 90%; pacemaker localization error: 3 cm). The Greensite-Tikhonov method was stable with Gaussian noise levels up to 20% and 10% change in stomach size. The use of 204 rather than 84 body surface electrodes improved the performance; however, for all investigated cases, the Greensite-Tikhonov method outperformed the Tikhonov method.
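The Tikhonov branch of the comparison can be sketched with a smoothing kernel standing in for the torso transfer matrix (an assumption; the paper's matrix comes from an anatomically realistic torso model):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical smoothing transfer matrix mapping stomach-surface potentials
# to body-surface potentials.
s = np.linspace(0.0, 1.0, 30)          # stomach-surface nodes
t = np.linspace(0.0, 1.0, 84)          # body-surface electrodes
A = np.exp(-((t[:, None] - s[None, :]) ** 2) / 0.1**2)

x_true = np.sin(3 * np.pi * s)         # propagating wave-front pattern
b = A @ x_true
b_noisy = b + 0.1 * np.std(b) * rng.normal(size=t.size)   # ~10% noise

def tikhonov(A, b, lam):
    # Zeroth-order Tikhonov: minimize ||A x - b||^2 + lam * ||x||^2
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

x_naive = np.linalg.lstsq(A, b_noisy, rcond=None)[0]       # unregularized
x_reg = tikhonov(A, b_noisy, lam=1e-3 * np.linalg.norm(A, 2) ** 2)
```

The unregularized solution is destroyed by noise amplification through the small singular values of the smoothing matrix, while the Tikhonov solution remains stable, which is why regularized inverses are used throughout studies of this kind.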
A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty
Zhang, Kai; Wang, Zengfei; Zhang, Liming; Yao, Jun; Yan, Xia
2015-01-01
In this paper, we investigate the application of a new method, the Finite Difference and Stochastic Gradient (Hybrid) method, to history matching in reservoir models. History matching is a process for solving an inverse problem by calibrating reservoir models to the dynamic behaviour of the reservoir, in which an objective function is formulated based on a Bayesian approach for optimization. The goal of history matching is to identify the minimum value of an objective function that expresses the misfit between the predicted and measured data of a reservoir. To address the optimization problem, we present a novel application of a combination of the stochastic gradient and finite difference methods for solving inverse problems. The optimization is constrained by a linear equation that contains the reservoir parameters. We reformulate the reservoir model's parameters and dynamic data by operating on the objective function, whose approximate gradient can guarantee convergence. At each iteration step, we identify the relatively 'important' elements of the gradient by comparing the magnitudes of the components of the stochastic gradient; these elements are then replaced by values from the finite difference method, which forms a new gradient, and we iterate with this new gradient. Through the application of the Hybrid method, we optimize the objective function efficiently and accurately. We present a number of numerical simulations in this paper showing that the method is accurate and computationally efficient. PMID:26252392
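The hybrid idea can be sketched on a stand-in quadratic misfit, with an SPSA-style estimate playing the role of the stochastic gradient (the objective, constants, and names are assumptions; a real history match would call a reservoir simulator inside `objective`):

```python
import numpy as np

rng = np.random.default_rng(2)

def objective(x):                        # stand-in misfit; a real history match
    return float(np.sum((x - 3.0) ** 2)) # would run a reservoir simulation here

def stochastic_gradient(func, x, c=1e-2):
    """Simultaneous-perturbation estimate: cheap (2 evaluations) but noisy."""
    delta = rng.choice([-1.0, 1.0], size=x.size)
    return (func(x + c * delta) - func(x - c * delta)) / (2.0 * c) * delta

def fd_component(func, x, i, h=1e-5):
    """Accurate central finite difference for a single component."""
    e = np.zeros_like(x)
    e[i] = h
    return (func(x + e) - func(x - e)) / (2.0 * h)

x, lr, k = np.zeros(20), 0.02, 5
for _ in range(1000):
    g = stochastic_gradient(objective, x)
    # replace the k largest-magnitude components of the stochastic gradient
    # with accurate finite-difference values, forming the hybrid gradient
    for i in np.argsort(np.abs(g))[-k:]:
        g[i] = fd_component(objective, x, i)
    x -= lr * g

print(bool(np.allclose(x, 3.0, atol=1e-2)))
```

The trade-off mirrors the paper's: the stochastic estimate costs a constant number of simulations regardless of dimension, while the finite differences are spent only on the components judged important.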
Risk evaluation of uranium mining: A geochemical inverse modelling approach
NASA Astrophysics Data System (ADS)
Rillard, J.; Zuddas, P.; Scislewski, A.
2011-12-01
It is well known that uranium extraction operations can increase risks linked to radiation exposure. The toxicity of uranium and associated heavy metals is the main environmental concern regarding exploitation and processing of U-ore. In areas where U mining is planned, a careful assessment of toxic and radioactive element concentrations is recommended before the start of mining activities. A background evaluation of harmful elements is important in order to prevent and/or quantify future water contamination resulting from possible migration of toxic metals coming from ore and waste water interaction. Controlled leaching experiments were carried out to investigate processes of ore and waste (leached ore) degradation, using samples from the uranium exploitation site located in Caetité-Bahia, Brazil. In experiments in which the reaction of waste with water was tested, we found that the water had low pH and high levels of sulphates and aluminium. On the other hand, in experiments in which ore was tested, the water had a chemical composition comparable to natural water found in the region of Caetité. On the basis of our experiments, we suggest that waste resulting from sulphuric acid treatment can induce acidification and salinization of surface and ground water. For this reason, proper storage of waste is imperative. As a tool to evaluate the risks, a geochemical inverse modelling approach was developed to estimate the water-mineral interaction involving the presence of toxic elements. We used a method earlier described by Scislewski and Zuddas 2010 (Geochim. Cosmochim. Acta 74, 6996-7007) in which the reactive surface area of mineral dissolution can be estimated. We found that the reactive surface area of rock parent minerals is not constant over time but varies by several orders of magnitude within only two months of interaction. We propose that parent mineral heterogeneity and, particularly, neogenic phase formation may explain the observed variation of the
NASA Technical Reports Server (NTRS)
Gutmann, Ethan D.; Small, Eric E.
2007-01-01
Soil hydraulic properties (SHPs) regulate the movement of water in the soil. This in turn plays an important role in the water and energy cycles at the land surface. At present, SHPs are commonly defined by a simple pedotransfer function from soil texture class, but SHPs vary more within a texture class than between classes. To examine the impact of using soil texture class to predict SHPs, we run the Noah land surface model for a wide variety of measured SHPs. We find that across a range of vegetation cover (5-80% cover) and climates (250-900 mm mean annual precipitation), soil texture class only explains 5% of the variance expected from the real distribution of SHPs. We then show that modifying SHPs can drastically improve model performance. We compare two methods of estimating SHPs: (1) the inverse method, and (2) soil texture class. Compared to texture class, inverse modeling reduces errors between measured and modeled latent heat flux from 88 to 28 W/m2. Additionally, we find that with increasing vegetation cover the importance of SHPs decreases, and that the van Genuchten m parameter becomes less important while the saturated conductivity becomes more important.
NASA Astrophysics Data System (ADS)
Kalscheuer, Thomas; Pedersen, Laust B.; Siripunvaraporn, Weerachai
2008-11-01
Electromagnetic surface measurements with the radiomagnetotelluric (RMT) method in the frequency range between 10 and 300 kHz are typically interpreted in the quasi-static approximation, that is, assuming displacement currents are negligible. In this paper, the dielectric effect of displacement currents on RMT responses over resistive subsurface models is studied with a 2-D forward and inverse scheme that can operate both in the quasi-static approximation and including displacement currents. Forward computations of simple models exemplify how responses that allow for displacement currents deviate from responses computed in the quasi-static approximation. The differences become most obvious for highly resistive subsurface models of about 3000 Ωm and more and at high frequencies. For such cases, the apparent resistivities and phases of the transverse magnetic (TM) and transverse electric (TE) modes are significantly smaller than in the quasi-static approximation. Along profiles traversing 2-D subsurface models, sign reversals in the real part of the vertical magnetic transfer function (VMT) are often more pronounced than in the quasi-static approximation. On both sides of such sign reversals, the responses computed including displacement currents are larger than typical measurement errors. The 2-D inversion of synthetic data computed including displacement currents demonstrates that serious misinterpretations in the form of artefacts in inverse models can be made if displacement currents are neglected during the inversion. Hence, the inclusion of the dielectric effect is a crucial improvement over existing quasi-static 2-D inverse schemes. Synthetic data from a 2-D model with constant dielectric permittivity and a conductive block buried in a highly resistive layer, which in turn is underlain by a conductive layer, are inverted. In the quasi-static inverse model, the depth to the conductive structures is overestimated, artefactual resistors appear on both sides of the
Ray, J.; Lee, J.; Yadav, V.; ...
2014-08-20
We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
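A greatly simplified non-negative greedy pursuit in the spirit of the scheme (plain orthogonal matching pursuit with coefficient pruning rather than the authors' StOMP extension; sizes and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def nn_greedy(A, b, n_iter=25):
    """Greedy sparse fit with non-negativity imposed by pruning (illustrative;
    the paper's method extends StOMP, which selects many atoms per stage)."""
    support, x = [], np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = b - A @ x
        c = A.T @ r
        j = int(np.argmax(c))
        if c[j] <= 1e-12:                      # no atom is positively correlated
            break
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        while np.any(sol < 0) and len(support) > 1:
            support.pop(int(np.argmin(sol)))   # prune the most negative atom
            sol, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = np.maximum(sol, 0.0)      # enforce non-negativity
    return x

# synthetic check: 5 non-negative spikes, 40 noiseless measurements
m, n = 40, 100
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.uniform(1.0, 2.0, size=5)
x_hat = nn_greedy(A, A @ x_true)
print(bool(x_hat.min() >= 0.0))
```

As in the paper, non-negativity is obtained by manipulating the support of a linear fit rather than by a nonlinear transform such as log-parameterizing the field.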
NASA Astrophysics Data System (ADS)
Gallovic, F.; Ampuero, J. P.
2015-12-01
Slip inversion methods differ in how the rupture model is parameterized and which regularizations or constraints are applied. However, there is still no consensus about which of the slip inversion methods are preferable and how reliable the inferred source models are due to the non-uniqueness or ill-posedness of the inverse problem. The 'Source Inversion Validation' (SIV) initiative aims to characterize and understand the performance of slip inversion methods (http://equake-rc.info/SIV/). Up to now, four benchmark test cases have been proposed, some of which were even conducted as blind tests. The next step is performing quantitative comparisons of the inverted rupture models. To this aim, we introduce a new comparison technique based on a Singular Value Decomposition (SVD) of the design matrix of the continuum inverse problem. We separate the range and null sub-spaces (representing resolved and unresolved features, respectively) by a selected 'cut-off' singular value, and compare different inverted models to the target (exact) model after projecting them on the range sub-space. This procedure effectively quantifies the ability of an inversion result to reproduce the resolvable features of the source. We find that even with perfect Green's functions the quality of an inverted model deteriorates with decreasing cut-off singular value due to applied regularization (smoothing and positivity constraints). Applying this approach to the inversion results of the SIV2a benchmark from various authors shows that the inferred source images are very similar to the target model when we consider a cut-off at ~1/10 of the largest singular value. Although the truncated model captures the overall rupture propagation, the final slip is biased significantly, showing distinct peaks below the stations lying above the rupture. We also show synthetic experiments to assess the role of station coverage, crustal velocity model, etc. on the conditioning of the slip inversion.
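The range/null-space comparison can be sketched with a toy design matrix; the cut-off at ~1/10 of the largest singular value follows the text, everything else (sizes, models) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# toy design matrix of an underdetermined linear source inversion d = G m
G = rng.normal(size=(30, 50))
m_target = rng.normal(size=50)                  # stand-in "exact" model
m_inverted = m_target + rng.normal(size=50)     # stand-in inversion result

U, s, Vt = np.linalg.svd(G, full_matrices=False)
keep = s >= 0.1 * s[0]             # cut-off at ~1/10 of the largest singular value
P = Vt[keep].T @ Vt[keep]          # projector onto the resolved (range) subspace

# compare the models only on the features the data can actually resolve
err_resolved = np.linalg.norm(P @ (m_inverted - m_target))
err_full = np.linalg.norm(m_inverted - m_target)
print(bool(err_resolved <= err_full + 1e-12))
```

Because P discards the null-space components, the projected misfit isolates genuine inversion error from features the data could never constrain, which is the point of the proposed comparison technique.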
Fu, Y B; Chui, C K; Teo, C L
2013-04-01
Biological soft tissue is highly inhomogeneous, with scattered stress-strain curves. Assuming that the instantaneous strain at a specific stress varies according to a normal distribution, a nondeterministic approach is proposed to model the scattered stress-strain relationship of tissue samples under compression. Material parameters of the liver tissue, modeled using the Mooney-Rivlin hyperelastic constitutive equation, were represented by a statistical function with normal distribution. The mean and standard deviation of the material parameters were determined using the inverse finite element method and the inverse mean-value first-order second-moment (IMVFOSM) method, respectively. The method was verified using computer simulation based on the direct Monte Carlo (MC) method. The simulated cumulative distribution function (CDF) corresponded well with that of the experimental stress-strain data. The resulting nondeterministic material parameters were able to model the stress-strain curves from other, separately conducted liver tissue compression tests. Stress-strain data from these new tests could be predicted using the nondeterministic material parameters.
Effects of geometric head model perturbations on the EEG forward and inverse problems.
von Ellenrieder, Nicolás; Muravchik, Carlos H; Nehorai, Arye
2006-03-01
We study the effect of geometric head model perturbations on the electroencephalography (EEG) forward and inverse problems. Small magnitude perturbations of the shape of the head could represent uncertainties in the head model due to errors on images or techniques used to construct the model. They could also represent small scale details of the shape of the surfaces not described in a deterministic model, such as the sulci and fissures of the cortical layer. We perform a first-order perturbation analysis, using a meshless method for computing the sensitivity of the solution of the forward problem to the geometry of the head model. The effect on the forward problem solution is treated as noise in the EEG measurements and the Cramér-Rao bound is computed to quantify the effect on the inverse problem performance. Our results show that, for a dipolar source, the effect of the perturbations on the inverse problem performance is under the level of the uncertainties due to the spontaneous brain activity. Thus, the results suggest that an extremely detailed model of the head may be unnecessary when solving the EEG inverse problem.
NASA Astrophysics Data System (ADS)
Li, Guo-Yang; Zheng, Yang; Liu, Yanlin; Destrade, Michel; Cao, Yanping
2016-11-01
A body force concentrated at a point and moving at a high speed can induce shear-wave Mach cones in dusty-plasma crystals or soft materials, as observed experimentally and named the elastic Cherenkov effect (ECE). The ECE in soft materials forms the basis of the supersonic shear imaging (SSI) technique, an ultrasound-based dynamic elastography method applied in clinics in recent years. Previous studies on the ECE in soft materials have focused on isotropic material models. In this paper, we investigate the existence and key features of the ECE in anisotropic soft media, by using both theoretical analysis and finite element (FE) simulations, and we apply the results to the non-invasive and non-destructive characterization of biological soft tissues. We also theoretically study the characteristics of the shear waves induced in a deformed hyperelastic anisotropic soft material by a source moving with high speed, considering that contact between the ultrasound probe and the soft tissue may lead to finite deformation. On the basis of our theoretical analysis and numerical simulations, we propose an inverse approach to infer both the anisotropic and hyperelastic parameters of incompressible transversely isotropic (TI) soft materials. Finally, we investigate the properties of the solutions to the inverse problem by deriving the condition numbers in analytical form and performing numerical experiments. In Part II of the paper, both ex vivo and in vivo experiments are conducted to demonstrate the applicability of the inverse method in practical use.
Moissenet, Florent; Chèze, Laurence; Dumas, Raphaël
2012-06-01
Inverse dynamics combined with a constrained static optimization analysis has often been proposed to solve the muscular redundancy problem. Typically, the optimization problem consists of a cost function to be minimized and some equality and inequality constraints to be fulfilled. Penalty-based and Lagrange multipliers methods are common optimization methods for handling the equality constraints. More recently, the pseudo-inverse method has been introduced in the field of biomechanics. The purpose of this paper is to evaluate the ability and efficiency of this new method to solve the muscular redundancy problem, by comparing its musculo-tendon force predictions and its cost-effectiveness against common optimization methods. Since algorithm efficiency and the fulfillment of the equality constraints depend strongly on the optimization method, a two-phase procedure is proposed in order to identify and compare the complexity of the cost function, the number of iterations needed to find a solution, and the computational time of the penalty-based, Lagrange multipliers and pseudo-inverse methods. Using a 2D knee musculo-skeletal model in an isometric context, the study of the cost function isovalue curves shows that the solution space is 2D with the penalty-based method, 3D with the Lagrange multipliers method and 1D with the pseudo-inverse method. The minimal cost function area (defined as the area corresponding to 5% over the minimal cost) obtained for the pseudo-inverse method is very limited and lies along the solution space line, whereas the minimal cost function areas obtained for the other methods are larger or more complex. Moreover, when using a 3D lower limb musculo-skeletal model during a gait cycle simulation, the pseudo-inverse method requires the lowest number of iterations, while the Lagrange multipliers and pseudo-inverse methods have almost the same computational time. The pseudo-inverse method, by providing a better suited cost function and an
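The contrast between the pseudo-inverse and Lagrange multipliers treatments of the equality constraint can be sketched on a hypothetical one-joint, three-muscle isometric problem (moment arms and torque are made up, and the inequality constraints on muscle force are omitted for brevity):

```python
import numpy as np

# toy version of the equality constraint: one joint torque t produced by three
# muscles, M f = t, with hypothetical moment arms (not the paper's knee model)
M = np.array([[0.03, 0.05, 0.02]])   # moment arms (m)
t = np.array([10.0])                 # required joint torque (N m)
n = M.shape[1]

# pseudo-inverse method: minimum-norm muscle forces satisfying M f = t exactly
f_pinv = np.linalg.pinv(M) @ t

# Lagrange multipliers: KKT system for min ||f||^2 subject to M f = t
K = np.block([[2.0 * np.eye(n), M.T], [M, np.zeros((1, 1))]])
f_lagrange = np.linalg.solve(K, np.concatenate([np.zeros(n), t]))[:n]

print(bool(np.allclose(f_pinv, f_lagrange)))   # same minimum-norm solution
print(bool(np.allclose(M @ f_pinv, t)))        # constraint fulfilled exactly
```

Both routes reach the same minimizer here; the practical difference stressed in the paper is dimensional: the pseudo-inverse restricts the search to the constraint manifold (1D solution space) instead of enlarging the problem with multipliers.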
Xie, G.; Li, J.; Majer, E.; Zuo, D.
1998-07-01
This paper describes a new 3D parallel GILD electromagnetic (EM) modeling and nonlinear inversion algorithm. The algorithm consists of: (a) a new magnetic integral equation instead of the electric integral equation to solve the electromagnetic forward modeling and inverse problem; (b) a collocation finite element method for solving the magnetic integral and a Galerkin finite element method for the magnetic differential equations; (c) a nonlinear regularizing optimization method to make the inversion stable and of high resolution; and (d) a new parallel 3D modeling and inversion using a global integral and local differential domain decomposition technique (GILD). The new 3D nonlinear electromagnetic inversion has been tested with synthetic data and field data. The authors obtained very good imaging for the synthetic data and reasonable subsurface EM imaging for the field data. The parallel algorithm has a high parallel efficiency of over 90% and can serve as a parallel solver for elliptic, parabolic, and hyperbolic modeling and inversion. The parallel GILD algorithm can be extended to develop high-resolution, large-scale seismic and hydrology modeling and inversion on massively parallel computers.
Inverse magnetic catalysis in the linear sigma model
NASA Astrophysics Data System (ADS)
Ayala, A.; Loewe, M.; Zamora, R.
2016-05-01
We compute the critical temperature for the chiral transition in the background of a magnetic field in the linear sigma model, including the quark contribution and the thermo-magnetic effects induced on the coupling constants at the one-loop level. For the analysis, we go beyond the mean field approximation by taking one-loop thermo-magnetic corrections to the couplings as well as plasma screening effects for the boson masses, expressed through the ring diagrams. We find inverse magnetic catalysis, i.e., a decrease of the critical chiral temperature as a function of the intensity of the magnetic field, which seems to be in agreement with recent results from the lattice community.
Fast full waveform inversion with source encoding and second-order optimization methods
NASA Astrophysics Data System (ADS)
Castellanos, Clara; Métivier, Ludovic; Operto, Stéphane; Brossier, Romain; Virieux, Jean
2015-02-01
Full waveform inversion (FWI) of 3-D data sets has recently been possible thanks to the development of high performance computing. However, FWI remains a computationally intensive task when high frequencies are injected in the inversion or more complex wave physics (viscoelastic) is accounted for. The highest computational cost results from the numerical solution of the wave equation for each seismic source. To reduce the computational burden, one well-known technique is to employ a random linear combination of the sources, rather than using each source independently. This technique, known as source encoding, has been shown to successfully reduce the computational cost when applied to real data. Up to now, the inversion has normally been carried out using gradient descent algorithms. With the idea of achieving a fast and robust frequency-domain FWI, we assess the performance of the random source encoding method when it is interfaced with second-order optimization methods (quasi-Newton l-BFGS, truncated Newton). Because of the additional seismic modelings required to compute the Newton descent direction, it is not clear beforehand if truncated Newton methods can indeed further reduce the computational cost compared to gradient algorithms. We design precise stopping criteria of iterations to fairly assess the computational cost and the speed-up provided by the source encoding method for each optimization method. We perform experiments on synthetic and real data sets. In both cases, we confirm that combining source encoding with second-order optimization methods reduces the computational cost compared to the case where source encoding is interfaced with gradient descent algorithms. For the synthetic data set, inspired from the geology of the Gulf of Mexico, we show that the quasi-Newton l-BFGS algorithm requires the lowest computational cost. For the real data set application on the Valhall data, we show that the truncated Newton methods provide the most robust direction of descent.
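A toy linear stand-in illustrates why source encoding cuts cost: one encoded "supershot" modeling yields an unbiased estimate of the gradient that would otherwise require one modeling per source (the operator, sizes, and scalar model parameter are all assumptions, not a wave-equation solver):

```python
import numpy as np

rng = np.random.default_rng(5)

# toy linear stand-in for seismic modeling: one "simulation" per source column
F = rng.normal(size=(60, 30))        # modeling operator (illustrative)
S = rng.normal(size=(30, 8))         # 8 individual sources
d_obs = F @ S                        # observed data for true parameter value 1.0
m = 1.7                              # current scalar model parameter

def grad_full():
    """Misfit gradient accumulated source by source (8 modelings)."""
    P = F @ S                        # in practice: one wave simulation per source
    return float(np.sum(P * (m * P - d_obs)))

def grad_encoded():
    """Gradient from a single random +/-1 encoded supershot (1 modeling)."""
    w = rng.choice([-1.0, 1.0], size=8)
    u = F @ (S @ w)                  # one simulation of the combined source
    return float(u @ (m * u - d_obs @ w))

# the encoded gradient is an unbiased, 8x-cheaper estimate of the full one
est = np.mean([grad_encoded() for _ in range(20000)])
print(bool(abs(est - grad_full()) / abs(grad_full()) < 0.05))
```

The encoding introduces cross-talk noise between sources, which is why the paper's question (whether the noisier direction still pays off inside quasi-Newton or truncated Newton iterations) is not trivial.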
Stochastic optimization algorithm for inverse modeling of air pollution
NASA Astrophysics Data System (ADS)
Yeo, Kyongmin; Hwang, Youngdeok; Liu, Xiao; Kalagnanam, Jayant
2016-11-01
A stochastic optimization algorithm to estimate a smooth source function from a limited number of observations is proposed in the context of air pollution, where the source-receptor relation is given by an advection-diffusion equation. First, a smooth source function is approximated by a set of Gaussian kernels on a rectangular mesh system. Then, the generalized polynomial chaos (gPC) expansion is used to represent the model uncertainty due to the choice of the mesh system. It is shown that the convolution of the gPC basis and the Gaussian kernel provides hierarchical basis functions for a spectral function estimation. The spectral inverse model is formulated as a stochastic optimization problem. We propose a regularization strategy based on the hierarchical nature of the basis polynomials. It is shown that the spectral inverse model is capable of providing a good estimate of the source function even when the number of unknown parameters (m) is much larger than the number of data (n), m/n > 50.
Inverse modeling of CO surface sources using the MOPITT data
NASA Astrophysics Data System (ADS)
Pétron, G.; Granier, C.; Khattatov, B.; Lamarque, J.-F.; Yudin, V.; Gille, J.
2003-04-01
Carbon monoxide CO is a key component of the troposphere. It is the principal sink of hydroxyl radicals OH in the free troposphere (the CO global mean lifetime is 2 months), and thus it indirectly controls the lifetime of many other species, such as methane CH_4. In the presence of nitrogen oxides, NO_x (>10-15 pptv), and sunlight, CO is a precursor of tropospheric ozone O_3. The processes leading to the emission of CO are fairly well established. CO is a byproduct of fossil fuel use and incomplete biomass combustion. The incomplete oxidation of hydrocarbons, both natural and anthropogenic, also produces substantial amounts of CO. Uncertainties attached to CO global sources are still high and, as a result, the comparison of model results and observations can show large discrepancies. These discrepancies are used to optimize the monthly sources of CO over large regions. This approach is referred to as inverse modeling. We will present results of a Bayesian inversion, obtained with our 3D global tropospheric Chemistry Transport Model (MOZART) combined with CMDL surface observations and MOPITT satellite data.
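The Bayesian inversion step can be sketched for a linear source-receptor relationship; the regions, sensitivities, and error covariances below are invented for illustration and bear no relation to the actual MOZART/MOPITT setup:

```python
import numpy as np

rng = np.random.default_rng(6)

# toy linear Bayesian source inversion: y = H s + noise, with s the monthly
# CO source strengths for 4 large regions and H a transport sensitivity
# matrix; every number here is made up for illustration
n_src, n_obs = 4, 25
H = rng.uniform(0.1, 1.0, size=(n_obs, n_src))
s_true = np.array([100.0, 250.0, 80.0, 160.0])
y = H @ s_true + rng.normal(0.0, 2.0, size=n_obs)

s_prior = np.array([120.0, 200.0, 100.0, 150.0])   # a priori source estimates
B = np.eye(n_src) * 50.0**2                        # prior error covariance
R = np.eye(n_obs) * 2.0**2                         # observation error covariance

# Bayesian (minimum-variance) update of the sources toward the observations
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)       # gain matrix
s_post = s_prior + K @ (y - H @ s_prior)

print(bool(np.linalg.norm(s_post - s_true) < np.linalg.norm(s_prior - s_true)))
```

The model-observation discrepancies (y - H s_prior) drive the correction, exactly as the discrepancies described in the abstract are used to optimize the monthly regional sources.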
NASA Astrophysics Data System (ADS)
Alkharji, Mohammed N.
Most fracture characterization methods provide a general description of the fracture parameters as part of the reservoir parameters; the fracture interaction and geometry within the reservoir are given less attention. T-Matrix and Linear Slip effective medium fracture models are implemented to invert the elastic tensor for the parameters and geometries of the fractures within the reservoir. The fracture inverse problem has an ill-posed, overdetermined, underconstrained, rank-deficient system of equations. Least-squares inverse methods are used to solve the problem. A good starting initial model for the parameters is a key factor in the reliability of the inversion. Most methods assume that the starting parameters are close to the solution to avoid inaccurate local minimum solutions. Prior knowledge of the fracture parameters and their geometry is not available. We develop a hybrid, enumerative and Gauss-Newton, method that estimates the fracture parameters and geometry from the elastic tensor with no prior knowledge of the initial parameter values. The fracture parameters are separated into two groups. The first group contains the fracture parameters with no prior information, and the second group contains the parameters with known prior information. Different models are generated from the first group parameters by sampling the solution space over a predefined range of possible solutions for each parameter. Each model generated by the first group is fixed and used as a starting model to invert for the second group of parameters using the Gauss-Newton method. The least-squares residual between the observed elastic tensor and the estimated elastic tensor is calculated for each model. The model parameters that yield the smallest least-squares residual correspond to the correct fracture reservoir parameters and geometry. Two synthetic examples of fractured reservoirs with oil and gas saturations were inverted with no prior information about the fracture properties. The
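The hybrid enumerative/Gauss-Newton search can be sketched on a two-parameter toy forward model: the first parameter is enumerated over a predefined range with no prior information, and the second is refined by Gauss-Newton for each candidate (the forward model, ranges, and "true" values are all illustrative):

```python
import numpy as np

# toy two-parameter inversion in the spirit of the hybrid scheme above
t = np.linspace(0.0, 1.0, 50)

def forward(a, b):                  # stand-in forward model (not an elastic tensor)
    return np.exp(-a * t) * b

a_true, b_true = 2.5, 4.0
d_obs = forward(a_true, b_true)     # synthetic "observed" data

def gauss_newton_b(a, b0=1.0, n_iter=20):
    """Refine b for a fixed, enumerated a (the model is linear in b here,
    so the Gauss-Newton step converges essentially in one iteration)."""
    b = b0
    for _ in range(n_iter):
        r = d_obs - forward(a, b)
        J = np.exp(-a * t)          # Jacobian d(forward)/db
        b += (J @ r) / (J @ J)      # Gauss-Newton update
    return b

# enumerate a over a predefined range; keep the (a, b) pair with the
# smallest least-squares residual, as in the hybrid method
best = min(
    ((a, gauss_newton_b(a)) for a in np.linspace(0.5, 5.0, 10)),
    key=lambda ab: np.linalg.norm(d_obs - forward(*ab)),
)
print(bool(abs(best[0] - 2.5) < 1e-9 and abs(best[1] - 4.0) < 1e-6))
```

Sampling one group exhaustively removes the need for a good starting guess on it, while Gauss-Newton handles only the well-behaved remainder; that is the division of labor the abstract describes.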
Model error estimation and correction by solving a inverse problem
NASA Astrophysics Data System (ADS)
Xue, Haile
2016-04-01
Nowadays, weather forecasts and climate predictions rely increasingly on numerical models. Yet errors inevitably exist in models due to imperfect numerics and parameterizations. From a practical point of view, model correction is an efficient strategy. Despite the differing complexity of forecast error correction algorithms, the general idea is to estimate the forecast errors by considering the NWP as a direct problem. Chou (1974) suggested an alternative view by considering the NWP as an inverse problem. The model error tendency term (ME) due to the model deficiency is treated as an unknown term in the NWP model, which can be discretized into short intervals (for example, 6 hours) and considered constant or linear in each interval. Given past re-analyses and the NWP model, the discretized MEs in the past intervals can be solved iteratively as a constant or linearly increasing tendency term in each interval. These MEs can then be used as online corrections. In this study, an iterative method for obtaining the MEs in past intervals is presented, and its convergence is confirmed with sets of experiments in the global forecast system of the Global and Regional Assimilation and Prediction System (GRAPES-GFS) for July-August (JA) 2009 and January-February (JF) 2010. These MEs were then used to obtain online model corrections based on the systematic errors of GRAPES-GFS for July 2009 and January 2010. The data sets associated with the initial condition and sea surface temperature (SST) used in this study are both based on NCEP final (FNL) data. According to the iterative numerical experiments, the following key conclusions can be drawn: (1) batches of iteration test results indicated that the hour-6 forecast errors were reduced to 10% of their original value after 20 steps of iteration; (2) by comparing offline the error corrections estimated by MEs to the mean forecast errors, the patterns of estimated errors were considered to agree well with those
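The iterative ME estimation can be sketched on a scalar toy model: the "truth" carries a constant tendency term that the model lacks, and the ME estimate is nudged by the forecast error accumulated over each interval (the dynamics and all constants are made up for illustration):

```python
# toy version of the iterative model-error (ME) estimation described above
dt, n_steps = 0.1, 10
me_true = 0.37                   # the tendency term missing from the model

def step(x, me):                 # forward model plus a candidate ME term
    return x + dt * (-0.5 * x + me)

def forecast(x0, me):
    x = x0
    for _ in range(n_steps):
        x = step(x, me)
    return x

x0 = 1.0
x_truth = forecast(x0, me_true)  # stands in for a re-analysis at interval end

me = 0.0
for _ in range(20):              # iterate: nudge ME by the mean forecast error
    err = x_truth - forecast(x0, me)
    me += err / (n_steps * dt)
print(round(me, 2))              # converges to the missing tendency, 0.37
```

The geometric shrinkage of the error here mirrors the reported behavior, where 20 iterations reduced the 6-hour forecast errors to about 10% of their original value.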
Modeling direct and inverse problems in ferritic heat-exchanger tubes
NASA Astrophysics Data System (ADS)
Sabbagh, Harold A.; Aldrin, John C.; Murphy, R. Kim; Sabbagh, Elias H.
2012-05-01
We develop forward and inverse models, together with laboratory data, to characterize a SEACURE tube, with and without a drilled hole and/or tube-support plate (TSP). The measured data are impedances obtained using the HP4192A impedance analyzer, and model calculations are carried out using VIC-3D©. We demonstrate conditions that are peculiar to ferritic tubes, and give insight into the optimum methods for characterizing the tubes and flaws within them.
Sneutrino dark matter in gauged inverse seesaw models for neutrinos.
An, Haipeng; Dev, P S Bhupal; Cai, Yi; Mohapatra, R N
2012-02-24
Extending the minimal supersymmetric standard model to explain small neutrino masses via the inverse seesaw mechanism can lead to a new light supersymmetric scalar partner which can play the role of inelastic dark matter (IDM). It is a linear combination of the superpartners of the neutral fermions in the theory (the light left-handed neutrino and two heavy standard model singlet neutrinos) which can be very light, with mass in the ~5-20 GeV range, as suggested by some current direct detection experiments. The IDM in this class of models has a keV-scale mass splitting, which is intimately connected to the small Majorana masses of the neutrinos. We predict the differential scattering rate and annual modulation of the IDM signal, which can be testable at future germanium- and xenon-based detectors.
Irrigation Requirement Estimation Using Vegetation Indices and Inverse Biophysical Modeling
NASA Technical Reports Server (NTRS)
Bounoua, Lahouari; Imhoff, Marc L.; Franks, Shannon
2010-01-01
We explore an inverse biophysical modeling process forced by satellite and climatological data to quantify irrigation requirements in semi-arid agricultural areas. We constrain the carbon and water cycles modeled under both equilibrium (balance between vegetation and climate) and non-equilibrium (water added through irrigation) conditions. We postulate that the degree to which irrigated dry lands vary from equilibrium climate conditions is related to the amount of irrigation. The amount of water required over and above precipitation is considered the irrigation requirement. For July, results show that spray irrigation added 1.3 mm of water per occurrence with a frequency of 24.6 hours. In contrast, drip irrigation required only 0.6 mm every 45.6 hours, or 46% of the amount simulated for spray irrigation. The modeled estimates account for 87% of the total reported irrigation water use where soil salinity is not important, and 66% in saline lands.
Numerical Methods for Forward and Inverse Problems in Discontinuous Media
Chartier, Timothy P.
2011-03-08
The research emphasis under this grant's funding is in the area of algebraic multigrid methods. The research has two main branches: 1) exploring interdisciplinary applications in which algebraic multigrid can make an impact and 2) extending the scope of algebraic multigrid methods with algorithmic improvements that are based in strong analysis. The work in interdisciplinary applications falls primarily in the field of biomedical imaging. Work under this grant demonstrated the effectiveness and robustness of multigrid for solving linear systems that result from highly heterogeneous finite element method models of the human head. The results of this work also point to medical advances that may become possible with software developed from it. Research to extend the scope of algebraic multigrid has been focused in several areas. In collaboration with researchers at the University of Colorado, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory, the PI developed an adaptive multigrid with subcycling via complementary grids. This method has very cheap computing costs per iterate and is showing promise as a preconditioner for conjugate gradient. Recent work with Los Alamos National Laboratory concentrates on developing algorithms that take advantage of the recent advances in adaptive multigrid research. The results of the various efforts in this research could ultimately have direct use and impact for researchers in a wide variety of applications, including astrophysics, neuroscience, contaminant transport in porous media, bi-domain heart modeling, modeling of tumor growth, and flow in heterogeneous porous media. This work has already led to basic advances in computational mathematics and numerical linear algebra and will continue to do so into the future.
NASA Astrophysics Data System (ADS)
Monteiller, Vadim; Chevrot, Sébastien; Komatitsch, Dimitri; Wang, Yi
2015-08-01
We present a method for high-resolution imaging of lithospheric structures based on full waveform inversion of teleseismic waveforms. We model the propagation of seismic waves using our recently developed direct solution method/spectral-element method hybrid technique, which allows us to simulate the propagation of short-period teleseismic waves through a regional 3-D model. We implement an iterative quasi-Newton method based upon the L-BFGS algorithm, where the gradient of the misfit function is computed using the adjoint-state method. Compared to gradient or conjugate-gradient methods, the L-BFGS algorithm has a much faster convergence rate. We illustrate the potential of this method on a synthetic test case that consists of a crustal model with a crustal discontinuity at 25 km depth and a sharp Moho jump. This model contains short- and long-wavelength heterogeneities along the lateral and vertical directions. The iterative inversion starts from a smooth 1-D model derived from the IASP91 reference Earth model. We invert both radial and vertical component waveforms, starting from long-period signals filtered at 10 s and gradually decreasing the cut-off period down to 1.25 s. This multiscale algorithm quickly converges towards a model that is very close to the true model, in contrast to inversions involving short-period waveforms only, which always get trapped into a local minimum of the cost function.
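The iterative quasi-Newton scheme described above can be sketched in a few lines. Here a quadratic misfit and its explicit gradient are toy stand-ins for the waveform misfit and its adjoint-state gradient; the matrix `A`, `m_true`, and all variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the waveform misfit: a quadratic in the model vector m,
# whose analytic gradient plays the role of the adjoint-state gradient.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))               # toy linear forward operator
m_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
d_obs = A @ m_true                         # synthetic "observed" data

def misfit(m):
    r = A @ m - d_obs
    return 0.5 * r @ r

def gradient(m):                           # in FWI this comes from the adjoint state
    return A.T @ (A @ m - d_obs)

m0 = np.zeros(5)                           # analogue of the smooth starting model
res = minimize(misfit, m0, jac=gradient, method="L-BFGS-B")
print(res.success, np.allclose(res.x, m_true, atol=1e-3))  # → True True
```

The L-BFGS update builds an approximate inverse Hessian from recent gradient pairs, which is what gives it the faster convergence rate mentioned above compared with plain gradient or conjugate-gradient descent.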
Gu, Dasa; Wang, Yuhang; Smeltzer, Charles; Boersma, K. Folkert
2014-06-27
Inverse modeling using satellite observations of nitrogen dioxide (NO2) columns has been extensively used to estimate nitrogen oxides (NOx) emissions in China. Recently, the Global Ozone Monitoring Experiment-2 (GOME-2) and Ozone Monitoring Instrument (OMI) provide independent global NO2 column measurements on a nearly daily basis at around 9:30 and 13:30 local time across the equator, respectively. Anthropogenic NOx emission estimates by applying previously developed monthly inversion (MI) or daily inversion (DI) methods to these two sets of measurements show substantial differences. We improve the DI method by conducting model simulation, satellite retrieval, and inverse modeling sequentially on a daily basis. After each inversion, we update anthropogenic NOx emissions in the model simulation with the newly obtained a posteriori results. Consequently, the inversion-optimized emissions are used to compute the a priori NO2 profiles for satellite retrievals. As such, the a priori profiles used in satellite retrievals are now coupled to inverse modeling results. The improved procedure was applied to GOME-2 and OMI NO2 measurements in 2011. The new daily retrieval-inversion (DRI) method estimates an average NOx emission of 6.9 Tg N/yr over China, and the difference between using GOME-2 and OMI measurements is 0.4 Tg N/yr, which is significantly smaller than the difference of 1.3 Tg N/yr using the previous DI method. Using the more consistent DRI inversion results, we find that anthropogenic NOx emissions tend to be higher in winter and summer than spring (and possibly fall) and the weekday-to-weekend emission ratio tends to increase with NOx emission in China.
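The sequential daily retrieval-inversion (DRI) idea can be sketched with a toy mass-balance loop: each day, columns are simulated with the current emissions, compared with that day's satellite columns, and the emissions are rescaled; the updated emissions then feed the next day's simulation (and, in the real method, the a priori profiles of the retrieval). The linear column-emission relation and the noise level below are assumptions for illustration, not the paper's chemical transport model.

```python
import numpy as np

rng = np.random.default_rng(0)
true_emission = 6.9     # Tg N/yr, the paper's estimate, used as the truth here
k = 2.0                 # toy sensitivity: NO2 column = k * emission
emission = 4.0          # a priori emission

history = []
for day in range(365):
    modeled = k * emission                                   # daily simulation
    observed = k * true_emission * (1 + 0.1 * rng.normal())  # noisy daily column
    emission *= observed / modeled                           # mass-balance update
    history.append(emission)                                 # a posteriori feeds next day

annual_mean = float(np.mean(history))
print(abs(annual_mean - true_emission) < 0.5)  # → True
```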
NASA Astrophysics Data System (ADS)
Dolman, A. J.; Shvidenko, A.; Schepaschenko, D.; Ciais, P.; Tchebakova, N.; Chen, T.; van der Molen, M. K.; Belelli Marchesini, L.; Maximov, T. C.; Maksyutov, S.; Schulze, E.-D.
2012-12-01
We determine the net land to atmosphere flux of carbon in Russia, including Ukraine, Belarus and Kazakhstan, using inventory-based, eddy covariance, and inversion methods. Our high boundary estimate is -342 Tg C yr-1 from the eddy covariance method, and this is close to the upper bounds of the inventory-based Land Ecosystem Assessment (LEA) and inverse model estimates. A lower boundary estimate is provided at -1350 Tg C yr-1 from the inversion models. The average of the three methods is -613.5 Tg C yr-1. The methane emission is estimated separately at 41.4 Tg C yr-1. These three methods agree well within their respective error bounds, so there is good consistency between bottom-up and top-down methods. The net atmosphere to land flux is caused primarily by the forests of Russia (-692 Tg C yr-1 from the LEA). It remains however remarkable that the three methods provide such close estimates (-615, -662, -554 Tg C yr-1) for net biome production (NBP), given the inherent uncertainties in all of the approaches. The lack of recent forest inventories, the few eddy covariance sites and the associated uncertainty in upscaling, and undersampling of concentrations for the inversions are among the prime causes of the uncertainty. The dynamic global vegetation models (DGVMs) suggest a much lower uptake at -91 Tg C yr-1, and we argue that this is caused by a high estimate of heterotrophic respiration compared to other methods.
A direct-inverse method for transonic and separated flows about airfoils
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1990-01-01
A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flow field about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.
A direct-inverse method for transonic and separated flows about airfoils
NASA Technical Reports Server (NTRS)
Carlson, K. D.
1985-01-01
A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flow field about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.
Earthquake source tensor inversion with the gCAP method and 3D Green's functions
NASA Astrophysics Data System (ADS)
Zheng, J.; Ben-Zion, Y.; Zhu, L.; Ross, Z.
2013-12-01
We develop and apply a method to invert earthquake seismograms for source properties using a general tensor representation and 3D Green's functions. The method employs (i) a general representation of earthquake potency/moment tensors with double couple (DC), compensated linear vector dipole (CLVD), and isotropic (ISO) components, and (ii) a corresponding generalized CAP (gCAP) scheme where the continuous wave trains are broken into Pnl and surface waves (Zhu & Ben-Zion, 2013). For comparison, we also use the waveform inversion method of Zheng & Chen (2012) and Ammon et al. (1998). Sets of 3D Green's functions are calculated on a 1 km3 grid using the 3-D community velocity model CVM-4 (Kohler et al. 2003). A bootstrap technique is adopted to establish the robustness of the inversion results obtained with the gCAP method (Ross & Ben-Zion, 2013). Synthetic tests with 1-D and 3-D waveform calculations show that the source tensor inversion procedure is reasonably reliable and robust. As an initial application, the method is used to investigate source properties of the March 11, 2013, Mw=4.7 earthquake on the San Jacinto fault using recordings of ~45 stations up to ~0.2 Hz. Both the best fitting and most probable solutions include an ISO component of ~1% and a CLVD component of ~0%. The obtained ISO component, while small, is found to be a non-negligible positive value that can have significant implications for the physics of the failure process. Work on using higher frequency data for this and other earthquakes is in progress.
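The DC/CLVD/ISO split at the core of this tensor representation can be illustrated with a short function. The percentage convention below is one common choice among several found in the literature, and is an assumption rather than necessarily the convention used in the paper.

```python
import numpy as np

def decompose(M):
    """Split a symmetric moment tensor into ISO and deviatoric parts and
    return (iso, clvd, dc) fractions under one common convention
    (an illustrative choice; other conventions exist)."""
    iso = np.trace(M) / 3.0
    dev = M - iso * np.eye(3)
    lam = np.sort(np.linalg.eigvalsh(dev))                 # deviatoric eigenvalues
    eps = lam[1] / max(abs(lam[0]), abs(lam[2]), 1e-12)    # CLVD measure in [-0.5, 0.5]
    m0 = abs(iso) + max(abs(lam[0]), abs(lam[2]))          # scalar size measure
    f_iso = abs(iso) / m0
    f_clvd = (1 - f_iso) * 2 * abs(eps)
    f_dc = (1 - f_iso) * (1 - 2 * abs(eps))
    return f_iso, f_clvd, f_dc

# A pure double couple: expect ISO ≈ 0, CLVD ≈ 0, DC ≈ 1.
M_dc = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 0.]])
fractions = decompose(M_dc)
print(tuple(round(f, 6) for f in fractions))  # → (0.0, 0.0, 1.0)
```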
NASA Astrophysics Data System (ADS)
Kim, A.; Dreger, D. S.; Taira, T.
2009-12-01
In this study, we developed a finite-source inversion method using the waveforms of small earthquakes as empirical Green's functions (eGf) to study the rupture process of micro-earthquakes on the San Andreas fault. This method differs from the ordinary eGf deconvolution method, which deconvolves the seismogram of the smaller, simpler-source event from the seismogram of the larger event to recover the moment rate function of the larger, more complex-source event. In the eGf deconvolution method, spectral domain deconvolution is commonly used, where the small earthquake spectrum is divided from the larger target event spectrum, and low spectral values are replaced by a water-level value to damp the effect of division by small numbers (e.g. Clayton and Wiggins, 1976). The water-level is chosen by trial and error. Such a rough regularization of the spectral ratio can result in the solution having unrealistic negative values and short-period oscillations. Also, the amplitude and duration of the moment rate functions can be influenced by the adopted water-level value. In this study we propose to use the eGf waveform directly in the inversion, rather than the moment rate function obtained from spectral division. In this approach the eGf is treated as the Green's function from each subfault and, unlike the deconvolution approach, the inversion can make use of multiple eGfs distributed over the fault plane. The method can therefore be applied at short source-receiver distances, since the variation in radiation pattern due to source-receiver geometry is better accounted for. Numerical tests of the waveform eGf inversion method indicate that in the case where the large slip asperity is not located at the hypocenter, an eGf located near the asperity recovers the prescribed model better than an eGf co-located with the main shock hypocenter. Synthetic analyses also show that using multiple eGfs can better constrain the slip model than using only one eGf.
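The water-level spectral division discussed above can be sketched as follows; the random "eGf", the spike source, and the water-level value are illustrative test choices, not data from the study.

```python
import numpy as np

def water_level_deconv(target, egf, water=0.01):
    """Spectral-domain deconvolution with a water level: |EGF|^2 values
    below a fraction of the spectral maximum are clamped to stabilize the
    division (after Clayton & Wiggins, 1976). 'water' is the trial-and-error
    regularization parameter discussed in the text."""
    n = len(target)
    T = np.fft.rfft(target, n)
    G = np.fft.rfft(egf, n)
    power = np.abs(G) ** 2
    denom = np.maximum(power, water * power.max())   # the water-level clamp
    return np.fft.irfft(T * np.conj(G) / denom, n)

# Build a target as the (circular) convolution of an eGf with a spike
# "moment-rate function", then recover the spike by deconvolution.
rng = np.random.default_rng(1)
egf = rng.normal(size=64)
src = np.zeros(64)
src[5] = 1.0
target = np.fft.irfft(np.fft.rfft(egf) * np.fft.rfft(src), 64)
rec = water_level_deconv(target, egf, water=1e-6)
print(int(np.argmax(rec)))  # → 5
```

Raising `water` increasingly distorts the recovered moment-rate function, which is exactly the sensitivity to the water-level value that motivates the waveform-domain inversion proposed in the abstract.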
NASA Astrophysics Data System (ADS)
Lehikoinen, A.; Huttunen, J. M.; Finsterle, S.; Kowalsky, M. B.; Kaipio, J. P.
2007-05-01
We extend the previously presented methodology for imaging the evolution of electrically conductive fluids in porous media. In that method, the nonstationary inversion problem was solved using Bayesian filtering. The method was demonstrated using a synthetically generated test case where the monitored target is a time-varying water plume in an unsaturated porous medium, and the imaging modality was electrical resistance tomography (ERT). The inverse problem was formulated as a state estimation problem, which is based on observation and evolution models. As an observation model for ERT, the complete electrode model was used, and for time-varying unsaturated flow, the Richards equation was used as an evolution model. Although the "true" evolution of water flow was simulated using a heterogeneous permeability field, in the inversion step the permeability was assumed to be homogeneous. This assumption leads to approximation errors, which have been taken into account by constructing a statistical model between the different realizations of the accurate and the approximate fluid flow models. This statistical model was constructed using an ensemble of samples from the evolution model, in such a way that the construction can be carried out prior to taking observations. However, the statistics of the approximation errors actually depend on the observations (through the state). In this work we extend the previously presented method so that the statistics of the approximation error are adjusted based on the observations. The basic idea of the extension is to gather those samples from the ensemble which at the current time best represent the observed state. We then determine the statistics of the approximation error based on these collated samples. The extension of the methodology provides improved estimates of water saturation distributions compared to the previously presented approaches. The proposed methodology may be extended for imaging and estimating parameters of dynamical processes.
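The offline construction of approximation-error statistics can be sketched as below. The two toy scalar-parameter models stand in for the accurate (heterogeneous-permeability) and approximate (homogeneous-permeability) flow models; all names and forms are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def accurate_model(theta):      # stand-in for flow with heterogeneous permeability
    return np.array([np.sin(theta), np.cos(theta), theta])

def approximate_model(theta):   # stand-in for flow with homogeneous permeability
    return np.array([theta, 1.0, theta])

# Draw an ensemble of parameter realizations, run both models, and
# summarize the discrepancy by its sample mean and covariance -- all of
# which can be done before any observations are taken.
samples = rng.uniform(0, 0.5, size=500)
errors = np.array([accurate_model(t) - approximate_model(t) for t in samples])
err_mean = errors.mean(axis=0)
err_cov = np.cov(errors.T)      # used to inflate the observation noise model
print(err_mean.shape, err_cov.shape)  # → (3,) (3, 3)
```

The extension described in the abstract would then re-estimate `err_mean` and `err_cov` from only those ensemble members that best match the currently observed state, rather than from the full ensemble.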
Inversion of heterogeneous parabolic-type equations using the pilot points method
NASA Astrophysics Data System (ADS)
Alcolea, Andrés; Carrera, Jesús; Medina, Agustín
2006-07-01
The inverse problem (also referred to as parameter estimation) consists of evaluating the medium properties ruling the behaviour of a given equation from direct measurements of those properties and of the dependent state variables. The problem becomes ill-posed when the properties vary spatially in an unknown manner, which is often the case when modelling natural processes. One possibility for combating this ill-posedness consists of performing stochastic conditional simulations. That is, instead of seeking a single solution (conditional estimation), one obtains an ensemble of fields, all of which honour the small scale variability (high frequency fluctuations) and the direct measurements. The high frequency component of the field differs from one simulation to another, but remains fixed within each of them. Measurements of the dependent state variables are honoured by framing simulation as an inverse problem, where both model fit and parameter plausibility are maximized with respect to the coefficients of the basis functions (pilot point values). These coefficients (model parameters) parameterize the large scale variability patterns. The pilot points method, which is often used in hydrogeology, uses the kriging weights as basis functions. The performance of the method (in both its conditional estimation and conditional simulation variants) is tested on a synthetic example using a parabolic-type equation. Results show that including the plausibility term improves the identification of the spatial variability of the unknown field function and that the weight assigned to the plausibility term leads to optimal results both for conditional estimation and for stochastic simulations.
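A one-dimensional sketch of the pilot-point parameterization: the kriging weights act as basis functions, so the field at any location is a linear combination of the pilot-point values, and those values are the model parameters adjusted during calibration. The exponential covariance, the simple-kriging variant, and all numbers below are illustrative assumptions.

```python
import numpy as np

def cov(h, sill=1.0, corr_len=10.0):
    """Illustrative exponential covariance model."""
    return sill * np.exp(-np.abs(h) / corr_len)

x_pilots = np.array([2.0, 5.0, 9.0])     # pilot-point locations (1-D example)
v_pilots = np.array([0.5, -1.0, 0.8])    # pilot-point values = model parameters

def krige(x):
    # Simple-kriging weights w = C_pp^{-1} c_p(x): these are the
    # basis-function values at location x.
    C = cov(x_pilots[:, None] - x_pilots[None, :])
    c = cov(x_pilots - x)
    w = np.linalg.solve(C, c)
    return w @ v_pilots

# The large-scale field is interpolated everywhere from the pilot values;
# at a pilot location the interpolation reproduces the value exactly.
field = np.array([krige(x) for x in np.linspace(0, 10, 21)])
print(round(krige(5.0), 2))  # → -1.0
```

In the calibration loop, an optimizer would adjust `v_pilots` to maximize model fit plus the plausibility term, while the kriging weights stay fixed.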
NASA Technical Reports Server (NTRS)
Bennett, Andrew F.
1990-01-01
Inverse methods for estimating the surface circulation of the equatorial Pacific by combining a linear reduced-gravity shallow-water model with the Tropical Ocean-Global Atmosphere ship-of-opportunity expendable bathythermograph (TOGA SOP XBT) observing program are examined. It is demonstrated that a simple linear model of the upper circulation of the equatorial Pacific can be successfully used as a weak constraint when smoothing the TOGA SOP XBT data. A circulation is sought as the weighted least squares fit to the dynamics and the data. The solution method is an expansion in representer functions, and the generalized inverse problem is thereby reduced from a functional problem to an algebraic problem for the coefficients of the representers. A specific inverse calculation using synthetic forcing and data is presented.
Inverse energy cascade in nonlocal helical shell models of turbulence
NASA Astrophysics Data System (ADS)
De Pietro, Massimo; Biferale, Luca; Mailybaev, Alexei A.
2015-10-01
Following the exact decomposition in eigenstates of helicity for the Navier-Stokes equations in Fourier space [F. Waleffe, Phys. Fluids A 4, 350 (1992), 10.1063/1.858309], we introduce a modified version of helical shell models for turbulence with nonlocal triadic interactions. By using both an analytical argument and numerical simulation, we show that there exists a class of models, with a specific helical structure, that exhibits a statistically stable inverse energy cascade, in close analogy with that predicted for the Navier-Stokes equations restricted to the same helical interactions. We further support the idea that turbulent energy transfer is the result of a strong entanglement among triads possessing different transfer properties.
An inverse problem for a mathematical model of aquaponic agriculture
NASA Astrophysics Data System (ADS)
Bobak, Carly; Kunze, Herb
2017-01-01
Aquaponic agriculture is a sustainable ecosystem that relies on a symbiotic relationship between fish and macrophytes. While the practice has been growing in popularity, relatively few mathematical models exist which aim to study the system processes. In this paper, we present a system of ODEs which aims to mathematically model the population and concentration dynamics present in an aquaponic environment. Values of the parameters in the system are estimated from the literature so that simulated results can be presented to illustrate the nature of the solutions to the system. As well, a brief sensitivity analysis is performed in order to identify redundant parameters and highlight those which may need more reliable estimates. Specifically, an inverse problem with manufactured data for fish and plants is presented to demonstrate the ability of the collage theorem to recover parameter estimates.
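The manufactured-data recovery idea can be sketched with a generic two-equation ODE system. This is a stand-in for the paper's aquaponics model, ordinary least squares is used here in place of the collage-theorem machinery, and all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Toy "fish/plant" dynamics with two unknown parameters a and b.
def rhs(t, y, a, b):
    fish, plant = y
    return [a * fish * (1 - fish), b * fish - 0.1 * plant]

t_eval = np.linspace(0, 5, 20)

def simulate(p):
    sol = solve_ivp(rhs, (0, 5), [0.1, 0.0], t_eval=t_eval, args=tuple(p),
                    rtol=1e-8, atol=1e-10)
    return sol.y

# Manufacture data from known "true" parameters, then recover them.
true_p = (0.8, 0.3)
data = simulate(true_p)

def residual(p):
    return (simulate(p) - data).ravel()

fit = least_squares(residual, x0=[0.5, 0.5], diff_step=1e-4)
print(np.round(fit.x, 3))
```

With noise-free manufactured data the fit recovers the true parameters essentially exactly; adding noise to `data` would mimic the paper's demonstration more closely.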
Unrealistic parameter estimates in inverse modelling: A problem or a benefit for model calibration?
Poeter, E.P.; Hill, M.C.
1996-01-01
Estimation of unrealistic parameter values by inverse modelling is useful for constructed model discrimination. This utility is demonstrated using the three-dimensional, groundwater flow inverse model MODFLOWP to estimate parameters in a simple synthetic model where the true conditions and character of the errors are completely known. When a poorly constructed model is used, unreasonable parameter values are obtained even when using error free observations and true initial parameter values. This apparent problem is actually a benefit because it differentiates accurately and inaccurately constructed models. The problems seem obvious for a synthetic problem in which the truth is known, but are obscure when working with field data. Situations in which unrealistic parameter estimates indicate constructed model problems are illustrated in applications of inverse modelling to three field sites and to complex synthetic test cases in which it is shown that prediction accuracy also suffers when constructed models are inaccurate.
Resolution of group velocity models obtained by adjoint inversion in the Czech Republic region
NASA Astrophysics Data System (ADS)
Valentova, Lubica; Gallovic, Frantisek; Ruzek, Bohuslav; de la Puente, Josep
2013-04-01
We performed tomographic inversion of cross-correlation traveltimes of group waves in the Bohemian Massif. The traveltimes used for the inversion come from ambient seismic noise measurements between pairs of stations, filtered for several period ranges between 2-20 s. The inverse problem was solved by the conjugate gradient method, with gradients calculated using the efficient adjoint method. Assuming that the propagation of group waves can be approximated by membrane waves for each period separately, the computations are reduced to a 2D domain. The numerical calculations were carried out using an adjoint version of SeisSol, which solves the elastodynamic system using the Discontinuous Galerkin method with arbitrary high order time derivatives (ADER-DG). The adjoint inversion is based on the computation of a so-called sensitivity kernel for each datum; these are then combined into the Fréchet kernel of the misfit gradient. Therefore, even when using only the longest-wavelength data, i.e. the traveltimes of 20 s and 16 s group waves, structures of even shorter wavelengths can be obtained by the inversion. However, these smaller-scale structures are possibly more affected by data noise and thus require careful treatment. Note that in classical tomography based on the ray method, such structures are subdued by regularization. This raises the question of the influence of data noise on the obtained models. Several synthetic tests were carried out to reveal the effect of data errors on the resulting model. First, we tested the level of data noise required to obtain artificial small scale structures. As a target model we constructed a simple heterogeneous model consisting of one very long wavelength structure. The synthetic traveltime data were modified using random shifts drawn from several distributions with different variances. The method appears to be extremely sensitive even to relatively small levels of noise. The other set of tests concentrated on the main features of models obtained from the real data. All models inverted using
Image synthesis with graph cuts: a fast model proposal mechanism in probabilistic inversion
NASA Astrophysics Data System (ADS)
Zahner, Tobias; Lochbühler, Tobias; Mariethoz, Grégoire; Linde, Niklas
2016-02-01
Geophysical inversion should ideally produce geologically realistic subsurface models that explain the available data. Multiple-point statistics is a geostatistical approach to construct subsurface models that are consistent with site-specific data, but also display the same type of patterns as those found in a training image. The training image can be seen as a conceptual model of the subsurface and is used as a non-parametric model of spatial variability. Inversion based on multiple-point statistics is challenging due to high nonlinearity and time-consuming geostatistical resimulation steps that are needed to create new model proposals. We propose an entirely new model proposal mechanism for geophysical inversion that is inspired by texture synthesis in computer vision. Instead of resimulating pixels based on higher-order patterns in the training image, we identify a suitable patch of the training image that replaces a corresponding patch in the current model without breaking the patterns found in the training image, that is, remaining consistent with the given prior. We consider three cross-hole ground-penetrating radar examples in which the new model proposal mechanism is employed within an extended Metropolis Markov chain Monte Carlo (MCMC) inversion. The model proposal step is about 40 times faster than state-of-the-art multiple-point statistics resimulation techniques, the number of necessary MCMC steps is lower, and the final model realizations are of similar quality. The model proposal mechanism is presently limited to 2-D fields, but the method is general and can be applied to a wide range of subsurface settings and geophysical data types.
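When model proposals are drawn consistently with the prior, as the patch-replacement mechanism aims to do, the extended Metropolis acceptance probability reduces to a likelihood ratio. A one-dimensional toy sketch of that rule, with a Gaussian likelihood and a random prior draw standing in for a patch proposal (all numbers are illustrative, not from the GPR examples):

```python
import numpy as np

rng = np.random.default_rng(0)
d_obs, sigma = 3.0, 0.5

def log_likelihood(m):
    # Forward model is the identity here; in the paper it would be the
    # geophysical forward simulation of the current subsurface model.
    return -0.5 * ((m - d_obs) / sigma) ** 2

m = 0.0                                    # current model
chain = []
for _ in range(5000):
    m_prop = rng.normal(3.0, 2.0)          # stand-in for a prior-consistent patch proposal
    # Extended Metropolis: prior terms cancel, only likelihoods remain.
    if np.log(rng.uniform()) < log_likelihood(m_prop) - log_likelihood(m):
        m = m_prop                         # accept the proposed model
    chain.append(m)

print(abs(np.mean(chain[1000:]) - d_obs) < 0.3)  # → True
```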
Piovesan, Davide; Pierobon, Alberto; Dizio, Paul; Lackner, James R
2011-03-01
A common problem in the analysis of upper limb unfettered reaching movements is the estimation of joint torques using inverse dynamics. Inaccuracy in the estimation of joint torques can be caused by inaccuracies in the acquired kinematic variables and body segment parameters (BSPs), and by approximations in the biomechanical models. The effect of uncertainty in the estimation of body segment parameters can be especially important in the analysis of movements with high acceleration. A sensitivity analysis was performed to assess the relevance of different sources of inaccuracy in inverse dynamics analysis of a planar arm movement. Eight regression models and one water immersion method for the estimation of BSPs were used to quantify the influence of inertial models on the calculation of joint torques during numerical analysis of unfettered forward arm reaching movements. Thirteen subjects performed 72 forward planar reaches between two targets located on the horizontal plane and aligned with the median plane. Using a planar, double link model for the arm with a floating shoulder, we calculated the normalized joint torque peak and a normalized root mean square (rms) of torque at the shoulder and elbow joints. Statistical analyses quantified the influence of different BSP models on the kinetic variable variance for a given uncertainty in the estimation of joint kinematics and biomechanical modeling errors. Our analysis revealed that the choice of BSP estimation method had a particular influence on the normalized rms of joint torques. Moreover, the normalization of kinetic variables to BSPs for comparison among subjects showed that the interaction between the BSP estimation method and the subject-specific somatotype and movement kinematics was a significant source of variance in the kinetic variables. The normalized joint torque peak and the normalized root mean square of joint torque represented valuable parameters for comparing the effect of BSP estimation methods.
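The dependence of inverse-dynamics torques on BSPs can be seen already in a one-segment sketch, a stand-in for the paper's two-link arm model; the two BSP sets below are hypothetical and are not taken from any of the eight regression models.

```python
import numpy as np

# For a single rigid segment rotating in a vertical plane, the joint
# torque is tau = I*alpha + m*g*lc*cos(theta), so it depends directly on
# the body segment parameters m (mass), lc (center-of-mass distance),
# and I (moment of inertia about the joint).
g = 9.81
theta, alpha = np.deg2rad(30.0), 2.0          # identical kinematics for both BSP sets

def torque(m, lc, I):
    return I * alpha + m * g * lc * np.cos(theta)

tau_a = torque(m=1.5, lc=0.15, I=0.025)       # hypothetical BSP model A
tau_b = torque(m=1.7, lc=0.17, I=0.030)       # hypothetical BSP model B
pct = 100 * abs(tau_b - tau_a) / tau_a
print(round(pct, 1), "% torque difference")   # → 28.2 % torque difference
```

Even modest BSP differences translate into a sizable torque difference from the same kinematics, which is the sensitivity the paper quantifies for the full double-link model.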
Efficient non-negative constrained model-based inversion in optoacoustic tomography
NASA Astrophysics Data System (ADS)
Ding, Lu; Luís Deán-Ben, X.; Lutzweiler, Christian; Razansky, Daniel; Ntziachristos, Vasilis
2015-09-01
The inversion accuracy in optoacoustic tomography depends on a number of parameters, including the number of detectors employed, discrete sampling issues, and imperfections of the forward model. These factors result in ambiguities in the reconstructed image. A common ambiguity is the appearance of negative values, which have no physical meaning since optical absorption can only be greater than or equal to zero. We investigate herein algorithms that impose non-negative constraints in model-based optoacoustic inversion. Several state-of-the-art non-negative constrained algorithms are analyzed. Furthermore, an algorithm based on the conjugate gradient method is introduced in this work. We are particularly interested in investigating whether positivity restrictions lead to accurate solutions or drive the appearance of errors and artifacts. It is shown that the computational performance of non-negative constrained inversion is higher for the introduced algorithm than for the other algorithms, while yielding equivalent results. The experimental performance of this inversion procedure is then tested in phantoms and small animals, showing an improvement in image quality and quantitative accuracy with respect to the unconstrained approach. The study validates the use of non-negative constraints for improving image accuracy compared to unconstrained methods, while maintaining computational efficiency.
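The effect of a non-negativity constraint can be illustrated with a generic linear forward model and SciPy's NNLS solver. NNLS is an illustrative choice here; the paper's own algorithm is conjugate-gradient based, and the matrix `A` below is a random toy model, not an optoacoustic forward model.

```python
import numpy as np
from scipy.optimize import nnls

# Forward model d = A x: unconstrained least squares on noisy data can
# return negative entries (unphysical for optical absorption), whereas
# NNLS enforces x >= 0 by construction.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 10))                 # toy model matrix
x_true = np.abs(rng.normal(size=10))          # absorption is non-negative
d = A @ x_true + 0.5 * rng.normal(size=40)    # noisy "measurements"

x_free = np.linalg.lstsq(A, d, rcond=None)[0]  # unconstrained solution
x_nn, _ = nnls(A, d)                           # non-negative constrained solution

print(x_nn.min() >= 0)  # → True
```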
Joining direct and indirect inverse calibration methods to characterize karst, coastal aquifers
NASA Astrophysics Data System (ADS)
De Filippis, Giovanna; Foglia, Laura; Giudici, Mauro; Mehl, Steffen; Margiotta, Stefano; Negri, Sergio
2016-04-01
Parameter estimation is extremely relevant for accurate simulation of groundwater flow. Parameter values for models of large-scale catchments are usually derived from a limited set of field observations, which can rarely be obtained in a straightforward way from field tests or laboratory measurements on samples, due to a number of factors, including measurement errors and inadequate sampling density. Indeed, a wide gap exists between the local scale, at which most of the observations are taken, and the regional or basin scale, at which the planning and management decisions are usually made. For this reason, the use of geologic information and field data is generally made by zoning the parameter fields. However, pure zoning does not perform well in the case of fairly complex aquifers and this is particularly true for karst aquifers. In fact, the support of the hydraulic conductivity measured in the field is normally much smaller than the cell size of the numerical model, so it should be upscaled to a scale consistent with that of the numerical model discretization. Automatic inverse calibration is a valuable procedure to identify model parameter values by conditioning on observed, available data, limiting the subjective evaluations introduced with the trial-and-error technique. Many approaches have been proposed to solve the inverse problem. Generally speaking, inverse methods fall into two groups: direct and indirect methods. Direct methods allow determination of hydraulic conductivities from the groundwater flow equations which relate the conductivity and head fields. Indirect methods, instead, can handle any type of parameters, independently from the mathematical equations that govern the process, and condition parameter values and model construction on measurements of model output quantities, compared with the available observation data, through the minimization of an objective function. Both approaches have pros and cons, depending also on model complexity. For
NASA Astrophysics Data System (ADS)
D'Auria, Luca; Fernandez, Jose; Puglisi, Giuseppe; Rivalta, Eleonora; Camacho, Antonio; Nikkhoo, Mehdi; Walter, Thomas
2016-04-01
The inversion of ground deformation and gravity data is affected by an intrinsic ambiguity because of the mathematical formulation of the inverse problem. Current methods for the inversion of geodetic data rely on both parametric (i.e. assuming a source geometry) and non-parametric approaches. The former are able to catch the fundamental features of the ground deformation source but, if the assumptions are wrong or oversimplified, they could provide misleading results. On the other hand, the latter class of methods, even if not relying on stringent assumptions, could suffer from artifacts, especially when dealing with poor datasets. In the framework of the EC-FP7 MED-SUV project we aim at comparing different inverse approaches to verify how they cope with basic goals of Volcano Geodesy: determining the source depth, the source shape (size and geometry), the nature of the source (magmatic/hydrothermal) and hinting at the complexity of the source. Other aspects that are important in volcano monitoring are: volume/mass transfer toward shallow depths, propagation of dikes/sills, and forecasting the opening of eruptive vents. On the basis of similar experiments already done in the fields of seismic tomography and geophysical imaging, we have devised a blind test experiment. Our group was divided into one model design team and several inversion teams. The model design team devised two physical models representing volcanic events at two distinct volcanoes (one stratovolcano and one caldera). They provided the inversion teams with: the topographic reliefs, the calculated deformation field (on a set of simulated GPS stations and as InSAR interferograms) and the gravity change (on a set of simulated campaign stations). The nature of the volcanic events remained unknown to the inversion teams until after the submission of the inversion results. Here we present the preliminary results of this comparison in order to determine which features of the ground deformation and gravity source
NASA Astrophysics Data System (ADS)
D'Auria, L.; Fernandez, J.; Puglisi, G.; Rivalta, E.; Camacho, A. G.; Nikkhoo, M.; Walter, T. R.
2015-12-01
The inversion of ground deformation and gravity data is affected by an intrinsic ambiguity because of the mathematical formulation of the inverse problem. Current methods for the inversion of geodetic data rely on both parametric (i.e. assuming a source geometry) and non-parametric approaches. The former can capture the fundamental features of the ground deformation source but, if the assumptions are wrong or oversimplified, they may provide misleading results. On the other hand, the latter class of methods, even if not relying on stringent assumptions, may suffer from artifacts, especially when dealing with poor datasets. In the framework of the EC-FP7 MED-SUV project we aim to compare different inversion approaches to verify how they cope with the basic goals of Volcano Geodesy: determining the source depth, the source shape (size and geometry), the nature of the source (magmatic/hydrothermal), and hinting at the complexity of the source. Other aspects that are important in volcano monitoring are: volume/mass transfer toward shallow depths, propagation of dikes/sills, and forecasting the opening of eruptive vents. On the basis of similar experiments already done in the fields of seismic tomography and geophysical imaging, we have devised a blind test experiment. Our group was divided into one model design team and several inversion teams. The model design team devised two physical models representing volcanic events at two distinct volcanoes (one stratovolcano and one caldera). They provided the inversion teams with: the topographic reliefs, the calculated deformation field (on a set of simulated GPS stations and as InSAR interferograms) and the gravity change (on a set of simulated campaign stations). The nature of the volcanic events remained unknown to the inversion teams until after the submission of the inversion results. Here we present the preliminary results of this comparison in order to determine which features of the ground deformation and gravity source
Three-dimensional modeling and inversion of borehole-surface electrical resistivity data
NASA Astrophysics Data System (ADS)
Zhang, Y.; Liu, D.; Liu, Y.; Qin, M.
2013-12-01
After a long period of production, many oil fields have entered the high water-cut stage. It is therefore critical to determine the oil-water distribution and the water-flooding front. Borehole-surface electrical resistivity tomography (BSERT) is a low-cost measurement technique with a wide measuring scope and small influence on the reservoir, so it is gaining increasing application in detecting water-flooded areas and evaluating residual oil distribution in oil fields. In a BSERT system, current is injected into the steel casing of the observation well. The current flows along the long casing and transmits to the surface through inhomogeneous layers. The electric potential difference data received on the surface can then be inverted for the deep subsurface resistivity distribution. This study presents a 3D modeling and inversion method for such electrical resistivity data. In much of the literature, the steel casing is treated as a transmission-line current source with infinitesimal radius and constant current density. However, in practical multi-layered formations with different resistivities, the current density along the casing is not constant. In this study, the steel casing is modeled as a 2.5e-7 ohm-m physical volume that the casing occupies in the finite element mesh. The casing radius can be set slightly larger than the true radius, which helps reduce the element count and computation time. The current supply point is set at the center of the top surface of the physical volume. The homogeneous-formation modeling result shows the same precision as the transmission-line current source model. The multi-layered formation modeling result shows that the current density along the casing is high in low-resistivity layers and low in high-resistivity layers. These results are more physically reasonable. Moreover, deviated and horizontal wells can be simulated as simply as vertical wells using this modeling method. Based on this forward modeling method, the
Quasiparticle density of states by inversion with maximum entropy method
NASA Astrophysics Data System (ADS)
Sui, Xiao-Hong; Wang, Han-Ting; Tang, Hui; Su, Zhao-Bin
2016-10-01
We propose to extract the quasiparticle density of states (DOS) of a superconductor directly from experimentally measured superconductor-insulator-superconductor junction tunneling data by applying the maximum entropy method to the nonlinear system. It has the advantage of model independence, with minimal a priori assumptions. Various components of the proposed method have been carefully investigated, including the meaning of the targeting function, the mock function, and the role and designation of the input parameters. The validity of the developed scheme is shown by two kinds of tests on systems with known DOS. As a preliminary application to a Bi2Sr2CaCu2O8+δ sample with critical temperature Tc = 89 K, we extract the DOS from intrinsic Josephson junction current data measured at temperatures of T = 4.2 K, 45 K, 55 K, 95 K, and 130 K. The energy gap decreases with increasing temperature below Tc, while above Tc a kind of energy gap survives, which provides an angle from which to investigate the pseudogap phenomenon in high-Tc superconductors. The developed method itself might be a useful tool for future applications in various fields.
Video-based Nearshore Depth Inversion using WDM Method
NASA Astrophysics Data System (ADS)
Hampson, R. W.; Kirby, J. T.
2008-12-01
A new remote sensing method for estimating nearshore water depths from video imagery has been developed and applied as part of an ongoing field study at Bethany Beach, Delaware. The new method applies Donelan et al.'s Wavelet Direction Method (WDM) to compact arrays of pixel-intensity time series extracted from video images. The WDM generates a non-stationary time series of the wavenumber and wave direction at different frequencies that can be used to create frequency-wavenumber and directional spectra. The water depth is estimated at the center of each compact array by fitting the linear dispersion relation to the frequency-wavenumber spectrum. Directional spectral results show good agreement with those obtained from a slope array located just offshore of Bethany Beach. Additionally, depth estimates from the WDM are compared to depth measurements taken with a kayak survey system at Bethany Beach. Continuous measurements of the bathymetry at Bethany Beach are needed as inputs to fluid dynamics and sediment transport models to study the morphodynamics of the nearshore zone, and can be used to monitor the success of the recent beach replenishment project along the Delaware coast.
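The depth-estimation step described above, fitting the linear dispersion relation w^2 = g*k*tanh(k*h) to an observed frequency-wavenumber pair, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function names and the bracketing depth range of 0.01-100 m are assumptions.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def dispersion_omega(k, h):
    """Angular frequency from the linear dispersion relation w^2 = g*k*tanh(k*h)."""
    return math.sqrt(G * k * math.tanh(k * h))

def depth_from_dispersion(omega, k, h_lo=0.01, h_hi=100.0, tol=1e-10):
    """Invert w^2 = g*k*tanh(k*h) for the depth h by bisection.

    Assumes the (omega, k) pair is consistent with some depth in [h_lo, h_hi];
    tanh(k*h) is monotonic in h, so bisection is safe.
    """
    target = omega ** 2 / (G * k)  # equals tanh(k*h); must lie in (0, 1)
    if not 0.0 < target < 1.0:
        raise ValueError("(omega, k) pair inconsistent with a finite depth")
    lo, hi = h_lo, h_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.tanh(k * mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because tanh(k*h) increases monotonically with h, a measured (omega, k) pair pins down a unique depth, which is the quantity estimated at the center of each pixel array.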
Method for the preparation of metal colloids in inverse micelles and product prepared by the method
Wilcoxon, Jess P.
1992-01-01
A method is provided for preparing catalytic elemental metal colloidal particles (e.g. gold, palladium, silver, rhodium, iridium, nickel, iron, platinum, molybdenum) or colloidal alloy particles (silver/iridium or platinum/gold). A homogeneous inverse micelle solution of a metal salt is first formed in a metal-salt solvent comprised of a surfactant (e.g. a nonionic or cationic surfactant) and an organic solvent. The size and number of inverse micelles are controlled by the proportions of the surfactant and the solvent. Then, the metal salt is reduced (by chemical reduction or by a pulsed or continuous-wave UV laser) to colloidal particles of elemental metal. After their formation, the colloidal metal particles can be stabilized by reaction with materials that permanently add stabilizing groups to the surface of the particles. The sizes of the colloidal elemental metal particles and their size distribution are determined by the size and number of the inverse micelles. A second salt can be added with further reduction to form the colloidal alloy particles. After the colloidal elemental metal particles are formed, the homogeneous solution separates into two phases, one rich in colloidal elemental metal particles and the other rich in surfactant. The colloidal elemental metal particles from one phase can be dried to form a powder useful as a catalyst. Surfactant can be recovered and recycled from the surfactant-rich phase.
Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian Inverse Problems
NASA Astrophysics Data System (ADS)
Lan, Shiwei; Bui-Thanh, Tan; Christie, Mike; Girolami, Mark
2016-03-01
The Bayesian approach to inverse problems relies predominantly on Markov chain Monte Carlo methods for posterior inference. The typical nonlinear concentration of posterior measure observed in many such inverse problems presents severe challenges to existing simulation-based inference methods. Motivated by these challenges, the exploitation of local geometric information in the form of covariant gradients, metric tensors, Levi-Civita connections, and local geodesic flows has been introduced to explore the configuration space of the posterior measure more effectively. However, obtaining such geometric quantities usually requires extensive computational effort, which, despite their effectiveness, limits the applicability of these geometrically based Monte Carlo methods. In this paper we explore one way to address this issue through the construction of an emulator of the model from which all geometric objects can be obtained in a much more computationally feasible manner. The main concept is to approximate the geometric quantities using a Gaussian process emulator conditioned on a carefully chosen design set of configuration points, which also determines the quality of the emulator. To this end we propose the use of statistical experiment design methods to refine a potentially arbitrarily initialized design online without destroying the convergence of the resulting Markov chain to the desired invariant measure. The practical examples considered in this paper demonstrate the significant improvement possible in terms of computational load, suggesting that this is a promising avenue for further development.
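The core idea, replacing expensive forward-model evaluations with a Gaussian process conditioned on a design set, can be illustrated with a minimal one-dimensional posterior-mean emulator. This is a sketch only: the RBF kernel, length scale, and nugget are assumptions, and the actual method emulates geometric quantities (gradients, metrics) of a full model rather than a scalar function.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (adequate for small systems)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf(a, b, length=0.5):
    """Squared-exponential (RBF) covariance between two design points."""
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def gp_emulator(xs, ys, nugget=1e-8):
    """Condition a zero-mean GP on the design set (xs, ys); return the
    posterior-mean predictor.  Gradients of the emulator (needed by geometric
    MCMC) could be obtained by differentiating the kernel instead of the model."""
    K = [[rbf(xi, xj) + (nugget if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    alpha = solve(K, ys)  # K^-1 y
    return lambda x: sum(a * rbf(x, xi) for a, xi in zip(alpha, xs))
```

The quality of the design set (xs here) controls emulator accuracy, which is exactly what the experiment-design refinement in the paper targets.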
Application of direct inverse analogy method (DIVA) and viscous design optimization techniques
NASA Technical Reports Server (NTRS)
Greff, E.; Forbrich, D.; Schwarten, H.
1991-01-01
A direct-inverse approach to the transonic design problem was presented in its initial form at the First International Conference on Inverse Design Concepts and Optimization in Engineering Sciences (ICIDES-1). Further applications of the direct-inverse analogy (DIVA) method to the design of airfoils and to incremental wing improvements, together with experimental verification, are reported here. First results of a new viscous design code, also of the residual-correction type with semi-inverse boundary layer coupling, are compared with DIVA; this may enhance the accuracy of trailing-edge design for highly loaded airfoils. Finally, the capabilities of an optimization routine coupled with the two viscous full-potential solvers are investigated in comparison to the inverse method.
NASA Astrophysics Data System (ADS)
Manning, A. J.; O'Doherty, S.; Jones, A. R.; Simmonds, P. G.; Derwent, R. G.
2011-01-01
Methane (CH4) and nitrous oxide (N2O) have strong radiative properties in the Earth's atmosphere, and both are regulated through the United Nations Framework Convention on Climate Change. Through this convention the United Kingdom is obliged to report an inventory of annual emission estimates from 1990. This paper describes a methodology that estimates emissions of CH4 and N2O completely independently of the inventory values. Emissions have been estimated for each year from 1990 to 2007 for the United Kingdom and for NW Europe. The methodology combines high-frequency observations from Mace Head, a monitoring site on the west coast of Ireland, with an atmospheric dispersion model and an inversion system. The sensitivities of the inversion method to the modeling assumptions are reported. The 20-year Northern Hemisphere midlatitude baseline mixing ratios, growth rates, and seasonal cycles of both gases are also presented. The results indicate reasonable agreement between the inventory and inversion results for the United Kingdom for N2O over the entire period. For CH4 the agreement is poor in the 1990s but good in the 2000s. The UK CH4 inventory's reported reduction from 1990-1992 to 2005-2007 (over 50%) is dominated by changes to landfill and coal mine emissions and is more than double the corresponding drop in the inversion-estimated emissions (24%). The inversion results suggest that the United Kingdom has met its Kyoto commitment (-12.5%), but by a smaller margin (-14.3%) than reported (-17.3%). The results for NW Europe with the United Kingdom removed show reasonable agreement in trend; on average the inversion results for N2O are 25% lower and for CH4 21% higher.
NASA Astrophysics Data System (ADS)
Bubis, E. L.; Lozhkarev, V. V.; Stepanov, A. N.; Smirnov, A. I.; Kuzmin, I. V.; Malshakova, O. A.; Gusev, S. A.; Skorokhodov, E. V.
2016-08-01
The adaptive phase-contrast method with nonlinear (photothermal) and linear Zernike filters was investigated. Liquid and polymer media that partially absorb radiation served as photothermal Zernike filters. Efficient visualization and inversion of images of small-scale model objects were demonstrated experimentally. A growth-sector boundary in a nonlinear crystal was also visualized.
A boundary integral method for an inverse problem in thermal imaging
NASA Technical Reports Server (NTRS)
Bryan, Kurt
1992-01-01
An inverse problem in thermal imaging involving the recovery of a void in a material from its surface temperature response to external heating is examined. Uniqueness and continuous dependence results for the inverse problem are demonstrated, and a numerical method for its solution is developed. This method is based on an optimization approach, coupled with a boundary integral equation formulation of the forward heat conduction problem. Some convergence results for the method are proved, and several examples are presented using computationally generated data.
McGrail, B. Peter
2001-10-31
A numerically based simulator was developed to assist in the interpretation of complex laboratory experiments examining transport processes of chemical and biological contaminants subject to nonlinear adsorption and/or source terms. The inversion is performed with any of three nonlinear regression methods: Marquardt-Levenberg, conjugate gradient, or quasi-Newton. The governing equations for the problem are solved by the method of finite differences, including any combination of three boundary conditions: (1) Dirichlet, (2) Neumann, and (3) Cauchy. The dispersive terms in the transport equations were solved using the second-order accurate in time and space Crank-Nicolson scheme, while the advective terms were handled using a third-order in time and space, total variation diminishing (TVD) scheme that damps spurious oscillations around sharp concentration fronts. The numerical algorithms were implemented in the computer code INVERTS, which runs on any standard personal computer. Apart from a comprehensive set of test problems, INVERTS was also used to model the elution of a nonradioactive tracer, ¹⁸⁵Re, in a pressurized unsaturated flow (PUF) experiment with a simulated waste glass for low-activity waste immobilization. The elution profile was best described by a nonlinear kinetic model for adsorption.
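The Crank-Nicolson treatment of the dispersive term mentioned above can be sketched for the simplest case, one-dimensional diffusion with fixed boundary values. This is an illustrative sketch, not the INVERTS code; the Thomas tridiagonal solver and the Dirichlet boundaries are assumptions made for brevity.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b, super-diagonal c)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_step(u, D, dx, dt):
    """One Crank-Nicolson step of u_t = D u_xx on a uniform grid: the implicit
    and explicit stencils are averaged (second order in time and space).
    End values are held fixed (Dirichlet boundaries)."""
    r = D * dt / (2.0 * dx * dx)
    n = len(u)
    a = [0.0] + [-r] * (n - 2) + [0.0]
    b = [1.0] + [1.0 + 2.0 * r] * (n - 2) + [1.0]
    c = [0.0] + [-r] * (n - 2) + [0.0]
    d = ([u[0]]
         + [r * u[i - 1] + (1.0 - 2.0 * r) * u[i] + r * u[i + 1]
            for i in range(1, n - 1)]
         + [u[-1]])
    return thomas(a, b, c, d)
```

Averaging the old and new time levels is what gives the scheme its second-order accuracy in time while remaining unconditionally stable for diffusion.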
Thomas, Edward V.; Stork, Christopher L.; Mattingly, John K.
2015-07-01
Inverse radiation transport focuses on identifying the configuration of an unknown radiation source given its observed radiation signatures. The inverse problem is traditionally solved by finding the set of transport model parameter values that minimizes a weighted sum of the squared differences by channel between the observed signature and the signature predicted by the hypothesized model parameters. The weights are inversely proportional to the sum of the variances of the measurement and model errors at a given channel. The traditional implicit (often inaccurate) assumption is that the errors (differences between the modeled and observed radiation signatures) are independent across channels. Here, an alternative method that accounts for correlated errors between channels is described and illustrated using an inverse problem based on the combination of gamma and neutron multiplicity counting measurements.
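The difference between the traditional diagonal weighting and a misfit that accounts for channel-to-channel correlation can be sketched with generalized least-squares objectives. This is a generic illustration under assumed covariances, not the authors' implementation.

```python
def chi2_correlated(residual, cov):
    """Generalized least-squares misfit r^T C^-1 r for error covariance C,
    accounting for channel-to-channel correlation.  Solves C y = r by Gaussian
    elimination with partial pivoting, then returns the dot product r . y."""
    n = len(residual)
    M = [row[:] + [ri] for row, ri in zip(cov, residual)]
    for col in range(n):
        piv = max(range(col, n), key=lambda k: abs(M[k][col]))
        M[col], M[piv] = M[piv], M[col]
        for k in range(col + 1, n):
            f = M[k][col] / M[col][col]
            for c in range(col, n + 1):
                M[k][c] -= f * M[col][c]
    y = [0.0] * n
    for k in range(n - 1, -1, -1):
        y[k] = (M[k][n] - sum(M[k][c] * y[c] for c in range(k + 1, n))) / M[k][k]
    return sum(ri * yi for ri, yi in zip(residual, y))

def chi2_diagonal(residual, variances):
    """Traditional weighted sum of squares, assuming independent channel errors."""
    return sum(r * r / v for r, v in zip(residual, variances))
```

With residual [1, 1] and a 0.9 correlation between the two channels, the correlated misfit is about 1.05 versus 2.0 under the independence assumption: same-sign correlated errors are down-weighted, which changes which source configuration fits best.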
FOREWORD: 3rd International Workshop on New Computational Methods for Inverse Problems (NCMIP 2013)
NASA Astrophysics Data System (ADS)
Blanc-Féraud, Laure; Joubert, Pierre-Yves
2013-10-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 3rd International Workshop on New Computational Methods for Inverse Problems, NCMIP 2013 (http://www.farman.ens-cachan.fr/NCMIP_2013.html). This workshop took place at Ecole Normale Supérieure de Cachan, in Cachan, France, on 22 May 2013, at the initiative of Institut Farman. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of the ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 (http://www.farman.ens-cachan.fr/NCMIP_2012.html). The NCMIP Workshop focused on recent advances in the resolution of inverse problems. Indeed inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational
FOREWORD: 2nd International Workshop on New Computational Methods for Inverse Problems (NCMIP 2012)
NASA Astrophysics Data System (ADS)
Blanc-Féraud, Laure; Joubert, Pierre-Yves
2012-09-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 2nd International Workshop on New Computational Methods for Inverse Problems, (NCMIP 2012). This workshop took place at Ecole Normale Supérieure de Cachan, in Cachan, France, on 15 May 2012, at the initiative of Institut Farman. The first edition of NCMIP also took place in Cachan, France, within the scope of the ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/). The NCMIP Workshop focused on recent advances in the resolution of inverse problems. Indeed inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition
NASA Astrophysics Data System (ADS)
Braun, Douglas; Birch, A.; Rempel, M.; Duvall, T., Jr.
2011-05-01
Controversy exists in the interpretation and modeling of helioseismic signals in and around magnetic regions like sunspots. We show the results of applying local helioseismic inversions to travel-time shift measurements from realistic magnetoconvective sunspot simulations. We compare travel-time maps made from several simulations, made using different measurement methods (helioseismic holography and center-annulus time-distance helioseismology), and made from real sunspots observed with the HMI instrument onboard the Solar Dynamics Observatory. We find remarkable similarities between the travel-time perturbations measured: 1) from simulations extending both 8 and 16 Mm deep, 2) with either methodology (holography or time-distance), and 3) from the simulated and the real sunspots. The application of RLS inversions, using Born-approximation kernels, to narrow frequency-band travel-time shifts from the simulations demonstrates that standard methods fail to reliably reproduce the true wave-speed structure. These findings emphasize the need for new methods for inferring the subsurface structure of active regions. Artificial Dopplergrams from our simulations are available to the community at www.hao.ucar.edu under "Data" and "Sunspot Models." This work is supported by NASA under the SDO Science Center project (contract NNH09CE41C).
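The RLS (regularized least squares) inversion referred to can be sketched in its generic Tikhonov form. This is illustrative only: the actual helioseismic inversions use Born-approximation sensitivity kernels for the matrix rows, whereas the matrix, data, and damping parameter below are toy assumptions.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (adequate for small systems)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rls_inversion(A, b, lam):
    """Tikhonov-regularized least squares: minimize |A x - b|^2 + lam * |x|^2
    by solving the normal equations (A^T A + lam * I) x = A^T b."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    return solve(AtA, Atb)
```

Increasing lam damps the solution norm, trading data fit for stability; that trade-off is how RLS copes with the ill-posedness of travel-time inversion, and is also why the inferred structure can differ from the true one.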
A method of fast, sequential experimental design for linearized geophysical inverse problems
NASA Astrophysics Data System (ADS)
Coles, Darrell A.; Morgan, Frank Dale
2009-07-01
An algorithm for linear(ized) experimental design is developed for a determinant-based design objective function. This objective function is common in design theory and is used to design experiments that minimize the model entropy, a measure of posterior model uncertainty. Of primary significance in design problems is computational expediency. Several earlier papers have focused attention on posing design objective functions and opted to use global search methods for finding the critical points of these functions, but these algorithms are too slow to be practical. The proposed technique is distinguished primarily for its computational efficiency, which derives partly from a greedy optimization approach, termed sequential design. Computational efficiency is further enhanced through formulae for updating determinants and matrix inverses without need for direct calculation. The design approach is orders of magnitude faster than a genetic algorithm applied to the same design problem. However, greedy optimization often trades global optimality for increased computational speed; the ramifications of this tradeoff are discussed. The design methodology is demonstrated on a simple, single-borehole DC electrical resistivity problem. Designed surveys are compared with random and standard surveys, both with and without prior information. All surveys were compared with respect to a `relative quality' measure, the post-inversion model per cent rms error. The issue of design for inherently ill-posed inverse problems is considered and an approach for circumventing such problems is proposed. The design algorithm is also applied in an adaptive manner, with excellent results suggesting that smart, compact experiments can be designed in real time.
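The greedy, determinant-based sequential selection described above can be sketched as follows. The candidate rows, ridge initialization, and selection size are assumptions for illustration; in the paper the candidates are linearized sensitivity rows of the geophysical forward model.

```python
def greedy_design(candidates, n_select, ridge=1e-6):
    """Greedy D-optimal design: sequentially pick the candidate observation row g
    maximizing det(A + g g^T) = det(A) * (1 + g^T A^-1 g) (matrix determinant
    lemma), so each step needs only the current inverse, which is updated by
    Sherman-Morrison rather than recomputed directly."""
    p = len(candidates[0])
    # start from a small ridge so the initial information matrix is invertible
    Ainv = [[(1.0 / ridge if i == j else 0.0) for j in range(p)] for i in range(p)]
    chosen, remaining = [], list(range(len(candidates)))

    def quad(idx):  # g^T A^-1 g for candidate idx
        g = candidates[idx]
        Ag = [sum(Ainv[i][j] * g[j] for j in range(p)) for i in range(p)]
        return sum(g[i] * Ag[i] for i in range(p))

    for _ in range(n_select):
        best = max(remaining, key=quad)
        remaining.remove(best)
        chosen.append(best)
        g = candidates[best]
        Ag = [sum(Ainv[i][j] * g[j] for j in range(p)) for i in range(p)]
        denom = 1.0 + sum(g[i] * Ag[i] for i in range(p))
        for i in range(p):
            for j in range(p):
                Ainv[i][j] -= Ag[i] * Ag[j] / denom  # Sherman-Morrison update
    return chosen
```

After an observation aligned with one parameter direction is chosen, candidates probing that same direction gain little determinant, so the greedy rule naturally picks complementary observations, which is the source of the algorithm's speed over global search.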
Analysis for Cellinoid shape model in inverse process from lightcurves
NASA Astrophysics Data System (ADS)
Lu, Xiao-Ping; Ip, Wing-Huen; Huang, Xiang-Jie; Zhao, Hai-Bin
2017-01-01
Based on the special shape first introduced by Alberto Cellino, which consists of eight ellipsoidal octants with the constraint that adjacent octants must have two identical semi-axes, an efficient algorithm to derive the physical parameters, such as the rotational period, pole orientation, and overall shape, from either lightcurves or sparse photometric data of asteroids was developed by Lu et al. and named the 'Cellinoid' shape model. To thoroughly investigate the relationship between the morphology of the synthetic lightcurves generated by the Cellinoid shape and its six semi-axes, as well as the rotational period and pole, numerical tests are carried out in this article comparing the synthetic lightcurves generated by three Cellinoid models with different parameters. Furthermore, from the synthetic lightcurves generated by two convex shape models of (6) Hebe and (4179) Toutatis, the inverse process based on the Cellinoid shape model is applied to search for the best-fit parameters. In particular, to better simulate real observations, the synthetic lightcurves are generated under the orbital constraints of the two asteroids. By comparing the results derived from synthetic lightcurves observed in one apparition and in multiple apparitions, the performance of the Cellinoid shape model is confirmed and suggestions for observations are presented. Finally, the whole process is also applied to real observed lightcurves of (433) Eros, and the derived results are consistent with the known results.
Unified dark energy-dark matter model with inverse quintessence
Ansoldi, Stefano; Guendelman, Eduardo I. E-mail: guendel@bgu.ac.il
2013-05-01
We consider a model where both dark energy and dark matter originate from the coupling of a scalar field with a non-canonical kinetic term to both a metric measure and a non-metric measure. An interacting dark energy/dark matter scenario can be obtained by introducing an additional scalar that can produce non-constant vacuum energy and associated variations in dark matter. The phenomenology is most interesting when the kinetic term of the additional scalar field is ghost-type, since in this case the dark energy vanishes in the early universe and then grows with time. This constitutes an ''inverse quintessence scenario'', where the universe starts from a zero vacuum energy density state, instead of approaching it in the future.
Inverse magnetic catalysis in holographic models of QCD
NASA Astrophysics Data System (ADS)
Mamo, Kiminad A.
2015-05-01
We study the effect of a magnetic field B on the critical temperature Tc of the confinement-deconfinement phase transition in hard-wall AdS/QCD, and in holographic duals of flavored and unflavored super-Yang-Mills theories on . For all of the holographic models, we find that Tc(B) decreases with increasing magnetic field B ≪ T², consistent with the inverse magnetic catalysis recently observed in lattice QCD for B ≲ 1 GeV². We also predict that, for large magnetic field B ≫ T², the critical temperature Tc(B) eventually starts to increase with increasing magnetic field and asymptotes to a constant value.
Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace
Zhang, Cheng; Lai, Chun-Liang; Pettitt, B. Montgomery
2016-01-01
The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal error. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace (DIIS). We give examples from a lattice model, a simple liquid, and an aqueous protein solution. PMID:27453632
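A minimal sketch of direct inversion in the iterative subspace (DIIS) applied to a generic vector fixed-point iteration is shown below. This is illustrative only: the paper applies the idea to the WHAM/MBAR self-consistency equations, while the history size and the linear test map here are assumptions.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (adequate for small systems)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def diis_fixed_point(g, x0, max_hist=3, tol=1e-10, max_iter=50):
    """Accelerate the fixed-point iteration x <- g(x) by DIIS: extrapolate with
    coefficients c_i (sum c_i = 1) chosen to minimize |sum_i c_i r_i|, where
    r_i = g(x_i) - x_i are the stored residuals."""
    xs, rs = [], []
    x = list(x0)
    for _ in range(max_iter):
        gx = g(x)
        r = [gi - xi for gi, xi in zip(gx, x)]
        if max(abs(v) for v in r) < tol:
            return x
        xs = (xs + [x])[-max_hist:]
        rs = (rs + [r])[-max_hist:]
        m = len(rs)
        if m == 1:
            x = gx  # plain fixed-point step until a history exists
            continue
        # Lagrange system for the constrained minimization:
        # [[B, 1], [1^T, 0]] [c; lam] = [0; 1], with B_ij = <r_i, r_j>
        B = [[sum(p * q for p, q in zip(rs[i], rs[j])) for j in range(m)]
             for i in range(m)]
        M = [B[i] + [1.0] for i in range(m)] + [[1.0] * m + [0.0]]
        c = solve(M, [0.0] * m + [1.0])[:m]
        x = [sum(c[i] * (xs[i][k] + rs[i][k]) for i in range(m))
             for k in range(len(x))]
    return x
```

For a contraction with slowly decaying residuals, the extrapolated combination cancels the leading error components, which is the same mechanism that speeds up the WHAM/MBAR self-consistency loop.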
Advanced Multivariate Inversion Techniques for High Resolution 3D Geophysical Modeling (Invited)
NASA Astrophysics Data System (ADS)
Maceira, M.; Zhang, H.; Rowe, C. A.
2009-12-01
We focus on the development and application of advanced multivariate inversion techniques to generate a realistic, comprehensive, and high-resolution 3D model of the seismic structure of the crust and upper mantle that satisfies several independent geophysical datasets. Building on previous joint inversion efforts using surface wave dispersion measurements, gravity data, and receiver functions, we have added a fourth dataset, seismic body wave P and S travel times, to the simultaneous joint inversion method. We present a 3D seismic velocity model of the crust and upper mantle of northwest China resulting from the simultaneous, joint inversion of these four data types. Surface wave dispersion measurements are primarily sensitive to seismic shear-wave velocities, but at shallow depths it is difficult to obtain high-resolution velocities and to constrain the structure due to the depth-averaging of the more easily modeled, longer-period surface waves. Gravity inversions have the greatest resolving power at shallow depths, and they provide constraints on rock density variations. Moreover, while surface wave dispersion measurements are primarily sensitive to vertical shear-wave velocity averages, body wave receiver functions are sensitive to shear-wave velocity contrasts and vertical travel times. Addition of the fourth dataset, consisting of seismic travel-time data, helps to constrain the shear wave velocities both vertically and horizontally in the model cells crossed by the ray paths. Incorporation of both P and S body wave travel times allows us to invert for both P and S velocity structure, capitalizing on empirical relationships between both wave types' seismic velocities and rock densities, thus eliminating the need for ad hoc assumptions regarding the Poisson ratios. Our new tomography algorithm is a modification of the Maceira and Ammon joint inversion code, in combination with the Zhang and Thurber TomoDD (double-difference tomography) program.
Zhang, Lin; Baladandayuthapani, Veerabhadran; Mallick, Bani K.; Manyam, Ganiraju C.; Thompson, Patricia A.; Bondy, Melissa L.; Do, Kim-Anh
2015-01-01
The analysis of alterations that may occur in nature when segments of chromosomes are copied (known as copy number alterations) has been a focus of research to identify genetic markers of cancer. One high-throughput technique recently adopted is the use of molecular inversion probes (MIPs) to measure probe copy number changes. The resulting data consist of high-dimensional copy number profiles that can be used to ascertain probe-specific copy number alterations in correlative studies with patient outcomes to guide risk stratification and future treatment. We propose a novel Bayesian variable selection method, the hierarchical structured variable selection (HSVS) method, which accounts for the natural gene and probe-within-gene architecture to identify important genes and probes associated with clinically relevant outcomes. We propose the HSVS model for grouped variable selection, where simultaneous selection of both groups and within-group variables is of interest. The HSVS model utilizes a discrete mixture prior distribution for group selection and group-specific Bayesian lasso hierarchies for variable selection within groups. We provide methods for accounting for serial correlations within groups that incorporate Bayesian fused lasso methods for within-group selection. Through simulations we establish that our method results in lower model errors than other methods when a natural grouping structure exists. We apply our method to an MIP study of breast cancer and show that it identifies genes and probes that are significantly associated with clinically relevant subtypes of breast cancer. PMID:25705056
Inverse scattering for the one-dimensional Helmholtz equation: fast numerical method.
Belai, Oleg V; Frumin, Leonid L; Podivilov, Evgeny V; Shapiro, David A
2008-09-15
The inverse scattering problem for the one-dimensional Helmholtz wave equation is studied. The equation is reduced to a Fresnel set that describes multiple bulk reflection and is similar to the coupled-wave equations. The inverse scattering problem is equivalent to coupled Gel'fand-Levitan-Marchenko integral equations. In the discrete representation its matrix has Toeplitz symmetry, and the fast inner bordering method can be applied for its inversion. Previously the method was developed for the design of fiber Bragg gratings. The testing example of a short Bragg reflector with deep modulation demonstrates the high efficiency of refractive-index reconstruction.
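The computational gain comes from exploiting Toeplitz symmetry in the discretized system. A generic fast Toeplitz solve can be sketched as follows (an illustrative stand-in kernel, not the authors' inner bordering code):

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# Hypothetical discretized integral-equation system T x = b, where the
# Toeplitz matrix T is defined by its first column c and first row r.
rng = np.random.default_rng(0)
n = 200
c = 0.5 ** np.arange(n)   # first column: a decaying, made-up kernel
r = 0.5 ** np.arange(n)   # first row (symmetric kernel in this sketch)
b = rng.normal(size=n)

x_fast = solve_toeplitz((c, r), b)           # O(n^2) Levinson-type solver
x_ref = np.linalg.solve(toeplitz(c, r), b)   # O(n^3) dense reference
assert np.allclose(x_fast, x_ref)
```

A structure-exploiting solver of this kind reduces the per-inversion cost from cubic to quadratic in the number of grid points, which is what makes repeated refractive-index reconstruction practical.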
NASA Astrophysics Data System (ADS)
Koepke, C.; Irving, J.; Roubinet, D.
2014-12-01
Geophysical methods have gained much interest in hydrology over the past two decades because of their ability to provide estimates of the spatial distribution of subsurface properties at a scale that is often relevant to key hydrological processes. Because of an increased desire to quantify uncertainty in hydrological predictions, many hydrogeophysical inverse problems have recently been posed within a Bayesian framework, such that estimates of hydrological properties and their corresponding uncertainties can be obtained. With the Bayesian approach, it is often necessary to make significant approximations to the associated hydrological and geophysical forward models so that stochastic sampling from the posterior distribution, for example using Markov-chain-Monte-Carlo (MCMC) methods, is computationally feasible. These approximations lead to model structural errors, which, so far, have not been properly treated in hydrogeophysical inverse problems. Here, we study the inverse problem of estimating unsaturated hydraulic properties, namely the van Genuchten-Mualem (VGM) parameters, in a layered subsurface from time-lapse, zero-offset-profile (ZOP) ground penetrating radar (GPR) data collected over the course of an infiltration experiment. In particular, we investigate the effects of assumptions made for computational tractability of the stochastic inversion on model prediction errors as a function of depth and time. These assumptions are that (i) infiltration is purely vertical and can be modeled by the 1D Richards equation, and (ii) the petrophysical relationship between water content and relative dielectric permittivity is known. Results indicate that model errors for this problem are far from Gaussian and independent and identically distributed, which has been the common assumption in previous efforts in this domain. In order to develop a more appropriate likelihood formulation, we use (i) a stochastic description of the model error that is obtained through
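The stochastic sampling step central to such Bayesian inversions can be illustrated with a minimal random-walk Metropolis sampler, here run on a toy 2D Gaussian posterior (a generic sketch, not the authors' hydrogeophysical inversion):

```python
import numpy as np

def metropolis(log_post, theta0, n_iter=5000, step=0.5, seed=0):
    """Minimal random-walk Metropolis sampler (illustrative only)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.normal(size=theta.size)
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Toy posterior: standard 2D normal; start far from the mode.
chain = metropolis(lambda t: -0.5 * t @ t, [3.0, -3.0])
post_mean = chain[1000:].mean(axis=0)  # should be close to [0, 0]
```

In a real hydrogeophysical application the `log_post` call would wrap a (possibly approximate) Richards-equation and GPR forward model, which is exactly where the structural model errors discussed above enter.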
Three-dimensional modeling of Mount Vesuvius with sequential integrated inversion
NASA Astrophysics Data System (ADS)
Tondi, Rosaria; de Franco, Roberto
2003-05-01
A new image of Mount Vesuvius and the surrounding area is recovered from the tomographic inversion of 693 first P wave arrivals recorded by 314 receivers deployed along five profiles which intersect the crater, together with gravity data collected at 17,598 stations on land and offshore. The final three-dimensional (3-D) velocity model presented here is determined by interpolation of five 2-D velocity sections obtained from sequential integrated inversion (SII) of seismic and gravity data. The inversion procedure adopts the "maximum likelihood" scheme in order to jointly optimize seismic velocities and densities. In this way we recover velocity and density models consistent with both the seismic and gravity data. The parameterization of these 2-D models is chosen in order to keep the diagonal elements of the seismic resolution matrix on the order of 0.2-0.8. The highest values of resolution are detected under the volcano edifice. The imaged 6-km-thick crustal volume underlies a 25 × 45 km² area. The interpolation is performed by choosing an appropriate grid for a smoothing algorithm which prepares optimum models for asymptotic ray theory methods. Hence this model can be used as a reference model for a 3-D tomographic inversion of seismic data. The 3-D gravity modeling is straightforward. The results of this study clearly image the continuous structure of the Mesozoic carbonate basement top and the connection of the volcano conduit structure to two shallow depressions, which in terms of hazard prevention are the regions through which magma may more easily flow toward the surface and cause possible eruptions.
NASA Astrophysics Data System (ADS)
Kachar, H.; Mobasheri, M. R.; Abkar, A. A.; Rahim Zadegan, M.
2015-12-01
An increase of temperature with height in the troposphere is called a temperature inversion. Parameters such as strength and depth characterize a temperature inversion: inversion strength is defined as the temperature difference between the surface and the top of the inversion, and inversion depth as the height of the inversion above the surface. The common approach to determining these parameters is the use of radiosondes, but these measurements are too sparse. The main objective of this study is the detection and modeling of temperature inversions using MODIS thermal infrared data. There are more than 180 days per year on which temperature inversion conditions are present in the city of Kermanshah, so the Kermanshah weather station was selected as the study area. Ninety inversion days were selected from 2007 to 2008 on which the sky was clear and radiosonde data were available. Brightness temperature for all thermal infrared bands of MODIS was calculated for these days. The brightness temperature difference between each of the thermal infrared bands of MODIS and band 31 was found to be sensitive to the strength and depth of the temperature inversion. Correlation coefficients between these band-difference pairs and the inversion depth and strength, both calculated from the radiosonde data, were then evaluated. The results showed poor linear correlation, which was found to be due to changes in atmospheric water vapor content and the relatively weak inversion strengths and depths occurring in Kermanshah. Polynomial mathematical models and artificial intelligence algorithms were therefore deployed for detecting and modeling the temperature inversion, yielding a model with the fewest terms and the highest possible accuracy. The model was tested using 20 independent test data. Results indicate that inversion strength can be estimated with an RMSE of 0.84 °C and R2 of 0.90, and inversion depth with an RMSE of 54.56 m and R2 of 0.86.
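A minimal sketch of the polynomial-regression step, relating a band brightness-temperature difference to inversion strength and scoring held-out RMSE (the data and the quadratic relation below are synthetic, not the Kermanshah dataset):

```python
import numpy as np

# Synthetic illustration: a hypothetical quadratic link between a MODIS
# band brightness-temperature difference dBT (K) and inversion strength (°C),
# plus noise; fit on 100 days and test on 20 independent days.
rng = np.random.default_rng(1)
dbt = rng.uniform(-2, 4, 120)
strength = 1.5 + 0.8 * dbt + 0.1 * dbt**2 + rng.normal(0, 0.3, 120)

train, test = slice(0, 100), slice(100, 120)
coeffs = np.polyfit(dbt[train], strength[train], deg=2)   # low-order polynomial model
pred = np.polyval(coeffs, dbt[test])
rmse = np.sqrt(np.mean((pred - strength[test]) ** 2))     # held-out RMSE, in °C
```

Keeping the polynomial degree low, as in the paper's "lowest terms" criterion, guards against overfitting when only tens of inversion days are available.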
An inverse problem approach to modelling coastal effluent plumes
NASA Astrophysics Data System (ADS)
Lam, D. C. L.; Murthy, C. R.; Miners, K. C.
Formulated as an inverse problem, the diffusion parameters associated with length-scale dependent eddy diffusivities can be viewed as the unknowns in the mass conservation equation for coastal zone transport problems. The values of the diffusion parameters can be optimized according to an error function incorporating observed concentration data. Examples are given for the Fickian, shear diffusion and inertial subrange diffusion models. Based on a new set of dye-plume data collected in the coastal zone off Bronte, Lake Ontario, it is shown that the predictions of turbulence closure models can be evaluated for different flow conditions. The choice of computational schemes for this diagnostic approach is based on tests with analytic solutions and observed data. It is found that the optimized shear diffusion model produced better agreement with observations for both high and low advective flows than, e.g., the unoptimized semi-empirical model, K_y = 0.075 σ_y^1.2, described by Murthy and Kenney.
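The diagnostic optimization described above, fitting the parameters of a power-law eddy diffusivity to observations by minimizing an error function, can be sketched as a least-squares fit (all numbers are made up for illustration, not the Lake Ontario data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative fit of K_y = a * sigma_y**b against synthetic "observed"
# diffusivities, generated from the semi-empirical form with 5% noise.
rng = np.random.default_rng(5)
sigma_y = np.linspace(50, 500, 40)                       # plume widths (m)
K_obs = 0.075 * sigma_y**1.2 * rng.lognormal(0, 0.05, 40)

def power_law(s, a, b):
    return a * s**b

(a, b), _ = curve_fit(power_law, sigma_y, K_obs, p0=(0.1, 1.0))
# a, b are the optimized diffusion parameters for this toy dataset.
```

In the paper's setting the error function compares modeled and observed concentrations rather than diffusivities directly, but the optimization structure is the same.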
Yavari, Fatemeh; Mahdavi, Shirin; Towhidkhah, Farzad; Ahmadi-Pajouh, Mohammad-Ali; Ekhtiari, Hamed; Darainy, Mohammad
2016-04-01
Despite several pieces of evidence suggesting that the human brain employs internal models for motor control and learning, the location of these models in the brain is not yet clear. In this study, we used transcranial direct current stimulation (tDCS) to manipulate right cerebellar function while subjects adapted to a visuomotor task. We investigated the effect of this manipulation on the internal forward and inverse models by measuring two kinds of behavior: generalization of training in one direction to neighboring directions (as a proxy for the inverse model) and localization of the hand position after movement without visual feedback (as a proxy for the forward model). The experimental results showed no effect of cerebellar tDCS on generalization, but a significant effect on localization. These observations support the idea that the cerebellum is a possible brain region for internal forward, but not inverse, model formation. We also used a realistic human head model to calculate the current density distribution in the brain; the result confirmed the passage of current through the cerebellum. Moreover, to further explain some of the observed experimental results, we modeled the visuomotor adaptation process with the help of a biologically inspired method known as population coding, with the effect of tDCS incorporated into the model. The results of this modeling study closely match our experimental data and provide further evidence in line with the idea that tDCS manipulates forward-model function in the cerebellum.
Development of direct-inverse 3-D methods for applied aerodynamic design and analysis
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1988-01-01
Several inverse methods have been compared and initial results indicate that differences in results are primarily due to coordinate systems and fuselage representations and not to design procedures. Further, results from a direct-inverse method that includes 3-D wing boundary layer effects, wake curvature, and wake displacement are presented. These results show that boundary layer displacements must be included in the design process for accurate results.
Resampling: An optimization method for inverse planning in robotic radiosurgery
Schweikard, Achim; Schlaefer, Alexander; Adler, John R. Jr.
2006-11-15
By design, the range of beam directions in conventional radiosurgery is constrained to an isocentric array. However, the recent introduction of robotic radiosurgery dramatically increases the flexibility of targeting, and as a consequence, beams need be neither coplanar nor isocentric. Such a nonisocentric design permits a large number of distinct beam directions to be used in a single treatment. These major technical differences provide an opportunity to improve upon the well-established principles for treatment planning used with GammaKnife or LINAC radiosurgery. With this objective in mind, our group has developed over the past decade an inverse planning tool for robotic radiosurgery. This system first computes a set of beam directions, and then, during an optimization step, weights each individual beam. Optimization begins with a feasibility query, the answer to which is derived through linear programming. This approach offers the advantage of completeness and avoids local optima. Final beam selection is based on heuristics. In this report we present and evaluate a new strategy for utilizing the advantages of linear programming to improve beam selection. Starting from an initial solution, a heuristically determined set of beams is added to the optimization problem, while beams with zero weight are removed. This process is repeated to sample a set of beams much larger than in typical optimization. Experimental results indicate that the planning approach efficiently finds acceptable plans and that resampling can further improve its efficiency.
Time-Filtered Inverse Modeling of Land-Atmosphere Carbon Exchange
NASA Astrophysics Data System (ADS)
Geyer, N. M.; Denning, S.; Haynes, K. D.
2015-12-01
The sources and sinks of biospheric carbon dioxide represent one of the least understood and most critical processes in carbon science. Since the 1990s, carbon dioxide inversion models have estimated the magnitude, location, and uncertainty of carbon sources and sinks. These inversions are underconstrained estimation problems that employ aggressive statistical regularizations in both space and time to estimate quantities like net ecosystem exchange (NEE) on weekly timescales over fine spatial scales. We developed and tested a new method focusing observational constraints on the estimation of corrections to slowly varying biospheric processes, which control time-averaged sources and sinks. Rather than estimate weekly additive corrections to NEE, we estimate persistent multiplicative biases to the time mean and several seasonal harmonics of gross primary production (GPP) and total respiration (RESP). We tested the new method by estimating corrections to simulated component fluxes from the Simple Biosphere Model 4 (SiB4) using observations from 8 different eddy-covariance flux towers selected from the North American Carbon Program (NACP) site synthesis dataset. The time-filtering method correctly estimates both the net and component fluxes and is more robust to observational uncertainty than a control experiment meant to represent current global inversions. Furthermore, the new method is flexible enough to separately estimate the component fluxes (GPP and RESP) using additional observational constraints, even with a high degree of uncertainty.
An inverse method was developed to integrate satellite observations of atmospheric pollutant column concentrations and direct sensitivities predicted by a regional air quality model in order to discern biases in the emissions of the pollutant precursors.
NASA Astrophysics Data System (ADS)
Yin, Zhi; Xu, Caijun; Wen, Yangmao; Jiang, Guoyan; Fan, Qingbiao; Liu, Yang
2016-05-01
Planar faults are widely adopted during inversions to determine slip distributions and fault geometries using geodetic observations; however, little research has been conducted with respect to curved faults. We attribute this to the lack of an appropriate parameterized modelling method. In this paper, we present a curved-fault modelling method (CFMM) that describes a curved fault according to specific parameters, and we also develop a corresponding hybrid iterative inversion algorithm (HIIA) to perform inversions for parametric curved-fault geometries and slips. The results of the strike-component and dip-component synthetic tests show that a complex S-shaped fault surface and a circular slip distribution are successfully recovered, indicating the strong performance of the CFMM and HIIA methods. In addition, we describe and verify a scenario for determining the number of necessary geometrical parameters for the HIIA and examine the case study of the Wenchuan earthquake, which occurred on a complex listric fault surface. During the iteration process of the HIIA, both the fault geometry and slip distribution of the Beichuan and Pengguan faults converge to optimal values, indicating a Beichuan fault (BCF) model with a continuous listric shape and gradual steepening from the southwest to the northeast, which is highly consistent with geological survey results. Both the synthetic and real-world case studies show that the HIIA and the CFMM are superior to the conventional fault modelling method based on rectangular planes and that these models have the potential for use in more integrated research involving inversion studies, such as joint slip/curved-fault-geometry inversions that take into account data resolving power.
Age-dependent forest carbon sink: Estimation via inverse modeling
NASA Astrophysics Data System (ADS)
Zhou, Tao; Shi, Peijun; Jia, Gensuo; Dai, Yongjiu; Zhao, Xiang; Shangguan, Wei; Du, Ling; Wu, Hao; Luo, Yiqi
2015-12-01
Forests have been recognized to sequester a substantial amount of carbon (C) from the atmosphere. However, considerable uncertainty remains regarding the magnitude and time course of the C sink. Revealing the intrinsic relationship between forest age and C sink is crucial for reducing uncertainties in predictions of forest C sink potential. In this study, we developed a stepwise data assimilation approach that combines a process-based Terrestrial ECOsystem Regional model, observations from multiple sources, and stochastic sampling to inversely estimate carbon cycle parameters, including the carbon sink at different forest ages, for evergreen needle-leaved forests in China. The new approach is effective in estimating the age-dependent parameter of maximal light-use efficiency (R2 = 0.99) and, accordingly, can quantify a relationship between forest age and the vegetation and soil C sinks. The estimated ecosystem C sink increases rapidly with age, peaks at 0.451 kg C m^-2 yr^-1 at age 22 years (ranging from 0.421 to 0.465 kg C m^-2 yr^-1), and gradually decreases thereafter. The dynamic patterns of the C sinks in vegetation and soil are significantly different. The C sink in vegetation first increases rapidly with age and then decreases. The C sink in soil, however, increases continuously with age; the soil acts as a C source when the age is less than 20 years, after which it acts as a sink. For the evergreen needle-leaved forest, the highest C sink efficiency (i.e., C sink per unit net primary productivity) is approximately 60%, attained at ages between 11 and 43 years. Overall, the inverse estimation of carbon cycle parameters yields reasonable estimates of age-dependent C sequestration in forests.
Numerical modeling of axi-symmetrical cold forging process by ``Pseudo Inverse Approach''
NASA Astrophysics Data System (ADS)
Halouani, A.; Li, Y. M.; Abbes, B.; Guo, Y. Q.
2011-05-01
The incremental approach is widely used for forging process modeling; it gives good strain and stress estimates but is time consuming. A fast Inverse Approach (IA) has been developed for axi-symmetric cold forging modeling [1-2]. This approach makes maximum use of the knowledge of the final part's shape, and the assumptions of proportional loading and simplified tool actions make the IA simulation very fast. The IA has proved very useful for tool design and optimization because of its rapidity and good strain estimation. However, the assumptions mentioned above cannot provide good stress estimates because the loading history is neglected. A new approach called the "Pseudo Inverse Approach" (PIA) was proposed by Batoz, Guo et al. [3] for sheet forming modeling, which keeps the IA's advantages but gives good stress estimates by taking the loading history into consideration. Our aim in this paper is to adapt the PIA to cold forging modeling. The main developments in the PIA are summarized as follows: a few intermediate configurations are generated for the given tool positions to account for the deformation history; the strain increment is calculated by the inverse method between the previous and current configurations; and an incremental algorithm of plastic integration is used in the PIA instead of the total constitutive law used in the IA. An example is used to show the effectiveness and limitations of the PIA for cold forging process modeling.
Inverse Optimization: A New Perspective on the Black-Litterman Model
Bertsimas, Dimitris; Gupta, Vishal; Paschalidis, Ioannis Ch.
2014-01-01
The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct “BL”-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new “BL”-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views. PMID:25382873
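The inverse-optimization perspective can be illustrated with the classical reverse-optimization step underlying the BL model: treating observed market weights as the optimum of a mean-variance problem and solving backwards for the implied equilibrium returns (all numbers below are illustrative):

```python
import numpy as np

# Reverse optimization: given market-cap weights w_mkt, covariance Sigma,
# and risk-aversion coefficient delta, the implied equilibrium excess
# returns are pi = delta * Sigma @ w_mkt.
delta = 2.5
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
w_mkt = np.array([0.5, 0.3, 0.2])

pi = delta * Sigma @ w_mkt

# Sanity check: plugging pi back into the unconstrained mean-variance
# problem, w* = (1/delta) * Sigma^{-1} pi, recovers the market weights.
w_star = np.linalg.solve(Sigma, pi) / delta
assert np.allclose(w_star, w_mkt)
```

The paper's contribution generalizes exactly this backward inference, replacing the mean-variance objective with richer risk measures while keeping the "infer the objective from the observed optimum" structure.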
Lehikoinen, A.; Huttunen, J.M.J.; Finsterle, S.; Kowalsky, M.B.; Kaipio, J.P.
2009-08-01
We propose an approach for imaging the dynamics of complex hydrological processes. The evolution of electrically conductive fluids in porous media is imaged using time-lapse electrical resistance tomography. The related dynamic inversion problem is solved using Bayesian filtering techniques, that is, it is formulated as a sequential state estimation problem in which the target is an evolving posterior probability density of the system state. The dynamical inversion framework is based on the state space representation of the system, which involves the construction of a stochastic evolution model and an observation model. The observation model used in this paper consists of the complete electrode model for ERT, with Archie's law relating saturations to electrical conductivity. The evolution model is an approximate model for simulating flow through partially saturated porous media. Unavoidable modeling and approximation errors in both the observation and evolution models are considered by computing approximate statistics for these errors. These models are then included in the construction of the posterior probability density of the estimated system state. This approximation error method allows the use of approximate - and therefore computationally efficient - observation and evolution models in the Bayesian filtering. We consider a synthetic example and show that the incorporation of an explicit model for the model uncertainties in the state space representation can yield better estimates than a frame-by-frame imaging approach.
NASA Astrophysics Data System (ADS)
Huang, H.; Meng, D. Q.; Lai, X. C.; Liu, T. W.; Long, Y.; Hu, Q. M.
2014-08-01
The combined interatomic pair potentials of TiZrNi, comprising Morse and inverse Gaussian forms, are successfully built by the lattice inversion method. Some experimental controversies on the atomic occupancies of sites 6-8 in W-TiZrNi are analyzed and settled with these inverted potentials. According to the characteristics of composition and site-preference occupancy of W-TiZrNi, two stable structural models of W-TiZrNi are proposed, and the possibilities are partly confirmed by experimental data. The stability of W-TiZrNi mostly results from the contribution of Zr atoms to the phonon densities of states at lower frequencies.
ERIC Educational Resources Information Center
Ngu, Bing Hiong; Phan, Huy Phuong
2016-01-01
We examined the use of balance and inverse methods in equation solving. The main difference between the balance and inverse methods lies in the operational line (e.g. +2 on both sides vs -2 becomes +2). Differential element interactivity favours the inverse method because the interaction between elements occurs on both sides of the equation for…
NASA Astrophysics Data System (ADS)
Gustafsson, Ove K. S.; Eriksson, Gunnar; Holm, Peter; Waern, Åsa; von Schoenberg, Pontus; Thaning, Lennart; Nordstrand, Melker; Persson, Rolf
2006-09-01
Radio wave propagation over sea paths is influenced by the local meteorological conditions in the atmospheric layer near the surface, especially during ducting. Duct conditions can be determined from measurements of local meteorological parameters, from weather forecast models, or by using inverse methods. In order to evaluate the feasibility of using inverse methods to retrieve refractivity profiles, measurements of RF signals and meteorological parameters were carried out at a test site in the Baltic. During the measurements, the signal power from two broadcast antennas, one at Visby and one at Vastervik, was received at Musko, an island south of Stockholm. The measurements were performed during the summer of 2005, and the data were used to test the software package for inversion methods, SAGA (Seismo-Acoustic inversion using Genetic Algorithms, by Peter Gerstoft, UCSD, US). Refractivity profiles retrieved by SAGA were compared with refractivity profiles calculated, during parts of the experiment, from rocket soundings, radiosonde soundings, and local meteorological measurements using bulk model calculations, and also with profiles obtained from the Swedish operational weather forecast model HIRLAM. Surface-based duct heights are predicted in relatively many situations, although the number of frequencies or antenna heights has to be increased to reduce the ambiguity of the retrieved refractive-index profile.
NASA Astrophysics Data System (ADS)
Ferreira, Carlos; Casari, Pascal; Bouzidi, Rabah; Jacquemin, Frédéric
2006-09-01
The aim of this paper is to investigate the mechanical properties of a PVC foam core and especially the Young's modulus profile across the thickness of a commercial 50 mm beam. The identification of the Young's modulus gradient is realized through a uniaxial compression test of a 50 mm cube sample. The in-plane strain fields of one cube face under loading in both directions (longitudinal and transversal) are obtained using a diffuse-light interferometric technique, speckle interferometry. In addition, a numerical model is built using the finite element code CAST3M. We choose a multilayer model in order to introduce spatial variation of the mechanical properties. The boundary conditions are very close to those prescribed in the experimental tests. Finally, the present work shows that the non-uniform profile of the Young's modulus can be estimated by using a simple inverse method and finite element analysis to reproduce the experimental strain field.
A PC-based inverse design method for radial and mixed flow turbomachinery
NASA Technical Reports Server (NTRS)
Skoe, Ivar Helge
1991-01-01
An inverse design method suitable for radial and mixed flow turbomachinery is presented. The codes are based on the streamline curvature concept and therefore run on current personal computers of the 286/287 class. In addition to the imposed aerodynamic constraints, mechanical constraints are imposed during the design process to ensure that the resulting geometry satisfies production considerations and that structural considerations are taken into account. Through the use of Bezier curves in the geometric modeling, the same subroutine is used to prepare input for both the aerodynamic and structural files, since it is important to ensure that the geometric data are identical for structural analysis and production. To illustrate the method, a mixed flow turbine design is shown.
Lippman, Sheri A.; Shade, Starley B.; Hubbard, Alan E.
2011-01-01
Background Intervention effects estimated from non-randomized intervention studies are plagued by biases, yet social or structural intervention studies are rarely randomized. There are underutilized statistical methods available to mitigate biases due to self-selection, missing data, and confounding in longitudinal, observational data, permitting estimation of causal effects. We demonstrate the use of Inverse Probability Weighting (IPW) to evaluate the effect of participating in a combined clinical and social STI/HIV prevention intervention on the reduction of incident chlamydia and gonorrhea infections among sex workers in Brazil. Methods We demonstrate the step-by-step use of IPW, including presentation of the theoretical background, data set up, model selection for weighting, application of weights, estimation of effects using varied modeling procedures, and discussion of assumptions for use of IPW. Results 420 sex workers contributed data on 840 incident chlamydia and gonorrhea infections. Participants were compared to non-participants following application of inverse probability weights to correct for differences in covariate patterns between exposed and unexposed participants and between those who remained in the intervention and those who were lost to follow-up. Estimators using four model selection procedures provided estimates of the intervention effect between odds ratios (OR) of 0.43 (95% CI: 0.22-0.85) and 0.53 (95% CI: 0.26-1.1). Conclusions After correcting for selection bias, loss to follow-up, and confounding, our analysis suggests a protective effect of participating in the Encontros intervention. Evaluations of behavioral, social, and multi-level interventions to prevent STIs can benefit from the introduction of weighting methods such as IPW. PMID:20375927
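A minimal IPW sketch on simulated data (entirely synthetic, not the study's data) shows how weighting by the inverse of an estimated propensity removes confounding from a naive treated-versus-untreated comparison:

```python
import numpy as np

# Synthetic setup: a confounder x drives both treatment assignment and
# the outcome, so the naive comparison is biased; the true effect is 1.0.
rng = np.random.default_rng(2)
n = 20000
x = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-1.5 * x))             # true propensity P(t=1|x)
t = rng.uniform(size=n) < p_treat
y = 2.0 * x + 1.0 * t + rng.normal(size=n)

# Fit the propensity model by logistic regression (a few Newton/IRLS steps).
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(20):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (t - p))
p_hat = 1 / (1 + np.exp(-X @ beta))

# Inverse probability weights: 1/p for the treated, 1/(1-p) for the untreated.
w = np.where(t, 1 / p_hat, 1 / (1 - p_hat))
naive = y[t].mean() - y[~t].mean()
ipw = np.average(y[t], weights=w[t]) - np.average(y[~t], weights=w[~t])
# naive is badly biased; ipw is close to the true effect of 1.0
```

The same weighting logic extends to loss-to-follow-up (weighting by the inverse probability of remaining observed), which is how the study handles attrition.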
An Adaptive Model of Student Performance Using Inverse Bayes
ERIC Educational Resources Information Center
Lang, Charles
2014-01-01
This article proposes a coherent framework for the use of Inverse Bayesian estimation to summarize and make predictions about student behaviour in adaptive educational settings. The Inverse Bayes Filter utilizes Bayes theorem to estimate the relative impact of contextual factors and internal student factors on student performance using time series…
NASA Astrophysics Data System (ADS)
Rosas Carbajal, Marina; Linde, Niklas; Kalscheurer, Thomas; Vrugt, Jasper
2013-04-01
Stochastic inversions based on Markov chain Monte Carlo (MCMC) methods help to characterize the inherent non-uniqueness of non-linear inverse problems. By stating the inverse problem as an inference problem, the emphasis is placed on sampling the posterior probability density function (PDF) of the model parameters, which comprises all possible models that explain the data and satisfy a priori information. The drawback is that for non-linear problems involving many model parameters, MCMC algorithms may take a long time to converge. This is why most geophysical applications based on MCMC rely on 1D assumptions. We present here the first fully 2D MCMC inversion of radio magnetotelluric (RMT) and electrical resistivity tomography (ERT) data, using up to 300 model parameters. We demonstrate that stochastic inversion of high-dimensional problems necessitates prior constraints on the model structure to yield meaningful results. In particular, we focus on two popular types of regularization: smoothly varying model parameters and compact anomalies. To do so, we invert not only for the PDF of each model parameter, but also for two hyper-parameters: the variance of the data errors and a trade-off between data fit and model structure. The derived model uncertainties are compared with deterministic most-squares inversions, and we analyze how these uncertainties evolve when jointly inverting RMT and ERT data. Finally, we present a field application to characterize the geometry of an aquifer in Sweden. The numerical examples illustrate that model regularization not only decreases the uncertainty of the model parameters, but also accelerates the convergence of the MCMC algorithm. A drawback is that the regularization may lead to posterior PDFs that exclude features of the true model that are insensitive to the data. We also find that joint inversion of different types of geophysical data helps to better constrain the subsurface models. Results of the field data inversions are in
Design optimization of axial flow hydraulic turbine runner: Part I - an improved Q3D inverse method
NASA Astrophysics Data System (ADS)
Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji
2002-06-01
With the aim of constructing a comprehensive design optimization procedure for axial flow hydraulic turbines, an improved quasi-three-dimensional inverse method has been proposed from a system viewpoint, and a set of rotational flow governing equations as well as a blade geometry design equation has been derived. In the inverse method the computation domain is taken from the inlet of the guide vane to the far outlet of the runner blade, and the flows in different regions are solved simultaneously. The influence of wicket gate parameters on the runner blade design can therefore be considered, and the difficulty of defining the flow condition at the runner blade inlet is surmounted. As a pre-computation of the initial blade design on the S2m surface is newly adopted, the iteration between the S1 and S2m surfaces has been reduced greatly and the convergence of the inverse computation has been improved. The present model has been applied to the inverse computation of a Kaplan turbine runner. Experimental results and direct flow analysis have validated the inverse computation. Numerical investigations show that a proper enlargement of the guide vane distribution diameter is advantageous for improving the performance of an axial hydraulic turbine runner.
NASA Astrophysics Data System (ADS)
Cao, Danping; Liao, Wenyuan
2015-03-01
Full waveform inversion (FWI) is a model-based data-fitting technique that has been widely used to estimate model parameters in geophysics. In this work, we propose an efficient computational approach to solve the FWI of crosswell seismic data. The FWI problem is mathematically formulated as a partial differential equation (PDE)-constrained optimization problem, which is numerically solved using a gradient-based optimization method. The efficiency and accuracy of FWI are mainly determined by three main components: forward modeling, gradient calculation, and the model update, which usually involves a gradient-based optimization algorithm. Given the large number of iterations needed by FWI, an accurate gradient is critical for its success, as it will not only speed up the convergence but also increase the accuracy of the solution. However, computing the gradient remains a challenging task even after the adjoint PDE has been derived. Automatic differentiation (AD) tools have proved very effective in a variety of application areas, including geoscience. In this work we investigated the feasibility of integrating TAPENADE, a powerful AD tool, into FWI, so that the FWI workflow is simplified and we can focus on the forward modeling and the model update. For the model update we choose the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method due to its robustness and fast convergence. Numerical experiments have been conducted to demonstrate the effectiveness, efficiency and robustness of the new computational approach for FWI.
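The gradient-based model update described above can be illustrated with a toy least-squares misfit whose adjoint gradient is available in closed form; the matrix forward operator below stands in for the wave-equation modeling and is purely illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Least-squares data fitting with an analytic gradient, minimized by
# L-BFGS-B, standing in for the gradient-based model update in FWI.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 10))   # toy forward operator (not a wave equation)
x_true = rng.normal(size=10)
b = A @ x_true                  # synthetic "observed" data

def misfit(x):
    r = A @ x - b
    return 0.5 * r @ r

def gradient(x):
    # Analytic (adjoint-style) gradient: A^T (A x - b).
    return A.T @ (A @ x - b)

result = minimize(misfit, np.zeros(10), jac=gradient, method="L-BFGS-B")
```

Supplying the exact gradient (here analytic, in the paper produced by automatic differentiation of the adjoint code) is what lets L-BFGS converge in few iterations.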
Inverse Modeling of Texas NOx Emissions Using Space-Based and Ground-Based NO2 Observations
NASA Technical Reports Server (NTRS)
Tang, Wei; Cohan, D.; Lamsal, L. N.; Xiao, X.; Zhou, W.
2013-01-01
Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and the discrete Kalman filter (DKF) with Decoupled Direct Method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2-based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to a 3-55% increase in modeled NO2 column densities and a 1-7 ppb increase in ground-level 8-h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.
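A scalar sketch of the discrete Kalman filter (DKF) emission inversion, with a made-up sensitivity playing the role of the DDM sensitivity coefficient and invented noise levels:

```python
import numpy as np

# Scalar discrete Kalman filter estimating an emission scaling factor
# from a sequence of column observations.  The sensitivity (modeled
# column change per unit scaling) stands in for the DDM sensitivity.
true_scale = 1.4
sensitivity = 10.0          # modeled column per unit scaling factor
obs_var = 1.0               # observation error variance
rng = np.random.default_rng(2)

scale_est = 1.0             # a priori scaling factor
P = 0.5                     # a priori error variance
for _ in range(50):
    obs = sensitivity * true_scale + rng.normal(scale=np.sqrt(obs_var))
    # Kalman gain and measurement update.
    K = P * sensitivity / (sensitivity**2 * P + obs_var)
    scale_est = scale_est + K * (obs - sensitivity * scale_est)
    P = (1.0 - K * sensitivity) * P
```

Each update pulls the scaling factor toward the value implied by the latest observation, weighted by the current uncertainty `P`, so the estimate and its variance both shrink toward a converged inversion.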
An Inverse Model of Three-Dimensional Flow and Transport in Heterogeneous Porous Media
NASA Astrophysics Data System (ADS)
Robinson, B. A.; Vrugt, J. A.; Yoon, H.; Zhang, C.; Werth, C. J.; Kitanidis, P. K.; Lichtner, P. C.; Lu, C.
2007-12-01
A three-dimensional flow and transport model was developed to simulate the results of a laboratory-scale experiment in which snapshots of concentration were obtained using magnetic resonance imaging (MRI) during the displacement of tracer through a 14 by 8 by 8 cm flow cell. The medium was deliberately constructed to be heterogeneous with a known spatial correlation structure using sand of five different grain-size distributions. The extremely well characterized flow cell and large, high-precision data set of concentrations during displacement make this a unique experiment for examining the validity of flow and transport models, and for exploring new methods for interpreting large data sets using advanced optimization algorithms. A transport model was constructed by solving the steady-state flow equations with the Finite Element Heat and Mass (FEHM) code and using FEHM's particle tracking transport model to simulate tracer migration. The particle tracking model was selected so that precise estimates of the transport parameters could be obtained that are not corrupted by numerical dispersion; a large number of particles (typically one million) were required to provide accuracy. The inverse model included nine uncertain parameters: the five permeability values of the individual sand units and four dispersion/diffusion parameters. The inverse problem was solved with AMALGAM and DREAM, two recently developed self-adaptive multimethod optimization algorithms. The computations were enabled by running both the transport model and the optimization loop on a high-performance computing cluster. Computational results indicate that parameter estimates and increased understanding of the behavior of the system can be obtained, and significant improvements in the fit to the data over hand calibration can be achieved, using this inverse modeling approach. The study also illustrates that numerical methods that make effective use of high-performance computing resources and
NASA Astrophysics Data System (ADS)
Wang, Qian; Li, Xingwen; Song, Haoyong; Rong, Mingzhe
2010-04-01
The non-contact magnetic measurement method is an effective way to study air arc behavior experimentally. One of the crucial techniques is solving an inverse problem for the electromagnetic field. This study presents a preliminary investigation of different algorithms for this kind of inverse problem, including the preconditioned conjugate gradient method, the penalty function method and the genetic algorithm. The feasibility of each algorithm is analyzed. It is shown that the preconditioned conjugate gradient method is valid only for a few arc segments, the estimation accuracy of the penalty function method depends on the initial conditions, and the convergence of the genetic algorithm requires further study for larger numbers of segments in an arc current.
Adaptive eigenspace method for inverse scattering problems in the frequency domain
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nahum, Uri
2017-02-01
A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.
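The regularization-by-truncation idea can be illustrated in one dimension: projecting a noisy profile onto the first few eigenvectors of a discrete Laplacian keeps only smooth components, without any explicit Tikhonov penalty. This toy setting is not the Helmholtz inverse medium problem of the abstract:

```python
import numpy as np

# Project an unknown profile onto the first few eigenvectors of a 1-D
# discrete Laplacian; truncating the basis keeps only smooth modes and
# thus acts as regularization.
n = 100
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # discrete Laplacian
eigvals, eigvecs = np.linalg.eigh(L)   # columns sorted by eigenvalue

x = np.linspace(0, 1, n)
profile = np.sin(2 * np.pi * x) + 0.3 * np.random.default_rng(3).normal(size=n)

k = 10                          # keep only the 10 smoothest modes
basis = eigvecs[:, :k]
coeffs = basis.T @ profile      # projection coefficients
smooth = basis @ coeffs         # truncated-basis reconstruction
```

Increasing `k` slowly during the iterations, as the abstract describes, lets progressively finer structure enter the reconstruction only once the smooth components are resolved.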
Developing a high-resolution CO2 flux inversion model for global and regional scale studies
NASA Astrophysics Data System (ADS)
Maksyutov, S. S.; Janardanan Achari, R.; Oda, T.; Ito, A.; Saito, M.; W Kaiser, J.; Belikov, D.; Ganshin, A.; Valsala, V.; Sasakawa, M.; Machida, T.
2015-12-01
We develop and test an iterative inversion framework that is designed for estimating surface CO2 fluxes at a high spatial resolution using a Lagrangian-Eulerian coupled tracer transport model and atmospheric CO2 data collected by the global in-situ network and satellite observations. In our inverse modeling system, we employ the Lagrangian particle dispersion model FLEXPART coupled to the Eulerian atmospheric tracer transport model (NIES-TM). We also derived an adjoint of the coupled model. Weekly corrections to prior fluxes are calculated at the spatial resolution of the FLEXPART-simulated surface flux responses (0.1 degree). Fossil fuel (ODIAC) and biomass burning (GFAS) emissions are given at their original model spatial resolutions (0.1 degree), while other fluxes are interpolated from a coarser resolution. The terrestrial biosphere fluxes are simulated with the VISIT model at 0.5 degree resolution. Ocean fluxes are calculated using a 4D-Var assimilation system (OTTM) of surface pCO2 observations. The flux response functions simulated with FLEXPART are used in forward and adjoint runs of the coupled transport model. To obtain the best fit to the observations we tested a set of optimization algorithms, including quasi-Newtonian algorithms and the implicitly restarted Lanczos method. The square root of the covariance matrix for surface fluxes is implemented as an implicit diffusion operator, while its adjoint is derived using an automatic code differentiation tool. The prior and posterior flux uncertainties are evaluated using singular vectors of the scaled tracer transport operator. The weekly flux uncertainties and the flux uncertainty reduction due to assimilating GOSAT XCO2 data were estimated for a period of one year. The model was applied to assimilating one year of ObsPack data, and produced satisfactory flux correction results. A regional version of the model was applied to an inverse model analysis of the CO2 flux distribution in West Siberia using continuous observation
NASA Astrophysics Data System (ADS)
Liu, Qing; Zhan, Yong-hong; Yang, Di; Zeng, Chang-e.
2014-11-01
In this paper, we seek a model that can correctly predict the polarization characteristics of ground targets. First, we review several kinds of existing models, which fall into three categories: empirical models are precise but consume excessive computational resources; physics-based models can predict reflection phenomena exactly but rarely yield tractable final results; semi-empirical models combine the advantages of both while avoiding their disadvantages. We then analyze the Priest-Germer (PG) pBRDF model, a semi-empirical model suitable for our study. Methods for parameter inversion and testing are proposed based on this model, and a test system, designed in-house, supplies enough data to verify the model's accuracy. Finally, we simulate the whole parameter-inversion process based on the PG pBRDF model. Analysis of the simulation curves indicates the direction of future work to refine the model.
Local Bathymetry Estimation Using Variational Inverse Modeling: A Nested Approach
NASA Astrophysics Data System (ADS)
Almeida, T. G.; Walker, D. T.; Farquharson, G.
2014-12-01
Estimation of subreach river bathymetry from remotely-sensed surface velocity data is presented using variational inverse modeling applied to the 2D depth-averaged, shallow-water equations (SWEs). A nested approach is adopted to focus on obtaining an accurate estimate of bathymetry over a small region of interest within a larger complex hydrodynamic system. This approach reduces computational cost significantly. We begin by constructing a minimization problem with a cost function defined by the error between observed and estimated surface velocities, and then apply the SWEs as a constraint on the velocity field. An adjoint SWE model is developed through the use of Lagrange multipliers, converting the constrained minimization problem into an unconstrained one. The adjoint model solution is used to calculate the gradient of the cost function with respect to bathymetry. The gradient is used in a descent algorithm to determine the bathymetry that yields a surface velocity field that is a best-fit to the observational data. In this application of the algorithm, the 2D depth-averaged flow is computed within a nested framework using Delft3D-FLOW as the forward computational model. First, an outer simulation is generated using discharge rate and other measurements from USGS and NOAA, assuming a uniform bottom-friction coefficient. Then a nested, higher resolution inner model is constructed using open boundary condition data interpolated from the outer model (see figure). Riemann boundary conditions with specified tangential velocities are utilized to ensure a near seamless transition between outer and inner model results. The initial guess bathymetry matches the outer model bathymetry, and the iterative assimilation procedure is used to adjust the bathymetry only for the inner model. The observation data were collected during the ONR Rivet II field exercise at the mouth of the Columbia River near Hammond, OR. A dual beam squinted along-track-interferometric, synthetic
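The descent loop at the heart of such variational bathymetry estimation can be sketched with a toy one-to-one depth-velocity relation u = q/h (fixed discharge per unit width), for which the misfit gradient is available analytically instead of via an adjoint model; all values are invented:

```python
import numpy as np

# Toy bathymetry estimation: with discharge per unit width q fixed,
# depth-averaged velocity in each cell is u = q / h, so surface-velocity
# observations constrain depth.  Gradient descent on the velocity misfit,
# with the analytic gradient standing in for the adjoint-derived one.
q = 2.0                                    # discharge per unit width (m^2/s)
h_true = np.array([1.5, 2.5, 3.0, 2.0])    # "true" depths (m)
u_obs = q / h_true                         # observed surface velocities

h = np.full(4, 2.0)                        # initial-guess bathymetry
step = 1.0
for _ in range(2000):
    u = q / h
    grad = -(u - u_obs) * q / h**2         # d/dh of 0.5*(u - u_obs)^2
    h = h - step * grad                    # descent update
```

In the paper the gradient comes from the adjoint SWE solve rather than a closed-form derivative, but the update structure is the same.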
NASA Technical Reports Server (NTRS)
Smith, G. A.; Meyer, G.; Nordstrom, M.
1986-01-01
A new automatic flight control system concept suitable for aircraft with highly nonlinear aerodynamic and propulsion characteristics and which must operate over a wide flight envelope was investigated. This exact model follower inverts a complete nonlinear model of the aircraft as part of the feed-forward path. The inversion is accomplished by a Newton-Raphson trim of the model at each digital computer cycle time of 0.05 seconds. The combination of the inverse model and the actual aircraft in the feed-forward path allows the translational and rotational regulators in the feedback path to be easily designed by linear methods. An explanation of the model inversion procedure is presented. An extensive set of simulation data for essentially the full flight envelope for a vertical attitude takeoff and landing aircraft (VATOL) is presented. These data demonstrate the successful, smooth, and precise control that can be achieved with this concept. The trajectory includes conventional flight from 200 to 900 ft/sec with path accelerations and decelerations, altitude changes of over 6000 ft and 2g and 3g turns. Vertical attitude maneuvering as a tail sitter along all axes is demonstrated. A transition trajectory from 200 ft/sec in conventional flight to stationary hover in the vertical attitude includes satisfactory operation through lift-curve slope reversal as attitude goes from horizontal to vertical at constant altitude. A vertical attitude takeoff from stationary hover to conventional flight is also demonstrated.
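The Newton-Raphson trim of a nonlinear model can be sketched with a made-up two-input, two-output model standing in for the aircraft model; given a commanded output, the inputs are iterated until the model reproduces it:

```python
import numpy as np

# Newton-Raphson "trim": invert a nonlinear model by iterating on the
# inputs until the model output matches a commanded target.
def model(u):
    # Invented nonlinear map from inputs to outputs (not an aircraft model).
    return np.array([u[0]**2 + np.sin(u[1]), u[0] * u[1] + u[1]**3])

def trim(target, u0, tol=1e-10, max_iter=50):
    u = u0.astype(float)
    for _ in range(max_iter):
        r = model(u) - target
        if np.linalg.norm(r) < tol:
            break
        # Finite-difference Jacobian of the model at u.
        eps = 1e-6
        J = np.column_stack([(model(u + eps * e) - model(u)) / eps
                             for e in np.eye(2)])
        u = u - np.linalg.solve(J, r)      # Newton-Raphson update
    return u

target = model(np.array([0.8, 0.4]))       # a reachable commanded output
u_trim = trim(target, np.array([1.0, 1.0]))
```

In the flight control concept this solve runs once per 0.05 s computer cycle, so the feed-forward path always carries inputs consistent with the commanded trajectory.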
NASA Astrophysics Data System (ADS)
Zhou, Wei; Brossier, Romain; Operto, Stéphane; Virieux, Jean
2015-09-01
Full waveform inversion (FWI) aims to reconstruct high-resolution subsurface models from the full wavefield, which includes diving waves, post-critical reflections and short-spread reflections. Most successful applications of FWI are driven by the information carried by diving waves and post-critical reflections to build the long-to-intermediate wavelengths of the velocity structure. Alternative approaches, referred to as reflection waveform inversion (RWI), have been recently revisited to retrieve these long-to-intermediate wavelengths from short-spread reflections by using some prior knowledge of the reflectivity and a scale separation between the velocity macromodel and the reflectivity. This study presents a unified formalism of FWI, named as Joint FWI, whose aim is to efficiently combine the diving and reflected waves for velocity model building. The two key ingredients of Joint FWI are, on the data side, the explicit separation between the short-spread reflections and the wide-angle arrivals and, on the model side, the scale separation between the velocity macromodel and the short-scale impedance model. The velocity model and the impedance model are updated in an alternate way by Joint FWI and waveform inversion of the reflection data (least-squares migration), respectively. Starting from a crude velocity model, Joint FWI is applied to the streamer seismic data computed in the synthetic Valhall model. While the conventional FWI is stuck into a local minimum due to cycle skipping, Joint FWI succeeds in building a reliable velocity macromodel. Compared with RWI, the use of diving waves in Joint FWI improves the reconstruction of shallow velocities, which translates into an improved imaging at deeper depths. The smooth velocity model built by Joint FWI can be subsequently used as a reliable initial model for conventional FWI to increase the high-wavenumber content of the velocity model.
Affordable and personalized lighting using inverse modeling and virtual sensors
NASA Astrophysics Data System (ADS)
Basu, Chandrayee; Chen, Benjamin; Richards, Jacob; Dhinakaran, Aparna; Agogino, Alice; Martin, Rodney
2014-03-01
Wireless sensor networks (WSNs) have great potential to enable personalized intelligent lighting systems while reducing building energy use by 50%-70%. As a result, WSN systems are being increasingly integrated into state-of-the-art intelligent lighting systems. In the future these systems will enable participation of lighting loads as ancillary services. However, such systems can be expensive to install and lack the plug-and-play quality necessary for user-friendly commissioning. In this paper we present an integrated system of wireless sensor platforms and modeling software to enable affordable and user-friendly intelligent lighting. It requires ~60% fewer sensor deployments compared to current commercial systems. The reduction in sensor deployments has been achieved by optimally replacing the actual photo-sensors with real-time discrete predictive inverse models. Spatially sparse and clustered sub-hourly photo-sensor data captured by the WSN platforms are used to develop and validate a piecewise linear regression of the indoor light distribution. This deterministic data-driven model accounts for sky conditions and solar position. The optimal placement of photo-sensors is performed iteratively to achieve the best predictability of the light field desired for indoor lighting control. Using two weeks of daylight and artificial light training data acquired at Sustainability Base at NASA Ames, the model was able to predict the light level at seven monitored workstations with 80%-95% accuracy. We estimate that 10% adoption of this intelligent wireless sensor system in commercial buildings could save 0.2-0.25 quads of energy nationwide.
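A minimal sketch of the piecewise (per-sky-condition) linear regression idea, fitted to synthetic data with invented slopes; the predictors and coefficients are illustrative, not those of the deployed system:

```python
import numpy as np

# Piecewise linear "virtual sensor": one linear fit of workstation light
# level per sky condition, on synthetic data.
rng = np.random.default_rng(4)
exterior = rng.uniform(1000, 20000, size=200)   # exterior illuminance (lux)
solar_el = rng.uniform(5, 60, size=200)         # solar elevation (deg)
sky_clear = rng.integers(0, 2, size=200)        # sky condition (0/1)

# Synthetic "measured" workstation lux with different slopes per sky state.
lux = np.where(sky_clear == 1,
               0.02 * exterior + 3.0 * solar_el,
               0.01 * exterior + 1.0 * solar_el) + rng.normal(0, 5, 200)

models = {}
for state in (0, 1):                            # one least-squares fit per piece
    mask = sky_clear == state
    X = np.column_stack([exterior[mask], solar_el[mask], np.ones(mask.sum())])
    coef, *_ = np.linalg.lstsq(X, lux[mask], rcond=None)
    models[state] = coef
```

Once fitted, each piece predicts the light level at an unmonitored workstation from a handful of remaining physical sensors, which is what allows most photo-sensors to be removed.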
Inverse transport modeling of volcanic sulfur dioxide emissions using large-scale simulations
NASA Astrophysics Data System (ADS)
Heng, Yi; Hoffmann, Lars; Griessbach, Sabine; Rößler, Thomas; Stein, Olaf
2016-05-01
An inverse transport modeling approach based on the concepts of sequential importance resampling and parallel computing is presented to reconstruct altitude-resolved time series of volcanic emissions, which often cannot be obtained directly with current measurement techniques. A new inverse modeling and simulation system, which implements the inversion approach with the Lagrangian transport model Massive-Parallel Trajectory Calculations (MPTRAC), is developed to provide reliable transport simulations of volcanic sulfur dioxide (SO2). In the inverse modeling system, MPTRAC is used to perform two types of simulations, i.e., unit simulations for the reconstruction of volcanic emissions and final forward simulations. Both types of transport simulations are based on wind fields of the ERA-Interim meteorological reanalysis of the European Centre for Medium-Range Weather Forecasts. The reconstruction of altitude-dependent SO2 emission time series is also based on Atmospheric InfraRed Sounder (AIRS) satellite observations. A case study for the eruption of the Nabro volcano, Eritrea, in June 2011, with complex emission patterns, is considered for method validation. Meteosat Visible and InfraRed Imager (MVIRI) near-real-time imagery data are used to validate the temporal development of the reconstructed emissions. Furthermore, the altitude distributions of the emission time series are compared with top and bottom altitude measurements of aerosol layers obtained by the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) satellite instruments. The final forward simulations provide detailed spatial and temporal information on the SO2 distributions of the Nabro eruption. By using the critical success index (CSI), the simulation results are evaluated with the AIRS observations. Compared to the results with an assumption of a constant flux of SO2 emissions, our inversion approach leads to an improvement
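One sequential importance resampling step can be sketched in a scalar toy setting, with candidate emission rates weighted by their fit to a single observation; all numbers are invented and bear no relation to the Nabro case study:

```python
import numpy as np

# One sequential-importance-resampling step: candidate emission
# "particles" are weighted by a Gaussian observation-error model,
# then resampled with replacement in proportion to the weights.
rng = np.random.default_rng(5)
particles = rng.uniform(0.0, 10.0, size=1000)   # candidate emission rates
obs = 4.0                                       # observed signal
obs_sigma = 0.5                                 # observation error std

# Importance weights from the likelihood of each candidate.
w = np.exp(-0.5 * ((particles - obs) / obs_sigma) ** 2)
w /= w.sum()

# Resample according to the weights: well-fitting candidates are
# duplicated, poor ones are discarded.
idx = rng.choice(len(particles), size=len(particles), p=w)
resampled = particles[idx]
```

Repeating this over successive observation times, with the transport model mapping each candidate emission to a simulated signal, yields the altitude- and time-resolved emission reconstruction described above.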
NASA Technical Reports Server (NTRS)
Cerracchio, Priscilla; Gherlone, Marco; Di Sciuva, Marco; Tessler, Alexander
2013-01-01
The marked increase in the use of composite and sandwich material systems in aerospace, civil, and marine structures leads to the need for integrated Structural Health Management systems. A key capability to enable such systems is the real-time reconstruction of structural deformations, stresses, and failure criteria that are inferred from in-situ, discrete-location strain measurements. This technology is commonly referred to as shape- and stress-sensing. Presented herein is a computationally efficient shape- and stress-sensing methodology that is ideally suited for applications to laminated composite and sandwich structures. The new approach employs the inverse Finite Element Method (iFEM) as a general framework and the Refined Zigzag Theory (RZT) as the underlying plate theory. A three-node inverse plate finite element is formulated. The element formulation enables robust and efficient modeling of plate structures instrumented with strain sensors that have arbitrary positions. The methodology leads to a set of linear algebraic equations that are solved efficiently for the unknown nodal displacements. These displacements are then used at the finite element level to compute full-field strains, stresses, and failure criteria that are in turn used to assess structural integrity. Numerical results for multilayered, highly heterogeneous laminates demonstrate the unique capability of this new formulation for shape- and stress-sensing.
An inverse method for estimation of the acoustic intensity in the focused ultrasound field
NASA Astrophysics Data System (ADS)
Yu, Ying; Shen, Guofeng; Chen, Yazhu
2017-03-01
Recently, a new method based on infrared (IR) imaging was introduced. Previous authors (A. Shaw et al. and M. R. Myers et al.) have established the relationship between the absorber surface temperature and the incident intensity while the absorber is irradiated by the transducer. Theoretically, a shorter irradiation time brings the estimate more in line with the actual results, but due to the influence of noise and the performance constraints of the IR camera, it is hard to identify temperature differences with a short heating time. An inverse technique is developed to reconstruct the incident intensity distribution from the surface temperature measured with a shorter irradiation time. The algorithm is validated using surface temperature data generated numerically from a three-layer model, which was developed to calculate the acoustic field in the absorber, the absorbed acoustic energy during the irradiation, and the consequent temperature elevation. To assess the effect of noisy data on the reconstructed intensity profile, different noise levels with zero mean were superposed on the exact data in the simulations. Simulation results demonstrate that the inversion technique can provide fairly reliable intensity estimates with satisfactory accuracy.
NASA Astrophysics Data System (ADS)
Nassar, Mohamed K.; Ginn, Timothy R.
2014-08-01
We investigate the effect of computational error on the inversion of a density-dependent flow and transport model, using SEAWAT and UCODE-2005 in an inverse identification of hydraulic conductivity and dispersivity using head and concentration data from a 2-D laboratory experiment. We investigated inversions using three different solution schemes, including variations in the number of particles and the time step length, in terms of three aspects: the shape and smoothness of the objective function surface, the consequent impacts on the optimization, and the resulting Pareto analyses. This study demonstrates that the inversion is very sensitive to the choice of the forward model solution scheme. In particular, standard finite difference methods provide the smoothest objective function surface; however, this is obtained at the cost of numerical artifacts that can lead to erroneous warping of the objective function surface. Total variation diminishing (TVD) schemes limit these impacts at the cost of more computation time, while the hybrid method of characteristics (HMOC) approach with increased particle numbers and/or reduced time step gives both a smooth and accurate objective function surface. Use of the most accurate methods (TVD and HMOC) did lead to successful inversion of the two parameters, however with distinct results for the Pareto analyses. These results illuminate the sensitivity of the inversion to a number of aspects of the forward solution of the density-driven flow problem and reveal that parameter values may result that are erroneous but that counteract numerical errors in the solution.
Mehl, S.; Hill, M.C.
2002-01-01
Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are: (a) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (b) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR method, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods, and the effect of the accuracy of the sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.
Joint earthquake source inversions using seismo-geodesy and 3-D earth models
NASA Astrophysics Data System (ADS)
Weston, J.; Ferreira, A. M. G.; Funning, G. J.
2014-08-01
A joint earthquake source inversion technique is presented that uses InSAR and long-period teleseismic data, and, for the first time, takes 3-D Earth structure into account when modelling seismic surface and body waves. Ten average source parameters (moment, latitude, longitude, depth, strike, dip, rake, length, width and slip) are estimated; hence, the technique is potentially useful for rapid source inversions of moderate magnitude earthquakes using multiple data sets. Unwrapped interferograms and long-period seismic data are jointly inverted for the location, fault geometry and seismic moment, using a hybrid downhill Powell-Monte Carlo algorithm. While the InSAR data are modelled assuming a rectangular dislocation in a homogeneous half-space, seismic data are modelled using the spectral element method for a 3-D earth model. The effect of noise and lateral heterogeneity on the inversions is investigated by carrying out realistic synthetic tests for various earthquakes with different faulting mechanisms and magnitude (Mw 6.0-6.6). Synthetic tests highlight the improvement in the constraint of fault geometry (strike, dip and rake) and moment when InSAR and seismic data are combined. Tests comparing the effect of using a 1-D or 3-D earth model show that long-period surface waves are more sensitive than long-period body waves to the change in earth model. Incorrect source parameters, particularly incorrect fault dip angles, can compensate for systematic errors in the assumed Earth structure, leading to an acceptable data fit despite large discrepancies in source parameters. Three real earthquakes are also investigated: Eureka Valley, California (1993 May 17, Mw 6.0), Aiquile, Bolivia (1998 February 22, Mw 6.6) and Zarand, Iran (2005 May 22, Mw 6.5). These events are located in different tectonic environments and show large discrepancies between InSAR and seismically determined source models. Despite the 40-50 km discrepancies in location between previous geodetic and
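The hybrid downhill Powell-Monte Carlo idea, random restarts each refined by Powell's derivative-free descent, can be sketched on a toy two-parameter misfit with local minima; the objective below is invented and is not an earthquake source model:

```python
import numpy as np
from scipy.optimize import minimize

# Hybrid Monte Carlo / downhill-Powell search: random restarts drawn over
# the parameter bounds, each refined by Powell's derivative-free method.
def misfit(p):
    # Toy multimodal objective: a paraboloid with a small ripple in x.
    x, y = p
    return (x - 1.5) ** 2 + (y + 0.5) ** 2 + 0.2 * np.sin(5 * x) ** 2

rng = np.random.default_rng(7)
best = None
for _ in range(20):                       # Monte Carlo restarts
    p0 = rng.uniform(-3, 3, size=2)       # random starting point
    res = minimize(misfit, p0, method="Powell")
    if best is None or res.fun < best.fun:
        best = res                        # keep the deepest minimum found
```

The random restarts guard against the local minima that trap a single downhill run, while Powell handles the final refinement without requiring derivatives of the forward models.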
Zhu, Lin; Dai, Zhenxue; Gong, Huili; Gable, Carl; Teatini, Pietro
2015-06-12
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with optimized different integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
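The transition-probability simulation of hydrofacies can be sketched in one dimension with an invented three-facies transition matrix; the empirical volumetric proportions of a long simulated sequence approach the chain's stationary distribution:

```python
import numpy as np

# 1-D Markov-chain simulation of hydrofacies from a vertical transition
# probability matrix.  Row i gives the probability of the next facies
# given the current one; matrix and facies names are invented.
rng = np.random.default_rng(6)
facies = ["gravel", "sand", "clay"]
T = np.array([[0.6, 0.3, 0.1],     # from gravel
              [0.2, 0.5, 0.3],     # from sand
              [0.1, 0.3, 0.6]])    # from clay

state = 0
sequence = [state]
for _ in range(9999):
    state = rng.choice(3, p=T[state])  # draw the next facies
    sequence.append(state)
sequence = np.array(sequence)

# Empirical volumetric proportions of each facies in the sequence.
props = np.bincount(sequence, minlength=3) / len(sequence)
```

The diagonal entries of `T` control the mean lengths of facies runs, and the stationary distribution controls the volumetric proportions, which is why fitting a transition probability model per zone constrains exactly the statistics the abstract lists.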
Model-based elastography: a survey of approaches to the inverse elasticity problem.
Doyley, M M
2012-02-07
Elastography is emerging as an imaging modality that can distinguish normal versus diseased tissues via their biomechanical properties. This paper reviews current approaches to elastography in three areas--quasi-static, harmonic and transient--and describes inversion schemes for each elastographic imaging approach. Approaches include first-order approximation methods; direct and iterative inversion schemes for linear elastic, isotropic materials; and advanced reconstruction methods for recovering parameters that characterize complex mechanical behavior. The paper's objective is to document efforts to develop elastography within the framework of solving an inverse problem, so that elastography may provide reliable estimates of shear modulus and other mechanical parameters. We discuss issues that must be addressed if model-based elastography is to become the prevailing approach to quasi-static, harmonic and transient elastography: (1) developing practical techniques to transform the ill-posed problem into a well-posed one; (2) devising better forward models to capture the complex mechanical behavior of soft tissues; and (3) developing better test procedures to evaluate the performance of modulus elastograms.
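A minimal sketch of the inverse-problem framing the survey describes: a linear forward operator, noisy data, and Tikhonov regularization to turn the ill-posed problem into a well-posed one. The Gaussian-blur operator and all parameter values are assumptions for illustration, not an elastography forward model.

```python
import numpy as np

def tikhonov_solve(A, d, alpha):
    """Regularized least squares: argmin ||A m - d||^2 + alpha * ||m||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ d)

# Illustrative ill-conditioned forward operator (a Gaussian smoothing kernel)
rng = np.random.default_rng(0)
n = 50
x = np.linspace(0.0, 1.0, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01)
m_true = np.sin(2 * np.pi * x)                       # "true" model
d = A @ m_true + 1e-3 * rng.standard_normal(n)       # noisy data

m_naive = np.linalg.lstsq(A, d, rcond=None)[0]       # unregularized: noise explodes
m_reg = tikhonov_solve(A, d, alpha=1e-3)             # stabilized solution
```

Because the operator strongly damps high frequencies, tiny data noise dominates the unregularized solution; the penalty term trades a small bias for a large variance reduction.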
NASA Astrophysics Data System (ADS)
Schuster, David M.
1993-04-01
An inverse method has been developed to compute the structural stiffness properties of wings given a specified wing loading and aeroelastic twist distribution. The method directly solves for the bending and torsional stiffness distribution of the wing using a modal representation of these properties. An aeroelastic design problem involving the use of a computational aerodynamics method to optimize the aeroelastic twist distribution of a fighter wing operating at maneuver flight conditions is used to demonstrate the application of the method. This exercise verifies the ability of the inverse scheme to accurately compute the structural stiffness distribution required to generate a specific aeroelastic twist under a specified aeroelastic load.
Method for detecting a pericentric inversion in a chromosome
Lucas, Joe N.
2000-01-01
A method is provided for determining a clastogenic signature of a sample of chromosomes by quantifying a frequency of a first type of chromosome aberration present in the sample; quantifying a frequency of a second, different type of chromosome aberration present in the sample; and comparing the frequency of the first type of chromosome aberration to the frequency of the second type of chromosome aberration. A method is also provided for using that clastogenic signature to identify a clastogenic agent or dosage to which the cells were exposed.
NASA Astrophysics Data System (ADS)
Yadav, V.; Shiga, Y. P.; Michalak, A. M.
2012-12-01
The accurate spatio-temporal quantification of fossil fuel emissions is a scientific challenge. Atmospheric inverse models have the capability to overcome this challenge and provide estimates of fossil fuel emissions. Observational and computational limitations restrict current analyses to estimates of a combined "biospheric flux and fossil-fuel emissions" carbon dioxide (CO2) signal, at coarse spatial and temporal resolution. Even in these coarse resolution inverse models, the disaggregation of a strong biospheric signal from a weaker fossil-fuel signal has proven difficult. The use of multiple tracers (delta 14C, CO, CH4, etc.) has provided a potential path forward, but challenges remain. In this study, we attempt to disaggregate biospheric fluxes and fossil-fuel emissions on the basis of error covariance models rather than through tracer-based CO2 inversions. The goal is to more accurately define the underlying structure of the two processes by using a stationary exponential covariance model for the biospheric fluxes, in conjunction with a semi-stationary covariance model derived from nightlights for fossil fuel emissions. A non-negativity constraint on fossil fuel emissions is imposed using a data transformation approach embedded in an iterative quasi-linear inverse modeling algorithm. The study is performed for January and June 2008, using the ground-based CO2 measurement network over North America. The quality of disaggregation is examined by comparing the inferred spatial distribution of biospheric fluxes and fossil-fuel emissions in a synthetic-data inversion. In addition to disaggregation of fluxes, the ability of the covariance models derived from nightlights to explain the fossil-fuel emissions over North America is also examined. The simple covariance model proposed in this study is found to improve estimation and disaggregation of fossil-fuel emissions from biospheric fluxes in the tracer-based inverse models.
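The two prior covariance structures described above can be sketched under assumed values: a stationary exponential covariance for the biospheric component, and a diagonal covariance whose variances are scaled by a synthetic nightlights-like proxy (all numbers below are illustrative, not the study's).

```python
import numpy as np

def exponential_covariance(coords, sigma2, ell):
    """Stationary exponential covariance: C[i, j] = sigma2 * exp(-|xi - xj| / ell)."""
    d = np.abs(coords[:, None] - coords[None, :])
    return sigma2 * np.exp(-d / ell)

# Illustrative 1-D grid of flux locations (km); variance and range are assumed
x = np.linspace(0.0, 100.0, 20)
C_bio = exponential_covariance(x, sigma2=4.0, ell=30.0)

# Proxy-scaled diagonal covariance for fossil-fuel emissions: cells with a
# brighter (hypothetical) nightlight intensity get a larger prior variance
lights = np.exp(-((x - 50.0) / 15.0) ** 2)   # a synthetic "city" intensity
C_ff = np.diag(1.0 + 9.0 * lights)
```

Encoding smooth spatial correlation in one prior and proxy-concentrated variance in the other is what lets the inversion attribute broad signals to the biosphere and localized signals to fossil-fuel sources.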
Computational Methods for Aerodynamic Design (Inverse) and Optimization
1990-01-01
Airfoils with Given Velocity Distribution in Incompressible Flow," J. Aircraft, Vol. 10, 1973, pp. 651-659. 7. Polito, L., "Un Metodo Esatto per il Progetto...and the Simpson rule. Using a panel arrangement method with properly increased panel density in regions with comparatively large rv-variations, use of
Computational Methods for Sparse Solution of Linear Inverse Problems
2009-03-01
methods from harmonic analysis [5]. For example, natural images can be approximated with relatively few wavelet coefficients. As a consequence, in many...performed efficiently. For example, the cost of these products is O(N logN) when Φ is constructed from Fourier or wavelet bases. For algorithms that...stream community has proposed efficient algorithms for computing near-optimal histograms and wavelet -packet approximations from compressive samples [4
A Study of Inverse Methods for Processing of Radar Data
2006-10-01
point from each source receiver location. Both ray tracing and eikonal schemes have been used to compute these travel times. A by-product of their... point by point basis, each diffraction point contributing a part of the total signal. Kirchhoff methods pioneered by Bleistein and Cohen at the...with these algorithms their relative merits. Single point diffractions have known responses and by using time migration or depth migration these can be
Dura-Bernal, Salvador; Li, Kan; Neymotin, Samuel A.; Francis, Joseph T.; Principe, Jose C.; Lytton, William W.
2016-01-01
Neural stimulation can be used as a tool to elicit natural sensations or behaviors by modulating neural activity. This can be potentially used to mitigate the damage of brain lesions or neural disorders. However, in order to obtain the optimal stimulation sequences, it is necessary to develop neural control methods, for example by constructing an inverse model of the target system. For real brains, this can be very challenging, and often unfeasible, as it requires repeatedly stimulating the neural system to obtain enough probing data, and depends on an unwarranted assumption of stationarity. By contrast, detailed brain simulations may provide an alternative testbed for understanding the interactions between ongoing neural activity and external stimulation. Unlike real brains, the artificial system can be probed extensively and precisely, and detailed output information is readily available. Here we employed a spiking network model of sensorimotor cortex trained to drive a realistic virtual musculoskeletal arm to reach a target. The network was then perturbed, in order to simulate a lesion, by either silencing neurons or removing synaptic connections. All lesions led to significant behavioral impairments during the reaching task. The remaining cells were then systematically probed with a set of single and multiple-cell stimulations, and results were used to build an inverse model of the neural system. The inverse model was constructed using a kernel adaptive filtering method, and was used to predict the neural stimulation pattern required to recover the pre-lesion neural activity. Applying the derived neurostimulation to the lesioned network improved the reaching behavior performance. This work proposes a novel neurocontrol method, and provides theoretical groundwork on the use of biomimetic brain models to develop and evaluate neurocontrollers that restore the function of damaged brain regions and the corresponding motor behaviors. PMID:26903796
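Kernel adaptive filtering of the kind used to build the inverse model can be sketched with the kernel least-mean-squares (KLMS) algorithm; the Gaussian kernel width, step size, and the sine target below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def gaussian_kernel(a, b, width=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * width ** 2))

class KLMS:
    """Kernel least-mean-squares: an online, nonlinear adaptive filter."""
    def __init__(self, eta=0.5, width=1.0):
        self.eta, self.width = eta, width
        self.centers, self.coeffs = [], []

    def predict(self, x):
        return sum(c * gaussian_kernel(x, ctr, self.width)
                   for c, ctr in zip(self.coeffs, self.centers))

    def update(self, x, y):
        e = y - self.predict(x)           # instantaneous prediction error
        self.centers.append(x)            # the sample becomes a kernel center
        self.coeffs.append(self.eta * e)  # scaled error becomes its weight
        return e

# Illustrative use: learn y = sin(x) from streaming samples
rng = np.random.default_rng(1)
filt = KLMS(eta=0.5, width=0.5)
errors = []
for _ in range(300):
    x = rng.uniform(0.0, 2.0 * np.pi, size=1)
    errors.append(abs(filt.update(x, np.sin(x[0]))))
```

The filter grows one kernel center per sample, which is why practical variants prune or quantize the dictionary; the sketch omits that for brevity.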
Novel TMS coils designed using an inverse boundary element method
NASA Astrophysics Data System (ADS)
Cobos Sánchez, Clemente; María Guerrero Rodriguez, Jose; Quirós Olozábal, Ángel; Blanco-Navarro, David
2017-01-01
In this work, a new method to design TMS coils is presented. It is based on the inclusion of the concept of stream function of a quasi-static electric current into a boundary element method. The proposed TMS coil design approach is a powerful technique to produce stimulators of arbitrary shape, and remarkably versatile as it permits the prototyping of many different performance requirements and constraints. To illustrate the power of this approach, it has been used for the design of TMS coils wound on rectangular flat, spherical and hemispherical surfaces, subjected to different constraints, such as minimum stored magnetic energy or power dissipation. The performance of such coils has additionally been described, and the torque experienced by each stimulator in the presence of a main static magnetic field has been found theoretically in order to study the prospect of using them to perform TMS and fMRI concurrently. The obtained results show that the described method is an efficient tool for the design of TMS stimulators, which can be applied to a wide range of coil geometries and performance requirements.
Hierarchical inverse Gaussian models and multiple testing: application to gene expression data.
Labbe, Aurelie; Thompson, Mary
2005-01-01
Detecting differentially expressed genes in microarray experiments is a topic that has been well studied in the literature. Many hypothesis testing methods have been proposed that rely on strong distributional assumptions for the gene intensities. However, the shape of microarray data may vary substantially from one experiment to another, and model assumptions may be seriously violated in many cases. The literature on microarray data is mainly based on two distributions: the log-normal and the gamma distributions, that often appear to be effective when used in a Bayesian hierarchical framework. However, while a model that fits the data well in a global manner is attractive, two points deserve particular attention: the ability of the model to fit the tail of the observed distribution, and its robustness to misspecification of the model, in terms of error rates for the hypothesis tests. In order to focus on these aspects, we propose to use Bayesian models involving the inverse Gaussian distribution to describe gene expression data. We show that these models can be good competitors to the traditional Bayesian or random effect gamma or log-normal models in some situations. A multiple testing procedure is then proposed, based on an asymptotic property of the posterior probability of the one-sided alternative hypothesis. We show that the asymptotic property is well approximated for inverse Gaussian models, even when the number of observations available for each test is very small.
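For reference, the inverse Gaussian density these models build on, with a numeric check of its first two moments (mean mu, variance mu^3/lambda) on an assumed parameter pair:

```python
import numpy as np

def inverse_gaussian_pdf(x, mu, lam):
    """Density of the inverse Gaussian IG(mu, lambda) distribution."""
    return np.sqrt(lam / (2.0 * np.pi * x ** 3)) * \
        np.exp(-lam * (x - mu) ** 2 / (2.0 * mu ** 2 * x))

# Numeric sanity check of normalization, mean, and variance on a fine grid
mu, lam = 2.0, 3.0          # illustrative parameter values
dx = 1e-3
x = np.arange(dx, 200.0, dx)
p = inverse_gaussian_pdf(x, mu, lam)
mass = np.sum(p) * dx       # should be ~1
mean = np.sum(x * p) * dx   # IG mean is mu
var = np.sum((x - mean) ** 2 * p) * dx   # IG variance is mu^3 / lam
```

Like the gamma and log-normal, the IG is positive and right-skewed, but it has a heavier right tail for comparable moments, which is the point the abstract raises about tail fit.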
A linear model approach for ultrasonic inverse problems with attenuation and dispersion.
Carcreff, Ewen; Bourguignon, Sébastien; Idier, Jérôme; Simon, Laurent
2014-07-01
Ultrasonic inverse problems such as spike train deconvolution, synthetic aperture focusing, or tomography attempt to reconstruct spatial properties of an object (discontinuities, delaminations, flaws, etc.) from noisy and incomplete measurements. They require an accurate description of the data acquisition process. Dealing with frequency-dependent attenuation and dispersion is therefore crucial because both phenomena modify the wave shape as the travel distance increases. In an inversion context, this paper proposes to exploit a linear model of ultrasonic data taking into account attenuation and dispersion. The propagation distance is discretized to build a finite set of radiation impulse responses. Attenuation is modeled with a frequency power law and then dispersion is computed to yield physically consistent responses. Using experimental data acquired from attenuative materials, this model outperforms the standard attenuation-free model and other models of the literature. Because of model linearity, robust estimation methods can be implemented. When matched filtering is employed for single echo detection, the model that we propose yields precise estimation of the attenuation coefficient and of the sound velocity. A thickness estimation problem is also addressed through spike deconvolution, for which the proposed model also achieves accurate results.
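The frequency power-law attenuation at the heart of the model can be sketched as a frequency-domain filter applied to a propagating pulse. The pulse shape, sampling, and attenuation parameters below are assumptions, and the dispersion (phase) term that the paper pairs with attenuation for physical consistency is omitted here for brevity.

```python
import numpy as np

def attenuate(pulse, dt, distance, alpha0=0.5, power=2.0):
    """Apply frequency power-law amplitude attenuation exp(-alpha0 * |f|**y * z).

    NOTE: the matching dispersion (phase) correction required for a causal,
    physically consistent response is deliberately left out of this sketch.
    """
    n = len(pulse)
    f = np.fft.rfftfreq(n, dt)              # MHz if dt is in microseconds
    H = np.exp(-alpha0 * np.abs(f) ** power * distance)
    return np.fft.irfft(np.fft.rfft(pulse) * H, n)

# Illustrative Gaussian-modulated pulse at an assumed 5 MHz center frequency
dt = 0.01                                    # microseconds (100 MHz sampling)
t = np.arange(0.0, 10.0, dt)
pulse = np.exp(-((t - 5.0) / 0.5) ** 2) * np.cos(2.0 * np.pi * 5.0 * (t - 5.0))
echo = attenuate(pulse, dt, distance=0.1)
```

Because attenuation grows with frequency, the echo is both weaker and downshifted in spectral content relative to the transmitted pulse, which is exactly the wave-shape change the linear model discretizes over propagation distance.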
NASA Astrophysics Data System (ADS)
Gao, Yingjie; Zhang, Jinhai; Yao, Zhenxing
2016-06-01
The symplectic integration method is popular in high-accuracy numerical simulations when discretizing temporal derivatives; however, it still suffers from time-dispersion error when the temporal interval is coarse, especially for long-term simulations and large-scale models. We employ the inverse time dispersion transform (ITDT) to the third-order symplectic integration method to reduce the time-dispersion error. First, we adopt the pseudospectral algorithm for the spatial discretization and the third-order symplectic integration method for the temporal discretization. Then, we apply the ITDT to eliminate time-dispersion error from the synthetic data. As a post-processing method, the ITDT can be easily cascaded in traditional numerical simulations. We implement the ITDT in one typical existing third-order symplectic scheme and compare its performances with the performances of the conventional second-order scheme and the rapid expansion method. Theoretical analyses and numerical experiments show that the ITDT can significantly reduce the time-dispersion error, especially for long travel times. The implementation of the ITDT requires some additional computations on correcting the time-dispersion error, but it allows us to use the maximum temporal interval under stability conditions; thus, its final computational efficiency would be higher than that of the traditional symplectic integration method for long-term simulations. With the aid of the ITDT, we can obtain much more accurate simulation results but with a lower computational cost.
Cao, Jianping; Du, Zhengjian; Mo, Jinhan; Li, Xinxiao; Xu, Qiujian; Zhang, Yinping
2016-12-20
Passive sampling is an alternative to active sampling for measuring concentrations of gas-phase volatile organic compounds (VOCs). However, the uncertainty or relative error of the measurements has not been minimized due to the limitations of existing design methods. In this paper, we have developed a novel method, the inverse problem optimization method, to address the problems associated with designing accurate passive samplers. The principle is to determine the most appropriate physical properties of the materials, and the optimal geometry of a passive sampler, by minimizing the relative sampling error based on the mass transfer model of VOCs for a passive sampler. As an example application, we used our proposed method to optimize radial passive samplers for the sampling of benzene and formaldehyde in a normal indoor environment. A new passive sampler, which we have called the Tsinghua Passive Diffusive Sampler (THPDS), for indoor benzene measurement was developed according to the optimized results. Silica zeolite was selected as the sorbent for the THPDS. The measured overall uncertainty of THPDS (22% for benzene) is lower than that of most commercially available passive samplers but is considerably larger than the modeled uncertainty (4.8% for benzene, the optimized result), suggesting that further research is required.
ERIC Educational Resources Information Center
Losada, David E.; Barreiro, Alvaro
2003-01-01
Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…
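The inverse document frequency component mentioned above is the standard idf weight; a minimal sketch over a hypothetical toy corpus (the documents and terms are invented for illustration):

```python
import math

def inverse_document_frequency(term, docs):
    """idf(t) = log(N / df(t)): terms in fewer documents get higher weight."""
    df = sum(1 for d in docs if term in d)
    return math.log(len(docs) / df) if df else 0.0

# Hypothetical corpus of four documents, each a set of terms
docs = [
    {"retrieval", "logic", "model"},
    {"retrieval", "ranking"},
    {"logic", "inference"},
    {"ranking", "evaluation"},
]
# "retrieval" occurs in 2 of 4 documents, "inference" in only 1 of 4
idf_retrieval = inverse_document_frequency("retrieval", docs)
idf_inference = inverse_document_frequency("inference", docs)
```

The logical model in the abstract folds this discriminative weight, together with term similarity, into its distance-based matching rather than into a classical vector-space score.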
SEASONAL NH3 EMISSIONS FOR THE CONTINENTAL UNITED STATES: INVERSE MODEL ESTIMATION AND EVALUATION
An inverse modeling study has been conducted here to evaluate a prior estimate of seasonal ammonia (NH3) emissions. The prior estimates were based on a previous inverse modeling study and two other bottom-up inventory studies. The results suggest that the prior estim...
Inverse modeling has been used extensively on the global scale to produce top-down estimates of emissions for chemicals such as CO and CH4. Regional scale air quality studies could also benefit from inverse modeling as a tool to evaluate current emission inventories; however, ...
Homotopy method for inverse design of the bulbous bow of a container ship
NASA Astrophysics Data System (ADS)
Huang, Yu-jia; Feng, Bai-wei; Hou, Guo-xiang; Gao, Liang; Xiao, Mi
2017-03-01
The homotopy method is utilized in the present inverse hull design problem to minimize the wave-making coefficient of a 1300 TEU container ship with a bulbous bow. Moreover, in order to improve the computational efficiency of the algorithm, a properly smooth function is employed to update the homotopy parameter during iteration. Numerical results show that the homotopy method has been successfully applied in the inverse design of the ship hull. The method has the advantage of fast convergence, and it is credible and valuable for engineering practice.
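The idea of a homotopy with a smoothly updated parameter can be sketched on a scalar root-finding problem; the cosine schedule and the target equation below are illustrative assumptions, not the hull-design objective.

```python
import math

def homotopy_solve(F, dF, x0, steps=20, newton_iters=5):
    """Continuation on H(x, t) = F(x) - (1 - t) * F(x0): the root is known at
    t = 0 (it is x0) and is tracked smoothly to a root of F at t = 1."""
    Fx0, x = F(x0), x0
    for k in range(1, steps + 1):
        # a smooth (cosine) update of the homotopy parameter, in the spirit of
        # the "properly smooth function" mentioned in the abstract
        t = 0.5 * (1.0 - math.cos(math.pi * k / steps))
        for _ in range(newton_iters):           # Newton correction at fixed t
            x = x - (F(x) - (1.0 - t) * Fx0) / dF(x)
    return x

# Illustrative scalar target equation (hypothetical, chosen for simplicity)
F = lambda x: x ** 3 + 2.0 * x - 5.0
dF = lambda x: 3.0 * x ** 2 + 2.0
root = homotopy_solve(F, dF, x0=0.0)
```

Deforming from an easy problem to the hard one keeps each Newton correction inside its basin of convergence, which is the source of the robustness the abstract credits to the method.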
Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model
NASA Astrophysics Data System (ADS)
Zhu, Hongyu; Petra, Noemi; Stadler, Georg; Isaac, Tobin; Hughes, Thomas J. R.; Ghattas, Omar
2016-07-01
We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection-diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov-Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems - i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian - we study the
NASA Astrophysics Data System (ADS)
Verrelst, J.; Rivera, J. P.; Leonenko, G.; Alonso, L.; Moreno, J.
2012-04-01
Radiative transfer (RT) modeling plays a key role for earth observation (EO) because it is needed to design EO instruments and to develop and test inversion algorithms. Inversion of an RT model is considered a successful approach for the retrieval of biophysical parameters because it is physically based and generally applicable. However, the broader community considers this approach laborious because of its many processing steps, and expert knowledge is required to realize precise model parameterization. We have recently developed a radiative transfer toolbox ARTMO (Automated Radiative Transfer Models Operator) with the purpose of providing in a graphical user interface (GUI) essential models and tools required for terrestrial EO applications such as model inversion. In short, the toolbox allows the user: i) to choose between various plant leaf and canopy RT models (e.g. models from the PROSPECT and SAIL family, FLIGHT), ii) to choose between spectral band settings of various air- and space-borne sensors or defining own sensor settings, iii) to simulate a massive amount of spectra based on a look-up table (LUT) approach and store them in a relational database, iv) to plot spectra of multiple models and compare them with measured spectra, and finally, v) to run model inversion against optical imagery given several cost options and accuracy estimates. In this work ARTMO was used to tackle some well-known problems related to model inversion. According to Hadamard conditions, mathematical models of physical phenomena are mathematically invertible if the solution of the inverse problem to be solved exists, is unique and depends continuously on data. This assumption is not always met because of the large number of unknowns, and different strategies have been proposed to overcome this problem. Several of these strategies have been implemented in ARTMO and were here analyzed to optimize the inversion performance. Data came from the SPARC-2003 dataset
Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.
2015-01-01
The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
Three-dimensional inverse modelling of magnetic anomaly sources based on a genetic algorithm
NASA Astrophysics Data System (ADS)
Montesinos, Fuensanta G.; Blanco-Montenegro, Isabel; Arnoso, José
2016-04-01
We present a modelling method to estimate the 3-D geometry and location of homogeneously magnetized sources from magnetic anomaly data. As input information, the procedure needs the parameters defining the magnetization vector (intensity, inclination and declination) and the Earth's magnetic field direction. When these two vectors are expected to be different in direction, we propose to estimate the magnetization direction from the magnetic map. Then, using this information, we apply an inversion approach based on a genetic algorithm which finds the geometry of the sources by seeking the optimum solution from an initial population of models in successive iterations through an evolutionary process. The evolution consists of three genetic operators (selection, crossover and mutation), which act on each generation, and a smoothing operator, which looks for the best fit to the observed data and a solution consisting of plausible compact sources. The method allows the use of non-gridded, non-planar and inaccurate anomaly data and non-regular subsurface partitions. In addition, neither constraints for the depth to the top of the sources nor an initial model are necessary, although previous models can be incorporated into the process. We show the results of a test using two complex synthetic anomalies to demonstrate the efficiency of our inversion method. The application to real data is illustrated with aeromagnetic data of the volcanic island of Gran Canaria (Canary Islands).
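The three genetic operators named above (selection, crossover, mutation) can be sketched on a toy misfit surface; the operator choices and all parameter values here are illustrative assumptions, not the paper's algorithm (which additionally applies a smoothing operator to favor compact sources).

```python
import numpy as np

def genetic_minimize(f, bounds, pop_size=40, generations=60, seed=0):
    """Toy GA: keep the fittest half, blend parents, jitter the offspring."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(generations):
        fitness = np.array([f(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]     # selection
        kids = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.uniform(size=2)
            child = w * a + (1.0 - w) * b                       # crossover
            child += rng.normal(0.0, 0.02 * (hi - lo), size=2)  # mutation
            kids.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, kids])
    fitness = np.array([f(ind) for ind in pop])
    return pop[np.argmin(fitness)]

# Illustrative misfit surface with its minimum at (1, -2)
misfit = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best = genetic_minimize(misfit, bounds=(-5.0, 5.0))
```

Because the best half of each generation survives unchanged, the best misfit is monotone non-increasing, and no gradient or initial model is needed, the property the abstract highlights.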
Variable soft sphere molecular model for inverse-power-law or Lennard-Jones potential
NASA Astrophysics Data System (ADS)
Koura, Katsuhisa; Matsumoto, Hiroaki
1991-10-01
The variable soft sphere (VSS) molecular model is introduced for both the viscosity and diffusion cross sections (coefficients) to be consistent with those of the inverse-power-law (IPL) or Lennard-Jones (LJ) potential. The VSS model has almost the same analytical and computational simplicity (computation time) as the variable hard sphere (VHS) model in the Monte Carlo simulation of rarefied gas flows. The null-collision Monte Carlo method is used to make comparative calculations for the molecular diffusion in a heat-bath gas and the normal shock wave structure in a simple gas. For the most severe test of the VSS model for the IPL potential, the softest practical model corresponding to the Maxwell molecule is chosen. The agreement in the molecular diffusion and shock wave structure between the VSS model and the IPL or LJ potential is remarkably good.
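In the VSS model the post-collision deflection angle is sampled from cos(chi) = 2 r**(1/alpha) - 1, which reduces to the isotropic variable hard sphere (VHS) law for alpha = 1; a quick sketch of this sampling rule (the alpha values and sample counts are illustrative):

```python
import numpy as np

def vss_deflection_cosines(alpha, n, seed=0):
    """Sample deflection-angle cosines for the VSS model.

    cos(chi) = 2 * r**(1/alpha) - 1 with r uniform on (0, 1);
    alpha = 1 recovers isotropic VHS scattering.
    """
    r = np.random.default_rng(seed).uniform(size=n)
    return 2.0 * r ** (1.0 / alpha) - 1.0

# With alpha > 1 scattering is biased forward (cosines shifted toward +1),
# which is what lets VSS match both viscosity and diffusion cross sections
cos_vhs = vss_deflection_cosines(alpha=1.0, n=200000)
cos_vss = vss_deflection_cosines(alpha=1.5, n=200000)
```

Analytically, E[cos chi] = 2/(1 + 1/alpha) - 1, so the VHS mean is 0 while alpha = 1.5 gives a mean of 0.2; the forward bias costs essentially nothing extra per collision, consistent with the abstract's point about computational simplicity.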
Inverse freezing in a cluster Ising spin-glass model with antiferromagnetic interactions.
Silva, C F; Zimmer, F M; Magalhaes, S G; Lacroix, C
2012-11-01
Inverse freezing is analyzed in a cluster spin-glass (SG) model that considers infinite-range disordered interactions between magnetic moments of different clusters (intercluster interaction) and short-range antiferromagnetic coupling J1 between Ising spins of the same cluster (intracluster interaction). The intercluster disorder J is treated within a mean-field theory by using a framework of one-step replica symmetry breaking. The effective model obtained by this treatment is computed by means of an exact diagonalization method. With the results we build phase diagrams of temperature T/J versus J1/J for several sizes of clusters ns (number of spins in the cluster). The phase diagrams show a second-order transition from the paramagnetic phase to the SG order at the freezing temperature Tf when J1/J is small. The increase in J1/J can then destroy the SG phase. It decreases Tf/J and introduces a first-order transition. In addition, inverse freezing can arise at a certain range of J1/J and large enough ns. Therefore, the nontrivial frustration generated by disorder and short-range antiferromagnetic coupling can introduce inverse freezing spontaneously.
NASA Technical Reports Server (NTRS)
Truong-Loi, My-Linh; Saatchi, Sassan; Jaruwatanadilok, Sermsak
2012-01-01
A semi-empirical algorithm for the retrieval of soil moisture, root mean square (RMS) height and biomass from polarimetric SAR data is explained and analyzed in this paper. The algorithm is a simplification of the distorted Born model. It takes into account the physical scattering phenomenon and has three major components: volume, double-bounce and surface. This simplified model uses the three backscattering coefficients (σHH, σHV and σVV) at low frequency (P-band). The inversion process uses the Levenberg-Marquardt non-linear least-squares method to estimate the structural parameters. The estimation process is entirely explained in this paper, from initialization of the unknowns to retrievals. A sensitivity analysis is also done where the initial values in the inversion process are varying randomly. The results show that the inversion process is not really sensitive to initial values and a major part of the retrievals has a root-mean-square error lower than 5% for soil moisture, 24 Mg/ha for biomass and 0.49 cm for roughness, considering a soil moisture of 40%, roughness equal to 3 cm and biomass varying from 0 to 500 Mg/ha with a mean of 161 Mg/ha.
Preliminary gravity inversion model of Frenchman Flat Basin, Nevada Test Site, Nevada
Phelps, Geoffrey A.; Graham, Scott E.
2002-01-01
The depth of the basin beneath Frenchman Flat is estimated using a gravity inversion method. Gamma-gamma density logs from two wells in Frenchman Flat constrained the density profiles used to create the gravity inversion model. Three initial models were considered using data from one well, then a final model is proposed based on new information from the second well. The preferred model indicates that a northeast-trending oval-shaped basin underlies Frenchman Flat at least 2,100 m deep, with a maximum depth of 2,400 m at its northeast end. No major horst and graben structures are predicted. Sensitivity analysis of the model indicates that each parameter contributes the same magnitude change to the model, up to 30 meters change in depth for a 1% change in density, but some parameters affect a broader area of the basin. The horizontal resolution of the model was determined by examining the spacing between data stations, and was set to 500 square meters.
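As a zeroth-order sanity check on a result of this kind (not the report's inversion method), the infinite-slab Bouguer relation dg = 2*pi*G*drho*h links a gravity low directly to basin thickness; the density contrast and anomaly values below are assumptions for illustration.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def slab_anomaly(thickness_m, delta_rho):
    """Infinite-slab (Bouguer) anomaly dg = 2*pi*G*drho*h, returned in mGal."""
    return 2.0 * math.pi * G * delta_rho * thickness_m * 1e5  # 1 m/s^2 = 1e5 mGal

def invert_thickness(anomaly_mgal, delta_rho):
    """First-order basin-thickness estimate from a gravity low (mGal)."""
    return anomaly_mgal / (2.0 * math.pi * G * delta_rho * 1e5)

# Assumed values: basin fill ~400 kg/m^3 less dense than basement, 35 mGal low
h = invert_thickness(35.0, 400.0)
```

With these assumed numbers the slab formula gives a thickness of roughly 2 km, the right order of magnitude for a basin of this type; a proper 3-D inversion like the report's is still needed to resolve basin shape and lateral density variation.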
Hassaballah, Abdallah I.; Hassan, Mohsen A.; Mardi, Azizi N.; Hamdi, Mohd
2013-01-01
The determination of the myocardium’s tissue properties is important in constructing functional finite element (FE) models of the human heart. To obtain accurate properties, especially for functional modeling of a heart, tissue properties have to be determined in vivo. At present, there are only a few in vivo methods that can be applied to characterize the internal myocardium tissue mechanics. This work introduced and evaluated an FE inverse method to determine myocardial tissue compressibility. Specifically, it combined an inverse FE method with the experimentally measured left ventricular (LV) internal cavity pressure and volume versus time curves. Results indicated that the FE inverse method showed good correlation between LV repolarization and the variations in the myocardium tissue bulk modulus K (K = 1/compressibility), and provided an ability to describe in vivo human myocardium material behavior. The myocardium bulk modulus can be effectively used as a diagnostic tool of the heart ejection fraction. The model developed proved to be robust and efficient. It offers a new perspective and means for the study of living-myocardium tissue properties, as it shows the variation of the bulk modulus throughout the cardiac cycle. PMID:24367544
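The quantity the inverse FE method tracks can be sketched with the textbook definition K = -V dP/dV applied to cavity pressure-volume curves. A minimal sketch on synthetic placeholder traces (not measured data, and not the paper's FE pipeline):

```python
import numpy as np

# Synthetic LV pressure-volume traces over one cardiac cycle (placeholders)
t = np.linspace(0.0, 0.8, 81)        # s
s = np.sin(np.pi * t / 0.8) ** 2
V = 120.0 - 50.0 * s                 # ml, hypothetical cavity volume
P = 10.0 + 110.0 * s                 # mmHg, hypothetical cavity pressure

dP = np.gradient(P, t)
dV = np.gradient(V, t)
valid = np.abs(dV) > 1e-6            # skip turning points where dV ~ 0
K = -V[valid] * dP[valid] / dV[valid]  # effective bulk modulus, mmHg
```

Here K varies through the cycle only because V does; in the paper the variation of K itself over the cycle is the diagnostic signal recovered by the inverse FE fit.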
NASA Astrophysics Data System (ADS)
Fortin, Will F. J.
The utility and meaning of a geophysical dataset are dependent on good interpretation informed by high-quality data, processing, and attribute examination via technical methodologies. Active source marine seismic reflection data contain a great deal of information in the location, phase, and amplitude of both pre- and post-stack seismic reflections. Using pre- and post-stack data, this work has extracted useful information from marine reflection seismic data in novel ways in both the oceanic water column and the sub-seafloor geology. In chapter 1 we develop a new method for estimating oceanic turbulence from a seismic image. This method is tested on synthetic seismic data to show its ability to accurately recover both the distribution and levels of turbulent diffusivity. We then apply the method to real data offshore Costa Rica where we observe lee waves. Our results find elevated diffusivities near the seafloor as well as above the lee waves, five times greater than surrounding waters and 50 times greater than open ocean diffusivities. Chapter 2 investigates subsurface geology in the Cascadia Subduction Zone and outlines a workflow for using pre-stack waveform inversion to produce highly detailed velocity models and seismic images. Using a newly developed inversion code, we achieve better imaging results than the product of a standard, user-intensive method for building a velocity model. Our results image the subduction interface ~30 km farther landward than previous work and better image faults and sedimentary structures above the oceanic plate as well as in the accretionary prism. The resultant velocity model is highly detailed, inverted every 6.25 m with ~20 m vertical resolution, and will be used to examine the role of fluids in the subduction system. These results help us to better understand the natural hazard risks associated with the Cascadia Subduction Zone. Chapter 3 returns to seismic oceanography and examines the dynamics of nonlinear
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
NASA Astrophysics Data System (ADS)
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers for structural health monitoring of thin-walled structures. This is because Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and without much attenuation. This brings the prospect of monitoring large structures with few sensors/actuators. However, the problem of damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. On top of that, the problem is made more challenging by the confounding factors of statistical variation in the material and geometric properties. Typically, this problem may also be ill-posed. Due to all these complexities, direct solution of the problem of damage detection and identification in SHM is impossible. Therefore, an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem". This requires a fast forward-problem solver. Due to the complexities involved in the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are slow and practically impossible to use in structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse problem solver.
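The indirect inversion loop the abstract describes (a fast forward solver called repeatedly by an optimizer) can be sketched generically. The forward model below is a smooth toy stand-in, not the real CMEP solver; the frequency band and parameters are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

freqs = np.linspace(100e3, 500e3, 25)  # Hz, hypothetical interrogation band

def forward(p, f):
    """Toy damage response vs frequency for (amplitude, size in mm); stand-in for a fast solver."""
    amp, d_mm = p
    return amp / (1.0 + (f * d_mm * 1e-3 / 5400.0) ** 2)

true = np.array([0.8, 4.0])   # hypothetical damage severity and size
data = forward(true, freqs)   # "measured" scattering response (noiseless here)

# Inverse problem solved indirectly: iterate the forward solver inside a fit
fit = least_squares(lambda p: forward(p, freqs) - data, x0=[0.5, 2.0])
```

Each optimizer iteration calls the forward model several times, which is why the abstract insists the forward solver must be fast.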
Multiple tail models including inverse measures for structural design under uncertainties
NASA Astrophysics Data System (ADS)
Ramu, Palaniappan
Sampling-based reliability estimation with expensive computer models may be computationally prohibitive due to a large number of required simulations. One way to alleviate the computational expense is to extrapolate reliability estimates from observed levels to unobserved levels. Classical tail modeling techniques provide a class of models to enable this extrapolation using asymptotic theory by approximating the tail region of the cumulative distribution function (CDF). This work proposes three alternate tail extrapolation techniques including inverse measures that can complement classical tail modeling. The proposed approach, multiple tail models, applies the two classical and three alternate extrapolation techniques simultaneously to estimate inverse measures at the extrapolation regions and use the median as the best estimate. It is observed that the range of the five estimates can be used as a good approximation of the error associated with the median estimate. Accuracy and computational efficiency are competing factors in selecting sample size. Yet, as our numerical studies reveal, the accuracy lost to the reduction of computational power is very small in the proposed method. The method is demonstrated on standard statistical distributions and complex engineering examples.
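The multiple-tail-models idea (several extrapolators applied to the same sample, median as the estimate, range as the error proxy) can be sketched with three tail models instead of five. The sample, threshold, and target probability below are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)   # stand-in for sampled limit-state responses

u = np.quantile(x, 0.9)         # tail threshold at the observed level
exc = x[x > u] - u              # exceedances over the threshold
p_exc = (x > u).mean()
p_target = 1e-3                 # unobserved level to extrapolate to

# Classical model 1: generalized Pareto tail
c, _, sc = stats.genpareto.fit(exc, floc=0.0)
q_gpd = u + stats.genpareto.ppf(1.0 - p_target / p_exc, c, loc=0.0, scale=sc)
# Classical model 2: exponential tail
q_exp = u + exc.mean() * np.log(p_exc / p_target)
# Alternate model: global normal fit (one of several possible alternates)
q_norm = stats.norm.ppf(1.0 - p_target, loc=x.mean(), scale=x.std())

est = np.array([q_gpd, q_exp, q_norm])
q_med = np.median(est)          # best estimate, as in the abstract
spread = est.max() - est.min()  # range of estimates as a crude error proxy
```

For this standard-normal sample the true 0.999 quantile is about 3.09, so the median of the extrapolators should land nearby while the spread indicates how much the models disagree.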
2014-08-19
finite element method, performance verification on experimental data, imaging of explosive devices, comparison with the classical Krein equation method...of the globally convergent numerical method of this project and the classical Krein equation method. It was established that while the first method...of a long standing problem about uniqueness of a phaseless 3-d inverse problem of quantum scattering. This was an open question since the publication
NASA Technical Reports Server (NTRS)
Smith, C. B.
1982-01-01
The Fymat analytic inversion method for retrieving a particle-area distribution function from anomalous diffraction multispectral extinction data and total area is generalized to the case of a variable complex refractive index m(lambda) near unity depending on spectral wavelength lambda. Inversion tests are presented for a water-haze aerosol model. An upper-phase shift limit of 5 pi/2 retrieved an accurate peak area distribution profile. Analytical corrections using both the total number and area improved the inversion.
TH-A-9A-06: Inverse Planning of Gamma Knife Radiosurgery Using Natural Physical Models
Riofrio, D; Ma, L; Zhou, J; Luan, S
2014-06-15
Purpose: Treatment-planning systems rely on computationally intensive optimization algorithms in order to provide radiation dose localization. We are investigating a new optimization paradigm based on natural physical modeling and simulations, which tend to evolve in time and find the minimum energy state. In our research, we aim to match physical models with radiation therapy inverse planning problems, where the minimum energy state coincides with the optimal solution. As a prototype study, we have modeled the inverse planning of Gamma Knife radiosurgery using the dynamic interactions between charged particles and demonstrate the potential of the paradigm. Methods: For inverse planning of Gamma Knife radiosurgery: (1) positive charges are uniformly placed on the surface of tumors and critical structures. (2) The Gamma Knife dose kernels of 4 mm, 8 mm and 16 mm radii are modeled as geometric objects with variable charges. (3) The number of shots for each kernel radius is obtained by solving a constrained integer-linear problem. (4) The shots are placed into the tumor volume and move under electrostatic forces. The simulation is performed until internal forces are zero or maximum iterations are reached. (5) Finally, non-negative least squares (NNLS) is used to calculate the beam-on times for each shot. Results: A 3D C-shaped tumor surrounding a spherical critical structure was used for testing the new optimization paradigm. These tests showed that charges spread out evenly, covering the tumor while keeping their distance from the critical structure, resulting in a high quality plan. Conclusion: We have developed a new paradigm for dose optimization based on the simulation of physical models. As prototype studies, we applied electrostatic models to Gamma Knife radiosurgery and demonstrated the potential of the new paradigm. Further research and fine-tuning of the model are underway. NSF CBET-0853157.
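Step (5) is a standard non-negative least-squares solve once the shot positions are fixed. A minimal sketch with toy Gaussian dose kernels standing in for the real 4/8/16 mm Gamma Knife kernels (all geometry and kernel widths below are hypothetical):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_vox, n_shots = 200, 8
vox = rng.uniform(0.0, 1.0, (n_vox, 3))        # target voxel positions
centers = rng.uniform(0.0, 1.0, (n_shots, 3))  # shot positions after relaxation

# Toy isotropic Gaussian dose kernels: dose at each voxel per unit beam-on time
d2 = np.sum((vox[:, None, :] - centers[None, :, :]) ** 2, axis=2)
D = np.exp(-d2 / 0.02)
prescription = np.ones(n_vox)                  # uniform target dose

# Beam-on times constrained to be non-negative, as physics requires
t, resid = nnls(D, prescription)
```

The non-negativity constraint is what distinguishes this from an ordinary least-squares solve: negative beam-on times are unphysical, so some shots may simply be switched off (time zero).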
Neuman, S; Glascoe, L; Kosovic, B; Dyer, K; Hanley, W; Nitao, J; Gordon, R
2005-11-03
The rapid identification of contaminant plume sources and their characteristics in urban environments can greatly enhance emergency response efforts. Source identification based on downwind concentration measurements is complicated by the presence of building obstacles that can cause flow diversion and entrainment. While high-resolution computational fluid dynamics (CFD) simulations are available for predicting plume evolution in complex urban geometries, such simulations require large computational effort. We make use of an urban puff model, the Defence Science Technology Laboratory's (Dstl) Urban Dispersion Model (UDM), which employs empirically based puff splitting techniques. UDM enables rapid urban dispersion simulations by combining traditional Gaussian puff modeling with empirically deduced mixing and entrainment approximations. Here we demonstrate the preliminary reconstruction of an atmospheric release event using stochastic sampling algorithms and Bayesian inference together with the rapid UDM urban puff model based on point measurements of concentration. We consider source inversions for both a prototype isolated building and for observations and flow conditions taken during the Joint URBAN 2003 field campaign at Oklahoma City. The Markov Chain Monte Carlo (MCMC) stochastic sampling method is used to determine likely source term parameters and considers both measurement and forward model errors. It should be noted that the stochastic methodology is general and can be used for time-varying release rates and flow conditions as well as nonlinear dispersion problems. The results of inversion indicate the probability of a source being at a particular location with a particular release rate. Uncertainty in observed data, or lack of sufficient data, is inherently reflected in the shape and size of the probability distribution of source term parameters. Although developed and used independently, source inversion with both UDM and a finite-element CFD code can be
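The MCMC source inversion can be sketched with a Metropolis random-walk sampler and a toy dispersion kernel standing in for the UDM forward model. Sensor positions, the true source, and the error level are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
sensors = np.array([[1.5, 0.0], [2.0, 0.5], [1.0, 1.0], [0.5, -0.5]])

def forward(theta):
    """Toy dispersion kernel standing in for the UDM forward model."""
    x0, y0, q = theta
    r2 = (sensors[:, 0] - x0) ** 2 + (sensors[:, 1] - y0) ** 2
    return q * np.exp(-r2)

true = np.array([1.0, 0.0, 5.0])        # hypothetical source (x, y, release rate)
obs = forward(true) + 0.01 * rng.standard_normal(len(sensors))
sigma = 0.05                            # assumed combined measurement + model error

def log_post(theta):
    if not 0.0 < theta[2] < 100.0:      # flat prior with a positive release rate
        return -np.inf
    r = forward(theta) - obs
    return -0.5 * np.sum(r ** 2) / sigma ** 2

theta = np.array([0.5, 0.0, 3.0])       # initial guess
lp = log_post(theta)
samples = []
for _ in range(5000):                   # Metropolis random-walk sampler
    prop = theta + 0.1 * rng.standard_normal(3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
post = np.array(samples[1000:])         # discard burn-in
```

As the abstract notes, sparse or noisy data show up directly as a wider posterior cloud rather than as a single point estimate.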
Cunefare, Kenneth A; Biesel, Van B; Tran, John; Rye, Ryan; Graf, Aaron; Holdhusen, Mark; Albanese, Anne-Marie
2003-02-01
Qualification of anechoic chambers is intended to demonstrate that the chamber supports the intended free-field environment within some permissible tolerance bounds. Key qualification issues include the method used to obtain traverse data, the analysis method for the data, and the use of pure tone or broadband noise as the chamber excitation signal. This paper evaluates the relative merits of continuous versus discrete traverses, of fixed versus optimal reference analysis of the traverse data, and of the use of pure tone versus broadband signals. The current practice of using widely spaced discrete sampling along a traverse is shown to inadequately sample the complexity of the sound field present with pure tone traverses, but is suitable for broadband traverses. Continuous traverses, with spatial resolution on the order of 15% of the wavelength at the frequency of interest, are shown to be necessary to fully resolve the spatial complexity of pure tone qualifications. The use of an optimal reference method for computing the deviations from inverse square law is shown to significantly improve the apparent performance of the chamber for pure tone qualifications. Finally, the use of broadband noise as the test signal, as compared to pure tone traverses over the same span, is demonstrated to be a marginal indicator of chamber performance.
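The fixed-versus-optimal-reference distinction can be sketched on a synthetic traverse: the deviation from inverse-square law is the measured level minus a 20 log10(r) line, and the "optimal reference" picks that line's offset by least squares instead of pinning it to one reference point. The traverse geometry and scatter level below are assumptions:

```python
import numpy as np

r = np.linspace(1.0, 5.0, 41)   # traverse distances, m (hypothetical)
rng = np.random.default_rng(3)
# Synthetic pure-tone levels following 1/r with small chamber-induced scatter
L = 94.0 - 20.0 * np.log10(r) + 0.3 * rng.standard_normal(r.size)

# Fixed reference: anchor the inverse-square line at the first traverse point
dev_fixed = L - (L[0] - 20.0 * np.log10(r / r[0]))
# Optimal reference: choose the line's offset by least squares over the traverse
b = np.mean(L + 20.0 * np.log10(r))
dev_opt = L - (b - 20.0 * np.log10(r))

rms_fixed = np.sqrt(np.mean(dev_fixed ** 2))
rms_opt = np.sqrt(np.mean(dev_opt ** 2))  # never worse than the fixed anchor
```

Because the least-squares offset minimizes the squared deviations over all possible anchors, the optimal-reference deviations are mathematically guaranteed not to exceed the fixed-reference ones in an RMS sense, which is exactly the "improved apparent performance" effect the paper reports.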
Numerical solution of 2D-vector tomography problem using the method of approximate inverse
NASA Astrophysics Data System (ADS)
Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna
2016-08-01
We propose a numerical solution of reconstruction problem of a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good results of reconstruction of vector fields.
Inverse dynamics modelling of upper-limb tremor, with cross-correlation analysis
Ketteringham, Laurence P.; Neild, Simon A.; Hyde, Richard A.; Jones, Rosie J.S.; Davies-Smith, Angela M.
2014-01-01
A method to characterise upper-limb tremor using inverse dynamics modelling in combination with cross-correlation analyses is presented. A 15 degree-of-freedom inverse dynamics model is used to estimate the joint torques required to produce the measured limb motion, given a set of estimated inertial properties for the body segments. The magnitudes of the estimated torques are useful when assessing patients or evaluating possible intervention methods. The cross-correlation of the estimated joint torques is proposed to gain insight into how tremor in one limb segment interacts with tremor in another. The method is demonstrated using data from a single patient presenting intention tremor because of multiple sclerosis. It is shown that the inertial properties of the body segments can be estimated with sufficient accuracy using only the patient's height and weight as a priori knowledge, which ensures the method's practicality and transferability to clinical use. By providing a more detailed, objective characterisation of patient-specific tremor properties, the method is expected to improve the selection, design and assessment of treatment options on an individual basis. PMID:26609379
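The cross-correlation step can be sketched on two synthetic joint-torque traces: a 4 Hz tremor signal and a copy lagged by 30 ms, with the peak of the normalized cross-correlation recovering the lag. Sampling rate, tremor frequency, and lag are all assumptions, not the patient data:

```python
import numpy as np

fs = 100.0                      # Hz, assumed sampling rate
t = np.arange(0.0, 5.0, 1.0 / fs)
rng = np.random.default_rng(4)
# Synthetic 4 Hz tremor torques; the "wrist" trace lags the "elbow" by 30 ms
tau_elbow = np.sin(2 * np.pi * 4 * t) + 0.1 * rng.standard_normal(t.size)
tau_wrist = np.sin(2 * np.pi * 4 * (t - 0.03)) + 0.1 * rng.standard_normal(t.size)

a = tau_elbow - tau_elbow.mean()
b = tau_wrist - tau_wrist.mean()
xcorr = np.correlate(a, b, mode="full") / (a.std() * b.std() * t.size)
lags = np.arange(-t.size + 1, t.size) / fs
lag_at_peak = lags[np.argmax(xcorr)]   # recovers roughly -0.03 s
```

In the paper's setting, the lag and strength of such peaks between estimated joint torques indicate how tremor in one limb segment interacts with tremor in another.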
Goal Directed Model Inversion: Learning Within Domain Constraints
NASA Technical Reports Server (NTRS)
Colombano, Silvano P.; Compton, Michael; Raghavan, Bharathi; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
Goal Directed Model Inversion (GDMI) is an algorithm designed to generalize supervised learning to the case where target outputs are not available to the learning system. The output of the learning system becomes the input to some external device or transformation, and only the output of this device or transformation can be compared to a desired target. The fundamental driving mechanism of GDMI is to learn from success. Given that a wrong outcome is achieved, one notes that the action that produced that outcome "would have been right if the outcome had been the desired one." The algorithm makes use of these intermediate "successes" to achieve the final goal. A unique and potentially very important feature of this algorithm is the ability to modify the output of the learning module to force upon it a desired syntactic structure. This differs from ordinary supervised learning in the following way: in supervised learning the exact desired output pattern must be provided. In GDMI instead, it is possible to require simply that the output obey certain rules, i.e., that it "make sense" in some way determined by the knowledge domain. The exact pattern that will achieve the desired outcome is then found by the system. The ability to impose rules while allowing the system to search for its own answers in the context of neural networks is potentially a major breakthrough in two ways: 1) it may allow the construction of networks that can incorporate immediately some important knowledge, i.e. would not need to learn everything from scratch as normally required at present, and 2) learning and searching would be limited to the areas where it is necessary, thus facilitating and speeding up the process. These points are illustrated with examples from robotic path planning and parametric design.
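The "learn from success" mechanism can be sketched in miniature: every action and the outcome it actually produced form a valid (outcome → action) training pair, so an inverse model is fitted on achieved outcomes and then queried at the desired target. A linear least-squares learner stands in for the neural network, and the external device is a hypothetical linear map (both assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(5)

def device(action):
    """The external transformation; unknown to the learner (hypothetical linear map)."""
    A = np.array([[1.0, 0.5], [-0.3, 1.2]])
    return A @ action

target = np.array([2.0, 1.0])  # desired outcome
outcomes, actions = [], []     # "success" pairs: achieved outcome -> action taken
action = rng.standard_normal(2)
for _ in range(30):
    outcome = device(action)
    # Learn from success: this action *would have been right* had the achieved
    # outcome been the desired one, so (outcome, action) is a training pair.
    outcomes.append(outcome)
    actions.append(action)
    W, *_ = np.linalg.lstsq(np.array(outcomes), np.array(actions), rcond=None)
    action = target @ W + 0.05 * rng.standard_normal(2)  # query + exploration

final_error = np.linalg.norm(device(target @ W) - target)
```

The domain-constraint aspect of GDMI (forcing a syntactic structure on the output) is not captured by this sketch; it only illustrates the intermediate-success bootstrapping.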
Goal Directed Model Inversion: Learning Within Domain Constraints
NASA Technical Reports Server (NTRS)
Colombano, Silvano P.; Compton, Michael; Raghavan, Bharathi; Friedland, Peter (Technical Monitor)
1994-01-01
Goal Directed Model Inversion (GDMI) is an algorithm designed to generalize supervised learning to the case where target outputs are not available to the learning system. The output of the learning system becomes the input to some external device or transformation, and only the output of this device or transformation can be compared to a desired target. The fundamental driving mechanism of GDMI is to learn from success. Given that a wrong outcome is achieved, one notes that the action that produced that outcome "would have been right if the outcome had been the desired one." The algorithm makes use of these intermediate "successes" to achieve the final goal. A unique and potentially very important feature of this algorithm is the ability to modify the output of the learning module to force upon it a desired syntactic structure. This differs from ordinary supervised learning in the following way: in supervised learning the exact desired output pattern must be provided. In GDMI instead, it is possible to require simply that the output obey certain rules, i.e., that it "make sense" in some way determined by the knowledge domain. The exact pattern that will achieve the desired outcome is then found by the system. The ability to impose rules while allowing the system to search for its own answers in the context of neural networks is potentially a major breakthrough in two ways: (1) it may allow the construction of networks that can incorporate immediately some important knowledge, i.e., would not need to learn everything from scratch as normally required at present; and (2) learning and searching would be limited to the areas where it is necessary, thus facilitating and speeding up the process. These points are illustrated with examples from robotic path planning and parametric design.
Method of Minimax Optimization in the Coefficient Inverse Heat-Conduction Problem
NASA Astrophysics Data System (ADS)
Diligenskaya, A. N.; Rapoport, É. Ya.
2016-07-01
Consideration has been given to the inverse problem of identifying a temperature-dependent thermal-conductivity coefficient. The problem was formulated in an extremum statement as a search for a quantity treated as the optimum control of an object with distributed parameters, described by a nonlinear homogeneous spatially one-dimensional Fourier partial differential equation with boundary conditions of the second kind. As the optimality criterion, the authors used the error (minimized on the time interval of observation) of uniform approximation of the temperature, computed on the object's model at an assigned point of the segment of variation of the spatial variable, to its directly measured value. Pre-parametrization of the sought control action, which a priori fixes its description up to the parameters of its representation in a class of polynomial temperature functions, reduced the problem under study to one of parametric optimization. To solve the formulated problem, the authors used an analytical minimax-optimization method that takes account of the alternance properties of the sought optimum solutions; on this basis, the computation of the optimum values of the sought parameters reduces to a closed system of equations fixing the minimax deviations of the calculated temperature values from those observed on the identification interval. The obtained results confirm the efficiency of the proposed method for a certain range of applied problems. The authors have also studied the influence of the coordinate of the temperature-measurement point on the accuracy of solution of the inverse problem.
Forward and inverse effects of the complete electrode model in neonatal EEG.
Pursiainen, S; Lew, S; Wolters, C H
2017-03-01
This paper investigates finite element method-based modeling in the context of neonatal electroencephalography (EEG). In particular, the focus lies on electrode boundary conditions. We compare the complete electrode model (CEM) with the point electrode model (PEM), which is the current standard in EEG. In the CEM, the voltage experienced by an electrode is modeled more realistically as the integral average of the potential distribution over its contact surface, whereas the PEM relies on a point value. Consequently, the CEM takes into account the subelectrode shunting currents, which are absent in the PEM. In this study, we aim to find out how the electrode voltages predicted by these two models differ if standard-size electrodes are attached to the head of a neonate. Additionally, we study voltages and voltage variation on electrode surfaces with two source locations: 1) next to the C6 electrode and 2) directly under the Fz electrode and the frontal fontanel. A realistic model of a neonatal head, including a skull with fontanels and sutures, is used. Based on the results, the forward simulation differences between CEM and PEM are in general small, but significant outliers can occur in the vicinity of the electrodes. The CEM can be considered an integral part of the outer head model. The outcome of this study helps in understanding volume conduction of neonatal EEG, since it clarifies the role of advanced skull and electrode modeling in forward and inverse computations. NEW & NOTEWORTHY The effect of the complete electrode model on electroencephalography forward and inverse computations is explored. A realistic neonatal head model, including a skull structure with fontanels and sutures, is used. The electrode and skull modeling differences are analyzed and compared with each other. The results suggest that the complete electrode model can be considered an integral part of the outer head model. To achieve optimal source localization results, accurate electrode
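The CEM/PEM contrast at the heart of the paper can be sketched on a toy potential field: the PEM reads the potential at the electrode's centre point, while a CEM-style voltage averages the potential over the contact surface. The source geometry and electrode size below are illustrative, not the paper's FE head model:

```python
import numpy as np

def potential(x, y):
    """Toy potential of a shallow dipole-like source near the origin (not the FE model)."""
    return 1.0 / np.hypot(x - 0.002, y) - 1.0 / np.hypot(x + 0.002, y)

cx, r = 0.01, 0.005   # electrode centre (m from the source) and radius (assumed)
theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
rad = r * np.sqrt(np.linspace(0.0, 1.0, 200))  # area-uniform radial sampling
xg = cx + rad[:, None] * np.cos(theta)[None, :]
yg = rad[:, None] * np.sin(theta)[None, :]

v_pem = potential(cx, 0.0)        # PEM: potential at the electrode centre
v_cem = potential(xg, yg).mean()  # CEM-like: average over the contact surface
difference = abs(v_pem - v_cem)
```

The difference grows when the potential varies strongly across the contact patch, which is why the paper finds the largest CEM/PEM outliers for sources in the vicinity of the electrodes.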
Forward and inverse effects of the complete electrode model in neonatal EEG
Lew, S.; Wolters, C. H.
2016-01-01
This paper investigates finite element method-based modeling in the context of neonatal electroencephalography (EEG). In particular, the focus lies on electrode boundary conditions. We compare the complete electrode model (CEM) with the point electrode model (PEM), which is the current standard in EEG. In the CEM, the voltage experienced by an electrode is modeled more realistically as the integral average of the potential distribution over its contact surface, whereas the PEM relies on a point value. Consequently, the CEM takes into account the subelectrode shunting currents, which are absent in the PEM. In this study, we aim to find out how the electrode voltages predicted by these two models differ if standard-size electrodes are attached to the head of a neonate. Additionally, we study voltages and voltage variation on electrode surfaces with two source locations: 1) next to the C6 electrode and 2) directly under the Fz electrode and the frontal fontanel. A realistic model of a neonatal head, including a skull with fontanels and sutures, is used. Based on the results, the forward simulation differences between CEM and PEM are in general small, but significant outliers can occur in the vicinity of the electrodes. The CEM can be considered an integral part of the outer head model. The outcome of this study helps in understanding volume conduction of neonatal EEG, since it clarifies the role of advanced skull and electrode modeling in forward and inverse computations. NEW & NOTEWORTHY The effect of the complete electrode model on electroencephalography forward and inverse computations is explored. A realistic neonatal head model, including a skull structure with fontanels and sutures, is used. The electrode and skull modeling differences are analyzed and compared with each other. The results suggest that the complete electrode model can be considered an integral part of the outer head model. To achieve optimal source localization results, accurate electrode
A combined direct/inverse three-dimensional transonic wing design method for vector computers
NASA Technical Reports Server (NTRS)
Weed, R. A.; Carlson, L. A.; Anderson, W. K.
1984-01-01
A three-dimensional transonic-wing design algorithm for vector computers is developed, and the results of sample computations are presented graphically. The method incorporates the direct/inverse scheme of Carlson (1975), a Cartesian grid system with boundary conditions applied at a mean plane, and a potential-flow solver based on the conservative form of the full potential equation and using the ZEBRA II vectorizable solution algorithm of South et al. (1980). The accuracy and consistency of the method with regard to direct and inverse analysis and trailing-edge closure are verified in the test computations.
Estimation of root water uptake as a sink term by inverse modeling
NASA Astrophysics Data System (ADS)
Hu, Yao; Schwichtenberg, Guido; Samaniego, Luis; Attinger, Sabine; Hildebrandt, Anke
2010-05-01
Modeling water uptake by plant roots is essential to improve our understanding of the impact of ecosystems on the hydrological cycle and climate. However, no measurement devices enable us to measure water uptake directly. Consequently, root water uptake has to be inferred by numerical methods (e.g. inverse modeling). This kind of numerical inversion is further complicated by the fact that vertical water fluxes between measurement points and water uptake by roots occur simultaneously in the soil matrix during daytime, and are difficult to separate. In order to tackle the challenge of quantifying the water uptake, we split our study into two parts: first, we calibrate our soil model to estimate soil parameters during winter; second, we estimate the water uptake as a sink term during daytime in summer, while assuming our soil hydraulic parameters to be known a priori. The solution is then checked during the nighttime. For the first step, we use geostatistical interpolation techniques to derive the soil texture fields and use pedotransfer functions to specify the ranges of the soil hydraulic parameters. We then obtain optimal soil parameter sets by combining a Richards model with a global optimization algorithm. For the second step, we use the day-night differences of water content changes to derive likely root water uptake depths and profiles. Although many state-of-the-art approaches use root spatial distribution functions to allocate plant transpiration over the soil profile, we decided to follow a different approach: in our model, any layer in the soil column may contribute a certain percentage of the total water uptake. We will compare this approach with another inverse modeling approach, which infers water uptake by using root distribution parameters. We expect that this new approach will offer us an opportunity to gain a better understanding of vertical soil water flow and root water uptake for the several plots of differing plant diversity in the Jena Biodiversity
A trade-off solution between model resolution and covariance in surface-wave inversion
Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.
2010-01-01
Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
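The singular-value trade-off can be sketched for a generic linearized system G m = d: sort the singular values, pick the regularization parameter near the first one that approaches zero, and form the resolution and unit covariance matrices at that level. The kernel below is a synthetic Gaussian-smoothing stand-in, and the "approaches zero" threshold is an assumption:

```python
import numpy as np

n = 20
# Synthetic ill-posed linear kernel (Gaussian smoothing), standing in for the
# linearized surface-wave system in the last iteration
G = np.array([[np.exp(-((i - j) / 3.0) ** 2) for j in range(n)] for i in range(n)])

U, s, Vt = np.linalg.svd(G)               # s is sorted from large to small
# "First singular value approaching zero", here via a simple relative threshold
alpha = s[np.argmax(s < 1e-3 * s[0])]

f = s ** 2 / (s ** 2 + alpha ** 2)        # Tikhonov filter factors
R = Vt.T @ np.diag(f) @ Vt                # model resolution matrix
G_dagger = Vt.T @ np.diag(f / s) @ U.T    # regularized generalized inverse
C_unit = G_dagger @ G_dagger.T            # unit covariance matrix -> error bars
```

Larger alpha pushes the filter factors toward zero (lower resolution, smaller covariance) and smaller alpha does the opposite, which is the resolution-covariance trade-off the abstract describes.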
Bedos, Carole; Rousseau-Djabri, Marie-France; Loubet, Benjamin; Durand, Brigitte; Flura, Dominique; Briand, Olivier; Barriuso, Enrique
2010-04-01
Few data sets of pesticide volatilization from plants at the field scale are available. In this work, we report measurements of fenpropidin and chlorothalonil volatilization on a wheat field using the aerodynamic gradient (AG) method and an inverse dispersion modeling approach (using the FIDES model). Other data necessary to run volatilization models are also reported: measured application dose, crop interception, plant foliage residue, upwind concentrations, and meteorological conditions. The comparison of the AG and inverse modeling methods proved the latter to be reliable and hence suitable for estimating volatilization rates with minimized costs. Different diurnal/nocturnal volatilization patterns were observed: fenpropidin volatilization peaked on the application day and then decreased dramatically, while chlorothalonil volatilization remained fairly stable over a week-long period. Cumulated emissions after 31 h reached 3.5 g ha(-1) and 5 g ha(-1), respectively (0.8% and 0.6% of the theoretical application dose). A larger difference in volatilization rates was expected given differences in vapor pressure, and for fenpropidin, volatilization should have continued given that 80% of the initial amount remained on plant foliage for 6 days. We thus ask if vapor pressure alone can accurately estimate volatilization just after application and then question the state of foliar residue. We identified adsorption, formulation, and extraction techniques as relevant explanations.
A musculoskeletal shoulder model based on pseudo-inverse and null-space optimization.
Terrier, Alexandre; Aeberhard, Martin; Michellod, Yvan; Mullhaupt, Philippe; Gillet, Denis; Farron, Alain; Pioletti, Dominique P
2010-11-01
The goal of the present work was to assess the feasibility of using a pseudo-inverse and null-space optimization approach in modeling shoulder biomechanics. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted of the arm, and the external forces were the arm weight, 6 scapulo-humeral muscles and the reaction at the glenohumeral joint, which was considered a spherical joint. The muscle wrapping was considered around the humeral head, assumed spherical. The dynamical equations were solved in a Lagrangian approach. The mathematical redundancy of the mechanical system was resolved in two steps: a pseudo-inverse optimization to minimize the square of the muscle stress, and a null-space optimization to restrict the muscle forces to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were efficiently solved by the proposed method. The prediction of muscle moment arms was consistent with cadaveric measurements, and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrated that the developed algorithm has great potential for more complex musculoskeletal modeling of the shoulder joint. In particular, it could be further applied to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa.
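The two-step scheme can be sketched for a generic torque-sharing problem: the pseudo-inverse gives the minimum-norm muscle forces satisfying M f = tau, and a null-space correction pushes out-of-bounds forces back toward their limits without changing the joint torque. The moment-arm values, torques, and force bounds below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.uniform(-0.03, 0.03, (3, 6))   # moment-arm matrix, m (3 torques, 6 muscles)
tau = np.array([4.0, -2.0, 1.0])       # required joint torques, N m

M_pinv = np.linalg.pinv(M)
f = M_pinv @ tau                       # step 1: minimum-norm forces with M f = tau
N_proj = np.eye(6) - M_pinv @ M        # null-space projector of M

lo, hi = 0.0, 300.0                    # assumed physiological force bounds, N
for _ in range(50):                    # step 2: push violations back via the null space
    corrected = np.clip(f, lo, hi)
    f = f + N_proj @ (corrected - f)   # M @ (N_proj @ v) = 0, so torque is unchanged
    if np.allclose(np.clip(f, lo, hi), f, atol=1e-6):
        break

torque_error = np.linalg.norm(M @ f - tau)  # remains ~0 throughout the correction
```

Because every correction lies in the null space of M, the joint torques are preserved exactly at each iteration; only the distribution of force among redundant muscles changes.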
Sun, Yu; Hou, Zhangshuan; Huang, Maoyi; Tian, Fuqiang; Leung, Lai-Yung R.
2013-12-10
This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, deterministic least-square fitting and stochastic Markov-Chain Monte-Carlo (MCMC) Bayesian inversion, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by least-square fitting provides little improvement in the model simulations, whereas the sampling-based stochastic inversion approaches are consistent - as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified as significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. The temporal resolution of observations has a larger impact on the results of inverse modeling using heat flux data than on those using runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
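The sampling-based calibration idea can be illustrated with a minimal Metropolis MCMC sketch. The linear "forward model" y = k*x, the noise level, and all numbers below are toy stand-ins for the CLM4 setup, not values from the study:

```python
import math
import random

random.seed(1)

# Toy forward model y = k * x with synthetic noisy observations
k_true, sigma = 2.5, 0.2
xs = [0.1 * i for i in range(1, 51)]
obs = [k_true * x + random.gauss(0.0, sigma) for x in xs]

def log_post(k):
    # flat prior; Gaussian likelihood of the model-data misfit
    return -sum((y - k * x) ** 2 for x, y in zip(xs, obs)) / (2.0 * sigma**2)

# Metropolis sampler: propose, accept with probability min(1, posterior ratio)
k, samples = 1.0, []
for step in range(5000):
    prop = k + random.gauss(0.0, 0.1)
    dlp = log_post(prop) - log_post(k)
    if random.random() < math.exp(min(0.0, dlp)):
        k = prop
    if step > 1000:                      # discard burn-in
        samples.append(k)

k_mean = sum(samples) / len(samples)     # posterior mean estimate
```

The retained samples approximate the posterior, so their spread directly gives the predictive interval that the abstract describes narrowing as more observations come in.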
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
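A heavily simplified sketch of the MLSL-plus-MADS idea: random multistart for the global phase and a crude compass/pattern search standing in for MADS in the local phase, applied to a toy objective with two minima (the real algorithm adds clustering to avoid redundant local searches):

```python
import random

def f(x):
    # toy objective with two global minima, at x = -2 and x = +2
    return (x * x - 4.0) ** 2

def compass_search(x, step=0.5, tol=1e-6):
    # crude pattern-search stand-in for the MADS local phase:
    # poll left/right, move on improvement, otherwise halve the step
    while step > tol:
        if f(x - step) < f(x):
            x -= step
        elif f(x + step) < f(x):
            x += step
        else:
            step *= 0.5
    return x

random.seed(0)
# multistart global phase: local searches from random points collect
# the distinct solutions of the problem
minima = {round(compass_search(random.uniform(-5, 5)), 3) for _ in range(20)}
```

Collecting the rounded endpoints in a set is what exposes the multiple solutions; with these seeds the search recovers both minima.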
Vishnuvardhan, J; Krishnamurthy, C V; Balasubramaniam, Krishnan
2009-02-01
A novel blind inversion method using Lamb wave S(0) and A(0) mode velocities is proposed for the complete determination of the elastic moduli, material symmetries, and principal plane orientations of anisotropic plates. The approach takes advantage of a genetic algorithm, introduces the notion of "statistically significant" elastic moduli, and utilizes their sensitivities to velocity data to reconstruct the elastic moduli. The unknown material symmetry and the principal planes are then evaluated using the method proposed by Cowin and Mehrabadi [Q. J. Mech. Appl. Math. 40, 451-476 (1987)]. The blind inversion procedure was verified using simulated ultrasonic velocity data sets on materials with transversely isotropic, orthotropic, and monoclinic symmetries. A modified double-ring configuration of the single-transmitter, multiple-receiver compact array was developed to experimentally validate the blind inversion approach on a quasi-isotropic graphite-epoxy composite plate. This technique finds application in the area of material characterization and structural health monitoring of anisotropic plate-like structures.
Freezing Time Estimation for a Cylindrical Food Using an Inverse Method
NASA Astrophysics Data System (ADS)
Hu, Yao Xing; Mihori, Tomoo; Watanabe, Hisahiko
Most published methods for estimating freezing time require the thermal properties of the product and the relevant heat transfer coefficients between the product and the cooling medium. However, the difficulty of obtaining thermal data for use in industrial food freezing systems has been pointed out. We have developed a new procedure for estimating the freezing time of a slab-shaped food by an inverse method, which does not require knowledge of the thermal properties of the food being frozen. How the inverse method is applied to freezing-time estimation depends on the shape of the body to be frozen. In this paper, we extended the inverse method to cylindrical food bodies, using selected explicit expressions to describe the temperature profile. The temperature profile was found to be successfully approximated by a logarithmic function, from which an approximate equation for the freezing time was derived. An inversion procedure for estimating the freezing time based on this approximate equation was validated via a numerical experiment.
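The inverse idea of fitting an assumed logarithmic radial temperature profile, rather than solving the heat equation with known properties, can be sketched as a simple least-squares fit. The radii, profile coefficient, and surface temperature below are invented for illustration and are not the paper's data or its freezing-time equation:

```python
import math

# Synthetic "measured" temperatures across the frozen shell of a cylinder,
# assumed to follow the logarithmic profile T(r) = a * ln(r/R) + Ts
R, a_true, Ts = 0.05, 8.0, -20.0
radii = [R * j / 10 for j in range(1, 11)]
temps = [a_true * math.log(r / R) + Ts for r in radii]

# Least-squares fit of (a, Ts): the inverse step that replaces knowing
# the thermal properties of the food
n = len(radii)
xs = [math.log(r / R) for r in radii]
sx, sy = sum(xs), sum(temps)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, temps))
a_fit = (n * sxy - sx * sy) / (n * sxx - sx * sx)
Ts_fit = (sy - a_fit * sx) / n
```

With the profile parameters recovered from temperature measurements alone, the paper's explicit freezing-time expression can then be evaluated without separately measured thermal properties.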
Neural-network-based speed controller for induction motors using inverse dynamics model
NASA Astrophysics Data System (ADS)
Ahmed, Hassanein S.; Mohamed, Kamel
2016-08-01
Artificial Neural Networks (ANNs) are excellent tools for controller design. ANNs have many advantages compared to traditional control methods, including a simple architecture, straightforward training and generalization, and insensitivity to distortion when approximating nonlinear mappings from inexact input data. Induction motors have many excellent features, such as simple and rugged construction, high reliability, high robustness, low cost, minimum maintenance, high efficiency, and good self-starting capabilities. In this paper, we propose a neural-network-based inverse dynamics model for speed control of induction motors. Simulation results show that the ANN has a high tracking capability.
NASA Astrophysics Data System (ADS)
Ita, B. I.; Ehi-Eromosele, C. O.; Edobor-Osoh, A.; Ikeuba, A. I.
2014-11-01
By using the Nikiforov-Uvarov (NU) method, the Schrödinger equation has been solved for the sum of the inversely quadratic Hellmann potential (IQHP) and the inversely quadratic potential (IQP) for any angular momentum quantum number l. The energy eigenvalues and their corresponding eigenfunctions have been obtained in terms of Laguerre polynomials. Special cases of the sum of these potentials have been considered and their energy eigenvalues also obtained.
NASA Astrophysics Data System (ADS)
Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.
2012-12-01
We develop a three-step maximum a posteriori probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them, and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are initially inverted for using the least squares method without a positivity constraint, then damped to a physically reasonable range. This first-step MAP inversion quickly brings the inversion close to the 'true' solution and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo Inversion (MCI) technique, with all parameters obtained from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion, with the fault geometry parameters fixed. We first used a designed model with a 45 degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of
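The first-step idea, stochastic search with cooling toward the maximum a posteriori point, can be sketched with a plain (non-adaptive) simulated annealing loop on a toy 1-D posterior; ASA itself additionally adapts step sizes and temperatures per parameter, which is omitted here:

```python
import math
import random

def neg_log_post(x):
    # toy 1-D negative log-posterior with small local minima; the global
    # minimum sits near x ~ 3.39
    return (x - 3.0) ** 2 + 0.5 * math.sin(5.0 * x)

random.seed(2)
x, T = 0.0, 5.0
best = x
for _ in range(20000):
    prop = x + random.gauss(0.0, 0.5)          # random proposal
    dE = neg_log_post(prop) - neg_log_post(x)
    if dE < 0 or random.random() < math.exp(-dE / T):
        x = prop                               # Metropolis acceptance
    if neg_log_post(x) < neg_log_post(best):
        best = x                               # MAP estimate so far
    T *= 0.9995                                # geometric cooling schedule
```

Early high temperatures let the chain jump over local maxima of the posterior, and cooling then concentrates the search around the MAP point, mirroring the role of ASA in the first inversion step.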
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.
2015-07-01
This paper presents a distributed magnetotelluric inversion scheme based on an adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, the meshes for the forward and inverse problems were decoupled. For the calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gain, the EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems, based on the linearized model resolution matrix, was developed. To make this algorithm suitable for large-scale problems, a low-rank approximation of the linearized model resolution matrix is used. In order to fill the gap between the initial and true model complexities and better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of the points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit dependency on an initial model guess. Additionally, it is demonstrated
An inverse method to recover the SFR and reddening properties from spectra of galaxies
NASA Astrophysics Data System (ADS)
Vergely, J.-L.; Lançon, A.; Mouhcine
2002-11-01
We develop a non-parametric inverse method to investigate the star formation rate, the metallicity evolution and the reddening properties of galaxies based on their spectral energy distributions (SEDs). This approach allows us to clarify the level of information present in the data, depending on its signal-to-noise ratio. When low resolution SEDs are available in the ultraviolet, optical and near-IR wavelength ranges together, we conclude that it is possible to constrain the star formation rate and the effective dust optical depth simultaneously with a signal-to-noise ratio of 25. With excellent signal-to-noise ratios, the age-metallicity relation can also be constrained. We apply this method to the well-known nuclear starburst in the interacting galaxy NGC 7714. We focus on deriving the star formation history and the reddening law. We confirm that classical extinction models cannot provide an acceptable simultaneous fit of the SED and the lines. We also confirm that, with the adopted population synthesis models and in addition to the current starburst, an episode of enhanced star formation that started more than 200 Myr ago is required. As the time elapsed since the last interaction with NGC 7715, based on dynamical studies, is about 100 Myr, our result reinforces the suggestion that this interaction might not have been the most important event in the life of NGC 7714.
Systematic hierarchical coarse-graining with the inverse Monte Carlo method
Lyubartsev, Alexander P.; Naômé, Aymeric; Vercauteren, Daniel P.; Laaksonen, Aatto
2015-12-28
We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained in detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can be used in simulations at less accurate level after scaling up the size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730–3737 (1995)] on a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package MagiC is developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (bacterial LiaR regulator bound to a 26 base pairs DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair-potentials are used directly as look-up tables but here we have fitted them to five Gaussians and a repulsive wall. Results show stable association between DNA and the model protein as well as similar position fluctuation profile.
Uplift histories of Africa and Australia from linear inverse modeling of drainage inventories
NASA Astrophysics Data System (ADS)
Rudge, John F.; Roberts, Gareth G.; White, Nicky J.; Richardson, Christopher N.
2015-05-01
We describe and apply a linear inverse model which calculates spatial and temporal patterns of uplift rate by minimizing the misfit between inventories of observed and predicted longitudinal river profiles. Our approach builds upon a more general, nonlinear, optimization model, which suggests that shapes of river profiles are dominantly controlled by upstream advection of kinematic waves of incision produced by spatial and temporal changes in regional uplift rate. Here we use the method of characteristics to solve a version of this problem. A damped, nonnegative, least squares approach is developed that permits river profiles to be inverted as a function of uplift rate. An important benefit of a linearized treatment is low computational cost. We have tested our algorithm by inverting 957 river profiles from both Africa and Australia. For each continent, the drainage network was constructed from a digital elevation model. The fidelity of river profiles extracted from this network was carefully checked using satellite imagery. River profiles were inverted many times to systematically investigate the trade-off between model misfit and smoothness. Spatial and temporal patterns of both uplift rate and cumulative uplift were calibrated using independent geologic and geophysical observations. Uplift patterns suggest that the topography of Africa and Australia grew in Cenozoic times. Inverse modeling of large inventories of river profiles demonstrates that drainage networks contain coherent signals that record the regional growth of elevation.
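The damped, nonnegative least squares step can be sketched with projected gradient descent on a toy linearized problem. The lower-triangular "integration" operator mapping incremental uplift rates to cumulative uplift, and all numbers below, are invented stand-ins for the river-profile forward model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linearized forward operator: cumulative uplift is the running sum of
# incremental uplift rates (a lower-triangular matrix of ones)
n = 20
A = np.tril(np.ones((n, n)))
u_true = np.clip(rng.normal(0.5, 0.3, n), 0.0, None)   # nonnegative uplift rates
b = A @ u_true + rng.normal(0.0, 0.05, n)              # noisy observations

# Damped, nonnegative least squares by projected gradient descent:
#   minimize ||A x - b||^2 + lam * ||x||^2   subject to  x >= 0
lam = 0.05
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)
x = np.zeros(n)
for _ in range(50000):
    grad = A.T @ (A @ x - b) + lam * x
    x = np.clip(x - step * grad, 0.0, None)            # project onto x >= 0
```

The damping term plays the role of the smoothness/misfit trade-off investigated in the paper; sweeping `lam` traces out the trade-off curve between model misfit and model size.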
Combined tomographic forward and inverse modeling of active seismic refraction profiling data
NASA Astrophysics Data System (ADS)
Koulakov, I.; Kopp, H.
2008-12-01
We present a new code for combined forward and inverse tomographic modeling based on first-arrival travel times of active seismic refraction profiling data (PROFIT - Profile Forward and Inverse Tomographic modeling). The main features of the algorithm are an original version of bending ray tracing, node-based parameterization, variable grid size determined by the ray density, and regularization of the inversion. The key purpose of the PROFIT code is not solely to produce a tomographic image of a continuous velocity field, but to create a geologically reasonable synthetic model. This model includes first-order velocity changes representing petrophysical boundaries and is thus better suited for a geological-tectonic interpretation than its smoothed tomographic counterpart. After forward and inverse modeling, the synthetic model reproduces a model congeneric to the tomographic inversion result of the observed data. We demonstrate the capability of the code using two marine datasets acquired in the Musicians Seamount Province (Pacific Ocean). The results of the tomographic inversion clearly resolve the dominant extrusive volcanism. In addition, the combined forward and inverse approach tests a large variety of synthetic models against the tomography of the observed data. Along both profiles, the preferred structural model includes a strong positive velocity anomaly extending into the seamount edifice. We suggest that this anomaly pattern represents secondary intrusive processes, which are only revealed by the combined tomographic forward and inverse modeling and could not be resolved by exclusively applying a tomographic inversion. In addition, we present examples of imaging salt domes in the Precaspian oil province as well as a higher-resolution field study that was conducted as a preinvestigative study for tunnel construction to demonstrate the capability of the code in different regimes and on different
NASA Astrophysics Data System (ADS)
Lamarche-Gagnon, Marc-Etienne; Vetel, Jerome
2016-11-01
Several methods can be used when one needs to measure wall shear stress in a fluid flow. Yet a precise shear measurement is seldom achieved, especially when both time and space resolution are required. The electrodiffusion method relies on the mass transfer between a redox couple contained in an electrolyte and an electrode flush-mounted to a wall. Similarly to the heat transfer measured by a hot-wire anemometer, the mass transfer can be related to the fluid's wall shear rate. When coupled with numerical post-treatment by the so-called inverse method, precise instantaneous wall shear rate measurements can be obtained. With further improvements, it has the potential to be effective in highly fluctuating three-dimensional flows. We present developments of the inverse method for two-component shear rate measurements, that is, shear magnitude and direction. This is achieved with the use of a three-segment electrodiffusion probe. Validation tests of the inverse method are performed in an oscillating plane Poiseuille flow at moderate pulse frequencies, which also includes reverse flow phases, and in the vicinity of a separation point where the wall shear stress experiences local inversion in a controlled separated flow.
NASA Astrophysics Data System (ADS)
Reddy, K. S.; Somasundharam, S.
2016-09-01
In this work, an inverse heat conduction problem (IHCP) involving the simultaneous estimation of the principal thermal conductivities (kxx, kyy, kzz) and specific heat capacity of orthotropic materials is solved by using a surrogate forward model. Uniformly distributed random samples for each unknown parameter are generated from the prior knowledge about these parameters, and the Finite Volume Method (FVM) is employed to solve the forward problem for the temperature distribution in space and time. A supervised machine learning technique, Gaussian Process Regression (GPR), is used to construct the surrogate forward model from the available temperature solutions and the randomly generated unknown parameter data. The statistics and machine learning toolbox available in MATLAB R2015b is used for this purpose. The robustness of the surrogate model constructed using GPR is examined by carrying out the parameter estimation for 100 new randomly generated test samples at a measurement error of ±0.3 K. The temperature measurement is obtained by adding random noise with zero mean and known standard deviation (σ = 0.1) to the FVM solution of the forward problem. The test results show that the Mean Percentage Deviation (MPD) of all test samples for all parameters is < 10%.
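The surrogate idea, train a GPR model on sampled forward solutions and then invert against it instead of the expensive solver, can be sketched in plain NumPy. The one-parameter "forward model", design range, and kernel length scale are invented; the paper's FVM solve and four-parameter setup are replaced by a cheap scalar function:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in "forward model": one parameter -> one sensor temperature
# (in the paper this is an FVM solve; values here are invented)
def forward(k):
    return 300.0 + 50.0 / (1.0 + k)

# Training design: random parameter samples and their forward solutions
X = rng.uniform(1.0, 10.0, 30)
y = np.array([forward(k) for k in X])

def rbf(a, b, ell=2.0):
    # squared-exponential covariance kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

# GPR surrogate: zero-mean prior on centred data, small nugget for stability
K = rbf(X, X) + 1e-6 * np.eye(len(X))
alpha = np.linalg.solve(K, y - y.mean())

def surrogate(k):
    return y.mean() + rbf(np.atleast_1d(np.asarray(k, dtype=float)), X) @ alpha

# The surrogate closely reproduces the forward model inside the design range
err = max(abs(surrogate(k)[0] - forward(k)) for k in np.linspace(2.0, 9.0, 20))

# Inverse step: fit the parameter to a noisy "measurement" via the surrogate
obs = forward(4.0) + 0.05
grid = np.linspace(1.5, 9.5, 801)
k_est = grid[np.argmin(np.abs(surrogate(grid) - obs))]
```

Because every surrogate evaluation is a small matrix-vector product, the inverse search costs almost nothing compared with rerunning the forward solver for each candidate parameter set.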
An inverse finite element method for determining the anisotropic properties of the cornea.
Nguyen, T D; Boyce, B L
2011-06-01
An inverse finite element method was developed to determine the anisotropic properties of bovine cornea from an in vitro inflation experiment. The experiment used digital image correlation (DIC) to measure the three-dimensional surface geometry and displacement field of the cornea at multiple pressures. A finite element model of a bovine cornea was developed using the DIC measured surface geometry of the undeformed specimen. The model was applied to determine five parameters of an anisotropic hyperelastic model that minimized the error between the measured and computed surface displacement field and to investigate the sensitivity of the measured bovine inflation response to variations in the anisotropic properties of the cornea. The results of the parameter optimization revealed that the collagen structure of bovine cornea exhibited a high degree of anisotropy in the limbus region, which agreed with recent histological findings, and a transversely isotropic central region. The parameter study showed that the bovine corneal response to the inflation experiment was sensitive to the shear modulus of the matrix at pressures below the intraocular pressure, the properties of the collagen lamella at higher pressures, and the degree of anisotropy in the limbus region. It was not sensitive to a weak collagen anisotropy in the central region.
NASA Astrophysics Data System (ADS)
Agata, R.; Ichimura, T.; Hori, T.; Hirahara, K.; Hashimoto, C.; Hori, M.
2015-12-01
Inverse analysis of coseismic/postseismic slip using postseismic deformation observation data is an important topic in geodetic inversion. The inverse analysis may be improved by using numerical simulations (e.g., the finite element (FE) method) of viscoelastic deformation whose models are of high fidelity to the available high-resolution crustal data. The authors have been developing a large-scale simulation method using such FE high-fidelity models (HFM), assuming use of the K computer, the current fastest supercomputer in Japan. In this study, we developed an inverse analysis method incorporating HFM, in which the asthenosphere viscosity and fault slip are estimated simultaneously, since the value of viscosity in the simulation is not trivial. We carried out numerical experiments using synthetic crustal deformation data. Based on Ichimura et al. (2013), we constructed an HFM in a domain of 2048x1536x850 km, which includes the Tohoku region in northeast Japan. We used the data sets of JTOPO30 (2003), Koketsu et al. (2008) and the CAMP standard model (Hashimoto et al. 2004) for the model geometry. The HFM is currently at 2 km resolution, resulting in 0.5 billion degrees of freedom. The figure shows an overview of the HFM. Synthetic crustal deformation data for three years after an earthquake, at the locations of GEONET and GPS/A observation points and S-net, were used. The inverse analysis was formulated as minimization of the L2 norm of the difference between the FE simulation results and the observation data with respect to viscosity and fault slip, combining a quasi-Newton algorithm with the adjoint method. Coseismic slip was expressed by superposition of 53 subfaults, with four viscoelastic layers. We carried out 90 forward simulations, and the 57 parameters converged to the true values. Due to the fast computation method, it took only five hours using 2048 nodes (1/40 of the entire resource) of the K computer. In the future, we would like to also consider estimation of afterslip and apply
NASA Astrophysics Data System (ADS)
Kirby, Jon F.
2014-09-01
The effective elastic thickness (Te) is a geometric measure of the flexural rigidity of the lithosphere, which describes the resistance to bending under the application of applied, vertical loads. As such, it is likely that its magnitude has a major role in governing the tectonic evolution of both continental and oceanic plates. Of the several ways to estimate Te, one has gained popularity in the 40 years since its development because it only requires gravity and topography data, both of which are now readily available and provide excellent coverage over the Earth and even the rocky planets and moons of the solar system. This method, the ‘inverse spectral method’, develops measures of the relationship between observed gravity and topography data in the spatial frequency (wavenumber) domain, namely the admittance and coherence. The observed measures are subsequently inverted against the predictions of thin, elastic plate models, giving estimates of Te and other lithospheric parameters. This article provides a review of inverse spectral methodology and the studies that have used it. It is not, however, concerned with the geological or geodynamic significance or interpretation of Te, nor does it discuss and compare Te results from different methods in different provinces. Since the three main aspects of the subject are thin elastic plate flexure, spectral analysis, and inversion methods, the article broadly follows developments in these. The review also covers synthetic plate modelling, and concludes with a summary of the controversy currently surrounding inverse spectral methods, whether or not the large Te values returned in cratonic regions are artefacts of the method, or genuine observations.
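The admittance half of the inverse spectral method reduces to window-averaged cross-spectral estimation; a toy 1-D version with an invented low-pass transfer function (not a real thin-plate flexure model) looks like:

```python
import numpy as np

rng = np.random.default_rng(5)
n, nwin = 256, 50
k = np.fft.rfftfreq(n, d=1.0)               # wavenumber axis
Z_true = 1.0 / (1.0 + (k / 0.05) ** 4)      # invented "flexural" response

num = np.zeros(len(k), dtype=complex)        # cross-spectrum  <G H*>
den = np.zeros(len(k))                       # power spectrum  <|H|^2>
for _ in range(nwin):                        # average over data windows
    h = rng.normal(size=n)                   # synthetic "topography" window
    H = np.fft.rfft(h)
    G = Z_true * H + 0.1 * np.fft.rfft(rng.normal(size=n))  # "gravity" + noise
    num += G * np.conj(H)
    den += np.abs(H) ** 2

Z_est = (num / den).real                     # observed admittance
```

In the real method the same averaged spectra also give the coherence, |<GH*>|^2 / (<|G|^2><|H|^2>), and both observed measures are then inverted against thin elastic plate predictions to estimate Te.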
Seismology on a Comet: Calibration Measurements, Modeling and Inversion
NASA Astrophysics Data System (ADS)
Faber, C.; Hoppe, J.; Knapmeyer, M.; Fischer, H.; Seidensticker, K. J.
2011-12-01
The Rosetta mission was launched to comet 67P/Churyumov-Gerasimenko in 2004. It will finally reach the comet and deliver the lander Philae to the surface of the nucleus in November 2014. The lander carries ten experiments, one of which is the Surface Electric Sounding and Acoustic Monitoring Experiment (SESAME). Part of this experiment is the Comet Acoustic Surface Sounding Experiment (CASSE), housed in the three feet of the lander. The primary goal of CASSE is to determine the elastic parameters of the surface material, like the Young's modulus and the Poisson ratio. Additional goals are the determination of shallow structure, the quantification of porosity, and the location of activity spots and of thermally and impact-induced cometary activity. We conduct calibration measurements with accelerometers identical to the flight model. The goal of these measurements is to develop inversion procedures for travel times and to estimate the accuracy that CASSE can achieve in terms of elastic wave velocity, elastic parameters, and source location. The experiments are conducted mainly on sandy soil, in dry, wet or frozen conditions, and away from buildings with their reflecting walls and artificial noise sources. We expect that natural sources, like thermal cracking at sunrise and sunset, can be located to an accuracy of about 10 degrees in direction and a few decimeters (1σ) in distance if they occur within the sensor triangle and from first arrivals alone. The accuracy of the direction is essentially independent of the distance, whereas distance determination depends critically on the identification of later arrivals. Determination of elastic wave velocities on the comet will be conducted with controlled sources at known positions and is likely to achieve an accuracy of σ=15% for the velocity of the first arriving wave. Limitations are due to the fixed source-receiver geometry and the wavelength emitted by the CASSE piezo-ceramic sources. In addition to the
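Source location from first arrivals within a sensor triangle can be sketched as a travel-time-difference grid search; the geometry, velocity, and source position below are invented illustrations, not CASSE calibration values:

```python
import math

# Sensor triangle (metres), loosely mimicking three lander feet; hypothetical
sensors = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
v = 100.0                              # assumed first-arrival velocity, m/s
src = (0.6, 0.3)                       # "true" source inside the triangle

def tt(p, s):
    # straight-ray travel time from point p to sensor s
    return math.hypot(p[0] - s[0], p[1] - s[1]) / v

# Use arrival-time *differences* so the unknown origin time cancels
obs = [tt(src, s) - tt(src, sensors[0]) for s in sensors]

# Grid search for the location minimising the time-difference misfit
best, best_err = None, float("inf")
for i in range(101):
    for j in range(101):
        p = (i / 100.0, j / 100.0)
        pred = [tt(p, s) - tt(p, sensors[0]) for s in sensors]
        err = sum((a - b) ** 2 for a, b in zip(pred, obs))
        if err < best_err:
            best, best_err = p, err
```

With only three sensors the two independent time differences just determine the two location unknowns inside the triangle, which is why the abstract notes that distance accuracy outside the triangle hinges on identifying later arrivals.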
Inverse modelling of radionuclide release rates using gamma dose rate observations
NASA Astrophysics Data System (ADS)
Hamburger, Thomas; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian
2014-05-01
Severe accidents in nuclear power plants, such as the historical accident in Chernobyl in 1986 or the more recent disaster in the Fukushima Dai-ichi nuclear power plant in 2011, have drastic impacts on the population and environment. The hazardous consequences reach out on a national and continental scale. Environmental measurements and methods to model the transport and dispersion of the released radionuclides serve as a platform to assess the regional impact of nuclear accidents - both for research purposes and, more importantly, to determine the immediate threat to the population. However, assessments of the regional radionuclide activity concentrations and the individual exposure to radiation dose are subject to several uncertainties, for example the accurate model representation of wet and dry deposition. One of the most significant uncertainties, however, results from the estimation of the source term, that is, the time-dependent quantification of the released spectrum of radionuclides during the course of the nuclear accident. The quantification of the source terms of severe nuclear accidents may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on rather rough estimates of released key radionuclides given by the operators. Precise measurements are mostly missing due to practical limitations during the accident. Inverse modelling can be used to realise a feasible estimation of the source term (Davoine and Bocquet, 2007). Existing point measurements of radionuclide activity concentrations are therefore combined with atmospheric transport models. The release rates of radionuclides at the accident site are then obtained by improving the agreement between the modelled and observed concentrations (Stohl et al., 2012). The accuracy of the method, and hence of the resulting source term, depends amongst others on the availability, reliability and the resolution in time and space of the observations. Radionuclide activity concentrations are observed on a
Inverse Method for Identification of Material Parameters Directly from Milling Experiments
NASA Astrophysics Data System (ADS)
Maurel, A.; Michel, G.; Thibaud, S.; Fontaine, M.; Gelin, J. C.
2007-04-01
An identification procedure for the determination of material parameters used in FEM simulations of High Speed Machining processes is proposed. The procedure is based on coupling a numerical identification procedure with FEM simulations of milling operations. The experimental data result directly from measurements performed during milling experiments. A special device has been instrumented and calibrated to perform force and torque measurements directly during machining experiments, using a piezoelectric dynamometer and a high-frequency charge amplifier. The forces and torques are recorded and low-pass filtered if necessary, and these data provide the main basis for the identification procedure, which couples 3D FEM simulations of milling with optimization/identification algorithms. The identification approach is mainly based on the Response Surface Method in the material parameter space, coupled with a sensitivity analysis. A Moving Least Squares Approximation method is used to accelerate the identification process. The material behaviour is described by the Johnson-Cook law. A fracture model is also added to account for chip formation and separation. The FEM simulations of milling are performed using an explicit ALE-based FEM code. The inverse identification method is applied here to a 304L stainless steel and the first results are presented.
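The Johnson-Cook law mentioned above gives the flow stress as a product of strain-hardening, strain-rate, and thermal-softening terms. A minimal sketch; the parameter values are rough placeholders for a 304L-type steel, not the set identified in the paper:

```python
import math

def johnson_cook_stress(eps, eps_dot, T, A=310e6, B=1000e6, n=0.65,
                        C=0.07, m=1.0, eps_dot_0=1.0,
                        T_room=293.0, T_melt=1673.0):
    """Johnson-Cook flow stress in Pa: (A + B*eps^n) * strain-rate term *
    thermal-softening term. Parameter values are illustrative only."""
    T_star = (T - T_room) / (T_melt - T_room)          # homologous temperature
    return ((A + B * eps ** n)
            * (1.0 + C * math.log(eps_dot / eps_dot_0))
            * (1.0 - T_star ** m))

sigma = johnson_cook_stress(eps=0.2, eps_dot=1e4, T=600.0)
```

An inverse identification of the kind described would tune A, B, n, C and m until FEM-predicted cutting forces match the measured ones.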
NASA Astrophysics Data System (ADS)
Shirai, T.; Ishizawa, M.; Zhuravlev, R.; Ganshin, A.; Belikov, D.; Saito, M.; Oda, T.; Valsala, V.; Dlugokencky, E. J.; Tans, P. P.; Maksyutov, S. S.
2013-12-01
Global monthly CO2 flux distributions for 2001-2011 were estimated using an atmospheric inverse modeling system based on a combination of two transport models, called GELCA (Global Eulerian-Lagrangian Coupled Atmospheric model). This coupled-model approach has several advantages over inversions using a single model: employing a Lagrangian particle dispersion model (LPDM) to simulate transport in the vicinity of the observation points avoids the numerical diffusion of Eulerian models and is well suited to representing observations at high spatial and temporal resolutions. The global background concentration field generated by an Eulerian model is used as a time-variant boundary condition for an LPDM that performs backward simulations from each receptor point (observation event). In the GELCA inversion system, the National Institute for Environmental Studies Transport Model (NIES-TM) version 8.1i was used as the Eulerian global transport model, coupled with FLEXPART version 8.0 as the LPDM. The meteorological fields driving both models were taken from the JMA Climate Data Assimilation System (JCDAS), with a spatial resolution of 1.25° x 1.25°, 40 vertical levels, and a temporal resolution of 6 hours. Our prior CO2 fluxes consist of daily terrestrial biospheric fluxes, monthly oceanic fluxes, monthly biomass burning emissions, and monthly fossil fuel CO2 emissions. We employed a Kalman smoother optimization technique with a fixed lag of 3 months, estimating monthly CO2 fluxes for 42 land and 22 ocean regions. We have been using two different global networks of CO2 observations. The Observation Package (ObsPack) data products contain more measurement information in space and time than the NOAA global cooperative air sampling network, which basically consists of approximately weekly sampling at background sites. The global total flux and its large-scale distribution optimized with the two different global observation networks agreed overall with other previous
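The measurement update at the core of such a fixed-lag Kalman smoother flux estimation can be sketched as a single linear-Gaussian update. Dimensions, covariances and the "transport" matrix H below are illustrative, not those of the GELCA system:

```python
import numpy as np

# Minimal linear-Gaussian measurement update of the kind used inside a
# fixed-lag Kalman smoother for flux estimation. All values are invented.
rng = np.random.default_rng(1)
n_state, n_obs = 6, 12                      # e.g. regional fluxes in the lag window
H = rng.normal(size=(n_obs, n_state))       # modelled sensitivity of obs to fluxes
x_prior = np.zeros(n_state)                 # prior flux anomalies
P = 4.0 * np.eye(n_state)                   # prior error covariance
R = 0.25 * np.eye(n_obs)                    # observation error covariance
x_true = rng.normal(size=n_state)
y = H @ x_true + rng.normal(0.0, 0.5, size=n_obs)

# Kalman gain, state update, covariance update
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
x_post = x_prior + K @ (y - H @ x_prior)
P_post = (np.eye(n_state) - K @ H) @ P
```

A fixed-lag smoother repeats this update while keeping only the most recent months of fluxes in the state vector, which bounds the cost of a multi-year inversion.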
NASA Astrophysics Data System (ADS)
Davoine, X.; Bocquet, M.
2007-03-01
The reconstruction of the Chernobyl accident source term has been previously carried out using core inventories, but also back-and-forth comparisons between model simulations and activity concentration or deposited activity measurements. The approach presented in this paper is based on inverse modelling techniques. It relies both on the activity concentration measurements and on the adjoint of a chemistry-transport model. The location of the release is assumed to be known, and one is looking for a source term available for long-range transport that depends both on time and altitude. The method relies on the maximum entropy on the mean principle and exploits source positivity. The inversion results are mainly sensitive to two tuning parameters, a mass scale and the scale of the prior errors in the inversion. To overcome this difficulty, we resort to the statistical L-curve method to estimate balanced values for these two parameters. Once this is done, many of the retrieved features of the source are robust within a reasonable range of parameter values. Our results favour the acknowledged three-step scenario, with a strong initial release (26 to 27 April), followed by a weak emission period of four days (28 April-1 May) and again a release, longer but less intense than the initial one (2 May-6 May). The retrieved quantities of iodine-131, caesium-134 and caesium-137 that have been released are in good agreement with the latest reported estimates. Yet, a stronger apportionment of the total released activity is ascribed to the first period and less to the third one. Finer chronological details are obtained, such as a sequence of eruptive episodes in the first two days, likely related to the modulation of the boundary layer diurnal cycle. In addition, the first two-day release surges are found to have effectively reached an altitude up to the top of the domain (5000 m).
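The L-curve selection of balancing parameters mentioned above can be sketched on a toy linear inversion: scan the regularisation strength, trace the (log residual norm, log solution norm) curve, and take the point of maximum curvature as the corner. The linear system below is a stand-in, unrelated to the actual chemistry-transport adjoint:

```python
import numpy as np

# Toy L-curve tuning for a Tikhonov-regularised linear inversion.
rng = np.random.default_rng(2)
A = rng.normal(size=(40, 20))
x_true = np.zeros(20)
x_true[3], x_true[10] = 5.0, 2.0
b = A @ x_true + rng.normal(0.0, 0.5, size=40)

alphas = np.logspace(-4, 2, 60)
log_res, log_sol = [], []
for a in alphas:
    x = np.linalg.solve(A.T @ A + a * np.eye(20), A.T @ b)
    log_res.append(np.log(np.linalg.norm(A @ x - b)))   # data misfit
    log_sol.append(np.log(np.linalg.norm(x)))           # solution size
log_res, log_sol = np.array(log_res), np.array(log_sol)

# Discrete curvature of the parametric curve (log_res(alpha), log_sol(alpha));
# the "corner" balances data misfit against solution size.
dr, ds = np.gradient(log_res), np.gradient(log_sol)
d2r, d2s = np.gradient(dr), np.gradient(ds)
curvature = np.abs(dr * d2s - ds * d2r) / (dr**2 + ds**2) ** 1.5
alpha_corner = alphas[np.argmax(curvature)]
```

In the paper's setting the same idea is used to pick balanced values for the mass scale and the prior-error scale rather than a single Tikhonov parameter.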
Inverse modeling of Asian (222)Rn flux using surface air (222)Rn concentration.
Hirao, Shigekazu; Yamazawa, Hiromi; Moriizumi, Jun
2010-11-01
When used with an atmospheric transport model, the (222)Rn flux distribution estimated in our previous study using soil transport theory caused underestimation of atmospheric (222)Rn concentrations as compared with measurements in East Asia. In this study, we applied a Bayesian synthesis inverse method to produce revised estimates of the annual (222)Rn flux density in Asia by using atmospheric (222)Rn concentrations measured at seven sites in East Asia. The Bayesian synthesis inverse method requires a prior estimate of the flux distribution and its uncertainties. The atmospheric transport model MM5/HIRAT and our previous estimate of the (222)Rn flux distribution as the prior value were used to generate new flux estimates for the eastern half of the Eurasian continent, divided into 10 regions. The (222)Rn flux densities estimated using the Bayesian inversion technique were generally higher than the prior flux densities. The area-weighted average (222)Rn flux density for Asia was estimated to be 33.0 mBq m(-2) s(-1), which is substantially higher than the prior value (16.7 mBq m(-2) s(-1)). The estimated (222)Rn flux densities decrease with increasing latitude as follows: Southeast Asia (36.7 mBq m(-2) s(-1)); East Asia (28.6 mBq m(-2) s(-1)) including China, the Korean Peninsula and Japan; and Siberia (14.1 mBq m(-2) s(-1)). The increases of the newly estimated fluxes in Southeast Asia, China, Japan, and the southern part of Eastern Siberia relative to the prior values contributed most significantly to improved agreement of the model-calculated concentrations with the atmospheric measurements. The sensitivity analysis of prior flux errors and effects of locally exhaled (222)Rn showed that the estimated fluxes in Northern and Central China, Korea, Japan, and the southern part of Eastern Siberia were robust, whereas the estimate for Central Asia had a large uncertainty.
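The Bayesian synthesis step can be sketched as a standard Gaussian update of regional fluxes given a prior and model-predicted responses. Region count, units and all values below are illustrative stand-ins, not the actual (222)Rn set-up:

```python
import numpy as np

# Sketch of a Bayesian synthesis inversion: regional fluxes s, with prior
# s_prior and prior covariance B, are updated using observed concentrations y
# and transport-model responses G. All numbers are invented.
rng = np.random.default_rng(3)
n_regions, n_obs = 10, 7
G = rng.uniform(0.1, 1.0, size=(n_obs, n_regions))   # model response matrix
s_prior = np.full(n_regions, 16.7)                   # prior flux density
B = 10.0**2 * np.eye(n_regions)                      # prior flux uncertainty
R = 1.0**2 * np.eye(n_obs)                           # observation uncertainty
s_true = s_prior + rng.normal(0.0, 8.0, size=n_regions)
y = G @ s_true + rng.normal(0.0, 1.0, size=n_obs)

# Posterior mean and covariance of the linear-Gaussian update
B_inv, R_inv = np.linalg.inv(B), np.linalg.inv(R)
P_post = np.linalg.inv(B_inv + G.T @ R_inv @ G)
s_post = P_post @ (B_inv @ s_prior + G.T @ R_inv @ y)
```

With fewer observations than regions (as here), the prior keeps the problem well posed; the posterior covariance P_post then quantifies which regional fluxes remain uncertain.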
Improving GNSS-R sea level determination through inverse modeling of SNR data
NASA Astrophysics Data System (ADS)
Strandberg, Joakim; Hobiger, Thomas; Haas, Rüdiger
2016-08-01
This paper presents a new method for retrieving sea surface heights from Global Navigation Satellite Systems reflectometry (GNSS-R) data by inverse modeling of SNR observations from a single geodetic receiver. The method relies on a B-spline representation of the temporal sea level variations in order to account for their continuity. The corresponding B-spline coefficients are determined through a nonlinear least squares fit to the SNR data, and a consistent choice of model parameters enables the combination of multiple GNSS in a single inversion process. This leads to a clear increase in precision of the sea level retrievals, which can be attributed to a better spatial and temporal sampling of the reflecting surface. Tests with data from two different coastal GNSS sites and comparison with colocated tide gauges show a significant increase in precision when compared to previously used methods, reaching standard deviations of 1.4 cm at Onsala, Sweden, and 3.1 cm at Spring Bay, Tasmania.
Doughty, Christine A.
1996-05-01
The hydrologic properties of heterogeneous geologic media are estimated by simultaneously inverting multiple observations from well-test data. A set of pressure transients observed during one or more interference tests is compared to the corresponding values obtained by numerically simulating the tests using a mathematical model. The parameters of the mathematical model are varied and the simulation repeated until a satisfactory match to the observed pressure transients is obtained, at which point the model parameters are accepted as providing a possible representation of the hydrologic property distribution. Restricting the search to parameters that represent fractal hydrologic property distributions can improve the inversion process. Far fewer parameters are needed to describe heterogeneity with a fractal geometry, improving the efficiency and robustness of the inversion. Additionally, each parameter set produces a hydrologic property distribution with a hierarchical structure, which mimics the multiple scales of heterogeneity often seen in natural geological media. Application of the iterated function system (IFS) inverse method to synthetic interference-test data shows that the method successfully reproduces the synthetic heterogeneity for idealized heterogeneities, for geologically realistic heterogeneities, and when the pressure data include noise.
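The simulate-compare-update loop described above can be sketched with a stand-in forward model; the greedy random search and the two-parameter "simulator" below are purely illustrative, not a flow simulation or the IFS fractal parameterisation itself:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(params, t):
    # Stand-in forward model: a pressure transient shaped by two parameters
    # (loosely playing the role of permeability- and storage-like quantities).
    k, s = params
    return s * np.log1p(t / k)

t = np.linspace(0.1, 10.0, 50)
observed = simulate((2.0, 1.5), t) + rng.normal(0.0, 0.01, size=t.size)

params = np.array([1.0, 1.0])                  # initial guess
misfit = np.sum((simulate(params, t) - observed) ** 2)
for _ in range(2000):                          # vary parameters, re-simulate
    trial = params + rng.normal(0.0, 0.05, size=2)
    if np.all(trial > 0):
        m = np.sum((simulate(trial, t) - observed) ** 2)
        if m < misfit:                         # keep only improving matches
            params, misfit = trial, m
```

The fractal (IFS) parameterisation plays its role before this loop: it maps a handful of parameters to a full heterogeneous property field, so the search stays low-dimensional.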
An approximate factorization method for inverse medium scattering with unknown buried objects
NASA Astrophysics Data System (ADS)
Qu, Fenglong; Yang, Jiaqing; Zhang, Bo
2017-03-01
This paper is concerned with the inverse problem of scattering of time-harmonic acoustic waves by an inhomogeneous medium with different kinds of unknown buried objects inside. By constructing a sequence of operators which are small perturbations of the far-field operator in a suitable way, we prove that each operator in this sequence has a factorization satisfying the Range Identity. We then develop an approximate factorization method for recovering the support of the inhomogeneous medium from the far-field data. Finally, numerical examples are provided to illustrate the practicability of the inversion algorithm.
Terekhov, Alexander V; Zatsiorsky, Vladimir M
2011-02-01
One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses one action out of the infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423-453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem.
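For the linear-constraint quadratic case, the forward and inverse directions can be sketched directly. The weights, loads and noise below are invented for illustration, and the recovery formula follows from the KKT conditions of this toy problem, not from the paper's general methods:

```python
import numpy as np

# Toy linear-additive inverse optimization: forces sharing a total load F are
# assumed to minimise sum_i w_i * f_i^2 subject to sum_i f_i = F. The KKT
# conditions give f_i proportional to 1/w_i, so the (scale-free) weights can
# be recovered from observed sharing patterns.
w_true = np.array([1.0, 2.0, 4.0])

def optimal_sharing(F, w):
    inv = 1.0 / w
    return F * inv / inv.sum()                 # analytic optimum of the QP

rng = np.random.default_rng(6)
loads = np.array([10.0, 20.0, 30.0])
obs = np.array([optimal_sharing(F, w_true) for F in loads])
obs += rng.normal(0.0, 0.05, size=obs.shape)   # measurement noise

# Inverse step: average the observed shares f_i / F, invert, fix the scale
shares = (obs / loads[:, None]).mean(axis=0)
w_est = 1.0 / shares
w_est /= w_est[0]                              # cost is identifiable up to scale
```

The normalisation in the last line reflects the identifiability issue the article studies: without extra conditions, a cost function is at best recoverable up to an affine transformation.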