An efficient and flexible Abel-inversion method for noisy data
NASA Astrophysics Data System (ADS)
Antokhin, Igor I.
2016-08-01
We propose an efficient and flexible method for solving the Abel integral equation of the first kind, which appears frequently in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, so solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. The a priori constraints on the unknown function that define a compact set are very loose and can be set from simple physical considerations. Tikhonov's regularization by itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution as the errors of the input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of an astrophysical application of the method is also given.
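The regularized inversion described in this abstract can be sketched numerically. The following is a minimal illustration, not the authors' algorithm: the forward Abel transform is discretized with a piecewise-constant quadrature and inverted with a Tikhonov second-difference penalty. The grid, the quadrature, and the choice of smoothing operator are all illustrative assumptions.

```python
import numpy as np

def abel_matrix(r):
    """Quadrature matrix A for the forward Abel transform
    F(y_i) = 2 * int_{y_i}^{R} f(r) r / sqrt(r^2 - y_i^2) dr,
    treating f as piecewise constant on the radial cells; the
    singular kernel is integrated analytically on each cell."""
    n = len(r)
    edges = np.append(r, r[-1] + (r[-1] - r[-2]))  # outer cell edges
    A = np.zeros((n, n))
    for i in range(n):
        y = r[i]
        for j in range(i, n):
            lo, hi = max(edges[j], y), edges[j + 1]
            # int 2 r / sqrt(r^2 - y^2) dr = 2 sqrt(r^2 - y^2)
            A[i, j] = 2.0 * (np.sqrt(hi**2 - y**2) - np.sqrt(lo**2 - y**2))
    return A

def tikhonov_abel_inversion(F, r, lam=1e-2):
    """Solve A f ~= F with a second-difference smoothness penalty:
    minimize ||A f - F||^2 + lam^2 ||L f||^2."""
    A = abel_matrix(r)
    L = np.diff(np.eye(len(r)), 2, axis=0)  # discrete second derivative
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ F)
```

A Gaussian f(r) = exp(-r²), whose analytic projection is √π·exp(-y²), makes a convenient self-test; in practice the weight `lam` would be tuned to the noise level of the data.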
A new Abel inversion by means of the integrals of an input function with noise
NASA Astrophysics Data System (ADS)
Li, Xian-Fang; Huang, Li; Huang, Yong
2007-01-01
Abel's integral equations arise in many areas of natural science and engineering, particularly in plasma diagnostics. This paper proposes a new and effective approximation of the inversion of the Abel transform. This algorithm can be simply implemented by symbolic computation, and moreover an nth-order approximation reduces to the exact solution when it is a polynomial in r² of degree less than or equal to n. The approximate Abel inversion is expressed in terms of integrals of the input measurement data, so the suggested approach is stable for experimental data with random noise. An error analysis of the approximation of the Abel inversion is given. Finally, several test examples used frequently in plasma diagnostics are given to illustrate the effectiveness and stability of this method.
Bayesian Abel Inversion in Quantitative X-Ray Radiography
Howard, Marylesa; Fowler, Michael; Luttman, Aaron; Mitchell, Stephen E.; Hock, Margaret C.
2016-05-19
A common image formation process in high-energy X-ray radiography is to have a pulsed power source that emits X-rays through a scene, a scintillator that absorbs X-rays and fluoresces in the visible spectrum in response to the absorbed photons, and a CCD camera that images the visible light emitted from the scintillator. The intensity image is related to areal density, and, for an object that is radially symmetric about a central axis, the Abel transform then gives the object's volumetric density. Two of the primary drawbacks to classical variational methods for Abel inversion are their sensitivity to the type and scale of regularization chosen and the lack of natural methods for quantifying the uncertainties associated with the reconstructions. In this work we cast the Abel inversion problem within a statistical framework in order to compute volumetric object densities from X-ray radiographs and to quantify uncertainties in the reconstruction. A hierarchical Bayesian model is developed with a likelihood based on a Gaussian noise model and with priors placed on the unknown density profile, the data precision matrix, and two scale parameters. This allows the data to drive the localization of features in the reconstruction and results in a joint posterior distribution for the unknown density profile, the prior parameters, and the spatial structure of the precision matrix. Results of the density reconstructions and pointwise uncertainty estimates are presented for both synthetic signals and real data from a U.S. Department of Energy X-ray imaging facility.
Fast algorithm for computing the Abel inversion integral in broadband reflectometry
Nunes, F.D.
1995-10-01
The application of the Hansen-Jablokow recursive technique is proposed for the numerical computation of the Abel inversion integral which is used in (O-mode) frequency-modulated broadband reflectometry to evaluate plasma density profiles. Compared to the usual numerical methods the recursive algorithm allows substantial time savings that can be important when processing massive amounts of data aiming to control the plasma in real time. © 1995 American Institute of Physics.
NASA Astrophysics Data System (ADS)
Dasch, Cameron J.
1992-03-01
It is shown that the Abel inversion, onion-peeling, and filtered backprojection methods can be intercompared without assumptions about the object being deconvolved. If the projection data are taken at equally spaced radial positions, the deconvolved field is given by weighted sums of the projections divided by the data spacing. The weighting factors are independent of the data spacing. All the methods are remarkably similar and have Abelian behavior: the field at a radial location is primarily determined by the weighted differences of a few projections around the radial position. Onion-peeling and an Abel inversion using two-point interpolation are similar. When the Shepp-Logan filtered backprojection method is reduced to one dimension, it is essentially identical to an Abel inversion using three-point interpolation. The weighting factors directly determine the relative noise performance: the three-point Abel inversion is the best, while onion peeling is the worst with approximately twice the noise. Based on ease of calculation, robustness, and noise, the three-point Abel inversion is recommended.
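The weighted-sum structure described above (deconvolved field = weights applied to the projections, divided by the data spacing) is easy to see in a toy onion-peeling implementation. This sketch is an illustration of that structure, not Dasch's code; the weight matrix is built from ring chord lengths in units of the spacing, so the weights are indeed spacing-independent.

```python
import numpy as np

def onion_peeling_weights(n):
    """Linear operator D with f = D @ P / dr for equally spaced projections P.
    W holds the chord lengths of unit-width concentric rings, so D = W^{-1}
    does not depend on the actual data spacing dr."""
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            # chord of ring j (radii j..j+1, in units of dr) at lateral position i
            W[i, j] = 2.0 * (np.sqrt((j + 1)**2 - i**2) - np.sqrt(j**2 - i**2))
    return np.linalg.inv(W)

def onion_peel(P, dr):
    """Deconvolved field at the ring centres: weighted sums of the
    projections divided by the data spacing."""
    return onion_peeling_weights(len(P)) @ P / dr
```

Because the recovered value at each radius depends mainly on a few nearby projections, noise in the data propagates locally, which is the behaviour the comparison above quantifies.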
Inverting ion images without Abel inversion: maximum entropy reconstruction of velocity maps.
Dick, Bernhard
2014-01-14
A new method for the reconstruction of velocity maps from ion images is presented, which is based on the maximum entropy concept. In contrast to other methods used for Abel inversion the new method never applies an inversion or smoothing to the data. Instead, it iteratively finds the map which is the most likely cause for the observed data, using the correct likelihood criterion for data sampled from a Poissonian distribution. The entropy criterion minimizes the information content in this map, which hence contains no information for which there is no evidence in the data. Two implementations are proposed, and their performance is demonstrated with simulated and experimental data: Maximum Entropy Velocity Image Reconstruction (MEVIR) obtains a two-dimensional slice through the velocity distribution and can be compared directly to Abel inversion. Maximum Entropy Velocity Legendre Reconstruction (MEVELER) finds one-dimensional distribution functions Q_l(v) in an expansion of the velocity distribution in Legendre polynomials P_l(cos θ) for the angular dependence. Both MEVIR and MEVELER can be used for the analysis of ion images with intensities as low as 0.01 counts per pixel, with MEVELER performing significantly better than MEVIR for images with low intensity. Both methods perform better than pBASEX, in particular for images with less than one average count per pixel.
Improved Abel transform inversion: First application to COSMIC/FORMOSAT-3
NASA Astrophysics Data System (ADS)
Aragon-Angel, A.; Hernandez-Pajares, M.; Juan, J.; Sanz, J.
2007-05-01
In this paper the first results of ionospheric tomographic inversion are presented, using the improved Abel transform on the COSMIC/FORMOSAT-3 constellation of 6 LEO satellites carrying on-board GPS receivers. The Abel transform inversion is a widely used technique which, in the ionospheric context, makes it possible to retrieve electron densities as a function of height based on STEC (Slant Total Electron Content) data gathered from GPS receivers on board LEO (Low Earth Orbit) satellites. In this application, the classical approach of the Abel inversion is based on the assumption of spherical symmetry of the electron density in the vicinity of an occultation, meaning that the electron content varies in height but not horizontally. In particular, one implication of this assumption is that the VTEC (Vertical Total Electron Content) is a constant value for the occultation region. This assumption may not always be valid, since horizontal ionospheric gradients (a very frequent feature in some problematic regions of the ionosphere, such as the Equatorial region) could significantly affect the electron profiles. In order to overcome this limitation of the classical Abel inversion, an improvement of this technique can be obtained by assuming separability of the electron density (see Hernández-Pajares et al. 2000). This means that the electron density can be expressed as the product of VTEC data and a shape function which carries all the height dependency, while the VTEC data keeps the horizontal dependency. Indeed, it is more realistic to assume that this shape function depends only on the height and to use VTEC information to take into account the horizontal variation, rather than considering spherical symmetry in the electron density function as in the classical approach of the Abel inversion. Since the above mentioned improved Abel inversion technique has already been tested and proven to be a useful
Serre duality, Abel's theorem, and Jacobi inversion for supercurves over a thick superpoint
NASA Astrophysics Data System (ADS)
Rothstein, Mitchell J.; Rabin, Jeffrey M.
2015-04-01
The principal aim of this paper is to extend Abel's theorem to the setting of complex supermanifolds of dimension 1 | q over a finite-dimensional local supercommutative C-algebra. The theorem is proved by establishing a compatibility of Serre duality for the supercurve with Poincaré duality on the reduced curve. We include an elementary algebraic proof of the requisite form of Serre duality, closely based on the account of the reduced case given by Serre in Algebraic groups and class fields, combined with an invariance result for the topology on the dual of the space of répartitions. Our Abel map, taking Cartier divisors of degree zero to the dual of the space of sections of the Berezinian sheaf, modulo periods, is defined via Penkov's characterization of the Berezinian sheaf as the cohomology of the de Rham complex of the sheaf D of differential operators. We discuss the Jacobi inversion problem for the Abel map and give an example demonstrating that if n is an integer sufficiently large that the generic divisor of degree n is linearly equivalent to an effective divisor, this need not be the case for all divisors of degree n.
NASA Astrophysics Data System (ADS)
Jackiewicz, Jason
2009-09-01
With the rapid advances in sophisticated solar modeling and the abundance of high-quality solar pulsation data, efficient and robust inversion techniques are crucial for seismic studies. We present some aspects of an efficient Fourier Optimally Localized Averaging (OLA) inversion method with an example applied to time-distance helioseismology.
An inversion method for cometary atmospheres
NASA Astrophysics Data System (ADS)
Hubert, B.; Opitom, C.; Hutsemékers, D.; Jehin, E.; Munhoven, G.; Manfroid, J.; Bisikalo, D. V.; Shematovich, V. I.
2016-10-01
Remote observation of cometary atmospheres produces a measurement of the cometary emissions integrated along the line of sight. This integration is the so-called Abel transform of the local emission rate. The observation is generally interpreted under the hypothesis of spherical symmetry of the coma. Under that hypothesis, the Abel transform can be inverted. We derive a numerical inversion method adapted to cometary atmospheres using both analytical results and least squares fitting techniques. This method, derived under the usual hypothesis of spherical symmetry, allows us to retrieve the radial distribution of the emission rate of any unabsorbed emission, which is the fundamental, physically meaningful quantity governing the observation. A Tikhonov regularization technique is also applied to reduce the possibly deleterious effects of the noise present in the observation and to warrant that the problem remains well posed. Standard error propagation techniques are included in order to estimate the uncertainties affecting the retrieved emission rate. Several theoretical tests of the inversion techniques are carried out to show its validity and robustness. In particular, we show that the Abel inversion of real data is only weakly sensitive to an offset applied to the input flux, which implies that the method, applied to the study of a cometary atmosphere, is only weakly dependent on uncertainties on the sky background which has to be subtracted from the raw observations of the coma. We apply the method to observations of three different comets observed using the TRAPPIST telescope: 103P/Hartley 2, F6/Lemmon and A1/Siding Spring. We show that the method retrieves realistic emission rates, and that characteristic lengths and production rates can be derived from the emission rate for both CN and C2 molecules. We show that the retrieved characteristic lengths can differ from those obtained from a direct least squares fitting over the observed flux of radiation, and
NASA Astrophysics Data System (ADS)
Katgert, P.; Murdin, P.
2000-11-01
Abell clusters are the most conspicuous groupings of galaxies identified by George Abell on the plates of the first photographic survey made with the SCHMIDT TELESCOPE at Mount Palomar in the 1950s. Sometimes, the term Abell clusters is used as a synonym of nearby, optically selected galaxy clusters....
NASA Astrophysics Data System (ADS)
Huestis, D. L.
Forward integration calculation of air mass, refraction, and time delay requires care even for very smooth model atmospheres. The literature abounds in examples of injudicious approximations, assumptions, transformations, variable substitutions, and failures to verify that the formulas work with unlimited accuracy for simple cases and also survive challenges from mathematically pathological but physically realizable cases. A few years ago we addressed the problem of evaluation of the Chapman function for attenuation along a straight line path in an exponential atmosphere. In this presentation we will describe issues and approaches for integration over light paths curved by refraction. The inverse problem, determining the altitude profile of mass density (index of refraction) or the concentration of an individual chemical species (absorption), from occultation data, also has its mathematically interesting (i.e., difficult) aspects. Now we automatically have noise and thus statistical analysis is just as important as calculus and numerical analysis. Here we will describe a new approach of least-squares fitting occultation data to an expansion over compact basis functions. This approach, which avoids numerical differentiation and singular integrals, was originally developed to analyze laboratory imaging data.
An exact inverse method for subsonic flows
NASA Technical Reports Server (NTRS)
Daripa, Prabir
1988-01-01
A new inverse method for the aerodynamic design of airfoils is presented for subcritical flows. The pressure distribution in this method can be prescribed as a function of the arclength of the still unknown body. It is shown that this inverse problem is mathematically equivalent to solving only one nonlinear boundary value problem subject to known Dirichlet data on the boundary.
Tsunami waveform inversion by adjoint methods
NASA Astrophysics Data System (ADS)
Pires, Carlos; Miranda, Pedro M. A.
2001-09-01
An adjoint method for tsunami waveform inversion is proposed, as an alternative to the technique based on Green's functions of the linear long wave model. The method has the advantage of being able to use the nonlinear shallow water equations, or other appropriate equation sets, and to optimize an initial state given as a linear or nonlinear function of any set of free parameters. This last facility is used to perform explicit optimization of the focal fault parameters, characterizing the initial sea surface displacement of tsunamigenic earthquakes. The proposed methodology is validated with experiments using synthetic data, showing the possibility of recovering all relevant details of a tsunami source from tide gauge observations, provided that the adjoint method is constrained in an appropriate manner. It is found, as in other methods, that the inversion skill of tsunami sources increases with the azimuthal and temporal coverage of assimilated tide gauge stations; furthermore, it is shown that the eigenvalue analysis of the Hessian matrix of the cost function provides a consistent and useful methodology to choose the subset of independent parameters that can be inverted with a given dataset of observations and to evaluate the error of the inversion process. The method is also applied to real tide gauge series, from the tsunami of the February 28, 1969, Gorringe Bank earthquake, suggesting some reasonable changes to the assumed focal parameters of that event. It is suggested that the method proposed may be able to deal with transient tsunami sources such as those generated by submarine landslides.
An efficient method for inverse problems
NASA Technical Reports Server (NTRS)
Daripa, Prabir
1987-01-01
A new inverse method for aerodynamic design of subcritical airfoils is presented. The pressure distribution in this method can be prescribed in a natural way, i.e. as a function of arclength of the as yet unknown body. This inverse problem is shown to be mathematically equivalent to solving a single nonlinear boundary value problem subject to known Dirichlet data on the boundary. The solution to this problem determines the airfoil, the free stream Mach number M_x and the upstream flow direction theta_x. The existence of a solution for any given pressure distribution is discussed. The method is easy to implement and extremely efficient. We present a series of results for which comparisons are made with the known airfoils.
Regeneration of stochastic processes: an inverse method
NASA Astrophysics Data System (ADS)
Ghasemi, F.; Peinke, J.; Sahimi, M.; Rahimi Tabar, M. R.
2005-10-01
We propose a novel inverse method that utilizes a set of data to construct a simple equation that governs the stochastic process for which the data have been measured, hence enabling us to reconstruct the stochastic process. As an example, we analyze the stochasticity in the beat-to-beat fluctuations in the heart rates of healthy subjects as well as those with congestive heart failure. The inverse method distinguishes the two classes of subjects in terms of drift and diffusion coefficients, which behave completely differently for the two classes, hence potentially providing a new diagnostic tool for identifying congestive heart failure, even at the early stages of the disease.
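The reconstruction idea (recover drift and diffusion coefficients directly from measured increments) can be sketched with conditional moments of the increments, in the Kramers-Moyal spirit of this line of work. The simulated Ornstein-Uhlenbeck process, its parameters, and the binning below are illustrative assumptions, not the paper's heart-rate data or exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW
# as a stand-in "measured" time series (parameters are illustrative).
theta, sigma, dt, n = 1.0, 0.5, 1e-3, 1_000_000
x = np.empty(n)
x[0] = 0.0
kick = rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * kick[i]

# Inverse step: estimate drift and squared diffusion from conditional
# moments of the increments, binned by the current state.
dx = np.diff(x)
bins = np.linspace(-0.5, 0.5, 21)
idx = np.digitize(x[:-1], bins)
centers, drift, diff2 = [], [], []
for b in range(1, len(bins)):
    m = idx == b
    if m.sum() > 1000:
        centers.append(0.5 * (bins[b - 1] + bins[b]))
        drift.append(dx[m].mean() / dt)           # should recover -theta * x
        diff2.append((dx[m] ** 2).mean() / dt)    # should recover sigma**2
```

Fitting a line through the binned drift estimates recovers the relaxation coefficient, and the second conditional moment recovers the diffusion strength; applied to heart-rate data, it is these two reconstructed functions that separate the two classes of subjects.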
A Bayesian method for microseismic source inversion
NASA Astrophysics Data System (ADS)
Pugh, D. J.; White, R. S.; Christie, P. A. F.
2016-08-01
Earthquake source inversion is highly dependent on location determination and velocity models. Uncertainties in both the model parameters and the observations need to be rigorously incorporated into an inversion approach. Here, we show a probabilistic Bayesian method that allows formal inclusion of the uncertainties in the moment tensor inversion. This method allows the combination of different sets of far-field observations, such as P-wave and S-wave polarities and amplitude ratios, into one inversion. Additional observations can be included by deriving a suitable likelihood function from the uncertainties. This inversion produces samples from the source posterior probability distribution, including a best-fitting solution for the source mechanism and associated probability. The inversion can be constrained to the double-couple space or allowed to explore the gamut of moment tensor solutions, allowing volumetric and other non-double-couple components. The posterior probability of the double-couple and full moment tensor source models can be evaluated from the Bayesian evidence, using samples from the likelihood distributions for the two source models, producing an estimate of whether or not a source is double-couple. Such an approach is ideally suited to microseismic studies where there are many sources of uncertainty and it is often difficult to produce reliability estimates of the source mechanism, although this can be true of many other cases. Using full-waveform synthetic seismograms, we also show the effects of noise, location, network distribution and velocity model uncertainty on the source probability density function. The noise has the largest effect on the results, especially as it can affect other parts of the event processing. This uncertainty can lead to erroneous non-double-couple source probability distributions, even when no other uncertainties exist. Although including amplitude ratios can improve the constraint on the source probability
Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems
Benzi, M.; Tuma, M.
1996-12-31
A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.
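The approximate-inverse idea can be illustrated with a toy dense SPAI construction: minimize ||AM − I|| column by column over a fixed sparsity pattern. This differs in detail from the paper's factorized incomplete inverse; the matrix, pattern, and sizes below are illustrative assumptions.

```python
import numpy as np

def spai(A, pattern):
    """Toy dense sparse-approximate-inverse: minimise ||A M - I||_F column
    by column, restricting each column of M to a prescribed sparsity
    pattern (here, the pattern of A itself). Illustrates the idea only;
    the paper's method builds a *factorized* incomplete inverse."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        rows = np.nonzero(pattern[:, j])[0]          # allowed nonzeros
        e = np.zeros(n)
        e[j] = 1.0
        m, *_ = np.linalg.lstsq(A[:, rows], e, rcond=None)
        M[rows, j] = m
    return M
```

With M in hand, the preconditioned Richardson iteration x ← x + M(b − Ax) converges whenever the spectral radius of I − AM is below one, and M can likewise be applied as a preconditioner inside a Krylov method.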
Inverse polynomial reconstruction method in DCT domain
NASA Astrophysics Data System (ADS)
Dadkhahi, Hamid; Gotchev, Atanas; Egiazarian, Karen
2012-12-01
The discrete cosine transform (DCT) offers superior energy compaction properties for a large class of functions and has been employed as a standard tool in many signal and image processing applications. However, it suffers from spurious behavior in the vicinity of edge discontinuities in piecewise smooth signals. To leverage the sparse representation provided by the DCT, in this article, we derive a framework for the inverse polynomial reconstruction in the DCT expansion. It yields the expansion of a piecewise smooth signal in terms of polynomial coefficients, obtained from the DCT representation of the same signal. Taking advantage of this framework, we show that it is feasible to recover piecewise smooth signals from a relatively small number of DCT coefficients with high accuracy. Furthermore, automatic methods based on the minimum description length principle and cross-validation are devised to select the polynomial orders, as required by the inverse polynomial reconstruction method in practical applications. The developed framework can considerably enhance the performance of the DCT in sparse representation of piecewise smooth signals. Numerical results show that denoising and image approximation algorithms based on the proposed framework achieve significant improvements over wavelet counterparts for this class of signals.
An inverse problem by boundary element method
Tran-Cong, T.; Nguyen-Thien, T.; Graham, A.L.
1996-02-01
Boundary Element Methods (BEM) have been established as useful and powerful tools in a wide range of engineering applications, e.g. Brebbia et al. In this paper, we report a particular three dimensional implementation of a direct boundary integral equation (BIE) formulation and its application to numerical simulations of practical polymer processing operations. In particular, we will focus on the application of the present boundary element technology to simulate an inverse problem in plastics processing by extrusion. The task is to design profile extrusion dies for plastics. The problem is highly non-linear due to material viscoelastic behaviours as well as unknown free surface conditions. As an example, the technique is shown to be effective in obtaining the die profiles corresponding to a square viscoelastic extrudate under different processing conditions. To further illustrate the capability of the method, examples of other non-trivial extrudate profiles and processing conditions are also given.
Abel's Theorem Simplifies Reduction of Order
ERIC Educational Resources Information Center
Green, William R.
2011-01-01
We give an alternative to the standard method of reduction of order, in which one uses one solution of a homogeneous, linear, second order differential equation to find a second, linearly independent solution. Our method, based on Abel's Theorem, is shorter, less complex, and extends to higher order equations.
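The Abel's-theorem route can be checked symbolically on a classical example. The equation, known solution, and normalization of the Wronskian constant below are illustrative choices, not taken from the paper.

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Example: x^2 y'' - 3x y' + 4y = 0, standard form y'' - (3/x) y' + (4/x^2) y = 0
p = -3 / x          # coefficient of y' in standard form
y1 = x**2           # one known solution

# Abel's theorem gives the Wronskian without knowing y2:
# W(y1, y2) = C * exp(-int p dx); take C = 1.
W = sp.exp(-sp.integrate(p, x))                       # here W = x**3
# Reduction of order: y2 = y1 * int W / y1^2 dx
y2 = sp.simplify(y1 * sp.integrate(W / y1**2, x))     # here y2 = x**2*log(x)

# Verify that y2 solves the original equation
residual = x**2 * sp.diff(y2, x, 2) - 3 * x * sp.diff(y2, x) + 4 * y2
assert sp.simplify(residual) == 0
```

The shortcut is that the Wronskian comes from the first-order coefficient alone, so the second solution follows from a single quadrature rather than from substituting y2 = v·y1 and re-deriving the reduced equation.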
NASA Astrophysics Data System (ADS)
Trigub, R. M.
2015-08-01
We study the convergence of linear means of the Fourier series $\sum_{k=-\infty}^{+\infty}\lambda_{k,\varepsilon}\hat{f}_k e^{ikx}$ of a function $f\in L_1[-\pi,\pi]$ to $f(x)$ as $\varepsilon\searrow 0$ at all points at which the derivative $\bigl(\int_0^x f(t)\,dt\bigr)'$ exists (i.e. at the d-points). Sufficient conditions for the convergence are stated in terms of the factors $\{\lambda_{k,\varepsilon}\}$ and, in the case $\lambda_{k,\varepsilon}=\varphi(\varepsilon k)$, in terms of the condition that the functions $\varphi$ and $x\varphi'(x)$ belong to the Wiener algebra $A(\mathbb{R})$. We also study a new problem concerning the convergence of means of Abel-Poisson type, $\sum_{k=-\infty}^{\infty} r^{\psi(|k|)}\hat{f}_k e^{ikx}$, as $r\nearrow 1$
Application of the least-squares inversion method: Fourier series versus waveform inversion
NASA Astrophysics Data System (ADS)
Min, Dong-Joo; Shin, Jungkyun; Shin, Changsoo
2015-11-01
We describe an implicit link between waveform inversion and Fourier series based on inversion methods such as gradient, Gauss-Newton, and full Newton methods. Fourier series have been widely used as a basic concept in studies on seismic data interpretation, and their coefficients are obtained in the classical Fourier analysis. We show that Fourier coefficients can also be obtained by inversion algorithms, and compare the method to seismic waveform inversion algorithms. In that case, Fourier coefficients correspond to model parameters (velocities, density or elastic constants), whereas cosine and sine functions correspond to components of the Jacobian matrix, that is, partial derivative wavefields in seismic inversion. In the classical Fourier analysis, optimal coefficients are determined by the sensitivity of a given function to sine and cosine functions. In the inversion method for Fourier series, Fourier coefficients are obtained by measuring the sensitivity of residuals between given functions and test functions (defined as the sum of weighted cosine and sine functions) to cosine and sine functions. The orthogonal property of cosine and sine functions makes the full or approximate Hessian matrix become a diagonal matrix in the inversion for Fourier series. In seismic waveform inversion, the Hessian matrix may or may not be a diagonal matrix, because partial derivative wavefields correlate with each other to some extent, making them semi-orthogonal. At the high-frequency limits, however, the Hessian matrix can be approximated by either a diagonal matrix or a diagonally-dominant matrix. Since we usually deal with relatively low frequencies in seismic waveform inversion, it is not diagonally dominant and thus it is prohibitively expensive to compute the full or approximate Hessian matrix. By interpreting Fourier series with the inversion algorithms, we note that the Fourier series can be computed at an iteration step using any inversion algorithms such as the
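The diagonal-Hessian point made in this abstract is easy to demonstrate numerically: when the "Jacobian" columns are sampled sines and cosines, the normal-equations matrix is exactly diagonal and one Gauss-Newton step recovers the Fourier coefficients. The signal and harmonic count below are illustrative assumptions.

```python
import numpy as np

# A periodic "observed" signal sampled at N equally spaced points
N, K = 256, 8                    # samples, harmonics kept in the model
t = 2 * np.pi * np.arange(N) / N
f = 1.5 + np.cos(t) - 0.5 * np.sin(3 * t) + 0.25 * np.cos(5 * t)

# Jacobian: columns are the basis functions (the analogue of the partial
# derivative wavefields); model parameters are the Fourier coefficients.
cols = [np.ones(N)]
for k in range(1, K + 1):
    cols += [np.cos(k * t), np.sin(k * t)]
G = np.column_stack(cols)

# Hessian of the least-squares objective: diagonal, by discrete orthogonality
H = G.T @ G
assert np.allclose(H, np.diag(np.diag(H)))

# One Gauss-Newton step is therefore the exact solution
m = np.linalg.solve(H, G.T @ f)
```

In seismic waveform inversion the analogous columns (partial-derivative wavefields) are only semi-orthogonal, which is precisely why the Hessian there is not diagonal except in the high-frequency limit.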
A method for obtaining coefficients of compositional inverse generating functions
NASA Astrophysics Data System (ADS)
Kruchinin, Dmitry V.; Shablya, Yuriy V.; Kruchinin, Vladimir V.; Shelupanov, Alexander A.
2016-06-01
The aim of this paper is to show how to obtain expressions for the coefficients of compositional inverse generating functions in an explicit way. The method is based on the Lagrange inversion theorem and the composita of generating functions. We also give a method for obtaining expressions for the coefficients of reciprocal generating functions and consider some examples.
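The coefficient-extraction step behind such methods can be sketched directly from the Lagrange inversion theorem. This sketch uses plain series expansion rather than the authors' composita machinery, and the Lambert-W test case is an illustrative choice.

```python
import sympy as sp

w = sp.symbols('w')

def inverse_coeff(f, w, n):
    """n-th Taylor coefficient of the compositional inverse of f
    (with f(0) = 0, f'(0) != 0), via the Lagrange inversion theorem:
    [t^n] f^{-1}(t) = (1/n) [w^(n-1)] (w / f(w))^n."""
    c = sp.series((w / f)**n, w, 0, n).removeO().coeff(w, n - 1)
    return sp.simplify(c / n)

# Test case: f(w) = w e^w, whose compositional inverse is the Lambert W
# function, with known expansion W(t) = sum_{n>=1} (-n)^(n-1)/n! t^n.
f = w * sp.exp(w)
coeffs = [inverse_coeff(f, w, n) for n in range(1, 6)]
# coeffs == [1, -1, 3/2, -8/3, 125/24]
```

Each coefficient is obtained from a single series expansion of (w/f)^n, so the whole inverse series is available without ever solving for the inverse function itself.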
An inverse design method for 2D airfoil
NASA Astrophysics Data System (ADS)
Liang, Zhi-Yong; Cui, Peng; Zhang, Gen-Bao
2010-03-01
Computational methods for the aerodynamic design of aircraft are now applied more widely than before, and the design of an airfoil is a central problem. Most related papers discuss the forward problem, but the inverse method is more useful in practical design. In this paper, the inverse design of a 2D airfoil was investigated. A finite element method based on the variational principle was used to carry out the computation. The simulation showed that the method is suitable for the design.
A Higher Order Iterative Method for Computing the Drazin Inverse
Soleymani, F.; Stanimirović, Predrag S.
2013-01-01
A method with a high convergence rate for finding approximate inverses of nonsingular matrices is suggested and established analytically. An extension of the introduced computational scheme to general square matrices is defined. The extended method could be used for finding the Drazin inverse. The application of the scheme to large sparse test matrices, alongside its use in preconditioning linear systems of equations, is presented to clarify the contribution of the paper. PMID:24222747
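The family of iterations this abstract refers to can be illustrated by its classical second-order member, the Newton-Schulz iteration (the paper's scheme is a higher-order member of the same hyperpower family). The scaling of the initial guess below is the standard convergence-guaranteeing choice, not the paper's.

```python
import numpy as np

def newton_schulz_inverse(A, iters=30):
    """Newton-Schulz iteration X <- X (2I - A X): the classical second-order
    member of the hyperpower family of matrix-inverse iterations.
    X0 = A^T / (||A||_1 ||A||_inf) guarantees convergence for nonsingular A;
    the convergence is quadratic."""
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X
```

Higher-order members of the family trade more matrix multiplications per step for fewer steps; the sketch above shows the structure shared by all of them.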
The Filtered Abel Transform and Its Application in Combustion Diagnostics
NASA Technical Reports Server (NTRS)
Simons, Stephen N. (Technical Monitor); Yuan, Zeng-Guang
2003-01-01
Many non-intrusive combustion diagnostic methods generate line-of-sight projections of a flame field. To reconstruct the spatial field of the measured properties, these projections need to be deconvoluted. When the spatial field is axisymmetric, commonly used deconvolution methods include the Abel transform, the onion-peeling method, and the two-dimensional Fourier transform method and its derivatives, such as the filtered back-projection methods. This paper develops a new approach for performing the Abel transform, which possesses the exactness of the Abel transform and the flexibility of incorporating various filters in the reconstruction process. The Abel transform is an exact method and the simplest among these commonly used methods. It is evinced in this paper that all exact reconstruction methods for axisymmetric distributions must be equivalent to the Abel transform because of its uniqueness and exactness. A detailed proof is presented to show that the two-dimensional Fourier method, when applied to axisymmetric cases, is identical to the Abel transform. Discrepancies among the various reconstruction methods stem from the different approximations made to perform numerical calculations. An equation relating the spectrum of a set of projection data to that of the corresponding spatial distribution is obtained, which shows that the spectrum of the projection is equal to the Abel transform of the spectrum of the corresponding spatial distribution. From this equation, if either the projection or the distribution is bandwidth limited, the other is also bandwidth limited, and both have the same bandwidth. If the two are not bandwidth limited, the Abel transform has a bias against low wave number components in most practical cases. This explains why the Abel transform and all exact deconvolution methods are sensitive to high wave number noise. The filtered Abel transform is based on the fact that the Abel transform of filtered projection data is equal
ERIC Educational Resources Information Center
Brown, Malcolm
2009-01-01
Inversions are fascinating phenomena. They are reversals of the normal or expected order. They occur across a wide variety of contexts. What do inversions have to do with learning spaces? The author suggests that they are a useful metaphor for the process that is unfolding in higher education with respect to education. On the basis of…
Methods for solving ill-posed inverse problems
NASA Astrophysics Data System (ADS)
Alifanov, O. M.
1983-11-01
Various approaches to the solution of inverse problems of heat conduction are reviewed, including direct analytical and numerical methods, the method of iterative regularization, and algebraic and numerical methods regularized in accordance with the variational principle. The method of iterative regularization is shown to be the most versatile of the above approaches. The basic principles of this method are briefly examined, and methods are proposed for computing the gradients of discrepancies. An approach is proposed to the iterative solution of inverse problems with a specified order of smoothness.
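The iterative-regularization idea singled out in this review can be illustrated with the classical Landweber iteration stopped by the discrepancy principle, where the iteration count plays the role of the regularization parameter. This is a generic sketch standing in for, and not reproducing, Alifanov's heat-conduction algorithm; the test problem and all names are ours:

```python
import numpy as np

def landweber(A, b, delta, tau=1.1, max_iter=10000):
    """Landweber iteration with discrepancy-principle stopping: iterate
    x += omega * A.T @ (b - A @ x) and stop as soon as the residual
    drops to the noise level tau * delta."""
    omega = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size 1/sigma_max^2
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tau * delta:
            break
        x = x + omega * A.T @ r
    return x

# Small synthetic test with a known noise level
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 20))
x_true = rng.normal(size=20)
noise = 0.01 * rng.normal(size=60)
b = A @ x_true + noise
x_rec = landweber(A, b, delta=np.linalg.norm(noise))
```

Stopping early, rather than iterating to the least-squares solution, is what regularizes: continuing past the noise level would begin fitting the noise.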
Geophysical Inversion With Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin
2016-04-01
We are investigating the use of Pareto multi-objective global optimization (PMOGO) methods to solve numerically complicated geophysical inverse problems. PMOGO methods can be applied to highly nonlinear inverse problems, to those where derivatives are discontinuous or simply not obtainable, and to those where multiple minima exist in the problem space. PMOGO methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. This allows a more complete assessment of the possibilities and provides opportunities to calculate statistics regarding the likelihood of particular model features. We are applying PMOGO methods to four classes of inverse problems. The first are discrete-body problems where the inversion determines values of several parameters that define the location, orientation, size and physical properties of an anomalous body represented by a simple shape, for example a sphere, ellipsoid, cylinder or cuboid. A PMOGO approach can determine not only the optimal shape parameters for the anomalous body but also the optimal shape itself. Furthermore, when one expects several anomalous bodies in the subsurface, a PMOGO inversion approach can determine an optimal number of parameterized bodies. The second class of inverse problems are standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The third class of problems are lithological inversions, which are also mesh-based but cells can only take discrete physical property values corresponding to known or assumed rock units. In the fourth class, surface geometry inversions, we consider a fundamentally different type of problem in which a model comprises wireframe surfaces representing contacts between rock units. The physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. Surface geometry inversion can be
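The Pareto-optimal bookkeeping at the heart of any PMOGO scheme can be sketched independently of the geophysics. This dominance filter for minimization is a generic illustration written by us, not the authors' code:

```python
import numpy as np

def pareto_front(objectives):
    """Mask of Pareto-optimal rows: a point is kept unless some other
    point is at least as good in every objective and strictly better
    in at least one (minimization convention)."""
    pts = np.asarray(objectives, dtype=float)
    mask = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        dominated = np.any(np.all(pts <= pts[i], axis=1)
                           & np.any(pts < pts[i], axis=1))
        mask[i] = not dominated
    return mask

# Two objectives, e.g. (data misfit, regularization term)
front = pareto_front([[1, 5], [2, 2], [5, 1], [3, 3], [4, 4]])
```

Here the first three points trade one objective against the other and survive, while the last two are dominated; a PMOGO method returns the surviving suite rather than a single weighted-sum minimizer.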
Methods for solving of inverse heat conduction problems
NASA Astrophysics Data System (ADS)
Kobilskaya, E.; Lyashenko, V.
2016-10-01
A general mathematical model of the high-temperature thermodiffusion that occurs in a limited environment is considered. Based on this model, a formulation of inverse problems for homogeneous and inhomogeneous parabolic equations is proposed. The inverse problem aims at identifying one or several unknown parameters of the mathematical model. These parameters allow maintaining the required temperature distribution and concentration distribution of the substance in the whole area or in a part of it. For each case (internal heat source, external heat source, or a combination) an appropriate method for solving the inverse problem is proposed.
Noncoherent matrix inversion methods for Scansar processing
NASA Astrophysics Data System (ADS)
Dendal, Didier
1995-11-01
The aim of this work is to develop algebraic reconstruction techniques for low-resolving-power SAR imagery, as in the Scansar or QUICKLOOK imaging modes. The traditional reconstruction algorithms are indeed not well suited to low-resolution purposes, since Fourier constraints impose a computational load of the same order as that of the usual SAR azimuthal resolution. Furthermore, the range migration balancing is superfluous, as it does not cover a tenth of the resolution cell even in the least favorable situations. There are several possibilities for using matrices in the azimuthal direction. The most direct alternative leads to a matrix inversion. Unfortunately, the numerical conditioning of the problem is far from excellent, since each line of the matrix is an image of the antenna radiating pattern, with a shift between two successive lines corresponding to the distance covered by the SAR between two pulse transmissions (a few meters for the satellite ERS1). We show how a very ill-conditioned problem can be turned into an equivalent one without any risk of divergence, by a technique of successive decimation by two (resolving power doubled at each step). This technique leads to very small square matrices (two lines and two columns), whose good numerical conditioning is certified by a well-known theorem of numerical analysis. The convergence rate of the process depends on the circumstances (mainly the distance between two pulse transmissions) and on the required accuracy, but five or six iterations already give excellent results. The process is applicable at four or five levels (numbers of decimations), which corresponds to initial matrices of 16 by 16 or 32 by 32. The azimuth processing is performed on the basis of the projection function concept (tomographic analogy of radar principles). This integrated information results from classical coherent range compression. The aperture synthesis is obtained by non-coherent processing
Improved hybrid iterative optimization method for seismic full waveform inversion
NASA Astrophysics Data System (ADS)
Wang, Yi; Dong, Liang-Guo; Liu, Yu-Zhu
2013-06-01
In full waveform inversion (FWI), Hessian information of the misfit function is of vital importance for accelerating the convergence of the inversion; however, it is usually not feasible to directly calculate the Hessian matrix and its inverse. Although the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) and Hessian-free inexact Newton (HFN) methods are able to use approximate Hessian information, the information they collect is limited. The two methods can be interlaced because they are able to provide Hessian information for each other; however, the performance of the hybrid iterative method depends on an effective switch between the two methods. We have designed a new scheme to realize a dynamic switch between the two methods based on the decrease ratio (DR) of the misfit function (objective function), and we propose a modified hybrid iterative optimization method. In the new scheme, we compare the DR of the two methods for a given computational cost and choose the method with the faster DR. Using these steps, the modified method always implements the more efficient method. The results of Marmousi and overthrust model tests indicate that convergence with our modified method is significantly faster than that of the L-BFGS method, with no loss of inversion quality. Moreover, our modified method slightly outperforms the enriched method in convergence speed. It also exhibits better efficiency than the HFN method.
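The decrease-ratio switch can be illustrated with two stand-in optimizers. The paper interlaces L-BFGS and HFN; the sketch below instead uses fixed-step and backtracking gradient descent on a toy quadratic misfit, so every name, step size, and the test problem here are illustrative assumptions, not the authors' scheme:

```python
import numpy as np

def fixed_step(f, grad, x, k, lr=0.05):
    # Candidate 1: plain gradient descent with a fixed step
    for _ in range(k):
        x = x - lr * grad(x)
    return x

def backtracking(f, grad, x, k):
    # Candidate 2: gradient descent with an Armijo backtracking line search
    for _ in range(k):
        g, t = grad(x), 1.0
        while f(x - t * g) > f(x) - 0.5 * t * (g @ g):
            t *= 0.5
        x = x - t * g
    return x

def dr_switch(f, grad, x0, blocks=10, k=5):
    """Run both candidates for a short block of k steps from the current
    iterate and keep the one with the larger decrease ratio of f."""
    x = np.asarray(x0, dtype=float)
    for _ in range(blocks):
        candidates = [m(f, grad, x.copy(), k)
                      for m in (fixed_step, backtracking)]
        x = min(candidates, key=f)   # larger DR = smaller resulting misfit
    return x

# Ill-conditioned quadratic misfit as a stand-in for the FWI objective
H = np.diag([1.0, 10.0])
misfit = lambda x: 0.5 * x @ H @ x
gradient = lambda x: H @ x
x_best = dr_switch(misfit, gradient, [1.0, 1.0])
```

The key design point matches the abstract: the switch is decided by measured misfit decrease per fixed computational budget, not by a fixed schedule.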
Estimating surface acoustic impedance with the inverse method.
Piechowicz, Janusz
2011-01-01
Sound field parameters are predicted with numerical methods in sound control systems, in the acoustic design of buildings, and in sound field simulations. Those methods define the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques have been developed; one of them uses two microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary element method, in which estimating the acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily-shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics. PMID:21939599
Homogenization method based on the inverse problem
Tota, A.; Makai, M.
2013-07-01
We present a method for deriving homogeneous multi-group cross sections to replace a heterogeneous region's multi-group cross sections, provided that the fluxes and currents on the external boundary, and the region-averaged fluxes, are preserved. The method is developed using the diffusion approximation to the neutron transport equation in a symmetrical slab geometry. Assuming that the boundary fluxes are given, two response matrices (RMs) can be defined: the first derives the boundary current from the boundary flux, the second derives the flux integral over the region from the boundary flux. Assuming that these RMs are known, we present a formula which reconstructs the multi-group cross-section matrix and the diffusion coefficients from the RMs of a homogeneous slab. Applying this formula to the RMs of a slab with multiple homogeneous regions yields a homogenization method, which produces homogenized multi-group cross sections and homogenized diffusion coefficients such that the fluxes and currents on the external boundary, and the region-averaged fluxes, are preserved. The method is based on the determination of the eigenvalues and eigenvectors of the RMs. We reproduce the four-group cross-section matrix and the diffusion constants from the RMs in numerical examples. We give conditions for replacing a heterogeneous region by a homogeneous one so that the boundary current and the region-averaged flux are preserved for a given boundary flux. (authors)
Use of ABIC and Invention of Inversion Methods
NASA Astrophysics Data System (ADS)
Fukahata, Y.; Yagi, Y.
2014-12-01
Bayesian inference is a powerful tool in inversion analyses of geophysical problems, because observed data are commonly inaccurate and insufficient in these problems. In Bayesian inference, we always encounter a problem in determining the relative weight between observed data and prior information. ABIC (Akaike's Bayesian Information Criterion) gives a useful solution to this problem, particularly for linear inverse problems, by maximizing the marginal likelihood for the relative weight. In general, we subjectively construct a Bayesian model, which consists of a family of parametric models, with different values of the relative weight giving different parametric models; ABIC enables us to objectively select a specific model among them. In principle, ABIC gives us an inverse solution that mostly follows the observed data when we have a sufficient amount of accurate data, and an inverse solution that mostly follows the prior information when the observed data are insufficient and/or inaccurate (see the attached image). In inversion analyses using ABIC, we do not manually adjust the relative weight. Hence, we quite easily obtain geophysically unrealistic results. Because of this, one may think that inversion analysis using ABIC is difficult to deal with, or even unreliable. However, this characteristic is an excellent point of ABIC: if we obtain a geophysically unrealistic result, this implies that some problems are hidden in the inversion method. In this talk, we show an example of the invention of inversion methods inspired by ABIC: the importance of covariance components including modeling errors. As shown by this example, we can get closer to the true solution not by manually adjusting the relative weight to obtain a seemingly good-looking result, but by determining the relative weight statistically. It is a harder way to determine the relative weight statistically, but we should pursue this way to understand geophysical problems more
Internal dynamics of Abell 1240: a galaxy cluster with symmetric double radio relics
NASA Astrophysics Data System (ADS)
Barrena, R.; Girardi, M.; Boschin, W.; Dasí, M.
2009-08-01
Context: The mechanisms giving rise to diffuse radio emission in galaxy clusters, and in particular their connection with cluster mergers, are still debated. Aims: We aim to obtain new insights into the internal dynamics of the cluster Abell 1240, which appears to contain two roughly symmetric radio relics, separated by ~2 h_70^-1 Mpc. Methods: Our analysis is based mainly on redshift data for 145 galaxies mostly acquired at the Telescopio Nazionale Galileo and on new photometric data acquired at the Isaac Newton Telescope. We also use X-ray data from the Chandra archive and photometric data from the Sloan Digital Sky Survey (Data Release 7). We combine galaxy velocities and positions to select 89 cluster galaxies and analyze the internal dynamics of the Abell 1237 + Abell 1240 cluster complex, Abell 1237 being a close companion of Abell 1240 in its southern direction. Results: We estimate similar redshifts for Abell 1237 and Abell 1240, < z > = 0.1935 and < z > = 0.1948, respectively. For Abell 1237, we estimate a line-of-sight (LOS) velocity dispersion of σV ~ 740 km s^-1 and a mass of M ~ 6 × 10^14 h_70^-1 M⊙. For Abell 1240, we estimate a LOS σV ~ 870 km s^-1 and a mass in the range M ~ 0.9-1.9 × 10^15 h_70^-1 M⊙, which takes into account its complex dynamics. Abell 1240 is shown to have a bimodal structure with two galaxy clumps roughly aligned along its N-S direction, the same as defined by the elongation of its X-ray surface brightness and the axis of symmetry of the relics. The two brightest galaxies of Abell 1240, associated with the northern and southern clumps, are separated by a LOS rest-frame velocity difference Vrf ~ 400 km s^-1 and a projected distance D ~ 1.2 h_70^-1 Mpc. The two-body model agrees with the hypothesis that we are looking at a cluster merger that occurred largely in the plane of the sky, the two galaxy clumps being separated by a rest-frame velocity difference Vrf ~ 2000 km s^-1 at a time of 0.3 Gyr after the core crossing, while Abell 1237
The SOLA method for helioseismic inversion
NASA Astrophysics Data System (ADS)
Pijpers, F. P.; Thompson, M. J.
1994-01-01
The Subtractive Optimally Localized Averages (SOLA) method is a versatile and efficient technique for inverting helioseismic data. The SOLA method is based on explicit construction of Backus-Gilbert averaging kernels, but whereas the more usual formulations of the optimally localized averages (OLA) method use a multiplicative penalty function to localize the kernels, the distinctive idea of SOLA is that one specifies a desired target form for the kernels and then minimizes the integrated squared difference between the kernels and the target form. This allows great versatility in the choice of target form, and furthermore SOLA has the significant advantage of being computationally more efficient than the usual OLA formulations. A Gaussian target function is a useful choice, and we use the example of determining the Sun's internal rotation to explore how the parameter values (such as the Gaussian's width) should best be chosen. Some alternatives to using a Gaussian function as target function are discussed and applied to artificial data in a blind experiment. In particular we show that it is possible to invert directly for the gradient of the rotation. This may be of interest if there are localized large gradients in the rotation rate.
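The SOLA fitting step the abstract describes reduces to linear least squares: choose coefficients so the combined averaging kernel minimizes the integrated squared difference to the target. In this sketch the mode kernels are synthetic sin² functions (the real ones come from a solar model), so all kernels and parameter values are illustrative assumptions:

```python
import numpy as np

# SOLA in one step: choose coefficients c so the combined averaging kernel
# sum_i c_i K_i(r) minimizes the integrated squared difference to a
# Gaussian target centred on the radius of interest.
r = np.linspace(0.0, 1.0, 400)
dr = r[1] - r[0]
K = np.array([np.sin((k + 1) * np.pi * r) ** 2
              for k in range(40)])               # toy mode kernels
r0, width = 0.5, 0.05
T = np.exp(-0.5 * ((r - r0) / width) ** 2)       # Gaussian target form
# Normal equations for  min_c  integral (c . K(r) - T(r))^2 dr
G = (K * dr) @ K.T
h = (K * dr) @ T
c = np.linalg.solve(G + 1e-10 * np.eye(len(G)), h)   # tiny ridge for safety
avg_kernel = c @ K
```

The resulting combined kernel peaks at the target radius r0, which is the sense in which the average of the data weighted by c is localized there; changing the target (e.g. to the derivative of a Gaussian, as the abstract notes for rotation gradients) only changes T.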
A time domain sampling method for inverse acoustic scattering problems
NASA Astrophysics Data System (ADS)
Guo, Yukun; Hömberg, Dietmar; Hu, Guanghui; Li, Jingzhi; Liu, Hongyu
2016-06-01
This work concerns the inverse scattering problems of imaging unknown/inaccessible scatterers by transient acoustic near-field measurements. Based on the analysis of the migration method, we propose efficient and effective sampling schemes for imaging small and extended scatterers from knowledge of time-dependent scattered data due to incident impulsive point sources. Though the inverse scattering problems are known to be nonlinear and ill-posed, the proposed imaging algorithms are totally "direct" involving only integral calculations on the measurement surface. Theoretical justifications are presented and numerical experiments are conducted to demonstrate the effectiveness and robustness of our methods. In particular, the proposed static imaging functionals enhance the performance of the total focusing method (TFM) and the dynamic imaging functionals show analogous behavior to the time reversal inversion but without solving time-dependent wave equations.
Development of an inverse method for coastal risk management
NASA Astrophysics Data System (ADS)
Idier, D.; Rohmer, J.; Bulteau, T.; Delvallée, E.
2013-04-01
Recent flooding events, like Katrina (USA, 2005) or Xynthia (France, 2010), illustrate the complexity of coastal systems and the limits of traditional flood risk analysis. Among other questions, these events raised issues such as: "how to choose flooding scenarios for risk management purposes?", "how to make a society more aware and prepared for such events?" and "which level of risk is acceptable to a population?". The present paper aims at developing an inverse approach that could seek to address these three issues. The main idea of the proposed method is the inversion of the usual risk assessment steps: starting from the maximum acceptable hazard level (defined by stakeholders as the one leading to the maximum tolerable consequences) to finally obtain the return period of this threshold. Such an "inverse" approach would allow for the identification of all the offshore forcing conditions (and their occurrence probability) inducing a threat for critical assets of the territory, such information being of great importance for coastal risk management. This paper presents the first stage in developing such a procedure. It focuses on estimation (through inversion of the flooding model) of the offshore conditions leading to the acceptable hazard level, estimation of the return period of the associated combinations, and thus of the maximum acceptable hazard level. A first application for a simplified case study (based on real data), located on the French Mediterranean coast, is presented, assuming a maximum acceptable hazard level. Even if only one part of the full inverse method has been developed, we demonstrate how the inverse method can be useful in (1) estimating the probability of exceeding the maximum inundation height for identified critical assets, (2) providing critical offshore conditions for flooding in early warning systems, and (3) raising awareness of stakeholders and eventually enhance preparedness for future flooding events by allowing them to assess
Geostatistical joint inversion of seismic and potential field methods
NASA Astrophysics Data System (ADS)
Shamsipour, Pejman; Chouteau, Michel; Giroux, Bernard
2016-04-01
Interpretation of geophysical data needs to integrate different types of information to make the proposed model geologically realistic. Multiple data sets can reduce the uncertainty and non-uniqueness present in separate geophysical data inversions. Seismic data can play an important role in mineral exploration; however, processing and interpretation of seismic data are difficult due to the complexity of hard-rock geology. On the other hand, the model recovered from potential field methods is affected by an inherent non-uniqueness caused by the nature of the physics and by the underdetermination of the problem. Joint inversion of seismic and potential field data can mitigate the weaknesses of separate inversions of these methods. A stochastic joint inversion method based on geostatistical techniques is applied to estimate density and velocity distributions from gravity and travel time data. The method fully integrates the physical relations between density and gravity, on one hand, and slowness and travel time, on the other hand. As a consequence, when the data are considered noise-free, the responses from the inverted slowness and density data exactly reproduce the observed data. The required density and velocity auto- and cross-covariances are assumed to follow a linear model of coregionalization (LCM). The recent development of nonlinear models of coregionalization could also be applied if needed. The kernel function for the gravity method is obtained in closed form. For ray tracing, we use the shortest-path method (SPM) to calculate the operation matrix. The joint inversion is performed on a structured grid; however, it is possible to extend it to unstructured grids. The method is tested on two synthetic models: a model consisting of two objects buried in a homogeneous background, and a model with a stochastic distribution of parameters. The results illustrate the capability of the method to improve the inverted model compared to the separate inverted models with either gravity
Solving inverse problems of identification type by optimal control methods
Lenhart, S.; Protopopescu, V.; Jiongmin Yong
1997-06-01
Inverse problems of identification type for nonlinear equations are considered within the framework of optimal control theory. The rigorous solution of any particular problem depends on the functional setting, type of equation, and unknown quantity (or quantities) to be determined. Here the authors present only the general articulations of the formalism. Compared to classical regularization methods (e.g. Tikhonov coupled with optimization schemes), their approach presents several advantages, namely: (i) a systematic procedure to solve inverse problems of identification type; (ii) an explicit expression for the approximations of the solution; and (iii) a convenient numerical solution of these approximations.
Direct inversion methods for spectral amplitude modulation of femtosecond pulses.
Delgado-Aguillón, Jesús; Garduño-Mejía, Jesús; López-Téllez, Juan Manuel; Bruce, Neil C; Rosete-Aguilar, Martha; Román-Moreno, Carlos Jesús; Ortega-Martínez, Roberto
2014-04-01
In the present work, we applied an amplitude spatial light modulator to shape the spectral amplitude of femtosecond pulses in a single step, without an iterative algorithm, by using an inversion method defined as the generalized retardance function. We also present a single-step method to shape the intensity profile, defined as the influence matrix. Numerical and experimental results are presented for both methods.
Indium oxide inverse opal films synthesized by structure replication method
NASA Astrophysics Data System (ADS)
Amrehn, Sabrina; Berghoff, Daniel; Nikitin, Andreas; Reichelt, Matthias; Wu, Xia; Meier, Torsten; Wagner, Thorsten
2016-04-01
We present the synthesis of indium oxide (In2O3) inverse opal films with photonic stop bands in the visible range by a structure replication method. Artificial opal films made of poly(methyl methacrylate) (PMMA) spheres are utilized as template. The opal films are deposited via sedimentation facilitated by ultrasonication, and then impregnated by indium nitrate solution, which is thermally converted to In2O3 after drying. The quality of the resulting inverse opal film depends on many parameters; in this study the water content of the indium nitrate/PMMA composite after drying is investigated. Comparison of the reflectance spectra recorded by vis-spectroscopy with simulated data shows a good agreement between the peak position and calculated stop band positions for the inverse opals. This synthesis is less complex and highly efficient compared to most other techniques and is suitable for use in many applications.
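The stop-band position discussed in the abstract is commonly estimated with the modified Bragg law for the (111) planes of a close-packed opal. The sphere diameter, refractive indices, and filling fraction below are illustrative assumptions of ours, not the paper's measured values:

```python
import numpy as np

# Modified Bragg law for the (111) stop band of a close-packed (inverse) opal:
#   lambda = 2 * d111 * sqrt(n_eff^2 - sin(theta)^2),  d111 = sqrt(2/3) * D
D = 300e-9                      # PMMA template sphere diameter [m] (assumed)
f_solid = 0.26                  # solid fraction of an ideal inverse opal
n_in2o3, n_air = 2.0, 1.0       # assumed refractive indices
n_eff = np.sqrt(f_solid * n_in2o3 ** 2 + (1.0 - f_solid) * n_air ** 2)
d111 = np.sqrt(2.0 / 3.0) * D
theta = 0.0                     # normal incidence [rad]
lam = 2.0 * d111 * np.sqrt(n_eff ** 2 - np.sin(theta) ** 2)
```

For these assumed parameters the estimate lands in the visible red, consistent with the abstract's claim of stop bands in the visible range; comparing such estimates with measured reflectance peaks is the kind of check the authors perform against simulated spectra.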
Joint Geophysical Inversion With Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelievre, P. G.; Bijani, R.; Farquharson, C. G.
2015-12-01
Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class are standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class of problems are also mesh-based but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods. This includes the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used but these can be ameliorated using parallelization and problem dimension reduction strategies.
Kılıç, Emre; Eibert, Thomas F.
2015-05-01
An approach combining boundary integral and finite element methods is introduced for the solution of three-dimensional inverse electromagnetic medium scattering problems. Based on the equivalence principle, unknown equivalent electric and magnetic surface current densities on a closed surface are utilized to decompose the inverse medium problem into two parts: a linear radiation problem and a nonlinear cavity problem. The first problem is formulated by a boundary integral equation, the computational burden of which is reduced by employing the multilevel fast multipole method (MLFMM). Reconstructed Cauchy data on the surface allows the utilization of the Lorentz reciprocity and Poynting theorems. Exploiting these theorems, the noise level and an initial guess are estimated for the cavity problem. Moreover, it is possible to determine whether the material is lossy or not. In the second problem, the estimated surface currents form inhomogeneous boundary conditions of the cavity problem. The cavity problem is formulated by the finite element technique and solved iteratively by the Gauss–Newton method to reconstruct the properties of the object. Regularization for both the first and the second problems is achieved by a Krylov subspace method. The proposed method is tested against both synthetic and experimental data and promising reconstruction results are obtained.
An equivalent source inversion method for imaging complex structures
NASA Astrophysics Data System (ADS)
Munk, Jens
Accurate subsurface imaging is of interest to geophysicists, having applications in geological mapping, underground void detection, ground contaminant mapping, and land mine detection. The mathematical framework necessary to generate images of the subsurface from field measurements describes the inverse problem, which is generally ill-posed and non-linear. Target scattering from an electromagnetic excitation results in a non-linear formulation, which is usually linearized using a weak scattering approximation. The equivalent source inversion method, in contrast, does not rely on a weak scattering approximation. The method combines the unknown total field and permittivity contrast into a single unknown distribution of "equivalent sources". Once determined, these sources are used to obtain an estimate of the total fields within the target or scatterer. The final step in the inversion is to use these fields to obtain the desired physical property. Excellent reconstructions are obtained when the target is illuminated using multiple look angles and frequencies. Target reconstructions are further enhanced using various iterative algorithms. The general formulation of the method allows it to be used in conjunction with a number of geophysical applications. Specifically, the method can be applied to any geophysical technique incorporating a measured response to a known induced input. This is illustrated by formulating the method within resistivity electrical prospecting.
NASA Astrophysics Data System (ADS)
Ansari, R.; Campagne, J. E.; Colom, P.; Ferrari, C.; Magneville, Ch.; Martin, J. M.; Moniez, M.; Torrentó, A. S.
2016-02-01
We have observed regions of three galaxy clusters at z ~ 0.06-0.09 (Abell85, Abell1205, Abell2440) with the Nançay radio telescope (NRT) to search for 21 cm emission and to fully characterize the FPGA-based BAORadio digital backend. We have tested the new BAORadio data acquisition system by observing sources in parallel with the NRT standard correlator (ACRT) backend over several months. BAORadio enables wide-band instantaneous observation of the [1250, 1500] MHz frequency range, as well as the use of powerful RFI mitigation methods thanks to its fine time sampling. A number of questions related to instrument stability, data processing, and calibration are discussed. We have obtained the radiometer curves over the integration time range [0.01, 10 000] seconds and we show that sensitivities of a few mJy over most of the wide frequency band can be reached with the NRT. It is clearly shown that in blind line searches, which is the context of H I intensity mapping for Baryon Acoustic Oscillations, the new acquisition system and processing pipeline outperforms the standard one. We report a positive detection of 21 cm emission at the 3σ level from galaxies in the outer region of Abell85 at ≃1352 MHz (14 400 km/s), corresponding to a line strength of ≃0.8 Jy km/s. We also observe an excess of power around ≃1318 MHz (21 600 km/s), although at lower statistical significance, compatible with emission from Abell1205 galaxies. Detected radio line emissions have been cross-matched with optical catalogs and we have derived hydrogen mass estimates.
Recent developments in the inversion by the method of relaxation
NASA Technical Reports Server (NTRS)
Chahine, M. T.
1972-01-01
The relaxation method for inverse solution of the full radiative transfer equation is generalized to solve for all the atmospheric parameters that appear in the integrand as functions or functionals, without any a priori information about the expected solution. Illustrations are presented using the 7.5 micron CH4 band for determining temperature profiles in the Jovian atmosphere, and the 6.3 micron band for determining the water vapor mixing ratio in the earth's atmosphere.
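Chahine-type relaxation admits a compact sketch on a discretized linear kernel problem: each channel corrects the unknown profile at the level where its weighting function peaks, by the ratio of observed to computed signal. The Gaussian weighting functions below are synthetic stand-ins of ours, not the CH4 or H2O band kernels of the abstract:

```python
import numpy as np

def chahine_relax(K, g_obs, iters=200):
    """Chahine-type relaxation for g = K f with positive kernels: channel i
    corrects f at the level where its weighting function peaks, multiplying
    by the ratio of observed to computed signal."""
    f = np.ones(K.shape[1])
    peak = K.argmax(axis=1)              # level probed by each channel
    for _ in range(iters):
        ratio = g_obs / (K @ f)
        for i, j in enumerate(peak):
            f[j] *= ratio[i]
    return f

# Synthetic weighting functions peaking at successive levels
levels = np.arange(12)
K = np.exp(-((levels[:, None] - levels[None, :]) / 1.0) ** 2)
f_true = 1.0 + 0.5 * np.sin(levels / 2.0)
f_est = chahine_relax(K, K @ f_true)     # recovers f_true
```

The multiplicative update needs no a priori profile (the iteration starts from a flat guess), which mirrors the abstract's point that the generalized relaxation works without prior information about the expected solution.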
An Efficient Inverse Aerodynamic Design Method For Subsonic Flows
NASA Technical Reports Server (NTRS)
Milholen, William E., II
2000-01-01
Computational Fluid Dynamics based design methods are maturing to the point that they are beginning to be used in the aircraft design process. Many design methods, however, have demonstrated deficiencies in the leading edge region of airfoil sections. The objective of the present research is to develop an efficient inverse design method which is valid in the leading edge region. The new design method is a streamline curvature method, and a new technique is presented for modeling the variation of the streamline curvature normal to the surface. The new design method allows the surface coordinates to move normal to the surface, and has been incorporated into the Constrained Direct Iterative Surface Curvature (CDISC) design method. The accuracy and efficiency of the design method are demonstrated using both two-dimensional and three-dimensional design cases.
NASA Astrophysics Data System (ADS)
Gladwin Pradeep, R.; Chandrasekar, V. K.; Mohanasubha, R.; Senthilvelan, M.; Lakshmanan, M.
2016-07-01
We identify contact transformations which linearize the given equations in the Riccati and Abel chains of nonlinear scalar and coupled ordinary differential equations to the same order. The identified contact transformations are not of Cole-Hopf type and are new to the literature. The linearization of the Abel chain of equations is also demonstrated explicitly for the first time. The contact transformations can be utilized to derive dynamical symmetries of the associated nonlinear ODEs. The wider applicability of identifying this type of contact transformation, and of the method of deriving dynamical symmetries by using them, is also illustrated through two-dimensional generalizations of the Riccati and Abel chains.
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method based on a hybrid evolutionary optimization algorithm (HEOA) is presented to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to a unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem. PMID:27505357
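The evolution-strategy ingredient of such a hybrid optimizer can be sketched on a toy objective; the target vector, population sizes and step-size decay below are assumptions for the sketch, not the paper's light-scattering misfit.

```python
import numpy as np

# Minimal (mu + lambda) evolution strategy: each generation, every parent
# spawns mutated children, and the mu best of parents plus children survive.
rng = np.random.default_rng(1)
target = np.array([2.0, -1.0, 0.5])        # stand-in for the true parameters

def objective(x):
    return float(np.sum((x - target) ** 2))

mu, lam, sigma = 5, 20, 0.5                # parents, offspring, mutation step
pop = [rng.standard_normal(3) for _ in range(mu)]
for gen in range(200):
    children = [p + sigma * rng.standard_normal(3)
                for p in pop for _ in range(lam // mu)]
    pool = pop + children                  # (mu + lambda) selection pool
    pool.sort(key=objective)
    pop = pool[:mu]                        # keep the mu best (elitist)
    sigma *= 0.99                          # simple deterministic step decay
best = pop[0]
print(objective(best))
```

In the HEOA of the abstract, this global search is combined with locally weighted linear regression for local refinement; the sketch shows only the evolutionary part.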
The quantum inverse scattering method with anyonic grading
NASA Astrophysics Data System (ADS)
Batchelor, M. T.; Foerster, A.; Guan, X.-W.; Links, J.; Zhou, H.-Q.
2008-11-01
We formulate the quantum inverse scattering method for the case of anyonic grading. This provides a general framework for constructing integrable models describing interacting hard-core anyons. Through this method we reconstruct the known integrable model of hard-core anyons associated with the XXX model, and as a new application we construct the anyonic t-J model. The energy spectrum for each model is derived by means of a generalization of the algebraic Bethe ansatz. The grading parameters implementing the anyonic signature give rise to sector-dependent phase factors in the Bethe ansatz equations.
Application of the hybrid method to inverse heat conduction problems
NASA Astrophysics Data System (ADS)
Chen, Han-Taw; Chang, Shiuh-Ming
1990-04-01
The hybrid method, which combines the Laplace transform and the finite element method (FEM), is considerably powerful for solving one-dimensional linear heat conduction problems. In the present method, the time-dependent terms are removed from the problem using the Laplace transform, and the FEM is then applied to the space domain. The transformed temperature is inverted numerically to obtain the result in the physical domain. The estimation of the surface heat flux or temperature from transient temperatures measured inside the solid agrees well with the analytical solution of the direct problem, without requiring Beck's sensitivity analysis and a least-squares criterion. Because no time-stepping is involved, the present method can calculate the surface conditions of an inverse problem directly, without step-by-step computation in the time domain until the specified time is reached.
Determination of transient fluid temperature using the inverse method
NASA Astrophysics Data System (ADS)
Jaremkiewicz, Magdalena
2014-03-01
This paper proposes an inverse method to obtain accurate measurements of the transient temperature of a fluid. A method for a unit step and a linear rise of temperature is presented. For this purpose, the thermometer housing is modelled as a solid cylindrical element (with no inner hole), divided into four control volumes. Using the control volume method, a heat balance equation can be written for the node of each control volume. Thus, for a known temperature in the middle of the cylindrical element, the distribution of temperature in three nodes and the heat flux at the outer surface were obtained. For a known value of the heat transfer coefficient, the temperature of the fluid can be calculated using the boundary condition. Additionally, results of experimental research are presented. The research was carried out during the start-up of an experimental installation, which comprises a steam generator unit, an installation for boiler feed water treatment, a tray-type deaerator, a blow-down flash vessel for heat recovery, a steam pressure reduction station, a boiler control system and a steam header made of martensitic high-alloy P91 steel. Based on temperature measurements made in the steam header, accurate measurements of the transient temperature of the steam were obtained using the inverse method. The results of the calculations are compared with the real temperature of the steam, which can be determined for a known pressure and enthalpy.
Using Inverse Problem Methods with Surveillance Data in Pneumococcal Vaccination
Sutton, Karyn L.; Banks, H. T.; Castillo-Chavez, Carlos
2010-01-01
The design and evaluation of epidemiological control strategies is central to public health policy. While inverse problem methods are routinely used in many applications, this remains an area in which their use is relatively rare, although their potential impact is great. We describe methods particularly relevant to epidemiological modeling at the population level. These methods are then applied to the study of pneumococcal vaccination strategies as a relevant example which poses many challenges common to other infectious diseases. We demonstrate that relevant yet typically unknown parameters may be estimated, and show that a calibrated model may be used to assess implemented vaccine policies through the estimation of parameters, if vaccine history is recorded along with infection and colonization information. Finally, we show how one might determine an appropriate level of refinement or aggregation in the age-structured model given age-stratified observations. These results illustrate ways in which the collection and analysis of surveillance data can be improved using inverse problem methods. PMID:20209093
NASA Astrophysics Data System (ADS)
Hohage, Thorsten
1997-10-01
Convergence and logarithmic convergence rates of the iteratively regularized Gauss-Newton method in a Hilbert space setting are proven, provided a logarithmic source condition is satisfied. This method is applied to an inverse potential and an inverse scattering problem, and the source condition is interpreted as a smoothness condition in terms of Sobolev spaces for the case where the domain is a circle. Numerical experiments yield convergence and convergence rates of the form expected by our general convergence theorem.
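The iteratively regularized Gauss-Newton update can be illustrated on a small nonlinear least-squares problem. The forward map below is an arbitrary toy example, not the inverse potential or scattering problems of the paper; the update rule is the standard IRGN step m_{k+1} = m_k + (J^T J + a_k I)^{-1} [J^T (y - F(m_k)) + a_k (m_0 - m_k)] with a geometrically decreasing regularization parameter a_k.

```python
import numpy as np

# Toy iteratively regularized Gauss-Newton (IRGN) iteration.
def F(m):  # illustrative nonlinear forward map R^2 -> R^2
    return np.array([m[0] ** 2 + m[1], np.sin(m[0]) + m[1] ** 2])

def J(m):  # its Jacobian
    return np.array([[2 * m[0], 1.0],
                     [np.cos(m[0]), 2 * m[1]]])

m_true = np.array([1.0, 0.5])
y = F(m_true)                        # noise-free data
m0 = np.array([0.5, 0.2])            # initial guess, also used as the prior
m = m0.copy()
a = 1.0                              # regularization parameter a_k
for k in range(30):
    Jk = J(m)
    rhs = Jk.T @ (y - F(m)) + a * (m0 - m)
    m = m + np.linalg.solve(Jk.T @ Jk + a * np.eye(2), rhs)
    a *= 0.5                         # a_k = 2**(-k), decreasing geometrically

print(np.linalg.norm(m - m_true))    # small: iterates approach the solution
```

As a_k shrinks, the step tends to a pure Gauss-Newton step; with noisy data one would instead stop the iteration early, which is where the source condition and the logarithmic rates of the paper come in.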
NASA Astrophysics Data System (ADS)
Rezaie, Mohammad; Moradzadeh, Ali; Kalate, Ali Nejati; Aghajani, Hamid
2016-09-01
Inversion of gravity data is one of the important steps in the interpretation of practical data. One of the most interesting geological frameworks for gravity data inversion is the detection of sharp boundaries between an orebody and the host rocks. Focusing inversion is able to reconstruct a sharp image of the geological target, and the technique can be efficiently applied for the quantitative interpretation of gravity data. In this study, a new reweighted regularized method for the 3D focusing inversion technique, based on the Lanczos bidiagonalization method, is developed. The inversion results for synthetic data show that the new method is faster than the common reweighted regularized conjugate gradient method at producing an acceptable solution to the focusing inverse problem. The newly developed inversion scheme is also applied to the inversion of gravity data collected over the San Nicolas Cu-Zn orebody in Zacatecas State, Mexico. The inversion results show a remarkable correlation with the true structure of the orebody as determined from drilling data.
Estimates of tropical bromoform emissions using an inversion method
NASA Astrophysics Data System (ADS)
Ashfold, M. J.; Harris, N. R. P.; Manning, A. J.; Robinson, A. D.; Warwick, N. J.; Pyle, J. A.
2014-01-01
Bromine plays an important role in ozone chemistry in both the troposphere and stratosphere. When measured by mass, bromoform (CHBr3) is thought to be the largest organic source of bromine to the atmosphere. While seaweed and phytoplankton are known to be dominant sources, the size and the geographical distribution of CHBr3 emissions remains uncertain. Particularly little is known about emissions from the Maritime Continent, which have usually been assumed to be large, and which appear to be especially likely to reach the stratosphere. In this study we aim to reduce this uncertainty by combining the first multi-annual set of CHBr3 measurements from this region, and an inversion process, to investigate systematically the distribution and magnitude of CHBr3 emissions. The novelty of our approach lies in the application of the inversion method to CHBr3. We find that local measurements of a short-lived gas like CHBr3 can be used to constrain emissions from only a relatively small, sub-regional domain. We then obtain detailed estimates of CHBr3 emissions within this area, which appear to be relatively insensitive to the assumptions inherent in the inversion process. We extrapolate this information to produce estimated emissions for the entire tropics (defined as 20° S-20° N) of 225 Gg CHBr3 yr-1. The ocean in the area we base our extrapolations upon is typically somewhat shallower, and more biologically productive, than the tropical average. Despite this, our tropical estimate is lower than most other recent studies, and suggests that CHBr3 emissions in the coastline-rich Maritime Continent may not be stronger than emissions in other parts of the tropics.
Inverse method for estimating shear stress in machining
NASA Astrophysics Data System (ADS)
Burns, T. J.; Mates, S. P.; Rhorer, R. L.; Whitenton, E. P.; Basak, D.
2016-01-01
An inverse method is presented for estimating shear stress in the work material in the region of chip-tool contact along the rake face of the tool during orthogonal machining. The method is motivated by a model of heat generation in the chip, which is based on a two-zone contact model for friction along the rake face, and an estimate of the steady-state flow of heat into the cutting tool. Given an experimentally determined discrete set of steady-state temperature measurements along the rake face of the tool, it is shown how to estimate the corresponding shear stress distribution on the rake face, even when no friction model is specified.
Moving lattice kinks and pulses: an inverse method.
Flach, S; Zolotaryuk, Y; Kladko, K
1999-05-01
We develop a general mapping from given kink- or pulse-shaped traveling-wave solutions, including their velocity, to the equations of motion on one-dimensional lattices which support these solutions. We apply this mapping (by definition an inverse method) to acoustic solitons in chains with nonlinear intersite interactions, nonlinear Klein-Gordon chains, reaction-diffusion equations, and discrete nonlinear Schrödinger systems. Potential functions can be found in a unique way provided the pulse shape is reflection symmetric and the pulse and kink shapes are at least C2 functions. For kinks we discuss the relation of our results to the problem of a Peierls-Nabarro potential and continuous symmetries. We then generalize our method to higher-dimensional lattices for reaction-diffusion systems. We find that also increasing the number of components readily allows for moving solutions.
Simple method for the synthesis of inverse patchy colloids
NASA Astrophysics Data System (ADS)
van Oostrum, P. D. J.; Hejazifar, M.; Niedermayer, C.; Reimhult, E.
2015-06-01
Inverse patchy colloids (IPCs) have recently been introduced as a conceptually simple model to study the phase behavior of heterogeneously charged units. This class of patchy particles is referred to as inverse to highlight that the patches repel each other, in contrast to the attractive interactions of conventional patches. IPCs demonstrate a complex interplay between attractions and repulsions that depends on their patch size and charge, their relative orientations, as well as on the charge of the substrate below; the resulting wide array of different types of aggregates that can be formed motivates their fabrication and use as a model system. We present a novel method, which does not rely on clean-room facilities and is easily scalable, to modify the surface of colloidal particles to create two polar regions with a charge opposite to that of the equatorial region. The patch size is characterized by electron microscopy, and the patches are fluorescently labeled so that confocal microscopy can be used to study their phase behavior. We show that the pH can be used to tune the charges of the IPCs, thus offering a tool to steer the self-assembly.
Estimates of tropical bromoform emissions using an inversion method
NASA Astrophysics Data System (ADS)
Ashfold, M. J.; Harris, N. R. P.; Manning, A. J.; Robinson, A. D.; Warwick, N. J.; Pyle, J. A.
2013-08-01
Bromine plays an important role in ozone chemistry in both the troposphere and stratosphere. When measured by mass, bromoform (CHBr3) is thought to be the largest organic source of bromine to the atmosphere. While seaweed and phytoplankton are known to be dominant sources, the size and the geographical distribution of CHBr3 emissions remains uncertain. Particularly little is known about emissions from the Maritime Continent, which have usually been assumed to be large, and which appear to be especially likely to reach the stratosphere. In this study we aim to use the first multi-annual set of CHBr3 measurements from this region, and an inversion method, to reduce this uncertainty. We find that local measurements of a short-lived gas like CHBr3 can only be used to constrain emissions from a relatively small, sub-regional domain. We then obtain detailed estimates of both the distribution and magnitude of CHBr3 emissions within this area. Our estimates appear to be relatively insensitive to the assumptions inherent in the inversion process. We extrapolate this information to produce estimated emissions for the entire tropics (defined as 20° S-20° N) of 225 Gg CHBr3 yr-1. This estimate is consistent with other recent studies, and suggests that CHBr3 emissions in the coastline-rich Maritime Continent may not be stronger than emissions in other parts of the tropics.
Optimized halftoning using dot diffusion and methods for inverse halftoning.
Mese, M; Vaidyanathan, P P
2000-01-01
Unlike the error diffusion method, the dot diffusion method for digital halftoning has the advantage of pixel-level parallelism. However, the image quality offered by error diffusion is still regarded as superior to most of the other known methods. We show how the dot diffusion method can be improved by optimization of the so-called class matrix. By taking the human visual characteristics into account we show that such optimization consistently results in images comparable to error diffusion, without sacrificing the pixel-level parallelism. Adaptive dot diffusion is also introduced and then a mathematical description of dot diffusion is derived. Furthermore, inverse halftoning of dot diffused images is discussed and two methods are proposed. The first one uses projection onto convex sets (POCS) and the second one uses wavelets. Of these methods, the wavelet method does not make use of the knowledge of the class matrix. Embedded multiresolution dot diffusion is also discussed, which is useful for rendering at different resolutions and transmitting images progressively.
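The class-matrix mechanism behind dot diffusion can be sketched as follows. The tiny 2x2 class matrix and the equal diffusion weights are simplifying assumptions for illustration (the paper optimizes a larger class matrix against a human visual model and uses distance-dependent weights); the key property shown is that error flows only to pixels with a higher class number, so all pixels of the same class can be binarized in parallel.

```python
import numpy as np

# Minimal dot-diffusion halftoning sketch with a hypothetical class matrix.
CLASS = np.array([[0, 2],
                  [3, 1]])              # tiled over the image

def dot_diffuse(img):
    h, w = img.shape
    out = np.zeros((h, w))
    work = img.astype(float).copy()
    cls = np.tile(CLASS, (h // 2 + 1, w // 2 + 1))[:h, :w]
    order = np.argsort(cls, axis=None, kind="stable")  # process by class
    for flat in order:
        i, j = divmod(flat, w)
        out[i, j] = 255.0 if work[i, j] >= 128 else 0.0
        err = work[i, j] - out[i, j]
        # neighbours that are still unprocessed (strictly higher class)
        nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)
                and 0 <= i + di < h and 0 <= j + dj < w
                and cls[i + di, j + dj] > cls[i, j]]
        for (a, b) in nbrs:             # equal weights, an assumption here
            work[a, b] += err / len(nbrs)
    return out

halftone = dot_diffuse(np.full((8, 8), 100.0))  # flat mid-grey test image
print(halftone.mean())  # binary pattern whose local averages encode the grey
```

Pixels of equal class have no data dependence on one another, which is exactly the pixel-level parallelism the abstract contrasts with serial error diffusion.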
MASS SUBSTRUCTURE IN ABELL 3128
McCleary, J.; Dell’Antonio, I.; Huwe, P.
2015-05-20
We perform a detailed two-dimensional weak gravitational lensing analysis of the nearby (z = 0.058) galaxy cluster Abell 3128 using deep ugrz imaging from the Dark Energy Camera (DECam). We have designed a pipeline to remove instrumental artifacts from DECam images and stack multiple dithered observations without inducing a spurious ellipticity signal. We develop a new technique to characterize the spatial variation of the point-spread function that enables us to circularize the field to better than 0.5% and thereby extract the intrinsic galaxy ellipticities. By fitting photometric redshifts to sources in the observation, we are able to select a sample of background galaxies for weak-lensing analysis free from low-redshift contaminants. Photometric redshifts are also used to select a high-redshift galaxy subsample with which we successfully isolate the signal from an interloping z = 0.44 cluster. We estimate the total mass of Abell 3128 by fitting the tangential ellipticity of background galaxies with the weak-lensing shear profile of a Navarro–Frenk–White (NFW) halo and also perform NFW fits to substructures detected in the 2D mass maps of the cluster. This study yields one of the highest resolution mass maps of a low-z cluster to date and is the first step in a larger effort to characterize the redshift evolution of mass substructures in clusters.
Gradient-based methods for full waveform inversion
NASA Astrophysics Data System (ADS)
Métivier, L.; Brossier, R.; Operto, S.; Virieux, J.
2012-12-01
The minimization of the distance between recorded and synthetic seismograms (the misfit function) for the reconstruction of subsurface velocity models leads to large-scale non-linear inverse problems. These problems are generally solved using gradient-based methods, such as the (preconditioned) steepest-descent method, the (preconditioned) non-linear conjugate gradient method, the Gauss-Newton approach and, more recently, the l-BFGS quasi-Newton method. Except for the Gauss-Newton approach, these methods only require the capability of computing (and storing) the gradient of the misfit function, efficiently performed through the adjoint-state method, leading to the resolution of one forward problem and one adjoint problem per source. However, the inverse Hessian operator can be considered to compensate for target illumination variations arising from the acquisition geometry and medium velocity variations. This operator acts as a filter in the model space when the velocity is updated. For example, the l-BFGS method estimates an approximation of the inverse Hessian from the gradients of previous iterations without significant extra computational cost. The Gauss-Newton approximation of the Hessian not only adds an extra computational cost but also neglects multi-scattering effects. Exact Newton methods take multi-scattering effects into account and may be more accurate than the l-BFGS approximation. For such an investigation, we introduce the second-order adjoint formulation for the efficient estimation of the product of the Hessian operator with any vector in the model space. Using this product, we may update the velocity model through the resolution of the linear system associated with the computation of the Newton descent direction, using a "matrix-free" iterative linear solver such as the conjugate gradient method. This implementation can be performed for Newton approaches (and also the Gauss-Newton approximation) and requires an additional state and adjoint
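The gradient-based misfit minimization common to these methods can be illustrated on a toy linear problem (a small random matrix standing in for the forward operator; this is not the authors' FWI code, where each matrix-vector product is replaced by forward and adjoint wave simulations):

```python
import numpy as np

# Steepest descent on the least-squares misfit phi(m) = 0.5 * ||G m - d||^2.
# The gradient G.T @ (G m - d) plays the role of the adjoint-state gradient.
rng = np.random.default_rng(0)
G = rng.standard_normal((30, 10))          # stand-in for the forward operator
m_true = rng.standard_normal(10)           # "true" model
d = G @ m_true                             # noise-free synthetic data

m = np.zeros(10)
step = 1.0 / np.linalg.norm(G.T @ G, 2)    # fixed step below 1/Lipschitz bound
misfits = []
for _ in range(500):
    r = G @ m - d                          # residual: synthetic minus observed
    misfits.append(0.5 * float(r @ r))
    m -= step * (G.T @ r)                  # steepest-descent model update

print(misfits[0], misfits[-1])             # final misfit far below the initial
```

Preconditioning, conjugate directions, or an l-BFGS inverse-Hessian estimate all modify only the update line; the forward/adjoint structure of the loop is unchanged.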
A comparison of lidar inversion methods for cirrus applications
NASA Technical Reports Server (NTRS)
Elouragini, Salem; Flamant, Pierre H.
1992-01-01
Several methods for inverting the lidar equation are suggested as means to derive the cirrus optical properties (backscatter coefficient beta, extinction coefficient alpha, and optical depth delta) at one wavelength. The lidar equation can be inverted in a linear or logarithmic form; either solution assumes a linear relationship beta = kappa·alpha, where kappa is the lidar ratio. A number of problems prevent us from calculating alpha (or beta) with good accuracy. Some of these are as follows: (1) the multiple-scattering effect (most authors neglect it); (2) an absolute calibration of the lidar system (difficult and sometimes not possible); (3) lack of accuracy in the lidar ratio kappa (taken as constant, but in fact it varies with range and cloud species); and (4) the determination of the boundary condition for the logarithmic solution, which depends on the signal-to-noise ratio (SNR) at cloud top. An inversion in linear form needs an absolute calibration of the system. In practice one uses molecular backscattering below the cloud to calibrate the system. This method is not always applicable because the turbidity of the lower atmosphere is variable. For the logarithmic solution, a reference extinction coefficient (alpha_f) at cloud top is required. Several methods to determine alpha_f have been suggested. We tested these methods at low SNR. This led us to propose two new methods, referenced as S1 and S2.
Inversion of lidar signals with the slope method.
Kunz, G J; de Leeuw, G
1993-06-20
In homogeneous atmospheres, backscatter and extinction coefficients are commonly determined by the inversion of lidar signals using the slope method, i.e., from a linear least-squares fit to the logarithm of the range-compensated lidar return. We investigate the accuracy of this method. A quantitative analysis is presented of the influence of white noise and atmospheric extinction on the accuracy of the slope method and on the maximum range of lidar systems. To meet this objective, we simulate lidar signals with extinction coefficients ranging from 10(-3) km(-1) to 10 km(-1) with different signal-to-noise ratios. It is shown that the backscatter coefficient can be determined using the slope method with an accuracy of better than ~10% if the extinction coefficient is smaller than 1 km(-1) and the signal-to-noise ratio is better than ~1000. The accuracy in the calculated extinction coefficient is only better than ~10% if the extinction is larger than 1 km(-1) and the signal-to-noise ratio is better than ~2000. If the atmospheric extinction coefficient is smaller than 0.1 km(-1), then it is not possible to invert the extinction from lidar measurements with an accuracy of 10% or better unless the signal-to-noise ratio is unrealistically high.
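The slope method itself is compact enough to sketch directly; the parameter values below are illustrative assumptions. For a homogeneous atmosphere, P(r) = C·beta·exp(-2·alpha·r)/r², so the range-compensated log signal S(r) = ln(P·r²) is linear in r with slope -2·alpha, and a least-squares line fit recovers the extinction.

```python
import numpy as np

# Slope-method inversion of a simulated homogeneous-atmosphere lidar return.
alpha = 0.5                                # extinction coefficient [1/km]
C_beta = 1.0e6                             # lumped system constant x backscatter
r = np.linspace(0.5, 5.0, 200)             # range gates [km]
P = C_beta * np.exp(-2.0 * alpha * r) / r**2   # noise-free lidar signal

S = np.log(P * r**2)                       # range-compensated log signal
slope, intercept = np.polyfit(r, S, 1)     # linear least-squares fit
alpha_est = -slope / 2.0
print(alpha_est)  # recovers 0.5 in this noise-free case
```

Adding noise to P degrades the fitted slope in exactly the way the paper quantifies: the smaller the extinction relative to the noise floor, the higher the SNR required for a 10% inversion.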
Methodes entropiques appliquees au probleme inverse en magnetoencephalographie [Entropic methods applied to the inverse problem in magnetoencephalography]
NASA Astrophysics Data System (ADS)
Lapalme, Ervig
2005-07-01
This thesis is devoted to biomagnetic source localization using magnetoencephalography. This problem is known to have an infinite number of solutions, so methods are required to take into account anatomical and functional information about the solution. The work presented in this thesis uses the maximum entropy on the mean method to constrain the solution. This method originates from statistical mechanics and information theory. The thesis is divided into two main parts containing three chapters each. The first part reviews the magnetoencephalographic inverse problem: the theory needed to understand its context and the hypotheses for simplifying the problem. In the last chapter of this first part, the maximum entropy on the mean method is presented: its origins are explained, as well as how it is applied to our problem. The second part is the original work of this thesis, presenting three articles: one already published and two others submitted for publication. In the first article, a biomagnetic source model is developed and applied in a theoretical context, demonstrating the efficiency of the method. In the second article, we go one step further towards a realistic modelling of the cerebral activation. The main priors are estimated using the magnetoencephalographic data. This method proved to be very efficient in realistic simulations. In the third article, the previous method is extended to deal with time signals, thus exploiting the excellent time resolution offered by magnetoencephalography. Compared with our previous work, the temporal method is applied to real magnetoencephalographic data coming from a somatotopy experiment, and the results agree with previous physiological knowledge about this kind of cognitive process.
Comparison of optimal design methods in inverse problems
NASA Astrophysics Data System (ADS)
Banks, H. T.; Holm, K.; Kappel, F.
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
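A Fisher-information-based comparison of sampling meshes can be sketched for the Verhulst-Pearl logistic model x(t) = K·x0·e^{rt}/(K + x0·(e^{rt} - 1)). The parameter values, noise level, and the two candidate meshes below are illustrative assumptions, not those of the paper; sensitivities are approximated by central finite differences, and the D-optimal criterion prefers the mesh with the larger det(F).

```python
import numpy as np

# Fisher information matrix F = S.T @ S / sigma**2 for the logistic model,
# with S the sensitivity matrix (one column per parameter, here r and K).
def logistic(t, r, K, x0=1.0):
    e = np.exp(r * t)
    return K * x0 * e / (K + x0 * (e - 1.0))

def fisher(times, r=1.0, K=10.0, sigma=0.1, h=1e-6):
    S = np.column_stack([
        (logistic(times, r + h, K) - logistic(times, r - h, K)) / (2 * h),
        (logistic(times, r, K + h) - logistic(times, r, K - h)) / (2 * h),
    ])
    return S.T @ S / sigma**2

spread = np.linspace(0.5, 8.0, 10)     # mesh spanning growth and plateau
clustered = np.linspace(0.5, 1.5, 10)  # mesh confined to early growth
d_spread = np.linalg.det(fisher(spread))
d_clustered = np.linalg.det(fisher(clustered))
print(d_spread > d_clustered)  # the spread mesh is more informative here
```

The growth phase mostly informs r while the plateau mostly informs K, so a mesh covering both keeps the two sensitivity columns well separated and det(F) large; SE-, D- and E-optimal criteria differ only in which scalar functional of F (or its inverse) they optimize.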
Research on inverse methods and optimization in Italy
NASA Technical Reports Server (NTRS)
Larocca, Francesco
1991-01-01
The research activities in Italy on inverse design and optimization are reviewed. The review focuses on aerodynamic aspects of turbomachinery and wing section design. Inverse design of blade rows and ducts of turbomachinery in the subsonic and transonic regimes is illustrated by work at the Politecnico di Torino and in the turbomachinery industry (FIAT AVIO).
NASA Astrophysics Data System (ADS)
Zhang, Chengjiao; Li, Xiaojie; Yang, Chenchen
2016-07-01
This paper introduces a modified method of characteristics and its application to forward and inversion simulations of underwater explosion. Compared with the standard method of characteristics, which is appropriate for homentropic flow problems, the modified method can also be used to treat isentropic flow problems such as underwater explosion. Underwater explosions of spherical TNT and Composition B explosives are simulated using the modified method. Peak pressures and flow-field pressures are obtained, and they are consistent with those from empirical formulas. The comparison demonstrates that the modified method is feasible and reliable for underwater explosion simulation. Based on the modified method, inverse difference schemes and an inverse method are introduced. Combined with the modified method, the inverse schemes can be used to treat gas-water interface inversion of underwater explosion. Inversion simulations of underwater explosions of the explosives are performed in water, and the equation of state (EOS) of the detonation product is not needed. The peak pressures from the forward simulations are provided as boundary conditions in the inversion simulations. The inversion interfaces obtained are mostly in good agreement with those from the forward simulations in the near field. The comparison indicates that the inverse method and the inverse difference schemes are reliable and reasonable for interface inversion simulation.
Asteroid spin and shape modelling using two lightcurve inversion methods
NASA Astrophysics Data System (ADS)
Marciniak, Anna; Bartczak, Przemyslaw; Konstanciak, Izabella; Dudzinski, Grzegorz; Mueller, Thomas G.; Duffard, Rene
2016-10-01
We are conducting an observing campaign to counteract strong selection effects in photometric studies of asteroids. Our targets are long-period (P>12 hours) and low-amplitude (a_max<0.25 mag) asteroids, which, although numerous, have poor lightcurve datasets (Marciniak et al. 2015, PSS 118, 256). As a result, such asteroids are very poorly studied in terms of their spins and shapes. Our campaign targets a sample of around 100 bright (H<11 mag) main belt asteroids sharing both of these features, resulting in a few tens of new composite lightcurves each year. The data gathered so far have allowed us to construct detailed spin and shape models for about ten targets. In this study we perform spin and shape modelling using two lightcurve inversion methods: convex inversion (Kaasalainen et al. 2001, Icarus, 153, 37) and the nonconvex SAGE modelling algorithm (Shaping Asteroids with Genetic Evolution; Bartczak et al. 2014, MNRAS, 443, 1802). These two methods are independent of each other and are based on different assumptions about the shape. Thus, the results obtained from the same datasets provide a cross-check of both the methods and the resulting spin and shape models. The results for the spin solutions are highly consistent, and the shape models are similar, though the ones from the SAGE algorithm provide more detail of the surface features. Nonconvex shapes produced by SAGE have been compared with direct images from spacecraft, and the first results for targets like Eros or Lutetia (Bartczak et al. 2014, ACM conf. 29B) show a high level of agreement. Another way of validation is comparison of the shape models with asteroid shape contours obtained using different techniques (such as stellar occultation timings or adaptive optics imaging), or against data in the thermal infrared range gathered by ground- and space-based observatories. The thermal data can provide size and albedo assignments, but can also help to resolve spin-pole ambiguities. In special cases, the
Frequency-domain elastic full-waveform multiscale inversion method based on dual-level parallelism
NASA Astrophysics Data System (ADS)
Li, Yuan-Yuan; Li, Zhen-Chun; Zhang, Kai; Zhang, Xuan
2015-12-01
The complexity of an elastic wavefield increases the nonlinearity of inversion. To some extent, multiscale inversion decreases this nonlinearity and prevents the inversion from falling into local extremes. A multiscale strategy based on the simultaneous use of frequency groups and a layer-stripping method based on damped wavefields improves the stability of inversion. A dual-level parallel algorithm is then used to decrease the computational cost and improve practicability. The seismic wave modeling of a single frequency and the inversion within a frequency group are computed in parallel by multiple nodes based on a multifrontal massively parallel sparse direct solver and MPI. Numerical tests using an overthrust model show that the proposed inversion algorithm can effectively improve the stability and accuracy of inversion by selecting the appropriate inversion frequency and damping factor in low-frequency seismic data.
Noncommutative Inverse Scattering Method for the Kontsevich System
NASA Astrophysics Data System (ADS)
Arthamonov, Semeon
2015-09-01
We formulate an analog of the Inverse Scattering Method for integrable systems on noncommutative associative algebras. In particular, we define Hamilton flows, Casimir elements, and a noncommutative analog of the Lax matrix. The noncommutative Lax element generates an infinite family of commuting Hamilton flows on an associative algebra. The proposed approach to integrable systems on associative algebras satisfies a certain universal property; in particular, it incorporates both classical and quantum integrable systems and provides a basis for further generalization. We motivate our definition by an explicit construction of a noncommutative analog of the Lax matrix for a system of differential equations on an associative algebra recently proposed by Kontsevich. First, we present these equations in Hamilton form by defining a bracket of Loday type on the group algebra of the free group with two generators. To make the definition more constructive, we utilize (with certain generalizations) the Van den Bergh approach to Loday brackets via double Poisson brackets. We show that there exists an infinite family of commuting flows generated by the noncommutative Lax element.
Nonlinear inversion of pre-stack seismic data using variable metric method
NASA Astrophysics Data System (ADS)
Zhang, Fanchang; Dai, Ronghuo
2016-06-01
At present, the routine method to perform AVA (Amplitude Variation with incident Angle) inversion is based on the assumption that the ratio γ of S-wave velocity to P-wave velocity is a constant. However, this simplified assumption does not always hold, and a nonlinear inversion method is then necessary. Based on Bayesian theory, the objective function for nonlinear AVA inversion is established, with γ treated as an unknown model parameter. Then, a variable metric method with a strategy of periodically varying the starting point is used to solve the nonlinear AVA inverse problem. The proposed method keeps the inverted reservoir parameters close to the actual solution and has been applied to both synthetic and real data. The inversion results suggest that the proposed method can solve the nonlinear inverse problem and obtain accurate solutions even without prior knowledge of γ.
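As a toy illustration of the variable-metric idea with varied starting points, the sketch below minimizes a Bayesian-style objective with `scipy.optimize.minimize` (BFGS is a standard variable metric method). The three-parameter forward model, the prior weight, and all numbers are invented for illustration; they are not the paper's equations.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical three-parameter AVA-style forward model; the last parameter plays
# the role of the unknown Vs/Vp ratio gamma. Purely illustrative, not the paper's model.
angles = np.deg2rad(np.linspace(5.0, 40.0, 20))

def forward(m):
    r0, g, gamma = m
    s = np.sin(angles) ** 2
    return r0 + g * s - 2.0 * gamma * s * (r0 + g)

m_true = np.array([0.1, -0.2, 0.5])
d_obs = forward(m_true)

# Bayesian-style objective: data misfit plus a weak Gaussian prior on the model
def objective(m):
    r = forward(m) - d_obs
    return r @ r + 1e-4 * np.sum(m ** 2)

# Variable metric (quasi-Newton BFGS) minimization; keeping the best of several
# runs mimics the periodically varied starting-point strategy.
starts = (np.zeros(3), np.array([0.2, 0.0, 0.3]), np.array([-0.1, 0.1, 0.8]))
best = min((minimize(objective, x0, method="BFGS") for x0 in starts),
           key=lambda res: res.fun)
```

Restarting from several points is a cheap guard against the local minima that make this problem nonlinear once γ is unknown.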
Computational methods for inverse problems in geophysics: inversion of travel time observations
Pereyra, V.; Keller, H.B.; Lee, W.H.K.
1980-01-01
General ways of solving various inverse problems are studied for given travel time observations between sources and receivers. These problems are separated into three components: (a) the representation of the unknown quantities appearing in the model; (b) the nonlinear least-squares problem; (c) the direct, two-point ray-tracing problem used to compute travel time once the model parameters are given. Novel software is described for (b) and (c), and some ideas are given on (a). Numerical results obtained with artificial data and an implementation of the algorithm are also presented.
Kinugawa, Tohru
2014-02-15
This paper presents a simple but nontrivial generalization of Abel's mechanical problem, based on the extended isochronicity condition and the superposition principle. There are two primary aims. The first is to reveal the linear relation between the transit time T and the travel length X hidden behind the isochronicity problem, which is usually discussed in terms of the nonlinear equation of motion d²X/dt² + dU/dX = 0, with U(X) being an unknown potential. Second, the isochronicity condition is extended for a possible Abel-transform approach to designing the isochronous trajectories of charged particles in spectrometers and/or accelerators for time-resolving experiments. Our approach is based on the integral formula for the oscillatory motion by Landau and Lifshitz [Mechanics (Pergamon, Oxford, 1976), pp. 27-29]. The same formula is used to treat the non-periodic motion that is driven by U(X). Specifically, this unknown potential is determined by the (linear) Abel transform X(U) ∝ A[T(E)], where X(U) is the inverse function of U(X), A = (1/√π) ∫₀^E dU/√(E−U) is the so-called Abel operator, and T(E) is the prescribed transit time for a particle with energy E to spend in the region of interest. Based on this Abel-transform approach, we have introduced the extended isochronicity condition: typically, τ = T_A(E) + T_N(E), where τ is a constant period, T_A(E) is the transit time in the Abel-type [A-type] region spanning X > 0, and T_N(E) is that in the non-Abel-type [N-type] region covering X < 0. As for the A-type region in X > 0, the unknown inverse function X_A(U) is determined from T_A(E) via the Abel-transform relation X_A(U) ∝ A[T_A(E)]. In contrast, the N-type region in X < 0 does not ensure this linear relation: the region is covered with a predetermined potential U_N(X) of some arbitrary choice, not necessarily obeying the Abel-transform relation. In discussing
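A minimal numerical sketch of the Abel operator quoted above; the grid size and the trigonometric substitution are our choices, not the paper's. The check uses the classical isochronous case: a constant transit time yields X(U) ∝ √U, i.e. the harmonic potential U ∝ X².

```python
import numpy as np

def abel_operator(T, E, n=4000):
    """Evaluate A[T](E) = (1/sqrt(pi)) * integral_0^E T(U) / sqrt(E - U) dU.
    The substitution U = E*sin(theta)**2 removes the endpoint singularity:
    dU / sqrt(E - U) = 2*sqrt(E)*sin(theta) dtheta."""
    theta = np.linspace(0.0, np.pi / 2, n)
    integrand = T(E * np.sin(theta) ** 2) * np.sin(theta)
    trapz = np.sum((integrand[1:] + integrand[:-1]) / 2) * (theta[1] - theta[0])
    return (2.0 * np.sqrt(E) / np.sqrt(np.pi)) * trapz

# Isochronous check: constant transit time T(E) = tau gives A[T](E) = 2*tau*sqrt(E/pi),
# so X(U) ~ sqrt(U), which inverts to the harmonic potential U ~ X^2.
tau, E = 1.0, 4.0
approx = abel_operator(lambda U: tau * np.ones_like(U), E)
exact = 2.0 * tau * np.sqrt(E / np.pi)
```

The substitution is the standard trick for weakly singular Abel kernels; any quadrature applied naively at U = E would diverge.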
New Y-function based MOSFET parameter extraction method from weak to strong inversion range
NASA Astrophysics Data System (ADS)
Henry, J. B.; Rafhay, Q.; Cros, A.; Ghibaudo, G.
2016-09-01
A new Y-function based MOSFET parameter extraction method is proposed. This method relies on explicit expressions of the inversion charge and drain current versus the Yc(=Qi/√Cgc) function and Y(=Id/√gm) function, respectively, applicable from the weak to the strong inversion range. It enables robust MOSFET parameter extraction even at low gate voltage overdrive, whereas conventional extraction techniques relying on the strong-inversion approximation fail.
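For orientation, the classical strong-inversion Y-function extraction (not the new Yc-based formulation of the paper) can be sketched with an idealized linear-region MOSFET model; all device numbers here are invented.

```python
import numpy as np

# Hypothetical device parameters (illustrative, not from the paper)
beta, vth, vd = 2e-3, 0.4, 0.05          # gain factor (A/V^2), threshold (V), drain bias (V)

vg = np.linspace(0.6, 1.2, 61)           # strong-inversion gate voltages (V)
i_d = beta * (vg - vth) * vd             # ideal linear-region drain current
gm = np.gradient(i_d, vg)                # transconductance dId/dVg

# Y = Id/sqrt(gm) is linear in Vg and insensitive to series resistance:
# Y = sqrt(beta*Vd) * (Vg - Vth)
y = i_d / np.sqrt(gm)
slope, intercept = np.polyfit(vg, y, 1)

vth_extracted = -intercept / slope       # x-intercept gives the threshold voltage
beta_extracted = slope ** 2 / vd         # slope^2 / Vd recovers the gain factor
```

On real data the same linear fit is applied to the measured Id(Vg) and gm(Vg); the paper's contribution is extending this idea below strong inversion, where the simple linear Y(Vg) relation above no longer holds.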
Semilocal Convergence Theorem for the Inverse-Free Jarratt Method under New Hölder Conditions
Zhao, Yueqing; Lin, Rongfei; Šmarda, Zdenek; Khan, Yasir; Chen, Jinbiao; Wu, Qingbiao
2015-01-01
Under the new Hölder conditions, we consider the convergence analysis of the inverse-free Jarratt method in Banach space which is used to solve the nonlinear operator equation. We establish a new semilocal convergence theorem for the inverse-free Jarratt method and present an error estimate. Finally, three examples are provided to show the application of the theorem. PMID:25884027
NASA Astrophysics Data System (ADS)
Jiang, Mingfeng; Xia, Ling; Shou, Guofa; Tang, Min
2007-03-01
Computing epicardial potentials from body surface potentials constitutes one form of the ill-posed inverse problem of electrocardiography (ECG). To solve this ECG inverse problem, the Tikhonov regularization and truncated singular-value decomposition (TSVD) methods have been commonly used to overcome the ill-posedness by imposing constraints on the magnitudes or derivatives of the computed epicardial potentials. Such direct regularization methods, however, are impractical when the transfer matrix is large. The least-squares QR (LSQR) method, one of the iterative regularization methods based on Lanczos bidiagonalization and QR factorization, has been shown to be numerically more reliable in various circumstances than the other methods considered. The LSQR method, however, has to our knowledge not previously been introduced and investigated for the ECG inverse problem. In this paper, the regularization properties of the Krylov subspace iterative LSQR method for solving the ECG inverse problem were investigated. Due to the 'semi-convergence' property of the LSQR method, the L-curve method was used to determine the stopping iteration number. The performance of the LSQR method for solving the ECG inverse problem was also evaluated based on a realistic heart-torso model simulation protocol. The results show that the inverse solutions recovered by the LSQR method were more accurate than those recovered by the Tikhonov and TSVD methods. In addition, by combining LSQR with genetic algorithms (GA), the performance can be improved further. This suggests that their combination may provide a good scheme for solving the ECG inverse problem.
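The semi-convergence behaviour can be illustrated on a synthetic ill-conditioned system, with `scipy.sparse.linalg.lsqr` playing the role of the iterative regularizer; the matrix construction, noise level, and iteration count are illustrative choices, not the paper's heart-torso model.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)

# Synthetic ill-conditioned "transfer matrix" standing in for the torso-to-heart operator
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -8, n)               # rapidly decaying singular values
A = U @ np.diag(s) @ V.T

x_true = np.sin(np.linspace(0, 3 * np.pi, n))   # "epicardial potentials"
b = A @ x_true + 1e-4 * rng.standard_normal(n)  # noisy "body-surface potentials"

# Early-stopped LSQR acts as a regularizer (semi-convergence): a few iterations
# capture the well-determined components before the noise is amplified.
x_reg = lsqr(A, b, iter_lim=10)[0]
x_naive = np.linalg.solve(A, b)                 # unregularized solve: noise blows up

err_reg = np.linalg.norm(x_reg - x_true)
err_naive = np.linalg.norm(x_naive - x_true)
```

In practice the stopping index would be picked from the L-curve, as the abstract describes, rather than fixed a priori.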
Wave-Propagation Modeling and Inversion Using Frequency-Domain Integral Equation Methods
NASA Astrophysics Data System (ADS)
Strickland, Christopher E.
Full waveform inverse methods describe the full physics of wave propagation and can potentially overcome the limitations of ray theoretic methods. This work explores the use of integral equation based methods for simulation and inversion and illustrates their potential for computationally demanding problems. A frequency-domain integral equation approach to simulate wave-propagation in heterogeneous media and solve the inverse wave-scattering problem will be presented for elastic, acoustic, and electromagnetic systems. The method will be illustrated for georadar (ground- or ice-penetrating radar) applications and compared to results obtained using ray theoretic methods. In order to tackle the non-linearity of the problem, the inversion incorporates a broad range of frequencies to stabilize the solution. As with most non-linear inversion methods, a starting model that reasonably approximates the true model is critical to convergence of the algorithm. To improve the starting model, a variable reference inversion technique is developed that allows the background reference medium to vary for each source-receiver data pair and is less restrictive than using a single reference medium for the entire dataset. The reference medium can be assumed homogeneous (although different for each data point) to provide a computationally efficient, single-step, frequency-domain inversion approach that incorporates finite frequency effects not captured by ray based methods. The inversion can then be iterated on to further refine the solution.
Rapid Inversion of Angular Deflection Data for Certain Axisymmetric Refractive Index Distributions
NASA Technical Reports Server (NTRS)
Rubinstein, R.; Greenberg, P. S.
1994-01-01
Certain functions useful for representing axisymmetric refractive-index distributions are shown to have exact solutions for the Abel transformation of the resulting angular deflection data. An advantage of this procedure over direct numerical Abel inversion is that least-squares curve fitting is a smoothing process, which reduces the noise sensitivity of the computation.
NASA Astrophysics Data System (ADS)
Jiang, Jun
This dissertation summarizes a procedure to design blades with finite thickness in three dimensions. In this inverse method, the prescribed quantities are the blade pressure loading shape, the inlet and outlet spanwise distributions of swirl, and the blade thickness distributions, and the primary calculated quantity is the blade geometry. The method is formulated in the fully inverse mode for the design of three-dimensional blades in rotational and compressible flows, whereby the blade shape is determined iteratively using the flow tangency condition along the blade surfaces. This technique is demonstrated here in the first instance for the design of two-dimensional cascaded and three-dimensional blades with finite thickness in inviscid and incompressible flows. In addition, the incoming flow is assumed irrotational, so that the only vorticity present in the flowfield is the blade bound and shed vorticity. Design calculations presented for two-dimensional cascaded blades include an inlet guide vane, an impulse turbine blade, and a compressor blade. A consistency check is carried out for these cascaded blade design calculations using a panel analysis method and the analytical solution for the Gostelow profile. Free-vortex design results are also shown for fully three-dimensional blades with finite thickness, such as an inlet guide vane, a rotor of an axial-flow pump, and a high-flow-coefficient pump inducer, with design parameters typically found in industrial applications. These three-dimensional inverse design results are verified using Adamczyk's inviscid code.
Comparative study of inversion methods of three-dimensional NMR and sensitivity to fluids
NASA Astrophysics Data System (ADS)
Tan, Maojin; Wang, Peng; Mao, Keyu
2014-04-01
Three-dimensional nuclear magnetic resonance (3D NMR) logging can simultaneously measure the transverse relaxation time (T2), the longitudinal relaxation time (T1), and the diffusion coefficient (D). These parameters can be used to distinguish fluids in porous reservoirs. For 3D NMR logging, the relaxation mechanism and the mathematical model, a Fredholm equation, are introduced, and the inversion methods including Singular Value Decomposition (SVD), Butler-Reeds-Dawson (BRD), and Global Inversion (GI) are studied in detail. In a simulation test, a multi-echo CPMG sequence activation is designed first, echo trains of ideal fluid models are synthesized, an inversion algorithm is then applied to these synthetic echo trains, and finally a T2-T1-D map is built. Furthermore, the SVD, BRD, and GI methods are each applied to the same fluid model, and their computing speed and inversion accuracy are compared and analyzed. When the optimal inversion method and matrix dimension are used, the inversion results are in good agreement with the assumed fluid model, which indicates that the inversion method of 3D NMR is applicable for fluid typing of oil and gas reservoirs. Additionally, forward modeling and inversion tests are performed on oil-water and gas-water models, and the sensitivity to the fluids in different magnetic field gradients is examined in detail. The effect of the magnetic gradient on fluid typing in 3D NMR logging is studied and the optimal magnetic gradient is chosen.
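As a rough sketch of one inversion ingredient (SVD with truncation, applied to a 1D multi-exponential CPMG kernel rather than the full T2-T1-D problem), the grids, noise level, and truncation threshold below are all invented:

```python
import numpy as np

# Discretized Fredholm kernel for a CPMG echo train: b(t_i) = sum_j exp(-t_i / T2_j) f_j
t = np.linspace(0.001, 1.0, 200)                  # echo times (s)
T2 = np.logspace(-3, 0, 40)                       # relaxation-time grid (s)
K = np.exp(-np.outer(t, 1.0 / T2))

# One broad T2 peak centred at 0.1 s (Gaussian shape on the log grid)
f_true = np.exp(-0.5 * ((np.log10(T2) + 1.0) / 0.2) ** 2)
b_noisy = K @ f_true + 0.001 * np.random.default_rng(1).standard_normal(t.size)

# Truncated SVD: keep only singular values above an assumed noise floor,
# discarding the directions in which the data carry no reliable information.
U, s, Vt = np.linalg.svd(K, full_matrices=False)
k = int(np.sum(s > 0.01))                         # truncation level
f_tsvd = Vt[:k].T @ ((U[:, :k].T @ b_noisy) / s[:k])
```

The rapid singular-value decay of exponential kernels is exactly why such inversions are ill-posed: only a handful of components are recoverable, and the BRD or GI schemes named in the abstract add constraints (e.g. non-negativity) that this bare TSVD sketch omits.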
NASA Astrophysics Data System (ADS)
Grigoriev, M.; Babich, L.
2015-09-01
The article presents the main noninvasive methods for examining the electrical activity of the heart, the theoretical basis of the solution of the electrocardiography inverse problem, the application of different methods of heart examination in clinical practice, and a summary of worldwide achievements in this field.
Nonlocal symmetries of Riccati and Abel chains and their similarity reductions
NASA Astrophysics Data System (ADS)
Bruzon, M. S.; Gandarias, M. L.; Senthilvelan, M.
2012-02-01
We study nonlocal symmetries and their similarity reductions of Riccati and Abel chains. Our results show that all the equations in Riccati chain share the same form of nonlocal symmetry. The similarity reduced Nth order ordinary differential equation (ODE), N = 2, 3, 4, …, in this chain yields (N - 1)th order ODE in the same chain. All the equations in the Abel chain also share the same form of nonlocal symmetry (which is different from the one that exist in Riccati chain) but the similarity reduced Nth order ODE, N = 2, 3, 4, …, in the Abel chain always ends at the (N - 1)th order ODE in the Riccati chain. We describe the method of finding general solution of all the equations that appear in these chains from the nonlocal symmetry.
A method for determining void arrangements in inverse opals.
Blanford, C F; Carter, C B; Stein, A
2004-12-01
The periodic arrangement of voids in ceramic materials templated by colloidal crystal arrays (inverse opals) has been analysed by transmission electron microscopy. Individual particles consisting of an approximately spherical array of at least 100 voids were tilted through 90 degrees along a single axis within the transmission electron microscope. The bright-field images of these particles at high-symmetry points, their diffractograms calculated by fast Fourier transforms, and the transmission electron microscope goniometer angles were compared with model face-centred cubic, body-centred cubic, hexagonal close-packed, and simple cubic lattices in real and reciprocal space. The spatial periodicities were calculated for two-dimensional projections. The systematic absences in these diffractograms differed from those found in diffraction patterns from three-dimensional objects. The experimental data matched only the model face-centred cubic lattice, so it was concluded that the packing of the voids (and, thus, the polymer spheres that composed the original colloidal crystals) was face-centred cubic. In face-centred cubic structures, the stacking-fault displacement vector is a/6<211> . No stacking faults were observed when viewing the inverse opal structure along the orthogonal <110>-type directions, eliminating the possibility of a random hexagonally close-packed structure for the particles observed. This technique complements synchrotron X-ray scattering work on colloidal crystals by allowing both real-space and reciprocal-space analysis to be carried out on a smaller cross-sectional area.
Fast 3D inversion of airborne gravity-gradiometry data using Lanczos bidiagonalization method
NASA Astrophysics Data System (ADS)
Meng, Zhaohai; Li, Fengting; Zhang, Dailei; Xu, Xuechun; Huang, Danian
2016-09-01
We developed a new fast inversion method to process and interpret airborne gravity gradiometry data, based on the Lanczos bidiagonalization algorithm. Here, we describe the application of this new 3D gravity gradiometry inversion method to recover a subsurface density distribution model from airborne gravity gradiometry anomalies. For this purpose, the survey area is divided into a large number of rectangular cells, each with a constant unknown density. It is well known that the solution of the large linear gravity gradiometry system is an ill-posed problem, and the smoothest inversion method is considerably time consuming. We demonstrate that Lanczos bidiagonalization is an appropriate algorithm for minimizing the Tikhonov cost function, resolving the large system of equations within a short time. Lanczos bidiagonalization produces a low-rank approximation of the very large gravity gradiometry forward-modeling matrices, which considerably reduces the running time of the inversion. We also use a weighted generalized cross-validation method to choose an appropriate Tikhonov parameter and improve the inversion results. The inversion incorporates a model norm that allows us to control the smoothness and depth of the solution; in addition, the model norm counteracts the natural decay of the kernels, which concentrate at shallow depths. The method is applied to noise-contaminated synthetic gravity gradiometry data to demonstrate its suitability for large 3D gravity gradiometry data inversion. Airborne gravity gradiometry data from the Vinton Salt Dome, USA, were considered as a case study. The validity of the new method on real data is discussed with reference to the Vinton Dome inversion result. The intermediate density values in the constructed model coincide well with previous results and geological information, demonstrating the validity of the gravity gradiometry inversion method.
Photometric Observations of the Binary Nuclei of Three Abell Planetary Nebulae
NASA Astrophysics Data System (ADS)
Afşar, M.; Ibanoǧlu, C.
2004-07-01
CCD photometric observations of the nuclei of three Abell planetary nebulae (Abell 63, Abell 46, and Abell 41) are presented. These nuclei are binary systems, which allow us to derive model-independent parameters. The results of the light-curve solution of UU Sge (the binary nucleus of Abell 63) are also discussed.
Method for the preparation of metal colloids in inverse micelles and product preferred by the method
Wilcoxon, Jess P.
1992-01-01
A method is provided for preparing catalytic elemental metal colloidal particles (e.g. gold, palladium, silver, rhodium, iridium, nickel, iron, platinum, molybdenum) or colloidal alloy particles (silver/iridium or platinum/gold). A homogeneous inverse micelle solution of a metal salt is first formed in a metal-salt solvent comprised of a surfactant (e.g. a nonionic or cationic surfactant) and an organic solvent. The size and number of inverse micelles are controlled by the proportions of the surfactant and the solvent. Then, the metal salt is reduced (by chemical reduction or by a pulsed or continuous-wave UV laser) to colloidal particles of elemental metal. After their formation, the colloidal metal particles can be stabilized by reaction with materials that permanently add stabilizing groups to their surface. The sizes of the colloidal elemental metal particles and their size distribution are determined by the size and number of the inverse micelles. A second salt can be added with further reduction to form the colloidal alloy particles. After the colloidal elemental metal particles are formed, the homogeneous solution separates into two phases, one rich in colloidal elemental metal particles and the other rich in surfactant. The colloidal elemental metal particles from one phase can be dried to form a powder useful as a catalyst. Surfactant can be recovered and recycled from the phase rich in surfactant.
Application of direct inverse analogy method (DIVA) and viscous design optimization techniques
NASA Technical Reports Server (NTRS)
Greff, E.; Forbrich, D.; Schwarten, H.
1991-01-01
A direct-inverse approach to the transonic design problem was presented in its initial state at the First International Conference on Inverse Design Concepts and Optimization in Engineering Sciences (ICIDES-1). Further applications of the direct inverse analogy (DIVA) method to the design of airfoils and incremental wing improvements, together with experimental verification, are reported here. First results of a new viscous design code, also of the residual-correction type with semi-inverse boundary-layer coupling, are compared with DIVA; this may enhance the accuracy of trailing-edge design for highly loaded airfoils. Finally, the capabilities of an optimization routine coupled with the two viscous full-potential solvers are investigated in comparison to the inverse method.
A boundary integral method for an inverse problem in thermal imaging
NASA Technical Reports Server (NTRS)
Bryan, Kurt
1992-01-01
An inverse problem in thermal imaging involving the recovery of a void in a material from its surface temperature response to external heating is examined. Uniqueness and continuous dependence results for the inverse problem are demonstrated, and a numerical method for its solution is developed. This method is based on an optimization approach, coupled with a boundary integral equation formulation of the forward heat conduction problem. Some convergence results for the method are proved, and several examples are presented using computationally generated data.
A fast and low-loss 3-D magnetotelluric inversion method with parallel structure
NASA Astrophysics Data System (ADS)
Zhang, K.; Zhang, L.
2013-12-01
While the 2D assumption is valid in some cases of interpretation, the approximation does not work in most cases, especially in areas with complex geo-electrical structure. A number of 3D magnetotelluric inversion methods have been proposed, including RRI, CG, QA, and NLCG. Each of these methods has its own advantages and disadvantages. However, as 3D datasets and mesh grids require much more computer memory and calculation time than 2D methods, the efficiency of the inversion scheme becomes a key concern of 3D inversions. We chose NLCG as the optimization method for inversion. A parameter matrix related to the current resistivity model and the data error is proposed to approximate the Hessian matrix, so that four forward calculations can be avoided in each iteration. In addition, the OpenMP parallel API is utilized to establish an efficient parallel inversion structure based on frequency to reduce computation time. Both synthetic and field data are used to test the efficiency of the inversion and the preconditioning method. The synthetic model consists of four square prisms residing in a halfspace; the total computation time of the inversion is 706 s (using one PC). Figure 1 shows the inversion result; the abnormal bodies can be distinguished clearly. Field data from the NIHE dataset in China are used to verify the reliability and efficiency of the 3D inversion method. The total computation time is about 25 minutes after 60 iterations on one PC. In total, four electrical layers can be matched to the four strata in the 3D AMT inversion model, and the faults can be seen clearly. In addition, we can obtain more information about the fault and the alteration interface from the constrained inversion result. Finally, the inversion method is very fast and low-loss, so it can be used on a modern PC (only one PC is needed) with few hardware constraints. Figure captions: (a) initial model; (b) inversion depth slices (1-4 km); (c) fitting error; (a) AMT 3D slice; (b) CSAMT 2D model; (c) TEM 1D model; (d) SIP 2D model; (e) AMT 3D constrained
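For intuition, a preconditioned nonlinear conjugate-gradient loop of the kind the abstract alludes to can be sketched on a toy regularized least-squares problem; the diagonal "parameter matrix" standing in for the Hessian, the test matrix, and all numbers are invented, and a real MT inversion would replace the matrix products with forward modeling runs.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy linearized problem: minimize phi(m) = ||G m - d||^2 + lam * ||m||^2
G = rng.standard_normal((60, 30))
d = G @ rng.standard_normal(30)
lam = 0.1

def grad(m):
    return 2.0 * (G.T @ (G @ m - d) + lam * m)

# Diagonal "parameter matrix" approximating the Hessian diag(2*(G^T G + lam I)),
# used as a preconditioner so no extra forward calculations are needed.
precond = 1.0 / (2.0 * (np.sum(G ** 2, axis=0) + lam))

# Preconditioned Polak-Ribiere nonlinear conjugate gradient with exact line
# search (exact here because the toy objective is quadratic).
m = np.zeros(30)
g = grad(m)
h = precond * g
p = -h
for _ in range(200):
    Gp = G @ p
    alpha = -(g @ p) / (2.0 * (Gp @ Gp + lam * (p @ p)))
    m = m + alpha * p
    g_new = grad(m)
    if np.linalg.norm(g_new) < 1e-8:
        break
    h_new = precond * g_new
    beta = max(0.0, ((g_new - g) @ h_new) / (g @ h))
    p = -h_new + beta * p
    g, h = g_new, h_new
```

In the frequency-parallel scheme the abstract describes, the gradient evaluations for different frequencies would be farmed out to OpenMP threads; the CG recurrence itself stays serial.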
[A hyperspectral subpixel target detection method based on inverse least squares method].
Li, Qing-Bo; Nie, Xin; Zhang, Guang-Jun
2009-01-01
In the present paper, an inverse least squares (ILS) method combined with Mahalanobis-distance outlier detection is discussed for detecting subpixel targets in hyperspectral images. Firstly, the inverse model relating the target spectrum to all the pixel spectra is established, with the accurate target spectrum obtained beforehand, and the SNV algorithm is employed to preprocess each original pixel spectrum separately. After this pretreatment, the regression coefficients of the ILS model are calculated with the partial least squares (PLS) algorithm. Each point in the regression-coefficient vector corresponds to a pixel in the image, and the Mahalanobis distance is calculated for each point. Because the Mahalanobis distance measures the extent to which samples deviate from the total population, points with Mahalanobis distance larger than 3σ are regarded as subpixel targets. In this algorithm, no prior information such as a representative background spectrum or a model of the background is required; only the target spectrum is needed. In addition, the detection result is insensitive to the complexity of the background. The method was applied to AVIRIS remote sensing data. For this simulation experiment, AVIRIS remote sensing data were freely downloaded from the official NASA website, the spectrum of a ground object in the AVIRIS hyperspectral image was picked as the target spectrum, and the subpixel target was simulated through a linear mixing method. The subpixel detection result of this method was compared with that of the orthogonal subspace projection (OSP) method. The results show that the performance of the ILS method is better than that of the traditional OSP method. The ROC (receiver operating characteristic) curve and SNR were calculated, indicating that the ILS method possesses higher detection accuracy and less computing time than the OSP algorithm. PMID:19385196
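A much-simplified sketch of the detection logic, with ordinary least squares standing in for PLS, no SNV step, and fully synthetic spectra; in one dimension the Mahalanobis distance reduces to a z-score.

```python
import numpy as np

rng = np.random.default_rng(3)
n_bands, n_pixels = 100, 30

# Synthetic scene: each column of X is one pixel spectrum (hypothetical data).
X = rng.standard_normal((n_bands, n_pixels))
target_pixel = 3
# The known target spectrum essentially coincides with pixel 3's spectrum.
t = X[:, target_pixel] + 0.001 * rng.standard_normal(n_bands)

# Inverse model: regress the known target spectrum on all pixel spectra;
# one regression coefficient per pixel (OLS stands in for PLS here).
w, *_ = np.linalg.lstsq(X, t, rcond=None)

# 1-D Mahalanobis distance (z-score) over the coefficient vector; pixels whose
# coefficient deviates by more than 3 sigma are flagged as containing the target.
z = np.abs(w - w.mean()) / w.std()
detected = np.flatnonzero(z > 3.0)
```

The appeal of the scheme is visible even in this caricature: no background model is estimated; the target pixel reveals itself purely as an outlier among the regression coefficients.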
LensPerfect Analysis of Abell 1689
NASA Astrophysics Data System (ADS)
Coe, Dan A.
2007-12-01
I present the first massmap to perfectly reproduce the position of every gravitationally-lensed multiply-imaged galaxy detected to date in ACS images of Abell 1689. This massmap was obtained using a powerful new technique made possible by a recent advance in the field of Mathematics. It is the highest resolution assumption-free Dark Matter massmap to date, with the resolution being limited only by the number of multiple images detected. We detect 8 new multiple image systems and identify multiple knots in individual galaxies to constrain a grand total of 168 knots within 135 multiple images of 42 galaxies. No assumptions are made about mass tracing light, and yet the brightest visible structures in A1689 are reproduced in our massmap, a few with intriguing positional offsets. Our massmap probes radii smaller than that resolvable in current Dark Matter simulations of galaxy clusters. And at these radii, we observe slight deviations from the NFW and Sersic profiles which describe simulated Dark Matter halos so well. While we have demonstrated that our method is able to recover a known input massmap (to limited resolution), further tests are necessary to determine the uncertainties of our mass profile and positions of massive subclumps. I compile the latest weak lensing data from ACS, Subaru, and CFHT, and attempt to fit a single profile, either NFW or Sersic, to both the observed weak and strong lensing. I confirm the finding of most previous authors, that no single profile fits extremely well to both simultaneously. Slight deviations are revealed, with the best fits slightly over-predicting the mass profile at both large and small radius. Our easy-to-use software, called LensPerfect, will be made available soon. This research was supported by the European Commission Marie Curie International Reintegration Grant 017288-BPZ and the PNAYA grant AYA2005-09413-C02.
Internal dynamics of Abell 2294: a massive, likely merging cluster
NASA Astrophysics Data System (ADS)
Girardi, M.; Boschin, W.; Barrena, R.
2010-07-01
Context. The mechanisms giving rise to diffuse radio emission in galaxy clusters, and in particular their connection with cluster mergers, are still debated. Aims: We seek to explore the internal dynamics of the cluster Abell 2294, which has been shown to host a radio halo. Methods: Our analysis is mainly based on redshift data for 88 galaxies acquired at the Telescopio Nazionale Galileo. We combine galaxy velocities and positions to select 78 cluster galaxies and analyze the cluster's internal dynamics. We also use both photometric data acquired at the Isaac Newton Telescope and X-ray data from the Chandra archive. Results: We re-estimate the redshift of the large, brightest cluster galaxy (BCG), obtaining <z> = 0.1690, which closely agrees with the mean cluster redshift. We estimate a quite large line-of-sight (LOS) velocity dispersion σ_V ~ 1400 km s-1 and X-ray temperature TX ~ 10 keV. Our optical and X-ray analyses detect substructure. Our results imply that the cluster is composed of two massive subclusters separated by a LOS rest-frame velocity difference Vrf ~ 2000 km s-1, very closely projected in the plane of the sky along the SE-NW direction. This observational picture, interpreted in terms of the analytical two-body model, suggests that Abell 2294 is a cluster merger elongated mainly in the LOS direction and captured during the bound outgoing phase, a few fractions of a Gyr after the core crossing. We find that Abell 2294 is a very massive cluster, with M = 2-4 × 10^15 h70^-1 M⊙ depending on the adopted model. In contrast to previous findings, we find no evidence of Hα emission in the spectrum of the BCG. Conclusions: The emerging picture of Abell 2294 is that of a massive, quite "normal" merging cluster, like many clusters hosting diffuse radio sources. However, perhaps because of its particular geometry, more data are needed to reach a definitive, more quantitative conclusion.
Resampling: An optimization method for inverse planning in robotic radiosurgery
Schweikard, Achim; Schlaefer, Alexander; Adler, John R. Jr.
2006-11-15
By design, the range of beam directions in conventional radiosurgery is constrained to an isocentric array. However, the recent introduction of robotic radiosurgery dramatically increases the flexibility of targeting, and as a consequence, beams need be neither coplanar nor isocentric. Such a nonisocentric design permits a large number of distinct beam directions to be used in a single treatment. These major technical differences provide an opportunity to improve upon the well-established principles for treatment planning used with GammaKnife or LINAC radiosurgery. With this objective in mind, our group has developed over the past decade an inverse planning tool for robotic radiosurgery. This system first computes a set of beam directions, and then, during an optimization step, weights each individual beam. Optimization begins with a feasibility query, the answer to which is derived through linear programming. This approach offers the advantage of completeness and avoids local optima. Final beam selection is based on heuristics. In this report we present and evaluate a new strategy for utilizing the advantages of linear programming to improve beam selection. Starting from an initial solution, a heuristically determined set of beams is added to the optimization problem, while beams with zero weight are removed. This process is repeated to sample a set of beams much larger than in typical optimizations. Experimental results indicate that the planning approach efficiently finds acceptable plans and that resampling can further improve its efficiency.
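The feasibility query described above can be posed as a linear program over beam weights. The sketch below uses an invented toy geometry (the dose matrices, the prescription of 1 to target voxels, and the organ-at-risk bound of 0.8 are illustrative assumptions, not the authors' clinical values):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy geometry: D[i, j] = dose to voxel i per unit weight of
# beam j. Values are invented for illustration only.
rng = np.random.default_rng(1)
n_target, n_oar, n_beams = 20, 10, 40
D_target = rng.uniform(0.5, 1.0, (n_target, n_beams))
D_oar = rng.uniform(0.0, 0.3, (n_oar, n_beams))

# Feasibility/optimization as an LP: minimise total beam weight subject to
#   D_target @ w >= 1 (prescription), D_oar @ w <= 0.8 (OAR bound), w >= 0.
c = np.ones(n_beams)
A_ub = np.vstack([-D_target, D_oar])   # -D_t w <= -1 encodes D_t w >= 1
b_ub = np.concatenate([-np.ones(n_target), 0.8 * np.ones(n_oar)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")

# beams with (numerically) zero weight would be removed before resampling
active_beams = np.flatnonzero(res.x > 1e-9)
```

An infeasible query returns `res.status != 0`, which is the "completeness" property the abstract refers to: the LP either certifies a feasible plan or proves none exists for the given beam set.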
NASA Astrophysics Data System (ADS)
Liu, B.; Li, S. C.; Nie, L. C.; Wang, J.; Li, X.; Zhang, Q. S.
2012-12-01
The traditional inversion method is the most commonly used procedure for three-dimensional (3D) resistivity inversion; it usually linearizes the problem and solves it by iterations. However, its accuracy often depends on the initial model, which can trap the inversion in local optima and even produce poor results. Non-linear methods are a feasible way to eliminate the dependence on the initial model. However, for large problems such as 3D resistivity inversion, with more than a thousand inversion parameters, the main challenges of non-linear methods are premature convergence and low search efficiency. To deal with these problems, we present an improved Genetic Algorithm (GA) method. In the improved GA method, a smoothness constraint and an inequality constraint are both applied to the objective function, which reduces the degree of non-uniqueness and ill-conditioning. Some measures from the literature are adopted to maintain the diversity and stability of the GA, e.g. real coding and adaptive adjustment of the crossover and mutation probabilities. A method for generating an approximately uniform initial population is then proposed, with which a uniformly distributed initial generation can be produced and the dependence on the initial model eliminated. Further, a mutation-direction control method is presented based on a joint algorithm in which the linearization method is embedded in the GA. The update vector produced by the linearization method is used as the mutation increment, maintaining a better search direction than the traditional GA with uncontrolled mutation. By this method, the mutation direction is optimized and the search efficiency is greatly improved. The performance of the improved GA is evaluated by comparison with traditional inversion results in a synthetic example and with drilling columnar sections in a practical example. The synthetic and practical examples illustrate that with the improved GA method we can eliminate
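The GA ingredients named above (real coding, adaptive crossover/mutation rates, elitism) can be sketched generically. The code below is a minimal real-coded GA on a toy objective; the rate constants are invented, and it does not implement the paper's joint GA-linearization or mutation-direction control:

```python
import numpy as np

def real_coded_ga(obj, bounds, pop_size=40, gens=100, seed=0):
    """Minimal real-coded GA with tournament selection, blend crossover,
    adaptive mutation probability and elitism. A generic sketch, not the
    paper's algorithm; rate constants are invented."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, lo.size))
    for _ in range(gens):
        fit = -np.array([obj(x) for x in pop])          # maximise -objective
        f_avg, f_best = fit.mean(), fit.max()
        # binary tournament selection
        idx = rng.integers(0, pop_size, (pop_size, 2))
        win = np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[win]
        # arithmetic (blend) crossover
        alpha = rng.uniform(0.0, 1.0, (pop_size // 2, 1))
        a, b = parents[::2], parents[1::2]
        children = np.vstack([alpha * a + (1 - alpha) * b,
                              alpha * b + (1 - alpha) * a])
        # adaptive mutation: better-than-average children mutate less
        cfit = -np.array([obj(x) for x in children])
        span = max(f_best - f_avg, 1e-12)
        p_mut = np.where(cfit >= f_avg,
                         0.05 + 0.25 * (f_best - cfit) / span, 0.3)
        p_mut = np.clip(p_mut, 0.05, 0.3)
        mask = rng.uniform(size=children.shape) < p_mut[:, None]
        children = np.where(mask, rng.uniform(lo, hi, children.shape), children)
        children[0] = pop[np.argmax(fit)]               # elitism
        pop = np.clip(children, lo, hi)
    vals = np.array([obj(x) for x in pop])
    return pop[np.argmin(vals)], float(vals.min())

# demo on a 2-D sphere function over [-5, 5]^2
best_m, best_val = real_coded_ga(lambda x: float(np.sum(x ** 2)),
                                 (np.full(2, -5.0), np.full(2, 5.0)))
```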
Application of Carbonate Reservoir using waveform inversion and reverse-time migration methods
NASA Astrophysics Data System (ADS)
Kim, W.; Kim, H.; Min, D.; Keehm, Y.
2011-12-01
Recent exploration targets for oil and gas resources are deeper and more complicated subsurface structures, and carbonate reservoirs have become one of the most attractive and challenging targets in seismic exploration. To increase the rate of success in oil and gas exploration, detailed subsurface structures must be delineated, which makes the migration method an increasingly important factor in seismic data processing. Seismic migration has a long history, and many migration techniques have been developed. Among them, reverse-time migration is promising because it can provide reliable images of complicated models, even in the presence of significant velocity contrasts. The reliability of seismic migration images depends on the subsurface velocity models, which can be extracted in several ways; these days, geophysicists try to obtain velocity models through seismic full waveform inversion. Since Lailly (1983) and Tarantola (1984) proposed that the adjoint state of the wave equations can be used in waveform inversion, the back-propagation techniques used in reverse-time migration have been applied to waveform inversion, which accelerated its development. In this study, we applied acoustic waveform inversion and reverse-time migration to carbonate reservoir models with various reservoir thicknesses to examine the feasibility of the methods in delineating such reservoirs. We first extracted subsurface material properties from acoustic waveform inversion, and then applied reverse-time migration using the inverted velocities as a background model. The waveform inversion in this study used the back-propagation technique, with the conjugate gradient method for optimization, and was performed using the frequency-selection strategy. Finally, waveform inversion results showed that carbonate reservoir models are clearly inverted by waveform inversion and migration images based on the
ERIC Educational Resources Information Center
Ngu, Bing Hiong; Phan, Huy Phuong
2016-01-01
We examined the use of balance and inverse methods in equation solving. The main difference between the balance and inverse methods lies in the operational line (e.g. +2 on both sides vs -2 becomes +2). Differential element interactivity favours the inverse method because the interaction between elements occurs on both sides of the equation for…
FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)
NASA Astrophysics Data System (ADS)
2014-10-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 and May 2013, (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) Workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the
FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems
NASA Astrophysics Data System (ADS)
Vourc'h, Eric; Rodet, Thomas
2015-11-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods
Efficiency of Pareto joint inversion of 2D geophysical data using global optimization methods
NASA Astrophysics Data System (ADS)
Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek
2016-04-01
Pareto joint inversion of two or more sets of data is a promising new tool in modern geophysical exploration. In the first stage of our investigation we created software enabling execution of forward solvers for two geophysical methods (2D magnetotelluric and gravity), as well as inversion with the possibility of constraining the solution with seismic data. In the MT forward solver, Helmholtz equations, the finite element method and Dirichlet boundary conditions were applied. The gravity forward solver was based on Talwani's algorithm. To limit the dimensionality of the solution space we describe the model as sets of polygons, using the Sharp Boundary Interface (SBI) approach. The main inversion engine was created using a Particle Swarm Optimization (PSO) algorithm adapted to handle two or more target functions and to prevent acceptance of solutions which are unrealistic or incompatible with the Pareto scheme. Each inversion run generates a single Pareto solution, which can be added to the Pareto front. The PSO inversion engine was parallelized using the OpenMP standard, which enables executing the code with a practically unlimited number of threads at once, significantly decreasing the computing time of the inversion process. Furthermore, computing efficiency increases with the number of PSO iterations. In this contribution we analyze the efficiency of the created software, taking into consideration the details of the chosen global optimization engine used as the main joint minimization engine. Additionally, we study the possible decrease in computational time from different methods of parallelization applied to both the forward solvers and the inversion algorithm. All tests were done for 2D magnetotelluric and gravity data based on real geological media. The obtained results show that even on relatively modest mid-range computational infrastructure the proposed inversion solution can be applied in practice and used for real-life problems of geophysical inversion and interpretation.
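The Pareto-acceptance rule such an engine relies on (accept a candidate only if no current front member dominates it, and drop any members the candidate dominates) can be sketched as follows; this is a generic sketch, not the authors' implementation:

```python
import numpy as np

def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (minimisation):
    f1 is no worse in every objective and strictly better in at least one."""
    f1, f2 = np.asarray(f1), np.asarray(f2)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def update_front(front, candidate):
    """Add a candidate objective vector to a Pareto front: reject it if any
    member dominates it; otherwise drop every member it dominates."""
    if any(dominates(f, candidate) for f in front):
        return front                      # dominated, not accepted
    return [f for f in front if not dominates(candidate, f)] + [candidate]

# demo with two objectives (e.g. MT misfit, gravity misfit)
front = []
for f in [(1.0, 3.0), (3.0, 1.0), (2.0, 2.0), (2.5, 0.5)]:
    front = update_front(front, f)
# (2.5, 0.5) dominates (3.0, 1.0), which is removed from the front
```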
Radiography of nonaxisymmetric objects: An onion-peeling inversion method
NASA Astrophysics Data System (ADS)
Schwierz-Iosefzon, T.; Notea, A.; Deutsch, M.
2002-09-01
An onion-peeling method for obtaining the linear attenuation coefficient distribution within a body from a single radiographic projection is presented. Unlike previous methods, which are applicable only to axi- or centrosymmetric objects, ours requires only mirror symmetry relative to the plane of the radiograph. An example of the use of the method is presented and discussed.
a method of gravity and seismic sequential inversion and its GPU implementation
NASA Astrophysics Data System (ADS)
Liu, G.; Meng, X.
2011-12-01
In this abstract, we introduce a gravity and seismic sequential inversion method to invert for density and velocity together. For the gravity inversion, we use an iterative method based on a correlation imaging algorithm; for the seismic inversion, we use full waveform inversion. The link between density and velocity is an empirical formula called the Gardner equation; for large volumes of data, we use the GPU to accelerate the computation. The gravity inversion method is iterative: first we calculate the correlation image of the observed gravity anomaly, whose values lie between -1 and +1, and multiply these values by a small density increment to form the initial density model. We compute a forward result with this initial model, calculate the correlation image of the misfit between the observed and forward data, multiply it by a small density increment, add it to the model, and repeat the procedure until we obtain the final inverted density model. The seismic inversion method is based on the linearization of the acoustic wave equation written in the frequency domain; with a suitable initial velocity model, we can obtain a good velocity result. The sequential inversion of gravity and seismic data requires a formula to convert between density and velocity; in our method, we use the Gardner equation. Driven by the insatiable market demand for real-time, high-definition 3D images, the programmable NVIDIA Graphics Processing Unit (GPU) has been developed into a high-performance co-processor for the CPU. Compute Unified Device Architecture (CUDA) is a parallel programming model and software environment provided by NVIDIA, designed to overcome the challenges of traditional general-purpose GPU programming while maintaining a low learning curve for programmers familiar with standard languages such as C. In our inversion processing
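The Gardner link between density and velocity mentioned above is a simple power law. A sketch with the classic coefficients a = 0.31, b = 0.25 (for Vp in m/s and density in g/cm³; the paper may use locally calibrated values):

```python
def gardner_density(vp_ms, a=0.31, b=0.25):
    """Gardner's relation rho = a * Vp**b, rho in g/cm^3 for Vp in m/s.
    a = 0.31, b = 0.25 are the classic coefficients; local calibration
    is common in practice."""
    return a * vp_ms ** b

def gardner_velocity(rho_gcc, a=0.31, b=0.25):
    """Inverse relation Vp = (rho / a)**(1/b), used to pass updates back
    from the density model to the velocity model."""
    return (rho_gcc / a) ** (1.0 / b)
```

For example, Vp = 3000 m/s maps to roughly 2.29 g/cm³, a typical sedimentary-rock density.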
Parallel full-waveform inversion in the frequency domain by the Gauss-Newton method
NASA Astrophysics Data System (ADS)
Zhang, Wensheng; Zhuang, Yuan
2016-06-01
In this paper, we investigate full-waveform inversion in the frequency domain. We first test the inversion ability of three numerical optimization methods, i.e., the steepest-descent method, the Newton-CG method and the Gauss-Newton method, on a simple model. The results show that the Gauss-Newton method performs well and efficiently. Then numerical computations for a benchmark model named the Marmousi model by the Gauss-Newton method are implemented. A parallel algorithm based on the message passing interface (MPI) is applied, as the inversion is a typical large-scale computational problem. Numerical computations show that the Gauss-Newton method has good ability to reconstruct the complex model.
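A generic damped Gauss-Newton loop has the shape used in such inversions. The toy below fits an exponential rather than solving wave equations; in FWI the forward map is the wave-equation solver and the Jacobian action is assembled from back-propagated wavefields, which is not shown here:

```python
import numpy as np

def gauss_newton(forward, jacobian, d_obs, m0, n_iter=30, damping=1e-8):
    """Damped Gauss-Newton loop for min ||d_obs - forward(m)||^2:
    m <- m + (J^T J + damping I)^{-1} J^T r, with residual r and Jacobian J.
    Generic sketch; `forward` and `jacobian` are supplied explicitly."""
    m = np.asarray(m0, dtype=float)
    for _ in range(n_iter):
        r = d_obs - forward(m)                      # data residual
        J = jacobian(m)                             # sensitivity matrix
        H = J.T @ J + damping * np.eye(m.size)      # approximate Hessian
        m = m + np.linalg.solve(H, J.T @ r)         # model update
    return m

# toy problem: recover (A, k) in d(t) = A * exp(-k t) from noiseless data
t = np.linspace(0.0, 2.0, 30)
m_true = np.array([2.0, 1.5])
fwd = lambda m: m[0] * np.exp(-m[1] * t)
jac = lambda m: np.column_stack([np.exp(-m[1] * t),
                                 -m[0] * t * np.exp(-m[1] * t)])
m_est = gauss_newton(fwd, jac, fwd(m_true), np.array([1.0, 1.0]))
```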
The Noble-Abel Stiffened-Gas equation of state
NASA Astrophysics Data System (ADS)
Le Métayer, Olivier; Saurel, Richard
2016-04-01
Hyperbolic two-phase flow models have shown excellent ability for the resolution of a wide range of applications ranging from interfacial flows to fluid mixtures with several velocities. These models account for wave propagation (acoustic and convective) and consist of hyperbolic systems of partial differential equations. In this context, each phase is compressible and needs an appropriate convex equation of state (EOS). The EOS must be simple enough for intensive computations as well as boundary condition treatment. It must also be accurate, which is challenging to reconcile with simplicity. In the present approach, each fluid is governed by a novel EOS named "Noble-Abel stiffened gas," this formulation being a significant improvement over the popular "Stiffened Gas (SG)" EOS. It is a combination of the so-called "Noble-Abel" and "stiffened gas" equations of state that adds repulsive effects to the SG formulation. The determination of the various thermodynamic functions and associated coefficients is the aim of this article. We first use thermodynamic considerations to determine the different state functions such as the specific internal energy, enthalpy, and entropy. Then we propose to determine the associated coefficients for a liquid in the presence of its vapor. The EOS parameters are determined from experimental saturation curves. Some examples of liquid-vapor fluids are examined and the associated parameters are computed with the help of the present method. Comparisons between analytical and experimental saturation curves show very good agreement over wide ranges of temperature for both liquid and vapor.
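The pressure/energy pair of the Noble-Abel stiffened-gas EOS is compact enough to sketch: p = (γ-1)(e-q)/(v-b) - γp∞, with covolume b (the Noble-Abel repulsive term) and reference pressure p∞ (the stiffened-gas attractive term). Parameter values in the demo below are illustrative, not the paper's fitted coefficients:

```python
def nasg_pressure(e, v, gamma, pinf, b, q):
    """NASG EOS: p = (gamma - 1)(e - q)/(v - b) - gamma * pinf, with
    specific internal energy e, specific volume v, covolume b, reference
    pressure pinf and reference energy q."""
    return (gamma - 1.0) * (e - q) / (v - b) - gamma * pinf

def nasg_energy(p, v, gamma, pinf, b, q):
    """Inverse relation: e = (p + gamma*pinf)(v - b)/(gamma - 1) + q."""
    return (p + gamma * pinf) * (v - b) / (gamma - 1.0) + q

# limiting cases: b = pinf = q = 0 recovers the ideal gas,
# b = 0 alone recovers the stiffened-gas (SG) EOS
```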
NASA Technical Reports Server (NTRS)
Kurtz, M. J.; Huchra, J. P.; Beers, T. C.; Geller, M. J.; Gioia, I. M.
1985-01-01
X-ray and optical observations of the cluster of galaxies Abell 744 are presented. The X-ray flux (assuming H(0) = 100 km/s per Mpc) is about 9 x 10 to the 42nd erg/s. The X-ray source is extended, but shows no other structure. Photographic photometry (in Kron-Cousins R), calibrated by deep CCD frames, is presented for all galaxies brighter than 19th magnitude within 0.75 Mpc of the cluster center. The luminosity function is normal, and the isopleths show little evidence of substructure near the cluster center. The cluster has a dominant central galaxy, which is classified as a normal brightest-cluster elliptical on the basis of its luminosity profile. New redshifts were obtained for 26 galaxies in the vicinity of the cluster center; 20 appear to be cluster members. The spatial distribution of redshifts is peculiar; the dispersion within the 150 kpc core radius is much greater than outside. Abell 744 is similar to the nearby cluster Abell 1060.
NASA Astrophysics Data System (ADS)
Schuster, David M.
1993-04-01
An inverse method has been developed to compute the structural stiffness properties of wings given a specified wing loading and aeroelastic twist distribution. The method directly solves for the bending and torsional stiffness distributions of the wing using a modal representation of these properties. An aeroelastic design problem, involving the use of a computational aerodynamics method to optimize the aeroelastic twist distribution of a fighter wing operating at maneuver flight conditions, is used to demonstrate the application of the method. This exercise verifies the ability of the inverse scheme to accurately compute the structural stiffness distribution required to generate a specific aeroelastic twist under a specified aeroelastic load.
Method for detecting a pericentric inversion in a chromosome
Lucas, Joe N.
2000-01-01
A method is provided for determining a clastogenic signature of a sample of chromosomes by quantifying a frequency of a first type of chromosome aberration present in the sample; quantifying a frequency of a second, different type of chromosome aberration present in the sample; and comparing the frequency of the first type of chromosome aberration to the frequency of the second type of chromosome aberration. A method is also provided for using that clastogenic signature to identify a clastogenic agent or dosage to which the cells were exposed.
A comparison of techniques for inversion of radio-ray phase data in presence of ray bending
NASA Technical Reports Server (NTRS)
Wallio, H. A.; Grossi, M. D.
1972-01-01
Derivations are presented of the straight-line Abel transform and the seismological Herglotz-Wiechert transform (which takes ray bending into account) that are used in the reconstruction of refractivity profiles from radio-wave phase data. Profile inversions utilizing these approaches, performed in computer-simulated experiments, are compared for cases of positive, zero, and negative ray bending. For thin atmospheres and ionospheres, such as the Martian atmosphere and ionosphere, radio wave signals are shown to be inverted accurately with both methods. For dense media, such as the solar corona or the lower Venus atmosphere, the refractivity profiles recovered by the seismological Herglotz-Wiechert transform provide a significant improvement over the straight-line Abel transform.
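The straight-line Abel transform can be checked numerically against a known analytic pair. The sketch below evaluates the forward transform F(y) = 2 ∫ f(r) r dr / sqrt(r² - y²) with a substitution that removes the endpoint singularity, and compares with the pair f(r) = exp(-r²) ↔ F(y) = sqrt(π) exp(-y²):

```python
import numpy as np

def abel_forward(f, y, r_max=6.0, n=2000):
    """Straight-line Abel transform F(y) = 2 * int_y^rmax f(r) r dr / sqrt(r^2 - y^2).
    The substitution u = sqrt(r^2 - y^2) (so r dr = u du) removes the
    integrable singularity at r = y."""
    u_max = np.sqrt(max(r_max ** 2 - y ** 2, 0.0))
    u = np.linspace(0.0, u_max, n)
    vals = f(np.sqrt(u ** 2 + y ** 2))
    du = u[1] - u[0]
    integral = du * (vals.sum() - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule
    return 2.0 * integral

# verify against the analytic pair f(r) = exp(-r^2) <-> F(y) = sqrt(pi) exp(-y^2)
F_num = abel_forward(lambda r: np.exp(-r ** 2), 0.5)
F_exact = np.sqrt(np.pi) * np.exp(-0.25)
```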
A surface misfit inversion method for brain deformation modeling
NASA Astrophysics Data System (ADS)
Liu, Fenghong; Paulsen, Keith D.; Hartov, Alexander; Roberts, David W.
2007-03-01
Biomechanical models of brain deformation are useful tools for estimating the shift that occurs during neurosurgical interventions. Incorporation of intra-operative data into the biomechanical model improves the accuracy of the registration between the patient and the image volume. A representer method to solve the adjoint equations (AEM) for data assimilation has previously been developed. In order to improve the computational efficiency and to process more intraoperative data, we modified the adjoint equation method by changing the way in which intraoperative data are applied. The current formulation is developed around a point-based data-model misfit. A surface-based data-model misfit could be a more robust and computationally efficient technique. Our approach is to express the surface misfit as the volume between the measured surface and the model-predicted surface. An iterative method is used to solve the adjoint equations. The surface misfit criterion is tested in a cortical distension clinical case and compared to the results generated with the prior point-based methodology, solved either iteratively or with the representer algorithm. The results show that solving the adjoint equations with an iterative method improves computational efficiency dramatically over the representer approach, and that reformulating the minimization criterion in terms of a surface description is even more efficient. Applying intra-operative data in the form of a surface misfit is computationally very efficient and appears promising with respect to its accuracy in estimating brain deformation.
A new inversion method for (T2, D) 2D NMR logging and fluid typing
NASA Astrophysics Data System (ADS)
Tan, Maojin; Zou, Youlong; Zhou, Cancan
2013-02-01
One-dimensional nuclear magnetic resonance (1D NMR) logging technology has some significant limitations in fluid typing. However, not only can two-dimensional nuclear magnetic resonance (2D NMR) provide some accurate porosity parameters, but it can also identify fluids more accurately than 1D NMR. In this paper, based on the relaxation mechanism of (T2, D) 2D NMR in a gradient magnetic field, a hybrid inversion method that combines least-squares-based QR decomposition (LSQR) and truncated singular value decomposition (TSVD) is examined in the 2D NMR inversion of various fluid models. The forward modeling and inversion tests are performed in detail with different acquisition parameters, such as magnetic field gradients (G) and echo spacing (TE) groups. The simulated results are discussed and described in detail, the influence of the above-mentioned observation parameters on the inversion accuracy is investigated and analyzed, and the observation parameters in multi-TE activation are optimized. Furthermore, the hybrid inversion can be applied to quantitatively determine the fluid saturation. To study the effects of noise level on the hybrid method and inversion results, the numerical simulation experiments are performed using different signal-to-noise-ratios (SNRs), and the effect of different SNRs on fluid typing using three fluid models are discussed and analyzed in detail.
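The TSVD half of the hybrid scheme above is easy to sketch: small singular values of the ill-conditioned kernel, which amplify noise, are simply discarded. The code below is a standalone illustration on a generic matrix, not the paper's LSQR-coupled 2D NMR implementation:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution of A x = b, keeping only the k largest
    singular values; the standard regularizer for ill-conditioned kernels
    such as the (T2, D) inversion kernel."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    inv_s = np.zeros_like(s)
    inv_s[:k] = 1.0 / s[:k]
    return Vt.T @ (inv_s * (U.T @ b))

# demo: a trivially diagonal system makes the mechanics visible
A = np.diag([2.0, 1.0])
b = np.array([4.0, 3.0])
x_full = tsvd_solve(A, b, 2)   # full rank: exact solution [2, 3]
x_trunc = tsvd_solve(A, b, 1)  # keeps only the strongest singular component
```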
Diffuse interface methods for inverse problems: case study for an elliptic Cauchy problem
NASA Astrophysics Data System (ADS)
Burger, Martin; Løseth Elvetun, Ole; Schlottbom, Matthias
2015-12-01
Many inverse problems have to deal with complex, evolving and often not exactly known geometries, e.g. as domains of forward problems modeled by partial differential equations. This makes it desirable to use methods which are robust with respect to perturbed or not well resolved domains, and which allow for efficient discretizations not resolving any fine detail of those geometries. For forward problems in partial differential equations, methods based on diffuse interface representations have gained strong attention in the last years, but so far they have not been considered systematically for inverse problems. In this work we introduce a diffuse domain method as a tool for the solution of variational inverse problems. As a particular example we study ECG inversion in further detail. ECG inversion is a linear inverse source problem with boundary measurements governed by an anisotropic diffusion equation, which naturally cries for solutions under changing geometries, namely the beating heart. We formulate a regularization strategy using Tikhonov regularization and, using standard source conditions, we prove convergence rates. A special property of our approach is that not only operator perturbations are introduced by the diffuse domain method, but more importantly we have to deal with topologies which depend on a parameter ε in the diffuse domain method, i.e. we have to deal with ε-dependent forward operators and ε-dependent norms. In particular, the appropriate function spaces for the unknown and the data depend on ε. This prevents the application of some standard convergence techniques for inverse problems; in particular, interpreting the perturbations as data errors in the original problem does not yield suitable results. We consequently develop a novel approach based on saddle-point problems. The numerical solution of the problem is discussed as well and results for several computational experiments are reported. In
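The Tikhonov strategy named above amounts, in the simplest discrete setting, to solving regularised normal equations. A minimal sketch on a generic linear operator (not the ECG forward operator, and without the ε-dependent function-space machinery the paper develops):

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Minimiser of ||A x - b||^2 + alpha ||x||^2, via the regularised
    normal equations (A^T A + alpha I) x = A^T b. Standard source
    conditions then yield convergence rates as alpha -> 0 with the noise."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# demo: a well-conditioned operator, exact data
A = np.diag([2.0, 1.0])
b = np.array([2.0, 3.0])          # consistent with x = [1, 3]
x_small = tikhonov_solve(A, b, 1e-10)  # tiny alpha: near-exact recovery
x_large = tikhonov_solve(A, b, 10.0)   # large alpha: solution shrunk to 0
```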
NASA Astrophysics Data System (ADS)
Liu, Qing Huo; Zhang, Zhong Qing
2000-07-01
We invert for the axisymmetric conductivity distribution from borehole electromagnetic induction measurements using a two-step linear inversion method based on a fast Fourier and Hankel transform enhanced extended Born approximation. In this method, the inverse problem is first cast as an underdetermined linear least-norm problem for the induced electric current density; from the solution of this induced current density, the unknown conductivity distribution is then obtained by solving an overdetermined linear problem using the newly developed, fast Fourier and Hankel transform enhanced extended Born approximation. Numerical results show that this inverse method is applicable to a very high conductivity contrast. It is a natural extension of the original two-step linear inversion method of Torres-Verdin and Habashy to axisymmetric media. In the first step, the CPU time cost is O(N²); in the second step, it is O(N log₂ N), where N is the number of unknowns. Because of the fast Fourier and Hankel transform algorithm, this inverse method is actually more efficient than the conventional, brute-force first-order Born approximation.
Numerical Methods for Forward and Inverse Problems in Discontinuous Media
Chartier, Timothy P.
2011-03-08
The research emphasis under this grant's funding is in the area of algebraic multigrid methods. The research has two main branches: 1) exploring interdisciplinary applications in which algebraic multigrid can make an impact and 2) extending the scope of algebraic multigrid methods with algorithmic improvements that are based in strong analysis. The work in interdisciplinary applications falls primarily in the field of biomedical imaging. Work under this grant demonstrated the effectiveness and robustness of multigrid for solving linear systems that result from highly heterogeneous finite element method models of the human head. The results in this work also give promise to medical advances possible with software that may be developed. Research to extend the scope of algebraic multigrid has been focused in several areas. In collaboration with researchers at the University of Colorado, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory, the PI developed an adaptive multigrid with subcycling via complementary grids. This method has very cheap computing costs per iterate and is showing promise as a preconditioner for conjugate gradient. Recent work with Los Alamos National Laboratory concentrates on developing algorithms that take advantage of the recent advances in adaptive multigrid research. The results of the various efforts in this research could ultimately have direct use and impact to researchers for a wide variety of applications, including, astrophysics, neuroscience, contaminant transport in porous media, bi-domain heart modeling, modeling of tumor growth, and flow in heterogeneous porous media. This work has already led to basic advances in computational mathematics and numerical linear algebra and will continue to do so into the future.
Quasiparticle density of states by inversion with maximum entropy method
NASA Astrophysics Data System (ADS)
Sui, Xiao-Hong; Wang, Han-Ting; Tang, Hui; Su, Zhao-Bin
2016-10-01
We propose to extract the quasiparticle density of states (DOS) of the superconductor directly from the experimentally measured superconductor-insulator-superconductor junction tunneling data by applying the maximum entropy method to the nonlinear systems. It merits the advantage of model independence with minimum a priori assumptions. Various components of the proposed method have been carefully investigated, including the meaning of the targeting function, the mock function, as well as the role and the designation of the input parameters. The validity of the developed scheme is shown by two kinds of tests for systems with known DOS. As a preliminary application to a Bi2Sr2CaCu2O8+δ sample with its critical temperature T_c = 89 K, we extract the DOS from the measured intrinsic Josephson junction current data at temperatures of T = 4.2 K, 45 K, 55 K, 95 K, and 130 K. The energy gap decreases with increasing temperature below T_c, while above T_c, a kind of energy gap survives, which provides an angle to investigate the pseudogap phenomenon in high-T_c superconductors. The developed method itself might be a useful tool for future applications in various fields.
Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin
2016-04-01
surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one cannot apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues including parallelization and problem dimension reduction.
Dynamic inversion method based on the time-staggered stereo-modeling scheme and its acceleration
NASA Astrophysics Data System (ADS)
Jing, Hao; Yang, Dinghui; Wu, Hao
2016-09-01
A set of second-order differential equations describing the space-time behavior of the derivatives of displacement with respect to model parameters (i.e. waveform sensitivities) is obtained by differentiating the original wave equations. The dynamic inversion method obtains sensitivities of the seismic displacement field with respect to earth properties directly by solving differential equations for them, instead of constructing sensitivities from the displacement field itself. In this study, we take a new perspective on the dynamic inversion method and use acceleration approaches to reduce the computational time and memory usage, improving its ability to perform high-resolution imaging. The dynamic inversion method, which can simultaneously use different waves and multi-component observation data, is appropriate for directly inverting elastic parameters, medium density, or wave velocities. Full wave-field information is utilized as much as possible, at the expense of a greater computational burden. To mitigate this burden, two accelerations are proposed from a computer-implementation point of view. One is source encoding, which uses a linear combination of all shots; the other reduces the amount of forward-modeling computation. We applied a new finite difference method to the dynamic inversion to improve computational accuracy and speed. Numerical experiments indicated that the new finite difference method can effectively suppress the numerical dispersion caused by the discretization of the wave equations, resulting in enhanced computational efficiency with lower memory cost for seismic modeling and inversion based on the full wave equations. We present inversion results for both checkerboard and Marmousi models to demonstrate the validity of the method. The method converges even when the initial model deviates substantially from the true one. Besides, parallel calculations can be
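The source-encoding acceleration rests on a standard statistical fact: with random ±1 (Rademacher) encoding weights, cross-talk between different shots vanishes in expectation while each shot keeps unit weight. A minimal numpy demonstration of that second-moment identity (illustrative only, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_shots, n_draws = 4, 100_000

# Rademacher (+/-1) encoding signs, one realization per encoded supershot
eta = rng.choice([-1.0, 1.0], size=(n_draws, n_shots))

# E[eta_i * eta_j] approximates the identity matrix: cross-talk between
# different shots averages to zero, while each shot keeps unit weight,
# which is why a gradient from one encoded supershot is an unbiased
# estimate of the sum of per-shot gradients.
second_moment = eta.T @ eta / n_draws
```

In practice a fresh set of signs is drawn per iteration so the residual cross-talk averages out over the course of the inversion.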
A Study of the Parameters for Solar Structure Inversion Methods
NASA Astrophysics Data System (ADS)
Rabello-Soares, M. C.; Basu, Sarbani; Christensen-Dalsgaard, J.
The observed solar p-mode frequencies provide an extremely useful diagnostic of the internal structure of the Sun, and permit us to test in considerable detail the physics used in the theory of stellar structure. Two implementations of the optimally localized averages (OLA) method are amongst the most commonly used techniques for inverting helioseismic data, namely the Subtractive Optimally Localized Averages (SOLA) and Multiplicative Optimally Localized Averages (MOLA). In both of them, a number of parameters must be chosen in order to find the solution. Proper choice of the parameters is essential for correctly determining the variation of the internal structure along the solar radius. In this work, we analyze in detail the influence of each parameter on the solution and indicate how to arrive at an optimal set of parameters for a given data set.
Integro-differential method of solving the inverse coefficient heat conduction problem
NASA Astrophysics Data System (ADS)
Baranov, V. L.; Zasyad'Ko, A. A.; Frolov, G. A.
2010-03-01
On the basis of differential transformations, a stable integro-differential method of solving the inverse heat conduction problem is suggested. The method has been tested by determining the thermal diffusivity during quasi-stationary melting and heating of a quartz glazed ceramic specimen.
Numerical solution of 2D-vector tomography problem using the method of approximate inverse
NASA Astrophysics Data System (ADS)
Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna
2016-08-01
We propose a numerical solution of the problem of reconstructing a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good reconstructions of vector fields.
Tauberian theorems for Abel summability of sequences of fuzzy numbers
NASA Astrophysics Data System (ADS)
Yavuz, Enes; Çoşkun, Hüsamettin
2015-09-01
We give some conditions under which Abel summable sequences of fuzzy numbers are convergent. As corollaries we obtain the results given in [E. Yavuz, Ö. Talo, Abel summability of sequences of fuzzy numbers, Soft computing 2014, doi: 10.1007/s00500-014-1563-7].
A numerical method for solving a stochastic inverse problem for parameters.
Butler, T; Estep, D
2013-02-01
We review recent work (Briedt et al., 2011., 2012) on a new approach to the formulation and solution of the stochastic inverse parameter determination problem, i.e. determine the random variation of input parameters to a map that matches specified random variation in the output of the map, and then apply the various aspects of this method to the interesting Brusselator model. In this approach, the problem is formulated as an inverse problem for an integral equation using the Law of Total Probability. The solution method employs two steps: (1) we construct a systematic method for approximating set-valued inverse solutions and (2) we construct a computational approach to compute a measure-theoretic approximation of the probability measure on the input space imparted by the approximate set-valued inverse that solves the inverse problem. In addition to convergence analysis, we carry out an a posteriori error analysis on the computed probability distribution that takes into account all sources of stochastic and deterministic error. PMID:24347806
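The two-step solution strategy can be caricatured on a one-dimensional toy problem: partition the input space into cells, push each cell through the forward map, and use the Law of Total Probability to split each output bin's prescribed probability uniformly among the input cells that map into it. Everything below (the map f, the bins, the probabilities) is a hypothetical illustration, not the authors' algorithm itself:

```python
import numpy as np

# Hypothetical forward map from an input parameter to an observable
f = lambda lam: lam ** 2

# Partition the input space [0, 2] into small cells
n_cells = 200
edges = np.linspace(0.0, 2.0, n_cells + 1)
centers = 0.5 * (edges[:-1] + edges[1:])
q = f(centers)                      # forward image of each cell

# Prescribed probability of the observable landing in each of two bins
out_edges = np.array([0.0, 1.0, 4.0])
out_prob = np.array([0.3, 0.7])

# Law of Total Probability: each output bin's probability is split
# uniformly among the input cells whose images fall in that bin,
# giving a measure-theoretic approximation on the input space
bin_of_cell = np.digitize(q, out_edges) - 1
cell_prob = np.zeros(n_cells)
for b, p in enumerate(out_prob):
    members = bin_of_cell == b
    cell_prob[members] = p / members.sum()
```

The "uniform within each contour set" choice here stands in for the ansatz measure of the real method; refining the cell partition refines the approximate set-valued inverse.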
Inverse planning optimization method for intensity modulated radiation therapy.
Lan, Yihua; Ren, Haozheng; Li, Cunhua; Min, Zhifang; Wan, Jinxin; Ma, Jianxin; Hung, Chih-Cheng
2013-10-01
In order to facilitate the leaf sequencing process in intensity modulated radiation therapy (IMRT) and the design of a practical leaf sequencing algorithm, smoothing the planned fluence maps is an important issue. The objective is to achieve both high-efficiency and high-precision dose delivery by considering the characteristics of the leaf sequencing process. The key factor affecting the total number of monitor units in the leaf sequencing optimization process is the max-flow value of the digraph formulated from the fluence maps. Therefore, we believe that one strategy for compromising between dose conformity and total number of monitor units in dose delivery is to balance the dose distribution function against this max-flow value. However, the digraph contains too many paths to determine in advance which path carries the maximum flow. The maximum flow value among the horizontal paths was therefore selected and used in the objective function of the fluence map optimization to formulate the model. The model is a traditional linearly constrained quadratic optimization model which can be solved easily by an interior-point method. We believe that the smoothed maps from this model are more suitable for the leaf sequencing optimization process than those from other smoothing models. A clinical head-and-neck case and a prostate case were tested and compared using our proposed model and a smoothing model based on the minimization of total variance. The optimization results at the same level of total number of monitor units (TNMU) show that the fluence maps obtained from our model have much better dose performance for the target/non-target region than the maps from the total-variance-based smoothing model. This indicates that our model achieves a better dose distribution when the algorithm suppresses the TNMU at the same level. Although we have used only the max-flow value of the horizontal paths in the digraph in the objective function, a good balance has been achieved between
A combined direct/inverse three-dimensional transonic wing design method for vector computers
NASA Technical Reports Server (NTRS)
Weed, R. A.; Carlson, L. A.; Anderson, W. K.
1984-01-01
A three-dimensional transonic-wing design algorithm for vector computers is developed, and the results of sample computations are presented graphically. The method incorporates the direct/inverse scheme of Carlson (1975), a Cartesian grid system with boundary conditions applied at a mean plane, and a potential-flow solver based on the conservative form of the full potential equation and using the ZEBRA II vectorizable solution algorithm of South et al. (1980). The accuracy and consistency of the method with regard to direct and inverse analysis and trailing-edge closure are verified in the test computations.
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions, and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method in which a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
NASA Astrophysics Data System (ADS)
Schuster, Thomas; Schöpfer, Frank
2010-08-01
The method of approximate inverse is a mollification method for stably solving inverse problems. In its original form it has been developed to solve operator equations in L2-spaces and general Hilbert spaces. We show that the method of approximate inverse can be extended to solve linear, ill-posed problems in Banach spaces. This paper is restricted to function spaces. The method itself consists of evaluations of dual pairings of the given data with reconstruction kernels that are associated with mollifiers and the dual of the operator. We first define what we mean by a mollifier in general Banach spaces and then investigate two settings more exactly: the case of Lp-spaces and the case of the Banach space of continuous functions on a compact set. For both settings we present the criteria turning the method of approximate inverse into a regularization method and prove convergence with rates. As an application we refer to x-ray diffractometry which is a technique of non-destructive testing that is concerned with computing the stress tensor of a specimen. Since one knows that the stress tensor is smooth, x-ray diffractometry can appropriately be modelled by a Banach space setting using continuous functions.
The inversion method of Matrix mineral bulk modulus based on Gassmann equation
NASA Astrophysics Data System (ADS)
Kai, L.; He, X.; Zhang, Z. H.
2015-12-01
In recent years, seismic rock physics has played an important role in oil and gas exploration. Seismic rock physics models can quantitatively describe reservoir characteristics such as lithologic association, pore structure, and geological processes. But classic rock physics models require a background parameter, the matrix mineral bulk modulus, and inaccurate inputs greatly reduce prediction reliability. By introducing different rock physics parameters, the Gassmann equation is used to derive a reasonable modification. Two matrix mineral bulk modulus inversion methods are proposed, a linear regression method and a self-adapting inversion method. They effectively solve for the matrix mineral bulk modulus under different complex parameter conditions. Tests on laboratory data show that, compared with the conventional method, the linear regression method is simpler and more accurate, while the self-adapting inversion method also achieves high precision when abundant rock physics parameters are known. The resulting modulus values were applied to reservoir fluid substitution, porosity inversion, and S-wave velocity prediction. Introducing the matrix mineral modulus based on the Gassmann equation can effectively improve the reliability of fluid-effect prediction as well as computational efficiency.
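For context, the Gassmann equation that underlies both inversion methods relates the dry-frame, matrix mineral, and pore-fluid bulk moduli to the saturated-rock modulus. A minimal sketch of the standard forward relation (the input values are illustrative, not from the paper):

```python
def gassmann_ksat(k_dry, k_min, k_fluid, phi):
    """Saturated-rock bulk modulus from the Gassmann equation.

    k_dry   : dry-frame bulk modulus
    k_min   : matrix (mineral) bulk modulus -- the quantity being inverted
    k_fluid : pore-fluid bulk modulus
    phi     : porosity (fraction)
    All moduli in the same units (e.g. GPa).
    """
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fluid + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

# Quartz-like matrix with brine-filled pores (illustrative values, GPa)
k_sat = gassmann_ksat(k_dry=9.0, k_min=37.0, k_fluid=2.25, phi=0.2)
```

The strong sensitivity of `k_sat` to `k_min` in this relation is precisely why an inaccurate matrix modulus degrades fluid substitution and velocity prediction.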
Freezing Time Estimation for a Cylindrical Food Using an Inverse Method
NASA Astrophysics Data System (ADS)
Hu, Yao Xing; Mihori, Tomoo; Watanabe, Hisahiko
Most published methods for estimating freezing time require the thermal properties of the product and the relevant heat transfer coefficients between the product and the cooling medium. However, the difficulty of obtaining thermal data for use in industrial food freezing systems has been pointed out. We have developed a new procedure for estimating the time to freeze a slab-shaped food by using the inverse method, which does not require knowledge of the thermal properties of the food being frozen. How the inverse method is applied to freezing-time estimation depends on the shape of the body to be frozen. In this paper, we extend the inverse method to food bodies of cylindrical shape, using selected explicit expressions to describe the temperature profile. The temperature profile was found to be successfully approximated by a logarithmic function, from which an approximate equation for the freezing time was derived. An inversion procedure for estimating freezing time based on this approximate equation was validated via a numerical experiment.
NASA Astrophysics Data System (ADS)
Amey, Ruth; Hooper, Andy
2016-04-01
Modelling the slip distribution along fault planes is an essential part of earthquake investigations. These models give insight into stress distribution and frictional properties of a fault, as well as being important for understanding seismic hazard. Here we present a new approach for constraining earthquake slip using geodetic data, and apply our method to the Mw 6.0 Napa Valley, California, earthquake of 24th August 2014. The method relies on the inclusion of a prior based on von Karman correlation. With the launch of ESA's satellite Sentinel-1A in 2014, the scientific community is now in a position to routinely investigate all large continental earthquakes using InSAR, and inverting for slip is a crucial part of that procedure. However, in order for the slip inversions to be useful we need to ensure that the inversion processes give results that are properly representing the slip distribution. Slip inversions are ill-posed and measurement noise results in unrealistically large fluctuations in the solution. To avoid this, an extra constraint such as minimum norm or Laplacian smoothing is usually employed to regularise the inversion. However, these constraints do not necessarily realistically represent earthquake slip. There is growing evidence that many aspects of earthquakes are self-similar and that earthquake slip distribution is well described by a von Karman autocorrelation function, which incorporates fractal properties through the Hurst parameter. We add this constraint to the slip inversion as a prior assumption using a Bayesian approach.
Ita, B. I.
2014-11-12
By using the Nikiforov-Uvarov (NU) method, the Schrödinger equation has been solved for the sum of the inversely quadratic Hellmann potential (IQHP) and the inversely quadratic potential (IQP) for any angular momentum quantum number l. The energy eigenvalues and their corresponding eigenfunctions have been obtained in terms of Laguerre polynomials. Special cases of the sum of these potentials have been considered and their energy eigenvalues also obtained.
Combining Strong and Weak Gravitational Lensing in Abell 1689
NASA Astrophysics Data System (ADS)
Limousin, Marceau; Richard, Johan; Jullo, Eric; Kneib, Jean-Paul; Fort, Bernard; Soucail, Geneviève; Elíasdóttir, Árdís; Natarajan, Priyamvada; Ellis, Richard S.; Smail, Ian; Czoske, Oliver; Smith, Graham P.; Hudelot, Patrick; Bardeau, Sébastien; Ebeling, Harald; Egami, Eiichi; Knudsen, Kirsten K.
2007-10-01
We present a reconstruction of the mass distribution of the galaxy cluster Abell 1689 at z=0.18 using detected strong lensing features from deep ACS observations and extensive ground-based spectroscopy. Earlier analyses have reported up to 32 multiply imaged systems in this cluster, of which only 3 were spectroscopically confirmed. In this work, we present a parametric strong lensing mass reconstruction using 34 multiply imaged systems, of which 24 have newly determined spectroscopic redshifts, a major step forward in building a robust mass model. In turn, the new spectroscopic data allow a more secure identification of multiply imaged systems. The resultant mass model enables us to reliably predict the redshifts of additional multiply imaged systems for which no spectra are currently available, and to use the location of these systems to further constrain the mass model. Using our strong lensing mass model, we predict on larger scales a shear signal which is consistent with that inferred from our large-scale weak lensing analysis derived using CFH12K wide-field images. Thanks to a new method for reliably selecting a well-defined background lensed galaxy population, we resolve the discrepancy found between the NFW concentration parameters derived from earlier strong and weak lensing analyses. The derived parameters for the best-fit NFW profile are c200=7.6+/-1.6 and r200=2.16+/-0.10 h70^-1 Mpc (corresponding to a 3D mass of M200=[1.32+/-0.2]×10^15 h70^-1 Msolar). The large number of new constraints incorporated in this work makes Abell 1689 the most reliably reconstructed cluster to date. This well-calibrated mass model, which we make publicly available here, will enable us to exploit Abell 1689 efficiently as a gravitational telescope, as well as to potentially constrain cosmology. Based on observations obtained at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council of Canada, the Institut National des
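The quoted best-fit parameters define the full NFW mass profile. As an illustrative sketch (not the authors' lensing code), the mass enclosed at any radius follows from the standard NFW form M(r) = M200 · mu(r/rs) / mu(c200) with mu(x) = ln(1+x) − x/(1+x) and rs = r200/c200:

```python
import numpy as np

def nfw_mass(r, c200, r200, m200):
    """Mass enclosed within radius r for an NFW profile.

    Uses M(r) = M200 * mu(r/rs) / mu(c200), with
    mu(x) = ln(1+x) - x/(1+x) and rs = r200 / c200.
    """
    mu = lambda x: np.log1p(x) - x / (1.0 + x)
    rs = r200 / c200
    return m200 * mu(r / rs) / mu(c200)

# Best-fit values quoted in the abstract: c200 = 7.6,
# r200 = 2.16 Mpc (h70^-1), M200 = 1.32e15 Msolar (h70^-1)
m_half = nfw_mass(1.08, c200=7.6, r200=2.16, m200=1.32e15)
```

By construction the profile returns M200 at r = r200, and smaller enclosed masses inside.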
ABEL description and implementation of cyber net system
NASA Astrophysics Data System (ADS)
Lu, Jiyuan; Jing, Liang
2013-03-01
Cyber net system is a subclass of Petri nets. It has more powerful description capability and more complex properties than P/T systems. Due to its nonlinear relations, analysis techniques for other net systems cannot be applied to it directly, which has hindered research on cyber net systems. In this paper, the authors use a hardware description language to describe cyber net systems. Simulation analysis is carried out with EDA software tools to disclose properties of the system. The method is illustrated in detail with a cyber net system model that computes the Fibonacci series. ABEL source code and simulation waveforms are also presented. The source code is compiled, optimized, fitted, and downloaded to a programmable logic device, yielding an ASIC for computing the Fibonacci series. This opens a new path for the analysis and application of cyber net systems.
Solving the structural inverse gravity problem by the modified gradient methods
NASA Astrophysics Data System (ADS)
Martyshko, P. S.; Akimova, E. N.; Misilov, V. E.
2016-09-01
New methods for solving the three-dimensional inverse gravity problem in the class of contact surfaces are described. Based on the approach previously suggested by the authors, new algorithms are developed. Application of these algorithms significantly reduces the number of iterations and the computing time compared to the previous ones. The algorithms have been numerically implemented on a multicore processor. An example of solving the structural inverse gravity problem for a four-layer medium model (using gravity field measurements) is presented.
NASA Technical Reports Server (NTRS)
Prinn, Ronald G.
2001-01-01
For interpreting observational data, and in particular for use in inverse methods, accurate and realistic chemical transport models are essential. Toward this end we have, in recent years, helped develop and utilize a number of three-dimensional models including the Model for Atmospheric Transport and Chemistry (MATCH).
A second degree Newton method for an inverse obstacle scattering problem
NASA Astrophysics Data System (ADS)
Kress, Rainer; Lee, Kuo-Ming
2011-08-01
A regularized second degree Newton method is proposed and implemented for the inverse problem for scattering of time-harmonic acoustic waves from a sound-soft obstacle. It combines ideas due to Johansson and Sleeman [18] and Hettlich and Rundell [8] and reconstructs the obstacle from the far field pattern for scattering of one incident plane wave.
Terekhov, Alexander V.; Zatsiorsky, Vladimir M.
2011-01-01
One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423-453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907
Singular solutions of the KdV equation and the inverse scattering method
Arkad'ev, V.A.; Pogrebkov, A.K.; Polivanov, M.K.
1985-12-20
The paper is devoted to the construction of singular solutions of the KdV equation. The presentation is based on a variant of the inverse scattering method for singular solutions of nonlinear equations developed in previous works of the authors.
Towards "Inverse" Character Tables? A One-Step Method for Decomposing Reducible Representations
ERIC Educational Resources Information Center
Piquemal, J.-Y.; Losno, R.; Ancian, B.
2009-01-01
In the framework of group theory, a new procedure is described for a one-step automated reduction of reducible representations. The matrix inversion tool, provided by standard spreadsheet software, is applied to the central part of the character table that contains the characters of the irreducible representation. This method is not restricted to…
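The one-step reduction described above can be sketched outside a spreadsheet as well: since the characters of a reducible representation are a linear combination of the irreducible characters, the multiplicities solve a linear system built from the character table. A Python sketch for C2v (chosen because every class has a single element; the example reducible characters are made up for illustration):

```python
import numpy as np

# Character table of C2v: rows A1, A2, B1, B2; columns E, C2, sigma_v, sigma_v'
char_table = np.array([
    [1,  1,  1,  1],   # A1
    [1,  1, -1, -1],   # A2
    [1, -1,  1, -1],   # B1
    [1, -1, -1,  1],   # B2
], dtype=float)

# chi_red(c) = sum_i n_i * chi_i(c), so the multiplicities n solve a
# linear system whose matrix is the transposed character table -- the
# "matrix inversion" step done in one shot instead of the usual
# irrep-by-irrep reduction formula.
chi_red = np.array([4.0, 0.0, 2.0, 2.0])
n = np.linalg.solve(char_table.T, chi_red)   # -> 2 A1 + 0 A2 + 1 B1 + 1 B2
```

For groups with multi-element classes the same class-wise linear solve applies, since the defining relation holds class by class.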
Iterative solution of a Dirac equation with an inverse Hamiltonian method
Hagino, K.; Tanimura, Y.
2010-11-15
We solve a single-particle Dirac equation with Woods-Saxon potentials using an iterative method in the coordinate-space representation. By maximizing the expectation value of the inverse of the Dirac Hamiltonian, this method avoids the variational collapse in which an iterative solution dives into the Dirac sea. We demonstrate that this method works efficiently, reproducing the exact solutions of the Dirac equation.
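The idea of maximizing the expectation value of H^-1 can be illustrated with a finite-matrix cartoon (not the coordinate-space Woods-Saxon solver of the paper): for a spectrum containing deeply negative "sea" states, the largest eigenvalue of H^-1 is 1/E of the lowest positive state, so iterating with H^-1 lands there instead of collapsing. In this toy the smallest-magnitude eigenvalue happens to be the positive one, so plain power iteration on H^-1 suffices:

```python
import numpy as np

# Toy "Dirac-like" Hamiltonian: negative-energy (sea) states and
# positive-energy states; we want the lowest positive eigenvalue (0.5),
# not the bottomless negative ones.
H = np.diag([-5.0, -1.0, 0.5, 2.0, 8.0])

# Power iteration with H^-1: apply the inverse by solving H x = psi
# rather than forming H^-1 explicitly.
psi = np.ones(H.shape[0]) / np.sqrt(H.shape[0])
for _ in range(200):
    psi = np.linalg.solve(H, psi)
    psi /= np.linalg.norm(psi)

energy = psi @ H @ psi   # Rayleigh quotient converges to 0.5
```

A naive minimization of ⟨H⟩ on the same matrix would dive toward -5, the matrix analogue of variational collapse into the Dirac sea.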
NASA Astrophysics Data System (ADS)
Gherlone, Marco; Cerracchio, Priscilla; Mattone, Massimiliano; Di Sciuva, Marco; Tessler, Alexander
2014-04-01
Shape sensing, i.e., reconstruction of the displacement field of a structure from surface-measured strains, has relevant implications for the monitoring, control and actuation of smart structures. The inverse finite element method (iFEM) is a shape-sensing methodology shown to be fast, accurate and robust. This paper aims to demonstrate that the recently presented iFEM for beam and frame structures is reliable when experimentally measured strains are used as input data. The theoretical framework of the methodology is first reviewed. Timoshenko beam theory is adopted, including stretching, bending, transverse shear and torsion deformation modes. The variational statement and its discretization with C0-continuous inverse elements are briefly recalled. The three-dimensional displacement field of the beam structure is reconstructed under the condition that least-squares compatibility is guaranteed between the measured strains and those interpolated within the inverse elements. The experimental setup is then described. A thin-walled cantilevered beam is subjected to different static and dynamic loads. Measured surface strains are used as input data for shape sensing at first with a single inverse element. For the same test cases, convergence is also investigated using an increasing number of inverse elements. The iFEM-recovered deflections and twist rotations are then compared with those measured experimentally. The accuracy, convergence and robustness of the iFEM with respect to unavoidable measurement errors, due to strain sensor locations, measurement systems and geometry imperfections, are demonstrated for both static and dynamic loadings.
New Modified Band Limited Impedance (BLIMP) Inversion Method Using Envelope Attribute
NASA Astrophysics Data System (ADS)
Maulana, Z. L.; Saputro, O. D.; Latief, F. D. E.
2016-01-01
Earth attenuates high frequencies from the seismic wavelet, and low frequencies cannot be recorded by low-quality geophones. The low frequencies (0-10 Hz) absent from seismic data are important for obtaining a good result in acoustic impedance (AI) inversion. AI is important for determining reservoir quality, since it can be converted to reservoir properties such as porosity, permeability, and water saturation. The low frequencies can be supplied from impedance logs (AI logs), from velocity analysis, or from a combination of both. In this study, we propose that the low frequencies can instead be obtained from the envelope seismic attribute. The proposed method is essentially a modified band limited impedance (BLIMP) inversion, in which the AI logs are replaced by the envelope attribute. In the low-frequency domain (0-10 Hz), the envelope attribute produces high amplitudes; this low-frequency content is used in place of the low frequencies from AI logs in BLIMP. The linear trend in this method is still acquired from the AI logs. The method is applied to synthetic seismograms created from the impedance log of well 'X'. The mean squared error of the modified BLIMP inversion is 2-4% per trace (the variation is caused by different normalization constants), lower than the 8% error of the conventional BLIMP inversion. The new method is also applied to the Marmousi2 dataset and shows promising results: the modified BLIMP inversion result from Marmousi2 using one AI log is better than that produced by the conventional method.
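The envelope attribute itself is the instantaneous amplitude of the analytic signal. A self-contained sketch of its construction via the FFT (equivalent to the usual Hilbert-transform definition; the synthetic trace is an assumption for illustration) shows how the envelope of an amplitude-modulated trace contains exactly the low-frequency content, DC plus the modulation rate, that the method feeds into BLIMP:

```python
import numpy as np

def envelope(trace):
    """Instantaneous-amplitude (envelope) attribute of a seismic trace.

    Built from the analytic signal: double the positive frequencies,
    zero the negative ones, inverse-transform, take the magnitude.
    """
    n = len(trace)
    spec = np.fft.fft(trace)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spec * h))

# 60 Hz carrier modulated at 2 Hz: the envelope recovers the slow
# modulation (DC + 2 Hz), i.e. the missing low-frequency band
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
trace = (1.0 + 0.5 * np.cos(2 * np.pi * 2 * t)) * np.cos(2 * np.pi * 60 * t)
env = envelope(trace)
```

For this exactly periodic trace the recovered envelope matches the 1 + 0.5·cos(2π·2t) modulation to numerical precision.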
Mass, velocity anisotropy, and pseudo phase-space density profiles of Abell 2142
NASA Astrophysics Data System (ADS)
Munari, E.; Biviano, A.; Mamon, G. A.
2014-06-01
Aims: We aim to compute the mass and velocity anisotropy profiles of Abell 2142 and, from there, the pseudo phase-space density profile Q(r) and the density slope-velocity anisotropy (β-γ) relation, and then to compare them with theoretical expectations. Methods: The mass profiles were obtained by using three techniques based on member galaxy kinematics, namely the caustic method, the dispersion-kurtosis method, and MAMPOSSt. Through inversion of the Jeans equation, it was possible to compute the velocity anisotropy profiles. Results: The mass profiles, as well as the virial values of mass and radius, computed with the different techniques agree with one another and with the estimates coming from X-ray and weak lensing studies. A combined mass profile is obtained by averaging the lensing, X-ray, and kinematics determinations. The cluster mass profile is well fitted by an NFW profile with c = 4.0 ± 0.5. The populations of red and blue galaxies appear to have different velocity anisotropy configurations: red galaxies are almost isotropic, while blue galaxies are radially anisotropic, with a weak dependence on radius. The Q(r) profile for the red galaxy population agrees with the theoretical results found in cosmological simulations, suggesting that any bias, relative to the dark matter particles, in the velocity dispersion of the red component is independent of radius. The β-γ relation for red galaxies matches the theoretical relation only in the inner region. The deviations might be due to the use of galaxies as tracers of the gravitational potential, unlike the non-collisional tracer used in the theoretical relation.
Ray, J.; Lee, J.; Yadav, V.; Lefantzi, S.; Michalak, A. M.; van Bloemen Waanders, B.
2015-04-29
Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, and model them using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure that are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also
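In the spirit of the StOMP-based approach described above, a toy stagewise sparse fit can be sketched as follows: at each stage, every column whose correlation with the residual exceeds a threshold joins the active set, a least-squares refit is performed on that set, and the estimate is clipped at zero to mimic the non-negativity adaptation. The threshold rule, the clipping step, and all parameter names are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def stagewise_sparse_fit(A, y, n_stages=10, thresh_factor=2.0):
    """Toy stagewise matching pursuit: add all columns whose residual
    correlation exceeds a threshold, refit by least squares on the active
    set, and clip at zero to mimic a non-negativity constraint."""
    m, n = A.shape
    active = np.zeros(n, dtype=bool)
    x = np.zeros(n)
    for _ in range(n_stages):
        r = y - A @ x
        corr = np.abs(A.T @ r)
        sigma = np.linalg.norm(r) / np.sqrt(m)    # crude noise-level proxy
        newly = corr > thresh_factor * sigma
        if not newly.any():
            break
        active |= newly
        x[:] = 0.0
        x[active], *_ = np.linalg.lstsq(A[:, active], y, rcond=None)
        x = np.maximum(x, 0.0)                    # enforce non-negativity
    return x
```

On noiseless synthetic data with a sparse non-negative truth, the refit step recovers the true coefficients once the true columns enter the active set.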
Cycle-Based Cluster Variational Method for Direct and Inverse Inference
NASA Astrophysics Data System (ADS)
Furtlehner, Cyril; Decelle, Aurélien
2016-08-01
Large scale inference problems of practical interest can often be addressed with the help of Markov random fields. This requires, in principle, solving two related problems: the first is to find offline the parameters of the MRF from empirical data (the inverse problem); the second (the direct problem) is to set up the inference algorithm to make it as precise, robust and efficient as possible. In this work we address both the direct and inverse problems with mean-field methods of statistical physics, going beyond the Bethe approximation and the associated belief propagation algorithm. We elaborate on the idea that loop corrections to belief propagation can be dealt with in a systematic way on pairwise Markov random fields, by using the elements of a cycle basis to define regions in a generalized belief propagation setting. For the direct problem, the region graph is specified in such a way as to avoid feed-back loops as much as possible by selecting a minimal cycle basis. Following this line we are led to propose a two-level algorithm, where a belief propagation algorithm is run alternately at the level of each cycle and at the inter-region level. Next we observe that the inverse problem can be addressed region by region independently, with one small inverse problem per region to be solved. It turns out that each elementary inverse problem on the loop geometry can be solved efficiently. In particular, in the random Ising context we propose two complementary methods based respectively on fixed point equations and on a one-parameter log likelihood function minimization. Numerical experiments confirm the effectiveness of this approach both for the direct and inverse MRF inference. Heterogeneous problems of size up to 10^5 are addressed in a reasonable computational time, notably with better convergence properties than ordinary belief propagation.
Identification of dynamic stiffness matrices of elastomeric joints using direct and inverse methods
NASA Astrophysics Data System (ADS)
Noll, Scott; Dreyer, Jason T.; Singh, Rajendra
2013-08-01
New experiments are designed to permit direct comparison between direct and inverse identification methods of the dynamic stiffness matrices of elastomeric joints, including non-diagonal terms. The joints are constructed with combinations of inclined elastomeric cylinders to control non-diagonal terms in the stiffness matrix. The inverse experiment consists of an elastic metal beam end-supported by elastomeric joints coupling the in-plane transverse and longitudinal beam motion. A prior method is extended to identify the joint dynamic stiffness matrices of dimension 3 from limited modal measurements of the beam. The dynamic stiffness and loss factors of the elastomeric cylinders are directly measured in a commercial elastomer test machine in shear, compression, and inclined configurations and a coordinate transformation is used to estimate the kinematic non-diagonal stiffness terms. Agreement is found for both dynamic stiffness and loss factors between the direct and inverse methods at small displacements. Further, the identified joint properties are employed in a model that successfully predicts the modal parameters and accelerance spectra of the inverse experiment. This article provides valuable insight on the difficulties encountered when comparing system and elastomeric component test results.
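The coordinate transformation mentioned above, which turns the principal stiffnesses of an inclined elastomeric cylinder into coupled joint-frame stiffness terms, can be illustrated with a minimal 2D sketch (the paper works with 3x3 dynamic stiffness matrices; the reduction to a 2D static case here is an assumption for clarity):

```python
import numpy as np

def inclined_stiffness(k_shear, k_comp, theta):
    """Transform the principal (shear, compression) stiffnesses of an
    elastomeric element inclined at angle theta into the joint frame.
    Off-diagonal coupling terms appear whenever theta != 0."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s], [-s, c]])           # rotation into principal axes
    K_local = np.diag([k_shear, k_comp])      # diagonal in principal axes
    return R.T @ K_local @ R
```

At theta = 0 the joint matrix stays diagonal; at 45 degrees the off-diagonal term equals half the difference of the principal stiffnesses, which is how inclination controls the non-diagonal entries.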
NASA Astrophysics Data System (ADS)
Zhang, Dong; Zhang, Xiaolei; Yuan, Jianzheng; Ke, Rui; Yang, Yan; Hu, Ying
2016-01-01
The Laplace-Fourier domain full waveform inversion can simultaneously restore both the long- and intermediate-to-short-wavelength information of velocity models because of its unique characteristic of complex frequencies. This approach solves the problem of conventional frequency-domain waveform inversion, in which the inversion result is excessively dependent on the initial model due to the lack of low frequency information in seismic data. Nevertheless, Laplace-Fourier domain waveform inversion requires substantial computational resources and long computation times because the inversion must be implemented on different combinations of multiple damping constants and multiple frequencies, namely, the complex frequencies, which are much more numerous than the Fourier frequencies. If the entire target model is computed at every complex frequency in the Laplace-Fourier domain inversion (as in conventional frequency domain inversion), much of this computation is redundant. In Laplace-Fourier domain waveform inversion, the maximum depth penetrated by the seismic wave decreases greatly due to the application of exponential damping to the seismic record, especially when a larger damping constant is used. Thus, the depth of the area effectively inverted at a complex frequency tends to be much less than the model depth. In this paper, we propose a method for quantitative estimation of the effective inversion depth in the Laplace-Fourier domain inversion, based on the principle of seismic wave propagation and mathematical analysis. According to the estimated effective inversion depth, we can invert and update only the model area above the effective depth for every complex frequency without loss of accuracy in the final inversion result. Thus, redundant computation is eliminated, and the efficiency of the Laplace-Fourier domain waveform inversion is improved. The proposed method was tested in numerical experiments. The experimental results show that
Bledsoe, Keith C.
2015-04-01
The DiffeRential Evolution Adaptive Metropolis (DREAM) method is a powerful optimization/uncertainty quantification tool used to solve inverse transport problems in Los Alamos National Laboratory’s INVERSE code system. The DREAM method has been shown to be adept at accurate uncertainty quantification, but it can be very computationally demanding. Previously, the DREAM method in INVERSE performed a user-defined number of particle transport calculations. This placed a burden on the user to guess the number of calculations that would be required to accurately solve any given problem. This report discusses a new approach that has been implemented into INVERSE, the Gelman-Rubin convergence metric. This metric automatically detects when an appropriate number of transport calculations have been completed and the uncertainty in the inverse problem has been accurately calculated. In a test problem with a spherical geometry, this method was found to decrease the number of transport calculations (and thus time required) to solve a problem by an average of over 90%. In a cylindrical test geometry, a 75% decrease was obtained.
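The Gelman-Rubin metric that the report describes is the standard potential scale reduction factor, which compares within-chain and between-chain variance across parallel MCMC chains; a minimal textbook version (not the INVERSE implementation) looks like this:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for MCMC chains of shape
    (n_chains, n_samples). Values near 1 indicate that further transport
    calculations are unlikely to change the posterior estimate."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()          # within-chain variance
    B = n * chain_means.var(ddof=1)                # between-chain variance
    var_plus = (n - 1) / n * W + B / n             # pooled variance estimate
    return np.sqrt(var_plus / W)
```

Chains drawn from the same distribution give R-hat near 1, while chains stuck in different modes give values well above 1, which is the signal used to stop adding transport calculations.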
NASA Astrophysics Data System (ADS)
Ke, Quanpeng
Heat flux and heat transfer coefficients at the interfaces of castings and molds are important parameters in mold design and in computer simulations of the solidification process in foundry operations. A better understanding of the heat flux and heat transfer coefficient between the solidifying casting and its mold can improve mold design and the accuracy of computer simulation. The main purpose of the present dissertation is the estimation of the heat flux and heat transfer coefficient at the interface of the molten metal and green sand. Since the inverse heat conduction method requires temperature measurement data to deduce the missing surface information, it is suitable for the present research. However, heat transfer inside green sand is complicated by the migration of water vapor and the zonal temperature distribution that results, which makes the solution of the inverse heat conduction problem more challenging. In this dissertation, Galerkin's method of weighted residuals, together with a front-tracking technique, is used in the development of a forward solver. Beck's future time step method, incorporating a Gaussian iterative minimization method, is used as the inverse solver. The mathematical descriptions of the sensitivity coefficient for both direct heat flux and direct heat transfer coefficient estimation are derived, and the variations of the sensitivity coefficients with time are revealed. From the analysis of the sensitivity coefficients, the concept of a blank time period is proposed. This blank time period makes the inverse problem much more difficult; a total energy balance criterion is used to combat this. Numerical experiments confirmed the accuracy and robustness of both the direct heat flux estimation algorithm and the direct heat transfer coefficient estimation algorithm. Finally, some pouring experiments are carried out. The inverse algorithms are applied to the estimation of the heat flux and heat transfer coefficient at the interface of
FOREWORD: 3rd International Workshop on New Computational Methods for Inverse Problems (NCMIP 2013)
NASA Astrophysics Data System (ADS)
Blanc-Féraud, Laure; Joubert, Pierre-Yves
2013-10-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 3rd International Workshop on New Computational Methods for Inverse Problems, NCMIP 2013 (http://www.farman.ens-cachan.fr/NCMIP_2013.html). This workshop took place at Ecole Normale Supérieure de Cachan, in Cachan, France, on 22 May 2013, at the initiative of Institut Farman. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of the ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 (http://www.farman.ens-cachan.fr/NCMIP_2012.html). The NCMIP Workshop focused on recent advances in the resolution of inverse problems. Indeed inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational
FOREWORD: 2nd International Workshop on New Computational Methods for Inverse Problems (NCMIP 2012)
NASA Astrophysics Data System (ADS)
Blanc-Féraud, Laure; Joubert, Pierre-Yves
2012-09-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 2nd International Workshop on New Computational Methods for Inverse Problems (NCMIP 2012). This workshop took place at Ecole Normale Supérieure de Cachan, in Cachan, France, on 15 May 2012, at the initiative of Institut Farman. The first edition of NCMIP also took place in Cachan, France, within the scope of the ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/). The NCMIP Workshop focused on recent advances in the resolution of inverse problems. Indeed inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition
NASA Astrophysics Data System (ADS)
Li, Jinghe; Song, Linping; Liu, Qing Huo
2016-02-01
A simultaneous multiple frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions. It simulates the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver of the 2D volume integral equation for the forward computation. The inversion technique combines the efficient FFT algorithm, which speeds up the matrix-vector multiplication, with the stable convergence of the simultaneous multiple frequency CSI in the iteration process. As a result, the method is capable of effective quantitative conductivity image reconstruction for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples validate the effectiveness and capacity of the simultaneous multiple frequency CSI method for a limited array view in VEP.
Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1989-01-01
An inverse wing design method was developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver, FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; wing and fuselage fitted, curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse methods as an extension of previous methods existing for design in Cartesian coordinates is presented. Results are shown for inviscid wing design cases in super-critical flow regimes. The test cases selected also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.
Solution of non-linear inverse heat conduction problems using the method of lines
NASA Astrophysics Data System (ADS)
Taler, J.; Duda, P.
Two space marching methods for solving the one-dimensional nonlinear inverse heat conduction problem are presented. The temperature-dependent thermal properties and the boundary condition on the accessible part of the boundary of the body are known. Additional temperature measurements in time are taken with a sensor located at an arbitrary position within the solid, and the objective is to determine the surface temperature and heat flux on the remaining, unspecified part of the boundary. The methods have the advantage that time derivatives are not replaced by finite differences; the good accuracy of the methods results from an appropriate approximation of the first time derivative using smoothing polynomials. The extension of the first method presented in this study to higher-dimensional inverse heat conduction problems is straightforward.
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian; Wilson, John L.
2000-09-01
Inverse methods can be used to reconstruct the release history of a known source of groundwater contamination from concentration data describing the present-day spatial distribution of the contaminant plume. Using hypothetical release history functions and contaminant plumes, we evaluate the relative effectiveness of two proposed inverse methods, Tikhonov regularization (TR) and minimum relative entropy (MRE) inversion, in reconstructing the release history of a conservative contaminant in a one-dimensional domain [Skaggs and Kabala, 1994; Woodbury and Ulrych, 1996]. We also address issues of reproducibility of the solution and the appropriateness of models for simulating random measurement error. The results show that if error-free plume concentration data are available, both methods perform well in reconstructing a smooth source history function. With error-free data the MRE method is more robust than TR in reconstructing a nonsmooth source history function; however, the TR method is more robust if the data contain measurement error. Two error models were evaluated in this study, and we found that the particular error model does not affect the reliability of the solutions. The results for the TR method have somewhat greater reproducibility because, in some cases, its input parameters are less subjective than those of the MRE method; however, the MRE solution can identify regions where the data give little or no information about the source history function, while the TR solution cannot.
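The TR approach compared in this study is, at its core, the standard Tikhonov-regularized least-squares solve; a minimal zeroth-order sketch follows (the choice of the regularization parameter lam, e.g. by an L-curve, is left out, and the function name is an assumption):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Zeroth-order Tikhonov solution x = argmin ||Ax - b||^2 + lam^2 ||x||^2,
    computed via the augmented least-squares system, which avoids forming
    the normal equations explicitly."""
    n = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x
```

The returned x satisfies the regularized normal equations (A^T A + lam^2 I) x = A^T b, which is the property that damps the noise amplification inherent in the deconvolution of a release history.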
Supercritical blade design on stream surfaces of revolution with an inverse method
NASA Technical Reports Server (NTRS)
Schmidt, E.; Grein, H.-D.
1991-01-01
A method to solve the inverse problem of supercritical blade-to-blade flow on stream surfaces of revolution with variable radius and variable stream surface thickness in a relative system is described. Some aspects of shockless design and of leading edge resolution in the numerical procedure are depicted. Some supercritical compressor cascades were designed and their complete flow field results were compared with computations of two different analysis methods.
A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Liu, Shuang; Hu, Xiangyun; Liu, Tianyou
2014-07-01
Simulating the foraging behavior of natural ants, the ant colony optimization (ACO) algorithm performs excellently in combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, the ACO has seldom been used to invert gravity and magnetic data. Starting from the continuous, multi-dimensional objective function for potential field data inversion, we present the node partition strategy ACO (NP-ACO) algorithm for the inversion of model variables of fixed shape and the recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes, and ants directionally tour the nodes by use of transition probabilities. We update the pheromone trails by use of a Gaussian mapping between the objective function value and the quantity of pheromone; this allows the search results to be analyzed in real time and improves the rate of convergence and the precision of inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method on synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
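A toy version of the node-partition idea, discretizing each continuous model variable into nodes toured by ants with pheromone-proportional probabilities and a Gaussian-style pheromone mapping, might look like the sketch below; all control parameters and the exact update rules are illustrative assumptions, not the NP-ACO algorithm itself:

```python
import numpy as np

def node_partition_aco(objective, bounds, n_nodes=21, n_ants=30, n_iter=60,
                       rho=0.1, seed=0):
    """Minimal node-partition ACO sketch: each continuous model variable is
    discretized into n_nodes candidate values; ants pick one node per
    variable with probability proportional to its pheromone, and trails
    are reinforced via a Gaussian-style mapping of the objective value."""
    rng = np.random.default_rng(seed)
    grids = [np.linspace(lo, hi, n_nodes) for lo, hi in bounds]
    tau = np.ones((len(bounds), n_nodes))        # pheromone trails
    best_x, best_f = None, np.inf
    for _ in range(n_iter):
        tau *= (1.0 - rho)                       # evaporation
        for _ant in range(n_ants):
            idx = [rng.choice(n_nodes, p=t / t.sum()) for t in tau]
            x = np.array([g[i] for g, i in zip(grids, idx)])
            f = objective(x)
            if f < best_f:
                best_x, best_f = x, f
            deposit = np.exp(-f)                 # Gaussian-style mapping
            for d, i in enumerate(idx):
                tau[d, i] += deposit
    return best_x, best_f
```

On a simple quadratic misfit the pheromone concentrates on the nodes nearest the minimum, which is the mechanism the abstract credits for improved convergence.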
New Advances for a joint 3D inversion of multiple EM methods
NASA Astrophysics Data System (ADS)
Meqbel, N. M.; Ritter, O.
2013-12-01
Electromagnetic (EM) methods are routinely applied to image the subsurface, from shallow to regional structures. Individual EM methods differ in their sensitivities to resistive and conductive structures as well as in their exploration depths. Joint 3D inversion of multiple EM data sets can resolve subsurface structures significantly better than the individual inversions; proper weighting between the different EM data is essential, however. We present a recently developed weighting algorithm to combine magnetotelluric (MT), controlled source EM (CSEM) and DC-geoelectric (DC) data. It is well known that MT data are mostly sensitive to regional conductive structures, whereas CSEM and DC data are better suited to recovering shallower and more resistive structures. Our new scheme is based on weighting individual components of the total data gradient after each model update. Norms of each data residual are used to assess how much weight individual components of the total data gradient must have to achieve an equal contribution of all data sets in the inverse model. A numerically efficient way to search for appropriate weighting factors could be established by applying a bi-diagonalization procedure to the sensitivity matrix. Thereby, the original inverse problem can be projected onto a smaller dimension in which the search for weighting factors is numerically cheap. We demonstrate the efficiency of the proposed weighting scheme and explore the model domain with synthetic data sets.
Are Abell Clusters Correlated with Gamma-Ray Bursts?
NASA Technical Reports Server (NTRS)
Hurley, K.; Hartmann, D.; Kouveliotou, C.; Fishman, G.; Laros, J.; Cline, T.; Boer, M.
1997-01-01
A recent study has presented marginal statistical evidence that gamma-ray burst (GRB) sources are correlated with Abell clusters, based on analyses of bursts in the BATSE 3B catalog. Using precise localization information from the Third Interplanetary Network, we have reanalyzed this possible correlation. We find that most of the Abell clusters that are in the relatively large 3B error circles are not in the much smaller IPN/BATSE error regions. We believe that this argues strongly against an Abell cluster-GRB correlation.
The generalized Phillips-Twomey method for NMR relaxation time inversion
NASA Astrophysics Data System (ADS)
Gao, Yang; Xiao, Lizhi; Zhang, Yi; Xie, Qingming
2016-10-01
The inversion of NMR relaxation time involves a Fredholm integral equation of the first kind. Due to its ill-posedness, numerical solutions to this type of equation are often highly inaccurate and bear little resemblance to the true solution. There has been a strong interest in finding a well-posed method for this ill-posed problem since the 1950s. In this paper, we prove the existence, uniqueness, stability and convergence of the generalized Phillips-Twomey regularization method for solving this type of equation. Numerical simulations and core analyses arising from NMR transverse relaxation time inversion are conducted to show the effectiveness of the generalized Phillips-Twomey method. Both the simulation results and the core analyses agree well with the model and with reality.
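The classical Phillips-Twomey method (of which the paper proves properties for a generalized variant) regularizes the discretized Fredholm system with a second-difference smoothing operator; a textbook sketch:

```python
import numpy as np

def phillips_twomey(K, g, gamma):
    """Phillips-Twomey regularized solution of the discretized Fredholm
    system K f = g: minimize ||K f - g||^2 + gamma ||D2 f||^2, where D2
    is the second-difference (smoothing) operator. A sketch of the
    classical method, not the paper's generalized variant."""
    n = K.shape[1]
    D2 = np.diff(np.eye(n), n=2, axis=0)      # (n-2) x n second differences
    return np.linalg.solve(K.T @ K + gamma * D2.T @ D2, K.T @ g)
```

The penalty on D2 f favors smooth relaxation-time distributions, which is what stabilizes the otherwise ill-posed inversion.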
Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H
2016-05-01
The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate, first, the abundance and, second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and by employing a nonlinear inversion involving a scattering model-based kernel. PMID:27250181
Application of the Discrete Regularization Method to the Inverse of the Chord Vibration Equation
NASA Astrophysics Data System (ADS)
Wang, Linjun; Han, Xu; Wei, Zhouchao
The inverse problem of recovering the initial condition of the chord vibration equation from boundary values is ill-posed. First, we transform it into a Fredholm integral equation. Second, we discretize it by the trapezoidal rule and obtain a severely ill-conditioned linear system that is sensitive to perturbations of the data: a tiny error in the right-hand side causes large oscillations in the solution, and good results cannot be obtained by traditional methods. In this paper, we solve this problem by the Tikhonov regularization method, and numerical simulations demonstrate that this method is feasible and effective.
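The severe ill-conditioning that the trapezoidal discretization produces is easy to exhibit numerically; the Gaussian kernel below is an arbitrary smooth example, not the chord-vibration kernel:

```python
import numpy as np

def fredholm_matrix(kernel, a, b, n):
    """Discretize the first-kind Fredholm equation
    int_a^b kernel(s, t) f(t) dt = g(s) by the trapezoidal rule on an
    n-point grid, returning the matrix A (with A f ~= g) and the grid t."""
    t = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5                                 # trapezoid end weights
    return kernel(t[:, None], t[None, :]) * w, t

kernel = lambda s, t: np.exp(-(s - t) ** 2)      # smooth kernel -> ill-posed
A, t = fredholm_matrix(kernel, 0.0, 1.0, 40)
print(np.linalg.cond(A))                         # enormous condition number
```

Even at a modest 40 grid points, the condition number far exceeds the reciprocal of double precision, so unregularized solves amplify data noise catastrophically, which is exactly why Tikhonov regularization is needed.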
Inversion methods for fast-ion velocity-space tomography in fusion plasmas
NASA Astrophysics Data System (ADS)
Jacobsen, A. S.; Stagner, L.; Salewski, M.; Geiger, B.; Heidbrink, W. W.; Korsholm, S. B.; Leipold, F.; Nielsen, S. K.; Rasmussen, J.; Stejner, M.; Thomsen, H.; Weiland, M.; the ASDEX Upgrade Team
2016-04-01
Velocity-space tomography has been used to infer 2D fast-ion velocity distribution functions. Here we compare the performance of five different tomographic inversion methods: truncated singular value decomposition, maximum entropy, minimum Fisher information and zeroth- and first-order Tikhonov regularization. The inversion methods are applied to fast-ion Dα measurements taken just before and just after a sawtooth crash in the ASDEX Upgrade tokamak as well as to synthetic measurements from different test distributions. We find that the methods regularizing by penalizing steep gradients or maximizing entropy perform best. We assess the uncertainty of the calculated inversions taking into account photon noise, uncertainties in the forward model as well as uncertainties introduced by the regularization, which allows us to distinguish regions of high and low confidence in the tomographies. In high confidence regions, all methods agree that ions with pitch values close to zero, as well as ions with large pitch values, are ejected from the plasma center by the sawtooth crash, and that this ejection depletes the ion population with large pitch values more strongly.
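The first of the five inversion methods compared, truncated singular value decomposition, can be sketched in a few lines: the pseudo-inverse is formed from only the k largest singular values of the forward matrix, suppressing the noise-amplifying small-singular-value directions (the function name and interface are assumptions):

```python
import numpy as np

def tsvd_inversion(W, b, k):
    """Truncated-SVD inversion: keep only the k largest singular values
    of the forward (weight) matrix W when forming the pseudo-inverse,
    discarding the directions that amplify measurement noise."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return Vt[:k].T @ (U[:, :k].T @ b / s[:k])
```

Choosing k trades resolution against noise amplification; with k equal to the full rank and noiseless data, the truncated pseudo-inverse reduces to the exact solution.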
Multiply scattered aerosol lidar returns: inversion method and comparison with in situ measurements.
Bissonnette, L R; Hutt, D L
1995-10-20
A novel aerosol lidar inversion method based on the use of multiple-scattering contributions measured by a multiple-field-of-view receiver is proposed. The method requires assumptions that restrict applications to aerosol particles large enough to give rise to measurable multiple scattering, and it depends on parameters that must be specified empirically but whose uncertainty has much less effect than the boundary value and the backscatter-to-extinction ratio required by conventional single-scattering inversion methods. The proposed method is applied to cloud measurements. The solutions obtained are the profiles of the scattering coefficient and the effective diameter of the cloud droplets. With mild assumptions on the form of the size distribution function, the full size distribution is estimated at each range position, from which the extinction coefficient at any visible and infrared wavelength and the liquid water content can be determined. Typical results on slant-path-integrated optical depth, vertical extinction profiles, and fluctuation statistics are compared with in situ data obtained in two field experiments. The inversion works well in all cases reported here, i.e., for water clouds at optical depths between ~0.1 and ~4.
Inverse scattering solutions by a sinc basis, multiple source, moment method--Part I: Theory.
Johnson, S A; Tracy, M L
1983-10-01
A new method for solving the inverse scattering problem for the scalar, inhomogeneous, exact Helmholtz wave equation is presented. No perturbation approximations are used, and the method is applicable even in many cases where weak to moderate attenuation and moderate to strong refraction of incident fields occur. The ill-posed nature of the inverse scattering problem for a single monochromatic source is known. However, the use of multiple sources, the collection of redundant (i.e., overdetermined) data, and the constraining of the fields and complex refractive index to be spatially band limited constitute a new problem. The cases we have tested by computer simulation indicate that the new problem is well posed, has a unique solution, and is stable with noisy data. The method is an application of the well-known method of moments, with sinc basis and delta testing functions used to discretize the problem. The inverse scattering solution may be obtained by solving the resulting set of simultaneous, quadratic, multivariate equations. Several algorithms for solving these equations are given. PMID:6686901
NASA Astrophysics Data System (ADS)
Michalak, Anna M.; Kitanidis, Peter K.
2004-08-01
As the incidence of groundwater contamination continues to grow, a number of inverse modeling methods have been developed to address forensic groundwater problems. In this work the geostatistical approach to inverse modeling is extended to allow for the recovery of the antecedent distribution of a contaminant at a given point back in time, which is critical to the assessment of historical exposure to contamination. Such problems are typically strongly underdetermined, with a large number of points at which the distribution is to be estimated. To address this challenge, the computational efficiency of the new method is increased through the application of the adjoint state method. In addition, the adjoint problem is presented in a format that allows for the reuse of existing groundwater flow and transport codes as modules in the inverse modeling algorithm. As demonstrated in the presented applications, the geostatistical approach combined with the adjoint state method allows a historical multidimensional contaminant distribution to be recovered even in heterogeneous media, where a numerical solution is required for the forward problem.
The Abell 85 BCG: A Nucleated, Coreless Galaxy
NASA Astrophysics Data System (ADS)
Madrid, Juan P.; Donzelli, Carlos J.
2016-03-01
New high-resolution r-band imaging of the brightest cluster galaxy (BCG) in Abell 85 (Holm 15A) was obtained using the Gemini Multi Object Spectrograph. These data were taken with the aim of deriving an accurate surface brightness profile of the BCG of Abell 85, in particular, its central region. The new Gemini data show clear evidence of a previously unreported nuclear emission that is evident as a distinct light excess in the central kiloparsec of the surface brightness profile. We find that the light profile is never flat nor does it present a downward trend toward the center of the galaxy. That is, the new Gemini data show a different physical reality from the featureless, “evacuated core” recently claimed for the Abell 85 BCG. After trying different models, we find that the surface brightness profile of the BCG of Abell 85 is best fit by a double Sérsic model.
Structural Anomaly Detection Using Fiber Optic Sensors and Inverse Finite Element Method
NASA Technical Reports Server (NTRS)
Quach, Cuong C.; Vazquez, Sixto L.; Tessler, Alex; Moore, Jason P.; Cooper, Eric G.; Spangler, Jan. L.
2005-01-01
NASA Langley Research Center is investigating a variety of techniques for mitigating aircraft accidents due to structural component failure. One technique under consideration combines distributed fiber optic strain sensing with an inverse finite element method for detecting and characterizing structural anomalies that may provide early indication of airframe structure degradation. The technique identifies structural anomalies that result in observable changes in localized strain but do not impact the overall surface shape. Surface shape information is provided by an Inverse Finite Element Method that computes full-field displacements and internal loads using strain data from in-situ fiber optic sensors. This paper describes a prototype of such a system and reports results from a series of laboratory tests conducted on a test coupon subjected to increasing levels of damage.
NASA Technical Reports Server (NTRS)
Vazquez, Sixto L.; Tessler, Alexander; Quach, Cuong C.; Cooper, Eric G.; Parks, Jeffrey; Spangler, Jan L.
2005-01-01
In an effort to mitigate accidents due to system and component failure, NASA's Aviation Safety program has partnered with industry, academia, and other governmental organizations to develop real-time, on-board monitoring capabilities and system performance models for early detection of airframe structure degradation. NASA Langley is investigating a structural health monitoring capability that uses a distributed fiber optic strain system and an inverse finite element method for measuring and modeling structural deformations. This report describes the constituent systems that enable this structural monitoring function and discusses results from laboratory tests using the fiber strain sensor system and the inverse finite element method to demonstrate structural deformation estimation on an instrumented test article.
Performance evaluation of the inverse dynamics method for optimal spacecraft reorientation
NASA Astrophysics Data System (ADS)
Ventura, Jacopo; Romano, Marcello; Walter, Ulrich
2015-05-01
This paper investigates the application of the inverse dynamics in the virtual domain method to Euler angles, quaternions, and modified Rodrigues parameters for rapid optimal attitude trajectory generation for spacecraft reorientation maneuvers. The impact of the virtual domain and attitude representation is numerically investigated for both minimum time and minimum energy problems. Owing to the nature of the inverse dynamics method, it yields sub-optimal solutions for minimum time problems. Furthermore, the virtual domain improves the optimality of the solution, but at the cost of more computational time. The attitude representation also affects solution quality and computational speed. For minimum energy problems, the optimal solution can be obtained without the virtual domain with any considered attitude representation.
The genus curve of the Abell clusters
NASA Technical Reports Server (NTRS)
Rhoads, James E.; Gott, J. Richard, III; Postman, Marc
1994-01-01
We study the topology of large-scale structure through a genus curve measurement of the recent Abell catalog redshift survey of Postman, Huchra, and Geller (1992). The structure is found to be spongelike near median density and to exhibit isolated superclusters and voids at high and low densities, respectively. The genus curve shows a slight shift toward 'meatball' topology, but remains consistent with the hypothesis of Gaussian random phase initial conditions. The amplitude of the genus curve corresponds to a power-law spectrum with index n = 0.21 (+0.43, -0.47) on scales of 48/h Mpc, or to a cold dark matter power spectrum with Omega h = 0.36 (+0.46, -0.17).
NASA Astrophysics Data System (ADS)
Grigorov, Igor V.
2009-12-01
This article considers an algorithm for the numerical modelling of the Korteweg-de Vries equation, which gives rise to a nonlinear algorithm for digital signal processing. To realize this algorithm, the use of the inverse scattering method (ISM) is proposed. Algorithms for the direct and inverse spectral problems, as well as for the evolution of the spectral data, are considered in detail. Modelling results are presented.
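A full inverse-scattering implementation is too long to sketch here; as a hedged stand-in for the forward side of such modelling, the following minimal split-step Fourier scheme propagates a single soliton of the KdV equation u_t + 6uu_x + u_xxx = 0. The grid size, time step, and step count are illustrative choices, not taken from the article.

```python
import numpy as np

def kdv_split_step(u0, dx, dt, steps):
    """Propagate u_t + 6 u u_x + u_xxx = 0 with Strang splitting.

    The linear (dispersive) part is advanced exactly in Fourier space;
    the nonlinear part uses short explicit Euler half-steps.
    """
    n = u0.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    lin = np.exp(1j * k**3 * dt)          # exact factor for u_t = -u_xxx
    u = u0.astype(float).copy()
    for _ in range(steps):
        # half nonlinear step: u_t = -3 (u^2)_x
        u = u - 0.5 * dt * np.real(np.fft.ifft(3j * k * np.fft.fft(u**2)))
        # full linear step
        u = np.real(np.fft.ifft(lin * np.fft.fft(u)))
        # half nonlinear step
        u = u - 0.5 * dt * np.real(np.fft.ifft(3j * k * np.fft.fft(u**2)))
    return u

# single-soliton initial condition u = (c/2) sech^2(sqrt(c)/2 (x - x0))
L, n, c = 50.0, 256, 1.0
x = np.linspace(0.0, L, n, endpoint=False)
u0 = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - 15.0))**2
u = kdv_split_step(u0, dx=L / n, dt=1e-3, steps=500)
```

A soliton should travel at speed c while keeping its amplitude c/2, which gives a quick sanity check on the scheme.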
Laboratory evaluation of a hydrodynamic inverse modeling method based on water content data
NASA Astrophysics Data System (ADS)
Lambot, S.; Hupet, F.; Javaux, M.; Vanclooster, M.
2004-03-01
The inverse modeling method of [2002] for estimating the hydraulic properties of partially saturated soils, previously validated numerically, is further tested on laboratory-scale transient flow experiments. The method uses the global multilevel coordinate search algorithm, combined sequentially with the local Nelder-Mead simplex algorithm, to invert the one-dimensional Richards equation using soil moisture time series measured at three different depths during natural infiltration. Flow experiments were conducted on a homogeneous artificial sand column and three undisturbed soil columns collected from agricultural fields. Three models describing the unsaturated soil hydraulic properties were used and compared: the model of Mualem and van Genuchten, the model of Assouline, and the decoupled van Genuchten-Brooks and Corey combination. The three models performed similarly, although Assouline's model provided poorer results in two cases. The inversion method provided relatively good estimates of the water retention curves, and also of the saturated conductivity when the moisture range explored was not too small. Water content time series were very well reproduced for the artificial soil and a sandy loam soil, but larger errors were observed for two silt loam soils. The prediction of the water transfer behavior in the soil columns was poor when flow properties were estimated using directly determined hydraulic properties. The main limiting factor for applying the inversion method, particularly for nonsandy soils, was the characterization of the initial conditions in terms of the pressure head profile. Furthermore, the ability to rely on soil moisture data alone is appealing for the hydrogeophysical characterization of soils.
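The local simplex stage of such an inversion can be sketched on synthetic retention data. This toy example assumes SciPy is available and fits only two van Genuchten parameters (alpha, n) to noiseless data, with the residual and saturated water contents held fixed; it is a simplification, not the authors' full Richards-equation procedure.

```python
import numpy as np
from scipy.optimize import minimize

def van_genuchten(h, alpha, n, theta_r=0.05, theta_s=0.40):
    """Water retention theta(h) for suction head h > 0 (van Genuchten model)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h)**n)**m

# synthetic noiseless retention data with known parameters
h = np.logspace(0, 3, 40)                 # suction heads (illustrative units)
theta_obs = van_genuchten(h, alpha=0.03, n=2.0)

def sse(p):                               # sum-of-squares misfit
    alpha, n = p
    if alpha <= 0 or n <= 1:              # reject non-physical parameters
        return 1e6
    return np.sum((van_genuchten(h, alpha, n) - theta_obs)**2)

res = minimize(sse, x0=[0.05, 1.5], method="Nelder-Mead")
alpha_hat, n_hat = res.x
```

On noiseless synthetic data, the simplex search should recover the generating parameters to well within measurement-level accuracy.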
Interpretation of Trace Gas Data Using Inverse Methods and Global Chemical Transport Models
NASA Technical Reports Server (NTRS)
Prinn, Ronald G.
1997-01-01
This is a theoretical research project aimed at: (1) development, testing, and refining of inverse methods for determining regional and global transient source and sink strengths for long lived gases important in ozone depletion and climate forcing, (2) utilization of inverse methods to determine these source/sink strengths which use the NCAR/Boulder CCM2-T42 3-D model and a global 3-D Model for Atmospheric Transport and Chemistry (MATCH) which is based on analyzed observed wind fields (developed in collaboration by MIT and NCAR/Boulder), (3) determination of global (and perhaps regional) average hydroxyl radical concentrations using inverse methods with multiple titrating gases, and, (4) computation of the lifetimes and spatially resolved destruction rates of trace gases using 3-D models. Important goals include determination of regional source strengths of methane, nitrous oxide, and other climatically and chemically important biogenic trace gases and also of halocarbons restricted by the Montreal Protocol and its follow-on agreements and hydrohalocarbons used as alternatives to the restricted halocarbons.
Studies of Trace Gas Chemical Cycles Using Inverse Methods and Global Chemical Transport Models
NASA Technical Reports Server (NTRS)
Prinn, Ronald G.
2003-01-01
We report progress in the first year, and summarize proposed work for the second year of the three-year dynamical-chemical modeling project devoted to: (a) development, testing, and refining of inverse methods for determining regional and global transient source and sink strengths for long lived gases important in ozone depletion and climate forcing, (b) utilization of inverse methods to determine these source/sink strengths using either MATCH (Model for Atmospheric Transport and Chemistry) which is based on analyzed observed wind fields or back-trajectories computed from these wind fields, (c) determination of global (and perhaps regional) average hydroxyl radical concentrations using inverse methods with multiple titrating gases, and (d) computation of the lifetimes and spatially resolved destruction rates of trace gases using 3D models. Important goals include determination of regional source strengths of methane, nitrous oxide, methyl bromide, and other climatically and chemically important biogenic/anthropogenic trace gases and also of halocarbons restricted by the Montreal protocol and its follow-on agreements and hydrohalocarbons now used as alternatives to the restricted halocarbons.
Hesford, Andrew J.; Chew, Weng C.
2010-01-01
The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438
Design of Aspirated Compressor Blades Using Three-dimensional Inverse Method
NASA Technical Reports Server (NTRS)
Dang, T. Q.; Rooij, M. Van; Larosiliere, L. M.
2003-01-01
A three-dimensional viscous inverse method is extended to allow blading design with full interaction between the prescribed pressure-loading distribution and a specified transpiration scheme. Transpiration on blade surfaces and endwalls is implemented as inflow/outflow boundary conditions, and the basic modifications to the method are outlined. This paper focuses on a discussion concerning an application of the method to the design and analysis of a supersonic rotor with aspiration. Results show that an optimum combination of pressure-loading tailoring with surface aspiration can lead to a minimization of the amount of sucked flow required for a net performance improvement at design and off-design operations.
The magnitude-redshift relation for 561 Abell clusters
NASA Technical Reports Server (NTRS)
Postman, M.; Huchra, J. P.; Geller, M. J.; Henry, J. P.
1985-01-01
The Hubble diagram for the 561 Abell clusters with measured redshifts has been examined using Abell's (1958) corrected photo-red magnitudes for the tenth-ranked cluster member (m10). After correction for the Scott effect and K dimming, the data are in good agreement with a linear magnitude-redshift relation with a slope of 0.2 out to z = 0.1. New redshift data are also presented for 20 Abell clusters. Abell's m10 is suitable for redshift estimation for clusters with m10 of no more than 16.5. At fainter m10, the number of foreground galaxies expected within an Abell radius is large enough to make identification of the tenth-ranked galaxy difficult. Interlopers bias the estimated redshift toward low values at high redshift. Leir and van den Bergh's (1977) redshift estimates suffer from this same bias but to a smaller degree because of the use of multiple cluster parameters. Constraints on deviations of cluster velocities from the mean cosmological flow require greater photometric accuracy than is provided by Abell's m10 magnitudes.
3D CSEM data inversion using Newton and Halley class methods
NASA Astrophysics Data System (ADS)
Amaya, M.; Hansen, K. R.; Morten, J. P.
2016-05-01
For the first time in 3D controlled source electromagnetic data inversion, we explore the use of the Newton and the Halley optimization methods, which may show their potential when the cost function has a complex topology. The inversion is formulated as a constrained nonlinear least-squares problem which is solved by iterative optimization. These methods require the derivatives up to second order of the residuals with respect to model parameters. We show how Green's functions determine the high-order derivatives, and develop a diagrammatical representation of the residual derivatives. The Green's functions are efficiently calculated on-the-fly, making use of a finite-difference frequency-domain forward modelling code based on a multi-frontal sparse direct solver. This allows us to build the second-order derivatives of the residuals while keeping the memory cost of the same order as in a Gauss-Newton (GN) scheme. Model updates are computed with a trust-region based conjugate-gradient solver which does not require the computation of a stabilizer. We present inversion results for a synthetic survey and compare the GN, Newton, and super-Halley optimization schemes, and consider two different approaches to set the initial trust-region radius. Our analysis shows that the Newton and super-Halley schemes, using the same regularization configuration, add significant information to the inversion so that convergence is reached by different paths. In our simple resistivity model examples, the convergence speed of the Newton and the super-Halley schemes is similar to or slightly better than that of the GN scheme close to the minimum of the cost function. Due to the current noise levels and other measurement inaccuracies in geophysical investigations, this advantageous behaviour is at present of low consequence, but may, with the further improvement of geophysical data acquisition, be an argument for more accurate higher-order methods like those
NASA Astrophysics Data System (ADS)
Qi, Zhipeng; Li, Xiu; Lu, Xushan; Zhang, Yingying; Yao, Weihua
2015-04-01
We introduce a new and potentially useful method for wave field inverse transformation and its application in transient electromagnetic method (TEM) 3D interpretation. The diffusive EM field is known to have a unique integral representation in terms of a fictitious wave field that satisfies a wave equation. Continuous imaging of TEM can be accomplished using the imaging methods of seismic interpretation after the diffusion equation is transformed into a fictitious wave equation. The interpretation method based on imaging of a fictitious wave field can be used as a fast 3D inversion method. Moreover, the fictitious wave field possesses wave field features that make it possible to apply wave field interpretation methods to TEM and thereby improve the prospecting resolution. Wave field transformation is a key issue in the migration imaging of a fictitious wave field. The wave field transformation is governed by a Fredholm integral equation of the first kind, a typical ill-posed problem. Additionally, TEM has a large dynamic time range, which further aggravates the ill-posedness. The wave field transformation is implemented using a pre-conditioned regularized conjugate gradient method. The continuous imaging of a fictitious wave field is implemented using Kirchhoff integration. A synthetic aperture and deconvolution algorithm is also introduced to improve the interpretation resolution. We interpreted field data with the method proposed in this paper and obtained a satisfactory interpretation result.
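A regularized conjugate-gradient solve for a discretized first-kind Fredholm equation can be sketched as follows. This is a plain Tikhonov/CG illustration with an arbitrary Gaussian smoothing kernel and an illustrative regularization weight, not the authors' preconditioned TEM implementation.

```python
import numpy as np

def tikhonov_cg(A, b, lam, iters=500, tol=1e-12):
    """Conjugate gradients on the regularized normal equations
    (A^T A + lam I) x = A^T b for a discretized first-kind equation."""
    M = A.T @ A + lam * np.eye(A.shape[1])
    rhs = A.T @ b
    x = np.zeros_like(rhs)
    r = rhs - M @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# smoothing kernel K(t, s) = exp(-(t - s)^2) discretized on [0, 1]
n = 60
t = np.linspace(0.0, 1.0, n)
A = np.exp(-(t[:, None] - t[None, :])**2) / n
x_true = np.sin(np.pi * t)
b = A @ x_true                 # noiseless synthetic data
x_rec = tikhonov_cg(A, b, lam=1e-6)
```

Because the kernel eigenvalues cluster near the regularization level, CG converges in far fewer iterations than the problem size would suggest.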
Depth-weighted Inverse and Imaging methods to study the Earth's Crust in Southern Italy
NASA Astrophysics Data System (ADS)
Fedi, M.
2012-04-01
Inversion means solving a set of geophysical equations for a spatial distribution of parameters (or functions) which could have produced an observed set of measurements. Imaging is instead a transformation of magnetometric data into a scaled 3D model resembling the true geometry of subsurface geologic features. While inversion theory allows many additional constraints (depth weighting, positivity, physical property bounds, smoothness, focusing), imaging methods for magnetic data derived under different theories are all found to reduce to either simple upward continuation or a depth-weighted upward continuation, with weights expressed in the general form of a power law of the altitude, with half the structural index as exponent. Note, however, that specifying the appropriate level of depth weighting is not just a problem in these imaging techniques but should also be considered in standard inversion methods. We will also investigate the relationship between imaging methods and multiscale methods. A multiscale analysis is well suited to studying potential fields because the way potential fields convey source information is strictly related to the scale of analysis. The stability of multiscale methods results from mixing, in a single operator, the low-pass wavenumber behaviour of the upward continuation of the field with the high-pass enhancement properties of nth-order derivative transformations. Thus, the complex reciprocal interference of several field components may be efficiently addressed at several scales of the analysis, and the depth to the sources may be estimated together with the homogeneity degrees of the field. We will describe the main aspects of both kinds of interpretation in the study of multi-source models and apply either inversion or imaging techniques to the magnetic data of complex crustal areas of Southern Italy, such as the Campanian volcanic district and the Southern Apennines. The studied area includes a Pleistocene
Fast full waveform inversion with source encoding and second-order optimization methods
NASA Astrophysics Data System (ADS)
Castellanos, Clara; Métivier, Ludovic; Operto, Stéphane; Brossier, Romain; Virieux, Jean
2015-02-01
Full waveform inversion (FWI) of 3-D data sets has recently become possible thanks to the development of high performance computing. However, FWI remains a computationally intensive task when high frequencies are injected in the inversion or more complex wave physics (viscoelastic) is accounted for. The highest computational cost results from the numerical solution of the wave equation for each seismic source. To reduce the computational burden, one well-known technique is to employ a random linear combination of the sources, rather than using each source independently. This technique, known as source encoding, has been shown to successfully reduce the computational cost when applied to real data. Up to now, the inversion has normally been carried out using gradient descent algorithms. With the aim of achieving a fast and robust frequency-domain FWI, we assess the performance of the random source encoding method when it is interfaced with second-order optimization methods (quasi-Newton l-BFGS, truncated Newton). Because of the additional seismic modelings required to compute the Newton descent direction, it is not clear beforehand whether truncated Newton methods can indeed further reduce the computational cost compared to gradient algorithms. We design precise stopping criteria for the iterations to fairly assess the computational cost and the speed-up provided by the source encoding method for each optimization method. We perform experiments on synthetic and real data sets. In both cases, we confirm that combining source encoding with second-order optimization methods reduces the computational cost compared to the case where source encoding is interfaced with gradient descent algorithms. For the synthetic data set, inspired from the geology of the Gulf of Mexico, we show that the quasi-Newton l-BFGS algorithm requires the lowest computational cost. For the real data set application on the Valhall data, we show that the truncated Newton methods provide the most robust direction of descent.
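The key property behind source encoding is that, for random ±1 weights with zero-mean cross products, the gradient of the encoded "supershot" misfit equals the full multi-source gradient in expectation. A minimal sketch with toy linear forward operators (all matrices and sizes are illustrative, not seismic modeling) demonstrates this by enumerating every sign pattern, so the cross terms cancel exactly:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
nsrc, nrec, nmod = 6, 12, 5
G = [rng.standard_normal((nrec, nmod)) for _ in range(nsrc)]  # per-source forward operators
m_true = rng.standard_normal(nmod)
m0 = np.zeros(nmod)
d = [Gi @ m_true for Gi in G]                                 # observed data per source

# full gradient of 0.5 * sum_i ||G_i m0 - d_i||^2  (one simulation per source)
r = [Gi @ m0 - di for Gi, di in zip(G, d)]
g_full = sum(Gi.T @ ri for Gi, ri in zip(G, r))

def encoded_gradient(eta):
    """Gradient from a single encoded supershot with weights eta."""
    G_enc = sum(e * Gi for e, Gi in zip(eta, G))
    d_enc = sum(e * di for e, di in zip(eta, d))
    return G_enc.T @ (G_enc @ m0 - d_enc)

# averaging over all 2^6 Rademacher sign patterns removes the cross terms
g_avg = np.mean([encoded_gradient(eta)
                 for eta in product((-1.0, 1.0), repeat=nsrc)], axis=0)
```

In practice only a few random encodings are drawn per iteration, trading exactness for a large reduction in the number of forward solves.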
NASA Technical Reports Server (NTRS)
Gherlone, Marco; Cerracchio, Priscilla; Mattone, Massimiliano; Di Sciuva, Marco; Tessler, Alexander
2011-01-01
A robust and efficient computational method for reconstructing the three-dimensional displacement field of truss, beam, and frame structures, using measured surface-strain data, is presented. Known as shape sensing, this inverse problem has important implications for real-time actuation and control of smart structures, and for monitoring of structural integrity. The present formulation, based on the inverse Finite Element Method (iFEM), uses a least-squares variational principle involving strain measures of Timoshenko theory for stretching, torsion, bending, and transverse shear. Two inverse-frame finite elements are derived using interdependent interpolations whose interior degrees-of-freedom are condensed out at the element level. In addition, relationships between the order of kinematic-element interpolations and the number of required strain gauges are established. As an example problem, a thin-walled, circular cross-section cantilevered beam subjected to harmonic excitations in the presence of structural damping is modeled using iFEM; to simulate strain-gauge values and to provide reference displacements, a high-fidelity MSC/NASTRAN shell finite element model is used. Examples of low and high-frequency dynamic motion are analyzed and the solution accuracy examined with respect to various levels of discretization and the number of strain gauges.
NASA Astrophysics Data System (ADS)
Li, Guo-Yang; Zheng, Yang; Liu, Yanlin; Destrade, Michel; Cao, Yanping
2016-11-01
A body force concentrated at a point and moving at a high speed can induce shear-wave Mach cones in dusty-plasma crystals or soft materials, as observed experimentally and named the elastic Cherenkov effect (ECE). The ECE in soft materials forms the basis of the supersonic shear imaging (SSI) technique, an ultrasound-based dynamic elastography method applied in clinics in recent years. Previous studies on the ECE in soft materials have focused on isotropic material models. In this paper, we investigate the existence and key features of the ECE in anisotropic soft media, by using both theoretical analysis and finite element (FE) simulations, and we apply the results to the non-invasive and non-destructive characterization of biological soft tissues. We also theoretically study the characteristics of the shear waves induced in a deformed hyperelastic anisotropic soft material by a source moving with high speed, considering that contact between the ultrasound probe and the soft tissue may lead to finite deformation. On the basis of our theoretical analysis and numerical simulations, we propose an inverse approach to infer both the anisotropic and hyperelastic parameters of incompressible transversely isotropic (TI) soft materials. Finally, we investigate the properties of the solutions to the inverse problem by deriving the condition numbers in analytical form and performing numerical experiments. In Part II of the paper, both ex vivo and in vivo experiments are conducted to demonstrate the applicability of the inverse method in practical use.
NASA Astrophysics Data System (ADS)
Baskaran, R.; Janawadkar, M. P.
2012-10-01
The Signal Space Separation (SSS) method is a technique used to eliminate the contribution of unwanted signals due to external magnetic noise that inevitably get recorded along with the actual magnetic signal due to neuronal currents in the brain. In this paper, the SSS method has been implemented using block matrix inversion. By implementing block matrix inversion along with regrouping of radial terms, it has been possible to extract the true brain signal from the measured magnetic signal, which includes the contribution from an external magnetic dipole artifact, for measurements from as few as 64 channels. We observe that the minimum root mean square deviation (RMSD) of the signal inferred from block matrix inversion with regrouped terms is around 6 fT when the truncation order is set at L1 = 11 for signals of interest and L2 = 2 for external noise sources, and saturates to similar values for higher L1. The RMSD of the extracted signal is 50% smaller than the minimum RMSD obtained when the magnetoencephalography signal is extracted by the direct pseudoinverse technique, and does not have a deep minimum in the truncation order L1 as observed when the direct pseudoinverse technique is used.
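Block matrix inversion itself can be sketched with the standard Schur-complement identity for a 2x2 partitioned matrix; this generic illustration (with an arbitrary well-conditioned test matrix) is not the paper's SSS-specific regrouping.

```python
import numpy as np

def block_inverse(M, p):
    """Invert M = [[A, B], [C, D]] via the Schur complement of the
    leading p x p block A (assumes A and the Schur complement are invertible)."""
    A, B = M[:p, :p], M[:p, p:]
    C, D = M[p:, :p], M[p:, p:]
    Ai = np.linalg.inv(A)
    S = D - C @ Ai @ B                    # Schur complement of A in M
    Si = np.linalg.inv(S)
    top_left = Ai + Ai @ B @ Si @ C @ Ai
    top_right = -Ai @ B @ Si
    bot_left = -Si @ C @ Ai
    return np.block([[top_left, top_right], [bot_left, Si]])

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8)) + 8.0 * np.eye(8)   # diagonally dominant test matrix
Minv = block_inverse(M, p=5)
```

The payoff in practice is that the two smaller inverses can be cheaper, or better conditioned, than inverting the full matrix directly.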
NASA Astrophysics Data System (ADS)
Awaluddin, Moehammad; Yuwono, Bambang Darmo; Puspita, Yolanda Adya
2016-05-01
Continuous Global Positioning System (GPS) observations showed significant crustal displacements as a result of the 2010 Mentawai earthquake. Least squares inversion of the Mentawai earthquake slip distribution from SuGAR observations yielded an optimum slip distribution by weighting a smoothing constraint and constraining the slip to zero at the edge of the earthquake rupture area. The maximum coseismic slip from the inversion was 1.997 m, concentrated around station PRKB (Pagai Island). In addition, slip in the dip direction tends to be dominant. The seismic moment calculated from the slip distribution was 6.89 × 10^20 Nm, which is equivalent to a magnitude of 7.8.
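A smoothing-constrained least-squares inversion of this kind reduces to minimizing ||Gm - d||² + λ²||Lm||² with a roughness operator L. The sketch below uses a generic second-difference smoother and a toy linear operator (the matrices and the smooth "slip" profile are illustrative, not the SuGAR geometry):

```python
import numpy as np

def regularized_slip_inversion(G, d, lam):
    """Solve min ||G m - d||^2 + lam^2 ||L m||^2 with a second-difference
    smoothing operator L (Tikhonov-style regularized least squares)."""
    n = G.shape[1]
    L = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference rows
    A = G.T @ G + lam**2 * (L.T @ L)
    return np.linalg.solve(A, G.T @ d)

rng = np.random.default_rng(2)
G = rng.standard_normal((30, 10))                 # toy Green's function matrix
m_true = np.sin(np.linspace(0.0, np.pi, 10))      # smooth "slip" profile
d = G @ m_true                                    # noiseless synthetic data
m_smooth = regularized_slip_inversion(G, d, lam=0.1)
m_plain = regularized_slip_inversion(G, d, lam=0.0)
```

With λ = 0 and noiseless overdetermined data the plain least-squares solution recovers the model exactly; a small λ introduces only a small bias toward smoothness.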
A computational method for the inversion of wide-band GPR measurements
NASA Astrophysics Data System (ADS)
Salucci, M.; Tenuti, L.; Poli, L.; Oliveri, G.; Massa, A.
2016-10-01
An innovative method for the inversion of ground penetrating radar (GPR) measurements is presented. The proposed inverse scattering (IS) approach is based on the exploitation of wide-band data according to a multi-frequency (MF) strategy, and integrates a customized particle swarm optimizer (PSO) within the iterative multi-scaling approach (IMSA) to counteract the high non-linearity of the optimized cost function. While the IMSA reduces the ratio between problem unknowns and informative data, the stochastic nature of the PSO solver allows the search to "escape" the high density of false solutions of the MF-IS subsurface problem. A set of representative numerical results verifies the effectiveness of the developed approach, as well as its superiority with respect to a deterministic implementation.
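The stochastic solver at the core of such an approach can be illustrated with a minimal global-best PSO; this generic sketch minimizes a toy quadratic cost (the inertia and acceleration coefficients are common textbook values, not the customized PSO of the paper):

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))       # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# toy cost function with a single global minimum at (1, 1)
best_x, best_f = pso_minimize(lambda p: np.sum((p - 1.0)**2),
                              bounds=[(-5, 5), (-5, 5)])
```

The random velocity updates are what let the swarm jump out of basins around false solutions, at the price of more cost-function evaluations than a deterministic local search.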
Using informative priors in facies inversion: The case of C-ISR method
NASA Astrophysics Data System (ADS)
Valakas, G.; Modis, K.
2016-08-01
Inverse problems involving the characterization of hydraulic properties of groundwater flow systems by conditioning on observations of the state variables are mathematically ill-posed because they have multiple solutions and are sensitive to small changes in the data. In the framework of McMC methods for nonlinear optimization and under an iterative spatial resampling transition kernel, we present an algorithm for narrowing the prior and thus producing improved proposal realizations. To achieve this goal, we cosimulate the facies distribution conditionally to facies observations and normal scores transformed hydrologic response measurements, assuming a linear coregionalization model. The approach works by creating an importance sampling effect that steers the process to selected areas of the prior. The effectiveness of our approach is demonstrated by an example application on a synthetic underdetermined inverse problem in aquifer characterization.
NASA Astrophysics Data System (ADS)
Groh, Andreas; Krebs, Jochen
2012-08-01
In this paper, a population balance equation, originating from applications in chemical engineering, is considered and novel solution techniques for a related inverse problem are presented. This problem consists in the determination of the breakage rate and the daughter drop distribution of an evolving drop size distribution from time-dependent measurements under the assumption of self-similarity. We analyze two established solution methods for this ill-posed problem and improve the two procedures by adapting suitable data fitting and inversion algorithms to the specific situation. In addition, we introduce a novel technique that, compared to the former, does not require certain a priori information. The improved stability properties of the resulting algorithms are substantiated with numerical examples.
Markov Chain Monte Carlo Sampling Methods for 1D Seismic and EM Data Inversion
2008-09-22
This software provides several Markov chain Monte Carlo sampling methods for the Bayesian model developed for inverting 1D marine seismic and controlled source electromagnetic (CSEM) data. The current software can be used for individual inversion of seismic AVO and CSEM data and for joint inversion of both seismic and EM data sets. The structure of the software is very general and flexible, and it allows users to easily incorporate their own forward simulation codes and rock physics model codes into this software. Although the software was developed using the C and C++ computer languages, the user-supplied codes can be written in C, C++, or various versions of Fortran. The software provides clear interfaces for users to plug in their own codes. The output of this software is in a format that the free R software CODA can directly read to build MCMC objects.
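The simplest member of the MCMC family such packages implement is random-walk Metropolis; the self-contained sketch below samples a toy 1D Gaussian "posterior" (the target, step size, and chain length are illustrative choices, unrelated to the seismic/CSEM models):

```python
import math
import random

def metropolis(logpost, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis sampler, the simplest MCMC method."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)       # symmetric Gaussian proposal
        lpp = logpost(xp)
        if math.log(rng.random()) < lpp - lp:   # Metropolis accept/reject
            x, lp = xp, lpp
        samples.append(x)
    return samples

# toy 1D Gaussian "posterior" with mean 3 and unit variance
chain = metropolis(lambda x: -0.5 * (x - 3.0)**2, x0=0.0, n_samples=20000)
burned = chain[2000:]                        # discard burn-in
post_mean = sum(burned) / len(burned)
```

The chain output is exactly the kind of object a diagnostics package such as CODA consumes: a sequence of correlated draws whose empirical moments approximate the posterior.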
Localization of incipient tip vortex cavitation using ray based matched field inversion method
NASA Astrophysics Data System (ADS)
Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon
2015-10-01
Cavitation of a marine propeller is one of the main contributors to broadband radiated ship noise. In this research, an algorithm for the source localization of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and a matched-field inversion method is applied to find the source position by comparing the spatial correlation between measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified with a known virtual source and a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the suggested algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure data measured on the outer hull above the propeller and is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.
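The broadband matched-field step, incoherently averaging per-frequency correlations between measured and replica fields, can be sketched as follows. All names and the monopole test geometry are illustrative assumptions, not the authors' code.

```python
import numpy as np

def bartlett_broadband(measured, replicas):
    """Incoherently averaged Bartlett matched-field processor.

    measured : (n_freq, n_rcv) complex pressure at the hull-mounted array
    replicas : (n_grid, n_freq, n_rcv) modeled pressure for each candidate
               source position
    Returns the broadband correlation for every grid point; the argmax is
    the estimated source location."""
    # normalize per frequency so each band contributes equally
    m = measured / np.linalg.norm(measured, axis=1, keepdims=True)
    r = replicas / np.linalg.norm(replicas, axis=2, keepdims=True)
    corr = np.abs(np.einsum('fr,gfr->gf', m.conj(), r)) ** 2
    return corr.mean(axis=1)   # incoherent average over frequency
```

Incoherent averaging suppresses the sidelobes that any single frequency would exhibit, which is what sharpens the localization in the abstract.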
Lura, Derek; Wernke, Matthew; Alqasemi, Redwan; Carey, Stephanie; Dubey, Rajiv
2012-01-01
This paper presents the probability-density-based gradient projection (GP) of the null space of the Jacobian for a 25-degree-of-freedom bilateral robotic human body model (RHBM). This method was used to predict the inverse kinematics of the RHBM and to maximize the similarity between predicted inverse kinematic poses and recorded data of 10 subjects performing activities of daily living. The density function was created for discrete increments of the workspace. The number of increments in each direction (x, y, and z) was varied from 1 to 20. Performance of the method was evaluated by finding the root mean squared (RMS) difference between the predicted joint angles and the joint angles recorded from motion capture. The amount of data included in the creation of the probability density function was varied from 1 to 10 subjects, creating sets for subjects included in and excluded from the density function. The performance of the GP method for subjects included in and excluded from the density function was evaluated to test the robustness of the method. Accuracy of the GP method varied with the incremental division of the workspace: increasing the number of increments decreased the RMS error of the method, with the average RMS error for included subjects ranging from 7.7° to 3.7°. However, increasing the number of increments also decreased the robustness of the method.
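The gradient-projection scheme rests on the classical redundancy-resolution formula q̇ = J⁺ẋ + α(I − J⁺J)∇h, where the secondary objective (here, the probability-density gradient) acts only in the null space of the Jacobian and so does not disturb the task. A minimal sketch, with `grad_h` standing in for the density-based objective:

```python
import numpy as np

def gp_ik_step(J, dx, grad_h, alpha=1.0):
    """One redundancy-resolution step: least-squares joint increment for the
    task-space increment dx, plus a secondary-objective gradient grad_h
    projected into the null space of the Jacobian J, so redundant joints
    move without changing the end-effector motion."""
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
    return J_pinv @ dx + alpha * null_proj @ grad_h
```

In the paper's setting J would be the 25-DoF RHBM Jacobian and grad_h the gradient of the workspace probability density; the sketch uses generic arrays.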
Toward Optimal and Scalable Dimension Reduction Methods for large-scale Bayesian Inversions
NASA Astrophysics Data System (ADS)
Bousserez, N.; Henze, D. K.
2015-12-01
Many inverse problems in geophysics are solved within the Bayesian framework, in which a prior probability density function of a quantity of interest is optimally updated using newly available observations. The maximum of the posterior probability density function is estimated using a model of the physics that relates the variables to be optimized to the observations. However, in many practical situations the number of observations is much smaller than the number of variables to be estimated, which leads to an ill-posed problem. In practice, this means that the data are informative only in a subspace of the initial space. It is of both theoretical and practical interest to characterize this "data-informed" subspace, since it allows a simple interpretation of the inverse solution and its uncertainty, and can also dramatically reduce the computational cost of the optimization by reducing the size of the problem. In this presentation the formalism of dimension reduction in Bayesian methods will be introduced, and different optimality criteria will be discussed (e.g., minimum error variances, maximum degree of freedom for signal). For each criterion, an optimal design for the reduced Bayesian problem will be proposed and compared with other suboptimal approaches. A significant advantage of our method is its high scalability owing to an efficient parallel implementation, making it very attractive for large-scale inverse problems. Numerical results from an Observation Simulation System Experiment (OSSE) consisting of a high spatial resolution (0.5° × 0.7°) source inversion of methane over North America using observations from the Greenhouse gases Observing SATellite (GOSAT) instrument and the GEOS-Chem chemistry-transport model will illustrate the computational efficiency of our approach. Although only linear models are considered in this study, possible extensions to the non-linear case will also be discussed.
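For the linear Gaussian case, restricting the update to the data-informed subspace amounts to truncating the eigendecomposition of the prior-preconditioned data-misfit Hessian. The sketch below is a generic illustration of that idea, not the presenters' code; all names are assumptions.

```python
import numpy as np

def reduced_posterior(H, B, R, y, x_prior, rank):
    """Project a linear Gaussian inverse problem onto the leading eigenvectors
    of the prior-preconditioned Hessian B^{1/2} H^T R^{-1} H B^{1/2}: the
    directions in which the data actually constrain the state. With
    rank = n this reproduces the exact posterior mean."""
    Bs = np.linalg.cholesky(B)                       # prior square root
    Hp = Bs.T @ H.T @ np.linalg.solve(R, H) @ Bs     # preconditioned Hessian
    w, V = np.linalg.eigh(Hp)
    idx = np.argsort(w)[::-1][:rank]                 # most informed directions
    lam, U = w[idx], V[:, idx]
    innov = Bs.T @ H.T @ np.linalg.solve(R, y - H @ x_prior)
    # posterior mean update confined to the informed subspace
    x_post = x_prior + Bs @ (U @ ((U.T @ innov) / (1.0 + lam)))
    return x_post, lam
```

Directions with eigenvalue near zero contribute almost nothing to the update, which is why discarding them loses little accuracy while shrinking the problem.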
Full Waveform Inversion Methods for Source and Media Characterization before and after SPE5
NASA Astrophysics Data System (ADS)
Phillips-Alonge, K. E.; Knox, H. A.; Ober, C.; Abbott, R. E.
2015-12-01
The Source Physics Experiment (SPE) was designed to advance our understanding of explosion-source phenomenology and subsequent wave propagation through the development of innovative physics-based models. Ultimately, these models will be used for characterizing explosions, which can occur with a variety of yields and depths of burial, and in complex media. To accomplish this, controlled chemical explosions were conducted in a granite outcrop at the Nevada Nuclear Security Test Site. These explosions were monitored with extensive seismic and infrasound instrumentation in both the near and far field. Utilizing these data, we calculate predictions before the explosions occur and iteratively improve our models after each explosion. Specifically, we use an adjoint-based full waveform inversion code that employs discontinuous Galerkin techniques to predict waveforms at station locations prior to the fifth explosion in the series (SPE5). The full-waveform inversions are performed using a realistic geophysical model based on local 3D tomography and inversions for media properties using previous shot data. The code has capabilities such as unstructured meshes that align with material interfaces, local polynomial refinement, and support for various physics and methods for implicit and explicit time-integration. The inversion results we show here evaluate these different techniques, which allows for model fidelity assessment (acoustic versus elastic versus anelastic, etc.). In addition, the accuracy and efficiency of several time-integration methods can be determined. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Ray, J.; Lee, J.; Yadav, V.; Lefantzi, S.; Michalak, A. M.; van Bloemen Waanders, B.
2014-08-20
We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
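A loose sketch of the second development, stagewise selection with a non-negativity fix-up, is given below. This is a simplified stand-in for StOMP, not the authors' algorithm: columns are selected by thresholded residual correlation, and negative coefficients are pruned and re-fit.

```python
import numpy as np

def stomp_nonneg(A, y, n_stages=10, t=2.0):
    """Simplified stagewise matching pursuit with a non-negativity fix-up:
    at each stage, columns whose residual correlation exceeds a threshold
    join the active set, a least-squares problem is solved on that set, and
    entries that come out negative are dropped and the fit repeated."""
    m, n = A.shape
    x = np.zeros(n)
    active = np.zeros(n, dtype=bool)
    for _ in range(n_stages):
        r = y - A @ x
        sigma = np.linalg.norm(r) / np.sqrt(m)   # residual noise scale
        active |= np.abs(A.T @ r) > t * sigma    # stagewise selection
        idx = np.flatnonzero(active)
        if idx.size == 0:
            break
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        while idx.size and coef.min() < 0:       # prune negatives, re-solve
            idx = idx[coef > 0]
            if idx.size:
                coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        active[:] = False
        active[idx] = True
        x[:] = 0.0
        if idx.size:
            x[idx] = coef
    return x
```

The pruning loop terminates because each pass strictly shrinks the active set, and the returned coefficients are non-negative by construction.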
A regularizing iterative ensemble Kalman method for PDE-constrained inverse problems
NASA Astrophysics Data System (ADS)
Iglesias, Marco A.
2016-02-01
We introduce a derivative-free computational framework for approximating solutions to nonlinear PDE-constrained inverse problems. The general aim is to merge ideas from iterative regularization with ensemble Kalman methods from Bayesian inference to develop a derivative-free, stable method that is easy to implement in applications where the PDE (forward) model is only accessible as a black box (e.g. with commercial software). The proposed regularizing ensemble Kalman method can be derived as an approximation of the regularizing Levenberg-Marquardt (LM) scheme (Hanke 1997 Inverse Problems 13 79-95) in which the derivative of the forward operator and its adjoint are replaced with empirical covariances from an ensemble of elements from the admissible space of solutions. The resulting ensemble method consists of an update formula that is applied to each ensemble member and that has a regularization parameter selected in a similar fashion to the one in the LM scheme. Moreover, an early termination of the scheme is proposed according to a discrepancy-principle-type criterion. The proposed method can also be viewed as a regularizing version of standard Kalman approaches, which are often unstable unless ad hoc fixes, such as covariance localization, are implemented. The aim of this paper is to provide a detailed numerical investigation of the regularizing and convergence properties of the proposed regularizing ensemble Kalman scheme; the proof of these properties is an open problem. By means of numerical experiments, we investigate the conditions under which the proposed method inherits the regularizing properties of the LM scheme of Hanke (1997 Inverse Problems 13 79-95) and is thus stable and suitable for application in problems where the computation of the Fréchet derivative is not computationally feasible. More concretely, we study the effect of ensemble size, number of measurements, selection of the initial ensemble, and tunable parameters on the performance of the method.
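The core update, replacing Fréchet derivatives with empirical ensemble covariances and regularizing the gain in LM fashion, can be sketched as follows. The function names are illustrative, and the fixed `alpha` stands in for the adaptively selected LM-style regularization parameter.

```python
import numpy as np

def eki_update(ensemble, G, y, Gamma, alpha):
    """One regularized ensemble Kalman iteration: derivatives of the forward
    map G are replaced by empirical cross-covariances of the ensemble, and
    alpha scales the observation covariance Gamma in the gain, playing the
    role of the Levenberg-Marquardt regularization parameter."""
    U = np.stack([G(u) for u in ensemble])           # forward evaluations
    u_mean, g_mean = ensemble.mean(axis=0), U.mean(axis=0)
    du, dg = ensemble - u_mean, U - g_mean
    J = len(ensemble)
    C_ug = du.T @ dg / (J - 1)                       # parameter-data covariance
    C_gg = dg.T @ dg / (J - 1)                       # data-data covariance
    K = C_ug @ np.linalg.inv(C_gg + alpha * Gamma)   # regularized gain
    return ensemble + (y - U) @ K.T                  # update every member
```

Because only evaluations of G appear, the scheme works with black-box forward solvers, which is the selling point of the paper's method.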
A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty.
Zhang, Kai; Wang, Zengfei; Zhang, Liming; Yao, Jun; Yan, Xia
2015-01-01
In this paper, we investigate the application of a new method, the Finite Difference and Stochastic Gradient (Hybrid) method, for history matching in reservoir models. History matching is the process of solving an inverse problem by calibrating reservoir models to the dynamic behaviour of the reservoir, in which an objective function is formulated based on a Bayesian approach for optimization. The goal of history matching is to identify the minimum value of an objective function that expresses the misfit between the predicted and measured data of a reservoir. To address the optimization problem, we present a novel application combining the stochastic gradient and finite difference methods for solving inverse problems. The optimization is constrained by a linear equation that contains the reservoir parameters. We reformulate the reservoir model's parameters and dynamic data by operating on the objective function, whose approximate gradient can guarantee convergence. At each iteration step, we identify the relatively 'important' elements of the gradient by comparing the magnitudes of the components of the stochastic gradient; these elements are then replaced with values from the finite difference method, forming a new gradient with which we iterate. Through the application of the Hybrid method, we efficiently and accurately optimize the objective function. We present a number of numerical simulations that show that the method is accurate and computationally efficient.
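The gradient hybridization can be illustrated with a simultaneous-perturbation (SPSA-style) estimate whose largest-magnitude components are recomputed by central finite differences. This is a schematic reading of the abstract, not the authors' code; all names and parameter values are assumptions.

```python
import numpy as np

def hybrid_gradient(f, x, n_fd=2, c=1e-4, seed=0):
    """Sketch of the hybrid idea: form a cheap simultaneous-perturbation
    (SPSA) gradient estimate from two function evaluations, then recompute
    its few largest-magnitude ('important') components with accurate
    central finite differences."""
    rng = np.random.default_rng(seed)
    delta = rng.choice([-1.0, 1.0], size=x.size)
    g = (f(x + c * delta) - f(x - c * delta)) / (2 * c) * delta  # SPSA estimate
    top = np.argsort(np.abs(g))[-n_fd:]            # 'important' components
    for i in top:
        e = np.zeros_like(x); e[i] = c
        g[i] = (f(x + e) - f(x - e)) / (2 * c)     # refine by finite difference
    return g

def hybrid_descent(f, x0, lr=0.1, n_iter=100, **kw):
    """Plain gradient descent driven by the hybrid gradient estimate."""
    x = np.asarray(x0, dtype=float).copy()
    for k in range(n_iter):
        x -= lr * hybrid_gradient(f, x, seed=k, **kw)
    return x
```

The trade-off mirrors the abstract: SPSA costs two evaluations regardless of dimension, while the finite differences spend extra evaluations only where they matter most.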
Reconstruction of multiple gastric electrical wave fronts using potential-based inverse methods
NASA Astrophysics Data System (ADS)
Kim, J. H. K.; Pullan, A. J.; Cheng, L. K.
2012-08-01
One approach for non-invasively characterizing gastric electrical activity, commonly used in the field of electrocardiography, involves solving an inverse problem whereby electrical potentials on the stomach surface are directly reconstructed from dense potential measurements on the skin surface. To investigate this problem, an anatomically realistic torso model and an electrical stomach model were used to simulate potentials on the stomach and skin surfaces arising from normal gastric electrical activity. The effectiveness of the Greensite-Tikhonov and Tikhonov inverse methods was compared in the presence of 10% Gaussian noise with either 84 or 204 body surface electrodes. The stability and accuracy of the Greensite-Tikhonov method were further investigated by introducing varying levels of Gaussian signal noise or by increasing or decreasing the size of the stomach by 10%. Results showed that the reconstructed solutions were able to represent multiple propagating wave fronts, and the Greensite-Tikhonov method with 204 electrodes performed best (correlation coefficient of activation time: 90%; pacemaker localization error: 3 cm). The Greensite-Tikhonov method was stable with Gaussian noise levels up to 20% and a 10% change in stomach size. The use of 204 rather than 84 body surface electrodes improved performance; however, for all investigated cases, the Greensite-Tikhonov method outperformed the Tikhonov method.
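The potential-based inverse step in both methods reduces, in its simplest zeroth-order form, to a Tikhonov-regularized least-squares solve. The sketch below uses a synthetic ill-conditioned transfer matrix and omits the Greensite temporal decomposition; all names are illustrative.

```python
import numpy as np

def tikhonov_inverse(A, b, lam):
    """Zeroth-order Tikhonov solution of the potential-based inverse problem
    b = A x: minimize ||A x - b||^2 + lam^2 ||x||^2 via the normal equations.
    A maps source-surface potentials x to body-surface potentials b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)
```

The regularization term is what keeps measurement noise from being amplified by the small singular values of the torso transfer matrix.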
Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications
Arbanas, Goran; Williams, Mark L; Leal, Luiz C; Dunn, Michael E; Khuwaileh, Bassam A.; Wang, C; Abdel-Khalik, Hany
2015-01-01
The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX system [1]. The IS/UQ method aims to quantify and prioritize the cross section measurements, along with uncertainties, needed to yield a given nuclear application's target response uncertainty at minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore we have incorporated integral benchmark experiment (IBE) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method can be applied to systematic and statistical uncertainties in a self-consistent way. We show how the IS/UQ method can be used to optimize uncertainties of IBEs and differential cross section data simultaneously.
A hybrid differential evolution/Levenberg-Marquardt method for solving inverse transport problems
Bledsoe, Keith C; Favorite, Jeffrey A
2010-01-01
Recently, the Differential Evolution (DE) optimization method was applied to solve inverse transport problems in finite cylindrical geometries and was shown to be far superior to the Levenberg-Marquardt optimization method at finding a global optimum for problems with several unknowns. However, while extremely adept at finding a global optimum solution, the DE method often requires a large number (hundreds or thousands) of transport calculations, making it much slower than the Levenberg-Marquardt method. In this paper, a hybridization of the Differential Evolution and Levenberg-Marquardt approaches is presented. This hybrid method takes advantage of the robust search capability of the Differential Evolution method and the speed of the Levenberg-Marquardt technique.
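The hybridization can be sketched with off-the-shelf optimizers: a short, coarse DE global search hands its best point to a fast LM local refinement. The SciPy functions are real; the test problem (fitting a sinusoid, which is multimodal in frequency) is an illustrative assumption, not the paper's transport problem.

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

def hybrid_de_lm(residuals, bounds, seed=0):
    """Hybrid strategy from the abstract: Differential Evolution locates the
    basin of the global optimum with relatively few generations, then
    Levenberg-Marquardt refines the DE result quickly and precisely."""
    cost = lambda p: float(np.sum(np.asarray(residuals(p)) ** 2))
    de = differential_evolution(cost, bounds, maxiter=50, popsize=10,
                                tol=1e-8, seed=seed, polish=False)
    lm = least_squares(residuals, de.x, method='lm')   # local refinement
    return lm.x
```

Capping the DE generations keeps the number of expensive forward evaluations low, which is exactly the cost the abstract says the hybrid is meant to avoid.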
Earthquake source tensor inversion with the gCAP method and 3D Green's functions
NASA Astrophysics Data System (ADS)
Zheng, J.; Ben-Zion, Y.; Zhu, L.; Ross, Z.
2013-12-01
We develop and apply a method to invert earthquake seismograms for source properties using a general tensor representation and 3D Green's functions. The method employs (i) a general representation of earthquake potency/moment tensors with double couple (DC), compensated linear vector dipole (CLVD), and isotropic (ISO) components, and (ii) a corresponding generalized CAP (gCAP) scheme where the continuous wave trains are broken into Pnl and surface waves (Zhu & Ben-Zion, 2013). For comparison, we also use the waveform inversion method of Zheng & Chen (2012) and Ammon et al. (1998). Sets of 3D Green's functions are calculated on a grid of 1 km³ using the 3-D community velocity model CVM-4 (Kohler et al. 2003). A bootstrap technique is adopted to establish robustness of the inversion results using the gCAP method (Ross & Ben-Zion, 2013). Synthetic tests with 1-D and 3-D waveform calculations show that the source tensor inversion procedure is reasonably reliable and robust. As an initial application, the method is used to investigate source properties of the March 11, 2013, Mw=4.7 earthquake on the San Jacinto fault using recordings of ~45 stations up to ~0.2 Hz. Both the best-fitting and most probable solutions include an ISO component of ~1% and a CLVD component of ~0%. The obtained ISO component, while small, is found to be a non-negligible positive value that can have significant implications for the physics of the failure process. Work on using higher-frequency data for this and other earthquakes is in progress.
NASA Astrophysics Data System (ADS)
Miller, Eric L.; Willsky, Alan S.
1996-01-01
In this paper, we present an approach to the nonlinear inverse scattering problem using the extended Born approximation (EBA) on the basis of methods from the fields of multiscale and statistical signal processing. By posing the problem directly in the wavelet transform domain, regularization is provided through the use of a multiscale prior statistical model. Using the maximum a posteriori (MAP) framework, we introduce the relative Cramér-Rao bound (RCRB) as a tool for analyzing the level of detail in a reconstruction supported by a data set as a function of the physics, the source-receiver geometry, and the nature of our prior information. The MAP estimate is determined using a novel implementation of the Levenberg-Marquardt algorithm in which the RCRB is used to achieve a substantial reduction in the effective dimensionality of the inversion problem with minimal degradation in performance. Additional reduction in complexity is achieved by taking advantage of the sparse structure of the matrices defining the EBA in scale space. An inverse electrical conductivity problem arising in geophysical prospecting applications provides the vehicle for demonstrating the analysis and algorithmic techniques developed in this paper.
Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications
Arbanas, G.; Williams, M.L.; Leal, L.C.; Dunn, M.E.; Khuwaileh, B.A.; Wang, C.; Abdel-Khalik, H.
2015-01-15
The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX cross section processing system [M.E. Dunn and N.M. Greene, “AMPX-2000: A Cross-Section Processing System for Generating Nuclear Data for Criticality Safety Applications,” Trans. Am. Nucl. Soc. 86, 118–119 (2002)]. The IS/UQ method aims to quantify and prioritize the cross section measurements, along with uncertainties, needed to yield a given nuclear application's target response uncertainty at minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore, we have incorporated integral benchmark experiment (IBE) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method can be applied to systematic and statistical uncertainties in a self-consistent way and how it can be used to optimize uncertainties of IBEs and differential cross section data simultaneously. We itemize contributions to the cost of differential data measurements needed to define a realistic cost function.
NASA Astrophysics Data System (ADS)
Lin, Hongxiang; Azuma, Takashi; Qu, Xiaolei; Takagi, Shu
2016-07-01
We consider ultrasound waveform tomography using an ultrasound prototype equipped with ring-array transducers. For this purpose, we use robust contrast source inversion (robust CSI), viz. extended contrast source inversion, to reconstruct the sound-speed image from the wave-field data. The robust CSI method is implemented by the alternating minimization method. An automatic choice rule is incorporated into the alternating minimization method to heuristically determine a suitable regularization parameter while iterating. We prove the convergence of this algorithm. The numerical examples show that the robust CSI method with the automatic choice rule improves the spatial resolution of medical images and enhances robustness, even when wave-field data at a wavelength of 6.16 mm, contaminated by 5% noise, are used. The numerical results also show that the images reconstructed by the proposed method yield a spatial resolution of approximately half the wavelength, which may be adequate for imaging a breast tumor at Stage I.
NASA Astrophysics Data System (ADS)
Fussen, Didier
1995-12-01
The tomography of the Earth's atmosphere by the solar occultation method leads to a highly non-linear inverse problem if the full solar disc is used as the light source. Well-known heuristic methods such as Chahine's algorithm or onion peeling fail to solve the inversion. We present a new method, referred to as NOPE (natural orthogonal polynomial expansion), that addresses this class of inverse problems by focusing on the morphological content of the unknown profile while also allowing fine tuning of the a priori information.
A neural network based error correction method for radio occultation electron density retrieval
NASA Astrophysics Data System (ADS)
Pham, Viet-Cuong; Juang, Jyh-Ching
2015-12-01
Abel inversion techniques have been widely employed to retrieve electron density profiles (EDPs) from radio occultation (RO) measurements, which are available by observing Global Navigation Satellite System (GNSS) satellites from low-earth-orbit (LEO) satellites. It is well known that the ordinary Abel inversion might introduce errors in the retrieval of EDPs when the spherical symmetry assumption is violated. The error, however, is case-dependent; therefore it is desirable to associate an error index or correction coefficient with each retrieved EDP. Several error indices have been proposed but they only deal with electron density at the F2 peak and suffer from some drawbacks. In this paper we propose an artificial neural network (ANN) based error correction method for EDPs obtained by the ordinary Abel inversion. The ANN is first trained to learn the relationship between vertical total electron content (TEC) measurements and retrieval errors at the F2 peak, 220 km and 110 km altitudes; correction coefficients are then estimated to correct the retrieved EDPs at these three altitudes. Experiments using the NeQuick2 model and real FORMOSAT-3/COSMIC RO geometry show that the proposed method outperforms existing ones. Real incoherent scatter radar (ISR) measurements at the Jicamarca Radio Observatory and the global TEC map provided by the International GNSS Service (IGS) are also used to validate the proposed method.
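The correction step can be emulated with a small feed-forward network trained by gradient descent. The sketch below is a generic one-hidden-layer regressor, not the authors' ANN; the synthetic target merely stands in for the TEC-to-retrieval-error mapping.

```python
import numpy as np

def train_mlp(X, y, hidden=16, lr=0.05, epochs=500, seed=0):
    """Minimal one-hidden-layer network (tanh activation) trained by full-batch
    gradient descent on 0.5*MSE, standing in for the ANN that maps vertical
    TEC features to retrieval-error correction coefficients.
    Returns a predict function."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W1 = rng.standard_normal((n, hidden)) / np.sqrt(n)
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, 1)) / np.sqrt(hidden)
    b2 = np.zeros(1)
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                 # forward pass
        out = H @ W2 + b2
        err = out - y
        gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
        dH = (err @ W2.T) * (1 - H ** 2)         # backprop through tanh
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xq: (np.tanh(Xq @ W1 + b1) @ W2 + b2).ravel()
```

In the paper's setting the inputs would be TEC-derived features and the targets the retrieval errors at the F2 peak, 220 km, and 110 km; here both are synthetic.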
FOREWORD: 2nd International Workshop on New Computational Methods for Inverse Problems (NCMIP 2012)
NASA Astrophysics Data System (ADS)
Blanc-Féraud, Laure; Joubert, Pierre-Yves
2012-09-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 2nd International Workshop on New Computational Methods for Inverse Problems, (NCMIP 2012). This workshop took place at Ecole Normale Supérieure de Cachan, in Cachan, France, on 15 May 2012, at the initiative of Institut Farman. The first edition of NCMIP also took place in Cachan, France, within the scope of the ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/). The NCMIP Workshop focused on recent advances in the resolution of inverse problems. Indeed inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition
FOREWORD: 3rd International Workshop on New Computational Methods for Inverse Problems (NCMIP 2013)
NASA Astrophysics Data System (ADS)
Blanc-Féraud, Laure; Joubert, Pierre-Yves
2013-10-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 3rd International Workshop on New Computational Methods for Inverse Problems, NCMIP 2013 (http://www.farman.ens-cachan.fr/NCMIP_2013.html). This workshop took place at Ecole Normale Supérieure de Cachan, in Cachan, France, on 22 May 2013, at the initiative of Institut Farman. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of the ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 (http://www.farman.ens-cachan.fr/NCMIP_2012.html). The NCMIP Workshop focused on recent advances in the resolution of inverse problems. Indeed inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational
The galaxy population of Abell 1367: photometric and spectroscopic data
NASA Astrophysics Data System (ADS)
Kriwattanawong, W.; Moss, C.; James, P. A.; Carter, D.
2011-03-01
Aims: Photometric and spectroscopic observations of the galaxy population of the galaxy cluster Abell 1367 have been obtained, over a field of 34' × 90', covering the cluster centre out to a radius of ~2.2 Mpc. Optical broad- and narrow-band imaging was used to determine galaxy luminosities, diameters and morphologies, and to study current star formation activity of a sample of cluster galaxies. Near-infrared imaging was obtained to estimate integrated stellar masses, and to aid the determination of mean stellar ages and metallicities for the future investigation of the star formation history of those galaxies. Optical spectroscopic observations were also taken, to confirm cluster membership of galaxies in the sample through their recession velocities. Methods: U, B and R broad-band and Hα narrow-band imaging observations were carried out using the Wide Field Camera (WFC) on the 2.5 m Isaac Newton Telescope on La Palma, covering the field described above. J and K near-infrared imaging was obtained using the Wide Field Camera (WFCAM) on the 3.8 m UK Infrared Telescope on Mauna Kea, covering a somewhat smaller field of 0.75 square degrees on the cluster centre. The spectroscopic observations were carried out using a multifibre spectrograph (WYFFOS) on the 4.2 m William Herschel Telescope on La Palma, over the same field as the optical imaging observations. Results: Our photometric data give optical and near-infrared isophotal magnitudes for 303 galaxies in our survey regions, down to stated diameter and B-band magnitude limits, determined within R24 isophotal diameters. Our spectroscopic data of 328 objects provide 84 galaxies with detections of emission and/or absorption lines. Combining these with published spectroscopic data gives 126 galaxies within our sample for which recession velocities are known. Of these, 72 galaxies are confirmed as cluster members of Abell 1367, 11 of which are identified in this study and 61 are reported in the literature. Hα equivalent
NASA Technical Reports Server (NTRS)
Ratcliff, Robert R.; Carlson, Leland A.
1989-01-01
Progress has been made in the direct-inverse wing design method in curvilinear coordinates. A spanwise oscillation problem and proposed remedies are discussed. Test cases are presented which reveal the approximate limits on the wing's aspect ratio and leading-edge sweep angle for a successful design, and which show the significance of spanwise grid skewness, grid refinement, viscous interaction, the initial airfoil section, and Mach number-pressure distribution compatibility on the final design. Furthermore, preliminary results are shown which indicate that it is feasible to successfully design a region of the wing which begins aft of the leading edge and terminates prior to the trailing edge.
Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction
NASA Technical Reports Server (NTRS)
Oliver, A. Brandon; Amar, Adam J.
2016-01-01
Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid methods that are permitted by the implementation will be described. Results will be presented for a number of one-dimensional and multi-dimensional problems.
Solution of the stationary 2D inverse heat conduction problem by the Trefftz method
NASA Astrophysics Data System (ADS)
Cialkowski, Michael J.; Frąckowiak, Andrzej
2002-05-01
The paper presents an analysis of a solution of the Laplace equation with the use of FEM harmonic basis functions. The essence of the problem is aimed at presenting an approximate solution based on possibly large finite elements. The introduction of harmonic functions makes it possible to reduce the order of numerical integration as compared to the classical Finite Element Method. Numerical calculations confirm the good efficiency of using harmonic basis functions for solving direct and inverse problems of stationary heat conduction. The further part of the paper shows the use of harmonic basis functions for solving Poisson's equation and for drawing up a complete system of biharmonic and polyharmonic basis functions.
A comparative study of minimum norm inverse methods for MEG imaging
Leahy, R.M.; Mosher, J.C.; Phillips, J.W.
1996-07-01
The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
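The regularized minimum-norm estimate described above has a closed form: x = L^T (L L^T + alpha I)^(-1) b. The sketch below is a hypothetical toy (a random stand-in lead field and a single active source, not real MEG data or any published pipeline):

```python
import numpy as np

def tikhonov_min_norm(L, b, alpha):
    # Tikhonov-regularized minimum-norm solution x = L^T (L L^T + alpha I)^{-1} b
    m = L.shape[0]
    return L.T @ np.linalg.solve(L @ L.T + alpha * np.eye(m), b)

# Toy underdetermined setup: 10 sensors, 100 source voxels
rng = np.random.default_rng(0)
L = rng.standard_normal((10, 100))        # hypothetical lead-field matrix
x_true = np.zeros(100)
x_true[3] = 1.0                           # a single active source
b = L @ x_true + 0.01 * rng.standard_normal(10)
x_hat = tikhonov_min_norm(L, b, alpha=0.1)
```

A larger alpha suppresses noise amplification at the cost of a smoother, lower-amplitude source estimate.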
NASA Astrophysics Data System (ADS)
Ryo, Hyok-Su; Ryo, In-Gwang
2016-08-01
In this study, a generalized inverse-pole-figure (IPF) method is suggested to analyze domain switching in polycrystalline ferroelectrics, including compositions at the morphotropic phase boundary (MPB). Using the generalized IPF method, saturated domain orientation textures of single-phase polycrystalline ferroelectrics with tetragonal and rhombohedral symmetry have been calculated analytically, and the results have been confirmed by comparison with the results from preceding studies. In addition, saturated domain orientation textures near MPBs of different multiple-phase polycrystalline ferroelectrics have also been calculated analytically. The results show that the generalized IPF method is an efficient method to analyze not only domain switching of single-phase polycrystalline ferroelectrics but also MPBs of multiple-phase polycrystalline ferroelectrics.
The merging cluster Abell 1758: an optical and dynamical view
NASA Astrophysics Data System (ADS)
Monteiro-Oliveira, Rogerio; Serra Cypriano, Eduardo; Machado, Rubens; Lima Neto, Gastao B.
2015-08-01
The galaxy cluster Abell 1758-North (z=0.28) is a binary system composed of the sub-structures NW and NE. It is thought to be a post-merger cluster because of the observed detachment between the NE BCG and the corresponding X-ray emitting hot gas clump, in a scenario very similar to the famous Bullet Cluster. On the other hand, the projected position of the NW BCG coincides with the local hot gas peak. This system has been targeted previously by several studies, using multiple wavelengths and techniques, but there is still no clear picture of the scenario that could have caused this unusual configuration. To help solve this complex puzzle we added some pieces: firstly, we used deep B, RC and z' Subaru images to perform both weak lensing shear and magnification analyses of A1758 (including the South component, which is not interacting with A1758-North), modelling each sub-clump as an NFW profile in order to constrain the masses and centre positions through MCMC methods; the second piece is a dynamical analysis using radial velocities available in the literature (143) plus new Gemini-GMOS/N measurements (68 new redshifts). From weak lensing we found that the independent shear and magnification mass determinations are in excellent agreement, and by combining both we could reduce the mass error bars by ~30% compared to shear alone. Combining these two weak-lensing probes, we found that the positions of both Northern BCGs are consistent with the mass centres within 2σ, and that the NE hot gas peak is offset from the respective mass peak (M200 = 5.5 × 10^14 M⊙) with very high significance. The most massive structure is NW (M200 = 7.95 × 10^14 M⊙), where we observed no detachment between gas, DM and BCG. We have calculated a low line-of-sight velocity difference (<300 km/s) between A1758 NW and NE. We combined it with the projected velocity of 1600 km/s estimated by a previous X-ray analysis (David & Kempner 2004) and obtained a small angle between
NASA Astrophysics Data System (ADS)
Izumi, Tomoki; Takeuchi, Junichiro; Kawachi, Toshihiko; Fujihara, Masayuki
An inverse method to estimate the unsaturated hydraulic conductivity in seepage flow from field observations is presented. Considering that water movement in soil is significantly affected by soil temperature, the soil column of interest is assumed to be non-isothermal, and the problem is therefore based on coupled 1D water movement and thermal conduction equations. Since the saturated hydraulic conductivity can be determined definitely, the inverse problem associated with the unsaturated hydraulic conductivity is reduced to that of identifying the relative hydraulic conductivity (RHC) from the available hydro-geological information. For the functional representation of the RHC, a free-form parameterized function is employed in lieu of the conventional fixed-form function. Values of the parameters included in the function are optimally determined according to a simulation-optimization algorithm. For easy application of the method, a utilitarian observation system with simple instrumentation is specially devised, which makes it possible to collect the required hydro-geological data relatively easily in situ. The validity of the method is examined through its practical application to a real soil column in an upland crop field. The results show that the water movement model provides forward solutions of high reproducibility when coupled with the thermal conduction model and calibrated by identifying the RHC with a free-form function.
NASA Astrophysics Data System (ADS)
Xiao, Dongsheng; Chang, Ming; Su, Yong; Hu, Qijun; Yu, Bing
2016-09-01
This study explores the quasi-real-time inversion principle and precision estimation of the three-dimensional coordinates of the epicenter, trigger time and magnitude of earthquakes, with the aim of improving traditional methods, which are flawed due to missing information or distortion in the seismograph records. The epicenter, trigger time and magnitude of the Lushan earthquake are inverted and analyzed based on high-frequency GNSS data. The inversion results achieve high precision and are consistent with the data published by the China Earthquake Administration. Moreover, it is shown that the inversion method has good theoretical value and excellent application prospects.
Chandra View of Galaxy Cluster Abell 2554
NASA Astrophysics Data System (ADS)
kıyami Erdim, Muhammed; Hudaverdi, Murat
2016-07-01
We study the structure of the galaxy cluster Abell 2554 at z = 0.11, a member of the Aquarius Supercluster, using Chandra archival data. The X-ray peak coincides with a bright elliptical cD galaxy. The slightly elongated X-ray plasma has average temperature and metal abundance values of ~6 keV and 0.28 solar, respectively. We observe small-scale temperature variations in the ICM. There is a significantly hot wall-like structure of 9 keV at the SE, and a radio lobe is located at the tip of this hot region. A2554 is also part of a trio of clusters: its close neighbours A2550 (at SW) and A2556 (at SE) are separated from A2554 by only 2 Mpc and 1.5 Mpc, respectively. Considering the temperature fluctuations and the dynamical environment of the supercluster, we examine possible ongoing merger scenarios within A2554.
Probing single biomolecules in solution using the anti-Brownian electrokinetic (ABEL) trap.
Wang, Quan; Goldsmith, Randall H; Jiang, Yan; Bockenhauer, Samuel D; Moerner, W E
2012-11-20
Single-molecule fluorescence measurements allow researchers to study asynchronous dynamics and expose molecule-to-molecule structural and behavioral diversity, which contributes to the understanding of biological macromolecules. To provide measurements that are most consistent with the native environment of biomolecules, researchers would like to conduct these measurements in the solution phase if possible. However, diffusion typically limits the observation time to approximately 1 ms in many solution-phase single-molecule assays. Although surface immobilization is widely used to address this problem, this process can perturb the system being studied and contribute to the observed heterogeneity. Combining the technical capabilities of high-sensitivity single-molecule fluorescence microscopy, real-time feedback control and electrokinetic flow in a microfluidic chamber, we have developed a device called the anti-Brownian electrokinetic (ABEL) trap to significantly prolong the observation time of single biomolecules in solution. We have applied the ABEL trap method to explore the photodynamics and enzymatic properties of a variety of biomolecules in aqueous solution and present four examples: the photosynthetic antenna allophycocyanin, the chaperonin enzyme TRiC, a G protein-coupled receptor protein, and the blue nitrite reductase redox enzyme. These examples illustrate the breadth and depth of information which we can extract in studies of single biomolecules with the ABEL trap. When confined in the ABEL trap, the photosynthetic antenna protein allophycocyanin exhibits rich dynamics both in its emission brightness and its excited state lifetime. As each molecule discontinuously converts from one emission/lifetime level to another in a primarily correlated way, it undergoes a series of state changes. We studied the ATP binding stoichiometry of the multi-subunit chaperonin enzyme TRiC in the ABEL trap by counting the number of hydrolyzed Cy3-ATP using stepwise
The Sunyaev-Zeldovich Effect in Abell 370
NASA Technical Reports Server (NTRS)
Grego, Laura; Carlstrom, John E.; Joy, Marshall K.; Reese, Erik D.; Holder, Gilbert P.; Patel, Sandeep; Cooray, Asantha R.; Holzappel, William L.
2000-01-01
We present interferometric measurements of the Sunyaev-Zeldovich (SZ) effect toward the galaxy cluster Abell 370. These measurements, which directly probe the pressure of the cluster's gas, show the gas distribution to be strongly aspherical, as do the X-ray and gravitational lensing observations. We calculate the cluster's gas mass fraction in two ways. We first compare the gas mass derived from the SZ measurements to the lensing-derived gravitational mass near the critical lensing radius. We also calculate the gas mass fraction from the SZ data by deprojecting the three-dimensional gas density distribution and deriving the total mass under the assumption that the gas is in hydrostatic equilibrium (HSE). We test the assumptions in the HSE method by comparing the total cluster mass implied by the two methods and find that they agree within the errors of the measurement. We discuss the possible systematic errors in the gas mass fraction measurement and the constraints it places on the matter density parameter, Omega(sub M).
A new method for the inversion of atmospheric parameters of A/Am stars
NASA Astrophysics Data System (ADS)
Gebran, M.; Farah, W.; Paletou, F.; Monier, R.; Watson, V.
2016-05-01
Context. We present an automated procedure that simultaneously derives the effective temperature Teff, surface gravity log g, metallicity [Fe/H], and equatorial projected rotational velocity vsini for "normal" A and Am stars. The procedure is based on the principal component analysis (PCA) inversion method, which we published in a recent paper. Aims: A sample of 322 high-resolution spectra of F0-B9 stars, retrieved from the Polarbase, SOPHIE, and ELODIE databases, were used to test this technique with real data. We selected the spectral region from 4400-5000 Å as it contains many metallic lines and the Balmer Hβ line. Methods: Using three data sets at resolving powers of R = 42 000, 65 000 and 76 000, about 6.6 × 10^6 synthetic spectra were calculated to build a large learning database. The online power iteration algorithm was applied to these learning data sets to estimate the principal components (PC). The projection of spectra onto the few PCs offered an efficient comparison metric in a low-dimensional space. The spectra of the well-known A0- and A1-type stars, Vega and Sirius A, were used as control spectra in the three databases. Spectra of other well-known A-type stars were also employed to characterize the accuracy of the inversion technique. Results: We inverted all of the observational spectra and derived the atmospheric parameters. After removal of a few outliers, the PCA-inversion method appeared to be very efficient in determining Teff, [Fe/H], and vsini for A/Am stars. The derived parameters agree very well with previous determinations. Using a statistical approach, deviations of around 150 K, 0.35 dex, 0.15 dex, and 2 km s^-1 were found for Teff, log g, [Fe/H], and vsini with respect to literature values for A-type stars. Conclusions: The PCA inversion proves to be a very fast, practical, and reliable tool for estimating stellar parameters of FGK and A stars and for deriving effective temperatures of M stars. Based on data retrieved from the
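The PCA-inversion idea (project a learning database and an observed spectrum onto a few principal components, then pick the closest database entry in that low-dimensional space) can be sketched as follows. The "spectra" here are made-up Gaussian profiles whose width is parameterized by a stand-in Teff; the published method projects model-atmosphere synthetic spectra and inverts four parameters at once:

```python
import numpy as np

def synth_spectrum(teff, wave):
    # Hypothetical toy stand-in for a synthetic-spectra grid: a single
    # absorption-like Gaussian whose width scales with "Teff"
    return np.exp(-((wave - 4861.0) / (teff / 100.0)) ** 2)

wave = np.linspace(4400.0, 5000.0, 600)
teff_grid = np.linspace(7000.0, 10000.0, 301)
database = np.array([synth_spectrum(t, wave) for t in teff_grid])

# Principal components of the mean-subtracted learning database
mean = database.mean(axis=0)
_, _, Vt = np.linalg.svd(database - mean, full_matrices=False)
pcs = Vt[:12]                          # keep 12 components
coeffs = (database - mean) @ pcs.T     # low-dimensional representation

# "Observed" noisy spectrum: invert by nearest neighbour in PC space
rng = np.random.default_rng(4)
obs = synth_spectrum(8230.0, wave) + 0.005 * rng.standard_normal(wave.size)
obs_c = (obs - mean) @ pcs.T
teff_est = teff_grid[np.argmin(np.linalg.norm(coeffs - obs_c, axis=1))]
```

The comparison in PC space is what makes the inversion fast: each candidate match costs a distance computation in 12 dimensions rather than 600.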
Extending and Merging the Purple Crow Lidar Temperature Climatologies Using the Inversion Method
NASA Astrophysics Data System (ADS)
Jalali, Ali; Sica, R. J.; Argall, P. S.
2016-06-01
Rayleigh and Raman scatter measurements from The University of Western Ontario Purple Crow Lidar (PCL) have been used to develop temperature climatologies for the stratosphere, mesosphere, and thermosphere using data from 1994 to 2013 (Rayleigh system) and from 1999 to 2013 (vibrational Raman system). Temperature retrievals from Rayleigh-scattering lidar measurements have been performed using the methods of Hauchecorne and Chanin (1980; henceforth HC) and Khanna et al. (2012). Argall and Sica (2007) used the HC method to compute a climatology of the PCL measurements from 1994 to 2004 for 35 to 110 km, while Iserhienrhien et al. (2013) applied the same technique from 1999 to 2007 for 10 to 35 km. Khanna et al. (2012) used the inversion technique to retrieve atmospheric temperature profiles and found that it had advantages over the HC method. This paper presents an extension of the PCL climatologies created by Argall and Sica (2007) and Iserhienrhien et al. (2013). Both the inversion and HC methods were used to form the Rayleigh climatology, while only the latter was adopted for the Raman climatology. Then, two different approaches were used to merge the climatologies from 10 to 110 km. Among four different functional identities, a hyperbolic trigonometric relation proves the best choice for merging temperature profiles between the Raman and Low Level Rayleigh channels, with an estimated merging uncertainty of 0.9 K. Similarly, an error function produces the best result, with an uncertainty of 0.7 K, between the Low Level Rayleigh and High Level Rayleigh channels. The results show that the temperature climatologies produced by the HC method when using a seed pressure are comparable to the climatologies produced by the inversion method. The Rayleigh extended climatology is slightly warmer below 80 km and slightly colder above 80 km. There are no significant differences in temperature between the extended and the previous Raman channel climatologies. Throughout
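A hyperbolic-trigonometric handover of the kind described can be sketched with a tanh weight that blends two overlapping profiles smoothly. The linear profiles, altitude range, and handover parameters below are invented for illustration; the actual work merges measured Raman and Rayleigh climatologies:

```python
import numpy as np

def merge_profiles(z, t_low, t_high, z_mid, scale):
    """Blend two overlapping profiles with a tanh weight: the low-altitude
    profile dominates below z_mid, the high-altitude one above."""
    w = 0.5 * (1.0 + np.tanh((z - z_mid) / scale))   # 0 below, 1 above
    return (1.0 - w) * t_low + w * t_high

z = np.linspace(25.0, 45.0, 81)                 # altitude grid (km), made up
t_raman = 240.0 - 1.5 * (z - 25.0)              # toy lower-channel profile
t_rayleigh = 239.0 - 1.5 * (z - 25.0)           # toy upper channel, 1 K offset
t_merged = merge_profiles(z, t_raman, t_rayleigh, z_mid=35.0, scale=2.0)
```

The merged profile follows the lower channel at the bottom of the overlap, the upper channel at the top, and stays between the two everywhere, so the handover introduces no discontinuity.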
Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace
Zhang, Cheng; Lai, Chun-Liang; Pettitt, B. Montgomery
2016-01-01
The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal errors. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid and an aqueous protein solution. PMID:27453632
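Direct inversion in the iterative subspace (DIIS) accelerates a fixed-point iteration by combining stored iterates with coefficients that minimize the combined residual, subject to the coefficients summing to one. The generic sketch below (not the WHAM/MBAR code itself) applies it to a scalar fixed-point problem as a stand-in:

```python
import numpy as np

def diis_fixed_point(g, x0, max_hist=5, tol=1e-10, max_iter=200):
    """Accelerate the fixed-point iteration x <- g(x) by DIIS extrapolation
    over the stored iterates."""
    xs, rs = [], []
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        gx = g(x)
        r = gx - x                         # fixed-point residual
        if np.linalg.norm(r) < tol:
            return gx
        xs.append(gx)
        rs.append(r)
        if len(xs) > max_hist:             # keep a short history
            xs.pop(0)
            rs.pop(0)
        n = len(rs)
        # Bordered Gram system enforcing the sum-to-one constraint
        B = np.zeros((n + 1, n + 1))
        for i in range(n):
            for j in range(n):
                B[i, j] = rs[i] @ rs[j]
        B[n, :n] = 1.0
        B[:n, n] = 1.0
        rhs = np.zeros(n + 1)
        rhs[n] = 1.0
        # lstsq copes with the often near-singular Gram matrix
        c = np.linalg.lstsq(B, rhs, rcond=None)[0][:n]
        x = sum(ci * xi for ci, xi in zip(c, xs))
    return x

# Fixed point of cos(x): x* = 0.739085...
x_star = diis_fixed_point(np.cos, np.array([0.5]))
```

With one stored iterate DIIS reduces to the plain iteration; with two or more it behaves like a secant-type extrapolation and converges much faster than successive substitution.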
Systematic method of generating new integrable systems via inverse Miura maps
Tsuchida, Takayuki
2011-05-15
We provide a new natural interpretation of the Lax representation for an integrable system; that is, the spectral problem is the linearized form of a Miura transformation between the original system and a modified version of it. On the basis of this interpretation, we formulate a systematic method of identifying modified integrable systems that can be mapped to a given integrable system by Miura transformations. Thus, this method can be used to generate new integrable systems from known systems through inverse Miura maps; it can be applied to both continuous and discrete systems in 1 + 1 dimensions as well as in 2 + 1 dimensions. The effectiveness of the method is illustrated using examples such as the nonlinear Schroedinger (NLS) system, the Zakharov-Ito system (two-component KdV), the three-wave interaction system, the Yajima-Oikawa system, the Ablowitz-Ladik lattice (integrable space-discrete NLS), and two (2 + 1)-dimensional NLS systems.
The Novikov-Veselov equation and the inverse scattering method: II. Computation
NASA Astrophysics Data System (ADS)
Lassas, M.; Mueller, J. L.; Siltanen, S.; Stahel, A.
2012-06-01
The Novikov-Veselov (NV) equation is a (2 + 1)-dimensional nonlinear evolution equation generalizing the (1 + 1)-dimensional Korteweg-deVries equation. The inverse scattering method (ISM) is applied for numerical solution of the NV equation. It is the first time the ISM is used as a computational tool for computing evolutions of a (2 + 1)-dimensional integrable system. In addition, a semi-implicit method is given for the numerical solution of the NV equation using finite differences in the spatial variables, Crank-Nicolson in time, and fast Fourier transforms for the auxiliary equation. Evolutions of initial data satisfying the hypotheses of part I of this paper are computed by the two methods and are observed to coincide with significant accuracy.
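Crank-Nicolson time stepping, named above, averages the explicit and implicit updates to get second-order accuracy in time. A minimal sketch applied to the 1D heat equation u_t = u_xx with Dirichlet boundaries (a far simpler stand-in than the NV system; grid sizes are arbitrary):

```python
import numpy as np

# Crank-Nicolson for u_t = u_xx on [0, 1] with u = 0 at both ends
nx, nt = 100, 200
dx, dt = 1.0 / nx, 1e-4
x = np.linspace(0.0, 1.0, nx + 1)
u = np.sin(np.pi * x)                       # initial condition

r = dt / (2.0 * dx * dx)
# Implicit operator acting on the interior points
A = (np.diag(np.full(nx - 1, 1.0 + 2.0 * r))
     + np.diag(np.full(nx - 2, -r), 1)
     + np.diag(np.full(nx - 2, -r), -1))

for _ in range(nt):
    interior = u[1:-1]
    # Explicit half-step on the right-hand side, implicit solve on the left
    rhs = interior + r * (u[2:] - 2.0 * interior + u[:-2])
    u[1:-1] = np.linalg.solve(A, rhs)
# The exact solution decays as exp(-pi^2 t) with t = nt * dt = 0.02
```

For a production solver one would use a banded or sparse solve instead of the dense `np.linalg.solve`, but the averaging structure of the scheme is the same.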
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational cost and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance-matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, so that a better correlation between simulation and test is achieved. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and of the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
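The surrogate idea (fit a polynomial response surface to a few expensive model runs, then Monte Carlo sample the cheap surrogate) can be sketched as follows. The "expensive model", parameter ranges, and distributions are invented; the paper's RSM is an incomplete fourth-order polynomial rather than this full quadratic:

```python
import numpy as np

def expensive_model(k1, k2):
    # Stand-in for a costly finite-element run: a frequency-like quantity
    return np.sqrt(k1 + k2) / (2.0 * np.pi)

# Design of experiments: full factorial grid over two stiffness parameters
k1g, k2g = np.meshgrid(np.linspace(0.8, 1.2, 5), np.linspace(1.8, 2.2, 5))
K = np.column_stack([k1g.ravel(), k2g.ravel()])
y = expensive_model(K[:, 0], K[:, 1])

def quad_features(p):
    # Quadratic response surface: 1, k1, k2, k1^2, k1*k2, k2^2
    return np.column_stack([np.ones(len(p)), p[:, 0], p[:, 1],
                            p[:, 0] ** 2, p[:, 0] * p[:, 1], p[:, 1] ** 2])

coef, *_ = np.linalg.lstsq(quad_features(K), y, rcond=None)

# Monte Carlo on the surrogate: 10^5 samples cost almost nothing
rng = np.random.default_rng(3)
s = rng.normal([1.0, 2.0], 0.05, size=(100_000, 2))
y_mc = quad_features(s) @ coef
mean, std = y_mc.mean(), y_mc.std()
```

Only 25 expensive runs are needed to train the surface; the 10^5 propagation samples then evaluate a six-term polynomial instead of the full model.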
NASA Astrophysics Data System (ADS)
Zhdanov, Michael S.; Chernyavskiy, Alexey
2004-12-01
In this paper, we develop a new method of three-dimensional (3D) inversion of multi-transmitter electromagnetic data. We apply the spectral Lanczos decomposition method (SLDM) in the framework of the localized quasi-linear inversion introduced by Zhdanov and Tartaras (2002 Geophys. J. Int. 148 506-19). The SLDM makes it possible to find the regularized solution of the ill-posed inverse problem for all values of the regularization parameter α at once. As an illustration, we apply this technique for interpretation of the helicopter-borne electromagnetic (HEM) data over inhomogeneous geoelectrical structures, typical for mining exploration. This technique helps to accelerate HEM data inversion and provides a stable and focused image of the geoelectrical target. The new method and the corresponding computer code have been tested on synthetic data. The case history includes interpretation of HEM data collected by INCO Exploration in the Voisey's Bay area of Canada.
The Dark Matter filament between Abell 222/223
NASA Astrophysics Data System (ADS)
Dietrich, Jörg P.; Werner, Norbert; Clowe, Douglas; Finoguenov, Alexis; Kitching, Tom; Miller, Lance; Simionescu, Aurora
2016-10-01
Weak lensing detections and measurements of filaments have long been elusive. The reason is that the low density contrast of filaments generally pushes the weak lensing signal to unobservably low levels. To map the dark matter in filaments nevertheless, exquisite data and unusual systems are necessary. SuprimeCam observations of the supercluster system Abell 222/223 provided the required combination of excellent-seeing images and a fortuitous alignment of the filament with the line of sight. This boosted the lensing signal to a detectable level and led to the first weak lensing mass measurement of a large-scale structure filament. The filament connecting Abell 222 and Abell 223 is now the only one traced by the galaxy distribution, dark matter, and X-ray emission from the hottest phase of the warm-hot intergalactic medium. The combination of these data allows us to put the first constraints on the hot gas fraction in filaments.
Inverse methods for estimating primary input signals from time-averaged isotope profiles
NASA Astrophysics Data System (ADS)
Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.
2005-08-01
Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
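The minimum-length solution of A m = d used above has the closed form m = A^T (A A^T)^(-1) d when A has full row rank. A sketch with a boxcar-averaging matrix standing in for the amelogenesis averaging kernel (the real A encodes tooth-specific maturation geometry and sampling):

```python
import numpy as np

# Forward model: each measured value is a moving (boxcar) average of the
# input signal -- a crude stand-in for the time-averaging matrix A
n, width = 40, 8
A = np.zeros((n - width + 1, n))
for i in range(A.shape[0]):
    A[i, i:i + width] = 1.0 / width

rng = np.random.default_rng(1)
m_true = np.sin(np.linspace(0.0, 2.0 * np.pi, n))   # "seasonal" input signal
d = A @ m_true + 0.005 * rng.standard_normal(A.shape[0])

# Minimum-length solution of the underdetermined system A m = d
m_est = A.T @ np.linalg.solve(A @ A.T, d)
```

Because A has full row rank, the minimum-length estimate reproduces the measured (time-averaged) profile exactly while selecting the shortest input vector consistent with it; noisier data would call for adding damping to the normal matrix.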
A geometric calibration method for inverse geometry computed tomography using P-matrices
NASA Astrophysics Data System (ADS)
Slagowski, Jordan M.; Dunkerley, David A. P.; Hatt, Charles R.; Speidel, Michael A.
2016-03-01
Accurate and artifact-free reconstruction of tomographic images requires precise knowledge of the imaging system geometry. This work proposes a novel projection matrix (P-matrix) based calibration method to enable C-arm inverse geometry CT (IGCT). The method is evaluated for scanning-beam digital x-ray (SBDX), a C-arm mounted inverse geometry fluoroscopic technology. A helical configuration of fiducials is imaged at each gantry angle in a rotational acquisition. For each gantry angle, digital tomosynthesis is performed at multiple planes and a composite image analogous to a cone-beam projection is generated from the plane stack. The geometry of the C-arm, source array, and detector array is determined at each angle by constructing a parameterized 3D-to-2D projection matrix that minimizes the sum-of-squared deviations between measured and projected fiducial coordinates. Simulations were used to evaluate calibration performance with translations and rotations of the source and detector. In a geometry with 1 mm translation of the central ray relative to the axis of rotation and 1 degree yaw of the detector and source arrays, the maximum error in the recovered translational parameters was 0.4 mm and the maximum error in the rotation parameter was 0.02 degrees. The relative root-mean-square error in a reconstruction of a numerical thorax phantom was 0.4% using the calibration method, versus 7.7% without calibration. Changes in source-detector distance were the most challenging to estimate. Reconstruction of experimental SBDX data using the proposed method eliminated double contour artifacts present in a non-calibrated reconstruction. The proposed IGCT geometric calibration method reduces image artifacts when uncertainties exist in system geometry.
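Fitting a 3D-to-2D projection matrix to fiducial coordinates can be sketched with the standard direct linear transform (DLT), where the homogeneous least-squares problem min |A p| with |p| = 1 is solved by SVD. The pinhole geometry and fiducial cloud below are hypothetical, not SBDX parameters or the paper's parameterization:

```python
import numpy as np

def estimate_p_matrix(X, x):
    """Direct linear transform: recover a 3x4 projection matrix P (up to
    scale) from N >= 6 3D fiducials X (N,3) and their 2D images x (N,2)."""
    rows = []
    for (a, b, c), (u, v) in zip(X, x):
        Xh = [a, b, c, 1.0]
        zero = [0.0, 0.0, 0.0, 0.0]
        rows.append(Xh + zero + [-u * s for s in Xh])   # p1.X - u p3.X = 0
        rows.append(zero + Xh + [-v * s for s in Xh])   # p2.X - v p3.X = 0
    A = np.asarray(rows)
    # Right singular vector of the smallest singular value minimizes |A p|
    return np.linalg.svd(A)[2][-1].reshape(3, 4)

# Synthetic check with a made-up pinhole camera
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(10, 3))        # fiducial cloud in front of camera
Xh = np.hstack([X, np.ones((10, 1))])
proj = Xh @ P_true.T
x = proj[:, :2] / proj[:, 2:3]

P_est = estimate_p_matrix(X, x)
x_rep = Xh @ P_est.T
x_rep = x_rep[:, :2] / x_rep[:, 2:3]            # reprojected fiducials
```

With noisy measured coordinates the same linear estimate serves as the starting point for the nonlinear minimization of the sum-of-squared reprojection deviations described in the abstract.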
Arnal, B; Pinton, G; Garapon, P; Pernot, M; Fink, M; Tanter, M
2013-10-01
Shear wave imaging (SWI) maps soft tissue elasticity by measuring shear wave propagation with ultrafast ultrasound acquisitions (10 000 frames s(-1)). These spatiotemporal data can be used as input for an inverse problem that determines a shear modulus map. Common inversion methods are local: the shear modulus at each point is calculated based on the values of its neighbours (e.g. time-of-flight, wave equation inversion). However, these approaches are sensitive to information loss such as noise or a lack of backscattered signal. In this paper, we evaluate the benefits of a global approach to elasticity inversion using a least-squares formulation, which is derived from full waveform inversion in geophysics, known as the adjoint method. We simulate an acoustic waveform in a medium with a soft and a hard lesion. For this initial application, full elastic propagation and viscosity are ignored. We demonstrate that the reconstruction of the shear modulus map is robust with a non-uniform background or in the presence of noise with regularization. Compared to regular local inversions, the global approach leads to an increase in contrast (~3 dB) and a decrease in the quantification error (~2%). We demonstrate that the inversion is reliable in cases where no signal is measured within the inclusions, such as hypoechoic lesions, which could have an impact on medical diagnosis.
Determination of thermal load in film cooled bipropellant thrust chambers by an inverse method
NASA Astrophysics Data System (ADS)
Hinckel, J. N.; Savonov, R. I.; Patire, H.
2013-03-01
A method to obtain the heat load on the internal wall of a rocket thrust chamber using an inverse problem approach is described. According to the "classical" approach, the heat load on the internal wall of the chamber is assumed to be the product of a heat transfer coefficient and the difference between the adiabatic wall temperature and the local wall surface temperature. The time-dependent temperature distribution of the external wall of the thruster chamber is used to obtain empirical curve fits for the temperature profile of the near-wall flow field (adiabatic wall temperature) and the heat transfer coefficient profile. The applicability of the method is verified by applying it to three different problems: a model problem, an analytical solution, and a set of experimental data.
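The "classical" relation stated above is simply q = h (T_aw - T_w). A minimal sketch of that relation (the coefficient and temperature values below are invented for illustration, not taken from the paper):

```python
def heat_flux(h, T_aw, T_wall):
    """Convective heat load per the classical correlation-type model:
    q = h * (T_adiabatic_wall - T_wall)."""
    return h * (T_aw - T_wall)

# hypothetical values: h in W m^-2 K^-1, temperatures in K
q = heat_flux(h=2.0e3, T_aw=3200.0, T_wall=800.0)   # -> 4.8e6 W m^-2
```

The inverse step in the paper runs this relation backwards: external-wall temperature histories constrain h and T_aw rather than the other way around.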
An efficient, advanced regularized inversion method for highly parameterized environmental models
NASA Astrophysics Data System (ADS)
Skahill, B. E.; Baggett, J. S.
2008-12-01
The Levenberg-Marquardt method of computer-based parameter estimation can be readily modified in cases of high parameter insensitivity and correlation by the inclusion of various regularization devices to maintain numerical stability and robustness, including, for example, Tikhonov regularization and truncated singular value decomposition. With Tikhonov regularization, where parameters or combinations of parameters cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that are decreed to be realistic by the modeler. Tikhonov schemes provide a mechanism for assimilation of valuable "outside knowledge" into the inversion process, with the result that parameter estimates, thus informed by a modeler's expertise, are more suitable for use in the making of important predictions by that model than would otherwise be the case. However, by maintaining the high dimensionality of the adjustable parameter space, they can be computationally burdensome. Moreover, while Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. We will present results associated with development efforts that include an accelerated Levenberg-Marquardt local search algorithm adapted for Tikhonov regularization, and a technique which allows relative regularization weights to be estimated as parameters through the calibration process itself (Doherty and Skahill, 2006). This new method, encapsulated in the MICUT software (Skahill et al., 2008), will be compared, in terms of efficiency and enforcement of regularization relationships, with the SVD-Assist method (Tonkin and Doherty, 2005) contained in the popular PEST package by considering various watershed
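The core idea of a Tikhonov-regularized Levenberg-Marquardt iteration can be sketched as follows. This is not the MICUT implementation; the rank-deficient toy model, damping value, and preferred-value vector are all invented for illustration:

```python
import numpy as np

def lm_tikhonov(forward, jac, x0, y_obs, x_pref, lam=0.1, mu=1e-2, n_iter=50):
    """Levenberg-Marquardt iteration with a Tikhonov penalty
    lam * ||x - x_pref||^2 that steers poorly constrained parameters
    toward values the modeler deems realistic."""
    x = x0.astype(float).copy()
    n = x.size
    for _ in range(n_iter):
        r = y_obs - forward(x)                   # data residual
        J = jac(x)
        # normal equations of the damped, regularized subproblem
        A = J.T @ J + (lam + mu) * np.eye(n)
        g = J.T @ r - lam * (x - x_pref)
        x = x + np.linalg.solve(A, g)
    return x

# toy ill-posed problem: two perfectly correlated parameters
G = np.array([[1.0, 1.0],
              [2.0, 2.0]])                       # rank-deficient design
y = np.array([3.0, 6.0])                         # consistent with x1 + x2 = 3
x_pref = np.array([1.0, 1.0])                    # "outside knowledge"
x_hat = lm_tikhonov(lambda x: G @ x, lambda x: G, np.zeros(2), y, x_pref)
# the data fix only x1 + x2; the penalty resolves the remaining
# non-uniqueness by pulling toward x_pref, giving a symmetric solution
```

Without the penalty the normal equations here are singular; with it, the undetermined combination of parameters is supplied by the preferred values, which is exactly the mechanism the abstract describes.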
Joining direct and indirect inverse calibration methods to characterize karst, coastal aquifers
NASA Astrophysics Data System (ADS)
De Filippis, Giovanna; Foglia, Laura; Giudici, Mauro; Mehl, Steffen; Margiotta, Stefano; Negri, Sergio
2016-04-01
Parameter estimation is extremely relevant for accurate simulation of groundwater flow. Parameter values for models of large-scale catchments are usually derived from a limited set of field observations, which can rarely be obtained in a straightforward way from field tests or laboratory measurements on samples, due to a number of factors, including measurement errors and inadequate sampling density. Indeed, a wide gap exists between the local scale, at which most of the observations are taken, and the regional or basin scale, at which the planning and management decisions are usually made. For this reason, geologic information and field data are generally incorporated by zoning the parameter fields. However, pure zoning does not perform well in the case of fairly complex aquifers and this is particularly true for karst aquifers. In fact, the support of the hydraulic conductivity measured in the field is normally much smaller than the cell size of the numerical model, so it should be upscaled to a scale consistent with that of the numerical model discretization. Automatic inverse calibration is a valuable procedure to identify model parameter values by conditioning on observed, available data, limiting the subjective evaluations introduced with the trial-and-error technique. Many approaches have been proposed to solve the inverse problem. Generally speaking, inverse methods fall into two groups: direct and indirect methods. Direct methods allow determination of hydraulic conductivities from the groundwater flow equations which relate the conductivity and head fields. Indirect methods, instead, can handle any type of parameters, independently from the mathematical equations that govern the process, and condition parameter values and model construction on measurements of model output quantities, compared with the available observation data, through the minimization of an objective function. Both approaches have pros and cons, depending also on model complexity. For
NASA Technical Reports Server (NTRS)
Cerracchio, Priscilla; Gherlone, Marco; Di Sciuva, Marco; Tessler, Alexander
2013-01-01
The marked increase in the use of composite and sandwich material systems in aerospace, civil, and marine structures leads to the need for integrated Structural Health Management systems. A key capability to enable such systems is the real-time reconstruction of structural deformations, stresses, and failure criteria that are inferred from in-situ, discrete-location strain measurements. This technology is commonly referred to as shape- and stress-sensing. Presented herein is a computationally efficient shape- and stress-sensing methodology that is ideally suited for applications to laminated composite and sandwich structures. The new approach employs the inverse Finite Element Method (iFEM) as a general framework and the Refined Zigzag Theory (RZT) as the underlying plate theory. A three-node inverse plate finite element is formulated. The element formulation enables robust and efficient modeling of plate structures instrumented with strain sensors that have arbitrary positions. The methodology leads to a set of linear algebraic equations that are solved efficiently for the unknown nodal displacements. These displacements are then used at the finite element level to compute full-field strains, stresses, and failure criteria that are in turn used to assess structural integrity. Numerical results for multilayered, highly heterogeneous laminates demonstrate the unique capability of this new formulation for shape- and stress-sensing.
A 1400-MHz survey of 1478 Abell clusters of galaxies
NASA Technical Reports Server (NTRS)
Owen, F. N.; White, R. A.; Hilldrup, K. C.; Hanisch, R. J.
1982-01-01
Observations of 1478 Abell clusters of galaxies with the NRAO 91-m telescope at 1400 MHz are reported. The measured beam shape was deconvolved from the Gaussian fits to the sources in order to estimate source sizes and position angles. All detected sources within 0.5 corrected Abell cluster radii are listed, including the cluster number, richness class, distance class, magnitude of the tenth brightest galaxy, redshift estimate, corrected cluster radius in arcmin, right ascension and error, declination and error, total flux density and error, and angular structure for each source.
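For a circular Gaussian beam and fit, the deconvolution step amounts to subtracting widths in quadrature. A schematic sketch (the survey actually fits elliptical Gaussians with position angles; the beam and fitted widths below are invented values):

```python
import numpy as np

def deconvolve_size(fit_fwhm, beam_fwhm):
    """Intrinsic source FWHM from a Gaussian fit, removing a Gaussian
    beam in quadrature; sources at or below the beam size return 0."""
    return np.sqrt(np.maximum(fit_fwhm ** 2 - beam_fwhm ** 2, 0.0))

beam = 10.5                           # beam FWHM, arcmin (invented)
fits = np.array([10.5, 12.0, 21.0])   # fitted FWHMs, arcmin (invented)
sizes = deconvolve_size(fits, beam)   # first source is unresolved
```

The same quadrature relation is applied along the major and minor axes of an elliptical fit to recover the source size and position angle.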
The Merger Dynamics of Abell 2061
NASA Astrophysics Data System (ADS)
Bailey, Avery; Sarazin, Craig L.; Clarke, Tracy E.; Chatzikos, Marios; Hogge, Taylor; Wik, Daniel R.; Rudnick, Lawrence; Farnsworth, Damon; Van Weeren, Reinout J.; Brown, Shea
2016-04-01
Abell 2061, a galaxy cluster at a redshift of z = 0.0784 in the Corona Borealis Supercluster, displays features in both the X-ray and radio bands indicative of merger activity. Observations by the GBT and the Westerbork Northern Sky Survey (WENSS) have indicated the presence of an extended, central radio halo/relic coincident with the cluster's main X-ray emission and a bright radio relic to the SW of the center of the cluster. Previous observations by ROSAT, Beppo-SAX, and Chandra show an elongated structure (referred to as the 'Plume'), emitting in the soft X-ray and stretching to the NE of the cluster's center. The Beppo-SAX and Chandra observations also suggest the presence of a hard X-ray shock slightly NE of the cluster's center. Here we present the details of an August 2013 XMM-Newton observation of A2061, which has a larger field of view and a longer exposure (48.6 ks) than the previous Chandra observation. We present images displaying the cluster's soft and hard X-ray emission and also a temperature map of the cluster. This temperature map highlights the presence of a previously unseen cool region of the cluster which we hypothesize to be the cool core of one of the subclusters involved in this merger. We also discuss the structural similarity of this cluster with a simulated high mass-ratio offset cluster merger taken from the Simulation Library of Astrophysical cluster Mergers (SLAM). This simulation suggests that the Plume is gas from the cool core of a subcluster which is now falling back into the center of the cluster after initial core passage.
Moissenet, Florent; Chèze, Laurence; Dumas, Raphaël
2012-06-01
Inverse dynamics combined with a constrained static optimization analysis has often been proposed to solve the muscular redundancy problem. Typically, the optimization problem consists of a cost function to be minimized and some equality and inequality constraints to be fulfilled. Penalty-based and Lagrange multipliers methods are common optimization methods for the management of equality constraints. More recently, the pseudo-inverse method has been introduced in the field of biomechanics. The purpose of this paper is to evaluate the ability and the efficiency of this new method to solve the muscular redundancy problem, by comparing respectively the prediction of musculo-tendon forces and its cost-effectiveness against common optimization methods. Since algorithm efficiency and the fulfillment of equality constraints depend strongly on the optimization method, a two-phase procedure is proposed in order to identify and compare the complexity of the cost function, the number of iterations needed to find a solution, and the computational time of the penalty-based method, the Lagrange multipliers method, and the pseudo-inverse method. Using a 2D knee musculo-skeletal model in an isometric context, the study of the cost function isovalue curves shows that the solution space is 2D with the penalty-based method, 3D with the Lagrange multipliers method, and 1D with the pseudo-inverse method. The minimal cost function area (defined as the area corresponding to 5% over the minimal cost) obtained for the pseudo-inverse method is very limited and lies along the solution space line, whereas the minimal cost function areas obtained for the other methods are larger or more complex. Moreover, when using a 3D lower limb musculo-skeletal model during a gait cycle simulation, the pseudo-inverse method provides the lowest number of iterations, while the Lagrange multipliers and pseudo-inverse methods have almost the same computational time. The pseudo-inverse method, by providing a better suited cost function and an
NASA Astrophysics Data System (ADS)
van der Hilst, R. D.; de Hoop, M. V.; Shim, S. H.; Shang, X.; Wang, P.; Cao, Q.
2012-04-01
Over the past three decades, tremendous progress has been made with the mapping of mantle heterogeneity and with the understanding of these structures in terms of, for instance, the evolution of Earth's crust, continental lithosphere, and thermo-chemical mantle convection. Converted wave imaging (e.g., receiver functions) and reflection seismology (e.g. SS stacks) have helped constrain interfaces in crust and mantle; surface wave dispersion (from earthquake or ambient noise signals) characterizes wavespeed variations in continental and oceanic lithosphere, and body wave and multi-mode surface wave data have been used to map trajectories of mantle convection and delineate mantle regions of anomalous elastic properties. Collectively, these studies have revealed substantial ocean-continent differences and suggest that convective flow is strongly influenced by but permitted to cross the upper mantle transition zone. Many questions have remained unanswered, however, and further advances in understanding require more accurate depictions of Earth's heterogeneity at a wider range of length scales. To meet this challenge we need new observations—more, better, and different types of data—and methods that help us extract and interpret more information from the rapidly growing volumes of broadband data. The huge data volumes and the desire to extract more signal from them means that we have to go beyond 'business as usual' (that is, simplified theory, manual inspection of seismograms, …). Indeed, it inspires the development of automated full wave methods, both for tomographic delineation of smooth wavespeed variations and the imaging (for instance through inverse scattering) of medium contrasts. Adjoint tomography and reverse time migration, which are closely related wave equation methods, have begun to revolutionize seismic inversion of global and regional waveform data. In this presentation we will illustrate this development - and its promise - drawing from our work
Love wave tomography in southern Africa from a two-plane-wave inversion method
NASA Astrophysics Data System (ADS)
Li, Aibing; Li, Lun
2015-08-01
Array measurements of surface wave phase velocity can be biased by multipath arrivals. A two-plane-wave (TPW) inversion method, in which the incoming wavefield is represented by the interference of two plane waves, is able to account for the multipath effect and solve for laterally varying phase velocity. Despite broad applications of the TPW method, its usage has been limited to Rayleigh waves. In this study, we have modified the TPW approach and applied it to Love waves. The main modifications include decomposing the Love wave amplitude on the transverse component into x and y components in a local Cartesian system for each earthquake and using both components in the inversion. Such decomposition is also applied to the two plane waves to predict the incoming wavefield of an earthquake. We utilize fundamental mode Love wave data recorded at 85 broad-band stations from 69 distant earthquakes and solve for phase velocity in nine frequency bands with centre periods ranging from 34 to 100 s. The average phase velocity in southern Africa increases from 4.30 km s-1 at 34 s to 4.87 km s-1 at 100 s. Compared with predicted Love wave phase velocities from the published 1-D SV velocity model and radial anisotropy model in the region, these values are compatible from 34 to 50 s and slightly higher beyond 50 s, indicating radial anisotropy of VSH > VSV in the shallow upper mantle. A high Love wave velocity anomaly is imaged in the central and southern Kaapvaal craton at all periods, reflecting a cold and depleted cratonic lithosphere. A low velocity anomaly appears in the Bushveld Complex from 34 to 50 s, which can be interpreted as being caused by high iron content from an intracratonic magma intrusion. The modified TPW method provides a new way to measure Love wave phase velocities in a regional array, which are essential in developing radial anisotropy models and understanding the Earth structure in the crust and upper mantle.
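The two-plane-wave parameterization itself is compact enough to sketch: the wavefield at each station is modeled as the interference of two plane waves sharing one phase velocity, and the wave parameters are fit by least squares. The frequency, station geometry, and starting values below are invented for illustration, and the real method additionally inverts for lateral velocity variation and applies the transverse-component decomposition described above:

```python
import numpy as np
from scipy.optimize import least_squares

OMEGA = 2 * np.pi * 0.02                        # angular frequency, ~50 s period

def tpw_field(p, xy):
    """Two interfering plane waves sharing one phase velocity c;
    p = [A1, phi1, theta1, A2, phi2, theta2, c]."""
    A1, p1, t1, A2, p2, t2, c = p
    k = OMEGA / c
    k1 = k * np.array([np.cos(t1), np.sin(t1)])
    k2 = k * np.array([np.cos(t2), np.sin(t2)])
    return (A1 * np.exp(1j * (xy @ k1 + p1)) +
            A2 * np.exp(1j * (xy @ k2 + p2)))

def residuals(p, xy, d_obs):
    r = tpw_field(p, xy) - d_obs
    return np.concatenate([r.real, r.imag])     # real-valued residual vector

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 500.0, size=(40, 2))      # station coordinates, km
p_true = [1.0, 0.3, 0.10, 0.4, -0.5, 0.35, 4.3]  # c = 4.3 km/s
d_obs = tpw_field(p_true, xy)                   # noiseless synthetic data

p0 = [0.9, 0.2, 0.08, 0.35, -0.4, 0.30, 4.2]    # starting model near truth
fit = least_squares(residuals, p0, args=(xy, d_obs))
c_recovered = fit.x[-1]                         # phase velocity, km/s
```

Fitting the second plane wave is what absorbs the multipath energy that would otherwise bias a single-plane-wave phase velocity estimate.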
NASA Astrophysics Data System (ADS)
Zhang, B.; Xu, C. L.; Wang, S. M.
2016-07-01
The infrared temperature measurement technique has been applied in various fields, such as thermal efficiency analysis, environmental monitoring, industrial facility inspections, and remote temperature sensing. When measuring the metal surface temperature of superheater tubes by infrared means, the outer wall of the metal pipe is covered by radiatively participating flue gas. This means that the traditional infrared measurement technique will lead to intolerable measurement errors due to the absorption and scattering of the flue gas. In this paper, an infrared measurement method for a metal surface in flue gas is investigated theoretically and experimentally. The spectral emissivity of the metal surface, and the spectral absorption and scattering coefficients of the radiatively participating flue gas, are retrieved simultaneously using an inverse method called quantum particle swarm optimization. Meanwhile, the detected radiation energy simulated using a forward simulation method (named the source multi-flux method) is set as the input of the retrieval. Then, the temperature of the metal surface detected by an infrared CCD camera is modified using the source multi-flux method in combination with these retrieved physical properties. Finally, an infrared measurement system for metal surface temperature is built to assess the proposed method. Experimental results show that the modified temperature is closer to the true value than the directly measured temperature.
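The retrieval pattern, a swarm optimizer minimizing the mismatch between measured and modeled radiances, can be sketched with a plain particle swarm optimizer. The paper uses a quantum-behaved variant and the source multi-flux forward model; the classical PSO and the two-parameter toy radiance model below are stand-ins invented for illustration:

```python
import numpy as np

def pso_minimize(f, lo, hi, n_particles=30, n_iter=200, seed=0):
    """Plain particle swarm optimizer: each particle tracks its personal
    best and is attracted to the swarm's global best."""
    rng = np.random.default_rng(seed)
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    p_best, p_val = x.copy(), np.array([f(xi) for xi in x])
    g_best = p_best[p_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(xi) for xi in x])
        improved = vals < p_val
        p_best[improved], p_val[improved] = x[improved], vals[improved]
        g_best = p_best[p_val.argmin()].copy()
    return g_best

# hypothetical two-parameter retrieval: surface emissivity and a gas
# absorption coefficient from radiances along several path lengths
paths = np.array([0.5, 1.0, 2.0])

def radiance(p, L):
    eps, kappa = p
    tau = np.exp(-kappa * L)                 # gas transmittance
    return eps * tau + 0.1 * (1.0 - tau)     # surface term + gas emission

p_true = np.array([0.8, 0.5])
meas = radiance(p_true, paths)               # synthetic "measurements"
cost = lambda p: np.sum((radiance(p, paths) - meas) ** 2)
p_hat = pso_minimize(cost, np.array([0.0, 0.0]), np.array([1.0, 2.0]))
```

Once such properties are retrieved, the forward model can be rerun to correct the camera temperature, which is the role the source multi-flux method plays in the paper.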
NASA Astrophysics Data System (ADS)
Gao, Yingjie; Zhang, Jinhai; Yao, Zhenxing
2016-06-01
The symplectic integration method is popular in high-accuracy numerical simulations when discretizing temporal derivatives; however, it still suffers from time-dispersion error when the temporal interval is coarse, especially for long-term simulations and large-scale models. We apply the inverse time dispersion transform (ITDT) to the third-order symplectic integration method to reduce the time-dispersion error. First, we adopt the pseudospectral algorithm for the spatial discretization and the third-order symplectic integration method for the temporal discretization. Then, we apply the ITDT to eliminate time-dispersion error from the synthetic data. As a post-processing method, the ITDT can be easily cascaded in traditional numerical simulations. We implement the ITDT in one typical existing third-order symplectic scheme and compare its performance with that of the conventional second-order scheme and the rapid expansion method. Theoretical analyses and numerical experiments show that the ITDT can significantly reduce the time-dispersion error, especially for long travel times. The implementation of the ITDT requires some additional computations on correcting the time-dispersion error, but it allows us to use the maximum temporal interval under stability conditions; thus, its final computational efficiency would be higher than that of the traditional symplectic integration method for long-term simulations. With the aid of the ITDT, we can obtain much more accurate simulation results but with a lower computational cost.
NASA Astrophysics Data System (ADS)
Zhao, Jingtao; Peng, Suping; Du, Wenfeng
2016-02-01
We consider a sparsity-constrained inversion method for detecting seismic small-scale discontinuities, such as edges, faults and cavities, which provide rich information about petroleum reservoirs. However, where there is karstification and interference caused by macro-scale fault systems, these seismic small-scale discontinuities are hard to identify with currently available discontinuity-detection methods. In the subsurface, these small-scale discontinuities are separately and sparsely distributed, and their seismic responses occupy a very small part of the seismic image. Considering these sparsity and non-smooth features, we propose an effective L2-L0 norm model for improvement of their resolution. First, we apply a low-order plane-wave destruction method to eliminate macro-scale smooth events. Then, based on the residual data, we use a nonlinear structure-enhancing filter to build an L2-L0 norm model. In searching for its solution, an efficient and fast convergent penalty decomposition method is employed. The proposed method can achieve a significant improvement in enhancing seismic small-scale discontinuities. Numerical experiments and a field-data application demonstrate the effectiveness and feasibility of the proposed method in studying the relevant geology of these reservoirs.
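The flavor of an L2-L0 problem, a least-squares data fit subject to a hard sparsity constraint, can be sketched with iterative hard thresholding. This is a generic stand-in, not the paper's penalty decomposition solver or its seismic filters, and the random measurement matrix and spike positions are invented:

```python
import numpy as np

def iht(A, y, k, n_iter=500):
    """Iterative hard thresholding for min ||A x - y||^2  s.t. ||x||_0 <= k:
    a gradient step followed by projection onto the k largest entries."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - t * A.T @ (A @ x - y)
        keep = np.argsort(np.abs(x))[-k:]        # indices of k largest entries
        mask = np.zeros_like(x)
        mask[keep] = 1.0
        x = x * mask                             # hard threshold (L0 projection)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((80, 200)) / np.sqrt(80)  # random measurement matrix
x_true = np.zeros(200)
x_true[[20, 90, 150]] = [1.0, -0.8, 0.6]          # sparse "discontinuities"
y = A @ x_true
x_hat = iht(A, y, k=3)
```

The hard L0 projection is what preserves isolated, non-smooth features that an L2 penalty would smear, which matches the motivation given in the abstract.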
Mass Profile of Abell 2204: An X-Ray Analysis Using XMM-Newton Data
Lau, Travis
2003-09-05
The vast majority of the matter in the universe is of an unknown type. This matter is called dark matter by astronomers. The dark matter manifests itself only through gravitational interaction and is otherwise undetectable. The distribution of this matter can be better understood by studying the mass profiles of galaxy clusters. The X-ray emission of the galaxy cluster Abell 2204 was analyzed using archived data from the XMM-Newton space telescope. We analyze a 40 ks observation of Abell 2204 and present a radial temperature profile and a radial mass profile based on hydrostatic equilibrium calculations.
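The hydrostatic estimate rests on the standard relation M(<r) = -(k_B T r)/(G mu m_p) (dln n/dln r + dln T/dln r) for the enclosed mass. A sketch of that calculation with an assumed isothermal beta-model (all profile values below are invented, not the Abell 2204 data):

```python
import numpy as np

G = 6.674e-8          # gravitational constant, cgs
K_B = 1.381e-16       # Boltzmann constant, cgs
M_P = 1.673e-24       # proton mass, g
MU = 0.6              # mean molecular weight (assumed)

def hydrostatic_mass(r, ne, T):
    """Enclosed mass M(<r) from hydrostatic equilibrium, given radial
    profiles of gas density ne(r) and temperature T(r) in cgs units."""
    dlnn = np.gradient(np.log(ne), np.log(r))   # logarithmic density slope
    dlnT = np.gradient(np.log(T), np.log(r))    # logarithmic temperature slope
    return -(K_B * T * r) / (G * MU * M_P) * (dlnn + dlnT)

# isothermal beta-model: ne ~ (1 + (r/rc)^2)^(-3*beta/2)
rc, beta = 3.0e23, 0.6                 # ~100 kpc core radius (invented)
r = np.logspace(22.5, 24.5, 200)       # radii, cm
ne = 1e-2 * (1 + (r / rc) ** 2) ** (-1.5 * beta)
T = np.full_like(r, 8.0e7)             # ~7 keV isothermal gas (invented)
M = hydrostatic_mass(r, ne, T)         # enclosed mass profile, g
```

With these assumed values the outermost enclosed mass lands in the 10^14 solar-mass range typical of rich clusters, which is the kind of profile such an analysis produces.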
NASA Technical Reports Server (NTRS)
Fymat, A. L.
1976-01-01
The paper studies the inversion of the radiative transfer equation describing the interaction of electromagnetic radiation with atmospheric aerosols. The interaction can be considered as the propagation in the aerosol medium of two light beams: the direct beam in the line-of-sight attenuated by absorption and scattering, and the diffuse beam arising from scattering into the viewing direction, which propagates more or less in random fashion. The latter beam has single scattering and multiple scattering contributions. In the former case and for single scattering, the problem is reducible to first-kind Fredholm equations, while for multiple scattering it is necessary to invert partial integrodifferential equations. A nonlinear minimization search method, applicable to the solution of both types of problems, has been developed, and is applied here to the problem of monitoring aerosol pollution, namely the determination of the complex refractive index and size distribution of aerosol particles.
Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1989-01-01
Progress in the direct-inverse wing design method in curvilinear coordinates has been made. This includes the remedying of a spanwise oscillation problem and the assessment of grid skewness, viscous interaction, and the initial airfoil section on the final design. It was found that designing at every other spanwise station resolved the spanwise oscillation problem and produced the best results for the cases presented; that a smoothly varying grid is especially needed for accurate design at the wing tip; that the boundary layer displacement thicknesses must be included in a successful wing design; that the design of high and medium aspect ratio wings is possible with this code; and that the final airfoil section designed is fairly independent of the initial section.
Inverse analysis of water profile in starch by non-contact photopyroelectric method
NASA Astrophysics Data System (ADS)
Frandas, A.; Duvaut, T.; Paris, D.
2000-07-01
The photopyroelectric (PPE) method in a non-contact configuration was proposed to study water migration in starch sheets used for biodegradable packaging. A 1-D theoretical model was developed, allowing the study of samples having a water profile characterized by an arbitrary continuous function. An experimental setup was designed for this purpose, which included the choice of excitation source, detection of signals, signal and data processing, and cells for conditioning the samples. We report here the development of an inversion procedure allowing for the determination of the parameters that influence the PPE signal. This procedure led to the optimization of experimental conditions in order to identify the parameters related to the water profile in the sample, and to monitor the dynamics of the process.
NASA Technical Reports Server (NTRS)
Fu, L.-L.
1981-01-01
The circulation and meridional heat transport of the subtropical South Atlantic Ocean are determined through the application of the inverse method of Wunsch (1978) to hydrographic data from the IGY and METEOR expeditions. Meridional circulation results of the two data sets agree on a northward mass transport of about 20 million metric tons/sec for waters above the North Atlantic Deep Water (NADW), and a comparable southward transport of deep waters. Additional gross features held in common are the Benguela, South Equatorial and North Brazilian Coastal currents' northward transport of the Surface Water, and the deflection of the southward-flowing NADW from the South American Coast into the mid ocean by a seamount chain near 20 deg S. Total heat transport is equatorward, with a magnitude of 0.8 × 10^15 W near 30 deg S and indistinguishable from zero near 8 deg S.
Variational methods for direct/inverse problems of atmospheric dynamics and chemistry
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena
2013-04-01
We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. It is important that the accurate matching of numerical schemes has to be provided in the chain of objects: direct/adjoint problems - sensitivity relations - inverse problems, including assimilation of all available measurement data. To solve the problems we have developed a new enhanced set of cost-effective algorithms. The matched description of the multi-scale processes is provided by a specific choice of the variational principle functionals for the whole set of integrated models. Then all functionals of the variational principle are approximated in space and time by splitting and decomposition methods. Such an approach allows us to separately consider, for example, the space-time problems of atmospheric chemistry within decomposition schemes for the integral identity sum analogs of the variational principle at each time step and in each of the 3D finite volumes. To enhance the realization efficiency, the set of chemical reactions is divided into subsets related to the operators of production and destruction. Then the idea of Euler's integrating factors is applied within the local adjoint problem technique [1]-[3]. The analytical solutions of such adjoint problems play the role of integrating factors for the differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed to an equivalent system of integral equations. As a result we avoid the construction and inversion of preconditioning operators containing the Jacobian matrices which arise in traditional implicit schemes for ODE solution. This is the main advantage of our schemes. At the same time step, but at different stages of the "global" splitting scheme, the system of atmospheric dynamic equations is solved. For the convection-diffusion equations for all state functions in the integrated models we have developed the
Inverse Sensitivity/Uncertainty Methods Development for Nuclear Fuel Cycle Applications
NASA Astrophysics Data System (ADS)
Arbanas, G.; Dunn, M. E.; Williams, M. L.
2014-04-01
The Standardized Computer Analyses for Licensing Evaluation (SCALE) software package developed at the Oak Ridge National Laboratory includes codes that propagate uncertainties available in the nuclear data libraries to compute uncertainties in nuclear application performance parameters. We report on our recent efforts to extend this capability to develop an inverse sensitivity/uncertainty (IS/U) methodology that identifies the improvements in nuclear data that are needed to compute application responses within prescribed tolerances, while minimizing the cost of such data improvements. We report on our progress to date and present a simple test case for our method. Our methodology is directly applicable to thermal and intermediate neutron energy systems because it addresses the implicit neutron resonance self-shielding effects that are essential to accurate modeling of thermal and intermediate systems. This methodology is likely to increase the efficiency of nuclear data efforts.
NASA Astrophysics Data System (ADS)
He, Qinglong; Han, Bo; Chen, Yong; Li, Yang
2016-01-01
In this paper, we present an efficient inversion method to reconstruct the velocity and density model based on the acoustic wave equation. The inversion is performed in the frequency domain using the finite-difference contrast source inversion (FD-CSI) method. The full forward problem is required to be solved only once at the beginning of the inversion process, which makes the method very computationally efficient. Furthermore, the flexibility of the finite-difference operator ensures FD-CSI the capability of handling complex geophysical applications with inhomogeneous background media. Moreover, different parameters are automatically normalized, avoiding the numerical difficulty arising from the different magnitudes of the parameters. A variant of total variation regularization called the multiplicative constraint is incorporated to resolve the sharp discontinuities of the parameters. We employ a two-phase inversion strategy to carry out the FD-CSI method. After simultaneously reconstructing the bulk modulus and density, we obtain a relatively reliable bulk modulus, which is used as the background in the next phase to retrieve more accurate bulk modulus and density. A simple experiment is carried out to demonstrate the capability of the FD-CSI method in dealing with the cross-talk effect between different parameters. The application on the Marmousi model further demonstrates the performance of the method for more complex geophysical problems.
Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods
Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev
2013-01-01
Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, a numerical optimization method for analyzing LR-NMR data by including non-negativity constraints and L1 regularization and by applying a convex optimization solver PDCO, a primal-dual interior method for convex objectives, that allows general linear constraints to be treated as linear operators is presented. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452
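The optimization problem described, a least-squares fit with nonnegativity and an L1 penalty, can be sketched with a basic proximal-gradient (ISTA) iteration rather than the PDCO solver the article uses; the exponential-decay kernel, T2 grid, and two-component spectrum below are invented for illustration:

```python
import numpy as np

def ista_nn_l1(K, y, lam, n_iter=20000):
    """Proximal-gradient (ISTA) iteration for
    min 0.5 * ||K x - y||^2 + lam * ||x||_1   subject to x >= 0."""
    KtK, Kty = K.T @ K, K.T @ y
    t = 1.0 / np.linalg.norm(K, 2) ** 2          # step of 1/Lipschitz
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        g = KtK @ x - Kty                        # gradient of the data term
        x = np.maximum(x - t * g - t * lam, 0.0)  # nonnegative soft threshold
    return x

# invented LR-NMR-like kernel: exponential decays on a log T2 grid
times = np.linspace(0.001, 1.0, 200)             # acquisition times, s
T2 = np.logspace(-3, 0, 100)                     # relaxation-time grid, s
K = np.exp(-times[:, None] / T2[None, :])        # inverse Laplace kernel
x_true = np.zeros(100)
x_true[30], x_true[70] = 1.0, 0.5                # two-component spectrum
y = K @ x_true                                   # noiseless relaxation signal
x_hat = ista_nn_l1(K, y, lam=1e-4)
```

The soft threshold plays the role of the L1 term: it drives most of the recovered distribution to exactly zero, which is what yields the better-resolved relaxation components compared with a pure L2-norm fit.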
Assessment of Tikhonov-type regularization methods for solving atmospheric inverse problems
NASA Astrophysics Data System (ADS)
Xu, Jian; Schreier, Franz; Doicu, Adrian; Trautmann, Thomas
2016-11-01
Inverse problems occurring in atmospheric science aim to estimate state parameters (e.g. temperature or constituent concentration) from observations. To cope with nonlinear ill-posed problems, both direct and iterative Tikhonov-type regularization methods can be used. The major challenge in the framework of direct Tikhonov regularization (TR) concerns the choice of the regularization parameter λ, while iterative regularization methods require an appropriate stopping rule and a flexible λ-sequence. In the framework of TR, a suitable value of the regularization parameter can be generally determined based on a priori, a posteriori, and error-free selection rules. In this study, five practical regularization parameter selection methods, i.e. the expected error estimation (EEE), the discrepancy principle (DP), the generalized cross-validation (GCV), the maximum likelihood estimation (MLE), and the L-curve (LC), have been assessed. As a representative of iterative methods, the iteratively regularized Gauss-Newton (IRGN) algorithm has been compared with TR. This algorithm uses a monotonically decreasing λ-sequence and DP as an a posteriori stopping criterion. Practical implementations pertaining to retrievals of vertically distributed temperature and trace gas profiles from synthetic microwave emission measurements and from real far infrared data, respectively, have been conducted. Our numerical analysis demonstrates that none of the parameter selection methods dedicated to TR appear to be perfect and each has its own advantages and disadvantages. Alternatively, IRGN is capable of producing plausible retrieval results, allowing a more efficient manner for estimating λ.
Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods.
Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev
2013-05-01
Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, we present a numerical optimization method for analyzing LR-NMR data that includes non-negativity constraints and L1 regularization and applies the convex optimization solver PDCO, a primal-dual interior method for convex objectives, which allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72-88, 2013.
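A hedged sketch of the general idea (not of PDCO itself): an L1-regularized, non-negative fit of a multi-exponential relaxation signal, solved here with projected ISTA for brevity. The kernel, T2 grid, amplitudes, and noise level are made up.

```python
import numpy as np

# Hypothetical LR-NMR toy: the signal is a sum of two exponential decays;
# we recover a sparse, non-negative "T2 spectrum" x from y ~ K x.
rng = np.random.default_rng(1)
t = np.linspace(0.01, 3.0, 200)            # acquisition times (s)
T2 = np.logspace(-2, 1, 80)                # candidate relaxation times (s)
K = np.exp(-t[:, None] / T2[None, :])      # Laplace-type kernel
x_true = np.zeros(80)
x_true[30], x_true[55] = 1.0, 0.6
y = K @ x_true + 1e-3 * rng.standard_normal(len(t))

# min 0.5*||Kx - y||^2 + alpha*||x||_1  subject to  x >= 0
alpha = 1e-2
eta = 1.0 / np.linalg.norm(K, 2)**2        # step from the Lipschitz constant
x = np.zeros(80)
for _ in range(5000):                      # projected ISTA iterations
    grad = K.T @ (K @ x - y)
    x = np.maximum(0.0, x - eta * (grad + alpha))  # shrink, then project

print(np.count_nonzero(x > 1e-3))          # few active components expected
```

PDCO, as used in the article, handles the same non-negativity and L1 terms through a primal-dual interior method; ISTA is only the simplest solver that exposes the same objective.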
Inverse dispersion method for calculation of complex photonic band diagram and PT symmetry
NASA Astrophysics Data System (ADS)
Rybin, Mikhail V.; Limonov, Mikhail F.
2016-04-01
We suggest an inverse dispersion method for calculating a photonic band diagram for materials with arbitrary frequency-dependent dielectric functions. The method is able to calculate the complex wave vector for a given frequency by solving the eigenvalue problem with a non-Hermitian operator. The analogy with PT-symmetric Hamiltonians reveals that the operator corresponds to the momentum as a physical quantity, and that the singularities at the band edges are related to branch points and account for the features observed at the band edges. The method is realized using a plane wave expansion technique for a two-dimensional periodic structure in the case of TE and TM polarizations. We illustrate the applicability of the method by the calculation of the photonic band diagrams of an infinite two-dimensional square lattice composed of dielectric cylinders using the measured frequency-dependent dielectric functions of different materials (amorphous hydrogenated carbon, silicon, and chalcogenide glass). We show that the method allows one to distinguish unambiguously between Bragg and Mie gaps in the spectra.
An Inverse Method to Infer the Global Ocean Paleoventilation from the Atmospheric 14C Record
NASA Astrophysics Data System (ADS)
Marchal, O.; Hughen, K. A.; Muscheler, R.
2001-12-01
We present an inverse method to infer a record of global ocean ventilation (GOV) from records of atmospheric 14C activity (Δ 14C) and production. The method is based on the assimilation of activity and production data in a box model of the 14C cycle in the ocean-atmosphere-land biosphere system using the variational (adjoint) technique. It includes three components: (1) the model code that yields the value of the cost function (a measure of the misfit between observed and modelled Δ 14C); (2) the adjoint code that yields the partial derivatives of the cost function with respect to the parameters describing the temporal evolution of the GOV; and (3) an optimization procedure that yields the parameter values minimizing the cost function. Lagrange multipliers are introduced to simplify the calculation of the partial derivatives of the cost function and to construct the adjoint code directly from the model code. First we describe the method, outlining the formal similarities with the calculus of variation in analytical mechanics. Second we verify the method through the capability to recover a variety of GOV evolutions from the assimilation of artificial data ("twin experiments"). Third we apply the method to the Younger Dryas, using recent high-resolution records of Δ 14C from the Cariaco basin and of 10Be flux from Greenland ice cores. Our results give new insight into the role of the deep ocean circulation during this dramatic and rapid climate change in the circum North Atlantic area.
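The Lagrange-multiplier/adjoint machinery described above can be shown on a toy scalar box model. The model, time step, and parameter values below are invented (this is not the 14C box model), and the adjoint gradient is checked against a finite difference.

```python
# Scalar box model x_{k+1} = x_k + dt*(p - lam*x_k); the cost is the misfit
# to synthetic "observations", and dJ/dp comes from the adjoint recursion.
def forward(p, x0=0.0, lam=0.3, dt=0.1, n=50):
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] + dt * (p - lam * xs[-1]))
    return xs

data = forward(2.0)                       # observations from "true" p = 2

def cost_and_grad(p, lam=0.3, dt=0.1):
    xs = forward(p)
    resid = [x - d for x, d in zip(xs, data)]
    J = 0.5 * sum(r * r for r in resid)
    a = 0.0                               # adjoint variable, run backwards
    dJdp = 0.0
    for k in range(len(xs) - 1, 0, -1):
        a += resid[k]                     # misfit forcing at step k
        dJdp += a * dt                    # sensitivity of x_k to p
        a *= (1 - lam * dt)               # propagate adjoint to step k-1
    return J, dJdp

J, g = cost_and_grad(1.5)
eps = 1e-4
Jp, _ = cost_and_grad(1.5 + eps)
Jm, _ = cost_and_grad(1.5 - eps)
fd = (Jp - Jm) / (2 * eps)                # central finite difference
print(g, fd)
```

One backward sweep gives the full gradient at the cost of roughly one extra model run, which is exactly why the variational technique scales to many control parameters.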
A new method to inverse soil moisture based on thermal infrared and passive microwave remote sensing
NASA Astrophysics Data System (ADS)
Zhou, Zhuang; Kou, Xiaokang; Zhao, Shaojie; Jiang, Lingmei
2014-11-01
Soil moisture is one of the main factors in the water, energy and carbon cycles, and it constitutes a major source of uncertainty in climate and hydrological models. Passive microwave remote sensing and thermal infrared remote sensing have both been used to obtain and monitor soil moisture. However, because the resolution of passive microwave remote sensing is very low and the thermal infrared method fails to provide soil temperature on cloudy days, it is hard to monitor soil moisture accurately with either technique alone. To address this problem, a new method is tried in this research: thermal infrared and passive microwave remote sensing are combined on the basis of a dedicated experiment. Since the soil moisture retrieved by passive microwave sensing generally represents only the top few centimeters of soil, which differs from the deeper soil moisture estimated by the thermal inertia method, a relationship between soil moisture at the two depths was established from the experiment. The results show a good relationship between the soil moisture estimated by the passive microwave and thermal infrared methods: the correlation coefficient is 0.78 and the RMSE (root mean square error) is 0.0195. This research provides a promising new method to invert soil moisture.
NASA Astrophysics Data System (ADS)
Kaporin, I. E.
2012-02-01
In order to precondition a sparse symmetric positive definite matrix, its approximate inverse is examined, which is represented as the product of two sparse mutually adjoint triangular matrices. In this way, the solution of the corresponding system of linear algebraic equations (SLAE) by applying the preconditioned conjugate gradient method (CGM) is reduced to performing only elementary vector operations and calculating sparse matrix-vector products. A method for constructing the above preconditioner is described and analyzed. The triangular factor has a fixed sparsity pattern and is optimal in the sense that the preconditioned matrix has a minimum K-condition number. The use of polynomial preconditioning based on Chebyshev polynomials makes it possible to considerably reduce the amount of scalar product operations (at the cost of an insignificant increase in the total number of arithmetic operations). The possibility of an efficient massively parallel implementation of the resulting method for solving SLAEs is discussed. For a sequential version of this method, the results obtained by solving 56 test problems from the Florida sparse matrix collection (which are large-scale and ill-conditioned) are presented. These results show that the method is highly reliable and has low computational costs.
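The role of the preconditioner in the CGM iteration can be sketched as follows. The paper's preconditioner is a factored sparse approximate inverse (a product of mutually adjoint triangular factors); here a simple Jacobi (diagonal) preconditioner stands in, only to show where M⁻¹ enters, and the test matrix is an arbitrary SPD example.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients for SPD A; M_inv ~ A^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r                 # preconditioning step: z = M^{-1} r
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# SPD test matrix: 1-D Laplacian plus a strong diagonal (illustrative only).
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) \
    + np.diag(np.linspace(1, 10, n))
b = np.ones(n)
M_inv = np.diag(1.0 / np.diag(A))          # Jacobi stand-in for U U^T
x, iters = pcg(A, b, M_inv)
print(iters, np.linalg.norm(A @ x - b))
```

With the factored approximate inverse of the paper, applying M⁻¹ is two sparse triangular matrix-vector products, so every kernel of the iteration stays matvec-shaped, which is what makes the massively parallel implementation attractive.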
Application of the Method of Fundamental Solutions to Potential-based Inverse Electrocardiography
Wang, Yong; Rudy, Yoram
2007-01-01
Potential-based inverse electrocardiography is a method for the noninvasive computation of epicardial potentials from measured body surface electrocardiographic data. From the computed epicardial potentials, epicardial electrograms and isochrones (activation sequences), as well as repolarization patterns can be constructed. We term this noninvasive procedure Electrocardiographic Imaging (ECGI). The method of choice for computing epicardial potentials has been the Boundary Element Method (BEM) which requires meshing the heart and torso surfaces and optimizing the mesh, a very time-consuming operation that requires manual editing. Moreover, it can introduce mesh-related artifacts in the reconstructed epicardial images. Here we introduce the application of a meshless method, the Method of Fundamental Solutions (MFS) to ECGI. This new approach that does not require meshing is evaluated on data from animal experiments and human studies, and compared to BEM. Results demonstrate similar accuracy, with the following advantages: 1. Elimination of meshing and manual mesh optimization processes, thereby enhancing automation and speeding the ECGI procedure. 2. Elimination of mesh-induced artifacts. 3. Elimination of complex singular integrals that must be carefully computed in BEM. 4. Simpler implementation. These properties of MFS enhance the practical application of ECGI as a clinical diagnostic tool. PMID:16807788
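A minimal sketch of the MFS idea for the 2-D Laplace equation (not the ECGI torso geometry): point sources placed outside the domain, coefficients fitted by least squares at boundary collocation points, so there is no mesh and no singular integral. The geometry, source radius, and counts below are arbitrary choices.

```python
import numpy as np

m, n = 80, 40                            # boundary points, exterior sources
th_b = 2 * np.pi * np.arange(m) / m
th_s = 2 * np.pi * np.arange(n) / n
bx, by = np.cos(th_b), np.sin(th_b)      # unit-circle "boundary"
sx, sy = 2.0 * np.cos(th_s), 2.0 * np.sin(th_s)  # sources on radius 2

def phi(x, y):
    """Fundamental solution of the 2-D Laplace equation, ln|r - r_source|."""
    return np.log(np.hypot(x[:, None] - sx, y[:, None] - sy))

u_exact = lambda x, y: x**2 - y**2       # a harmonic test "potential"

A = phi(bx, by)                          # collocation matrix (no mesh!)
coef, *_ = np.linalg.lstsq(A, u_exact(bx, by), rcond=None)

# evaluate inside the domain and compare with the exact harmonic function
xi = np.array([0.3, -0.2, 0.5])
yi = np.array([0.1, 0.4, -0.3])
err = np.max(np.abs(phi(xi, yi) @ coef - u_exact(xi, yi)))
print(err)
```

The collocation matrix is notoriously ill-conditioned, which is why a rank-revealing least-squares solve (or Tikhonov regularization, as in ECGI) is used rather than a direct inverse.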
Łęski, Szymon; Pettersen, Klas H; Tunstall, Beth; Einevoll, Gaute T; Gigg, John; Wójcik, Daniel K
2011-12-01
The recent development of large multielectrode recording arrays has made it affordable for an increasing number of laboratories to record from multiple brain regions simultaneously. The development of analytical tools for array data, however, lags behind these technological advances in hardware. In this paper, we present a method based on forward modeling for estimating current source density from electrophysiological signals recorded on a two-dimensional grid using multi-electrode rectangular arrays. This new method, which we call two-dimensional inverse Current Source Density (iCSD 2D), is based upon and extends our previous one- and three-dimensional techniques. We test several variants of our method, both on surrogate data generated from a collection of Gaussian sources, and on model data from a population of layer 5 neocortical pyramidal neurons. We also apply the method to experimental data from the rat subiculum. The main advantages of the proposed method are the explicit specification of its assumptions, the possibility to include system-specific information as it becomes available, the ability to estimate CSD at the grid boundaries, and lower reconstruction errors when compared to the traditional approach. These features make iCSD 2D a substantial improvement over the approaches used so far and a powerful new tool for the analysis of multielectrode array data. We also provide a free GUI-based MATLAB toolbox to analyze and visualize our test data as well as user datasets.
Method of Minimax Optimization in the Coefficient Inverse Heat-Conduction Problem
NASA Astrophysics Data System (ADS)
Diligenskaya, A. N.; Rapoport, É. Ya.
2016-07-01
Consideration has been given to the inverse problem on identification of a temperature-dependent thermal-conductivity coefficient. The problem was formulated in an extremum statement as a problem of search for a quantity considered as the optimum control of an object with distributed parameters, which is described by a nonlinear homogeneous spatially one-dimensional Fourier partial differential equation with boundary conditions of the second kind. As the optimality criterion, the authors used the error (minimized on the time interval of observation) of uniform approximation of the temperature computed on the object's model at an assigned point of the segment of variation in the spatial variable to its directly measured value. Pre-parametrization of the sought control action, which a priori records its description accurate to assigning parameters of representation in the class of polynomial temperature functions, ensured the reduction of the problem under study to a problem of parametric optimization. To solve the formulated problem, the authors used an analytical minimax-optimization method taking account of the alternance properties of the sought optimum solutions based on which the algorithm of computation of the optimum values of the sought parameters is reduced to a system (closed for these unknowns) of equations fixing minimax deviations of the calculated values of temperature from those observed on the time interval of identification. The obtained results confirm the efficiency of the proposed method for solution of a certain range of applied problems. The authors have studied the influence of the coordinate of a point of temperature measurement on the exactness of solution of the inverse problem.
Convergence of Chahine's nonlinear relaxation inversion method used for limb viewing remote sensing
NASA Technical Reports Server (NTRS)
Chu, W. P.
1985-01-01
The application of Chahine's (1970) inversion technique to remote sensing problems utilizing the limb viewing geometry is discussed. The problem considered here involves occultation-type measurements and limb radiance-type measurements from either spacecraft or balloon platforms. The kernel matrix of the inversion problem is either an upper or lower triangular matrix. It is demonstrated that the Chahine inversion technique always converges, provided the diagonal elements of the kernel matrix are nonzero.
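Chahine's relaxation update and the triangular-kernel setting can be shown directly. The kernel and profile below are invented; the nonzero diagonal is the convergence condition noted in the abstract.

```python
# Chahine's nonlinear relaxation for y = K x with a lower-triangular kernel,
# as in limb-viewing geometry.  Update: x_i <- x_i * y_obs_i / (K x)_i,
# which preserves positivity of the retrieved profile.
def chahine(K, y_obs, n_iter=200):
    n = len(y_obs)
    x = [1.0] * n                          # positive first guess
    for _ in range(n_iter):
        y_calc = [sum(K[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] * y_obs[i] / y_calc[i] for i in range(n)]
    return x

# Lower-triangular kernel with nonzero diagonal; synthetic "observations"
# generated from a known profile (all values made up).
n = 5
K = [[1.0 if j < i else (2.0 if j == i else 0.0) for j in range(n)]
     for i in range(n)]
x_true = [1.0, 2.0, 0.5, 3.0, 1.5]
y_obs = [sum(K[i][j] * x_true[j] for j in range(n)) for i in range(n)]

x_est = chahine(K, y_obs)
print([round(v, 4) for v in x_est])
```

Because the kernel is triangular, the first level is recovered in one step and the correction then propagates row by row, which is the mechanism behind the convergence result discussed above.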
A novel model for diffusion based release kinetics using an inverse numerical method.
Mohammadi, Hadi; Herzog, Walter
2011-10-01
We developed and analyzed an inverse numerical model based on Fick's second law on the dynamics of drug release. In contrast to previous models which required two state descriptions of diffusion for long- and short-term release processes, our model is valid for the entire release process. The proposed model may be used for identifying and reducing experimental errors associated with measurements of diffusion based release kinetics. Knowing the initial and boundary conditions, and assuming Fick's second law to be appropriate, we use the method of Lagrange multipliers along with least-squares algorithms to define a cost function which is discretized using finite difference methods and is optimized so as to minimize errors. Our model can describe diffusion based release kinetics for static and dynamic conditions as accurately as finite element methods, but results are obtained in a fraction of CPU time. Our method can be widely used for drug release procedures and for tissue engineering/repair applications where oxygenation of cells residing within a matrix is important.
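The forward piece such an inverse scheme would call repeatedly is a finite-difference discretization of Fick's second law; the optimization then adjusts D (and possibly boundary terms) to fit measured release curves. A minimal explicit 1-D sketch, with invented values and sink boundaries standing in for release at the matrix edges:

```python
# Explicit finite differences for dc/dt = D d2c/dx2 in 1-D.
# Stability of the explicit scheme requires r = D*dt/dx^2 <= 0.5.
def diffuse_1d(c, D, dx, dt, steps):
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable"
    c = list(c)
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            new[i] = c[i] + r * (c[i-1] - 2 * c[i] + c[i+1])
        new[0] = new[-1] = 0.0        # sinks: drug released at the edges
        c = new
    return c

c0 = [0.0] * 21
c0[10] = 1.0                           # all drug initially at the centre
c = diffuse_1d(c0, D=1e-6, dx=1e-3, dt=0.4, steps=2000)
released = 1.0 - sum(c)                # cumulative fractional release
print(round(released, 3))
```

In the inverse setting, `released` (or the whole profile) would be compared with measurements inside a least-squares cost, with Lagrange multipliers enforcing the discretized PDE as a constraint.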
GPU-accelerated Monte Carlo simulation of particle coagulation based on the inverse method
NASA Astrophysics Data System (ADS)
Wei, J.; Kruis, F. E.
2013-09-01
Simulating particle coagulation using Monte Carlo methods is in general a challenging computational task due to its numerical complexity and computing cost. Currently, the lowest computing costs are obtained by applying a graphics processing unit (GPU), originally developed for speeding up graphics processing in the consumer market. In this article we present a GPU implementation of a Monte Carlo method based on the inverse scheme for simulating particle coagulation. The abundant data parallelism embedded within the Monte Carlo method is explained, as it allows an efficient parallelization of the MC code on the GPU. Furthermore, the computational accuracy of the MC on the GPU was validated against a benchmark, a CPU-based discrete-sectional method. To evaluate the performance gains from using the GPU, the computing time on the GPU was compared with that of its sequential counterpart on the CPU. The measured speedups show that the GPU can accelerate the execution of the MC code by a factor of 10-100, depending on the chosen number of simulation particles. The algorithm shows a linear dependence of computing time on the number of simulation particles, which is a remarkable result in view of the n² dependence of coagulation.
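The "inverse scheme" at the core of such MC codes is inverse-transform sampling over cumulative event rates: a uniform random number is mapped through the cumulative rate table to pick which coagulation event happens next. A CPU-only, hypothetical sketch (no GPU code, and the rates are arbitrary):

```python
import random
import bisect

def pick_event(rates, u):
    """Inverse-transform sampling: map u in [0,1) through cumulative rates."""
    cum = []
    total = 0.0
    for r in rates:
        total += r
        cum.append(total)
    return bisect.bisect_left(cum, u * total)

random.seed(42)
rates = [0.1, 0.4, 0.3, 0.2]            # per-event coagulation rates (made up)
counts = [0, 0, 0, 0]
for _ in range(100_000):
    counts[pick_event(rates, random.random())] += 1
freqs = [c / 100_000 for c in counts]
print(freqs)                             # approaches the normalized rates
```

On a GPU, many such draws (and the prefix-sum building `cum`) run in parallel across threads, which is the data parallelism the article exploits.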
Retrieval Performance and Indexing Differences in ABELL and MLAIB
ERIC Educational Resources Information Center
Graziano, Vince
2012-01-01
Searches for 117 British authors are compared in the Annual Bibliography of English Language and Literature (ABELL) and the Modern Language Association International Bibliography (MLAIB). Authors are organized by period and genre within the early modern era. The number of records for each author was subdivided by format, language of publication,…
Evaluation of a Heterogeneity Preserving Inversion Method for Subsurface Unsaturated Flow
NASA Astrophysics Data System (ADS)
Zhang, Y.; Schaap, M. G.; Neuman, S. P.; Guadagnini, A.; Riva, M.
2013-12-01
is embedded in the inversion method through textural information. Our results show the benefit of our inverse modeling approach, assessed through minimization of the difference between observed and simulated water content dynamics, when compared against traditional zonation, with mean squared residual (MSR) decreasing by about 35% and Pearson correlation coefficient increasing from 0.9395 to 0.9612.
NASA Astrophysics Data System (ADS)
Fortin, Will F. J.
The utility and meaning of a geophysical dataset depend on good interpretation informed by high-quality data, processing, and attribute examination via technical methodologies. Active source marine seismic reflection data contain a great deal of information in the location, phase, and amplitude of both pre- and post-stack seismic reflections. Using pre- and post-stack data, this work has extracted useful information from marine reflection seismic data in novel ways, in both the oceanic water column and the sub-seafloor geology. In chapter 1 we develop a new method for estimating oceanic turbulence from a seismic image. This method is tested on synthetic seismic data to show its ability to accurately recover both the distribution and the levels of turbulent diffusivity. We then apply the method to real data offshore Costa Rica, where we observe lee waves. Our results find elevated diffusivities near the seafloor, as well as diffusivities above the lee waves five times greater than in surrounding waters and 50 times greater than open-ocean values. Chapter 2 investigates subsurface geology in the Cascadia Subduction Zone and outlines a workflow for using pre-stack waveform inversion to produce highly detailed velocity models and seismic images. Using a newly developed inversion code, we achieve better imaging results compared with the product of a standard, user-intensive method for building a velocity model. Our results image the subduction interface ~30 km farther landward than previous work and better image faults and sedimentary structures above the oceanic plate as well as in the accretionary prism. The resultant velocity model is highly detailed, inverted every 6.25 m with ~20 m vertical resolution, and will be used to examine the role of fluids in the subduction system. These results help us to better understand the natural-hazard risks associated with the Cascadia Subduction Zone. Chapter 3 returns to seismic oceanography and examines the dynamics of nonlinear
Estimates of European emissions of methyl chloroform using a Bayesian inversion method
NASA Astrophysics Data System (ADS)
Maione, M.; Graziosi, F.; Arduini, J.; Furlani, F.; Giostra, U.; Blake, D. R.; Bonasoni, P.; Fang, X.; Montzka, S. A.; O'Doherty, S. J.; Reimann, S.; Stohl, A.; Vollmer, M. K.
2014-03-01
Methyl chloroform (MCF) is a man-made chlorinated solvent contributing to the destruction of stratospheric ozone and is controlled under the Montreal Protocol on Substances that Deplete the Ozone Layer. Long-term, high-frequency observations of MCF carried out at three European sites show a constant decline of the background mixing ratios of MCF. However, we observe persistent non-negligible mixing ratio enhancements of MCF in pollution episodes suggesting unexpectedly high ongoing emissions in Europe. In order to identify the source regions and to give an estimate of the magnitude of such emissions, we have used a Bayesian inversion method and a point source analysis, based on high-frequency long-term observations at the three European sites. The inversion identified south-eastern France (SEF) as a region with enhanced MCF emissions. This estimate was confirmed by the point source analysis. We performed this analysis using an eleven-year data set, from January 2002 to December 2012. Overall emissions estimated for the European study domain decreased nearly exponentially from 1.1 Gg yr-1 in 2002 to 0.32 Gg yr-1 in 2012, of which the estimated emissions from the SEF region accounted for 0.49 Gg yr-1 in 2002 and 0.20 Gg yr-1 in 2012. The European estimates are a significant fraction of the total semi-hemisphere (30-90° N) emissions, contributing a minimum of 9.8% in 2004 and a maximum of 33.7% in 2011, of which on average 50% are from the SEF region. On the global scale, the SEF region is thus responsible from a minimum of 2.6% (in 2003) to a maximum of 10.3% (in 2009) of the global MCF emissions.
Estimates of European emissions of methyl chloroform using a Bayesian inversion method
NASA Astrophysics Data System (ADS)
Maione, M.; Graziosi, F.; Arduini, J.; Furlani, F.; Giostra, U.; Blake, D. R.; Bonasoni, P.; Fang, X.; Montzka, S. A.; O'Doherty, S. J.; Reimann, S.; Stohl, A.; Vollmer, M. K.
2014-09-01
Methyl chloroform (MCF) is a man-made chlorinated solvent contributing to the destruction of stratospheric ozone and is controlled under the "Montreal Protocol on Substances that Deplete the Ozone Layer" and its amendments, which called for its phase-out in 1996 in developed countries and 2015 in developing countries. Long-term, high-frequency observations of MCF carried out at three European sites show a constant decline in the background mixing ratios of MCF. However, we observe persistent non-negligible mixing ratio enhancements of MCF in pollution episodes, suggesting unexpectedly high ongoing emissions in Europe. In order to identify the source regions and to give an estimate of the magnitude of such emissions, we have used a Bayesian inversion method and a point source analysis, based on high-frequency long-term observations at the three European sites. The inversion identified southeastern France (SEF) as a region with enhanced MCF emissions. This estimate was confirmed by the point source analysis. We performed this analysis using an 11-year data set, from January 2002 to December 2012. Overall, emissions estimated for the European study domain decreased nearly exponentially from 1.1 Gg yr-1 in 2002 to 0.32 Gg yr-1 in 2012, of which the estimated emissions from the SEF region accounted for 0.49 Gg yr-1 in 2002 and 0.20 Gg yr-1 in 2012. The European estimates are a significant fraction of the total semi-hemisphere (30-90° N) emissions, contributing a minimum of 9.8% in 2004 and a maximum of 33.7% in 2011, of which on average 50% are from the SEF region. On the global scale, the SEF region is thus responsible for a minimum of 2.6% (in 2003) and a maximum of 10.3% (in 2009) of the global MCF emissions.
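The linear-Gaussian core behind many Bayesian emission inversions of this kind can be sketched generically: a prior emission vector with covariance B, observations y = Hx + noise with covariance R, and an analytical posterior. The sensitivity matrix, prior, and noise below are invented and unrelated to the European MCF setup.

```python
import numpy as np

def bayesian_inversion(H, y, x_a, B, R):
    """Posterior mean and covariance for the linear-Gaussian inverse problem."""
    Binv = np.linalg.inv(B)
    Rinv = np.linalg.inv(R)
    A = H.T @ Rinv @ H + Binv               # posterior precision
    x_hat = x_a + np.linalg.solve(A, H.T @ Rinv @ (y - H @ x_a))
    P = np.linalg.inv(A)                    # posterior covariance
    return x_hat, P

rng = np.random.default_rng(3)
H = rng.random((20, 4))                     # hypothetical transport sensitivities
x_true = np.array([1.1, 0.0, 0.5, 0.3])     # "regional emissions" (made up)
y = H @ x_true + 0.01 * rng.standard_normal(20)
x_a = np.zeros(4)                           # weak prior mean
B = 10.0 * np.eye(4)                        # loose prior covariance
R = 0.01**2 * np.eye(20)                    # observation-error covariance

x_hat, P = bayesian_inversion(H, y, x_a, B, R)
print(x_hat)
```

In a real application H comes from a transport model (e.g. backward dispersion simulations) linking regional fluxes to mixing-ratio enhancements at the measurement sites.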
Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods
Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti
2012-01-01
Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell’s equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called Minimum Norm Estimates (MNE), promote source estimates with a small ℓ2 norm. Here, we consider a more general class of priors based on mixed-norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as Mixed-Norm Estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ1/ℓ2 mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ1/ℓ2 norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data. PMID:22421459
Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods
NASA Astrophysics Data System (ADS)
Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti
2012-04-01
Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ2 norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ1/ℓ2 mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ1/ℓ2 norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
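The group-sparsity mechanism of the two-level ℓ1/ℓ2 norm is easiest to see through its proximal operator, the building block of the fast first-order schemes mentioned above: each source's row of time coefficients is shrunk as a unit, so a whole source is either kept (with its time course intact) or zeroed. A small sketch with arbitrary values:

```python
import numpy as np

def prox_l21(Y, lam):
    """Proximal operator of lam * sum_i ||Y[i, :]||_2 (row-wise group shrinkage)."""
    norms = np.linalg.norm(Y, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return Y * scale

Y = np.array([[3.0, 4.0],     # row norm 5.0 -> shrunk but kept
              [0.3, 0.4],     # row norm 0.5 -> zeroed (group sparsity)
              [0.0, 0.0]])    # already zero stays zero
out = prox_l21(Y, lam=1.0)
print(out)
```

Inside an accelerated scheme such as FISTA, this prox is applied after each gradient step on the data-fit term, which is what makes spatially focal yet temporally smooth estimates emerge.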
Validation of a 'displacement tomography' inversion method for modeling sheet intrusions
NASA Astrophysics Data System (ADS)
Menassian, Sarah
The study of volcano deformation data can provide information on magma processes and help assess the potential for future eruptions. In employing inverse deformation modeling on these data, we attempt to characterize the geometry, location and volume/pressure change of a deformation source. Techniques currently used to model sheet intrusions (e.g., dikes and sills) often require significant a priori assumptions about source geometry and can require testing a large number of parameters. Moreover, surface deformations are a non-linear function of the source geometry and location, which requires the use of Monte Carlo inversion techniques and leads to long computation times. Recently, 'displacement tomography' models have been used to characterize magma reservoirs by inverting source deformation data for volume changes using a grid of point sources in the subsurface. The computations involved in these models are less intensive as no assumptions are made on the source geometry and location, and the relationship between the point sources and the surface deformation is linear. In this project, seeking a less computationally intensive technique for fracture sources, we tested whether this displacement tomography method for reservoirs could be used for sheet intrusions. We began by simulating the opening of three synthetic dikes of known geometry and location using an established deformation model for fracture sources. We then sought to reproduce the displacements and volume changes undergone by the fractures using the sources employed in the tomography methodology. Results of this validation indicate that the volumetric point sources are not appropriate for locating fracture sources; however, they may provide useful qualitative information on volume changes occurring in the surrounding rock, and therefore indirectly indicate the source location.
The Sunyaev-Zel'dovich Effect in Abell 370
NASA Technical Reports Server (NTRS)
Grego, Laura; Carlstrom, John E.; Joy, Marshall K.; Reese, Erik D.; Holder, Gilbert P.; Patel, Sandeep; Holzapfel, William L.; Cooray, Asantha K.
1999-01-01
We present interferometric measurements of the Sunyaev-Zel'dovich (SZ) effect towards the galaxy cluster Abell 370. These measurements, which directly probe the pressure of the cluster's gas, show the gas is strongly aspherical, in agreement with the morphology revealed by x-ray and gravitational lensing observations. We calculate the cluster's gas mass fraction by comparing the gas mass derived from the SZ measurements to the lensing-derived gravitational mass near the critical lensing radius. We also calculate the gas mass fraction from the SZ data by deriving the total mass under the assumption that the gas is in hydrostatic equilibrium (HSE). We test the assumptions in the HSE method by comparing the total cluster mass implied by the two methods. The Hubble constant derived for this cluster, when the known systematic uncertainties are included, has a very wide range of values and therefore does not provide additional constraints on the validity of the assumptions. We examine carefully the possible systematic errors in the gas fraction measurement. The gas fraction is a lower limit to the cluster's baryon fraction, and so we compare the gas mass fraction, calibrated by numerical simulations to approximately the virial radius, to measurements of the global mass fraction of baryonic matter, Ω_B/Ω_matter. Our lower limit to the cluster baryon fraction is f_B = (0.043 ± 0.014)/h_100. From this, we derive an upper limit to the universal matter density, Ω_matter ≤ 0.72/h_100, and a likely value of Ω_matter = 0.44(+0.15, -0.12)/h_100.
NASA Astrophysics Data System (ADS)
Pham, H. V.; Elshall, A. S.; Tsai, F. T.; Yan, L.
2012-12-01
The inverse problem in groundwater modeling deals with a rugged (i.e. ill-conditioned and multimodal), nonseparable and noisy function since it involves solving second order nonlinear partial differential equations with forcing terms. Derivative-based optimization algorithms may fail to reach a near global solution due to their stagnation at a local minimum solution. To avoid entrapment in a local optimum and enhance search efficiency, this study introduces the covariance matrix adaptation-evolution strategy (CMA-ES) as a local derivative-free optimization method. In the first part of the study, we compare CMA-ES with five commonly used heuristic methods and the traditional derivative-based Gauss-Newton method on a hypothetical problem. This problem involves four different cases to allow a rigorous assessment against ten criteria: ruggedness in terms of nonsmooth and multimodal, ruggedness in terms of ill-conditioning and high nonlinearity, nonseparability, high dimensionality, noise, algorithm adaptation, algorithm tuning, performance, consistency, parallelization (scaling with number of cores) and invariance (solution vector and function values). The CMA-ES adapts a covariance matrix representing the pair-wise dependency between decision variables, which approximates the inverse of the Hessian matrix up to a certain factor. The solution is updated with the covariance matrix and an adaptable step size, which are adapted through two conjugates that implement heuristic control terms. The covariance matrix adaptation uses information from the current population of solutions and from the previous search path. Since such an elaborate search mechanism is not common in the other heuristic methods, CMA-ES proves to be more robust than other population-based heuristic methods in terms of reaching a near-optimal solution for a rugged, nonseparable and noisy inverse problem. Other favorable properties that the CMA-ES exhibits are the consistency of the solution for repeated
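CMA-ES itself is too involved to sketch in a few lines; as a hedged stand-in, here is a (1+1)-evolution strategy with 1/5th-success step-size control, a much simpler derivative-free relative (no covariance adaptation), run on the Rosenbrock function as a mildly rugged test case. All settings are arbitrary.

```python
import random

def rosenbrock(v):
    """Classic banana-valley test function; global minimum 0 at (1, ..., 1)."""
    return sum(100.0 * (v[i+1] - v[i]**2)**2 + (1 - v[i])**2
               for i in range(len(v) - 1))

random.seed(7)
x = [0.0, 0.0]
fx = rosenbrock(x)
sigma = 0.5                        # mutation step size
for _ in range(20_000):
    y = [xi + sigma * random.gauss(0, 1) for xi in x]
    fy = rosenbrock(y)
    if fy <= fx:                   # elitist: keep the better point
        x, fx = y, fy
        sigma *= 1.22              # expand step on success...
    else:
        sigma *= 0.95              # ...shrink on failure (~1/5th rule)
print(fx, x)
```

CMA-ES replaces the isotropic Gaussian mutation with a full covariance learned from the search path, which is what gives it the invariance and ill-conditioning robustness assessed in the study.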
Pan, Feifei; Peters-lidard, Christa D.; King, Anthony Wayne
2010-11-01
Soil particle size distribution (PSD) information (i.e., clay, silt, sand, and rock contents) is one of the critical factors for understanding the water cycle, since it affects almost all water cycle processes, e.g., drainage, runoff, soil moisture, evaporation, and evapotranspiration. With information about soil PSD, we can estimate almost all soil hydraulic properties (e.g., saturated soil moisture, field capacity, wilting point, residual soil moisture, saturated hydraulic conductivity, pore-size distribution index, and bubbling capillary pressure) from published empirical relationships. Therefore, a regional or global soil PSD database is essential for studying the water cycle regionally or globally. At present, three soil geographic databases are commonly used: the Soil Survey Geographic database, the State Soil Geographic database, and the National Soil Geographic database. These soil data are map-unit based and carry great uncertainty. Ground soil surveys are a way to reduce this uncertainty, but they are time consuming and labor intensive. In this study, an inverse method for estimating the mean and standard deviation of soil PSD from observed soil moisture is proposed and applied to the Throughfall Displacement Experiment sites in Walker Branch Watershed in eastern Tennessee. The method is based on the relationship between the spatial mean and standard deviation of soil moisture. The results indicate that the suggested method is feasible and has potential for retrieving soil PSD information globally from remotely sensed soil moisture data.
Inverse PCR-based method for isolating novel SINEs from genome.
Han, Yawei; Chen, Liping; Guan, Lihong; He, Shunping
2014-04-01
Short interspersed elements (SINEs) are moderately repetitive DNA sequences in eukaryotic genomes. Although eukaryotic genomes contain numerous SINE copies, isolating and identifying them with previously reported methods is difficult and laborious. In this study, inverse PCR was successfully applied to isolate SINEs from the genome of Opsariichthys bidens, an Eastern Asian cyprinid. A group of SINEs derived from tRNA(Ala) molecules was identified and named Opsar after Opsariichthys. Opsar exhibits typical SINE characteristics: a tRNA(Ala)-derived region at the 5' end, a tRNA-unrelated region, and an AT-rich region at the 3' end. The tRNA-derived region of Opsar shares 76% sequence similarity with the tRNA(Ala) gene, indicating that Opsar could derive from an inactive copy or pseudogene of tRNA(Ala). The reliability of the method was tested by obtaining C-SINE, Ct-SINE, and M-SINEs from the Ctenopharyngodon idellus, Megalobrama amblycephala, and Cyprinus carpio genomes. This method is simpler than those previously reported, as it omits many steps such as probe preparation, genomic library construction, and hybridization. PMID:24122282
Tang, Yu; Hu, Chao; Liao, Qiong; Liu, Wen-long; Yang, Yan-tao; He, Hong; He, Fu-yuan
2015-01-01
The determination of the solubility parameter of astragaloside from Buyang Huanwu decoction by inverse gas chromatography (IGC), together with an evaluation of the method, was investigated in this paper. Di-n-octyl phthalate was used as an alternative sample to carry out the methodological study. The correlation coefficient of the accuracy measurement was 0.992 1. The precision measured in the IGC experiments showed that the results were accurate and reliable. The sample was uniformly coated on the surface of an inert carrier, N2 was used as the carrier gas, and a variety of polar solvents, such as isopropanol, toluene, acetone, chloroform, and cyclohexane, served as probes. The TCD detector temperature was 150 degrees C and the gasification chamber temperature was 120 degrees C. A headspace-like method was used in which over 1 μL of gas was injected into the GC for measurement; the retention times t(R) and t(0) and all the parameters of air and the probe molecules within the column were determined. The astragaloside solubility parameter was (21.02 ± 2.4) [J x cm(-3)]½, the literature value was 19.24 [J x cm(-3)]½, and the correlation coefficient was 0.984 5. The IGC method is effective and accurate for measuring the solubility parameter of ingredients. PMID:26080552
NASA Astrophysics Data System (ADS)
D'Auria, Luca; Fernandez, Jose; Puglisi, Giuseppe; Rivalta, Eleonora; Camacho, Antonio; Nikkhoo, Mehdi; Walter, Thomas
2016-04-01
The inversion of ground deformation and gravity data is affected by an intrinsic ambiguity stemming from the mathematical formulation of the inverse problem. Current methods for the inversion of geodetic data rely on both parametric (i.e. assuming a source geometry) and non-parametric approaches. The former are able to capture the fundamental features of the ground deformation source but, if the assumptions are wrong or oversimplified, they can provide misleading results. The latter class of methods, on the other hand, even if not relying on stringent assumptions, can suffer from artifacts, especially when dealing with poor datasets. In the framework of the EC-FP7 MED-SUV project we aim to compare different inverse approaches to verify how they cope with the basic goals of volcano geodesy: determining the source depth, the source shape (size and geometry), the nature of the source (magmatic/hydrothermal) and hinting at the complexity of the source. Other aspects that are important in volcano monitoring are: volume/mass transfer toward shallow depths, propagation of dikes/sills, and forecasting the opening of eruptive vents. On the basis of similar experiments already done in the fields of seismic tomography and geophysical imaging, we have devised a blind test experiment. Our group was divided into one model design team and several inversion teams. The model design team devised two physical models representing volcanic events at two distinct volcanoes (one stratovolcano and one caldera). They provided the inversion teams with: the topographic reliefs, the calculated deformation field (on a set of simulated GPS stations and as InSAR interferograms) and the gravity change (on a set of simulated campaign stations). The nature of the volcanic events remained unknown to the inversion teams until after the submission of the inversion results. Here we present the preliminary results of this comparison in order to determine which features of the ground deformation and gravity source
NASA Astrophysics Data System (ADS)
Kong, Changduk; Lim, Semyeong
2011-12-01
Recently, health monitoring systems for the major gas path components of gas turbines have mostly used model-based methods such as Gas Path Analysis (GPA). This method finds quantitative changes in component performance characteristic parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters (temperatures, pressures, rotational speeds, fuel consumption, etc.) with clean-engine performance parameters, free of any faults, calculated by a base engine performance model. Currently, expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic and Genetic Algorithms (GAs) are being studied to improve on the model-based method. Among them, NNs are most often used in engine fault diagnostic systems because of their good learning performance, but they suffer from low accuracy and long training times when a large learning database must be built. In addition, a very complex structure is needed to effectively identify single-type or multiple-type faults of gas path components. This work inversely builds a base performance model of a turboprop engine, intended for a high-altitude UAV, from measured performance data, and proposes a fault diagnostic system using the base engine performance model together with artificial intelligence methods, namely Fuzzy Logic and Neural Networks. The proposed diagnostic system first isolates the faulted components using Fuzzy Logic, then quantifies the faults of the identified components using an NN trained on a fault learning database obtained from the developed base performance model. The NN is trained with the Feed Forward Back Propagation (FFBP) method. Finally, several test examples verify that component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.
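As a toy illustration of FFBP training of the kind used for fault quantification, the sketch below implements a one-hidden-layer feed-forward network trained by plain back-propagation in NumPy. The architecture, learning rate, and training data are hypothetical stand-ins, not the authors' network or fault database:

```python
import numpy as np

def train_ffbp(X, Y, hidden=8, lr=0.2, epochs=5000, seed=0):
    """Tiny feed-forward back-propagation (FFBP) sketch: one tanh hidden
    layer, linear output, full-batch gradient descent on mean squared error.
    Returns a prediction function."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # forward pass, hidden layer
        P = H @ W2 + b2                     # linear output layer
        dP = (P - Y) / len(X)               # MSE gradient at the output
        dH = (dP @ W2.T) * (1 - H ** 2)     # back-propagate through tanh
        W2 -= lr * H.T @ dP; b2 -= lr * dP.sum(0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ W2 + b2
```

In the diagnostic system described above, X would hold deviations of measured engine parameters and Y the corresponding component-fault magnitudes generated from the base performance model.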
An inverse method to determine the mechanical properties of the iris in vivo
2014-01-01
Background Understanding the mechanical properties of the iris can provide insight into eye diseases involving abnormalities of iris morphology. Material parameters of the iris have previously been calculated only from ex vivo experiments. However, the mechanical response of the iris in vivo differs from that ex vivo; therefore, a method is put forward to determine the material parameters of the iris by combining an optimization method with the finite element method, based on an in vivo experiment. Material and methods Ocular hypertension was induced by rapid perfusion of the anterior chamber; during perfusion, the intraocular pressures in the anterior and posterior chambers were recorded by sensors, and images of the anterior segment were captured by an ultrasonic system. The displacements of characteristic points on the surface of the iris were calculated. A finite element model of the anterior chamber was developed from the ultrasonic image taken before perfusion, and the multi-island genetic algorithm was employed to determine the material parameters of the iris by minimizing the difference between the finite element simulation and the experimental measurements. Results Material parameters of the iris in vivo were identified, with the iris treated as a nearly incompressible second-order Ogden solid. The values of the parameters μ1, α1, μ2 and α2 were 0.0861 ± 0.0080 MPa, 54.2546 ± 12.7180, 0.0754 ± 0.0200 MPa, and 48.0716 ± 15.7796, respectively. The stability of the inverse finite element method was verified, and the sensitivity of the model parameters was investigated. Conclusion Material properties of the iris in vivo can be determined using the multi-island genetic algorithm coupled with the finite element method, based on in vivo experiments. PMID:24886660
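With the reported mean parameters, the identified material law can be evaluated directly. The sketch below computes the uniaxial Cauchy stress of an incompressible second-order Ogden solid, assuming the common strain-energy normalization W = Σ (2μ_i/α_i²)(λ1^α_i + λ2^α_i + λ3^α_i − 3); the paper may use a different Ogden convention, so treat this as illustrative:

```python
import numpy as np

# Mean in vivo iris parameters reported in the abstract
MU = [0.0861, 0.0754]        # MPa
ALPHA = [54.2546, 48.0716]   # dimensionless

def ogden_uniaxial_stress(lam, mu=MU, alpha=ALPHA):
    """Uniaxial Cauchy stress (MPa) for an incompressible 2nd-order Ogden
    solid at stretch lam, with lateral stretches lam**(-1/2) and the
    pressure eliminated by the traction-free lateral condition:
        sigma = sum_i (2*mu_i/alpha_i) * (lam**alpha_i - lam**(-alpha_i/2))
    """
    lam = np.asarray(lam, float)
    return sum((2 * m / a) * (lam ** a - lam ** (-a / 2))
               for m, a in zip(mu, alpha))
```

The very large α exponents imply strong strain stiffening: even a few percent of stretch produces a steep rise in stress, consistent with identification at the small strains reachable in vivo.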
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy
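The Poisson-likelihood posterior described above can be explored with a basic random-walk sampler. The following is a toy stand-in for the paper's hierarchical MCMC schemes: a simple Gaussian smoothing prior replaces the application-specific priors (edge-localizing, Wishart, etc.), non-negativity is enforced by rejection, and all parameter values are illustrative:

```python
import numpy as np

def metropolis_poisson(y, A, n_samp=5000, step=0.05, delta=10.0, seed=0):
    """Random-walk Metropolis sketch for a linear inverse problem with a
    Poisson photon-count likelihood y_i ~ Poisson((A x)_i) and a Gaussian
    first-difference smoothing prior with strength delta. Returns the chain
    of samples of x (shape n_samp x n)."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)               # first-difference operator

    def logpost(x):
        lam = A @ x
        if np.any(x < 0) or np.any(lam <= 0):
            return -np.inf                        # enforce non-negativity
        return np.sum(y * np.log(lam) - lam) \
            - 0.5 * delta * np.sum((D @ x) ** 2)

    x = np.full(n, max(y.mean(), 1.0) / max(A.sum(1).mean(), 1e-9))
    lp, chain = logpost(x), []
    for _ in range(n_samp):
        prop = x + step * rng.standard_normal(n)  # symmetric proposal
        lp_p = logpost(prop)
        if np.log(rng.random()) < lp_p - lp:      # Metropolis accept/reject
            x, lp = prop, lp_p
        chain.append(x.copy())
    return np.array(chain)
```

Posterior means and credible intervals for the unknowns (e.g., areal density profiles) then come from chain statistics after discarding burn-in; the paper's samplers are considerably more efficient, exploiting the hierarchical structure.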
NASA Astrophysics Data System (ADS)
Dolman, A. J.; Shvidenko, A.; Schepaschenko, D.; Ciais, P.; Tchebakova, N.; Chen, T.; van der Molen, M. K.; Belelli Marchesini, L.; Maximov, T. C.; Maksyutov, S.; Schulze, E.-D.
2012-12-01
We determine the net land-to-atmosphere flux of carbon in Russia, including Ukraine, Belarus and Kazakhstan, using inventory-based, eddy covariance, and inversion methods. Our high boundary estimate is -342 Tg C yr-1 from the eddy covariance method, which is close to the upper bounds of the inventory-based Land Ecosystem Assessment (LEA) and the inverse model estimates. A lower boundary estimate is provided at -1350 Tg C yr-1 from the inversion models. The average of the three methods is -613.5 Tg C yr-1. The methane emission is estimated separately at 41.4 Tg C yr-1. The three methods agree well within their respective error bounds, so there is good consistency between bottom-up and top-down methods. The net atmosphere-to-land flux is primarily caused by the forests of Russia (-692 Tg C yr-1 from the LEA). It remains remarkable, however, that the three methods provide such close estimates (-615, -662, -554 Tg C yr-1) for net biome production (NBP), given the inherent uncertainties in all of the approaches. The lack of recent forest inventories, the few eddy covariance sites with the associated upscaling uncertainty, and the undersampling of concentrations for the inversions are among the prime causes of the uncertainty. The dynamic global vegetation models (DGVMs) suggest a much lower uptake at -91 Tg C yr-1, and we argue that this is caused by a high estimate of heterotrophic respiration compared to the other methods.
NASA Astrophysics Data System (ADS)
Tsuboi, S.; Miyoshi, T.; Obayashi, M.; Tono, Y.; Ando, K.
2014-12-01
Recent progress in large-scale computing using waveform modeling techniques and high-performance computing facilities has demonstrated the possibility of performing full-waveform inversion of the three-dimensional (3D) seismological structure inside the Earth. We apply the adjoint method (Liu and Tromp, 2006) to obtain the 3D structure beneath the Japanese Islands. First we implemented the Spectral-Element Method on the K computer in Kobe, Japan. We optimized SPECFEM3D_GLOBE (Komatitsch and Tromp, 2002) using OpenMP so that the code fits the hybrid architecture of the K computer. We can now use 82,134 nodes of the K computer (657,072 cores) to compute synthetic waveforms with about 1 s accuracy for a realistic 3D Earth model, at a performance of 1.2 PFLOPS. Using this optimized SPECFEM3D_GLOBE code, we take one chunk around the Japanese Islands from the global mesh and compute synthetic seismograms with an accuracy of about 10 s. We use the GAP-P2 mantle tomography model (Obayashi et al., 2009) as the initial 3D model and use as many broadband seismic stations available in this region as possible to perform the inversion. We then use time windows for body waves and surface waves to compute adjoint sources and calculate adjoint kernels for the seismic structure. We have performed several iterations and obtained an improved 3D structure beneath the Japanese Islands. The result demonstrates that the waveform misfit between observed and theoretical seismograms decreases as the iteration proceeds. We are now preparing to use much shorter periods in our synthetic waveform computation and to obtain the seismic structure for basin-scale models, such as the Kanto basin, where there is a dense seismic network and high seismic activity. Acknowledgements: This research was partly supported by the MEXT Strategic Program for Innovative Research. We used F-net seismograms of the National Research Institute for Earth Science and Disaster Prevention.
Constraining the solutions of an inverse method of stellar population synthesis
NASA Astrophysics Data System (ADS)
Moultaka, J.; Boisson, C.; Joly, M.; Pelat, D.
2004-06-01
In three previous papers (Pelat \cite{Pelat97}, MNRAS, 284, 365; Pelat \cite{Pelat98}, MNRAS, 299, 877; Moultaka & Pelat \cite{Moultaka00}, MNRAS, 314, 409), we set out an inverse stellar population synthesis method that uses a database of stellar spectra. Unlike other methods, this one provides full knowledge of all possible solutions as well as a good estimation of their stability; moreover, it provides the unique approximate solution, when the problem is overdetermined, using a rigorous minimization procedure. In Boisson et al. (\cite{Boisson00}, A&A, 357, 850), this method was applied to 10 active and 2 normal galaxies. In this paper we analyse the results of the method after constraining the solutions. Adding a priori physical conditions to the solutions constitutes a good way to regularize the synthesis problem. As an illustration we introduce physical constraints on the relative numbers of stars, taking into account our present knowledge of the initial mass function in galaxies. To avoid biasing the solutions through such constraints, we use constraints involving only inequalities between the numbers of stars, after dividing the H-R diagram into various groups of stellar masses. We discuss the results for a well-known globular cluster of the galaxy M 31 and for some of the galaxies studied in Boisson et al. (\cite{Boisson00}, A&A, 357, 850). We find that, given the spectral resolution and the spectral domain, the method is very stable with respect to such constraints (i.e. the constrained solutions are almost the same as the unconstrained ones). Additional information can nevertheless be derived about the evolutionary stage of the last burst of star formation, although the precise age of this particular burst remains questionable. Appendix A, Figs. 2-5 and Tables 4-6 are only available in electronic form at http://www.edpsciences.org
A tracer-based inversion method for diagnosing eddy-induced diffusivity and advection
NASA Astrophysics Data System (ADS)
Bachman, S. D.; Fox-Kemper, B.; Bryan, F. O.
2015-02-01
A diagnosis method is presented which inverts a set of tracer flux statistics into an eddy-induced transport intended to apply for all tracers. The underlying assumption is that a linear flux-gradient relationship describes eddy-induced tracer transport, but a full tensor coefficient is assumed rather than a scalar coefficient which allows for down-gradient and skew transports. Thus, Lagrangian advection and anisotropic diffusion not necessarily aligned with the tracer gradient can be diagnosed. In this method, multiple passive tracers are initialized in an eddy-resolving flow simulation. Their spatially-averaged gradients form a matrix, where the gradient of each tracer is assumed to satisfy an identical flux-gradient relationship. The resulting linear system, which is overdetermined when using more than three tracers, is then solved to obtain an eddy transport tensor R which describes the eddy advection (antisymmetric part of R) and potentially anisotropic diffusion (symmetric part of R) in terms of coarse-grained variables. The mathematical basis for this inversion method is presented here, along with practical guidelines for its implementation. We present recommendations for initialization of the passive tracers, maintaining the required misalignment of the tracer gradients, correcting for nonconservative effects, and quantifying the error in the diagnosed transport tensor. A method is proposed to find unique, tracer-independent, distinct rotational and divergent Lagrangian transport operators, but the results indicate that these operators are not meaningfully relatable to tracer-independent eddy advection or diffusion. With the optimal method of diagnosis, the diagnosed transport tensor is capable of predicting the fluxes of other tracers that are withheld from the diagnosis, including even active tracers such as buoyancy, such that relative errors of 14% or less are found.
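The core inversion step is ordinary least squares on the stacked flux-gradient relation F_i = -R g_i, after which the symmetric and antisymmetric parts of R are split into diffusive and advective/skew components. A minimal sketch of that step, with synthetic gradients and fluxes standing in for the diagnosed eddy statistics (error quantification and the rotational/divergent decomposition discussed in the text are omitted):

```python
import numpy as np

def diagnose_transport_tensor(grads, fluxes):
    """Least-squares inversion of the flux-gradient relation F_i = -R @ g_i.

    grads, fluxes: (n_tracers, d) arrays of coarse-grained tracer gradients
    and eddy fluxes. The system is overdetermined when n_tracers > d, as
    recommended in the text. Returns the full tensor R, its symmetric part
    (anisotropic diffusion) and antisymmetric part (skew/advective transport).
    """
    G = np.asarray(grads, float)
    F = np.asarray(fluxes, float)
    # Solve G @ R.T = -F in the least-squares sense
    RT, *_ = np.linalg.lstsq(G, -F, rcond=None)
    R = RT.T
    sym = 0.5 * (R + R.T)       # down-gradient / anisotropic diffusion
    anti = 0.5 * (R - R.T)      # skew transport (eddy advection)
    return R, sym, anti
```

With more than d tracers whose gradients stay misaligned, the least-squares solution is unique; the paper's error metric can then be built from the residual G @ R.T + F.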
Mates, Steven P; Forster, Aaron M; Hunston, Donald; Rhorer, Richard; Everett, Richard K; Simmonds, Kirth E; Bagchi, Amit
2012-10-01
Soft elastomeric materials that mimic real soft human tissues are sought to provide realistic experimental devices for simulating the human body's response to blast loading, to aid the development of more effective protective equipment. The dynamic mechanical behavior of these materials is often measured using a Kolsky bar because it can achieve both the high strain rates (>100 s(-1)) and the large strains (>20%) that prevail in blast scenarios. Obtaining valid results is challenging, however, due to poor dynamic equilibrium, friction, and inertial effects. To avoid these difficulties, an inverse method was employed to determine the dynamic response of a soft, prospective biomimetic elastomer using Kolsky bar tests coupled with high-speed 3D digital image correlation. Individual tests were modeled using finite elements, and the dynamic stiffness of the elastomer was identified by matching the simulation results with the test data using numerical optimization. Using this method, the average dynamic response was found to be nearly equivalent to the quasi-static response measured with stress-strain curves at compressive strains up to 60%, with an uncertainty of ±18%. Moreover, the behavior was consistent with the results of stress relaxation experiments and oscillatory tests, although the latter were performed at lower strain levels.
An inverse method to recover the SFR and reddening properties from spectra of galaxies
NASA Astrophysics Data System (ADS)
Vergely, J.-L.; Lançon, A.; Mouhcine
2002-11-01
We develop a non-parametric inverse method to investigate the star formation rate, the metallicity evolution and the reddening properties of galaxies based on their spectral energy distributions (SEDs). This approach allows us to clarify the level of information present in the data, depending on its signal-to-noise ratio. When low resolution SEDs are available in the ultraviolet, optical and near-IR wavelength ranges together, we conclude that it is possible to constrain the star formation rate and the effective dust optical depth simultaneously with a signal-to-noise ratio of 25. With excellent signal-to-noise ratios, the age-metallicity relation can also be constrained. We apply this method to the well-known nuclear starburst in the interacting galaxy NGC 7714. We focus on deriving the star formation history and the reddening law. We confirm that classical extinction models cannot provide an acceptable simultaneous fit of the SED and the lines. We also confirm that, with the adopted population synthesis models and in addition to the current starburst, an episode of enhanced star formation that started more than 200 Myr ago is required. As the time elapsed since the last interaction with NGC 7715, based on dynamical studies, is about 100 Myr, our result reinforces the suggestion that this interaction might not have been the most important event in the life of NGC 7714.
Bulk Modulus of Spherical Palladium Nanoparticles by Chen-Mobius Lattice Inversion Method
NASA Astrophysics Data System (ADS)
Abdul-Hafidh, Esam
2015-03-01
Palladium is a precious and rare element belonging to the platinum group metals (PGMs), with the lowest density and melting point of the group. The numerous uses of Pd in dentistry, medicine and industrial applications have attracted considerable investment. Preparation and characterization of palladium nanoparticles have been conducted by many researchers, but very little effort has been devoted to the study of the physical properties of Pd, such as its mechanical, optical, and electrical properties. In this study, the Chen-Mobius lattice inversion method is used to calculate the cohesive energy and bulk modulus of palladium. The method was employed to calculate the cohesive energy by summing over all pairs of atoms within spherical palladium nanoparticles. The bulk modulus is derived from the cohesive energy curve as a function of particle size. The cohesive energy was calculated using the potential energy function proposed by Rose et al. (1981). The results are comparable with previous predictions for metallic nanoparticles. This work is supported by the Royal Commission at Yanbu, Saudi Arabia.
Calibration-free inverse method for depth-profile analysis with laser-induced breakdown spectroscopy
NASA Astrophysics Data System (ADS)
Gaudiuso, R.
2016-09-01
The calibration-free inverse method (CF-IM) is a variant of the classical CF approach in which the plasma temperature is determined using a single calibration standard. In this work, the CF-IM was suitably modified in order to test its applicability to depth-resolved elemental analysis of stratified samples. The single calibration standard was used as a reference sample to model the acquisition conditions of the spectra, to investigate the effect of the acquisition geometry, and to account for possible crater-induced changes in the acquired spectra and plasma parameters. A depth profile of the standard sample was thus performed to obtain a plasma temperature profile, which in turn was employed, together with the experimental electron density profile, in the depth-profile calibration-free analysis. The methodology was also applied to archaeological samples, with the purpose of testing the method on weathered and layered samples, and the results were compared with those of classical LIBS with calibration lines.
Modified Inverse First Order Reliability Method (I-FORM) for Predicting Extreme Sea States.
Eckert-Gallup, Aubrey Celia; Sallaberry, Cedric Jean-Marie; Dallman, Ann Renee; Neary, Vincent Sinclair
2014-09-01
Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. Such environmental contours are characterized by combinations of significant wave height and energy period values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (IFORM) is standard design practice for generating environmental contours. In this paper, the traditional application of IFORM to generating environmental contours representing extreme sea states is described in detail and its merits and drawbacks are assessed. The application of additional methods for analyzing sea state data, including the use of principal component analysis (PCA) to create an uncorrelated representation of the data under consideration, is proposed. A reexamination of the components of the IFORM application to the problem at hand, including the use of new distribution fitting techniques, is shown to contribute to the development of more accurate and reasonable representations of extreme sea states for use in survivability analysis for marine structures. Keywords: Inverse FORM, Principal Component Analysis, Environmental Contours, Extreme Sea State Characterization, Wave Energy Converters
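The IFORM construction itself is compact: choose a reliability index from the recurrence interval, trace a circle of that radius in standard-normal space, and map it through the fitted joint distribution via a Rosenblatt transform. The sketch below uses assumed, purely illustrative Weibull/lognormal distributions and conditional-dependence parameters, not fitted hindcast data:

```python
import numpy as np
from scipy import stats

def iform_contour(T_return_yrs, state_hours=3.0, n=180):
    """Inverse-FORM environmental contour sketch.

    Assumes (hypothetically) a Weibull marginal for significant wave height
    Hs and a lognormal for energy period Te conditional on Hs. Returns
    (hs, te) arrays tracing the T_return_yrs contour."""
    # Exceedance probability per sea state and reliability index beta
    n_states = T_return_yrs * 365.25 * 24.0 / state_hours
    beta = stats.norm.ppf(1.0 - 1.0 / n_states)
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    u1, u2 = beta * np.cos(theta), beta * np.sin(theta)
    # Rosenblatt transform: marginal Hs first, then Te given Hs
    hs = stats.weibull_min.ppf(stats.norm.cdf(u1), c=1.5, scale=2.0)
    mu_te = np.log(5.0 + 1.2 * np.sqrt(hs))   # assumed conditional median
    te = stats.lognorm.ppf(stats.norm.cdf(u2), s=0.15, scale=np.exp(mu_te))
    return hs, te
```

The PCA variant discussed in the paper would first rotate the (Hs, Te) data into uncorrelated components and fit distributions there before applying the same circle-and-transform step.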
NASA Astrophysics Data System (ADS)
Fernández-Oliveras, Alicia; Rubiño, Manuel; Pérez, María. M.
2013-11-01
Light propagation in biological media is characterized by the absorption coefficient, the scattering coefficient, the scattering phase function, the refractive index, and the surface conditions (roughness). By means of the inverse adding-doubling (IAD) method, transmittance and reflectance measurements lead to the determination of the absorption coefficient and the reduced scattering coefficient. The additional measurement of the phase function, performed by goniometry, allows the separation of the reduced scattering coefficient into the scattering coefficient and the scattering anisotropy factor. The majority of techniques, such as the one utilized in this work, involve the use of integrating spheres to measure total transmission and reflection. We have employed an integrating sphere setup to measure the total transmittance and reflectance of dental biomaterials used in restorative dentistry. Dental biomaterials are meant to replace dental tissues, such as enamel and dentine, in irreversibly diseased teeth. In previous works we performed goniometric measurements to evaluate the scattering anisotropy factor for these kinds of materials. In the present work we have used the IAD method to combine the measurements performed with the integrating sphere setup with the results of the previous goniometric measurements. The aim was to characterize the analyzed dental biomaterials optically, since complete studies assessing the relevant material properties are required in medical applications. In this context, complete optical characterizations play an important role in achieving optimal quality and the final success of dental biomaterials used in restorative dentistry.
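The separation step described above rests on the similarity relation between the reduced and full scattering coefficients. A two-line sketch (units and example values are illustrative):

```python
def reduced_scattering(mu_s, g):
    """Similarity relation mu_s' = mu_s * (1 - g): the reduced scattering
    coefficient combines the scattering coefficient mu_s (e.g. 1/mm) with
    the anisotropy factor g from goniometry."""
    return mu_s * (1.0 - g)

def scattering_from_reduced(mu_s_prime, g):
    """Inverse direction used here: recover mu_s from the IAD-determined
    mu_s' once g has been measured independently (valid for g < 1)."""
    return mu_s_prime / (1.0 - g)
```

This is why IAD on integrating-sphere data alone yields only mu_s', while the extra goniometric measurement of g resolves mu_s and g separately.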
Systematic hierarchical coarse-graining with the inverse Monte Carlo method
Lyubartsev, Alexander P.; Naômé, Aymeric; Vercauteren, Daniel P.; Laaksonen, Aatto
2015-12-28
We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained in detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can be used in simulations at less accurate level after scaling up the size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730–3737 (1995)] on a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package MagiC is developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (bacterial LiaR regulator bound to a 26 base pairs DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair-potentials are used directly as look-up tables but here we have fitted them to five Gaussians and a repulsive wall. Results show stable association between DNA and the model protein as well as similar position fluctuation profile.
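The IMC correction step can be written compactly once the structure-function averages and their covariance have been sampled: the covariance supplies the Jacobian of the averages with respect to the tabulated potential, and a Newton step follows. The sketch below shows only that linear-algebra step under the simplest assumptions (no regularization, a single iteration); the inputs would come from an MC or MD run, as done by the MagiC package mentioned in the text:

```python
import numpy as np

def imc_potential_update(S_mean, S_cov, S_target, kT=1.0):
    """One inverse Monte Carlo (Newton) correction to a tabulated pair
    potential, in the spirit of Lyubartsev & Laaksonen (1995).

    S_mean:   sampled averages <S_a> of the RDF histogram counts
    S_cov:    sampled covariance matrix <S_a S_g> - <S_a><S_g>
    S_target: reference values from the detailed atomistic simulation
    Returns the potential correction dV on the same grid. Real use
    requires regularization and iteration with fresh sampling."""
    A = -S_cov / kT                       # Jacobian d<S_a>/dV_g
    dV, *_ = np.linalg.lstsq(A, S_target - S_mean, rcond=None)
    return dV
```

Iterating this update with resampled averages drives the coarse-grained radial distribution functions toward the atomistic reference, which is the sense in which the effective potentials reproduce the chosen degrees of freedom.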
Conditionals by inversion provide a universal method for the generation of conditional alleles
Economides, Aris N.; Frendewey, David; Yang, Peter; Dominguez, Melissa G.; Dore, Anthony T.; Lobov, Ivan B.; Persaud, Trikaldarshi; Rojas, Jose; McClain, Joyce; Lengyel, Peter; Droguett, Gustavo; Chernomorsky, Rostislav; Stevens, Sean; Auerbach, Wojtek; DeChiara, Thomas M.; Pouyemirou, William; Cruz, Joseph M.; Feeley, Kieran; Mellis, Ian A.; Yasenchack, Jason; Hatsell, Sarah J.; Xie, LiQin; Latres, Esther; Huang, Lily; Zhang, Yuhong; Pefanis, Evangelos; Skokos, Dimitris; Deckelbaum, Ron A.; Croll, Susan D.; Davis, Samuel; Valenzuela, David M.; Gale, Nicholas W.; Murphy, Andrew J.; Yancopoulos, George D.
2013-01-01
Conditional mutagenesis is becoming a method of choice for studying gene function, but constructing conditional alleles is often laborious, limited by target gene structure, and at times, prone to incomplete conditional ablation. To address these issues, we developed a technology termed conditionals by inversion (COIN). Before activation, COINs contain an inverted module (COIN module) that lies inertly within the antisense strand of a resident gene. When inverted into the sense strand by a site-specific recombinase, the COIN module causes termination of the target gene’s transcription and simultaneously provides a reporter for tracking this event. COIN modules can be inserted into natural introns (intronic COINs) or directly into coding exons as part of an artificial intron (exonic COINs), greatly simplifying allele design and increasing flexibility over previous conditional KO approaches. Detailed analysis of over 20 COIN alleles establishes the reliability of the method and its broad applicability to any gene, regardless of exon–intron structure. Our extensive testing provides rules that help ensure success of this approach and also explains why other currently available conditional approaches often fail to function optimally. Finally, the ability to split exons using the COIN’s artificial intron opens up engineering modalities for the generation of multifunctional alleles. PMID:23918385
Systematic hierarchical coarse-graining with the inverse Monte Carlo method
NASA Astrophysics Data System (ADS)
Lyubartsev, Alexander P.; Naômé, Aymeric; Vercauteren, Daniel P.; Laaksonen, Aatto
2015-12-01
We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained in detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can then be used in less detailed simulations after scaling up the system size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730-3737 (1995)] to a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package, MagiC, has been developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (bacterial LiaR regulator bound to a 26 base pair DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair potentials are used directly as look-up tables, but here we have fitted them to five Gaussians and a repulsive wall. Results show stable association between the DNA and the model protein, as well as a similar position fluctuation profile.
Design optimization of axial flow hydraulic turbine runner: Part I - an improved Q3D inverse method
NASA Astrophysics Data System (ADS)
Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji
2002-06-01
With the aim of constructing a comprehensive design optimization procedure for axial flow hydraulic turbines, an improved quasi-three-dimensional inverse method has been proposed from a system viewpoint, and a set of rotational flow governing equations as well as a blade geometry design equation have been derived. In the inverse method, the computational domain extends from the inlet of the guide vane to the far outlet of the runner blade, and flows in the different regions are solved simultaneously. The influence of wicket gate parameters on the runner blade design can thus be considered, and the difficulty of defining the flow condition at the runner blade inlet is surmounted. As a pre-computation of the initial blade design on the S2m surface is newly adopted, the iteration between the S1 and S2m surfaces has been reduced greatly and the convergence of the inverse computation has been improved. The present model has been applied to the inverse computation of a Kaplan turbine runner. Experimental results and direct flow analysis have confirmed the validity of the inverse computation. Numerical investigations show that a proper enlargement of the guide vane distribution diameter is advantageous for improving the performance of the axial hydraulic turbine runner.
Site Effects Estimation by a Transfer-Station Generalized Inversion Method
NASA Astrophysics Data System (ADS)
Zhang, Wenbo; Yu, Xiangwei
2016-04-01
Site effect is one of the essential factors in characterizing strong ground motion as well as in earthquake engineering design. In this study, the generalized inversion technique (GIT) is applied to estimate site effects, and the GIT is modified to improve its analytical ability. The GIT needs a reference station as a standard. Ideally, the reference station is located at a rock site, and its site effect is considered to be a constant. For the same earthquake, the record spectrum of a station of interest is divided by that of the reference station, and the source term is eliminated; thus the site effect and the attenuation can be acquired. In the GIT process, the amount of earthquake data available for analysis is limited to that recorded by the reference station, and the stations whose site effects can be estimated are restricted to those that recorded common events with the reference station. To overcome this limitation of the GIT, a modified GIT is put forward in this study, namely the transfer-station generalized inversion method (TSGI). Compared with the GIT, this modified method can be used to enlarge the data set and increase the number of stations whose site effects can be analyzed, which makes the solution much more stable. To verify the results of the GIT, a non-reference method, the genetic algorithm (GA), is applied to estimate absolute site effects. On April 20, 2013, an earthquake of magnitude MS 7.0 occurred in the Lushan region, China. After this event, several hundred aftershocks with ML<3.0 occurred in this region. The purpose of this paper is to investigate the site effects and Q factor for this area based on the aftershock strong motion records from the China National Strong Motion Observation Network System. Our results show that when the TSGI is applied instead of the GIT, the total number of events used in the inversion increases from 31 to 54 and the total number of stations whose site effect can be estimated
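The spectral-ratio step at the heart of the GIT can be sketched in a few lines (an illustrative toy with synthetic spectra, not the authors' implementation):

```python
import numpy as np

def site_effect_ratio(station_spec, reference_spec, eps=1e-12):
    """Relative site effect from spectral division: the record spectrum of a
    station of interest divided by that of a rock-site reference station for
    the same event. The common source term cancels in the ratio."""
    station_spec = np.asarray(station_spec, dtype=float)
    reference_spec = np.asarray(reference_spec, dtype=float)
    return station_spec / np.maximum(reference_spec, eps)

# Toy example: one shared source spectrum, two different site responses.
freqs = np.linspace(0.5, 20.0, 40)
source = 1.0 / (1.0 + (freqs / 5.0) ** 2)            # shared source term
site = 1.0 + 2.0 * np.exp(-(freqs - 3.0) ** 2)       # amplification near 3 Hz
obs_station = source * site                          # station of interest
obs_reference = source * 1.0                         # rock-site reference, site effect ~ 1

recovered = site_effect_ratio(obs_station, obs_reference)
# recovered equals the site term because the source cancels exactly.
```

This division-based scheme is exactly why the GIT requires common events: without a shared record at the reference station, there is no ratio to form.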
Methods to control phase inversions and enhance mass transfer in liquid-liquid dispersions
Tsouris, Constantinos; Dong, Junhang
2002-01-01
The present invention is directed to the effects of applied electric fields on liquid-liquid dispersions. In general, the present invention is directed to the control of phase inversions in liquid-liquid dispersions. Because of polarization and deformation effects, coalescence of aqueous drops is facilitated by the application of electric fields. As a result, with an increase in the applied voltage, the ambivalence region is narrowed and shifted toward higher volume fractions of the dispersed phase. This permits the invention to be used to ensure that the aqueous phase remains continuous, even at a high volume fraction of the organic phase. Additionally, the volume fraction of the organic phase may be increased without causing phase inversion, and may be used to correct a phase inversion which has already occurred. Finally, the invention may be used to enhance mass transfer rates from one phase to another through the use of phase inversions.
Fabrication and characterization of cerium-doped barium titanate inverse opal by sol-gel method
Jin Yi; Zhu Yihua; Yang Xiaoling; Li Chunzhong; Zhou Jinghong
2007-01-15
Cerium-doped barium titanate inverted opal was synthesized from barium acetate containing cerous acetate and tetrabutyl titanate in the interstitial spaces of a polystyrene (PS) opal. This procedure involves infiltration of the precursors into the interstices of the PS opal template, followed by hydrolytic polycondensation of the precursors to amorphous barium titanate and removal of the PS opal by calcination. The morphologies of the opal and inverse opal were characterized by scanning electron microscopy (SEM). The pores were characterized by mercury intrusion porosimetry (MIP). X-ray photoelectron spectroscopy (XPS) investigation showed the doping structure of cerium, barium, and titanium, and powder X-ray diffraction allows one to observe the influence of the doping degree on the grain size. The lattice parameters, crystal size, and lattice strain were calculated by the Rietveld refinement method. The synthesis of cerium-doped barium titanate inverted opals provides an opportunity to electrically and optically engineer the photonic band structure and the possibility of developing tunable three-dimensional photonic crystal devices. - Graphical abstract: Cerium-doped barium titanate inverted opal was synthesized from barium acetate containing cerous acetate and tetrabutyl titanate in the interstitial spaces of a PS opal, which involves infiltration of precursors into the interstices of the PS opal template and removal of the PS opal by calcination.
Determination of optical property changes by laser treatments using inverse adding-doubling method
NASA Astrophysics Data System (ADS)
Honda, Norihiro; Ishii, Katsunori; Kimura, Akinori; Sakai, Makoto; Awazu, Kunio
2009-02-01
It is widely recognized that realizing pre-estimated treatment effects requires knowledge of the optical properties of the target tissues, which is used to predict the propagation and distribution of light within tissue; a technical problem is that these optical properties change during laser irradiation. In this study, the optical properties of normal and laser-coagulated chicken breast tissues, and of normal and laser-ablated porcine intervertebral disks, have been determined in vitro in the spectral range between 350 and 1000 nm. In addition, the optical properties of normal and photodynamic therapy (PDT) treated tumor tissues (Lewis lung carcinoma) have been determined. Diffuse reflectance and total transmittance of the samples are measured using an integrating-sphere technique. From these experimental data, the absorption coefficients and the reduced scattering coefficients of the samples are determined employing an inverse adding-doubling method. Laser coagulation and ablation have clearly increased the reduced scattering coefficient and slightly reduced the absorption coefficient. PDT treatment has increased the absorption coefficient and reduced the scattering coefficient. It is our expectation that these data will provide a fundamental understanding of laser-tissue interaction behavior. The changes of the optical properties should be accounted for when planning the therapeutic procedure, for the realization of safe laser treatments.
Inverse method predicting spinning modes radiated by a ducted fan from free-field measurements.
Lewy, Serge
2005-02-01
In this study, the inverse problem of deducing the modal structure of the acoustic field generated by a ducted turbofan is addressed using conventional far-field directivity measurements. The final objective is to make input data available for predicting noise radiation in other configurations that have not been tested. The present paper is devoted to the analytical part of that study. The proposed method is based on the equations governing ducted sound propagation and free-field radiation. It leads to fast computations, checked against Rolls-Royce tests made in the framework of previous European projects. Results seem to be reliable although the system of equations to be solved is generally underdetermined (more propagating modes than acoustic measurements). A limited number of modes are thus selected according to any a priori knowledge of the sources. A first guess of the source amplitudes is obtained by adjusting the calculated maximum of radiation of each mode to the measured sound pressure level at the same angle. A least-squares fitting gives the final solution. A simple correction can be made to account for the mean flow velocity inside the nacelle, which shifts the directivity patterns. It consists of modifying the actual frequency to keep the cut-off ratios unchanged. PMID:15759694
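The final least-squares step — fitting the amplitudes of a small set of selected modes to measured far-field directivity — can be sketched as follows (the directivity function and all values are hypothetical stand-ins, not Rolls-Royce data or the paper's radiation model):

```python
import numpy as np

# Far-field measurement angles (radians from the duct axis).
angles = np.linspace(0.1, 1.4, 12)

def mode_directivity(angle, m):
    """Hypothetical far-field directivity of spinning mode m (toy stand-in
    for the ducted-propagation/radiation equations of the paper)."""
    return np.sin((m + 1) * angle) ** 2 / (1.0 + angle)

true_amps = np.array([2.0, 0.5, 1.2])      # unknown modal amplitudes
A = np.column_stack([mode_directivity(angles, m) for m in range(3)])
measured = A @ true_amps                    # synthetic "measured" directivity

# Least-squares fit of the selected modes to the measured levels.
amps, *_ = np.linalg.lstsq(A, measured, rcond=None)
```

With more candidate modes than measurement angles the system becomes underdetermined, which is why the paper first restricts the mode set using a priori knowledge of the sources.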
Direct band gap silicon crystals predicted by an inverse design method
NASA Astrophysics Data System (ADS)
Oh, Young Jun; Lee, In-Ho; Lee, Jooyoung; Kim, Sunghyun; Chang, Kee Joo
2015-03-01
Cubic diamond silicon has an indirect band gap and does not absorb or emit light as efficiently as other semiconductors with direct band gaps. Thus, searching for Si crystals with direct band gaps around 1.3 eV is important for realizing efficient thin-film solar cells. In this work, we report various crystalline silicon allotropes with direct and quasi-direct band gaps, predicted by an inverse design method that combines a conformation space annealing algorithm for global optimization with first-principles density functional calculations. The predicted allotropes exhibit energies less than 0.3 eV per atom above, and good lattice matches with, the diamond structure. The structural stability is examined by performing finite-temperature ab initio molecular dynamics simulations and calculating the phonon spectra. The absorption spectra are obtained by solving the Bethe-Salpeter equation together with the quasiparticle G0W0 approximation. For several allotropes with band gaps around 1 eV, the photovoltaic efficiencies are comparable to those of the best-known photovoltaic absorbers such as CuInSe2. This work is supported by the National Research Foundation of Korea (2005-0093845 and 2008-0061987), Samsung Science and Technology Foundation (SSTF-BA1401-08), KIAS Center for Advanced Computation, and KISTI (KSC-2013-C2-040).
A proposed through-flow inverse method for the design of mixed-flow pumps
NASA Technical Reports Server (NTRS)
Borges, Joao Eduardo
1991-01-01
A through-flow (hub-to-shroud) truly inverse method is proposed and described. It uses an imposition of mean swirl, i.e., radius times mean tangential velocity, given throughout the meridional section of the turbomachine as an initial design specification. In the present implementation, it is assumed that the fluid is inviscid, incompressible, and irrotational at inlet and that the blades are supposed to have zero thickness. Only blade rows that impart to the fluid a constant work along the space are considered. An application of this procedure to design the rotor of a mixed-flow pump is described in detail. The strategy used to find a suitable mean swirl distribution and the other design inputs is also described. The final blade shape and pressure distributions on the blade surface are presented, showing that it is possible to obtain feasible designs using this technique. Another advantage of this technique is the fact that it does not require large amounts of CPU time.
NASA Astrophysics Data System (ADS)
Borisov, Dmitry; Singh, Satish C.; Fuji, Nobuaki
2015-09-01
Seismic full waveform inversion is an objective method to estimate elastic properties of the subsurface and is an important area of research, particularly in the seismic exploration community. It is a data-fitting approach, where the difference between observed and synthetic data is minimized iteratively. Due to its very high computational cost, the practical implementation of waveform inversion has so far been restricted to a 2-D geometry with different levels of physics incorporated in it (e.g. elasticity/viscoelasticity) or to a 3-D geometry but using an acoustic approximation. However, the Earth is three-dimensional, elastic and heterogeneous, and therefore a full 3-D elastic inversion is required in order to obtain more accurate and valuable models of the subsurface. Despite the recent increase in computing power, the application of 3-D elastic full waveform inversion to real-scale problems remains quite challenging on current computer architectures. Here, we present an efficient method to perform 3-D elastic full waveform inversion for time-lapse seismic data using a finite-difference injection method. In this method, the wavefield is computed in the whole model and is stored on a surface above a finite volume where the model is perturbed and localized inversion is performed. Comparison of the final results using the 3-D finite-difference injection method and conventional 3-D inversion performed within the whole volume shows that our new method provides significant reductions in computational time and memory requirements without any notable loss in accuracy. Our approach shows great potential for efficient reservoir monitoring in real time-lapse experiments.
Wang, G.L.; Chew, W.C.; Cui, T.J.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.
2004-01-01
Three-dimensional (3D) subsurface imaging using inversion of data obtained from the very early time electromagnetic system (VETEM) was discussed. The study was carried out using the distorted Born iterative method to match the internal nonlinear property of the 3D inversion problem. The forward solver was based on the total-current formulation bi-conjugate gradient-fast Fourier transform (BCCG-FFT). It was found that the selection of the regularization parameter follows a heuristic rule, as used in the Levenberg-Marquardt algorithm, so that the iteration is stable.
NASA Astrophysics Data System (ADS)
Usui, Y.; Uehara, M.; Okuno, K.
2012-01-01
Modern scanning magnetic microscopes have the potential for fine-scale magnetic investigations of rocks. Observations at high spatial resolution produce large volumes of data, and the interpretation of these data is a nontrivial task. We have developed software using an efficient magnetic inversion technique that explicitly constructs the spatially localized Backus-Gilbert averaging kernel. Our approach, using the subtractive optimally localized averages (SOLA) method (Pijpers, R.P., Thompson, M.J., 1992. Faster formulations of the optimally localized averages method for helioseismic inversions. Astronomy and Astrophysics 262, L33-L36), yields a unidirectional magnetization. The averaging kernel expresses the spatial resolution of the inversion and is valuable for paleomagnetic applications of the scanning magnetic microscope. Inversion examples for numerical magnetization patterns are provided to demonstrate the performance of the method. Examples of actual magnetic field data collected from thin sections of natural rocks measured with a magnetoimpedance (MI) magnetic microscope are also provided. Numerical tests suggest that the data-independent averaging kernel is desirable for a point-to-point comparison among multiple data. Contamination by vector magnetization components can be estimated by the averaging kernel. We conclude that the SOLA method is a useful technique for paleomagnetic and rock magnetic investigations using scanning magnetic microscopy.
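The core SOLA idea — choosing coefficients so that a linear combination of data kernels approximates a narrow target kernel while the resulting averaging kernel keeps unit integral — can be sketched as a small constrained least-squares problem (all kernels and parameters below are hypothetical, not the authors' microscope geometry):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
dx = x[1] - x[0]

# Hypothetical data kernels K_i(x), e.g. the sensitivity of each measurement.
centers = np.linspace(0.1, 0.9, 15)
K = np.array([np.exp(-(x - c) ** 2 / 0.02) for c in centers])

# Narrow target kernel T(x) with unit area, centered where we want to "look".
T = np.exp(-(x - 0.5) ** 2 / 0.005)
T /= T.sum() * dx

# SOLA: minimize ||sum_i c_i K_i - T||^2 + mu ||c||^2 subject to the
# averaging kernel A(x) = sum_i c_i K_i(x) having unit integral.
mu = 1e-6
G = K @ K.T * dx + mu * np.eye(len(K))
b = K @ T * dx
kern_ints = K.sum(axis=1) * dx              # integral of each data kernel

# KKT system for the single equality constraint (Lagrange multiplier).
M = np.block([[G, kern_ints[:, None]],
              [kern_ints[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(M, np.concatenate([b, [1.0]]))
c = sol[:-1]

A_kernel = c @ K                            # resolved averaging kernel
```

The resulting averaging kernel integrates to one and peaks at the target location, which is what makes it directly interpretable as the spatial resolution of the inversion.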
ERIC Educational Resources Information Center
Axinte, D. A.
2008-01-01
The paper presents an "inverse" method to teach specialist manufacturing processes by identifying a focal representative product (RP) from which, key specialist manufacturing (KSM) processes are analysed and interrelated to assess the capability of integrated manufacturing routes. In this approach, RP should: comprise KSM processes; involve…
NASA Astrophysics Data System (ADS)
Schmidt, L. S.; Karlsson, N. B.; Hvidberg, C. S.
2016-09-01
High-resolution images of the martian surface have revealed numerous deposits with complex patterns consistent with the flow of ice. Here we applied ice-flow models and inverse methods to estimate the ice thickness and volume of these deposits.
A simulation based method to assess inversion algorithms for transverse relaxation data
NASA Astrophysics Data System (ADS)
Ghosh, Supriyo; Keener, Kevin M.; Pan, Yong
2008-04-01
NMR relaxometry is a very useful tool for understanding various chemical and physical phenomena in complex multiphase systems. A Carr-Purcell-Meiboom-Gill (CPMG) [P.T. Callaghan, Principles of Nuclear Magnetic Resonance Microscopy, Clarendon Press, Oxford, 1991] experiment is an easy and quick way to obtain the transverse relaxation constant (T2) in low field. Most samples have a distribution of T2 values, and extraction of this distribution from the noisy decay data is essentially an ill-posed inverse problem. Various inversion approaches have been used to solve this problem to date. A major issue in using an inversion algorithm is determining how accurate the computed distribution is. A systematic analysis of an inversion algorithm, UPEN [G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data, Journal of Magnetic Resonance 132 (1998) 65-77; G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data II. Data spacing, T2 data, systematic data errors, and diagnostics, Journal of Magnetic Resonance 147 (2000) 273-285], was performed by means of simulated CPMG data generation. Through our simulation technique and statistical analyses, the effects of various experimental parameters on the computed distribution were evaluated. We converged to the true distribution by matching the inversion results from a series of true decay data and noisy simulated data. In addition to the simulation studies, the same approach was also applied to real experimental data to support the simulation results.
X-Ray Imaging-Spectroscopy of Abell 1835
NASA Technical Reports Server (NTRS)
Peterson, J. R.; Paerels, F. B. S.; Kaastra, J. S.; Arnaud, M.; Reiprich, T. H.; Fabian, A. C.; Mushotzky, R. F.; Jernigan, J. G.; Sakelliou, I.
2000-01-01
We present detailed spatially-resolved spectroscopy results of the observation of Abell 1835 using the European Photon Imaging Cameras (EPIC) and the Reflection Grating Spectrometers (RGS) on the XMM-Newton observatory. Abell 1835 is a luminous (10(exp 46)ergs/s), medium redshift (z = 0.2523), X-ray emitting cluster of galaxies. The observations support the interpretation that large amounts of cool gas are present in a multi-phase medium surrounded by a hot (kT(sub e) = 8.2 keV) outer envelope. We detect O VIII Ly(alpha) and two Fe XXIV complexes in the RGS spectrum. The emission measure of the cool gas below kT(sub e) = 2.7 keV is much lower than expected from standard cooling-flow models, suggesting either a more complicated cooling process than simple isobaric radiative cooling or differential cold absorption of the cooler gas.
The GenABEL Project for statistical genomics
Karssen, Lennart C.; van Duijn, Cornelia M.; Aulchenko, Yurii S.
2016-01-01
Development of free/libre open source software is usually done by a community of people with an interest in the tool. For scientific software, however, this is less often the case. Most scientific software is written by only a few authors, often a student working on a thesis. Once the paper describing the tool has been published, the tool is no longer developed further and is left to its own devices. Here we describe the broad, multidisciplinary community we formed around a set of tools for statistical genomics. The GenABEL project for statistical omics actively promotes open interdisciplinary development of statistical methodology and its implementation in efficient and user-friendly software under an open source licence. The software tools developed within the project collectively make up the GenABEL suite, which currently consists of eleven tools. The open framework of the project actively encourages involvement of the community in all stages, from formulation of methodological ideas to application of software to specific data sets. A web forum is used to channel user questions and discussions, further promoting the use of the GenABEL suite. Developer discussions take place on a dedicated mailing list, and development is further supported by robust development practices including use of public version control, code review and continuous integration. Use of this open science model attracts contributions from users and developers outside the “core team”, facilitating agile statistical omics methodology development and fast dissemination. PMID:27347381
NASA Astrophysics Data System (ADS)
Monteiller, Vadim; Chevrot, Sébastien; Komatitsch, Dimitri; Wang, Yi
2015-08-01
We present a method for high-resolution imaging of lithospheric structures based on full waveform inversion of teleseismic waveforms. We model the propagation of seismic waves using our recently developed direct solution method/spectral-element method hybrid technique, which allows us to simulate the propagation of short-period teleseismic waves through a regional 3-D model. We implement an iterative quasi-Newton method based upon the L-BFGS algorithm, where the gradient of the misfit function is computed using the adjoint-state method. Compared to gradient or conjugate-gradient methods, the L-BFGS algorithm has a much faster convergence rate. We illustrate the potential of this method on a synthetic test case that consists of a crustal model with a crustal discontinuity at 25 km depth and a sharp Moho jump. This model contains short- and long-wavelength heterogeneities along the lateral and vertical directions. The iterative inversion starts from a smooth 1-D model derived from the IASP91 reference Earth model. We invert both radial and vertical component waveforms, starting from long-period signals filtered at 10 s and gradually decreasing the cut-off period down to 1.25 s. This multiscale algorithm quickly converges towards a model that is very close to the true model, in contrast to inversions involving short-period waveforms only, which always get trapped into a local minimum of the cost function.
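The multiscale continuation strategy — invert heavily filtered data first, then progressively shorten the cut-off period — can be sketched on a toy 1-D problem, with scipy's L-BFGS-B standing in for the authors' L-BFGS implementation and a moving average standing in for the band-pass filter (all quantities hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 50
m_true = rng.standard_normal(n)             # "true" model to recover

def smooth(v, width):
    """Moving average: a crude stand-in for low-pass filtering the data."""
    return np.convolve(v, np.ones(width) / width, mode="same")

def misfit(m, width):
    r = smooth(m, width) - smooth(m_true, width)
    return 0.5 * r @ r

def grad(m, width):
    # The moving-average operator (odd width, 'same' mode) is symmetric,
    # so it acts as its own adjoint in the gradient.
    return smooth(smooth(m, width) - smooth(m_true, width), width)

# Multiscale continuation: fit strongly filtered data first, then
# progressively restore the short "periods", refining the model each time.
m = np.zeros(n)
for width in (15, 7, 3, 1):
    res = minimize(misfit, m, args=(width,), jac=grad,
                   method="L-BFGS-B", options={"maxiter": 200})
    m = res.x

err = np.linalg.norm(m - m_true) / np.linalg.norm(m_true)
```

Starting each stage from the previous stage's model is what lets the scheme avoid the local minima that trap inversions of short-period waveforms alone.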
Basin mass dynamic changes in China from GRACE based on a multibasin inversion method
NASA Astrophysics Data System (ADS)
Yi, Shuang; Wang, Qiuyu; Sun, Wenke
2016-05-01
Complex landforms, miscellaneous climates, and enormous populations have influenced various geophysical phenomena in China, which range from water depletion in the underground to retreating glaciers on high mountains and have attracted abundant scientific interest. This paper, which utilizes gravity observations during 2003-2014 from the Gravity Recovery and Climate Experiment (GRACE), intends to comprehensively estimate the mass status in 16 drainage basins in the region. We propose a multibasin inversion method that features resistance to stripe noise and an ability to alleviate signal attenuation from the truncation and smoothing of GRACE data. The results show both positive and negative trends. Tremendous mass accumulation has occurred from the Tibetan Plateau (12.1 ± 0.6 Gt/yr) to the Yangtze River (7.7 ± 1.3 Gt/yr) and southeastern coastal areas, which is suggested to involve an increase in the groundwater storage, lake and reservoir water volume, and the flow of materials from tectonic processes. Additionally, mass loss has occurred in the Huang-Huai-Hai-Liao River Basin (-10.2 ± 0.9 Gt/yr), the Brahmaputra-Nujiang-Lancang River Basin (-15.0 ± 1.1 Gt/yr), and Tienshan Mountain (-4.1 ± 0.3 Gt/yr), a result of groundwater pumping and glacier melting. Areas with groundwater depletion are consistent with the distribution of cities with land subsidence in North China. We find that intensified precipitation can alter the local water supply and that GRACE can adequately capture these dynamics, which could be instructive for China's South-to-North Water Diversion hydrologic project.
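Extracting a basin mass trend with a formal 1-sigma error, as quoted above in Gt/yr, is typically a linear least-squares fit of offset, trend, and an annual cycle to the monthly series; a sketch on synthetic data (values hypothetical, not GRACE products):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 12-year monthly basin-mass series (Gt): trend + annual cycle + noise.
t = np.arange(0.0, 12.0, 1.0 / 12.0)         # time in years, monthly sampling
true_trend = 7.7                              # Gt/yr (Yangtze-like, illustrative)
mass = (true_trend * t + 5.0 * np.sin(2 * np.pi * t)
        + rng.normal(0.0, 1.0, t.size))

# Design matrix: offset, linear trend, annual sine and cosine.
G = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(G, mass, rcond=None)
trend = coef[1]                               # Gt/yr

# Formal 1-sigma trend error from the residual variance.
resid = mass - G @ coef
sigma2 = resid @ resid / (t.size - G.shape[1])
cov = np.linalg.inv(G.T @ G) * sigma2
trend_sigma = float(np.sqrt(cov[1, 1]))
```

The multi-basin inversion itself additionally corrects for truncation and smoothing leakage between basins, which a per-basin trend fit like this one cannot do on its own.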
Application of an inverse method to interpret 231Pa/230Th observations from marine sediments
NASA Astrophysics Data System (ADS)
Burke, Andrea; Marchal, Olivier; Bradtmiller, Louisa I.; McManus, Jerry F.; François, Roger
2011-03-01
Records of 231Pa/230Th from Atlantic sediments have been interpreted to reflect changes in ocean circulation during the geologic past. Such interpretations should be tested with due regard to the limited spatial coverage of 231Pa/230Th data and the uncertainties in our current understanding of the behavior of both nuclides in the ocean. Here an inverse method is used to evaluate the information contained in 231Pa/230Th compilations for the Holocene, Last Glacial Maximum (LGM), and Heinrich Event 1 (H1). An estimate of the abyssal circulation in the modern Atlantic Ocean is obtained by combining hydrographic observations and dynamical constraints. Then sediment 231Pa/230Th data for each time interval are combined with an advection-scavenging model in order to determine their (in)consistency with the modern circulation estimate. We find that the majority of sediment 231Pa/230Th data for the Holocene, LGM, or H1 can be brought into consistency with the modern circulation if plausible assumptions are made about the large-scale distribution of 231Pa and about model uncertainties. Moreover, the adjustments in the data needed to reach compatibility with a hypothetical state of no flow (no advection) are positively biased for each time interval, suggesting that the 231Pa/230Th data (including that for H1) are more consistent with a persistence of some circulation than with no circulation. Our study does not imply that earlier claims of a circulation change during the LGM or H1 are inaccurate, but that these claims cannot be given a rigorous basis given the current uncertainties involved in the analysis of the 231Pa/230Th data.
GRACE captures basin mass dynamic changes in China based on a multi-basin inversion method
NASA Astrophysics Data System (ADS)
Yi, Shuang; Wang, Qiuyu; Sun, Wenke
2016-04-01
Complex landforms, a varied climate, and an enormous population have enriched China with geophysical phenomena ranging from underground water depletion to glacier retreat on high mountains, and these have aroused broad scientific interest. This paper, utilizing gravity observations from 2003-2014 from the Gravity Recovery and Climate Experiment (GRACE), intends to make a comprehensive estimate of the mass status of 16 drainage basins across the whole region. We propose a multi-basin inversion method, which features resistance to stripe noise and the ability to alleviate the signal attenuation caused by truncation and smoothing of GRACE data. The results show both positive and negative trends: a tremendous mass accumulation spreads from the Tibetan Plateau (12.2 ± 0.6 Gt/yr) to the Yangtze River (7.6 ± 1.3 Gt/yr), and further to the southeastern coastal areas, which is suggested to involve an increase in groundwater storage, lake and reservoir water volume, and likely an inflow of material from tectonic processes; a mass loss is occurring in the Huang-Huai-Hai-Liao River Basin (-10.5 ± 0.8 Gt/yr), as well as the Brahmaputra-Nujiang-Lancang River Basin (-15.0 ± 0.9 Gt/yr) and Tienshan Mountain (-4.1 ± 0.3 Gt/yr), a result of groundwater pumping and glacier melting. The area of groundwater depletion is consistent with the distribution of land subsidence in North China. Finally, we find that intensified precipitation can alter the local water supply and that GRACE can capture these dynamics, which could be instructive for the South-to-North Water Diversion, one of China's giant hydrologic projects.
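The core idea of such a multi-basin fit — recovering unattenuated basin amplitudes by forward-smoothing the basin patterns themselves rather than averaging the smoothed field — can be sketched in a deliberately simplified 1-D setting. The grid, Gaussian filter, and basin masses below are invented for illustration, not values from the paper:

```python
import numpy as np

# Synthetic 1-D illustration of the multi-basin idea: fit known basin
# patterns *after* passing them through the same smoothing operator that
# attenuates the observed field, so the recovered amplitudes are unbiased.
n = 200
x = np.arange(n)

# Two hypothetical basin masks (indicator functions).
basins = np.zeros((2, n))
basins[0, 40:80] = 1.0
basins[1, 110:170] = 1.0
true_mass = np.array([12.0, -10.0])          # "Gt/yr" per basin (made up)

# Gaussian smoothing matrix standing in for GRACE truncation + filtering;
# rows are normalized so the operator is a weighted average.
sigma = 8.0
G = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
G /= G.sum(axis=1, keepdims=True)

obs = G @ (basins.T @ true_mass)             # attenuated observation

# Naive averaging of the smoothed field over each basin underestimates
# the signal (leakage across basin edges)...
naive = np.array([obs[40:80].mean(), obs[110:170].mean()])

# ...whereas least-squares fitting of the *smoothed* basin patterns
# recovers the original amplitudes.
A = G @ basins.T
recovered, *_ = np.linalg.lstsq(A, obs, rcond=None)
print(naive.round(2), recovered.round(2))
```

The design choice is that attenuation is handled inside the forward model, so no empirical rescaling factor is needed afterwards.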
Inverse Methods for Organic Matter Decay: From Multiple Pools to a Lognormal Continuum
NASA Astrophysics Data System (ADS)
Forney, D. C.; Rothman, D.
2011-12-01
The decomposition of plant matter is difficult to model because we lack fundamental constitutive relations between decay rates, litter composition and ecosystems. Because decay slows down with time, and organic matter is compositionally heterogeneous, models of decomposition typically consist of multiple components (pools) which decay exponentially at different rates. Yet, it is unclear how the rates, sizes, and number of these pools vary with organic matter type and ecosystem. Here, we assume that degradation is described by a continuous superposition of first order decay rates. We use an inversion technique to identify the best fitting distributions of rates associated with the LIDET litter decay study [1]. This approach directly identifies the best multi-pool solution for each dataset. However, we find that the multi-pool solution is over-parameterized and not robust to the levels of noise in the decay datasets. We therefore implement a method of regularization to identify the distribution of first order decay rates which best fits the data but not the noise. This approach reveals that the distribution is characteristically lognormal on average across all data sets. To evaluate the validity of this result, we compare decays from a lognormal rate distribution to standard multi-pool models via the Akaike Information Criterion (AICc). The AICc indicates the lognormal distribution of rates contains significantly more information about litter decomposition than multi-pool models. These results suggest that the lognormal framework for analyzing and visualizing decay rates is a valuable tool to better understand the constitutive relations between decay dynamics, composition, ecosystems, and climate. [1] Harmon, M. 2007. LTER Intersite Fine Litter Decomposition Experiment (LIDET). Forest Science Data Bank, Corvallis, OR. [Database]. Available: http://andrewsforest.oregonstate.edu/data/abstract.cfm?dbcode=TD023
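A minimal sketch of the regularized rate-distribution inversion described above, assuming a forward model that superposes first-order decays, a nonnegativity constraint, and a second-difference (smoothness) penalty. The rate grid, noise level, and regularization weight are illustrative choices, not the study's actual settings:

```python
import numpy as np
from scipy.optimize import nnls

# Recover a distribution of first-order decay rates g(k) from noisy
# mass-loss data m(t) = sum_j g_j * exp(-k_j * t).
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 40)                     # observation times (years)
k = np.logspace(-2, 1, 60)                     # candidate rates (1/yr)

# Synthetic "truth": a lognormal-shaped distribution of rates.
g_true = np.exp(-0.5 * ((np.log(k) - np.log(0.3)) / 0.8) ** 2)
g_true /= g_true.sum()

A = np.exp(-np.outer(t, k))                    # forward model matrix
m_obs = A @ g_true + 0.005 * rng.standard_normal(t.size)

# Second-difference operator penalizes rough, over-parameterized solutions;
# nonnegative least squares on the augmented system does the rest.
L = np.diff(np.eye(k.size), n=2, axis=0)
lam = 0.05
A_aug = np.vstack([A, lam * L])
b_aug = np.concatenate([m_obs, np.zeros(L.shape[0])])

g_hat, _ = nnls(A_aug, b_aug)                  # nonnegative, smoothed fit
resid = np.linalg.norm(A @ g_hat - m_obs)
print(round(resid, 4))
```

Without the penalty term the fit collapses onto a few isolated "pools" that chase the noise, which is exactly the over-parameterization problem the abstract describes.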
Menke, W.
1991-03-01
The author applies the method of Projection Onto Convex Sets (POCS) to the problem of solving geophysical inverse problems. The advantage of this iterative method is its flexibility in handling non-linear equality and inequality constraints, including constraints on the spectrum of unknown functions. He gives examples of using POCS to interpolate topographic profiles, topographic maps, and the physical properties of the earth between well logs.
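The POCS iteration can be illustrated on the profile-interpolation problem mentioned above: alternate projections onto the (convex) set of signals agreeing with the observed samples and the set of band-limited signals. This is a Papoulis-Gerchberg-style sketch; the grid size, bandwidth, and gap location are invented:

```python
import numpy as np

# POCS sketch: fill a gap in a profile by alternating projections onto
# (1) the set of signals matching the observed samples and
# (2) the set of band-limited signals.
n = 128
x = np.arange(n)
truth = np.cos(2 * np.pi * 3 * x / n) + 0.5 * np.sin(2 * np.pi * 5 * x / n)

known = np.ones(n, dtype=bool)
known[50:60] = False                      # a gap to interpolate

bw = 8                                    # keep only |freq| < bw harmonics
def project_bandlimit(s):
    # orthogonal projection onto the band-limited subspace
    S = np.fft.fft(s)
    S[bw:n - bw + 1] = 0.0
    return np.fft.ifft(S).real

s = np.where(known, truth, 0.0)           # initial guess: zeros in the gap
for _ in range(300):
    s = project_bandlimit(s)              # set 2: spectral constraint
    s[known] = truth[known]               # set 1: data consistency

err = np.max(np.abs(s - truth))
print(round(err, 6))
```

Because both sets are convex and the true profile lies in their intersection, the iterates converge to a consistent interpolant; inequality constraints (e.g., bounds on physical properties) slot in as additional projections.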
A wide-field spectroscopic survey of Abell 1689 and Abell 1835 with VIMOS
NASA Astrophysics Data System (ADS)
Czoske, Oliver
2004-12-01
Spectroscopic surveys can add a third dimension, velocity, to the galaxy distribution in and around clusters. The largest wide-field spectroscopic samples at present exist for nearby clusters. Czoske et al. (2001: A&A 372, 391; 2002: A&A 386, 31) present a catalogue of redshifts for 300 cluster members with V ≲ 22 in Cl0024+1654 at z = 0.395, the largest cluster redshift catalogue currently available at such a high redshift. In that case, it was only the redshift information extending to large cluster-centric distances which revealed the complex structure of what appeared in other observations to be a relaxed rich cluster. The recent advent of high-multiplex spectrographs on 8-10 m class telescopes has made it possible to obtain large numbers of high-quality spectra of galaxies in and around clusters of galaxies in a short amount of time. The data described by Czoske et al. (2001) were obtained over the course of four years. Samples larger by a factor of 2-3 can now be obtained in ∼10 hours of observation time. Here I present the first results from a spectroscopic survey of the two X-ray luminous clusters Abell 1689 (z = 0.185) and Abell 1835 (z = 0.25). We use the VIsible imaging Multi-Object Spectrograph (VIMOS) on VLT UT3/Melipal. The field of view of VIMOS available for spectroscopy consists of four quadrants of ∼7' × 7'; the separation between the quadrants is ∼2'. Using the LR-Blue grism, one can place ∼100-150 slits per quadrant. The resulting spectra cover the wavelength range 3700-6700 Å with a resolution R ∼ 200. We use as the basis for object selection panoramic multi-colour images obtained with the CFH12k camera on CFHT (Czoske, 2002, PhD thesis), covering 40' × 30' in BRI for A1689 and VRI for A1835. The input catalogue has been cleaned of stars. We attempted to cover the entire CFH12k field of view by using 10 VIMOS pointings for each cluster. Due to technical problems with VIMOS only 8 and 9 masks
RADIO AND DEEP CHANDRA OBSERVATIONS OF THE DISTURBED COOL CORE CLUSTER ABELL 133
Randall, S. W.; Nulsen, P. E. J.; Forman, W. R.; Murray, S. S.; Clarke, T. E.; Owers, M. S.; Sarazin, C. L.
2010-10-10
We present results based on new Chandra and multi-frequency radio observations of the disturbed cool core cluster Abell 133. The diffuse gas has a complex bird-like morphology, with a plume of emission extending from two symmetric wing-like features. The plume is capped with a filamentary radio structure that has been previously classified as a radio relic. X-ray spectral fits in the region of the relic indicate the presence of either high-temperature gas or non-thermal emission, although the measured photon index is flatter than would be expected if the non-thermal emission is from inverse Compton scattering of the cosmic microwave background by the radio-emitting particles. We find evidence for a weak elliptical X-ray surface brightness edge surrounding the core, which we show is consistent with a sloshing cold front. The plume is consistent with having formed due to uplift by a buoyantly rising radio bubble, now seen as the radio relic, and has properties consistent with buoyantly lifted plumes seen in other systems (e.g., M87). Alternatively, the plume may be a gas sloshing spiral viewed edge-on. Results from spectral analysis of the wing-like features are inconsistent with the previous suggestion that the wings formed due to the passage of a weak shock through the cool core. We instead conclude that the wings are due to X-ray cavities formed by displacement of X-ray gas by the radio relic. The central cD galaxy contains two small-scale cold gas clumps that are slightly offset from their optical and UV counterparts, suggestive of a galaxy-galaxy merger event. On larger scales, there is evidence for cluster substructure in both optical observations and the X-ray temperature map. We suggest that the Abell 133 cluster has recently undergone a merger event with an interloping subgroup, initiating gas sloshing in the core. The torus of sloshed gas is seen close to edge-on, leading to the somewhat ragged appearance of the elliptical surface brightness edge. We show
NASA Astrophysics Data System (ADS)
Edwards, L. O. V.; Alpert, H. S.; Trierweiler, I. L.; Abraham, T.; Beizer, V. G.
2016-09-01
We present the first results from an integral field unit (IFU) spectroscopic survey of a ˜75 kpc region around three brightest cluster galaxies (BCGs), combining over 100 IFU fibres to study the intracluster light (ICL). We fit population synthesis models to estimate age and metallicity. For Abell 85 and Abell 2457, the ICL is best-fit with a fraction of old, metal-rich stars like in the BCG, but requires 30-50 per cent young and metal-poor stars, a component not found in the BCGs. This is consistent with the ICL having been formed by a combination of interactions with less massive, younger, more metal-poor cluster members in addition to stars that form the BCG. We find that the three galaxies are in different stages of evolution and may be the result of different formation mechanisms. The BCG in Abell 85 is near a relatively young, metal-poor galaxy, but the dynamical friction time-scale is long and the two are unlikely to be undergoing a merger. The outer regions of Abell 2457 show a higher relative fraction of metal-poor stars, and we find one companion, with a higher fraction of young, metal-poor stars than the BCG, which is likely to merge within a gigayear. Several luminous red galaxies are found at the centre of the cluster IIZw108, with short merger time-scales, suggesting that the system is about to embark on a series of major mergers to build up a dominant BCG. The young, metal-poor component found in the ICL is not found in the merging galaxies.
Solving the inverse Ising problem by mean-field methods in a clustered phase space with many states.
Decelle, Aurélien; Ricci-Tersenghi, Federico
2016-07-01
In this work we explain how to properly use mean-field methods to solve the inverse Ising problem when the phase space is clustered, that is, many states are present. The clustering of the phase space can occur for many reasons, e.g., when a system undergoes a phase transition, but also when data are collected in different regimes (e.g., quiescent and spiking regimes in neural networks). Mean-field methods for the inverse Ising problem are typically used without taking into account the eventual clustered structure of the input configurations and may lead to very poor inference (e.g., in the low-temperature phase of the Curie-Weiss model). In this work we explain how to modify mean-field approaches when the phase space is clustered and we illustrate the effectiveness of our method on different clustered structures (low-temperature phases of Curie-Weiss and Hopfield models). PMID:27575082
Solving the inverse Ising problem by mean-field methods in a clustered phase space with many states
NASA Astrophysics Data System (ADS)
Decelle, Aurélien; Ricci-Tersenghi, Federico
2016-07-01
In this work we explain how to properly use mean-field methods to solve the inverse Ising problem when the phase space is clustered, that is, many states are present. The clustering of the phase space can occur for many reasons, e.g., when a system undergoes a phase transition, but also when data are collected in different regimes (e.g., quiescent and spiking regimes in neural networks). Mean-field methods for the inverse Ising problem are typically used without taking into account the eventual clustered structure of the input configurations and may lead to very poor inference (e.g., in the low-temperature phase of the Curie-Weiss model). In this work we explain how to modify mean-field approaches when the phase space is clustered and we illustrate the effectiveness of our method on different clustered structures (low-temperature phases of Curie-Weiss and Hopfield models).
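A toy illustration of why the clustered structure matters for mean-field inversion: with the naive mean-field formula, couplings are read off the inverse connected-correlation matrix, J ≈ -(C⁻¹) off-diagonally. Pooling samples from two states injects spurious couplings through the differing cluster means, while within-cluster correlations do not. The spin count, sample sizes, and bias m below are invented, and the true couplings here are zero:

```python
import numpy as np

# Naive mean-field (nMF) inverse Ising on data drawn from two "states"
# (clusters): independent +/-1 spins with magnetization +m in one cluster
# and -m in the other, so any inferred coupling is an artifact.
rng = np.random.default_rng(1)
n_spins, n_samp, m = 10, 50000, 0.8

def sample(bias):
    # independent +/-1 spins with mean `bias`
    return np.where(rng.random((n_samp, n_spins)) < (1 + bias) / 2, 1, -1)

A, B = sample(+m), sample(-m)                 # the two clusters

def nmf_couplings(C):
    # nMF estimate: J = -(C^{-1}) with the diagonal discarded
    J = -np.linalg.inv(C)
    np.fill_diagonal(J, 0.0)
    return J

C_pool = np.cov(np.vstack([A, B]).T)          # ignores cluster structure
C_clust = 0.5 * (np.cov(A.T) + np.cov(B.T))   # within-cluster correlations

J_pool = nmf_couplings(C_pool)                # large spurious couplings
J_clust = nmf_couplings(C_clust)              # correctly near zero
print(np.abs(J_pool).max().round(3), np.abs(J_clust).max().round(3))
```

The pooled estimate mistakes the bimodal magnetization for strong ferromagnetic couplings, which is the failure mode the abstract attributes to, e.g., the low-temperature Curie-Weiss phase.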
Jiang, Hai-ling; Yang, Hang; Chen, Xiao-ping; Wang, Shu-dong; Li, Xue-ke; Liu, Kai; Cen, Yi
2015-04-01
Spectral index methods are widely applied to the inversion of crop chlorophyll content. In the present study, a PSR3500 spectrometer and a SPAD-502 chlorophyll fluorometer were used to acquire the spectra and relative chlorophyll content (SPAD value) of winter wheat leaves on May 2nd, 2013, at the jointing stage of winter wheat. The measured spectra were then resampled to simulate TM multispectral data and Hyperion hyperspectral data, respectively, using the Gaussian spectral response function. We chose four typical spectral indices, namely the normalized difference vegetation index (NDVI), the triangle vegetation index (TVI), the ratio of the modified transformed chlorophyll absorption ratio index (MCARI) to the optimized soil adjusted vegetation index (OSAVI) (MCARI/OSAVI), and the vegetation index based on universal pattern decomposition (VIUPD), which were constructed with the feature bands sensitive to vegetation chlorophyll. After calculating these spectral indices based on the resampled TM and Hyperion data, the regression equation between spectral indices and chlorophyll content was established. For TM, the results indicate that VIUPD has the best correlation with chlorophyll (R2 = 0.8197) followed by NDVI (R2 = 0.7918), while MCARI/OSAVI and TVI also show a good correlation with R2 higher than 0.5. For the simulated Hyperion data, VIUPD again ranks first with R2 = 0.8171, followed by MCARI/OSAVI (R2 = 0.6586), while NDVI and TVI show very low values with R2 less than 0.2. It was demonstrated that VIUPD has the best accuracy and stability for estimating the chlorophyll of winter wheat, whether using simulated TM data or Hyperion data, which reaffirms that VIUPD is comparatively sensor independent. The chlorophyll estimation accuracy and stability of MCARI/OSAVI are also good, partly because OSAVI can reduce the influence of backgrounds. The two broadband spectral indices NDVI and TVI are weak for the chlorophyll estimation of simulated Hyperion data mainly because of
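The index-construction and regression step can be sketched as follows, using the standard NDVI and MCARI/OSAVI band formulas. The leaf reflectances below are a synthetic toy model (red absorption deepening with chlorophyll), not the measured PSR3500 spectra:

```python
import numpy as np

# Compute spectral indices from band reflectances and regress them
# against a SPAD-like chlorophyll value.
rng = np.random.default_rng(2)
chl = np.linspace(20, 60, 50)                 # SPAD-like chlorophyll values

# Toy reflectance model: absorption features deepen with chlorophyll.
r670 = 0.25 * np.exp(-chl / 40) + 0.005 * rng.random(50)   # red
r550 = 0.12 * np.exp(-chl / 80) + 0.005 * rng.random(50)   # green
r700 = 0.20 * np.exp(-chl / 60) + 0.005 * rng.random(50)   # red edge
r800 = 0.45 + 0.005 * rng.random(50)                       # NIR plateau

ndvi = (r800 - r670) / (r800 + r670)
mcari = ((r700 - r670) - 0.2 * (r700 - r550)) * (r700 / r670)
osavi = 1.16 * (r800 - r670) / (r800 + r670 + 0.16)

def r_squared(index, y):
    # R^2 of a simple linear regression of y on the index
    fit = np.polyval(np.polyfit(index, y, 1), index)
    return 1 - np.sum((y - fit) ** 2) / np.sum((y - y.mean()) ** 2)

r2_ndvi = r_squared(ndvi, chl)
r2_mo = r_squared(mcari / osavi, chl)
print(round(r2_ndvi, 3), round(r2_mo, 3))
```

The same R² comparison, run per sensor after band resampling, is what ranks the indices in the study; VIUPD is omitted here because its pattern-decomposition basis is not defined by a simple band formula.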
NASA Astrophysics Data System (ADS)
Karl, S.; Neuberg, J.
2011-12-01
Volcanoes exhibit a variety of seismic signals. One specific type, the so-called long-period (LP) or low-frequency event, has proven to be crucial for understanding the internal dynamics of the volcanic system. These events have been observed at many volcanoes around the world, and are thought to be associated with resonating fluid-filled conduits or fluid movements (Chouet, 1996; Neuberg et al., 2006). While the seismic wavefield is well established, the actual trigger mechanism of these events is still poorly understood. Neuberg et al. (2006) proposed a conceptual model for the trigger of LP events at Montserrat involving the brittle failure of magma in the glass transition in response to the upwards movement of magma. In an attempt to gain a better quantitative understanding of the driving forces of LPs, inversions for the physical source mechanisms have become increasingly common. Previous studies have assumed a point source for waveform inversion. Although applying a point-source model to synthetic seismograms representing an extended source process does not recover the true source mechanism, it can still yield apparent moment tensor elements, which can then be compared to previous results in the literature. Therefore, this study follows the proposed concepts of Neuberg et al. (2006), modelling the extended LP source as an octagonal arrangement of double couples approximating a circular ringfault bounding the circumference of the volcanic conduit. Synthetic seismograms were inverted for the physical source mechanisms of LPs using the moment tensor inversion code TDMTISO_INVC by Dreger (2003). Here, we will present the effects of changing the source parameters on the apparent moment tensor elements. First results show that, due to negative interference, the amplitude of the seismic signals of a ringfault structure is greatly reduced when compared to a single double couple source. Furthermore, best inversion results yield a
MUSE observations of the lensing cluster Abell 1689
NASA Astrophysics Data System (ADS)
Bina, D.; Pelló, R.; Richard, J.; Lewis, J.; Patrício, V.; Cantalupo, S.; Herenz, E. C.; Soto, K.; Weilbacher, P. M.; Bacon, R.; Vernet, J. D. R.; Wisotzki, L.; Clément, B.; Cuby, J. G.; Lagattuta, D. J.; Soucail, G.; Verhamme, A.
2016-05-01
Context. This paper presents the results obtained with the Multi Unit Spectroscopic Explorer (MUSE) for the core of the lensing cluster Abell 1689, as part of MUSE's commissioning at the ESO Very Large Telescope. Aims: Integral-field observations with MUSE provide a unique view of the central 1 × 1 arcmin2 region at intermediate spectral resolution in the visible domain, allowing us to conduct a complete census of both cluster galaxies and lensed background sources. Methods: We performed a spectroscopic analysis of all sources found in the MUSE data cube. Two hundred and eighty-two objects were systematically extracted from the cube based on a guided-and-manual approach. We also tested three different tools for the automated detection and extraction of line emitters. Cluster galaxies and lensed sources were identified based on their spectral features. We investigated the multiple-image configuration for all known sources in the field. Results: Previous to our survey, 28 different lensed galaxies displaying 46 multiple images were known in the MUSE field of view, most of them were detected through photometric redshifts and lensing considerations. Of these, we spectroscopically confirm 12 images based on their emission lines, corresponding to 7 different lensed galaxies between z = 0.95 and 5.0. In addition, 14 new galaxies have been spectroscopically identified in this area thanks to MUSE data, with redshifts ranging between 0.8 and 6.2. All background sources detected within the MUSE field of view correspond to multiple-imaged systems lensed by A1689. Seventeen sources in total are found at z ≥ 3 based on their Lyman-α emission, with Lyman-α luminosities ranging between 40.5 ≲ log (Lyα) ≲ 42.5 after correction for magnification. This sample is particularly sensitive to the slope of the luminosity function toward the faintest end. The density of sources obtained in this survey is consistent with a steep value of α ≤ -1.5, although this result still
NASA Astrophysics Data System (ADS)
Kirby, Jon F.
2014-09-01
The effective elastic thickness (Te) is a geometric measure of the flexural rigidity of the lithosphere, which describes the resistance to bending under applied vertical loads. As such, it is likely that its magnitude has a major role in governing the tectonic evolution of both continental and oceanic plates. Of the several ways to estimate Te, one has gained popularity in the 40 years since its development because it only requires gravity and topography data, both of which are now readily available and provide excellent coverage over the Earth and even the rocky planets and moons of the solar system. This method, the ‘inverse spectral method’, develops measures of the relationship between observed gravity and topography data in the spatial frequency (wavenumber) domain, namely the admittance and coherence. The observed measures are subsequently inverted against the predictions of thin, elastic plate models, giving estimates of Te and other lithospheric parameters. This article provides a review of inverse spectral methodology and the studies that have used it. It is not, however, concerned with the geological or geodynamic significance or interpretation of Te, nor does it discuss and compare Te results from different methods in different provinces. Since the three main aspects of the subject are thin elastic plate flexure, spectral analysis, and inversion methods, the article broadly follows developments in these. The review also covers synthetic plate modelling, and concludes with a summary of the controversy currently surrounding inverse spectral methods: whether or not the large Te values returned in cratonic regions are artefacts of the method, or genuine observations.
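The admittance estimate at the heart of the inverse spectral method can be sketched on a synthetic 1-D profile, where the "true" gravity/topography transfer is taken to be a simple upward-continuation factor rather than a full plate-flexure model; the profile length, depth, and noise level are invented:

```python
import numpy as np

# Band-averaged admittance: the observed transfer between topography H(k)
# and gravity G(k) is estimated from cross-spectra, <G H*> / <H H*>, and
# would then be inverted against plate-model predictions.
rng = np.random.default_rng(3)
n, dx, d = 1024, 1.0e3, 30.0e3          # samples, spacing (m), depth (m)
k = np.abs(np.fft.fftfreq(n, dx)) * 2 * np.pi   # wavenumber (rad/m)

H = np.fft.fft(rng.standard_normal(n))  # synthetic topography spectrum
Z_true = np.exp(-k * d)                 # assumed gravity/topography transfer
G = Z_true * H + 0.01 * np.fft.fft(rng.standard_normal(n))  # + noise

band = (k > 1e-5) & (k < 5e-5)          # wavelengths roughly 125-600 km
Z_est = np.real(np.sum(G[band] * np.conj(H[band]))) / np.sum(np.abs(H[band]) ** 2)

# Power-weighted mean of the true transfer over the band, for comparison.
Z_ref = np.sum(Z_true[band] * np.abs(H[band]) ** 2) / np.sum(np.abs(H[band]) ** 2)
print(round(Z_est, 3), round(Z_ref, 3))
```

In practice the averaging is done in wavenumber annuli (or with wavelets) on 2-D grids, and the coherence is estimated from the same cross-spectra.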
Leyre, Sven; Meuret, Youri; Durinck, Guy; Hofkens, Johan; Deconinck, Geert; Hanselaer, Peter
2014-04-01
The accuracy of optical simulations including bulk diffusors is heavily dependent on the accuracy of the bulk scattering properties. If no knowledge of the physical scattering effects is available, an iterative procedure is usually used to obtain the scattering properties, such as the inverse Monte Carlo method or the inverse adding-doubling (AD) method. In these methods, a predefined phase function with one free parameter is usually used to limit the number of free parameters. In this work, three predefined phase functions (Henyey-Greenstein, two-term Henyey-Greenstein, and Gegenbauer kernel (GK) phase function) are implemented in the inverse AD method to determine the optical properties of two strongly diffusing materials: low-density polyethylene (LDPE) and TiO₂ particles. Using the presented approach, an estimation of the effective phase function was made. It was found that the use of the GK phase function resulted in the best agreement between calculated and experimental transmittance, reflectance, and scattered radiant intensity distribution for the LDPE sample. For the TiO₂ sample, a good agreement was obtained with both the two-term Henyey-Greenstein and the GK phase function. PMID:24787170
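The single-parameter phase functions used to constrain the inverse AD search can be written down directly. The sketch below implements the Henyey-Greenstein (HG) and two-term HG forms and checks their normalization and mean cosine by quadrature; the asymmetry parameters g1, g2 and weight alpha are free fitting parameters chosen here for illustration, not values from the paper, and the GK form is omitted:

```python
import numpy as np

def hg(cos_t, g):
    # Henyey-Greenstein phase function, normalized over 4*pi steradians
    return (1 - g**2) / (4 * np.pi * (1 + g**2 - 2 * g * cos_t) ** 1.5)

def tthg(cos_t, alpha, g1, g2):
    # two-term HG: forward lobe (g1 > 0) plus a backscatter lobe (g2 < 0)
    return alpha * hg(cos_t, g1) + (1 - alpha) * hg(cos_t, g2)

def integrate(y, x):
    # simple trapezoidal rule on a nonuniform grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Check normalization and mean cosine by quadrature over the sphere.
theta = np.linspace(0.0, np.pi, 20001)
mu = np.cos(theta)
p = tthg(mu, alpha=0.9, g1=0.8, g2=-0.3)
norm = integrate(2 * np.pi * p * np.sin(theta), theta)
mean_cos = integrate(2 * np.pi * p * mu * np.sin(theta), theta)
print(round(norm, 4), round(mean_cos, 4))
```

The mean cosine of the two-term form is the weighted sum alpha*g1 + (1 - alpha)*g2, which is what an inverse AD fit effectively trades off against the backscatter shape.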
Hassaballah, Abdallah I.; Hassan, Mohsen A.; Mardi, Azizi N.; Hamdi, Mohd
2013-01-01
The determination of the myocardium’s tissue properties is important in constructing functional finite element (FE) models of the human heart. To obtain accurate properties, especially for functional modeling of a heart, tissue properties have to be determined in vivo. At present, there are only a few in vivo methods that can be applied to characterize the internal myocardium tissue mechanics. This work introduced and evaluated an FE inverse method to determine the myocardial tissue compressibility. Specifically, it combined an inverse FE method with the experimentally-measured left ventricular (LV) internal cavity pressure and volume versus time curves. Results indicated that the FE inverse method showed good correlation between LV repolarization and the variations in the myocardium tissue bulk modulus K (K = 1/compressibility), as well as provided an ability to describe in vivo human myocardium material behavior. The myocardium bulk modulus can be effectively used as a diagnostic tool of the heart ejection fraction. The model developed proves to be robust and efficient. It offers a new perspective and means for the study of living-myocardium tissue properties, as it shows the variation of the bulk modulus throughout the cardiac cycle. PMID:24367544
An inversion method for retrieving soil moisture information from satellite altimetry observations
NASA Astrophysics Data System (ADS)
Uebbing, Bernd; Forootan, Ehsan; Kusche, Jürgen; Braakmann-Folgmann, Anne
2016-04-01
Soil moisture represents an important component of the terrestrial water cycle that controls evapotranspiration and vegetation growth. Consequently, knowledge of soil moisture variability is essential to understand the interactions between land and atmosphere. Yet, terrestrial measurements are sparse and their information content is limited due to the large spatial variability of soil moisture. Therefore, over the last two decades, several active and passive radar and satellite missions such as ERS/SCAT, AMSR, SMOS or SMAP have been providing backscatter information that can be used to estimate surface conditions, including soil moisture, which is proportional to the dielectric constant of the upper (few cm) soil layers. Another source of soil moisture information are satellite radar altimeters, originally designed to measure sea surface height over the oceans. Measurements of Jason-1/2 (Ku- and C-Band) or Envisat (Ku- and S-Band) nadir radar backscatter provide high-resolution along-track information (~ 300 m along-track resolution) on backscatter every ~10 days (Jason-1/2) or ~35 days (Envisat). Recent studies found good correlation between backscatter and soil moisture in upper layers, especially in arid and semi-arid regions, indicating the potential of satellite altimetry both to reconstruct and to monitor soil moisture variability. However, measuring soil moisture using altimetry has some drawbacks that include: (1) the noisy behavior of the altimetry-derived backscatter (due to, e.g., the existence of surface water in the radar footprint), (2) the strong assumptions for converting altimetry backscatter to soil moisture storage changes, and (3) the need for interpolating between the tracks. In this study, we suggest a new inversion framework that allows us to retrieve soil moisture information from along-track Jason-2 and Envisat satellite altimetry data, and we test this scheme over the Australian arid and semi-arid regions. Our method consists of: (i
NASA Astrophysics Data System (ADS)
Olsen, Scott Charles
In this dissertation, new inverse scattering algorithms are derived for the Helmholtz equation using the Extended Born field model (eikonal rescattered field), and the angular spectrum (parabolic) layered field model. These two field models performed the 'best' of all the field models evaluated. The resulting inverse problems are solved with conjugate gradient methods. An advanced ultrasonic data acquisition system is also designed. Many different field models for use in a reconstruction algorithm are investigated. 'Layered' field models that mathematically partition the field calculation in layers in space possess the advantage that the field in layer n is calculated from the field in layer n - 1. Several of the 'layered' field models are investigated in terms of accuracy and computational complexity. Field model accuracy using field rescattering is also tested. The models investigated are the eikonal field model, the angular spectrum (AS) field model, and the parabolic field models known as the Split-Step Fast-Fourier Transform and the Crank-Nicolson algorithms. All of the 'layered' field models can be referred to as Extended Born field models since the 'layered' field models are more accurate than the Born approximated total field. The Rescattered Extended Born (eikonal rescattered field) Transmission Mode (REBTM) algorithm with the AS field model and the Nonrescattered AS Reconstruction (NASR) algorithm are tested with several types of objects: a single-layer cylinder, double-layer cylinders, two double-layer cylinders and the breast model. Both algorithms, REBTM and NASR, work well; however, the NASR algorithm is faster and more accurate than the REBTM algorithm. The NASR algorithm is matched well with the requirements of breast model reconstructions. A major purpose of new scanner development is to collect both transmission and reflection data from multiple ultrasonic transducer arrays to test the next generation of reconstruction algorithms. The data acquisition system advanced
Basis set expansion for inverse problems in plasma diagnostic analysis.
Jones, B; Ruiz, C L
2013-07-01
A basis set expansion method [V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002)] is applied to recover physical information about plasma radiation sources from instrument data, which has been forward transformed due to the nature of the measurement technique. This method provides a general approach for inverse problems, and we discuss two specific examples relevant to diagnosing fast z pinches on the 20-25 MA Z machine [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, J. R. Woodworth, C. W. Mendel, K. R. Prestwich, R. W. Shoup, D. L. Johnson, J. P. Corley, K. C. Hodge, T. C. Wagoner, and P. E. Wakeland, in Proceedings of the Pulsed Power Plasma Sciences Conference (IEEE, 2007), p. 979]. First, Abel inversion of time-gated, self-emission x-ray images from a wire array implosion is studied. Second, we present an approach for unfolding neutron time-of-flight measurements from a deuterium gas puff z pinch to recover information about emission time history and energy distribution. Through these examples, we discuss how noise in the measured data limits the practical resolution of the inversion, and how the method handles discontinuities in the source function and artifacts in the projected image. We add to the method a propagation of errors calculation for estimating uncertainties in the inverted solution.
Basis set expansion for inverse problems in plasma diagnostic analysis
Jones, B.; Ruiz, C. L.
2013-07-15
A basis set expansion method [V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002)] is applied to recover physical information about plasma radiation sources from instrument data, which has been forward transformed due to the nature of the measurement technique. This method provides a general approach for inverse problems, and we discuss two specific examples relevant to diagnosing fast z pinches on the 20–25 MA Z machine [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, J. R. Woodworth, C. W. Mendel, K. R. Prestwich, R. W. Shoup, D. L. Johnson, J. P. Corley, K. C. Hodge, T. C. Wagoner, and P. E. Wakeland, in Proceedings of the Pulsed Power Plasma Sciences Conference (IEEE, 2007), p. 979]. First, Abel inversion of time-gated, self-emission x-ray images from a wire array implosion is studied. Second, we present an approach for unfolding neutron time-of-flight measurements from a deuterium gas puff z pinch to recover information about emission time history and energy distribution. Through these examples, we discuss how noise in the measured data limits the practical resolution of the inversion, and how the method handles discontinuities in the source function and artifacts in the projected image. We add to the method a propagation of errors calculation for estimating uncertainties in the inverted solution.
Basis set expansion for inverse problems in plasma diagnostic analysis
NASA Astrophysics Data System (ADS)
Jones, B.; Ruiz, C. L.
2013-07-01
A basis set expansion method [V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002)], 10.1063/1.1482156 is applied to recover physical information about plasma radiation sources from instrument data, which has been forward transformed due to the nature of the measurement technique. This method provides a general approach for inverse problems, and we discuss two specific examples relevant to diagnosing fast z pinches on the 20-25 MA Z machine [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, J. R. Woodworth, C. W. Mendel, K. R. Prestwich, R. W. Shoup, D. L. Johnson, J. P. Corley, K. C. Hodge, T. C. Wagoner, and P. E. Wakeland, in Proceedings of the Pulsed Power Plasma Sciences Conference (IEEE, 2007), p. 979]. First, Abel inversion of time-gated, self-emission x-ray images from a wire array implosion is studied. Second, we present an approach for unfolding neutron time-of-flight measurements from a deuterium gas puff z pinch to recover information about emission time history and energy distribution. Through these examples, we discuss how noise in the measured data limits the practical resolution of the inversion, and how the method handles discontinuities in the source function and artifacts in the projected image. We add to the method a propagation of errors calculation for estimating uncertainties in the inverted solution.
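A small sketch of the basis-set-expansion approach as applied to Abel inversion: forward-project a set of Gaussian basis functions, fit their coefficients to the measured projection by least squares, and reconstruct the radial source. This is a generic illustration, not the Dribinski et al. implementation; the grids, basis centers, and test profile are invented:

```python
import numpy as np

def abel_forward(fn, y, s_max=2.0, ns=2000):
    # Forward Abel transform F(y) = 2*int_y^inf f(r) r / sqrt(r^2-y^2) dr.
    # The substitution r = sqrt(y^2 + s^2) removes the integrable
    # singularity at r = y, leaving F(y) = 2*int_0^smax f(sqrt(y^2+s^2)) ds.
    s = np.linspace(0.0, s_max, ns)
    vals = fn(np.sqrt(y[:, None] ** 2 + s[None, :] ** 2))
    return 2.0 * np.sum(0.5 * (vals[:, 1:] + vals[:, :-1]) * np.diff(s), axis=1)

# Gaussian basis functions on the radial axis.
centers = np.linspace(0.0, 1.0, 12)
width = 0.12
basis = [lambda r, c=c: np.exp(-((r - c) / width) ** 2) for c in centers]

y = np.linspace(0.0, 1.2, 80)                        # projection coordinates
A = np.column_stack([abel_forward(b, y) for b in basis])

f_true = lambda r: np.exp(-((r - 0.5) / 0.15) ** 2)  # radial "source" ring
F_obs = abel_forward(f_true, y)                      # noiseless projection

coef, *_ = np.linalg.lstsq(A, F_obs, rcond=None)     # fit basis coefficients
r = np.linspace(0.0, 1.0, 101)
f_hat = sum(c * b(r) for c, b in zip(coef, basis))
err = np.max(np.abs(f_hat - f_true(r)))
print(round(err, 4))
```

Because the inversion acts only on smooth basis functions, noise in F_obs is not differentiated directly, which is the stability property the papers above exploit; with noisy data the least-squares step would typically be regularized.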
NASA Astrophysics Data System (ADS)
Sergienko, Olga
2013-04-01
Since Doug MacAyeal's pioneering studies of ice-stream basal traction optimization by control methods, inversions for unknown parameters (e.g., basal traction, accumulation patterns) have become a hallmark of present-day ice-sheet modeling. The common feature of such inversion exercises is a direct relationship between the optimized parameters and the observations used in the optimization procedure. For instance, in the standard optimization for basal traction by the control method, ice-stream surface velocities constitute the control data. The optimized basal traction parameters explicitly appear in the momentum equations for the ice-stream velocities (compared to the control data). The inversion for basal traction is carried out by minimizing a cost (or objective, or misfit) function that includes the momentum equations via Lagrange multipliers. Here, we build upon this idea and demonstrate how to optimize for parameters indirectly related to observed data, using a suite of nested constraints (like Russian dolls) with additional sets of Lagrange multipliers in the cost function. This method opens the opportunity to use data from a variety of sources and types (e.g., velocities, radar layers, surface elevation changes) in the same optimization process.
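The base ingredient of the control method described above (a misfit minimized subject to the momentum equations through Lagrange multipliers, i.e., an adjoint solve) can be sketched on a toy linear "momentum" operator; the nested-constraint extension adds further multiplier sets in the same pattern. All operators and values below are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

n = 5
rng = np.random.default_rng(2)
force = rng.standard_normal(n)

def momentum_op(beta):
    """Toy 'momentum' operator: a 1-D diffusion stencil plus a basal-traction
    diagonal, standing in for the ice-stream stress balance."""
    L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return L + np.diag(beta)

beta_true = np.array([0.5, 1.0, 1.5, 1.0, 0.5])
u_obs = np.linalg.solve(momentum_op(beta_true), force)  # "surface velocities"

def cost_and_grad(beta):
    A = momentum_op(beta)
    u = np.linalg.solve(A, force)            # forward (momentum) solve
    res = u - u_obs
    lam = np.linalg.solve(A.T, -res)         # adjoint (Lagrange-multiplier) solve
    # dA/dbeta_k = e_k e_k^T, so the reduced gradient is lam_k * u_k
    return 0.5 * res @ res, lam * u

opt = minimize(cost_and_grad, np.ones(n), jac=True, method="L-BFGS-B",
               bounds=[(0.01, 10.0)] * n)
```

The adjoint solve makes the gradient cost independent of the number of traction parameters, which is what makes control-method inversions tractable.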
NASA Astrophysics Data System (ADS)
Yang, T. T.; Ntone, F.
1981-05-01
Curved wall diffusers designed by using an inverse method of solution of potential flow theory have been shown to be both short and highly efficient. These features make this type of diffuser attractive in thrust ejector applications. In ejectors, however, the flow at the diffuser inlet is nearly a uniform shear flow. This paper presents a method used in examining the flow velocity along the diffuser wall and some of the analytical results for diffusers designed with potential flow theory and receiving a rotational flow. The inlet flow vorticity and the diffuser area ratios prescribed in the inverse solution of the irrotational flow are the parameters of the study. The geometry of a sample ejector using such a diffuser and its estimated thrust augmentation ratio are also presented.
NASA Astrophysics Data System (ADS)
Pagnacco, E.; de Cursi, E. Souza; Sampaio, R.
2016-07-01
This study concerns the computation of frequency responses of linear stochastic mechanical systems through modal analysis. A new strategy, based on transposing the standard deterministic deflated and subspace inverse power methods into a stochastic framework, is introduced via polynomial chaos representation. The applicability and effectiveness of the proposed schemes are demonstrated through three simple application examples and one realistic application example. It is shown that null and repeated-eigenvalue situations are addressed successfully.
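The deterministic building block being transposed, inverse power iteration, converges to the eigenpair whose eigenvalue is closest to a chosen shift; the polynomial chaos version replaces these scalar quantities by chaos expansions. A plain deterministic sketch, not the authors' stochastic scheme:

```python
import numpy as np

def inverse_power(A, shift=0.0, tol=1e-10, maxit=500):
    """Inverse power iteration: converges to the eigenpair of A whose
    eigenvalue lies closest to `shift` (here 0, i.e. the smallest one)."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    M = A - shift * np.eye(n)
    lam = shift
    for _ in range(maxit):
        y = np.linalg.solve(M, x)        # one "inverse" application
        x = y / np.linalg.norm(y)
        lam_new = x @ A @ x              # Rayleigh quotient
        if abs(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    return lam, x

# invented symmetric "stiffness" matrix with well-separated eigenvalues
K = np.array([[2.0, 0.3, 0.0],
              [0.3, 5.0, 0.2],
              [0.0, 0.2, 9.0]])
lam, v = inverse_power(K)
```

Deflation (projecting out converged eigenvectors) then gives access to the next-lowest modes, which is the deflated variant the abstract refers to.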
NASA Astrophysics Data System (ADS)
Agata, R.; Ichimura, T.; Hori, T.; Hirahara, K.; Hashimoto, C.; Hori, M.
2015-12-01
Inverse analysis of coseismic/postseismic slip using postseismic deformation observation data is an important topic in geodetic inversion. The inverse analysis may be improved by using numerical simulation (e.g., the finite element (FE) method) of viscoelastic deformation, with a model of high fidelity to the available high-resolution crustal data. The authors have been developing a large-scale simulation method using such FE high-fidelity models (HFM), assuming use of the K computer, currently the fastest supercomputer in Japan. In this study, we developed an inverse analysis method incorporating HFM, in which the asthenosphere viscosity and fault slip are estimated simultaneously, since the value of viscosity used in the simulation is not trivially known. We carried out numerical experiments using synthetic crustal deformation data. Based on Ichimura et al. (2013), we constructed an HFM in a domain of 2048 x 1536 x 850 km, which includes the Tohoku region in northeast Japan. We used the data sets of JTOPO30 (2003), Koketsu et al. (2008), and the CAMP standard model (Hashimoto et al. 2004) for the model geometry. The HFM is currently at 2 km resolution, resulting in 0.5 billion degrees of freedom. The figure shows an overview of the HFM. Synthetic crustal deformation data for three years after an earthquake, at the locations of GEONET, GPS/A observation points, and S-net, were used. The inverse analysis was formulated as minimization of the L2 norm of the difference between the FE simulation results and the observation data with respect to viscosity and fault slip, combining a quasi-Newton algorithm with the adjoint method. Coseismic slip was expressed as a superposition of 53 subfaults, with four viscoelastic layers. We carried out 90 forward simulations, and the 57 parameters converged to the true values. Due to the fast computation method, it took only five hours using 2048 nodes (1/40 of the entire resource) of the K computer. In the future, we would like to also consider estimation of afterslip and apply
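The inversion step described here, minimizing the L2 misfit over slip and viscosity with a quasi-Newton method, can be mimicked on a toy forward model (an invented response matrix and relaxation factor at three postseismic epochs, no FE solver):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
G = rng.standard_normal((40, 5))        # invented unit-slip response functions
slip_true = rng.uniform(0.5, 1.5, 5)
eta_true = 2.0                          # stand-in asthenosphere viscosity

def forward(params):
    """Toy viscoelastic forward model: elastic response scaled by a
    relaxation factor exp(-t/eta) at three postseismic epochs."""
    slip, eta = params[:5], params[5]
    u = G @ slip
    return np.concatenate([u * np.exp(-t / eta) for t in (0.5, 1.0, 2.0)])

d_obs = forward(np.append(slip_true, eta_true))

def misfit(params):
    r = forward(params) - d_obs
    return 0.5 * r @ r

# quasi-Newton (L-BFGS-B) minimization over slip and viscosity jointly
opt = minimize(misfit, np.append(np.ones(5), 1.0), method="L-BFGS-B",
               bounds=[(None, None)] * 5 + [(0.1, 10.0)])
```

Observing several epochs is what makes slip and viscosity separately identifiable; a single snapshot would leave a scale ambiguity between them.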
Chang, M.E.; Hartley, D.; Cardelino, C.; Chang, W.L.
1996-12-31
The Urban Airshed Model (UAM) has been used by the State of Georgia in an attempt to demonstrate attainment of the ozone standard for Atlanta. A recent comparison of UAM data to ambient data collected during the 1992 Southern Oxidants Study Atlanta intensive revealed that the model accurately predicts ozone concentrations but poorly simulates the concurrent ozone precursors. There were discrepancies in both the anthropogenic and biogenic precursors. For anthropogenic emissions, the ambient ratios of anthropogenic hydrocarbons (AHC) to oxides of nitrogen (NO{sub x}), AHC/NO{sub x}, and of carbon monoxide (CO) to NO{sub x}, CO/NO{sub x}, were higher than the emission ratios by 20% and 73%, respectively. In this study, the authors use an inverse method to reconcile the differences between the observed and predicted concentrations of ozone precursors. Inverse methods have been successfully applied in many fields, including medical imaging, missile guidance, and underground tomography. This study is the first to apply the inverse method to estimate the emissions of the relatively short-lived species important to urban oxidant formation.
NASA Astrophysics Data System (ADS)
Nezhad, Mohsen Motahari; Shojaeefard, Mohammad Hassan; Shahraki, Saeid
2016-02-01
In this study, experiments were aimed at thermally analyzing the exhaust valve in an air-cooled internal combustion engine and estimating the thermal contact conductance in fixed and periodic contacts. Due to the nature of internal combustion engines, the duration of contact between the valve and its seat is very short, and much time is needed to reach the quasi-steady state in the periodic contact between the exhaust valve and its seat. Using linear extrapolation and the inverse solution, the surface contact temperatures and the fixed and periodic thermal contact conductance were calculated. The results of the linear extrapolation and inverse methods have similar trends, and based on the error analysis, they are accurate enough to estimate the thermal contact conductance. Moreover, based on the error analysis, the linear extrapolation method using the inverse ratio is preferred. The effects of pressure, contact frequency, heat flux, and cooling air speed on the thermal contact conductance have been investigated. The results show that increasing the contact pressure increases the thermal contact conductance substantially, while increasing the engine speed decreases it. On the other hand, raising the air speed increases the thermal contact conductance, and raising the heat flux reduces it. The average calculated error is 12.9%.
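The linear-extrapolation step can be illustrated directly: fit a line to the subsurface thermocouple readings on each side of the contact, extrapolate both lines to the contact plane, and divide the imposed heat flux by the extrapolated temperature jump. All numbers below are invented, not the paper's measurements:

```python
import numpy as np

# thermocouple positions (m, measured from the contact plane) and readings (C);
# in the hot body temperature falls toward the contact, in the cold body away
x_hot,  T_hot  = np.array([0.004, 0.008, 0.012]), np.array([380.0, 396.0, 412.0])
x_cold, T_cold = np.array([0.004, 0.008, 0.012]), np.array([298.0, 290.0, 282.0])
q = 8.0e4  # assumed imposed heat flux, W/m^2

# linear fit T(x) = a*x + b on each side, extrapolated to x = 0 (the interface)
a1, b1 = np.polyfit(x_hot, T_hot, 1)
a2, b2 = np.polyfit(x_cold, T_cold, 1)
dT = b1 - b2                 # temperature drop across the contact, K
h = q / dT                   # thermal contact conductance, W/(m^2 K)
```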
A method to constrain the configuration of the subsurface structure in 3-D gravity inversion
Hu, Y.; Rabinowitz, P.D.
1996-12-31
A three-dimensional inversion technique is developed to investigate the structure of the oceanic crust, using high-quality offshore bathymetry, gravity, and seismic data. The gravity signatures associated with variations in the thickness of the oceanic crust are isolated from the observed free-air anomaly by subtracting the gravitational effects of seafloor topography and the upper-mantle thermal structure, downward continued to the mean depth of the crust/mantle interface, and converted into relief on that surface. The thickness of the oceanic crust is then calculated by subtracting the sea water depth from the depth of the gravity-inferred crust/mantle interface. Seismic refraction data were introduced directly as a constraint in the construction of the initial model for the configuration of the crust/mantle interface and in the iterative process of the 3-D joint inversion, to reduce the ambiguity in gravity interpretation. This technique can easily be applied to offshore areas to interpret bathymetry, gravity, and seismic data that have been routinely collected for geophysical exploration. Compared to unconstrained gravity inversion, this technique can predict a 3-D crustal model that better fits both the gravity and seismic observations of the study area.
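The downward-continuation step is, in the wavenumber domain, a multiplication by exp(+|k| Delta z), which amplifies short-wavelength noise and therefore needs stabilization. A minimal FFT sketch, with a crude high-cut filter standing in for a proper regularization:

```python
import numpy as np

def downward_continue(g, dx, depth):
    """Continue a 2-D gravity grid g (observed at z = 0) downward by `depth`
    via the wavenumber-domain factor exp(+|k| depth); a negative `depth`
    performs upward continuation.  The hard high-cut is a crude stand-in for
    the stabilization a real code would need."""
    ny, nx = g.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dx)
    k = np.hypot(*np.meshgrid(kx, ky))
    filt = np.exp(k * depth)
    filt[k > 0.5 * k.max()] = 0.0        # tame the noise blow-up
    return np.fft.ifft2(np.fft.fft2(g) * filt).real

# round trip on a smooth field: upward then downward continuation recovers it
n = 64
g0 = np.cos(2 * np.pi * np.arange(n) / 32.0)[None, :].repeat(n, axis=0)
up = downward_continue(g0, dx=1.0, depth=-2.0)   # attenuated at altitude
back = downward_continue(up, dx=1.0, depth=2.0)
```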
The optimized gradient method for full waveform inversion and its spectral implementation
NASA Astrophysics Data System (ADS)
Wu, Zedong; Alkhalifah, Tariq
2016-06-01
At the heart of the full waveform inversion (FWI) implementation is wavefield extrapolation, and specifically its accuracy and cost. To obtain accurate, dispersion-free wavefields, the extrapolation for modelling is often expensive. Combining an efficient extrapolation with a novel gradient preconditioning can render an FWI implementation that efficiently converges to an accurate model. We specifically recast the extrapolation part of the inversion in terms of its spectral components, for both data and gradient calculation. This admits dispersion-free wavefields even at large extrapolation time steps, which improves the efficiency of the inversion. An alternative spectral representation of the depth axis in terms of sine functions allows us to impose a free-surface boundary condition, which reflects our medium boundaries more accurately. Using a newly derived perfectly matched layer formulation for this spectral implementation, we can define a finite model with absorbing boundaries. In order to reduce the nonlinearity in FWI, we propose a multiscale conditioning of the objective function that combines the different directional components of the gradient to optimally update the velocity. By solving a simple optimization problem, this conditioning admits the smoothest approximate update while guaranteeing an ascent direction. An application to the Marmousi model demonstrates the capability of the proposed approach and justifies our assertions with respect to cost and convergence.
NASA Astrophysics Data System (ADS)
Iriana, Windy; Tonokura, Kenichi; Kawasaki, Masahiro; Inoue, Gen; Kusin, Kitso; Limin, Suwido H.
2016-09-01
Evaluation of CO2 flux from peatland soil respiration is important for understanding the effect of land-use change on the global carbon cycle and climate change, and particularly for supporting carbon emission reduction policies. However, quantitative estimation of emitted CO2 fluxes in Indonesia is constrained by the limited field data available. Current methods for CO2 measurement are limited by high initial cost, manpower requirements, and construction difficulties. Measurement campaigns were performed using a newly developed nocturnal temperature-inversion trap method, which measures the amount of CO2 trapped beneath the nocturnal inversion layer, in the dry season of 2013 at a drained tropical peatland near Palangkaraya, Central Kalimantan, Indonesia. This method is cost-effective, and data processing is easier than with other flux estimation methods. We compared CO2 fluxes measured using this method with published data from the existing eddy covariance and closed chamber methods. The maximum value of our measurements was 10% lower than the maximum value from the eddy covariance method, and the average value was 6% higher than the average from the chamber method in drained tropical peatlands. In addition, the measurement results show good correlation with the groundwater table. The results of this comparison suggest that this CO2 flux measurement methodology is useful for field research in tropical peatlands.
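The trap principle reduces to simple bookkeeping: CO2 accumulating beneath a stable nocturnal layer implies a flux equal to the layer height times the rate of increase of CO2 density. The numbers below are invented placeholders, not the campaign's values:

```python
# flux from nocturnal buildup: F = h * d[CO2]/dt beneath the inversion layer
h = 50.0               # assumed inversion-layer height, m
c0, c1 = 750.0, 757.2  # assumed CO2 density at dusk and dawn, mg m^-3
hours = 10.0           # assumed duration of the stable layer

flux = h * (c1 - c0) / (hours * 3600.0)   # mg CO2 m^-2 s^-1
```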
NASA Astrophysics Data System (ADS)
Ren, Zhiming; Liu, Yang; Zhang, Qunshan
2014-05-01
Full waveform inversion (FWI) has the potential to provide preferable subsurface model parameters. The main barrier to its application to real seismic data is its heavy computational cost. Numerical modelling methods are involved in both forward modelling and backpropagation of wavefield residuals, which account for most of the computational time in FWI. We develop a time-space domain finite-difference (FD) method with an adaptive variable-length spatial operator scheme for numerical simulation of the viscoacoustic equation and extend it to viscoacoustic FWI. Compared with conventional FD methods, different operator lengths are adopted for different velocities and quality factors, which reduces the amount of computation without reducing accuracy. Inversion algorithms also play a significant role in FWI. Conventional single-scale methods are likely to converge to local minima, especially when the initial model is far from the true model. To tackle this problem, we introduce the second-generation wavelet transform to implement multiscale FWI. Compared to other multiscale methods, ours has the advantages of ease of implementation and better time-frequency local analysis ability. The L2 norm is widely used in FWI but gives invalid model estimates when the data are contaminated with strong non-uniform noise. We apply the L1-norm and Huber-norm criteria in time-domain FWI to improve its antinoise ability. Our strategies have been successfully applied in synthetic experiments to both onshore and offshore reflection seismic data. The results of the viscoacoustic Marmousi example indicate that our new FWI scheme consumes fewer computational resources. In addition, the viscoacoustic Overthrust example shows better convergence and more reasonable velocity and quality factor structures. All these results demonstrate that our method can improve the inversion accuracy and computational efficiency of FWI.
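The antinoise argument for the Huber criterion is easy to see in isolation: it is quadratic for small residuals and linear beyond a threshold, so a single noise burst cannot dominate the objective the way it dominates the L2 norm. A self-contained sketch (threshold value invented):

```python
import numpy as np

def huber_misfit(residual, delta):
    """Huber criterion: quadratic for |r| <= delta (like L2), linear beyond
    (like L1), which damps the influence of outliers on the misfit."""
    r = np.abs(residual)
    quad = 0.5 * r**2
    lin = delta * (r - 0.5 * delta)
    return np.where(r <= delta, quad, lin).sum()

clean = np.array([0.1, -0.2, 0.05])
spiky = np.array([0.1, -0.2, 50.0])   # one strong non-uniform noise burst
# L2 grows quadratically with the burst; Huber only linearly
```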
A Newton-CG method for large-scale three-dimensional elastic full-waveform seismic inversion
NASA Astrophysics Data System (ADS)
Epanomeritakis, I.; Akçelik, V.; Ghattas, O.; Bielak, J.
2008-06-01
We present a nonlinear optimization method for large-scale 3D elastic full-waveform seismic inversion. The method combines outer Gauss-Newton nonlinear iterations with inner conjugate gradient linear iterations, globalized by an Armijo backtracking line search, solved on a sequence of finer grids and higher frequencies to remain in the vicinity of the global optimum, inexactly terminated to prevent oversolving, preconditioned by L-BFGS/Frankel, regularized by a total variation operator to capture sharp interfaces, finely discretized by finite elements in the Lamé parameter space to provide flexibility and avoid bias, implemented in matrix-free fashion with adjoint-based computation of reduced gradient and reduced Hessian-vector products, checkpointed to avoid full spacetime waveform storage, and partitioned spatially across processors to parallelize the solutions of the forward and adjoint wave equations and the evaluation of gradient-like information. Several numerical examples demonstrate the grid independence of linear and nonlinear iterations, the effectiveness of the preconditioner, the ability to solve inverse problems with up to 17 million inversion parameters on up to 2048 processors, the effectiveness of multiscale continuation in keeping iterates in the basin of attraction of the global minimum, and the ability to fit the observational data while reconstructing the model with reasonable resolution and capturing sharp interfaces.
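Of the many ingredients in that pipeline, the globalization step is the simplest to isolate: an Armijo backtracking line search shrinks the trial step until a sufficient-decrease condition holds. A generic sketch, not the authors' implementation:

```python
import numpy as np

def armijo(f, grad, x, p, alpha0=1.0, c=1e-4, rho=0.5, maxit=50):
    """Backtracking line search: shrink the step length until the Armijo
    sufficient-decrease condition f(x + a p) <= f(x) + c a grad.p holds."""
    fx, slope = f(x), grad(x) @ p
    a = alpha0
    for _ in range(maxit):
        if f(x + a * p) <= fx + c * a * slope:
            return a
        a *= rho
    return a

# quadratic test objective with a steepest-descent direction
f = lambda x: 0.5 * x @ x
grad = lambda x: x
x0 = np.array([4.0, -2.0])
p = -grad(x0)
a = armijo(f, grad, x0, p)
```

In a Newton-CG setting, p would be the inexact Newton direction and the unit step is usually accepted near the solution, preserving fast local convergence.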
Doughty, C.A.
1996-05-01
The hydrologic properties of heterogeneous geologic media are estimated by simultaneously inverting multiple observations from well-test data. A set of pressure transients observed during one or more interference tests is compared to the corresponding values obtained by numerically simulating the tests using a mathematical model. The parameters of the mathematical model are varied and the simulation repeated until a satisfactory match to the observed pressure transients is obtained, at which point the model parameters are accepted as providing a possible representation of the hydrologic property distribution. Restricting the search to parameters that represent fractal hydrologic property distributions can improve the inversion process. Far fewer parameters are needed to describe heterogeneity with a fractal geometry, improving the efficiency and robustness of the inversion. Additionally, each parameter set produces a hydrologic property distribution with a hierarchical structure, which mimics the multiple scales of heterogeneity often seen in natural geological media. Application of the iterated function system (IFS) inverse method to synthetic interference-test data shows that the method reproduces the synthetic heterogeneity successfully for idealized heterogeneities, for geologically realistic heterogeneities, and when the pressure data include noise.
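The fractal parametrization can be illustrated with the chaos game: a handful of affine maps defines an attractor whose point density yields a hierarchical field, so only the map parameters need to be inverted for. The maps below are an invented Sierpinski-like example, not the report's hydrologic model:

```python
import numpy as np

def ifs_field(maps, probs, n_pts=20000, grid=64, seed=0):
    """Chaos-game rendering of an iterated function system: each affine map
    is a pair (A, b) with x -> A x + b; the point density of the attractor on
    a grid serves as a hierarchical 'property' distribution."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    field = np.zeros((grid, grid))
    for k in rng.choice(len(maps), size=n_pts, p=probs):
        A, b = maps[k]
        x = A @ x + b
        i, j = (x * (grid - 1)).astype(int)
        field[i, j] += 1
    return field / n_pts

# Sierpinski-like IFS on the unit square: three contractions by 1/2
half = 0.5 * np.eye(2)
maps = [(half, np.array([0.0, 0.0])),
        (half, np.array([0.5, 0.0])),
        (half, np.array([0.25, 0.5]))]
field = ifs_field(maps, [1 / 3, 1 / 3, 1 / 3])
```

The whole field is controlled by a few map coefficients, which is exactly the parameter economy the inverse method exploits.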
Kimura, Wayne D.; Romea, Richard D.; Steinhauer, Loren C.
1998-01-01
A method and apparatus for exchanging energy between relativistic charged particles and laser radiation using inverse diffraction radiation or inverse transition radiation. The beam of laser light is directed onto a particle beam by means of two optical elements which have apertures or foils through which the particle beam passes. The two apertures or foils are spaced by a predetermined distance of separation and the angle of interaction between the laser beam and the particle beam is set at a specific angle. The separation and angle are a function of the wavelength of the laser light and the relativistic energy of the particle beam. In a diffraction embodiment, the interaction between the laser and particle beams is determined by the diffraction effect due to the apertures in the optical elements. In a transition embodiment, the interaction between the laser and particle beams is determined by the transition effect due to pieces of foil placed in the particle beam path.
NASA Astrophysics Data System (ADS)
Dorn, O.; Lesselier, D.
2010-07-01
Inverse problems in electromagnetics have a long history and have stimulated exciting research over many decades. New applications and solution methods are still emerging, providing a rich source of challenging topics for further investigation. The purpose of this special issue is to combine descriptions of several such developments that are expected to have the potential to fundamentally fuel new research, and to provide an overview of novel methods and applications for electromagnetic inverse problems. There have been several special sections published in Inverse Problems over the last decade addressing fully, or partly, electromagnetic inverse problems. Examples are:
- Electromagnetic imaging and inversion of the Earth's subsurface (Guest Editors: D Lesselier and T Habashy), October 2000
- Testing inversion algorithms against experimental data (Guest Editors: K Belkebir and M Saillard), December 2001
- Electromagnetic and ultrasonic nondestructive evaluation (Guest Editors: D Lesselier and J Bowler), December 2002
- Electromagnetic characterization of buried obstacles (Guest Editors: D Lesselier and W C Chew), December 2004
- Testing inversion algorithms against experimental data: inhomogeneous targets (Guest Editors: K Belkebir and M Saillard), December 2005
- Testing inversion algorithms against experimental data: 3D targets (Guest Editors: A Litman and L Crocco), February 2009
In a certain sense, the current issue can be understood as a continuation of this series of special sections on electromagnetic inverse problems. On the other hand, its focus is intended to be more general than previous ones. Instead of trying to cover a well-defined, somewhat specialized research topic as completely as possible, this issue aims to show the broad range of techniques and applications that are relevant to electromagnetic imaging nowadays, which may serve as a source of inspiration and encouragement for all those entering this active and rapidly developing research area. Also, the
Disentangling Structures in the Cluster of Galaxies Abell 133
NASA Technical Reports Server (NTRS)
Way, Michael J.; DeVincenzi, Donald (Technical Monitor)
2002-01-01
A dynamical analysis of the structure of the cluster of galaxies Abell 133 will be presented, using multi-wavelength data combined from multiple space- and Earth-based observations. New and familiar statistical clustering techniques are used in combination in an attempt to gain a fully consistent picture of this interesting nearby cluster of galaxies. The type of analysis presented should be typical of cluster studies in the future, especially those to come from surveys like the Sloan Digital Sky Survey and 2dF.
On the feasibility of inversion methods based on models of urban sky glow
NASA Astrophysics Data System (ADS)
Kolláth, Z.; Kránicz, B.
2014-05-01
Multi-wavelength imaging luminance photometry of sky glow provides a huge amount of information on light pollution. However, the understanding of the measured data involves the combination of different processes and data of radiation transfer, atmospheric physics and atmospheric constitution. State-of-the-art numerical radiation transfer models provide the possibility to define an inverse problem to obtain information on the emission intensity distribution of a city and perhaps the physical properties of the atmosphere. We provide numerical tests on the solvability and feasibility of such procedures.
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.
2015-07-01
This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, the meshes for the forward and inverse problems were decoupled. For the calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gains, the EM fields for each frequency were calculated on independent meshes, in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems, based on the linearized model resolution matrix, was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to bridge the gap between the initial and true model complexities and to better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighbourhoods of the points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content, and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit a dependency on the initial model guess. Additionally, it is demonstrated
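The adaptive inverse-mesh rule, refine where the spatial variation of the imaged parameter is largest, reduces in one dimension to inserting nodes into the cells with the biggest parameter jumps. A toy sketch of that criterion (the paper works on 3-D finite-element meshes):

```python
import numpy as np

def refine(nodes, values, frac=0.25):
    """Refine a 1-D inverse mesh where the imaged parameter varies the most:
    insert a midpoint into the `frac` fraction of cells with the largest jump
    across them (a toy analogue of the 3-D refinement criterion)."""
    jumps = np.abs(np.diff(values))
    n_ref = max(1, int(frac * jumps.size))
    worst = np.argsort(jumps)[-n_ref:]
    new = 0.5 * (nodes[worst] + nodes[worst + 1])
    return np.sort(np.concatenate([nodes, new]))

nodes = np.linspace(0.0, 1.0, 11)
model = np.where(nodes < 0.35, 1.0, 100.0)   # invented conductivity contrast
refined = refine(nodes, model)               # adds a node near the contrast
```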
NASA Astrophysics Data System (ADS)
Filippi, Anthony Matthew
For complex systems, sufficient a priori knowledge is often lacking about the mathematical or empirical relationship between cause and effect, or between the inputs and outputs of a given system. Automated machine learning may offer a useful solution in such cases. Coastal marine optical environments represent such a case, as the optical remote sensing inverse problem remains largely unsolved. A self-organizing, cybernetic mathematical modeling approach known as the group method of data handling (GMDH), a type of statistical learning network (SLN), was used to generate explicit spectral inversion models for optically shallow coastal waters. Optically shallow water light fields represent a particularly difficult challenge in oceanographic remote sensing. Several algorithm/input-data-treatment combinations were utilized in multiple experiments to automatically generate inverse solutions for various inherent optical property (IOP), bottom optical property (BOP), constituent concentration, and bottom depth estimations. The objective was to identify the optimal remote-sensing reflectance Rrs(lambda) inversion algorithm. The GMDH also has the potential for inductive discovery of physical hydro-optical laws. Simulated data were used to develop generalized, quasi-universal relationships. The Hydrolight numerical forward model, based on radiative transfer theory, was used to compute simulated above-water remote-sensing reflectance Rrs(lambda) pseudodata, matching the spectral channels and resolution of the experimental Naval Research Laboratory Ocean PHILLS (Portable Hyperspectral Imager for Low-Light Spectroscopy) sensor. The input-output pairs were used for GMDH and artificial neural network (ANN) model development, the latter of which served as a baseline, or control, algorithm. Both types of models were applied to in situ and aircraft data. Also, in situ spectroradiometer-derived Rrs(lambda) were used as input to an optimization-based inversion procedure. Target variables
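A single GMDH layer can be sketched compactly: fit a quadratic "partial description" for every pair of inputs, then keep the candidates with the lowest error on a separate validation set (the external criterion). A simplified sketch with an invented target function, not the dissertation's models:

```python
import numpy as np

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=4):
    """One GMDH layer: for every input pair (xi, xj) fit the quadratic
    z = w0 + w1 xi + w2 xj + w3 xi xj + w4 xi^2 + w5 xj^2 by least squares,
    then keep the candidates with the lowest validation error."""
    n_feat = X_tr.shape[1]
    cands = []
    for i in range(n_feat):
        for j in range(i + 1, n_feat):
            def design(X):
                xi, xj = X[:, i], X[:, j]
                return np.column_stack([np.ones_like(xi), xi, xj,
                                        xi * xj, xi**2, xj**2])
            w, *_ = np.linalg.lstsq(design(X_tr), y_tr, rcond=None)
            err = np.mean((design(X_va) @ w - y_va)**2)
            cands.append((err, design(X_tr) @ w, design(X_va) @ w))
    cands.sort(key=lambda c: c[0])
    top = cands[:keep]
    return (np.column_stack([c[1] for c in top]),
            np.column_stack([c[2] for c in top]), top[0][0])

rng = np.random.default_rng(7)
X = rng.standard_normal((200, 4))
y = 1.0 + X[:, 0] * X[:, 1] + 0.5 * X[:, 2]**2   # invented pairwise target
Z_tr, Z_va, best = gmdh_layer(X[:120], y[:120], X[120:], y[120:])
```

Stacking further layers on the surviving outputs is what makes the network self-organizing: depth grows until the external criterion stops improving.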
NASA Astrophysics Data System (ADS)
Dadić, Martin
2013-06-01
The increased interest in vacuum-tube audio amplifiers has led to an increased interest in the mathematical modelling of such amplifiers. The main purpose of this paper is to develop a novel global numerical approach to calculating the harmonic distortion (HD) and intermodulation distortion (IM) of vacuum-triode audio amplifiers, suitable for applications using the brute force of modern computers. Since the 3/2-power law gives only the inverse of the transfer characteristic of a vacuum-triode amplifier, the unknown plate currents are determined in this paper iteratively using Newton's method. Using the resulting input/output pairs, harmonic distortion and intermodulation are calculated using the discrete Fourier transform and three different analytical methods.
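The iteration described, inverting the 3/2-power law through a load line with Newton's method and then reading distortion off a DFT, can be sketched with invented tube and load constants (not a model of any particular triode):

```python
import numpy as np

K, MU, VSUP, RL = 1.0e-6, 30.0, 300.0, 50e3   # invented tube/load constants

def plate_current(vg, tol=1e-12):
    """Solve i = K*(MU*vg + Vp)**1.5 with load line Vp = VSUP - i*RL by
    Newton's method; the 3/2-power law alone gives only the inverse curve."""
    vmax = MU * vg + VSUP
    if vmax <= 0.0:
        return 0.0                      # tube cut off
    i = 0.5 * K * vmax**1.5             # starting guess
    for _ in range(100):
        drive = vmax - i * RL
        g = K * drive**1.5 - i
        dg = -1.5 * K * RL * np.sqrt(drive) - 1.0
        step = g / dg
        i = min(max(i - step, 0.0), 0.999 * vmax / RL)  # keep drive positive
        if abs(step) < tol:
            break
    return i

# drive the grid with one sine period, then read distortion from the DFT
t = np.arange(1024) / 1024.0
vg = -2.0 + np.sin(2 * np.pi * t)                 # bias plus signal
ip = np.array([plate_current(v) for v in vg])
spec = np.abs(np.fft.rfft(ip - ip.mean()))
thd = np.sqrt(np.sum(spec[2:6]**2)) / spec[1]     # harmonics 2-5 vs fundamental
```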
NASA Technical Reports Server (NTRS)
Hoessel, J. G.; Gunn, J. E.; Thuan, T. X.
1980-01-01
Two-color aperture photometry of the brightest galaxies in a complete sample of nearby Abell clusters is presented. The results are used to anchor the bright end of the Hubble diagram; essentially the entire formal error for this method is then due to the sample of distant clusters used. New determinations of the systematic trend of galaxy absolute magnitude with the cluster properties of richness and Bautz-Morgan type are derived. When these new results are combined with the Gunn and Oke (1975) data on high-redshift clusters, a formal value (without accounting for any evolution) of q0 = -0.55 ± 0.45 (1 standard deviation) is found.
An Approximation to the Periodic Solution of a Differential Equation of Abel
NASA Astrophysics Data System (ADS)
Mickens, Ronald E.
2011-10-01
The Abel equation, in canonical form, is

y' = sin t - y^3,   (*)

and corresponds to the singular (ε → 0) limit of the nonlinear, forced oscillator

ε y'' + y' + y^3 = sin t,  ε → 0.   (**)

Equation (*) has the property that it has a unique periodic solution defined on (-∞, ∞). Further, as t increases, all solutions are attracted into the strip |y| < 1, and any two solutions y1(t) and y2(t) satisfy

lim_{t → ∞} [y1(t) - y2(t)] = 0,   (***)

while for t negatively decreasing, each solution except the periodic one becomes unbounded [U. Elias, American Mathematical Monthly, vol. 115 (Feb. 2008), pp. 147-149]. Our purpose is to calculate an approximation to the unique periodic solution of Eq. (*) using the method of harmonic balance. We also determine an estimate of the blow-up time of the non-periodic solutions.
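Property (***) is easy to check numerically: integrating y' = sin t - y^3 from two different initial values shows the trajectories collapsing onto the single attracting periodic solution. A simple fixed-step sketch (step size and horizon chosen arbitrarily):

```python
import numpy as np

def integrate(y0, t_end=40.0, dt=1e-3):
    """Fixed-step (forward Euler) integration of y' = sin(t) - y**3."""
    t, y = 0.0, y0
    for _ in range(int(t_end / dt)):
        y += dt * (np.sin(t) - y**3)
        t += dt
    return y

# two different initial values collapse onto the same attracting orbit
ya, yb = integrate(0.9), integrate(-0.9)
```

The contraction rate is governed by the factor exp(-3 ∫ y^2 dt), so the two trajectories become numerically indistinguishable within a few forcing periods.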
NASA Astrophysics Data System (ADS)
Agata, R.; Ichimura, T.; Hirahara, K.; Hori, T.; Hyodo, M.; Hori, M.
2013-12-01
Many studies have focused on geodetic inversion analysis methods for coseismic slip distribution, combining observation data of coseismic crustal deformation on the ground with simplified crustal models such as the analytical solution in an elastic half-space (Okada, 1985). On the other hand, displacements on the seafloor or near trench axes due to actual earthquakes have been observed by seafloor observatories (e.g., for the 2011 Tohoku-oki Earthquake (Tohoku Earthquake); Sato et al. 2011; Kido et al. 2011). Also, some studies on tsunamis due to the Tohoku Earthquake indicate that large fault slips near the trench axis may have occurred. These facts suggest that crustal models considering the complex geometry and heterogeneity of the material properties near the trench axis should be used for geodetic inversion analysis. Therefore, our group has developed a mesh generation method for higher-fidelity finite element models of the Japanese Islands and a fast crustal deformation analysis method for those models. The models generated by this method have about 150 million degrees of freedom. In this research, the method is extended to inversion analyses of coseismic slip distribution. Since inversion analyses need the computation of hundreds of slip response functions, due to a unit fault slip assigned to each of the divided cells on the fault, a parallel computing environment is used. Multiple crustal deformation analyses are run simultaneously in a Message Passing Interface (MPI) job. In the job, dynamic load balancing is implemented so that better parallel efficiency is obtained. Submitting the necessary number of serial jobs with our previous method is also possible, but the proposed method needs less computation time, places less stress on file systems, and allows simpler job management. A method for considering fault slip right near the trench axis is also developed. As the displacement distribution of unit fault slip for computing the response function, 3rd order B
NASA Astrophysics Data System (ADS)
Dolman, A. J.; Shvidenko, A.; Schepaschenko, D.; Ciais, P.; Tchebakova, N.; Chen, T.; van der Molen, M. K.; Belelli Marchesini, L.; Maximov, T. C.; Maksyutov, S.; Schulze, E.-D.
2012-06-01
We determine the carbon balance of Russia, including Ukraine, Belarus and Kazakhstan, using inventory-based, eddy covariance, Dynamic Global Vegetation Model (DGVM), and inversion methods. Our current best estimate of the net biosphere-to-atmosphere flux is -0.66 Pg C yr-1. This sink is primarily caused by forests, which, using two independent methods, are estimated to take up -0.69 Pg C yr-1. Using inverse models yields an average net biosphere-to-atmosphere flux of the same value, with an interannual variability of 35% (1σ). The total estimated biosphere-to-atmosphere flux from eddy covariance observations over a limited number of sites amounts to -1 Pg C yr-1. Fires emit 137 and 121 Tg C yr-1 according to two different methods. The interannual variability of fire emissions is large, up to a factor of 0.5 to 3. Smaller fluxes to the ocean and inland lakes, and from trade, are also accounted for. Our best estimate of the Russian net biosphere-to-atmosphere flux then amounts to -659 Tg C yr-1, as the average of the inverse models (-653 Tg C yr-1), the bottom-up estimate (-563 Tg C yr-1), and the independent landscape approach (-761 Tg C yr-1). These three methods agree well within their error bounds, so there is good consistency between bottom-up and top-down methods. The best estimate of the net land-to-atmosphere flux, including fossil fuel emissions, is -145 to -73 Tg C yr-1. Estimated methane emissions vary considerably, with one inventory-based estimate providing a net land-to-atmosphere flux of 12.6 Tg C-CH4 yr-1 and an independent model estimate for the boreal and Arctic zones of Eurasia of 27.6 Tg C-CH4 yr-1.
Best Basis Methods for the Modelling and Inversion of Potential Fields
NASA Astrophysics Data System (ADS)
Michel, Volker; Telschow, Roger
2016-04-01
There are many trial functions (e.g. on the sphere) available for modelling a potential field. Among them are orthogonal polynomials such as spherical harmonics and radial basis functions such as spline or wavelet basis functions. We present an algorithm, the Regularized Functional Matching Pursuit (RFMP), and an enhancement (the ROFMP), which construct a kind of best basis out of trial functions of different kinds. This basis is tailored to the particular problem and the given data set. The objective of the optimization is the minimization of the Tikhonov-regularized data misfit. One main advantage is that the constructed approximation inherits the advantages of the different basis systems. By including spherical harmonics, coarse global structures can be represented in a sparse way. Moreover, the additional use of spline basis functions allows a stable handling of scattered data grids. Furthermore, the inclusion of wavelets and scaling functions yields a multiscale analysis of the potential. In addition, ill-posed inverse problems (such as downward continuation or the inverse gravimetric problem) can be regularized with the algorithm. We show some numerical examples to demonstrate the possibilities which the algorithms provide.
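The greedy idea behind the (RO)FMP, at each iteration adding the trial function whose coefficient update most reduces the Tikhonov-regularized data misfit, can be sketched as follows; the dictionary, the stopping rule and the function name are illustrative simplifications, not the published algorithm:

```python
import numpy as np

def regularized_matching_pursuit(A, y, lam=0.1, n_iter=10):
    """Greedy sketch: at each step pick the dictionary column (trial
    function) whose coefficient update most reduces the regularized
    misfit  ||y - A x||^2 + lam ||x||^2."""
    m, n = A.shape
    x = np.zeros(n)
    r = y.copy()                      # current data residual
    for _ in range(n_iter):
        best_k, best_gain, best_c = -1, 0.0, 0.0
        for k in range(n):
            a = A[:, k]
            # optimal coefficient increment for column k (1-D minimization)
            c = (a @ r - lam * x[k]) / (a @ a + lam)
            # decrease of the regularized misfit caused by this increment
            gain = 2 * c * (a @ r) - c**2 * (a @ a) - lam * (2 * x[k] * c + c**2)
            if gain > best_gain:
                best_k, best_gain, best_c = k, gain, c
        if best_k < 0:                # no column improves the objective
            break
        x[best_k] += best_c
        r -= best_c * A[:, best_k]
    return x
```

With a mixed dictionary (spherical harmonics plus localized kernels as columns of A), the same loop automatically picks global columns for coarse structure and localized ones near scattered data.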
A combinatorial study of inverse Heusler alloys by first-principles computational methods.
Gillessen, Michael; Dronskowski, Richard
2010-02-01
In continuation of our recent combinatorial work on 810 X(2)YZ full Heusler alloys, a computational study of the same class of materials but with the inverse (XY)XZ crystal structure has been performed on the basis of first-principles (GGA) total-energy calculations using pseudopotentials and plane waves. The predicted enthalpies of formation evidence 27 phases to be thermochemically stable against the elements and the regular X(2)YZ type. A chemical-bonding study yields an inherent tendency for structural distortion in a majority of these alloys, and we predict the existence of the new tetragonal phase Fe(2)CuGa (P4(2)/ncm; a = 5.072 A, c = 7.634 A; c/a approximately 1.51) with a saturation moment of mu = 4.69 micro(B) per formula unit. Thirteen more likewise new, isotypical phases are predicted to show essentially the same behavior. Six phases turn out to be the most stable in the inverse tetragonal arrangement. The course of the magnetic properties as a function of the valence-electron concentration is analyzed using a Slater-Pauling approach.
Finnveden, Svante; Hörlin, Nils-Erik; Barbagallo, Mathias
2014-04-01
Viscoelastic properties of porous materials, typical of those used in vehicles for noise insulation and absorption, are estimated from measurements and inverse finite element procedures. The measurements are taken in a near vacuum and cover a broad frequency range: 20 Hz to 1 kHz. The almost cubic test samples were made of 25 mm foam covered by a "heavy layer" of rubber. They were mounted in a vacuum chamber on an aluminum table, which was excited in the vertical and horizontal directions with a shaker. Three kinds of response are measured, allowing complete estimates of the viscoelastic moduli for isotropic materials and also providing some information on the degree of material anisotropy. First, frequency independent properties are estimated, where dissipation is described by constant loss factors. Then, fractional derivative models that capture the variation with frequency of the stiffness and damping are adapted. The measurement setup is essentially two-dimensional and calculations are three-dimensional and for a state of plane strain. The good agreement between measured and calculated response provides some confidence in the presented procedures. If, however, the material model cannot fit the measurements well, the inverse procedure yields a certain degree of arbitrariness to the parameter estimation. PMID:25234982
Inversion Method for Early Detection of ARES-1 Case Breach Failure
NASA Technical Reports Server (NTRS)
Mackey, Ryan M.; Kulikov, Igor K.; Bajwa, Anupa; Berg, Peter; Smelyanskiy, Vadim
2010-01-01
A document describes research into the problem of detecting case breach formation at an early stage of a rocket flight. An inversion algorithm for case breach localization is proposed and analyzed. It is shown how the case breach can be localized at an early stage of its development by using the rocket sensor data and the output data from the control block of the rocket navigation system. The results are simulated with MATLAB/Simulink software. The efficiency of an inversion algorithm for case breach location is discussed. The research was devoted to the analysis of the ARES-1 flight during the first 120 seconds after launch and early prediction of case breach failure. During this time, the rocket is propelled by its first-stage Solid Rocket Booster (SRB). If a breach appears in the SRB case, the gases escaping through it will produce a (side) thrust directed perpendicular to the rocket axis. The side thrust creates a torque influencing the rocket attitude. The ARES-1 control system will compensate for the side thrust until it reaches some critical value, after which the flight will be uncontrollable. The objective of this work was to obtain the start time of case breach development and its location using the rocket inertial navigation sensors and GNC data. The algorithm was effective for the detection and location of a breach in an SRB field joint at an early stage of its development.
Inverse methods for the mechanical characterization of materials at high strain rates
NASA Astrophysics Data System (ADS)
Hernandez, C.; Maranon, A.; Ashcroft, I. A.; Casas-Rodriguez, J. P.
2012-08-01
Mechanical material characterization represents a research challenge. Special attention is directed to characterization at high strain rates, as the mechanical properties of some materials are influenced by the rate of loading. Diverse experimental techniques at high strain rates are available, such as the drop test, the Taylor impact test or the split Hopkinson pressure bar, among others. However, determining the material parameters associated with a given mathematical constitutive model from the experimental data is a complex and indirect problem. This paper presents a characterization methodology to determine the material parameters of a given constitutive model from a given high strain rate experiment. The methodology is based on an inverse technique in which an inverse problem is formulated and solved as an optimization procedure. The input of the optimization procedure is the characteristic signal from the high strain rate experiment. The output is the optimum set of material parameters, determined by fitting a numerical simulation to the experimental signal.
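When the constitutive model is linear in (combinations of) its parameters, the inverse problem collapses to a direct least-squares fit of the measured signal; a sketch with a hypothetical logarithmic rate-sensitivity law (not the paper's constitutive model, and the numbers are invented):

```python
import numpy as np

# Hypothetical rate-dependent flow-stress model (illustrative only):
#   sigma = A * (1 + C * ln(rate / rate0))
rate0 = 1.0
rates = np.array([1e2, 1e3, 1e4, 1e5])          # strain rates [1/s]
A_true, C_true = 300.0, 0.02                    # "unknown" material parameters
sigma_meas = A_true * (1 + C_true * np.log(rates / rate0))

# The model is linear in (A, A*C), so this inverse problem reduces to
# an ordinary least-squares fit of the measured signal.
G = np.column_stack([np.ones_like(rates), np.log(rates / rate0)])
b, *_ = np.linalg.lstsq(G, sigma_meas, rcond=None)
A_fit, C_fit = b[0], b[1] / b[0]
```

For genuinely nonlinear constitutive models the same misfit is minimized iteratively against repeated numerical simulations, which is the optimization procedure the paper refers to.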
NASA Astrophysics Data System (ADS)
Hara, Tatsuhiko
2004-08-01
We implement the Direct Solution Method (DSM) on a vector-parallel supercomputer and show that it is possible to significantly improve its computational efficiency through parallel computing. We apply the parallel DSM calculation to waveform inversion of long-period (250-500 s) surface wave data for three-dimensional (3-D) S-wave velocity structure in the upper and uppermost lower mantle. We use a spherical harmonic expansion to represent lateral variation, up to maximum angular degree 16. We find significant low velocities under South Pacific hot spots in the transition zone. This is consistent with other seismological studies conducted in the Superplume project, which suggests deep roots of these hot spots. We also perform simultaneous waveform inversion for 3-D S-wave velocity and Q structure. Since the resolution for Q is not good, we develop a new technique in which power spectra are used as data for the inversion. We find good correlation between long-wavelength patterns of Vs and Q in the transition zone, such as high Vs and high Q under the western Pacific.
NASA Astrophysics Data System (ADS)
Muta, Osamu; Akaiwa, Yoshihiko
In this paper, we propose a simple peak power reduction (PPR) method based on adaptive inversion of the parity-check block of a codeword in a BCH-coded OFDM system. In the proposed method, the entire parity-check block of the codeword is adaptively inverted by multiplying it by weighting factors (WFs) so as to minimize the PAPR of the OFDM signal, symbol by symbol. At the receiver, these WFs are estimated based on the properties of BCH decoding. When a primitive BCH code with single error correction, such as the (31,26) code, is used, the proposed method estimates the WFs with a significant-bit protection scheme that assigns a significant bit to the best subcarrier selected among all possible subcarriers. Computer simulations show that, when (31,26), (31,21) and (32,21) BCH codes are employed, the PAPR of the OFDM signal at a CCDF (Complementary Cumulative Distribution Function) of 10-4 is reduced by about 1.9, 2.5 and 2.5 dB by the PPR method, while achieving BER performance comparable to the case with perfect WF estimation in an exponentially decaying 12-path Rayleigh fading condition.
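The core of any PPR scheme of this kind is comparing the PAPR of candidate OFDM symbols. A minimal sketch for one BPSK-modulated (31,26)-style codeword, with the weighting factor reduced to a sign on the parity block (the mapping, code and sizes are illustrative, not the proposed system):

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
msg = rng.integers(0, 2, 26)            # 26 message bits
parity = rng.integers(0, 2, 5)          # 5 parity-check bits (illustrative)

best = None
for w in (+1, -1):                      # WF: keep or invert the parity block
    bits = np.concatenate([msg, parity ^ (w < 0)])
    sym = 1 - 2 * bits.astype(float)    # BPSK mapping 0 -> +1, 1 -> -1
    ofdm = np.fft.ifft(sym)             # one OFDM symbol
    cand = papr_db(ofdm)
    best = cand if best is None else min(best, cand)
```

The receiver-side estimation of the WFs via BCH decoding is the part the paper actually develops and is omitted here.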
Chen, X.; Ashcroft, I. A.; Wildman, R. D.; Tuck, C. J.
2015-01-01
A method using experimental nanoindentation and inverse finite-element analysis (FEA) has been developed that enables the spatial variation of material constitutive properties to be accurately determined. The method was used to measure property variation in a three-dimensional printed (3DP) polymeric material. The accuracy of the method is dependent on the applicability of the constitutive model used in the inverse FEA, hence four potential material models: viscoelastic, viscoelastic–viscoplastic, nonlinear viscoelastic and nonlinear viscoelastic–viscoplastic were evaluated, with the latter enabling the best fit to experimental data. Significant changes in material properties were seen in the depth direction of the 3DP sample, which could be linked to the degree of cross-linking within the material, a feature inherent in a UV-cured layer-by-layer construction method. It is proposed that the method is a powerful tool in the analysis of manufacturing processes with potential spatial property variation that will also enable the accurate prediction of final manufactured part performance. PMID:26730216
NASA Astrophysics Data System (ADS)
Gillet-Chaulet, F.; Gagliardini, O.; Nodet, M.; Ritz, C.; Durand, G.; Zwinger, T.; Seddik, H.; Greve, R.
2010-12-01
About a third of the current sea level rise is attributed to the release of Greenland and Antarctic ice, and their respective contributions have been increasing continuously since the acceleration of their coastal outlet glaciers was first diagnosed a decade ago. Because of the related societal implications, good scenarios of ice sheet evolution are needed to constrain sea level rise forecasts for the coming centuries. The quality of the model predictions depends primarily on a good description of the physical processes involved and on a good initial state that reproduces the main present observations (geometry, surface velocities and, ideally, the trend in elevation change). We model ice dynamics of the whole Greenland ice sheet using the full-Stokes finite element code Elmer. The finite element mesh is generated using the anisotropic mesh adaptation tool YAMS, and shows a high density around the major ice streams. For the initial state, we use an iterative procedure to compute the ice velocities, the temperature field, and the basal sliding coefficient field. The basal sliding coefficient is obtained with an inverse method by minimizing a cost function that measures the misfit between the present-day surface velocities and the modelled surface velocities. We use two inverse methods for this: an inverse Robin problem recently proposed by Arthern and Gudmundsson (J. Glaciol. 2010), and a control method taking advantage of the fact that the Stokes equations are self-adjoint in the particular case of a Newtonian rheology. From the initial states obtained by these two methods, we run transient simulations to evaluate the impact of the initial state of the Greenland ice sheet on its contribution to sea level rise over the next centuries.
Free-energy functional method for inverse problem of self assembly
NASA Astrophysics Data System (ADS)
Torikai, Masashi
2015-04-01
A new theoretical approach is described for the inverse self-assembly problem, i.e., the reconstruction of the interparticle interaction from a given structure. This theory is based on the variational principle for the functional that is constructed from a free energy functional in combination with Percus's approach [J. Percus, Phys. Rev. Lett. 8, 462 (1962)]. In this theory, the interparticle interaction potential for the given structure is obtained as the function that maximizes the functional. As test cases, the interparticle potentials for two-dimensional crystals, such as square, honeycomb, and kagome lattices, are predicted by this theory. The formation of each target lattice from an initial random particle configuration in Monte Carlo simulations with the predicted interparticle interaction indicates that the theory is successfully applied to the test cases.
NASA Astrophysics Data System (ADS)
Leparoux, D.; Bretaudeau, F.; Brossier, R.; Operto, S.; Virieux, J.
2011-12-01
Seismic imaging of the subsurface is useful for civil engineering and landscape management. The usual methods use surface wave phase velocities or first arrival times of body waves. However, for complex structures such methods can be inefficient, and Full Waveform Inversion (FWI) promises better performance because the whole signal is taken into account. FWI was originally developed for deep exploration (Pratt et al. 1999). Heterogeneities and strong attenuation in the near surface make the adaptation of FWI to shallower media difficult (Bretaudeau et al. 2009). For this reason, we have developed a physical modeling measurement bench that performs small-scale seismic recording in well-controlled contexts (Bretaudeau et al. 2011). In this paper we assess the capacity of the FWI method (Brossier 2010) for imaging a subsurface structure including a low velocity layer and a lateral variation of interfaces. The analog model is a 180 mm long and 50 mm thick layered epoxy resin block (fig. 1). Seismic data generated with a point piezoelectric source emitting a 120 kHz Ricker wavelet at the medium surface were collected by a heterodyne laser interferometer. The laser allows recording the absolute normal particle displacement without contact, avoiding disturbances caused by coupling. The laser interferometer and the piezoelectric source were attached to automated arms that could be moved over the model surface to a precision of 0.01 mm (fig. 1). The acquisition survey includes 241 receiver and 37 source positions, spaced at 1 and 5 mm respectively. Figure 2 shows 2D maps of the Vs parameter after inversion of data sequentially processed with 13 frequencies. The geometry of the sloped interface is recovered. A low velocity zone is imaged, but with a thickness thinner than expected. Moreover, artifacts appear in the near surface. Experimental modeling results showed the capacity of the FWI in this case and provided key issues for further works about inversion by
NASA Astrophysics Data System (ADS)
van Rooij, Michael P. C.
Current turbomachinery design systems increasingly rely on multistage Computational Fluid Dynamics (CFD) as a means to assess performance of designs. However, design weaknesses attributed to improper stage matching are addressed using often ineffective strategies involving a costly iterative loop between blading modification, revision of design intent, and evaluation of aerodynamic performance. A design methodology is presented which greatly improves the process of achieving design-point aerodynamic matching. It is based on a three-dimensional viscous inverse design method which generates the blade camber surface based on prescribed pressure loading, thickness distribution and stacking line. This inverse design method has been extended to allow blading analysis and design in a multi-blade row environment. Blade row coupling was achieved through a mixing plane approximation. Parallel computing capability in the form of MPI has been implemented to reduce the computational time for multistage calculations. Improvements have been made to the flow solver to reach the level of accuracy required for multistage calculations. These include inclusion of heat flux, temperature-dependent treatment of viscosity, and improved calculation of stress components and artificial dissipation near solid walls. A validation study confirmed that the obtained accuracy is satisfactory at design point conditions. Improvements have also been made to the inverse method to increase robustness and design fidelity. These include the possibility to exclude spanwise sections of the blade near the endwalls from the design process, and a scheme that adjusts the specified loading area for changes resulting from the leading and trailing edge treatment. Furthermore, a pressure loading manager has been developed. Its function is to automatically adjust the pressure loading area distribution during the design calculation in order to achieve a specified design objective. Possible objectives are overall
NASA Technical Reports Server (NTRS)
Metcalf, Thomas R.; Canfield, Richard C.; Avrett, Eugene H.; Metcalf, Frederic T.
1990-01-01
Various methods of inverting solar Mg I 4571 and 5173 spectral line observations are examined to find the best method of using these lines to calculate the vertical temperature and electron density structure around the temperature minimum region. Following a perturbation analysis by Mein (1971), a Fredholm integral equation of the first kind is obtained which can be inverted to yield these temperature and density structures as a function of time. Several inversion methods are tested and compared. The methods are applied to test data as well as to a subset of observations of these absorption lines taken on February 3, 1986, before and during a solar flare. A small but significant increase is found in the temperature, and a relatively large increase in the electron density, during this flare. The observations are inconsistent with heating and ionization by an intense beam of electrons and with ionization by UV photoionization of Si I.
Radio occultation data analysis by the radioholographic method
NASA Astrophysics Data System (ADS)
Hocke, K.; Pavelyev, A. G.; Yakovlev, O. I.; Barthes, L.; Jakowski, N.
1999-10-01
The radioholographic method is briefly described and tested using data from 4 radio occultation events observed by the GPS/MET experiment on 9 February 1997. The central point of the radioholographic method (Pavelyev, 1998) is the generation of a radiohologram along the LEO satellite trajectory, which allows the calculation of angular spectra of the received GPS radio wave field at the LEO satellite. These spectra are promising for the detection, analysis and reduction of multipath/diffraction effects, the study of atmospheric irregularities, and the estimation of bending angle error. Initial analysis of angular spectra calculated by the multiple signal classification (MUSIC) method gives evidence that considerable multibeam propagation occurs at ray perigee heights below 20 km and at heights around 80-120 km for the 4 GPS/MET occultation events. Temperature profiles obtained by our analysis (radioholographic method, Abel inversion) are compared with those of the traditional retrieval by the UCAR GPS/MET team (bending angle from slope of phase front, Abel inversion). In 3 of 4 cases we found good agreement (standard deviation σT ≈ 1.5 K between both retrievals at heights 0-30 km).
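The Abel inversion used in the final retrieval step admits a compact numerical form: substituting y = sqrt(r^2 + v^2) removes the inverse-square-root endpoint singularity of the integrand. A sketch, testable against the analytically known pair f(r) = exp(-r^2), F(y) = sqrt(pi) exp(-y^2) (grid sizes are arbitrary choices):

```python
import numpy as np

def abel_invert(Fprime, r, vmax=6.0, n=2000):
    """Inverse Abel transform
        f(r) = -(1/pi) * int_r^inf F'(y) dy / sqrt(y^2 - r^2).
    The substitution y = sqrt(r^2 + v^2) removes the endpoint singularity;
    Fprime is the derivative of the measured projection F."""
    v = np.linspace(0.0, vmax, n)
    y = np.sqrt(r**2 + v**2)
    g = Fprime(y) / y
    dv = v[1] - v[0]
    integral = dv * (0.5 * g[0] + g[1:-1].sum() + 0.5 * g[-1])  # trapezoid rule
    return -integral / np.pi
```

For F(y) = sqrt(pi) exp(-y^2) the derivative is F'(y) = -2 sqrt(pi) y exp(-y^2), and abel_invert recovers exp(-r^2) at any r > 0.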
NASA Astrophysics Data System (ADS)
Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.
2012-12-01
We develop a three-step Maximum-A-Posteriori probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian Inversion (MBI) methods, shares the same a posteriori PDF with them, and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and improving the convergence rate greatly through optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied to the slip parameters using the Monte Carlo Inversion (MCI) technique, with all parameters obtained from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion, with the fault geometry parameters fixed. We first used a designed model with a 45-degree dip angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of
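The first-step global search can be illustrated with a bare-bones simulated annealing loop on a toy one-dimensional misfit surface (the cooling schedule, proposal width and objective are illustrative, not the ASA algorithm used in the paper):

```python
import numpy as np

def neg_log_posterior(m):
    """Toy 1-D objective standing in for the misfit surface (illustrative)."""
    return (m - 2.0) ** 2

def anneal(f, m0, n_iter=5000, t0=1.0, seed=0):
    """Minimize f by simulated annealing with a linear cooling schedule."""
    rng = np.random.default_rng(seed)
    m, fm = m0, f(m0)
    best, fbest = m, fm
    for i in range(n_iter):
        t = t0 * (1.0 - i / n_iter) + 1e-3        # temperature (cooling)
        m_new = m + rng.normal(scale=t)           # local random proposal
        f_new = f(m_new)
        # Metropolis acceptance at temperature t
        if f_new < fm or rng.random() < np.exp((fm - f_new) / t):
            m, fm = m_new, f_new
            if fm < fbest:
                best, fbest = m, fm
    return best

m_map = anneal(neg_log_posterior, m0=-5.0)
```

In the paper the objective is the negative log of the a posteriori PDF over the non-slip (geometry) parameters, with the slip parameters solved linearly at each trial geometry.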
An inverse method was developed to integrate satellite observations of atmospheric pollutant column concentrations and direct sensitivities predicted by a regional air quality model in order to discern biases in the emissions of the pollutant precursors.
NASA Astrophysics Data System (ADS)
Chen, X.; Rubin, Y.; Baldocchi, D. D.
2005-12-01
Understanding the interactions between soil, plant, and the atmosphere under water-stressed conditions is important for ecosystems where water availability is limited. In such ecosystems, the amount of water transferred from the soil to the atmosphere is controlled not only by weather conditions and vegetation type but also by soil water availability. Although researchers have proposed different approaches to model the impact of soil moisture on plant activity, the parameters involved are difficult to measure. However, using measurements of observed latent heat and carbon fluxes, as well as soil moisture data, Bayesian inversion methods can be employed to estimate the various model parameters. In our study, the actual evapotranspiration (ET) of an ecosystem is approximated by the Priestley-Taylor relationship, with the Priestley-Taylor coefficient modeled as a function of soil moisture content. Soil moisture limitation on root uptake is characterized in a similar manner as in the Feddes model. The inference of the Bayesian inversion is processed within the framework of graphical theories. Due to the difficulty of obtaining exact inference, the Markov chain Monte Carlo (MCMC) method is implemented using a free software package, BUGS (Bayesian inference Using Gibbs Sampling). The proposed methodology is applied to a Mediterranean oak-savanna FLUXNET site in California, where continuous measurements of actual ET are obtained with the eddy-covariance technique and soil moisture contents are monitored by several time domain reflectometry probes located within the footprint of the flux tower. After the implementation of the Bayesian inversion, the posterior distributions of all the parameters exhibit enhanced information content compared to the prior distributions. The generated samples based on data from year 2003 are used to predict the actual ET in year 2004, and the prediction uncertainties are assessed in terms of confidence intervals. Our tests also reveal the usefulness of various
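The essence of the Bayesian step, sampling the posterior of the Priestley-Taylor coefficient given flux data, can be sketched with a plain Metropolis sampler rather than BUGS/Gibbs (the synthetic data, noise level, prior bounds, and proposal scale are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observations": ET = alpha * E_eq + noise, a stand-in for the
# Priestley-Taylor relationship; alpha_true is the parameter to infer.
alpha_true, sigma = 1.26, 0.05
E_eq = rng.uniform(2.0, 6.0, 200)                 # equilibrium evaporation [mm/day]
ET_obs = alpha_true * E_eq + rng.normal(0.0, sigma, E_eq.size)

def log_post(alpha):
    """Log-posterior: flat prior on (0, 3), Gaussian likelihood."""
    if not 0.0 < alpha < 3.0:
        return -np.inf
    r = ET_obs - alpha * E_eq
    return -0.5 * np.sum(r**2) / sigma**2

samples, alpha = [], 1.0
lp = log_post(alpha)
for _ in range(5000):
    prop = alpha + rng.normal(scale=0.01)         # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis acceptance
        alpha, lp = prop, lp_prop
    samples.append(alpha)
post_mean = np.mean(samples[1000:])               # discard burn-in
```

The spread of the retained samples is the posterior uncertainty that the study propagates into its year-2004 ET predictions.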
NASA Astrophysics Data System (ADS)
Stone, J.; Masterlark, T.; Feigl, K.
2010-12-01
Migration of magma within an active volcano produces a deformation signature at the Earth's surface. The internal structure of the volcano and the specific movements of the magma control the actual deformation that we observe. Relatively simple models that simulate magma injection as a pressurized body embedded in a half-space with uniform elastic properties (e.g., Mogi) describe the characteristic radially symmetric deformation patterns commonly observed during episodes of volcano inflation or deflation. Inverse methods based on Mogi-type models can precisely and efficiently estimate the nonlinear parameters that describe the geometry (position and shape) of the deformation source, as well as the linear parameter that describes its strength (pressure). Although such models mimic the observed deformation, they assume a rheologic structure that drastically oversimplifies the plumbing beneath a volcano. This incompatibility can bias the estimated model parameters. Alternatively, Finite Element Models (FEMs) can simulate a pressurized body embedded in a problem domain having an arbitrary distribution of material properties that better corresponds to the internal structure of an active volcano. FEMs have been used in inverse methods for estimating linear deformation source parameters, such as the source pressure. However, perturbations of the nonlinear parameters that describe the geometry of the source require automated re-meshing of the problem domain, a significant obstacle to implementing FEM-based nonlinear inverse methods in volcano deformation studies. We present a parametric executable (C++ source code) which automatically generates Abaqus FEMs that simulate a pressurized ellipsoid embedded in an axisymmetric problem domain having an a priori distribution of material properties. We demonstrate this executable by analyzing InSAR-observed deformation of the 1997 eruption of Okmok Volcano, Alaska as an example
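The split the abstract describes, linear strength versus nonlinear geometry, is easy to exploit: for each trial value of the nonlinear parameter, the best linear strength follows in closed form. A sketch with a Mogi-type uplift kernel whose physical constants are folded into a single strength parameter C (all values illustrative):

```python
import numpy as np

# Mogi-type surface uplift: u_z(r) proportional to d / (r^2 + d^2)^(3/2),
# with source strength C linear and source depth d nonlinear.
def kernel(r, d):
    return d / (r**2 + d**2) ** 1.5

r = np.linspace(0.0, 20e3, 50)               # radial distances [m]
d_true, C_true = 4.0e3, 5.0e12               # "unknown" depth and strength
uz = C_true * kernel(r, d_true)              # synthetic observations

# Grid search on the nonlinear depth; C follows by 1-D least squares.
best = None
for d in np.linspace(1e3, 10e3, 181):
    k = kernel(r, d)
    C = (k @ uz) / (k @ k)                   # optimal strength for this depth
    misfit = np.sum((uz - C * k) ** 2)
    if best is None or misfit < best[0]:
        best = (misfit, d, C)
_, d_hat, C_hat = best
```

Replacing the analytic kernel with an FEM-computed response at each trial geometry gives the FEM-based nonlinear inversion the paper targets, with the re-meshing at each trial handled by the parametric executable.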
NASA Astrophysics Data System (ADS)
Stukel, Michael R.; Landry, Michael R.; Ohman, Mark D.; Goericke, Ralf; Samo, Ty; Benitez-Nelson, Claudia R.
2012-03-01
Despite the increasing use of linear inverse modeling techniques to elucidate fluxes in undersampled marine ecosystems, the accuracy with which they estimate food web flows has not been resolved. New Markov Chain Monte Carlo (MCMC) solution methods have also called into question the biases of the commonly used L2 minimum norm (L2MN) solution technique. Here, we test the abilities of MCMC and L2MN methods to recover field-measured ecosystem rates that are sequentially excluded from the model input. For data, we use experimental measurements from process cruises of the California Current Ecosystem (CCE-LTER) Program that include rate estimates of phytoplankton and bacterial production, micro- and mesozooplankton grazing, and carbon export from eight study sites varying from rich coastal upwelling to offshore oligotrophic conditions. Both the MCMC and L2MN methods predicted well-constrained rates of protozoan and mesozooplankton grazing with reasonable accuracy, but the MCMC method overestimated primary production. The MCMC method more accurately predicted the poorly constrained rate of vertical carbon export than the L2MN method, which consistently overestimated export. Results involving DOC and bacterial production were equivocal. Overall, when primary production is provided as model input, the MCMC method gives a robust depiction of ecosystem processes. Uncertainty in inverse ecosystem models is large and arises primarily from solution under-determinacy. We thus suggest that experimental programs focusing on food web fluxes expand the range of experimental measurements to include the nature and fate of detrital pools, which play large roles in the model.
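For an underdetermined food-web balance, the L2MN solution is simply the minimum-norm flow vector satisfying the constraints; a two-constraint, four-flow sketch (the compartments and numbers are invented for illustration, not the CCE model):

```python
import numpy as np

# Underdetermined food-web balance: 2 constraints, 4 unknown flows,
# x = [NPP, grazing, respiration, export] (numbers illustrative).
A = np.array([[1.0, -1.0,  0.0, -1.0],    # NPP - grazing - export = 0.2
              [0.0,  1.0, -1.0,  0.0]])   # grazing - respiration = 0.1
b = np.array([0.2, 0.1])

# L2 minimum-norm (L2MN) solution: the flow vector of smallest norm
# that satisfies the balance constraints exactly.
x_l2mn = np.linalg.pinv(A) @ b
```

MCMC methods instead sample the whole solution polytope of flow vectors consistent with the constraints, which is why the two approaches can disagree on poorly constrained flows such as export.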
A method to approximate the inverse of a part of the additive relationship matrix.
Faux, P; Gengler, N
2015-06-01
Single-step genomic predictions need the inverse of the part of the additive relationship matrix between genotyped animals (A22). Gains in computing time are feasible with an algorithm that sets up the sparsity pattern of A22-1 (SP algorithm) using pedigree searches, when A22-1 is close to sparse. The objective of this study is to present a modification of the SP algorithm (RSP algorithm) and to assess its use in approximating A22-1 when the actual A22-1 is dense. The RSP algorithm sets up a restricted sparsity pattern of A22-1 by limiting the pedigree search to a maximum number of searched branches. We have tested its use on four different simulated genotyped populations, from 10 000 to 75 000 genotyped animals. Accuracy of the approximation is tested by replacing the actual A22-1 by its approximation in an equivalent mixed model including only genotyped animals. Results show that limiting the pedigree search to four branches is enough to provide accurate approximations of A22-1, which contain approximately 80% zeros. Computing the approximations is not expensive in time but may require a great amount of memory (at maximum, approximately 81 min and approximately 55 GB of RAM for 75 000 genotyped animals using parallel processing on four threads). PMID:25560252
NASA Astrophysics Data System (ADS)
Murphy, R. Kim; Sabbagh, Harold A.; Sabbagh, Elias H.; Zhou, Liming; Bernacchi, William; Aldrin, John C.; Forsyth, David; Lindgren, Eric
2016-02-01
The use of coupled integral equations and anomalous currents allows us to efficiently remove 'background effects' in either forward or inverse modeling. This is especially true when computing the change in impedance due to a small flaw in the presence of a larger background anomaly. It is more accurate than simply computing the response with and without the flaw and then subtracting the two nearly equal values to obtain the small difference due to the flaw. The problem that we address in this paper involves a 'SplitD' probe that includes complex, noncircular coils, as well as ferrite cores, inserted within a bolt hole, and exciting both the bolt hole and an adjacent flaw. This introduces three coupled anomalies, each with its own 'scale.' The largest, of course, is the bolt hole, followed (generally) by the probe, and then the flaw. The overall system is represented mathematically by three coupled volume-integral equations. We describe the development of the model and its code, which is a part of the general eddy-current modeling code, VIC-3D®. We present initial validation results, as well as a number of model computations with flaws located at various places within the bolt hole.
Mass, heat and nutrient fluxes in the Atlantic Ocean determined by inverse methods. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rintoul, Stephen Rich
1988-01-01
Inverse methods are applied to historical hydrographic data to address two aspects of the general circulation of the Atlantic Ocean. The method allows conservation statements for mass and other properties, along with a variety of other constraints, to be combined in a dynamically consistent way to estimate the absolute velocity field and associated property transports. The method was first used to examine the exchange of mass and heat between the South Atlantic and the neighboring ocean basins. The second problem addressed concerns the circulation and property fluxes across 24 and 36 deg N in the subtropical North Atlantic. Conservation statements are considered for the nutrients as well as mass, and the nutrients are found to contribute significant information independent of temperature and salinity.
NASA Technical Reports Server (NTRS)
Nakanishi, I.; Anderson, D. L.
1984-01-01
In the present investigation, the single-station method reported by Brune et al. (1960) is utilized for an analysis of long-period Love (G) and Rayleigh (R) waves recorded on digital seismic networks. The analysis was conducted to study the lateral heterogeneity of surface wave velocities. The data set is examined, and a description is presented of the single-station method. Attention is given to an error analysis for velocity measurements, the estimation of the geographical distribution of surface wave velocities, the global distribution of surface wave velocities, and the correlation of the surface wave velocities with the heat flow and the geoid. The conducted measurements and inversions of surface wave velocities are used as a basis to derive certain conclusions. It is found that the application of the single-station method to long-period surface waves recorded on digital networks makes it possible to reach an accuracy level comparable to that of great-circle velocity measurements.
NASA Astrophysics Data System (ADS)
Zhang, L.; Xu, M.; Huang, M.; Yu, G.
2009-11-01
Modeling the ecosystem carbon cycle on regional and global scales is crucial to predicting future global atmospheric CO2 concentration, and thus global temperature, which features large uncertainties due mainly to the limitations in our knowledge and in the climate and ecosystem models. There is a growing body of research on parameter estimation against available carbon measurements to reduce model prediction uncertainty at regional and global scales. However, the systematic errors in the observation data have rarely been investigated in the optimization procedures of previous studies. In this study, we examined the feasibility of reducing the impact of systematic errors on parameter estimation using normalization methods, and evaluated the effectiveness of three normalization methods (i.e. maximum normalization, min-max normalization, and z-score normalization) for inverting key parameters, for example the maximum carboxylation rate (Vcmax,25) at a reference temperature of 25°C, in a process-based ecosystem model for deciduous needle-leaf forests in northern China constrained by leaf area index (LAI) data. The LAI data used for parameter estimation were composed of the model output LAI (truth) and various designated systematic errors and random errors. We found that the estimation of Vcmax,25 could be severely biased with the composite LAI if no normalization was taken. Compared with the maximum normalization and the min-max normalization methods, the z-score normalization method was the most robust in reducing the impact of systematic errors on parameter estimation. The most probable values of estimated Vcmax,25 inverted from the z-score normalized LAI data were consistent with the true parameter values as in the model inputs, though the estimation uncertainty increased with the magnitude of random errors in the observations. We concluded that the z-score normalization method should be applied to the observed or measured data to improve model parameter estimation.
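The effect the authors exploit can be reproduced in a few lines: z-scoring removes both an additive offset and a multiplicative scale error from a synthetic "observed" LAI series, leaving only the random noise. The series and error magnitudes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "model truth" LAI series and a corrupted observation carrying a
# multiplicative scale error, an additive offset, and random noise (all
# magnitudes invented for illustration; not the paper's data).
lai_true = 2.0 + 3.0 * np.sin(np.linspace(0.0, np.pi, 50))
lai_obs = 1.3 * lai_true + 0.8 + rng.normal(0.0, 0.05, 50)

def zscore(x):
    # z-score normalization: subtract the mean, divide by the standard deviation
    return (x - x.mean()) / x.std()

# z-scoring cancels both the offset and the scale error, so the normalized
# observation tracks the normalized truth up to random noise; max or min-max
# normalization would leave an additive bias behind.
resid = zscore(lai_obs) - zscore(lai_true)
```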
SHOCKING TAILS IN THE MAJOR MERGER ABELL 2744
Owers, Matt S.; Couch, Warrick J.; Nulsen, Paul E. J.; Randall, Scott W.
2012-05-01
We identify four rare 'jellyfish' galaxies in Hubble Space Telescope imagery of the major merger cluster Abell 2744. These galaxies harbor trails of star-forming knots and filaments which have formed in situ in gas tails stripped from the parent galaxies, indicating they are in the process of being transformed by the environment. Further evidence for rapid transformation in these galaxies comes from their optical spectra, which reveal starburst, poststarburst, and active galactic nucleus features. Most intriguingly, three of the jellyfish galaxies lie near intracluster medium features associated with a merging 'Bullet-like' subcluster and its shock front detected in Chandra X-ray images. We suggest that the high-pressure merger environment may be responsible for the star formation in the gaseous tails. This provides observational evidence for the rapid transformation of galaxies during the violent core passage phase of a major cluster merger.
Giant ringlike radio structures around galaxy cluster Abell 3376.
Bagchi, Joydeep; Durret, Florence; Neto, Gastão B Lima; Paul, Surajit
2006-11-01
In the current paradigm of cold dark matter cosmology, large-scale structures are assembling through hierarchical clustering of matter. In this process, an important role is played by megaparsec (Mpc)-scale cosmic shock waves, arising in gravity-driven supersonic flows of intergalactic matter onto dark matter-dominated collapsing structures such as pancakes, filaments, and clusters of galaxies. Here, we report Very Large Array telescope observations of giant ( approximately 2 Mpc by 1.6 Mpc), ring-shaped nonthermal radio-emitting structures, found at the outskirts of the rich cluster of galaxies Abell 3376. These structures may trace the elusive shock waves of cosmological large-scale matter flows, which are energetic enough to power them. These radio sources may also be the acceleration sites where magnetic shocks are possibly boosting cosmic-ray particles with energies of up to 10(18) to 10(19) electron volts.
The central star of the planetary nebula Abell 78
NASA Technical Reports Server (NTRS)
Kaler, J. B.; Feibelman, W. A.
1984-01-01
The ultraviolet spectrum of the nucleus of Abell 78, one of the two planetaries known to contain zones of nearly pure helium, is studied. The line spectrum and wind velocities are examined, the determination of interstellar extinction for assessing circumstellar dust is improved, and the temperature, luminosity, and core mass are derived. The results for A78 are compared with results for A30, and it is concluded that the dust distributions around the two central stars are quite different. The temperature of the A78 core is not as high as previously believed, and almost certainly lies between 67,000 K and 130,000 K. The most likely temperature range is 77,000-84,000 K. The core mass lies between 0.56 and 0.70 solar mass, with the most likely values between 0.56 and 0.58 solar mass.
The Sunyaev-Zeldovich Effect Spectrum of Abell 2163
NASA Technical Reports Server (NTRS)
LaRoque, S.; Reese, E. D.; Holder, G. P.; Carlstrom, J. E.; Holzapfel, W. L.; Joy, M. K.; Grego, L.; Rose, M. Franklin (Technical Monitor)
2001-01-01
We present a measurement of the Sunyaev-Zeldovich effect (SZE) at 30 GHz for the galaxy cluster Abell 2163. Combining this data point with previous measurements at 140, 220, and 270 GHz from the SuZIE and Diabolo experiments, we construct the most complete SZE spectrum to date. The spectrum is fitted to determine the Compton y parameter and the peculiar velocity for this cluster; our results are y_0 = 3.6 x 10^-4 and v_p = 360 km s^-1. These results include corrections for contamination by Galactic dust emission; we find the contamination level to be much less than previously reported. The dust emission, while strong, is distributed over much larger angular scales than the cluster signal and contributes little to the measured signal when the proper SZE observing strategy is taken into account.
Black holes a-wandering in Abell 2261
NASA Astrophysics Data System (ADS)
Spolaor, Sarah; Ford, Holland; Gultekin, Kayhan; Lauer, Tod R.; Lazio, T. Joseph W.; Loeb, Abraham; Moustakas, Leonidas A.; Postman, Marc; Taylor, Joanna M.
2016-01-01
The brightest cluster galaxy in Abell 2261 (BCG2261) has an exceptionally large, flat, and asymmetric core, thought to have been shaped by a binary supermassive black hole inspiral and subsequent gravitational recoil. BCG2261 should contain a 10^10 Msun black hole, but it lacks the central cusp that should mark such a massive black hole. Based on the presence of central radio emission, we have explored the core of this galaxy with HST and the VLA to identify the presence and location of the active nucleus in this galaxy's core. We present our exploration of whether this system in fact contains direct evidence of a recoiling binary supermassive black hole. A recoiling core in this system would represent a pointed observational test of three preeminent theoretical predictions: that scouring forms cores, that SMBHs may recoil after coalescence, and that recoil can strongly influence core formation and morphology.
A shock front at the radio relic of Abell 2744
NASA Astrophysics Data System (ADS)
Eckert, D.; Jauzac, M.; Vazza, F.; Owers, M. S.; Kneib, J.-P.; Tchernin, C.; Intema, H.; Knowles, K.
2016-09-01
Radio relics are Mpc-scale diffuse radio sources at the peripheries of galaxy clusters which are thought to trace outgoing merger shocks. We present XMM-Newton and Suzaku observations of the galaxy cluster Abell 2744 (z = 0.306), which reveal the presence of a shock front 1.5 Mpc east of the cluster core. The surface-brightness jump coincides with the position of a known radio relic. Although the surface-brightness jump indicates a weak shock with a Mach number M=1.7_{-0.3}^{+0.5}, the plasma in the post-shock region has been heated to a very high temperature (˜13 keV) by the passage of the shock wave. The low-acceleration efficiency expected from such a weak shock suggests that mildly relativistic electrons have been re-accelerated by the passage of the shock front.
Jeschke, G; Mandelshtam, V A; Shaka, A J
1999-03-01
Harmonic inversion of electron spin echo envelope modulation (ESEEM) time-domain signals by filter diagonalization is investigated as an alternative to Fourier transformation. It is demonstrated that this method features enhanced resolution compared to Fourier-transform magnitude spectra, since it can eliminate dispersive contributions to the line shape, even if no linear phase correction is possible. Furthermore, instrumental artifacts can be easily removed from the spectra if they are narrow in either the time or the frequency domain. This applies to echo crossings that are only incompletely eliminated by phase cycling and to spurious spectrometer frequencies, respectively. The method is computationally efficient and numerically stable and does not require extensive parameter adjustments or advance knowledge of the number of spectral lines. Experiments on gamma-irradiated methyl-alpha-d-glucopyranoside show that more information can be obtained from typical ESEEM time-domain signals by filter diagonalization than by Fourier transformation.
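As a stand-in illustration of harmonic inversion in general (a Prony-type linear-prediction scheme, not the filter-diagonalization algorithm of the paper), the frequencies and dampings of a noiseless two-line time-domain signal can be recovered without any Fourier transform:

```python
import numpy as np

# Harmonic inversion models x[n] = sum_k a_k z_k^n with z_k = exp(-d_k + 2j*pi*f_k).
# Below is a Prony-style linear-prediction sketch, a stand-in illustration only,
# not the filter-diagonalization algorithm of the paper.
f_true = np.array([0.10, 0.27])          # frequencies, cycles/sample (assumed)
d_true = np.array([0.01, 0.02])          # damping rates, 1/sample (assumed)
z_true = np.exp(-d_true + 2j * np.pi * f_true)
n = np.arange(128)
x = (z_true[:, None] ** n).sum(axis=0)   # noiseless two-line time-domain signal

K = 2                                    # number of spectral lines (assumed known)
# The signal obeys a K-term linear recurrence whose characteristic roots are z_k:
# x[m] = c_1 x[m-1] + ... + c_K x[m-K].  Solve for c by least squares.
rows = np.array([x[m - K:m][::-1] for m in range(K, len(x))])
c = np.linalg.lstsq(rows, x[K:], rcond=None)[0]
z_est = np.roots(np.r_[1.0, -c])         # roots of z^K - c_1 z^(K-1) - ... - c_K

f_est = np.sort(np.angle(z_est) / (2 * np.pi))   # recovered frequencies
d_est = -np.log(np.abs(z_est))                   # recovered damping rates
```

With noisy data the naive least-squares step degrades quickly, which is one motivation for the more robust filter-diagonalization approach the abstract describes.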
NASA Astrophysics Data System (ADS)
Ortiz, R.; Carrasco, E.; Páez, G.; Sánchez-Blanco, E.; Gil de Paz, Armando; Gallego, J.; Cedazo, R.; Iglesias-Páramo, J.
2014-07-01
Optical tolerances are specified to achieve the desired performance of any optical system. Traditionally the diverse sets of tolerances of a system are proposed by the designer of each of the subsystems. In this work we propose a method to corroborate the design tolerances and simultaneously to provide extra data of each parameter to the manufacturer. It consists of an inverse analysis in which we fix a modified merit function as a constant and evaluate distinct models of perturbed lenses via Monte Carlo simulations, determining the best possible tolerance for each parameter, and indirectly providing information of sensitivity of the parameters. The method was used to carry out an extensive tolerance analysis of MEGARA, a multi-object spectrograph in development for the GTC. The key parameters of the optics are discussed, the overall performance is tested and diverse recommendations and adjustments to the design tolerances are made towards fabrication at INAOE and CIO in Mexico.
Two-dimensional charged particle image inversion using a polar basis function expansion
Garcia, Gustavo A.; Nahon, Laurent; Powis, Ivan
2004-11-01
We present an inversion method called pBasex aimed at reconstructing the original Newton sphere of expanding charged particles from its two-dimensional projection by fitting a set of basis functions with a known inverse Abel integral. The basis functions have been adapted to the polar symmetry of the photoionization process to optimize the energy and angular resolution while minimizing the CPU time and the response to the Cartesian noise that could be introduced by the detection system. The method presented here only applies to systems with a unique axis of symmetry, although it can be adapted to overcome this restriction. It has been tested on both simulated and experimental noisy images and compared to the Fourier-Hankel algorithm and the original Cartesian basis set used by Dribinski et al. [Rev. Sci. Instrum. 73, 2634 (2002)], and appears to give a better performance where odd Legendre polynomials are involved, while in images where only even terms are present the method has been shown to be faster and simpler without compromising its accuracy.
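A generic textbook alternative to a polar basis expansion is "onion peeling", which discretizes the inverse Abel integral as an upper-triangular linear system; the sketch below (not pBasex) recovers a Gaussian radial distribution from its analytic Abel projection. Grid sizes are arbitrary.

```python
import numpy as np

# Inverse Abel transform by "onion peeling": assume the radial function f(r) is
# piecewise constant on annuli, so the projection F(y) = 2*int_y^R f(r) r dr /
# sqrt(r^2 - y^2) becomes an upper-triangular linear map F = M f.
N, dr = 200, 0.05
r = (np.arange(N) + 0.5) * dr            # annulus centres
edges = np.arange(N + 1) * dr            # annulus edges
y = np.arange(N) * dr                    # projection ordinates

M = np.zeros((N, N))
for i in range(N):
    for j in range(i, N):                # only annuli with r >= y contribute
        lo, hi = max(edges[j], y[i]), edges[j + 1]
        # exact integral of 2 r / sqrt(r^2 - y^2) over the annulus segment
        M[i, j] = 2.0 * (np.sqrt(hi**2 - y[i]**2) - np.sqrt(lo**2 - y[i]**2))

F = np.sqrt(np.pi) * np.exp(-y**2)       # analytic Abel transform of exp(-r^2)
f_rec = np.linalg.solve(M, F)            # recover f(r) ~ exp(-r^2)
```

Basis-set methods such as pBasex trade this ring-by-ring inversion for a smooth fit, which behaves better on noisy images and gives the angular distributions directly.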
The distribution of dark and luminous matter in the unique galaxy cluster merger Abell 2146
NASA Astrophysics Data System (ADS)
King, Lindsay J.; Clowe, Douglas I.; Coleman, Joseph E.; Russell, Helen R.; Santana, Rebecca; White, Jacob A.; Canning, Rebecca E. A.; Deering, Nicole J.; Fabian, Andrew C.; Lee, Brandyn E.; Li, Baojiu; McNamara, Brian R.
2016-06-01
Abell 2146 (z = 0.232) consists of two galaxy clusters undergoing a major merger. The system was discovered in previous work, where two large shock fronts were detected using the Chandra X-ray Observatory, consistent with a merger close to the plane of the sky, caught soon after first core passage. A weak gravitational lensing analysis of the total gravitating mass in the system, using the distorted shapes of distant galaxies seen with Advanced Camera for Surveys - Wide Field Channel on Hubble Space Telescope, is presented. The highest peak in the reconstruction of the projected mass is centred on the brightest cluster galaxy (BCG) in Abell 2146-A. The mass associated with Abell 2146-B is more extended. Bootstrapped noise mass reconstructions show the mass peak in Abell 2146-A to be consistently centred on the BCG. Previous work showed that BCG-A appears to lag behind an X-ray cool core; although the peak of the mass reconstruction is centred on the BCG, it is also consistent with the X-ray peak given the resolution of the weak lensing mass map. The best-fitting mass model with two components centred on the BCGs yields M200 = 1.1^{+0.3}_{-0.4} × 1015 and 3^{+1}_{-2} × 1014 M⊙ for Abell 2146-A and Abell 2146-B, respectively, assuming a mass concentration parameter of c = 3.5 for each cluster. From the weak lensing analysis, Abell 2146-A is the primary halo component, and the origin of the apparent discrepancy with the X-ray analysis where Abell 2146-B is the primary halo is being assessed using simulations of the merger.
Tunnicliffe, Elizabeth M.; Pavlides, Michael; Robson, Matthew D.
2016-01-01
Purpose To characterize the effect of fat on modified Look-Locker inversion recovery (MOLLI) T1 maps of the liver. The balanced steady-state free precession (bSSFP) sequence causes water and fat signals to have opposite phase when repetition time (TR) = 2.3 msec at 3T. In voxels that contain both fat and water, the MOLLI T1 measurement is influenced by the choice of TR. Materials and Methods MOLLI T1 measurements of the liver were simulated using the Bloch equations while varying the hepatic lipid content (HLC). Phantom scans were performed on margarine phantoms, using both MOLLI and spin echo inversion recovery sequences. MOLLI T1 at 3T and HLC were determined in patients (n = 8) before and after bariatric surgery. Results At 3T, with HLC in the 0-35% range, higher fat fraction values lead to longer MOLLI T1 values when TR = 2.3 msec. Patients were found to have higher MOLLI T1 at elevated HLC (T1 = 929 ± 97 msec) than at low HLC (T1 = 870 ± 44 msec). Conclusion At 3T, MOLLI T1 values are affected by HLC, substantially changing MOLLI T1 in a clinically relevant range of fat content. J. Magn. Reson. Imaging 2016;44:105-111. PMID:26762615
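The opposed-phase condition quoted above (TR = 2.3 msec at 3T) can be checked from the fat-water chemical shift; assuming the dominant ~3.4 ppm methylene fat peak and gamma/2pi = 42.58 MHz/T, the phase accumulated by the bSSFP echo at TE = TR/2 comes out very close to pi:

```python
import math

# Why TR = 2.3 msec puts fat and water out of phase in bSSFP at 3T (sketch;
# the 3.4 ppm fat peak and gyromagnetic ratio are assumed textbook values).
B0 = 3.0                        # field strength, tesla
ppm = 3.4e-6                    # fat-water chemical shift
df = 42.58e6 * B0 * ppm         # fat off-resonance, Hz (~434 Hz)
TE = 2.3e-3 / 2                 # bSSFP echo forms at TR/2
phase = 2 * math.pi * df * TE   # fat-water phase difference at the echo, rad
# phase lands within a few percent of pi, so fat and water signals subtract.
```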
Muto, A.; Scambos, T.A.; Steffen, K.; Slater, A.G.; Clow, G.D.
2011-01-01
We use measured firn temperatures down to depths of 80 to 90 m at four locations in the interior of Dronning Maud Land, East Antarctica, to derive surface temperature histories spanning the past few decades using two different inverse methods. We find that the mean surface temperatures near the ice divide (the highest-elevation ridge of the East Antarctic Ice Sheet) have increased approximately 1 to 1.5 K within the past ~50 years, although the onset and rate of this warming vary by site. Histories at two locations, NUS07-5 (78.65S, 35.64E) and NUS07-7 (82.07S, 54.89E), suggest that the majority of this warming took place in the past one or two decades. Slight cooling to no change was indicated at one location, NUS08-5 (82.63S, 17.87E), off the divide near the Recovery Lakes region. In the most recent decade, inversion results indicate both cooler and warmer periods at different sites due to high interannual variability and the relatively high resolution of the inverted surface temperature histories. The overall results of our analysis fit a pattern of recent climate trends emerging from several sources of Antarctic temperature reconstructions: there is a contrast in surface temperature trends possibly related to altitude in this part of East Antarctica. Copyright 2011 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Nielsen, Bjørn Fredrik; Lysaker, Marius; Tveito, Aslak
2007-01-01
The electrical activity in the heart is modeled by a complex, nonlinear, fully coupled system of differential equations. Several scientists have studied how this model, referred to as the bidomain model, can be modified to incorporate the effect of heart infarctions on simulated ECG (electrocardiogram) recordings. We are concerned with the associated inverse problem; how can we use ECG recordings and mathematical models to identify the position, size and shape of heart infarctions? Due to the extreme CPU efforts needed to solve the bidomain equations, this model, in its full complexity, is not well suited for this kind of problem. In this paper we show how biological knowledge about the resting potential in the heart and level set techniques can be combined to derive a suitable stationary model, expressed in terms of an elliptic PDE, for such applications. This approach leads to a nonlinear ill-posed minimization problem, which we propose to regularize and solve with a simple iterative scheme. Finally, our theoretical findings are illuminated through a series of computer simulations for an experimental setup involving a realistic heart-in-torso geometry. More specifically, experiments with synthetic ECG recordings, produced by solving the bidomain model, indicate that our method manages to identify the physical characteristics of the ischemic region(s) in the heart. Furthermore, the ill-posed nature of this inverse problem is explored through several quantitative studies of our scheme.
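The bidomain-based problem is nonlinear, but the reason ill-posedness forces regularization can be seen already in a linear toy: a smoothing forward operator makes the naive inverse blow up the data noise, while a Tikhonov penalty keeps the estimate stable. Everything below (kernel, noise level, alpha) is an illustrative assumption, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear toy of an ill-posed inverse problem: a smoothing (convolution-like)
# forward operator G, a smooth true source, and slightly noisy data.
n = 40
t = np.linspace(0.0, 1.0, n)
G = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2)   # severely ill-conditioned
x_true = np.sin(2.0 * np.pi * t)
d = G @ x_true + rng.normal(0.0, 1e-3, n)

# Naive (unregularized) inversion amplifies the noise along tiny singular values.
x_naive = np.linalg.lstsq(G, d, rcond=None)[0]

# Tikhonov regularization: minimize ||G x - d||^2 + alpha ||x||^2, whose normal
# equations shift every squared singular value away from zero.
alpha = 1e-4
x_tik = np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ d)
```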
Gu, Y D; Ren, X J; Li, J S; Lake, M J; Zhang, Q Y; Zeng, Y J
2010-06-01
Metatarsal fracture is one of the most common foot injuries, particularly in athletes and soldiers, and is often associated with landing in inversion. An improved understanding of deformation of the metatarsals under inversion landing conditions is essential in the diagnosis and prevention of metatarsal injuries. In this work, a detailed three-dimensional (3D) finite element foot model was developed to investigate the effect of inversion positions on stress distribution and concentration within the metatarsals. The predicted plantar pressure distribution showed good agreement with data from controlled biomechanical tests. The deformation and stresses of the metatarsals during landing at different inversion angles (normal landing, 10 degree inversion and 20 degree inversion angles) were comparatively studied. The results showed that in the lateral metatarsals stress increased while in the medial metatarsals stress decreased with the angle of inversion. The peak stress point was found to be near the proximal part of the fifth metatarsal, which corresponds with reported clinical observations of metatarsal injuries.
The wonderful apparatus of John Jacob Abel called the "artificial kidney".
Eknoyan, Garabed
2009-01-01
Hemodialysis, which now provides life-saving therapy to millions of individuals, began as an exploratory attempt to sustain the lives of selected patients in the 1950s. That was a century after the formulation of the concept and determination of the laws governing dialysis. The first step in the translation of the laboratory principles of dialysis to living animals was the "vividiffusion" apparatus developed by John Jacob Abel (1859-1938), dubbed the "artificial kidney" in the August 11, 1913 issue of The Times of London reporting the demonstration of vividiffusion by Abel at University College. The detailed article in the January 18, 1914 of the New York Times, reproduced here, is based on the subsequent medical reports published by Abel et al. Tentative attempts of human dialysis in the decade that followed based on the vividiffusion apparatus of Abel and his materials (collodion, hirudin, and glass) met with failure and had to be abandoned. Practical dialysis became possible in the 1940s and thereafter after cellophane, heparin, and teflon became available. Abel worked in an age of great progress and experimental work in the basic sciences that laid the foundations of science-driven medicine. It was a "Heroic Age of Medicine," when medical discoveries and communicating them to the public were assuming increasing importance. This article provides the cultural, social, scientific, and medical background in which Abel worked, developed and reported his wonderful apparatus called the "artificial kidney."
Inverse transonic airfoil design methods including boundary layer and viscous interaction effects
NASA Technical Reports Server (NTRS)
Carlson, L. A.
1979-01-01
The development and incorporation into TRANDES of a fully conservative analysis method utilizing the artificial compressibility approach is described. The method allows for lifting cases and finite thickness airfoils and utilizes a stretched coordinate system. Wave drag and massive separation studies are also discussed.
NASA Astrophysics Data System (ADS)
Giudici, Mauro; Casabianca, Davide; Comunian, Alessandro
2015-04-01
The basic classical inverse problem of groundwater hydrology aims at determining aquifer transmissivity (T) from measurements of hydraulic head (h), estimates or measures of source terms, and the least possible knowledge of hydraulic transmissivity. The theory of inverse problems shows that this is an example of an ill-posed problem, for which non-uniqueness and instability (or at least ill-conditioning) might preclude the computation of a physically acceptable solution. One of the methods to reduce the problems with non-uniqueness, ill-conditioning and instability is a tomographic approach, i.e., the use of data corresponding to independent flow situations. The latter might correspond to different hydraulic stimulations of the aquifer, i.e., to different pumping schedules and flux rates. Three inverse methods have been analyzed and tested to profit from the use of multiple sets of data: the Differential System Method (DSM), the Comparison Model Method (CMM) and the Double Constraint Method (DCM). DSM and CMM need h all over the domain, and thus the first step for their application is the interpolation of measurements of h at sparse points. Moreover, they also need knowledge of the source terms (aquifer recharge, well pumping rates) all over the aquifer. DSM is intrinsically based on the use of multiple data sets, which permit writing a first-order partial differential equation for T, whereas CMM and DCM were originally proposed to invert a single data set and have been extended to work with multiple data sets in this work. CMM and DCM are based on Darcy's law, which is used to update an initial guess of the T field with formulas based on a comparison of different hydraulic gradients. In particular, the CMM algorithm corrects the T estimate with the ratio of the observed hydraulic gradient to that obtained with a comparison model which shares the same boundary conditions and source terms as the model to be calibrated, but a tentative T field.
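The CMM correction described above can be written down in a one-dimensional toy with a prescribed flux and noiseless heads, where the gradient-ratio update recovers the transmissivity field in a single pass (real applications iterate and face noise and non-uniqueness; all numbers here are invented):

```python
import numpy as np

# 1-D sketch of the Comparison Model Method (CMM): steady Darcy flow with a
# prescribed specific discharge q and heads known in every cell.
dx, q = 10.0, 2e-3                                   # cell width, flux (assumed units)
T_true = np.array([1e-3, 5e-3, 2e-3, 8e-3, 3e-3])    # "unknown" transmissivities

def heads(T, h0=100.0):
    # Darcy's law cell by cell: h[i+1] = h[i] - q*dx/T[i]
    return np.concatenate(([h0], h0 - np.cumsum(q * dx / T)))

h_obs = heads(T_true)            # synthetic observations (the calibration target)
T0 = np.full(T_true.size, 4e-3)  # tentative transmissivity field
h_cmp = heads(T0)                # comparison model: same BCs and sources, tentative T

# CMM update: scale T by the ratio of comparison-model to observed gradients.
grad_obs = np.diff(h_obs) / dx
grad_cmp = np.diff(h_cmp) / dx
T_cmm = T0 * grad_cmp / grad_obs # exact in one pass in this noiseless 1-D toy
```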
NASA Astrophysics Data System (ADS)
Fang, Jun; Zhang, Lizao; Duan, Huiping; Huang, Lei; Li, Hongbin
2016-05-01
The application of sparse representation to SAR/ISAR imaging has attracted much attention over the past few years. This new class of sparse-representation-based imaging methods presents a number of unique advantages over conventional range-Doppler methods; the basic idea behind these works is to formulate SAR/ISAR imaging as a sparse signal recovery problem. In this paper, we propose a new two-dimensional pattern-coupled sparse Bayesian learning (SBL) method to capture the underlying cluster patterns of ISAR target images. Based on this model, an expectation-maximization (EM) algorithm is developed to infer the maximum a posteriori (MAP) estimate of the hyperparameters, along with the posterior distribution of the sparse signal. Experimental results demonstrate that the proposed method is able to achieve a substantial performance improvement over existing algorithms, including the conventional SBL method.
Inverse methods-based estimation of plate coupling in a plate motion model governed by mantle flow
NASA Astrophysics Data System (ADS)
Ratnaswamy, V.; Stadler, G.; Gurnis, M.
2013-12-01
Plate motion is primarily controlled by buoyancy (slab pull), which originates at convergent plate margins where oceanic plates undergo deformation near the seismogenic zone. Yielding within subducting plates, lateral variations in viscosity, and the strength of seismic coupling between plate margins likely exert an important control on plate motion. Here, we wish to infer the inter-plate coupling for different subduction zones, and we formulate the inference as a PDE-constrained optimization problem, where the cost functional is the misfit in plate velocities and the constraint is the nonlinear Stokes equation. The inverse models have well resolved slabs, plates, and plate margins, in addition to a power-law rheology with yielding in the upper mantle. A Newton method is used to solve the nonlinear Stokes equation with viscosity bounds. We infer plate boundary strength using an inexact Gauss-Newton method with backtracking line search. Each inverse model is applied to two simple 2-D scenarios (each with three subduction zones), one with back-arc spreading and one without. For each case we examine the sensitivity of the inversion to the amount of surface velocity data used: 1) the full surface velocity field, and 2) surface velocities simplified to a single scalar average (the 2-D equivalent of an Euler pole) for each plate. We can recover plate boundary strength in each case, even in the presence of highly nonlinear flow with extreme variations in viscosity. Additionally, we ascribe an uncertainty to each plate's velocity and perform an uncertainty quantification (UQ) through the Hessian of the misfit in plate velocities. We find that as plate boundaries become strongly coupled, the uncertainty in the inferred plate boundary strength decreases. For very weak, uncoupled subduction zones, the uncertainty of the inferred plate margin strength increases, since there is little sensitivity between plate margin strength and plate velocity. This result is significant.
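The optimization machinery named here (an inexact Gauss-Newton method with backtracking line search), stripped of the Stokes constraint, reduces to a few lines on a small nonlinear least-squares problem; the exponential-fit misfit below is only a stand-in for the plate-velocity misfit:

```python
import numpy as np

# Gauss-Newton with backtracking line search on a tiny nonlinear least-squares
# problem (fitting y = a*exp(b*t)); a generic sketch of the optimization loop,
# not the Stokes-constrained plate-coupling inversion itself.
t = np.linspace(0.0, 1.0, 20)
theta_true = np.array([2.0, -1.5])
y = theta_true[0] * np.exp(theta_true[1] * t)        # noiseless synthetic data

def residual(th):
    return th[0] * np.exp(th[1] * t) - y

def jacobian(th):
    e = np.exp(th[1] * t)
    return np.column_stack([e, th[0] * t * e])

th = np.array([1.0, 0.0])                            # starting guess
for _ in range(50):
    r, J = residual(th), jacobian(th)
    step = np.linalg.solve(J.T @ J, -J.T @ r)        # Gauss-Newton direction
    alpha, cost = 1.0, 0.5 * r @ r
    # backtrack: halve the step until the misfit actually decreases
    while 0.5 * np.sum(residual(th + alpha * step) ** 2) > cost and alpha > 1e-8:
        alpha *= 0.5
    th = th + alpha * step
```

In the paper's setting the residual comes from a PDE solve, so the Gauss-Newton system is itself solved only inexactly, but the outer loop has the same shape.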
NASA Astrophysics Data System (ADS)
Ren, Cong
Nowadays, micro-tubular solid oxide fuel cells (MT-SOFCs), especially anode-supported MT-SOFCs, have been extensively developed for SOFC stack design, with potential applications in portable power sources and vehicle power supply. To prepare MT-SOFCs with high electrochemical performance, one of the main strategies is to optimize the microstructure of the anode support. Recently, a novel phase-inversion method has been applied to prepare anode supports with a unique asymmetrical microstructure, which can improve the electrochemical performance of MT-SOFCs. Since several process parameters of the phase-inversion method influence the pore-formation mechanism and the final microstructure, it is essential to systematically investigate the relationship between the phase-inversion process parameters and the final microstructure of the anode supports. The objective of this study is to correlate the process parameters with the microstructure and thereby prepare MT-SOFCs with enhanced electrochemical performance. The non-solvent, which is used to trigger the phase-separation process, can significantly influence the microstructure of an anode support fabricated by the phase-inversion method. To investigate how the non-solvent affects the microstructure, water and ethanol/water mixtures were selected for the fabrication of NiO-YSZ anode supports. The presence of ethanol in the non-solvent inhibits the growth of the finger-like pores in the tubes. With increasing ethanol concentration in the non-solvent, a relatively dense layer can be observed on both the outside and inside of the tubes. The mechanism of pore growth and the morphology obtained using a non-solvent with a high ethanol concentration were explained based on the inter-diffusivity between solvent and non-solvent: a solvent/non-solvent pair with a larger Dm value favors the growth of finger-like pores. Three cells with different anode geometries were
NASA Astrophysics Data System (ADS)
Theobald, Mark R.; Crittenden, Peter D.; Tang, Y. Sim; Sutton, Mark A.
2013-12-01
Penguin colonies represent some of the most concentrated sources of ammonia emissions to the atmosphere in the world. The ammonia emitted into the atmosphere can have a large influence on the nitrogen cycling of ecosystems near the colonies. However, despite the ecological importance of the emissions, no measurements of ammonia emissions from penguin colonies have been made. The objective of this work was to determine the ammonia emission rate of a penguin colony using inverse-dispersion modelling and gradient methods. We measured meteorological variables and mean atmospheric concentrations of ammonia at seven locations near a colony of Adélie penguins in Antarctica to provide input data for inverse-dispersion modelling. Three different atmospheric dispersion models (ADMS, LADD and a Lagrangian stochastic model) were used to provide a robust emission estimate. The Lagrangian stochastic model was applied both in ‘forwards’ and ‘backwards’ mode to compare the difference between the two approaches. In addition, the aerodynamic gradient method was applied using vertical profiles of mean ammonia concentrations measured near the centre of the colony. The emission estimates derived from the simulations of the three dispersion models and the aerodynamic gradient method agreed quite well, giving a mean emission of 1.1 g ammonia per breeding pair per day (95% confidence interval: 0.4-2.5 g ammonia per breeding pair per day). This emission rate represents a volatilisation of 1.9% of the estimated nitrogen excretion of the penguins, which agrees well with that estimated from a temperature-dependent bioenergetics model. We found that, in this study, the Lagrangian stochastic model seemed to give more reliable emission estimates in ‘forwards’ mode than in ‘backwards’ mode due to the assumptions made.
Xiao, Xiao; Hua, Xue-Ming; Wu, Yi-Xiong; Li, Fang
2012-09-01
Pulsed TIG welding is widely used in industry due to its superior properties, and measurement of the arc temperature is important for analysis of the welding process. The relationship between the particle densities of Ar and temperature was calculated based on spectral theory, as was the relationship between the emission coefficient of the spectral line at 794.8 nm and temperature. Arc images at the 794.8 nm spectral line were captured by a high-speed camera, and both Abel inversion and the Fowler-Milne method were used to calculate the temperature distribution of pulsed TIG welding. PMID:23240389
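The Abel-inversion step used in such side-on plasma measurements can be sketched with a simple onion-peeling discretization: concentric annuli of constant emission produce a triangular path-length system that is solved directly. The geometry matrix and the Gaussian test profile below are invented for the illustration; the Fowler-Milne temperature calibration itself is not implemented.

```python
import numpy as np

def abel_projection_matrix(n, dr=1.0):
    """Path-length matrix for onion peeling: I = A @ f, where f[k] is the
    (assumed constant) emission in the annulus [k*dr, (k+1)*dr]."""
    A = np.zeros((n, n))
    for j in range(n):            # chord at lateral offset y = j*dr
        y = j * dr
        for k in range(j, n):     # only shells with r >= y are crossed
            r_in, r_out = k * dr, (k + 1) * dr
            A[j, k] = 2.0 * (np.sqrt(r_out**2 - y**2)
                             - np.sqrt(max(r_in**2 - y**2, 0.0)))
    return A

n = 50
r = np.arange(n) + 0.5
f_true = np.exp(-(r / 15.0) ** 2)      # synthetic radial emission profile
A = abel_projection_matrix(n)
I = A @ f_true                          # simulated line-of-sight measurement
f_rec = np.linalg.solve(A, I)           # onion-peeling inversion
```

Since the forward projection and the inversion use the same discretization and the data are noise-free, the radial profile is recovered essentially exactly; with real noisy data, onion peeling amplifies noise near the axis, which is why smoothed or regularized Abel inversions are preferred in practice.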
Rajeev, K; Parameswaran, K
1998-07-20
Two iterative methods of inverting lidar backscatter signals to determine altitude profiles of aerosol extinction and altitude-resolved aerosol size distribution (ASD) are presented. The first method is for inverting two-wavelength lidar signals in which the shape of the ASD is assumed to be of power-law type, and the second method is for inverting multiwavelength lidar signals without assuming any a priori analytical form of the ASD. An arbitrary value of the aerosol extinction-to-backscatter ratio (S(1)) is assumed initially to invert the lidar signals, and the ASD determined by use of the spectral dependence of the retrieved aerosol extinction coefficients is used to improve the value of S(1) iteratively. The methods are tested for different forms of altitude-dependent ASDs by use of simulated lidar-backscatter-signal profiles. The effect of random noise on the lidar backscatter signals is also studied.
E-coil: an inverse boundary element method for a quasi-static problem.
Sanchez, Clemente Cobos; Garcia, Salvador Gonzalez; Power, Henry
2010-06-01
Boundary element methods represent a valuable approach for designing gradient coils; these methods are based on meshing the current carrying surface into an array of boundary elements. The temporally varying magnetic fields produced by gradient coils induce electric currents in conducting tissues and so the exposure of human subjects to these magnetic fields has become a safety concern, especially with the increase in the strength of the field gradients used in magnetic resonance imaging. Here we present a boundary element method for the design of coils that minimize the electric field induced in prescribed conducting systems. This work also details some numerical examples of the application of this coil design method. The reduction of the electric field induced in a prescribed region inside the coils is also evaluated.
Vargas-Ubera, Javier; Aguilar, J Félix; Gale, David Michel
2007-01-01
By means of a numerical study we show particle-size distributions retrieved with the Chin-Shifrin, Phillips-Twomey, and singular value decomposition methods. Synthesized intensity data are generated using Mie theory, corresponding to unimodal normal, gamma, and lognormal distributions of spherical particles, covering the size parameter range from 1 to 250. Our results show the advantages and disadvantages of each method, as well as the range of applicability for the Fraunhofer approximation as compared to rigorous Mie theory.
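Of the three inversion methods compared in this abstract, the Phillips-Twomey approach is a smoothness-regularized linear inversion. A minimal sketch on a generic ill-conditioned smoothing kernel follows; the kernel, noise level, and regularization weight are illustrative choices, not the paper's Mie-theory forward model.

```python
import numpy as np

def phillips_twomey(K, g, gamma):
    """Solve K f = g as min ||K f - g||^2 + gamma * ||L f||^2,
    with L the second-difference (smoothing) operator."""
    n = K.shape[1]
    L = np.diff(np.eye(n), n=2, axis=0)        # (n-2) x n second differences
    return np.linalg.solve(K.T @ K + gamma * L.T @ L, K.T @ g)

# Ill-conditioned test problem with a known smooth solution
n = 60
x = np.linspace(0.0, 1.0, n)
K = np.exp(-((x[:, None] - x[None, :]) / 0.05) ** 2)   # broad smoothing kernel
f_true = np.exp(-((x - 0.5) / 0.1) ** 2)
g = K @ f_true + 1e-4 * np.random.default_rng(0).standard_normal(n)
f_rec = phillips_twomey(K, g, gamma=1e-6)
```

The smoothing term suppresses the oscillatory components that a naive solve of the near-singular system would amplify; in practice `gamma` is tuned against the noise level, for example by the discrepancy principle or L-curve.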
Xu, Ninghan; Bai, Benfeng; Tan, Qiaofeng; Jin, Guofan
2013-09-01
Aspect ratio, width, and end-cap factor are three critical parameters defined to characterize the geometry of a metallic nanorod (NR). In our previous work [Opt. Express 21, 2987 (2013)], we reported an optical extinction spectroscopic (OES) method that can measure the aspect ratio distribution of gold NR ensembles effectively and statistically. However, the measurement accuracy was found to depend on the estimate of the width and end-cap factor of the nanorod, which unfortunately cannot be determined by the OES method itself. In this work, we propose to improve the accuracy of the OES method by applying an auxiliary scattering measurement of the NR ensemble, which helps to estimate the mean width of the gold NRs effectively. This so-called optical extinction/scattering spectroscopic (OESS) method can rapidly characterize the aspect ratio distribution as well as the mean width of gold NR ensembles simultaneously. By experimental comparison with transmission electron microscopy, the OESS method shows the advantage of determining two of the three critical parameters of the NR ensembles (i.e., the aspect ratio and the mean width) more accurately and conveniently than the OES method.
A new multistage groundwater transport inverse method: Presentation, evaluation, and implications
Anderman, E.R.; Hill, M.C.
1999-01-01
More computationally efficient methods of using concentration data are needed to estimate groundwater flow and transport parameters. This work introduces and evaluates a three-stage nonlinear-regression-based iterative procedure in which trial advective-front locations link decoupled flow and transport models. Method accuracy and efficiency are evaluated by comparing results to those obtained when flow- and transport-model parameters are estimated simultaneously. The new method is evaluated as conclusively as possible by using a simple test case that includes distinct flow and transport parameters, but does not include any approximations that are problem dependent. The test case is analytical; the only flow parameter is a constant velocity, and the transport parameters are longitudinal and transverse dispersivity. Any difficulties detected using the new method in this ideal situation are likely to be exacerbated in practical problems. Monte-Carlo analysis of observation error ensures that no specific error realization obscures the results. Results indicate that, while this, and probably other, multistage methods do not always produce optimal parameter estimates, the computational advantage may make them useful in some circumstances, perhaps as a precursor to using a simultaneous method.
An inverse finite element method for determining residual and current stress fields in solids
NASA Astrophysics Data System (ADS)
Tartibi, M.; Steigmann, D. J.; Komvopoulos, K.
2016-11-01
The life expectancy of a solid component is traditionally predicted by assessing its expected stress cycle and comparing it to experimentally determined stress states at failure. The accuracy of this procedure is often compromised by unforeseen extremes in the loading cycle or material degradation. Residually stressed parts may either have longer or shorter lifespans than predicted. Thus, determination of the current state of stress (i.e., the residual stress in the absence of external loading) and material properties is particularly important. Typically, the material properties of a solid are determined by fitting experimental data obtained from the measured deformation response in the stress-free configuration. However, the characterization of the mechanical behavior of a residually stressed body requires, in principle, a method that is not restricted to specific constitutive models. Complementing a recently developed technique, known as the reversed updated Lagrangian finite element method (RULFEM), a new method called estimating the current state of stress (ECSS) is presented herein. ECSS is based on three-dimensional full-field displacement and force data of a body perturbed by small displacements and complements the first step of the incremental RULFEM method. The present method generates the current state of stress (or residual stress in the absence of external tractions) and the incremental elasticity tensor of each finite element used to discretize the deformable body. The validity of the ECSS method is demonstrated by two noise-free simulation cases.
NASA Astrophysics Data System (ADS)
Stohl, A.; Seibert, P.; Arduini, J.; Eckhardt, S.; Fraser, P.; Greally, B. R.; Maione, M.; O'Doherty, S.; Prinn, R. G.; Reimann, S.; Saito, T.; Schmidbauer, N.; Simmonds, P. G.; Vollmer, M. K.; Weiss, R. F.; Yokouchi, Y.
2008-11-01
A new analytical inversion method has been developed to determine the regional and global emissions of long-lived atmospheric trace gases. It exploits in situ measurement data from a global network and builds on backward simulations with a Lagrangian particle dispersion model. The emission information is extracted from the observed concentration increases over a baseline that is itself objectively determined by the inversion algorithm. The method was applied to two hydrofluorocarbons (HFC-134a, HFC-152a) and a hydrochlorofluorocarbon (HCFC-22) for the period January 2005 until March 2007. Detailed sensitivity studies with synthetic as well as with real measurement data were done to quantify the influence on the results of the a priori emissions and their uncertainties as well as of the observation and model errors. It was found that the global a posteriori emissions of HFC-134a, HFC-152a and HCFC-22 all increased from 2005 to 2006. Large increases (21%, 16%, 18%, respectively) from 2005 to 2006 were found for China, whereas the emission changes in North America and Europe were modest. For Europe, the a posteriori emissions of HFC-134a and HFC-152a were slightly higher than the a priori emissions reported to the United Nations Framework Convention on Climate Change (UNFCCC). For HCFC-22, the a posteriori emissions for Europe were substantially (by almost a factor 2) higher than the a priori emissions used, which were based on HCFC consumption data reported to the United Nations Environment Programme (UNEP). Combined with the reported strongly decreasing HCFC consumption in Europe, this suggests a substantial time lag between the reported timing of the HCFC-22 consumption and the actual timing of the HCFC-22 emission. Conversely, in China where HCFC consumption is increasing rapidly according to the UNEP data, the a posteriori emissions are only about 40% of the a priori emissions. This reveals a substantial storage of HCFC-22 and potential for future emissions in China.
NASA Astrophysics Data System (ADS)
Stohl, A.; Seibert, P.; Arduini, J.; Eckhardt, S.; Fraser, P.; Greally, B. R.; Lunder, C.; Maione, M.; Mühle, J.; O'Doherty, S.; Prinn, R. G.; Reimann, S.; Saito, T.; Schmidbauer, N.; Simmonds, P. G.; Vollmer, M. K.; Weiss, R. F.; Yokouchi, Y.
2009-03-01
A new analytical inversion method has been developed to determine the regional and global emissions of long-lived atmospheric trace gases. It exploits in situ measurement data from three global networks and builds on backward simulations with a Lagrangian particle dispersion model. The emission information is extracted from the observed concentration increases over a baseline that is itself objectively determined by the inversion algorithm. The method was applied to two hydrofluorocarbons (HFC-134a, HFC-152a) and a hydrochlorofluorocarbon (HCFC-22) for the period January 2005 until March 2007. Detailed sensitivity studies with synthetic as well as with real measurement data were done to quantify the influence on the results of the a priori emissions and their uncertainties as well as of the observation and model errors. It was found that the global a posteriori emissions of HFC-134a, HFC-152a and HCFC-22 all increased from 2005 to 2006. Large increases (21%, 16%, 18%, respectively) from 2005 to 2006 were found for China, whereas the emission changes in North America (-9%, 23%, 17%, respectively) and Europe (11%, 11%, -4%, respectively) were mostly smaller and less systematic. For Europe, the a posteriori emissions of HFC-134a and HFC-152a were slightly higher than the a priori emissions reported to the United Nations Framework Convention on Climate Change (UNFCCC). For HCFC-22, the a posteriori emissions for Europe were substantially (by almost a factor 2) higher than the a priori emissions used, which were based on HCFC consumption data reported to the United Nations Environment Programme (UNEP). Combined with the reported strongly decreasing HCFC consumption in Europe, this suggests a substantial time lag between the reported time of the HCFC-22 consumption and the actual time of the HCFC-22 emission. Conversely, in China where HCFC consumption is increasing rapidly according to the UNEP data, the a posteriori emissions are only about 40% of the a priori emissions.
NASA Technical Reports Server (NTRS)
Larour, E.; Rignot, E.; Joughin, I.; Aubry, D.
2005-01-01
The Antarctic Ice Sheet is surrounded by large floating ice shelves that spread under their own weight into the ocean. Ice shelf rigidity depends on ice temperature and fabrics, and is influenced by ice flow and the delicate balance between bottom and surface accumulation. Here, we use an inverse control method to infer the rigidity of the Ronne Ice Shelf that best matches observations of ice velocity from satellite radar interferometry. Ice rigidity, or flow law parameter B, is shown to vary between 300 and 900 kPa a^(1/3). Ice is softer along the side margins due to frictional heating, and harder along the outflow of large glaciers, which advect cold continental ice. Melting at the bottom surface of the ice shelf increases its rigidity, while freezing decreases it. Accurate numerical modelling of ice shelf flow must account for this spatial variability in mechanical characteristics.
NASA Astrophysics Data System (ADS)
Virieux, J.; Bretaudeau, F.; Metivier, L.; Brossier, R.
2013-12-01
Simultaneous inversion of seismic velocities and source parameters has been a long-standing challenge in seismology, since the first attempts to mitigate the trade-off between the very different parameters influencing travel times (Spencer and Gubbins 1980, Pavlis and Booker 1980), following the early developments of the 1970s (Aki et al 1976, Aki and Lee 1976, Crosson 1976). There is a strong trade-off between earthquake source positions, initial times, and velocities during tomographic inversion, and mitigating these trade-offs is usually carried out empirically (Lemeur et al 1997). This procedure is not optimal and may lead to errors in the velocity reconstruction as well as in the source localization. For a better simultaneous estimation in such a multi-parameter reconstruction problem, one may benefit from improved local optimization such as a full Newton method, where the influence of the Hessian helps balance the different physical parameter quantities and improves the coverage at the point of reconstruction. Unfortunately, the full Hessian operator is not easily computed for large models and large datasets. Truncated Newton (TCN) is an alternative optimization approach (Métivier et al. 2012) that solves the normal equation H Δm = -g using a matrix-free conjugate-gradient algorithm; it only requires the ability to compute the gradient of the misfit function and Hessian-vector products. Traveltime maps can be computed in the whole domain by numerical modeling (Vidale 1998, Zhao 2004). The gradient and the Hessian-vector products for velocities can be computed without ray tracing using first- and second-order adjoint-state methods, at the cost of one and two additional modeling steps, respectively (Plessix 2006, Métivier et al. 2012). Reciprocity allows accurate computation of the gradient and the full Hessian for each source coordinate and for the initial times. The resolution of the problem is then organized as two nested loops. The model update Δm is
Jain, Pankaj C; Varadarajan, Raghavan
2014-03-15
With the development of deep sequencing methodologies, it has become important to construct site saturation mutant (SSM) libraries in which every nucleotide/codon in a gene is individually randomized. We describe methodologies for the rapid, efficient, and economical construction of such libraries using inverse polymerase chain reaction (PCR). We show that if the degenerate codon is in the middle of the mutagenic primer, there is an inherent PCR bias due to the thermodynamic mismatch penalty, which decreases the proportion of unique mutants. Introducing a nucleotide bias in the primer can alleviate the problem. Alternatively, if the degenerate codon is placed at the 5' end, there is no PCR bias, which results in a higher proportion of unique mutants. This also facilitates detection of deletion mutants resulting from errors during primer synthesis. This method can be used to rapidly generate SSM libraries for any gene or nucleotide sequence, which can subsequently be screened and analyzed by deep sequencing.
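The SSM libraries described here rely on degenerate codons (e.g., NNK) in the mutagenic primer. As a small illustration of what such a codon encodes, the snippet below expands IUPAC degenerate codons into the concrete codons they represent; the helper function and codon choices are illustrative, not taken from the paper's protocol.

```python
from itertools import product

# IUPAC nucleotide degeneracy codes
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "M": "AC", "K": "GT", "S": "GC", "W": "AT",
         "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

def expand(degenerate_codon):
    """All concrete codons encoded by a degenerate IUPAC codon."""
    return ["".join(c) for c in product(*(IUPAC[b] for b in degenerate_codon))]

nnk = expand("NNK")   # the NNK scheme: 32 codons covering all 20 amino acids
```

NNK yields 32 codons versus 64 for NNN, halving the library size (and the number of stop codons) while still accessing every amino acid, which is why it is a common choice for saturation mutagenesis primers.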
NASA Astrophysics Data System (ADS)
Tarmizi, S. N. M.; Asmat, A.; Sumari, S. M.
2014-02-01
PM10 is one of the air contaminants that can be harmful to human health. Meteorological factors and changes of monsoon season may affect the distribution of these particles. The objective of this study is to determine the temporal and spatial particulate matter (PM10) concentration distribution in the Klang Valley, Malaysia, by using the Inverse Distance Weighted (IDW) method under different monsoon seasons and meteorological conditions. PM10 and meteorological data were obtained from the Malaysian Department of Environment (DOE). Particle distribution data were added to the geographic database on a seasonal basis. Temporal and spatial patterns of the PM10 concentration distribution were determined using ArcGIS 9.3. Higher PM10 concentrations are observed during the southwest monsoon season; the values are lower during the northeast monsoon season. Different monsoon seasons bring different meteorological conditions that affect the PM10 distribution.
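The IDW interpolation used in this study weights each station's value by the inverse of its distance to the query point raised to a power. A minimal sketch follows; the station coordinates and PM10 readings are hypothetical, not data from the study.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighted interpolation: weights ~ 1 / d^power."""
    # pairwise distances, shape (n_query, n_known)
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power        # eps keeps on-station queries finite
    w /= w.sum(axis=1, keepdims=True)   # normalize weights per query point
    return w @ values

# Four hypothetical monitoring stations with PM10 readings (ug/m^3)
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
pm10 = np.array([40.0, 60.0, 50.0, 70.0])
grid = np.array([[0.5, 0.5], [0.0, 0.0]])
est = idw(stations, pm10, grid)
```

At the central point all four stations are equidistant, so the estimate is their mean; at a station location the estimate collapses to that station's reading, the exactness property that makes IDW attractive for sparse monitoring networks.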
Inversion methods for the measurements of MHD-like density fluctuations by Heavy Ion Beam Diagnostic
NASA Astrophysics Data System (ADS)
Malaquias, A.; Henriques, R. B.; Nedzelsky, I. S.
2015-09-01
We report here on recent developments in the deconvolution of path-integral effects for the study of MHD pressure-like fluctuations measured by the Heavy Ion Beam Diagnostic. In particular, we develop improved methods to account for and remove the path-integral effect in the determination of the ionization generation factors, including the double ionization of the primary beam. We test the method using the HIBD simulation code, which computes the real beam trajectories and attenuations due to electron-impact ionization for any selected synthetic profiles of plasma current, plasma potential, electron temperature, and density. Simulations have shown the numerical method to be highly effective in ISTTOK, within an overall accuracy of a few percent (< 3%). The method presented here can effectively reduce the path-integral effects and may serve as the basis for improved retrieval techniques for plasma devices operating even at higher density ranges. The method is applied to retrieve the time evolution and spatial structure of m=1 and m=2 modes. The 2D MHD mode-like structure is reconstructed by means of a spatial projection of all 1D measurements obtained during one full rotation of the mode. A shorter version of this contribution is due to be published in PoS at: 1st EPS conference on Plasma Diagnostics
Liang, Wei; Murakawa, Hidekazu
2014-01-01
Welding-induced deformation not only negatively affects dimensional accuracy but also degrades the performance of the product. If welding deformation can be accurately predicted beforehand, the predictions will be helpful for finding effective methods to improve manufacturing accuracy. To date, there are two kinds of finite element method (FEM) that can be used to simulate welding deformation: the thermal elastic-plastic FEM, and the elastic FEM based on inherent strain theory. The former can only be used to calculate welding deformation for small- or medium-scale welded structures due to the limitation of computing speed. The latter, on the other hand, is an effective method for estimating the total welding distortion of large and complex welded structures, even though it neglects the detailed welding process. When the elastic FEM is used to calculate the welding-induced deformation of a large structure, the inherent deformations of each typical joint should be obtained beforehand. In this paper, a new method based on inverse analysis is proposed to obtain the inherent deformations for weld joints. By introducing the inherent deformations obtained by the proposed method into the elastic FEM based on inherent strain theory, we predicted the welding deformation of a panel structure with two longitudinal stiffeners. In addition, experiments were carried out to verify the simulation results. PMID:25276856
Liu, Bin; Zhang, Bingbing; Wan, Chao; Dong, Yihuan
2014-01-01
In order to reduce the motion artifact caused by patient movement in cerebral DSA images, a non-rigid registration method based on a stretching transformation is presented in this paper. Unlike traditional methods, it does not need bilinear interpolation, which is time-consuming and can even produce gray values that did not exist in the original image. In this method, the mask image is rasterized to generate appropriate control points. The Energy of Histogram of Differences criterion is adopted as the similarity measure, and the Powell algorithm is utilized for acceleration. A forward stretching transformation is used for motion estimation, and an inverse stretching transformation generates the target image through a pixel-mapping strategy. This method is effective in maintaining the topological relationships of the gray values before and after image deformation. The mask image retains clear and accurate contours, and the quality of the subtraction image after registration is favorable. This method can provide support for clinical treatment and diagnosis of cerebral disease. PMID:24212008
NASA Astrophysics Data System (ADS)
Xue, Haile; Shen, Xueshun; Chou, Jifan
2015-10-01
Errors inevitably exist in numerical weather prediction (NWP) due to imperfect numerics and physical parameterizations. To eliminate these errors, by considering NWP as an inverse problem, an unknown term in the prediction equations can be estimated inversely by using past data, which are presumed to represent the imperfection of the NWP model (model error, denoted as ME). In this first paper of a two-part series, an iteration method for obtaining the MEs in past intervals is presented, and the results of testing its convergence in idealized experiments are reported. Moreover, two batches of iteration tests were applied in the global forecast system of the Global and Regional Assimilation and Prediction System (GRAPES-GFS) for July-August 2009 and January-February 2010. The datasets associated with the initial conditions and sea surface temperature (SST) were both based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results showed that 6-h forecast errors were reduced to 10% of their original value after a 20-step iteration. Then, off-line forecast error corrections were estimated linearly based on the 2-month mean MEs and compared with the forecast errors. The estimated error corrections agreed well with the forecast errors, but the linear growth rate of the estimation was steeper than that of the forecast errors. The advantage of this iteration method is that the MEs can provide the foundation for online correction. A larger proportion of the forecast errors can be expected to be canceled out by properly introducing the model error correction into GRAPES-GFS.
Coordinate transformation method for the solution of inverse problem in 2D and 3D scatterometry
NASA Astrophysics Data System (ADS)
Ponnusamy, Sekar
2005-05-01
For scatterometry applications, diffraction analysis of gratings is carried out using Rigorous Coupled Wave Analysis (RCWA). Though the RCWA method was originally developed for lamellar gratings, arbitrary profiles can be analyzed using a staircase approximation with S-matrix propagation of the field components. For improved accuracy, more Fourier waves need to be included in the Floquet-Bloch expansion of the field components, and more slices are needed in the staircase approximation; these requirements increase the analysis time. The coordinate transformation method (CTM) developed by Chandezon et al. maps an arbitrary grating profile onto a plane surface in the new coordinate system and hence does not require slicing. The method has been extended to 3D structures by several authors, notably by Harris et al. for non-orthogonal unit cells and by Granet for the correct Fourier expansion, and has also been extended to handle sharp-edged gratings through adaptive spatial resolution. In this paper, an attempt is made to employ the CTM with the correct Fourier expansion, in conjunction with adaptive spatial resolution, for scatterometry applications. A MATLAB program was developed, demonstrating that the CTM can be used for diffraction analysis of the trapezoidal profiles typically encountered in scatterometry applications.
Inverse Functions and their Derivatives.
ERIC Educational Resources Information Center
Snapper, Ernst
1990-01-01
Presented is a method of interchanging the x-axis and y-axis for viewing the graph of the inverse function. Discussed are the inverse function and the usual proofs that are used for the function. (KR)
NASA Astrophysics Data System (ADS)
Tian, Wenyi; Yuan, Xiaoming
2016-11-01
Linear inverse problems with total variation regularization can be reformulated as saddle-point problems; the primal and dual variables of such a saddle-point reformulation can be discretized in piecewise affine and constant finite element spaces, respectively. Thus, the well-developed primal-dual approach (a.k.a. the inexact Uzawa method) is conceptually applicable to such a regularized and discretized model. When the primal-dual approach is applied, the resulting subproblems may be highly nontrivial and it is necessary to discuss how to tackle them and thus make the primal-dual approach implementable. In this paper, we suggest linearizing the data-fidelity quadratic term of the hard subproblems so as to obtain easier ones. A linearized primal-dual method is thus proposed. Inspired by the fact that the linearized primal-dual method can be explained as an application of the proximal point algorithm, a relaxed version of the linearized primal-dual method, which can often accelerate the convergence numerically with the same order of computation, is also proposed. The global convergence and worst-case convergence rate measured by the iteration complexity are established for the new algorithms. Their efficiency is verified by some numerical results.
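The linearized primal-dual iteration described above can be sketched on a minimal 1-D total-variation model. This is a generic finite-difference sketch under assumed step sizes (tau, sigma) and a forward-difference operator D, not the authors' finite-element discretization or their relaxed variant:

```python
import numpy as np

def tv_linearized_primal_dual(A, f, lam=0.2, tau=0.05, sigma=0.5, iters=2000):
    """Linearized primal-dual iteration for min_u 0.5*||A u - f||^2 + lam*||D u||_1.

    The quadratic data-fidelity term is handled by an explicit gradient step
    (the 'linearization'), so no inner subproblem involving A has to be solved.
    """
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)      # 1-D forward-difference operator
    u = np.zeros(n)
    u_bar = u.copy()
    p = np.zeros(n - 1)                 # dual variable for the TV term
    for _ in range(iters):
        # dual ascent, then projection onto the l-infinity ball of radius lam
        p = np.clip(p + sigma * (D @ u_bar), -lam, lam)
        # primal descent on the linearized data term plus the TV coupling
        u_new = u - tau * (A.T @ (A @ u - f) + D.T @ p)
        u_bar = 2.0 * u_new - u         # extrapolation step
        u = u_new
    return u
```

On a denoising instance (A = I) the iteration recovers a roughly piecewise-constant signal from its noisy version; the illustrative step sizes satisfy the usual condition 1/tau - sigma*||D||^2 >= ||A||^2 / 2.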
NASA Astrophysics Data System (ADS)
Shonkwiler, K. B.; Ham, J. M.; Williams, C.
2012-12-01
Development Initiative. Food and Agriculture Organization of the United Nations, Rome, Italy. [2] Loubet, B., Génermont, S., Ferrara, R., Bedos, C., Decuq, C., Personne, E., Fanucci, O., Durand, B., Rana, G., Cellier, P., 2010. An inverse model to estimate ammonia emissions from fields. Eur. J. Soil Sci. 61: 793-805. Figure caption: Panorama of a weather station (left) utilizing micrometeorological methods to aid in estimating emissions of methane and ammonia from an anaerobic livestock lagoon (center) at a commercial dairy in Northern Colorado, USA.
Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V
2006-12-31
Based on digital image analysis and the inverse Monte Carlo method, an approximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominant type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (Special issue devoted to multiple radiation scattering in random media.)
The GLAS physical inversion method for analysis of HIRS2/MSU sounding data
NASA Technical Reports Server (NTRS)
Susskind, J.; Rosenfield, J.; Reuter, D.; Chahine, M. T.
1982-01-01
Goddard Laboratory for Atmospheric Sciences has developed a method to derive atmospheric temperature profiles, sea or land surface temperatures, sea ice extent and snow cover, and cloud heights and fractional cloud, from HIRS2/MSU radiance data. Chapter 1 describes the physics used in the radiative transfer calculations and demonstrates the accuracy of the calculations. Chapter 2 describes the rapid transmittance algorithm used and demonstrates its accuracy. Chapter 3 describes the theory and application of the techniques used to analyze the satellite data. Chapter 4 shows results obtained for January 1979.
Magnain, Caroline; Elias, Mady; Frigerio, Jean-Marc
2008-07-01
In a previous article [J. Opt. Soc. Am. A 24, 2196 (2007)] we have modeled skin color using the radiative transfer equation, solved by the auxiliary function method. Three main parameters have been determined as being predominant in the diversity of skin color: the concentrations of melanosomes and of red blood cells and the oxygen saturation of blood. From the reflectance spectrum measured on real Caucasian skin, these parameters are now evaluated by minimizing the standard deviation on the adjusted wavelength range between the experimental spectrum and simulated spectra gathered in a database.
A Markov Chain Monte Carlo method for the groundwater inverse problem.
Lu, Z.; Higdon, D. M.; Zhang, D.
2004-01-01
In this study, we develop a Markov chain Monte Carlo (MCMC) method to estimate the hydraulic conductivity field conditioned on direct measurements of hydraulic conductivity and indirect measurements of dependent variables, such as hydraulic head, for saturated flow in randomly heterogeneous porous media. The log hydraulic conductivity field is represented (parameterized) by a combination of basis kernels centered at fixed spatial locations. The vector of coefficients θ is sampled from the posterior distribution π(θ|d), which is proportional to the product of the likelihood of the measurements d given the parameter vector θ and the prior distribution of θ. Starting from any initial setting, a partial realization of a Markov chain is generated by updating only one component of θ at a time according to Metropolis rules, which ensures that the output of the chain has π(θ|d) as its stationary distribution. The posterior mean of θ (and thus the mean log hydraulic conductivity conditioned on the measurements of hydraulic conductivity and hydraulic head) can be estimated from the Markov chain realizations (discarding some early realizations as burn-in). The uncertainty associated with the mean field can also be assessed from these realizations. In addition, the MCMC approach provides an alternative for estimating conditional predictions of hydraulic head and concentration and their associated uncertainties. Numerical examples for flow in a hypothetical random porous medium show that the log hydraulic conductivity field estimated with the MCMC approach is closer to the original hypothetical random field than those obtained using kriging or cokriging methods.
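The single-site updating scheme described above can be sketched generically. This is a minimal illustration, not the paper's code: `log_post` stands in for the log of the likelihood times the prior of the conductivity coefficients, and the step size and chain length are illustrative assumptions:

```python
import numpy as np

def single_component_metropolis(log_post, theta0, n_steps=4000, step=1.0, rng=None):
    """Metropolis sampler that perturbs one component of theta per update
    (single-site updating), so the chain has exp(log_post) as its
    stationary distribution."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.array(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_steps):
        for i in range(theta.size):                  # sweep over components
            prop = theta.copy()
            prop[i] += step * rng.standard_normal()  # perturb one component
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
                theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)
```

Averaging the chain after discarding an initial burn-in segment gives the posterior-mean estimate, mirroring how the paper estimates the mean log conductivity field.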
The planetary nebula Abell 48 and its [WN] nucleus
NASA Astrophysics Data System (ADS)
Frew, David J.; Bojičić, I. S.; Parker, Q. A.; Stupar, M.; Wachter, S.; DePew, K.; Danehkar, A.; Fitzgerald, M. T.; Douchin, D.
2014-05-01
We have conducted a detailed multi-wavelength study of the peculiar nebula Abell 48 and its central star. We classify the nucleus as a helium-rich, hydrogen-deficient star of type [WN4-5]. The evidence for either a massive WN or a low-mass [WN] interpretation is critically examined, and we firmly conclude that Abell 48 is a planetary nebula (PN) around an evolved low-mass star, rather than a Population I ejecta nebula. Importantly, the surrounding nebula has a morphology typical of PNe and is not enriched in nitrogen, and thus is not the 'peeled atmosphere' of a massive star. We estimate a distance of 1.6 kpc and a reddening, E(B - V) = 1.90 mag, the latter value clearly showing that the nebula lies on the near side of the Galactic bar and so cannot be a massive WN star. The ionized mass (~0.3 M⊙) and electron density (700 cm^-3) are typical of middle-aged PNe. The observed stellar spectrum was compared to a grid of models from the Potsdam Wolf-Rayet (PoWR) grid. The best-fitting temperature is 71 kK, and the atmospheric composition is dominated by helium with an upper limit on the hydrogen abundance of 10 per cent. Our results are in very good agreement with the recent study of Todt et al., who determined a hydrogen fraction of 10 per cent and an unusually large nitrogen fraction of ~5 per cent. This fraction is higher than in any other low-mass H-deficient star, and is not readily explained by current post-AGB models. We discuss the implications of this discovery for the late-stage evolution of intermediate-mass stars. There is now tentative evidence for two distinct helium-dominated post-AGB lineages, separate from the helium- and carbon-dominated surface compositions produced by a late thermal pulse. Further theoretical work is needed to explain these recent discoveries.
NASA Astrophysics Data System (ADS)
Larour, E. Y.; Rignot, E.; Joughin, I.
2004-12-01
Ice shelves floating around the Antarctic Ice Sheet spread under their own weight into the ocean. The ice flow is controlled by the rigidity of ice and a delicate interaction with bottom and surface accumulation. Rigidity (or flow law parameter B) depends mainly on temperature [Paterson, 1994] and fabric. This study presents an inverse control method developed to infer B on ice shelves. The method is based on finding the best fit to observations of ice velocity from satellite radar interferometry. The model was tested on the Ronne Ice Shelf, and the results show the flow law parameter B varying between 300 kPa a^1/3 and 900 kPa a^1/3. Minima appear along the ice margins, which could be due to ice softening (viscous heating). High values are found in the wake of large glaciers, which advect large quantities of cold ice. Some areas near the grounding lines experience basal melting, which increases rigidity. Melting near the ice front corresponds to areas of decreased rigidity. This method allows the modeller to account for variations in the distribution of B in ice flow models. We thank the California Institute of Technology for making this study possible.
Böckmann, C
2001-03-20
A specially developed method is proposed to retrieve the particle volume distribution, the mean refractive index, and other important physical parameters, e.g., the effective radius, volume, surface area, and number concentrations of tropospheric and stratospheric aerosols, from optical data by use of multiple wavelengths. This algorithm requires neither a priori knowledge of the analytical shape of the distribution nor an initial guess of the distribution. As a result, even bimodal and multimodal distributions can be retrieved without any advance knowledge of the number of modes. The nonlinear ill-posed inversion is achieved by means of a hybrid method combining regularization by discretization, variable higher-order B-spline functions and a truncated singular-value decomposition. The method can be used to handle different lidar devices that work with various values and numbers of wavelengths. It is shown, to my knowledge for the first time, that only one extinction and three backscatter coefficients are sufficient for the solution. Moreover, measurement errors up to 20% are allowed. This result could be achieved by a judicious fusion of different properties of three suitable regularization parameters. Finally, numerical results with an additional unknown refractive index show the possibility of successfully recovering both unknowns simultaneously from the lidar data: the aerosol volume distribution and the refractive index.
NASA Astrophysics Data System (ADS)
Kočí, Jan; Maděra, Jiří; Černý, Robert
2013-10-01
Verification of genetic programming (GP) as a new approach for solving inverse problems of moisture transport in building materials is presented. The GP is applied to experimental data in order to optimize the moisture diffusivity as a function of moisture content. The results show that GP is a very powerful tool for the inverse analysis of transport equations.
NASA Astrophysics Data System (ADS)
Maris, Virginie
An existing 3-D magnetotelluric (MT) inversion program written for a single-processor personal computer (PC) has been modified and parallelized using OpenMP, in order to run efficiently on a multicore workstation. The program uses the Gauss-Newton inversion algorithm based on a staggered-grid finite-difference forward problem, requiring explicit calculation of the Frechet derivatives. The most time-consuming tasks are calculating the derivatives and determining the model parameters at each iteration. Forward modeling and derivative calculations are parallelized by assigning the calculations for each frequency to separate threads, which execute concurrently. Model parameters are obtained by factoring the Hessian with the LDL^T method, implemented using a block-cyclic algorithm and compact storage. MT data from 102 tensor stations over the East Flank of the Coso Geothermal Field, California, are inverted. Less than three days are required to invert the dataset for ~55,000 inversion parameters on a 2.66 GHz 8-CPU PC with 16 GB of RAM. Inversion results, recovered from a halfspace starting model rather than from initial 2-D inversions, qualitatively resemble models from massively parallel 3-D inversion by other researchers and, overall, exhibit an improved fit. A steeply west-dipping conductor under the western East Flank is tentatively correlated with a zone of high-temperature ionic fluids based on known well production and lost-circulation intervals. Beneath the Main Field, vertical and north-trending shallow conductors are correlated with geothermal producing intervals as well.
Microencapsulation of maltogenic α-amylase in poly(urethane-urea) shell: inverse emulsion method.
Maciulyte, Sandra; Kochane, Tatjana; Budriene, Saulute
2015-01-01
The novel poly(urethane-urea) microcapsules (PUUMC) were obtained by an interfacial polyaddition reaction between the oil-soluble hexamethylene diisocyanate (HMDI) and the water-soluble poly(vinyl alcohol) (PVA) in a water-in-oil (W/O) emulsion. The PVA was used instead of diols. Maltogenase L (maltogenic α-amylase from Bacillus stearothermophilus, E.C. 3.2.1.133; MG) was encapsulated in the PUUMC during or after formation of the capsules. The PUUMC were thoroughly characterised by chemical analytical methods, FT-IR, SEM, thermal analysis, and surface area, pore volume and size analysis. Furthermore, by carefully analysing the influencing factors, including the catalyst and surfactants and their concentrations, the initial molar ratio of PVA to HMDI, the stirring rate and the ratio of dispersed phase to external phase, the optimum synthesis conditions were found. A controlled release of MG could be observed in many cases. Delayed-release capsules were obtained when the initial concentration of HMDI was increased. These capsules have potential applications in biotechnology for the saccharification of starch. PMID:26190216
Nonlinear evolution-type equations and their exact solutions using inverse variational methods
NASA Astrophysics Data System (ADS)
Kara, A. H.; Khalique, C. M.
2005-05-01
We present the role of invariants in obtaining exact solutions of differential equations. First, conserved vectors of a partial differential equation (p.d.e.) allow us to obtain reduced forms of the p.d.e. for which some of the Lie point symmetries (in vector field form) are easily deduced and, therefore, provide a mechanism for further reduction. Second, invariants of reduced forms of a p.d.e. are obtainable from a variational principle even though the p.d.e. itself does not admit a Lagrangian. In this latter case, the reductions carry all the usual advantages regarding Noether symmetries and double reductions. The examples we consider are nonlinear evolution-type equations such as the Korteweg-de Vries equation, but a detailed analysis is made of the Fisher equation (which describes reaction-diffusion waves in biology, inter alia). Other diffusion-type equations, such as the FitzHugh-Nagumo equation (briefly discussed), also lend themselves well to the method we describe. Some aspects of Painlevé properties are also suggested.
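As a concrete instance of the reductions discussed above, the standard travelling-wave ansatz reduces the Fisher equation to an ordinary differential equation (a textbook reduction, shown here for illustration only):

```latex
u_t = u_{xx} + u(1-u), \qquad u(x,t) = U(\xi), \quad \xi = x - ct
\;\Longrightarrow\; U'' + c\,U' + U(1-U) = 0 .
```

Invariants of this reduced ODE are then accessible through the variational and Noether-symmetry machinery even though the original p.d.e. admits no Lagrangian.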
Drory Retwitzer, Matan; Kifer, Ilona; Sengupta, Supratim; Yakhini, Zohar; Barash, Danny
2015-01-01
Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems in the evolutionary timescale. One of the biggest challenges in riboswitch research is to find additional eukaryotic riboswitches since more than 20 riboswitch classes have been found in prokaryotes but only one class has been found in eukaryotes. Moreover, this single known class of eukaryotic riboswitch, namely the TPP riboswitch class, has been found in bacteria, archaea, fungi and plants but not in animals. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods such as a combination of BLAST and pattern matching techniques that incorporate base-pairing considerations. None of these approaches perform energy minimization structure predictions. There is a clear motivation to develop new bioinformatics methods, aside of the ongoing advances in covariance models, that will sample the sequence search space more flexibly using structural guidance while retaining the computational efficiency of sequence-based methods. We present a new energy minimization approach that transforms structure-based search into a sequence-based search, thereby enabling the utilization of well established sequence-based search utilities such as BLAST and FASTA. The transformation to sequence space is obtained by using an extended inverse RNA folding problem solver with sequence and structure constraints, available within RNAfbinv. Examples in applying the new method are presented for the purine and preQ1 riboswitches. The method is described in detail along with its findings in prokaryotes. Potential uses in finding novel eukaryotic riboswitches and optimizing pre-designed synthetic riboswitches based on ligand simulations are discussed. The method components are freely available for use. PMID
NASA Astrophysics Data System (ADS)
Ren, Cong
Nowadays, micro-tubular solid oxide fuel cells (MT-SOFCs), especially anode-supported MT-SOFCs, have been extensively developed for SOFC stack design, with potential applications in portable power sources and vehicle power supplies. To prepare MT-SOFCs with high electrochemical performance, one of the main strategies is to optimize the microstructure of the anode support. Recently, a novel phase-inversion method has been applied to prepare anode supports with a unique asymmetrical microstructure, which can improve the electrochemical performance of MT-SOFCs. Since several process parameters of the phase-inversion method can influence the pore-formation mechanism and the final microstructure, it is essential to systematically investigate the relationship between the phase-inversion process parameters and the final microstructure of the anode supports. The objective of this study is to correlate the process parameters with the microstructure and thereby prepare MT-SOFCs with enhanced electrochemical performance. The non-solvent, which is used to trigger the phase-separation process, can significantly influence the microstructure of the anode support fabricated by phase inversion. To investigate how the non-solvent affects the microstructure, water and ethanol/water mixtures were selected for the fabrication of NiO-YSZ anode supports. The presence of ethanol in the non-solvent can inhibit the growth of finger-like pores in the tubes. With increasing ethanol concentration in the non-solvent, a relatively dense layer can be observed on both the outside and the inside of the tubes. The mechanism of pore growth and the morphology obtained with a non-solvent of high ethanol concentration were explained in terms of the inter-diffusivity between solvent and non-solvent: a solvent/non-solvent pair with a larger Dm value favours the growth of finger-like pores. Three cells with different anode geometries was
Affagard, Jean-Sébastien; Feissel, Pierre; Bensamoun, Sabine F
2015-11-26
The mechanical behavior of muscle tissue is an important field of investigation, with applications in medicine, car-crash analysis and sport, for example. Currently, few in vivo imaging techniques are able to characterize the mechanical properties of muscle. Thus, this study presents an in vivo method to identify hyperelastic behavior from a displacement field measured with ultrasound and Digital Image Correlation (DIC) techniques. The identification approach comprises three interdependent steps. The first step is a 2D MRI acquisition of the thigh, from which the muscles (quadriceps, ischio, gracilis and sartorius) and fat tissue are manually segmented and a finite element model is developed; a Neo-Hookean model is chosen to characterize the hyperelastic behavior (C10, D) and simulate a displacement field. Secondly, an experimental compression device was developed to measure the in vivo displacement fields in several areas of the thigh. Finally, an inverse method was used to identify the C10 and D parameters of each soft tissue. The identification procedure was validated by comparison with the literature. The relevance of this study is the identification of the mechanical properties of each investigated soft tissue.
Zhu, Bing; Chen, Yizhou; Zhao, Jian
2014-01-01
An integrated chassis control (ICC) system with active front steering (AFS) and yaw stability control (YSC) is introduced in this paper. The proposed ICC algorithm uses the improved Inverse Nyquist Array (INA) method based on a 2-degree-of-freedom (DOF) planar vehicle reference model to decouple the plant dynamics under different frequency bands, and the change of velocity and cornering stiffness were considered to calculate the analytical solution in the precompensator design so that the INA based algorithm runs well and fast on the nonlinear vehicle system. The stability of the system is guaranteed by dynamic compensator together with a proposed PI feedback controller. After the response analysis of the system on frequency domain and time domain, simulations under step steering maneuver were carried out using a 2-DOF vehicle model and a 14-DOF vehicle model by Matlab/Simulink. The results show that the system is decoupled and the vehicle handling and stability performance are significantly improved by the proposed method.
Kerr, H.G.; White, N.
1996-03-01
A general, automatic method for determining the three-dimensional geometry of a normal fault of any shape and size is applied to a three-dimensional seismic reflection data set from the Nun River field, Nigeria. In addition to calculating fault geometry, the method also automatically retrieves the extension direction without requiring any previous information about either the fault shape or the extension direction. Solutions are found by minimizing the misfit between sets of faults that are calculated from the observed geometries of two or more hanging-wall beds. In the example discussed here, the predicted fault surface is in excellent agreement with the shape of the seismically imaged fault. Although the calculated extension direction is oblique to the average strike of the fault, the value of this parameter is not well resolved. Our approach differs markedly from standard section-balancing models in two important ways. First, we do not assume that the extension direction is known, and second, the use of inverse theory ensures that formal confidence bounds can be determined for calculated fault geometries. This ability has important implications for a range of geological problems encountered at both exploration and production scales. In particular, once the three-dimensional displacement field has been constrained, the difficult but important problem of three-dimensional palinspastic restoration of hanging-wall structures becomes tractable.
Müller, David; Cattaneo, Stefano; Meier, Florian; Welz, Roland; de Vries, Tjerk; Portugal-Cohen, Meital; Antonio, Diana C; Cascio, Claudia; Calzolai, Luigi; Gilliland, Douglas; de Mello, Andrew
2016-04-01
We demonstrate the use of inverse supercritical carbon dioxide (scCO2) extraction as a novel method of sample preparation for the analysis of complex nanoparticle-containing samples, in our case a model sunscreen agent with titanium dioxide nanoparticles. The sample was prepared for analysis in a simplified process using a lab scale supercritical fluid extraction system. The residual material was easily dispersed in an aqueous solution and analyzed by Asymmetrical Flow Field-Flow Fractionation (AF4) hyphenated with UV- and Multi-Angle Light Scattering detection. The obtained results allowed an unambiguous determination of the presence of nanoparticles within the sample, with almost no background from the matrix itself, and showed that the size distribution of the nanoparticles is essentially maintained. These results are especially relevant in view of recently introduced regulatory requirements concerning the labeling of nanoparticle-containing products. The novel sample preparation method is potentially applicable to commercial sunscreens or other emulsion-based cosmetic products and has important ecological advantages over currently used sample preparation techniques involving organic solvents.
Kagel, J R; Rossi, D T; Hoffman, K L; Leja, B; Lathia, C D
1999-11-01
A chiral HPLC method to quantify in vivo enantiomeric inversion of prodrug CI-1010 (IR) or its drug IIR (PD 146923), a radiosensitizer, upon X-irradiation of dosed rats was developed. These polar enantiomers were separated only by using normal-phase chiral HPLC. A Chiralpak AS column provided the best separation. Isolation of analytes from plasma employed solid-phase extraction (SPE), and required conditions that were compatible with normal-phase HPLC. Options for SPE were restricted by the chemically reactive nature of both prodrug and drug, which produced analyte losses as high as 100%. Acceptable recoveries using SPE required evaluation of conditions for analyte chemical stability. The validated method gave a lower-limit of quantitation (LLOQ) of 200 ng/ml for each enantiomer extracted from 0.15 ml of plasma. The LLOQ of the inverted enantiomer could be detected in the presence of 10,000 ng/ml of the dosed enantiomer. Precision (RSD) ranged from 14.2 to 4.4%, and from 24.2 to 5.1% for IIS and IIR, respectively. Accuracy (RE) was +/- 13.1 and +/- 13.2%, respectively. Recoveries ranged from 44.3 to 71.4%, and from 40.7 to 67.9%, for IIS and IIR, respectively.
Lewy, Serge
2008-07-01
Spinning modes generated by a ducted turbofan at a given frequency determine the acoustic free-field directivity. An inverse method starting from measured directivity patterns is attractive because it provides information on the noise sources without requiring tedious experimental spinning-mode analyses. Following a previous article, the equations are based on analytical modal splitting inside a cylindrical duct and on a Rayleigh or Kirchhoff integral over the duct exit cross-section to obtain the far-field directivity. The equations are equal in number to the free-field measurement locations, and the unknowns are the propagating mode amplitudes (there are generally more unknowns than equations). A MATLAB procedure has been implemented using either the pseudoinverse function or the backslash operator. A constraint comes from the fact that the squared modal amplitudes must be positive, which requires an iterative least-squares fit. Numerical simulations are discussed along with several examples based on tests performed by Rolls-Royce in the framework of a European project. The computation is very fast and fits the measured directivities well, but the solution depends on the method and is not unique. This means that the initial set of modes should be chosen according to any known physical properties of the acoustic sources. PMID:18646973
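The positivity constraint on the squared modal amplitudes turns the fit into a nonnegative least-squares problem. A minimal projected-gradient sketch is shown below; the directivity matrix G and the amplitudes are synthetic stand-ins, not the article's duct model or its MATLAB procedure:

```python
import numpy as np

def nnls_pg(G, d, iters=20000):
    """Fit nonnegative squared modal amplitudes a to a measured directivity d
    by projected gradient on ||G a - d||^2 subject to a >= 0.
    Each column of G holds one candidate mode's far-field power pattern."""
    L = np.linalg.norm(G, 2) ** 2      # Lipschitz constant of the gradient
    a = np.zeros(G.shape[1])
    for _ in range(iters):
        # gradient step on the least-squares objective, then project onto a >= 0
        a = np.maximum(0.0, a - (G.T @ (G @ a - d)) / L)
    return a
```

With noise-free synthetic data the iteration recovers the generating amplitudes; with real measurements, as the article notes, the recovered set depends on which candidate modes are included.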
A shock at the radio relic position in Abell 115
NASA Astrophysics Data System (ADS)
Botteon, A.; Gastaldello, F.; Brunetti, G.; Dallacasa, D.
2016-07-01
We analysed a deep Chandra observation (334 ks) of the galaxy cluster Abell 115 and detected a shock cospatial with the radio relic. The X-ray surface brightness profile across the shock region presents a discontinuity corresponding to a density compression factor C = 2.0 ± 0.1, leading to a Mach number M = 1.7 ± 0.1 (M = 1.4-2 including systematics). Temperatures measured in the upstream and downstream regions are consistent with what is expected for such a shock: T_u = 4.3 (+1.0/-0.6) keV and T_d = 7.9 (+1.4/-1.1) keV, respectively, implying a Mach number M = 1.8 (+0.5/-0.4). So far, only a few other shocks discovered in galaxy clusters have been consistently detected from both density and temperature jumps. The spatial coincidence between this discontinuity and the radio relic edge strongly supports the view that shocks play a crucial role in powering these synchrotron sources. We suggest that the relic originates from shock re-acceleration of relativistic electrons rather than acceleration from the thermal pool. The position and curvature of the shock and the associated relic are consistent with an off-axis merger with unequal mass ratio, in which the shock is expected to bend around the core of the less massive cluster.
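The quoted Mach numbers can be checked against the standard Rankine-Hugoniot jump conditions for a monatomic gas (gamma = 5/3). These are textbook shock relations, not code from the paper:

```python
import numpy as np

def mach_from_compression(C, gamma=5.0 / 3.0):
    """Invert the density jump C = (g+1)M^2 / ((g-1)M^2 + 2) for M.
    Valid for C below the strong-shock limit (g+1)/(g-1) (= 4 for g = 5/3)."""
    return np.sqrt(2.0 * C / ((gamma + 1.0) - C * (gamma - 1.0)))

def mach_from_temperature(T_ratio, gamma=5.0 / 3.0):
    """Solve the temperature jump T_d/T_u for M by bisection; the jump is a
    monotonically increasing function of M for M >= 1."""
    def tjump(M):
        M2 = M * M
        return ((2.0 * gamma * M2 - (gamma - 1.0)) * ((gamma - 1.0) * M2 + 2.0)) \
            / ((gamma + 1.0) ** 2 * M2)
    lo, hi = 1.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if tjump(mid) < T_ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For C = 2.0 the first relation gives M = sqrt(3) ≈ 1.73, and for T_d/T_u = 7.9/4.3 the second gives M ≈ 1.8, both consistent with the values quoted above.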
The Radio Luminosity Function and Galaxy Evolution of Abell 2256
NASA Astrophysics Data System (ADS)
Forootaninia, Zahra
2015-05-01
This thesis presents a study of the radio luminosity function and the evolution of galaxies in the Abell 2256 cluster (z=0.058, richness class 2). Using the NED database and VLA deep data with an rms sensitivity of 18 μJy beam^-1, we identified 257 optical galaxies as members of A2256, of which 83 are radio galaxies. Since A2256 is undergoing a cluster-cluster merger, it is a good candidate for studying the radio activity of galaxies in the cluster. We calculated the univariate and bivariate radio luminosity functions for A2256 and compared the results to studies of other clusters. We also used the SDSS parameter fracDev to roughly classify galaxies as spirals and ellipticals, and investigated the distribution and structure of galaxies in the cluster. We found that most of the radio galaxies in A2256 are faint and are distributed towards the outskirts of the cluster. On the other hand, almost all very bright radio galaxies are ellipticals located at the center of the cluster. We also found an excess in the number of radio spiral galaxies in A2256 compared to the number of radio ellipticals, counting down to a radio luminosity of log(L) = 20.135 W/Hz.
Abell 1201: A Minor Merger at Second Core Passage
NASA Astrophysics Data System (ADS)
Ma, Cheng-Jiun; Owers, Matt; Nulsen, Paul E. J.; McNamara, Brian R.; Murray, Stephen S.; Couch, Warrick J.
2012-06-01
We present an analysis of the structures and dynamics of the merging cluster Abell 1201, which has two sloshing cold fronts around a cooling core, and an offset gas core approximately 500 kpc northwest of the center. New Chandra and XMM-Newton data reveal a region of enhanced brightness east of the offset core, with breaks in surface brightness along its boundary to the north and east. This is interpreted as a tail of gas stripped from the offset core. Gas in the offset core and the tail is distinguished from other gas at the same distance from the cluster center chiefly by having higher density, hence lower entropy. In addition, the offset core shows marginally lower temperature and metallicity than the surrounding area. The metallicity in the cool core is high and there is an abrupt drop in metallicity across the southern cold front. We interpret the observed properties of the system, including the placement of the cold fronts, the offset core, and its tail in terms of a simple merger scenario. The offset core is the remnant of a merging subcluster, which first passed pericenter southeast of the center of the primary cluster and is now close to its second pericenter passage, moving at ~= 1000 km s-1. Sloshing excited by the merger gave rise to the two cold fronts and the disposition of the cold fronts reveals that we view the merger from close to the plane of the orbit of the offset core.
Chandra Observations of Point Sources in Abell 2255
NASA Technical Reports Server (NTRS)
Davis, David S.; Miller, Neal A.; Mushotzky, Richard F.
2003-01-01
In our search for "hidden" AGN we present results from a Chandra observation of the nearby cluster Abell 2255. Eight cluster galaxies are associated with point-like X-ray emission, and we classify these galaxies based on their X-ray, radio, and optical properties. At least three are associated with active galactic nuclei (AGN) with no optical signatures of nuclear activity, with a further two being potential AGN. Of the potential AGN, one corresponds to a galaxy with a post-starburst optical spectrum. The remaining three X-ray detected cluster galaxies consist of two starbursts and an elliptical with luminous hot gas. Of the eight cluster galaxies five are associated with luminous (massive) galaxies and the remaining three lie in much lower luminosity systems. We note that the use of X-ray to optical flux ratios for classification of X-ray sources is often misleading, and strengthen the claim that the fraction of cluster galaxies hosting an AGN based on optical data is significantly lower than the fraction based on X-ray and radio data.
The Sunyaev-Zel'dovich Effect Spectrum of Abell 2163
NASA Technical Reports Server (NTRS)
LaRoque, S. J.; Carlstrom, J. E.; Reese, E. D.; Holder, G. P.; Holzapfel, W. L.; Joy, M.; Grego, L.; Six, N. Frank (Technical Monitor)
2002-01-01
We present an interferometric measurement of the Sunyaev-Zel'dovich effect (SZE) at 1 cm for the galaxy cluster Abell 2163. We combine this data point with previous measurements at 1.1, 1.4, and 2.1 mm from the SuZIE experiment to construct the most complete SZE spectrum to date. The intensity in four wavelength bands is fit to determine the Compton y-parameter (y_0) and the peculiar velocity (v_p) for this cluster. Our results are y_0 = 3.56^{+0.41+0.27}_{-0.41-0.19} × 10^-4 and v_p = 410^{+1030+460}_{-850-440} km s^-1, where we list statistical and systematic uncertainties, respectively, at 68% confidence. These results include corrections for contamination by Galactic dust emission. We find less contamination by dust emission than previously reported. The dust emission is distributed over much larger angular scales than the cluster signal and contributes little to the measured signal when the details of the SZE observing strategy are taken into account.
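As background to the four-band fit, the nonrelativistic thermal SZE has a fixed spectral shape with a null near 217 GHz; a minimal sketch of that textbook shape (not the authors' fitting code) is:

```python
import numpy as np

def thermal_sze_shape(x):
    # Frequency dependence of the thermal SZE intensity change (nonrelativistic
    # limit), with x = h*nu / (k_B * T_CMB), i.e. x ~ nu[GHz] / 56.8 for T_CMB = 2.725 K.
    ex = np.exp(x)
    return x**4 * ex / (ex - 1.0)**2 * (x * (ex + 1.0) / (ex - 1.0) - 4.0)

# The thermal effect is a decrement at low frequency and an increment above the
# null near x = 3.83 (about 217 GHz); a 1 cm (30 GHz) point samples the decrement
# while the mm-band points bracket the null.
x = np.linspace(0.5, 10.0, 2000)
null = x[np.argmin(np.abs(thermal_sze_shape(x)))]
print(round(null, 2))  # 3.83
```

The kinematic (peculiar-velocity) term has a different spectral shape, which is what lets a multiband fit separate y_0 from v_p.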
Systematic Uncertainties in Characterizing Cluster Outskirts: The Case of Abell 133
NASA Astrophysics Data System (ADS)
Paine, Jennie; Ogrean, Georgiana A.; Nulsen, Paul; Farrah, Duncan
2016-01-01
The outskirts of galaxy clusters have low surface brightness compared to the X-ray background, making accurate background subtraction particularly important for analyzing cluster spectra out to and beyond the virial radius. We analyze the thermodynamic properties of the intracluster medium (ICM) of Abell 133 and assess the extent to which uncertainties on background subtraction affect measured quantities. We implement two methods of analyzing the ICM spectra: one in which the blank-sky background is subtracted, and another in which the sky background is modeled. We find that the two methods are consistent within the 90% confidence ranges. We were able to measure the thermodynamic properties of the cluster up to R500. Even at R500, the systematic uncertainties associated with the sky background in the direction of A133 are small, despite the ICM signal constituting only ~25% of the total signal. This work was supported in part by the NSF REU and DoD ASSURE programs under NSF grant no. 1262851 and by the Smithsonian Institution. GAO acknowledges support by NASA through a Hubble Fellowship grant HST-HF2-51345.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
Gu, Y. D.; Ren, X. J.; Li, J. S.; Lake, M. J.; Zhang, Q. Y.
2009-01-01
Metatarsal fracture is one of the most common foot injuries, particularly in athletes and soldiers, and is often associated with landing in inversion. An improved understanding of deformation of the metatarsals under inversion landing conditions is essential in the diagnosis and prevention of metatarsal injuries. In this work, a detailed three-dimensional (3D) finite element foot model was developed to investigate the effect of inversion positions on stress distribution and concentration within the metatarsals. The predicted plantar pressure distribution showed good agreement with data from controlled biomechanical tests. The deformation and stresses of the metatarsals during landing at different inversion angles (normal landing, 10 degree inversion, and 20 degree inversion) were comparatively studied. The results showed that stress in the lateral metatarsals increased, while stress in the medial metatarsals decreased, with increasing inversion angle. The peak stress point was found near the proximal part of the fifth metatarsal, which corresponds with reported clinical observations of metatarsal injuries. PMID:19685241
Zhang, T; Marina, O; Chen, P; Teahan, M; Liu, Q; Benedetti, L
2014-06-01
Purpose: The purpose of this study is to develop a three-field mono-isocentric inverse breast treatment planning technique without the use of a half-beam block. Methods: Conventional three-field breast treatment with a half-beam-blocked supraclavicular field requires two isocenters when the breast is too large to be contained within the half-beam. The inferior border of the supraclavicular field and the superior borders of the breast fields are matched on the patient's skin with the light field. This method causes a large dose variation in the matching region due to daily setup uncertainties and requires a longer treatment setup time. We developed a three-field mono-isocentric planning method for the treatment of larger breasts. The three fields share the same isocenter, located in the breast. Beam matching is achieved by rotating the collimator, couch, and gantry. Furthermore, we employed a mixed open-field/IMRT inverse optimization method to improve dose uniformity and coverage. Results: Perfect geometric beam matching was achieved by rotating the couch, collimator, and gantry together. Treatment setup time was significantly reduced without light-field matching during treatment deliveries. The inverse mixed open-field/IMRT optimization method achieved better dose uniformity and PTV coverage while keeping sufficient air flash to compensate for setup and breast shape changes in daily treatments. Conclusion: By eliminating light-field matching, the three-field mono-isocentric treatment method can significantly reduce setup time and uncertainty for large-breast patients. Plan quality is further improved by inverse IMRT planning.
Xu, Shenghua; Liu, Jie; Sun, Zhiwei; Zhang, Pu
2008-10-01
The refractive indices of particles and of the dispersion medium are important parameters in many colloidal experiments using optical techniques, such as turbidity and light scattering measurements. These data are in general wavelength-dependent and may not be available at the wavelengths an experiment requires. In this study we present a novel approach to inversely determine the refractive indices of particles and dispersion medium by examining the consistency of measured extinction cross sections of particles with their theoretical values, using a series of trial values of the refractive indices. A colloidal suspension of polystyrene particles dispersed in water was used as an example to demonstrate how this approach works, and the data obtained via this method are compared with those reported in the literature, showing good agreement. Furthermore, the factors that affect the accuracy of the measurements are discussed. We also present data on the refractive indices of polystyrene over a range of wavelengths below 400 nm that have not been reported in the available literature.
Disentangling the ICL with the CHEFs: Abell 2744 as a Case Study
NASA Astrophysics Data System (ADS)
Jiménez-Teja, Y.; Dupke, R.
2016-03-01
Measurements of the intracluster light (ICL) are still prone to methodological ambiguities, and there are multiple techniques in the literature to address them, mostly based on the binding energy, the local density distribution, or the surface brightness. A common issue with these methods is the a priori assumption of a number of hypotheses on either the ICL morphology, its surface brightness level, or some properties of the brightest cluster galaxy (BCG). The discrepancy in the results is high, and numerical simulations only place a boundary on the ICL fraction in present-day galaxy clusters in the range 10%-50%. We developed a new algorithm based on the Chebyshev-Fourier functions to estimate the ICL fraction without relying on any a priori assumption about the physical or geometrical characteristics of the ICL. We are able not only to disentangle the ICL from the galactic luminosity but also to mark out the limits of the BCG from the ICL in a natural way. We test our technique on the recently released data of the cluster Abell 2744, observed by the Frontier Fields program. The complexity of this multiple-merger cluster system and the formidable depth of these images make it a challenging test case to prove the efficiency of our algorithm. We found a final ICL fraction of 19.17 ± 2.87%, which is very consistent with numerical simulations.
Narrow-angle tail radio sources and the distribution of galaxy orbits in Abell clusters
NASA Technical Reports Server (NTRS)
O'Dea, Christopher P.; Sarazin, Craig L.; Owen, Frazer N.
1987-01-01
The present data on the orientations of the tails with respect to the cluster centers of a sample of 70 narrow-angle-tail (NAT) radio sources in Abell clusters show the distribution of tail angles to be inconsistent with purely radial or circular orbits in all the samples, while being consistent with isotropic orbits in (1) the whole sample, (2) the sample of NATs far from the cluster center, and (3) the samples of morphologically regular Abell clusters. Evidence for very radial orbits is found, however, in the sample of NATs near the cluster center. If these results can be generalized to all cluster galaxies, then the presence of radial orbits near the center of Abell clusters suggests that violent relaxation may not have been fully effective even within the cores of the regular clusters.
Lee, Myung W.
2006-01-01
Elastic properties of gas hydrate-bearing sediments (GHBS) are important for identifying and quantifying gas hydrate, as well as for discriminating the effect of free gas on velocity from that of overpressure. Elastic properties of GHBS can be estimated from elastic inversion using the elastic impedance. The accuracy of elastic inversion can be increased by using the predicted S-wave velocity (Vs) in the parameter k = (Vs/Vp)^2. However, when Vs is less than about 0.6 km/s, the inversion is inaccurate, partly because of the difficulty in accurately predicting low S-wave velocities and partly because of the large error associated with small k values. A new formula that leads to estimates of only the high-frequency part of velocity is proposed by decomposing Vs into low- and high-frequency parts. This new inversion formula is applied to a variety of well logs, and the results demonstrate its effectiveness for all ranges of Vs as long as the deviation of Vs from its low-frequency part is small. For GHBS, that deviation can be large at moderate to high gas hydrate saturations. Therefore, the new formula is not effective for elastic inversion of GHBS unless the gas hydrate effect is incorporated into the low-frequency part of Vs. For inversion of GHBS with Vs greater than about 0.6 km/s, the original formulation is preferable.
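To make the role of k = (Vs/Vp)^2 concrete, here is a minimal sketch of a common elastic-impedance form (the Connolly-type attribute; the velocity and density values are illustrative, not from the paper):

```python
import numpy as np

def elastic_impedance(vp, vs, rho, theta_deg, k=None):
    # Connolly-type elastic impedance EI(theta); by default k = (vs/vp)^2 is
    # evaluated from the inputs, though in practice a single average k is
    # often fixed over the interval of interest.
    theta = np.radians(theta_deg)
    if k is None:
        k = (vs / vp) ** 2
    s2 = np.sin(theta) ** 2
    return (vp ** (1.0 + np.tan(theta) ** 2)
            * vs ** (-8.0 * k * s2)
            * rho ** (1.0 - 4.0 * k * s2))

# At normal incidence EI reduces to the acoustic impedance vp * rho.
print(elastic_impedance(2.0, 0.6, 2.1, 0.0))  # 4.2
```

Because k multiplies the Vs and ρ exponents, a relative error in a small k propagates strongly into EI at nonzero incidence angles, which is the sensitivity the abstract describes for Vs below about 0.6 km/s.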
NASA Astrophysics Data System (ADS)
Nikitenko, N. I.
1981-12-01
The paper develops a difference method for solving inverse geometric problems of heat conduction relating to the determination of the coordinates of a moving boundary with respect to steps along the time axis. At every step, the desired function is expanded in a power series of the time coordinate; and the coefficients of this series are found through the multiple solution of direct heat-conduction problems for systems with moving phase boundaries. Numerical results indicate that the error of the solution of incorrectly stated inverse geometric problems differs only insignificantly from the error of the initial data.
NASA Astrophysics Data System (ADS)
Sasaki, Yutaka; Meju, Max A.
2006-07-01
The controlled-source dual horizontal-loop harmonic electromagnetic (HLEM) profiling method is well suited to the problem of investigating fracture zones in crystalline rocks, but there are still limitations in the way that experimental data are currently interpreted: the use of 1-D data inversion leads to inaccurate determination of geological structure. To allow accurate characterization of zones of fractured rock, especially underneath heterogeneous overburden, we have developed an efficient 2.5-D regularized inversion method for reconstructing subsurface electrical resistivity distributions from multifrequency HLEM data, with the forward problem solved in 3-D using a staggered-grid finite-difference method. The inversion method is validated using a synthetic example and practical data sets from four borehole sites in a granitic terrain in northeast Brazil. An appraisal of our results for sites with boreholes sited using conventional data analysis procedures shows that we can distinguish between optimally located productive wells in fracture-zone lineaments and those with diminished yields in the weathered layer with no major underlying fracture zones. We suggest that 2.5-D inversion can aid in developing better strategies for sustainable groundwater resource development in basement terrains.
The nearby Abell clusters. III. Luminosity functions for eight rich clusters
Oegerle, W. R.; Hoessel, J. G. (Washburn Observatory, Madison, WI)
1989-11-01
Red photographic data on eight rich Abell clusters are combined with previous results on four other Abell clusters to study the luminosity functions of the clusters. The results produce a mean value of the characteristic galaxy magnitude (M*) that is consistent with previous results. No relation is found between the magnitude of the first-ranked cluster galaxy and M*, suggesting that the value of M* is not changed by dynamical evolution. The faint ends of the luminosity functions for many of the clusters are quite flat, confirming the nonuniversality of Schechter (1976) function parametrizations for rich clusters of galaxies. 40 refs.
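The Schechter function behind M* and the flat faint ends can be sketched directly; a minimal evaluation in absolute magnitudes, with parameter values chosen for illustration rather than fitted to these clusters:

```python
import numpy as np

def schechter_mag(M, M_star, alpha, phi_star=1.0):
    # Schechter (1976) luminosity function expressed in absolute magnitudes:
    # phi(M) = 0.4 ln(10) phi* x^(alpha+1) exp(-x), with x = 10^(-0.4 (M - M*)).
    x = 10.0 ** (-0.4 * (np.asarray(M) - M_star))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

# A flat faint end corresponds to alpha near -1: phi(M) tends to a constant
# for magnitudes much fainter than M*.
mags = np.array([-18.0, -16.0, -14.0])
vals = schechter_mag(mags, M_star=-21.0, alpha=-1.0)
print(np.round(vals, 3))
```

With α = -1 the three faint-end values are nearly equal, which is what a "quite flat" faint end looks like in magnitude counts.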
U(1)-invariant membranes: The geometric formulation, Abel, and pendulum differential equations
Zheltukhin, A. A.; Trzetrzelewski, M.
2010-06-15
The geometric approach to study the dynamics of U(1)-invariant membranes is developed. The approach reveals an important role of the Abel nonlinear differential equation of the first type with variable coefficients depending on time and one of the membrane extendedness parameters. The general solution of the Abel equation is constructed. Exact solutions of the whole system of membrane equations in the D=5 Minkowski space-time are found and classified. It is shown that if the radial component of the membrane world vector is only time dependent, then the dynamics is described by the pendulum equation.
THE GALAXY POPULATION OF LOW-REDSHIFT ABELL CLUSTERS
Barkhouse, Wayne A.; Yee, H. K. C.; Lopez-Cruz, Omar
2009-10-01
We present a study of the luminosity and color properties of galaxies selected from a sample of 57 low-redshift Abell clusters. We utilize the non-parametric dwarf-to-giant ratio (DGR) and the blue galaxy fraction (f{sub b} ) to investigate the clustercentric radial-dependent changes in the cluster galaxy population. Composite cluster samples are combined by scaling the counting radius by r {sub 200} to minimize radius selection bias. The separation of galaxies into a red and blue population was achieved by selecting galaxies relative to the cluster color-magnitude relation. The DGR of the red and blue galaxies is found to be independent of cluster richness (B {sub gc}), although the DGR is larger for the blue population at all measured radii. A decrease in the DGR for the red and red+blue galaxies is detected in the cluster core region, while the blue galaxy DGR is nearly independent of radius. The f{sub b} is found not to correlate with B {sub gc}; however, a steady decline toward the inner-cluster region is observed for the giant galaxies. The dwarf galaxy f{sub b} is approximately constant with clustercentric radius except for the inner-cluster core region where f{sub b} decreases. The clustercentric radial dependence of the DGR and the galaxy blue fraction indicates that it is unlikely that a simple scenario based on either pure disruption or pure fading/reddening can describe the evolution of infalling dwarf galaxies; both outcomes are produced by the cluster environment.
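The two non-parametric statistics used here are simple counts; a minimal sketch, in which the magnitude split and the blue/red color offset are illustrative assumptions rather than the paper's actual cuts:

```python
import numpy as np

def dwarf_to_giant_ratio(mags, m_split=-19.0):
    # Non-parametric DGR: count of faint galaxies over count of bright ones
    # about a fixed absolute-magnitude split (the split value is illustrative).
    mags = np.asarray(mags)
    return np.sum(mags > m_split) / np.sum(mags <= m_split)

def blue_fraction(colors, red_sequence_color, offset=0.2):
    # Fraction of galaxies bluer than the red sequence by more than `offset` mag
    # (the offset is likewise an illustrative choice, not the paper's cut).
    return np.mean(np.asarray(colors) < red_sequence_color - offset)

print(dwarf_to_giant_ratio([-20.5, -19.5, -18.0, -17.0, -16.5, -15.0]))  # 4 dwarfs / 2 giants = 2.0
print(blue_fraction([1.02, 0.98, 0.55, 0.40], red_sequence_color=1.0))   # 2 of 4 -> 0.5
```

In the paper both statistics are computed in clustercentric radial bins scaled by r_200, so each bin yields one DGR and one f_b value.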
Li, Shiyang; Zheng, Limei; Jiang, Wenhua; Sahul, Raffi; Gopalan, Venkatraman; Cao, Wenwu
2013-09-14
The most difficult task in the characterization of a complete set of material properties for piezoelectric materials is self-consistency. Because there are many independent elastic, dielectric, and piezoelectric constants, several samples are needed to obtain the full set of constants. Property variation from sample to sample often makes the obtained data set lack self-consistency. Here, we present a method, based on pulse-echo ultrasound and inverse impedance spectroscopy, to precisely determine the full set of physical properties of piezoelectric materials using only one small sample, which eliminates the sample-to-sample variation problem and guarantees self-consistency. The method has been applied to characterize the [001]C poled Mn-modified 0.27Pb(In1/2Nb1/2)O3-0.46Pb(Mg1/3Nb2/3)O3-0.27PbTiO3 single crystal, and the validity of the measured data is confirmed by a previously established method. For the inverse calculations using the impedance spectrum, the stability of the reconstructed results is analyzed by fluctuation analysis of the input data. In contrast to conventional regression methods, our method takes full advantage of both ultrasonic and inverse impedance spectroscopy methods to extract all constants from only one small sample. The method provides a powerful tool for characterizing novel piezoelectric materials available only in small sizes and for generating needed input data sets for device designs using finite element simulations.
NASA Astrophysics Data System (ADS)
Chedin, A.; Scott, N. A.; Flobert, J.; Husson, N.; Levy, C.; Rochard, G.; Quere, J.; Bellec, B.; Simeon, J.
1987-08-01
The improved initialization inversion method for the three-dimensional analysis of atmospheric structure from satellite observations (TIROS-N series) was applied to NOAA-7 data over Europe. The scenes selected correspond to complex meteorological situations that resulted in substantial forecasting errors. One of the situations is presented. Comparisons between retrieved and operational (conventional) thickness charts show that the method is ready for operational use.
Cheung, Mark C. M.; Boerner, P.; Schrijver, C. J.; Malanushenko, A.; Testa, P.; Chen, F.; Peter, H.
2015-07-10
We present a new method for performing differential emission measure (DEM) inversions on narrow-band EUV images from the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory. The method yields positive definite DEM solutions by solving a linear program. This method has been validated against a diverse set of thermal models of varying complexity and realism. These include (1) idealized Gaussian DEM distributions, (2) 3D models of NOAA Active Region 11158 comprising quasi-steady loop atmospheres in a nonlinear force-free field, and (3) thermodynamic models from a fully compressible, 3D MHD simulation of active region (AR) corona formation following magnetic flux emergence. We then present results from the application of the method to AIA observations of Active Region 11158, comparing the region's thermal structure on two successive solar rotations. Additionally, we show how the DEM inversion method can be adapted to simultaneously invert AIA and Hinode X-ray Telescope data, and how supplementing AIA data with the latter improves the inversion result. The speed of the method allows for routine production of DEM maps, thus facilitating science studies that require tracking of the thermal structure of the solar corona in time and space.
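A toy version of a positivity-constrained linear-program DEM solve can be sketched as follows; the response matrix here is a random stand-in, not the AIA calibration, and SciPy's HiGHS solver is assumed available:

```python
import numpy as np
from scipy.optimize import linprog

# Toy response matrix K (channels x temperature bins) and synthetic counts y;
# the real AIA responses come from instrument calibration, so K here is a
# random stand-in only.
rng = np.random.default_rng(0)
n_chan, n_temp = 6, 12
K = rng.uniform(0.1, 1.0, size=(n_chan, n_temp))
dem_true = np.zeros(n_temp)
dem_true[4], dem_true[5] = 2.0, 1.0
y = K @ dem_true

# Linear program: minimise the total emission measure subject to reproducing
# the observed counts within a tolerance; DEM >= 0 is enforced by linprog's
# default nonnegative bounds.
tol = 1e-3 * y
A_ub = np.vstack([K, -K])
b_ub = np.concatenate([y + tol, -(y - tol)])
res = linprog(c=np.ones(n_temp), A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.status)  # 0 = optimal
```

The linear objective and bound constraints are what make the solution positive definite by construction, and the solve is fast enough to repeat per pixel, which is the property the abstract highlights for routine DEM maps.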
Fienen, Michael; Kitanidis, Peter K.; Watson, David B; Jardine, Philip M
2004-01-01
A Bayesian inverse method is applied to two electromagnetic flowmeter tests conducted in fractured weathered shale at Oak Ridge National Laboratory. Traditional deconvolution of flowmeter tests is also performed using a deterministic first-difference approach; furthermore, ordinary kriging was applied on the first-difference results to provide an additional method yielding the best estimate and confidence intervals. Depth-averaged bulk hydraulic conductivity information was available from previous testing. The three methods deconvolute the vertical profile of lateral hydraulic conductivity. A linear generalized covariance function combined with a zoning approach was used to describe structure. Nonnegativity was enforced by using a power transformation. Data screening prior to calculations was critical to obtaining reasonable results, and the quantified uncertainty estimates obtained by the inverse method led to the discovery of questionable data at the end of the process. The best estimates obtained using the inverse method and kriging compared favorably with first-difference confirmatory calculations, and all three methods were consistent with the geology at the site.
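The deterministic first-difference deconvolution mentioned above can be sketched as follows; the scaling to a depth-averaged bulk conductivity is one common convention, and the profile values are synthetic:

```python
import numpy as np

def flowmeter_first_difference(q, z, k_bulk):
    # Deterministic first-difference deconvolution of a borehole flowmeter test:
    # q[i] is the cumulative upward flow at elevation z[i] (z increasing upward);
    # layer conductivity is taken proportional to inflow per unit thickness,
    # scaled so the thickness-weighted mean equals the bulk value k_bulk.
    q, z = np.asarray(q, float), np.asarray(z, float)
    dq = np.diff(q)        # inflow contributed by each layer
    dz = np.diff(z)        # layer thickness
    b = z[-1] - z[0]       # total screened thickness
    return k_bulk * (dq / dz) * b / q[-1]

# Toy profile: a conductive layer shows up as a jump in cumulative flow.
z = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
q = np.array([0.0, 1.0, 5.0, 6.0, 7.0])   # most inflow between 1 and 2 m
k_prof = flowmeter_first_difference(q, z, k_bulk=2.0)
print(np.round(k_prof, 3))  # the 1-2 m layer dominates
```

Differencing amplifies measurement noise, which is why the paper pairs this estimate with kriging and a Bayesian inverse method that carry explicit uncertainty.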
Maestro, Alicia; Solè, Isabel; González, Carmen; Solans, Conxita; Gutiérrez, José M
2008-11-15
The low-energy emulsification method phase inversion composition (PIC) was used to prepare O/W nanoemulsions in the W/oleylammonium chloride-oleylamine-C12E10/hexadecane ionic system, where the oleylammonium acted as a cationic surfactant. The results obtained, in terms of phase diagrams and emulsion characteristics, were compared with those obtained in the system W/potassium oleate-oleic acid-C12E10/hexadecane [I. Solè, A. Maestro, C. González, C. Solans, J.M. Gutiérrez, Langmuir 22 (2006) 8326], in which the oleate acted as an anionic surfactant. This study was done in order to extend the application range of the ionic nanoemulsions, not only in anionic systems but also in cationic ones, and in order to deep further into the nanoemulsion formation mechanism. The results show again that to obtain small droplet-sized nanoemulsions it is necessary to cross a direct cubic liquid crystal phase along the emulsification path, and it is also crucial to remain in this phase enough time and to use a proper mixing rate to incorporate all the oil into the liquid crystal. Then, when nanoemulsion forms, the oil is already intimately mixed with all the components, and the nanoemulsification is easier. Structural studies made with both cationic and anionic systems confirmed that the size of the "micelles" that form the cubic phase is the same or slightly smaller than the size of the nanoemulsion droplets obtained, depending on the emulsification path, which seems to point out that the nanoemulsions are formed in both cases by a dilution process of this cubic phase. When further watery solution is added to the cubic liquid crystal, these micelles separate, disrupting the cubic structure, and a small fraction of the surfactant migrates to the water. Moreover, due to the change in pH, the spontaneous curvature increases. Then, the phases in equilibrium are an oil-in-water microemulsion (W(m)) and the oil in excess. However, through this emulsification method, the surfactants can
Reconstructing the projected gravitational potential of Abell 1689 from X-ray measurements
NASA Astrophysics Data System (ADS)
Tchernin, Céline; Majer, Charles L.; Meyer, Sven; Sarli, Eleonora; Eckert, Dominique; Bartelmann, Matthias
2015-02-01
Context. Galaxy clusters can be used as cosmological probes, but to this end they need to be thoroughly understood. Combining all cluster observables in a consistent way will help us to understand their global properties and their internal structure. Aims: We provide proof of concept that the projected gravitational potential of galaxy clusters can be reconstructed directly from X-ray observations. We also show that this joint analysis can be used to locally test the validity of the equilibrium assumptions in galaxy clusters. Methods: We used a newly developed reconstruction method, based on Richardson-Lucy deprojection, that allows reconstructing projected gravitational potentials of galaxy clusters directly from X-ray observations. We applied this algorithm to the well-studied cluster Abell 1689 and compared the gravitational potential reconstructed from X-ray observables to the potential obtained from gravitational lensing measurements. We also compared the X-ray deprojected profiles obtained by the Richardson-Lucy deprojection algorithm with the findings from the more conventional onion-peeling technique. Results: Assuming spherical symmetry and hydrostatic equilibrium, the potentials recovered from gravitational lensing and from X-ray emission agree very well beyond 500 kpc. Because the Richardson-Lucy deprojection algorithm allows deprojecting each line of sight independently, this result may indicate that non-gravitational effects and/or asphericity are strong in the central regions of clusters. Conclusions: We demonstrate the robustness of the potential reconstruction method based on the Richardson-Lucy deprojection algorithm and show that gravitational lensing and X-ray emission lead to consistent gravitational potentials. Our results illustrate the power of combining galaxy-cluster observables in a single, non-parametric, joint reconstruction of consistent cluster potentials that can be used to locally constrain the physical state
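A generic Richardson-Lucy iteration of the kind used for such deprojections can be sketched as follows; the 3×3 matrix is a toy stand-in for the true line-of-sight projection kernel:

```python
import numpy as np

def richardson_lucy(A, g, n_iter=500):
    # Generic Richardson-Lucy iteration for g = A f with f >= 0; in the real
    # problem A would be the geometric projection kernel along each line of sight.
    f = np.ones(A.shape[1])
    norm = A.T @ np.ones(A.shape[0])
    for _ in range(n_iter):
        f *= (A.T @ (g / (A @ f))) / norm
    return f

# Toy projection of a 3-bin radial profile onto 3 lines of sight.
A = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.5],
              [0.0, 0.0, 1.0]])
f_true = np.array([2.0, 1.0, 0.5])
f_rec = richardson_lucy(A, A @ f_true)
print(np.round(f_rec, 2))  # recovers [2. 1. 0.5] on this noiseless toy
```

The multiplicative update keeps the solution nonnegative at every step, and because each line of sight enters the data independently, the deprojection can be carried out line by line, which is the property the authors exploit.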
Toushmalani, Reza; Rahmati, Azizalah
2014-01-01
A gravity inversion method based on the Nettleton-Parasnis technique is used to estimate near-surface density in an area without exposed outcrop, or where outcrop occurrences do not adequately represent the subsurface rock densities. Its accuracy, however, strongly depends on how efficiently the regional trends and very local (terrain) effects are removed from the gravity anomalies processed. Nettleton's method was implemented in a standard inversion scheme and combined with the simultaneous determination of terrain corrections. This method may lead to realistic density estimates for the topographical masses. The authors applied this technique in Bandar Charak (Hormozgan, Iran), an area with varied geological/geophysical properties. The inversion results are comparable both to values obtained from density logs in the area and to other methods, such as fractal methods. The calculated density is 2.4005 g/cm^3. The slight differences between the calculated densities and the densities of hand samples of rock may be caused by the effect of sediment-filled valleys.
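Nettleton's criterion, on which this inversion builds, can be sketched as a grid search; the data below are synthetic, generated with a true density of 2.4 g/cm^3 to mirror the value reported:

```python
import numpy as np

def nettleton_density(g_fa, elev, densities):
    # Nettleton profile criterion: pick the Bouguer reduction density that
    # minimises |correlation| between the reduced anomaly and topography.
    # g_fa: free-air anomaly (mGal); elev: station elevation (m);
    # 0.04193 mGal/m per g/cm^3 is the infinite-slab Bouguer coefficient.
    best_rho, best_r = None, np.inf
    for rho in densities:
        g_b = g_fa - 0.04193 * rho * elev
        r = abs(np.corrcoef(g_b, elev)[0, 1])
        if r < best_r:
            best_rho, best_r = rho, r
    return best_rho

# Synthetic profile: the free-air anomaly tracks topography with the slab
# coefficient for a true density of 2.4 g/cm^3, plus noise.
rng = np.random.default_rng(1)
elev = rng.uniform(0.0, 200.0, 50)
g_fa = 0.04193 * 2.4 * elev + rng.normal(0.0, 0.05, 50)
rho_est = nettleton_density(g_fa, elev, np.arange(2.0, 2.9, 0.05))
print(round(float(rho_est), 2))  # 2.4
```

The paper's scheme folds the terrain correction into the same minimisation rather than using a simple slab, but the selection principle is the same.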
VizieR Online Data Catalog: Deep spectroscopy of Abell 85 (Agulli+, 2016)
NASA Astrophysics Data System (ADS)
Agulli, I.; Aguerri, J. A. L.; Sanchez-Janssen, R.; Dalla Vecchia, C.; Diaferio, A.; Barrena, R.; Palmero, L. D.; Yu, H.
2016-07-01
File a85_memb.dat contains 5 columns: the sky coordinates (RA, DE), the r- and g-band magnitudes, and the recessional velocity for each of the 460 confirmed members of the Abell 85 cluster. Details on the data set can be found in the paper. (1 data file).
Abell 58 - a Planetary Nebula with an ONe-rich knot: a signature of binary interaction?
NASA Astrophysics Data System (ADS)
Lau, H. H. B.; De Marco, O.; Liu, X.-W.
We have investigated the possibility that binary evolution is involved in the formation of the planetary nebula Abell 58. In particular, we assume that a neon nova is responsible for the observed high oxygen and neon abundances of the central hydrogen-deficient knot of the H-deficient planetary nebula Abell 58, and that the ejecta from the explosion are mixed with the planetary nebula. We have investigated different scenarios involving mergers and wind accretion and found that the most promising formation scenario involves a primary SAGB star that ends its evolution as an ONe white dwarf, with an AGB companion at a moderately close separation. Mass is deposited on the white dwarf through wind accretion, so a neon nova could occur just after the secondary AGB companion undergoes its final flash. However, the initial separation has to be fine-tuned. To estimate the frequency of such systems, we evolved a population of binary systems and find that Abell 58-like objects should indeed be rare: the fraction of Abell 58-like planetary nebulae among all planetary nebulae is of the order of 10^-4, or lower.
NASA Astrophysics Data System (ADS)
Meju, Max A.; Denton, Paul; Fenning, Peter
2002-05-01
This paper describes pilot experiments to assess the potential of nuclear magnetic resonance (NMR) sounding and inversion for detecting groundwater at several sites in built-up, industrial, and intensively cultivated regions of England where it is difficult to deploy large transmitter loops. The targets represent near-surface (ca. 1 m below the surface) and deep (>30 m) aquiferous (chalk, sand and gravel) deposits. The NUMIS field system, which has in-field processing and regularised one-dimensional (1D) inversion capabilities, was used in all the experiments. All the sites were characterised by high noise levels, and NMR depth soundings could only be carried out using a small figure-of-eight loop, for which the maximum depth of investigation was approximately 40-50 m. For comparison, conventional inductive and galvanic resistivity depth soundings were performed at these sites, and the data were inverted to yield the respective subsurface resistivity distributions. At two sites with shallow water levels, the location of the water deduced from inversion of the NMR data corresponded to the water level measured in nearby boreholes. For the site with the highest data quality, the inverted profile of decay constants also corresponded to the known geology and the geoelectrical model. The NMR data from the other five sites are noisy, and it is difficult to ascertain which aspects of the inversion models can be correlated with the geoelectrical and geological data. The geoelectrical inversion results correlate with lithological and fluid-content variations in the subsurface. It would appear that surface NMR (SNMR) sounding with a figure-of-eight loop may be effective only when the ambient noise is less than 900 nV in the UK setting.
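A regularised 1D linear inversion of the kind mentioned above can be sketched in a few lines: minimise ||G m - d||^2 + lam^2 ||L m||^2 with a second-difference roughness operator L, solved through the regularised normal equations. The exponential kernel and test model below are stand-ins, not the actual NMR (or resistivity) forward model used by the NUMIS system.

```python
import numpy as np

def tikhonov_invert(G, d, lam):
    """Solve the regularised normal equations for a smooth fitting model."""
    n = G.shape[1]
    L = np.diff(np.eye(n), 2, axis=0)              # (n-2, n) second differences
    A = G.T @ G + lam ** 2 * (L.T @ L)
    return np.linalg.solve(A, G.T @ d)

# Synthetic sounding: smooth "true" model, decaying kernels, small noise
n = 80
z = np.linspace(0.0, 1.0, n)
G = np.exp(-np.outer(np.linspace(0.5, 20.0, 40), z)) * (z[1] - z[0])
m_true = np.exp(-((z - 0.4) / 0.1) ** 2)
rng = np.random.default_rng(1)
d = G @ m_true + rng.normal(0.0, 1e-4, G.shape[0])

m_est = tikhonov_invert(G, d, lam=1e-3)
```

The regularisation parameter lam trades data fit against model roughness; in noisy settings like those described above, larger values of lam suppress the oscillatory artifacts that an unregularised inversion would produce.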
Crazy heart: kinematics of the "star pile" in Abell 545
NASA Astrophysics Data System (ADS)
Salinas, R.; Richtler, T.; West, M. J.; Romanowsky, A. J.; Lloyd-Davies, E.; Schuberth, Y.
2011-04-01
We study the structure and internal kinematics of the "star pile" in Abell 545, a low surface brightness structure lying in the center of the cluster. We have obtained deep long-slit spectroscopy of the star pile using VLT/FORS2 and Gemini/GMOS, which we analyze in conjunction with deep multiband CFHT/MegaCam imaging. As shown in a previous study, the star pile has a flat luminosity profile, and its color is consistent with the outer parts of elliptical galaxies. Its velocity map is irregular: some parts are seemingly associated with an embedded nucleus, while others show significant velocity offsets from the cluster systemic velocity with no clear kinematical connection to any of the surrounding galaxies. This would make the star pile a dynamically defined stellar intra-cluster component. The complicated pattern in velocities and velocity dispersions casts doubt on the adequacy of using the whole star pile as a dynamical test of the innermost dark matter profile of the cluster; only the nucleus and its nearest surroundings, which lie at the center of the cluster velocity distribution, are suitable for such a test. Based on observations taken at the European Southern Observatory, Cerro Paranal, Chile, under programme ID 080.B-0529. Also based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and SECYT (Argentina); and on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National
NASA Astrophysics Data System (ADS)
Zhang, Haijiang; Maceira, Monica; Benson, Thomas; Nafi Toksoz, M.
2010-05-01
We present an advanced multivariate inversion technique to generate a realistic, comprehensive, and high-resolution 3D model of the seismic structure of the crust and upper mantle. The model satisfies several independent geophysical data sets, including seismic surface wave dispersion measurements, gravity, and seismic arrival times. The joint inversion method takes advantage of the strengths of the individual data sets and is able to better constrain the seismic velocity models from shallow to great depths. To combine the different geophysical data sets into a common system, we design an optimal weighting scheme based on the relative uncertainties of the individual observations.
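Uncertainty-based weighting of a joint inversion can be sketched as follows: each data set's equations are scaled by the inverse of its noise level before the systems are stacked into one least-squares problem, so precise data constrain the model more strongly. This is a hypothetical minimal illustration; the actual weighting scheme described above is more elaborate, and all matrices and noise levels below are synthetic assumptions.

```python
import numpy as np

def joint_least_squares(systems):
    """systems: iterable of (G_k, d_k, sigma_k); returns the weighted model."""
    rows = [G / sigma for G, _, sigma in systems]       # scale each system
    rhs = [d / sigma for _, d, sigma in systems]        # by 1 / noise level
    m, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return m

# Two synthetic "data sets" observing the same 3-parameter model
rng = np.random.default_rng(2)
m_true = np.array([1.0, -2.0, 0.5])
G1, G2 = rng.normal(size=(30, 3)), rng.normal(size=(20, 3))
d1 = G1 @ m_true + rng.normal(0.0, 0.01, 30)   # precise data set
d2 = G2 @ m_true + rng.normal(0.0, 1.0, 20)    # noisy data set

m_est = joint_least_squares([(G1, d1, 0.01), (G2, d2, 1.0)])
```

With this scaling, both stacked systems contribute residuals in units of their own noise, so the noisy data set cannot drag the solution away from what the precise one requires.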