Abel inversion method for cometary atmospheres.
NASA Astrophysics Data System (ADS)
Hubert, Benoit; Opitom, Cyrielle; Hutsemekers, Damien; Jehin, Emmanuel; Munhoven, Guy; Manfroid, Jean; Bisikalo, Dmitry V.; Shematovich, Valery I.
2016-04-01
Remote observation of cometary atmospheres produces a measurement of the cometary emissions integrated along the line of sight joining the observing instrument and the gas of the coma. This integration is the so-called Abel transform of the local emission rate. We develop a method specifically adapted to the inversion of the Abel transform of cometary emissions, which retrieves the radial profile of the emission rate of any unabsorbed emission under the hypothesis of spherical symmetry of the coma. The method uses weighted least squares fitting and analytical results. A Tikhonov regularization technique is applied to reduce the possible effects of noise and ill-conditioning, and standard error propagation techniques are implemented. Several theoretical tests of the inversion technique are carried out to show its validity and robustness, and show that the method is only weakly dependent on any constant offset added to the data, which reduces the dependence of the retrieved emission rate on the background subtraction. We apply the method to observations of three different comets observed using the TRAPPIST instrument: 103P/Hartley 2, C/2012 F6 (Lemmon) and C/2013 A1 (Siding Spring). We show that the method retrieves realistic emission rates, and that characteristic lengths and production rates can be derived from the emission rate for both CN and C2 molecules. We show that the emission rates derived from the observed flux of CN emission at 387 nm and from the C2 emission at 514.1 nm of comet Siding Spring both present an easily identifiable shoulder that corresponds to the separation between pre- and post-outburst gas. As a general result, we show that diagnosing properties and features of the coma using the emission rate is easier than directly using the observed flux. We also determine the parameters of a Haser model fitting the inverted data and fitting the line-of-sight integrated observation, for which we provide the exact analytical expression of the line-of-sight integration.
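The discretized inversion described above can be sketched as follows: build a forward Abel matrix on a radial grid, then solve a Tikhonov-regularized least squares problem. The grid, noise level, second-difference penalty, and regularization weight `lam` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def abel_matrix(r):
    """Forward Abel transform F(y_i) = 2 * sum_{r_j > y_i} f(r_j) r_j dr / sqrt(r_j^2 - y_i^2),
    discretized on a uniform radial grid (midpoint rule, singular node skipped)."""
    n = len(r)
    dr = r[1] - r[0]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = 2.0 * r[j] * dr / np.sqrt(r[j]**2 - r[i]**2)
    return A

def tikhonov_invert(A, F, lam):
    """Regularized least squares with a discrete second-derivative penalty."""
    n = A.shape[1]
    L = np.diff(np.eye(n), 2, axis=0)  # rows [1, -2, 1]: curvature operator
    return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ F)

# Synthetic test: Gaussian emission profile f(r) = exp(-r^2)
r = np.linspace(0.0, 4.0, 200)
f_true = np.exp(-r**2)
A = abel_matrix(r)
rng = np.random.default_rng(0)
F_noisy = A @ f_true + 1e-3 * rng.standard_normal(len(r))
f_rec = tikhonov_invert(A, F_noisy, lam=1e-4)
err = np.max(np.abs(f_rec[:100] - f_true[:100]))
```

The curvature penalty also supplies the extrapolation to r = 0, where the line-of-sight geometry gives no direct constraint.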
An efficient and flexible Abel-inversion method for noisy data
NASA Astrophysics Data System (ADS)
Antokhin, Igor I.
2016-12-01
We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
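A minimal sketch of solving a first-kind Volterra/Abel-type equation on a compact set of functions, here chosen as {x : x ≥ 0, x non-increasing}, via projected Landweber iteration. The test operator, constraint set, and iteration count are assumptions for illustration, not the author's algorithm.

```python
import numpy as np

def project_monotone(y):
    """Euclidean projection onto non-increasing sequences: pool-adjacent-violators
    applied to -y, then clipped at zero (bounded isotonic regression)."""
    vals, cnts = [], []
    for v in -np.asarray(y, dtype=float):
        vals.append(v); cnts.append(1)
        while len(vals) > 1 and vals[-2] / cnts[-2] > vals[-1] / cnts[-1]:
            vals[-2] += vals[-1]; cnts[-2] += cnts[-1]
            vals.pop(); cnts.pop()
    z = np.concatenate([np.full(c, s / c) for s, c in zip(vals, cnts)])
    return np.clip(-z, 0.0, None)

def constrained_solve(A, b, iters=3000):
    """Projected Landweber: gradient step on ||Ax - b||^2, then projection
    onto the compact constraint set."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = project_monotone(x - tau * (A.T @ (A @ x - b)))
    return x

# Synthetic Volterra-type problem with a decaying true solution
n = 40
t = np.linspace(0.05, 2.0, n)
A = np.tril(np.ones((n, n))) * (t[1] - t[0])   # crude integration operator
x_true = np.exp(-t)
rng = np.random.default_rng(3)
b = A @ x_true + 1e-3 * rng.standard_normal(n)
x_rec = constrained_solve(A, b)
```

The loose a priori constraints (non-negativity, monotonicity) stabilize the inversion without requiring an explicit regularization parameter.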
Bayesian Abel Inversion in Quantitative X-Ray Radiography
Howard, Marylesa; Fowler, Michael; Luttman, Aaron; Mitchell, Stephen E.; Hock, Margaret C.
2016-05-19
A common image formation process in high-energy X-ray radiography is to have a pulsed power source that emits X-rays through a scene, a scintillator that absorbs X-rays and fluoresces in the visible spectrum in response to the absorbed photons, and a CCD camera that images the visible light emitted from the scintillator. The intensity image is related to areal density, and, for an object that is radially symmetric about a central axis, the Abel transform then gives the object's volumetric density. Two of the primary drawbacks to classical variational methods for Abel inversion are their sensitivity to the type and scale of regularization chosen and the lack of natural methods for quantifying the uncertainties associated with the reconstructions. In this work we cast the Abel inversion problem within a statistical framework in order to compute volumetric object densities from X-ray radiographs and to quantify uncertainties in the reconstruction. A hierarchical Bayesian model is developed with a likelihood based on a Gaussian noise model and with priors placed on the unknown density profile, the data precision matrix, and two scale parameters. This allows the data to drive the localization of features in the reconstruction and results in a joint posterior distribution for the unknown density profile, the prior parameters, and the spatial structure of the precision matrix. Results of the density reconstructions and pointwise uncertainty estimates are presented for both synthetic signals and real data from a U.S. Department of Energy X-ray imaging facility.
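A drastically simplified, conjugate linear-Gaussian version of this idea can be sketched as follows: with fixed hyperparameters `lam` and `delta` (the paper instead places hyperpriors on them and infers a full joint posterior), the posterior mean and pointwise standard deviations have closed forms. The operator and profile below are synthetic assumptions.

```python
import numpy as np

def gaussian_posterior(A, b, lam, delta):
    """Posterior for x given b ~ N(Ax, lam^{-1} I) and smoothness prior
    x ~ N(0, (delta * L'L)^{-1}) with L = first differences.
    Returns posterior mean and pointwise standard deviations."""
    n = A.shape[1]
    L = np.diff(np.eye(n), 1, axis=0)
    P = lam * A.T @ A + delta * L.T @ L       # posterior precision matrix
    cov = np.linalg.inv(P)
    mean = lam * cov @ (A.T @ b)
    return mean, np.sqrt(np.diag(cov))        # pointwise uncertainty

# Demo: smooth density profile observed through an integration operator
n = 60
A = np.tril(np.ones((n, n))) / n
x_true = np.exp(-np.linspace(0, 3, n))
rng = np.random.default_rng(4)
sigma = 1e-3
b = A @ x_true + sigma * rng.standard_normal(n)
mean, std = gaussian_posterior(A, b, lam=1.0 / sigma**2, delta=10.0)
```

The diagonal of the posterior covariance is what provides the pointwise uncertainty estimates that classical variational Abel inversion lacks.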
NASA Astrophysics Data System (ADS)
Chou, Min Yang; Lin, Charles C. H.; Tsai, Ho Fang; Lin, Chi Yen
2017-01-01
The Abel inversion of ionospheric electron density profiles under the assumption of spherical symmetry, as applied to radio occultation soundings, can introduce large systematic errors, or sometimes artifacts, when the occultation rays traverse regions with large horizontal gradients in electron density. Aided Abel inversions have been proposed that consider the asymmetry ratio derived from ionospheric total electron content (TEC) or peak density (NmF2) of reconstructed observation maps, since knowledge of the horizontal asymmetry in the ambient ionospheric density can mitigate the inversion error. Here we propose a new aided Abel inversion using three-dimensional, time-dependent electron density (Ne) climatological maps constructed from previous observations, which have the advantage of providing altitudinal information on the horizontal asymmetry. The improvement of the proposed Ne-aided Abel inversion, and comparisons with electron density profiles inverted from the NmF2- and TEC-aided inversions, are studied using observing system simulation experiments. Comparison results show that all three aided Abel inversions improve the ionospheric profiling by mitigating the artificial plasma caves and negative electron densities in the daytime E region. The equatorial ionization anomaly crests in the F region become more distinct. The statistical results show that the Ne-aided Abel inversion has smaller mean and RMS percentage errors above 250 km altitude, and the performances of all aided Abel inversions are similar below 250 km.
Abel Inversion of Deflectometric Measurements in Dynamic Flows
NASA Technical Reports Server (NTRS)
Agrawal, Ajay K.; Albers, Burt W.; Griffin, DeVon W.
1999-01-01
We present an Abel-inversion algorithm to reconstruct mean and rms refractive-index profiles from spatially resolved statistical measurements of the beam-deflection angle in time-dependent, axisymmetric flows. An oscillating gas-jet diffusion flame was investigated as a test case for applying the algorithm. Experimental data were obtained across the whole field by a rainbow schlieren apparatus. Results show that simultaneous multipoint measurements are necessary to reconstruct the rms refractive index accurately.
Inverting ion images without Abel inversion: maximum entropy reconstruction of velocity maps.
Dick, Bernhard
2014-01-14
A new method for the reconstruction of velocity maps from ion images is presented, which is based on the maximum entropy concept. In contrast to other methods used for Abel inversion the new method never applies an inversion or smoothing to the data. Instead, it iteratively finds the map which is the most likely cause for the observed data, using the correct likelihood criterion for data sampled from a Poissonian distribution. The entropy criterion minimizes the information content in this map, which hence contains no information for which there is no evidence in the data. Two implementations are proposed, and their performance is demonstrated with simulated and experimental data: Maximum Entropy Velocity Image Reconstruction (MEVIR) obtains a two-dimensional slice through the velocity distribution and can be compared directly to Abel inversion. Maximum Entropy Velocity Legendre Reconstruction (MEVELER) finds one-dimensional distribution functions Q_l(v) in an expansion of the velocity distribution in Legendre polynomials P_l(cos θ) for the angular dependence. Both MEVIR and MEVELER can be used for the analysis of ion images with intensities as low as 0.01 counts per pixel, with MEVELER performing significantly better than MEVIR for images with low intensity. Both methods perform better than pBASEX, in particular for images with less than one average count per pixel.
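The Poisson likelihood at the heart of this approach admits a classical maximum-likelihood iteration: the multiplicative EM (Richardson-Lucy) update, which never smooths the data and keeps the reconstruction non-negative. The sketch below illustrates that iteration for a generic non-negative forward operator; it is not the MEVIR/MEVELER algorithm itself, and the operator and counts are synthetic.

```python
import numpy as np

def poisson_ml(A, counts, iters=300):
    """EM (Richardson-Lucy) iteration for counts_i ~ Poisson((A x)_i).
    Each multiplicative update is guaranteed not to decrease the
    Poisson log-likelihood and preserves non-negativity of x."""
    x = np.ones(A.shape[1])
    col = A.sum(axis=0)                      # column normalization
    for _ in range(iters):
        pred = np.maximum(A @ x, 1e-12)      # guard against division by zero
        x *= (A.T @ (counts / pred)) / col
    return x

def loglik(A, x, counts):
    pred = np.maximum(A @ x, 1e-12)
    return np.sum(counts * np.log(pred) - pred)

rng = np.random.default_rng(5)
A = rng.uniform(0.0, 1.0, size=(80, 30))
x_true = rng.uniform(0.5, 2.0, size=30)
counts = rng.poisson(A @ x_true).astype(float)
ll0 = loglik(A, np.ones(30), counts)
x_rec = poisson_ml(A, counts)
ll1 = loglik(A, x_rec, counts)
```

The entropy criterion in MEVIR/MEVELER additionally penalizes information content; the plain ML iteration shown here shares only the likelihood model.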
Improved Abel transform inversion: First application to COSMIC/FORMOSAT-3
NASA Astrophysics Data System (ADS)
Aragon-Angel, A.; Hernandez-Pajares, M.; Juan, J.; Sanz, J.
2007-05-01
In this paper the first results of ionospheric tomographic inversion are presented, using the improved Abel transform on the COSMIC/FORMOSAT-3 constellation of 6 LEO satellites carrying on-board GPS receivers. The Abel transform inversion is a widely used technique which, in the ionospheric context, makes it possible to retrieve electron densities as a function of height based on STEC (Slant Total Electron Content) data gathered from GPS receivers on board LEO (Low Earth Orbit) satellites. In this application, the classical approach of the Abel inversion is based on the assumption of spherical symmetry of the electron density in the vicinity of an occultation, meaning that the electron content varies in height but not horizontally. In particular, one implication of this assumption is that the VTEC (Vertical Total Electron Content) is a constant value for the occultation region. This assumption may not always be valid, since horizontal ionospheric gradients (a very frequent feature in some problematic regions of the ionosphere, such as the Equatorial region) can significantly affect the electron density profiles. In order to overcome this limitation of the classical Abel inversion, an improvement of the technique can be obtained by assuming separability of the electron density (see Hernández-Pajares et al. 2000). This means that the electron density can be expressed as the product of the VTEC and a shape function which carries all the height dependency, while the VTEC carries the horizontal dependency. Indeed, it is more realistic to assume that this shape function depends only on height and to use VTEC information to account for the horizontal variation, rather than considering spherical symmetry of the electron density function as in the classical approach of the Abel inversion. Since the above-mentioned improved Abel inversion technique has already been tested and proven to be a useful
Serre duality, Abel's theorem, and Jacobi inversion for supercurves over a thick superpoint
NASA Astrophysics Data System (ADS)
Rothstein, Mitchell J.; Rabin, Jeffrey M.
2015-04-01
The principal aim of this paper is to extend Abel's theorem to the setting of complex supermanifolds of dimension 1 | q over a finite-dimensional local supercommutative C-algebra. The theorem is proved by establishing a compatibility of Serre duality for the supercurve with Poincaré duality on the reduced curve. We include an elementary algebraic proof of the requisite form of Serre duality, closely based on the account of the reduced case given by Serre in Algebraic groups and class fields, combined with an invariance result for the topology on the dual of the space of répartitions. Our Abel map, taking Cartier divisors of degree zero to the dual of the space of sections of the Berezinian sheaf, modulo periods, is defined via Penkov's characterization of the Berezinian sheaf as the cohomology of the de Rham complex of the sheaf D of differential operators. We discuss the Jacobi inversion problem for the Abel map and give an example demonstrating that if n is an integer sufficiently large that the generic divisor of degree n is linearly equivalent to an effective divisor, this need not be the case for all divisors of degree n.
Comparison of four stable numerical methods for Abel's integral equation
NASA Technical Reports Server (NTRS)
Murio, Diego A.; Mejia, Carlos E.
1991-01-01
The 3-D image reconstruction from cone-beam projections in computerized tomography leads naturally, in the case of radial symmetry, to the study of Abel-type integral equations. If the experimental information is obtained from measured data, on a discrete set of points, special methods are needed in order to restore continuity with respect to the data. A new combined Regularized-Adjoint-Conjugate Gradient algorithm, together with two different implementations of the Mollification Method (one based on a data filtering technique and the other on the mollification of the kernel function) and a regularization by truncation method (initially proposed for 2-D ray sample schemes and more recently extended to 3-D cone-beam image reconstruction) are extensively tested and compared for accuracy and numerical stability as functions of the level of noise in the data.
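The data-filtering variant of mollification amounts to convolving the noisy samples with a narrow Gaussian before inversion. A minimal sketch, with an assumed mollification radius `delta` and reflecting boundary treatment:

```python
import numpy as np

def mollify(data, dx, delta):
    """Discrete mollification: convolve noisy samples with a truncated
    Gaussian of radius ~3*delta, reflecting the signal at both ends."""
    m = int(np.ceil(3 * delta / dx))
    t = np.arange(-m, m + 1) * dx
    kern = np.exp(-(t / delta) ** 2)
    kern /= kern.sum()                       # preserve the mean level
    padded = np.concatenate([data[m:0:-1], data, data[-2:-m - 2:-1]])
    return np.convolve(padded, kern, mode="valid")

# Noisy samples of a smooth function
rng = np.random.default_rng(6)
x = np.linspace(0.0, 2.0 * np.pi, 400)
clean = np.sin(x)
noisy = clean + 0.1 * rng.standard_normal(x.size)
smooth = mollify(noisy, dx=x[1] - x[0], delta=0.15)
```

The radius delta trades noise suppression against smoothing bias, playing the role of the regularization parameter in this family of methods.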
Current methods of radio occultation data inversion
NASA Technical Reports Server (NTRS)
Kliore, A. J.
1972-01-01
The methods of Abel integral transform and ray-tracing inversion have been applied to data received from radio occultation experiments as a means of obtaining refractive index profiles of the ionospheres and atmospheres of Mars and Venus. In the case of Mars, certain simplifications are introduced by the assumption of small refractive bending in the atmosphere. General inversion methods, independent of the thin atmosphere approximation, have been used to invert the data obtained from the radio occultation of Mariner 5 by Venus; similar methods will be used to analyze data obtained from Jupiter with Pioneers F and G, as well as from the other outer planets in the Outer Planet Grand Tour Missions.
An inversion method for cometary atmospheres
NASA Astrophysics Data System (ADS)
Hubert, B.; Opitom, C.; Hutsemékers, D.; Jehin, E.; Munhoven, G.; Manfroid, J.; Bisikalo, D. V.; Shematovich, V. I.
2016-10-01
Remote observation of cometary atmospheres produces a measurement of the cometary emissions integrated along the line of sight. This integration is the so-called Abel transform of the local emission rate. The observation is generally interpreted under the hypothesis of spherical symmetry of the coma. Under that hypothesis, the Abel transform can be inverted. We derive a numerical inversion method adapted to cometary atmospheres using both analytical results and least squares fitting techniques. This method, derived under the usual hypothesis of spherical symmetry, allows us to retrieve the radial distribution of the emission rate of any unabsorbed emission, which is the fundamental, physically meaningful quantity governing the observation. A Tikhonov regularization technique is also applied to reduce the possibly deleterious effects of the noise present in the observation and to ensure that the problem remains well posed. Standard error propagation techniques are included in order to estimate the uncertainties affecting the retrieved emission rate. Several theoretical tests of the inversion technique are carried out to show its validity and robustness. In particular, we show that the Abel inversion of real data is only weakly sensitive to an offset applied to the input flux, which implies that the method, applied to the study of a cometary atmosphere, is only weakly dependent on uncertainties on the sky background which has to be subtracted from the raw observations of the coma. We apply the method to observations of three different comets observed using the TRAPPIST telescope: 103P/Hartley 2, C/2012 F6 (Lemmon) and C/2013 A1 (Siding Spring). We show that the method retrieves realistic emission rates, and that characteristic lengths and production rates can be derived from the emission rate for both CN and C2 molecules. We show that the retrieved characteristic lengths can differ from those obtained from a direct least squares fitting over the observed flux of radiation, and
An efficient and fast parallel method for Volterra integral equations of Abel type
NASA Astrophysics Data System (ADS)
Capobianco, Giovanni; Conte, Dajana
2006-05-01
In this paper we present an efficient and fast parallel waveform relaxation method for Volterra integral equations of Abel type, obtained by reformulating a nonstationary waveform relaxation method for systems of equations with a linear constant-coefficient kernel. To this aim we consider the Laplace transform of the equation, and there we apply the recurrence relation given by the Chebyshev polynomial acceleration for algebraic linear systems. Back in the time domain, we obtain a three-term recursion which requires, at each iteration, the evaluation of convolution integrals, where only the Laplace transform of the kernel is known. For this calculation we can use a fast convolution algorithm. Numerical experiments have also been done on problems where it is not possible to use the original nonstationary method, obtaining good results in terms of improvement of the rate of convergence with respect to the stationary method.
Inversion methods for interpretation of asteroid lightcurves
NASA Technical Reports Server (NTRS)
Kaasalainen, Mikko; Lamberg, L.; Lumme, K.
1992-01-01
We have developed methods of inversion that can be used in the determination of the three-dimensional shape or the albedo distribution of the surface of a body from disk-integrated photometry, assuming the shape to be strictly convex. In addition to the theory of inversion methods, we have studied the practical aspects of the inversion problem and applied our methods to lightcurve data of 39 Laetitia and 16 Psyche.
Non-thermal Hard X-Ray Emission from Coma and Several Abell Clusters
Correa, C
2004-02-05
We report results of hard X-ray observations of the clusters Coma, Abell 496, Abell 754, Abell 1060, Abell 1367, Abell 2256 and Abell 3558 using RXTE data from the NASA HEASARC public archive. Specifically, we searched for clusters with hard X-ray emission that can be fitted by a power law, because this would indicate that the cluster is a source of non-thermal emission. We assume the emission mechanism proposed by Vahé Petrosian, in which the intracluster space contains clouds of relativistic electrons that create a magnetic field and emit radio synchrotron radiation. These relativistic electrons inverse-Compton scatter cosmic microwave background photons up to hard X-ray energies. The clusters found to be sources of non-thermal hard X-rays are Coma, Abell 496, Abell 754 and Abell 1060.
An exact inverse method for subsonic flows
NASA Technical Reports Server (NTRS)
Daripa, Prabir
1988-01-01
A new inverse method for the aerodynamic design of airfoils is presented for subcritical flows. The pressure distribution in this method can be prescribed as a function of the arclength of the still unknown body. It is shown that this inverse problem is mathematically equivalent to solving only one nonlinear boundary value problem subject to known Dirichlet data on the boundary.
FNAS/Rapid Spectral Inversion Methods
NASA Technical Reports Server (NTRS)
Poularikas, Alexander
1997-01-01
The purpose of this investigation was to study methods and ways for rapid inversion programs involving the correlated k-method, and to study the infrared observations of Saturn from the Cassini orbiter.
An improved inversion for FORMOSAT-3/COSMIC ionosphere electron density profiles
NASA Astrophysics Data System (ADS)
Pedatella, N. M.; Yue, X.; Schreiner, W. S.
2015-10-01
An improved method to retrieve electron density profiles from Global Positioning System (GPS) radio occultation (RO) data is presented and applied to Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) observations. The improved inversion uses a monthly grid of COSMIC F region peak densities (NmF2), which are obtained via the standard Abel inversion, to aid the Abel inversion by providing information on the horizontal gradients in the ionosphere. This lessens the impact of ionospheric gradients on the retrieval of GPS RO electron density profiles, reducing the dominant error source in the standard Abel inversion. Results are presented that demonstrate the NmF2 aided retrieval significantly improves the quality of the COSMIC electron density profiles. Improvements are most notable at E region altitudes, where the improved inversion reduces the artificial plasma cave that is generated by the Abel inversion spherical symmetry assumption at low latitudes during the daytime. Occurrence of unphysical negative electron densities at E region altitudes is also reduced. Furthermore, the NmF2 aided inversion has a positive impact at F region altitudes, where it results in a more distinct equatorial ionization anomaly. COSMIC electron density profiles inverted using our new approach are currently available through the University Corporation for Atmospheric Research COSMIC Data Analysis and Archive Center. Owing to the significant improvement in the results, COSMIC data users are encouraged to use electron density profiles based on the improved inversion rather than those inverted by the standard Abel inversion.
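The standard Abel retrieval that the aided inversions build on is often implemented as onion peeling: assume spherical symmetry, take the density constant within spherical shells, and back-substitute from the topmost ray down. A sketch with synthetic geometry and a Chapman-like profile (straight-line ray paths assumed):

```python
import numpy as np

def chord_matrix(bounds):
    """Path length of the ray tangent at bounds[i] through the spherical
    shell [bounds[j], bounds[j+1]], for j >= i."""
    n = len(bounds) - 1
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            L[i, j] = 2.0 * (np.sqrt(bounds[j + 1] ** 2 - bounds[i] ** 2)
                             - np.sqrt(bounds[j] ** 2 - bounds[i] ** 2))
    return L

def onion_peel(tec, bounds):
    """Classical onion-peeling Abel retrieval: solve the upper-triangular
    system by back-substitution, from the top shell down."""
    L = chord_matrix(bounds)
    n = len(tec)
    ne = np.zeros(n)
    for i in range(n - 1, -1, -1):
        ne[i] = (tec[i] - L[i, i + 1:] @ ne[i + 1:]) / L[i, i]
    return ne

# Round-trip check with a Chapman-like density profile (units: km, el/cm^3)
bounds = 6371.0 + np.linspace(100.0, 600.0, 51)   # shell boundary radii
h = 0.5 * (bounds[:-1] + bounds[1:]) - 6371.0     # shell midpoint altitudes
ne_true = 1e6 * np.exp(0.5 * (1 - (h - 300) / 50 - np.exp(-(h - 300) / 50)))
tec = chord_matrix(bounds) @ ne_true
ne_rec = onion_peel(tec, bounds)
```

Because each ray's error propagates into every shell below its tangent point, horizontal-gradient errors at F region heights map into the E region artifacts (plasma caves, negative densities) that the aided inversions mitigate.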
NASA Technical Reports Server (NTRS)
Fleming, H. E.
1977-01-01
Linear numerical inversion methods applied to atmospheric remote sounding generally can be categorized in two ways: (1) iterative, and (2) inverse matrix methods. However, these two categories are not unrelated; a duality exists between them. In other words, given an iterative scheme, a corresponding inverse matrix method exists, and conversely. This duality concept is developed for the more familiar linear methods. The iterative duals are compared with the classical linear iterative approaches and their differences analyzed. The importance of the initial profile in all methods is stressed. Calculations using simulated data are made to compare accuracies and to examine the dependence of the solution on the initial profile.
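The duality can be illustrated numerically for a simple case: k steps of Landweber iteration from a zero initial profile are exactly reproduced by an explicit inverse-matrix operator M_k, since x_k = Σ_{j<k} τ(I − τAᵀA)^j Aᵀ b. The operator and data below are random placeholders, not any particular sounding problem.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))
b = rng.standard_normal(8)
tau = 1.0 / np.linalg.norm(A, 2) ** 2

# Iterative form: k Landweber steps from x0 = 0
k = 20
x = np.zeros(5)
for _ in range(k):
    x = x + tau * A.T @ (b - A @ x)

# Dual inverse-matrix form: x_k = M_k @ b, with M_k written out explicitly
T = np.eye(5) - tau * A.T @ A
M = tau * sum(np.linalg.matrix_power(T, j) for j in range(k)) @ A.T
gap = np.linalg.norm(x - M @ b)
```

With a nonzero initial profile x0, the dual form gains the extra term T^k x0, which is one way to see why the initial profile matters in both formulations.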
NASA Inverse Methods/Data Assimilation
NASA Technical Reports Server (NTRS)
Bennett, Andrew
2003-01-01
An overview of NASA's Third International Summer School on Inverse Methods and Data Assimilation which was conducted at Oregon State University from July 22 to August 2, 2002, is presented. Items listed include: a roster of attendees, a description of course content and talks given.
Tsunami waveform inversion by adjoint methods
NASA Astrophysics Data System (ADS)
Pires, Carlos; Miranda, Pedro M. A.
2001-09-01
An adjoint method for tsunami waveform inversion is proposed, as an alternative to the technique based on Green's functions of the linear long wave model. The method has the advantage of being able to use the nonlinear shallow water equations, or other appropriate equation sets, and to optimize an initial state given as a linear or nonlinear function of any set of free parameters. This last facility is used to perform explicit optimization of the focal fault parameters, characterizing the initial sea surface displacement of tsunamigenic earthquakes. The proposed methodology is validated with experiments using synthetic data, showing the possibility of recovering all relevant details of a tsunami source from tide gauge observations, provided that the adjoint method is constrained in an appropriate manner. It is found, as in other methods, that the inversion skill of tsunami sources increases with the azimuthal and temporal coverage of assimilated tide gauge stations; furthermore, it is shown that the eigenvalue analysis of the Hessian matrix of the cost function provides a consistent and useful methodology to choose the subset of independent parameters that can be inverted with a given dataset of observations and to evaluate the error of the inversion process. The method is also applied to real tide gauge series, from the tsunami of the February 28, 1969, Gorringe Bank earthquake, suggesting some reasonable changes to the assumed focal parameters of that event. It is suggested that the method proposed may be able to deal with transient tsunami sources such as those generated by submarine landslides.
An efficient method for inverse problems
NASA Technical Reports Server (NTRS)
Daripa, Prabir
1987-01-01
A new inverse method for aerodynamic design of subcritical airfoils is presented. The pressure distribution in this method can be prescribed in a natural way, i.e. as a function of arclength of the as yet unknown body. This inverse problem is shown to be mathematically equivalent to solving a single nonlinear boundary value problem subject to known Dirichlet data on the boundary. The solution to this problem determines the airfoil, the free-stream Mach number M_∞ and the upstream flow direction θ_∞. The existence of a solution for any given pressure distribution is discussed. The method is easy to implement and extremely efficient. We present a series of results for which comparisons are made with known airfoils.
A model-assisted radio occultation data inversion method based on data ingestion into NeQuick
NASA Astrophysics Data System (ADS)
Shaikh, M. M.; Nava, B.; Kashcheyev, A.
2017-01-01
The inverse Abel transform is the most common method to invert radio occultation (RO) data in the ionosphere; it is based on the assumption of spherical symmetry of the electron density distribution in the vicinity of an occultation event. It is understood that this 'spherical symmetry hypothesis' can fail, above all in the presence of strong horizontal electron density gradients. As a consequence, in some cases wrong electron density profiles are obtained. In this work, in order to incorporate knowledge of horizontal gradients, we suggest an inversion technique based on the adaptation of the empirical ionospheric model NeQuick2 to RO-derived TEC. The method relies on the minimization of a cost function involving experimental and model-derived TEC data to determine NeQuick2 input parameters (effective local ionization parameters) at specific locations and times. These parameters are then used to obtain the electron density profile along the tangent point (TP) positions associated with the relevant RO event using NeQuick2. The main focus of our research has been the mitigation of spherical symmetry effects in RO data inversion without using external data such as global ionospheric maps (GIM). Using RO data from the Constellation Observing System for Meteorology, Ionosphere, and Climate (FORMOSAT-3/COSMIC) mission and manually scaled peak density data from a network of ionosondes along the Asian and American longitudinal sectors, we obtain a global improvement of 5% (7% in the Asian longitudinal sector, considering the data used in this work) in the retrieval of the peak electron density (NmF2) with the model-assisted inversion as compared to the Abel inversion. Mean NmF2 errors in the Asian longitudinal sector are much higher than in the American sector.
Regeneration of stochastic processes: an inverse method
NASA Astrophysics Data System (ADS)
Ghasemi, F.; Peinke, J.; Sahimi, M.; Rahimi Tabar, M. R.
2005-10-01
We propose a novel inverse method that utilizes a set of data to construct a simple equation that governs the stochastic process for which the data have been measured, hence enabling us to reconstruct the stochastic process. As an example, we analyze the stochasticity in the beat-to-beat fluctuations in the heart rates of healthy subjects as well as those with congestive heart failure. The inverse method provides a technique for distinguishing the two classes of subjects in terms of drift and diffusion coefficients, which behave completely differently for the two classes, hence potentially providing a novel diagnostic tool for distinguishing healthy subjects from those with congestive heart failure, even at the early stages of the disease.
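The generic first step of such reconstructions is to estimate the drift and diffusion coefficients from conditional moments of the increments (a Kramers-Moyal estimate). The sketch below does this for a simulated Ornstein-Uhlenbeck process; the process parameters and binning are illustrative assumptions, not the paper's heart-rate analysis.

```python
import numpy as np

# Simulate an Ornstein-Uhlenbeck process: dx = -theta * x dt + sigma dW
rng = np.random.default_rng(2)
theta, sigma, dt, nsteps = 1.0, 0.5, 0.01, 100_000
noise = rng.standard_normal(nsteps - 1)
x = np.empty(nsteps)
x[0] = 0.0
for i in range(nsteps - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * np.sqrt(dt) * noise[i]

# Kramers-Moyal estimates: D1(x) = <dx|x>/dt, D2(x) = <dx^2|x>/(2 dt)
dx = np.diff(x)
bins = np.linspace(-1.0, 1.0, 21)
idx = np.digitize(x[:-1], bins)
centers, D1, D2 = [], [], []
for b in range(1, len(bins)):
    sel = idx == b
    if sel.sum() > 500:                      # keep only well-populated bins
        centers.append(0.5 * (bins[b - 1] + bins[b]))
        D1.append(dx[sel].mean() / dt)
        D2.append((dx[sel] ** 2).mean() / (2 * dt))
slope = np.polyfit(centers, D1, 1)[0]        # should approximate -theta
```

For the OU process the drift is linear with slope −theta and the diffusion is flat at sigma²/2, so the binned estimates directly recover the governing equation.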
Variational Bayesian Approximation methods for inverse problems
NASA Astrophysics Data System (ADS)
Mohammad-Djafari, Ali
2012-09-01
Variational Bayesian Approximation (VBA) methods are recent tools for effective Bayesian computations. In this paper, these tools are used for inverse problems where the prior models include hidden variables and where the estimation of the hyperparameters also has to be addressed. In particular, two specific prior models (Student-t and mixture-of-Gaussians models) are considered and details of the algorithms are given.
A Bayesian method for microseismic source inversion
NASA Astrophysics Data System (ADS)
Pugh, D. J.; White, R. S.; Christie, P. A. F.
2016-08-01
Earthquake source inversion is highly dependent on location determination and velocity models. Uncertainties in both the model parameters and the observations need to be rigorously incorporated into an inversion approach. Here, we show a probabilistic Bayesian method that allows formal inclusion of the uncertainties in the moment tensor inversion. This method allows the combination of different sets of far-field observations, such as P-wave and S-wave polarities and amplitude ratios, into one inversion. Additional observations can be included by deriving a suitable likelihood function from the uncertainties. This inversion produces samples from the source posterior probability distribution, including a best-fitting solution for the source mechanism and associated probability. The inversion can be constrained to the double-couple space or allowed to explore the gamut of moment tensor solutions, allowing volumetric and other non-double-couple components. The posterior probability of the double-couple and full moment tensor source models can be evaluated from the Bayesian evidence, using samples from the likelihood distributions for the two source models, producing an estimate of whether or not a source is double-couple. Such an approach is ideally suited to microseismic studies where there are many sources of uncertainty and it is often difficult to produce reliability estimates of the source mechanism, although this can be true of many other cases. Using full-waveform synthetic seismograms, we also show the effects of noise, location, network distribution and velocity model uncertainty on the source probability density function. The noise has the largest effect on the results, especially as it can affect other parts of the event processing. This uncertainty can lead to erroneous non-double-couple source probability distributions, even when no other uncertainties exist. Although including amplitude ratios can improve the constraint on the source probability
NASA Astrophysics Data System (ADS)
Boerner, W. M.; Brand, H.; Cram, L. A.; Giessing, D. T.; Jordan, A. K.
The present conference considers mathematical inverse methods and transient techniques, the topological approach to inverse scattering in remote sensing, the numerical resolution of inverse problems via functional derivatives, the application of almost periodic functions to inverse scattering theory, application of the Abel transform in remote sensing, the inverse diffraction problem, recent advances in the theory of inverse scattering with sparse data, direct and inverse halfspace scalar diffraction, approximation of input response, maximum entropy methods in electromagnetic/geophysical/ultrasonic imaging, time-dependent radar target signatures, the synthesis and detection of authenticity features, singularities in quasi-geometrical imaging, and polarization utilization in the electromagnetic vector inverse problem. Also discussed are polarization-dependence in angle tracking systems, polarization vector signal processing for radar clutter suppression, the radiative transfer approach in electromagnetic imaging, inverse methods in microwave target imaging, inversion in SAR imaging, fast mm-wave imaging, electromagnetic imaging of dielectric targets, tomographic imaging methods, diffraction tomography, phase-comparison monopulse side-scan radar, and far field-to-near field transforms in spherical coordinates.
Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems
Benzi, M.; Tuma, M.
1996-12-31
A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.
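A minimal sketch of the idea, assuming SciPy: SciPy does not provide the factorized sparse approximate inverse (AINV) described here, so an incomplete LU factorization stands in as the preconditioner M ≈ A⁻¹ for a Krylov (GMRES) solve; the test matrix and drop tolerance are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy nonsymmetric, diagonally dominant test matrix (illustrative only).
n = 200
A = sp.diags([-1.0, 2.5, -1.3], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Stand-in preconditioner: incomplete LU, since SciPy has no AINV routine.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

# Preconditioned Krylov solve of A x = b.
x, info = spla.gmres(A, b, M=M)
assert info == 0  # converged
```

With a good approximate inverse, the Krylov iteration typically converges in a handful of steps; a factorized approximate inverse like the one in the paper is applied the same way, through matrix-vector products.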
Inverse polynomial reconstruction method in DCT domain
NASA Astrophysics Data System (ADS)
Dadkhahi, Hamid; Gotchev, Atanas; Egiazarian, Karen
2012-12-01
The discrete cosine transform (DCT) offers superior energy compaction properties for a large class of functions and has been employed as a standard tool in many signal and image processing applications. However, it suffers from spurious behavior in the vicinity of edge discontinuities in piecewise smooth signals. To leverage the sparse representation provided by the DCT, in this article, we derive a framework for the inverse polynomial reconstruction in the DCT expansion. It yields the expansion of a piecewise smooth signal in terms of polynomial coefficients, obtained from the DCT representation of the same signal. Taking advantage of this framework, we show that it is feasible to recover piecewise smooth signals from a relatively small number of DCT coefficients with high accuracy. Furthermore, automatic methods based on minimum description length principle and cross-validation are devised to select the polynomial orders, as a requirement of the inverse polynomial reconstruction method in practical applications. The developed framework can considerably enhance the performance of the DCT in sparse representation of piecewise smooth signals. Numerical results show that denoising and image approximation algorithms based on the proposed framework indicate significant improvements over wavelet counterparts for this class of signals.
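The edge behaviour described above is easy to reproduce with a toy experiment (illustrative signals and truncation level; this is not the inverse polynomial reconstruction method itself): truncating a DCT-II expansion recovers a smooth signal accurately but oscillates near a jump.

```python
import numpy as np
from scipy.fft import dct, idct

n = 256
x = np.linspace(0, 1, n, endpoint=False)
smooth = np.cos(2 * np.pi * x)           # globally smooth signal
step = np.where(x < 0.5, 1.0, -1.0)      # piecewise smooth, one edge

def truncated_dct_approx(f, keep):
    """Keep the first `keep` DCT-II coefficients, zero the rest, invert."""
    c = dct(f, norm="ortho")
    c[keep:] = 0.0
    return idct(c, norm="ortho")

keep = 20
err_smooth = np.max(np.abs(truncated_dct_approx(smooth, keep) - smooth))
err_step = np.max(np.abs(truncated_dct_approx(step, keep) - step))
# Energy compaction works for the smooth signal but fails at the edge.
assert err_smooth < err_step
```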
An inverse problem by boundary element method
Tran-Cong, T.; Nguyen-Thien, T.; Graham, A.L.
1996-02-01
Boundary Element Methods (BEM) have been established as useful and powerful tools in a wide range of engineering applications, e.g. Brebbia et al. In this paper, we report a particular three-dimensional implementation of a direct boundary integral equation (BIE) formulation and its application to numerical simulations of practical polymer processing operations. In particular, we focus on the application of the present boundary element technology to simulate an inverse problem in plastics processing by extrusion. The task is to design profile extrusion dies for plastics. The problem is highly non-linear due to material viscoelastic behaviour as well as unknown free surface conditions. As an example, the technique is shown to be effective in obtaining the die profiles corresponding to a square viscoelastic extrudate under different processing conditions. To further illustrate the capability of the method, examples of other non-trivial extrudate profiles and processing conditions are also given.
Abel's Theorem Simplifies Reduction of Order
ERIC Educational Resources Information Center
Green, William R.
2011-01-01
We give an alternative to the standard method of reduction of order, in which one uses one solution of a homogeneous, linear, second-order differential equation to find a second, linearly independent solution. Our method, based on Abel's Theorem, is shorter, less complex, and extends to higher-order equations.
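The abstract does not reproduce the derivation, but the standard identity behind it can be recorded here. For $y'' + p(x)y' + q(x)y = 0$ with one known solution $y_1$, Abel's Theorem fixes the Wronskian up to a constant, and the second solution follows by a single quadrature:

```latex
W(x) = y_1 y_2' - y_1' y_2 = C\, e^{-\int p(x)\,dx},
\qquad
y_2(x) = y_1(x) \int \frac{e^{-\int p(x)\,dx}}{y_1(x)^2}\,dx .
```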
Matrix methods for reflective inverse diffusion
NASA Astrophysics Data System (ADS)
Burgi, Kenneth W.; Marciniak, Michael A.; Nauyoks, Stephen E.; Oxley, Mark E.
2016-09-01
Reflective inverse diffusion is a method of refocusing light scattered by a rough surface. A spatial light modulator (SLM) is used to shape the wavefront of a HeNe laser at 632.8-nm wavelength to produce a converging phase front after reflection. Iterative methods previously demonstrated intensity enhancements of the focused spot over 100 times greater than the surrounding background speckle. This proof-of-concept method was very time consuming, and the algorithm started over each time the desired location of the focused spot in the observation plane was moved. Transmission matrices have been developed to control light scattered by transmission through a turbid medium. Time-varying phase maps are applied to an SLM and used to interrogate the phase scattering properties of the material. For each phase map, the resultant speckle intensity pattern is recorded less than 1 mm from the material surface and represents an observation plane of less than 0.02 mm2. Fourier transforms are used to extract the phase scattering properties of the material from the intensity measurements. We investigate the effectiveness of this method for constructing the reflection matrix (RM) of a diffuse reflecting medium where the propagation distances and observation plane are almost 1,000 times greater than in previous work based on transmissive scatter. The RM performance is based on its ability to refocus reflectively scattered light to a single focused spot or multiple foci in the observation plane. Diffraction-based simulations are used to corroborate experimental results.
NASA Astrophysics Data System (ADS)
Trigub, R. M.
2015-08-01
We study the convergence of linear means of the Fourier series $\sum_{k=-\infty}^{+\infty}\lambda_{k,\varepsilon}\hat{f}_k e^{ikx}$ of a function $f\in L_1[-\pi,\pi]$ to $f(x)$ as $\varepsilon\searrow 0$ at all points at which the derivative $\bigl(\int_0^x f(t)\,dt\bigr)'$ exists (i.e. at the d-points). Sufficient conditions for the convergence are stated in terms of the factors $\{\lambda_{k,\varepsilon}\}$ and, in the case of $\lambda_{k,\varepsilon}=\varphi(\varepsilon k)$, in terms of the condition that the functions $\varphi(x)$ and $x\varphi'(x)$ belong to the Wiener algebra $A(\mathbb{R})$. We also study a new problem concerning the convergence of means of the Abel-Poisson type, $\sum_{k=-\infty}^{\infty} r^{\psi(|k|)}\hat{f}_k e^{ikx}$, as $r\nearrow 1$.
New Type Continuities via Abel Convergence
Albayrak, Mehmet
2014-01-01
We investigate the concept of Abel continuity. A function f defined on a subset of ℝ, the set of real numbers, is Abel continuous if it preserves Abel convergent sequences. Some other types of continuity are also studied, and an interesting result is obtained: the uniform limit of a sequence of Abel continuous functions is Abel continuous, and the set of Abel continuous functions is a closed subset of the set of continuous functions. PMID:24883393
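For reference, Abel convergence of a sequence $(p_k)$ to a limit $\ell$ is usually defined via its Abel means (the paper's exact conventions may differ slightly):

```latex
\lim_{x \to 1^-} (1 - x) \sum_{k=0}^{\infty} p_k x^k = \ell .
```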
Yao, Jie; Lesage, Anne-Cécile; Hussain, Fazle; Bodmann, Bernhard G.; Kouri, Donald J.
2014-12-15
The reversion of the Born-Neumann series of the Lippmann-Schwinger equation is one of the standard ways to solve the inverse acoustic scattering problem. One limitation of the current inversion methods based on the reversion of the Born-Neumann series is that the velocity potential should have compact support. However, this assumption cannot be satisfied in certain cases, especially in seismic inversion. Based on the idea of distorted wave scattering, we explore an inverse scattering method for velocity potentials without compact support. The strategy is to decompose the actual medium into a known single-interface reference medium, which has the same asymptotic form as the actual medium, and a perturbative scattering potential with compact support. After introducing the method to calculate the Green's function for the known reference potential, the inverse scattering series and Volterra inverse scattering series are derived for the perturbative potential. Analytical and numerical examples demonstrate the feasibility and effectiveness of this method. In addition, to ensure stability of the numerical computation, the Lanczos averaging method is employed as a filter to reduce the Gibbs oscillations for the truncated discrete inverse Fourier transform of each order. Our method provides a rigorous mathematical framework for inverse acoustic scattering with a non-compact support velocity potential.
The clusters Abell 222 and Abell 223: a multi-wavelength view
NASA Astrophysics Data System (ADS)
Durret, F.; Laganá, T. F.; Adami, C.; Bertin, E.
2010-07-01
Context. The Abell 222 and 223 clusters are located at an average redshift z ~ 0.21 and are separated by 0.26 deg. Signatures of mergers have been previously found in these clusters, both in X-rays and at optical wavelengths, thus motivating our study. In X-rays, they are relatively bright, and Abell 223 shows a double structure. A filament has also been detected between the clusters at both optical and X-ray wavelengths. Aims: We analyse the optical properties of these two clusters based on deep imaging in two bands, derive their galaxy luminosity functions (GLFs) and correlate these properties with X-ray characteristics derived from XMM-Newton data. Methods: The optical part of our study is based on archive images obtained with the CFHT Megaprime/Megacam camera, covering a total region of about 1 deg2, or 12.3 × 12.3 Mpc2 at a redshift of 0.21. The X-ray analysis is based on archive XMM-Newton images. Results: The GLFs of Abell 222 in the g' and r' bands are well fit by a Schechter function; the GLF is steeper in r' than in g'. For Abell 223, the GLFs in both bands require a second component at bright magnitudes, added to a Schechter function; they are similar in both bands. The Serna & Gerbal method separates the two clusters well. No obvious filamentary structures are detected at very large scales around the clusters, but a third cluster at the same redshift, Abell 209, is located at a projected distance of 19.2 Mpc. X-ray temperature and metallicity maps reveal that the temperature and metallicity of the X-ray gas are quite homogeneous in Abell 222, while they are very perturbed in Abell 223. Conclusions: The Abell 222/Abell 223 system is complex. The two clusters that form this structure present very different dynamical states. Abell 222 is a smaller, less massive and almost isothermal cluster. On the other hand, Abell 223 is more massive and has most probably been crossed by a subcluster on its way to the northeast. As a consequence, the
Radiation Source Mapping with Bayesian Inverse Methods
NASA Astrophysics Data System (ADS)
Hykes, Joshua Michael
We present a method to map the spectral and spatial distributions of radioactive sources using a small number of detectors. Locating and identifying radioactive materials is important for border monitoring, accounting for special nuclear material in processing facilities, and in clean-up operations. Most methods to analyze these problems make restrictive assumptions about the distribution of the source. In contrast, the source-mapping method presented here allows an arbitrary three-dimensional distribution in space and a flexible group and gamma peak distribution in energy. To apply the method, the system's geometry and materials must be known. A probabilistic Bayesian approach is used to solve the resulting inverse problem (IP) since the system of equations is ill-posed. The probabilistic approach also provides estimates of the confidence in the final source map prediction. A set of adjoint flux, discrete ordinates solutions, obtained in this work by the Denovo code, are required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes are then used to form the linear model to map the state space to the response space. The test for the method is simultaneously locating a set of 137Cs and 60Co gamma sources in an empty room. This test problem is solved using synthetic measurements generated by a Monte Carlo (MCNP) model and using experimental measurements that we collected for this purpose. With the synthetic data, the predicted source distributions identified the locations of the sources to within tens of centimeters, in a room with an approximately four-by-four meter floor plan. Most of the predicted source intensities were within a factor of ten of their true value. The chi-square value of the predicted source was within a factor of five from the expected value based on the number of measurements employed. With a favorable uniform initial guess, the predicted source map was nearly identical to the true distribution
An inverse dynamic method yielding flexible manipulator state trajectories
NASA Technical Reports Server (NTRS)
Kwon, Dong-Soo; Book, Wayne J.
1990-01-01
An inverse dynamic equation for a flexible manipulator is derived in a state form. By dividing the inverse system into the causal part and the anticausal part, torque is calculated in the time domain for a certain end point trajectory, as well as trajectories of all state variables. The open loop control of the inverse dynamic method shows an excellent result in simulation. For practical applications, a control strategy adapting feedback tracking control to the inverse dynamic feedforward control is illustrated, and its good experimental result is presented.
An inverse method with regularity condition for transonic airfoil design
NASA Technical Reports Server (NTRS)
Zhu, Ziqiang; Xia, Zhixun; Wu, Liyi
1991-01-01
It is known from Lighthill's exact solution of the incompressible inverse problem that in the inverse design problem, the surface pressure distribution and the free stream speed cannot both be prescribed independently. This implies the existence of a constraint on the prescribed pressure distribution. The same constraint exists at compressible speeds. Presented here is an inverse design method for transonic airfoils. In this method, the target pressure distribution contains a free parameter that is adjusted during the computation to satisfy the regularity condition. Some design results are presented in order to demonstrate the capabilities of the method.
Comparison of iterative inverse coarse-graining methods
NASA Astrophysics Data System (ADS)
Rosenberger, David; Hanke, Martin; van der Vegt, Nico F. A.
2016-10-01
Deriving potentials for coarse-grained Molecular Dynamics (MD) simulations is frequently done by solving an inverse problem. Methods like Iterative Boltzmann Inversion (IBI) or Inverse Monte Carlo (IMC) have been widely used to solve this problem. The solution obtained by application of these methods guarantees a match in the radial distribution function (RDF) between the underlying fine-grained system and the derived coarse-grained system. However, these methods often fail in reproducing thermodynamic properties. To overcome this deficiency, additional thermodynamic constraints such as pressure or Kirkwood-Buff integrals (KBI) may be added to these methods. In this communication we test the ability of these methods to converge to a known solution of the inverse problem. With this goal in mind we have studied a binary mixture of two simple Lennard-Jones (LJ) fluids, in which no actual coarse-graining is performed. We further discuss whether full convergence is actually needed to achieve thermodynamic representability.
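The IBI update itself is one line. In the hypothetical sketch below, the molecular dynamics step is replaced by the analytic dilute-limit map g(r) = exp(-U/kT), so the iteration converges immediately; in practice each iteration requires a full coarse-grained simulation.

```python
import numpy as np

kT = 1.0
r = np.linspace(0.8, 3.0, 200)

def rdf_from_potential(U):
    # Stand-in for an MD simulation: in the dilute limit g(r) = exp(-U/kT).
    return np.exp(-U / kT)

# Target RDF generated from a "fine-grained" LJ-like potential.
U_true = 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)
g_target = rdf_from_potential(U_true)

U = np.zeros_like(r)                       # initial guess
for _ in range(10):
    g = rdf_from_potential(U)
    U = U + kT * np.log(g / g_target)      # the IBI update rule

assert np.max(np.abs(rdf_from_potential(U) - g_target)) < 1e-10
```

Matching the RDF this way says nothing about pressure or Kirkwood-Buff integrals, which is exactly why the constrained variants discussed in the abstract exist.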
Matrix-inversion method: Applications to Möbius inversion and deconvolution
NASA Astrophysics Data System (ADS)
Xie, Qian; Chen, Nan-Xian
1995-12-01
The purpose of this paper is threefold. The first is to show the matrix inversion method as a joint basis for the inversion of two important transforms: the Möbius and Laplace transforms. It is found that the Möbius transform is related to a multiplicative operator while the Laplace transform is related to an additive operator. The second is to show that the matrix inversion method is a useful tool for inverse problems not only in statistical physics but also in applied physics, by means of adding two other applications: one the derivation of the Fuoss-Kirkwood formulas for relaxation spectra in studies of anelasticity and dielectrics, and the other the reconstruction of real signals in signal processing. The third is to indicate the potential of the matrix inversion method as a rough algorithm for numerical solution of the convolution integral equation. The numerical examples given include the inversion of the Laplace transform and signal reconstruction with a Gaussian point spread kernel. (c) 1995 The American Physical Society
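As a concrete instance of the multiplicative (Möbius) case, the sketch below inverts F(n) = Σ_{d|n} f(d) via the Möbius function; the totient example is illustrative and not taken from the paper.

```python
def mobius(n):
    """Möbius function mu(n) via trial factorization."""
    mu = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0      # squared prime factor => mu = 0
            mu = -mu
        p += 1
    if n > 1:
        mu = -mu              # one remaining prime factor
    return mu

def mobius_invert(F, N):
    """Given F(n) = sum_{d|n} f(d) for n = 1..N, recover f by Mobius inversion."""
    return {n: sum(mobius(n // d) * F[d] for d in range(1, n + 1) if n % d == 0)
            for n in range(1, N + 1)}

# Example: F(n) = n is the divisor sum of Euler's totient, so inversion
# recovers phi(n).
N = 30
F = {n: n for n in range(1, N + 1)}
f = mobius_invert(F, N)
assert f[12] == 4 and f[30] == 8   # phi(12) = 4, phi(30) = 8
```

In matrix language, this is the inverse of the lower triangular divisibility matrix A with A[n, d] = 1 when d divides n, which is the paper's unifying viewpoint.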
Methodology for comparison of inverse heat conduction methods
NASA Astrophysics Data System (ADS)
Raynaud, M.; Beck, J. V.
1988-02-01
The inverse heat conduction problem involves the calculation of the surface heat flux from transient measured temperatures inside solids. The deviation of the estimated heat flux from the true heat flux due to stabilization procedures is called the deterministic bias. This paper defines two test problems that show the tradeoff between deterministic bias and sensitivity to measurement errors of inverse methods. For a linear problem, with the statistical assumptions of additive and uncorrelated errors having constant variance and zero mean, the second test case gives the standard deviation of the estimated heat flux. A methodology for the quantitative comparison of deterministic bias and standard deviation of inverse methods is proposed. Four numerical inverse methods are compared.
ERIC Educational Resources Information Center
Brown, Malcolm
2009-01-01
Inversions are fascinating phenomena. They are reversals of the normal or expected order. They occur across a wide variety of contexts. What do inversions have to do with learning spaces? The author suggests that they are a useful metaphor for the process that is unfolding in higher education with respect to education. On the basis of…
Geophysical Inversion With Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin
2016-04-01
We are investigating the use of Pareto multi-objective global optimization (PMOGO) methods to solve numerically complicated geophysical inverse problems. PMOGO methods can be applied to highly nonlinear inverse problems, to those where derivatives are discontinuous or simply not obtainable, and to those where multiple minima exist in the problem space. PMOGO methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. This allows a more complete assessment of the possibilities and provides opportunities to calculate statistics regarding the likelihood of particular model features. We are applying PMOGO methods to four classes of inverse problems. The first class comprises discrete-body problems, where the inversion determines values of several parameters that define the location, orientation, size and physical properties of an anomalous body represented by a simple shape, for example a sphere, ellipsoid, cylinder or cuboid. A PMOGO approach can determine not only the optimal shape parameters for the anomalous body but also the optimal shape itself. Furthermore, when one expects several anomalous bodies in the subsurface, a PMOGO inversion approach can determine an optimal number of parameterized bodies. The second class comprises standard mesh-based problems, where the physical property values in each cell are treated as continuous variables. The third class comprises lithological inversions, which are also mesh-based, but cells can only take discrete physical property values corresponding to known or assumed rock units. In the fourth class, surface geometry inversions, we consider a fundamentally different type of problem in which a model comprises wireframe surfaces representing contacts between rock units. The physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. Surface geometry inversion can be
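The Pareto-optimal filtering underlying PMOGO methods can be shown on a toy two-objective problem (random scores stand in for data misfit and regularization; this is not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
# Candidate models scored on two objectives: (data misfit, model roughness).
scores = rng.random((200, 2))

def pareto_mask(s):
    """True for points not dominated in both objectives by any other point."""
    mask = np.ones(len(s), dtype=bool)
    for i in range(len(s)):
        # j dominates i if it is no worse in every objective and strictly
        # better in at least one.
        dominated = np.all(s <= s[i], axis=1) & np.any(s < s[i], axis=1)
        mask[i] = not dominated.any()
    return mask

front = scores[pareto_mask(scores)]   # the Pareto front of candidate models
assert len(front) >= 1
```

A PMOGO inversion returns such a front over (misfit, regularization) rather than a single regularized solution, which is what enables the feature-likelihood statistics mentioned above.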
The Filtered Abel Transform and Its Application in Combustion Diagnostics
NASA Technical Reports Server (NTRS)
Simons, Stephen N. (Technical Monitor); Yuan, Zeng-Guang
2003-01-01
Many non-intrusive combustion diagnosis methods generate line-of-sight projections of a flame field. To reconstruct the spatial field of the measured properties, these projections need to be deconvoluted. When the spatial field is axisymmetric, commonly used deconvolution methods include the Abel transform, the onion peeling method, and the two-dimensional Fourier transform method and its derivatives, such as the filtered back projection methods. This paper proposes a new approach for performing the Abel transform, which possesses the exactness of the Abel transform and the flexibility of incorporating various filters in the reconstruction process. The Abel transform is an exact method and the simplest among these commonly used methods. It is shown in this paper that all exact reconstruction methods for axisymmetric distributions must be equivalent to the Abel transform because of its uniqueness and exactness. A detailed proof is presented to show that the two-dimensional Fourier method, when applied to axisymmetric cases, is identical to the Abel transform. Discrepancies among the various reconstruction methods stem from the different approximations made to perform numerical calculations. An equation relating the spectrum of a set of projection data to that of the corresponding spatial distribution is obtained, which shows that the spectrum of the projection is equal to the Abel transform of the spectrum of the corresponding spatial distribution. From this equation, if either the projection or the distribution is bandwidth limited, the other is also bandwidth limited, and both have the same bandwidth. If the two are not bandwidth limited, the Abel transform has a bias against low wave number components in most practical cases. This explains why the Abel transform and all exact deconvolution methods are sensitive to high wave number noise. The filtered Abel transform is based on the fact that the Abel transform of filtered projection data is equal
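The equivalence argument above concerns the continuous transform; numerically, the forward Abel projection and its shell-by-shell ("onion peeling") inverse can be written as an upper triangular matrix and its back-substitution. The discretization below (shell-constant emission, Gaussian test profile) is an illustrative assumption, not the paper's filtered transform:

```python
import numpy as np

n, rmax = 100, 4.0
r = np.linspace(0.0, rmax, n + 1)                    # shell boundaries
eps_true = np.exp(-(0.5 * (r[:-1] + r[1:])) ** 2)    # emission rate per shell

# Forward Abel matrix: chord length of shell j at impact parameter r[i].
L = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        L[i, j] = 2.0 * (np.sqrt(r[j + 1] ** 2 - r[i] ** 2)
                         - np.sqrt(max(r[j] ** 2 - r[i] ** 2, 0.0)))

P = L @ eps_true                 # line-of-sight projection (noiseless)
eps_rec = np.linalg.solve(L, P)  # onion peeling = triangular back-substitution

assert np.max(np.abs(eps_rec - eps_true)) < 1e-8
```

On noiseless data, the inversion is exact by construction; the sensitivity to high wave number noise discussed above appears as soon as P is perturbed, which is what motivates filtering.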
Noncoherent matrix inversion methods for Scansar processing
NASA Astrophysics Data System (ADS)
Dendal, Didier
1995-11-01
The aim of this work is to develop algebraic reconstruction techniques for low resolution power SAR imagery, as in the Scansar or QUICKLOOK imaging modes. The traditional reconstruction algorithms are not well suited to low resolution power purposes, since Fourier constraints impose a computational load of the same order as that of the usual SAR azimuthal resolution. Furthermore, range migration balancing is superfluous, as the migration does not cover a tenth of the resolution cell even in the least favorable situations. There are several possibilities for using matrices in the azimuthal direction. The most direct alternative leads to a matrix inversion. Unfortunately, the numerical conditioning of the problem is far from excellent, since each line of the matrix is an image of the antenna radiating pattern, with a shift between two successive lines corresponding to the distance covered by the SAR between two pulse transmissions (a few meters for the ERS1 satellite). We show how it is possible to turn a very ill-conditioned problem into an equivalent one without any divergence risk, by a technique of successive decimation by two (resolution power increased by two at each step). This technique leads to very small square matrices (two lines and two columns), the good numerical conditioning of which is certified by a well-known theorem of numerical analysis. The convergence rate of the process depends on the circumstances (mainly the distance between two pulse transmissions) and on the required accuracy, but five or six iterations already give excellent results. The process is applicable at four or five levels (numbers of decimations), which corresponds to initial matrices of 16 by 16 or 32 by 32. The azimuth processing is performed on the basis of the projection function concept (a tomographic analogy of radar principles). This integrated information results from classical coherent range compression. The aperture synthesis is obtained by non-coherent processing
Improved hybrid iterative optimization method for seismic full waveform inversion
NASA Astrophysics Data System (ADS)
Wang, Yi; Dong, Liang-Guo; Liu, Yu-Zhu
2013-06-01
In full waveform inversion (FWI), Hessian information of the misfit function is of vital importance for accelerating the convergence of the inversion; however, it usually is not feasible to directly calculate the Hessian matrix and its inverse. Although the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) and Hessian-free inexact Newton (HFN) methods are able to use approximate Hessian information, the information they collect is limited. The two methods can be interlaced because they are able to provide Hessian information for each other; however, the performance of the hybrid iterative method depends on an effective switch between the two methods. We have designed a new scheme to realize the dynamic switch between the two methods based on the decrease ratio (DR) of the misfit function (objective function), and we propose a modified hybrid iterative optimization method. In the new scheme, we compare the DR of the two methods for a given computational cost and choose the method with the faster DR. Using these steps, the modified method always implements the more efficient method. Tests on the Marmousi and overthrust models indicate that convergence with our modified method is significantly faster than with the L-BFGS method, with no loss of inversion quality. Moreover, our modified method slightly outperforms the enriched method in convergence speed. It also exhibits better efficiency than the HFN method.
Methodology Using Inverse Methods for Pit Characterization in Multilayer Structures
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Sabbagh, Harold A.; Sabbagh, Elias H.; Murphy, R. Kim; Concordia, Michael; Judd, David R.; Lindgren, Eric; Knopp, Jeremy
2006-03-01
This paper presents a methodology incorporating ultrasonic and eddy current data and NDE models to characterize pits in first and second layers. Approaches such as equivalent pit dimensions, approximate probe models, and iterative inversion schemes were designed to improve the reliability and speed of inverse methods for second layer pit characterization. A novel clutter removal algorithm was developed to compensate for coherent background noise. Validation was achieved using artificial and real pitting corrosion samples.
Stress inversion method and analysis of GPS array data
NASA Astrophysics Data System (ADS)
Hori, Muneo; Iinuma, Takeshi; Kato, Teruyuki
2008-01-01
The stress inversion method is developed to find a stress field which satisfies the equation of equilibrium for a body in a state of plane stress. When one stress-strain relation is known and data on the strain distribution on the body and traction along the boundary are provided, the method solves a well-posed problem, which is a linear boundary value problem for Airy's stress function, with the governing equation being the Poisson equation and the boundary conditions being of the Neumann type. The stress inversion method is applied to the Global Positioning System (GPS) array data of the Japanese Islands. The stress increment distribution, which is associated with the displacement increment measured by the GPS array, is computed, and it is found that the distribution is not uniform over the islands and that some regions have a relatively large increment. The elasticity inversion method is developed as an alternative to the stress inversion method; it is based on the assumption of linear elastic deformation with unknown elastic moduli and does not need boundary traction data, which are usually difficult to measure. This method is applied to the GPS array data of a small region in Japan to which the stress inversion method is not applicable. To cite this article: M. Hori et al., C. R. Mecanique 336 (2008).
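In plane stress, the construction described above rests on the standard Airy stress function relations (the paper's exact formulation may differ):

```latex
\sigma_{xx} = \frac{\partial^2 \phi}{\partial y^2}, \qquad
\sigma_{yy} = \frac{\partial^2 \phi}{\partial x^2}, \qquad
\sigma_{xy} = -\,\frac{\partial^2 \phi}{\partial x\,\partial y} ,
```

so equilibrium without body forces holds identically, and the measured strain data reduce the problem to a Poisson equation for φ with Neumann boundary conditions built from the traction.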
Estimating surface acoustic impedance with the inverse method.
Piechowicz, Janusz
2011-01-01
Sound field parameters are predicted with numerical methods in sound control systems, in the acoustic design of buildings, and in sound field simulations. These methods define the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques have been developed; one of them uses two microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary element method, in which estimating the acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics.
Saturation-inversion-recovery: A method for T1 measurement
NASA Astrophysics Data System (ADS)
Wang, Hongzhi; Zhao, Ming; Ackerman, Jerome L.; Song, Yiqiao
2017-01-01
Spin-lattice relaxation (T1) has always been measured by inversion-recovery (IR), saturation-recovery (SR), or related methods. These existing methods share a common behavior in that the function describing T1 sensitivity is the exponential, e.g., exp(- τ /T1), where τ is the recovery time. In this paper, we describe a saturation-inversion-recovery (SIR) sequence for T1 measurement with considerably sharper T1-dependence than those of the IR and SR sequences, and demonstrate it experimentally. The SIR method could be useful in improving the contrast between regions of differing T1 in T1-weighted MRI.
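A quick sketch of the conventional IR behaviour that SIR sharpens (M0 and T1 are hypothetical values, and this is the standard IR expression, not the proposed SIR sequence): T1 can be read off the null point of the recovery curve, τ_null = T1 ln 2.

```python
import numpy as np

M0, T1 = 1.0, 0.8          # arbitrary units / seconds (hypothetical values)

def ir_signal(tau):
    # Longitudinal magnetization after a standard inversion-recovery sequence:
    # Mz(tau) = M0 * (1 - 2 * exp(-tau / T1)).
    return M0 * (1.0 - 2.0 * np.exp(-tau / T1))

# Locate the null point numerically and compare with T1 * ln(2).
tau = np.linspace(1e-3, 4.0, 100000)
tau_null = tau[np.argmin(np.abs(ir_signal(tau)))]
assert abs(tau_null - T1 * np.log(2)) < 1e-3
```

The exponential factor exp(-τ/T1) is exactly the "T1 sensitivity function" the abstract refers to; the SIR sequence is designed to make this dependence sharper than a single exponential.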
Tissue elasticity measurement method using forward and inversion algorithms
NASA Astrophysics Data System (ADS)
Lee, Jong-Ha; Won, Chang-Hee; Park, Hee-Jun; Ku, Jeonghun; Heo, Yun Seok; Kim, Yoon-Nyun
2013-03-01
Elasticity is an important indicator of tissue health, with increased stiffness pointing to an increased risk of cancer. We investigated a tissue elasticity measurement method using forward and inversion algorithms for the application of early breast tumor identification. An optics-based elasticity measurement system was developed to capture images of embedded lesions using the total internal reflection principle. From the elasticity images, we developed a novel method to estimate the elasticity of an embedded lesion using a 3-D finite-element-model-based forward algorithm and a neural-network-based inversion algorithm. The experimental results showed that the proposed characterization method can differentiate benign and malignant breast lesions.
The method of common search direction of joint inversion
NASA Astrophysics Data System (ADS)
Zhao, C.; Tang, R.
2013-12-01
In geophysical inversion, the first step is to construct an objective function. The second step is to minimize that objective function with an optimization algorithm, such as the gradient method or the conjugate gradient method. Compared with the former, the conjugate gradient method finds a better search direction, making the error decrease faster, and it has been widely used for a long time. At present, joint inversion generally uses the conjugate gradient method. The most important task in joint inversion is to construct the partial derivative matrix with respect to the different physical properties. Constraints among the different physical properties must then be added to the integrated matrix, and the cross gradient is also used as a constraint in joint inversion. There are two ways to apply the cross gradient in the inverse process: it can be added to the data function or to the model function. One way is to add the cross gradient to the data function. The partial derivative matrix then doubles in size; moreover, the cross gradient must be calculated on every grid cell, which brings a large computational cost.
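The cross-gradient constraint referred to above couples two models through t = ∇m1 × ∇m2, which vanishes wherever the two property models share structural boundaries. A minimal 2-D sketch (an illustration of the standard cross-gradient quantity, not the authors' implementation) is:

```python
import numpy as np

def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    """Out-of-plane component of grad(m1) x grad(m2) on a 2-D (z, x) grid.
    Vanishes where the two models' boundaries are structurally aligned."""
    dm1_dz, dm1_dx = np.gradient(m1, dz, dx)
    dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
    return dm1_dx * dm2_dz - dm1_dz * dm2_dx

if __name__ == "__main__":
    z, x = np.mgrid[0:32, 0:32]
    m1 = (x > 16).astype(float)        # vertical boundary in property 1
    m2 = 2.0 * (x > 16).astype(float)  # same boundary, different property values
    print(np.abs(cross_gradient(m1, m2)).max())  # -> 0.0 (structurally identical)
```

Penalizing the squared cross gradient in the objective function (or appending it to the data function, as the abstract discusses) drives the two inverted models toward structural similarity.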
Towards an optimal inversion method for remote atmospheric sensing
NASA Technical Reports Server (NTRS)
King, J. I. F.
1969-01-01
The inference of atmospheric structure from satellite radiometric observations requires an inversion algorithm. A variety of techniques was spawned to meet these demands. One class, the nonlinear inversion methods, copes with the problem of data noise. Unlike linear techniques which require a priori data smoothing, the nonlinear method can be applied directly to raw data. The algorithm discriminates the noise input by resolving the inferences into two types of solution, associating the real roots with atmospheric structure while ascribing the imaginary roots to noise.
A reduced basis Landweber method for nonlinear inverse problems
NASA Astrophysics Data System (ADS)
Garmatter, Dominik; Haasdonk, Bernard; Harrach, Bastian
2016-03-01
We consider parameter identification problems in parametrized partial differential equations (PDEs). These lead to nonlinear ill-posed inverse problems. One way of solving them is to use iterative regularization methods, which typically require a large number of forward solutions during the solution process. In this article we consider the nonlinear Landweber method and couple it with the reduced basis method as a model order reduction technique in order to reduce the overall computational time. In particular, we consider PDEs with a high-dimensional parameter space, which are known to pose difficulties in the context of reduced basis methods. We present a new method that is able to handle such high-dimensional parameter spaces by combining the nonlinear Landweber method with adaptive online reduced basis updates. It is then applied to the inverse problem of reconstructing the conductivity in the stationary heat equation.
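The nonlinear Landweber iteration the article builds on has the generic form m_{k+1} = m_k − ω J(m_k)ᵀ (F(m_k) − d). The following is a minimal sketch with a toy componentwise forward map, not the PDE-constrained problem of the paper; the step size and forward map are hypothetical choices.

```python
import numpy as np

def landweber(forward, jacobian, data, m0, omega=0.05, iters=200):
    """Nonlinear Landweber iteration: m <- m - omega * J(m)^T (F(m) - d)."""
    m = m0.astype(float).copy()
    for _ in range(iters):
        residual = forward(m) - data
        m -= omega * jacobian(m).T @ residual
    return m

if __name__ == "__main__":
    # Toy nonlinear forward map F(m) = m**2 (componentwise), so J(m) = diag(2m).
    forward = lambda m: m**2
    jacobian = lambda m: np.diag(2.0 * m)
    data = np.array([4.0, 9.0])
    m = landweber(forward, jacobian, data, m0=np.array([1.0, 1.0]))
    print(m)  # approaches [2, 3]
```

In the paper's setting each evaluation of `forward` is an expensive PDE solve, which is precisely why the authors replace it with a reduced basis surrogate.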
Geostatistical joint inversion of seismic and potential field methods
NASA Astrophysics Data System (ADS)
Shamsipour, Pejman; Chouteau, Michel; Giroux, Bernard
2016-04-01
Interpretation of geophysical data needs to integrate different types of information to make the proposed model geologically realistic. Multiple data sets can reduce the uncertainty and non-uniqueness present in separate geophysical data inversions. Seismic data can play an important role in mineral exploration; however, processing and interpretation of seismic data are difficult due to the complexity of hard-rock geology. On the other hand, the model recovered from potential field methods is affected by an inherent non-uniqueness caused by the nature of the physics and by the underdetermination of the problem. Joint inversion of seismic and potential field data can mitigate the weaknesses of inverting each data set separately. A stochastic joint inversion method based on geostatistical techniques is applied to estimate density and velocity distributions from gravity and travel time data. The method fully integrates the physical relations between density and gravity, on one hand, and slowness and travel time, on the other hand. As a consequence, when the data are considered noise-free, the responses from the inverted slowness and density data exactly reproduce the observed data. The required density and velocity auto- and cross-covariances are assumed to follow a linear model of coregionalization (LCM); recently developed nonlinear models of coregionalization could also be applied if needed. The kernel function for the gravity method is obtained in closed form. For ray tracing, we use the shortest-path method (SPM) to calculate the operator matrix. The joint inversion is performed on a structured grid; however, it is possible to extend it to unstructured grids. The method is tested on two synthetic models: a model consisting of two objects buried in a homogeneous background and a model with a stochastic distribution of parameters. The results illustrate the capability of the method to improve the inverted model compared to the separately inverted models with either gravity
A method of inversion of satellite magnetic anomaly data
NASA Technical Reports Server (NTRS)
Mayhew, M. A.
1977-01-01
A method of finding a first approximation to a crustal magnetization distribution from inversion of satellite magnetic anomaly data is described. Magnetization is expressed as a Fourier Series in a segment of spherical shell. Input to this procedure is an equivalent source representation of the observed anomaly field. Instability of the inversion occurs when high frequency noise is present in the input data, or when the series is carried to an excessively high wave number. Preliminary results are given for the United States and adjacent areas.
Indium oxide inverse opal films synthesized by structure replication method
NASA Astrophysics Data System (ADS)
Amrehn, Sabrina; Berghoff, Daniel; Nikitin, Andreas; Reichelt, Matthias; Wu, Xia; Meier, Torsten; Wagner, Thorsten
2016-04-01
We present the synthesis of indium oxide (In2O3) inverse opal films with photonic stop bands in the visible range by a structure replication method. Artificial opal films made of poly(methyl methacrylate) (PMMA) spheres are utilized as template. The opal films are deposited via sedimentation facilitated by ultrasonication, and then impregnated by indium nitrate solution, which is thermally converted to In2O3 after drying. The quality of the resulting inverse opal film depends on many parameters; in this study the water content of the indium nitrate/PMMA composite after drying is investigated. Comparison of the reflectance spectra recorded by vis-spectroscopy with simulated data shows a good agreement between the peak position and calculated stop band positions for the inverse opals. This synthesis is less complex and highly efficient compared to most other techniques and is suitable for use in many applications.
Joint Geophysical Inversion With Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelievre, P. G.; Bijani, R.; Farquharson, C. G.
2015-12-01
Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class is standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class is also mesh-based, but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods, including the latter two classes of problems mentioned above. There are significant increases in computational requirements when PMOGO methods are used, but these can be ameliorated using parallelization and problem dimension reduction strategies.
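Pareto dominance, the selection criterion underlying PMOGO methods, can be illustrated with a brute-force front extraction. The objective pairs below are hypothetical (data misfit, regularization) values, not results from the study.

```python
def pareto_front(solutions):
    """Return the subset of solutions not dominated by any other.
    A solution dominates another if it is no worse in every objective
    and strictly better in at least one (all objectives minimized)."""
    front = []
    for a in solutions:
        dominated = any(
            all(bj <= aj for aj, bj in zip(a, b)) and any(bj < aj for aj, bj in zip(a, b))
            for b in solutions if b is not a
        )
        if not dominated:
            front.append(a)
    return front

if __name__ == "__main__":
    # Hypothetical (data misfit, model roughness) pairs for four candidate models
    models = [(1.0, 5.0), (2.0, 3.0), (2.5, 3.5), (4.0, 1.0)]
    print(pareto_front(models))  # [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
```

The model (2.5, 3.5) is dropped because (2.0, 3.0) is better in both objectives; the surviving suite is what a PMOGO run presents to the interpreter instead of a single weighted-sum optimum.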
Kılıç, Emre; Eibert, Thomas F.
2015-05-01
An approach combining boundary integral and finite element methods is introduced for the solution of three-dimensional inverse electromagnetic medium scattering problems. Based on the equivalence principle, unknown equivalent electric and magnetic surface current densities on a closed surface are utilized to decompose the inverse medium problem into two parts: a linear radiation problem and a nonlinear cavity problem. The first problem is formulated by a boundary integral equation, the computational burden of which is reduced by employing the multilevel fast multipole method (MLFMM). Reconstructed Cauchy data on the surface allows the utilization of the Lorentz reciprocity theorem and Poynting's theorem. Exploiting these theorems, the noise level and an initial guess are estimated for the cavity problem. Moreover, it is possible to determine whether the material is lossy or not. In the second problem, the estimated surface currents form inhomogeneous boundary conditions of the cavity problem. The cavity problem is formulated by the finite element technique and solved iteratively by the Gauss–Newton method to reconstruct the properties of the object. Regularization for both the first and the second problems is achieved by a Krylov subspace method. The proposed method is tested against both synthetic and experimental data and promising reconstruction results are obtained.
Inviscid transonic wing design using inverse methods in curvilinear coordinates
NASA Technical Reports Server (NTRS)
Gally, Thomas A.; Carlson, Leland A.
1987-01-01
An inverse wing design method has been developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver, FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; wing and fuselage fitted, curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse methods as an extension of previous methods existing for design in Cartesian coordinates is presented. Results are shown for inviscid wing design cases in super-critical flow regimes. The test cases selected also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.
A simple inverse design method for pump turbine
NASA Astrophysics Data System (ADS)
Yin, Junlian; Li, Jingjing; Wang, Dezhong; Wei, Xianzhu
2014-03-01
In this paper, a simple inverse design method is proposed for a pump turbine. The main point of this method is that the blade loading distribution is first extracted from an existing model and then applied in the new design. As an example, the blade loading distribution of a runner designed for a 200 m head was analyzed. Then, the combination of the extracted blade loading and a meridional passage suitable for a 500 m head was applied to design a new runner. After CFD analysis and model testing, it is shown that the new runner performs very well in terms of efficiency and cavitation. Therefore, as an alternative, the inverse design method can be extended to other design applications.
An equivalent source inversion method for imaging complex structures
NASA Astrophysics Data System (ADS)
Munk, Jens
Accurate subsurface imaging is of interest to geophysicists, having applications in geological mapping, underground void detection, ground contaminant mapping and land mine detection. The mathematical framework necessary to generate images of the subsurface from measurements of these fields describes the inverse problem, which is generally ill-posed and non-linear. Target scattering from an electromagnetic excitation results in a non-linear formulation, which is usually linearized using a weak scattering approximation. The equivalent source inversion method, in contrast, does not rely on a weak scattering approximation. The method combines the unknown total field and permittivity contrast into a single unknown distribution of "equivalent sources". Once determined, these sources are used to obtain an estimate of the total fields within the target or scatterer. The final step in the inversion is to use these fields to obtain the desired physical property. Excellent reconstructions are obtained when the target is illuminated using multiple look angles and frequencies. Target reconstructions are further enhanced using various iterative algorithms. The general formulation of the method allows it to be used in conjunction with a number of geophysical applications. Specifically, the method can be applied to any geophysical technique incorporating a measured response to a known induced input. This is illustrated by formulating the method within resistivity electrical prospecting.
Full Waveform Inversion Using the Adjoint Method for Earthquake Kinematics Inversion
NASA Astrophysics Data System (ADS)
Tago Pacheco, J.; Metivier, L.; Brossier, R.; Virieux, J.
2014-12-01
Extracting the information contained in seismograms for a better description of the Earth's structure and evolution is often based on only selected attributes of these signals. Exploiting the entire seismogram, Full Waveform Inversion based on an adjoint estimation of the gradient and Hessian operators has been recognized as a high-resolution imaging technique. Most earthquake kinematics inversions are still based on the estimation of the Fréchet derivatives for the gradient operator computation in linearized optimization. One may wonder about the benefit of the adjoint formulation, which avoids the estimation of these derivatives for the gradient estimation. Recently, Somala et al. (submitted) have detailed the adjoint method for earthquake kinematics inversion starting from the second-order wave equation in 3D media, using a conjugate gradient method for the optimization procedure. We explore a similar adjoint formulation based on the first-order wave equations while using different optimization schemes. Indeed, for earthquake kinematics inversion, the model space is the slip-rate spatio-temporal history over the fault. Seismograms obtained from a dislocation rupture simulation are linearly linked to this slip-rate distribution. Therefore, we introduce a simple systematic procedure based on a Lagrangian formulation of the adjoint method in the linear problem of earthquake kinematics. We have developed both the gradient estimation using the adjoint formulation and the Hessian influence using the second-order adjoint formulation (Metivier et al., 2013, 2014). Since earthquake kinematics is a linear problem, the minimization problem is quadratic; hence, only one solution of the Newton equations is needed when the Hessian is taken into account. Moreover, a formal uncertainty estimation over the slip-rate distribution could be deduced from this Hessian analysis. On simple synthetic examples for an antiplane kinematic rupture configuration in a 2D medium, we illustrate the properties of
Inverse design of airfoils using a flexible membrane method
NASA Astrophysics Data System (ADS)
Thinsurat, Kamon
The Modified Garabedian-McFadden (MGM) method is used to inversely design airfoils. A Finite Difference Method (FDM) for non-uniform grids was developed to discretize the MGM equation for numerical solution. The FDM for non-uniform grids has the advantage that it can be used flexibly with unstructured airfoil grids. The commercial software FLUENT is used as the flow solver. Several conditions are set in FLUENT, such as subsonic inviscid flow, subsonic viscous flow, transonic inviscid flow, and transonic viscous flow, to test the inverse design code for each condition. A moving grid program is used to create a mesh for each new airfoil prior to importing meshes into FLUENT for flow analysis. For validation, an iterative process is used so that the Cp distribution of the initial airfoil, the NACA0011, achieves the Cp distribution of the target airfoil, the NACA2315, for the subsonic inviscid case at M=0.2. Three other cases were carried out to validate the code. After the code validations, the inverse design method was used to design a shock-free airfoil in the transonic condition and to design a separation-free airfoil at a high angle of attack in the subsonic condition.
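Finite differencing on a non-uniform grid, as used above, typically relies on a three-point second-derivative stencil whose weights depend on the local spacings h1 and h2. The thesis's exact scheme is not given, so the following is a generic sketch of that stencil; it is exact for quadratics.

```python
import numpy as np

def second_derivative_nonuniform(f, x):
    """Three-point second-derivative stencil on a non-uniform grid:
    f'' ~ 2*(f[i-1]*h2 - f[i]*(h1+h2) + f[i+1]*h1) / (h1*h2*(h1+h2)),
    where h1 = x[i]-x[i-1], h2 = x[i+1]-x[i]. Exact for quadratics."""
    d2 = np.empty(len(x) - 2)
    for i in range(1, len(x) - 1):
        h1 = x[i] - x[i - 1]
        h2 = x[i + 1] - x[i]
        d2[i - 1] = 2.0 * (f[i - 1] * h2 - f[i] * (h1 + h2) + f[i + 1] * h1) / (h1 * h2 * (h1 + h2))
    return d2

if __name__ == "__main__":
    x = np.array([0.0, 0.1, 0.35, 0.6, 1.0])  # deliberately uneven spacing
    f = 3.0 * x**2                            # f'' = 6 everywhere
    print(second_derivative_nonuniform(f, x))  # [6. 6. 6.]
```

Because the weights adapt to the local spacing, the same routine works on grids clustered near an airfoil leading edge without remapping to a uniform coordinate.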
Inverse method for estimating respiration rates from decay time series
NASA Astrophysics Data System (ADS)
Forney, D. C.; Rothman, D. H.
2012-09-01
Long-term organic matter decomposition experiments typically measure the mass lost from decaying organic matter as a function of time. These experiments can provide information about the dynamics of carbon dioxide input to the atmosphere and controls on natural respiration processes. Decay slows down with time, suggesting that organic matter is composed of components (pools) with varied lability. Yet it is unclear how the appropriate rates, sizes, and number of pools vary with organic matter type, climate, and ecosystem. To better understand these relations, it is necessary to properly extract the decay rates from decomposition data. Here we present a regularized inverse method to identify an optimally-fitting distribution of decay rates associated with a decay time series. We motivate our study by first evaluating a standard, direct inversion of the data. The direct inversion identifies a discrete distribution of decay rates, where mass is concentrated in just a small number of discrete pools. It is consistent with identifying the best fitting "multi-pool" model, without prior assumption of the number of pools. However we find these multi-pool solutions are not robust to noise and are over-parametrized. We therefore introduce a method of regularized inversion, which identifies the solution which best fits the data but not the noise. This method shows that the data are described by a continuous distribution of rates, which we find is well approximated by a lognormal distribution, and consistent with the idea that decomposition results from a continuum of processes at different rates. The ubiquity of the lognormal distribution suggest that decay may be simply described by just two parameters: a mean and a variance of log rates. We conclude by describing a procedure that estimates these two lognormal parameters from decay data. Matlab codes for all numerical methods and procedures are provided.
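The regularized inversion described above can be sketched by discretizing m(t) = Σ_j p_j exp(−k_j t) on a grid of rates and solving a Tikhonov-regularized nonnegative least-squares problem. This is an illustrative stand-in for the authors' Matlab codes; the rate grid, regularization weight, and two-pool test signal are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

def invert_decay(t, mass, k_grid, lam=1e-2):
    """Recover a nonnegative rate distribution p with mass(t) ~ sum_j p_j exp(-k_j t),
    via Tikhonov-regularized NNLS (stack lam*I under the kernel matrix)."""
    A = np.exp(-np.outer(t, k_grid))
    A_reg = np.vstack([A, lam * np.eye(k_grid.size)])
    b_reg = np.concatenate([mass, np.zeros(k_grid.size)])
    p, _ = nnls(A_reg, b_reg)
    return p

if __name__ == "__main__":
    t = np.linspace(0.0, 10.0, 60)
    mass = 0.7 * np.exp(-0.3 * t) + 0.3 * np.exp(-2.0 * t)  # synthetic two-pool decay
    k_grid = np.logspace(-2, 1, 40)
    p = invert_decay(t, mass, k_grid)
    recon = np.exp(-np.outer(t, k_grid)) @ p
    print(np.abs(recon - mass).max())
```

As the abstract notes, the unregularized fit concentrates mass in a few discrete pools; increasing `lam` smooths the recovered distribution toward the continuous (e.g. lognormal) form at the cost of a slightly larger data misfit.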
Neural Network method for Inverse Modeling of Material Deformation
Allen, J.D., Jr.; Ivezic, N.D.; Zacharia, T.
1999-07-10
A method is described for inverse modeling of material deformation in applications of importance to the sheet metal forming industry. The method was developed in order to assess the feasibility of utilizing empirical data in the early stages of the design process as an alternative to conventional prototyping methods. Because properly prepared and employed artificial neural networks (ANN) were known to be capable of codifying and generalizing large bodies of empirical data, they were the natural choice for the application. The product of the work described here is a desktop ANN system that can produce in one pass an accurate die design for a user-specified part shape.
Express method of construction of accurate inverse pole figures
NASA Astrophysics Data System (ADS)
Perlovich, Yu; Isaenkova, M.; Fesenko, V.
2016-04-01
For metallic materials with FCC and BCC crystal lattices, a new method for constructing X-ray texture inverse pole figures (IPF) from tilt curves of a spinning sample is proposed; it is characterized by high accuracy and rapidity (hence "express"). In contrast to the currently widespread method of constructing IPFs using an orientation distribution function (ODF) synthesized from several partial direct pole figures, the proposed method is based on a simple geometrical interpretation of the measurement procedure and requires minimal operating time on the X-ray diffractometer.
An Efficient Inverse Aerodynamic Design Method For Subsonic Flows
NASA Technical Reports Server (NTRS)
Milholen, William E., II
2000-01-01
Computational Fluid Dynamics based design methods are maturing to the point that they are beginning to be used in the aircraft design process. Many design methods however have demonstrated deficiencies in the leading edge region of airfoil sections. The objective of the present research is to develop an efficient inverse design method which is valid in the leading edge region. The new design method is a streamline curvature method, and a new technique is presented for modeling the variation of the streamline curvature normal to the surface. The new design method allows the surface coordinates to move normal to the surface, and has been incorporated into the Constrained Direct Iterative Surface Curvature (CDISC) design method. The accuracy and efficiency of the design method is demonstrated using both two-dimensional and three-dimensional design cases.
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
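A minimal (μ+λ) evolution strategy, one ingredient of the HEOA described above, can be sketched as follows; the locally weighted linear regression component is omitted, and the population sizes, step-size schedule, and test objective are all hypothetical rather than the authors' settings.

```python
import numpy as np

def evolution_strategy(objective, x0, sigma=0.3, mu=5, lam=20, iters=150, seed=0):
    """Minimal (mu+lambda) evolution strategy: sample offspring around the
    current parents, keep the best mu of parents+offspring each generation."""
    rng = np.random.default_rng(seed)
    pop = x0 + sigma * rng.standard_normal((mu, x0.size))
    for _ in range(iters):
        parents = pop[rng.integers(0, mu, lam)]           # random parent selection
        offspring = parents + sigma * rng.standard_normal((lam, x0.size))
        combined = np.vstack([pop, offspring])
        fitness = np.array([objective(v) for v in combined])
        pop = combined[np.argsort(fitness)[:mu]]          # elitist (mu+lambda) selection
        sigma *= 0.98                                     # slow step-size annealing
    return pop[0]  # best individual found

if __name__ == "__main__":
    # Hypothetical quadratic objective standing in for the PSD misfit function
    target = np.array([2.0, -1.0])
    best = evolution_strategy(lambda v: float(np.sum((v - target)**2)), np.zeros(2))
    print(best)
```

In the particle-sizing context, `objective` would measure the misfit between a candidate PSD's predicted scattering pattern and the measured angular intensity.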
A Modal/WKB Inversion Method for Determining Sound Speed Profiles in the Ocean and Ocean Bottom
1988-06-01
For a point source at r = 0 and z = z0, the pressure p(r, z, z0), with harmonic time dependence exp(-iωt), satisfies an inhomogeneous Helmholtz equation of the form [∇² + k²(z)] p(r, z, z0) = -(2/r) δ(z - z0) δ(r) (Eq. 5.2 of the report). Substituting v(z) = P(z)/√ρ(z) gives a Schrödinger-type equation, d²v/dz² + (k²(z) + ρ-dependent correction terms - k_r²) v(z) = 0 (Eq. 5.3). Modal data are the input used in generating a profile-dependent functional relationship for the phase integral, and the inversion relations are based on the Abel integral equation.
Determination of transient fluid temperature using the inverse method
NASA Astrophysics Data System (ADS)
Jaremkiewicz, Magdalena
2014-03-01
This paper proposes an inverse method to obtain accurate measurements of the transient temperature of fluid. A method for unit step and linear rise of temperature is presented. For this purpose, the thermometer housing is modelled as a full cylindrical element (with no inner hole), divided into four control volumes. Using the control volume method, the heat balance equations can be written for each of the nodes of each of the control volumes. Thus, for a known temperature in the middle of the cylindrical element, the distribution of temperature in three nodes and the heat flux at the outer surface were obtained. For a known value of the heat transfer coefficient, the temperature of the fluid can be calculated using the boundary condition. Additionally, results of experimental research are presented. The research was carried out during the start-up of an experimental installation, which comprises: a steam generator unit, an installation for boiler feed water treatment, a tray-type deaerator, a blow-down flash vessel for heat recovery, a steam pressure reduction station, a boiler control system and a steam header made of martensitic high-alloy P91 steel. Based on temperature measurements made in the steam header using the inverse method, accurate measurements of the transient temperature of the steam were obtained. The results of the calculations are compared with the real temperature of the steam, which can be determined for a known pressure and enthalpy.
NASA Astrophysics Data System (ADS)
Rezaie, Mohammad; Moradzadeh, Ali; Kalate, Ali Nejati; Aghajani, Hamid
2017-01-01
Inversion of gravity data is one of the important steps in the interpretation of practical data. One of the most interesting geological frameworks for gravity data inversion is the detection of sharp boundaries between an orebody and the host rocks. Focusing inversion is able to reconstruct a sharp image of the geological target, and this technique can be efficiently applied to the quantitative interpretation of gravity data. In this study, a new reweighted regularized method for the 3D focusing inversion technique, based on the Lanczos bidiagonalization method, is developed. The inversion results for synthetic data show that the new method is faster than the common reweighted regularized conjugate gradient method at producing an acceptable solution to the focusing inverse problem. The newly developed inversion scheme is also applied to the inversion of gravity data collected over the San Nicolas Cu-Zn orebody in Zacatecas State, Mexico. The inversion results indicate a remarkable correlation with the true structure of the orebody as determined from drilling data.
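A feel for bidiagonalization-based inversion can be had with LSQR, which solves the damped least-squares problem min ||Am - d||² + damp²||m||² via Golub-Kahan (Lanczos) bidiagonalization. The toy smoothing-kernel problem below is illustrative only; it is not the authors' gravity kernel or their reweighting scheme.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Ill-conditioned toy kernel: smooth averaging rows, loosely mimicking
# the smearing character of gravity sensitivities (hypothetical setup).
n = 50
x = np.linspace(0.0, 1.0, n)
A = np.exp(-30.0 * (x[:, None] - x[None, :])**2)   # smoothing kernel matrix
m_true = np.zeros(n)
m_true[20:30] = 1.0                                # blocky "orebody" model
d = A @ m_true                                     # noise-free synthetic data

# Damped least squares solved by Lanczos bidiagonalization inside LSQR.
m_est = lsqr(A, d, damp=1e-3, iter_lim=500)[0]
print(np.linalg.norm(A @ m_est - d) / np.linalg.norm(d))  # small relative misfit
```

In a focusing inversion, this damped solve would sit inside an outer loop that reweights the regularization to sharpen the recovered boundaries, which is the iteration the paper accelerates.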
The cluster Abell 780: an optical view
NASA Astrophysics Data System (ADS)
Durret, F.; Slezak, E.; Adami, C.
2009-11-01
Context: The Abell 780 cluster, better known as the Hydra A cluster, has been thoroughly analyzed in X-rays. However, little is known about its optical properties. Aims: We propose to derive the galaxy luminosity function (GLF) in this apparently relaxed cluster and to search for possible environmental effects by comparing the GLFs in various regions and by looking at the galaxy distribution at large scale around Abell 780. Methods: Our study is based on optical images obtained with the ESO 2.2m telescope and WFI camera in the B and R bands, covering a total region of 67.22 × 32.94 arcmin^2, or 4.235 × 2.075 Mpc^2 for a cluster redshift of 0.0539. Results: In a region of 500 kpc radius around the cluster center, the GLF in the R band shows a double structure, with a broad and flat bright part and a flat faint end that can be fit by a power law with an index α ~ -0.85 ± 0.12 in the 20.25 ≤ R ≤ 21.75 interval. If we divide this 500 kpc radius region into north+south or east+west halves, we find no clear difference between the GLFs in these smaller regions. No obvious large-scale structure is apparent within 5 Mpc of the cluster, based on galaxy redshifts and magnitudes collected from the NED database in a much larger region than that covered by our data, suggesting that there is no major infall of material in any preferential direction. However, the Serna-Gerbal method reveals a gravitationally bound structure of 27 galaxies, which includes the cD, and a more strongly gravitationally bound structure of 14 galaxies. Conclusions: These optical results agree with the overall relaxed structure of Abell 780 previously derived from X-ray analyses. Based on observations obtained at the European Southern Observatory, program ESO 68.A-0084(A), P. I. E. Slezak. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics
Estimates of tropical bromoform emissions using an inversion method
NASA Astrophysics Data System (ADS)
Ashfold, M. J.; Harris, N. R. P.; Manning, A. J.; Robinson, A. D.; Warwick, N. J.; Pyle, J. A.
2014-01-01
Bromine plays an important role in ozone chemistry in both the troposphere and stratosphere. When measured by mass, bromoform (CHBr3) is thought to be the largest organic source of bromine to the atmosphere. While seaweed and phytoplankton are known to be dominant sources, the size and the geographical distribution of CHBr3 emissions remain uncertain. Particularly little is known about emissions from the Maritime Continent, which have usually been assumed to be large, and which appear to be especially likely to reach the stratosphere. In this study we aim to reduce this uncertainty by combining the first multi-annual set of CHBr3 measurements from this region with an inversion process to investigate systematically the distribution and magnitude of CHBr3 emissions. The novelty of our approach lies in the application of the inversion method to CHBr3. We find that local measurements of a short-lived gas like CHBr3 can be used to constrain emissions from only a relatively small, sub-regional domain. We then obtain detailed estimates of CHBr3 emissions within this area, which appear to be relatively insensitive to the assumptions inherent in the inversion process. We extrapolate this information to produce estimated emissions for the entire tropics (defined as 20° S-20° N) of 225 Gg CHBr3 yr-1. The ocean in the area we base our extrapolations upon is typically somewhat shallower, and more biologically productive, than the tropical average. Despite this, our tropical estimate is lower than most other recent studies, and suggests that CHBr3 emissions in the coastline-rich Maritime Continent may not be stronger than emissions in other parts of the tropics.
Inverse method for estimating shear stress in machining
NASA Astrophysics Data System (ADS)
Burns, T. J.; Mates, S. P.; Rhorer, R. L.; Whitenton, E. P.; Basak, D.
2016-01-01
An inverse method is presented for estimating shear stress in the work material in the region of chip-tool contact along the rake face of the tool during orthogonal machining. The method is motivated by a model of heat generation in the chip, which is based on a two-zone contact model for friction along the rake face, and an estimate of the steady-state flow of heat into the cutting tool. Given an experimentally determined discrete set of steady-state temperature measurements along the rake face of the tool, it is shown how to estimate the corresponding shear stress distribution on the rake face, even when no friction model is specified.
Simple method for the synthesis of inverse patchy colloids
NASA Astrophysics Data System (ADS)
van Oostrum, P. D. J.; Hejazifar, M.; Niedermayer, C.; Reimhult, E.
2015-06-01
Inverse patchy colloids (IPCs) have recently been introduced as a conceptually simple model to study the phase behavior of heterogeneously charged units. This class of patchy particles is referred to as inverse to highlight that the patches repel each other, in contrast to the attractive interactions of conventional patches. IPCs demonstrate a complex interplay between attractions and repulsions that depends on their patch size and charge, their relative orientations, as well as on the charge of the substrate below; the resulting wide array of different types of aggregates that can be formed motivates their fabrication and use as a model system. We present a novel method, which does not rely on clean-room facilities and is easily scalable, to modify the surface of colloidal particles so as to create two polar regions with a charge opposite to that of the equatorial region. The patch size is characterized by electron microscopy, and the particles are fluorescently labeled to facilitate studying their phase behavior with confocal microscopy. We show that the pH can be used to tune the charges of the IPCs, thus offering a tool to steer the self-assembly.
Comparison of Optimal Design Methods in Inverse Problems.
Banks, H T; Holm, Kathleen; Kappel, Franz
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate the ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29].
Inverse problem for electromagnetic propagation in a dielectric medium using Markov chain Monte Carlo method (Preprint)
2012-08-01
AFRL-RX-WP-TP-2012-0397. This report describes a stochastic inverse methodology arising in electromagnetic imaging. Nondestructive testing using guided microwaves covers a wide range of
Multiresolution subspace-based optimization method for inverse scattering problems.
Oliveri, Giacomo; Zhong, Yu; Chen, Xudong; Massa, Andrea
2011-10-01
This paper investigates an approach to inverse scattering problems based on the integration of the subspace-based optimization method (SOM) within a multifocusing scheme in the framework of the contrast source formulation. The scattering equations are solved by a nested three-step procedure composed of (a) an outer multiresolution loop dealing with the identification of the regions of interest within the investigation domain through an iterative information-acquisition process, (b) a spectrum analysis step devoted to the reconstruction of the deterministic components of the contrast sources, and (c) an inner optimization loop aimed at retrieving the ambiguous components of the contrast sources through a conjugate gradient minimization of a suitable objective function. A set of representative reconstruction results is discussed to provide numerical evidence of the effectiveness of the proposed algorithmic approach as well as to assess the features and potentialities of the multifocusing integration in comparison with the state-of-the-art SOM implementation.
Inverse methods for stellarator error-fields and emission
NASA Astrophysics Data System (ADS)
Hammond, K. C.; Anichowski, A.; Brenner, P. W.; Diaz-Pacheco, R.; Volpe, F. A.; Wei, Y.; Kornbluth, Y.; Pedersen, T. S.; Raftopoulos, S.; Traverso, P.
2016-10-01
Work at the CNT stellarator at Columbia University has resulted in the development of two inverse diagnosis techniques that infer difficult-to-measure properties from simpler measurements. First, CNT's error-field is determined using a Newton-Raphson algorithm to infer coil misalignments based on measurements of flux surfaces. This is obtained by reconciling the computed flux surfaces (a function of coil misalignments) with the measured flux surfaces. Second, the plasma emissivity profile is determined based on a single CCD camera image using an onion-peeling method. This approach posits a system of linear equations relating pixel brightness to emission from a discrete set of plasma layers bounded by flux surfaces. Results for both of these techniques as applied to CNT will be shown, and their applicability to large modular coil stellarators will be discussed.
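The onion-peeling step described above, relating pixel brightness to emission from nested layers, reduces under an axisymmetric discretization to a triangular linear system. The sketch below is a generic illustration of that idea with piecewise-constant shells; the function name and the shell geometry are our own assumptions, not taken from the CNT implementation:

```python
import numpy as np

def onion_peel(y, B):
    """Onion-peeling inversion for axisymmetric emission.

    y : impact parameters of the chords (shell inner radii), increasing
    B : measured line-integrated brightness along each chord

    Each chord at y[i] only samples shells with radius >= y[i], so the
    path-length matrix is upper triangular and the system solves directly.
    """
    n = len(y)
    r = np.append(y, y[-1] + (y[-1] - y[-2]))   # shell edges (outermost extrapolated)
    L = np.zeros((n, n))
    for i in range(n):                           # chord at impact parameter y[i]
        for j in range(i, n):                    # shells the chord passes through
            L[i, j] = 2.0 * (np.sqrt(r[j + 1]**2 - y[i]**2)
                             - np.sqrt(max(r[j]**2 - y[i]**2, 0.0)))
    return np.linalg.solve(L, B)                 # per-shell emissivities
```

For a uniformly emitting cylinder the brightness profile is 2*sqrt(R^2 - y^2), and the inversion recovers a flat emissivity, which makes a convenient sanity check.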
MASS SUBSTRUCTURE IN ABELL 3128
McCleary, J.; Dell’Antonio, I.; Huwe, P.
2015-05-20
We perform a detailed two-dimensional weak gravitational lensing analysis of the nearby (z = 0.058) galaxy cluster Abell 3128 using deep ugrz imaging from the Dark Energy Camera (DECam). We have designed a pipeline to remove instrumental artifacts from DECam images and stack multiple dithered observations without inducing a spurious ellipticity signal. We develop a new technique to characterize the spatial variation of the point-spread function that enables us to circularize the field to better than 0.5% and thereby extract the intrinsic galaxy ellipticities. By fitting photometric redshifts to sources in the observation, we are able to select a sample of background galaxies for weak-lensing analysis free from low-redshift contaminants. Photometric redshifts are also used to select a high-redshift galaxy subsample with which we successfully isolate the signal from an interloping z = 0.44 cluster. We estimate the total mass of Abell 3128 by fitting the tangential ellipticity of background galaxies with the weak-lensing shear profile of a Navarro–Frenk–White (NFW) halo and also perform NFW fits to substructures detected in the 2D mass maps of the cluster. This study yields one of the highest resolution mass maps of a low-z cluster to date and is the first step in a larger effort to characterize the redshift evolution of mass substructures in clusters.
A comparison of lidar inversion methods for cirrus applications
NASA Technical Reports Server (NTRS)
Elouragini, Salem; Flamant, Pierre H.
1992-01-01
Several methods for inverting the lidar equation are suggested as means to derive the cirrus optical properties (backscatter coefficient beta, extinction coefficient alpha, and optical depth delta) at one wavelength. The lidar equation can be inverted in a linear or logarithmic form; either solution assumes a linear relationship beta = k alpha, where k is the lidar ratio. A number of problems prevent us from calculating alpha (or beta) with good accuracy. Some of these are as follows: (1) the multiple scattering effect (most authors neglect it); (2) an absolute calibration of the lidar system (difficult and sometimes not possible); (3) lack of accuracy in the lidar ratio k (taken as constant, but in fact it varies with range and cloud species); and (4) the determination of the boundary condition for the logarithmic solution, which depends on the signal-to-noise ratio (SNR) at cloud top. An inversion in linear form needs an absolute calibration of the system. In practice one uses molecular backscattering below the cloud to calibrate the system. This calibration is not permanent because the lower-atmosphere turbidity is variable. For a logarithmic solution, a reference extinction coefficient (alpha(sub f)) at cloud top is required. Several methods to determine alpha(sub f) have been suggested. We tested these methods at low SNR. This led us to propose two new methods, referenced as S1 and S2.
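The logarithmic solution with a far-end reference value alpha_f mentioned above is the classic backward (Klett-type) inversion. A minimal sketch follows, assuming a range-constant lidar ratio (beta = k alpha, so k cancels) and ideal single-scatter signals; the function name and arguments are illustrative, not from the paper:

```python
import numpy as np

def klett_backward(r, P, alpha_f):
    """Backward (logarithmic) solution of the single-scatter lidar equation.

    r       : range gates (m), strictly increasing
    P       : raw lidar signal at each gate
    alpha_f : reference extinction coefficient at the far end r[-1] (1/m)

    Assumes beta = k * alpha with a range-constant lidar ratio k.
    """
    S = np.log(P * r**2)              # range-corrected log signal
    e = np.exp(S - S[-1])             # exp(S - S_f)
    dr = np.diff(r)
    seg = 0.5 * (e[1:] + e[:-1]) * dr             # trapezoid segments
    integral = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])  # int_r^{r_f}
    return e / (1.0 / alpha_f + 2.0 * integral)   # alpha(r)
```

Integrating backward from the reference range keeps the solution numerically stable, which is why the boundary value at cloud top (and hence the SNR there) matters so much.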
Inverse Methods. Interdisciplinary Elements of Methodology, Computation, and Applications
NASA Astrophysics Data System (ADS)
Jacobsen, Bo Holm; Mosegaard, Klaus; Sibani, Paolo
Over the last few decades inversion concepts have become an integral part of experimental data interpretation in several branches of science. In numerous cases similar inversion-like techniques were developed independently in separate disciplines, sometimes based on different lines of reasoning, but not always to the same level of sophistication. This book is based on the Interdisciplinary Inversion Conference held at the University of Aarhus, Denmark. For scientists and graduate students in geophysics, astronomy, oceanography, petroleum geology, and geodesy, the book offers a wide variety of examples and theoretical background in the field of inversion techniques.
Comparison of optimal design methods in inverse problems
NASA Astrophysics Data System (ADS)
Banks, H. T.; Holm, K.; Kappel, F.
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate the ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
Research on inverse methods and optimization in Italy
NASA Technical Reports Server (NTRS)
Larocca, Francesco
1991-01-01
The research activities in Italy on inverse design and optimization are reviewed. The review is focused on aerodynamic aspects of turbomachinery and wing section design. Inverse design of turbomachinery blade rows and ducts in the subsonic and transonic regimes is illustrated by work at the Politecnico di Torino and in the turbomachinery industry (FIAT AVIO).
Numerical methods for problems involving the Drazin inverse
NASA Technical Reports Server (NTRS)
Meyer, C. D., Jr.
1979-01-01
The objective was to try to develop a useful numerical algorithm for the Drazin inverse and to analyze the numerical aspects of the applications of the Drazin inverse relating to the study of homogeneous Markov chains and systems of linear differential equations with singular coefficient matrices. It is felt that all objectives were accomplished with a measurable degree of success.
Asteroid spin and shape modelling using two lightcurve inversion methods
NASA Astrophysics Data System (ADS)
Marciniak, Anna; Bartczak, Przemyslaw; Konstanciak, Izabella; Dudzinski, Grzegorz; Mueller, Thomas G.; Duffard, Rene
2016-10-01
We are conducting an observing campaign to counteract strong selection effects in photometric studies of asteroids. Our targets are long-period (P > 12 hours) and low-amplitude (a_max < 0.25 mag) asteroids which, although numerous, have poor lightcurve datasets (Marciniak et al. 2015, PSS 118, 256). As a result, such asteroids are very poorly studied in terms of their spins and shapes. Our campaign targets a sample of around 100 bright (H < 11 mag) main-belt asteroids sharing both of these features, resulting in a few tens of new composite lightcurves each year. The data gathered so far have allowed us to construct detailed spin and shape models for about ten targets. In this study we perform spin and shape modelling using two lightcurve inversion methods: convex inversion (Kaasalainen et al. 2001, Icarus, 153, 37) and the nonconvex SAGE modelling algorithm (Shaping Asteroids with Genetic Evolution, Bartczak et al. 2014, MNRAS, 443, 1802). These two methods are independent of each other and are based on different assumptions for the shape. Thus, the results obtained on the same datasets provide a cross-check of both the methods and the resulting spin and shape models. The results for the spin solutions are highly consistent, and the shape models are similar, though the ones from the SAGE algorithm provide more details of the surface features. Nonconvex shapes produced by SAGE have been compared with direct images from spacecraft, and the first results for targets like Eros or Lutetia (Bartczak et al. 2014, ACM conf. 29B) show a high level of agreement. Another way of validation is comparison of the shape models with asteroid shape contours obtained using different techniques (like stellar occultation timings or adaptive optics imaging), or against data in the thermal infrared range gathered by ground- and space-based observatories. The thermal data can provide size and albedo determinations, but can also help to resolve spin-pole ambiguities. In special cases, the
The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method
NASA Astrophysics Data System (ADS)
Voronina, T. A.; Romanenko, A. A.
2016-12-01
Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. This methodology is based on the inversion of remote measurements of water-level data. The wave propagation is considered within the scope of linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least-squares inversion using the truncated singular value decomposition method. As a result of the numerical process, an r-solution is obtained. The method proposed allows one to control the instability of a numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. Applying this methodology to reconstruction of the initial waveform of the 2013 Solomon Islands tsunami validates the theoretical conclusions for synthetic data and a model tsunami source: the inversion result strongly depends on the noisiness of the data and on the azimuthal and temporal coverage of the recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of the available recording stations used in the inversion process.
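The r-solution is the least-squares solution restricted to the subspace of the r largest singular values; discarding the small singular values is what tames the noise amplification of the ill-posed problem. A minimal generic sketch (the function name and truncation choice are illustrative):

```python
import numpy as np

def r_solution(A, b, r):
    """Truncated-SVD least-squares ('r-solution') of the ill-posed system A x = b.

    Keeps only the r largest singular values; the discarded small ones are
    the directions along which noise in b would be amplified most.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Ur, sr, Vtr = U[:, :r], s[:r], Vt[:r]
    return Vtr.T @ ((Ur.T @ b) / sr)
```

Choosing r trades resolution against stability: a smaller r gives a smoother, more robust source estimate, which is consistent with the dependence on station coverage and data noise noted above.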
The genetic algorithm: A robust method for stress inversion
NASA Astrophysics Data System (ADS)
Thakur, Prithvi; Srivastava, Deepak C.; Gupta, Pravin K.
2017-01-01
The stress inversion of geological or geophysical observations is a nonlinear problem. In most existing methods, it is solved by linearization, under certain assumptions. These linear algorithms not only oversimplify the problem but also are vulnerable to entrapment of the solution in a local optimum. We propose the use of a nonlinear heuristic technique, the genetic algorithm, which searches for the global optimum without making any linearizing assumption or simplification. The algorithm mimics the natural evolutionary processes of selection, crossover and mutation, and minimizes a composite misfit function in searching for the global optimum, the fittest stress tensor. The validity and efficacy of the algorithm are demonstrated by a series of tests on synthetic and natural fault-slip observations in different tectonic settings and also in situations where the observations are noisy. It is shown that the genetic algorithm is superior to other commonly practised methods, in particular, in those tectonic settings where none of the principal stresses is directed vertically and/or the given data set is noisy.
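The selection-crossover-mutation loop described above can be sketched generically. The toy real-coded genetic algorithm below minimizes an arbitrary misfit function over a box of parameters; it is our own minimal illustration (names, operators, and tuning constants are assumptions), with a simple quadratic standing in for the composite misfit of the fittest stress tensor:

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_minimize(misfit, ndim, bounds, pop=60, gens=200, p_mut=0.1, elite=2):
    """Minimal real-coded genetic algorithm: tournament selection,
    uniform crossover, Gaussian mutation, with elitism."""
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, ndim))
    for _ in range(gens):
        f = np.array([misfit(x) for x in X])
        X = X[np.argsort(f)]                     # sort fittest first
        children = [X[:elite]]                   # elitism: keep the fittest
        n = elite
        while n < pop:
            i = min(rng.integers(0, pop, 2))     # tournament selection:
            j = min(rng.integers(0, pop, 2))     # lower index = fitter parent
            mask = rng.random(ndim) < 0.5        # uniform crossover
            child = np.where(mask, X[i], X[j])
            mut = rng.random(ndim) < p_mut       # Gaussian mutation
            child = child + mut * rng.normal(0.0, 0.05 * (hi - lo), ndim)
            children.append(np.clip(child, lo, hi)[None, :])
            n += 1
        X = np.vstack(children)
    f = np.array([misfit(x) for x in X])
    return X[np.argmin(f)]
```

Because nothing in the loop relies on gradients or linearization, the same skeleton works for any misfit, which is the point the abstract makes about avoiding local-optimum entrapment.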
NASA Astrophysics Data System (ADS)
Ji, Hongzhu; Chen, Siying; Zhang, Yinchao; Chen, He; Guo, Pan; Chen, Hao
2017-02-01
A calibration method is proposed to invert the extinction coefficient in Fernald and Klett inversions, using the particle backscattering coefficient inverted from Raman and elastic return signals. The calibration method is analyzed theoretically and experimentally; the inversion accuracy can be improved by removing the dependence on the reference altitudes and intervals of conventional calibration methods, thanks to the introduction of a backscattering coefficient of relatively higher accuracy obtained by the Raman-Mie inversion method. The standard deviation of this new calibration method can be reduced by about 20× compared to that of the conventional calibration methods for Fernald and Klett inversion. Moreover, a more stable effective inversion range can be obtained with this new calibration method by removing the dimple phenomenon at cloud positions.
An optimal constrained linear inverse method for magnetic source imaging
Hughett, P.
1993-09-01
Magnetic source imaging is the reconstruction of the current distribution inside an inaccessible volume from magnetic field measurements made outside the volume. If the unknown current distribution is expressed as a linear combination of elementary current distributions in fixed positions, then the magnetic field measurements are linear in the unknown source amplitudes and both the least square and minimum mean square reconstructions are linear problems. This offers several advantages: The problem is well understood theoretically and there is only a single, global minimum. Efficient and reliable software for numerical linear algebra is readily available. If the sources are localized and statistically uncorrelated, then a map of expected power dissipation is equivalent to the source covariance matrix. Prior geological or physiological knowledge can be used to determine such an expected power map and thus the source covariance matrix. The optimal constrained linear inverse method (OCLIM) derived in this paper uses this prior knowledge to obtain a minimum mean square error estimate of the current distribution. OCLIM can be efficiently computed using the Cholesky decomposition, taking about a second on a workstation-class computer for a problem with 64 sources and 144 detectors. Any source and detector configuration is allowed as long as their positions are fixed a priori. Correlations among source and noise amplitudes are permitted. OCLIM reduces to the optimally weighted pseudoinverse method of Shim and Cho if the source amplitudes are independent and identically distributed and to the minimum-norm least squares estimate in the limit of no measurement noise or no prior knowledge of the source amplitudes. In the general case, OCLIM has better mean square error than either previous method. OCLIM appears well suited to magnetic imaging, since it exploits prior information, provides the minimum reconstruction error, and is inexpensive to compute.
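The minimum mean-square-error linear estimator described above has the standard Gauss-Markov/Wiener form: with measurements b = A s + n, prior source covariance C_s, and noise covariance C_n, the estimate is C_s A^T (A C_s A^T + C_n)^{-1} b, and the symmetric positive-definite Gram matrix can be factored once by Cholesky as the abstract notes. A generic sketch (the function name is our own; this is the textbook estimator, not the paper's code):

```python
import numpy as np

def oclim_estimate(A, b, C_s, C_n):
    """Minimum mean-square-error linear estimate of source amplitudes s
    from measurements b = A s + n, given prior source covariance C_s
    and noise covariance C_n."""
    G = A @ C_s @ A.T + C_n                 # SPD Gram matrix
    L = np.linalg.cholesky(G)               # G = L L^T, factored once
    y = np.linalg.solve(L.T, np.linalg.solve(L, b))   # two triangular solves
    return C_s @ A.T @ y
```

In the limit of vanishing noise and uncorrelated, identically distributed sources this reduces to the minimum-norm least-squares estimate, matching the limiting behavior stated in the abstract.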
Gao Yajun
2008-08-15
A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the method effective and straightforward to apply. As an application, a concrete family of soliton double solutions for the considered theory is obtained.
SPITZER OBSERVATIONS OF ABELL 1763. I. INFRARED AND OPTICAL PHOTOMETRY
Edwards, Louise O. V.; Fadda, Dario; Biviano, Andrea
2010-02-15
We present a photometric analysis of the galaxy cluster Abell 1763 at visible and infrared wavelengths. Included are fully reduced images in r', J, H, and K_s obtained using the Palomar 200-inch telescope, as well as the IRAC and MIPS images from Spitzer. The cluster is covered out to approximately 3 virial radii with deep 24 μm imaging (a 5σ depth of 0.2 mJy). This same field of ~40' × 40' is covered in all four IRAC bands as well as the longer-wavelength MIPS bands (70 and 160 μm). The r' imaging covers ~0.8 deg^2 down to 25.5 mag and overlaps with most of the MIPS field of view. The J, H, and K_s images cover the cluster core and roughly half of the filament galaxies, which extend toward the neighboring cluster, Abell 1770. This paper, the first in a series on Abell 1763, discusses the data reduction methods and source extraction techniques used for each data set. We present catalogs of infrared sources (with 24 and/or 70 μm emission) and their corresponding emission in the optical (u', g', r', i', z') and near- to far-IR (J, H, K_s, IRAC, and MIPS 160 μm). We provide the catalogs and reduced images to the community through the NASA/IPAC Infrared Science Archive.
Nonlinear inversion of pre-stack seismic data using variable metric method
NASA Astrophysics Data System (ADS)
Zhang, Fanchang; Dai, Ronghuo
2016-06-01
At present, the routine method for AVA (Amplitude Variation with incident Angle) inversion is based on the assumption that the ratio of S-wave velocity to P-wave velocity, γ, is a constant. However, this simplified assumption does not always hold, and a nonlinear inversion method is needed. Based on Bayesian theory, the objective function for nonlinear AVA inversion is established, with γ treated as an unknown model parameter. Then, a variable metric method with a strategy of periodically varied starting points is used to solve the nonlinear AVA inverse problem. The proposed method keeps the inverted reservoir parameters close to the actual solution and has been tested on both synthetic and real data. The inversion results suggest that the proposed method can solve the nonlinear inverse problem and obtain accurate solutions even without knowledge of γ.
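"Variable metric" methods are quasi-Newton schemes such as BFGS, which build up an approximation to the inverse Hessian as they iterate. The sketch below pairs SciPy's BFGS minimizer with a periodically perturbed restart of the kind the abstract describes; the function name, restart scheme, and the Rosenbrock stand-in objective are our own assumptions, not the paper's Bayesian AVA misfit:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def variable_metric_restart(objective, x0, restarts=5, scale=0.1):
    """Quasi-Newton (variable metric, BFGS) minimization with periodically
    varied starting points: each run starts from a perturbation of the
    best solution found so far, helping escape shallow local minima."""
    best_x, best_f = x0, objective(x0)
    for _ in range(restarts):
        start = best_x + scale * rng.normal(size=np.size(best_x))
        res = minimize(objective, start, method="BFGS")
        if res.fun < best_f:
            best_x, best_f = res.x, res.fun
    return best_x, best_f

# the Rosenbrock function as a stand-in nonlinear objective
rosen = lambda x: (1 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2
x, f = variable_metric_restart(rosen, np.array([-1.2, 1.0]))
```

Each restart re-seeds the search while the running best is never lost, which is one simple way to read the "periodically variational starting point" strategy.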
Kinugawa, Tohru
2014-02-15
This paper presents a simple but nontrivial generalization of Abel's mechanical problem, based on the extended isochronicity condition and the superposition principle. There are two primary aims. The first is to reveal the linear relation between the transit time T and the travel length X hidden behind the isochronicity problem, which is usually discussed in terms of the nonlinear equation of motion d^2X/dt^2 + dU/dX = 0 with U(X) being an unknown potential. Second, the isochronicity condition is extended for a possible Abel-transform approach to designing the isochronous trajectories of charged particles in spectrometers and/or accelerators for time-resolving experiments. Our approach is based on the integral formula for oscillatory motion by Landau and Lifshitz [Mechanics (Pergamon, Oxford, 1976), pp. 27-29]. The same formula is used to treat the non-periodic motion that is driven by U(X). Specifically, this unknown potential is determined by the (linear) Abel transform X(U) ∝ A[T(E)], where X(U) is the inverse function of U(X), A = (1/√π) ∫_0^E dU/√(E−U) is the so-called Abel operator, and T(E) is the prescribed transit time for a particle with energy E to spend in the region of interest. Based on this Abel-transform approach, we have introduced the extended isochronicity condition: typically, τ = T_A(E) + T_N(E), where τ is a constant period, T_A(E) is the transit time in the Abel-type [A-type] region spanning X > 0 and T_N(E) is that in the non-Abel-type [N-type] region covering X < 0. As for the A-type region in X > 0, the unknown inverse function X_A(U) is determined from T_A(E) via the Abel-transform relation X_A(U) ∝ A[T_A(E)]. In contrast, the N-type region in X < 0 does not ensure this linear relation: the region is covered with a predetermined potential U_N(X) of some arbitrary choice, not necessarily obeying the Abel-transform relation. In discussing
Rapid Inversion of Angular Deflection Data for Certain Axisymmetric Refractive Index Distributions
NASA Technical Reports Server (NTRS)
Rubinstein, R.; Greenberg, P. S.
1994-01-01
Certain functions useful for representing axisymmetric refractive-index distributions are shown to have exact solutions for Abel transformation of the resulting angular deflection data. An advantage of this procedure over direct numerical Abel inversion is that least-squares curve fitting is a smoothing process that reduces the noise sensitivity of the computation.
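The simplest such exact pair is the Gaussian: a radial profile A·exp(-r^2/a^2) projects (line-of-sight integrates) to A·a·√π·exp(-y^2/a^2), so fitting projection data with a Gaussian inverts it analytically, with the fit doing the smoothing. A minimal sketch of this idea (function and variable names are our own, and the log-linear fit assumes noise-free positive data):

```python
import numpy as np

def invert_gaussian_projection(y, F):
    """Fit projection data F(y) with B*exp(-y^2/a^2), then return the
    parameters (A, a) of the underlying radial profile A*exp(-r^2/a^2),
    using the exact Abel pair: B = A * a * sqrt(pi).
    """
    # linearize the fit: log F = log B - y^2 / a^2
    slope, intercept = np.polyfit(y**2, np.log(F), 1)
    a = np.sqrt(-1.0 / slope)
    B = np.exp(intercept)
    return B / (a * np.sqrt(np.pi)), a
```

For noisy data one would replace the log-linear fit with a weighted nonlinear least-squares fit, but the analytic inversion of the fitted parameters is unchanged.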
Comparative study of inversion methods of three-dimensional NMR and sensitivity to fluids
NASA Astrophysics Data System (ADS)
Tan, Maojin; Wang, Peng; Mao, Keyu
2014-04-01
Three-dimensional nuclear magnetic resonance (3D NMR) logging can simultaneously measure transverse relaxation time (T2), longitudinal relaxation time (T1), and diffusion coefficient (D). These parameters can be used to distinguish fluids in porous reservoirs. For 3D NMR logging, the relaxation mechanism and the mathematical model, a Fredholm equation, are introduced, and the inversion methods, including the Singular Value Decomposition (SVD), Butler-Reeds-Dawson (BRD), and Global Inversion (GI) methods, are studied in detail. In a simulation test, a multi-echo CPMG activation sequence is designed first, echo trains of ideal fluid models are synthesized, then an inversion algorithm is run on these synthetic echo trains, and finally the T2-T1-D map is built. Furthermore, the SVD, BRD, and GI methods are each applied to the same fluid model, and their computing speed and inversion accuracy are compared and analyzed. When the optimal inversion method and matrix dimension are used, the inversion results are in good agreement with the assumed fluid model, which indicates that the 3D NMR inversion method is applicable to fluid typing of oil and gas reservoirs. Additionally, forward modeling and inversion tests are performed on oil-water and gas-water models, and the sensitivity to the fluids in different magnetic field gradients is examined in detail. The effect of the magnetic gradient on fluid typing in 3D NMR logging is studied and the optimal magnetic gradient is chosen.
NASA Astrophysics Data System (ADS)
Balogh, Michael L.; Morris, Simon L.
2000-11-01
We present the results of a search for strong Hα emission line galaxies (rest-frame equivalent widths greater than 50 Å) in the z ~ 0.23 cluster Abell 2390. The survey contains 1189 galaxies over 270 arcmin^2, and is 50 per cent complete at M_r ~ -17.5 + 5 log h. The fraction of galaxies in which Hα is detected at the 2σ level rises from 0.0 in the central regions (excluding the cD galaxy) to 12.5 ± 8 per cent at R200. For 165 of the galaxies in our catalogue, we compare the Hα equivalent widths with their [Oii] λ3727 equivalent widths from the Canadian Network for Observational Cosmology (CNOC1) spectra. The fraction of strong Hα emission line galaxies is consistent with the fraction of strong [Oii] emission galaxies in the CNOC1 sample: only 2 ± 1 per cent have no detectable [Oii] emission and yet significant (>2σ) Hα equivalent widths. Dust obscuration, non-thermal ionization, and aperture effects are all likely to contribute to this non-correspondence of emission lines. We identify six spectroscopically 'secure' k+a galaxies [W0(Oii) < 5 Å and W0(Hδ) ≳ 5 Å]; at least two of these show strong signs in Hα of star formation in regions that are covered by the slit from which the spectra were obtained. Thus, some fraction of galaxies classified as k+a based on spectra shortward of 6000 Å are likely to be undergoing significant star formation. These results are consistent with a 'strangulation' model for cluster galaxy evolution, in which star formation in cluster galaxies is gradually decreased, and is neither enhanced nor abruptly terminated by the cluster environment.
A direct-inverse method for transonic and separated flows about airfoils
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1990-01-01
A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flow field about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.
A direct-inverse method for transonic and separated flows about airfoils
NASA Technical Reports Server (NTRS)
Carlson, K. D.
1985-01-01
A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flow field about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.
A method for determining void arrangements in inverse opals.
Blanford, C F; Carter, C B; Stein, A
2004-12-01
The periodic arrangement of voids in ceramic materials templated by colloidal crystal arrays (inverse opals) has been analysed by transmission electron microscopy. Individual particles consisting of an approximately spherical array of at least 100 voids were tilted through 90 degrees along a single axis within the transmission electron microscope. The bright-field images of these particles at high-symmetry points, their diffractograms calculated by fast Fourier transforms, and the transmission electron microscope goniometer angles were compared with model face-centred cubic, body-centred cubic, hexagonal close-packed, and simple cubic lattices in real and reciprocal space. The spatial periodicities were calculated for two-dimensional projections. The systematic absences in these diffractograms differed from those found in diffraction patterns from three-dimensional objects. The experimental data matched only the model face-centred cubic lattice, so it was concluded that the packing of the voids (and, thus, the polymer spheres that composed the original colloidal crystals) was face-centred cubic. In face-centred cubic structures, the stacking-fault displacement vector is a/6<211> . No stacking faults were observed when viewing the inverse opal structure along the orthogonal <110>-type directions, eliminating the possibility of a random hexagonally close-packed structure for the particles observed. This technique complements synchrotron X-ray scattering work on colloidal crystals by allowing both real-space and reciprocal-space analysis to be carried out on a smaller cross-sectional area.
Accuracy evaluation of both Wallace-Bott and BEM-based paleostress inversion methods
NASA Astrophysics Data System (ADS)
Lejri, Mostfa; Maerten, Frantz; Maerten, Laurent; Soliva, Roger
2017-01-01
Four decades after their introduction, the validity of fault slip inversion methods based on the Wallace (1951) and Bott (1959) hypothesis, which states that the slip on each fault surface has the same direction and sense as the maximum resolved shear stress, is still a subject of debate. According to some authors, this hypothesis is questionable since fault mechanical interactions induce slip reorientations, as confirmed by geomechanical models. This leads us to ask to what extent the Wallace-Bott simplifications are reliable as a basic hypothesis for stress inversion from fault slip data. In this paper, we compare two inversion methods: the first is based on the Wallace-Bott hypothesis, and the second relies on geomechanics and the mechanical effects of heterogeneous slip distributions on faults. In that context, a multi-parametric stress inversion study covering (i) the friction coefficient (μ), (ii) the full range of Andersonian states of stress and (iii) slip data sampling along the faults is performed. For each tested parameter, the results of the mechanical stress inversion and the Wallace-Bott (WB) based stress inversion are compared in order to understand their respective effects. The predicted discrepancy between the solutions of the two stress inversion methods (based on WB and on mechanics) is then used to explain the stress inversion results for the Chimney Rock case study. It is shown that a high solution discrepancy is not always correlated with the misfit angle (ω) and can be found under specific configurations (R-, θ, μ, geometry) invalidating the WB solutions. We conclude that in most cases the mechanical stress inversion and the WB-based stress inversion are both valid and complementary depending on the fault friction. Some exceptions (i.e. low fault friction, simple fault geometry and pure regimes) that may lead to wrong WB-based stress inversion solutions are highlighted.
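The Wallace-Bott construction itself is easy to state numerically: the predicted slip direction is the direction of maximum resolved shear stress on the fault plane. Below is a minimal sketch; the stress tensor (an Andersonian normal-faulting state) and the fault orientation are hypothetical values chosen purely for illustration.

```python
import numpy as np

def resolved_shear_direction(stress, normal):
    """Direction of maximum resolved shear stress on a plane (Wallace-Bott).

    stress : 3x3 symmetric stress tensor
    normal : unit normal of the fault plane
    """
    n = normal / np.linalg.norm(normal)
    traction = stress @ n                 # traction vector acting on the plane
    t_normal = (traction @ n) * n         # normal component of the traction
    t_shear = traction - t_normal         # shear component lies in the plane
    return t_shear / np.linalg.norm(t_shear)

# hypothetical Andersonian normal-faulting stress (sigma_1 vertical, compression negative)
sigma = np.diag([-10.0, -20.0, -30.0])
# fault plane dipping 60 degrees, strike along x
n = np.array([0.0, np.sin(np.radians(60)), np.cos(np.radians(60))])
s = resolved_shear_direction(sigma, n)
```

The shear traction necessarily lies in the fault plane, and for this stress state it points down-dip, as expected for a normal fault.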
Fast 3D inversion of airborne gravity-gradiometry data using Lanczos bidiagonalization method
NASA Astrophysics Data System (ADS)
Meng, Zhaohai; Li, Fengting; Zhang, Dailei; Xu, Xuechun; Huang, Danian
2016-09-01
We developed a new fast inversion method, based on the Lanczos bidiagonalization algorithm, to process and interpret airborne gravity gradiometry data. Here, we describe the application of this new 3D gravity gradiometry inversion method to recover a subsurface density distribution model from airborne measured gravity gradiometry anomalies. For this purpose, the survey area is divided into a large number of rectangular cells, each cell possessing a constant unknown density. The solution of the large linear gravity gradiometry system is a well-known ill-posed problem, and smoothest-model inversion methods are considerably time consuming. We demonstrate that the Lanczos bidiagonalization method is an appropriate algorithm for solving the Tikhonov cost function of the resulting large system of equations within a short time. Lanczos bidiagonalization reduces the very large gravity gradiometry forward-modeling matrices to a low-rank approximation, which considerably reduces the running time of the inversion. We also use a weighted generalized cross-validation method to choose an appropriate Tikhonov parameter and improve the inversion results. The inversion incorporates a model norm that allows us to control the smoothness and depth of the solution; in addition, the model norm counteracts the natural decay of the kernels, which concentrate at shallow depths. The method is applied to noise-contaminated synthetic gravity gradiometry data to demonstrate its suitability for large 3D gravity gradiometry data inversion. The airborne gravity gradiometry data from the Vinton Salt Dome, USA, were considered as a case study. The validity of the new method on real data is discussed with reference to the Vinton Dome inversion result. The intermediate density values in the constructed model coincide well with previous results and geological information. This demonstrates the validity of the gravity gradiometry inversion method.
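The core of the approach is the Golub-Kahan (Lanczos) bidiagonalization recurrence, which projects a large matrix onto a small bidiagonal one. A minimal sketch of the recurrence, not the authors' implementation, with a small random matrix standing in for the very large gravity-gradiometry kernel:

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Lanczos (Golub-Kahan) bidiagonalization of A, started from b.

    Returns U (m x (k+1)), B ((k+1) x k lower bidiagonal) and V (n x k)
    satisfying A @ V = U @ B: a low-rank projection on which a small
    Tikhonov-regularized problem can be solved cheaply.
    """
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for i in range(k):
        v = A.T @ U[:, i] - (B[i, i - 1] * V[:, i - 1] if i > 0 else 0.0)
        B[i, i] = np.linalg.norm(v); V[:, i] = v / B[i, i]
        u = A @ V[:, i] - B[i, i] * U[:, i]
        B[i + 1, i] = np.linalg.norm(u); U[:, i + 1] = u / B[i + 1, i]
    return U, B, V

# small dense stand-in for the (very large) forward-modeling matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
b = rng.standard_normal(50)
U, B, V = golub_kahan(A, b, 10)
```

After k steps the regularized least-squares problem of size m x n is replaced by one of size (k+1) x k, which is where the speed-up comes from.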
Method for the preparation of metal colloids in inverse micelles and product prepared by the method
Wilcoxon, Jess P.
1992-01-01
A method is provided for preparing catalytic elemental metal colloidal particles (e.g. gold, palladium, silver, rhodium, iridium, nickel, iron, platinum, molybdenum) or colloidal alloy particles (silver/iridium or platinum/gold). A homogeneous inverse micelle solution of a metal salt is first formed in a metal-salt solvent comprised of a surfactant (e.g. a nonionic or cationic surfactant) and an organic solvent. The size and number of inverse micelles is controlled by the proportions of the surfactant and the solvent. Then, the metal salt is reduced (by chemical reduction or by a pulsed or continuous wave UV laser) to colloidal particles of elemental metal. After their formation, the colloidal metal particles can be stabilized by reaction with materials that permanently add surface stabilizing groups to the surface of the colloidal metal particles. The sizes of the colloidal elemental metal particles and their size distribution is determined by the size and number of the inverse micelles. A second salt can be added with further reduction to form the colloidal alloy particles. After the colloidal elemental metal particles are formed, the homogeneous solution distributes to two phases, one phase rich in colloidal elemental metal particles and the other phase rich in surfactant. The colloidal elemental metal particles from one phase can be dried to form a powder useful as a catalyst. Surfactant can be recovered and recycled from the phase rich in surfactant.
Application of direct inverse analogy method (DIVA) and viscous design optimization techniques
NASA Technical Reports Server (NTRS)
Greff, E.; Forbrich, D.; Schwarten, H.
1991-01-01
A direct-inverse approach to the transonic design problem was presented in its initial state at the First International Conference on Inverse Design Concepts and Optimization in Engineering Sciences (ICIDES-1). Further applications of the direct inverse analogy (DIVA) method to the design of airfoils and incremental wing improvements, together with experimental verification, are reported. First results of a new viscous design code, also of the residual correction type with semi-inverse boundary layer coupling, are compared with DIVA; this may enhance the accuracy of trailing edge design for highly loaded airfoils. Finally, the capabilities of an optimization routine coupled with the two viscous full potential solvers are investigated in comparison to the inverse method.
A boundary integral method for an inverse problem in thermal imaging
NASA Technical Reports Server (NTRS)
Bryan, Kurt
1992-01-01
An inverse problem in thermal imaging involving the recovery of a void in a material from its surface temperature response to external heating is examined. Uniqueness and continuous dependence results for the inverse problem are demonstrated, and a numerical method for its solution is developed. This method is based on an optimization approach, coupled with a boundary integral equation formulation of the forward heat conduction problem. Some convergence results for the method are proved, and several examples are presented using computationally generated data.
An adaptive subspace trust-region method for frequency-domain seismic full waveform inversion
NASA Astrophysics Data System (ADS)
Zhang, Huan; Li, Xiaofan; Song, Hanjie; Liu, Shaolin
2015-05-01
Full waveform inversion is currently considered a promising seismic imaging method to obtain high-resolution and quantitative images of the subsurface. It is a nonlinear ill-posed inverse problem, and the main difficulty that prevents full waveform inversion from being widely applied to real data is its sensitivity to incorrect initial models and noisy data. Local optimization methods, including Newton's method and gradient methods, tend to converge to local minima, while global optimization algorithms such as simulated annealing are computationally costly. To confront this issue, in this paper we investigate the possibility of applying the trust-region method to the full waveform inversion problem. Different from line search methods, trust-region methods force the new trial step to lie within a certain neighborhood of the current iterate. Theoretically, trust-region methods are reliable and robust, and they have very strong convergence properties. The capability of this inversion technique is tested with the synthetic Marmousi velocity model and the SEG/EAGE Salt model. Numerical examples demonstrate that the adaptive subspace trust-region method can provide solutions closer to the global minimum than the conventional approximate Hessian approach and the L-BFGS method, with a higher convergence rate. In addition, the match between the inverted model and the true model remains excellent even when the initial model deviates far from the true model. Inversion results with noisy data also exhibit the remarkable capability of the adaptive subspace trust-region method for low signal-to-noise data inversions. These promising numerical results suggest that the adaptive subspace trust-region method is suitable for full waveform inversion, as it has stronger convergence properties and a higher convergence rate.
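The contrast with line-search methods can be illustrated with SciPy's generic trust-region Newton-CG solver (not the paper's adaptive subspace variant) on a standard test function: each iteration minimizes a quadratic model of the objective only within an adaptively sized trust radius.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

# 'trust-ncg' restricts every Newton-CG step to the current trust region,
# enlarging or shrinking the radius according to how well the quadratic
# model predicted the actual decrease
x0 = np.array([-1.2, 1.0])
res = minimize(rosen, x0, method='trust-ncg', jac=rosen_der, hess=rosen_hess,
               options={'gtol': 1e-8})
```

For the Rosenbrock function the solver converges to the global minimum at (1, 1) even from the classic difficult starting point (-1.2, 1).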
Inverse scattering for the one-dimensional Helmholtz equation: fast numerical method.
Belai, Oleg V; Frumin, Leonid L; Podivilov, Evgeny V; Shapiro, David A
2008-09-15
The inverse scattering problem for the one-dimensional Helmholtz wave equation is studied. The equation is reduced to a Fresnel set that describes multiple bulk reflection and is similar to the coupled-wave equations. The inverse scattering problem is equivalent to coupled Gel'fand-Levitan-Marchenko integral equations. In the discrete representation its matrix has Toeplitz symmetry, and the fast inner bordering method can be applied for its inversion. Previously the method was developed for the design of fiber Bragg gratings. The testing example of a short Bragg reflector with deep modulation demonstrates the high efficiency of refractive-index reconstruction.
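The computational gain from Toeplitz symmetry can be illustrated with SciPy's Levinson-recursion solver, which solves a Toeplitz system in O(n²) operations instead of the O(n³) of a generic dense solve. This is a generic illustration with a hypothetical small matrix, not the inner bordering method itself:

```python
import numpy as np
from scipy.linalg import toeplitz, solve_toeplitz

# hypothetical discretized kernel with Toeplitz structure
c = np.array([4.0, 1.0, 0.5, 0.25])   # first column
r = np.array([4.0, 2.0, 1.0, 0.5])    # first row
b = np.array([1.0, 2.0, 3.0, 4.0])

x_fast = solve_toeplitz((c, r), b)           # Levinson recursion, O(n^2)
x_ref = np.linalg.solve(toeplitz(c, r), b)   # generic dense solve, O(n^3)
```

Both solves agree; the Toeplitz structure is what makes fast grating-design algorithms of this family practical for large n.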
Inverting geodetic time series with a principal component analysis-based inversion method
NASA Astrophysics Data System (ADS)
Kositsky, A. P.; Avouac, J.-P.
2010-03-01
The Global Positioning System (GPS) now makes it possible to monitor deformation of the Earth's surface along plate boundaries with unprecedented accuracy. In theory, the spatiotemporal evolution of slip on the plate boundary at depth, associated with either seismic or aseismic slip, can be inferred from these measurements through some inversion procedure based on the theory of dislocations in an elastic half-space. We describe and test a principal component analysis-based inversion method (PCAIM), an inversion strategy that relies on principal component analysis of the surface displacement time series. We prove that the fault slip history can be recovered from the inversion of each principal component. Because PCAIM does not require externally imposed temporal filtering, it can deal with any kind of time variation of fault slip. We test the approach by applying the technique to synthetic geodetic time series to show that a complicated slip history combining coseismic, postseismic, and nonstationary interseismic slip can be retrieved from this approach. PCAIM produces slip models comparable to those obtained from standard inversion techniques with less computational complexity. We also compare an afterslip model derived from the PCAIM inversion of postseismic displacements following the 2005 Mw 8.6 Nias earthquake with another solution obtained from the extended network inversion filter (ENIF). We introduce several extensions of the algorithm to allow statistically rigorous integration of multiple data sources (e.g., both GPS and interferometric synthetic aperture radar time series) over multiple timescales. PCAIM can be generalized to any linear inversion algorithm.
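The core idea, decomposing the displacement time series by principal component analysis and inverting each principal component separately, can be sketched as follows. All sizes and values are hypothetical, and a random matrix stands in for the elastic Green's functions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sta, n_patch, n_t = 20, 5, 100
G = rng.standard_normal((n_sta, n_patch))    # stand-in for elastic Green's functions
S_true = np.cumsum(rng.standard_normal((n_patch, n_t)), axis=1)  # synthetic slip history
D = G @ S_true + 0.01 * rng.standard_normal((n_sta, n_t))        # noisy displacement series

# PCA of the data matrix via SVD: D ~ U_r diag(s_r) V_r^T
U, s, Vt = np.linalg.svd(D, full_matrices=False)
r = 5                                        # number of principal components kept

# invert each principal spatial pattern for slip, then recombine with the
# corresponding temporal functions
slip_per_comp = np.linalg.lstsq(G, U[:, :r] * s[:r], rcond=None)[0]
S_hat = slip_per_comp @ Vt[:r]
```

Because each component is inverted independently against the same (small) Green's function matrix, the cost scales with the number of retained components rather than the number of epochs.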
Development of direct-inverse 3-D methods for applied aerodynamic design and analysis
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1988-01-01
Several inverse methods have been compared and initial results indicate that differences in results are primarily due to coordinate systems and fuselage representations and not to design procedures. Further, results from a direct-inverse method that includes 3-D wing boundary layer effects, wake curvature, and wake displacement are presented. These results show that boundary layer displacements must be included in the design process for accurate results.
Resampling: An optimization method for inverse planning in robotic radiosurgery
Schweikard, Achim; Schlaefer, Alexander; Adler, John R. Jr.
2006-11-15
By design, the range of beam directions in conventional radiosurgery is constrained to an isocentric array. However, the recent introduction of robotic radiosurgery dramatically increases the flexibility of targeting, and as a consequence, beams need be neither coplanar nor isocentric. Such a nonisocentric design permits a large number of distinct beam directions to be used in a single treatment. These major technical differences provide an opportunity to improve upon the well-established principles for treatment planning used with GammaKnife or LINAC radiosurgery. With this objective in mind, our group has developed over the past decade an inverse planning tool for robotic radiosurgery. This system first computes a set of beam directions, and then, during an optimization step, weights each individual beam. Optimization begins with a feasibility query, the answer to which is derived through linear programming. This approach offers the advantage of completeness and avoids local optima. Final beam selection is based on heuristics. In this report we present and evaluate a new strategy for utilizing the advantages of linear programming to improve beam selection. Starting from an initial solution, a heuristically determined set of beams is added to the optimization problem, while beams with zero weight are removed. This process is repeated to sample a set of beams much larger than in a typical optimization. Experimental results indicate that the planning approach efficiently finds acceptable plans and that resampling can further improve its efficiency.
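The feasibility query at the heart of the optimization step is a linear program: find non-negative beam weights that meet the tumour dose prescription without exceeding healthy-tissue limits. A toy version with hypothetical dose coefficients (the real system works with far more beams and voxels):

```python
import numpy as np
from scipy.optimize import linprog

# dose delivered per unit beam weight at 2 tumour and 2 healthy voxels,
# for 4 candidate beams (hypothetical numbers)
A_tumor = np.array([[1.0, 0.8, 0.2, 0.1],
                    [0.2, 0.9, 1.0, 0.3]])
A_healthy = np.array([[0.3, 0.1, 0.2, 0.05],
                      [0.05, 0.2, 0.1, 0.3]])

# minimize total beam weight subject to:
#   tumour dose >= 60 (written as -A_tumor w <= -60), healthy dose <= 20, w >= 0
res = linprog(c=np.ones(4),
              A_ub=np.vstack([-A_tumor, A_healthy]),
              b_ub=np.concatenate([[-60.0, -60.0], [20.0, 20.0]]),
              bounds=(0, None))
```

If the LP is feasible, a plan exists; beams whose weight comes back zero are the natural candidates for removal in the resampling loop described above.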
NASA Astrophysics Data System (ADS)
Liu, B.; Li, S. C.; Nie, L. C.; Wang, J.; L, X.; Zhang, Q. S.
2012-12-01
The traditional inversion method is the most commonly used procedure for three-dimensional (3D) resistivity inversion; it linearizes the problem and solves it by iteration. However, its accuracy often depends on the initial model, which can trap the inversion in local optima and even produce a poor result. Non-linear methods are a feasible way to eliminate this dependence on the initial model. However, for large problems such as 3D resistivity inversion, where the number of inversion parameters exceeds a thousand, the main challenges for non-linear methods are premature convergence and quite low search efficiency. To deal with these problems, we present an improved Genetic Algorithm (GA) method. In the improved GA method, a smooth constraint and an inequality constraint are both applied to the objective function, by which the degree of non-uniqueness and ill-conditioning is decreased. Several established measures are adopted to maintain the diversity and stability of the GA, e.g. real coding and adaptive adjustment of the crossover and mutation probabilities. A method for generating an approximately uniform initial population is then proposed in this paper, with which a uniformly distributed initial generation can be produced and the dependence on the initial model can be eliminated. Further, a mutation direction control method is presented based on a joint algorithm, in which the linearization method is embedded in the GA. The update vector produced by the linearization method is used as the mutation increment, which maintains a better search direction than the traditional GA with an uncontrolled mutation operation. By this method, the mutation direction is optimized and the search efficiency is greatly improved. The performance of the improved GA is evaluated by comparison with traditional inversion results in a synthetic example and with drilling columnar sections in a practical example. The synthetic and practical examples illustrate that with the improved GA method we can eliminate
LensPerfect Analysis of Abell 1689
NASA Astrophysics Data System (ADS)
Coe, Dan A.
2007-12-01
I present the first mass map to perfectly reproduce the position of every gravitationally-lensed multiply-imaged galaxy detected to date in ACS images of Abell 1689. This mass map was obtained using a powerful new technique made possible by a recent advance in the field of mathematics. It is the highest resolution assumption-free dark matter mass map to date, with the resolution being limited only by the number of multiple images detected. We detect 8 new multiple image systems and identify multiple knots in individual galaxies to constrain a grand total of 168 knots within 135 multiple images of 42 galaxies. No assumptions are made about mass tracing light, and yet the brightest visible structures in A1689 are reproduced in our mass map, a few with intriguing positional offsets. Our mass map probes radii smaller than those resolvable in current dark matter simulations of galaxy clusters. And at these radii, we observe slight deviations from the NFW and Sersic profiles which describe simulated dark matter halos so well. While we have demonstrated that our method is able to recover a known input mass map (to limited resolution), further tests are necessary to determine the uncertainties of our mass profile and the positions of massive subclumps. I compile the latest weak lensing data from ACS, Subaru, and CFHT, and attempt to fit a single profile, either NFW or Sersic, to both the observed weak and strong lensing. I confirm the finding of most previous authors that no single profile fits extremely well to both simultaneously. Slight deviations are revealed, with the best fits slightly over-predicting the mass profile at both large and small radius. Our easy-to-use software, called LensPerfect, will be made available soon. This research was supported by the European Commission Marie Curie International Reintegration Grant 017288-BPZ and the PNAYA grant AYA2005-09413-C02.
The discovery of diffuse steep spectrum sources in Abell 2256
NASA Astrophysics Data System (ADS)
van Weeren, R. J.; Intema, H. T.; Oonk, J. B. R.; Röttgering, H. J. A.; Clarke, T. E.
2009-12-01
Context: Hierarchical galaxy formation models indicate that during their lifetime galaxy clusters undergo several mergers. An example of such a merging cluster is Abell 2256. Here we report on the discovery of three diffuse radio sources in the periphery of Abell 2256, using the Giant Metrewave Radio Telescope (GMRT). Aims: The aim of the observations was to search for diffuse ultra-steep spectrum radio sources within the galaxy cluster Abell 2256. Methods: We have carried out GMRT 325 MHz radio continuum observations of Abell 2256. V, R and I band images of the cluster were taken with the 4.2 m William Herschel Telescope (WHT). Results: We have discovered three diffuse elongated radio sources located about 1 Mpc from the cluster center. Two are located to the west of the cluster center, and one to the southeast. The sources have measured physical extents of 170, 140 and 240 kpc, respectively. The two western sources are also visible in deep low-resolution 115-165 MHz Westerbork Synthesis Radio Telescope (WSRT) images, although they are blended into a single source. For the combined emission of the blended source we find an extreme spectral index (α) of -2.05 ± 0.14 between 140 and 351 MHz. The extremely steep spectral index suggests these two sources are most likely the result of adiabatic compression of fossil radio plasma by merger shocks. For the source to the southeast, we find that α < -1.45 between 325 and 1369 MHz. We did not find any clear optical counterparts to the radio sources in the WHT images. Conclusions: The discovery of the steep spectrum sources implies the existence of a population of faint diffuse radio sources in (merging) clusters with spectra so steep that they have gone unnoticed in higher frequency (⪆1 GHz) observations. Simply considering the timescales related to the AGN activity, synchrotron losses, and the presence of shocks, we find that most massive clusters should possess similar sources. An exciting possibility
Magnetic interface forward and inversion method based on Padé approximation
NASA Astrophysics Data System (ADS)
Zhang, Chong; Huang, Da-Nian; Zhang, Kai; Pu, Yi-Tao; Yu, Ping
2016-12-01
The magnetic interface forward and inversion method is realized using the Taylor series expansion to linearize the Fourier transform of the exponential function. With a large expansion step and an unbounded neighborhood, the Taylor series does not converge; therefore, this paper presents a magnetic interface forward and inversion method based on Padé approximation instead of the Taylor series expansion. Compared with the Taylor series, the Padé expansion converges more stably and approximates more accurately. Model tests show the validity of the Padé-based magnetic forward modeling and inversion proposed in the paper, and when the inversion method is applied to measured data from the Matagami area in Canada, a stable and reasonable distribution of the underground interface is obtained.
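The advantage of a Padé approximant over a truncated Taylor series for the exponential function, the quantity linearized above, can be seen in a few lines. The [2/2] Padé approximant matches the Taylor series of e^x through x^4 yet remains accurate further from the expansion point:

```python
import math

def taylor_exp(x, order=4):
    """Truncated Taylor series of e^x about 0."""
    return sum(x**k / math.factorial(k) for k in range(order + 1))

def pade_exp(x):
    """[2/2] Pade approximant of e^x: a rational function that agrees with
    the Taylor series through x^4 but degrades more slowly away from 0."""
    num = 1 + x / 2 + x**2 / 12
    den = 1 - x / 2 + x**2 / 12
    return num / den

# e.g. at x = 1.5: exp = 4.4817, Taylor(4) = 4.3984, Pade[2/2] = 4.4286
approx = pade_exp(1.5)
```

For the same number of series coefficients, the rational form gives a smaller error over a wider interval, which is the behavior exploited by the Padé-based forward operator.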
ERIC Educational Resources Information Center
Ngu, Bing Hiong; Phan, Huy Phuong
2016-01-01
We examined the use of balance and inverse methods in equation solving. The main difference between the balance and inverse methods lies in the operational line (e.g. +2 on both sides vs -2 becomes +2). Differential element interactivity favours the inverse method because the interaction between elements occurs on both sides of the equation for…
FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)
NASA Astrophysics Data System (ADS)
2014-10-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 and May 2013, (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) Workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the
FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems
NASA Astrophysics Data System (ADS)
Vourc'h, Eric; Rodet, Thomas
2015-11-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods
Efficiency of Pareto joint inversion of 2D geophysical data using global optimization methods
NASA Astrophysics Data System (ADS)
Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek
2016-04-01
Pareto joint inversion of two or more sets of data is a promising new tool of modern geophysical exploration. In the first stage of our investigation we created software enabling execution of forward solvers for two geophysical methods (2D magnetotelluric and gravity) as well as inversion with the possibility of constraining the solution with seismic data. In the MT forward solver, Helmholtz's equations are solved with the finite element method and Dirichlet boundary conditions. The gravity forward solver is based on Talwani's algorithm. To limit the dimensionality of the solution space we decided to describe the model as sets of polygons, using the Sharp Boundary Interface (SBI) approach. The main inversion engine was created using a Particle Swarm Optimization (PSO) algorithm adapted to handle two or more target functions and to prevent acceptance of solutions which are non-realistic or incompatible with the Pareto scheme. Each inversion run generates a single Pareto solution, which can be added to the Pareto front. The PSO inversion engine was parallelized using the OpenMP standard, which enables executing the code with a practically unlimited number of threads at once, significantly decreasing the computing time of the inversion process. Furthermore, computing efficiency increases with the number of PSO iterations. In this contribution we analyze the efficiency of the created software, taking into consideration the details of the chosen global optimization engine used as the main joint minimization engine. Additionally we study the scale of the possible decrease in computational time achieved by different methods of parallelization applied to both the forward solvers and the inversion algorithm. All tests were done for 2D magnetotelluric and gravity data based on real geological media. The obtained results show that even on relatively simple mid-range computational infrastructure the proposed solution of the inversion problem can be applied in practice and used for real-life problems of geophysical inversion and interpretation.
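The Pareto bookkeeping underlying this scheme is simple to state: a candidate model is kept only if no other candidate is at least as good on every objective and strictly better on one. A minimal sketch with hypothetical (MT misfit, gravity misfit) pairs:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b: a is no worse on every
    objective and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective tuples."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical (MT misfit, gravity misfit) pairs from successive PSO runs
misfits = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5), (1.5, 3.5)]
front = pareto_front(misfits)
```

Each PSO run contributes one candidate; the front accumulates the trade-off curve between the magnetotelluric and gravity misfits.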
A method of gravity and seismic sequential inversion and its GPU implementation
NASA Astrophysics Data System (ADS)
Liu, G.; Meng, X.
2011-12-01
In this abstract, we introduce a gravity and seismic sequential inversion method to invert for density and velocity together. For the gravity inversion, we use an iterative method based on a correlation imaging algorithm; for the seismic inversion, we use full waveform inversion. The link between density and velocity is an empirical formula called the Gardner equation. For large volumes of data, we use the GPU to accelerate the computation. The gravity inversion method is iterative: first we calculate the correlation imaging of the observed gravity anomaly, which takes values between -1 and +1, and multiply it by a small density to form the initial density model. We compute a forward result with this initial model, calculate the correlation imaging of the misfit between the observed and forward data, multiply this result by a small density and add it to the model, and repeat the procedure until we obtain the inverted density model. For the seismic inversion method, we use a method based on the linearity of the acoustic wave equation written in the frequency domain; with an initial velocity model, we can obtain a good velocity result. In the sequential inversion of gravity and seismic data, we need a link formula to convert between density and velocity; in our method, we use the Gardner equation. Driven by the insatiable market demand for real-time, high-definition 3D images, the programmable NVIDIA Graphics Processing Unit (GPU) has been developed as a co-processor of the CPU for high-performance computing. Compute Unified Device Architecture (CUDA) is a parallel programming model and software environment provided by NVIDIA, designed to overcome the challenge of traditional general-purpose GPU programming while maintaining a low learning curve for programmers familiar with standard programming languages such as C. In our inversion processing
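The Gardner link between velocity and density is a simple power law, ρ = a·Vp^m. A sketch with the commonly used coefficients a = 0.31 and m = 0.25 (for Vp in m/s and density in g/cm³; the abstract does not specify which coefficients were used):

```python
def gardner_density(vp, a=0.31, m=0.25):
    """Gardner's relation rho = a * Vp**m (Vp in m/s, rho in g/cm^3)."""
    return a * vp ** m

def gardner_velocity(rho, a=0.31, m=0.25):
    """Inverse of Gardner's relation, mapping an updated density back to velocity."""
    return (rho / a) ** (1.0 / m)

rho = gardner_density(3000.0)   # a 3000 m/s sediment maps to roughly 2.29 g/cm^3
```

In the sequential scheme, each gravity-derived density update is converted to a velocity update (and vice versa) through this pair of functions.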
NASA Technical Reports Server (NTRS)
Kastner, S. O.; Rothe, E. D.; Neupert, W. M.
1976-01-01
Intensities of Fe XIV and Fe XIII EUV emission lines obtained at coronal locations beyond the limb by the Goddard spectroheliograph on the OSO 7 satellite have been corrected for the wavelength dependence of the instrument's sensitivity and have been Abel-inverted to provide a valid comparison with theoretical predictions for each ion. Details of the Abel-inversion procedure are given, including explicit formulas for application of Bracewell's (1956) method. The intensity ratios of pairs of lines originating from a common level are compared with expected theoretical transition probability ratios over a range of heliocentric distance; deviations in some cases yield information about adjacent unclassified lines. Comparison of the observations with predictions for Fe XIV and Fe XIII shows generally good agreement, with a few interesting discrepancies that may imply a corresponding need for more accurate collisional excitation cross sections. The same comparison yields the variation of electron density with heliocentric radius for each ion separately; the two density functions are found to agree within a factor of three.
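The Abel inversion used here can be illustrated with the simplest discrete scheme, "onion peeling", which assumes a constant emission rate within each spherical shell; this is a generic sketch, not Bracewell's (1956) method applied by the authors:

```python
import numpy as np

def onion_peel(I, r):
    """Abel-invert line-of-sight integrals I(y_j) measured at impact
    parameters y_j = r[j], assuming spherical symmetry and a constant
    emission rate within each shell r[i] <= r < r[i+1].

    r has len(I) + 1 shell edges; returns the shell emission rates."""
    n = len(I)
    L = np.zeros((n, n))
    for j in range(n):            # ray with impact parameter y = r[j]
        y = r[j]
        for i in range(j, n):     # shells this ray crosses
            # chord length of the ray inside shell i (doubled for both halves)
            L[j, i] = 2 * (np.sqrt(r[i + 1]**2 - y**2)
                           - np.sqrt(max(r[i]**2 - y**2, 0.0)))
    return np.linalg.solve(L, I)  # triangular system: peel from the outside in

# synthetic check: uniform emission (rate 1) inside a unit sphere gives
# line-of-sight integrals equal to the chord length 2*sqrt(1 - y^2)
r = np.linspace(0.0, 1.0, 21)
I = np.array([2 * np.sqrt(1.0 - y**2) for y in r[:-1]])
eps = onion_peel(I, r)
```

Because the geometry matrix is upper triangular, the outermost shell is recovered first and each inner shell follows by back-substitution; for the uniform test sphere the recovered emission rate is exactly 1 in every shell.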
Parallel full-waveform inversion in the frequency domain by the Gauss-Newton method
NASA Astrophysics Data System (ADS)
Zhang, Wensheng; Zhuang, Yuan
2016-06-01
In this paper, we investigate the full-waveform inversion in the frequency domain. We first test the inversion ability of three numerical optimization methods, i.e., the steepest-descent method, the Newton-CG method and the Gauss-Newton method, for a simple model. The results show that the Gauss-Newton method performs well and efficiently. Then numerical computations for a benchmark model named Marmousi model by the Gauss-Newton method are implemented. Parallel algorithm based on message passing interface (MPI) is applied as the inversion is a typical large-scale computational problem. Numerical computations show that the Gauss-Newton method has good ability to reconstruct the complex model.
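The Gauss-Newton iteration itself can be sketched with dense matrices; this is a toy illustration, not the parallel MPI frequency-domain implementation of the paper. `residual` and `jacobian` are user-supplied callables, and the small damping term is an assumption added for numerical safety:

```python
import numpy as np

def gauss_newton(residual, jacobian, m0, iters=20):
    """Minimise 0.5 * ||residual(m)||^2 by Gauss-Newton updates.

    Each step solves the normal equations J^T J dm = -J^T r; the tiny
    diagonal damping guards against an ill-conditioned J^T J.
    """
    m = np.asarray(m0, dtype=float)
    for _ in range(iters):
        r = residual(m)
        J = jacobian(m)
        H = J.T @ J + 1e-10 * np.eye(len(m))  # Gauss-Newton Hessian approximation
        m = m + np.linalg.solve(H, -J.T @ r)
    return m
```

For example, fitting the two parameters of a decaying exponential to noise-free data recovers them to machine precision in a few iterations.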
NASA Astrophysics Data System (ADS)
Wang, Qian; Li, Xingwen; Song, Haoyong; Rong, Mingzhe
2010-04-01
Non-contact magnetic measurement is an effective way to study air arc behavior experimentally. One of the crucial techniques is to solve an inverse problem for the electromagnetic field. This study is devoted to a preliminary investigation of different algorithms for this kind of inverse problem, including the preconditioned conjugate gradient method, the penalty function method and the genetic algorithm. The feasibility of each algorithm is analyzed. It is shown that the preconditioned conjugate gradient method is valid only for few arc segments, the estimation accuracy of the penalty function method is dependent on the initial conditions, and the convergence of the genetic algorithm should be studied further for more segments in an arc current.
Adaptive eigenspace method for inverse scattering problems in the frequency domain
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nahum, Uri
2017-02-01
A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.
A full potential inverse method based on a density linearization scheme for wing design
NASA Technical Reports Server (NTRS)
Shankar, V.
1982-01-01
A mixed analysis inverse procedure based on the full potential equation in conservation form was developed to recontour a given base wing to produce a specified pressure distribution. The procedure uses a density linearization scheme in applying the pressure boundary condition in terms of the velocity potential. The FL030 finite volume analysis code was modified to include the inverse option. The new surface shape information, associated with the modified pressure boundary condition, is calculated at a constant span station based on a mass flux integration. The inverse method is shown to recover the original shape when the analysis pressure is not altered. Inverse calculations for weakening of a strong shock system and for a laminar flow control (LFC) pressure distribution are presented. Two methods for a trailing edge closure model are proposed for further study.
Constructing inverse probability weights for continuous exposures: a comparison of methods.
Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S
2014-03-01
Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
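One of the six weight constructions compared, the homoscedastic normal model, can be sketched as follows. The function name, the OLS-based conditional mean, and the use of stabilized weights (marginal density over conditional density) are illustrative assumptions in line with common practice, not code from the paper:

```python
import numpy as np

def stabilized_ipw_normal(exposure, covariates):
    """Stabilised inverse probability weights for a continuous exposure
    under a homoscedastic normal exposure model.

    Numerator: marginal normal density of the exposure.
    Denominator: normal density conditional on covariates, with the
    conditional mean from an ordinary least-squares fit.
    """
    X = np.column_stack([np.ones(len(exposure)), covariates])
    beta, *_ = np.linalg.lstsq(X, exposure, rcond=None)
    mu_cond = X @ beta
    sd_cond = np.std(exposure - mu_cond, ddof=X.shape[1])

    def normal_pdf(x, mu, sd):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

    num = normal_pdf(exposure, exposure.mean(), exposure.std(ddof=1))
    den = normal_pdf(exposure, mu_cond, sd_cond)
    return num / den
```

The heteroscedastic, truncated, gamma, t, and quantile-binning variants assessed in the paper differ only in the densities used for the numerator and denominator.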
The Noble-Abel Stiffened-Gas equation of state
NASA Astrophysics Data System (ADS)
Le Métayer, Olivier; Saurel, Richard
2016-04-01
Hyperbolic two-phase flow models have shown excellent ability for the resolution of a wide range of applications ranging from interfacial flows to fluid mixtures with several velocities. These models account for waves propagation (acoustic and convective) and consist in hyperbolic systems of partial differential equations. In this context, each phase is compressible and needs an appropriate convex equation of state (EOS). The EOS must be simple enough for intensive computations as well as boundary conditions treatment. It must also be accurate, this being challenging with respect to simplicity. In the present approach, each fluid is governed by a novel EOS named "Noble Abel stiffened gas," this formulation being a significant improvement of the popular "Stiffened Gas (SG)" EOS. It is a combination of the so-called "Noble-Abel" and "stiffened gas" equations of state that adds repulsive effects to the SG formulation. The determination of the various thermodynamic functions and associated coefficients is the aim of this article. We first use thermodynamic considerations to determine the different state functions such as the specific internal energy, enthalpy, and entropy. Then we propose to determine the associated coefficients for a liquid in the presence of its vapor. The EOS parameters are determined from experimental saturation curves. Some examples of liquid-vapor fluids are examined and associated parameters are computed with the help of the present method. Comparisons between analytical and experimental saturation curves show very good agreement for wide ranges of temperature for both liquid and vapor.
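A minimal sketch of the NASG pressure law and its inverse for the internal energy, assuming the standard form p = (gamma - 1)(e - q)/(v - b) - gamma*p_inf with v = 1/rho; the function names are illustrative and the fitted coefficients of the article are not reproduced:

```python
def nasg_pressure(rho, e, gamma, pinf, b, q):
    """Noble-Abel Stiffened-Gas pressure p(rho, e).

    b is the covolume (Noble-Abel repulsive term), pinf the stiffened-gas
    attractive term, q the reference energy.  Setting b = 0 recovers the
    stiffened gas EOS; pinf = q = 0 recovers Noble-Abel; all three zero
    recovers the ideal gas.
    """
    v = 1.0 / rho
    return (gamma - 1.0) * (e - q) / (v - b) - gamma * pinf


def nasg_energy(rho, p, gamma, pinf, b, q):
    """Inverse relation: specific internal energy e(rho, p)."""
    v = 1.0 / rho
    return (p + gamma * pinf) * (v - b) / (gamma - 1.0) + q
```

The pressure/energy pair is algebraically exact, so e.g. an ideal-gas limit check (b = pinf = q = 0) reduces to p = (gamma - 1) * rho * e.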
NASA Astrophysics Data System (ADS)
Schuster, David M.
1993-04-01
An inverse method has been developed to compute the structural stiffness properties of wings given a specified wing loading and aeroelastic twist distribution. The method directly solves for the bending and torsional stiffness distribution of the wing using a modal representation of these properties. An aeroelastic design problem involving the use of a computational aerodynamics method to optimize the aeroelastic twist distribution of a fighter wing operating at maneuver flight conditions is used to demonstrate the application of the method. This exercise verifies the ability of the inverse scheme to accurately compute the structural stiffness distribution required to generate a specific aeroelastic twist under a specified aeroelastic load.
Method for detecting a pericentric inversion in a chromosome
Lucas, Joe N.
2000-01-01
A method is provided for determining a clastogenic signature of a sample of chromosomes by quantifying a frequency of a first type of chromosome aberration present in the sample; quantifying a frequency of a second, different type of chromosome aberration present in the sample; and comparing the frequency of the first type of chromosome aberration to the frequency of the second type of chromosome aberration. A method is also provided for using that clastogenic signature to identify a clastogenic agent or dosage to which the cells were exposed.
A comparison of techniques for inversion of radio-ray phase data in presence of ray bending
NASA Technical Reports Server (NTRS)
Wallio, H. A.; Grossi, M. D.
1972-01-01
Derivations are presented of the straight-line Abel transform and the seismological Herglotz-Wiechert transform (which takes ray bending into account) that are used in the reconstruction of refractivity profiles from radio-wave phase data. Profile inversions utilizing these approaches, performed in computer-simulated experiments, are compared for cases of positive, zero, and negative ray bending. For thin atmospheres and ionospheres, such as the Martian atmosphere and ionosphere, radio wave signals are shown to be inverted accurately with both methods. For dense media, such as the solar corona or the lower Venus atmosphere, the refractivity profiles recovered by the seismological Herglotz-Wiechert transform provide a significant improvement compared with the straight-line Abel transform.
Computational Methods for Aerodynamic Design (Inverse) and Optimization
1990-01-01
Airfoils with Given Velocity Distribution in Incompressible Flow," J. Aircraft, Vol. 10, 1973, pp. 651-659. 7. Polito, L., "Un Metodo Esatto per il Progetto...and the Simpson rule. Using a panel arrangement method with properly increased panel density in regions with comparatively large rv-variations, use of
NASA Technical Reports Server (NTRS)
Kurtz, M. J.; Huchra, J. P.; Beers, T. C.; Geller, M. J.; Gioia, I. M.
1985-01-01
X-ray and optical observations of the cluster of galaxies Abell 744 are presented. The X-ray flux (assuming H(0) = 100 km/s per Mpc) is about 9 x 10 to the 42nd erg/s. The X-ray source is extended, but shows no other structure. Photographic photometry (in Kron-Cousins R), calibrated by deep CCD frames, is presented for all galaxies brighter than 19th magnitude within 0.75 Mpc of the cluster center. The luminosity function is normal, and the isopleths show little evidence of substructure near the cluster center. The cluster has a dominant central galaxy, which is classified as a normal brightest-cluster elliptical on the basis of its luminosity profile. New redshifts were obtained for 26 galaxies in the vicinity of the cluster center; 20 appear to be cluster members. The spatial distribution of redshifts is peculiar; the dispersion within the 150 kpc core radius is much greater than outside. Abell 744 is similar to the nearby cluster Abell 1060.
A Strong Merger Shock in Abell 665
NASA Technical Reports Server (NTRS)
Dasadia, S.; Sun, M.; Sarazin, C.; Morandi, A.; Markevitch, M.; Wik, D.; Feretti, L.; Giovannini, G.; Govoni, F.
2016-01-01
Deep (103 ks) Chandra observations of Abell 665 have revealed rich structures in this merging galaxy cluster, including a strong shock and two cold fronts. The newly discovered shock has a Mach number of M = 3.0 +/- 0.6, propagating in front of a cold disrupted cloud. This makes Abell 665 the second cluster, after the Bullet cluster, where a strong merger shock of M approximately 3 has been detected. The shock velocity from jump conditions is consistent with (2.7 +/- 0.7) x 10^3 km/s. The new data also reveal a prominent southern cold front with potentially heated gas ahead of it. Abell 665 also hosts a giant radio halo. There is a hint of diffuse radio emission extending to the shock at the north, which needs to be examined with better radio data. This new strong shock provides a great opportunity to study the reacceleration model with the X-ray and radio data combined.
Diffuse interface methods for inverse problems: case study for an elliptic Cauchy problem
NASA Astrophysics Data System (ADS)
Burger, Martin; Løseth Elvetun, Ole; Schlottbom, Matthias
2015-12-01
Many inverse problems have to deal with complex, evolving and often not exactly known geometries, e.g. as domains of forward problems modeled by partial differential equations. This makes it desirable to use methods which are robust with respect to perturbed or not well resolved domains, and which allow for efficient discretizations not resolving any fine detail of those geometries. For forward problems in partial differential equations methods based on diffuse interface representations have gained strong attention in the last years, but so far they have not been considered systematically for inverse problems. In this work we introduce a diffuse domain method as a tool for the solution of variational inverse problems. As a particular example we study ECG inversion in further detail. ECG inversion is a linear inverse source problem with boundary measurements governed by an anisotropic diffusion equation, which naturally cries for solutions under changing geometries, namely the beating heart. We formulate a regularization strategy using Tikhonov regularization and, using standard source conditions, we prove convergence rates. A special property of our approach is that not only operator perturbations are introduced by the diffuse domain method, but more important we have to deal with topologies which depend on a parameter \\varepsilon in the diffuse domain method, i.e. we have to deal with \\varepsilon -dependent forward operators and \\varepsilon -dependent norms. In particular the appropriate function spaces for the unknown and the data depend on \\varepsilon . This prevents the application of some standard convergence techniques for inverse problems, in particular interpreting the perturbations as data errors in the original problem does not yield suitable results. We consequently develop a novel approach based on saddle-point problems. The numerical solution of the problem is discussed as well and results for several computational experiments are reported. In
Numerical Methods for Forward and Inverse Problems in Discontinuous Media
Chartier, Timothy P.
2011-03-08
The research emphasis under this grant's funding is in the area of algebraic multigrid methods. The research has two main branches: 1) exploring interdisciplinary applications in which algebraic multigrid can make an impact and 2) extending the scope of algebraic multigrid methods with algorithmic improvements that are based in strong analysis.The work in interdisciplinary applications falls primarily in the field of biomedical imaging. Work under this grant demonstrated the effectiveness and robustness of multigrid for solving linear systems that result from highly heterogeneous finite element method models of the human head. The results in this work also give promise to medical advances possible with software that may be developed. Research to extend the scope of algebraic multigrid has been focused in several areas. In collaboration with researchers at the University of Colorado, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory, the PI developed an adaptive multigrid with subcycling via complementary grids. This method has very cheap computing costs per iterate and is showing promise as a preconditioner for conjugate gradient. Recent work with Los Alamos National Laboratory concentrates on developing algorithms that take advantage of the recent advances in adaptive multigrid research. The results of the various efforts in this research could ultimately have direct use and impact to researchers for a wide variety of applications, including, astrophysics, neuroscience, contaminant transport in porous media, bi-domain heart modeling, modeling of tumor growth, and flow in heterogeneous porous media. This work has already led to basic advances in computational mathematics and numerical linear algebra and will continue to do so into the future.
Computational Methods for Sparse Solution of Linear Inverse Problems
2009-03-01
methods from harmonic analysis [5]. For example, natural images can be approximated with relatively few wavelet coefficients. As a consequence, in many...performed efficiently. For example, the cost of these products is O(N log N) when Φ is constructed from Fourier or wavelet bases. For algorithms that...stream community has proposed efficient algorithms for computing near-optimal histograms and wavelet-packet approximations from compressive samples [4
A Study of Inverse Methods for Processing of Radar Data
2006-10-01
point from each source-receiver location. Both ray tracing and eikonal schemes have been used to compute these travel times. A by-product of their... point by point basis, each diffraction point contributing a part of the total signal. Kirchhoff methods pioneered by Bleistein and Cohen at the...with these algorithms their relative merits. Single point diffractions have known responses and by using time migration or depth migration these can be
Novel TMS coils designed using an inverse boundary element method
NASA Astrophysics Data System (ADS)
Cobos Sánchez, Clemente; María Guerrero Rodriguez, Jose; Quirós Olozábal, Ángel; Blanco-Navarro, David
2017-01-01
In this work, a new method to design TMS coils is presented. It is based on the inclusion of the concept of the stream function of a quasi-static electric current into a boundary element method. The proposed TMS coil design approach is a powerful technique to produce stimulators of arbitrary shape, and remarkably versatile as it permits the prototyping of many different performance requirements and constraints. To illustrate the power of this approach, it has been used for the design of TMS coils wound on rectangular flat, spherical and hemispherical surfaces, subjected to different constraints, such as minimum stored magnetic energy or power dissipation. The performances of such coils are additionally described, and the torque experienced by each stimulator in the presence of a main static magnetic field has been theoretically determined in order to study the prospect of using them to perform TMS and fMRI concurrently. The obtained results show that the described method is an efficient tool for the design of TMS stimulators, which can be applied to a wide range of coil geometries and performance requirements.
Quasiparticle density of states by inversion with maximum entropy method
NASA Astrophysics Data System (ADS)
Sui, Xiao-Hong; Wang, Han-Ting; Tang, Hui; Su, Zhao-Bin
2016-10-01
We propose to extract the quasiparticle density of states (DOS) of the superconductor directly from the experimentally measured superconductor-insulator-superconductor junction tunneling data by applying the maximum entropy method to the nonlinear systems. It merits the advantage of model independence with minimum a priori assumptions. Various components of the proposed method have been carefully investigated, including the meaning of the targeting function, the mock function, as well as the role and the designation of the input parameters. The validity of the developed scheme is shown by two kinds of tests for systems with known DOS. As a preliminary application to a Bi2Sr2CaCu2O8+δ sample with its critical temperature Tc = 89 K, we extract the DOS from the measured intrinsic Josephson junction current data at temperatures of T = 4.2 K, 45 K, 55 K, 95 K, and 130 K. The energy gap decreases with increasing temperature below Tc, while above Tc, a kind of energy gap survives, which provides an angle to investigate the pseudogap phenomenon in high-Tc superconductors. The developed method itself might be a useful tool for future applications in various fields.
Video-based Nearshore Depth Inversion using WDM Method
NASA Astrophysics Data System (ADS)
Hampson, R. W.; Kirby, J. T.
2008-12-01
A new remote sensing method for estimating nearshore water depths from video imagery has been developed and applied as part of an ongoing field study at Bethany Beach, Delaware. The new method applies Donelan et al.'s Wavelet Direction Method (WDM) to compact arrays of pixel intensity time series extracted from video images. The WDM generates a non-stationary time series of the wavenumber and wave direction at different frequencies that can be used to create frequency-wavenumber and directional spectra. The water depth is estimated at the center of each compact array by fitting the linear dispersion relation to the frequency-wavenumber spectrum. Directional spectral results show good correlation to those obtained from a slope array located just offshore of Bethany Beach. Additionally, depth estimations from the WDM are compared to depth measurements taken with a kayak survey system at Bethany Beach. Continuous measurements of the bathymetry at Bethany Beach are needed as inputs to fluid dynamics and sediment transport models to study the morphodynamics in the nearshore zone and can be used to monitor the success of the recent beach replenishment project along the Delaware coast.
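The depth-estimation step, fitting the linear dispersion relation omega^2 = g*k*tanh(k*h), has a closed-form solution for a single (frequency, wavenumber) pair, sketched below. This is a single-pair inversion rather than the full spectral fit, and the function name is illustrative:

```python
import numpy as np

def depth_from_dispersion(omega, k, g=9.81):
    """Water depth h from the linear dispersion relation
    omega^2 = g * k * tanh(k * h), solved in closed form.

    Requires omega**2 < g*k; at or beyond the deep-water limit the depth
    is unconstrained, so inf is returned.
    """
    ratio = omega ** 2 / (g * k)
    if ratio >= 1.0:
        return np.inf
    return np.arctanh(ratio) / k
```

In practice one would fit h across many spectral pairs (e.g. by least squares) to average out noise in the video-derived wavenumbers.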
Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin
2016-04-01
surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one can not apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues including parallelization and problem dimension reduction.
ROSAT HRI images of Abell 85 and Abell 496: Evidence for inhomogeneities in cooling flows
NASA Technical Reports Server (NTRS)
Prestwich, Andrea H.; Guimond, Stephen J.; Luginbuhl, Christian; Joy, Marshall
1994-01-01
We present ROSAT HRI images of two clusters of galaxies with cooling flows, Abell 496 and Abell 85. In these clusters, x-ray emission on small scales above the general cluster emission is significant at the 3 sigma level. There is no evidence for optical counterparts. The enhancements may be associated with lumps of gas at a lower temperature and higher density than the ambient medium, or hotter, denser gas perhaps compressed by magnetic fields. These observations can be used to test models of how thermal instabilities form and evolve in cooling flows.
Homotopy method for inverse design of the bulbous bow of a container ship
NASA Astrophysics Data System (ADS)
Huang, Yu-jia; Feng, Bai-wei; Hou, Guo-xiang; Gao, Liang; Xiao, Mi
2017-03-01
The homotopy method is utilized in the present inverse hull design problem to minimize the wave-making coefficient of a 1300 TEU container ship with a bulbous bow. Moreover, in order to improve the computational efficiency of the algorithm, a properly smooth function is employed to update the homotopy parameter during iteration. Numerical results show that the homotopy method has been successfully applied in the inverse design of the ship hull; it converges efficiently and is credible and valuable for engineering practice.
The inversion method in measuring noise emitted by machines in opencast mines of rock material.
Pleban, Dariusz; Piechowicz, Janusz; Kosała, Krzysztof
2013-01-01
The inversion method was used to test vibroacoustic processes in large-size machines used in opencast mines of rock material. When this method is used, the tested machine is replaced with a set of substitute sources, whose acoustic parameters are determined on the basis of sound pressure levels and phase shift angles of acoustic signals, measured with an array of 24 microphones. This article presents test results of a combine unit comprising a crusher and a vibrating sieve, for which an acoustic model of 7 substitute sources was developed with the inversion method.
Dynamic inversion method based on the time-staggered stereo-modeling scheme and its acceleration
NASA Astrophysics Data System (ADS)
Jing, Hao; Yang, Dinghui; Wu, Hao
2016-12-01
A set of second-order differential equations describing the space-time behaviour of derivatives of displacement with respect to model parameters (i.e. waveform sensitivities) is obtained via taking the derivative of the original wave equations. The dynamic inversion method obtains sensitivities of the seismic displacement field with respect to earth properties directly by solving differential equations for them instead of constructing sensitivities from the displacement field itself. In this study, we have taken a new perspective on the dynamic inversion method and used acceleration approaches to reduce the computational time and memory usage to improve its ability of performing high-resolution imaging. The dynamic inversion method, which can simultaneously use different waves and multicomponent observation data, is appropriate for directly inverting elastic parameters, medium density or wave velocities. Full wavefield information is utilized as much as possible at the expense of a larger amount of calculations. To mitigate the computational burden, two ways are proposed to accelerate the method from a computer-implementation point of view. One is source encoding which uses a linear combination of all shots, and the other is to reduce the amount of calculations on forward modeling. We applied a new finite-difference (FD) method to the dynamic inversion to improve the computational accuracy and speed up the performance. Numerical experiments indicated that the new FD method can effectively suppress the numerical dispersion caused by the discretization of wave equations, resulting in enhanced computational efficiency with less memory cost for seismic modeling and inversion based on the full wave equations. We present some inversion results to demonstrate the validity of this method through both checkerboard and Marmousi models. It shows that this method is also convergent even with big deviations for the initial model. Besides, parallel calculations can be easily
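The source-encoding acceleration mentioned above can be sketched as a random-sign combination of shot gathers into one supershot; this is a generic illustration of the idea, not the authors' exact encoding scheme, and the function name is illustrative:

```python
import numpy as np

def encode_shots(shot_gathers, rng):
    """Random-sign source encoding: combine n individual shot gathers into
    a single supershot, cutting the forward-modelling cost of one gradient
    evaluation by roughly a factor of n (at the price of crosstalk noise,
    which is suppressed by redrawing the signs each iteration)."""
    signs = rng.choice([-1.0, 1.0], size=len(shot_gathers))
    supershot = sum(s * g for s, g in zip(signs, shot_gathers))
    return signs, supershot
```

The same signs are applied to the observed data so that encoded residuals remain comparable with encoded synthetics.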
NASA Astrophysics Data System (ADS)
Pan, Qi; Liu, De-Jun; Guo, Zhi-Yong; Fang, Hua-Feng; Feng, Mu-Qun
2016-06-01
In the model of a horizontal straight pipeline of finite length, the segmentation of the pipeline elements is a significant factor in the accuracy and rapidity of the forward modeling and inversion processes, but the existing pipeline segmentation method is very time-consuming. This paper proposes a section segmentation method to study the characteristics of pipeline magnetic anomalies—and the effect of model parameters on these magnetic anomalies—as a way to enhance computational performance and accelerate the convergence process of the inversion. Forward models using the piece segmentation method and section segmentation method based on magnetic dipole reconstruction (MDR) are established for comparison. The results show that the magnetic anomalies calculated by these two segmentation methods are almost the same regardless of different measuring heights and variations of the inclination and declination of the pipeline. In the optimized inversion procedure the results of the simulation data calculated by these two methods agree with the synthetic data from the original model, and the inversion accuracies of the burial depths of the two methods are approximately equal. The proposed method is more computationally efficient than the piece segmentation method—in other words, the section segmentation method can meet the requirements for precision in the detection of pipelines by magnetic anomalies and reduce the computation time of the whole process.
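The magnetic dipole reconstruction (MDR) idea, representing the pipeline as a chain of point dipoles whose summed fields approximate the continuous source, can be sketched as follows. The geometry, moment density, and function names are illustrative assumptions; the point is that refining the segmentation converges, mirroring the paper's observation that the two segmentation schemes give nearly identical anomalies:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def dipole_field(m, r):
    """Magnetic field (T) of a point dipole with moment m (A*m^2) at offset r (m)."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4.0 * np.pi) * (3.0 * np.dot(m, rhat) * rhat - m) / rn ** 3

def pipeline_anomaly(obs, x0, x1, moment_per_m, n_seg):
    """Anomaly at observation point obs of a straight pipeline from x0 to x1,
    modelled as n_seg dipoles placed at segment midpoints."""
    ts = (np.arange(n_seg) + 0.5) / n_seg
    seg_len = np.linalg.norm(x1 - x0) / n_seg
    B = np.zeros(3)
    for t in ts:
        pos = x0 + t * (x1 - x0)
        B += dipole_field(moment_per_m * seg_len, obs - pos)
    return B
```

A forward model like this is what the inversion repeatedly evaluates, which is why an efficient segmentation strategy directly shortens the whole detection workflow.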
2014-08-19
finite element method, performance verification on experimental data, imaging of explosive devices, comparison with the classical Krein equation method...of the globally convergent numerical method of this project and the classical Krein equation method. It was established that while the first method...of a long standing problem about uniqueness of a phaseless 3-d inverse problem of quantum scattering. This was an open question since the publication
Proximal point methods for the inverse problem of identifying parameters in beam models
NASA Astrophysics Data System (ADS)
Jadamba, B.; Khan, A. A.; Paulhamus, M.; Sama, M.
2012-07-01
This paper studies the nonlinear inverse problem of identifying certain material parameters in the fourth-order boundary value problem representing the beam model. The inverse problem is solved by posing a convex optimization problem whose solution is an approximation of the sought parameters. The optimization problem is solved by the gradient based approaches, and in this setting, the most challenging aspect is the computation of the gradient of the objective functional. We present a detailed treatment of the adjoint stiffness matrix based approach for the gradient computation. We employ recently proposed self-adaptive inexact proximal point methods by Hager and Zhang [6] to solve the inverse problem. It is known that the regularization features of the proximal point methods are quite different from that of the Tikhonov regularization. We present a comparative analysis of the numerical efficiency of the used proximal point methods without using the Tikhonov regularization.
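For a quadratic objective the proximal point step has a closed form, which makes the contrast with Tikhonov easy to see: the penalty recentres on the current iterate, so the iteration converges to the unregularised minimiser rather than to a biased one. A minimal exact-step sketch under that assumption (illustrative only, not the inexact Hager-Zhang variant used in the paper):

```python
import numpy as np

def proximal_point_quadratic(A, b, x0, lam=1.0, iters=200):
    """Exact proximal point iterations for f(x) = 0.5 x^T A x - b^T x.

    Each step solves argmin_x f(x) + ||x - x_k||^2 / (2*lam), i.e. the
    linear system (A + I/lam) x = b + x_k / lam.  The fixed point is the
    unregularised minimiser A^{-1} b, unlike a fixed Tikhonov shift.
    """
    n = len(b)
    x = np.asarray(x0, dtype=float)
    M = A + np.eye(n) / lam  # system matrix is the same every step
    for _ in range(iters):
        x = np.linalg.solve(M, b + x / lam)
    return x
```

Each eigen-component contracts by 1/(lam * a + 1) per step, so even small eigenvalues of A are eventually resolved, which is the regularization behaviour the paper contrasts with Tikhonov.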
Cunefare, Kenneth A; Biesel, Van B; Tran, John; Rye, Ryan; Graf, Aaron; Holdhusen, Mark; Albanese, Anne-Marie
2003-02-01
Qualification of anechoic chambers is intended to demonstrate that the chamber supports the intended free-field environment within some permissible tolerance bounds. Key qualification issues include the method used to obtain traverse data, the analysis method for the data, and the use of pure tone or broadband noise as the chamber excitation signal. This paper evaluates the relative merits of continuous versus discrete traverses, of fixed versus optimal reference analysis of the traverse data, and of the use of pure tone versus broadband signals. The current practice of using widely spaced discrete sampling along a traverse is shown to inadequately sample the complexity of the sound field extant with pure tone traverses, but is suitable for broadband traverses. Continuous traverses, with spatial resolution on the order of 15% of the wavelength at the frequency of interest, are shown to be necessary to fully resolve the spatial complexity of pure tone qualifications. The use of an optimal reference method for computing the deviations from inverse square law is shown to significantly improve the apparent performance of the chamber for pure tone qualifications. Finally, the use of broadband noise as the test signal, as compared to pure tone traverses over the same span, is demonstrated to be a marginal indicator of chamber performance.
Numerical solution of 2D-vector tomography problem using the method of approximate inverse
NASA Astrophysics Data System (ADS)
Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna
2016-08-01
We propose a numerical solution of the reconstruction problem of a two-dimensional vector field in a unit disk from known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good reconstructions of vector fields.
A combined direct/inverse three-dimensional transonic wing design method for vector computers
NASA Technical Reports Server (NTRS)
Weed, R. A.; Carlson, L. A.; Anderson, W. K.
1984-01-01
A three-dimensional transonic-wing design algorithm for vector computers is developed, and the results of sample computations are presented graphically. The method incorporates the direct/inverse scheme of Carlson (1975), a Cartesian grid system with boundary conditions applied at a mean plane, and a potential-flow solver based on the conservative form of the full potential equation and using the ZEBRA II vectorizable solution algorithm of South et al. (1980). The accuracy and consistency of the method with regard to direct and inverse analysis and trailing-edge closure are verified in the test computations.
Inversion of tsunami sources by the adjoint method in the presence of observational and model errors
NASA Astrophysics Data System (ADS)
Pires, C.; Miranda, P. M. A.
2003-04-01
The adjoint method is applied to the inversion of tsunami sources from tide-gauge observations in both idealized and realistic setups, with emphasis on the effects of observational, bathymetric and other model errors on the quality of the inversion. The method is developed in a way that allows for the direct optimization of seismic focal parameters, in the case of seismic tsunamis, through a 4-step inversion procedure that can be fully automated, consisting of (i) source area delimitation by adjoint backward ray-tracing, (ii) adjoint optimization of the initial sea state from a vanishing first guess, (iii) non-linear adjustment of the fault model and (iv) final adjoint optimization in the fault parameter space. The methodology is systematically tested with synthetic data, showing its flexibility and robustness in the presence of significant amounts of error.
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
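The multistart idea behind MLSL can be sketched as follows. Here a simple compass (pattern) search stands in for MADS, and the clustering is a crude distance threshold rather than single linkage, so this is an illustrative sketch of the multiple-solutions strategy, not the authors' algorithm:

```python
import numpy as np

def compass_search(f, x0, step=0.5, tol=1e-6, max_iter=500):
    """Derivative-free pattern search (a crude stand-in for MADS):
    poll +/- each coordinate direction, shrink the mesh on failure."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for s in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += s * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

def multistart(f, bounds, n_starts=40, seed=0):
    """MLSL-flavored multistart: many random starts, a local search from
    each, then keep only minimizers distinct from those already found."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    solutions = []
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi, size=len(lo))
        x, fx = compass_search(f, x0)
        if not any(np.linalg.norm(x - s) < 1e-2 for s, _ in solutions):
            solutions.append((x, fx))
    return solutions

# Toy inverse problem with two exact solutions: the "measurement" only
# constrains x^2, so both x = +1 and x = -1 reproduce the data.
f = lambda x: (x[0] ** 2 - 1.0) ** 2
sols = multistart(f, bounds=([-2.0], [2.0]))
```

The toy objective mimics the non-uniqueness of inverse transport problems: distinct parameter sets can reproduce the same radiation signature, and a multistart search recovers both.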
Vishnuvardhan, J; Krishnamurthy, C V; Balasubramaniam, Krishnan
2009-02-01
A novel blind inversion method using Lamb wave S(0) and A(0) mode velocities is proposed for the complete determination of elastic moduli, material symmetries, as well as principal plane orientations of anisotropic plates. The approach takes advantage of a genetic algorithm, introduces the notion of "statistically significant" elastic moduli, and utilizes their sensitivities to velocity data to reconstruct the elastic moduli. The unknown material symmetry and the principal planes are then evaluated using the method proposed by Cowin and Mehrabadi [Q. J. Mech. Appl. Math. 40, 451-476 (1987)]. The blind inversion procedure was verified using simulated ultrasonic velocity data sets on materials with transversely isotropic, orthotropic, and monoclinic symmetries. A modified double ring configuration of the single transmitter and multiple receiver compact array was developed to experimentally validate the blind inversion approach on a quasi-isotropic graphite-epoxy composite plate. This technique finds application in the area of material characterization and structural health monitoring of anisotropic platelike structures.
Freezing Time Estimation for a Cylindrical Food Using an Inverse Method
NASA Astrophysics Data System (ADS)
Hu, Yao Xing; Mihori, Tomoo; Watanabe, Hisahiko
Most published methods for estimating freezing time require the thermal properties of the product and the relevant heat transfer coefficients between the product and the cooling medium. However, the difficulty of obtaining thermal data for use in industrial food freezing systems has been pointed out. We have developed a new procedure for estimating the time to freeze a slab-shaped food using the inverse method, which does not require knowledge of the thermal properties of the food being frozen. How the inverse method is applied to freezing-time estimation depends on the shape of the body to be frozen. In this paper, we explore applying the inverse method to a cylindrical food body, using selected explicit expressions to describe the temperature profile. The temperature profile was found to be successfully approximated by a logarithmic function, from which an approximate equation for the freezing time was derived. An inversion procedure for estimating freezing time based on the approximate equation was validated via a numerical experiment.
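The logarithmic approximation of the temperature profile reduces to a linear least-squares fit in ln(r). A hypothetical numpy sketch, with radii, temperatures, and noise level invented for illustration:

```python
import numpy as np

def fit_log_profile(r, T):
    """Least-squares fit of T(r) = a + b*ln(r), the logarithmic form used
    to approximate the cylindrical temperature profile."""
    A = np.column_stack([np.ones_like(r), np.log(r)])
    (a, b), *_ = np.linalg.lstsq(A, T, rcond=None)
    return a, b

# Synthetic profile: exactly logarithmic plus small measurement noise.
r = np.linspace(0.1, 1.0, 50)          # radial positions (illustrative units)
T_true = -5.0 + 3.0 * np.log(r)        # hypothetical temperatures
rng = np.random.default_rng(1)
T_meas = T_true + 0.01 * rng.standard_normal(r.size)

a, b = fit_log_profile(r, T_meas)
```

In the paper's setting, the fitted coefficients would then feed the approximate freezing-time equation rather than be reported directly.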
The inversion method of Matrix mineral bulk modulus based on Gassmann equation
NASA Astrophysics Data System (ADS)
Kai, L.; He, X.; Zhang, Z. H.
2015-12-01
In recent years, seismic rock physics has played an important role in oil and gas exploration. A seismic rock physics model can quantitatively describe reservoir characteristics such as lithologic association, pore structure and geological processes. But classic rock physics models require a background parameter, the matrix mineral bulk modulus, and inaccurate inputs greatly reduce prediction reliability. By introducing different rock physics parameters, the Gassmann equation is used to derive a reasonable modification. Two forms of matrix mineral bulk modulus inversion are proposed: a linear regression method and a self-adapting inversion method. They effectively solve for the matrix mineral bulk modulus under different complex parameter conditions. Based on laboratory test data, and compared with the conventional method, the linear regression method is simpler and more accurate, while the self-adapting inversion method also achieves high precision when rich rock physics parameters are known. The modulus values were then applied to reservoir fluid substitution, porosity inversion and S-wave velocity prediction. Introducing the matrix mineral modulus based on the Gassmann equation effectively improves the reliability of fluid-effect prediction and the computational efficiency.
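For reference, the Gassmann equation itself, together with a toy round-trip inversion for the matrix mineral bulk modulus by bisection. The bisection is a simple stand-in for the paper's linear-regression and self-adapting schemes, and the moduli and porosity below are illustrative values, not the paper's data:

```python
def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus from the Gassmann equation; k_min is the
    matrix (mineral) bulk modulus whose inversion the paper addresses."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

def invert_kmin(k_dry, k_sat, k_fl, phi, k_hi=200.0):
    """Recover the matrix bulk modulus by bisection on the Gassmann
    relation, searching above k_dry (the matrix must be stiffer than
    the dry frame, which makes the root unique)."""
    f = lambda k: gassmann_ksat(k_dry, k, k_fl, phi) - k_sat
    lo, hi = k_dry * (1.0 + 1e-6), k_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Round trip: forward-model K_sat with a known K_min, then invert it back.
k_min_true = 37.0   # GPa, quartz-like (illustrative)
k_sat = gassmann_ksat(k_dry=12.0, k_min=k_min_true, k_fl=2.2, phi=0.25)
k_min_est = invert_kmin(k_dry=12.0, k_sat=k_sat, k_fl=2.2, phi=0.25)
```

Restricting the search to k_min > k_dry avoids the unphysical branch of the Gassmann relation, where the denominator changes sign.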
NASA Astrophysics Data System (ADS)
Ita, B. I.; Ehi-Eromosele, C. O.; Edobor-Osoh, A.; Ikeuba, A. I.
2014-11-01
By using the Nikiforov-Uvarov (NU) method, the Schrödinger equation has been solved for the sum of the inversely quadratic Hellmann potential (IQHP) and the inversely quadratic potential (IQP) for any angular momentum quantum number l. The energy eigenvalues and their corresponding eigenfunctions have been obtained in terms of Laguerre polynomials. Special cases of the sum of these potentials have been considered and their energy eigenvalues also obtained.
NASA Astrophysics Data System (ADS)
Lamarche-Gagnon, Marc-Etienne; Vetel, Jerome
2016-11-01
Several methods can be used when one needs to measure wall shear stress in a fluid flow, yet precise shear measurement is seldom achieved, especially when both time and space resolution are required. The electrodiffusion method relies on the mass transfer between a redox couple contained in an electrolyte and an electrode flush-mounted to a wall. Similarly to the heat transfer measured by a hot wire anemometer, the mass transfer can be related to the fluid's wall shear rate. When coupled with numerical post-treatment by the so-called inverse method, precise instantaneous wall shear rate measurements can be obtained, and with further improvements it has the potential to be effective in highly fluctuating three-dimensional flows. We present an extension of the inverse method to two-component shear rate measurements, that is, shear magnitude and direction. This is achieved with a three-segment electrodiffusion probe. Validation tests of the inverse method are performed in an oscillating plane Poiseuille flow at moderate pulse frequencies, which also includes reverse flow phases, and in the vicinity of a separation point where the wall shear stress undergoes local inversion in a controlled separated flow.
NASA Astrophysics Data System (ADS)
Palmer, Paul I.; Barnett, J. J.; Eyre, J. R.; Healy, S. B.
2000-07-01
An optimal estimation inverse method is presented which can be used to retrieve simultaneously vertical profiles of temperature and specific humidity, in addition to surface pressure, from satellite-to-satellite radio occultation observations of the Earth's atmosphere. The method is a nonlinear, maximum a posteriori technique which can accommodate most aspects of the real radio occultation problem and is found to be stable and to converge rapidly in most cases. The optimal estimation inverse method has two distinct advantages over the analytic inverse method in that it accounts for some of the effects of horizontal gradients and is able to retrieve temperature and humidity optimally and simultaneously from the observations. It is also able to account for observation noise and other sources of error. Combined, these advantages ensure a realistic retrieval of atmospheric quantities. A complete error analysis emerges naturally from the optimal estimation theory, allowing a full characterization of the solution. Using this analysis, a quality control scheme is implemented which allows anomalous retrieval conditions to be recognized and removed, thus preventing gross retrieval errors. The inverse method presented in this paper has been implemented for bending angle measurements derived from GPS/MET radio occultation observations of the Earth. Preliminary results from simulated data suggest that these observations have the potential to improve numerical weather prediction model analyses significantly throughout their vertical range.
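For a linear forward model, the maximum a posteriori retrieval underlying optimal estimation has a closed form, and the posterior covariance it returns is the source of the "complete error analysis" mentioned above. A minimal numpy sketch with an invented two-element state and forward matrix:

```python
import numpy as np

def map_retrieval(y, K, x_a, S_a, S_e):
    """Linear maximum a posteriori retrieval (optimal estimation):
    x_hat = x_a + (S_a^-1 + K^T S_e^-1 K)^-1 K^T S_e^-1 (y - K x_a),
    where x_a, S_a are the prior mean/covariance and S_e the error covariance."""
    Sa_inv = np.linalg.inv(S_a)
    Se_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(Sa_inv + K.T @ Se_inv @ K)   # posterior covariance
    x_hat = x_a + S_hat @ K.T @ Se_inv @ (y - K @ x_a)
    return x_hat, S_hat

# Toy retrieval: a two-element state observed through a known forward matrix.
K = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
x_true = np.array([2.0, -1.0])
x_a = np.zeros(2)              # prior mean
S_a = np.eye(2) * 10.0         # weak prior
S_e = np.eye(3) * 1e-4         # accurate observations
y = K @ x_true                 # noise-free observations for the sketch
x_hat, S_hat = map_retrieval(y, K, x_a, S_a, S_e)
```

With accurate observations and a weak prior, the retrieval reproduces the true state; as observation errors grow, the solution is pulled toward the prior, with S_hat quantifying the remaining uncertainty.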
The cluster of galaxies Abell 376
NASA Astrophysics Data System (ADS)
Proust, D.; Capelato, H. V.; Hickel, G.; Sodré, L., Jr.; Lima Neto, G. B.; Cuevas, H.
2003-08-01
We present a dynamical analysis of the galaxy cluster Abell 376 based on a set of 73 velocities, most of them measured at Pic du Midi and Haute-Provence observatories and completed with data from the literature. Data on individual galaxies are presented and the accuracy of the determined velocities is discussed, as well as some properties of the cluster. We obtained an improved mean redshift value z = 0.0478 (+0.005/-0.006) and velocity dispersion sigma = 852 (+120/-76) km s^-1. Our analysis indicates that inside a radius of ~900 h_70^-1 kpc (~15 arcmin) the cluster is well relaxed, without any remarkable features, and the X-ray emission traces the galaxy distribution fairly well. A possible substructure is seen at 20 arcmin from the centre towards the southwest direction, but is not confirmed by the velocity field. This SW clump is, however, kinematically bound to the main structure of Abell 376. A dense condensation of galaxies is detected at 46 arcmin (projected distance 2.6 h_70^-1 Mpc) from the centre towards the northwest, and analysis of the apparent luminosity distribution of its galaxies suggests that this clump is part of the large scale structure of Abell 376. X-ray spectroscopic analysis of ASCA data yielded a temperature kT = 4.3 +/- 0.4 keV and metal abundance Z = 0.32 +/- 0.08 Z_sun. The velocity dispersion corresponding to this temperature through the T_X-sigma scaling relation is in agreement with the measured galaxy velocities. Based on observations made at Haute-Provence and Pic du Midi Observatories (France). Table 1 is also available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/407/31
An approximate factorization method for inverse medium scattering with unknown buried objects
NASA Astrophysics Data System (ADS)
Qu, Fenglong; Yang, Jiaqing; Zhang, Bo
2017-03-01
This paper is concerned with the inverse problem of scattering of time-harmonic acoustic waves by an inhomogeneous medium with different kinds of unknown buried objects inside. By constructing a sequence of operators which are small perturbations of the far-field operator in a suitable way, we prove that each operator in this sequence has a factorization satisfying the Range Identity. We then develop an approximate factorization method for recovering the support of the inhomogeneous medium from the far-field data. Finally, numerical examples are provided to illustrate the practicability of the inversion algorithm.
Terekhov, Alexander V; Zatsiorsky, Vladimir M
2011-02-01
One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible ones. A promising way to address this question is to assume that the choice is made by optimizing a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423-453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem.
Towards "Inverse" Character Tables? A One-Step Method for Decomposing Reducible Representations
ERIC Educational Resources Information Center
Piquemal, J.-Y.; Losno, R.; Ancian, B.
2009-01-01
In the framework of group theory, a new procedure is described for a one-step automated reduction of reducible representations. The matrix inversion tool, provided by standard spreadsheet software, is applied to the central part of the character table that contains the characters of the irreducible representation. This method is not restricted to…
Odor emission rate estimation of indoor industrial sources using a modified inverse modeling method.
Li, Xiang; Wang, Tingting; Sattayatewa, Chakkrid; Venkatesan, Dhesikan; Noll, Kenneth E; Pagilla, Krishna R; Moschandreas, Demetrios J
2011-08-01
Odor emission rates are commonly measured in the laboratory or occasionally estimated with inverse modeling techniques. A modified inverse modeling approach is used to estimate source emission rates inside a postdigestion centrifuge building of a water reclamation plant. Conventionally, inverse modeling methods divide an indoor environment into zones on the basis of structural design and estimate source emission rates using models that assume a homogeneous distribution of agent concentrations within each zone, with experimentally determined link functions to simulate airflows among zones. The modified approach segregates zones as a function of agent distribution rather than building design and identifies near and far fields. Near-field agent concentrations do not satisfy the assumption of homogeneous odor concentrations; far-field concentrations satisfy this assumption and are the only ones used to estimate emission rates. The predictive ability of the modified inverse modeling approach was validated against measured emission rate values; the difference between corresponding estimated and measured odor emission rates is not statistically significant. Similarly, the difference between measured and estimated hydrogen sulfide emission rates is also not statistically significant. The modified inverse modeling approach is easy to perform because it uses odor and odorant field measurements instead of complex chamber emission rate measurements.
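In its simplest form, the far-field idea reduces to a steady-state well-mixed mass balance. The one-liner below is a deliberately simplified sketch of the inverse-modeling principle, with hypothetical ventilation rate and concentrations; the paper's multi-zone model with link functions is considerably more elaborate:

```python
def emission_rate(c_farfield, c_inlet, q_vent):
    """Steady-state well-mixed mass balance: the emission rate (mass/time)
    that sustains the observed far-field concentration above the inlet
    level, given the ventilation flow rate q_vent (volume/time)."""
    return q_vent * (c_farfield - c_inlet)

# Hypothetical numbers: ventilation 2.5 m^3/s, far-field and inlet
# H2S concentrations of 40 and 5 g/m^3 * 1e-6 respectively.
E = emission_rate(c_farfield=40e-6, c_inlet=5e-6, q_vent=2.5)  # g/s
```

The inverse step in the paper amounts to solving this kind of balance zone by zone, but only in the far field, where the well-mixed assumption actually holds.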
NASA Astrophysics Data System (ADS)
Gherlone, Marco; Cerracchio, Priscilla; Mattone, Massimiliano; Di Sciuva, Marco; Tessler, Alexander
2014-04-01
Shape sensing, i.e., reconstruction of the displacement field of a structure from surface-measured strains, has relevant implications for the monitoring, control and actuation of smart structures. The inverse finite element method (iFEM) is a shape-sensing methodology shown to be fast, accurate and robust. This paper aims to demonstrate that the recently presented iFEM for beam and frame structures is reliable when experimentally measured strains are used as input data. The theoretical framework of the methodology is first reviewed. Timoshenko beam theory is adopted, including stretching, bending, transverse shear and torsion deformation modes. The variational statement and its discretization with C0-continuous inverse elements are briefly recalled. The three-dimensional displacement field of the beam structure is reconstructed under the condition that least-squares compatibility is guaranteed between the measured strains and those interpolated within the inverse elements. The experimental setup is then described. A thin-walled cantilevered beam is subjected to different static and dynamic loads. Measured surface strains are used as input data for shape sensing at first with a single inverse element. For the same test cases, convergence is also investigated using an increasing number of inverse elements. The iFEM-recovered deflections and twist rotations are then compared with those measured experimentally. The accuracy, convergence and robustness of the iFEM with respect to unavoidable measurement errors, due to strain sensor locations, measurement systems and geometry imperfections, are demonstrated for both static and dynamic loadings.
New Modified Band Limited Impedance (BLIMP) Inversion Method Using Envelope Attribute
NASA Astrophysics Data System (ADS)
Maulana, Z. L.; Saputro, O. D.; Latief, F. D. E.
2016-01-01
Earth attenuates the high frequencies of the seismic wavelet, and low frequencies cannot be obtained with low-quality geophones. The low frequencies (0-10 Hz) that are absent from seismic data are important for obtaining a good result in acoustic impedance (AI) inversion. AI is important for determining reservoir quality, since it can be converted to reservoir properties such as porosity, permeability and water saturation. The low frequencies can be supplied from impedance logs (AI logs), from velocity analysis, or from a combination of both. In this study, we propose that the low frequencies can instead be obtained from the envelope seismic attribute. The proposed method is essentially a modified BLIMP (Band Limited Impedance) inversion, in which the AI logs used by BLIMP are substituted with the envelope attribute. In the low-frequency domain (0-10 Hz), the envelope attribute produces high amplitudes, and this low-frequency content replaces the low frequencies from the AI logs in BLIMP. The linear trend in this method is still acquired from the AI logs. The method is applied to synthetic seismograms created from the impedance log of well 'X'. The mean squared error of the modified BLIMP inversion is 2-4% per trace (the variation in error is caused by different normalization constants), lower than the 8% error of the conventional BLIMP inversion. The new method is also applied to the Marmousi2 dataset and shows promising results: the modified BLIMP inversion of Marmousi2 using one AI log is better than that produced by the conventional method.
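The key observation, that a band-limited wavelet carries almost no 0-10 Hz energy while its envelope attribute does, can be checked numerically. The sketch below uses an FFT-based Hilbert transform and a synthetic 30 Hz Ricker-like burst; all parameters are illustrative:

```python
import numpy as np

def envelope(trace):
    """Instantaneous-amplitude (envelope) attribute via the analytic
    signal, computed with an FFT-based Hilbert transform."""
    n = trace.size
    spec = np.fft.fft(trace)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(spec * h))

def band_energy(x, dt, f_lo, f_hi):
    """Fraction of spectral energy in the band [f_lo, f_hi] Hz."""
    freqs = np.fft.rfftfreq(x.size, dt)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return power[band].sum() / power.sum()

# A 30 Hz Ricker wavelet has almost no 0-10 Hz energy, but its envelope
# does -- the low-frequency component the modified BLIMP borrows.
dt = 0.002
t = np.arange(0, 2.0, dt) - 1.0
sig = (1 - 2 * (np.pi * 30 * t) ** 2) * np.exp(-(np.pi * 30 * t) ** 2)
low_sig = band_energy(sig, dt, 0.0, 10.0)
low_env = band_energy(envelope(sig), dt, 0.0, 10.0)
```

In the modified BLIMP workflow, it is this low-frequency content of the envelope (rather than the AI log spectrum) that fills the 0-10 Hz gap before the band-limited impedance is assembled.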
Ray, J.; Lee, J.; Yadav, V.; ...
2015-04-29
Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also
Cycle-Based Cluster Variational Method for Direct and Inverse Inference
NASA Astrophysics Data System (ADS)
Furtlehner, Cyril; Decelle, Aurélien
2016-08-01
Large-scale inference problems of practical interest can often be addressed with the help of Markov random fields (MRFs). This requires, in principle, solving two related problems: the first is to find offline the parameters of the MRF from empirical data (the inverse problem); the second (the direct problem) is to set up the inference algorithm to make it as precise, robust and efficient as possible. In this work we address both the direct and inverse problems with mean-field methods of statistical physics, going beyond the Bethe approximation and the associated belief propagation algorithm. We elaborate on the idea that loop corrections to belief propagation can be dealt with in a systematic way on pairwise Markov random fields, by using the elements of a cycle basis to define regions in a generalized belief propagation setting. For the direct problem, the region graph is specified in such a way as to avoid feedback loops as much as possible, by selecting a minimal cycle basis. Following this line we are led to propose a two-level algorithm, where a belief propagation algorithm is run alternately at the level of each cycle and at the inter-region level. Next we observe that the inverse problem can be addressed region by region independently, with one small inverse problem per region to be solved. It turns out that each elementary inverse problem on the loop geometry can be solved efficiently. In particular, in the random Ising context we propose two complementary methods based respectively on fixed point equations and on a one-parameter log-likelihood function minimization. Numerical experiments confirm the effectiveness of this approach for both the direct and inverse MRF inference. Heterogeneous problems of size up to 10^5 are addressed in a reasonable computational time, notably with better convergence properties than ordinary belief propagation.
ABEL description and implementation of cyber net system
NASA Astrophysics Data System (ADS)
Lu, Jiyuan; Jing, Liang
2013-03-01
Cyber net systems are a subclass of Petri nets. They have more powerful description capability and more complex properties than P/T systems. Because of their nonlinear relations, analysis techniques for other net systems cannot be applied directly, which hampers research on cyber net systems. In this paper, the author uses a hardware description language to describe cyber net systems. Simulation analysis is carried out with EDA software tools to disclose properties of the system. The method is introduced in detail through a cyber net system model that computes the Fibonacci series. ABEL source code and simulation waveforms are also presented. The source code is compiled, optimized, fitted, and downloaded to a programmable logic device, yielding an ASIC that computes the Fibonacci series. This opens a new path for the analysis and application study of cyber net systems.
NASA Astrophysics Data System (ADS)
Lohman, R. B.; Simons, M.
2004-12-01
We examine inversions of geodetic data for fault slip and discuss how inferred results are affected by choices of regularization. The final goal of any slip inversion is to enhance our understanding of the dynamics governing fault zone processes through kinematic descriptions of fault zone behavior at various temporal and spatial scales. Important kinematic observations include ascertaining whether fault slip is correlated with topographic and gravitational anomalies, whether coseismic and postseismic slip occur on complementary or overlapping regions of the fault plane, and how aftershock distributions compare with areas of coseismic and postseismic slip. Fault slip inversions are generally poorly-determined inverse problems requiring some sort of regularization. Attempts to place inversion results in the context of understanding fault zone processes should be accompanied by careful treatment of how the applied regularization affects characteristics of the inferred slip model. Most regularization techniques involve defining a metric that quantifies the solution "simplicity". A frequently employed method defines a "simple" slip distribution as one that is spatially smooth, balancing the fit to the data vs. the spatial complexity of the slip distribution. One problem related to the use of smoothing constraints is the "smearing" of fault slip into poorly-resolved areas on the fault plane. In addition, even if the data is fit well by a point source, the fact that a point source is spatially "rough" will force the inversion to choose a smoother model with slip over a broader area. Therefore, when we interpret the area of inferred slip we must ask whether the slipping area is truly constrained by the data, or whether it could be fit equally well by a more spatially compact source with larger amplitudes of slip. We introduce an alternate regularization technique for fault slip inversions, where we seek an end member model that is the smallest region of fault slip that
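The smoothing trade-off discussed above is the standard Tikhonov-style slip inversion, which can be written as a stacked least-squares problem: the misfit term and a scaled roughness penalty are solved together. A toy numpy sketch with an invented 10-patch fault and first-difference roughness operator:

```python
import numpy as np

def smoothed_inversion(G, d, L, lam):
    """Minimize ||G m - d||^2 + lam^2 ||L m||^2 by stacking the smoothing
    operator under the design matrix (Tikhonov-style regularization of a
    fault slip inversion)."""
    A = np.vstack([G, lam * L])
    b = np.concatenate([d, np.zeros(L.shape[0])])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

def roughness(m):
    """Norm of the first differences of the slip model."""
    return np.linalg.norm(L @ m)

# Toy setup: 10 fault patches, only 6 observations (underdetermined,
# hence the need for regularization), first-difference smoothing.
rng = np.random.default_rng(2)
G = rng.standard_normal((6, 10))                            # Green's functions
m_true = np.exp(-0.5 * ((np.arange(10) - 4.0) / 1.5) ** 2)  # compact slip patch
d = G @ m_true
L = np.diff(np.eye(10), axis=0)                             # roughness operator

m_rough = smoothed_inversion(G, d, L, lam=1e-4)   # nearly unregularized
m_smooth = smoothed_inversion(G, d, L, lam=1.0)   # strongly smoothed
```

Increasing `lam` always reduces the roughness of the recovered slip at the cost of data misfit, which is exactly the trade-off that can smear slip into poorly resolved areas and broaden a compact source.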
A self-constrained inversion of magnetic data based on correlation method
NASA Astrophysics Data System (ADS)
Sun, Shida; Chen, Chao
2016-12-01
Geologically-constrained inversion is a powerful method for producing geologically reasonable solutions in geophysical exploration problems. But in many cases, apart from the observed geophysical data to be inverted, the geological information available is insufficient to improve the reliability of recovered models. To deal with these situations, self-constraints extracted by preprocessing the observed data have been applied to constrain the inversion. In this paper, we present a self-constrained inversion method based on a correlation method. In our approach the correlation results are first obtained by calculating the cross-correlation between theoretical data and horizontal gradients of the observed data. Subsequently, we propose two specific strategies to extract the spatial variation from the correlation results and translate it into spatial weighting functions. Incorporating the spatial weighting functions into the model objective function, we obtain self-constrained solutions with higher reliability. We present two synthetic examples and one field magnetic data example to test validity. All results demonstrate that the solutions from our self-constrained inversion delineate the geological bodies with clearer boundaries and much more concentrated physical property distributions.
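The correlation step can be illustrated in one dimension: a sliding-window normalized cross-correlation between a theoretical gradient profile and the horizontal gradient of the observed data, clipped and squared into weights. This is a hypothetical cartoon of the self-constraint idea, not the authors' exact scheme:

```python
import numpy as np

def local_correlation(a, b, half_width):
    """Sliding-window normalized cross-correlation between two profiles;
    returns a value in [-1, 1] at each sample."""
    n = a.size
    out = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        wa = a[lo:hi] - a[lo:hi].mean()
        wb = b[lo:hi] - b[lo:hi].mean()
        denom = np.linalg.norm(wa) * np.linalg.norm(wb)
        out[i] = (wa @ wb) / denom if denom > 0 else 0.0
    return out

# Noise-free 1-D cartoon: a bell-shaped observed anomaly whose horizontal
# gradient is correlated against a theoretical gradient profile; coherent
# zones get high weight in the model objective function.
x = np.linspace(-10, 10, 201)
observed = np.exp(-0.5 * (x / 2.0) ** 2)
theoretical = np.exp(-0.5 * (x / 2.0) ** 2)
corr = local_correlation(np.gradient(observed, x),
                         np.gradient(theoretical, x), 10)
weights = np.clip(corr, 0.0, None) ** 2   # emphasize coherent zones
```

In the paper's 2-D setting, the analogous weights multiply the model objective function, concentrating recovered physical property within the correlated zones.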
The Wing-Body Aeroelastic Analyses Using the Inverse Design Method
NASA Astrophysics Data System (ADS)
Lee, Seung Jun; Im, Dong-Kyun; Lee, In; Kwon, Jang-Hyuk
Flutter is one of the most dangerous problems in aeroelasticity. When it occurs, the aircraft structure can fail in a few seconds. In recent aeroelastic research, computational fluid dynamics (CFD) techniques have become an important means of accurately predicting unstable aeroelastic responses. Among the various flow equations, such as the Navier-Stokes, Euler, and full potential equations, the transonic small disturbance (TSD) theory is widely recognized as one of the most efficient. However, the small disturbance assumption limits the applicable range of the TSD theory to thin wings. For a missile, which usually has small-aspect-ratio wings, the influence of body aerodynamics on the wing surface may be significant. Thus, the flutter stability including the body effect should be verified. In this research an inverse design method is used to compensate for the aerodynamic deficiency arising from the fuselage. The MGM (modified Garabedian-McFadden) inverse design method is used to optimize the aerodynamic field of a full aircraft model. Furthermore, the present TSD aeroelastic analyses do not require a grid regeneration process. The MGM inverse design method converges faster than other conventional aerodynamic theories. Consequently, the inverse-designed aeroelastic analyses show that the flutter stability is lowered by the body effect.
Mass, velocity anisotropy, and pseudo phase-space density profiles of Abell 2142
NASA Astrophysics Data System (ADS)
Munari, E.; Biviano, A.; Mamon, G. A.
2014-06-01
Aims: We aim to compute the mass and velocity anisotropy profiles of Abell 2142 and, from there, the pseudo phase-space density profile Q(r) and the density slope - velocity anisotropy β - γ relation, and then to compare them with theoretical expectations. Methods: The mass profiles were obtained by using three techniques based on member galaxy kinematics, namely the caustic method, the method of dispersion-kurtosis, and MAMPOSSt. Through the inversion of the Jeans equation, it was possible to compute the velocity anisotropy profiles. Results: The mass profiles, as well as the virial values of mass and radius, computed with the different techniques agree with one another and with the estimates coming from X-ray and weak lensing studies. A combined mass profile is obtained by averaging the lensing, X-ray, and kinematics determinations. The cluster mass profile is well fitted by an NFW profile with c = 4.0 ± 0.5. The populations of red and blue galaxies appear to have different velocity anisotropy configurations: red galaxies are almost isotropic, while blue galaxies are radially anisotropic, with a weak dependence on radius. The Q(r) profile for the red galaxy population agrees with the theoretical results found in cosmological simulations, suggesting that any bias, relative to the dark matter particles, in the velocity dispersion of the red component is independent of radius. The β - γ relation for red galaxies matches the theoretical relation only in the inner region. The deviations might be due to the use of galaxies as tracers of the gravitational potential, unlike the collisionless tracers used in the theoretical relation.
Bledsoe, Keith C.
2015-04-01
The DiffeRential Evolution Adaptive Metropolis (DREAM) method is a powerful optimization/uncertainty quantification tool used to solve inverse transport problems in Los Alamos National Laboratory’s INVERSE code system. The DREAM method has been shown to be adept at accurate uncertainty quantification, but it can be very computationally demanding. Previously, the DREAM method in INVERSE performed a user-defined number of particle transport calculations. This placed a burden on the user to guess the number of calculations that would be required to accurately solve any given problem. This report discusses a new approach that has been implemented into INVERSE, the Gelman-Rubin convergence metric. This metric automatically detects when an appropriate number of transport calculations have been completed and the uncertainty in the inverse problem has been accurately calculated. In a test problem with a spherical geometry, this method was found to decrease the number of transport calculations (and thus time required) to solve a problem by an average of over 90%. In a cylindrical test geometry, a 75% decrease was obtained.
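A minimal sketch of the Gelman-Rubin potential scale reduction factor (R-hat) described above, where values near 1 indicate convergence; the chains below are synthetic, and the exact variant implemented in INVERSE may differ:

```python
import numpy as np

# Gelman-Rubin R-hat for m parallel chains of n samples each: compares
# between-chain variance B with mean within-chain variance W.
def gelman_rubin(chains):
    """chains: (m, n) array of m chains with n samples each."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
converged = rng.normal(0.0, 1.0, size=(4, 1000))   # all chains sample alike
stuck = np.vstack([rng.normal(0, 1, 1000),
                   rng.normal(5, 1, (3, 1000))])   # one chain elsewhere

r_converged = gelman_rubin(converged)   # close to 1
r_stuck = gelman_rubin(stuck)           # well above 1
```

In a sampler like DREAM, iteration would stop once R-hat for every parameter drops below a threshold such as 1.1.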
Schmidt, Kevin M; Vasquez, Victor R
2015-09-28
Cohesive energy curves contain important information about the energetics of atomic interactions in crystalline materials, and these are most often obtained using ab initio methods such as density functional theory. Decomposing these curves into the different interatomic contributions is of great value for evaluating and characterizing the energetics of specific types of atom-atom interactions. In this work, we present and discuss a generalized method for the inversion of cohesive energy curves of crystalline materials for the extraction of pairwise interatomic potentials, using detailed geometrical descriptions of the atomic interactions to construct a list of atomic displacements and degeneracies, which is modified using a Gaussian elimination process to isolate the pairwise interactions. The proposed method provides a more general framework for cohesive energy inversions that is robust and accurate for systems well described by pairwise potential interactions. Results show very good reproduction of cohesive energies with the same or better accuracy than current approaches, with the advantage that the method has broader applications.
NASA Astrophysics Data System (ADS)
Li, Jinghe; Song, Linping; Liu, Qing Huo
2016-02-01
A simultaneous multiple-frequency contrast source inversion (CSI) method is applied to reconstruct hydrocarbon reservoir targets in a complex multilayered medium in two dimensions, simulating the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver for the 2D volume integral equation in the forward computation. The inversion technique combines the efficient FFT algorithm, which speeds up the matrix-vector multiplication, with the stable convergence of the simultaneous multiple-frequency CSI in the iteration process. As a result, this method is capable of effective quantitative conductivity image reconstruction for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples are presented to validate the effectiveness and capability of the simultaneous multiple-frequency CSI method for a limited array view in VEP.
Fast Dynamic Meshing Method Based on Delaunay Graph and Inverse Distance Weighting Interpolation
NASA Astrophysics Data System (ADS)
Wang, Yibin; Qin, Ning; Zhao, Ning
2016-06-01
A novel mesh deformation technique is developed based on the Delaunay graph mapping method and inverse distance weighting (IDW) interpolation. The algorithm maintains the efficiency of Delaunay-graph-mapping mesh deformation while providing better control of the near-surface mesh quality. The Delaunay graph is used to divide the mesh domain into a number of sub-domains. On each sub-domain, inverse distance weighting interpolation is applied to build a much smaller translation matrix between the original mesh and the deformed mesh, resulting in an efficiency similar to that of the fast Delaunay graph mapping method. The paper shows how the near-wall mesh quality is controlled and improved by the new method, and compares its computational time with that of the original Delaunay graph mapping method.
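The IDW building block can be sketched as follows; the node layout, boundary displacements, and weighting exponent are invented, and the paper applies this within each Delaunay sub-domain rather than globally:

```python
import numpy as np

# IDW interpolation of known boundary-node displacements onto interior
# mesh nodes: closer boundary nodes contribute more to each interior node.
def idw_displace(interior, boundary, disp, power=2.0, eps=1e-12):
    """Interpolate boundary displacements onto interior points."""
    # Pairwise distances between interior and boundary nodes.
    d = np.linalg.norm(interior[:, None, :] - boundary[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)         # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)    # normalize per interior node
    return w @ disp

# Unit square: the bottom wall moves +0.1 in x, the top wall stays fixed.
boundary = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
disp = np.array([[0.1, 0.0], [0.1, 0.0], [0.0, 0.0], [0.0, 0.0]])
interior = np.array([[0.5, 0.5], [0.5, 0.01]])   # center and near-wall node

shift = idw_displace(interior, boundary, disp)
moved = interior + shift
```

The center node receives the average displacement, while the near-wall node follows the moving wall more closely, which is the property used to preserve near-surface mesh quality.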
Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1989-01-01
An inverse wing design method was developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver, FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; wing and fuselage fitted, curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse methods as an extension of previous methods existing for design in Cartesian coordinates is presented. Results are shown for inviscid wing design cases in super-critical flow regimes. The test cases selected also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian; Wilson, John L.
2000-09-01
Inverse methods can be used to reconstruct the release history of a known source of groundwater contamination from concentration data describing the present-day spatial distribution of the contaminant plume. Using hypothetical release history functions and contaminant plumes, we evaluate the relative effectiveness of two proposed inverse methods, Tikhonov regularization (TR) and minimum relative entropy (MRE) inversion, in reconstructing the release history of a conservative contaminant in a one-dimensional domain [Skaggs and Kabala, 1994; Woodbury and Ulrych, 1996]. We also address issues of reproducibility of the solution and the appropriateness of models for simulating random measurement error. The results show that if error-free plume concentration data are available, both methods perform well in reconstructing a smooth source history function. With error-free data the MRE method is more robust than TR in reconstructing a nonsmooth source history function; however, the TR method is more robust if the data contain measurement error. Two error models were evaluated in this study, and we found that the particular error model does not affect the reliability of the solutions. The results for the TR method have somewhat greater reproducibility because, in some cases, its input parameters are less subjective than those of the MRE method; however, the MRE solution can identify regions where the data give little or no information about the source history function, while the TR solution cannot.
Supercritical blade design on stream surfaces of revolution with an inverse method
NASA Technical Reports Server (NTRS)
Schmidt, E.; Grein, H.-D.
1991-01-01
A method to solve the inverse problem of supercritical blade-to-blade flow on stream surfaces of revolution with variable radius and variable stream surface thickness in a relative system is described. Some aspects of shockless design and of leading edge resolution in the numerical procedure are depicted. Some supercritical compressor cascades were designed and their complete flow field results were compared with computations of two different analysis methods.
A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Liu, Shuang; Hu, Xiangyun; Liu, Tianyou
2014-07-01
Simulating natural ants' foraging behavior, the ant colony optimization (ACO) algorithm performs excellently in combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, the ACO has seldom been used to invert gravity and magnetic data. On the basis of the continuous and multi-dimensional objective function for potential field data inversion, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes, and ants directionally tour the nodes by use of transition probabilities. We update the pheromone trails by use of a Gaussian mapping between the objective function value and the quantity of pheromone. It can analyze the search results in real time and improve the rate of convergence and precision of inversion. Traditional mappings, including the ant-cycle system, weaken the differences between individual ants and lead to premature convergence. We tested our method by use of synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
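A compact sketch of the node-partition idea on a stand-in objective; the quadratic objective, node grids, and the simple inverse pheromone mapping used here in place of the paper's Gaussian mapping are all invented for illustration:

```python
import numpy as np

# Node-partition ACO sketch: each continuous variable is discretized into
# nodes, ants pick nodes with pheromone-weighted transition probabilities,
# and pheromone is reinforced on nodes of good solutions.
rng = np.random.default_rng(2)

def objective(x):                       # stand-in misfit, minimum at (1, -2)
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

nodes = [np.linspace(-5, 5, 101), np.linspace(-5, 5, 101)]  # discrete nodes
tau = [np.ones(101), np.ones(101)]      # pheromone trail per variable

n_ants, n_iters, rho = 20, 60, 0.1      # rho: evaporation rate
best_x, best_f = None, np.inf
for _ in range(n_iters):
    for _ant in range(n_ants):
        # Each ant picks one node per variable, proportional to pheromone.
        idx = [rng.choice(grid.size, p=t / t.sum())
               for grid, t in zip(nodes, tau)]
        x = np.array([grid[i] for grid, i in zip(nodes, idx)])
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
        for t, i in zip(tau, idx):      # better solutions deposit more
            t[i] += 1.0 / (1.0 + f)
    for t in tau:
        t *= 1.0 - rho                  # pheromone evaporation
```

The pheromone marginals concentrate on the nodes nearest (1, -2), so the best sampled model approaches the true minimum without any gradient information.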
The generalized Phillips-Twomey method for NMR relaxation time inversion
NASA Astrophysics Data System (ADS)
Gao, Yang; Xiao, Lizhi; Zhang, Yi; Xie, Qingming
2016-10-01
The inversion of NMR relaxation time involves the Fredholm integral equation of the first kind. Due to its ill-posedness, numerical solutions to this type of equation are often much less accurate than desired and bear little resemblance to the true solution. There has been strong interest in finding a well-posed method for this ill-posed problem since the 1950s. In this paper, we prove the existence, uniqueness, stability and convergence of the generalized Phillips-Twomey regularization method for solving this type of equation. Numerical simulations and core analyses arising from NMR transverse relaxation time inversion are conducted to show the effectiveness of the generalized Phillips-Twomey method. Both the simulation results and the core analyses agree well with the model and with reality.
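The classical Phillips-Twomey scheme (the paper's generalized variant is not reproduced here) can be sketched on a toy T2 inversion; the echo-time grid, the synthetic distribution, and gamma are invented, and nonnegativity is not enforced:

```python
import numpy as np

# Phillips-Twomey for a Fredholm first-kind problem g = K f:
# minimize ||K f - g||^2 + gamma * ||D2 f||^2, with D2 a
# second-difference smoothing operator.
t = np.linspace(0.001, 1.0, 120)             # echo times (s)
T2 = np.logspace(-3, 0, 60)                  # relaxation-time grid (s)
K = np.exp(-t[:, None] / T2[None, :])        # exponential kernel

logT2 = np.log10(T2)                         # two broad peaks on log-T2 axis
f_true = (np.exp(-((logT2 + 2.0) ** 2) / 0.05)
          + np.exp(-((logT2 + 0.5) ** 2) / 0.05))

rng = np.random.default_rng(3)
g = K @ f_true + 0.001 * rng.normal(size=t.size)   # noisy decay data

D2 = np.diff(np.eye(T2.size), n=2, axis=0)   # second-difference operator
gamma = 1e-4                                 # regularization parameter
f_pt = np.linalg.solve(K.T @ K + gamma * D2.T @ D2, K.T @ g)
```

Without the gamma term the normal equations are numerically singular for this kernel; the smoothing penalty makes the solve stable while still fitting the decay data to within the noise.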
New inverse method of centrifugal pump blade based on free form deformation
NASA Astrophysics Data System (ADS)
Zhang, R. H.; Guo, M.; Yang, J. H.; Liu, Y.; Li, R. N.
2013-12-01
In this research, a new inverse method for centrifugal pump blades based on free-form deformation (FFD) is proposed; the free-form deformation is used to parameterize the pump blade. The blade is embedded in a trivariate control volume that is equally subdivided by a control lattice. The control volume can be deformed by moving the control lattice, and the embedded blade deforms with it. The flow in the pump is solved using a three-dimensional turbulence model. The lattice deformation function is constructed according to the gradient distribution of fluid energy along the blade and its objective distribution. The blade shape is deformed continually according to the flow solution until the objective blade shape is obtained. The calculation case shows that the proposed inverse method based on FFD is sound.
Parallelized Three-Dimensional Resistivity Inversion Using Finite Elements And Adjoint State Methods
NASA Astrophysics Data System (ADS)
Schaa, Ralf; Gross, Lutz; Du Plessis, Jaco
2015-04-01
The resistivity method is one of the oldest geophysical exploration methods. It employs one pair of electrodes to inject current into the ground and one or more pairs of electrodes to measure the electrical potential difference. The potential difference is a non-linear function of the subsurface resistivity distribution, described by an elliptic partial differential equation (PDE) of the Poisson type. Inversion of the measured potentials solves for the subsurface resistivity represented by the PDE coefficients. With increasing advances in multichannel resistivity acquisition systems (systems with more than 60 channels and full waveform recording are now emerging), inversion software requires efficient storage and solver algorithms. We developed the finite element solver Escript, which provides a user-friendly programming environment in Python to solve large-scale PDE-based problems (see https://launchpad.net/escript-finley). Using finite elements, highly irregularly shaped geology and topography can readily be taken into account. For the 3D resistivity problem, we have implemented the secondary potential approach, where the PDE is decomposed into a primary potential caused by the source current and a secondary potential caused by changes in subsurface resistivity. The primary potential is calculated analytically, and the boundary value problem for the secondary potential is solved using nodal finite elements. This approach removes the singularity caused by the source currents and provides more accurate 3D resistivity models. To solve the inversion problem we apply a 'first optimize then discretize' approach using a quasi-Newton scheme in the form of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method (see Gross & Kemp 2013). The evaluation of the cost function requires the solution of the secondary potential PDE for each source current and the solution of the corresponding adjoint-state PDE for the cost function gradients with respect to the subsurface
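The adjoint-state gradient at the heart of this scheme can be illustrated on a toy linear "PDE" A(m) u = q with A(m) = diag(m), an invented stand-in for the secondary-potential problem; one forward and one adjoint solve give the full gradient of the misfit:

```python
import numpy as np

# For J(m) = 0.5 ||u(m) - d||^2 with diag(m) u = q (m > 0 assumed):
# adjoint solve diag(m)^T lam = u - d, then dJ/dm_i = -lam_i * u_i.
def forward(m, q):
    return q / m                          # u solving diag(m) u = q

def cost_and_grad(m, q, d):
    u = forward(m, q)                     # forward solve
    lam = (u - d) / m                     # adjoint solve
    grad = -lam * u                       # dJ/dm_i = -lam_i u_i
    return 0.5 * np.sum((u - d) ** 2), grad

m = np.array([1.0, 2.0, 4.0])
q = np.array([1.0, 1.0, 1.0])
d = np.array([0.9, 0.4, 0.3])

J, g = cost_and_grad(m, q, d)

# Finite-difference check of one gradient component.
eps = 1e-6
m_pert = m.copy()
m_pert[1] += eps
J_pert, _ = cost_and_grad(m_pert, q, d)
fd = (J_pert - J) / eps
```

The gradient from the adjoint solve matches the finite-difference estimate, and this is the quantity an L-BFGS driver would consume at each iteration.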
Spectroscopic Studies of Abell Clusters
NASA Astrophysics Data System (ADS)
Way, Michael Joseph
The objectives of this work are to use spectroscopic techniques to accurately categorize galaxies as either HII-region star-forming galaxies or as Active Galactic Nuclei powered by a black hole, and to use radial velocities and projected positions of galaxies in clusters to obtain the total cluster mass and its distribution. The masses and distributions compare well to X-ray mass measurements. The commonly used technique of Dressler, Thompson & Shectman (1985, ApJ, 288, 481) for discriminating between Active Galactic Nuclei and HII-region galaxies uses measurements of the equivalent widths of the emission lines (OII) 3727 Å, Hβ, and (OIII) 5007 Å. High-quality spectra of 42 galaxies were taken, and it is shown that their method is not capable of distinguishing between Active Galactic Nuclei and HII-region galaxies. The emission line fluxes of Hβ, (OIII) 5007 Å, (OI) 6300 Å, Hα, (NII) 6583 Å, and (SII) 6716+6731 Å, in combination with the method of Veilleux & Osterbrock (1987, ApJS, 63, 295), must be used to accurately distinguish between Active Galactic Nuclei and HII-region galaxies. Galaxy radial velocities from spectroscopic data and their projected 2-D positions in clusters are used to obtain robust estimates of the total mass and mass distribution in two clusters. The total mass is calculated using the Virial theorem after removing substructure. The mass distribution is estimated via several robust statistical tests for 1-D, 2-D and 3-D structure. It is shown that the derived mass estimates agree well with those found independently from hot X-ray gas emission in clusters.
Complete Measurement of S(1D2) Photofragment Alignment from Abel-Invertible Ion Images
NASA Astrophysics Data System (ADS)
Rakitzis, T. Peter; Samartzis, Peter C.; Kitsopoulos, Theofanis N.
2001-09-01
A novel method to measure directly the photofragment alignment from Abel-invertible two-dimensional ion images, as a function of photofragment recoil velocity, is demonstrated for S(1D2) atoms from the photodissociation of carbonyl sulfide at 223 nm. The results are analyzed in terms of coherent and incoherent contributions from two dissociative states, showing that the phase differences of the asymptotic wave functions of the fast and slow recoil-velocity channel are approximately π/2 and 0, respectively.
A method extracting solar cell parameters from spectral response by inverse laplace transform
NASA Astrophysics Data System (ADS)
Tuominen, E.; Acerbis, M.; Hovinen, A.; Siirtola, T.; Sinkkonen, J.
1997-01-01
A mathematical method to interpret spectral responses measured from solar cells has been developed. By taking an inverse Laplace transform of the spectral response of a solar cell, the spatially dependent collection efficiency of the cell can be obtained. Several important material parameters of the solar cell can be extracted from this function. Applying this method, the properties of the solar cell can be investigated without applying other characterization methods to the cell itself. We have applied the method both to simulated solar cells and to real solar cells.
Inverse airfoil design procedure using a multigrid Navier-Stokes method
NASA Technical Reports Server (NTRS)
Malone, J. B.; Swanson, R. C.
1991-01-01
The Modified Garabedian McFadden (MGM) design procedure was incorporated into an existing 2-D multigrid Navier-Stokes airfoil analysis method. The resulting design method is an iterative procedure based on a residual correction algorithm and permits the automated design of airfoil sections with prescribed surface pressure distributions. The new design method, Multigrid Modified Garabedian McFadden (MG-MGM), is demonstrated for several different transonic pressure distributions obtained from both symmetric and cambered airfoil shapes. The airfoil profiles generated with the MG-MGM code are compared to the original configurations to assess the capabilities of the inverse design method.
NASA Astrophysics Data System (ADS)
Hendricks Franssen, H.-J.; Alcolea, A.; Riva, M.; Bakr, M.; van de Wiel, N.; Stauffer, F.; Guadagnini, A.,
2009-04-01
While several inverse modeling methods for groundwater flow have been developed during the last decades, hardly any comparisons among them have been published. We present a comparison of the performance of seven inverse methods: the Regularized Pilot Points Method (both in its classical estimation (RPPM-CE) and Monte Carlo (MC) simulation (RPPM-CS) variants), the Monte Carlo variant of the Representer Method (RM), the Sequential Self-Calibration method (SSC), the Zonation Method (ZM), the Moment Equations Method (MEM) and a recently developed Semi-Analytical Method (SAM). The aforementioned methods are applied to a two-dimensional synthetic set-up depicting the steady-state groundwater flow around an extraction well in the presence of distributed recharge. Their relative performances were assessed in terms of characterization of (a) the log-transmissivity field, (b) the hydraulic head distribution and (c) the well catchment delineation with respect to the reference scenario. Simulations were performed for a mildly and a strongly heterogeneous transmissivity field. Adopted comparison measures include the absolute mean error, the root mean square error and the average ensemble standard deviation (whenever a method allows evaluating it) of the log-transmissivity and hydraulic head distributions. In addition, the estimated median and reference well catchments were compared and the uncertainty associated with the estimated catchment was evaluated. We found that the MC-based methods (RPPM-CS, RM and SSC) yield very similar results in all tested scenarios, even though they use different parameterization schemes and different objective functions. The linear correlation coefficient between the estimates obtained by the different MC methods increases with the number of stochastic realizations adopted and attains values up to 0.99 for 500 stochastic realizations. For the mildly heterogeneous case, the other inverse methods (i.e., non MC) yielded results which were consistent with
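The comparison measures named above are standard; a small sketch with an invented ensemble (the field size, ensemble size, and noise levels are arbitrary) shows how they would be computed:

```python
import numpy as np

# Absolute mean error (AME), root mean square error (RMSE), and average
# ensemble standard deviation (AESD) of log-transmissivity estimates
# against a reference field.
rng = np.random.default_rng(4)
reference = rng.normal(-4.0, 1.0, size=500)                    # reference log-T field
ensemble = reference + rng.normal(0.0, 0.3, size=(100, 500))   # 100 realizations

mean_estimate = ensemble.mean(axis=0)            # ensemble mean estimate
ame = np.mean(np.abs(mean_estimate - reference))
rmse = np.sqrt(np.mean((mean_estimate - reference) ** 2))
aesd = ensemble.std(axis=0, ddof=1).mean()       # spread across realizations
```

AME and RMSE measure accuracy of the (ensemble mean) estimate, while AESD measures the uncertainty the MC-based methods report; AME never exceeds RMSE.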
NASA Technical Reports Server (NTRS)
Vazquez, Sixto L.; Tessler, Alexander; Quach, Cuong C.; Cooper, Eric G.; Parks, Jeffrey; Spangler, Jan L.
2005-01-01
In an effort to mitigate accidents due to system and component failure, NASA's Aviation Safety Program has partnered with industry, academia, and other governmental organizations to develop real-time, on-board monitoring capabilities and system performance models for early detection of airframe structure degradation. NASA Langley is investigating a structural health monitoring capability that uses a distributed fiber optic strain system and an inverse finite element method for measuring and modeling structural deformations. This report describes the constituent systems that enable this structural monitoring function and discusses results from laboratory tests using the fiber strain sensor system and the inverse finite element method to demonstrate structural deformation estimation on an instrumented test article.
A domain derivative-based method for solving elastodynamic inverse obstacle scattering problems
NASA Astrophysics Data System (ADS)
Le Louër, Frédérique
2015-11-01
The present work is concerned with the shape reconstruction problem of isotropic elastic inclusions from far-field data obtained by the scattering of a finite number of time-harmonic incident plane waves. This paper aims at completing the theoretical framework which is necessary for the application of geometric optimization tools to the inverse transmission problem in elastodynamics. The forward problem is reduced to systems of boundary integral equations following the direct and indirect methods initially developed for solving acoustic transmission problems. We establish the Fréchet differentiability of the boundary to far-field operator and give a characterization of the first Fréchet derivative and its adjoint operator. Using these results we propose an inverse scattering algorithm based on the iteratively regularized Gauß-Newton method and show numerical experiments in the special case of star-shaped obstacles.
Structural Anomaly Detection Using Fiber Optic Sensors and Inverse Finite Element Method
NASA Technical Reports Server (NTRS)
Quach, Cuong C.; Vazquez, Sixto L.; Tessler, Alex; Moore, Jason P.; Cooper, Eric G.; Spangler, Jan. L.
2005-01-01
NASA Langley Research Center is investigating a variety of techniques for mitigating aircraft accidents due to structural component failure. One technique under consideration combines distributed fiber optic strain sensing with an inverse finite element method for detecting and characterizing structural anomalies that may provide early indication of airframe structure degradation. The technique identifies structural anomalies that result in observable changes in localized strain but do not impact the overall surface shape. Surface shape information is provided by an inverse finite element method that computes full-field displacements and internal loads using strain data from in-situ fiber optic sensors. This paper describes a prototype of such a system and reports results from a series of laboratory tests conducted on a test coupon subjected to increasing levels of damage.
An inverse method for the aerodynamic design of three-dimensional aircraft engine nacelles
NASA Technical Reports Server (NTRS)
Bell, R. A.; Cedar, R. D.
1991-01-01
A fast, efficient, and user-friendly inverse design system for 3-D nacelles was developed. The system is a product of a 2-D inverse design method originally developed at NASA Langley and the CFL3D analysis code, which was also developed at NASA Langley and modified for nacelle analysis. The design system uses a predictor/corrector design approach in which an analysis code is used to calculate the flow field for an initial geometry; the geometry is then modified based on the difference between the calculated and target pressures. A detailed discussion of the design method, the process of linking it to the modified CFL3D solver, and its extension to 3-D is presented. This is followed by a number of examples of the use of the design system for the design of both axisymmetric and 3-D nacelles.
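The predictor/corrector loop can be reduced to a toy sketch: an invented one-parameter "analysis" stands in for the flow solver, and the geometry is nudged by the mismatch between computed and target pressure until they agree (the linear response model and relaxation factor are illustrative only):

```python
# Residual-correction (predictor/corrector) design iteration on a toy model.
def analysis(geometry):
    """Stand-in flow solver: invented monotone pressure response."""
    return 1.0 - 0.4 * geometry

p_target = 0.52
geometry = 0.0                   # initial geometry parameter
omega = 1.0                      # relaxation factor

for _ in range(50):
    p = analysis(geometry)                   # predictor: analyze current shape
    geometry += omega * (p - p_target)       # corrector: update from mismatch
    if abs(p - p_target) < 1e-10:
        break
```

Because the response is monotone, the mismatch contracts at each pass and the loop converges to the geometry whose computed pressure matches the target; a real design system does the same per surface point with a CFD solve as the predictor.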
Seismic imaging and inversion based on spectral-element and adjoint methods
NASA Astrophysics Data System (ADS)
Luo, Yang
One of the most important topics in seismology is to construct detailed tomographic images beneath the surface, which can be interpreted geologically and geochemically to understand geodynamic processes happening in the interior of the Earth. Classically, these images are produced from linearized traveltime anomalies involving several particular seismic phases, whereas nonlinear inversion that fits synthetic seismograms to recorded signals based upon the adjoint method has become more and more favorable. Adjoint tomography, also referred to as waveform inversion, is advantageous over classical techniques in several aspects, such as better resolution, while it also has several drawbacks, e.g., slow convergence and lack of quantitative resolution analysis. In this dissertation, we focus on solving these remaining issues in adjoint tomography, from a theoretical perspective and based upon synthetic examples. To make the thesis complete by itself and easy to follow, we start from development of the spectral-element method, a wave equation solver that enables access to accurate synthetic seismograms for an arbitrary Earth model, and the adjoint method, which provides Fréchet derivatives, also known as sensitivity kernels, of a given misfit function. Then, the sensitivity kernels for waveform misfit functions are illustrated, using examples from exploration seismology, in other words, for migration purposes. Next, we show step by step how these gradient derivatives may be utilized in minimizing the misfit function, which leads to iterative refinements of the Earth model. Strategies needed to speed up the inversion, ensure convergence and improve resolution, e.g., preconditioning, quasi-Newton methods, multi-scale measurements and combination of traveltime and waveform misfit functions, are discussed. Through comparisons between adjoint tomography and classical tomography, we address the resolution issue by calculating the point-spread function, the
Numerical study of the inverse problem for the diffusion-reaction equation using optimization method
NASA Astrophysics Data System (ADS)
Soboleva, O. V.; Brizitskii, R. V.
2016-04-01
A model of substance transfer with a mixed boundary condition is considered. The inverse extremum problem of identifying the leading coefficient in a nonstationary diffusion-reaction equation is formulated. A numerical algorithm based on Newton's method for nonlinear optimization and a finite-difference discretization is developed for solving this extremum problem and implemented on a computer. The results of numerical experiments are discussed.
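A much-simplified version of this identification problem can be sketched: recover a constant reaction coefficient k in a steady finite-difference model -D u'' + k u = f (homogeneous Dirichlet BCs) from one synthetic observation, using Newton's method. The grid, D, f, the observation point, and the initial guess are all invented for illustration:

```python
import numpy as np

# Identify k in (L + k I) u = f from an observed u at the midpoint,
# by Newton iteration on the residual r(k) = u_mid(k) - u_obs.
n, D = 51, 1.0
h = 1.0 / (n + 1)
f = np.ones(n)

# Standard three-point stencil for -D u'' with Dirichlet BCs.
L = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) * D / h**2
mid = n // 2

def state(k):
    return np.linalg.solve(L + k * np.eye(n), f)

k_true = 3.0
u_obs = state(k_true)[mid]            # synthetic midpoint observation

k = 6.0                               # initial guess
for _ in range(30):
    u = state(k)
    r = u[mid] - u_obs                # residual at the observation point
    du_dk = -np.linalg.solve(L + k * np.eye(n), u)   # sensitivity du/dk
    k -= r / du_dk[mid]               # Newton update
    if abs(r / du_dk[mid]) < 1e-12:
        break
```

Since the midpoint solution is monotone and convex in k, the Newton iteration converges to the coefficient that generated the data; the paper treats the harder nonstationary, distributed-coefficient case.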
Studies of Trace Gas Chemical Cycles Using Inverse Methods and Global Chemical Transport Models
NASA Technical Reports Server (NTRS)
Prinn, Ronald G.
2003-01-01
We report progress in the first year, and summarize proposed work for the second year of the three-year dynamical-chemical modeling project devoted to: (a) development, testing, and refining of inverse methods for determining regional and global transient source and sink strengths for long lived gases important in ozone depletion and climate forcing, (b) utilization of inverse methods to determine these source/sink strengths using either MATCH (Model for Atmospheric Transport and Chemistry) which is based on analyzed observed wind fields or back-trajectories computed from these wind fields, (c) determination of global (and perhaps regional) average hydroxyl radical concentrations using inverse methods with multiple titrating gases, and (d) computation of the lifetimes and spatially resolved destruction rates of trace gases using 3D models. Important goals include determination of regional source strengths of methane, nitrous oxide, methyl bromide, and other climatically and chemically important biogenic/anthropogenic trace gases and also of halocarbons restricted by the Montreal protocol and its follow-on agreements and hydrohalocarbons now used as alternatives to the restricted halocarbons.
Interpretation of Trace Gas Data Using Inverse Methods and Global Chemical Transport Models
NASA Technical Reports Server (NTRS)
Prinn, Ronald G.
1997-01-01
This is a theoretical research project aimed at: (1) development, testing, and refining of inverse methods for determining regional and global transient source and sink strengths for long lived gases important in ozone depletion and climate forcing, (2) utilization of inverse methods to determine these source/sink strengths which use the NCAR/Boulder CCM2-T42 3-D model and a global 3-D Model for Atmospheric Transport and Chemistry (MATCH) which is based on analyzed observed wind fields (developed in collaboration by MIT and NCAR/Boulder), (3) determination of global (and perhaps regional) average hydroxyl radical concentrations using inverse methods with multiple titrating gases, and, (4) computation of the lifetimes and spatially resolved destruction rates of trace gases using 3-D models. Important goals include determination of regional source strengths of methane, nitrous oxide, and other climatically and chemically important biogenic trace gases and also of halocarbons restricted by the Montreal Protocol and its follow-on agreements and hydrohalocarbons used as alternatives to the restricted halocarbons.
Hesford, Andrew J.; Chew, Weng C.
2010-01-01
The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438
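The Kaczmarz-like iterations mentioned above can be illustrated on a plain linear system (this sketch uses a random dense matrix, not the DBIM/MLFMA operators; all names are illustrative):

```python
import numpy as np

# Kaczmarz sketch: cycle through the rows of A x = b, projecting the current
# iterate onto the hyperplane defined by each single (partial) measurement.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20)
b = A @ x_true

x = np.zeros(20)
for sweep in range(200):
    for i in range(A.shape[0]):
        a = A[i]
        # Project x onto {z : a.z = b_i} -- one partial measurement per update.
        x += (b[i] - a @ x) / (a @ a) * a
print(np.linalg.norm(x - x_true))   # converges for this consistent system
```

Because each update touches a single measurement, partial data can be used as soon as it is available, which is the acceleration mechanism the abstract refers to.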
A hybrid method for inversion of 3D DC resistivity logging measurements.
Gajda-Zagórska, Ewa; Schaefer, Robert; Smołka, Maciej; Paszyński, Maciej; Pardo, David
This paper focuses on the application of the hp hierarchic genetic strategy (hp-HGS) to a challenging problem: the inversion of 3D direct current (DC) resistivity logging measurements. The problem under consideration is formulated as a global optimization problem whose objective function (the misfit between computed and reference data) exhibits multiple minima. In this paper, we consider an extension of the hp-HGS strategy, namely coupling the hp-HGS algorithm with a gradient-based optimization method for the local search. Forward simulations are performed with a self-adaptive hp finite element method, hp-FEM. The computational cost of misfit evaluation by hp-FEM depends strongly on the assumed accuracy. This accuracy is adapted to the tree of populations generated by the hp-HGS algorithm, which makes the global phase significantly cheaper. Moreover, the tree structure of demes, together with the branch-reduction and conditional-sprouting mechanisms, reduces the number of expensive local searches to roughly the number of minima to be recognized. The common (direct and inverse) accuracy control, crucial for hp-HGS efficiency, is motivated by precise mathematical considerations. Numerical results demonstrate the suitability of the proposed method for the inversion of 3D DC resistivity logging measurements.
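The global-then-local coupling described above can be sketched generically (this is not hp-HGS: a cheap random global search proposes starting points, and gradient descent refines the best of them; the multimodal 1-D misfit is an illustrative stand-in for the resistivity misfit):

```python
import numpy as np

# Hybrid global/local sketch: random sampling for the global phase, gradient
# descent for the local phase, on a cost with several local minima.
rng = np.random.default_rng(1)

def misfit(x):
    return np.sin(3 * x) ** 2 + 0.1 * (x - 2.0) ** 2   # global min near 2*pi/3

def dmisfit(x):
    return 3 * np.sin(6 * x) + 0.2 * (x - 2.0)

# Global phase: coarse random sampling, keep the 5 best candidates.
cands = rng.uniform(-5, 5, 200)
starts = cands[np.argsort(misfit(cands))[:5]]

# Local phase: gradient descent from each retained candidate.
best = None
for x in starts:
    for _ in range(500):
        x -= 0.05 * dmisfit(x)
    if best is None or misfit(x) < misfit(best):
        best = x
print(best)
```

The point of the hybrid is visible even in this toy: the global phase only needs to land in the right basin, so it can afford coarse (cheap) evaluations, while the expensive accurate refinement is reserved for a handful of local searches.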
The Abell 85 BCG: A Nucleated, Coreless Galaxy
NASA Astrophysics Data System (ADS)
Madrid, Juan P.; Donzelli, Carlos J.
2016-03-01
New high-resolution r-band imaging of the brightest cluster galaxy (BCG) in Abell 85 (Holm 15A) was obtained using the Gemini Multi Object Spectrograph. These data were taken with the aim of deriving an accurate surface brightness profile of the BCG of Abell 85, in particular, its central region. The new Gemini data show clear evidence of a previously unreported nuclear emission that is evident as a distinct light excess in the central kiloparsec of the surface brightness profile. We find that the light profile is never flat nor does it present a downward trend toward the center of the galaxy. That is, the new Gemini data show a different physical reality from the featureless, “evacuated core” recently claimed for the Abell 85 BCG. After trying different models, we find that the surface brightness profile of the BCG of Abell 85 is best fit by a double Sérsic model.
Design of Aspirated Compressor Blades Using Three-dimensional Inverse Method
NASA Technical Reports Server (NTRS)
Dang, T. Q.; Rooij, M. Van; Larosiliere, L. M.
2003-01-01
A three-dimensional viscous inverse method is extended to allow blading design with full interaction between the prescribed pressure-loading distribution and a specified transpiration scheme. Transpiration on blade surfaces and endwalls is implemented as inflow/outflow boundary conditions, and the basic modifications to the method are outlined. This paper focuses on a discussion concerning an application of the method to the design and analysis of a supersonic rotor with aspiration. Results show that an optimum combination of pressure-loading tailoring with surface aspiration can lead to a minimization of the amount of sucked flow required for a net performance improvement at design and off-design operations.
3D CSEM data inversion using Newton and Halley class methods
NASA Astrophysics Data System (ADS)
Amaya, M.; Hansen, K. R.; Morten, J. P.
2016-05-01
For the first time in 3D controlled-source electromagnetic data inversion, we explore the use of the Newton and Halley optimization methods, which may show their potential when the cost function has a complex topology. The inversion is formulated as a constrained nonlinear least-squares problem which is solved by iterative optimization. These methods require the derivatives up to second order of the residuals with respect to the model parameters. We show how Green's functions determine the high-order derivatives, and develop a diagrammatical representation of the residual derivatives. The Green's functions are efficiently calculated on-the-fly, making use of a finite-difference frequency-domain forward modelling code based on a multi-frontal sparse direct solver. This allows us to build the second-order derivatives of the residuals while keeping the memory cost of the same order as in a Gauss-Newton (GN) scheme. Model updates are computed with a trust-region-based conjugate-gradient solver which does not require the computation of a stabilizer. We present inversion results for a synthetic survey and compare the GN, Newton, and super-Halley optimization schemes, and consider two different approaches to set the initial trust-region radius. Our analysis shows that the Newton and super-Halley schemes, using the same regularization configuration, add significant information to the inversion so that convergence is reached by different paths. In our simple resistivity model examples, the convergence speed of the Newton and super-Halley schemes is either similar or slightly superior to that of the GN scheme close to the minimum of the cost function. Due to current noise levels and other measurement inaccuracies in geophysical investigations, this advantageous behaviour is at present of low consequence, but it may, with further improvement of geophysical data acquisition, become an argument for more accurate higher-order methods such as these.
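The distinction between the Gauss-Newton update and a full Newton update (which adds the second-order residual term) can be shown on a toy one-parameter least-squares problem. The exponential model below is purely illustrative, not the CSEM forward map:

```python
import numpy as np

# Toy nonlinear least squares: r_i(m) = d_i - exp(-m * t_i).
t = np.linspace(0.1, 2.0, 8)
m_true = 1.3
d = np.exp(-m_true * t)

def residual(m): return d - np.exp(-m * t)
def jac(m):      return t * np.exp(-m * t)        # dr/dm
def hess_r(m):   return -t**2 * np.exp(-m * t)    # d2r/dm2 (second-order term)

def invert(use_full_newton, m=0.2):
    for _ in range(30):
        r, J = residual(m), jac(m)
        H = J @ J                                  # Gauss-Newton Hessian J^T J
        if use_full_newton:
            H += r @ hess_r(m)                     # full-Newton correction
        m -= (J @ r) / H                           # gradient is J^T r
    return m

print(invert(False), invert(True))   # both converge to m_true here
```

For this zero-residual toy problem both schemes reach the same answer; the abstract's point is that on complex misfit topologies the extra curvature information can change the path the iteration takes.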
The application of inverse methods to spatially-distributed acoustic sources
NASA Astrophysics Data System (ADS)
Holland, K. R.; Nelson, P. A.
2013-10-01
Acoustic inverse methods, based on the output of an array of microphones, can be readily applied to the characterisation of acoustic sources that can be adequately modelled as a number of discrete monopoles. However, there are many situations, particularly in the fields of vibroacoustics and aeroacoustics, where the sources are distributed continuously in space over a finite area (or volume). This paper is concerned with the practical problem of applying inverse methods to such distributed source regions via the process of spatial sampling. The problem is first tackled using computer simulations of the errors associated with the application of spatial sampling to a wide range of source distributions. It is found that the spatial sampling criterion for minimising the errors in the radiated far-field reconstructed from the discretised source distributions is strongly dependent on acoustic wavelength but is only weakly dependent on the details of the source field itself. The results of the computer simulations are verified experimentally through the application of the inverse method to the sound field radiated by a ducted fan. The un-baffled fan source with the associated flow field is modelled as a set of equivalent monopole sources positioned on the baffled duct exit along with a matrix of complementary non-flow Green functions. Successful application of the spatial sampling criterion involves careful frequency-dependent selection of source spacing, and results in the accurate reconstruction of the radiated sound field. Discussions of the conditioning of the Green function matrix which is inverted are included and it is shown that the spatial sampling criterion may be relaxed if conditioning techniques, such as regularisation, are applied to this matrix prior to inversion.
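The core inversion step, regularised solution of p = G q for discrete monopole strengths, can be sketched with an illustrative free-field geometry (not the ducted-fan setup; frequency, spacings, and the regularisation parameter are assumptions):

```python
import numpy as np

# Monopole strengths q recovered from microphone pressures p = G q via
# Tikhonov-regularised least squares: q = (G^H G + beta I)^{-1} G^H p.
rng = np.random.default_rng(2)
k = 2 * np.pi * 2000.0 / 343.0                  # wavenumber at 2 kHz in air

src = np.linspace(0.0, 1.0, 6)                  # source x-positions [m], y = 0
mic = np.linspace(-1.0, 1.0, 12)                # microphone x-positions [m], y = 1

# Free-field Green function matrix G_ij = exp(-j*k*r_ij) / (4*pi*r_ij).
r = np.sqrt((mic[:, None] - src[None, :]) ** 2 + 1.0)
G = np.exp(-1j * k * r) / (4 * np.pi * r)

q_true = rng.standard_normal(6) + 1j * rng.standard_normal(6)
p = G @ q_true                                  # simulated array pressures

beta = 1e-10                                    # Tikhonov regularisation
q = np.linalg.solve(G.conj().T @ G + beta * np.eye(6), G.conj().T @ p)
print(np.linalg.norm(q - q_true) / np.linalg.norm(q_true))
```

With sources spaced well below half a wavelength the matrix G becomes ill-conditioned and beta must be raised, which is exactly the conditioning/regularisation trade-off the abstract discusses.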
NASA Technical Reports Server (NTRS)
Gherlone, Marco; Cerracchio, Priscilla; Mattone, Massimiliano; Di Sciuva, Marco; Tessler, Alexander
2011-01-01
A robust and efficient computational method for reconstructing the three-dimensional displacement field of truss, beam, and frame structures, using measured surface-strain data, is presented. Known as shape sensing, this inverse problem has important implications for real-time actuation and control of smart structures, and for monitoring of structural integrity. The present formulation, based on the inverse Finite Element Method (iFEM), uses a least-squares variational principle involving strain measures of Timoshenko theory for stretching, torsion, bending, and transverse shear. Two inverse-frame finite elements are derived using interdependent interpolations whose interior degrees-of-freedom are condensed out at the element level. In addition, relationships between the order of kinematic-element interpolations and the number of required strain gauges are established. As an example problem, a thin-walled, circular cross-section cantilevered beam subjected to harmonic excitations in the presence of structural damping is modeled using iFEM; where, to simulate strain-gauge values and to provide reference displacements, a high-fidelity MSC/NASTRAN shell finite element model is used. Examples of low and high-frequency dynamic motion are analyzed and the solution accuracy examined with respect to various levels of discretization and the number of strain gauges.
Direct inversion of circulation and mixing from tracer measurements - Part 1: Method
NASA Astrophysics Data System (ADS)
von Clarmann, Thomas; Grabowski, Udo
2016-11-01
From a series of zonal mean global stratospheric tracer measurements sampled in altitude vs. latitude, circulation and mixing patterns are inferred by the inverse solution of the continuity equation. As a first step, the continuity equation is written as a tendency equation, which is numerically integrated over time to predict a later atmospheric state, i.e., mixing ratio and air density. The integration is formally performed by the multiplication of the initially measured atmospheric state vector by a linear prediction operator. Further, the derivative of the predicted atmospheric state with respect to the wind vector components and mixing coefficients is used to find the most likely wind vector components and mixing coefficients which minimize the residual between the predicted atmospheric state and the later measurement of the atmospheric state. Unless multiple tracers are used, this inversion problem is under-determined, and dispersive behavior of the prediction further destabilizes the inversion. Both these problems are addressed by regularization. For this purpose, a first-order smoothness constraint has been chosen. The usefulness of this method is demonstrated by application to various tracer measurements recorded with the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS). This method aims at a diagnosis of the Brewer-Dobson circulation without involving the concept of the mean age of stratospheric air, and related problems like the stratospheric tape recorder, or intrusions of mesospheric air into the stratosphere.
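The first-order smoothness regularisation described above can be sketched for a generic under-determined linear retrieval (the operator below is random and illustrative, not the MIPAS kernels; the regularisation weight is an assumption):

```python
import numpy as np

# Minimise |y - K x|^2 + lam * |L1 x|^2, where L1 is the first-difference
# operator penalising rough solutions of the under-determined system.
rng = np.random.default_rng(3)
n, m = 40, 15                                   # unknowns vs measurements
K = rng.standard_normal((m, n))
x_true = np.sin(np.linspace(0, np.pi, n))       # smooth target profile
y = K @ x_true + 0.01 * rng.standard_normal(m)

L1 = np.diff(np.eye(n), axis=0)                 # (n-1) x n first differences
lam = 1.0
x = np.linalg.solve(K.T @ K + lam * L1.T @ L1, K.T @ y)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The smoothness term makes the normal-equation matrix invertible even though m < n, which is how the constraint stabilises an otherwise under-determined and dispersive inversion.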
NASA Astrophysics Data System (ADS)
Li, Guo-Yang; Zheng, Yang; Liu, Yanlin; Destrade, Michel; Cao, Yanping
2016-11-01
A body force concentrated at a point and moving at a high speed can induce shear-wave Mach cones in dusty-plasma crystals or soft materials, as observed experimentally and named the elastic Cherenkov effect (ECE). The ECE in soft materials forms the basis of the supersonic shear imaging (SSI) technique, an ultrasound-based dynamic elastography method applied in clinics in recent years. Previous studies on the ECE in soft materials have focused on isotropic material models. In this paper, we investigate the existence and key features of the ECE in anisotropic soft media, by using both theoretical analysis and finite element (FE) simulations, and we apply the results to the non-invasive and non-destructive characterization of biological soft tissues. We also theoretically study the characteristics of the shear waves induced in a deformed hyperelastic anisotropic soft material by a source moving with high speed, considering that contact between the ultrasound probe and the soft tissue may lead to finite deformation. On the basis of our theoretical analysis and numerical simulations, we propose an inverse approach to infer both the anisotropic and hyperelastic parameters of incompressible transversely isotropic (TI) soft materials. Finally, we investigate the properties of the solutions to the inverse problem by deriving the condition numbers in analytical form and performing numerical experiments. In Part II of the paper, both ex vivo and in vivo experiments are conducted to demonstrate the applicability of the inverse method in practical use.
Fast full waveform inversion with source encoding and second-order optimization methods
NASA Astrophysics Data System (ADS)
Castellanos, Clara; Métivier, Ludovic; Operto, Stéphane; Brossier, Romain; Virieux, Jean
2015-02-01
Full waveform inversion (FWI) of 3-D data sets has recently become possible thanks to the development of high performance computing. However, FWI remains a computationally intensive task when high frequencies are injected in the inversion or more complex wave physics (viscoelastic) is accounted for. The highest computational cost results from the numerical solution of the wave equation for each seismic source. To reduce the computational burden, one well-known technique is to employ a random linear combination of the sources, rather than using each source independently. This technique, known as source encoding, has been shown to successfully reduce the computational cost when applied to real data. Up to now, the inversion is normally carried out using gradient descent algorithms. With the idea of achieving a fast and robust frequency-domain FWI, we assess the performance of the random source encoding method when it is interfaced with second-order optimization methods (quasi-Newton l-BFGS, truncated Newton). Because of the additional seismic modelings required to compute the Newton descent direction, it is not clear beforehand if truncated Newton methods can indeed further reduce the computational cost compared to gradient algorithms. We design precise stopping criteria of iterations to fairly assess the computational cost and the speed-up provided by the source encoding method for each optimization method. We perform experiments on synthetic and real data sets. In both cases, we confirm that combining source encoding with second-order optimization methods reduces the computational cost compared to the case where source encoding is interfaced with gradient descent algorithms. For the synthetic data set, inspired by the geology of the Gulf of Mexico, we show that the quasi-Newton l-BFGS algorithm requires the lowest computational cost. For the real data set application on the Valhall data, we show that the truncated Newton methods provide the most robust direction of descent.
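The source-encoding idea can be demonstrated on a linear(ised) forward map (this sketch is illustrative, not the FWI code): with random encoding signs w, the gradient of the encoded misfit equals the full multi-source gradient in expectation, at the cost of one simulation instead of one per source.

```python
import numpy as np

rng = np.random.default_rng(4)
n_src, n_par = 64, 10
F = rng.standard_normal((n_src, n_par))        # one "simulation" row per source
m_true = rng.standard_normal(n_par)
d = F @ m_true
m = np.zeros(n_par)
r = F @ m - d                                  # per-source residuals

full_grad = F.T @ r                            # needs all n_src simulations

# Encoded gradient: F^T w (w^T r), using a single encoded "simulation";
# averaging over encodings recovers the full gradient since E[w w^T] = I.
grads = []
for _ in range(20000):
    w = rng.choice([-1.0, 1.0], size=n_src)
    grads.append((F.T @ w) * (w @ r))
est = np.mean(grads, axis=0)
print(np.linalg.norm(est - full_grad) / np.linalg.norm(full_grad))
```

In practice one does not average thousands of encodings; a fresh random encoding per iteration is used, and the optimization tolerates the gradient noise, which is what the stopping-criterion design in the abstract has to account for.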
A computational method for the inversion of wide-band GPR measurements
NASA Astrophysics Data System (ADS)
Salucci, M.; Tenuti, L.; Poli, L.; Oliveri, G.; Massa, A.
2016-10-01
An innovative method for the inversion of ground penetrating radar (GPR) measurements is presented. The proposed inverse scattering (IS) approach is based on the exploitation of wide-band data according to a multi-frequency (MF) strategy, and integrates a customized particle swarm optimizer (PSO) within the iterative multi-scaling approach (IMSA) to counteract the high non-linearity of the optimized cost function. While, on the one hand, the IMSA reduces the ratio between problem unknowns and informative data, on the other hand the stochastic nature of the PSO solver allows it to "escape" from the high density of false solutions of the MF-IS subsurface problem. A set of representative numerical results verifies the effectiveness of the developed approach, as well as its superiority with respect to a deterministic implementation.
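A bare-bones particle swarm iteration (generic PSO on a multimodal 2-D cost; this is not the customized MF-IS implementation, and all parameter values are illustrative) looks like this:

```python
import numpy as np

# Each particle tracks its personal best; velocities blend inertia, a pull
# toward the personal best, and a pull toward the swarm's global best.
rng = np.random.default_rng(10)

def cost(p):                                   # many local minima, global at (0, 0)
    return np.sum(p ** 2, axis=1) + 2.0 * np.sum(1 - np.cos(2 * np.pi * p), axis=1)

n_part = 40
pos = rng.uniform(-4, 4, (n_part, 2))
vel = np.zeros((n_part, 2))
pbest, pbest_c = pos.copy(), cost(pos)
for _ in range(300):
    gbest = pbest[np.argmin(pbest_c)]
    vel = (0.7 * vel
           + 1.5 * rng.uniform(size=(n_part, 1)) * (pbest - pos)
           + 1.5 * rng.uniform(size=(n_part, 1)) * (gbest - pos))
    pos = pos + vel
    c = cost(pos)
    improved = c < pbest_c
    pbest[improved], pbest_c[improved] = pos[improved], c[improved]
print(pbest_c.min())
```

The swarm's stochastic exploration is what lets such a solver hop out of the false-solution basins that trap a deterministic local method.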
Using informative priors in facies inversion: The case of C-ISR method
NASA Astrophysics Data System (ADS)
Valakas, G.; Modis, K.
2016-08-01
Inverse problems involving the characterization of hydraulic properties of groundwater flow systems by conditioning on observations of the state variables are mathematically ill-posed because they have multiple solutions and are sensitive to small changes in the data. In the framework of McMC methods for nonlinear optimization and under an iterative spatial resampling transition kernel, we present an algorithm for narrowing the prior and thus producing improved proposal realizations. To achieve this goal, we cosimulate the facies distribution conditionally to facies observations and normal scores transformed hydrologic response measurements, assuming a linear coregionalization model. The approach works by creating an importance sampling effect that steers the process to selected areas of the prior. The effectiveness of our approach is demonstrated by an example application on a synthetic underdetermined inverse problem in aquifer characterization.
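The McMC acceptance step underlying such schemes can be illustrated with a generic 1-D Metropolis sampler (this is not the C-ISR resampling kernel; the toy Gaussian posterior is an assumption for the example):

```python
import numpy as np

# Metropolis: propose near the current state, accept with probability
# min(1, posterior ratio); here for the mean of noisy observations.
rng = np.random.default_rng(8)
data = rng.normal(1.5, 0.5, size=30)               # toy observations

def log_post(theta):                               # flat prior, Gaussian likelihood
    return -0.5 * np.sum((data - theta) ** 2) / 0.5 ** 2

theta, chain = 0.0, []
for _ in range(20000):
    prop = theta + 0.2 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
print(np.mean(chain[5000:]))   # posterior mean, near the data mean
```

The approach in the abstract replaces the simple random-walk proposal with conditionally cosimulated facies realizations, which is what steers the chain toward the informative part of the prior.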
NASA Astrophysics Data System (ADS)
Awaluddin, Moehammad; Yuwono, Bambang Darmo; Puspita, Yolanda Adya
2016-05-01
Continuous Global Positioning System (GPS) observations showed significant crustal displacements as a result of the 2010 Mentawai earthquake. A least-squares inversion of the Mentawai earthquake slip distribution from SuGAR observations yielded an optimal slip distribution by weighting a smoothing constraint and constraining the slip to zero at the edge of the earthquake rupture area. The maximum coseismic slip from the inversion was 1.997 m, concentrated around station PRKB (Pagai Island). In addition, the dip-slip component tends to dominate. The seismic moment calculated from the slip distribution was 6.89 × 10^20 Nm, which is equivalent to a moment magnitude of 7.8.
Localization of incipient tip vortex cavitation using ray based matched field inversion method
NASA Astrophysics Data System (ADS)
Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon
2015-10-01
Cavitation of a marine propeller is one of the main contributors to broadband radiated ship noise. In this research, an algorithm for the source localization of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and a matched-field inversion method is applied to find the source position by comparing the spatial correlation between measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified with a known virtual source and through a model test conducted in the Samsung ship-model-basin cavitation tunnel. It is found that the suggested algorithm enables efficient localization of incipient tip-vortex cavitation using a few pressure measurements on the outer hull above the propeller, and is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.
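The broadband matched-field step can be sketched with free-field monopole replicas (the geometry, frequencies, and noise level below are illustrative assumptions, not the cavitation-tunnel model):

```python
import numpy as np

# Bartlett matched-field localisation: correlate measured pressures with
# normalised monopole replicas on a grid, incoherently averaged over frequency.
rng = np.random.default_rng(5)
mics = np.array([[0.0, 0.0], [0.3, 0.0], [0.6, 0.0], [0.9, 0.0]])
freqs = [3000.0, 4000.0, 5000.0]
c = 1500.0                                      # sound speed in water [m/s]
src_true = np.array([0.45, 0.8])

def replica(pos, f):
    r = np.linalg.norm(mics - pos, axis=1)
    v = np.exp(-2j * np.pi * f * r / c) / r     # monopole field at the array
    return v / np.linalg.norm(v)

meas = {f: replica(src_true, f)
        + 0.02 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
        for f in freqs}

xs = np.linspace(0.0, 0.9, 31)
ys = np.linspace(0.2, 1.4, 31)
amb = np.zeros((31, 31))
for i, x in enumerate(xs):
    for j, y in enumerate(ys):
        # Incoherent broadband average of the Bartlett correlation.
        amb[i, j] = np.mean([abs(np.vdot(replica(np.array([x, y]), f),
                                         meas[f])) ** 2 for f in freqs])
best = np.unravel_index(np.argmax(amb), amb.shape)
print(xs[best[0]], ys[best[1]])
```

Averaging the correlation incoherently over frequencies suppresses the single-frequency sidelobes, which is the mechanism the abstract credits for the improved accuracy.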
The magnitude-redshift relation for 561 Abell clusters
NASA Technical Reports Server (NTRS)
Postman, M.; Huchra, J. P.; Geller, M. J.; Henry, J. P.
1985-01-01
The Hubble diagram for the 561 Abell clusters with measured redshifts has been examined using Abell's (1958) corrected photo-red magnitudes for the tenth-ranked cluster member (m10). After correction for the Scott effect and K dimming, the data are in good agreement with a linear magnitude-redshift relation with a slope of 0.2 out to z = 0.1. New redshift data are also presented for 20 Abell clusters. Abell's m10 is suitable for redshift estimation for clusters with m10 of no more than 16.5. At fainter m10, the number of foreground galaxies expected within an Abell radius is large enough to make identification of the tenth-ranked galaxy difficult. Interlopers bias the estimated redshift toward low values at high redshift. Leir and van den Bergh's (1977) redshift estimates suffer from this same bias but to a smaller degree because of the use of multiple cluster parameters. Constraints on deviations of cluster velocities from the mean cosmological flow require greater photometric accuracy than is provided by Abell's m10 magnitudes.
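The quoted slope of 0.2 follows directly from a linear Hubble law: since m = 5 log10(cz) + const, log10(z) versus magnitude has slope 1/5. A quick numerical illustration (fiducial absolute magnitude, H0, and scatter are assumptions):

```python
import numpy as np

# Simulate standard-candle clusters on a linear Hubble law and recover the
# 0.2 slope of log10(z) against apparent magnitude.
rng = np.random.default_rng(9)
z = 10 ** rng.uniform(-2, -1, 300)                 # redshifts out to z = 0.1
M, c, H0 = -23.0, 3.0e5, 70.0                      # abs. mag, c [km/s], H0
m = M + 5 * np.log10(c * z / H0) + 25 + 0.2 * rng.standard_normal(300)
slope = np.polyfit(m, np.log10(z), 1)[0]
print(round(slope, 3))   # close to 0.2
```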
Full Waveform Inversion Methods for Source and Media Characterization before and after SPE5
NASA Astrophysics Data System (ADS)
Phillips-Alonge, K. E.; Knox, H. A.; Ober, C.; Abbott, R. E.
2015-12-01
The Source Physics Experiment (SPE) was designed to advance our understanding of explosion-source phenomenology and subsequent wave propagation through the development of innovative physics-based models. Ultimately, these models will be used for characterizing explosions, which can occur with a variety of yields, depths of burial, and in complex media. To accomplish this, controlled chemical explosions were conducted in a granite outcrop at the Nevada National Security Site. These explosions were monitored with extensive seismic and infrasound instrumentation both in the near and far-field. Utilizing these data, we calculate predictions before the explosions occur and iteratively improve our models after each explosion. Specifically, we use an adjoint-based full waveform inversion code that employs discontinuous Galerkin techniques to predict waveforms at station locations prior to the fifth explosion in the series (SPE5). The full-waveform inversions are performed using a realistic geophysical model based on local 3D tomography and inversions for media properties using previous shot data. The code has capabilities such as unstructured meshes that align with material interfaces, local polynomial refinement, and support for various physics and methods for implicit and explicit time-integration. The inversion results we show here evaluate these different techniques, which allows for model fidelity assessment (acoustic versus elastic versus anelastic, etc.). In addition, the accuracy and efficiency of several time-integration methods can be determined. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Solving Dirac equations on a 3D lattice with inverse Hamiltonian and spectral methods
NASA Astrophysics Data System (ADS)
Ren, Z. X.; Zhang, S. Q.; Meng, J.
2017-02-01
A new method to solve the Dirac equation on a 3D lattice is proposed, in which the variational collapse problem is avoided by the inverse Hamiltonian method and the fermion doubling problem is avoided by performing spatial derivatives in momentum space with the help of the discrete Fourier transform, i.e., the spectral method. This method is demonstrated in solving the Dirac equation for a given spherical potential in a 3D lattice space. In comparison with the results obtained by the shooting method, the differences in single-particle energy are smaller than 10^-4 MeV, and the densities are almost identical, which demonstrates the high accuracy of the present method. The results obtained by applying this method without any modification to solve the Dirac equations for an axially deformed, nonaxially deformed, and octupole-deformed potential are provided and discussed.
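The spectral-derivative idea is simple to demonstrate: a spatial derivative evaluated by multiplying with ik in momentum (Fourier) space is exact for band-limited functions, avoiding the finite-difference stencils that cause fermion doubling on a lattice. A 1-D periodic sketch (the paper works in 3-D):

```python
import numpy as np

# Spectral derivative on a periodic grid: FFT, multiply by i*k, inverse FFT.
n, L = 64, 2 * np.pi
x = np.arange(n) * L / n
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # angular wavenumbers

f = np.sin(3 * x)
df = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
err = np.max(np.abs(df - 3 * np.cos(3 * x)))
print(err)   # close to machine precision
```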
Ray, J.; Lee, J.; Yadav, V.; ...
2014-08-20
We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
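The point about imposing non-negativity without strong nonlinearities can be illustrated with a minimal projected-gradient sketch (this is not StOMP itself; sizes and step size are assumptions): the update stays linear, and the constraint is enforced by a simple projection.

```python
import numpy as np

# Projected Landweber iteration for min |y - A x|^2 subject to x >= 0.
rng = np.random.default_rng(6)
m, n = 80, 40
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.maximum(rng.standard_normal(n), 0.0)   # non-negative "emissions"
y = A @ x_true

x = np.zeros(n)
mu = 0.3                                           # step size < 2 / ||A||^2
for _ in range(1000):
    x = x + mu * (A.T @ (y - A @ x))               # linear gradient update
    x[x < 0] = 0.0                                 # project onto x >= 0
print(np.linalg.norm(x - x_true))
```

Compare this with a log-transformed parameterisation x = exp(u), which guarantees positivity but turns the linear problem into a nonlinear one; the projection keeps each update a linear operation.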
Search for post-starburst (E+A) galaxies in the cluster Abell 3266
NASA Astrophysics Data System (ADS)
Zhang, Zhongyu
The objective of this work is to use spectroscopic techniques to further the understanding of the dynamical state of the galaxy cluster Abell 3266. This is a very rich cluster in the southern skies that has been extensively studied by many groups. The cluster shows evidence of a merger of substructure in its midst, but the geometry, dynamics, and age of this merger remain uncertain. Low-resolution fiber spectra of galaxies in Abell 3266 were analyzed and searched for “E+A” (post-starburst) galaxies, from which we selected two candidate “E+A” galaxies for follow-up high-resolution spectroscopy. The two candidate galaxies are confirmed as “E+A” galaxies with high-resolution slit spectra. The ages of these “E+A” galaxies (i.e. the time since their starburst occurred) are determined with the method developed by Leonardi & Rose (1996). We find that both galaxies had a major starburst in the past, but the bursts occurred at significantly different epochs. If the starbursts are related to the recent merger history of Abell 3266, instead of being just isolated events, they would indicate that there may have been more than one merger in this cluster in the past 3 Gyr or so. This might explain the rather disparate conclusions that have been obtained in the past about the merger history of this cluster. To compare with other nearby clusters, “E+A” galaxies were also searched for among nearly 2400 galaxies in 26 cluster fields. Only 4 candidates were found. This result is consistent with the general observational fact that there are substantially fewer spectroscopically disturbed galaxies in nearby clusters than in distant clusters. The result is also in quantitative agreement with the findings in the larger, more homogeneous Las Campanas Redshift Survey, confirming the reliability of our identification in Abell 3266. The impact of these statistical analyses on the understanding of galaxy evolution in cluster environment is also discussed.
A regularizing iterative ensemble Kalman method for PDE-constrained inverse problems
NASA Astrophysics Data System (ADS)
Iglesias, Marco A.
2016-02-01
We introduce a derivative-free computational framework for approximating solutions to nonlinear PDE-constrained inverse problems. The general aim is to merge ideas from iterative regularization with ensemble Kalman methods from Bayesian inference to develop a derivative-free, stable method that is easy to implement in applications where the PDE (forward) model is only accessible as a black box (e.g. with commercial software). The proposed regularizing ensemble Kalman method can be derived as an approximation of the regularizing Levenberg-Marquardt (LM) scheme (Hanke 1997 Inverse Problems 13 79-95) in which the derivative of the forward operator and its adjoint are replaced with empirical covariances from an ensemble of elements from the admissible space of solutions. The resulting ensemble method consists of an update formula that is applied to each ensemble member and that has a regularization parameter selected in a similar fashion to the one in the LM scheme. Moreover, an early termination of the scheme is proposed according to a discrepancy principle-type of criterion. The proposed method can also be viewed as a regularizing version of standard Kalman approaches which are often unstable unless ad hoc fixes, such as covariance localization, are implemented. The aim of this paper is to provide a detailed numerical investigation of the regularizing and convergence properties of the proposed regularizing ensemble Kalman scheme; the proof of these properties is an open problem. By means of numerical experiments, we investigate the conditions under which the proposed method inherits the regularizing properties of the LM scheme of (Hanke 1997 Inverse Problems 13 79-95) and is thus stable and suitable for its application in problems where the computation of the Fréchet derivative is not computationally feasible. More concretely, we study the effect of ensemble size, number of measurements, selection of initial ensemble and tunable parameters on the performance of the method.
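A minimal numerical sketch of the ensemble update the abstract describes (not from the paper; the toy linear forward model, ensemble size, and regularization parameter are all hypothetical). The key idea is that empirical cross-covariances stand in for the Fréchet derivative and its adjoint, so the forward model is only ever called as a black box:

```python
import numpy as np

def eki_update(thetas, y, G, Gamma, alpha):
    """One regularized ensemble Kalman update (minimal sketch).
    Empirical covariances replace the derivative of the forward operator
    and its adjoint; alpha plays the role of the LM regularization parameter."""
    J = thetas.shape[0]
    Gs = np.array([G(t) for t in thetas])                 # (J, m) black-box forward evaluations
    t_mean, g_mean = thetas.mean(0), Gs.mean(0)
    C_tg = (thetas - t_mean).T @ (Gs - g_mean) / (J - 1)  # (n, m) parameter-data covariance
    C_gg = (Gs - g_mean).T @ (Gs - g_mean) / (J - 1)      # (m, m) data covariance
    K = C_tg @ np.linalg.inv(C_gg + alpha * Gamma)        # Kalman-type gain
    return thetas + (y - Gs) @ K.T                        # same formula for every member

# Toy black-box forward model: G(theta) = A @ theta
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))
truth = np.array([1.0, -2.0, 0.5])
y = A @ truth
thetas = rng.normal(size=(20, 3))                         # initial ensemble
res0 = np.linalg.norm(A @ thetas.mean(0) - y)
for _ in range(20):
    thetas = eki_update(thetas, y, lambda t: A @ t, np.eye(5), alpha=0.1)
res = np.linalg.norm(A @ thetas.mean(0) - y)              # data misfit shrinks
```

In the paper's scheme the regularization parameter and the stopping index are chosen adaptively (discrepancy principle); here they are fixed for brevity.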
Reconstruction of multiple gastric electrical wave fronts using potential-based inverse methods.
Kim, J H K; Pullan, A J; Cheng, L K
2012-08-21
One approach for non-invasively characterizing gastric electrical activity, commonly used in the field of electrocardiography, involves solving an inverse problem whereby electrical potentials on the stomach surface are directly reconstructed from dense potential measurements on the skin surface. To investigate this problem, an anatomically realistic torso model and an electrical stomach model were used to simulate potentials on stomach and skin surfaces arising from normal gastric electrical activity. The effectiveness of the Greensite-Tikhonov and Tikhonov inverse methods was compared under the presence of 10% Gaussian noise with either 84 or 204 body surface electrodes. The stability and accuracy of the Greensite-Tikhonov method were further investigated by introducing varying levels of Gaussian signal noise or by increasing or decreasing the size of the stomach by 10%. Results showed that the reconstructed solutions were able to represent the presence of propagating multiple wave fronts and the Greensite-Tikhonov method with 204 electrodes performed best (correlation coefficients of activation time: 90%; pacemaker localization error: 3 cm). The Greensite-Tikhonov method was stable with Gaussian noise levels up to 20% and 10% change in stomach size. The use of 204 rather than 84 body surface electrodes improved the performance; however, for all investigated cases, the Greensite-Tikhonov method outperformed the Tikhonov method.
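The Tikhonov baseline in this comparison is standard zeroth-order Tikhonov regularization of an ill-conditioned linear transfer problem. A minimal sketch (not from the paper; the transfer matrix, noise level, and regularization parameter are hypothetical stand-ins for the torso-model geometry):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Zeroth-order Tikhonov: minimize ||A x - b||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Ill-conditioned toy transfer matrix (skin potentials from stomach potentials)
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(40, 40)))
V, _ = np.linalg.qr(rng.normal(size=(30, 30)))
s = np.logspace(0, -8, 30)                      # rapidly decaying singular values
A = U[:, :30] * s @ V.T
x_true = np.sin(np.linspace(0, np.pi, 30))      # smooth "stomach surface" source
b = A @ x_true + 1e-4 * rng.normal(size=40)     # noisy "skin surface" measurements

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]  # unregularized: noise blows up
x_tik = tikhonov_solve(A, b, lam=1e-3)          # damped, stable reconstruction
```

The Greensite variant additionally exploits the temporal correlation of the recordings before applying the spatial regularization; that step is omitted here.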
A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty
Zhang, Kai; Wang, Zengfei; Zhang, Liming; Yao, Jun; Yan, Xia
2015-01-01
In this paper, we investigate the application of a new method, the Finite Difference and Stochastic Gradient (Hybrid method), for history matching in reservoir models. History matching is an inverse problem in which reservoir models are calibrated to the dynamic behaviour of the reservoir; an objective function is formulated based on a Bayesian approach for optimization. The goal of history matching is to identify the minimum value of an objective function that expresses the misfit between the predicted and measured data of a reservoir. To address the optimization problem, we present a novel application using a combination of the stochastic gradient and finite difference methods for solving inverse problems. The optimization is constrained by a linear equation that contains the reservoir parameters. We reformulate the reservoir model’s parameters and dynamic data by operating the objective function, the approximate gradient of which can guarantee convergence. At each iteration step, we identify the relatively ‘important’ elements of the gradient by comparing the magnitudes of the components of the stochastic gradient; these elements are then replaced by values from the Finite Difference method, forming a new gradient with which we iterate. Through the application of the Hybrid method, we efficiently and accurately optimize the objective function. We present a number of numerical simulations in this paper that show that the method is accurate and computationally efficient. PMID:26252392
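The core iteration can be sketched as follows (a hypothetical minimal reading, not the authors' implementation: a simultaneous-perturbation stochastic gradient whose k largest-magnitude components are recomputed by central finite differences, then used in a plain descent step on a toy quadratic misfit):

```python
import numpy as np

def hybrid_gradient(f, x, k, rng, h=1e-6, delta=1e-4):
    """Sketch of the Hybrid idea: cheap stochastic gradient estimate,
    with the k 'important' components refined by finite differences."""
    n = x.size
    d = rng.choice([-1.0, 1.0], size=n)                  # random +/-1 perturbation
    g = (f(x + delta * d) - f(x - delta * d)) / (2 * delta) * d  # SPSA-style estimate
    for i in np.argsort(np.abs(g))[-k:]:                 # largest-magnitude components
        e = np.zeros(n); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)           # central finite difference
    return g

# Toy quadratic misfit standing in for the history-matching objective
f = lambda x: np.sum((x - np.arange(5.0)) ** 2)
rng = np.random.default_rng(2)
x = np.zeros(5)
for _ in range(300):
    x -= 0.1 * hybrid_gradient(f, x, k=2, rng=rng)
```

Only two forward simulations are needed per stochastic estimate, plus two per refined component, which is the cost argument behind mixing the two gradient types.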
Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications
Arbanas, Goran; Williams, Mark L; Leal, Luiz C; Dunn, Michael E; Khuwaileh, Bassam A.; Wang, C; Abdel-Khalik, Hany
2015-01-01
The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX system [1]. The IS/UQ method aims to quantify and prioritize the cross section measurements along with uncertainties needed to yield a given nuclear application(s) target response uncertainty, at minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of the present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore we have incorporated integral benchmark experiments (IBEs) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method could be applied to systematic and statistical uncertainties in a self-consistent way. We show how the IS/UQ method could be used to optimize uncertainties of IBEs and differential cross section data simultaneously.
Earthquake source tensor inversion with the gCAP method and 3D Green's functions
NASA Astrophysics Data System (ADS)
Zheng, J.; Ben-Zion, Y.; Zhu, L.; Ross, Z.
2013-12-01
We develop and apply a method to invert earthquake seismograms for source properties using a general tensor representation and 3D Green's functions. The method employs (i) a general representation of earthquake potency/moment tensors with double couple (DC), compensated linear vector dipole (CLVD), and isotropic (ISO) components, and (ii) a corresponding generalized CAP (gCap) scheme where the continuous wave trains are broken into Pnl and surface waves (Zhu & Ben-Zion, 2013). For comparison, we also use the waveform inversion method of Zheng & Chen (2012) and Ammon et al. (1998). Sets of 3D Green's functions are calculated on a 1 km³ grid using the 3-D community velocity model CVM-4 (Kohler et al. 2003). A bootstrap technique is adopted to establish robustness of the inversion results using the gCap method (Ross & Ben-Zion, 2013). Synthetic tests with 1-D and 3-D waveform calculations show that the source tensor inversion procedure is reasonably reliable and robust. As an initial application, the method is used to investigate source properties of the March 11, 2013, Mw=4.7 earthquake on the San Jacinto fault using recordings of ~45 stations up to ~0.2 Hz. Both the best fitting and most probable solutions include an ISO component of ~1% and a CLVD component of ~0%. The obtained ISO component, while small, is found to be a non-negligible positive value that can have significant implications for the physics of the failure process. Work on using higher frequency data for this and other earthquakes is in progress.
A hybrid differential evolution/Levenberg-Marquardt method for solving inverse transport problems
Bledsoe, Keith C; Favorite, Jeffrey A
2010-01-01
Recently, the Differential Evolution (DE) optimization method was applied to solve inverse transport problems in finite cylindrical geometries and was shown to be far superior to the Levenberg-Marquardt optimization method at finding a global optimum for problems with several unknowns. However, while extremely adept at finding a global optimum solution, the DE method often requires a large number (hundreds or thousands) of transport calculations, making it much slower than the Levenberg-Marquardt method. In this paper, a hybridization of the Differential Evolution and Levenberg-Marquardt approaches is presented. This hybrid method takes advantage of the robust search capability of the Differential Evolution method and the speed of the Levenberg-Marquardt technique.
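A minimal sketch of the two-stage hybrid described above (not the authors' code; the exponential-decay toy problem, population size, and damping schedule are hypothetical): a short Differential Evolution search locates the global basin cheaply in function evaluations per generation, then a Levenberg-Marquardt polish converges rapidly from that starting point:

```python
import numpy as np

# Toy "inverse" fit with two unknowns: recover (a, b) from samples of a*exp(-b*t)
t = np.linspace(0.0, 5.0, 20)
data = 2.0 * np.exp(-0.7 * t)
resid = lambda p: p[0] * np.exp(-p[1] * t) - data
cost = lambda p: np.sum(resid(p) ** 2)

# Stage 1: Differential Evolution (DE/rand/1 with greedy selection, global search)
rng = np.random.default_rng(3)
lo, hi = np.array([0.0, 0.0]), np.array([10.0, 5.0])
pop = lo + (hi - lo) * rng.random((15, 2))
for _ in range(40):
    for i in range(len(pop)):
        r1, r2, r3 = pop[rng.choice(len(pop), 3, replace=False)]
        trial = np.clip(r1 + 0.8 * (r2 - r3), lo, hi)     # mutation
        mask = rng.random(2) < 0.9                        # crossover
        trial = np.where(mask, trial, pop[i])
        if cost(trial) < cost(pop[i]):                    # greedy selection
            pop[i] = trial
p = min(pop, key=cost)

# Stage 2: Levenberg-Marquardt polish (analytic Jacobian, damped normal equations)
mu = 1e-3
for _ in range(30):
    J = np.array([[np.exp(-p[1] * ti), -p[0] * ti * np.exp(-p[1] * ti)] for ti in t])
    step = np.linalg.solve(J.T @ J + mu * np.eye(2), -J.T @ resid(p))
    if cost(p + step) < cost(p):
        p, mu = p + step, mu * 0.5                        # accept step, relax damping
    else:
        mu *= 10.0                                        # reject step, increase damping
```

In the transport setting each `cost` evaluation is a full transport calculation, which is why shifting work from the DE stage to the LM stage pays off.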
NASA Astrophysics Data System (ADS)
Rizzuti, G.; Gisolf, A.
2017-03-01
We study a reconstruction algorithm for the general inverse scattering problem based on the estimate of not only medium properties, as in more conventional approaches, but also wavefields propagating inside the computational domain. This extended set of unknowns is justified as a way to prevent local minimum stagnation, which is a common issue for standard methods. At each iteration of the algorithm, (i) the model parameters are obtained by solution of a convex problem, formulated from a special bilinear relationship of the data with respect to properties and wavefields (where the wavefield is kept fixed), and (ii) a better estimate of the wavefield is calculated, based on the previously reconstructed properties. The resulting scheme is computationally convenient since step (i) can greatly benefit from parallelization and the wavefield update (ii) requires modeling only in the known background model, which can be sped up considerably by factorization-based direct methods. The inversion method is successfully tested on synthetic elastic datasets.
Pin, F.G.; Belmans, P.F.R.; Culioli, J.C.; Carlson, D.D.; Tulloch, F.A.
1994-12-31
A new analytical method to resolve underspecified systems of algebraic equations is presented. The method is referred to as the Full Space Parameterization (FSP) method and utilizes easily-calculated projected solution vectors to generate the entire space of solutions of the underspecified system. Analytic parameterizations for both the space of solutions and the null space of the system reduce the determination of a task-requirement-based single solution to an m − n dimensional problem, where m − n is the degree of underspecification, or degree of redundancy, of the system. An analytical solution is presented to directly calculate the least-norm solution from the parameterized space and the results are compared to solutions of the standard pseudo-inverse algorithm which embodies the (least-norm) Moore-Penrose generalized inverse. Applications of the new solution method to a variety of systems and task requirements are discussed and sample results using four-link planar manipulators with one or two degrees of redundancy and a seven degree-of-freedom manipulator with one or four degrees of redundancy are presented to illustrate the efficiency of the new FSP method and algorithm.
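The baseline the FSP method is compared against can be sketched in a few lines (a generic illustration, not the FSP algorithm itself; the 2×4 "manipulator Jacobian" is hypothetical): the Moore-Penrose pseudo-inverse gives the least-norm particular solution, and an orthonormal null-space basis parameterizes the full solution family:

```python
import numpy as np

# Underspecified system J dq = v (e.g. redundant manipulator: 2 task constraints, 4 joints)
rng = np.random.default_rng(4)
J = rng.normal(size=(2, 4))
v = np.array([0.3, -0.1])

# Least-norm (Moore-Penrose) particular solution
dq_ln = np.linalg.pinv(J) @ v

# Parameterize the full solution space: dq = dq_ln + N z, with N spanning null(J)
# (an n - m = 2 dimensional family, matching the degree of redundancy)
_, _, Vt = np.linalg.svd(J)
N = Vt[2:].T                      # (4, 2) orthonormal null-space basis
z = rng.normal(size=2)            # any task-based choice of free parameters
dq = dq_ln + N @ z                # still satisfies J dq = v exactly
```

Because `dq_ln` lies in the row space and `N z` in the null space, the two are orthogonal, so `dq_ln` is guaranteed to have the smallest norm of the whole family.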
Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications
Arbanas, G.; Williams, M.L.; Leal, L.C.; Dunn, M.E.; Khuwaileh, B.A.; Wang, C.; Abdel-Khalik, H.
2015-01-15
The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX cross section processing system [M.E. Dunn and N.M. Greene, “AMPX-2000: A Cross-Section Processing System for Generating Nuclear Data for Criticality Safety Applications,” Trans. Am. Nucl. Soc. 86, 118–119 (2002)]. The IS/UQ method aims to quantify and prioritize the cross section measurements along with uncertainties needed to yield a given nuclear application(s) target response uncertainty, at minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of the present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore, we have incorporated integral benchmark experiments (IBEs) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method could be applied to systematic and statistical uncertainties in a self-consistent way and how it could be used to optimize uncertainties of IBEs and differential cross section data simultaneously. We itemize contributions to the cost of differential data measurements needed to define a realistic cost function.
NASA Astrophysics Data System (ADS)
Kim, A.; Dreger, D. S.; Taira, T.
2009-12-01
In this study, we developed a finite-source inversion method using the waveforms of small earthquakes as empirical Green's functions (eGf) to study the rupture process of micro-earthquakes on the San Andreas fault. This method is different from the ordinary eGf deconvolution method, which deconvolves the seismogram of the smaller, simpler-source event from the seismogram of the larger event, recovering the moment rate function of the larger, more complex-source event. In the eGf deconvolution method, spectral-domain deconvolution is commonly used, where the small earthquake spectrum is divided from the larger target event spectrum, and low spectral values are replaced by a water-level value to damp the effect of division by small numbers (e.g. Clayton and Wiggins, 1976). The water-level is chosen by trial and error. Such a rough regularization of the spectral ratio can result in the solution having unrealistic negative values and short-period oscillations. Also the amplitude and duration of the moment rate functions can be influenced by the adopted water-level value. In this study we propose to use the eGf waveform directly in the inversion, rather than the moment rate function obtained from spectral division. In this approach the eGf is treated as the Green’s function from each subfault, and, contrary to the deconvolution approach, can make use of multiple eGfs distributed over the fault plane. The method can therefore be applied to short source-receiver distance situations since the variation in radiation pattern due to source-receiver geometry is better accounted for. Numerical tests of the waveform eGf inversion method indicate that in the case where the large slip asperity is not located at the hypocenter, the eGf located near the asperity recovers the prescribed model better than that using an eGf co-located with the main shock hypocenter. Synthetic analyses also show that using multiple eGfs can better constrain the slip model than using only one eGf in the
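The water-level deconvolution this abstract contrasts itself against can be sketched as follows (a generic illustration, not the authors' code; the synthetic eGf, boxcar moment-rate function, and water-level fraction are hypothetical):

```python
import numpy as np

def water_level_deconv(target, egf, water=0.05):
    """Spectral-domain eGf deconvolution with a water level: divide the
    target spectrum by the eGf spectrum, after raising small eGf spectral
    amplitudes to a fraction of the maximum to stabilize the division."""
    n = len(target)
    T, E = np.fft.rfft(target, n), np.fft.rfft(egf, n)
    A = np.abs(E)
    A = np.maximum(A, water * A.max())          # raise troughs to the water level
    return np.fft.irfft(T * np.conj(E) / A**2, n)

# Synthetic test: target = eGf convolved with a boxcar moment-rate function
rng = np.random.default_rng(5)
egf = rng.normal(size=256) * np.exp(-np.arange(256) / 30.0)
mrf = np.zeros(256); mrf[10:20] = 1.0           # "true" moment-rate function
target = np.fft.irfft(np.fft.rfft(egf) * np.fft.rfft(mrf), 256)
mrf_est = water_level_deconv(target, egf, water=0.01)
```

Raising the water level suppresses oscillations but biases the amplitude and duration of the recovered pulse, which is exactly the trial-and-error sensitivity the waveform-domain inversion is designed to avoid.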
A neural network based error correction method for radio occultation electron density retrieval
NASA Astrophysics Data System (ADS)
Pham, Viet-Cuong; Juang, Jyh-Ching
2015-12-01
Abel inversion techniques have been widely employed to retrieve electron density profiles (EDPs) from radio occultation (RO) measurements, which are available by observing Global Navigation Satellite System (GNSS) satellites from low-earth-orbit (LEO) satellites. It is well known that the ordinary Abel inversion might introduce errors in the retrieval of EDPs when the spherical symmetry assumption is violated. The error, however, is case-dependent; therefore it is desirable to associate an error index or correction coefficient with respect to each retrieved EDP. Several error indices have been proposed but they only deal with electron density at the F2 peak and suffer from some drawbacks. In this paper we propose an artificial neural network (ANN) based error correction method for EDPs obtained by the ordinary Abel inversion. The ANN is first trained to learn the relationship between vertical total electron content (TEC) measurements and retrieval errors at the F2 peak, 220 km and 110 km altitudes; correction coefficients are then estimated to correct the retrieved EDPs at these three altitudes. Experiments using the NeQuick2 model and real FORMOSAT-3/COSMIC RO geometry show that the proposed method outperforms existing ones. Real incoherent scatter radar (ISR) measurements at the Jicamarca Radio Observatory and the global TEC map provided by the International GNSS Service (IGS) are also used to validate the proposed method.
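The "ordinary Abel inversion" being corrected here is commonly discretized by onion peeling, which can be sketched as follows (a generic illustration under the spherical-symmetry assumption; the shell grid and Gaussian layer are hypothetical, not COSMIC data):

```python
import numpy as np

# Onion-peeling discretization of the Abel inversion: the electron density is
# assumed constant within spherical shells, so each TEC observation along a ray
# is a linear combination of shell densities weighted by geometric path lengths.
r = np.linspace(6371.0 + 100.0, 6371.0 + 800.0, 70)            # shell boundaries (km)
n_true = 1e5 * np.exp(-0.5 * ((r[:-1] - 6671.0) / 60.0) ** 2)  # Gaussian stand-in layer

L = np.zeros((len(r) - 1, len(r) - 1))
for i in range(len(r) - 1):           # ray with tangent radius r[i]
    for j in range(i, len(r) - 1):    # shells at and above the tangent point
        L[i, j] = 2.0 * (np.sqrt(r[j + 1]**2 - r[i]**2)
                         - np.sqrt(max(r[j]**2 - r[i]**2, 0.0)))

tec = L @ n_true                      # simulated occultation TEC observations
n_est = np.linalg.solve(L, tec)       # triangular system: peel from the top down
```

When the ionosphere has horizontal gradients, `tec` no longer equals `L @ n_true` for any single radial profile, and `n_est` acquires the case-dependent errors the ANN correction targets.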
Image correction method for the colour contrast effect using inverse processes of the brain.
Murakoshi, Kazushi; Miura, Mai
2010-09-01
In the colour contrast effect, the impression of a colour changes according to the situation; cases occur in which the colour appearance is misunderstood. We propose an image signal processing method for preventing such misperception of colour. Many conventional image-improving methods emphasize the contrast of images just as the brain does. However, their processing does not cancel the colour contrast effect; we still misunderstand the colour. The objective of this study is to enable perception of the original colour. We therefore propose an image correction method using inverse processes of the brain in order to cancel the brain's processing that produces the colour contrast effect. We verified whether the proposed method corrected the colour contrast effect by conducting a psychological experiment. The results show that the method succeeds in canceling the colour contrast effect.
The merging cluster of galaxies Abell 3376: an optical view
NASA Astrophysics Data System (ADS)
Durret, F.; Perrot, C.; Lima Neto, G. B.; Adami, C.; Bertin, E.; Bagchi, J.
2013-12-01
Context. The cluster Abell 3376 is a merging cluster of galaxies at redshift z = 0.046. It is famous mostly for its giant radio arcs, and shows an elongated and highly substructured X-ray emission, but has not been analysed in detail at optical wavelengths. Aims: To improve our understanding of the effects of the major cluster merger on the galaxy properties, we analyse the galaxy luminosity function (GLF) in the B band in several regions as well as the dynamical properties of the substructures. Methods: We have obtained wide field images of Abell 3376 in the B band and derive the GLF applying a statistical subtraction of the background in three regions: a circle of 0.29 deg radius (1.5 Mpc) encompassing the whole cluster, and two circles centred on each of the two brightest galaxies (BCG2, northeast, coinciding with the peak of X-ray emission, and BCG1, southwest) of radii 0.15 deg (0.775 Mpc). We also compute the GLF in the zone around BCG1, which is covered by the WINGS survey in the B and V bands, by selecting cluster members in the red sequence in a (B - V) versus V diagram. Finally, we discuss the dynamical characteristics of the cluster implied by an analysis based on the Serna & Gerbal (SG) method. Results: The GLFs are not well fit by a single Schechter function, but satisfactory fits are obtained by summing a Gaussian and a Schechter function. The GLF computed by selecting galaxies in the red sequence in the region surrounding BCG1 can also be fit by a Gaussian plus a Schechter function. An excess of galaxies in the brightest bins is detected in the BCG1 and BCG2 regions. The dynamical analysis based on the SG method shows the existence of a main structure of 82 galaxies that can be subdivided into two main substructures of 25 and six galaxies. A smaller structure of six galaxies is also detected. Conclusions: The B band GLFs of Abell 3376 are clearly perturbed, as already found in other merging clusters. The dynamical properties are consistent with the
FOREWORD: 3rd International Workshop on New Computational Methods for Inverse Problems (NCMIP 2013)
NASA Astrophysics Data System (ADS)
Blanc-Féraud, Laure; Joubert, Pierre-Yves
2013-10-01
Conference logo This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 3rd International Workshop on New Computational Methods for Inverse Problems, NCMIP 2013 (http://www.farman.ens-cachan.fr/NCMIP_2013.html). This workshop took place at Ecole Normale Supérieure de Cachan, in Cachan, France, on 22 May 2013, at the initiative of Institut Farman. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of the ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 (http://www.farman.ens-cachan.fr/NCMIP_2012.html). The NCMIP Workshop focused on recent advances in the resolution of inverse problems. Indeed inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational
FOREWORD: 2nd International Workshop on New Computational Methods for Inverse Problems (NCMIP 2012)
NASA Astrophysics Data System (ADS)
Blanc-Féraud, Laure; Joubert, Pierre-Yves
2012-09-01
Conference logo This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 2nd International Workshop on New Computational Methods for Inverse Problems, (NCMIP 2012). This workshop took place at Ecole Normale Supérieure de Cachan, in Cachan, France, on 15 May 2012, at the initiative of Institut Farman. The first edition of NCMIP also took place in Cachan, France, within the scope of the ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/). The NCMIP Workshop focused on recent advances in the resolution of inverse problems. Indeed inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition
Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian Inverse Problems
NASA Astrophysics Data System (ADS)
Lan, Shiwei; Bui-Thanh, Tan; Christie, Mike; Girolami, Mark
2016-03-01
The Bayesian approach to Inverse Problems relies predominantly on Markov Chain Monte Carlo methods for posterior inference. The typical nonlinear concentration of posterior measure observed in many such Inverse Problems presents severe challenges to existing simulation based inference methods. Motivated by these challenges the exploitation of local geometric information in the form of covariant gradients, metric tensors, Levi-Civita connections, and local geodesic flows have been introduced to more effectively locally explore the configuration space of the posterior measure. However, obtaining such geometric quantities usually requires extensive computational effort which, despite their effectiveness, limits the applicability of these geometrically-based Monte Carlo methods. In this paper we explore one way to address this issue by the construction of an emulator of the model from which all geometric objects can be obtained in a much more computationally feasible manner. The main concept is to approximate the geometric quantities using a Gaussian Process emulator which is conditioned on a carefully chosen design set of configuration points, which also determines the quality of the emulator. To this end we propose the use of statistical experiment design methods to refine a potentially arbitrarily initialized design online without destroying the convergence of the resulting Markov chain to the desired invariant measure. The practical examples considered in this paper provide a demonstration of the significant improvement possible in terms of computational loading suggesting this is a promising avenue of further development.
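The emulation step rests on ordinary Gaussian Process regression conditioned on a design set. A minimal sketch (not the paper's emulator; the 1-D "expensive model", kernel hyperparameters, and design grid are hypothetical):

```python
import numpy as np

def gp_posterior(X, y, Xs, ell=0.5, sig=1.0, jitter=1e-6):
    """GP regression with an RBF kernel: posterior mean at test points Xs,
    conditioned on (nearly noise-free) observations y at design points X."""
    k = lambda A, B: sig**2 * np.exp(-0.5 * ((A[:, None] - B[None, :]) / ell) ** 2)
    K = k(X, X) + jitter * np.eye(len(X))       # jitter stabilizes the solve
    return k(Xs, X) @ np.linalg.solve(K, y)

# Emulate an "expensive" model from a design set, then query it cheaply
f = lambda x: np.sin(3.0 * x)                   # stand-in for a costly forward model
X = np.linspace(0.0, 2.0, 15)                   # design set of configuration points
Xs = np.linspace(0.1, 1.9, 50)                  # query points along a trajectory
mean = gp_posterior(X, f(X), Xs)
```

In the paper the same conditioning is applied to gradients and higher-order tensors of the model (which are themselves linear functionals of the GP), so every geometric quantity a manifold MCMC step needs comes from solves against the one design-set Gram matrix.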
Fu, Y B; Chui, C K; Teo, C L
2013-04-01
Biological soft tissue is highly inhomogeneous with scattered stress-strain curves. Assuming that the instantaneous strain at a specific stress varies according to a normal distribution, a nondeterministic approach is proposed to model the scattered stress-strain relationship of the tissue samples under compression. Material parameters of the liver tissue modeled using the Mooney-Rivlin hyperelastic constitutive equation were represented by a statistical function with normal distribution. Mean and standard deviation of the material parameters were determined using the inverse finite element method and the inverse mean-value first-order second-moment (IMVFOSM) method, respectively. This method was verified using computer simulation based on the direct Monte-Carlo (MC) method. The simulated cumulative distribution function (CDF) corresponded well with that of the experimental stress-strain data. The resultant nondeterministic material parameters were able to model the stress-strain curves from other separately conducted liver tissue compression tests. Stress-strain data from these new tests could be predicted using the nondeterministic material parameters.
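The direct Monte-Carlo verification step amounts to propagating the fitted parameter distribution through the constitutive model and comparing the resulting stress distribution with the experimental scatter. A toy sketch (hypothetical one-parameter model sigma = c·eps², with made-up mean and standard deviation, not the Mooney-Rivlin fit):

```python
import numpy as np

# Draw the material parameter from its fitted normal distribution and look at
# the spread of the predicted stress at one strain level.
rng = np.random.default_rng(7)
c_mean, c_std = 5.0, 0.5            # hypothetical mean/std from inverse FEM + IMVFOSM
eps = 0.3
stress = rng.normal(c_mean, c_std, 100_000) * eps**2   # MC sample of predicted stress
```

For this linear-in-parameter toy, the stress mean and standard deviation are simply `c_mean * eps**2` and `c_std * eps**2`; with a nonlinear model the sampled CDF is what gets compared against the experimental one.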
An optical view of the filament region of Abell 85
NASA Astrophysics Data System (ADS)
Boué, G.; Durret, F.; Adami, C.; Mamon, G. A.; Ilbert, O.; Cayatte, V.
2008-10-01
Aims: We present an optical investigation of the Abell 85 cluster filament (z = 0.055) previously interpreted in X-rays as groups falling onto the main cluster. We compare the distribution of galaxies with the X-ray filament, and investigate the galaxy luminosity functions in several bands and in several regions. We search for galaxies where star formation may have been triggered by interactions with intracluster gas or tidal pressure due to the cluster potential when entering the cluster. Methods: Our analysis is based on images covering the South tip of Abell 85 and its infalling filament, obtained with CFHT MegaPrime/MegaCam (1×1 deg2 field) in four bands (u^*, g', r', i') and ESO 2.2 m WFI (38×36 arcmin2 field) in a narrow band filter corresponding to the redshifted Hα line and in an RC broad band filter. The LFs are estimated by statistically subtracting a reference field. Background contamination is minimized by cutting out galaxies redder than the observed red sequence in the g'-i' versus i' colour-magnitude diagram. Results: The galaxy distribution shows a significantly flattened cluster, whose principal axis is slightly offset from the X-ray filament. The analysis of the broad band galaxy luminosity functions shows that the filament region is well populated. The filament is also independently detected as a gravitationally bound structure by the Serna & Gerbal (1996, A&A, 309, 65) hierarchical method. 101 galaxies are detected in the Hα filter, among which 23 have spectroscopic redshifts in the cluster, 2 have spectroscopic redshifts higher than the cluster and 58 have photometric redshifts that tend to indicate that they are background objects. One galaxy that is not detected in the Hα filter, probably because of the filter's low-wavelength cut-off, but that shows Hα emission in its SDSS spectrum in the cluster redshift range has been added to our sample. The 24 galaxies with spectroscopic redshifts in the cluster are mostly concentrated in the South part of the
Solving Large-Scale Inverse Magnetostatic Problems using the Adjoint Method
NASA Astrophysics Data System (ADS)
Bruckner, Florian; Abert, Claas; Wautischer, Gregor; Huber, Christian; Vogler, Christoph; Hinze, Michael; Suess, Dieter
2017-01-01
An efficient algorithm for the reconstruction of the magnetization state within magnetic components is presented. The occurring inverse magnetostatic problem is solved by means of an adjoint approach, based on the Fredkin-Koehler method for the solution of the forward problem. Due to the use of hybrid FEM-BEM coupling combined with matrix compression techniques the resulting algorithm is well suited for large-scale problems. Furthermore the reconstruction of the magnetization state within a permanent magnet as well as an optimal design application are demonstrated.
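The adjoint approach mentioned above, solving one extra (adjoint) linear system to obtain the gradient of the data misfit with respect to a model parameter, can be illustrated on a small dense toy problem. This is a minimal, hypothetical sketch with random matrices, not the paper's FEM-BEM operators:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
f = rng.standard_normal(n)          # right-hand side of the forward problem
d = rng.standard_normal(n)          # "measured" data
B0 = rng.standard_normal((n, n))
B1 = rng.standard_normal((n, n))

def A(p):
    """Forward operator, linear in the scalar parameter p (kept well conditioned)."""
    return B0 + p * B1 + 10.0 * np.eye(n)

def objective_and_gradient(p):
    """Adjoint-state gradient of J(p) = 0.5 ||u(p) - d||^2 with A(p) u = f."""
    u = np.linalg.solve(A(p), f)             # one forward solve
    lam = np.linalg.solve(A(p).T, u - d)     # one adjoint solve
    grad = -lam @ (B1 @ u)                   # dJ/dp = -lam^T (dA/dp) u
    return 0.5 * np.sum((u - d) ** 2), grad

# Verify the adjoint gradient against a finite difference.
J0, g0 = objective_and_gradient(1.0)
eps = 1e-6
J1, _ = objective_and_gradient(1.0 + eps)
print(g0, (J1 - J0) / eps)
```

The attraction of the adjoint method is that the cost of the gradient is one extra linear solve, independent of the number of parameters.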
Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction
NASA Technical Reports Server (NTRS)
Oliver, A. Brandon; Amar, Adam J.
2016-01-01
Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.
A method to calculate tunneling leakage currents in silicon inversion layers
NASA Astrophysics Data System (ADS)
Lujan, Guilherme S.; Sorée, Bart; Magnus, Wim; De Meyer, Kristin
2006-08-01
This paper proposes a quantum mechanical model for the calculation of tunneling leakage currents in a metal-oxide-semiconductor structure. The model incorporates both variational calculus and the transfer matrix method to compute the subband energies and the lifetimes of the inversion layer states. The use of variational calculus simplifies the subband energy calculation due to the analytical form of the wave functions, which offers an attractive perspective towards the calculation of the electron mobility in the channel. The model can be extended to high-k dielectrics with several layers. Good agreement between experimental data and simulation results is obtained for metal gate capacitors.
NASA Technical Reports Server (NTRS)
Ratcliff, Robert R.; Carlson, Leland A.
1989-01-01
Progress in the direct-inverse wing design method in curvilinear coordinates has been made. A spanwise oscillation problem and proposed remedies are discussed. Test cases are presented which reveal the approximate limits on the wing's aspect ratio and leading edge wing sweep angle for a successful design, and which show the significance of spanwise grid skewness, grid refinement, viscous interaction, the initial airfoil section and Mach number-pressure distribution compatibility on the final design. Furthermore, preliminary results are shown which indicate that it is feasible to successfully design a region of the wing which begins aft of the leading edge and terminates prior to the trailing edge.
An efficient numerical method for solving inverse conduction problem in a hollow cylinder
NASA Astrophysics Data System (ADS)
Mehta, R. C.
1984-06-01
A simple numerical scheme for solving the inverse conduction problem in a hollow cylinder is presented using transient temperature data for estimating the unknown surface conditions. A general digital program is discussed that can treat a variety of boundary conditions using a single set of equations. As an example, the method is applied to estimate the wall heat flux, surface temperature, convective heat transfer coefficient, and combustion gas temperature for a typical divergent rocket nozzle made of mild steel, and the results are compared with experimentally measured outer surface temperature data.
A comparative study of minimum norm inverse methods for MEG imaging
Leahy, R.M.; Mosher, J.C.; Phillips, J.W.
1996-07-01
The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
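A minimal sketch of a Tikhonov-regularized minimum norm estimate, using a random matrix as a stand-in lead field; all sizes and values are illustrative and not tied to any real MEG system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "lead field": 10 sensors, 50 source voxels (underdetermined, as in MEG).
n_sensors, n_sources = 10, 50
L = rng.standard_normal((n_sensors, n_sources))

# Simulated sparse source and noisy measurements.
x_true = np.zeros(n_sources)
x_true[[5, 20]] = [1.0, -0.5]
b = L @ x_true + 0.01 * rng.standard_normal(n_sensors)

def tikhonov_min_norm(L, b, lam):
    """Regularized minimum-norm estimate x = L^T (L L^T + lam I)^{-1} b."""
    G = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(G, b)

x_hat = tikhonov_min_norm(L, b, lam=1e-2)
# The estimate fits the data closely while keeping the solution norm controlled.
print(np.linalg.norm(L @ x_hat - b) / np.linalg.norm(b))
```

Increasing `lam` trades data fit for stability against noise, which is exactly the instability the regularization is meant to suppress.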
NASA Astrophysics Data System (ADS)
Ryo, Hyok-Su; Ryo, In-Gwang
2016-08-01
In this study, a generalized inverse-pole-figure (IPF) method is suggested for analyzing domain switching in polycrystalline ferroelectrics, including compositions at the morphotropic phase boundary (MPB). Using the generalized IPF method, saturated domain orientation textures of single-phase polycrystalline ferroelectrics with tetragonal and rhombohedral symmetry have been calculated analytically, and the results have been confirmed by comparison with those from preceding studies. In addition, saturated domain orientation textures near the MPBs of different multiple-phase polycrystalline ferroelectrics have also been calculated analytically. The results show that the generalized IPF method is an efficient way to analyze not only domain switching in single-phase polycrystalline ferroelectrics but also the MPB of multiple-phase polycrystalline ferroelectrics.
Preconditioned alternating direction method of multipliers for inverse problems with constraints
NASA Astrophysics Data System (ADS)
Jiao, Yuling; Jin, Qinian; Lu, Xiliang; Wang, Weijie
2017-02-01
We propose a preconditioned alternating direction method of multipliers (ADMM) to solve linear inverse problems in Hilbert spaces with constraints, where the feature of the sought solution under a linear transformation is captured by a possibly non-smooth convex function. During each iteration step, our method avoids solving large linear systems by choosing a suitable preconditioning operator. In case the data is given exactly, we prove the convergence of our preconditioned ADMM without assuming the existence of a Lagrange multiplier. In case the data is corrupted by noise, we propose a stopping rule using information on noise level and show that our preconditioned ADMM is a regularization method; we also propose a heuristic rule when the information on noise level is unavailable or unreliable and give its detailed analysis. Numerical examples are presented to test the performance of the proposed method.
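The flavor of ADMM for a linear inverse problem with a non-smooth convex regularizer can be conveyed with the standard l1-regularized least-squares splitting. This is a generic textbook sketch, not the paper's preconditioned Hilbert-space variant:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5||Ax - b||^2 + lam||z||_1 subject to x = z."""
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)                      # scaled dual variable
    Q = A.T @ A + rho * np.eye(n)        # constant; could be factored once
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(Q, Atb + rho * (z - u))   # x-update (smooth part)
        z = soft_threshold(x + u, lam / rho)          # z-update (prox of l1)
        u = u + x - z                                 # dual update
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[2] = 1.5
b = A @ x_true
x_hat = admm_lasso(A, b)
print(x_hat)
```

The preconditioning idea in the paper targets precisely the x-update: a suitable preconditioning operator avoids solving the large linear system exactly at each iteration.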
A penalty method for PDE-constrained optimization in inverse problems
NASA Astrophysics Data System (ADS)
van Leeuwen, T.; Herrmann, F. J.
2016-01-01
Many inverse and parameter estimation problems can be written as PDE-constrained optimization problems. The goal is to infer the parameters, typically coefficients of the PDE, from partial measurements of the solutions of the PDE for several right-hand sides. Such PDE-constrained problems can be solved by finding a stationary point of the Lagrangian, which entails simultaneously updating the parameters and the (adjoint) state variables. For large-scale problems, such an all-at-once approach is not feasible as it requires storing all the state variables. In this case one usually resorts to a reduced approach where the constraints are explicitly eliminated (at each iteration) by solving the PDEs. These two approaches, and variations thereof, are the main workhorses for solving PDE-constrained optimization problems arising from inverse problems. In this paper, we present an alternative method that aims to combine the advantages of both approaches. Our method is based on a quadratic penalty formulation of the constrained optimization problem. By eliminating the state variable, we develop an efficient algorithm that has roughly the same computational complexity as the conventional reduced approach while exploiting a larger search space. Numerical results show that this method indeed reduces some of the nonlinearity of the problem and is less sensitive to the initial iterate.
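The quadratic penalty idea, jointly penalizing data misfit and PDE residual and then eliminating the state in closed form, can be sketched on a one-dimensional diffusion problem with a single scalar coefficient. This is a hypothetical toy setup, not the applications treated in the paper:

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)
q = np.ones(n)                               # source term of the toy "PDE"

def stiffness(p):
    """Finite-difference matrix for -p u'' on a uniform grid, Dirichlet BCs."""
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return p * A / h ** 2

p_true = 2.0
d = np.linalg.solve(stiffness(p_true), q)    # fully observed, noiseless state

def penalty_objective(p, mu=1e2):
    """Eliminate the state: u(p) minimizes 0.5||u - d||^2 + 0.5 mu ||A(p)u - q||^2."""
    A = stiffness(p)
    u = np.linalg.solve(np.eye(n) + mu * A.T @ A, d + mu * A.T @ q)
    return 0.5 * np.sum((u - d) ** 2) + 0.5 * mu * np.sum((A @ u - q) ** 2)

# A coarse 1-D search over the scalar coefficient recovers p_true.
grid = np.linspace(0.5, 4.0, 71)
p_hat = grid[np.argmin([penalty_objective(p) for p in grid])]
print(p_hat)
```

Because the state is only required to satisfy the PDE approximately, the search space is larger than in the reduced approach, which is the mechanism the paper exploits to reduce nonlinearity.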
NASA Astrophysics Data System (ADS)
Xiao, Dongsheng; Chang, Ming; Su, Yong; Hu, Qijun; Yu, Bing
2016-09-01
This study explores the quasi-real-time inversion principle and precision estimation of the three-dimensional coordinates of the epicenter, trigger time, and magnitude of earthquakes, with the aim of improving traditional methods, which suffer from missing information or distortion in seismograph records. The epicenter, trigger time, and magnitude of the Lushan earthquake are inverted and analyzed based on high-frequency GNSS data. The inversion results achieve high precision and are consistent with the data published by the China Earthquake Administration. Moreover, the inversion method is shown to have good theoretical value and excellent application prospects.
A new method for the inversion of atmospheric parameters of A/Am stars
NASA Astrophysics Data System (ADS)
Gebran, M.; Farah, W.; Paletou, F.; Monier, R.; Watson, V.
2016-05-01
Context. We present an automated procedure that simultaneously derives the effective temperature Teff, surface gravity log g, metallicity [Fe/H], and equatorial projected rotational velocity vsini for "normal" A and Am stars. The procedure is based on the principal component analysis (PCA) inversion method, which we published in a recent paper. Aims: A sample of 322 high-resolution spectra of F0-B9 stars, retrieved from the Polarbase, SOPHIE, and ELODIE databases, were used to test this technique with real data. We selected the spectral region from 4400-5000 Å as it contains many metallic lines and the Balmer Hβ line. Methods: Using three data sets at resolving powers of R = 42 000, 65 000 and 76 000, about 6.6 × 10⁶ synthetic spectra were calculated to build a large learning database. The online power iteration algorithm was applied to these learning data sets to estimate the principal components (PCs). The projection of spectra onto the few PCs offered an efficient comparison metric in a low-dimensional space. The spectra of the well-known A0- and A1-type stars, Vega and Sirius A, were used as control spectra in the three databases. Spectra of other well-known A-type stars were also employed to characterize the accuracy of the inversion technique. Results: We inverted all of the observational spectra and derived the atmospheric parameters. After removal of a few outliers, the PCA-inversion method appeared to be very efficient in determining Teff, [Fe/H], and vsini for A/Am stars. The derived parameters agree very well with previous determinations. Using a statistical approach, deviations of around 150 K, 0.35 dex, 0.15 dex, and 2 km s⁻¹ were found for Teff, log g, [Fe/H], and vsini with respect to literature values for A-type stars. Conclusions: The PCA inversion proves to be a very fast, practical, and reliable tool for estimating stellar parameters of FGK and A stars and for deriving effective temperatures of M stars. Based on data retrieved from the
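The core of a PCA inversion, projecting spectra onto a few principal components of a synthetic learning database and matching in that low-dimensional space, can be sketched as follows. The one-parameter toy "spectra" here are invented for illustration and bear no relation to real A-star models:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "learning database": a Gaussian absorption line whose depth scales with
# a single parameter standing in for Teff (hypothetical, for illustration).
wave = np.linspace(4400.0, 5000.0, 300)
teff_grid = np.linspace(7000.0, 10000.0, 200)

def synth_spectrum(teff):
    depth = (10000.0 - teff) / 3000.0        # deeper line for cooler stars
    line = np.exp(-0.5 * ((wave - 4861.0) / 20.0) ** 2)
    return 1.0 - depth * line

S = np.array([synth_spectrum(t) for t in teff_grid])
mean = S.mean(axis=0)

# Principal components via SVD (the paper uses online power iteration instead).
_, _, Vt = np.linalg.svd(S - mean, full_matrices=False)
pcs = Vt[:3]                                  # keep a few PCs

def invert(obs):
    """Project onto the PCs and return the nearest-neighbour Teff."""
    coeffs = pcs @ (obs - mean)
    grid_coeffs = (S - mean) @ pcs.T
    i = np.argmin(np.sum((grid_coeffs - coeffs) ** 2, axis=1))
    return teff_grid[i]

obs = synth_spectrum(8500.0) + 0.002 * rng.standard_normal(wave.size)
print(invert(obs))
```

Comparing a handful of PC coefficients instead of hundreds of flux points is what makes the inversion of millions of synthetic spectra tractable.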
Han, Jijun; Yang, Deqiang; Sun, Houjun; Xin, Sherman Xuegang
2017-01-01
The inverse method is inherently suited to calculating the distribution of source current density associated with an irregularly structured electromagnetic target field. However, in its present form the inverse method cannot account for complex field-tissue interactions. A novel hybrid inverse/finite-difference time domain (FDTD) method is proposed that can account for these interactions in the inverse design of the source current density associated with an irregularly structured electromagnetic target field. A Huygens' equivalent surface is established as a bridge combining the inverse and FDTD methods. The distribution of the radiofrequency (RF) magnetic field on the Huygens' equivalent surface is obtained using the FDTD method, taking into account the complex field-tissue interactions within the human body model. The magnetic field obtained on the Huygens' equivalent surface is then regarded as the new target field. The current density on the designated source surface is derived using the inverse method. The homogeneity of the target magnetic field and the specific energy absorption rate are calculated to verify the proposed method.
Extending and Merging the Purple Crow Lidar Temperature Climatologies Using the Inversion Method
NASA Astrophysics Data System (ADS)
Jalali, Ali; Sica, R. J.; Argall, P. S.
2016-06-01
Rayleigh and Raman scatter measurements from The University of Western Ontario Purple Crow Lidar (PCL) have been used to develop temperature climatologies for the stratosphere, mesosphere, and thermosphere using data from 1994 to 2013 (Rayleigh system) and from 1999 to 2013 (vibrational Raman system). Temperature retrievals from Rayleigh-scattering lidar measurements have been performed using the methods of Hauchecorne and Chanin (1980; henceforth HC) and Khanna et al. (2012). Argall and Sica (2007) used the HC method to compute a climatology of the PCL measurements from 1994 to 2004 for 35 to 110 km, while Iserhienrhien et al. (2013) applied the same technique from 1999 to 2007 for 10 to 35 km. Khanna et al. (2012) used the inversion technique to retrieve atmospheric temperature profiles and found that it had advantages over the HC method. This paper presents an extension of the PCL climatologies created by Argall and Sica (2007) and Iserhienrhien et al. (2013). Both the inversion and HC methods were used to form the Rayleigh climatology, while only the latter was adopted for the Raman climatology. Two different approaches were then used to merge the climatologies from 10 to 110 km. Among four different functional forms, a hyperbolic trigonometric relation proves the best choice for merging temperature profiles between the Raman and low-level Rayleigh channels, with an estimated merging uncertainty of 0.9 K, while an error function produces the best result, with an uncertainty of 0.7 K, between the low-level and high-level Rayleigh channels. The results show that the temperature climatologies produced by the HC method when using a seed pressure are comparable to those produced by the inversion method. The Rayleigh extended climatology is slightly warmer below 80 km and slightly colder above 80 km. There are no significant differences in temperature between the extended and the previous Raman channel climatologies. Throughout
Probing single biomolecules in solution using the Anti-Brownian ELectrokinetic (ABEL) trap
Wang, Quan; Goldsmith, Randall H.; Jiang, Yan; Bockenhauer, Samuel D.; Moerner, W.E.
2012-01-01
Single-molecule fluorescence measurements allow researchers to study asynchronous dynamics and expose molecule-to-molecule structural and behavioral diversity, which contributes to the understanding of biological macromolecules. To provide measurements that are most consistent with the native environment of biomolecules, researchers would like to conduct these measurements in the solution phase if possible. However, diffusion typically limits the observation time to approximately one millisecond in many solution-phase single-molecule assays. Although surface immobilization is widely used to address this problem, this process can perturb the system being studied and contribute to the observed heterogeneity. Combining the technical capabilities of high-sensitivity single-molecule fluorescence microscopy, real-time feedback control and electrokinetic flow in a microfluidic chamber, we have developed a device called the Anti-Brownian ELectrokinetic (ABEL) trap to significantly prolong the observation time of single biomolecules in solution. We have applied the ABEL trap method to explore the photodynamics and enzymatic properties of a variety of biomolecules in aqueous solution and present four examples: the photosynthetic antenna allophycocyanin, the chaperonin enzyme TRiC, a G protein-coupled receptor protein, and the blue nitrite reductase redox enzyme. These examples illustrate the breadth and depth of information which we can extract in studies of single biomolecules with the ABEL trap. When confined in the ABEL trap, the photosynthetic antenna protein allophycocyanin exhibits rich dynamics both in its emission brightness and its excited state lifetime. As each molecule discontinuously converts from one emission/lifetime level to another in a primarily correlated way, it undergoes a series of state changes. We studied the ATP binding stoichiometry of the multi-subunit chaperonin enzyme TRiC in the ABEL trap by counting the number of hydrolyzed Cy3-ATP
The merging cluster Abell 1758: an optical and dynamical view
NASA Astrophysics Data System (ADS)
Monteiro-Oliveira, Rogerio; Serra Cypriano, Eduardo; Machado, Rubens; Lima Neto, Gastao B.
2015-08-01
The galaxy cluster Abell 1758-North (z=0.28) is a binary system composed of the sub-structures NW and NE. It is thought to be a post-merger cluster owing to the observed detachment between the NE BCG and the corresponding X-ray emitting hot gas clump, in a scenario very close to that of the famous Bullet Cluster. On the other hand, the projected position of the NW BCG coincides with the local hot gas peak. This system has been targeted previously by several studies, using multiple wavelengths and techniques, but there is still no clear picture of the scenario that could have caused this unusual configuration. To help solve this complex puzzle we added some pieces: first, we used deep B, RC and z' Subaru images to perform both weak-lensing shear and magnification analyses of A1758 (including the South component, which is not interacting with A1758-North), modeling each sub-clump as an NFW profile in order to constrain the masses and centre positions through MCMC methods; second, we performed a dynamical analysis using radial velocities available in the literature (143) plus new Gemini-GMOS/N measurements (68 new redshifts). From weak lensing we found that the independent shear and magnification mass determinations are in excellent agreement with each other, and by combining both we could reduce the mass error bars by ~30% compared to shear alone. Combining these two weak-lensing probes, we found that the positions of both northern BCGs are consistent with the mass centres within 2σ, and that the NE hot gas peak is offset from the respective mass peak (M200=5.5 × 10¹⁴ M⊙) with very high significance. The most massive structure is NW (M200=7.95 × 10¹⁴ M⊙), where we observed no detachment between gas, DM and BCG. We calculated a low line-of-sight velocity difference (<300 km/s) between A1758 NW and NE. We combined it with the projected velocity of 1600 km/s estimated by a previous X-ray analysis (David & Kempner 2004) and obtained a small angle between
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the amount of calculation and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
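The surrogate idea behind this class of methods, replacing the expensive model with a cheap polynomial response surface before Monte Carlo sampling, can be sketched as follows. The one-parameter toy model is hypothetical, not the GARTEUR structure:

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive_model(k):
    """Stand-in for a costly FE solve: natural frequency of a toy oscillator."""
    return np.sqrt(k) / (2.0 * np.pi)

# Build a polynomial response surface from a handful of model evaluations
# (analogue of the fourth-order RSM in the abstract).
k_samples = np.linspace(0.5, 2.0, 9)
f_samples = expensive_model(k_samples)
coeffs = np.polyfit(k_samples, f_samples, 4)

# Monte Carlo through the cheap surrogate instead of the full model.
k_mc = rng.normal(1.2, 0.05, 10000)
f_mc = np.polyval(coeffs, k_mc)
print(f_mc.mean(), f_mc.std())
```

Only nine "expensive" evaluations are needed; the ten thousand Monte Carlo samples cost essentially nothing, which is what makes iterative stochastic updating affordable.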
Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace
Zhang, Cheng; Lai, Chun-Liang; Pettitt, B. Montgomery
2016-01-01
The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal errors. Given multiple simulations, WHAM uses the distribution overlaps to obtain the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid and an aqueous protein solution. PMID:27453632
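A generic version of direct inversion in the iterative subspace (Pulay DIIS) applied to a fixed-point iteration can be sketched as below; the contraction mapping is invented for illustration and is unrelated to the actual WHAM equations:

```python
import numpy as np

def diis_fixed_point(g, x0, m=3, tol=1e-10, max_iter=200):
    """Accelerate x_{k+1} = g(x_k) by direct inversion in the iterative subspace."""
    xs, errs = [], []
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        gx = g(x)
        xs.append(gx)
        errs.append(gx - x)                  # residual vectors span the subspace
        if np.linalg.norm(errs[-1]) < tol:
            return gx, k
        xs, errs = xs[-m:], errs[-m:]
        n = len(errs)
        # Pulay's linear system: minimize the norm of the extrapolated residual
        # subject to the mixing weights summing to one.
        B = np.zeros((n + 1, n + 1))
        for i in range(n):
            for j in range(n):
                B[i, j] = errs[i] @ errs[j]
        B[n, :n] = 1.0
        B[:n, n] = 1.0
        rhs = np.zeros(n + 1)
        rhs[n] = 1.0
        c = np.linalg.solve(B, rhs)[:n]
        x = sum(ci * xi for ci, xi in zip(c, xs))
    return x, max_iter

# Example: a contraction mapping in R^4 (invented for illustration).
rng = np.random.default_rng(4)
M = rng.standard_normal((4, 4))
M *= 0.5 / np.linalg.norm(M, 2)              # Lipschitz constant <= 0.5
b = np.array([0.3, -0.1, 0.2, 0.05])
g = lambda x: np.tanh(M @ x) + b
x_star, n_iter = diis_fixed_point(g, np.zeros(4))
print(n_iter)
```

The extrapolated iterate is a weighted combination of the last few iterates chosen to cancel the residual, which typically converges in far fewer steps than plain fixed-point iteration.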
Fast Inversion Method for Determination of Planetary Parameters from Transit Timing Variations
Nesvorny, David; Beauge, Cristian
2010-01-20
The transit timing variation (TTV) method relies on monitoring changes in timing of transits of known exoplanets. Non-transiting planets in the system can be inferred from TTVs by their gravitational interaction with the transiting planet. The TTV method is sensitive to low-mass planets that cannot be detected by other means. Here we describe a fast algorithm that can be used to determine the mass and orbit of the non-transiting planets from the TTV data. We apply our code, ttvim.f, to a wide variety of planetary systems to test the uniqueness of the TTV inversion problem and its dependence on the precision of TTV observations. We find that planetary parameters, including the mass and mutual orbital inclination of planets, can be determined from the TTV data sets that should become available in the near future. Unlike the radial velocity technique, the TTV method can therefore be used to characterize the inclination distribution of multi-planet systems.
The Sunyaev-Zeldovich Effect in Abell 370
NASA Technical Reports Server (NTRS)
Grego, Laura; Carlstrom, John E.; Joy, Marshall K.; Reese, Erik D.; Holder, Gilbert P.; Patel, Sandeep; Cooray, Asantha R.; Holzapfel, William L.
2000-01-01
We present interferometric measurements of the Sunyaev-Zeldovich (SZ) effect toward the galaxy cluster Abell 370. These measurements, which directly probe the pressure of the cluster's gas, show the gas distribution to be strongly aspherical, as do the X-ray and gravitational lensing observations. We calculate the cluster's gas mass fraction in two ways. We first compare the gas mass derived from the SZ measurements to the lensing-derived gravitational mass near the critical lensing radius. We also calculate the gas mass fraction from the SZ data by deprojecting the three-dimensional gas density distribution and deriving the total mass under the assumption that the gas is in hydrostatic equilibrium (HSE). We test the assumptions in the HSE method by comparing the total cluster mass implied by the two methods and find that they agree within the errors of the measurement. We discuss the possible systematic errors in the gas mass fraction measurement and the constraints it places on the matter density parameter, Ω_M.
Global inverse modeling of CH4 sources and sinks: an overview of methods
NASA Astrophysics Data System (ADS)
Houweling, Sander; Bergamaschi, Peter; Chevallier, Frederic; Heimann, Martin; Kaminski, Thomas; Krol, Maarten; Michalak, Anna M.; Patra, Prabir
2017-01-01
The aim of this paper is to present an overview of inverse modeling methods that have been developed over the years for estimating the global sources and sinks of CH4. It provides insight into how techniques and estimates have evolved over time and what the remaining shortcomings are. As such, it serves the didactic purpose of introducing newcomers to the field, but it also takes stock of developments so far and reflects on promising new directions. The main focus is on methodological aspects that are particularly relevant for CH4, such as its atmospheric oxidation, the use of methane isotopologues, and specific challenges in atmospheric transport modeling of CH4. The use of satellite retrievals receives special attention as it is an active field of methodological development, with special requirements on the sampling of the model and the treatment of data uncertainty. Regional scale flux estimation and attribution is still a grand challenge, which calls for new methods capable of combining information from multiple data streams of different measured parameters. A process model representation of sources and sinks in atmospheric transport inversion schemes allows the integrated use of such data. These new developments are needed not only to improve our understanding of the main processes driving the observed global trend but also to support international efforts to reduce greenhouse gas emissions.
NASA Astrophysics Data System (ADS)
Goncharsky, Alexander V.; Romanov, Sergey Y.
2017-02-01
We develop efficient iterative methods for solving inverse problems of wave tomography in models incorporating both diffraction effects and attenuation. In the inverse problem the aim is to reconstruct the velocity structure and the function that characterizes the distribution of attenuation properties in the object studied. We prove mathematically and rigorously the differentiability of the residual functional in normed spaces, and derive the corresponding formula for the Fréchet derivative. The computation of the Fréchet derivative includes solving both the direct problem with the Neumann boundary condition and the reversed-time conjugate problem. We develop efficient methods for numerical computations where the approximate solution is found using the detector measurements of the wave field and its normal derivative. The wave field derivative values at detector locations are found by solving the exterior boundary value problem with the Dirichlet boundary conditions. We illustrate the efficiency of this approach by applying it to model problems. The algorithms developed are highly parallelizable and designed to be run on supercomputers. Among the most promising medical applications of our results is the development of ultrasonic tomographs for differential diagnosis of breast cancer.
NASA Astrophysics Data System (ADS)
Braun, Douglas; Birch, A.; Rempel, M.; Duvall, T., Jr.
2011-05-01
Controversy exists in the interpretation and modeling of helioseismic signals in and around magnetic regions like sunspots. We show the results of applying local helioseismic inversions to travel-time shift measurements from realistic magnetoconvective sunspot simulations. We compare travel-time maps made from several simulations, using different measurement techniques (helioseismic holography and center-annulus time-distance helioseismology), and made on real sunspots observed with the HMI instrument onboard the Solar Dynamics Observatory. We find remarkable similarities between the travel-time perturbations measured 1) in simulations extending 8 and 16 Mm deep, 2) with either methodology (holography or time-distance), and 3) in the simulated and real sunspots. The application of RLS inversions, using Born-approximation kernels, to narrow-frequency-band travel-time shifts from the simulations demonstrates that standard methods fail to reliably reproduce the true wave-speed structure. These findings emphasize the need for new methods for inferring the subsurface structure of active regions. Artificial Dopplergrams from our simulations are available to the community at www.hao.ucar.edu under "Data" and "Sunspot Models." This work is supported by NASA under the SDO Science Center project (contract NNH09CE41C).
Inversion of heterogeneous parabolic-type equations using the pilot points method
NASA Astrophysics Data System (ADS)
Alcolea, Andrés; Carrera, Jesús; Medina, Agustín
2006-07-01
The inverse problem (also referred to as parameter estimation) consists of evaluating the medium properties ruling the behaviour of a given equation from direct measurements of those properties and of the dependent state variables. The problem becomes ill-posed when the properties vary spatially in an unknown manner, which is often the case when modelling natural processes. One possibility for fighting this ill-posedness consists of performing stochastic conditional simulations. That is, instead of seeking a single solution (conditional estimation), one obtains an ensemble of fields, all of which honour the small scale variability (high frequency fluctuations) and direct measurements. The high frequency component of the field differs from one simulation to another, while a large scale component is shared by all of them. Measurements of the dependent state variables are honoured by framing simulation as an inverse problem, where both model fit and parameter plausibility are maximized with respect to the coefficients of the basis functions (pilot point values). These coefficients (model parameters) parameterize the large scale variability patterns. The pilot points method, which is often used in hydrogeology, uses the kriging weights as basis functions. The performance of the method (in both its conditional estimation and conditional simulation variants) is tested on a synthetic example using a parabolic-type equation. Results show that including the plausibility term improves the identification of the spatial variability of the unknown field function and that a suitable weight assigned to the plausibility term leads to optimal results both for conditional estimation and for stochastic simulations.
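The pilot points idea can be sketched in one dimension. Below, Gaussian kernels stand in for the kriging weights, a fixed high-frequency component is shared by the realization, and the pilot point values are conditioned on point observations by least squares. Everything here is an illustrative assumption, not the paper's groundwater setup.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 100)          # model grid
xp = np.linspace(0.1, 0.9, 5)           # pilot point locations

# Basis functions: Gaussian kernels as a simple stand-in for kriging weights.
B = np.exp(-((x[:, None] - xp[None, :]) ** 2) / (2 * 0.15 ** 2))

# Fixed small-scale (high-frequency) component, shared by the whole ensemble.
high_freq = 0.05 * rng.standard_normal(100)

def field(pilot_values):
    """Large-scale part from pilot values plus the fixed high-frequency part."""
    return B @ pilot_values + high_freq

# Synthetic "true" field and noisy point observations of it.
true_pilots = np.array([1.0, 0.5, -0.2, 0.8, 1.5])
obs_idx = np.arange(0, 100, 10)
obs = field(true_pilots)[obs_idx] + 0.005 * rng.standard_normal(obs_idx.size)

# Conditioning: least-squares fit of the pilot values to the observations.
est, *_ = np.linalg.lstsq(B[obs_idx], obs - high_freq[obs_idx], rcond=None)
```

In the real method the objective also carries a plausibility (prior) term and the state variables enter through the flow equation rather than directly.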
A numerical method for the inverse problem of cell traction in 3D
NASA Astrophysics Data System (ADS)
Vitale, G.; Preziosi, L.; Ambrosi, D.
2012-09-01
Force traction microscopy is an inversion method that allows us to obtain the stress field applied by a living cell on the environment on the basis of a pointwise knowledge of the displacement produced by the cell itself. This classical biophysical problem, usually addressed in terms of Green’s functions, can alternatively be tackled in a variational framework. In such a case, the error functional is varied under suitable regularization in view of its minimization. This setting naturally suggests the introduction of a new equation, based on the adjoint operator of the elasticity problem. In this paper, we illustrate a numerical strategy of the inversion method that discretizes the partial differential equations associated with the optimal control problem by finite elements. A detailed discussion of the numerical approximation of a test problem (with known solution) that contains most of the mathematical difficulties of the real one allows a precise evaluation of the degree of confidence that one can achieve in the numerical results.
Cool Core Disruption in Abell 1763
NASA Astrophysics Data System (ADS)
Douglass, Edmund; Blanton, Elizabeth L.; Clarke, Tracy E.; Randall, Scott W.; Edwards, Louise O. V.; Sabry, Ziad
2017-01-01
We present the analysis of a 20 ksec Chandra archival observation of the massive galaxy cluster Abell 1763. A model-subtracted image highlighting excess cluster emission reveals a large spiral structure winding outward from the core to a radius of ~950 kpc. We measure the gas of the inner spiral to have significantly lower entropy than non-spiral regions at the same radius. This is consistent with the structure resulting from merger-induced motion of the cluster’s cool core, a phenomenon seen in many systems. Atypical of spiral-hosting clusters, an intact cool core is not detected. Its absence suggests the system has experienced significant disruption since the initial dynamical encounter that set the sloshing core in motion. Along the major axis of the elongated ICM distribution we detect thermal features consistent with the merger event most likely responsible for cool core disruption. The merger-induced transition towards non-cool core status will be discussed. The interaction between the powerful (P_1.4 ~ 10^26 W Hz^-1) cluster-center WAT radio source and its ICM environment will also be discussed.
Fast generation of weak lensing maps by the inverse-Gaussianization method
NASA Astrophysics Data System (ADS)
Yu, Yu; Zhang, Pengjie; Jing, Yipeng
2016-10-01
To take full advantage of the unprecedented power of upcoming weak lensing surveys, understanding the noise, such as cosmic variance and geometry/mask effects, is as important as understanding the signal itself. Accurately quantifying the noise requires a large number of statistically independent mocks for a variety of cosmologies. This is impractical for weak lensing simulations, which are costly due to the simultaneous requirements of large box size (to cover a significant fraction of the past light cone) and high resolution (to robustly probe the small scales where most of the lensing signal resides). Therefore, fast mock generation methods are desired and are under intensive investigation. We propose a new fast weak lensing map generation method, named the inverse-Gaussianization method, based on the finding that a lensing convergence field can be Gaussianized to excellent accuracy by a local transformation [Y. Yu, P. Zhang, W. Lin, W. Cui, and J. N. Fry, Phys. Rev. D 84, 023523 (2011)]. Given a simulation, it enables us to produce an effectively unlimited number of statistically independent lensing maps as fast as producing the simulation initial conditions. The proposed method is tested against simulations for each tomography bin centered at lens redshift z ~ 0.5, 1, and 2, with various statistics. We find that the lensing maps generated by our method have reasonably accurate power spectra, bispectra, and power spectrum covariance matrix. Therefore, it will be useful for weak lensing surveys to generate realistic mocks. As an example of application, we measure the probability distribution function of the lensing power spectrum from 16384 lensing maps produced by the inverse-Gaussianization method.
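The core transform can be sketched as rank-based Gaussianization followed by its inverse. The lognormal field below is only a stand-in for a simulated convergence map, and a real application would draw the new Gaussian realizations with the measured power spectrum rather than white noise.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# A lognormal "convergence-like" field standing in for a simulated lensing map.
kappa = rng.lognormal(mean=0.0, sigma=0.5, size=4096) - 1.0

# Gaussianization: replace each value by the Gaussian quantile of its rank.
n = kappa.size
ranks = np.argsort(np.argsort(kappa))
gauss = norm.ppf((ranks + 0.5) / n)

# The inverse local transform is the sorted lookup table gauss -> kappa.
kappa_sorted = np.sort(kappa)

def inverse_gaussianize(g):
    """Map a new Gaussian realization back to the convergence distribution."""
    r = np.argsort(np.argsort(g))
    return kappa_sorted[r]

# A fresh Gaussian realization (in practice drawn with the measured power
# spectrum) yields a new mock map with the same one-point distribution.
mock = inverse_gaussianize(rng.standard_normal(n))
```

By construction every mock has exactly the original one-point distribution; the spatial statistics come from the Gaussian realizations fed in.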
Review on applications of 3D inverse design method for pump
NASA Astrophysics Data System (ADS)
Yin, Junlian; Wang, Dezhong
2014-05-01
The 3D inverse design method, whose methodology is far superior to the conventional design method based on geometrical description, is gradually being applied in pump blade design. However, no complete description of the method has been outlined. Also, there are no general rules available to set the two important input parameters: blade loading distribution and stacking condition. In this sense, the basic theory and the mechanism by which the design method can suppress the formation of secondary flow are summarized. In addition, several typical pump design cases with different specific speeds, ranging from centrifugal pumps to axial pumps, are surveyed. The results indicate that, for centrifugal pumps and mixed-flow pumps or turbines, the ratio of blade loading on the hub to that on the shroud is greater than unity in the fore part of the blade, whereas in the aft part the ratio is decreased to satisfy the same wrap angle for hub and shroud. The choice of blade loading type depends on the balance between efficiency and cavitation: if cavitation is weighted more heavily, the better choice is aft-loaded; otherwise, fore-loaded or mid-loaded is preferable to improve efficiency. The stacking condition, which is an auxiliary means of suppressing the secondary flow, can have a great effect on the jet-wake outflow and the operating range of the pump. Ultimately, how to link the design method to modern optimization techniques is illustrated. With the know-how design methodology and a systematic optimization approach, the application of optimization design is promising for engineering. This paper summarizes the 3D inverse design method systematically.
NASA Astrophysics Data System (ADS)
Alkharji, Mohammed N.
Most fracture characterization methods provide a general description of the fracture parameters as part of the reservoir parameters; the fracture interaction and geometry within the reservoir are given less attention. T-Matrix and Linear Slip effective medium fracture models are implemented to invert the elastic tensor for the parameters and geometries of the fractures within the reservoir. The fracture inverse problem involves an ill-posed, overdetermined, underconstrained, rank-deficient system of equations. Least-squares inverse methods are used to solve the problem. A good starting initial model for the parameters is a key factor in the reliability of the inversion. Most methods assume that the starting parameters are close to the solution to avoid inaccurate local minimum solutions. Prior knowledge of the fracture parameters and their geometry is not available. We develop a hybrid, enumerative and Gauss-Newton, method that estimates the fracture parameters and geometry from the elastic tensor with no prior knowledge of the initial parameter values. The fracture parameters are separated into two groups. The first group contains the fracture parameters with no prior information, and the second group contains the parameters with known prior information. Different models are generated from the first group parameters by sampling the solution space over a predefined range of possible solutions for each parameter. Each model generated by the first group is fixed and used as a starting model to invert for the second group of parameters using the Gauss-Newton method. The least-squares residual between the observed elastic tensor and the estimated elastic tensor is calculated for each model. The model parameters that yield the smallest least-squares residual correspond to the correct fracture reservoir parameters and geometry. Two synthetic examples of fractured reservoirs with oil and gas saturations were inverted with no prior information about the fracture properties. The
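A toy version of the hybrid strategy, under the assumption of a simple exponential forward model in place of the elastic tensor: the parameter with no prior information is enumerated over a grid of the solution space, a local least-squares (Gauss-Newton-type) fit is run for the remaining parameters at each grid value, and the lowest residual wins.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
t = np.linspace(0, 2, 50)

def forward(a, b, c):
    """Hypothetical forward model standing in for the effective-medium tensor."""
    return a * np.exp(-b * t) + c

# Synthetic "observed" data.
obs = forward(2.0, 1.5, 0.3) + 0.01 * rng.standard_normal(t.size)

best = None
# Group 1 (no prior information): enumerate b over its predefined range.
for b in np.linspace(0.1, 3.0, 30):
    # Group 2: local least-squares fit of (a, c) with b held fixed.
    sol = least_squares(lambda p, b=b: forward(p[0], b, p[1]) - obs, x0=[1.0, 0.0])
    if best is None or sol.cost < best[0]:
        best = (sol.cost, sol.x[0], b, sol.x[1])

cost, a_est, b_est, c_est = best
```

The enumeration removes the need for a good initial guess on the first group, at the price of one local inversion per grid node.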
A method of fast, sequential experimental design for linearized geophysical inverse problems
NASA Astrophysics Data System (ADS)
Coles, Darrell A.; Morgan, Frank Dale
2009-07-01
An algorithm for linear(ized) experimental design is developed for a determinant-based design objective function. This objective function is common in design theory and is used to design experiments that minimize the model entropy, a measure of posterior model uncertainty. Of primary significance in design problems is computational expediency. Several earlier papers have focused attention on posing design objective functions and opted to use global search methods for finding the critical points of these functions, but these algorithms are too slow to be practical. The proposed technique is distinguished primarily for its computational efficiency, which derives partly from a greedy optimization approach, termed sequential design. Computational efficiency is further enhanced through formulae for updating determinants and matrix inverses without need for direct calculation. The design approach is orders of magnitude faster than a genetic algorithm applied to the same design problem. However, greedy optimization often trades global optimality for increased computational speed; the ramifications of this tradeoff are discussed. The design methodology is demonstrated on a simple, single-borehole DC electrical resistivity problem. Designed surveys are compared with random and standard surveys, both with and without prior information. All surveys were compared with respect to a 'relative quality' measure, the post-inversion model per cent rms error. The issue of design for inherently ill-posed inverse problems is considered and an approach for circumventing such problems is proposed. The design algorithm is also applied in an adaptive manner, with excellent results suggesting that smart, compact experiments can be designed in real time.
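The update formulae alluded to can be sketched with the matrix determinant lemma and the Sherman-Morrison identity: appending one candidate observation row g to an experiment updates the determinant and inverse of the normal matrix N = GᵀG without a fresh factorization. The matrices below are illustrative, not the paper's resistivity kernels.

```python
import numpy as np

rng = np.random.default_rng(5)

# Current normal matrix for an experiment with 12 observations, 4 parameters.
G = rng.standard_normal((12, 4))
N = G.T @ G
N_inv = np.linalg.inv(N)
det_N = np.linalg.det(N)

# Candidate observation row g to append to the experiment.
g = rng.standard_normal(4)

# Matrix determinant lemma: det(N + g g^T) = det(N) * (1 + g^T N^{-1} g).
det_new = det_N * (1.0 + g @ N_inv @ g)

# Sherman-Morrison: (N + g g^T)^{-1} updated in O(n^2) operations.
Ng = N_inv @ g
N_inv_new = N_inv - np.outer(Ng, Ng) / (1.0 + g @ Ng)
```

In a greedy determinant-maximizing design, each sequential step then reduces to picking the candidate row g with the largest 1 + gᵀN⁻¹g.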
A geometric calibration method for inverse geometry computed tomography using P-matrices
NASA Astrophysics Data System (ADS)
Slagowski, Jordan M.; Dunkerley, David A. P.; Hatt, Charles R.; Speidel, Michael A.
2016-03-01
Accurate and artifact free reconstruction of tomographic images requires precise knowledge of the imaging system geometry. This work proposes a novel projection matrix (P-matrix) based calibration method to enable C-arm inverse geometry CT (IGCT). The method is evaluated for scanning-beam digital x-ray (SBDX), a C-arm mounted inverse geometry fluoroscopic technology. A helical configuration of fiducials is imaged at each gantry angle in a rotational acquisition. For each gantry angle, digital tomosynthesis is performed at multiple planes and a composite image analogous to a cone-beam projection is generated from the plane stack. The geometry of the C-arm, source array, and detector array is determined at each angle by constructing a parameterized 3D-to-2D projection matrix that minimizes the sum-of-squared deviations between measured and projected fiducial coordinates. Simulations were used to evaluate calibration performance with translations and rotations of the source and detector. In a geometry with 1 mm translation of the central ray relative to the axis-of-rotation and 1 degree yaw of the detector and source arrays, the maximum error in the recovered translational parameters was 0.4 mm and maximum error in the rotation parameter was 0.02 degrees. The relative root-mean-square error in a reconstruction of a numerical thorax phantom was 0.4% using the calibration method, versus 7.7% without calibration. Changes in source-detector-distance were the most challenging to estimate. Reconstruction of experimental SBDX data using the proposed method eliminated double contour artifacts present in a non-calibrated reconstruction. The proposed IGCT geometric calibration method reduces image artifacts when uncertainties exist in system geometry.
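The projection-matrix fit can be sketched with the standard direct linear transform (DLT): each 3D fiducial and its 2D projection contribute two homogeneous linear equations in the twelve entries of P, and the solution is the smallest right singular vector. The camera matrix and helix below are illustrative assumptions, not the SBDX geometry.

```python
import numpy as np

# Ground-truth projection matrix (hypothetical pinhole-style geometry).
P_true = np.array([[800.0, 0.0, 320.0, 0.0],
                   [0.0, 800.0, 240.0, 0.0],
                   [0.0, 0.0, 1.0, 6.0]])

# 3D fiducials arranged on a helix, in homogeneous coordinates.
t = np.linspace(0, 4 * np.pi, 16)
X = np.column_stack([np.cos(t), np.sin(t), t / 4.0, np.ones_like(t)])

proj = (P_true @ X.T).T
uv = proj[:, :2] / proj[:, 2:3]          # "measured" 2D fiducial coordinates

# DLT: two homogeneous equations per correspondence in the 12 entries of P.
rows = []
for Xi, (u, v) in zip(X, uv):
    rows.append(np.concatenate([Xi, np.zeros(4), -u * Xi]))
    rows.append(np.concatenate([np.zeros(4), Xi, -v * Xi]))
A = np.array(rows)
P_est = np.linalg.svd(A)[2][-1].reshape(3, 4)   # smallest right singular vector

# Reprojection check (P is recovered only up to scale).
proj_est = (P_est @ X.T).T
uv_est = proj_est[:, :2] / proj_est[:, 2:3]
```

The paper's method further parameterizes P by the physical C-arm, source-array, and detector-array degrees of freedom rather than fitting all twelve entries freely.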
Inverse methods for estimating primary input signals from time-averaged isotope profiles
NASA Astrophysics Data System (ADS)
Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.
2005-08-01
Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
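A minimal sketch of the linear system Am = d and its minimum-length solution, with a plain moving average standing in for the amelogenesis-plus-sampling averaging matrix (an assumption for illustration, not the paper's measured averaging kernel).

```python
import numpy as np

# Hypothetical averaging: each enamel sample integrates w consecutive weeks
# of the input signal (a crude stand-in for amelogenesis + sampling).
n, w = 40, 7
A = np.zeros((n - w + 1, n))
for i in range(n - w + 1):
    A[i, i:i + w] = 1.0 / w

# True input: an abrupt dietary switch (step in the isotope ratio).
m_true = np.where(np.arange(n) < 20, -8.0, -2.0)
d = A @ m_true                      # time-averaged intra-tooth profile

# Minimum-length (minimum-norm) solution of A m = d via the pseudoinverse.
m_est = np.linalg.pinv(A) @ d
```

Because the system is underdetermined, the pseudoinverse picks the solution of smallest norm among all inputs consistent with the averaged profile; the paper adds error analysis on top of this.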
Piovesan, Davide; Pierobon, Alberto; Dizio, Paul; Lackner, James R
2011-03-01
A common problem in the analyses of upper limb unfettered reaching movements is the estimation of joint torques using inverse dynamics. The inaccuracy in the estimation of joint torques can be caused by the inaccuracy in the acquisition of kinematic variables, body segment parameters (BSPs), and approximation in the biomechanical models. The effect of uncertainty in the estimation of body segment parameters can be especially important in the analysis of movements with high acceleration. A sensitivity analysis was performed to assess the relevance of different sources of inaccuracy in inverse dynamics analysis of a planar arm movement. Eight regression models and one water immersion method for the estimation of BSPs were used to quantify the influence of inertial models on the calculation of joint torques during numerical analysis of unfettered forward arm reaching movements. Thirteen subjects performed 72 forward planar reaches between two targets located on the horizontal plane and aligned with the median plane. Using a planar, double link model for the arm with a floating shoulder, we calculated the normalized joint torque peak and a normalized root mean square (rms) of torque at the shoulder and elbow joints. Statistical analyses quantified the influence of different BSP models on the kinetic variable variance for given uncertainty on the estimation of joint kinematics and biomechanical modeling errors. Our analysis revealed that the choice of BSP estimation method had a particular influence on the normalized rms of joint torques. Moreover, the normalization of kinetic variables to BSPs for a comparison among subjects showed that the interaction between the BSP estimation method and the subject specific somatotype and movement kinematics was a significant source of variance in the kinetic variables. The normalized joint torque peak and the normalized root mean square of joint torque represented valuable parameters to compare the effect of BSP estimation methods
A PC-based inverse design method for radial and mixed flow turbomachinery
NASA Technical Reports Server (NTRS)
Skoe, Ivar Helge
1991-01-01
An Inverse Design Method suitable for radial and mixed flow turbomachinery is presented. The codes are based on the streamline curvature concept and therefore run on current personal computers of the 286/287 class. In addition to the imposed aerodynamic constraints, mechanical constraints are imposed during the design process to ensure that the resulting geometry satisfies production considerations and that structural considerations are taken into account. Through the use of Bezier curves in the geometric modeling, the same subroutine is used to prepare input for both the aerodynamic and structural files, since it is important to ensure that the geometric data are identical for both structural analysis and production. To illustrate the method, a mixed flow turbine design is shown.
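Bezier curves of the kind used for the geometric modeling can be evaluated with de Casteljau's algorithm (repeated linear interpolation of the control polygon); the control points below are an invented illustrative contour, not a real blade section.

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# A hypothetical meridional contour defined by four control points.
ctrl = [(0.0, 0.0), (0.3, 0.8), (0.7, 0.9), (1.0, 0.5)]
curve = np.array([de_casteljau(ctrl, t) for t in np.linspace(0, 1, 21)])
```

The same control-point description can feed both the aerodynamic and the structural files, which is the consistency point the abstract makes.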
NASA Astrophysics Data System (ADS)
Lehikoinen, A.; Huttunen, J. M.; Finsterle, S.; Kowalsky, M. B.; Kaipio, J. P.
2007-05-01
We extend the previously presented methodology for imaging the evolution of electrically conductive fluids in porous media. In that method, the nonstationary inversion problem was solved using Bayesian filtering. The method was demonstrated using a synthetically generated test case where the monitored target is a time-varying water plume in an unsaturated porous medium, and the imaging modality was electrical resistance tomography (ERT). The inverse problem was formulated as a state estimation problem, which is based on observation and evolution models. As the observation model for ERT, the complete electrode model was used, and for time-varying unsaturated flow, the Richards equation was used as the evolution model. Although the "true" evolution of water flow was simulated using a heterogeneous permeability field, in the inversion step the permeability was assumed to be homogeneous. This assumption leads to approximation errors, which have been taken into account by constructing a statistical model between the different realizations of the accurate and the approximate fluid flow models. This statistical model was constructed using an ensemble of samples from the evolution model, in such a way that the construction can be carried out prior to taking observations. However, the statistics of the approximation errors actually depend on the observations (through the state). In this work we extend the previously presented method so that the statistics of the approximation error are adjusted based on the observations. The basic idea of the extension is to gather those samples from the ensemble which at the current time best represent the observed state. We then determine the statistics of the approximation error based on these collated samples. The extension of the methodology provides improved estimates of water saturation distributions compared to the previously presented approaches. The proposed methodology may be extended for imaging and estimating parameters of dynamical processes
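The offline construction of approximation-error statistics can be sketched as follows, with trivial stand-ins for the heterogeneous ("accurate") and homogeneous ("approximate") models; only the ensemble bookkeeping, not the flow physics, is meant to be representative.

```python
import numpy as np

rng = np.random.default_rng(7)
n_ens, n_obs = 500, 8

def accurate_model(k_field):
    """Stand-in for the heterogeneous flow model: observations depend on
    the full permeability field (here just a nonlinear functional of it)."""
    return np.cumsum(np.log(k_field))[:n_obs]

def approximate_model(k_hom):
    """Stand-in for the homogeneous-permeability model."""
    return np.log(k_hom) * np.arange(1, n_obs + 1)

# Ensemble of heterogeneous realizations, built before any data are taken.
errors = []
for _ in range(n_ens):
    k_field = rng.lognormal(mean=0.0, sigma=0.3, size=n_obs)
    k_hom = np.exp(np.mean(np.log(k_field)))       # effective homogeneous value
    errors.append(accurate_model(k_field) - approximate_model(k_hom))
errors = np.array(errors)

# Approximation-error statistics used to augment the observation noise model.
err_mean = errors.mean(axis=0)
err_cov = np.cov(errors, rowvar=False)
```

The paper's extension then reweights or subselects the ensemble members that best match the currently observed state before computing these statistics.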
Joining direct and indirect inverse calibration methods to characterize karst, coastal aquifers
NASA Astrophysics Data System (ADS)
De Filippis, Giovanna; Foglia, Laura; Giudici, Mauro; Mehl, Steffen; Margiotta, Stefano; Negri, Sergio
2016-04-01
Parameter estimation is extremely relevant for accurate simulation of groundwater flow. Parameter values for models of large-scale catchments are usually derived from a limited set of field observations, which can rarely be obtained in a straightforward way from field tests or laboratory measurements on samples, due to a number of factors, including measurement errors and inadequate sampling density. Indeed, a wide gap exists between the local scale, at which most of the observations are taken, and the regional or basin scale, at which the planning and management decisions are usually made. For this reason, the use of geologic information and field data is generally made by zoning the parameter fields. However, pure zoning does not perform well in the case of fairly complex aquifers and this is particularly true for karst aquifers. In fact, the support of the hydraulic conductivity measured in the field is normally much smaller than the cell size of the numerical model, so it should be upscaled to a scale consistent with that of the numerical model discretization. Automatic inverse calibration is a valuable procedure to identify model parameter values by conditioning on observed, available data, limiting the subjective evaluations introduced with the trial-and-error technique. Many approaches have been proposed to solve the inverse problem. Generally speaking, inverse methods fall into two groups: direct and indirect methods. Direct methods allow determination of hydraulic conductivities from the groundwater flow equations which relate the conductivity and head fields. Indirect methods, instead, can handle any type of parameters, independently from the mathematical equations that govern the process, and condition parameter values and model construction on measurements of model output quantities, compared with the available observation data, through the minimization of an objective function. Both approaches have pros and cons, depending also on model complexity. For
The Dark Matter filament between Abell 222/223
NASA Astrophysics Data System (ADS)
Dietrich, Jörg P.; Werner, Norbert; Clowe, Douglas; Finoguenov, Alexis; Kitching, Tom; Miller, Lance; Simionescu, Aurora
2016-10-01
Weak lensing detections and measurements of filaments have been elusive for a long time. The reason is that the low density contrast of filaments generally pushes the weak lensing signal to unobservably low levels. Mapping the dark matter in filaments nevertheless requires exquisite data and unusual systems. SuprimeCam observations of the supercluster system Abell 222/223 provided the required combination of excellent-seeing images and a fortuitous alignment of the filament with the line-of-sight. This boosted the lensing signal to a detectable level and led to the first weak lensing mass measurement of a large-scale structure filament. The filament connecting Abell 222 and Abell 223 is now the only one traced by the galaxy distribution, dark matter, and X-ray emission from the hottest phase of the warm-hot intergalactic medium. The combination of these data allows us to put the first constraints on the hot gas fraction in filaments.
NASA Technical Reports Server (NTRS)
Cerracchio, Priscilla; Gherlone, Marco; Di Sciuva, Marco; Tessler, Alexander
2013-01-01
The marked increase in the use of composite and sandwich material systems in aerospace, civil, and marine structures leads to the need for integrated Structural Health Management systems. A key capability to enable such systems is the real-time reconstruction of structural deformations, stresses, and failure criteria that are inferred from in-situ, discrete-location strain measurements. This technology is commonly referred to as shape- and stress-sensing. Presented herein is a computationally efficient shape- and stress-sensing methodology that is ideally suited for applications to laminated composite and sandwich structures. The new approach employs the inverse Finite Element Method (iFEM) as a general framework and the Refined Zigzag Theory (RZT) as the underlying plate theory. A three-node inverse plate finite element is formulated. The element formulation enables robust and efficient modeling of plate structures instrumented with strain sensors that have arbitrary positions. The methodology leads to a set of linear algebraic equations that are solved efficiently for the unknown nodal displacements. These displacements are then used at the finite element level to compute full-field strains, stresses, and failure criteria that are in turn used to assess structural integrity. Numerical results for multilayered, highly heterogeneous laminates demonstrate the unique capability of this new formulation for shape- and stress-sensing.
An inverse method for estimation of the acoustic intensity in the focused ultrasound field
NASA Astrophysics Data System (ADS)
Yu, Ying; Shen, Guofeng; Chen, Yazhu
2017-03-01
Recently, a new method based on infrared (IR) imaging was introduced. Authors (A. Shaw et al. and M. R. Myers et al.) have established the relationship between the absorber surface temperature and the incident intensity while the absorber is irradiated by the transducer. Theoretically, a shorter irradiation time makes the estimation more consistent with the actual results. But due to the influence of noise and the performance constraints of the IR camera, it is hard to identify differences in temperature with short heating times. An inverse technique is developed to reconstruct the incident intensity distribution using the surface temperature with shorter irradiation times. The algorithm is validated using surface temperature data generated numerically from a three-layer model, which was developed to calculate the acoustic field in the absorber, the absorbed acoustic energy during the irradiation, and the consequent temperature elevation. To assess the effect of noisy data on the reconstructed intensity profile, different zero-mean noise levels were superimposed on the exact data in the simulations. Simulation results demonstrate that the inversion technique can provide fairly reliable intensity estimates with satisfactory accuracy.
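A hedged one-dimensional sketch of the inverse step: if the surface temperature rise is modeled as a linear blur of the incident intensity, a truncated-SVD inversion limits the noise amplification that plagues short heating times. The Gaussian kernel, beam profile, and noise level are illustrative assumptions, not the paper's three-layer model.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 80
x = np.linspace(-1, 1, n)

# Hypothetical linear model: temperature rise = thermal blur of the
# incident intensity profile (Gaussian kernel as a stand-in).
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.08 ** 2))
K /= K.sum(axis=1, keepdims=True)

intensity_true = np.exp(-(x ** 2) / (2 * 0.2 ** 2))         # focused beam profile
temp = K @ intensity_true + 0.002 * rng.standard_normal(n)  # noisy IR data

# Truncated-SVD inversion: discard small singular values that would
# amplify the measurement noise.
U, s, Vt = np.linalg.svd(K)
k = int(np.sum(s > 0.05))               # keep singular values above 0.05
coef = (U.T @ temp)[:k] / s[:k]
intensity_est = Vt[:k].T @ coef
```

The truncation threshold plays the role of the regularization the abstract alludes to: lower thresholds sharpen the estimate but let noise through.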
Characterization of a Method for Inverse Heat Conduction Using Real and Simulated Thermocouple Data
NASA Technical Reports Server (NTRS)
Pizzo, Michelle E.; Glass, David E.
2017-01-01
It is often impractical to instrument the external surface of high-speed vehicles due to the aerothermodynamic heating. Temperatures can instead be measured internal to the structure using embedded thermocouples, and direct and inverse methods can then be used to estimate temperature and heat flux on the external surface. Two thermocouples embedded at different depths are required to solve direct and inverse problems, and filtering schemes are used to reduce noise in the measured data. Accuracy in the estimated surface temperature and heat flux depends on several factors. These include the thermocouple location through the thickness of a material, the sensitivity of the surface solution to error in the specified location of the embedded thermocouples, and the sensitivity to error in the thermocouple data. The effect of these factors on solution accuracy is studied using the methodology discussed in the work of Pizzo et al. [1]. A numerical study is performed to determine whether there is an optimal depth at which to embed one thermocouple through the thickness of a material, assuming that a second thermocouple is installed on the back face. Solution accuracy is discussed for a range of embedded thermocouple depths. Moreover, the sensitivity of the surface solution to (a) the error in the specified location of the embedded thermocouple and (b) the error in the thermocouple data is quantified using numerical simulation, and the results are discussed.
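The direct problem can be sketched as explicit finite-difference conduction in a 1D slab with a prescribed surface heat flux, sampled at an embedded and a back-face thermocouple. The material properties and flux below are assumed round numbers for illustration, not values from the study.

```python
import numpy as np

# Hypothetical 1D slab: heated surface at x=0, insulated back face at x=L.
L, nx = 0.01, 51                 # 1 cm thick, 51 nodes
alpha = 1e-6                     # thermal diffusivity, m^2/s (assumed)
k = 1.0                          # conductivity, W/(m K) (assumed)
dx = L / (nx - 1)
dt = 0.4 * dx ** 2 / alpha       # explicit stability: dt <= 0.5 dx^2/alpha

q_surface = 5e3                  # applied surface heat flux, W/m^2
T = np.zeros(nx)                 # initial temperature rise, K

depth_idx = (10, nx - 1)         # embedded and back-face thermocouples
history = []
for _ in range(2000):
    Tn = T.copy()
    # Interior nodes: explicit central-difference update.
    T[1:-1] = Tn[1:-1] + alpha * dt / dx ** 2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    # Heated surface: ghost-node treatment of the flux boundary condition.
    T[0] = Tn[0] + alpha * dt / dx ** 2 * (2 * Tn[1] - 2 * Tn[0] + 2 * dx * q_surface / k)
    # Insulated back face.
    T[-1] = Tn[-1] + alpha * dt / dx ** 2 * (2 * Tn[-2] - 2 * Tn[-1])
    history.append((T[depth_idx[0]], T[depth_idx[1]]))
```

The inverse problem then works backwards: given the two recorded thermocouple histories, it estimates the surface temperature and q_surface.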
Zhang, Lin; Baladandayuthapani, Veerabhadran; Mallick, Bani K.; Manyam, Ganiraju C.; Thompson, Patricia A.; Bondy, Melissa L.; Do, Kim-Anh
2015-01-01
Summary The analysis of alterations that may occur in nature when segments of chromosomes are copied (known as copy number alterations) has been a focus of research to identify genetic markers of cancer. One high-throughput technique recently adopted is the use of molecular inversion probes (MIPs) to measure probe copy number changes. The resulting data consist of high-dimensional copy number profiles that can be used to ascertain probe-specific copy number alterations in correlative studies with patient outcomes to guide risk stratification and future treatment. We propose a novel Bayesian variable selection method, the hierarchical structured variable selection (HSVS) method, which accounts for the natural gene and probe-within-gene architecture to identify important genes and probes associated with clinically relevant outcomes. We propose the HSVS model for grouped variable selection, where simultaneous selection of both groups and within-group variables is of interest. The HSVS model utilizes a discrete mixture prior distribution for group selection and group-specific Bayesian lasso hierarchies for variable selection within groups. We provide methods for accounting for serial correlations within groups that incorporate Bayesian fused lasso methods for within-group selection. Through simulations we establish that our method results in lower model errors than other methods when a natural grouping structure exists. We apply our method to an MIP study of breast cancer and show that it identifies genes and probes that are significantly associated with clinically relevant subtypes of breast cancer. PMID:25705056
Moissenet, Florent; Chèze, Laurence; Dumas, Raphaël
2012-06-01
Inverse dynamics combined with a constrained static optimization analysis has often been proposed to solve the muscular redundancy problem. Typically, the optimization problem consists of a cost function to be minimized and some equality and inequality constraints to be fulfilled. Penalty-based and Lagrange multipliers methods are common approaches for handling the equality constraints. More recently, the pseudo-inverse method has been introduced in the field of biomechanics. The purpose of this paper is to evaluate the ability and efficiency of this new method to solve the muscular redundancy problem, by comparing the predicted musculo-tendon forces and the cost-effectiveness against common optimization methods. Since algorithm efficiency and the fulfillment of the equality constraints depend strongly on the optimization method, a two-phase procedure is proposed in order to identify and compare the complexity of the cost function, the number of iterations needed to find a solution, and the computational time of the penalty-based method, the Lagrange multipliers method and the pseudo-inverse method. Using a 2D knee musculo-skeletal model in an isometric context, the study of the cost function isovalue curves shows that the solution space is 2D with the penalty-based method, 3D with the Lagrange multipliers method and 1D with the pseudo-inverse method. The minimal cost function area (defined as the area corresponding to 5% over the minimal cost) obtained for the pseudo-inverse method is very limited and lies along the solution space line, whereas the minimal cost function areas obtained for the other methods are larger or more complex. Moreover, when using a 3D lower limb musculo-skeletal model during a gait cycle simulation, the pseudo-inverse method requires the lowest number of iterations, while the Lagrange multipliers and pseudo-inverse methods have almost the same computational time. The pseudo-inverse method, by providing a better suited cost function and an
Lippman, Sheri A.; Shade, Starley B.; Hubbard, Alan E.
2011-01-01
Background Intervention effects estimated from non-randomized intervention studies are plagued by biases, yet social or structural intervention studies are rarely randomized. There are underutilized statistical methods available to mitigate biases due to self-selection, missing data, and confounding in longitudinal, observational data, permitting estimation of causal effects. We demonstrate the use of Inverse Probability Weighting (IPW) to evaluate the effect of participating in a combined clinical and social STI/HIV prevention intervention on the reduction of incident chlamydia and gonorrhea infections among sex workers in Brazil. Methods We demonstrate the step-by-step use of IPW, including presentation of the theoretical background, data set-up, model selection for weighting, application of weights, estimation of effects using varied modeling procedures, and discussion of assumptions for the use of IPW. Results 420 sex workers contributed data on 840 incident chlamydia and gonorrhea infections. Participants were compared to non-participants following application of inverse probability weights to correct for differences in covariate patterns between exposed and unexposed participants and between those who remained in the intervention and those who were lost to follow-up. Estimators using four model selection procedures provided estimates of the intervention effect between an odds ratio (OR) of 0.43 (95% CI: 0.22-0.85) and 0.53 (95% CI: 0.26-1.1). Conclusions After correcting for selection bias, loss to follow-up, and confounding, our analysis suggests a protective effect of participating in the Encontros intervention. Evaluations of behavioral, social, and multi-level interventions to prevent STI can benefit from the introduction of weighting methods such as IPW. PMID:20375927
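The mechanics of IPW can be illustrated with a toy population (entirely invented numbers, not the study's data): one binary confounder Z drives both exposure A and outcome Y, so a naive group comparison is biased, while weighting each observation by 1/P(A = a | Z) recovers the true effect. Enumerating the population exactly makes the correction visible without simulation noise.

```python
# IPW toy example: confounder Z, exposure A, outcome Y = 1 + 3*A + 2*Z,
# so the true causal effect of A is exactly 3.  All numbers invented.

p_z = {0: 0.5, 1: 0.5}                 # P(Z)
p_a_given_z = {0: 0.2, 1: 0.8}         # P(A=1 | Z): confounded exposure

def outcome(a, z):
    return 1.0 + 3.0 * a + 2.0 * z

# Naive comparison E[Y|A=1] - E[Y|A=0] is biased because Z differs
# between the exposed and unexposed groups.
num1 = den1 = num0 = den0 = 0.0
for z in (0, 1):
    for a in (0, 1):
        w = p_z[z] * (p_a_given_z[z] if a else 1 - p_a_given_z[z])
        if a:
            num1 += w * outcome(a, z); den1 += w
        else:
            num0 += w * outcome(a, z); den0 += w
naive = num1 / den1 - num0 / den0      # biased: 4.2, not 3

# IPW: weight each (Z, A) cell by 1 / P(A=a | Z=z).  The exposure
# probability cancels, leaving the confounder-standardized means.
ipw1 = sum(p_z[z] * outcome(1, z) for z in (0, 1))
ipw0 = sum(p_z[z] * outcome(0, z) for z in (0, 1))
ipw_effect = ipw1 - ipw0               # recovers the true effect, 3
```

In practice the weights come from a fitted exposure model (e.g. logistic regression) rather than known probabilities, which is why the abstract's model selection step matters.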
A 1400-MHz survey of 1478 Abell clusters of galaxies
NASA Technical Reports Server (NTRS)
Owen, F. N.; White, R. A.; Hilldrup, K. C.; Hanisch, R. J.
1982-01-01
Observations of 1478 Abell clusters of galaxies with the NRAO 91-m telescope at 1400 MHz are reported. The measured beam shape was deconvolved from the measured source Gaussian fits in order to estimate the source size and position angle. All detected sources within 0.5 corrected Abell cluster radii are listed, including the cluster number, richness class, distance class, magnitude of the tenth brightest galaxy, redshift estimate, corrected cluster radius in arcmin, right ascension and error, declination and error, total flux density and error, and angular structure for each source.
NASA Astrophysics Data System (ADS)
Gao, Yingjie; Zhang, Jinhai; Yao, Zhenxing
2016-06-01
The symplectic integration method is popular in high-accuracy numerical simulations when discretizing temporal derivatives; however, it still suffers from time-dispersion error when the temporal interval is coarse, especially for long-term simulations and large-scale models. We apply the inverse time dispersion transform (ITDT) to the third-order symplectic integration method to reduce the time-dispersion error. First, we adopt the pseudospectral algorithm for the spatial discretization and the third-order symplectic integration method for the temporal discretization. Then, we apply the ITDT to eliminate time-dispersion error from the synthetic data. As a post-processing method, the ITDT can be easily cascaded into traditional numerical simulations. We implement the ITDT in one typical existing third-order symplectic scheme and compare its performance with that of the conventional second-order scheme and the rapid expansion method. Theoretical analyses and numerical experiments show that the ITDT can significantly reduce the time-dispersion error, especially for long travel times. The implementation of the ITDT requires some additional computation to correct the time-dispersion error, but it allows us to use the maximum temporal interval under the stability condition; thus, its final computational efficiency is higher than that of the traditional symplectic integration method for long-term simulations. With the aid of the ITDT, we can obtain much more accurate simulation results at a lower computational cost.
NASA Astrophysics Data System (ADS)
Zhang, B.; Xu, C. L.; Wang, S. M.
2016-07-01
The infrared temperature measurement technique has been applied in various fields, such as thermal efficiency analysis, environmental monitoring, industrial facility inspections, and remote temperature sensing. In the problem of infrared measurement of the metal surface temperature of superheater surfaces, the outer wall of the metal pipe is covered by radiative participating flue gas. This means that the traditional infrared measurement technique will lead to intolerable measurement errors due to the absorption and scattering of the flue gas. In this paper, an infrared measurement method for a metal surface in flue gas is investigated theoretically and experimentally. The spectral emissivity of the metal surface, and the spectral absorption and scattering coefficients of the radiative participating flue gas are retrieved simultaneously using an inverse method called quantum particle swarm optimization. Meanwhile, the detected radiation energy simulated using a forward simulation method (named the source multi-flux method) is set as the input of the retrieval. Then, the temperature of the metal surface detected by an infrared CCD camera is modified using the source multi-flux method in combination with these retrieved physical properties. Finally, an infrared measurement system for metal surface temperature is built to assess the proposed method. Experimental results show that the modified temperature is closer to the true value than that of the direct measured temperature.
NASA Technical Reports Server (NTRS)
Fymat, A. L.
1976-01-01
The paper studies the inversion of the radiative transfer equation describing the interaction of electromagnetic radiation with atmospheric aerosols. The interaction can be considered as the propagation in the aerosol medium of two light beams: the direct beam in the line-of-sight attenuated by absorption and scattering, and the diffuse beam arising from scattering into the viewing direction, which propagates more or less in random fashion. The latter beam has single scattering and multiple scattering contributions. In the former case and for single scattering, the problem is reducible to first-kind Fredholm equations, while for multiple scattering it is necessary to invert partial integrodifferential equations. A nonlinear minimization search method, applicable to the solution of both types of problems has been developed, and is applied here to the problem of monitoring aerosol pollution, namely the complex refractive index and size distribution of aerosol particles.
A numerical method for inverse source problems for Poisson and Helmholtz equations
NASA Astrophysics Data System (ADS)
Hamad, A.; Tadi, M.
2016-11-01
This paper is concerned with an iterative algorithm for inverse evaluation of the source function for two elliptic systems. The algorithm starts with an initial guess for the unknown source function, obtains a background field, and derives the working equations for the error field. The correction to the assumed value appears as a source term for the error field. The method formulates two well-posed problems for the error field, which makes it possible to obtain the correction term. The algorithm can also recover the source function with partial data at the boundary. We consider 2-D as well as 3-D domains. The method can be applied to both Poisson and Helmholtz operators. Numerical results indicate that the algorithm can recover close estimates of the unknown source functions based on measurements collected at the boundary.
Full Dynamic Compound Inverse Method: Extension to General and Rayleigh damping
NASA Astrophysics Data System (ADS)
Pioldi, Fabio; Rizzi, Egidio
2017-04-01
The present paper builds on the original output-only identification approach named Full Dynamic Compound Inverse Method (FDCIM), recently published in this journal by the authors, and proposes a much-enhanced version that handles more general forms of structural damping, including the classically adopted Rayleigh damping. This leads to an extended FDCIM formulation offering superior performance on all the targeted identification parameters, namely: modal properties, Rayleigh damping coefficients, structural features at the element level, and the input seismic excitation time history. Synthetic earthquake-induced structural response signals are adopted as input channels for the FDCIM approach, for comparison and validation. The identification algorithm is run first on a benchmark 3-storey shear-type frame, and then on a realistic 10-storey frame, also considering noise added to the response signals. Consistency of the identification results is demonstrated, with a definite superiority of this latter FDCIM proposal.
NASA Astrophysics Data System (ADS)
Ferreira, Carlos; Casari, Pascal; Bouzidi, Rabah; Jacquemin, Frédéric
2006-09-01
The aim of this paper is to investigate the mechanical properties of a PVC foam core, and especially the Young's modulus profile through the thickness of a commercialised 50 mm beam. The identification of the Young's modulus gradient is realized through a uniaxial compression test of a 50 mm cube sample. The in-plane strain fields of one cube face under loading in both directions (longitudinal and transversal) are obtained using a diffuse-light interferometric technique, speckle interferometry. A numerical model is then built using the finite element code CAST3M. We choose a multilayer model in order to introduce spatial variation of the mechanical properties. The boundary conditions are very close to those prescribed in the experimental tests. Finally, the present work shows that the non-uniform profile of the Young's modulus can be estimated by using a simple inverse method and finite element analysis to reproduce the experimental strain field.
Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1989-01-01
Progress in the direct-inverse wing design method in curvilinear coordinates has been made. This includes the remedying of a spanwise oscillation problem and the assessment of grid skewness, viscous interaction, and the initial airfoil section on the final design. It was found that, in response to the spanwise oscillation problem, designing at every other spanwise station produced the best results for the cases presented; that a smoothly varying grid is especially needed for accurate design at the wing tip; that the boundary layer displacement thicknesses must be included in a successful wing design; that the design of high and medium aspect ratio wings is possible with this code; and that the final airfoil section designed is fairly independent of the initial section.
NASA Technical Reports Server (NTRS)
Fu, L.-L.
1981-01-01
The circulation and meridional heat transport of the subtropical South Atlantic Ocean are determined through the application of the inverse method of Wunsch (1978) to hydrographic data from the IGY and METEOR expeditions. Meridional circulation results of the two data sets agree on a northward mass transport of about 20 million metric tons/sec for waters above the North Atlantic Deep Water (NADW), and a comparable southward transport of deep waters. Additional gross features held in common are the Benguela, South Equatorial and North Brazilian Coastal currents' northward transport of the Surface Water, and the deflection of the southward-flowing NADW from the South American Coast into the mid ocean by a seamount chain near 20 deg S. Total heat transport is equatorward, with a magnitude of 0.8 X 10 to the 15th W near 30 deg S and indistinguishable from zero near 8 deg S.
NASA Astrophysics Data System (ADS)
Kaporin, I. E.
2012-02-01
In order to precondition a sparse symmetric positive definite matrix, its approximate inverse is examined, which is represented as the product of two sparse mutually adjoint triangular matrices. In this way, the solution of the corresponding system of linear algebraic equations (SLAE) by applying the preconditioned conjugate gradient method (CGM) is reduced to performing only elementary vector operations and calculating sparse matrix-vector products. A method for constructing the above preconditioner is described and analyzed. The triangular factor has a fixed sparsity pattern and is optimal in the sense that the preconditioned matrix has a minimum K-condition number. The use of polynomial preconditioning based on Chebyshev polynomials makes it possible to considerably reduce the amount of scalar product operations (at the cost of an insignificant increase in the total number of arithmetic operations). The possibility of an efficient massively parallel implementation of the resulting method for solving SLAEs is discussed. For a sequential version of this method, the results obtained by solving 56 test problems from the Florida sparse matrix collection (which are large-scale and ill-conditioned) are presented. These results show that the method is highly reliable and has low computational costs.
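The core iteration described above can be sketched in a few lines. The version below is a generic preconditioned conjugate gradient solver with a simple Jacobi (diagonal) preconditioner standing in for the factored sparse approximate inverse of the abstract; it is dense, pure-Python, and illustrative only, not the paper's algorithm.

```python
# Preconditioned conjugate gradients (PCG) for an SPD system A x = b.
# The preconditioner here is Jacobi (inverse diagonal), a hedged
# stand-in for the factored approximate inverse described above.

def pcg(A, b, tol=1e-10, maxit=200):
    n = len(b)
    x = [0.0] * n
    r = b[:]                              # residual b - A x with x = 0
    minv = [1.0 / A[i][i] for i in range(n)]
    z = [minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Small SPD test system (1-D Laplacian-like); exact solution is all ones.
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [3.0, 2.0, 3.0]
x = pcg(A, b)
```

Note how the loop body contains only vector updates and one matrix-vector product, which is exactly the property the abstract exploits for massively parallel implementation.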
Assessment of Tikhonov-type regularization methods for solving atmospheric inverse problems
NASA Astrophysics Data System (ADS)
Xu, Jian; Schreier, Franz; Doicu, Adrian; Trautmann, Thomas
2016-11-01
Inverse problems occurring in atmospheric science aim to estimate state parameters (e.g. temperature or constituent concentration) from observations. To cope with nonlinear ill-posed problems, both direct and iterative Tikhonov-type regularization methods can be used. The major challenge in the framework of direct Tikhonov regularization (TR) concerns the choice of the regularization parameter λ, while iterative regularization methods require an appropriate stopping rule and a flexible λ-sequence. In the framework of TR, a suitable value of the regularization parameter can generally be determined based on a priori, a posteriori, and error-free selection rules. In this study, five practical regularization parameter selection methods, i.e. the expected error estimation (EEE), the discrepancy principle (DP), the generalized cross-validation (GCV), the maximum likelihood estimation (MLE), and the L-curve (LC), have been assessed. As a representative of iterative methods, the iteratively regularized Gauss-Newton (IRGN) algorithm has been compared with TR. This algorithm uses a monotonically decreasing λ-sequence and DP as an a posteriori stopping criterion. Practical implementations pertaining to retrievals of vertically distributed temperature and trace gas profiles from synthetic microwave emission measurements and from real far infrared data, respectively, have been conducted. Our numerical analysis demonstrates that none of the parameter selection methods dedicated to TR appear to be perfect and each has its own advantages and disadvantages. Alternatively, IRGN is capable of producing plausible retrieval results and offers a more efficient means of estimating λ.
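The discrepancy principle named above is easy to demonstrate on a tiny system: choose λ so that the Tikhonov residual norm matches the assumed noise level δ. The example below is a hedged sketch with an invented 2×2 nearly singular forward model, solved via the normal equations with Cramer's rule; it is not one of the paper's retrieval problems.

```python
# Direct Tikhonov regularization with the discrepancy principle (DP):
# find lambda such that || A x_lambda - b || ~= delta.  Toy 2x2 example.

A = [[1.0, 1.0], [1.0, 1.001]]          # nearly singular forward model
b = [2.0, 2.002]                         # data
delta = 1e-3                             # assumed noise level

def tikhonov(lam):
    # Solve the normal equations (A^T A + lam I) x = A^T b (2x2 Cramer).
    ata = [[sum(A[k][i] * A[k][j] for k in range(2)) + (lam if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    atb = [sum(A[k][i] * b[k] for k in range(2)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x = [(atb[0] * ata[1][1] - ata[0][1] * atb[1]) / det,
         (ata[0][0] * atb[1] - atb[0] * ata[1][0]) / det]
    res = sum((sum(A[i][j] * x[j] for j in range(2)) - b[i]) ** 2
              for i in range(2)) ** 0.5
    return x, res

# The residual norm grows monotonically with lambda, so bisect for the
# lambda whose residual equals the noise level delta.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    _, res = tikhonov(mid)
    lo, hi = (lo, mid) if res > delta else (mid, hi)
x_dp, res_dp = tikhonov(0.5 * (lo + hi))
```

The other selection rules in the abstract (GCV, L-curve, MLE) replace this residual-matching criterion with different scalar functions of λ, but the outer search structure is the same.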
Method of Minimax Optimization in the Coefficient Inverse Heat-Conduction Problem
NASA Astrophysics Data System (ADS)
Diligenskaya, A. N.; Rapoport, É. Ya.
2016-07-01
Consideration has been given to the inverse problem of identifying a temperature-dependent thermal-conductivity coefficient. The problem was formulated in an extremum statement as a search for a quantity considered as the optimum control of an object with distributed parameters, described by a nonlinear homogeneous spatially one-dimensional Fourier partial differential equation with boundary conditions of the second kind. As the optimality criterion, the authors used the error (minimized on the time interval of observation) of uniform approximation of the temperature computed on the object's model, at an assigned point of the segment of variation in the spatial variable, to its directly measured value. Pre-parametrization of the sought control action, which a priori fixes its description up to the assignment of representation parameters in the class of polynomial temperature functions, reduces the problem under study to one of parametric optimization. To solve the formulated problem, the authors used an analytical minimax-optimization method that takes account of the alternance properties of the sought optimum solutions, based on which the computation of the optimum values of the sought parameters reduces to a system of equations (closed for these unknowns) fixing the minimax deviations of the calculated temperature values from those observed on the time interval of identification. The obtained results confirm the efficiency of the proposed method for a certain range of applied problems. The authors have studied the influence of the coordinate of the temperature-measurement point on the accuracy of solution of the inverse problem.
A fast nonstationary iterative method with convex penalty for inverse problems in Hilbert spaces
NASA Astrophysics Data System (ADS)
Jin, Qinian; Lu, Xiliang
2014-04-01
In this paper we consider the computation of approximate solutions for inverse problems in Hilbert spaces. In order to capture the special feature of solutions, non-smooth convex functions are introduced as penalty terms. By exploiting the Hilbert space structure of the underlying problems, we propose a fast iterative regularization method which reduces to the classical nonstationary iterated Tikhonov regularization when the penalty term is chosen to be the square of norm. Each iteration of the method consists of two steps: the first step involves only the operator from the problem while the second step involves only the penalty term. This splitting character has the advantage of making the computation efficient. In case the data is corrupted by noise, a stopping rule is proposed to terminate the method and the corresponding regularization property is established. Finally, we test the performance of the method by reporting various numerical simulations, including the image deblurring, the determination of source term in Poisson equation, and the de-autoconvolution problem.
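The special case the abstract mentions, where the penalty is the squared norm and the method reduces to classical nonstationary iterated Tikhonov, can be shown in a scalar sketch (invented numbers, not the paper's algorithm): each step solves a Tikhonov problem about the current iterate, with a regularization parameter α_k that decreases geometrically.

```python
# Scalar nonstationary iterated Tikhonov for a x = b:
#   x_{k+1} = x_k + a (b - a x_k) / (a^2 + alpha_k),  alpha_k -> 0.
# Small |a| makes the problem ill-conditioned in spirit.

def iterated_tikhonov(a, b, iters=30):
    x, alpha = 0.0, 1.0
    for _ in range(iters):
        x = x + a * (b - a * x) / (a * a + alpha)
        alpha *= 0.5                   # nonstationary decreasing sequence
    return x

x = iterated_tikhonov(a=0.1, b=0.05)   # true solution is 0.5
```

With noisy data one would stop this iteration early (the stopping rule in the abstract) instead of letting α_k shrink indefinitely, since small α_k amplifies the noise.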
Inversion of potential field data using the finite element method on parallel computers
NASA Astrophysics Data System (ADS)
Gross, L.; Altinay, C.; Shaw, S.
2015-11-01
In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method with the Hessian operator of the regularization and cross-gradient component of the cost function as preconditioner. We will show that each iterative step requires the solution of several PDEs namely for the potential fields, for the adjoint defects and for the application of the preconditioner. In extension to the traditional discrete formulation the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs) with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by weighting regularization and cross-gradient but is independent of the resolution of PDE discretization and that as a consequence the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.
Cao, Jianping; Du, Zhengjian; Mo, Jinhan; Li, Xinxiao; Xu, Qiujian; Zhang, Yinping
2016-12-20
Passive sampling is an alternative to active sampling for measuring concentrations of gas-phase volatile organic compounds (VOCs). However, the uncertainty, or relative error, of such measurements has not been minimized, due to the limitations of existing design methods. In this paper, we have developed a novel method, the inverse problem optimization method, to address the problems associated with designing accurate passive samplers. The principle is to determine the most appropriate physical properties of the materials, and the optimal geometry of a passive sampler, by minimizing the relative sampling error based on the mass transfer model of VOCs for a passive sampler. As an example application, we used our proposed method to optimize radial passive samplers for the sampling of benzene and formaldehyde in a normal indoor environment. A new passive sampler, which we have called the Tsinghua Passive Diffusive Sampler (THPDS), for indoor benzene measurement was developed according to the optimized results. Silica zeolite was selected as the sorbent for the THPDS. The measured overall uncertainty of the THPDS (22% for benzene) is lower than that of most commercially available passive samplers but considerably larger than the modeled uncertainty (4.8% for benzene, the optimized result), suggesting that further research is required.
An Augmented Lagrangian Method for a Class of Inverse Quadratic Programming Problems
Zhang Jianzhong; Zhang Liwei
2010-02-15
We consider an inverse quadratic programming (QP) problem in which the parameters in the objective function of a given QP problem are adjusted as little as possible so that a known feasible solution becomes the optimal one. We formulate this problem as a minimization problem with a positive semidefinite cone constraint; its dual is a linearly constrained semismoothly differentiable (SC^1) convex programming problem with fewer variables than the original one. We demonstrate the global convergence of the augmented Lagrangian method for the dual problem and prove that the convergence rate of the primal iterates, generated by the augmented Lagrangian method, is proportional to 1/r, and the rate of the multiplier iterates is proportional to 1/√r, where r is the penalty parameter in the augmented Lagrangian. As the objective function of the dual problem is an SC^1 function involving the projection operator onto the cone of symmetric semidefinite matrices, the analysis requires extensive tools such as the singular value decomposition of matrices, an implicit function theorem for semismooth functions, and properties of the projection operator in the symmetric-matrix space. Furthermore, the semismooth Newton method with Armijo line search is applied to solve the subproblems in the augmented Lagrangian approach, and is proven to have global convergence and a local quadratic rate. Finally, numerical results obtained with the augmented Lagrangian method are reported.
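The basic augmented Lagrangian mechanics (not the paper's SC^1 dual algorithm) can be seen in a one-variable sketch with invented data: minimize (x − 2)² subject to x = 1. Each outer iteration minimizes the augmented Lagrangian in closed form and then updates the multiplier by r times the constraint violation; the iterates converge to x* = 1 with multiplier λ* = 2.

```python
# Augmented Lagrangian for min (x - 2)^2  s.t.  x = 1.
# L_r(x, lam) = (x - 2)^2 + lam*(x - 1) + (r/2)*(x - 1)^2

def augmented_lagrangian(r=10.0, iters=50):
    lam = 0.0
    for _ in range(iters):
        # Closed-form inner minimizer: set dL/dx = 0.
        x = (4.0 - lam + r) / (2.0 + r)
        lam += r * (x - 1.0)          # multiplier update
    return x, lam

x_star, lam_star = augmented_lagrangian()
```

The multiplier error contracts by the factor 2/(2 + r) per iteration here, illustrating why a larger penalty parameter r speeds outer convergence, at the price of harder inner subproblems in less trivial settings.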
GPU-accelerated Monte Carlo simulation of particle coagulation based on the inverse method
NASA Astrophysics Data System (ADS)
Wei, J.; Kruis, F. E.
2013-09-01
Simulating particle coagulation using Monte Carlo methods is in general a challenging computational task due to its numerical complexity and computing cost. Currently, the lowest computing costs are obtained by applying a graphics processing unit (GPU), originally developed for speeding up graphics processing in the consumer market. In this article we present an implementation that accelerates a Monte Carlo method based on the inverse scheme for simulating particle coagulation on the GPU. The abundant data parallelism embedded within the Monte Carlo method is explained, as it allows an efficient parallelization of the MC code on the GPU. Furthermore, the computational accuracy of the MC on the GPU was validated against a benchmark, a CPU-based discrete-sectional method. To evaluate the performance gains of using the GPU, the computing time on the GPU was compared against that of its sequential counterpart on the CPU. The measured speedups show that the GPU can accelerate the execution of the MC code by a factor of 10-100, depending on the chosen number of simulation particles. The algorithm shows a linear dependence of computing time on the number of simulation particles, which is a remarkable result in view of the n² dependence of coagulation.
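The "inverse scheme" at the heart of such Monte Carlo codes selects the next coagulation event by inverting the discrete cumulative distribution of the event rates. A minimal sketch (the rates below are made-up numbers, not a physical kernel, and the cumulative sum is rebuilt per call for clarity):

```python
# Inverse-CDF sampling of a discrete event index: draw u ~ U(0,1) and
# return the first index whose cumulative rate exceeds u * total.

import bisect, random

def sample_index(rates, u):
    cum, total = [], 0.0
    for r in rates:
        total += r
        cum.append(total)
    return bisect.bisect_left(cum, u * total)

rates = [0.1, 0.4, 0.2, 0.3]          # illustrative per-event rates
random.seed(0)
counts = [0] * len(rates)
for _ in range(100_000):
    counts[sample_index(rates, random.random())] += 1
```

On a GPU, many such draws are independent and can be evaluated in parallel threads, which is the data parallelism the abstract refers to.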
NASA Astrophysics Data System (ADS)
Fortin, Will F. J.
The utility and meaning of a geophysical dataset depend on good interpretation informed by high-quality data, processing, and attribute examination via technical methodologies. Active-source marine seismic reflection data contain a great deal of information in the location, phase, and amplitude of both pre- and post-stack seismic reflections. Using pre- and post-stack data, this work has extracted useful information from marine reflection seismic data in novel ways, in both the oceanic water column and the sub-seafloor geology. In chapter 1 we develop a new method for estimating oceanic turbulence from a seismic image. This method is tested on synthetic seismic data to show its ability to accurately recover both the distribution and levels of turbulent diffusivity. We then apply the method to real data offshore Costa Rica, where we observe lee waves. Our results find diffusivities near the seafloor and above the lee waves that are five times greater than in surrounding waters and 50 times greater than open-ocean diffusivities. Chapter 2 investigates subsurface geology in the Cascadia Subduction Zone and outlines a workflow for using pre-stack waveform inversion to produce highly detailed velocity models and seismic images. Using a newly developed inversion code, we achieve better imaging results compared with the product of a standard, user-intensive method for building a velocity model. Our results image the subduction interface ~30 km farther landward than previous work and better image faults and sedimentary structures above the oceanic plate as well as in the accretionary prism. The resultant velocity model is highly detailed, inverted every 6.25 m with ~20 m vertical resolution, and will be used to examine the role of fluids in the subduction system. These results help us to better understand the natural hazards risks associated with the Cascadia Subduction Zone. Chapter 3 returns to seismic oceanography and examines the dynamics of nonlinear
Estimates of European emissions of methyl chloroform using a Bayesian inversion method
NASA Astrophysics Data System (ADS)
Maione, M.; Graziosi, F.; Arduini, J.; Furlani, F.; Giostra, U.; Blake, D. R.; Bonasoni, P.; Fang, X.; Montzka, S. A.; O'Doherty, S. J.; Reimann, S.; Stohl, A.; Vollmer, M. K.
2014-03-01
Methyl chloroform (MCF) is a man-made chlorinated solvent contributing to the destruction of stratospheric ozone and is controlled under the Montreal Protocol on Substances that Deplete the Ozone Layer. Long-term, high-frequency observations of MCF carried out at three European sites show a constant decline in the background mixing ratios of MCF. However, we observe persistent non-negligible mixing ratio enhancements of MCF in pollution episodes, suggesting unexpectedly high ongoing emissions in Europe. In order to identify the source regions and to give an estimate of the magnitude of such emissions, we have used a Bayesian inversion method and a point source analysis, based on high-frequency long-term observations at the three European sites. The inversion identified south-eastern France (SEF) as a region with enhanced MCF emissions. This estimate was confirmed by the point source analysis. We performed this analysis using an eleven-year data set, from January 2002 to December 2012. Overall emissions estimated for the European study domain decreased nearly exponentially from 1.1 Gg yr-1 in 2002 to 0.32 Gg yr-1 in 2012, of which the estimated emissions from the SEF region accounted for 0.49 Gg yr-1 in 2002 and 0.20 Gg yr-1 in 2012. The European estimates are a significant fraction of the total semi-hemisphere (30-90° N) emissions, contributing a minimum of 9.8% in 2004 and a maximum of 33.7% in 2011, of which on average 50% are from the SEF region. On the global scale, the SEF region is thus responsible for a minimum of 2.6% (in 2003) to a maximum of 10.3% (in 2009) of the global MCF emissions.
Estimates of European emissions of methyl chloroform using a Bayesian inversion method
NASA Astrophysics Data System (ADS)
Maione, M.; Graziosi, F.; Arduini, J.; Furlani, F.; Giostra, U.; Blake, D. R.; Bonasoni, P.; Fang, X.; Montzka, S. A.; O'Doherty, S. J.; Reimann, S.; Stohl, A.; Vollmer, M. K.
2014-09-01
Methyl chloroform (MCF) is a man-made chlorinated solvent contributing to the destruction of stratospheric ozone and is controlled under the "Montreal Protocol on Substances that Deplete the Ozone Layer" and its amendments, which called for its phase-out in 1996 in developed countries and 2015 in developing countries. Long-term, high-frequency observations of MCF carried out at three European sites show a constant decline in the background mixing ratios of MCF. However, we observe persistent non-negligible mixing ratio enhancements of MCF in pollution episodes, suggesting unexpectedly high ongoing emissions in Europe. In order to identify the source regions and to give an estimate of the magnitude of such emissions, we have used a Bayesian inversion method and a point source analysis, based on high-frequency long-term observations at the three European sites. The inversion identified southeastern France (SEF) as a region with enhanced MCF emissions. This estimate was confirmed by the point source analysis. We performed this analysis using an 11-year data set, from January 2002 to December 2012. Overall, emissions estimated for the European study domain decreased nearly exponentially from 1.1 Gg yr-1 in 2002 to 0.32 Gg yr-1 in 2012, of which the estimated emissions from the SEF region accounted for 0.49 Gg yr-1 in 2002 and 0.20 Gg yr-1 in 2012. The European estimates are a significant fraction of the total semi-hemisphere (30-90° N) emissions, contributing a minimum of 9.8% in 2004 and a maximum of 33.7% in 2011, of which on average 50% are from the SEF region. On the global scale, the SEF region is thus responsible for a minimum of 2.6% (in 2003) and a maximum of 10.3% (in 2009) of the global MCF emissions.
NASA Astrophysics Data System (ADS)
Mehl, S.; Foglia, L.; Hill, M. C.
2009-12-01
Methods for analyzing inverse modeling results can be separated into two categories: (1) linear methods, such as Cook's D, which are computationally frugal and do not require additional model runs, and (2) nonlinear methods, such as cross validation, which are computationally more expensive because they generally require additional model runs. Depending on the type of nonlinear analysis performed, the additional runs can be the difference between tens of runs and thousands of runs. For example, cross-validation studies require the model to be recalibrated (the regression repeated) for each observation or set of observations analyzed. This can be computationally prohibitive if many observations or sets of observations are investigated and/or the model has many estimated parameters. A tradeoff exists between linear and nonlinear methods: linear methods are computationally efficient, but their results are questionable when models are nonlinear. The trade-offs between computational efficiency and accuracy are investigated by comparing results from several linear measures of observation importance (for example, Cook's D and DFBETAS) to their nonlinear counterparts based on cross validation. Examples from groundwater models of the Maggia Valley in southern Switzerland are used to make comparisons. The models include representation of the stream-aquifer interaction and range from simple to complex, with associated modified Beale's measures ranging from mildly nonlinear to highly nonlinear, respectively. These results demonstrate the applicability and limitations of linear methods over a range of model complexity and linearity and can be used to better understand when the additional computational burden of nonlinear methods may be necessary.
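Cook's D is computationally frugal precisely because it needs only quantities from a single regression fit (residuals and leverages), with no recalibration per observation. A sketch on synthetic data, not the Maggia Valley models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear model y = X beta + noise, with one gross outlier injected.
n, p = 30, 2
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
y = X @ np.array([2.0, 0.5]) + rng.normal(0, 0.2, n)
y[5] += 3.0                                  # influential outlier at index 5

# One OLS fit gives everything Cook's D needs.
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta
s2 = resid @ resid / (n - p)                 # residual variance estimate
h = np.diag(X @ XtX_inv @ X.T)               # leverages (hat-matrix diagonal)

# Cook's D: influence of each observation, no refitting required.
D = resid**2 / (p * s2) * h / (1 - h)**2
print(int(np.argmax(D)))
```

The nonlinear counterpart (cross validation) would instead refit the regression n times, once per left-out observation, which is the computational burden the abstract describes.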
Retrieval Performance and Indexing Differences in ABELL and MLAIB
ERIC Educational Resources Information Center
Graziano, Vince
2012-01-01
Searches for 117 British authors are compared in the Annual Bibliography of English Language and Literature (ABELL) and the Modern Language Association International Bibliography (MLAIB). Authors are organized by period and genre within the early modern era. The number of records for each author was subdivided by format, language of publication,…
NASA Astrophysics Data System (ADS)
Pham, H. V.; Elshall, A. S.; Tsai, F. T.; Yan, L.
2012-12-01
The inverse problem in groundwater modeling deals with a rugged (i.e. ill-conditioned and multimodal), nonseparable and noisy function, since it involves solving second-order nonlinear partial differential equations with forcing terms. Derivative-based optimization algorithms may fail to reach a near-global solution due to stagnation at a local minimum. To avoid entrapment in a local optimum and enhance search efficiency, this study introduces the covariance matrix adaptation-evolution strategy (CMA-ES) as a local derivative-free optimization method. In the first part of the study, we compare CMA-ES with five commonly used heuristic methods and the traditional derivative-based Gauss-Newton method on a hypothetical problem. This problem involves four different cases to allow a rigorous assessment against ten criteria: ruggedness in terms of nonsmoothness and multimodality, ruggedness in terms of ill-conditioning and high nonlinearity, nonseparability, high dimensionality, noise, algorithm adaptation, algorithm tuning, performance, consistency, parallelization (scaling with number of cores) and invariance (solution vector and function values). CMA-ES adapts a covariance matrix representing the pair-wise dependency between decision variables, which approximates the inverse of the Hessian matrix up to a certain factor. The solution is updated with the covariance matrix and an adaptable step size, both adapted through evolution paths that implement heuristic control terms. The covariance matrix adaptation uses information from the current population of solutions and from the previous search path. Since such an elaborate search mechanism is not common in the other heuristic methods, CMA-ES proves to be more robust than other population-based heuristic methods in terms of reaching a near-optimal solution for a rugged, nonseparable and noisy inverse problem. Other favorable properties that CMA-ES exhibits are the consistency of the solution for repeated
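The core idea, sampling from an adapted covariance and re-estimating it from selected steps, can be conveyed with a heavily simplified covariance-adapting evolution strategy. This sketch drops CMA-ES's rank-one update and proper step-size control, and all parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

def rosenbrock(x):
    # Nonseparable, ill-conditioned benchmark standing in for the PDE misfit
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

n = 4                        # dimension
mean = np.full(n, 3.0)       # start away from the optimum at [1, 1, 1, 1]
sigma = 0.5                  # global step size
C = np.eye(n)                # covariance adapted from selected samples
lam, mu = 16, 4              # population and parent sizes

for gen in range(400):
    # Sample a population from N(mean, sigma^2 C)
    A = np.linalg.cholesky(C)
    steps = rng.normal(size=(lam, n)) @ A.T
    pop = mean + sigma * steps
    fitness = np.array([rosenbrock(x) for x in pop])
    best = np.argsort(fitness)[:mu]
    # Move the mean to the average of the best samples and blend C toward
    # the empirical second moment of their (pre-scaling) steps
    sel = steps[best]
    mean = pop[best].mean(axis=0)
    C = 0.9 * C + 0.1 * (sel.T @ sel) / mu
    sigma *= 0.995           # crude decay; real CMA-ES adapts sigma too

print(rosenbrock(mean))
```

Even this stripped-down adaptation elongates the search distribution along the curved valley, which a fixed isotropic sampler cannot do.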
Inverse Method for Identification of Material Parameters Directly from Milling Experiments
NASA Astrophysics Data System (ADS)
Maurel, A.; Michel, G.; Thibaud, S.; Fontaine, M.; Gelin, J. C.
2007-04-01
An identification procedure for the determination of material parameters used in the FEM simulation of high-speed machining processes is proposed. This procedure is based on the coupling of a numerical identification procedure and FEM simulations of milling operations. The experimental data result directly from measurements performed during milling experiments. A special device has been instrumented and calibrated to perform force and torque measurements directly during machining experiments, using a piezoelectric dynamometer and a high-frequency charge amplifier. The forces and torques are stored and low-pass filtered if necessary, and these data provide the main basis for the identification procedure, which couples 3D FEM simulations of milling with optimization/identification algorithms. The identification approach is mainly based on the response surface method in the material-parameter space, coupled with a sensitivity analysis. A moving least squares approximation method is used to accelerate the identification process. The material behaviour is described by the Johnson-Cook law. A fracture model is also added to account for chip formation and separation. The FEM simulations of milling are performed using an explicit ALE-based FEM code. The inverse identification method is applied here to a 304L stainless steel, and the first results are presented.
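The Johnson-Cook law multiplies a strain-hardening term, a strain-rate term, and a thermal-softening term; these products are what the inverse procedure's parameters feed into. A sketch with illustrative parameter values, not the identified 304L set:

```python
import numpy as np

# Johnson-Cook flow stress:
#   sigma = (A + B*eps^n) * (1 + C*ln(eps_dot/eps_dot_ref)) * (1 - T*^m)
# Parameter values below are illustrative only.
A, B, n_exp, C, m = 310.0, 1000.0, 0.65, 0.07, 1.0   # A, B in MPa
eps_dot_ref = 1.0                                     # reference strain rate (1/s)
T_room, T_melt = 293.0, 1673.0                        # K

def johnson_cook(eps, eps_dot, T):
    T_star = (T - T_room) / (T_melt - T_room)         # homologous temperature
    return ((A + B * eps**n_exp)
            * (1.0 + C * np.log(eps_dot / eps_dot_ref))
            * (1.0 - T_star**m))

# Flow stress rises with strain and strain rate, falls with temperature
s_cold = johnson_cook(0.2, 1e3, 293.0)
s_hot = johnson_cook(0.2, 1e3, 1000.0)
print(s_cold, s_hot)
```

In the identification loop, the optimizer perturbs (A, B, n, C, m), re-runs the milling FEM simulation, and compares predicted cutting forces with the dynamometer measurements.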
Pan, Feifei; Peters-lidard, Christa D.; King, Anthony Wayne
2010-11-01
Soil particle size distribution (PSD) (i.e., clay, silt, sand, and rock contents) information is one of the critical factors for understanding the water cycle, since it affects almost all water cycle processes, e.g., drainage, runoff, soil moisture, evaporation, and evapotranspiration. With information about soil PSD, we can estimate almost all soil hydraulic properties (e.g., saturated soil moisture, field capacity, wilting point, residual soil moisture, saturated hydraulic conductivity, pore-size distribution index, and bubbling capillary pressure) based on published empirical relationships. Therefore, a regional or global soil PSD database is essential for studying the water cycle regionally or globally. At the present stage, three soil geographic databases are commonly used: the Soil Survey Geographic database, the State Soil Geographic database, and the National Soil Geographic database. Those soil data are map-unit based and associated with great uncertainty. Ground soil surveys are a way to reduce this uncertainty; however, they are time consuming and labor intensive. In this study, an inverse method for estimating the mean and standard deviation of soil PSD from observed soil moisture is proposed and applied to Throughfall Displacement Experiment sites in Walker Branch Watershed in eastern Tennessee. This method is based on the relationship between the spatial mean and standard deviation of soil moisture. The results indicate that the suggested method is feasible and has potential for retrieving soil PSD information globally from remotely sensed soil moisture data.
NASA Technical Reports Server (NTRS)
Bonataki, E.; Chaviaropoulos, P.; Papailiou, K. D.
1991-01-01
A new inverse inviscid method suitable for the design of rotating blade sections lying on an arbitrary axisymmetric stream-surface with varying streamtube width is presented. The geometry of the axisymmetric stream-surface and the streamtube width variation with meridional distance, the number of blades, the inlet flow conditions, the rotational speed, and the suction- and pressure-side velocity distributions as functions of the normalized arc length are given. The flow is considered irrotational in the absolute frame of reference and compressible. The output of the computation is the blade section that satisfies the above data. The method solves the flow equations on a (phi1, psi) potential-function/streamfunction plane for the velocity modulus W and the flow angle beta; the blade section shape can then be obtained as part of the physical plane geometry by integrating the flow angle distribution along streamlines. The (phi1, psi) plane is defined so that the monotonic behavior of the potential function is guaranteed, even in cases with high peripheral velocities. The method is validated on a rotating turbine case and used to design new blades. To obtain a closed blade, a set of closure conditions was developed and is described.
Inverse Monte Carlo method in a multilayered tissue model for diffuse reflectance spectroscopy.
Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas
2012-04-01
Model-based data analysis of diffuse reflectance spectroscopy data enables the estimation of optical and structural tissue parameters. The aim of this study was to present an inverse Monte Carlo method based on spectra from two source-detector distances (0.4 and 1.2 mm), using a multilayered tissue model. The tissue model variables include geometrical properties, light scattering properties, tissue chromophores such as melanin and hemoglobin, oxygen saturation and average vessel diameter. The method utilizes a small set of presimulated Monte Carlo data for combinations of different levels of epidermal thickness and tissue scattering. The path length distributions in the different layers are stored, and the effect of the other parameters is added in post-processing. The accuracy of the method was evaluated using Monte Carlo simulations of tissue-like models containing discrete blood vessels, evaluating blood tissue fraction and oxygenation, and it was also compared to a homogeneous model. The multilayer model performed better than the homogeneous model, and all tissue parameters significantly improved the spectral fitting. Recorded in vivo spectra were fitted well at both distances, which we previously found was not possible with a homogeneous model. No absolute intensity calibration is needed, and the algorithm is fast enough for real-time processing.
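The post-processing trick, storing per-layer photon path lengths once and adding absorption afterwards via Beer-Lambert weighting, can be illustrated with synthetic path lengths (exponential draws standing in for real Monte Carlo output; all values are made up):

```python
import numpy as np

rng = np.random.default_rng(3)

# Pretend these per-photon path lengths (mm) in a two-layer model
# (epidermis, dermis) came from ONE pre-run Monte Carlo simulation.
n_photons = 100_000
L_epi = rng.exponential(0.1, n_photons)
L_derm = rng.exponential(1.0, n_photons)

def detected_intensity(mua_epi, mua_derm):
    # Beer-Lambert re-weighting: absorption is applied in post-processing,
    # so no new Monte Carlo run is needed for each absorber combination.
    return np.mean(np.exp(-mua_epi * L_epi - mua_derm * L_derm))

I_low = detected_intensity(0.01, 0.02)   # weak absorption (1/mm)
I_high = detected_intensity(0.5, 0.3)    # strong absorption
print(I_low, I_high)
```

This is why only epidermal thickness and scattering need to be pre-simulated: chromophore concentrations only re-weight the stored path-length distributions.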
Efficient Inversion in Underwater Acoustics with Analytic, Iterative and Sequential Bayesian Methods
2015-09-30
Award Number: N000141310077. Category of research: shallow water acoustics. LONG TERM GOALS: The long-term goal of this project is to develop ... efficient inversion algorithms for successful geoacoustic parameter estimation, inversion for sound speed in the water column, and source localization ... acoustic field and optimization. The potential of analytic approaches is also investigated. OBJECTIVES: Achieve accurate and computationally
NASA Astrophysics Data System (ADS)
D'Auria, Luca; Fernandez, Jose; Puglisi, Giuseppe; Rivalta, Eleonora; Camacho, Antonio; Nikkhoo, Mehdi; Walter, Thomas
2016-04-01
The inversion of ground deformation and gravity data is affected by an intrinsic ambiguity because of the mathematical formulation of the inverse problem. Current methods for the inversion of geodetic data rely on both parametric (i.e. assuming a source geometry) and non-parametric approaches. The former are able to catch the fundamental features of the ground deformation source but, if the assumptions are wrong or oversimplified, they can provide misleading results. The latter class of methods, even if not relying on stringent assumptions, can suffer from artifacts, especially when dealing with poor datasets. In the framework of the EC-FP7 MED-SUV project we aim at comparing different inverse approaches to verify how they cope with the basic goals of volcano geodesy: determining the source depth, the source shape (size and geometry), the nature of the source (magmatic/hydrothermal), and hinting at the complexity of the source. Other aspects that are important in volcano monitoring are: volume/mass transfer toward shallow depths, propagation of dikes/sills, and forecasting the opening of eruptive vents. On the basis of similar experiments already done in the fields of seismic tomography and geophysical imaging, we have devised a blind test experiment. Our group was divided into one model design team and several inversion teams. The model design team devised two physical models representing volcanic events at two distinct volcanoes (one stratovolcano and one caldera). They provided the inversion teams with the topographic reliefs, the calculated deformation field (on a set of simulated GPS stations and as InSAR interferograms) and the gravity change (on a set of simulated campaign stations). The nature of the volcanic events remained unknown to the inversion teams until after the submission of the inversion results. Here we present the preliminary results of this comparison in order to determine which features of the ground deformation and gravity source
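The simplest parametric source in volcano geodesy is the Mogi point source; a minimal forward model gives a feel for what the parametric inversion teams fit. Source depth and volume change below are illustrative, not one of the blind-test models:

```python
import numpy as np

def mogi_displacement(x, y, depth, dV, nu=0.25):
    """Surface displacement of a Mogi point pressure source in an elastic
    half-space: source at (0, 0, depth), volume change dV, Poisson ratio nu."""
    coeff = (1.0 - nu) * dV / np.pi
    R = np.sqrt(x**2 + y**2 + depth**2)   # distance from source to surface point
    ux = coeff * x / R**3
    uy = coeff * y / R**3
    uz = coeff * depth / R**3
    return ux, uy, uz

# Uplift directly above a 4 km deep source inflating by 1e6 m^3
ux0, uy0, uz0 = mogi_displacement(0.0, 0.0, 4000.0, 1e6)
# Uplift 4 km off-axis is smaller
_, _, uz_off = mogi_displacement(4000.0, 0.0, 4000.0, 1e6)
print(uz0, uz_off)
```

A parametric inversion would search over (position, depth, dV) to fit the simulated GPS and InSAR displacements; the ambiguity the abstract mentions arises because different (depth, dV) pairs produce nearly identical surface signals.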
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy
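Abel inversion, one of the three applications, can be illustrated in its simplest discrete form: onion peeling over concentric shells, where the projection matrix of chord lengths is upper-triangular and inverted by back-substitution. This deterministic sketch is not the hierarchical Bayesian sampler of the thesis:

```python
import numpy as np

# Discretize a spherically symmetric emitter into shells of width dr.
# A[i, j] is the chord length of the line of sight at impact parameter
# b = i*dr through shell j (inner radius j*dr, outer radius (j+1)*dr).
n, dr = 50, 1.0
r = (np.arange(n) + 1) * dr                  # outer shell radii
A = np.zeros((n, n))
for i in range(n):
    b = i * dr
    for j in range(i, n):
        inner = np.sqrt(max((j * dr)**2 - b**2, 0.0))
        A[i, j] = 2.0 * (np.sqrt(r[j]**2 - b**2) - inner)

# Synthetic emission-rate profile and its line-of-sight integral (Abel transform)
eps_true = np.exp(-r / 10.0)
flux = A @ eps_true

# Onion peeling: A is upper-triangular, so the noiseless system inverts exactly
eps_rec = np.linalg.solve(A, flux)
print(np.max(np.abs(eps_rec - eps_true)))
```

With noisy fluxes this direct solve amplifies errors at small radii, which is exactly what motivates the regularized and Bayesian formulations.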
Kimoto, K.; Hirose, S.
2005-04-09
A linearized inverse scattering method, the so-called Kirchhoff inversion, is formulated in the time domain for SH-waves measured by a contact ultrasonic transducer and tested using experimental data. The data for reconstruction are obtained experimentally by measuring ultrasonic echoes from artificial flaws in steel plate specimens. For efficient and accurate data collection, a contact SH-wave linear array transducer is used. The shapes of the artificial flaws are reconstructed by the Kirchhoff inversion and by conventional SAFT (Synthetic Aperture Focusing Technique) using the waves from a single ray path. Comparison of the two methods shows that the Kirchhoff inversion works well for experimental data and outperforms SAFT, although only the illuminated portion of the flaw boundaries is reconstructed by either method. In order to gain more information on the flaw boundaries, a Kirchhoff inversion that takes multiple ray paths into account is also tested with the same experimental data. As a result, it is shown that a larger part of the flaw boundaries can be visualized by considering the multiple ray paths.
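SAFT's single-ray-path focusing is delay-and-sum: for each image pixel, each array trace is sampled at the two-way travel time to that pixel and the samples are summed. A sketch on synthetic single-scatterer data; the geometry, wave speed, and pulse are illustrative, not the experimental setup:

```python
import numpy as np

c, fs = 3200.0, 50e6              # SH-wave speed (m/s), sampling rate (Hz)
n_elem, pitch = 32, 0.5e-3        # linear array geometry
elem_x = (np.arange(n_elem) - n_elem / 2) * pitch
flaw = np.array([2e-3, 10e-3])    # (x, z) point scatterer

# Synthetic pulse-echo data: one trace per element, Gaussian echo envelope
n_t = 2048
t = np.arange(n_t) / fs
data = np.zeros((n_elem, n_t))
for i, xe in enumerate(elem_x):
    tof = 2.0 * np.hypot(flaw[0] - xe, flaw[1]) / c   # two-way travel time
    data[i] = np.exp(-((t - tof) * 5e6)**2)

# SAFT: delay-and-sum focusing on every image pixel
xs = np.linspace(-4e-3, 8e-3, 61)
zs = np.linspace(5e-3, 15e-3, 51)
image = np.zeros((zs.size, xs.size))
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        d = np.hypot(x - elem_x, z)
        idx = np.round(2.0 * d / c * fs).astype(int)
        valid = idx < n_t
        image[iz, ix] = np.abs(data[np.arange(n_elem)[valid], idx[valid]].sum())

pz, px = np.unravel_index(np.argmax(image), image.shape)
print(xs[px], zs[pz])
```

The Kirchhoff inversion of the abstract goes further by weighting the summed amplitudes according to the linearized scattering model rather than summing them uniformly.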
The Sunyaev-Zel'dovich Effect in Abell 370
NASA Technical Reports Server (NTRS)
Grego, Laura; Carlstrom, John E.; Joy, Marshall K.; Reese, Erik D.; Holder, Gilbert P.; Patel, Sandeep; Holzapfel, William L.; Cooray, Asantha K.
1999-01-01
We present interferometric measurements of the Sunyaev-Zel'dovich (SZ) effect towards the galaxy cluster Abell 370. These measurements, which directly probe the pressure of the cluster's gas, show the gas is strongly aspherical, in agreement with the morphology revealed by X-ray and gravitational lensing observations. We calculate the cluster's gas mass fraction by comparing the gas mass derived from the SZ measurements to the lensing-derived gravitational mass near the critical lensing radius. We also calculate the gas mass fraction from the SZ data by deriving the total mass under the assumption that the gas is in hydrostatic equilibrium (HSE). We test the assumptions in the HSE method by comparing the total cluster mass implied by the two methods. The Hubble constant derived for this cluster, when the known systematic uncertainties are included, has a very wide range of values and therefore does not provide additional constraints on the validity of the assumptions. We examine carefully the possible systematic errors in the gas fraction measurement. The gas fraction is a lower limit to the cluster's baryon fraction, and so we compare the gas mass fraction, calibrated by numerical simulations to approximately the virial radius, to measurements of the global mass fraction of baryonic matter, Ω_B/Ω_matter. Our lower limit to the cluster baryon fraction is f_B = (0.043 ± 0.014)/h_100. From this, we derive an upper limit to the universal matter density, Ω_matter ≤ 0.72/h_100, and a likely value of Ω_matter = (0.44 +0.15/−0.12)/h_100.
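The final step is simple arithmetic: if clusters are fair samples of the universal baryon fraction, then Ω_matter ≤ Ω_B / f_B. A sketch of that reasoning using the abstract's f_B lower limit; the BBN baryon density and Hubble parameter below are illustrative assumptions, not values from the paper:

```python
# Cluster baryon-fraction lower limit from the SZ analysis: f_B = 0.043 / h100
f_B_h = 0.043
Omega_B_h2 = 0.019     # assumed BBN-style value: Omega_B * h100^2 (illustrative)
h100 = 0.7             # assumed Hubble parameter H0 / (100 km/s/Mpc)

f_B = f_B_h / h100
Omega_B = Omega_B_h2 / h100**2

# Fair-sample argument: Omega_matter <= Omega_B / f_B
Omega_matter_max = Omega_B / f_B
print(Omega_matter_max)
```

Note that h100 cancels once in the ratio, which is why the abstract quotes the matter-density limit as a number divided by h_100.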
NASA Astrophysics Data System (ADS)
Dolman, A. J.; Shvidenko, A.; Schepaschenko, D.; Ciais, P.; Tchebakova, N.; Chen, T.; van der Molen, M. K.; Belelli Marchesini, L.; Maximov, T. C.; Maksyutov, S.; Schulze, E.-D.
2012-12-01
We determine the net land-to-atmosphere flux of carbon in Russia, including Ukraine, Belarus and Kazakhstan, using inventory-based, eddy covariance, and inversion methods. Our high-boundary estimate is -342 Tg C yr-1, from the eddy covariance method, and this is close to the upper bounds of the inventory-based Land Ecosystem Assessment (LEA) and inverse model estimates. A lower-boundary estimate is provided at -1350 Tg C yr-1 from the inversion models. The average of the three methods is -613.5 Tg C yr-1. The methane emission is estimated separately at 41.4 Tg C yr-1. The three methods agree well within their respective error bounds, so there is good consistency between bottom-up and top-down approaches. The net atmosphere-to-land flux is primarily caused by the forests of Russia (-692 Tg C yr-1 from the LEA). It remains remarkable that the three methods provide such close estimates (-615, -662, -554 Tg C yr-1) for net biome production (NBP), given the inherent uncertainties in all of the approaches. The lack of recent forest inventories, the small number of eddy covariance sites and the associated uncertainty in upscaling, and the undersampling of concentrations for the inversions are among the prime causes of the uncertainty. The dynamic global vegetation models (DGVMs) suggest a much lower uptake at -91 Tg C yr-1, and we argue that this is caused by a high estimate of heterotrophic respiration compared to the other methods.
NASA Astrophysics Data System (ADS)
Nazarov, L. A.; Nazarova, L. A.; Romenskii, E. I.; Tcheverda, V. A.; Epov, M. I.
2016-02-01
A method for estimating the stress-strain state of a rock massif in the vicinity of underground facilities is substantiated. The method is based on the solution of the boundary inverse problem of determining the components of an external stress field from acoustic sounding data. The acoustic sounding data used are the arrival times of diving head longitudinal waves, recorded in a long mine shaft. Numerical experiments have revealed the optimal arrangement of the recording network and the limiting relative error in the input data which, taken together, provide for solvability of the inverse problem.
A tracer-based inversion method for diagnosing eddy-induced diffusivity and advection
NASA Astrophysics Data System (ADS)
Bachman, S. D.; Fox-Kemper, B.; Bryan, F. O.
2015-02-01
A diagnosis method is presented which inverts a set of tracer flux statistics into an eddy-induced transport intended to apply to all tracers. The underlying assumption is that a linear flux-gradient relationship describes eddy-induced tracer transport, but a full tensor coefficient is assumed rather than a scalar one, which allows for both down-gradient and skew transports: Lagrangian advection and anisotropic diffusion not necessarily aligned with the tracer gradient can be diagnosed. In this method, multiple passive tracers are initialized in an eddy-resolving flow simulation. Their spatially averaged gradients form a matrix, where the gradient of each tracer is assumed to satisfy an identical flux-gradient relationship. The resulting linear system, which is overdetermined when using more than three tracers, is then solved to obtain an eddy transport tensor R which describes the eddy advection (antisymmetric part of R) and potentially anisotropic diffusion (symmetric part of R) in terms of coarse-grained variables. The mathematical basis for this inversion method is presented here, along with practical guidelines for its implementation. We present recommendations for initialization of the passive tracers, maintaining the required misalignment of the tracer gradients, correcting for nonconservative effects, and quantifying the error in the diagnosed transport tensor. A method is proposed to find unique, tracer-independent, distinct rotational and divergent Lagrangian transport operators, but the results indicate that these operators are not meaningfully relatable to tracer-independent eddy advection or diffusion. With the optimal method of diagnosis, the diagnosed transport tensor is capable of predicting the fluxes of other tracers withheld from the diagnosis, including even active tracers such as buoyancy, with relative errors of 14% or less.
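The overdetermined flux-gradient inversion reduces to a least-squares solve of F = −R G, with tracer gradients as columns of G. A 2-D sketch with a synthetic transport tensor, not fluxes diagnosed from a real simulation:

```python
import numpy as np

rng = np.random.default_rng(4)

# "True" 2x2 eddy transport tensor: symmetric part = anisotropic diffusion,
# antisymmetric part = skew (advective) transport.
R_true = np.array([[150.0, 40.0],
                   [-80.0, 90.0]])

# Coarse-grained gradients of several tracers (columns); they must stay
# misaligned, and using more tracers than dimensions overdetermines R.
n_tracers = 6
G = rng.normal(size=(2, n_tracers))
F = -R_true @ G + rng.normal(0.0, 1.0, (2, n_tracers))   # noisy eddy fluxes

# Solve F = -R G for R in the least-squares sense:
# F^T = -G^T R^T, so lstsq gives R^T.
R_est = np.linalg.lstsq(G.T, -F.T, rcond=None)[0].T

sym = 0.5 * (R_est + R_est.T)        # diffusive part
antisym = 0.5 * (R_est - R_est.T)    # eddy-advective (skew) part
print(R_est)
```

The symmetric/antisymmetric split at the end is exactly the decomposition the abstract uses to separate diffusion from eddy advection.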
Sato, Ryota; Shirai, Toru; Taniguchi, Yo; Murase, Takenori; Bito, Yoshitaka; Ochi, Hisaaki
2017-03-27
Quantitative susceptibility mapping (QSM) is a new magnetic resonance imaging (MRI) technique for noninvasively estimating the magnetic susceptibility of biological tissue. Several methods for QSM have been proposed. One of these can estimate susceptibility with high accuracy in tissues whose contrast is consistent between magnitude images and susceptibility maps, such as deep gray-matter nuclei. However, the susceptibility of small veins is underestimated and not well depicted by this approach, because the contrast of small veins is inconsistent between a magnitude image and a susceptibility map. In order to improve the estimation accuracy and visibility of small veins without streaking artifacts, a method with multiple dipole-inversion combination with k-space segmentation (MUDICK) has been proposed. In the proposed method, k-space is divided into three domains (low-frequency, magic-angle, and high-frequency). The k-space data in the low-frequency and magic-angle domains are obtained by L1-norm regularization using structural information from a pre-estimated susceptibility map. The k-space data in the high-frequency domain are obtained from the pre-estimated susceptibility map in order to preserve small-vein contrast. Using numerical simulation and a human brain study at 3 Tesla, streaking artifacts and small-vein susceptibility were compared between MUDICK and conventional methods (MEDI and TKD). The numerical simulation and human brain study showed that MUDICK and MEDI had no severe streaking artifacts and that MUDICK gave higher contrast and accuracy of susceptibility in small veins compared to MEDI. These results suggest that MUDICK can improve the accuracy and visibility of susceptibility in small veins without severe streaking artifacts.
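TKD, one of the conventional baselines above, inverts the dipole kernel in k-space only where the kernel exceeds a threshold, truncating the ill-conditioned magic-angle cone. A generic TKD sketch on a synthetic phantom; grid size, threshold, and source are illustrative:

```python
import numpy as np

# Dipole kernel in k-space for B0 along z: D(k) = 1/3 - kz^2 / |k|^2
n = 64
k = np.fft.fftfreq(n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                         # avoid 0/0 at the k-space origin
D = 1.0 / 3.0 - kz**2 / k2

# Synthetic susceptibility source and the field perturbation it produces
chi = np.zeros((n, n, n))
chi[28:36, 28:36, 28:36] = 1.0
field = np.real(np.fft.ifftn(D * np.fft.fftn(chi)))

# TKD: invert 1/D only where |D| exceeds the threshold; zero elsewhere
thr = 0.2
D_safe = np.where(np.abs(D) > thr, D, 1.0)
D_inv = np.where(np.abs(D) > thr, 1.0 / D_safe, 0.0)
chi_rec = np.real(np.fft.ifftn(D_inv * np.fft.fftn(field)))
print(chi_rec[31, 31, 31])
```

The truncated cone is precisely the "magic-angle domain" that MUDICK instead fills in with L1-regularized data, which is why TKD underestimates susceptibility while MUDICK does not.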
An inverse method to recover the SFR and reddening properties from spectra of galaxies
NASA Astrophysics Data System (ADS)
Vergely, J.-L.; Lançon, A.; Mouhcine
2002-11-01
We develop a non-parametric inverse method to investigate the star formation rate, the metallicity evolution and the reddening properties of galaxies based on their spectral energy distributions (SEDs). This approach allows us to clarify the level of information present in the data, depending on its signal-to-noise ratio. When low resolution SEDs are available in the ultraviolet, optical and near-IR wavelength ranges together, we conclude that it is possible to constrain the star formation rate and the effective dust optical depth simultaneously with a signal-to-noise ratio of 25. With excellent signal-to-noise ratios, the age-metallicity relation can also be constrained. We apply this method to the well-known nuclear starburst in the interacting galaxy NGC 7714. We focus on deriving the star formation history and the reddening law. We confirm that classical extinction models cannot provide an acceptable simultaneous fit of the SED and the lines. We also confirm that, with the adopted population synthesis models and in addition to the current starburst, an episode of enhanced star formation that started more than 200 Myr ago is required. As the time elapsed since the last interaction with NGC 7715, based on dynamical studies, is about 100 Myr, our result reinforces the suggestion that this interaction might not have been the most important event in the life of NGC 7714.
NASA Astrophysics Data System (ADS)
Cao, Danping; Liao, Wenyuan
2015-03-01
Full waveform inversion (FWI) is a model-based data-fitting technique that has been widely used to estimate model parameters in geophysics. In this work, we propose an efficient computational approach to solve the FWI of crosswell seismic data. The FWI problem is mathematically formulated as a partial differential equation (PDE)-constrained optimization problem, which is numerically solved using a gradient-based optimization method. The efficiency and accuracy of FWI are mainly determined by three main components: forward modeling, gradient calculation, and model updating, which usually involves a gradient-based optimization algorithm. Given the large number of iterations needed by FWI, an accurate gradient is critical to its success, as it will not only speed up convergence but also increase the accuracy of the solution. However, computing the gradient remains a challenging task even after the adjoint PDE has been derived. Automatic differentiation (AD) tools have proven very effective in a variety of application areas, including geoscience. In this work we investigated the feasibility of integrating TAPENADE, a powerful AD tool, into FWI, so that the FWI workflow is simplified and we can focus on the forward modeling and the model updating. In this paper, we choose the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method due to its robustness and fast convergence. Numerical experiments have been conducted to demonstrate the effectiveness, efficiency and robustness of the new computational approach for FWI.
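The abstract pairs adjoint/AD gradients with the L-BFGS optimizer. Below is a minimal pure-Python sketch of the L-BFGS two-loop recursion on a toy quadratic misfit; in a real FWI the gradient would come from the adjoint PDE (e.g. via TAPENADE) and a line search would replace the fixed step. The misfit function and all parameter values are placeholders.

```python
def lbfgs(grad, x, m=5, iters=50, step=0.5, tol=1e-9):
    """L-BFGS with the classic two-loop recursion and a fixed step length."""
    s_hist, y_hist = [], []
    g = grad(x)
    for _ in range(iters):
        if max(abs(gi) for gi in g) < tol:
            break
        # Two-loop recursion: q approximates H^{-1} g from stored (s, y) pairs.
        q = list(g)
        alphas = []
        for s, y in zip(reversed(s_hist), reversed(y_hist)):
            rho = 1.0 / sum(si * yi for si, yi in zip(s, y))
            a = rho * sum(si * qi for si, qi in zip(s, q))
            alphas.append((a, rho, s, y))
            q = [qi - a * yi for qi, yi in zip(q, y)]
        if y_hist:  # initial Hessian scaling gamma = s.y / y.y
            s, y = s_hist[-1], y_hist[-1]
            gamma = sum(si * yi for si, yi in zip(s, y)) / sum(yi * yi for yi in y)
            q = [gamma * qi for qi in q]
        for a, rho, s, y in reversed(alphas):
            b = rho * sum(yi * qi for yi, qi in zip(y, q))
            q = [qi + (a - b) * si for qi, si in zip(q, s)]
        x_new = [xi - step * qi for xi, qi in zip(x, q)]
        g_new = grad(x_new)
        s_hist.append([xn - xo for xn, xo in zip(x_new, x)])
        y_hist.append([gn - go for gn, go in zip(g_new, g)])
        if len(s_hist) > m:
            s_hist.pop(0)
            y_hist.pop(0)
        x, g = x_new, g_new
    return x

# Toy misfit J(m) = 0.5 (m0 - 1)^2 + 2 (m1 + 3)^2 with an analytic gradient
# standing in for the adjoint-computed FWI gradient.
grad = lambda m: [m[0] - 1.0, 4.0 * (m[1] + 3.0)]
m_est = lbfgs(grad, [0.0, 0.0])
```

The limited memory (here m=5 pairs) is what makes the method attractive for FWI, where the model vector is far too large for a dense Hessian.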
Bulk Modulus of Spherical Palladium Nanoparticles by Chen-Mobius Lattice Inversion Method
NASA Astrophysics Data System (ADS)
Abdul-Hafidh, Esam
2015-03-01
Palladium is a precious and rare element that belongs to the platinum group metals (PGMs), with the lowest density and melting point of the group. The numerous uses of Pd in dentistry, medicine and industrial applications have attracted considerable investment. Preparation and characterization of palladium nanoparticles have been conducted by many researchers, but very little effort has been devoted to the study of Pd physical properties, such as mechanical, optical, and electrical properties. In this study, the Chen-Mobius lattice inversion method is used to calculate the cohesive energy and bulk modulus of palladium. The method was employed to calculate the cohesive energy by summing over all pairs of atoms within palladium spherical nanoparticles. The bulk modulus is derived from the cohesive energy curve as a function of particle size. The cohesive energy has been calculated using the potential energy function proposed by Rose et al. (1981). The results are found to be comparable with previous predictions for metallic nanoparticles. This work is supported by the Royal Commission at Yanbu, Saudi Arabia.
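The pair-summation step described above can be sketched in a few lines. This is only a generic illustration with placeholder Rose-type parameters and a unit lattice constant, not the Chen-Mobius inversion itself (which derives the pair potential from cohesive-energy curves) and not fitted Pd values.

```python
import math

def fcc_sphere(radius, a=1.0):
    """Atom positions of an fcc lattice (constant a) inside a sphere."""
    basis = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.0), (0.5, 0.0, 0.5), (0.0, 0.5, 0.5)]
    n = int(radius / a) + 1
    pts = []
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                for bx, by, bz in basis:
                    x, y, z = (i + bx) * a, (j + by) * a, (k + bz) * a
                    if x * x + y * y + z * z <= radius * radius + 1e-9:
                        pts.append((x, y, z))
    return pts

def rose_pair(r, e0=0.1, r0=2.0 ** -0.5, l=0.15):
    """Rose-type universal form: E(r) = -e0 (1 + a*) exp(-a*), a* = (r - r0)/l."""
    astar = (r - r0) / l
    return -e0 * (1.0 + astar) * math.exp(-astar)

def cohesive_energy_per_atom(pts):
    """Sum the pair energy over all atom pairs, normalized per atom."""
    e = 0.0
    for i in range(len(pts)):
        xi, yi, zi = pts[i]
        for j in range(i + 1, len(pts)):
            xj, yj, zj = pts[j]
            r = math.sqrt((xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2)
            e += rose_pair(r)
    return e / len(pts)

# Smaller clusters have more surface atoms and bind less strongly per atom.
e_small = cohesive_energy_per_atom(fcc_sphere(1.5))
e_large = cohesive_energy_per_atom(fcc_sphere(2.5))
```

Repeating the sum while scaling all distances gives the cohesive-energy curve from which a bulk modulus can be read off as the curvature at the minimum.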
Systematic hierarchical coarse-graining with the inverse Monte Carlo method
Lyubartsev, Alexander P.; Naômé, Aymeric; Vercauteren, Daniel P.; Laaksonen, Aatto
2015-12-28
We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained in detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can be used in simulations at less accurate level after scaling up the size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730–3737 (1995)] on a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package MagiC is developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (bacterial LiaR regulator bound to a 26 base pairs DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair-potentials are used directly as look-up tables but here we have fitted them to five Gaussians and a repulsive wall. Results show stable association between DNA and the model protein as well as similar position fluctuation profile.
Modified Inverse First Order Reliability Method (I-FORM) for Predicting Extreme Sea States.
Eckert-Gallup, Aubrey Celia; Sallaberry, Cedric Jean-Marie; Dallman, Ann Renee; Neary, Vincent Sinclair
2014-09-01
Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as part of the standard current practice for designing marine structures to survive extreme sea states. Such environmental contours are characterized by combinations of significant wave height (Hs) and energy period (Te) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (IFORM) is standard design practice for generating environmental contours. In this paper, the traditional application of the IFORM to generating environmental contours representing extreme sea states is described in detail and its merits and drawbacks are assessed. The application of additional methods for analyzing sea state data, including the use of principal component analysis (PCA) to create an uncorrelated representation of the data under consideration, is proposed. A reexamination of the components of the IFORM application to the problem at hand, including the use of new distribution fitting techniques, is shown to contribute to the development of more accurate and reasonable representations of extreme sea states for use in survivability analysis for marine structures. Keywords: Inverse FORM, Principal Component Analysis, Environmental Contours, Extreme Sea State Characterization, Wave Energy Converters
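The traditional IFORM construction described above can be sketched as follows: points on a circle of radius beta in standard-normal space are mapped to (Hs, Te) through assumed distributions. The Weibull marginal for Hs, the lognormal conditional for Te, and every parameter value below are illustrative placeholders, not fitted hindcast values.

```python
import math
from statistics import NormalDist

N = NormalDist()

def iform_contour(return_period_yr, sea_state_hours=3.0, n_pts=8):
    # Exceedance probability of a single sea state, then the reliability index.
    states_per_year = 365.25 * 24.0 / sea_state_hours
    p_exceed = 1.0 / (return_period_yr * states_per_year)
    beta = N.inv_cdf(1.0 - p_exceed)
    contour = []
    for i in range(n_pts):
        th = 2.0 * math.pi * i / n_pts
        u1, u2 = beta * math.cos(th), beta * math.sin(th)
        # u1 -> Hs via an assumed Weibull marginal (scale 2.8 m, shape 1.5).
        hs = 2.8 * (-math.log(1.0 - N.cdf(u1))) ** (1.0 / 1.5)
        # u2 -> Te via an assumed lognormal conditional on Hs.
        mu, sigma = math.log(5.0 + 0.8 * hs), 0.15
        te = math.exp(mu + sigma * u2)
        contour.append((hs, te))
    return beta, contour

beta, contour = iform_contour(100.0)  # 100-year contour
```

The paper's modification replaces the raw (Hs, Te) pair with PCA components before this transformation, so that the two mapped variables are uncorrelated.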
NASA Astrophysics Data System (ADS)
Fernández-Oliveras, Alicia; Rubiño, Manuel; Pérez, María M.
2013-11-01
Light propagation in biological media is characterized by the absorption coefficient, the scattering coefficient, the scattering phase function, the refractive index, and the surface conditions (roughness). By means of the inverse-adding-doubling (IAD) method, transmittance and reflectance measurements lead to the determination of the absorption coefficient and the reduced scattering coefficient. The additional measurement of the phase function performed by goniometry allows the separation of the reduced scattering coefficient into the scattering coefficient and the scattering anisotropy factor. The majority of techniques, such as the one utilized in this work, involve the use of integrating spheres to measure total transmission and reflection. We have employed an integrating sphere setup to measure the total transmittance and reflectance of dental biomaterials used in restorative dentistry. Dental biomaterials are meant to replace dental tissues, such as enamel and dentine, in irreversibly diseased teeth. In previous works we performed goniometric measurements in order to evaluate the scattering anisotropy factor for these kinds of materials. In the present work we have used the IAD method to combine the measurements performed using the integrating sphere setup with the results of the previous goniometric measurements. The aim was to optically characterize the dental biomaterials analyzed, since whole studies to assess the appropriate material properties are required in medical applications. In this context, complete optical characterizations play an important role in achieving the fulfillment of optimal quality and the final success of dental biomaterials used in restorative dentistry.
The merging cluster Abell 1758 revisited: multi-wavelength observations and numerical simulations
NASA Astrophysics Data System (ADS)
Durret, F.; Laganá, T. F.; Haider, M.
2011-05-01
Context. Cluster properties can be studied more distinctly in pairs of clusters, where we expect the effects of interactions to be strong. Aims: We here discuss the properties of the double cluster Abell 1758 at a redshift z ~ 0.279. These clusters show strong evidence for merging. Methods: We analyse the optical properties of the North and South clusters of Abell 1758 based on deep imaging obtained with the Canada-France-Hawaii Telescope (CFHT) archive Megaprime/Megacam camera in the g' and r' bands, covering a total region of about 1.05 × 1.16 deg², or 16.1 × 17.6 Mpc². Our X-ray analysis is based on archive XMM-Newton images. Numerical simulations were performed using an N-body algorithm to treat the dark-matter component, a semi-analytical galaxy-formation model for the evolution of the galaxies, and a grid-based hydrodynamic code with a piecewise parabolic method (PPM) scheme for the dynamics of the intra-cluster medium. We computed galaxy luminosity functions (GLFs) and 2D temperature and metallicity maps of the X-ray gas, which we then compared to the results of our numerical simulations. Results: The GLFs of Abell 1758 North are well fit by Schechter functions in the g' and r' bands, but with a small excess of bright galaxies, particularly in the r' band; their faint-end slopes are similar in both bands. In contrast, the GLFs of Abell 1758 South are not well fit by Schechter functions: excesses of bright galaxies are seen in both bands; the faint end of the GLF is not very well defined in g'. The GLF computed from our numerical simulations assuming a halo mass-luminosity relation agrees with those derived from the observations. From the X-ray analysis, the most striking features are structures in the metal distribution. We found two elongated regions of high metallicity in Abell 1758 North with two peaks towards the centre. In contrast, Abell 1758 South shows a deficit of metals in its central regions. Comparing observational results to those derived from numerical
NASA Technical Reports Server (NTRS)
Bennett, Andrew F.
1990-01-01
Inverse methods for estimating the surface circulation of the equatorial Pacific by combining a linear reduced-gravity shallow-water model with the Tropical Ocean-Global Atmosphere ship-of-opportunity expendable bathythermograph (TOGA SOP XBT) observing program are examined. It is demonstrated that a simple linear model of the upper circulation of the equatorial Pacific can be successfully used as a weak constraint when smoothing the TOGA SOP XBT data. A circulation is sought as the weighted least squares fit to the dynamics and the data. The solution method is an expansion in representer functions, and the generalized inverse problem is thereby reduced from a functional problem to an algebraic problem for the coefficients of the representers. A specific inverse calculation using synthetic forcing and data is presented.
Site Effects Estimation by a Transfer-Station Generalized Inversion Method
NASA Astrophysics Data System (ADS)
Zhang, Wenbo; Yu, Xiangwei
2016-04-01
Site effect is one of the essential factors in characterizing strong ground motion as well as in earthquake engineering design. In this study, the generalized inversion technique (GIT) is applied to estimate site effects. Moreover, the GIT is modified to improve its analytical ability. The GIT needs a reference station as a standard. Ideally the reference station is located at a rock site, and its site effect is considered to be a constant. For the same earthquake, the record spectrum of a station of interest is divided by that of the reference station, and the source term is eliminated. Thus site effects and the attenuation can be acquired. In the GIT process, the amount of earthquake data available in the analysis is limited to that recorded by the reference station, and the stations whose site effects can be estimated are also restricted to those which recorded common events with the reference station. In order to overcome this limitation of the GIT, a modified GIT is put forward in this study, namely the transfer-station generalized inversion method (TSGI). Compared with the GIT, this modified GIT can be used to enlarge the data set and increase the number of stations whose site effects can be analyzed. This makes the solution much more stable. To verify the results of the GIT, a non-reference method, the genetic algorithm (GA), is applied to estimate absolute site effects. On April 20, 2013, an earthquake of magnitude MS 7.0 occurred in the Lushan region, China. After this event, several hundred aftershocks with ML<3.0 occurred in this region. The purpose of this paper is to investigate the site effects and Q factor for this area based on the aftershock strong motion records from the China National Strong Motion Observation Network System. Our results show that when the TSGI is applied instead of the GIT, the total number of events used in the inversion increases from 31 to 54 and the total number of stations whose site effect can be estimated
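The spectral-ratio logic behind the GIT, and the transfer-station idea, can be illustrated with a toy calculation at a single frequency. Station names, amplitudes, and site factors below are entirely synthetic: station A shares events with the reference R, while station B only shares events with A, so B's site term must be chained through A, which is the essence of the TSGI extension.

```python
import math

def geo_mean(vals):
    """Geometric mean, the usual way spectral ratios are averaged."""
    return math.exp(sum(math.log(v) for v in vals) / len(vals))

# Synthetic spectral amplitudes at one frequency: amplitude = source * site.
true_site = {"R": 1.0, "A": 3.0, "B": 5.0}   # reference site effect is 1
events_RA = [10.0, 20.0, 7.0]                # source terms recorded by R and A
events_AB = [15.0, 9.0]                      # source terms recorded by A and B

amp = lambda src, sta: src * true_site[sta]

# Classic GIT step: A's site term from events shared with the reference R;
# the ratio cancels the (common) source term event by event.
site_A = geo_mean([amp(s, "A") / amp(s, "R") for s in events_RA])

# Transfer-station step: B never recorded an event with R, so its site
# term is obtained relative to A and then chained to the reference.
ratio_BA = geo_mean([amp(s, "B") / amp(s, "A") for s in events_AB])
site_B = ratio_BA * site_A
```

The real GIT solves this as one joint log-linear inversion over all stations, frequencies, and events, together with an attenuation term; the chaining above is only the conceptual core.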
NASA Astrophysics Data System (ADS)
Hendricks Franssen, Harrie-Jan; Brunner, Philip; Eugster, Martin; Bauer, Peter; Kinzelbach, Wolfgang
The study area is the Chobe Enclave region in semi-arid Northern Botswana. Growing water demand in the local villages led to the development of different water supply scenarios, one of which uses groundwater from a nearby aquifer. A regional groundwater flow model was established, within both a stochastic and a deterministic approach. In principle, recharge can be derived from a surface water balance. The input data for the water balance, evapotranspiration and precipitation, were calculated using remotely sensed data. The calculation of evapotranspiration is based on the surface energy balance using multi-channel images from the Advanced Very High Resolution Radiometer (AVHRR). For several days of the year, actual ET is calculated and compared to station potential ET to yield crop coefficients. The crop coefficients are interpolated in time. Finally, long-term ET is calculated by multiplying the crop coefficients with station potential ET. Precipitation is taken from station data and precipitation maps prepared by USAID using Meteosat images. As surface runoff is small in most of the area, subtracting evapotranspiration from precipitation yields recharge maps for the period 1990-2000. However, the values thus calculated are very inaccurate, as the errors in both the precipitation and evapotranspiration estimates are large. Still, zones of different recharge and probable errors can be identified. The absolute value of the recharge flux in each zone is derived from the chloride method. Alternatively, the recharge flux was also estimated by the sequential self-calibrated method, a stochastic inverse modelling approach based on observed heads and pumping test data. Recharge values and transmissivities are estimated jointly in this method. The recharge zones derived from the water balance together with their stochastic properties are used as prior information. The method generates multiple equally likely solutions to the estimation problem and allows the uncertainty to be assessed.
Design optimization of axial flow hydraulic turbine runner: Part I - an improved Q3D inverse method
NASA Astrophysics Data System (ADS)
Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji
2002-06-01
With the aim of constructing a comprehensive design optimization procedure for axial flow hydraulic turbines, an improved quasi-three-dimensional inverse method has been proposed from a system viewpoint, and a set of rotational flow governing equations as well as a blade geometry design equation have been derived. In the inverse method, the computational domain is taken from the inlet of the guide vane to the far outlet of the runner blade, and flows in different regions are solved simultaneously. The influence of wicket gate parameters on the runner blade design can therefore be considered, and the difficulty of defining the flow condition at the runner blade inlet is surmounted. As a pre-computation of the initial blade design on the S2m surface is newly adopted, the iteration between the S1 and S2m surfaces has been reduced greatly and the convergence of the inverse computation has been improved. The present model has been applied to the inverse computation of a Kaplan turbine runner. Experimental results and direct flow analysis have validated the inverse computation. Numerical investigations show that a proper enlargement of the guide vane distribution diameter is advantageous for improving the performance of an axial hydraulic turbine runner.
Methods to control phase inversions and enhance mass transfer in liquid-liquid dispersions
Tsouris, Constantinos; Dong, Junhang
2002-01-01
The present invention is directed to the effects of applied electric fields on liquid-liquid dispersions. In general, the present invention is directed to the control of phase inversions in liquid-liquid dispersions. Because of polarization and deformation effects, coalescence of aqueous drops is facilitated by the application of electric fields. As a result, with an increase in the applied voltage, the ambivalence region is narrowed and shifted toward higher volume fractions of the dispersed phase. This permits the invention to be used to ensure that the aqueous phase remains continuous, even at a high volume fraction of the organic phase. Additionally, the volume fraction of the organic phase may be increased without causing phase inversion, and may be used to correct a phase inversion which has already occurred. Finally, the invention may be used to enhance mass transfer rates from one phase to another through the use of phase inversions.
Solè, Isabel; Pey, Carmen M; Maestro, Alicia; González, Carmen; Porras, Montserrat; Solans, Conxita; Gutiérrez, José M
2010-04-15
The aim of this work is to study, through experimental design, the effect of vessel geometry and scale-up on the properties of nano-emulsions prepared through the phase inversion composition (PIC) method. Results show that proper mixing is crucial for small-droplet nano-emulsions, especially when remaining free oil is found together with the key liquid crystal phase formed during the emulsification process. In these cases, mixing must be close to the perfectly mixed model, and proper geometries must be selected to promote good mixing. Small addition rates V(ad) and high mixing rates omega promote the necessary mixing level. However, results indicate that, if free oil remains together with liquid crystal formed during emulsification, too high an omega could promote coalescence of oil droplets. When a cubic liquid crystal phase Pm3n is formed instead during emulsification, without free oil, coalescence is not promoted, probably due to the extremely high viscosity. For the system where Pm3n is formed during emulsification, scale-up cannot be done, as would be expected, by maintaining the dimensionless variables (Reynolds number, Re, and dimensionless time). Instead, a perfect correspondence between scales is observed when the total addition time and the linear mixing rate are maintained between scales. Re, i.e. the ratio between inertial and viscous forces, does not seem adequate to describe the system, as inertial forces are negligible due to the extremely high viscosity.
Fabrication and characterization of cerium-doped barium titanate inverse opal by sol-gel method
Jin Yi; Zhu Yihua Yang Xiaoling; Li Chunzhong; Zhou Jinghong
2007-01-15
Cerium-doped barium titanate inverted opal was synthesized from barium acetate containing cerous acetate and tetrabutyl titanate in the interstitial spaces of a polystyrene (PS) opal. This procedure involves infiltration of the precursors into the interstices of the PS opal template, followed by hydrolytic polycondensation of the precursors to amorphous barium titanate and removal of the PS opal by calcination. The morphologies of the opal and inverse opal were characterized by scanning electron microscopy (SEM). The pores were characterized by mercury intrusion porosimetry (MIP). X-ray photoelectron spectroscopy (XPS) investigation showed the doping structure of cerium, barium and titanium, and powder X-ray diffraction allows one to observe the influence of doping degree on the grain size. The lattice parameters, crystal size and lattice strain were calculated by the Rietveld refinement method. The synthesis of cerium-doped barium titanate inverted opals provides an opportunity to electrically and optically engineer the photonic band structure and the possibility of developing tunable three-dimensional photonic crystal devices. - Graphical abstract: Cerium-doped barium titanate inverted opal was synthesized from barium acetate containing cerous acetate and tetrabutyl titanate in the interstitial spaces of a PS opal, which involves infiltration of precursors into the interstices of the PS opal template and removal of the PS opal by calcination.
NASA Astrophysics Data System (ADS)
Reddy, K. S.; Somasundharam, S.
2016-09-01
In this work, an inverse heat conduction problem (IHCP) involving the simultaneous estimation of the principal thermal conductivities (kxx, kyy, kzz) and specific heat capacity of orthotropic materials is solved by using a surrogate forward model. Uniformly distributed random samples for each unknown parameter are generated from the prior knowledge about these parameters, and the Finite Volume Method (FVM) is employed to solve the forward problem for the temperature distribution in space and time. A supervised machine learning technique, Gaussian Process Regression (GPR), is used to construct the surrogate forward model from the available temperature solutions and the randomly generated unknown parameter data. The Statistics and Machine Learning Toolbox available in MATLAB R2015b is used for this purpose. The robustness of the surrogate model constructed using GPR is examined by carrying out the parameter estimation for 100 new randomly generated test samples at a measurement error of ±0.3 K. The temperature measurement is obtained by adding random noise with zero mean and known standard deviation (σ = 0.1) to the FVM solution of the forward problem. The test results show that the Mean Percentage Deviation (MPD) of all test samples for all parameters is less than 10%.
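The paper uses MATLAB's GPR toolbox; as an illustrative stand-in for the surrogate idea, here is a bare-bones GP interpolator with an RBF kernel replacing an "expensive forward solver". The kernel length scale, the noise jitter, and the sine forward model are arbitrary placeholders, not the paper's FVM problem.

```python
import math

def rbf(a, b, ell=0.5):
    """Squared-exponential (RBF) kernel with length scale ell."""
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(M, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(rhs)
    A = [row[:] + [r] for row, r in zip(M, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][j] * x[j] for j in range(r + 1, n))) / A[r][r]
    return x

def gpr_fit(xs, ys, noise=1e-9):
    """Weights alpha = (K + noise*I)^{-1} y for later predictions."""
    K = [[rbf(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    return solve(K, ys)

def gpr_predict(alpha, xs, xq):
    return sum(a * rbf(xi, xq) for a, xi in zip(alpha, xs))

forward = lambda x: math.sin(3.0 * x)   # stand-in for the expensive FVM solver
xs = [0.0, 0.25, 0.5, 0.75, 1.0]        # sampled parameter values
ys = [forward(x) for x in xs]           # forward-model evaluations
alpha = gpr_fit(xs, ys)
pred = gpr_predict(alpha, xs, 0.5)      # cheap surrogate evaluation
```

Once fitted, the surrogate is evaluated thousands of times during parameter estimation at negligible cost, which is the point of the paper's construction.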
An inverse finite element method for determining the anisotropic properties of the cornea.
Nguyen, T D; Boyce, B L
2011-06-01
An inverse finite element method was developed to determine the anisotropic properties of bovine cornea from an in vitro inflation experiment. The experiment used digital image correlation (DIC) to measure the three-dimensional surface geometry and displacement field of the cornea at multiple pressures. A finite element model of a bovine cornea was developed using the DIC measured surface geometry of the undeformed specimen. The model was applied to determine five parameters of an anisotropic hyperelastic model that minimized the error between the measured and computed surface displacement field and to investigate the sensitivity of the measured bovine inflation response to variations in the anisotropic properties of the cornea. The results of the parameter optimization revealed that the collagen structure of bovine cornea exhibited a high degree of anisotropy in the limbus region, which agreed with recent histological findings, and a transversely isotropic central region. The parameter study showed that the bovine corneal response to the inflation experiment was sensitive to the shear modulus of the matrix at pressures below the intraocular pressure, the properties of the collagen lamella at higher pressures, and the degree of anisotropy in the limbus region. It was not sensitive to a weak collagen anisotropy in the central region.
A proposed through-flow inverse method for the design of mixed-flow pumps
NASA Technical Reports Server (NTRS)
Borges, Joao Eduardo
1991-01-01
A through-flow (hub-to-shroud) truly inverse method is proposed and described. It uses an imposition of mean swirl, i.e., radius times mean tangential velocity, given throughout the meridional section of the turbomachine as an initial design specification. In the present implementation, it is assumed that the fluid is inviscid, incompressible, and irrotational at inlet, and the blades are assumed to have zero thickness. Only blade rows that impart constant work to the fluid along the span are considered. An application of this procedure to the design of the rotor of a mixed-flow pump is described in detail. The strategy used to find a suitable mean swirl distribution and the other design inputs is also described. The final blade shape and pressure distributions on the blade surface are presented, showing that it is possible to obtain feasible designs using this technique. A further advantage of the technique is that it does not require large amounts of CPU time.
Well test analysis benefits from new method of Laplace space inversion
Wooden, B.; Azari, M.; Soliman, M. )
1992-07-20
This paper reports that, for modeling well test data more reliably, a new computer program easily and accurately inverts the Laplace transform. Converting a real time-and-space solution to Laplace space is often done in the petroleum industry and provides the vehicle to develop numerous new solutions. The Laplace transform is frequently used in pressure transient analysis primarily because it can reduce or transform a highly difficult problem into a much simpler one. Typically, a Laplace space equation can be manipulated by use of simple algebra to accomplish other desired ends, such as incorporating additional transformed equations to solve other aspects of the engineering problem. Once the transformed equation is complete, it is then necessary to convert back to real time and space. This conversion is accomplished analytically by what is referred to as inverting the Laplace transform, with sets of formulas and relationships between real and transformed space and time. In many cases, this inversion is not easy or cannot be done by conventional analytic means. In those situations, an engineer requires a program that numerically inverts the transform. The new program, the Azari-Wooden-Graver, or AWG, method has this capability.
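The AWG program itself is not described in code here; as a hedged illustration of what numerical Laplace inversion involves, the classic Gaver-Stehfest algorithm, widely used in well-test analysis, can be sketched as follows. It is not the AWG method, and it works best for smooth, non-oscillatory f(t).

```python
import math

def stehfest_weights(n):
    """Stehfest coefficients V_i for even n (n=12 is a common choice)."""
    half = n // 2
    V = []
    for i in range(1, n + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, half) + 1):
            s += (k ** half * math.factorial(2 * k) /
                  (math.factorial(half - k) * math.factorial(k) *
                   math.factorial(k - 1) * math.factorial(i - k) *
                   math.factorial(2 * k - i)))
        V.append((-1) ** (half + i) * s)
    return V

def invert_laplace(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(s):
    f(t) ~ (ln 2 / t) * sum_i V_i F(i ln 2 / t)."""
    ln2_t = math.log(2.0) / t
    V = stehfest_weights(n)
    return ln2_t * sum(V[i - 1] * F(i * ln2_t) for i in range(1, n + 1))

# Check against a known transform pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
f1 = invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0)
```

Because the transform is sampled only on the real s-axis, the method is cheap, which is exactly why such inverters are practical inside well-test analysis software.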
Inverse method predicting spinning modes radiated by a ducted fan from free-field measurements.
Lewy, Serge
2005-02-01
In this study the inverse problem of deducing the modal structure of the acoustic field generated by a ducted turbofan is addressed using conventional far-field directivity measurements. The final objective is to make input data available for predicting noise radiation in other configurations that have not been tested. The present paper is devoted to the analytical part of the study. The proposed method is based on the equations governing ducted sound propagation and free-field radiation. It leads to fast computations, checked against Rolls-Royce tests made in the framework of previous European projects. Results seem to be reliable although the system of equations to be solved is generally underdetermined (more propagating modes than acoustic measurements). A limited number of modes are thus selected according to any a priori knowledge of the sources. A first guess of the source amplitudes is obtained by adjusting the calculated maximum of radiation of each mode to the measured sound pressure level at the same angle. A least squares fitting gives the final solution. A simple correction can be made to account for the mean flow velocity inside the nacelle, which shifts the directivity patterns. It consists of modifying the actual frequency to keep the cut-off ratios unchanged.
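The final least-squares step described above can be sketched as a small linear fit: with the radiation patterns of a few selected modes sampled at the measurement angles, the mode amplitudes are the least-squares solution of A x ≈ b. The "patterns" below are synthetic stand-ins, not actual duct-mode directivities.

```python
def solve(M, rhs):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(rhs)
    A = [row[:] + [r] for row, r in zip(M, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][j] * x[j] for j in range(r + 1, n))) / A[r][r]
    return x

def lstsq(A, b):
    """Least squares via the normal equations: (A^T A) x = A^T b."""
    n = len(A[0])
    AtA = [[sum(r[i] * r[j] for r in A) for j in range(n)] for i in range(n)]
    Atb = [sum(r[i] * bi for r, bi in zip(A, b)) for i in range(n)]
    return solve(AtA, Atb)

# 5 far-field measurement angles, 2 selected modes: each row holds the two
# modes' (synthetic) directivity values at one angle.
patterns = [[1.0, 0.0], [0.8, 0.3], [0.5, 0.7], [0.2, 0.9], [0.0, 1.0]]
b = [2.0 * p0 - 1.0 * p1 for p0, p1 in patterns]  # data from amplitudes (2, -1)
amps = lstsq(patterns, b)
```

Restricting the fit to a handful of pre-selected modes is what makes the otherwise underdetermined system solvable, exactly as the abstract describes.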
Direct band gap silicon crystals predicted by an inverse design method
NASA Astrophysics Data System (ADS)
Oh, Young Jun; Lee, In-Ho; Lee, Jooyoung; Kim, Sunghyun; Chang, Kee Joo
2015-03-01
Cubic diamond silicon has an indirect band gap and does not absorb or emit light as efficiently as other semiconductors with direct band gaps. Thus, searching for Si crystals with direct band gaps around 1.3 eV is important for realizing efficient thin-film solar cells. In this work, we report various crystalline silicon allotropes with direct and quasi-direct band gaps, which are predicted by an inverse design method that combines a conformational space annealing algorithm for global optimization with first-principles density functional calculations. The predicted allotropes lie within 0.3 eV per atom of the diamond structure in energy and exhibit good lattice matches with it. The structural stability is examined by performing finite-temperature ab initio molecular dynamics simulations and calculating the phonon spectra. The absorption spectra are obtained by solving the Bethe-Salpeter equation together with the quasiparticle G0W0 approximation. For several allotropes with band gaps around 1 eV, photovoltaic efficiencies are comparable to those of the best-known photovoltaic absorbers such as CuInSe2. This work is supported by the National Research Foundation of Korea (2005-0093845 and 2008-0061987), Samsung Science and Technology Foundation (SSTF-BA1401-08), KIAS Center for Advanced Computation, and KISTI (KSC-2013-C2-040).
Wang, G.L.; Chew, W.C.; Cui, T.J.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.
2004-01-01
Three-dimensional (3D) subsurface imaging by inversion of data obtained from the very early time electromagnetic system (VETEM) is discussed. The study was carried out using the distorted Born iterative method to match the internal nonlinear property of the 3D inversion problem. The forward solver was based on the total-current formulation bi-conjugate gradient-fast Fourier transform (BCCG-FFT) method. It was found that the selection of the regularization parameter follows a heuristic rule, as used in the Levenberg-Marquardt algorithm, so that the iteration is stable.
NASA Astrophysics Data System (ADS)
Huang, H.; Meng, D. Q.; Lai, X. C.; Liu, T. W.; Long, Y.; Hu, Q. M.
2014-08-01
The combined interatomic pair potentials of TiZrNi, including Morse and Inversion Gaussian, are successfully built by the lattice inversion method. Some experimental controversies on atomic occupancies of sites 6-8 in W-TiZrNi are analyzed and settled with these inverted potentials. According to the characteristics of composition and site preference occupancy of W-TiZrNi, two stable structural models of W-TiZrNi are proposed and the possibilities are partly confirmed by experimental data. The stabilities of W-TiZrNi mostly result from the contribution of Zr atoms to the phonon densities of states in lower frequencies.
NASA Astrophysics Data System (ADS)
Bubis, E. L.; Lozhkarev, V. V.; Stepanov, A. N.; Smirnov, A. I.; Kuzmin, I. V.; Malshakova, O. A.; Gusev, S. A.; Skorokhodov, E. V.
2016-08-01
The adaptive phase-contrast method with nonlinear (photothermal) and linear Zernike filters was investigated. Liquid and polymer media that partially absorb radiation served as photothermal Zernike filters. Efficient visualization and inversion of images of small-scale model objects were demonstrated experimentally, and a growth-sector boundary in a nonlinear crystal was visualized.
ERIC Educational Resources Information Center
Axinte, D. A.
2008-01-01
The paper presents an "inverse" method to teach specialist manufacturing processes by identifying a focal representative product (RP) from which key specialist manufacturing (KSM) processes are analysed and interrelated to assess the capability of integrated manufacturing routes. In this approach, the RP should: comprise KSM processes; involve…
Peng, Yucheng; Gardner, Douglas J; Han, Yousoo; Cai, Zhiyong; Tshabalala, Mandla A
2013-09-01
Research and development of the renewable nanomaterial cellulose nanofibrils (CNFs) has received considerable attention. The effect of drying on the surface energy of CNFs was investigated. Samples of nanofibrillated cellulose (NFC) and cellulose nanocrystals (CNC) were each subjected to four separate drying methods: air-drying, freeze-drying, spray-drying, and supercritical-drying. The surface morphology of the dried CNFs was examined using a scanning electron microscope. The surface energy of the dried CNFs was determined using inverse gas chromatography at infinite dilution, at column temperatures of 30, 40, 50, 55, and 60 °C. Surface energy measurements of supercritical-dried NFCs were also performed at column temperatures of 70, 75, and 80 °C. Different drying methods produced CNFs with different morphologies, which in turn significantly influenced their surface energy. Supercritical-drying resulted in NFCs having a dispersion component of surface energy of 98.3±5.8 mJ/m² at 30 °C. The dispersion components of surface energy of freeze-dried NFCs (44.3±0.4 mJ/m² at 30 °C) and CNCs (46.5±0.9 mJ/m² at 30 °C) were the lowest among all the CNFs. The pre-freezing treatment during the freeze-drying process is hypothesized to have a major impact on the dispersion component of surface energy of the CNFs. The acid and base parameters of all the dried CNFs were amphoteric (acidic and basic), although predominantly basic in nature.
NASA Astrophysics Data System (ADS)
Gustafsson, Ove K. S.; Eriksson, Gunnar; Holm, Peter; Waern, Åsa; von Schoenberg, Pontus; Thaning, Lennart; Nordstrand, Melker; Persson, Rolf
2006-09-01
Radio wave propagation over sea paths is influenced by the local meteorological conditions in the atmospheric layer near the surface, especially during ducting. Duct conditions can be determined from measurements of local meteorological parameters, from weather forecast models, or by using inverse methods. In order to evaluate the feasibility of using inverse methods to retrieve refractivity profiles, measurements of RF signals and meteorological parameters were carried out at a test site in the Baltic. During the measurements, signal power from two broadcast antennas, one at Visby and one at Vastervik, was received at Musko, an island south of Stockholm. The measurements were performed during the summer of 2005, and the data were used to test the software package for inversion methods, SAGA (Seismo Acoustic inversion using Genetic Algorithms, by Peter Gerstoft, UCSD, US). Refractivity profiles retrieved by SAGA were compared, during parts of the experiment, with refractivity profiles calculated from measured parameters: from rocket sounding, radio sounding, and local meteorological measurements using bulk model calculations, as well as profiles obtained from the Swedish operational weather forecast model HIRLAM. Surface-based duct heights are predicted in relatively many situations, although the number of frequencies or antenna heights has to be increased to reduce the ambiguity of the retrieved refractive index profile.
NASA Astrophysics Data System (ADS)
Yin, Zhi; Xu, Caijun; Wen, Yangmao; Jiang, Guoyan; Fan, Qingbiao; Liu, Yang
2016-05-01
Planar faults are widely adopted during inversions to determine slip distributions and fault geometries using geodetic observations; however, little research has been conducted with respect to curved faults. We attribute this to the lack of an appropriate parameterized modelling method. In this paper, we present a curved-fault modelling method (CFMM) that describes a curved fault according to specific parameters, and we also develop a corresponding hybrid iterative inversion algorithm (HIIA) to perform inversions for parametric curved-fault geometries and slips. The results of the strike-component and dip-component synthetic tests show that a complex S-shaped fault surface and a circular slip distribution are successfully recovered, indicating the strong performance of the CFMM and HIIA methods. In addition, we describe and verify a scenario for determining the number of necessary geometrical parameters for the HIIA and examine the case study of the Wenchuan earthquake, which occurred on a complex listric fault surface. During the iteration process of the HIIA, both the fault geometry and slip distribution of the Beichuan and Pengguan faults converge to optimal values, indicating a Beichuan fault (BCF) model with a continuous listric shape and gradual steepening from the southwest to the northeast, which is highly consistent with geological survey results. Both the synthetic and real-world case studies show that the HIIA and the CFMM are superior to the conventional fault modelling method based on rectangular planes and that these models have the potential for use in more integrated research involving inversion studies, such as joint slip/curved-fault-geometry inversions that take into account data resolving power.
X-Ray Imaging-Spectroscopy of Abell 1835
NASA Technical Reports Server (NTRS)
Peterson, J. R.; Paerels, F. B. S.; Kaastra, J. S.; Arnaud, M.; Reiprich T. H.; Fabian, A. C.; Mushotzky, R. F.; Jernigan, J. G.; Sakelliou, I.
2000-01-01
We present detailed spatially-resolved spectroscopy results of the observation of Abell 1835 using the European Photon Imaging Cameras (EPIC) and the Reflection Grating Spectrometers (RGS) on the XMM-Newton observatory. Abell 1835 is a luminous (10^46 erg/s), medium-redshift (z = 0.2523), X-ray emitting cluster of galaxies. The observations support the interpretation that large amounts of cool gas are present in a multi-phase medium surrounded by a hot (kT_e = 8.2 keV) outer envelope. We detect O VIII Lyα and two Fe XXIV complexes in the RGS spectrum. The emission measure of the cool gas below kT_e = 2.7 keV is much lower than expected from standard cooling-flow models, suggesting either a more complicated cooling process than simple isobaric radiative cooling or differential cold absorption of the cooler gas.
Basin mass dynamic changes in China from GRACE based on a multibasin inversion method
NASA Astrophysics Data System (ADS)
Yi, Shuang; Wang, Qiuyu; Sun, Wenke
2016-05-01
Complex landforms, miscellaneous climates, and enormous populations have influenced various geophysical phenomena in China, which range from water depletion in the underground to retreating glaciers on high mountains and have attracted abundant scientific interest. This paper, which utilizes gravity observations during 2003-2014 from the Gravity Recovery and Climate Experiment (GRACE), intends to comprehensively estimate the mass status in 16 drainage basins in the region. We propose a multibasin inversion method that features resistance to stripe noise and an ability to alleviate signal attenuation from the truncation and smoothing of GRACE data. The results show both positive and negative trends. Tremendous mass accumulation has occurred from the Tibetan Plateau (12.1 ± 0.6 Gt/yr) to the Yangtze River (7.7 ± 1.3 Gt/yr) and southeastern coastal areas, which is suggested to involve an increase in the groundwater storage, lake and reservoir water volume, and the flow of materials from tectonic processes. Additionally, mass loss has occurred in the Huang-Huai-Hai-Liao River Basin (-10.2 ± 0.9 Gt/yr), the Brahmaputra-Nujiang-Lancang River Basin (-15.0 ± 1.1 Gt/yr), and Tienshan Mountain (-4.1 ± 0.3 Gt/yr), a result of groundwater pumping and glacier melting. Areas with groundwater depletion are consistent with the distribution of cities with land subsidence in North China. We find that intensified precipitation can alter the local water supply and that GRACE can adequately capture these dynamics, which could be instructive for China's South-to-North Water Diversion hydrologic project.
Full-Waveform Inversion Method for Data Measured by the CONSERT Instrument aboard Rosetta
NASA Astrophysics Data System (ADS)
Statz, C.; Plettemeier, D.; Herique, A.; Kofman, W. W.
2014-12-01
The primary scientific objective of the Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT) aboard Rosetta is to perform a dielectric characterization of comet 67P/Churyumov-Gerasimenko's nucleus by means of a bi-static sounding between the lander Philae, launched onto the comet's surface, and the orbiter Rosetta. For the sounding, the lander part of CONSERT receives and processes the radio signal emitted by the orbiter part of the instrument and transmits a signal back to be received by the orbiter. With data measured during the first science phase, a three-dimensional model of the material distribution within the comet's nucleus, in terms of the complex dielectric permittivity, is to be reconstructed. In order to perform the 3D characterization of the nucleus, we employ a full-waveform, least-squares-based inversion in the time domain. The reconstruction is performed on the envelope of the received signal. The direct problem of simulating wave propagation inside the comet's nucleus is modelled using a wideband nonstandard finite-difference time-domain approach, with a compensation method to account for the differences in free-space path loss caused by the removal of the carrier in the simulation. This approach yields an approximation of the permittivity distribution, including features that are large compared to the bandwidth of the sounding signal. To account for restrictions on the measurement positions imposed by the orbitography and for the limited dynamic range of the instrument, we employ a regularization technique in which the permittivity distribution and its gradient are projected onto a domain defined by a viable model of the spatial material distribution. The least-squares optimization step of the reconstruction is performed in this domain on a reduced set of parameters. To demonstrate the viability of the proposed approaches, we provide reconstruction results based on simulation data and scale-model laboratory
NASA Astrophysics Data System (ADS)
Bellet, Michel; Massoni, Elisabeth; Boude, Serge
2004-06-01
Superplastic forming is a thermoforming-like process commonly applied to titanium and aluminum alloys at high temperature under specific conditions. This paper presents the application of an inverse analysis technique to the identification of rheological and tribological parameters. The method consists of two steps. First, two different kinds of forming tests were carried out for rheological and tribological identification, using specific mold shapes. Accurate instrumentation and measurements were used to build an experimental database of appropriate observables. In the second step, an inverse method was developed. It consists of minimizing an objective function representing the distance, in a least-squares sense, between measured and calculated values of the observables. The algorithm, which is coupled with the finite element model FORGE2®, is based on a Gauss-Newton method, with a sensitivity matrix calculated by the semi-analytical method.
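As a minimal sketch of this second step, the loop below implements a generic Gauss-Newton iteration. The Norton-Hoff-type power law and its observable values are hypothetical stand-ins for the forming-test database, and the sensitivity (Jacobian) matrix is built by finite differences rather than by the semi-analytical scheme coupled to FORGE2®:

```python
import numpy as np

def gauss_newton(residual, p0, n_iter=20, fd_step=1e-6):
    """Minimize 0.5*||r(p)||^2 with a Gauss-Newton iteration.

    The sensitivity matrix is approximated by finite differences here;
    the paper computes it semi-analytically inside the FE solver.
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = fd_step
            J[:, j] = (residual(p + dp) - r) / fd_step
        # Gauss-Newton step from the normal equations
        p = p - np.linalg.solve(J.T @ J, J.T @ r)
    return p

# Toy identification: fit a Norton-Hoff-type law sigma = K * rate**m
# (hypothetical observables standing in for the forming-test data)
rates = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
K_true, m_true = 120.0, 0.45
measured = K_true * rates**m_true

res = lambda p: p[0] * rates**p[1] - measured
K_fit, m_fit = gauss_newton(res, p0=[80.0, 0.3])
```

On this noise-free toy the iteration recovers both parameters to high precision; real identifications minimize the distance between measured and simulated observables instead.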
NASA Astrophysics Data System (ADS)
Monteiller, Vadim; Chevrot, Sébastien; Komatitsch, Dimitri; Wang, Yi
2015-08-01
We present a method for high-resolution imaging of lithospheric structures based on full waveform inversion of teleseismic waveforms. We model the propagation of seismic waves using our recently developed direct solution method/spectral-element method hybrid technique, which allows us to simulate the propagation of short-period teleseismic waves through a regional 3-D model. We implement an iterative quasi-Newton method based upon the L-BFGS algorithm, where the gradient of the misfit function is computed using the adjoint-state method. Compared to gradient or conjugate-gradient methods, the L-BFGS algorithm has a much faster convergence rate. We illustrate the potential of this method on a synthetic test case that consists of a crustal model with a crustal discontinuity at 25 km depth and a sharp Moho jump. This model contains short- and long-wavelength heterogeneities along the lateral and vertical directions. The iterative inversion starts from a smooth 1-D model derived from the IASP91 reference Earth model. We invert both radial and vertical component waveforms, starting from long-period signals filtered at 10 s and gradually decreasing the cut-off period down to 1.25 s. This multiscale algorithm quickly converges towards a model that is very close to the true model, in contrast to inversions involving short-period waveforms only, which always get trapped into a local minimum of the cost function.
NASA Astrophysics Data System (ADS)
Yadav, V.; Shiga, Y. P.; Michalak, A. M.
2012-12-01
The accurate spatio-temporal quantification of fossil fuel emissions is a scientific challenge. Atmospheric inverse models have the capability to overcome this challenge and provide estimates of fossil fuel emissions. Observational and computational limitations restrict current analyses to estimating a combined "biospheric flux plus fossil-fuel emissions" carbon dioxide (CO2) signal at coarse spatial and temporal resolution. Even in these coarse-resolution inverse models, disaggregating a strong biospheric signal from a weaker fossil-fuel signal has proven difficult. The use of multiple tracers (delta 14C, CO, CH4, etc.) has provided a potential path forward, but challenges remain. In this study, we attempt to disaggregate biospheric fluxes and fossil-fuel emissions on the basis of error covariance models rather than through tracer-based CO2 inversions. The goal is to more accurately define the underlying structure of the two processes by using a stationary exponential covariance model for the biospheric fluxes in conjunction with a semi-stationary covariance model derived from nightlights for fossil-fuel emissions. A non-negativity constraint on fossil-fuel emissions is imposed using a data transformation approach embedded in an iterative quasi-linear inverse modeling algorithm. The study is performed for January and June 2008, using the ground-based CO2 measurement network over North America. The quality of the disaggregation is examined by comparing the inferred spatial distributions of biospheric fluxes and fossil-fuel emissions in a synthetic-data inversion. In addition to the disaggregation of fluxes, the ability of the nightlight-derived covariance models to explain fossil-fuel emissions over North America is also examined. The simple covariance model proposed in this study is found to improve the estimation and disaggregation of fossil-fuel emissions from biospheric fluxes in tracer-based inverse models.
Inversion of lunar regolith layer thickness with CELMS data using BPNN method
NASA Astrophysics Data System (ADS)
Meng, Zhiguo; Xu, Yi; Zheng, Yongchun; Zhu, Yongchao; Jia, Yu; Chen, Shengbo
2014-10-01
Inversion of the lunar regolith layer thickness is one of the scientific objectives of current Moon research. In this paper, the global lunar regolith layer thickness is inverted with the back propagation neural network (BPNN) technique. First, radiative transfer simulation is employed to study the relationship between the lunar regolith layer thickness d and the observed brightness temperatures TB. The simulation results show that parameters such as the surface roughness σ, slope θs and the (FeO+TiO2) abundance S have a strong influence on the observed TB. Therefore, TB, σ, θs and S are selected as the inputs of the BPNN network. Next, a four-layer BPNN network with a seven-dimensional input and two hidden layers is constructed, using sigmoid functions to account for nonlinearity. Then, the BPNN network is trained with the corresponding parameters collected at the Apollo landing sites. To tackle issues introduced by the small number of training samples, a six-dimensional similarity degree is introduced to indicate the similarity of the inversion results to the corresponding training samples; the output lunar regolith layer thickness is then defined as the sum of the products of the similarity degree and the thickness at the corresponding landing site. Once the training phase finishes, the lunar regolith layer thickness can be inverted rapidly from the four-channel TB obtained from the CELMS data, σ and θs estimated from LOLA data, and S derived from Clementine UV/vis data. The inverted thickness agrees well with the values estimated from ground-based radar data in low-latitude regions. The results indicate that the thickness in the maria varies from about 0.5 m to 12 m, with a mean of about 6.52 m, while the thickness in the highlands is somewhat greater than previous estimates, varying widely from 10 m to 31.5 m with a mean of about 16.8 m. In addition, the relation between the ages, the (FeO+TiO2) abundance and the
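The network architecture described above can be sketched as a small NumPy feed-forward net trained by back-propagation. The seven-dimensional input mirrors the abstract (four TB channels plus σ, θs and S), but the hidden-layer sizes, learning rate and training data below are arbitrary synthetic choices, not the Apollo-site training set:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Four-layer net: 7 inputs (four TB channels, sigma, theta_s, S),
# two hidden layers, one output (thickness rescaled to [0, 1]).
sizes = [7, 10, 10, 1]
W = [rng.normal(0.0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]

X = rng.uniform(0.0, 1.0, (200, 7))              # synthetic inputs
y = sigmoid(X.sum(axis=1, keepdims=True) - 3.5)  # arbitrary smooth target

lr = 0.5
for epoch in range(5000):
    # forward pass, keeping every activation for back-propagation
    acts = [X]
    for Wl, bl in zip(W, b):
        acts.append(sigmoid(acts[-1] @ Wl + bl))
    # backward pass for a mean-squared-error loss
    delta = (acts[-1] - y) * acts[-1] * (1.0 - acts[-1])
    for l in reversed(range(len(W))):
        grad_W = acts[l].T @ delta / len(X)
        grad_b = delta.mean(axis=0)
        if l > 0:  # propagate the error before overwriting W[l]
            delta = (delta @ W[l].T) * acts[l] * (1.0 - acts[l])
        W[l] -= lr * grad_W
        b[l] -= lr * grad_b

# final forward pass with the trained weights
pred = X
for Wl, bl in zip(W, b):
    pred = sigmoid(pred @ Wl + bl)
mse = float(((pred - y) ** 2).mean())
```

After training, the fit error falls well below the variance of the target, which is all this sketch is meant to show; the similarity-degree weighting of the paper is a separate post-processing step.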
Solution of inverse heat conduction problem using the Tikhonov regularization method
NASA Astrophysics Data System (ADS)
Duda, Piotr
2017-02-01
It is hard to solve ill-posed problems, as the calculated temperatures are very sensitive to errors made while calculating "measured" temperatures or performing real-time measurements. These errors can create temperature oscillations, which can make the solution unstable. In order to overcome such difficulties, a variety of techniques have been proposed in the literature, including regularization, future time steps and smoothing digital filters. In this paper, the Tikhonov regularization is applied to stabilize the solution of the inverse heat conduction problem. Its impact on the stability and accuracy of the inverse solution is demonstrated.
NASA Astrophysics Data System (ADS)
Fujimoto, Ken'ichi; Tanaka, Yoshihiro; Abou Al-Ola, Omar M.; Yoshinaga, Tetsuya
2014-06-01
We propose a novel approach for solving box-constrained inverse problems in intensity-modulated radiation therapy (IMRT) treatment planning based on the idea of continuous dynamical methods and split-feasibility algorithms. Our method can compute a feasible solution without the second derivative of an objective function, which is required for gradient-based optimization algorithms. We prove theoretically that a double Kullback-Leibler divergence can be used as the Lyapunov function for the IMRT planning system.
The GenABEL Project for statistical genomics
Karssen, Lennart C.; van Duijn, Cornelia M.; Aulchenko, Yurii S.
2016-01-01
Development of free/libre open source software is usually done by a community of people with an interest in the tool. For scientific software, however, this is less often the case. Most scientific software is written by only a few authors, often a student working on a thesis. Once the paper describing the tool has been published, the tool is no longer developed further and is left to its own devices. Here we describe the broad, multidisciplinary community we formed around a set of tools for statistical genomics. The GenABEL project for statistical omics actively promotes open interdisciplinary development of statistical methodology and its implementation in efficient and user-friendly software under an open source licence. The software tools developed within the project collectively make up the GenABEL suite, which currently consists of eleven tools. The open framework of the project actively encourages involvement of the community in all stages, from the formulation of methodological ideas to the application of software to specific data sets. A web forum is used to channel user questions and discussions, further promoting the use of the GenABEL suite. Developer discussions take place on a dedicated mailing list, and development is further supported by robust development practices, including the use of public version control, code review and continuous integration. Use of this open science model attracts contributions from users and developers outside the "core team", facilitating agile statistical omics methodology development and fast dissemination. PMID:27347381
Jiang, Hai-ling; Yang, Hang; Chen, Xiao-ping; Wang, Shu-dong; Li, Xue-ke; Liu, Kai; Cen, Yi
2015-04-01
Spectral index methods are widely applied to the inversion of crop chlorophyll content. In the present study, a PSR3500 spectrometer and a SPAD-502 chlorophyll meter were used to acquire the spectra and relative chlorophyll content (SPAD value) of winter wheat leaves on May 2, 2013, at the jointing stage of winter wheat. The measured spectra were then resampled to simulate TM multispectral data and Hyperion hyperspectral data, respectively, using the Gaussian spectral response function. We chose four typical spectral indices constructed from feature bands sensitive to vegetation chlorophyll: the normalized difference vegetation index (NDVI), the triangle vegetation index (TVI), the ratio of the modified transformed chlorophyll absorption ratio index (MCARI) to the optimized soil adjusted vegetation index (OSAVI) (MCARI/OSAVI), and the vegetation index based on universal pattern decomposition (VIUPD). After calculating these spectral indices from the resampled TM and Hyperion data, regression equations between the spectral indices and chlorophyll content were established. For TM, the results indicate that VIUPD has the best correlation with chlorophyll (R² = 0.8197), followed by NDVI (R² = 0.7918), while MCARI/OSAVI and TVI also show good correlations with R² higher than 0.5. For the simulated Hyperion data, VIUPD again ranks first with R² = 0.8171, followed by MCARI/OSAVI (R² = 0.6586), while NDVI and TVI show very low values with R² less than 0.2. It was demonstrated that VIUPD has the best accuracy and stability for estimating the chlorophyll of winter wheat whether using simulated TM data or Hyperion data, which reaffirms that VIUPD is comparatively sensor independent. The chlorophyll estimation accuracy and stability of MCARI/OSAVI are also good, partly because OSAVI reduces the influence of backgrounds. The two broadband spectral indices NDVI and TVI are weak for chlorophyll estimation from simulated Hyperion data mainly because of
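The simplest of the four indices can be sketched end to end. The reflectance model and SPAD values below are synthetic stand-ins for the winter-wheat measurements, used only to show how an index-chlorophyll regression and its R² are computed:

```python
import numpy as np

# NDVI from red and near-infrared reflectance, regressed against
# synthetic SPAD readings (hypothetical band model, not the field data).
rng = np.random.default_rng(3)
spad = rng.uniform(30.0, 60.0, 40)                      # mock SPAD values
nir = 0.40 + 0.004 * spad + 0.01 * rng.standard_normal(40)
red = 0.20 - 0.002 * spad + 0.01 * rng.standard_normal(40)

ndvi = (nir - red) / (nir + red)

# least-squares line spad ~ a * ndvi + c, and its R^2
A = np.vstack([ndvi, np.ones_like(ndvi)]).T
(a, c), *_ = np.linalg.lstsq(A, spad, rcond=None)
pred = a * ndvi + c
r2 = 1.0 - ((spad - pred) ** 2).sum() / ((spad - spad.mean()) ** 2).sum()
```

The same regression-and-R² procedure applies to TVI, MCARI/OSAVI and VIUPD once each index is computed from its own band combination.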
Wang, Fei; Lin, Qi-zhong; Wang, Qin-jun; Li, Shuai
2011-05-01
Rapid identification of minerals in the field is crucial in remote sensing geology studies and mineral exploration. The characteristic spectrum linear inversion model can obtain mineral information quickly in field studies. However, the authors found significant differences among the results of the model when using different kinds of spectra of the same sample. The present paper mainly studied the continuum-based fast Fourier transform (CFFT) processing method and the characteristic spectrum linear inversion modeling (CSLM). On the one hand, the authors obtained the optimal settings of the CFFT method when applying it to rock samples: the CFFT low-pass frequency should be set to 150 Hz. On the other hand, through evaluation of the CSLM results obtained with different spectra, the authors found that ASD spectra denoised with the CFFT method provide better results when used to extract mineral information in the field.
RADIO AND DEEP CHANDRA OBSERVATIONS OF THE DISTURBED COOL CORE CLUSTER ABELL 133
Randall, S. W.; Nulsen, P. E. J.; Forman, W. R.; Murray, S. S.; Clarke, T. E.; Owers, M. S.; Sarazin, C. L.
2010-10-10
We present results based on new Chandra and multi-frequency radio observations of the disturbed cool core cluster Abell 133. The diffuse gas has a complex bird-like morphology, with a plume of emission extending from two symmetric wing-like features. The plume is capped with a filamentary radio structure that has been previously classified as a radio relic. X-ray spectral fits in the region of the relic indicate the presence of either high-temperature gas or non-thermal emission, although the measured photon index is flatter than would be expected if the non-thermal emission is from inverse Compton scattering of the cosmic microwave background by the radio-emitting particles. We find evidence for a weak elliptical X-ray surface brightness edge surrounding the core, which we show is consistent with a sloshing cold front. The plume is consistent with having formed due to uplift by a buoyantly rising radio bubble, now seen as the radio relic, and has properties consistent with buoyantly lifted plumes seen in other systems (e.g., M87). Alternatively, the plume may be a gas sloshing spiral viewed edge-on. Results from spectral analysis of the wing-like features are inconsistent with the previous suggestion that the wings formed due to the passage of a weak shock through the cool core. We instead conclude that the wings are due to X-ray cavities formed by displacement of X-ray gas by the radio relic. The central cD galaxy contains two small-scale cold gas clumps that are slightly offset from their optical and UV counterparts, suggestive of a galaxy-galaxy merger event. On larger scales, there is evidence for cluster substructure in both optical observations and the X-ray temperature map. We suggest that the Abell 133 cluster has recently undergone a merger event with an interloping subgroup, initiating gas sloshing in the core. The torus of sloshed gas is seen close to edge-on, leading to the somewhat ragged appearance of the elliptical surface brightness edge. We show
NASA Astrophysics Data System (ADS)
Edwards, L. O. V.; Alpert, H. S.; Trierweiler, I. L.; Abraham, T.; Beizer, V. G.
2016-09-01
We present the first results from an integral field unit (IFU) spectroscopic survey of a ˜75 kpc region around three brightest cluster galaxies (BCGs), combining over 100 IFU fibres to study the intracluster light (ICL). We fit population synthesis models to estimate age and metallicity. For Abell 85 and Abell 2457, the ICL is best-fit with a fraction of old, metal-rich stars like in the BCG, but requires 30-50 per cent young and metal-poor stars, a component not found in the BCGs. This is consistent with the ICL having been formed by a combination of interactions with less massive, younger, more metal-poor cluster members in addition to stars that form the BCG. We find that the three galaxies are in different stages of evolution and may be the result of different formation mechanisms. The BCG in Abell 85 is near a relatively young, metal-poor galaxy, but the dynamical friction time-scale is long and the two are unlikely to be undergoing a merger. The outer regions of Abell 2457 show a higher relative fraction of metal-poor stars, and we find one companion, with a higher fraction of young, metal-poor stars than the BCG, which is likely to merge within a gigayear. Several luminous red galaxies are found at the centre of the cluster IIZw108, with short merger time-scales, suggesting that the system is about to embark on a series of major mergers to build up a dominant BCG. The young, metal-poor component found in the ICL is not found in the merging galaxies.
A weak-lensing analysis of the Abell 383 cluster
NASA Astrophysics Data System (ADS)
Huang, Z.; Radovich, M.; Grado, A.; Puddu, E.; Romano, A.; Limatola, L.; Fu, L.
2011-05-01
Aims: We use deep CFHT and SUBARU uBVRIz archival images of the Abell 383 cluster (z = 0.187) to estimate its mass by weak lensing. Methods: To this end, we first use simulated images to check the accuracy provided by our Kaiser-Squires-Broadhurst (KSB) pipeline. These simulations include shear testing programme (STEP) 1 and 2 simulations, as well as more realistic simulations of the distortion of galaxy shapes by a cluster with a Navarro-Frenk-White (NFW) profile. From these simulations we estimate the effect of noise on shear measurement and derive the correction terms. The R-band image is used to derive the mass by fitting the observed tangential shear profile with an NFW mass profile. Photometric redshifts are computed from the uBVRIz catalogs. Different methods for the foreground/background galaxy selection are implemented, namely selection by magnitude, color, and photometric redshifts, and the results are compared. In particular, we developed a semi-automatic algorithm to select the foreground galaxies in the color-color diagram, based on the observed colors. Results: Using color selection or photometric redshifts improves the correction of dilution from foreground galaxies: this leads to higher signals in the inner parts of the cluster. We obtain a cluster mass M_vir = 7.5^{+2.7}_{-1.9} × 10^14 M⊙: this value is 20% higher than previous estimates and is more consistent with the mass expected from X-ray data. The R-band luminosity function of the cluster is computed and gives a total luminosity L_tot = (2.14 ± 0.5) × 10^12 L⊙ and a mass-to-luminosity ratio M/L ≈ 300 M⊙/L⊙. Based on data collected with the Subaru Telescope (University of Tokyo) and obtained from SMOKA, which is operated by the Astronomy Data Center, National Astronomical Observatory of Japan, and on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada
NASA Astrophysics Data System (ADS)
Kirby, Jon F.
2014-09-01
The effective elastic thickness (Te) is a geometric measure of the flexural rigidity of the lithosphere, which describes the resistance to bending under the application of applied, vertical loads. As such, it is likely that its magnitude has a major role in governing the tectonic evolution of both continental and oceanic plates. Of the several ways to estimate Te, one has gained popularity in the 40 years since its development because it only requires gravity and topography data, both of which are now readily available and provide excellent coverage over the Earth and even the rocky planets and moons of the solar system. This method, the ‘inverse spectral method’, develops measures of the relationship between observed gravity and topography data in the spatial frequency (wavenumber) domain, namely the admittance and coherence. The observed measures are subsequently inverted against the predictions of thin, elastic plate models, giving estimates of Te and other lithospheric parameters. This article provides a review of inverse spectral methodology and the studies that have used it. It is not, however, concerned with the geological or geodynamic significance or interpretation of Te, nor does it discuss and compare Te results from different methods in different provinces. Since the three main aspects of the subject are thin elastic plate flexure, spectral analysis, and inversion methods, the article broadly follows developments in these. The review also covers synthetic plate modelling, and concludes with a summary of the controversy currently surrounding inverse spectral methods, whether or not the large Te values returned in cratonic regions are artefacts of the method, or genuine observations.
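The admittance and coherence estimates at the heart of the inverse spectral method can be sketched in one dimension; the transfer function and noise level below are arbitrary toys, and real studies work on 2-D grids with multitaper or wavelet spectral estimators:

```python
import numpy as np

# Admittance and coherence between synthetic 1-D "topography" and
# "gravity" profiles linked by a hypothetical linear isostatic response.
rng = np.random.default_rng(4)
n = 1024
topo = rng.standard_normal(n)
k = np.fft.rfftfreq(n)
response = 50.0 * np.exp(-k / 0.05)      # toy transfer function (mGal/m)
grav = np.fft.irfft(response * np.fft.rfft(topo), n) \
       + 0.1 * rng.standard_normal(n)    # add observational noise

def spectra(a, b, nseg=16):
    """Segment-averaged auto- and cross-spectra (stabilizes the estimates)."""
    m = n // nseg
    A = np.fft.rfft(a.reshape(nseg, m), axis=1)
    B = np.fft.rfft(b.reshape(nseg, m), axis=1)
    Sab = (np.conj(A) * B).mean(axis=0)
    Saa = (np.abs(A) ** 2).mean(axis=0)
    Sbb = (np.abs(B) ** 2).mean(axis=0)
    return Sab, Saa, Sbb

Shg, Shh, Sgg = spectra(topo, grav)
admittance = (Shg / Shh).real            # gravity/topography transfer estimate
coherence = np.abs(Shg) ** 2 / (Shh * Sgg)
```

The observed admittance and coherence curves are what would then be inverted against thin elastic plate predictions to estimate Te; by the Cauchy-Schwarz inequality the averaged coherence is bounded by 1, and it stays near 1 at long wavelengths where the noise is negligible.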
An inversion method for retrieving soil moisture information from satellite altimetry observations
NASA Astrophysics Data System (ADS)
Uebbing, Bernd; Forootan, Ehsan; Kusche, Jürgen; Braakmann-Folgmann, Anne
2016-04-01
Soil moisture represents an important component of the terrestrial water cycle that controls evapotranspiration and vegetation growth. Consequently, knowledge of soil moisture variability is essential to understand the interactions between land and atmosphere. Yet, terrestrial measurements are sparse and their information content is limited due to the large spatial variability of soil moisture. Therefore, over the last two decades, several active and passive radar and satellite missions such as ERS/SCAT, AMSR, SMOS or SMAP have been providing backscatter information that can be used to estimate surface conditions including soil moisture, which is proportional to the dielectric constant of the upper (few cm) soil layers. Another source of soil moisture information is satellite radar altimeters, originally designed to measure sea surface height over the oceans. Measurements of Jason-1/2 (Ku- and C-band) or Envisat (Ku- and S-band) nadir radar backscatter provide high-resolution along-track information (~300 m along-track resolution) on backscatter every ~10 days (Jason-1/2) or ~35 days (Envisat). Recent studies found good correlation between backscatter and soil moisture in upper layers, especially in arid and semi-arid regions, indicating the potential of satellite altimetry both to reconstruct and to monitor soil moisture variability. However, measuring soil moisture using altimetry has some drawbacks, including: (1) the noisy behavior of the altimetry-derived backscatter (due to, e.g., the existence of surface water in the radar footprint), (2) the strong assumptions needed for converting altimetry backscatter to soil moisture storage changes, and (3) the need for interpolating between the tracks. In this study, we suggest a new inversion framework that allows us to retrieve soil moisture information from along-track Jason-2 and Envisat satellite altimetry data, and we test this scheme over the Australian arid and semi-arid regions. Our method consists of: (i
NASA Astrophysics Data System (ADS)
Olsen, Scott Charles
In this dissertation, new inverse scattering algorithms are derived for the Helmholtz equation using the Extended Born field model (eikonal rescattered field), and the angular spectrum (parabolic) layered field model. These two field models performed the 'best' of all the field models evaluated. Algorithms are solved with conjugate gradient methods. An advanced ultrasonic data acquisition system is also designed. Many different field models for use in a reconstruction algorithm are investigated. 'Layered' field models that mathematically partition the field calculation in layers in space possess the advantage that the field in layer n is calculated from the field in layer n - 1. Several of the 'layered' field models are investigated in terms of accuracy and computational complexity. Field model accuracy using field rescattering is also tested. The models investigated are the eikonal field model, the angular spectrum (AS) field model, and the parabolic field models known as the Split-Step Fast-Fourier Transform and the Crank-Nicolson algorithms. All of the 'layered' field models can be referred to as Extended Born field models since the 'layered' field models are more accurate than the Born approximated total field. The Rescattered Extended Born (eikonal rescattered field) Transmission Mode (REBTM) algorithm with the AS field model and the Nonrescattered AS Reconstruction (NASR) algorithm are tested with several types of objects: a single-layer cylinder, double-layer cylinders, two double-layer cylinders and the breast model. Both algorithms, REBTM and NASR work well; however, the NASR algorithm is faster and more accurate than the REBTM algorithm. The NASR algorithm is matched well with the requirements of breast model reconstructions. A major purpose of new scanner development is to collect both transmission and reflection data from multiple ultrasonic transducer arrays to test the next generation of reconstruction algorithms. The data acquisition system advanced
Hassaballah, Abdallah I.; Hassan, Mohsen A.; Mardi, Azizi N.; Hamdi, Mohd
2013-01-01
The determination of the myocardium’s tissue properties is important in constructing functional finite element (FE) models of the human heart. To obtain accurate properties, especially for functional modeling of a heart, tissue properties have to be determined in vivo. At present, there are only a few in vivo methods that can be applied to characterize the internal myocardium tissue mechanics. This work introduced and evaluated an FE inverse method to determine the myocardial tissue compressibility. Specifically, it combined an inverse FE method with the experimentally-measured left ventricular (LV) internal cavity pressure and volume versus time curves. Results indicated that the FE inverse method showed good correlation between LV repolarization and the variations in the myocardium tissue bulk modulus K (K = 1/compressibility), as well as provided an ability to describe in vivo human myocardium material behavior. The myocardium bulk modulus can be effectively used as a diagnostic tool of the heart ejection fraction. The model developed proved to be robust and efficient. It offers a new perspective and means for the study of living-myocardium tissue properties, as it shows the variation of the bulk modulus throughout the cardiac cycle. PMID:24367544
Leyre, Sven; Meuret, Youri; Durinck, Guy; Hofkens, Johan; Deconinck, Geert; Hanselaer, Peter
2014-04-01
The accuracy of optical simulations including bulk diffusors is heavily dependent on the accuracy of the bulk scattering properties. If no knowledge on the physical scattering effects is available, an iterative procedure is usually used to obtain the scattering properties, such as the inverse Monte Carlo method or the inverse adding-doubling (AD) method. In these methods, a predefined phase function with one free parameter is usually used to limit the number of free parameters. In this work, three predefined phase functions (Henyey-Greenstein, two-term Henyey-Greenstein, and Gegenbauer kernel (GK) phase function) are implemented in the inverse AD method to determine the optical properties of two strongly diffusing materials: low-density polyethylene and TiO₂ particles. Using the presented approach, an estimation of the effective phase function was made. It was found that the use of the GK phase function resulted in the best agreement between calculated and experimental transmittance, reflectance, and scattered radiant intensity distribution for the LDPE sample. For the TiO₂ sample, a good agreement was obtained with both the two-term Henyey-Greenstein and the GK phase function.
NASA Astrophysics Data System (ADS)
Li, Liang; Chen, Zhiqiang; Xu, Ronglan; Huang, Ya
2011-12-01
The plasma density distribution of the plasmasphere in the geomagnetic equatorial plane can help us study magnetospheric regions such as the plasmasphere and ionosphere and their dynamics. In this paper, we introduce a new inversion method, GE-ART, to calculate the plasma density distribution in the geomagnetic equatorial plane from the Extreme Ultraviolet (EUV) data of the IMAGE satellite, under the assumption that the plasma density is constant along each geomagnetic field line. The new GE-ART algorithm is derived from the traditional Algebraic Reconstruction Technique (ART) of Computed Tomography (CT) and differs from the several existing methods. In this new method, each value of the EUV image data is back-projected evenly onto the geomagnetic field lines intersected by the corresponding EUV line of sight. A 3-D inversion matrix is produced from the contributions of all the voxels of the plasmasphere covered by the EUV sensor. That is, we consider each value of the EUV image data to be related to the plasma densities of all the voxels passed through by the corresponding EUV radiation, which is the biggest difference from all existing inversion methods. Finally, the GE-ART algorithm is evaluated using real EUV data from the IMAGE satellite.
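GE-ART is derived from the classical ART of CT; the underlying Kaczmarz update, which back-projects each measurement along its ray, can be sketched on a toy system (the 3-voxel geometry and numbers are hypothetical, not the EUV field-line geometry):

```python
import numpy as np

def art(A, b, n_sweeps=200, relax=1.0):
    """Solve A x = b by cyclic ART (Kaczmarz) iterations."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            # project x onto the hyperplane a_i . x = b_i
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Toy 'tomography' system: 3 rays, each crossing 2 of 3 voxels.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
density_true = np.array([1.0, 2.0, 3.0])
b = A @ density_true            # noiseless line-integral measurements
print(art(A, b))                # converges toward density_true
```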
NASA Astrophysics Data System (ADS)
Shin, Wae-Gyeong; Lee, Soo-Hong
Reliability of automotive parts has been one of the most interesting fields in the automotive industry. Small DC motors in particular have drawn attention because of their increasing adoption for passenger safety and convenience features. This study was performed to develop an accelerated life test method for small DC motors using the inverse power law model. The failure modes of small DC motors include brush wear-out. The inverse power law model is applied effectively to electrical components to reduce testing time and to achieve accelerated test conditions. The accelerated life test method was designed to induce brush wear-out by increasing the motor voltage. The life distribution of the small DC motor was assumed to follow a Weibull distribution, and the life test time was calculated under the conditions of B10 life and a 90% confidence level.
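The quantities involved can be sketched with hypothetical numbers (the voltages, Weibull parameters, and inverse-power-law exponent below are illustrative assumptions, not values from the study):

```python
import math

def ipl_acceleration_factor(v_use, v_acc, n):
    """Inverse power law life model L(V) = A * V**-n  =>  AF = (v_acc/v_use)**n."""
    return (v_acc / v_use) ** n

def weibull_b10(eta, beta):
    """B10 life (10% failure quantile) of a Weibull(eta, beta) distribution."""
    return eta * (-math.log(0.9)) ** (1.0 / beta)

AF = ipl_acceleration_factor(v_use=12.0, v_acc=18.0, n=3.0)   # hypothetical
b10 = weibull_b10(eta=5000.0, beta=1.5)                       # hours, hypothetical
test_time = b10 / AF      # equivalent test duration at the elevated voltage
print(AF, b10, test_time)
```

Demonstrating the B10 requirement at a 90% confidence level additionally fixes the required sample size and per-unit test duration, which depend on the assumed Weibull shape parameter.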
Basis set expansion for inverse problems in plasma diagnostic analysis
Jones, B.; Ruiz, C. L.
2013-07-15
A basis set expansion method [V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002)] is applied to recover physical information about plasma radiation sources from instrument data, which has been forward transformed due to the nature of the measurement technique. This method provides a general approach for inverse problems, and we discuss two specific examples relevant to diagnosing fast z pinches on the 20–25 MA Z machine [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, J. R. Woodworth, C. W. Mendel, K. R. Prestwich, R. W. Shoup, D. L. Johnson, J. P. Corley, K. C. Hodge, T. C. Wagoner, and P. E. Wakeland, in Proceedings of the Pulsed Power Plasma Sciences Conference (IEEE, 2007), p. 979]. First, Abel inversion of time-gated, self-emission x-ray images from a wire array implosion is studied. Second, we present an approach for unfolding neutron time-of-flight measurements from a deuterium gas puff z pinch to recover information about emission time history and energy distribution. Through these examples, we discuss how noise in the measured data limits the practical resolution of the inversion, and how the method handles discontinuities in the source function and artifacts in the projected image. We add to the method a propagation of errors calculation for estimating uncertainties in the inverted solution.
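A heavily simplified 1-D sketch of the basis-set-expansion idea for the Abel-inversion example: expand the radial source in Gaussian basis functions, forward Abel-project each one numerically, and solve for the coefficients by least squares. The basis choice, grids, and test profile here are illustrative assumptions, not the cited method's actual basis.

```python
import numpy as np

def abel_project(f, y, r_max=6.0, n=400):
    """Abel transform F(y) = 2 * int_0^smax f(sqrt(y^2 + s^2)) ds,
    smax = sqrt(r_max^2 - y^2); the substitution r = sqrt(y^2 + s^2)
    removes the integrable singularity at r = y."""
    s = np.linspace(0.0, np.sqrt(r_max**2 - y**2), n)
    vals = f(np.sqrt(y**2 + s**2))
    return 2.0 * np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(s))

# Gaussian basis functions on a radial grid (illustrative choice).
centers = np.linspace(0.0, 4.0, 15)
width = 0.5
basis = [lambda r, c=c: np.exp(-((r - c) / width) ** 2) for c in centers]

# Synthetic data: the Abel transform of eps(r) = exp(-r^2) is sqrt(pi)*exp(-y^2).
y_obs = np.linspace(0.0, 3.5, 40)
F_obs = np.sqrt(np.pi) * np.exp(-y_obs**2)

# Design matrix: projection of each basis function at each impact parameter.
M = np.array([[abel_project(g, yy) for g in basis] for yy in y_obs])
coef, *_ = np.linalg.lstsq(M, F_obs, rcond=None)

def eps_hat(r):
    """Reconstructed radial emission profile."""
    return sum(a * g(r) for a, g in zip(coef, basis))

print(eps_hat(0.0), eps_hat(1.0))   # compare with exp(0), exp(-1)
```

As the abstract notes, noise in F_obs limits the practical resolution; in that case the least-squares step is typically regularized.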
NASA Astrophysics Data System (ADS)
Sergienko, Olga
2013-04-01
Since Doug MacAyeal's pioneering studies of ice-stream basal traction optimization by control methods, inversions for unknown parameters (e.g., basal traction, accumulation patterns, etc.) have become a hallmark of present-day ice-sheet modeling. The common feature of such inversion exercises is a direct relationship between the optimized parameters and the observations used in the optimization procedure. For instance, in the standard optimization for basal traction by the control method, ice-stream surface velocities constitute the control data. The optimized basal traction parameters explicitly appear in the momentum equations for the ice-stream velocities (compared to the control data). The inversion for basal traction is carried out by minimization of a cost (or objective, misfit) function that includes the momentum equations facilitated by Lagrange multipliers. Here, we build upon this idea, and demonstrate how to optimize for parameters indirectly related to observed data using a suite of nested constraints (like Russian dolls) with additional sets of Lagrange multipliers in the cost function. This method opens the opportunity to use data from a variety of sources and types (e.g., velocities, radar layers, surface elevation changes, etc.) in the same optimization process.
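The Lagrange-multiplier machinery can be illustrated on a scalar toy problem (entirely hypothetical, far simpler than the ice-stream equations): forward model m·u = f, cost J = ½(u − d)², with the adjoint-derived gradient checked against finite differences.

```python
# Lagrangian L = J + lam*(m*u - f); stationarity in u gives the adjoint
# equation lam = -(u - d)/m, and the gradient is dJ/dm = lam * u.

def forward(m, f=2.0):
    return f / m                       # solve the 'state equation' m*u = f

def cost(m, d=1.0):
    u = forward(m)
    return 0.5 * (u - d) ** 2

def adjoint_gradient(m, d=1.0):
    u = forward(m)
    lam = -(u - d) / m                 # adjoint solve
    return lam * u                     # dJ/dm

m0 = 1.5
g_adj = adjoint_gradient(m0)
g_fd = (cost(m0 + 1e-6) - cost(m0 - 1e-6)) / 2e-6   # finite-difference check
print(g_adj, g_fd)
```

The nested-constraint idea in the abstract amounts to stacking further such constraints, each with its own multiplier, so that parameters only indirectly tied to the data still receive a gradient.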
Structure of Abell 1995 from optical and X-ray data: a galaxy cluster with an elongated radio halo
NASA Astrophysics Data System (ADS)
Boschin, W.; Girardi, M.; Barrena, R.
2012-11-01
Context. Abell 1995 is a puzzling galaxy cluster hosting a powerful radio halo, but it has not yet been recognized as an obvious cluster merger, as usually expected for clusters with diffuse radio emission. Aims: We aim at an exhaustive analysis of the internal structure of Abell 1995 to verify whether this cluster is really dynamically relaxed, as reported in previous studies. Methods: We base our analysis on new and archival spectroscopic and photometric data for 126 galaxies in the field of Abell 1995. The study of the hot intracluster medium was performed on X-ray archival data. Results: Based on 87 fiducial cluster members, we have computed the average cluster redshift ⟨z⟩ = 0.322 and the global radial velocity dispersion σV ~ 1300 km s-1. We detect two main optical subclusters separated by 1.5' that cause the known NE-SW elongation of the galaxy distribution and a significant velocity gradient in the same direction. As for the X-ray analysis, we confirm that the intracluster medium is mildly elongated, but we also detect three X-ray peaks. Two X-ray peaks are offset with respect to the two galaxy peaks and lie between them, thus suggesting a bimodal merger caught in a phase of post core-core passage. The third X-ray peak lies between the NE galaxy peak and a third, minor galaxy peak, suggesting a more complex merger. The difficulty of separating the two main systems leads to a large uncertainty on the line-of-sight (LOS) velocity separation and the system mass: ΔVrf,LOS = 600-2000 km s-1 and Msys = 2-5 × 10^15 h70^-1 M⊙, respectively. Simple analytical arguments suggest a merging scenario for Abell 1995, where two main subsystems are seen just after the collision with an intermediate projection angle. Conclusions: The high mass of Abell 1995 and the evidence of merging suggest it is not atypical among clusters with known radio halos. Interestingly, our findings reinforce the previous evidence for the peculiar dichotomy between the dark matter and galaxy
NASA Astrophysics Data System (ADS)
Pagnacco, E.; de Cursi, E. Souza; Sampaio, R.
2016-07-01
This study concerns the computation of frequency responses of linear stochastic mechanical systems through a modal analysis. A new strategy, based on transposing standard deterministic deflated and subspace inverse power methods into a stochastic framework, is introduced via polynomial chaos representation. The applicability and effectiveness of the proposed schemes are demonstrated through three simple application examples and one realistic application example. It is shown that null and repeated-eigenvalue situations are addressed successfully.
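The deterministic building block being transposed here, the inverse power method, converges to the eigenpair of smallest-magnitude eigenvalue; a minimal sketch (the matrix is an arbitrary illustrative example, and the deflated, subspace, and polynomial-chaos variants build on this same iteration):

```python
import numpy as np

def inverse_power(A, n_iter=100, tol=1e-12):
    """Inverse power iteration: converges to the smallest-magnitude eigenpair."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(n_iter):
        y = np.linalg.solve(A, x)       # one application of A^-1
        x = y / np.linalg.norm(y)
        lam_new = x @ A @ x             # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, x

# Illustrative symmetric positive-definite matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, vec = inverse_power(A)
print(lam)                              # smallest eigenvalue of A
```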
NASA Astrophysics Data System (ADS)
Chen, H.; Li, K.
2012-12-01
We applied a wave-equation-based adjoint wavefield method for seismic illumination/resolution analyses and full waveform inversion. A two-way wave equation is used to calculate directional and diffracted energy fluxes for waves propagating between sources and receivers to the subsurface target. The first-order staggered-grid pressure-velocity formulation, which is not self-adjoint, is validated and corrected to render the modeling operator self-adjoint before its practical application. In contrast to most published work on synthetic kernels, realistic applications to two field experiments are demonstrated here to emphasize practical needs. The Fréchet sensitivity kernels are used to quantify the target illumination conditions. For realistic illumination measurements and resolution analyses, two completely different survey geometries and nontrivial pre-conditioning strategies based on seismic data type are demonstrated and compared. The illumination studies show that particle velocity responses are more sensitive to lateral velocity variations than pressure records. For waveform inversion, the more accurate the estimated velocity model, the greater the depth of investigation that can be reached. To achieve better resolution and illumination, a closely spaced OBS receiver interval is preferred. Based on these results, waveform inversion is applied to a gas hydrate site in Taiwan for shallow structure and BSR detection. The full waveform approach potentially provides better depth resolution than the ray approach. The quantitative analyses, a by-product of full waveform inversion, are useful for guiding seismic processing and depth migration strategies. [Figure captions: Illumination/resolution analysis for a 3D MCS/OBS survey in 2008. Analysis of OBS data shows that pressure (top), horizontal (middle), and vertical (bottom) velocity records produce different resolving power for gas hydrate exploration. Full waveform inversion of 8 OBS data along Yuan-An Ridge in SW Taiwan.]
MUSE observations of the lensing cluster Abell 1689
NASA Astrophysics Data System (ADS)
Bina, D.; Pelló, R.; Richard, J.; Lewis, J.; Patrício, V.; Cantalupo, S.; Herenz, E. C.; Soto, K.; Weilbacher, P. M.; Bacon, R.; Vernet, J. D. R.; Wisotzki, L.; Clément, B.; Cuby, J. G.; Lagattuta, D. J.; Soucail, G.; Verhamme, A.
2016-05-01
Context. This paper presents the results obtained with the Multi Unit Spectroscopic Explorer (MUSE) for the core of the lensing cluster Abell 1689, as part of MUSE's commissioning at the ESO Very Large Telescope. Aims: Integral-field observations with MUSE provide a unique view of the central 1 × 1 arcmin2 region at intermediate spectral resolution in the visible domain, allowing us to conduct a complete census of both cluster galaxies and lensed background sources. Methods: We performed a spectroscopic analysis of all sources found in the MUSE data cube. Two hundred and eighty-two objects were systematically extracted from the cube based on a guided-and-manual approach. We also tested three different tools for the automated detection and extraction of line emitters. Cluster galaxies and lensed sources were identified based on their spectral features. We investigated the multiple-image configuration for all known sources in the field. Results: Previous to our survey, 28 different lensed galaxies displaying 46 multiple images were known in the MUSE field of view, most of them were detected through photometric redshifts and lensing considerations. Of these, we spectroscopically confirm 12 images based on their emission lines, corresponding to 7 different lensed galaxies between z = 0.95 and 5.0. In addition, 14 new galaxies have been spectroscopically identified in this area thanks to MUSE data, with redshifts ranging between 0.8 and 6.2. All background sources detected within the MUSE field of view correspond to multiple-imaged systems lensed by A1689. Seventeen sources in total are found at z ≥ 3 based on their Lyman-α emission, with Lyman-α luminosities ranging between 40.5 ≲ log (Lyα) ≲ 42.5 after correction for magnification. This sample is particularly sensitive to the slope of the luminosity function toward the faintest end. The density of sources obtained in this survey is consistent with a steep value of α ≤ -1.5, although this result still
NASA Astrophysics Data System (ADS)
Agata, R.; Ichimura, T.; Hori, T.; Hirahara, K.; Hashimoto, C.; Hori, M.
2015-12-01
Inverse analysis of coseismic/postseismic slip using postseismic deformation observation data is an important topic in geodetic inversion. The inverse analysis method may be improved by using numerical simulation (e.g., the finite element (FE) method) of viscoelastic deformation with a model of high fidelity to the available high-resolution crustal data. The authors have been developing a large-scale simulation method using such FE high-fidelity models (HFM), assuming use of the K computer, currently the fastest supercomputer in Japan. In this study, we developed an inverse analysis method incorporating HFM, in which the asthenosphere viscosity and fault slip are estimated simultaneously, since the viscosity value used in the simulation is not known a priori. We carried out numerical experiments using synthetic crustal deformation data. Based on Ichimura et al. (2013), we constructed an HFM in a domain of 2048 × 1536 × 850 km, which includes the Tohoku region in northeast Japan. We used the data sets of JTOPO30 (2003), Koketsu et al. (2008), and the CAMP standard model (Hashimoto et al. 2004) for the model geometry. The HFM is currently at 2 km resolution, resulting in 0.5 billion degrees of freedom. The figure shows an overview of the HFM. Synthetic crustal deformation data for three years after an earthquake at the locations of GEONET, GPS/A observation points, and S-net were used. The inverse analysis was formulated as minimization of the L2 norm of the difference between the FE simulation results and the observation data with respect to viscosity and fault slip, combining a quasi-Newton algorithm with the adjoint method. Coseismic slip was expressed by superposition of 53 subfaults, with four viscoelastic layers. We carried out 90 forward simulations, and the 57 parameters converged to the true values. Due to the fast computation method, it took only five hours using 2048 nodes (1/40 of the entire resource) of the K computer. In the future, we would like to also consider estimation of afterslip and apply
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Oneida, Erin K.; Shell, Eric B.; Sabbagh, Harold A.; Sabbagh, Elias; Murphy, R. Kim; Mazdiyasni, Siamack; Lindgren, Eric A.; Mooers, Ryan D.
2017-02-01
A model-based calibration process is introduced that estimates the state of the eddy current probe. First, a carefully designed surrogate model was built using VIC-3D® simulations covering the critical range of probe rotation angles, tilt in two directions, and probe offset (liftoff) for both transverse and longitudinal flaw orientations. Some approximations and numerical compromises were made in the model to represent tilt in two directions and reduce simulation time; however, in experimental verification studies this surrogate model was found to represent well the key trends in the eddy current response for each of the four probe properties. Next, this model was incorporated into an iterative inversion scheme during the calibration process, to estimate the probe state while also addressing the amplitude/phase fit and centering the calibration notch indication. Results are presented showing several examples of the blind estimation of tilt and rotation angle for known experimental cases with reasonable agreement. Once the probe state is estimated, the final step is to transform the base crack inversion surrogate model and apply it for crack characterization. Using this process, results are presented demonstrating improved crack inversion performance for extreme probe states.
NASA Astrophysics Data System (ADS)
Nezhad, Mohsen Motahari; Shojaeefard, Mohammad Hassan; Shahraki, Saeid
2016-02-01
In this study, experiments were performed to thermally analyze the exhaust valve in an air-cooled internal combustion engine and to estimate the thermal contact conductance in fixed and periodic contacts. Due to the nature of internal combustion engines, the duration of contact between the valve and its seat is very short, and much time is needed to reach the quasi-steady state in the periodic contact between the exhaust valve and its seat. Using linear extrapolation and an inverse solution method, the surface contact temperatures and the fixed and periodic thermal contact conductance were calculated. The results of the linear extrapolation and inverse methods have similar trends and, based on the error analysis, are accurate enough to estimate the thermal contact conductance. Moreover, based on the error analysis, the linear extrapolation method using the inverse ratio is preferred. The effects of pressure, contact frequency, heat flux, and cooling air speed on thermal contact conductance were investigated. The results show that increasing the contact pressure substantially increases the thermal contact conductance, while increasing the engine speed decreases it. On the other hand, boosting the air speed increases the thermal contact conductance, and raising the heat flux reduces it. The average calculated error equals 12.9%.
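The linear-extrapolation estimate can be sketched for a synthetic steady case (all material properties, fluxes, and thermocouple depths below are hypothetical): extrapolate the two thermocouple readings in each body to the interface, get the flux from Fourier's law, and form h = q/(T1s − T2s).

```python
import numpy as np

def surface_temp(x1, T1, x2, T2):
    """Extrapolate a linear temperature profile to the interface at x = 0."""
    slope = (T2 - T1) / (x2 - x1)
    return T1 - slope * x1, slope

k = 50.0                                  # conductivity of body 1 (W/m/K)
# Synthetic steady profiles consistent with q = 1e4 W/m^2 and h = 2000 W/m^2/K.
q_true, h_true = 1.0e4, 2000.0
T1s_true = 400.0                          # body-1 surface temperature (K)
T2s_true = T1s_true - q_true / h_true     # body-2 surface temperature

# Thermocouples at 5 mm and 15 mm from the interface: temperature rises with
# depth into body 1 (heat-source side) and falls with depth into body 2.
x = np.array([0.005, 0.015])
T_body1 = T1s_true + (q_true / k) * x
T_body2 = T2s_true - (q_true / k) * x

T1s, slope1 = surface_temp(x[0], T_body1[0], x[1], T_body1[1])
T2s, _ = surface_temp(x[0], T_body2[0], x[1], T_body2[1])
q = k * slope1                # Fourier's law: flux from body 1 into body 2
h = q / (T1s - T2s)           # thermal contact conductance
print(h)                      # recovers h_true for this synthetic case
```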
Determining the strength of rotating broadband sources in ducts by inverse methods
NASA Astrophysics Data System (ADS)
Lowis, C. R.; Joseph, P. F.
2006-08-01
Aeroengine broadband fan noise is a major contributor to the community noise exposure from aircraft. It is currently believed that the dominant broadband noise mechanisms are due to interaction of the turbulent wake from the rotor with the stator, and interaction of the turbulent boundary layers on the rotor blades with their trailing edges. Currently there are no measurement techniques that allow the localisation and quantification of rotor-based broadband noise sources. This paper presents an inversion technique for estimating the broadband acoustic source strength distribution over a ducted rotor using pressure measurements made at the duct wall. It is shown that the rotation of acoustic sources in a duct prevents the use of standard acoustic inversion techniques. A new technique is presented here for inverting the strength of rotating broadband sources that makes use of a new Green function taking into account the effect of source rotation. The new Green function is used together with a modal decomposition technique to remove the effect of source rotation, thereby allowing an estimation of the rotor-based source strengths in the rotating reference frame. It is shown that the pressure measured at the sensors after application of this technique is identical to that measured by sensors rotating at the same speed as the rotor. Results from numerical simulations are presented to investigate the resolution limits of the inversion technique. The azimuthal resolution limit, namely the ability of the measurement technique to discriminate between sources on adjacent blades, is shown to improve as the speed of rotation increases. To improve the robustness of the inversion technique, a simplifying assumption is made whereby the sources on different blades are assumed to be identical. It is also shown that the accuracy and robustness of the inversion procedure improve as the axial separation between the rotor and sensors decreases. Simulation results demonstrate that for a
NASA Astrophysics Data System (ADS)
Imanishi, K.; Takeo, M.; Ito, H.; Ellsworth, W.; Matsuzawa, T.; Kuwahara, Y.; Iio, Y.; Horiuchi, S.; Ohmi, S.
2002-12-01
We estimate source parameters of small earthquakes from stopping phases and investigate the scaling relationships between source parameters. The method we employed [Imanishi and Takeo, 2002] assumes an elliptical fault model proposed by Savage [1966]. In this model, two high-frequency stopping phases, Hilbert transformations of each other, are radiated, and the difference in arrival times between the two stopping phases depends on the average rupture velocity, the source dimension, the aspect ratio of the elliptical fault, the direction of rupture propagation, and the orientation of the fault plane. These parameters can be estimated by a nonlinear least squares inversion method. The earthquakes studied occurred between May and August 1999 at the western Nagano prefecture, Japan, a region characterized by high levels of shallow seismicity. The data consist of seismograms recorded by an 800 m deep borehole and a 46-station surface seismic array whose spacing is a few km. In particular, the 800 m borehole data provide a wide frequency bandwidth and greatly reduce ground noise and coda wave amplitude compared to surface recordings. High-frequency stopping phases are readily detected on accelerograms recorded in the borehole. After correcting both borehole and surface data for attenuation, we also measure the rise time, which is defined as the time lag from the arrival time of the direct wave to the first slope change in the displacement pulse. Using these durations, we estimate source parameters of 25 earthquakes ranging in size from M1.2 to M2.6. The rupture aspect ratio is estimated to be about 0.8 on average. This suggests that the assumption of a circular crack model is valid as a first-order approximation for the earthquakes analyzed in this study. Static stress drops range from approximately 0.1 to 5 MPa and do not vary with seismic moment. It seems that the breakdown seen in the previous studies by other authors using surface data is simply an artifact of
NASA Astrophysics Data System (ADS)
Iriana, Windy; Tonokura, Kenichi; Kawasaki, Masahiro; Inoue, Gen; Kusin, Kitso; Limin, Suwido H.
2016-09-01
Evaluation of CO2 flux from peatland soil respiration is important to understand the effect of land use change on the global carbon cycle and climate change, and particularly to support carbon emission reduction policies. However, quantitative estimation of emitted CO2 fluxes in Indonesia is constrained by existing field data. Current methods for CO2 measurement are limited by high initial costs, manpower requirements, and difficulties associated with construction. Measurement campaigns were performed using a newly developed nocturnal temperature-inversion trap method, which measures the amount of CO2 trapped beneath the nocturnal inversion layer, in the dry season of 2013 at a drained tropical peatland near Palangkaraya, Central Kalimantan, Indonesia. This method is cost-effective, and data processing is easier than for other flux estimation methods. We compared CO2 fluxes measured using this method with published data from the existing eddy covariance and closed chamber methods. The maximum value of our measurement results was 10% lower than the maximum value of the eddy covariance method, and the average value was 6% higher than the average of the chamber method in drained tropical peatlands. In addition, the measurement results show a good correlation with the groundwater table. The results of this comparison suggest that this methodology for CO2 flux measurement is useful for field research in tropical peatlands.
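The flux estimate underlying a nocturnal-inversion trap can be sketched as F ≈ h·dC/dt, with the mixing-ratio rise converted to molar concentration via the ideal gas law. The layer height, duration, and concentrations below are hypothetical, not the campaign's data, and real processing must also verify that the trapped layer is well mixed.

```python
R = 8.314          # universal gas constant, J / (mol K)

def ppm_to_mol_m3(ppm, pressure=101325.0, temp_k=298.15):
    """Ideal-gas conversion of a CO2 mixing ratio (ppm) to mol / m^3."""
    return ppm * 1e-6 * pressure / (R * temp_k)

def trap_flux(c0_ppm, c1_ppm, hours, layer_height_m):
    """Mean CO2 flux (mol / m^2 / s) from the overnight concentration rise
    beneath an inversion layer of the given height."""
    dc = ppm_to_mol_m3(c1_ppm) - ppm_to_mol_m3(c0_ppm)
    return layer_height_m * dc / (hours * 3600.0)

# Hypothetical night: 400 -> 460 ppm over 8 h under a 50 m inversion layer.
flux = trap_flux(c0_ppm=400.0, c1_ppm=460.0, hours=8.0, layer_height_m=50.0)
print(flux)        # mol CO2 per m^2 per s (a few micromol/m^2/s here)
```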
The Distribution of Dark and Luminous Matter in the Galaxy Cluster Merger Abell 2146
NASA Astrophysics Data System (ADS)
King, Lindsay; Clowe, Douglas; Coleman, Joseph E.; Russell, Helen; Santana, Rebecca; White, Jacob; Canning, Rebecca; Deering, Nicole; Fabian, Andrew C.; Lee, Brandyn; Li, Baojiu; McNamara, Brian R.
2017-01-01
Abell 2146 (z = 0.232) consists of two galaxy clusters undergoing a major merger, presenting two large shock fronts on Chandra X-ray Observatory maps. These observations are consistent with a collision close to the plane of the sky, caught soon after first core passage. Here we outline the weak gravitational lensing analysis of the total mass in the system, using the distorted shapes of distant galaxies seen with the Hubble Space Telescope. The highest peak in the mass reconstruction is centred on the brightest cluster galaxy in Abell 2146-A. The mass associated with Abell 2146-B is more extended. The best-fitting mass model with two components has a mass ratio of ~3:1 for the two clusters. From the weak lensing analysis, Abell 2146-A is the primary halo component; the origin of the apparent discrepancy with the X-ray analysis, where Abell 2146-B is the primary halo, will be discussed.
Doughty, Christine A.
1996-05-01
The hydrologic properties of heterogeneous geologic media are estimated by simultaneously inverting multiple observations from well-test data. A set of pressure transients observed during one or more interference tests is compared to the corresponding values obtained by numerically simulating the tests using a mathematical model. The parameters of the mathematical model are varied and the simulation repeated until a satisfactory match to the observed pressure transients is obtained, at which point the model parameters are accepted as providing a possible representation of the hydrologic property distribution. Restricting the search to parameters that represent fractal hydrologic property distributions can improve the inversion process. Far fewer parameters are needed to describe heterogeneity with a fractal geometry, improving the efficiency and robustness of the inversion. Additionally, each parameter set produces a hydrologic property distribution with a hierarchical structure, which mimics the multiple scales of heterogeneity often seen in natural geologic media. Application of the iterated function system (IFS) inverse method to synthetic interference-test data shows that the method reproduces the synthetic heterogeneity successfully for idealized heterogeneities, for geologically realistic heterogeneities, and when the pressure data include noise.
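The simulate-compare-update loop described above can be sketched generically. Here the forward model is a stand-in closed-form function rather than a numerical well-test simulator, and the update rule is a simple greedy random search; both are assumptions for illustration only.

```python
import numpy as np

def forward(params, x):
    """Stand-in forward model (log-linear drawdown curve); the real
    method would numerically simulate the interference test."""
    return params[0] * np.log(x) + params[1]

def invert(observed, x, p0, step=0.1, n_iter=200):
    """Greedy random-search inversion: perturb parameters, keep any
    trial that lowers the misfit to the observed pressure transients."""
    rng = np.random.default_rng(0)
    p = np.asarray(p0, dtype=float)
    best = np.sum((observed - forward(p, x)) ** 2)
    for _ in range(n_iter):
        trial = p + step * rng.standard_normal(p.size)
        misfit = np.sum((observed - forward(trial, x)) ** 2)
        if misfit < best:
            p, best = trial, misfit
    return p, best

x = np.linspace(1.0, 10.0, 20)
observed = forward(np.array([2.0, 1.0]), x)      # synthetic "data"
p_est, final_misfit = invert(observed, x, [0.0, 0.0])
```

In the IFS method the searched parameters would instead describe a fractal property distribution, but the accept-if-better structure of the loop is the same.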
Wang, Ji-Hua; Huang, Wen-Jiang; Lao, Cai-Lian; Zhang, Lu-Da; Luo, Chang-Bing; Wang, Tao; Liu, Liang-Yun; Song, Xiao-Yu; Ma, Zhi-Hong
2007-07-01
With the widespread application of remote sensing (RS) in agriculture, the monitoring and prediction of crop nutrition condition has attracted the attention of many scientists. Foliar nitrogen content (N) is one of the most important nutrients for plant growth, and the vertical leaf N gradient is an important indicator of crop nutrition status. Investigations were made of the N vertical distribution to describe the growth status of winter wheat. Results indicate that N shows a clear decreasing gradient from the canopy top to the ground surface. The objective of this study was to investigate the inversion of the N vertical distribution from the canopy reflected spectrum using the partial least squares (PLS) regression method. PLS was selected for the inversion of the upper, middle and lower layers of N. To improve the accuracy of prediction, the N in the upper layer as well as in the middle and bottom layers should be taken into consideration when crop nutrition condition is appraised from RS data. The models established from the observed data of 2001-2002 were validated with the data of 2003-2004. The inversion precision and error were acceptable. This provides a theoretical basis for large-scale, non-destructive, variable-rate nitrogen application to winter wheat based on the canopy reflected spectrum.
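A minimal sketch of one-component PLS regression (the NIPALS step) is shown below, assuming a single latent factor links the reflectance bands to N content; in practice a multi-component implementation such as scikit-learn's PLSRegression would be used, and the data here are synthetic.

```python
import numpy as np

def pls1_fit(X, y):
    """One-component PLS1 (single NIPALS step). Returns the means,
    the weight vector w and the regression scalar q for prediction."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    w = Xc.T @ yc
    w /= np.linalg.norm(w)            # direction of maximum covariance
    t = Xc @ w                        # scores
    q = (yc @ t) / (t @ t)            # regression of y on the scores
    return x_mean, y_mean, w, q

def pls1_predict(model, X):
    x_mean, y_mean, w, q = model
    return y_mean + q * ((X - x_mean) @ w)

# Synthetic example: one latent factor drives all "bands" and the response.
rng = np.random.default_rng(1)
t_latent = rng.standard_normal(30)
loadings = np.array([0.5, -1.0, 2.0])
X = np.outer(t_latent, loadings)      # reflectance-like predictors
y = 2.0 * t_latent                    # N-content-like response
model = pls1_fit(X, y)
y_hat = pls1_predict(model, X)
```

Because the synthetic data have exactly one latent factor, a single PLS component reproduces the response exactly.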
Kimura, Wayne D.; Romea, Richard D.; Steinhauer, Loren C.
1998-01-01
A method and apparatus for exchanging energy between relativistic charged particles and laser radiation using inverse diffraction radiation or inverse transition radiation. The beam of laser light is directed onto a particle beam by means of two optical elements which have apertures or foils through which the particle beam passes. The two apertures or foils are spaced by a predetermined distance of separation and the angle of interaction between the laser beam and the particle beam is set at a specific angle. The separation and angle are a function of the wavelength of the laser light and the relativistic energy of the particle beam. In a diffraction embodiment, the interaction between the laser and particle beams is determined by the diffraction effect due to the apertures in the optical elements. In a transition embodiment, the interaction between the laser and particle beams is determined by the transition effect due to pieces of foil placed in the particle beam path.
NASA Astrophysics Data System (ADS)
Dorn, O.; Lesselier, D.
2010-07-01
Inverse problems in electromagnetics have a long history and have stimulated exciting research over many decades. New applications and solution methods are still emerging, providing a rich source of challenging topics for further investigation. The purpose of this special issue is to combine descriptions of several such developments that are expected to have the potential to fundamentally fuel new research, and to provide an overview of novel methods and applications for electromagnetic inverse problems. There have been several special sections published in Inverse Problems over the last decade addressing fully, or partly, electromagnetic inverse problems. Examples are: Electromagnetic imaging and inversion of the Earth's subsurface (Guest Editors: D Lesselier and T Habashy) October 2000 Testing inversion algorithms against experimental data (Guest Editors: K Belkebir and M Saillard) December 2001 Electromagnetic and ultrasonic nondestructive evaluation (Guest Editors: D Lesselier and J Bowler) December 2002 Electromagnetic characterization of buried obstacles (Guest Editors: D Lesselier and W C Chew) December 2004 Testing inversion algorithms against experimental data: inhomogeneous targets (Guest Editors: K Belkebir and M Saillard) December 2005 Testing inversion algorithms against experimental data: 3D targets (Guest Editors: A Litman and L Crocco) February 2009 In a certain sense, the current issue can be understood as a continuation of this series of special sections on electromagnetic inverse problems. On the other hand, its focus is intended to be more general than previous ones. Instead of trying to cover a well-defined, somewhat specialized research topic as completely as possible, this issue aims to show the broad range of techniques and applications that are relevant to electromagnetic imaging nowadays, which may serve as a source of inspiration and encouragement for all those entering this active and rapidly developing research area. Also, the
Rosetta Consert Radio Sounding Experiment: A Numerical Method for the Inverse Problem
NASA Astrophysics Data System (ADS)
Cardiet, M.; Herique, A.; Rogez, Y.; Douté, S.; Kofman, W. W.
2014-12-01
Rosetta's lander module Philae will soon land on the nucleus of comet 67P/Churyumov-Gerasimenko, giving unprecedented insight into a comet nucleus, its composition and its interior. The CONSERT instrument is one of the 20 scientific instruments of the mission. It is a bistatic radar with two modules, one on the orbiter and one on the lander, which transmit EM waves through the nucleus. The signal is delayed and attenuated by the nucleus materials and possible inhomogeneities. Accurate measurement and processing of these signals, repeated along the orbit, will allow us to perform a tomography and, for the first time, map the dielectric properties of the internal structures of a comet nucleus. Our approach to this inverse problem uses custom-built software called SIMSERT, which simulates the end-to-end experiment using a ray-tracing algorithm. This tool is key to preparing CONSERT operations and performing signal analysis. Given a comet shape and a landing site, we have conducted simulations to understand, quantify and eliminate the biases due to the discretization of the shape model. The first inversion, using the comet shape model provided by the OSIRIS and NavCam teams, will assume propagation in a homogeneous medium. The first goal is to identify and correct artefacts due to the surface interface. The second goal is to evaluate the coherency of the different permittivity estimates obtained by inverting this model on the signal measured at different positions along the orbit. It is likely that, based on these first investigations, more sophisticated models (rubble pile, strata) and inversions will be required. A comparative approach between the simulated data and the CONSERT data will lead to permittivity maps of the nucleus that are consistent with the observations, with a certain probability. These maps, the first of their type, will provide unprecedented information about the internal structure, the accretion history and the time evolution of the nucleus.
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.
2015-07-01
This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, the meshes for the forward and inverse problems were decoupled. For the calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gains, the EM fields for each frequency were calculated on independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach to efficient initial mesh design in inverse problems, based on the linearized model resolution matrix, was developed. To make this algorithm suitable for large-scale problems, a low-rank approximation of the linearized model resolution matrix is used. In order to fill the gap between the initial and true model complexities and to better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of the points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit a dependency on the initial model guess. Additionally, it is demonstrated
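The adaptive inverse mesh refinement idea, refining where spatial variations of the imaged parameter are largest, can be sketched in one dimension. The step-like profile and the refinement fraction below are illustrative assumptions, not the paper's actual criteria.

```python
import numpy as np

def refine(nodes, values, frac=0.3):
    """Split the cells where the parameter jump is largest (top `frac`)."""
    jumps = np.abs(np.diff(values))
    n_refine = max(1, int(frac * len(jumps)))
    worst = np.argsort(jumps)[-n_refine:]          # largest-variation cells
    midpoints = 0.5 * (nodes[worst] + nodes[worst + 1])
    return np.sort(np.concatenate([nodes, midpoints]))

# Resolve a step-like conductivity profile around x = 0.5.
profile = lambda x: np.tanh(20.0 * (x - 0.5))
nodes = np.linspace(0.0, 1.0, 11)
for _ in range(3):
    nodes = refine(nodes, profile(nodes))
```

After a few passes the mesh is much finer near the step than elsewhere, while the total number of unknowns stays small.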
NASA Astrophysics Data System (ADS)
Haley, Craig; McLinden, Chris; Sioris, Christopher; Brohede, Samuel
Key to the retrieval of stratospheric minor species information from limb-scatter measurements is the selection of a radiative transfer model (RTM) and an inversion method (solver). Here we assess the impact of the choice of RTM and solver on retrievals of stratospheric ozone and nitrogen dioxide from the OSIRIS instrument using the 'Ozone Triplet' and Differential Optical Absorption Spectroscopy (DOAS) techniques that are used in the operational Level 2 processing algorithms. The RTMs assessed are LIMBTRAN, VECTOR, SCIARAYS, and SASKTRAN. The solvers studied include the Maximum A Posteriori (MAP), Maximum Likelihood (ML), Iterative Least Squares (ILS), and Chahine methods.
NASA Astrophysics Data System (ADS)
Agata, R.; Ichimura, T.; Hirahara, K.; Hori, T.; Hyodo, M.; Hori, M.
2013-12-01
Many studies have focused on geodetic inversion methods for the coseismic slip distribution that combine observations of coseismic crustal deformation on the ground with simplified crustal models, such as the analytical solution for an elastic half-space (Okada, 1985). On the other hand, displacements on the seafloor or near trench axes due to actual earthquakes have been observed by seafloor observatories (e.g. for the 2011 Tohoku-oki Earthquake (Tohoku Earthquake); Sato et al. 2011; Kido et al. 2011). Also, some studies of tsunamis due to the Tohoku Earthquake indicate that large fault slips may have occurred near the trench axis. These facts suggest that crustal models considering the complex geometry and heterogeneous material properties near the trench axis should be used for geodetic inversion analysis. Therefore, our group has developed a mesh generation method for high-fidelity finite element models of the Japanese Islands, and a fast crustal deformation analysis method for these models. The models generated by this method have about 150 million degrees of freedom. In this research, the method is extended to inversion analyses of the coseismic slip distribution. Since such inversions require the computation of hundreds of slip response functions, one for a unit fault slip assigned to each of the cells into which the fault is divided, a parallel computing environment is used. Multiple crustal deformation analyses are run simultaneously in a single Message Passing Interface (MPI) job, in which dynamic load balancing is implemented to obtain better parallel efficiency. Submitting the necessary number of serial jobs with our previous method is also possible, but the proposed method needs less computation time, places less stress on file systems, and allows simpler job management. A method for considering fault slip right at the trench axis is also developed. As the displacement distribution of unit fault slip for computing the response function, 3rd order B
Inversion Method for Early Detection of ARES-1 Case Breach Failure
NASA Technical Reports Server (NTRS)
Mackey, Ryan M.; Kulikov, Igor K.; Bajwa, Anupa; Berg, Peter; Smelyanskiy, Vadim
2010-01-01
A document describes research into the problem of detecting case breach formation at an early stage of a rocket flight. An inversion algorithm for case breach location is proposed and analyzed. It is shown how the case breach can be located at an early stage of its development by using the rocket sensor data and the output data from the control block of the rocket navigation system. The results are simulated with MATLAB/Simulink software. The efficiency of an inversion algorithm for case breach location is discussed. The research was devoted to the analysis of the ARES-1 flight during the first 120 seconds after launch and early prediction of case breach failure. During this time, the rocket is propelled by its first-stage Solid Rocket Booster (SRB). If a breach appears in the SRB case, the gases escaping through it will produce a side thrust directed perpendicular to the rocket axis. The side thrust creates a torque influencing the rocket attitude. The ARES-1 control system will compensate for the side thrust until it reaches some critical value, after which the flight will be uncontrollable. The objective of this work was to obtain the start time of case breach development and its location using the rocket inertial navigation sensors and GNC data. The algorithm was effective for the detection and location of a breach in an SRB field joint at an early stage of its development.
A new reconstruction method for the inverse source problem from partial boundary measurements
NASA Astrophysics Data System (ADS)
Canelas, Alfredo; Laurain, Antoine; Novotny, Antonio A.
2015-07-01
The inverse source problem consists of reconstructing a mass distribution in a geometrical domain from boundary measurements of the associated potential and its normal derivative. In this paper the inverse source problem is reformulated as a topology optimization problem, where the support of the mass distribution is the unknown variable. The Kohn-Vogelius functional is minimized. It measures the misfit between the solutions of two auxiliary problems containing information about the boundary measurements. The Newtonian potential is used to complement the unavailable information on the hidden boundary. The resulting topology optimization algorithm is based on an analytic formula for the variation of the Kohn-Vogelius functional with respect to a class of mass distributions consisting of a finite number of ball-shaped trial anomalies. The proposed reconstruction algorithm is non-iterative and very robust with respect to noisy data. Finally, in order to show the effectiveness of the devised reconstruction algorithm, some numerical experiments in two and three spatial dimensions are presented.
Shi, Yingzhong; Chung, Fu-Lai; Wang, Shitong
2015-09-01
Recently, a time-adaptive support vector machine (TA-SVM) was proposed for handling nonstationary datasets. While attractive performance has been reported, and the classifier is distinctive in simultaneously solving several SVM subclassifiers locally and globally using an elegant SVM formulation in an alternative kernel space, the coupling of the subclassifiers requires a matrix inversion, resulting in a high computational burden for large nonstationary dataset applications. To overcome this shortcoming, an improved TA-SVM (ITA-SVM) is proposed that uses a common vector shared by all the SVM subclassifiers involved. ITA-SVM not only keeps an SVM formulation but also avoids the matrix inversion. We can therefore realize its fast version, the improved time-adaptive core vector machine (ITA-CVM), for large nonstationary datasets by using the CVM technique. ITA-CVM has the merit of asymptotically linear time complexity for large nonstationary datasets and inherits the advantages of TA-SVM. The effectiveness of the proposed ITA-SVM and ITA-CVM classifiers is experimentally confirmed.
NASA Astrophysics Data System (ADS)
Dolman, A. J.; Shvidenko, A.; Schepaschenko, D.; Ciais, P.; Tchebakova, N.; Chen, T.; van der Molen, M. K.; Belelli Marchesini, L.; Maximov, T. C.; Maksyutov, S.; Schulze, E.-D.
2012-06-01
We determine the carbon balance of Russia, including Ukraine, Belarus and Kazakhstan, using inventory-based, eddy covariance, Dynamic Global Vegetation Model (DGVM), and inversion methods. Our current best estimate of the net biosphere to atmosphere flux is -0.66 Pg C yr-1. This sink is primarily caused by forests, which two independent methods estimate to take up -0.69 Pg C yr-1. Inverse models yield an average net biosphere to atmosphere flux of the same value, with an interannual variability of 35% (1σ). The total biosphere to atmosphere flux estimated from eddy covariance observations over a limited number of sites amounts to -1 Pg C yr-1. Fire emissions are estimated at 137 and 121 Tg C yr-1 by two different methods. The interannual variability of fire emissions is large, up to a factor of 0.5 to 3. Smaller fluxes to the ocean and inland lakes, and from trade, are also accounted for. Our best estimate of the Russian net biosphere to atmosphere flux then amounts to -659 Tg C yr-1, the average of the inverse-model estimate of -653 Tg C yr-1, the bottom-up estimate of -563 Tg C yr-1 and the independent landscape approach of -761 Tg C yr-1. These three methods agree well within their error bounds, so there is good consistency between bottom-up and top-down methods. The best estimate of the net land to atmosphere flux, including the fossil fuel emissions, is -145 to -73 Tg C yr-1. Estimated methane emissions vary considerably, with one inventory-based estimate giving a net land to atmosphere flux of 12.6 Tg C-CH4 yr-1 and an independent model estimate for the boreal and Arctic zones of Eurasia giving 27.6 Tg C-CH4 yr-1.
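The quoted best estimate is the plain mean of the three independent approaches, which a one-line check confirms:

```python
# Values in Tg C yr^-1, taken from the abstract above.
estimates = {"inverse models": -653.0, "bottom-up": -563.0, "landscape": -761.0}
best_estimate = sum(estimates.values()) / len(estimates)
print(best_estimate)  # -659.0
```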
NASA Astrophysics Data System (ADS)
Kowada, Daisuke; Ueno, Akinori
In this paper, we propose a novel method for extending the bandwidth of an ECG measuring device without deteriorating its baseline tolerance to body motion. The proposed method is realized by synthesizing a two-stage analog forward filter with two different corner frequencies and a digital inverse filter whose corner frequency is identical to the higher of the two analog corner frequencies. We applied the method to a bed-type capacitive ECG sensor which can detect electrocardiographic potentials non-intrusively and indirectly. The results demonstrated that the proposed method can precisely recover low-frequency components such as the T-wave. Furthermore, we confirmed that the QTc (corrected QT interval) could be estimated from the recovered wave and that the QTc correlated at 0.81 on average with that obtained from Lead II ECG for seven subjects. These results indicate that the proposed method is useful for screening for long QT syndrome when combined with the bed-type capacitive ECG sensor.
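The forward/inverse filter pairing can be sketched with a first-order digital filter standing in for the analog front end; the filter structure and coefficient below are illustrative assumptions, not the authors' actual design.

```python
import numpy as np

def forward_highpass(x, a):
    """First-order difference y[n] = x[n] - a*x[n-1], which attenuates
    low frequencies the way an analog corner-frequency roll-off would."""
    y = np.empty_like(x)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = x[n] - a * x[n - 1]
    return y

def inverse_filter(y, a):
    """Exact digital inverse: x[n] = y[n] + a*x[n-1], restoring the
    low-frequency content removed by the forward stage."""
    x = np.empty_like(y)
    x[0] = y[0]
    for n in range(1, len(y)):
        x[n] = y[n] + a * x[n - 1]
    return x

t = np.linspace(0.0, 1.0, 500)
ecg_like = np.sin(2 * np.pi * t)                 # slow T-wave-like component
distorted = forward_highpass(ecg_like, 0.95)     # front-end distortion
recovered = inverse_filter(distorted, 0.95)      # digital recovery
```

Because the inverse recursion cancels the forward difference term by term, the slow component is recovered exactly; in the real device the inverse filter only needs to undo the higher of the two analog corners.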
NASA Technical Reports Server (NTRS)
Shkarayev, S.; Krashantisa, R.; Tessler, A.
2004-01-01
An important and challenging technology aimed at the next generation of aerospace vehicles is that of structural health monitoring. The key problem is to determine accurately, reliably, and in real time the applied loads, stresses, and displacements experienced in flight, with such data establishing an information database for structural health monitoring. The present effort is aimed at developing a finite element-based methodology involving an inverse formulation that employs measured surface strains to recover the applied loads, stresses, and displacements in an aerospace vehicle in real time. The computational procedure uses a standard finite element model (i.e., "direct analysis") of a given airframe, with the subsequent application of the inverse interpolation approach. The inverse interpolation formulation is based on a parametric approximation of the loading and is further constructed through a least-squares minimization of calculated and measured strains. This procedure results in the governing system of linear algebraic equations, providing the unknown coefficients that accurately define the load approximation. Numerical simulations are carried out for problems involving various levels of structural approximation. These include plate-loading examples and an aircraft wing box. Accuracy and computational efficiency of the proposed method are discussed in detail. The experimental validation of the methodology by way of structural testing of an aircraft wing is also discussed.
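The least-squares core of the inverse interpolation approach, recovering load coefficients from measured strains through a linear sensitivity matrix, can be sketched as follows; the matrix here is random for illustration, whereas in the method above it would come from a direct finite element analysis of the airframe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sensitivity matrix: strain at each of 12 gauges per unit of each of
# 3 load-approximation coefficients (illustrative, not from a real FE model).
A = rng.standard_normal((12, 3))
true_loads = np.array([5.0, -2.0, 1.0])
strains = A @ true_loads                     # "measured" surface strains

# Least-squares inverse: the load coefficients minimizing ||A p - strains||
loads, *_ = np.linalg.lstsq(A, strains, rcond=None)
```

With more gauges than load parameters the system is overdetermined, which is what makes the recovery robust to individual gauge noise.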
NASA Astrophysics Data System (ADS)
Hara, Tatsuhiko
2004-08-01
We implement the Direct Solution Method (DSM) on a vector-parallel supercomputer and show that it is possible to significantly improve its computational efficiency through parallel computing. We apply the parallel DSM calculation to waveform inversion of long-period (250-500 s) surface wave data for three-dimensional (3-D) S-wave velocity structure in the upper and uppermost lower mantle. We use a spherical harmonic expansion to represent lateral variation, with a maximum angular degree of 16. We find significant low velocities under South Pacific hot spots in the transition zone. This is consistent with other seismological studies conducted in the Superplume project, which suggests deep roots of these hot spots. We also perform simultaneous waveform inversion for 3-D S-wave velocity and Q structure. Since the resolution for Q is poor, we develop a new technique in which power spectra are used as data for the inversion. We find good correlation between the long-wavelength patterns of Vs and Q in the transition zone, such as high Vs and high Q under the western Pacific.
Chen, X.; Ashcroft, I. A.; Wildman, R. D.; Tuck, C. J.
2015-01-01
A method using experimental nanoindentation and inverse finite-element analysis (FEA) has been developed that enables the spatial variation of material constitutive properties to be accurately determined. The method was used to measure property variation in a three-dimensional printed (3DP) polymeric material. The accuracy of the method is dependent on the applicability of the constitutive model used in the inverse FEA, hence four potential material models: viscoelastic, viscoelastic–viscoplastic, nonlinear viscoelastic and nonlinear viscoelastic–viscoplastic were evaluated, with the latter enabling the best fit to experimental data. Significant changes in material properties were seen in the depth direction of the 3DP sample, which could be linked to the degree of cross-linking within the material, a feature inherent in a UV-cured layer-by-layer construction method. It is proposed that the method is a powerful tool in the analysis of manufacturing processes with potential spatial property variation that will also enable the accurate prediction of final manufactured part performance. PMID:26730216
NASA Astrophysics Data System (ADS)
Gillet-Chaulet, F.; Gagliardini, O.; Nodet, M.; Ritz, C.; Durand, G.; Zwinger, T.; Seddik, H.; Greve, R.
2010-12-01
About a third of the current sea level rise is attributed to the release of Greenland and Antarctic ice, and their respective contributions have been increasing continuously since the acceleration of their coastal outlet glaciers was first diagnosed a decade ago. Because of the related societal implications, good scenarios of the ice sheets' evolution are needed to constrain the sea level rise forecast for the coming centuries. The quality of the model predictions depends primarily on a good description of the physical processes involved and on a good initial state reproducing the main present-day observations (geometry, surface velocities and, ideally, the trend in elevation change). We model ice dynamics on the whole Greenland ice sheet using the full-Stokes finite element code Elmer. The finite element mesh is generated using the anisotropic mesh adaptation tool YAMS, and shows a high density around the major ice streams. For the initial state, we use an iterative procedure to compute the ice velocities, the temperature field, and the basal sliding coefficient field. The basal sliding coefficient is obtained with an inverse method by minimizing a cost function that measures the misfit between the present-day surface velocities and the modelled surface velocities. We use two inverse methods for this: an inverse Robin problem recently proposed by Arthern and Gudmundsson (J. Glaciol. 2010), and a control method taking advantage of the fact that the Stokes equations are self-adjoint in the particular case of a Newtonian rheology. From the initial states obtained by these two methods, we run transient simulations to evaluate the impact of the initial state of the Greenland ice sheet on its contribution to sea level rise over the next centuries.
An Inverse Method for Determining Source Characteristics for Emergency Response Applications
NASA Astrophysics Data System (ADS)
Rudd, A. C.; Robins, A. G.; Lepley, J. J.; Belcher, S. E.
2012-07-01
Following a malicious or accidental atmospheric release in an outdoor environment, it is essential for first responders to ensure safety by identifying areas where human life may be in danger. For this to happen quickly, reliable information is needed on the source strength and location, and on the type of chemical agent released. We present here an inverse modelling technique that estimates the source strength and location of such a release, together with the uncertainty in those estimates, from a limited number of concentration measurements made by a network of chemical sensors, considering a single, steady, ground-level source. The technique is evaluated using data from a set of dispersion experiments conducted in a meteorological wind tunnel, where simultaneous measurements of concentration time series were obtained in the plume from a ground-level point-source emission of a passive tracer. In particular, we analyze the response to the number of sensors deployed and their arrangement, and to sampling and model errors. We find that the inverse algorithm can generate acceptable estimates of the source characteristics with as few as four sensors, provided these are well placed and the sampling error is controlled. Configurations with at least three sensors in a profile across the plume were found to be superior to the other arrangements examined. Analysis of the influence of sampling error due to the use of short averaging times showed that the uncertainty in the source estimates grows as the sampling time decreases. This demonstrates that averaging times greater than about 5 min (full-scale time) lead to acceptable accuracy.
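The essence of such a source inversion, scanning candidate source locations and fitting the optimal strength at each, can be sketched with a toy dispersion kernel; a real application would use a proper plume model and the error statistics discussed above, and all names here are illustrative.

```python
import numpy as np

def kernel(sensors, src):
    """Toy dispersion kernel: concentration per unit source strength
    decays with distance (a stand-in for a real plume model)."""
    d2 = np.sum((sensors - src) ** 2, axis=1)
    return 1.0 / (1.0 + d2)

def locate(sensors, measured, grid):
    """Grid search over candidate locations; at each, the strength that
    best fits the measurements follows from linear least squares."""
    best = None
    for src in grid:
        k = kernel(sensors, src)
        q = (measured @ k) / (k @ k)          # optimal strength here
        resid = np.sum((measured - q * k) ** 2)
        if best is None or resid < best[0]:
            best = (resid, src, q)
    return best[1], best[2]

sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_src, true_q = np.array([0.3, 0.7]), 2.5
measured = true_q * kernel(sensors, true_src)     # noise-free "data"

xs = np.linspace(0.0, 1.0, 11)
grid = np.array([[x, y] for x in xs for y in xs])
src_est, q_est = locate(sensors, measured, grid)
```

With noise-free data and the true location on the grid, four sensors suffice to recover both location and strength, mirroring the paper's finding that a few well-placed sensors can be enough.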
A comparison of two stochastic inverse methods in a field-scale application.
Larocque, Marie; Delay, Fred; Banton, Olivier
2003-01-01
Inverse modeling is a useful tool in ground water flow modeling studies. The most frequent difficulties encountered when using this technique are the lack of conditioning information (e.g., heads and transmissivities), the uncertainty in available data, and the nonuniqueness of the solution. These problems can be addressed and quantified through a stochastic Monte Carlo approach. The aim of this work was to compare the applicability of two stochastic inverse modeling approaches in a field-scale application. The multi-scaling (MS) approach uses a downscaling parameterization procedure that is not based on geostatistics. The pilot point (PP) approach uses geostatistical random fields as initial transmissivity values and an experimental variogram to condition the calibration. The studied area (375 km2) is part of a regional aquifer, northwest of Montreal in the St. Lawrence lowlands (southern Québec). It is located in limestone, dolomite, and sandstone formations, and is mostly a fractured porous medium. The MS approach generated small errors on heads, but the calibrated transmissivity fields did not reproduce the variogram of observed transmissivities. The PP approach generated larger errors on heads but better reproduced the spatial structure of observed transmissivities. The PP approach was also less sensitive to uncertainty in head measurements. If reliable heads are available but no transmissivities are measured, the MS approach provides useful results. If reliable transmissivities with a well inferred spatial structure are available, then the PP approach is a better alternative. This approach however must be used with caution if measured transmissivities are not reliable.
Łeski, Szymon; Wójcik, Daniel K; Tereszczuk, Joanna; Swiejkowski, Daniel A; Kublik, Ewa; Wróbel, Andrzej
2007-01-01
Estimation of the continuous current-source density (CSD) in bulk tissue from a finite set of electrode measurements is a daunting task. Here we present a methodology which allows such a reconstruction by generalizing the one-dimensional inverse CSD method. The idea is to assume a particular plausible form of the CSD within a class described by a number of parameters which can be estimated from the available data, for example a set of cubic splines in 3D spanned on a fixed grid of the same size as the set of measurements. To avoid dependence on the specific choice of reconstruction grid, we add random jitter to the point positions and show that this leads to a correct reconstruction. We propose different ways of improving the quality of the reconstruction which take into account sources located outside the recording region through appropriate boundary treatment. The efficiency of the traditional CSD method and variants of the inverse CSD method is compared using several fidelity measures on different test data, to investigate when one of the methods is superior to the others. The methods are illustrated with reconstructions of the CSD from potentials evoked by stimulation of a bunch of whiskers, recorded in a slab of the rat forebrain on a grid of 4 x 5 x 7 positions.
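The core step of the inverse CSD method, building a forward matrix from assumed sources to electrode potentials and inverting it, can be sketched in one dimension. The exponential potential kernel below is a toy stand-in for the physical source-to-potential relation, and the grid sizes are illustrative.

```python
import numpy as np

# Electrode positions and the CSD grid coincide, so the forward matrix
# is square, as in the basic inverse CSD setting.
z = np.linspace(0.0, 1.0, 8)
F = np.exp(-np.abs(z[:, None] - z[None, :]) / 0.2)   # toy potential kernel

true_csd = np.sin(np.pi * z)          # smooth test source distribution
potentials = F @ true_csd             # what the electrodes would record

csd_est = np.linalg.solve(F, potentials)   # the inverse CSD step
```

Because the toy kernel matrix is well conditioned, the inversion recovers the test distribution exactly; the paper's jitter and boundary treatments address the cases where this step becomes unstable.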
Hierarchical Velocity Structure in the Core of Abell 2597
NASA Technical Reports Server (NTRS)
Still, Martin; Mushotzky, Richard
2004-01-01
We present XMM-Newton RGS and EPIC data of the putative cooling flow cluster Abell 2597. Velocities of the low-ionization emission lines in the spectrum are blueshifted with respect to the high-ionization lines by 1320 (+660/-210) kilometers per second, which is consistent with the difference between the two peaks of the galaxy velocity distribution and may be the signature of bulk turbulence, infall, rotation or damped oscillation in the cluster. A hierarchical velocity structure such as this could be the direct result of galaxy mergers in the cluster core, or the injection of power into the cluster gas from a central engine. The uniform X-ray morphology of the cluster, the absence of fine-scale temperature structure and the random distribution of the galaxy positions, independent of velocity, suggest that our line of sight is close to the direction of motion. These results have strong implications for cooling flow models of the cluster Abell 2597. They give impetus to those models which account for the observed temperature structure of some clusters using mergers instead of cooling flows.
NASA Technical Reports Server (NTRS)
Metcalf, Thomas R.; Canfield, Richard C.; Avrett, Eugene H.; Metcalf, Frederic T.
1990-01-01
Various methods of inverting solar Mg I 4571 and 5173 spectral line observations are examined to find the best method of using these lines to calculate the vertical temperature and electron density structure around the temperature minimum region. Following a perturbation analysis by Mein (1971), a Fredholm integral equation of the first kind is obtained which can be inverted to yield these temperature and density structures as a function of time. Several inversion methods are tested and compared. The methods are applied to test data as well as to a subset of observations of these absorption lines taken on February 3, 1986, before and during a solar flare. A small but significant increase is found in the temperature, and a relatively large increase in the electron density, during this flare. The observations are inconsistent with heating and ionization by an intense beam of electrons and with ionization by UV photoionization of Si I.
Tracing low-mass galaxy clusters using radio relics: the discovery of Abell 3527-bis
NASA Astrophysics Data System (ADS)
de Gasperin, F.; Intema, H. T.; Ridl, J.; Salvato, M.; van Weeren, R.; Bonafede, A.; Greiner, J.; Cassano, R.; Brüggen, M.
2017-01-01
Context. Galaxy clusters undergo mergers that can generate extended radio sources called radio relics. Radio relics are the consequence of merger-induced shocks that propagate in the intracluster medium (ICM). Aims: In this paper we analyse the radio, optical and X-ray data from a candidate galaxy cluster that has been selected from the radio emission coming from a candidate radio relic detected in the NRAO VLA Sky Survey (NVSS). Our aim is to clarify the nature of this source and prove that under certain conditions radio emission from radio relics can be used to trace relatively low-mass galaxy clusters. Methods: We observed the candidate galaxy cluster with the Giant Metrewave Radio Telescope (GMRT) at three different frequencies. These datasets have been analysed together with archival X-ray data from ROSAT and with archival data from the Gamma-Ray Burst Optical/Near-Infrared Detector (GROND) telescope in four different optical bands. Results: We confirm the presence of a 1 Mpc long radio relic located in the outskirts of a previously unknown galaxy cluster. We confirm the presence of the galaxy cluster through dedicated optical observations and using archival X-ray data. Due to its proximity and similar redshift to a known Abell cluster, we named it Abell 3527-bis. The galaxy cluster is amongst the least massive clusters known to host a radio relic. Conclusions: We showed that radio relics can be effectively used to trace a subset of relatively low-mass galaxy clusters that might have gone undetected in X-ray or Sunyaev-Zel'dovich (SZ) surveys. This technique might be used in future deep, low-frequency surveys such as those carried out by the Low Frequency Array (LOFAR), the upgraded GMRT (uGMRT) and, ultimately, the Square Kilometre Array (SKA).
NASA Astrophysics Data System (ADS)
Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.
2012-12-01
We develop a three-step maximum a posteriori probability (MAP) method for coseismic rupture inversion, which aims at maximizing the posterior probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same posterior PDF with them and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used, and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion quickly brings the inversion close to the 'true' solution and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo Inversion (MCI) technique, with all parameters obtained from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion with fault geometry parameters fixed. We first used a designed model with a 45 degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of
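The split between nonlinear geometry parameters (found by global optimization) and linear slip parameters (found by least squares with positivity) can be sketched on a toy problem. The "Green's function" matrix, parameter bounds, and noise level below are invented for illustration; real rupture inversions use elastic dislocation models, many fault patches, and the full three-step MAP scheme.

```python
import numpy as np
from scipy.optimize import dual_annealing, nnls

rng = np.random.default_rng(1)

def greens(dip):
    """Toy 'Green's function': surface data from two slip patches, with the
    geometry controlled by a single nonlinear parameter (dip, degrees).
    Purely illustrative, not an elastic dislocation model."""
    c = np.cos(np.radians(dip))
    return np.array([[1.0, 0.5 * c],
                     [0.3 * c, 1.0],
                     [0.8, 0.2]])

s_true, dip_true = np.array([1.2, 0.7]), 45.0
d = greens(dip_true) @ s_true + 0.01 * rng.standard_normal(3)

def misfit(theta):
    # Inner linear step: non-negative slip by NNLS for this trial geometry
    _, res = nnls(greens(theta[0]), d)
    return res

# Outer nonlinear step: simulated annealing over the geometry parameter
result = dual_annealing(misfit, bounds=[(0.0, 90.0)], seed=2, maxiter=200)
dip_hat = result.x[0]
s_hat, _ = nnls(greens(dip_hat), d)
print(dip_hat, s_hat)
```

The nested structure mirrors the abstract's design: the annealer never sees the slip values directly, only the best-fitting non-negative slip solution for each candidate geometry.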
An inverse method was developed to integrate satellite observations of atmospheric pollutant column concentrations and direct sensitivities predicted by a regional air quality model in order to discern biases in the emissions of the pollutant precursors.
Downs, J. Crawford
2012-01-01
Numerical simulations or inverse numerical analyses of individual eyes or eye segments are often based on an eye-specific geometry obtained from in vivo medical images such as CT scans or from in vitro 3D digitizer scans. These eye-specific geometries are usually measured while the eye is subjected to internal pressure. Due to the nonlinear stiffening of the collagen fibril network in the eye, numerical incorporation of the pre-existing stress/strain state may be essential for realistic eye-specific computational simulations. Existing prestressing methods either compute accurate predictions of the prestressed state or guarantee a unique solution, but not both. In this contribution, a forward incremental prestressing method is presented that unifies the advantages of the existing approaches by providing accurate and unique predictions of the pre-existing stress/strain state at the true measured geometry. The impact of prestressing is investigated on (i) the inverse constitutive parameter identification of a synthetic sclera inflation test and (ii) an eye-specific simulation that estimates the realistic mechanical response of a preloaded posterior monkey scleral shell. Evaluation of the pre-existing stress/strain state in the inverse analysis had a significant impact on the reproducibility of the constitutive parameters but may be estimated based on an approximate approach. The eye-specific simulation of one monkey eye shows that prestressing is required for accurate displacement and stress/strain predictions. The numerical results revealed an increasing error in displacement, strain and stress predictions with increasing pre-existing pressure load when the pre-stress/strain state is disregarded. Disregarding the prestress may lead to a significant underestimation of the strain/stress environment in the sclera and an overestimation in the lamina cribrosa. PMID:22224843
NASA Astrophysics Data System (ADS)
Chen, X.; Rubin, Y.; Baldocchi, D. D.
2005-12-01
Understanding the interactions between soil, plants, and the atmosphere under water-stressed conditions is important for ecosystems where water availability is limited. In such ecosystems, the amount of water transferred from the soil to the atmosphere is controlled not only by weather conditions and vegetation type but also by soil water availability. Although researchers have proposed different approaches to model the impact of soil moisture on plant activities, the parameters involved are difficult to measure. However, using measurements of observed latent heat and carbon fluxes, as well as soil moisture data, Bayesian inversion methods can be employed to estimate the various model parameters. In our study, the actual evapotranspiration (ET) of an ecosystem is approximated by the Priestley-Taylor relationship, with the Priestley-Taylor coefficient modeled as a function of soil moisture content. Soil moisture limitation on root uptake is characterized in a similar manner as in the Feddes model. The Bayesian inference is carried out within the framework of graphical models. Because exact inference is difficult to obtain, the Markov chain Monte Carlo (MCMC) method is implemented using a free software package, BUGS (Bayesian inference Using Gibbs Sampling). The proposed methodology is applied to a Mediterranean oak-savanna FLUXNET site in California, where continuous measurements of actual ET are obtained from the eddy-covariance technique and soil moisture contents are monitored by several time-domain reflectometry probes located within the footprint of the flux tower. After the Bayesian inversion, the posterior distributions of all the parameters exhibit enhanced information content compared to the prior distributions. The samples generated from data in year 2003 are used to predict the actual ET in year 2004, and the prediction uncertainties are assessed in terms of confidence intervals. Our tests also reveal the usefulness of various
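The Bayesian-inversion step can be sketched with a plain random-walk Metropolis sampler in place of BUGS/Gibbs sampling. The functional form for the soil-moisture-dependent Priestley-Taylor coefficient, the priors, the noise level, and the synthetic data below are all illustrative assumptions, not the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of the inversion: ET = alpha(theta) * PET, with a soil-moisture-
# dependent Priestley-Taylor coefficient alpha = a * theta / (b + theta).
theta = rng.uniform(0.05, 0.35, 200)                 # soil moisture observations
pet = rng.uniform(2.0, 6.0, 200)                     # potential ET (mm/day)
a_true, b_true, noise = 1.26, 0.10, 0.2
et_obs = a_true * theta / (b_true + theta) * pet + noise * rng.standard_normal(200)

def log_post(p):
    a, b = p
    if not (0 < a < 3 and 0 < b < 1):                # flat priors on a box
        return -np.inf
    resid = et_obs - a * theta / (b + theta) * pet
    return -0.5 * np.sum(resid**2) / noise**2        # Gaussian likelihood

# Random-walk Metropolis sampler
p, lp, chain = np.array([1.0, 0.2]), None, []
lp = log_post(p)
for _ in range(20000):
    q = p + rng.normal(0, [0.02, 0.01])              # proposal step sizes
    lq = log_post(q)
    if np.log(rng.uniform()) < lq - lp:              # accept/reject
        p, lp = q, lq
    chain.append(p)
post = np.array(chain[5000:])                        # discard burn-in
print(post.mean(axis=0))                             # close to (a_true, b_true)
```

As in the abstract, the payoff is a full posterior: the spread of `post` gives the parameter uncertainty that a point estimate would hide.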
The ZH ratio method for long-period seismic data: inversion for S-wave velocity structure
NASA Astrophysics Data System (ADS)
Yano, Tomoko; Tanimoto, T.; Rivera, L.
2009-10-01
The particle motion of surface waves, in addition to phase and group velocities, can provide useful information for S-wave velocity structure in the crust and upper mantle. In this study, we applied a new method to retrieve velocity structure using the ZH ratio, the ratio between vertical and horizontal surface amplitudes of Rayleigh waves. Analysing data from the GEOSCOPE network, we measured the ZH ratios for frequencies between 0.004 and 0.05 Hz (periods between 20 and 250 s) and inverted them for S-wave velocity structure beneath each station. Our analysis showed that the resolving power of the ZH ratio is limited and final solutions display dependence on starting models; in particular, the depth of the Moho in the starting model is important in order to get reliable results. Thus, initial models for the inversion need to be carefully constructed. We chose PREM and CRUST2.0 in this study as starting models for all but one station (ECH). The eigenvalue analysis of the least-squares problem that arises at each step of the iterative process shows a few dominant eigenvalues, which explains the inversion's initial-model dependence. However, the ZH ratio is unique in having high sensitivity to near-surface structure and thus provides complementary information to phase and group velocities. Application of this method to GEOSCOPE data suggests that low velocity zones may exist beneath some stations near hotspots. Our tests with different starting models show that the models with low-velocity anomalies fit the ZH ratio data better. Such low velocity zones are seen near Hawaii (station KIP), Crozet Island (CRZF) and Djibouti (ATD) but not near Reunion Island (RER). One is also found near Echery (ECH), which is in a geothermal area. However, this method has a tendency to produce spurious low velocity zones, and resolution of the low velocity zones requires further careful study. We also performed simultaneous inversions for volumetric perturbation and
Lee, Cue Hyunkyu; Cook, Seungho; Lee, Ji Sung
2016-01-01
Meta-analysis has become a widely used tool for many applications in bioinformatics, including genome-wide association studies. A commonly used approach for meta-analysis is the fixed effects model approach, for which there are two popular methods: the inverse variance-weighted average method and the weighted sum of z-scores method. Although previous studies have shown that the two methods perform similarly, their characteristics and their relationship have not been thoroughly investigated. In this paper, we investigate the optimal characteristics of the two methods and show the connection between them. We demonstrate that each method is optimized for a unique goal, which gives us insight into the optimal weights for the weighted sum of z-scores method. We examine the connection between the two methods both analytically and empirically and show that their resulting statistics become equivalent under certain assumptions. Finally, we apply both methods to the Wellcome Trust Case Control Consortium data and demonstrate that the two methods can give distinct results in certain study designs. PMID:28154508
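Both fixed-effects statistics compared in this record have simple closed forms; a minimal sketch (the study effect sizes, standard errors, and sample sizes are invented, and the sqrt-sample-size weights are one common choice for the z-score method):

```python
import numpy as np

def inverse_variance_meta(beta, se):
    """Fixed-effects inverse-variance-weighted average:
    combined effect size and its standard error."""
    w = 1.0 / se**2
    beta_c = np.sum(w * beta) / np.sum(w)
    return beta_c, np.sqrt(1.0 / np.sum(w))

def weighted_z_meta(z, n):
    """Weighted sum of z-scores with sqrt(sample-size) weights;
    the combined statistic is again standard normal under the null."""
    w = np.sqrt(n)
    return np.sum(w * z) / np.sqrt(np.sum(w**2))

# Three hypothetical studies of one variant
beta = np.array([0.20, 0.15, 0.30])      # per-study effect sizes
se = np.array([0.05, 0.08, 0.10])        # per-study standard errors
n = np.array([1000, 400, 250])           # per-study sample sizes
z = beta / se                            # per-study z-scores

beta_c, se_c = inverse_variance_meta(beta, se)
z_c = weighted_z_meta(z, n)
print(beta_c, se_c, z_c)
```

The equivalence noted in the abstract emerges when the z-score weights are chosen proportional to 1/se rather than sqrt(n); with sqrt(n) weights the two statistics generally differ, which is the source of the distinct results in some study designs.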
NASA Astrophysics Data System (ADS)
Stukel, Michael R.; Landry, Michael R.; Ohman, Mark D.; Goericke, Ralf; Samo, Ty; Benitez-Nelson, Claudia R.
2012-03-01
Despite the increasing use of linear inverse modeling techniques to elucidate fluxes in undersampled marine ecosystems, the accuracy with which they estimate food web flows has not been resolved. New Markov Chain Monte Carlo (MCMC) solution methods have also called into question the biases of the commonly used L2 minimum norm (L2MN) solution technique. Here, we test the abilities of the MCMC and L2MN methods to recover field-measured ecosystem rates that are sequentially excluded from the model input. For data, we use experimental measurements from process cruises of the California Current Ecosystem (CCE-LTER) Program that include rate estimates of phytoplankton and bacterial production, micro- and mesozooplankton grazing, and carbon export from eight study sites varying from rich coastal upwelling to offshore oligotrophic conditions. Both the MCMC and L2MN methods predicted well-constrained rates of protozoan and mesozooplankton grazing with reasonable accuracy, but the MCMC method overestimated primary production. The MCMC method more accurately predicted the poorly constrained rate of vertical carbon export than the L2MN method, which consistently overestimated export. Results involving DOC and bacterial production were equivocal. Overall, when primary production is provided as model input, the MCMC method gives a robust depiction of ecosystem processes. Uncertainty in inverse ecosystem models is large and arises primarily from solution under-determinacy. We thus suggest that experimental programs focusing on food web fluxes expand the range of experimental measurements to include the nature and fate of detrital pools, which play large roles in the model.
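The L2MN solution of an underdetermined mass-balance system is the minimum-norm vector among all exact solutions, obtainable with a pseudoinverse. The toy network and numbers below are invented; real food-web implementations add inequality constraints such as flow positivity (quadratic programming), while the MCMC approach samples the feasible solution polytope instead.

```python
import numpy as np

# Underdetermined mass-balance system A x = b: 4 unknown flows, 2 balances.
# Rows are compartment budgets; all coefficients are illustrative only.
A = np.array([[1., -1.,  0., -1.],    # compartment 1: input - grazing - export
              [0.,  1., -1.,  0.]])   # compartment 2: grazing - respiration
b = np.array([0.2, 0.05])             # net rates of change (e.g. mmol C/m^2/d)

# Minimum-norm exact solution: among all x with A x = b, the smallest ||x||_2
x_l2mn = np.linalg.pinv(A) @ b
print(np.allclose(A @ x_l2mn, b))     # True: the balances are satisfied
print(x_l2mn)
```

The under-determinacy the abstract points to is visible here: two equations cannot pin down four flows, so the pseudoinverse picks one member of an infinite solution family, and the choice of norm is itself a modeling decision.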
Khorasani, M T; Shorgashti, S
2006-05-01
Microporous polyurethane vascular prostheses with a 4 mm diameter and 0.3-0.4 mm wall thickness were fabricated by a spray phase inversion technique. In this study, the effect of the distance between the spray guns (SG) and the rotating mandrel (RM), the effect of the rate of rotation of the mandrel (RRM), and the type of nonsolvent on the pore morphology of PU films were evaluated using the scanning electron microscopy (SEM) technique. It was observed that when the distance between SG and RM was increased or the rate of RM was decreased, the porosity of the PU films increased; consequently, the tensile strength decreased and the compliance value increased. Compliance was measured in vitro by volume and vessel diameter changes. Furthermore, when the coagulant (water) was changed to water/methanol, the porosity of the PU film increased and the porous morphology changed to a filamentous morphology. Attachment of anchorage-dependent cells, namely L929 fibroblast cells, was investigated in stationary culture conditions. Cell adhesion and growth were studied using optical photomicrographs. The results show that increasing the porosity content of the PU films consequently increases cell ingrowth.
Monitoring CO2 sequestration with a network inversion InSAR method
NASA Astrophysics Data System (ADS)
Rabus, B.; Ghuman, P.; MacDonald, B.
2009-05-01
The capture, containment and long-term storage of CO2 is increasingly discussed as an important means to counter climate change resulting from the ongoing release of greenhouse gases into the atmosphere. This CO2 sequestration often requires the pumping of the gas into deep saline aquifers. However, before sequestration can be regarded as a long-term solution it is necessary to investigate under which conditions permanent and leak-free capture of the CO2 is achieved in the substrate. We demonstrate that a combination of spaceborne synthetic aperture radar interferometry (InSAR) and ground-based measurements of ground uplift, caused by the underground release and spreading of the CO2, can be forged into a powerful tool to monitor sequestration. We use a novel InSAR approach, which combines the benefits of a point-based persistent scatterer algorithm with a network inversion approach, and an additional temporal filter to remove atmospheric disturbances at smaller scales down to 1 km and less. Using case studies from several injection wells we show that InSAR and ground-based data, in conjunction with geological and structural information above the aquifer as well as detailed injection logs, allow the volumetric spread of CO2 to be monitored at the mm per year level. For the majority of the studied wells CO2 appears to approach a stable sequestration state; however, in at least one case our results suggest leakage outside the aquifer.
Bender, P.; Bogart, L. K.; Posth, O.; Szczerba, W.; Rogers, S. E.; Castro, A.; Nilsson, L.; Zeng, L. J.; Sugunan, A.; Sommertune, J.; Fornara, A.; González-Alonso, D.; Barquín, L. Fernández; Johansson, C.
2017-01-01
The structural and magnetic properties of magnetic multi-core particles were determined by numerical inversion of small angle scattering and isothermal magnetisation data. The investigated particles consist of iron oxide nanoparticle cores (9 nm) embedded in poly(styrene) spheres (160 nm). A thorough physical characterisation of the particles included transmission electron microscopy, X-ray diffraction and asymmetrical flow field-flow fractionation. Their structure was ultimately disclosed by an indirect Fourier transform of static light scattering, small angle X-ray scattering and small angle neutron scattering data of the colloidal dispersion. The extracted pair distance distribution functions clearly indicated that the cores were mostly accumulated in the outer surface layers of the poly(styrene) spheres. To investigate the magnetic properties, the isothermal magnetisation curves of the multi-core particles (immobilised and dispersed in water) were analysed. The study stands out by applying the same numerical approach to extract the apparent moment distributions of the particles as for the indirect Fourier transform. It could be shown that the main peak of the apparent moment distributions correlated to the expected intrinsic moment distribution of the cores. Additional peaks were observed which signaled deviations of the isothermal magnetisation behavior from the non-interacting case, indicating weak dipolar interactions.
Micro-tubular solid oxide fuel cells with graded anodes fabricated with a phase inversion method
NASA Astrophysics Data System (ADS)
Zhao, Ling; Zhang, Xiaozhen; He, Beibei; Liu, Beibei; Xia, Changrong
Micro-tubular proton-conducting solid oxide fuel cells (SOFCs) are developed with thin-film BaZr0.1Ce0.7Y0.1Yb0.1O3-δ (BZCYYb) electrolytes supported on Ni-BZCYYb anodes. The substrates, NiO-BZCYYb hollow fibers, are prepared by an immersion-induced phase inversion technique. The resulting fibers have a special asymmetrical structure consisting of a sponge-like layer and a finger-like porous layer, which is well suited to serving as the anode support for micro-tubular SOFCs. The fibers are characterized in terms of porosity, mechanical strength, and electrical conductivity with respect to their sintering temperatures. To make a single cell, a dense BZCYYb electrolyte membrane about 20 μm thick is deposited on the hollow fiber by a suspension-coating process and a porous Sm0.5Sr0.5CoO3 (SSC)-BZCYYb cathode is subsequently fabricated by a slurry-coating technique. The micro-tubular proton-conducting SOFC generates a peak power density of 254 mW cm-2 at 650 °C when humidified hydrogen is used as the fuel and ambient air as the oxidant.
Mass, heat and nutrient fluxes in the Atlantic Ocean determined by inverse methods. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rintoul, Stephen Rich
1988-01-01
Inverse methods are applied to historical hydrographic data to address two aspects of the general circulation of the Atlantic Ocean. The method allows conservation statements for mass and other properties, along with a variety of other constraints, to be combined in a dynamically consistent way to estimate the absolute velocity field and associated property transports. The method was first used to examine the exchange of mass and heat between the South Atlantic and the neighboring ocean basins. The second problem addressed concerns the circulation and property fluxes across 24 and 36 deg N in the subtropical North Atlantic. Conservation statements are considered for the nutrients as well as mass, and the nutrients are found to contribute significant information independent of temperature and salinity.
NASA Technical Reports Server (NTRS)
Nakanishi, I.; Anderson, D. L.
1984-01-01
In the present investigation, the single-station method reported by Brune et al. (1960) is utilized for an analysis of long-period Love (G) and Rayleigh (R) waves recorded on digital seismic networks. The analysis was conducted to study the lateral heterogeneity of surface wave velocities. The data set is examined, and a description is presented of the single-station method. Attention is given to an error analysis for velocity measurements, the estimation of the geographical distribution of surface wave velocities, the global distribution of surface wave velocities, and the correlation of the surface wave velocities with the heat flow and the geoid. The conducted measurements and inversions of surface wave velocities are used as a basis to derive certain conclusions. It is found that the application of the single-station method to long-period surface waves recorded on digital networks makes it possible to reach an accuracy level comparable to great-circle velocity measurements.
NASA Astrophysics Data System (ADS)
Matson, Kenneth Howell
A method exists for marine seismic data which removes all orders of free surface multiples and suppresses all orders of internal multiples while leaving primaries intact. This method is based on the inverse scattering series and makes no assumptions about the subsurface earth model. The marine algorithm assumes that the sources and receivers are located in the water column. In the context of land and ocean bottom data, the sources and receivers are located on or in an elastic medium. This opens up the possibility of recording multicomponent seismic data. Because both compressional (P) and shear (S) primaries are recorded in multicomponent data, it has the potential for providing a more complete picture of the subsurface. Coupled with the benefits of the P and S primaries is a complex set of elastic free surface and internal multiples. In this thesis, I develop an inverse scattering series method to attenuate these elastic multiples from multicomponent land and ocean bottom data. For land data, this method removes elastic free surface multiples. For ocean bottom data, multiples associated with the top and bottom of the water column are removed. Internal multiples are strongly attenuated for both data types. In common with the marine formulation, this method makes no assumptions about the earth below the sources and receivers, and does not affect primaries. The latter property is important for amplitude variation with offset (AVO) analysis. The theory for multiple attenuation requires four-component (two source, two receiver) data, a known near surface or water bottom, near offsets, and a known source wavelet. Tests on synthetic data indicate that this method is still effective using data with fewer than four components and is robust with respect to errors in estimating the near surface or ocean bottom properties.
Smoothing Technique and Variance Propagation for Abel Inversion of Scattered Data
1977-04-01
data and determination of the coefficients and transformation matrix. The bulk of this work is accomplished in SUBROUTINE COVCAL. (The remainder of this abstract is unrecoverable: the source text here consists of flow-chart listings and numeric tables extracted from the report.)
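The regularized Abel inversion this record shares with the cometary-atmosphere method above can be sketched numerically: discretize the spherically symmetric emission rate on radial shells, build the geometric path-length matrix of the forward Abel transform (onion peeling), and solve a Tikhonov-damped least-squares problem. The grid, noise level, and regularization weight below are illustrative assumptions, not values from either paper.

```python
import numpy as np

def abel_matrix(r):
    """Path-length matrix L for onion peeling: column density I = L @ e,
    where e[j] is the (assumed constant) emission rate in shell [r[j], r[j+1]]."""
    n = len(r) - 1
    L = np.zeros((n, n))
    for i in range(n):          # line of sight at impact parameter b = r[i]
        b = r[i]
        for j in range(i, n):   # shells crossed by this line of sight
            L[i, j] = 2.0 * (np.sqrt(r[j + 1]**2 - b**2)
                             - np.sqrt(max(r[j]**2 - b**2, 0.0)))
    return L

def tikhonov_invert(L, I_obs, lam):
    """Solve min ||L e - I||^2 + lam ||D e||^2 with a first-difference D."""
    n = L.shape[1]
    D = np.diff(np.eye(n), axis=0)           # (n-1, n) smoothness operator
    return np.linalg.solve(L.T @ L + lam * D.T @ D, L.T @ I_obs)

# Synthetic test: recover a 1/r^2-like emission profile from noisy projections
r = np.linspace(1.0, 10.0, 41)               # shell boundaries
e_true = 1.0 / (0.5 * (r[:-1] + r[1:]))**2   # emission rate per shell
L = abel_matrix(r)
I_obs = L @ e_true + 1e-4 * np.random.default_rng(0).standard_normal(40)
e_rec = tikhonov_invert(L, I_obs, lam=1e-3)
print(np.max(np.abs(e_rec - e_true)))        # small reconstruction error
```

The first-difference penalty plays the role of the Tikhonov regularizer described in the header abstract: raising `lam` trades fidelity to the noisy column densities for smoothness of the retrieved radial profile.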
Jeschke, G; Mandelshtam, V A; Shaka, A J
1999-03-01
Harmonic inversion of electron spin echo envelope (ESEEM) time-domain signals by filter diagonalization is investigated as an alternative to Fourier transformation. It is demonstrated that this method features enhanced resolution compared to Fourier-transform magnitude spectra, since it can eliminate dispersive contributions to the line shape, even if no linear phase correction is possible. Furthermore, instrumental artifacts can be easily removed from the spectra if they are narrow either in time or frequency domain. This applies to echo crossings that are only incompletely eliminated by phase cycling and to spurious spectrometer frequencies, respectively. The method is computationally efficient and numerically stable and does not require extensive parameter adjustments or advance knowledge of the number of spectral lines. Experiments on gamma-irradiated methyl-alpha-d-glucopyranoside show that more information can be obtained from typical ESEEM time-domain signals by filter-diagonalization than by Fourier transformation.
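Filter diagonalization itself is an involved algorithm, but the underlying idea — fitting a sum of damped sinusoids directly in the time domain instead of Fourier-transforming — can be illustrated with simple linear prediction (Prony's method). This is a stand-in for intuition, not the filter-diagonalization algorithm, and the signal parameters are invented.

```python
import numpy as np

def prony_freqs(y, p, dt):
    """Estimate damped-sinusoid frequencies from a time signal by linear
    prediction: solve for prediction coefficients, then take the roots of
    the characteristic polynomial."""
    n = len(y)
    # Each sample predicted from the previous p samples: A @ c = -y[p:]
    A = np.column_stack([y[p - 1 - k : n - 1 - k] for k in range(p)])
    c = np.linalg.lstsq(A, -y[p:], rcond=None)[0]
    z = np.roots(np.concatenate(([1.0], c)))   # poles z_k = exp((-g + i*w) dt)
    return np.angle(z) / (2 * np.pi * dt)      # frequencies in Hz (+/- pairs)

dt = 1e-3
t = np.arange(512) * dt
y = np.exp(-30 * t) * np.cos(2 * np.pi * 55 * t)   # one damped line at 55 Hz
f = prony_freqs(y, p=2, dt=dt)
print(np.sort(np.abs(f)))                          # both roots near 55 Hz
```

As with filter diagonalization, the model order `p` replaces spectral resolution limits: the damped line is located exactly from a short record, where a Fourier magnitude spectrum would show only a broad peak.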
Simor, T; Kim, S K; Chu, W J; Pohost, G M; Elgavish, G A
1993-01-01
Shift-reagent-aided 23Na NMR spectroscopy allows differentiation of the intracellular (Na(i)) and extracellular sodium (Na(o)) signals. The goal of the present study has been to develop a 23Na NMR spectroscopic method to minimize the intensity of the shift-reagent-shifted Na(o) signal and thus increase Na(i) resolution. This is achieved by a selective inversion recovery (SIR) method which enhances the resolution between the Na(i) and Na(o) peaks in shift-reagent-aided 23Na NMR spectroscopy. The application of SIR with Dy(TTHA), Tm(DOTP), or with low concentrations of Dy(PPP)2 results in both good spectral resolution and physiologically acceptable contractile function in the isolated, perfused rat heart model.
NASA Astrophysics Data System (ADS)
Zou, Jiangwei; Tian, Biao; Chen, Zengping
2016-07-01
An inverse synthetic aperture radar (ISAR) high-precision compensation method is proposed based on coherent processing of intermediate-frequency direct-sampling data. First, the compensation of high-speed movement is performed by a modified linear frequency modulation matched filter during pulse compression. The motion trajectory in the down-range direction is then reconstructed by compensating the window-sampling difference of each pulse. Modified envelope correlation is applied to calculate the range profile shift between each pulse and the first one. Polynomial fitting is adopted to accurately estimate the motion characteristics. Subsequently, coherent processing is applied by combining range alignment and initial phase compensation. Migration through range cells can then be corrected by applying the keystone transform to the highly coherent data. Consequently, ISAR images with high quality are achieved. Experimental results on simulated and real data demonstrate the validity of the proposed method.
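The envelope-correlation step used for range alignment can be sketched as a circular cross-correlation between range profiles. The synthetic Gaussian envelope below is illustrative, and this integer-bin version omits the refinements (sub-bin interpolation, the paper's modified correlation) of a real ISAR processor.

```python
import numpy as np

def envelope_shift(p0, p):
    """Estimate the range-bin shift of profile p relative to reference p0
    by circular envelope cross-correlation (integer bins only)."""
    c = np.fft.ifft(np.fft.fft(p) * np.conj(np.fft.fft(p0))).real
    k = int(np.argmax(c))
    n = len(p0)
    return k if k <= n // 2 else k - n   # map to a signed shift

n = 256
ref = np.exp(-0.5 * ((np.arange(n) - 100) / 4.0)**2)   # synthetic range envelope
shifted = np.roll(ref, 7)                               # target drifted 7 bins
print(envelope_shift(ref, shifted))                     # 7
```

In the full method, such per-pulse shifts would be fed to the polynomial fit of the motion trajectory, after which each profile is shifted back before phase compensation.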
Brig. Gen. Richard F. Abel and Col. Nathan J. Lindsay answering questions
NASA Technical Reports Server (NTRS)
1982-01-01
Brigadier General Richard F. Abel, right, director of public affairs for the Air Force, and Colonel Nathan J. Lindsay of the USAF's space division, answer questions concerning STS-4 during a press conference at JSC on May 20, 1982.
Muto, A.; Scambos, T.A.; Steffen, K.; Slater, A.G.; Clow, G.D.
2011-01-01
We use measured firn temperatures down to depths of 80 to 90 m at four locations in the interior of Dronning Maud Land, East Antarctica to derive surface temperature histories spanning the past few decades using two different inverse methods. We find that the mean surface temperatures near the ice divide (the highest-elevation ridge of the East Antarctic Ice Sheet) have increased approximately 1 to 1.5 K within the past ~50 years, although the onset and rate of this warming vary by site. Histories at two locations, NUS07-5 (78.65S, 35.64E) and NUS07-7 (82.07S, 54.89E), suggest that the majority of this warming took place in the past one or two decades. Slight cooling to no change was indicated at one location, NUS08-5 (82.63S, 17.87E), off the divide near the Recovery Lakes region. In the most recent decade, inversion results indicate both cooler and warmer periods at different sites due to high interannual variability and the relatively high resolution of the inverted surface temperature histories. The overall results of our analysis fit a pattern of recent climate trends emerging from several sources of Antarctic temperature reconstructions: there is a contrast in surface temperature trends possibly related to altitude in this part of East Antarctica. Copyright 2011 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
de Foy, B.; Wiedinmyer, C.; Schauer, J. J.
2012-10-01
Gaseous elemental mercury is a global pollutant that can lead to serious health concerns via deposition to the biosphere and bio-accumulation in the food chain. Hourly measurements between June 2004 and May 2005 in an urban site (Milwaukee, WI) show elevated levels of mercury in the atmosphere with numerous short-lived peaks as well as longer-lived episodes. The measurements are analyzed with an inverse model to obtain information about mercury emissions. The model is based on high resolution meteorological simulations (WRF), hourly back-trajectories (WRF-FLEXPART) and a chemical transport model (CAMx). The hybrid formulation combining back-trajectories and Eulerian simulations is used to identify potential source regions as well as the impacts of forest fires and lake surface emissions. Uncertainty bounds are estimated using a bootstrap method on the inversions. Comparison with the US Environmental Protection Agency's National Emission Inventory (NEI) and Toxic Release Inventory (TRI) shows that emissions from coal-fired power plants are properly characterized, but emissions from local urban sources, waste incineration and metal processing could be significantly under-estimated. Emissions from the lake surface and from forest fires were found to have significant impacts on mercury levels in Milwaukee, and to be underestimated by a factor of two or more.
NASA Astrophysics Data System (ADS)
de Foy, B.; Wiedinmyer, C.; Schauer, J. J.
2012-05-01
Gaseous elemental mercury is a global pollutant that can lead to serious health concerns via deposition to the biosphere and bio-accumulation in the food chain. Hourly measurements between June 2004 and May 2005 in an urban site (Milwaukee, WI) show elevated levels of mercury in the atmosphere with numerous short-lived peaks as well as longer-lived episodes. The measurements are analyzed with an inverse model to obtain information about mercury emissions. The model is based on high resolution meteorological simulations (WRF), hourly back-trajectories (WRF-FLEXPART) and forward grid simulations (CAMx). The hybrid formulation combining back-trajectories and grid simulations is used to identify potential source regions as well as the impacts of forest fires and lake surface emissions. Uncertainty bounds are estimated using a bootstrap method on the inversions. Comparison with the US Environmental Protection Agency's National Emission Inventory (NEI) and Toxic Release Inventory (TRI) shows that emissions from coal-fired power plants are properly characterized, but emissions from local urban sources, waste incineration and metal processing could be significantly under-estimated. Emissions from the lake surface and from forest fires were found to have significant impacts on mercury levels in Milwaukee, and to be underestimated by a factor of two or more.
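The bootstrap step mentioned above can be sketched generically: re-run a linear inversion on resampled observation rows and read confidence bounds off the percentiles of the resulting estimates. Everything below is synthetic (the actual model uses back-trajectory and grid-simulation sensitivities, not random matrices):

```python
import numpy as np

# Minimal sketch of bootstrap uncertainty bounds for a linear inverse step.
rng = np.random.default_rng(0)

n_obs, n_src = 200, 3
A = rng.uniform(0.0, 1.0, (n_obs, n_src))     # source-receptor sensitivities
x_true = np.array([5.0, 1.0, 2.0])            # "true" emissions (arbitrary units)
y = A @ x_true + rng.normal(0.0, 0.5, n_obs)  # observed concentrations

def invert(A, y):
    return np.linalg.lstsq(A, y, rcond=None)[0]

# Bootstrap: re-invert on resampled observation/sensitivity row pairs
estimates = []
for _ in range(500):
    idx = rng.integers(0, n_obs, n_obs)
    estimates.append(invert(A[idx], y[idx]))
estimates = np.array(estimates)

lo, hi = np.percentile(estimates, [2.5, 97.5], axis=0)  # 95% bounds per source
```

The spread of the bootstrap estimates stands in for the uncertainty bounds quoted in the abstract; systematic (model) error is not captured by this resampling.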
Shocking Tails in the Major Merger Abell 2744
Owers, Matt S.; Couch, Warrick J.; Nulsen, Paul E. J.; Randall, Scott W.
2012-05-01
We identify four rare 'jellyfish' galaxies in Hubble Space Telescope imagery of the major merger cluster Abell 2744. These galaxies harbor trails of star-forming knots and filaments which have formed in situ in gas tails stripped from the parent galaxies, indicating they are in the process of being transformed by the environment. Further evidence for rapid transformation in these galaxies comes from their optical spectra, which reveal starburst, poststarburst, and active galactic nucleus features. Most intriguingly, three of the jellyfish galaxies lie near intracluster medium features associated with a merging 'Bullet-like' subcluster and its shock front detected in Chandra X-ray images. We suggest that the high-pressure merger environment may be responsible for the star formation in the gaseous tails. This provides observational evidence for the rapid transformation of galaxies during the violent core passage phase of a major cluster merger.
Giant ringlike radio structures around galaxy cluster Abell 3376.
Bagchi, Joydeep; Durret, Florence; Neto, Gastão B Lima; Paul, Surajit
2006-11-03
In the current paradigm of cold dark matter cosmology, large-scale structures are assembling through hierarchical clustering of matter. In this process, an important role is played by megaparsec (Mpc)-scale cosmic shock waves, arising in gravity-driven supersonic flows of intergalactic matter onto dark matter-dominated collapsing structures such as pancakes, filaments, and clusters of galaxies. Here, we report Very Large Array telescope observations of giant (approximately 2 Mpc by 1.6 Mpc), ring-shaped nonthermal radio-emitting structures, found at the outskirts of the rich cluster of galaxies Abell 3376. These structures may trace the elusive shock waves of cosmological large-scale matter flows, which are energetic enough to power them. These radio sources may also be the acceleration sites where magnetic shocks are possibly boosting cosmic-ray particles with energies of up to 10^18 to 10^19 electron volts.
Ram pressure induced star formation in Abell 3266
NASA Astrophysics Data System (ADS)
Bonsall, Brittany
An X-ray observation of the merging galaxy cluster Abell 3266 was obtained via the ROSAT PSPC. This information, along with spectroscopic data from the WIde-field Nearby Galaxy-clusters Survey (WINGS), was used to investigate whether ram pressure is a mechanism that influences star formation. Galaxies exhibiting ongoing star formation are identified by the presence of strong Balmer lines (Hβ), known to correspond to early-type stars. Older galaxies in which a rapid increase in star formation has recently ceased, known as E+A galaxies, are identified by strong Hβ absorption coupled with little to no [OII] emission. The correlation between recent star formation and "high" ram pressure, defined by Kapferer et al. (2009) as ≥ 5 × 10^-11 dyn cm^-2, was tested and led to a contradiction of the previously held belief that ram pressure influences star formation on the global cluster scale.
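For context on the threshold quoted above, ram pressure scales as P_ram = ρ_ICM v² (Gunn & Gott 1972). A back-of-envelope check with illustrative values (not measured Abell 3266 quantities):

```python
# Back-of-envelope ram-pressure check, P_ram = rho_ICM * v^2, in cgs units.
# Both input values below are assumptions for illustration only.
rho_icm = 2e-27      # ICM mass density, g cm^-3 (assumed)
v = 2.0e8            # galaxy speed relative to ICM, cm s^-1 (2000 km/s)

p_ram = rho_icm * v**2          # dyn cm^-2; here ~8e-11
threshold = 5e-11               # "high" ram pressure per Kapferer et al. (2009)
is_high = p_ram >= threshold    # True for these illustrative values
```

This shows why only fairly dense cluster regions or fast-moving galaxies reach the "high" regime tested in the abstract.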
Inverse transonic airfoil design methods including boundary layer and viscous interaction effects
NASA Technical Reports Server (NTRS)
Carlson, L. A.
1979-01-01
The development and incorporation into TRANDES of a fully conservative analysis method utilizing the artificial compressibility approach is described. The method allows for lifting cases and finite thickness airfoils and utilizes a stretched coordinate system. Wave drag and massive separation studies are also discussed.
The distribution of dark and luminous matter in the unique galaxy cluster merger Abell 2146
NASA Astrophysics Data System (ADS)
King, Lindsay J.; Clowe, Douglas I.; Coleman, Joseph E.; Russell, Helen R.; Santana, Rebecca; White, Jacob A.; Canning, Rebecca E. A.; Deering, Nicole J.; Fabian, Andrew C.; Lee, Brandyn E.; Li, Baojiu; McNamara, Brian R.
2016-06-01
Abell 2146 (z = 0.232) consists of two galaxy clusters undergoing a major merger. The system was discovered in previous work, where two large shock fronts were detected using the Chandra X-ray Observatory, consistent with a merger close to the plane of the sky, caught soon after first core passage. A weak gravitational lensing analysis of the total gravitating mass in the system, using the distorted shapes of distant galaxies seen with Advanced Camera for Surveys - Wide Field Channel on Hubble Space Telescope, is presented. The highest peak in the reconstruction of the projected mass is centred on the brightest cluster galaxy (BCG) in Abell 2146-A. The mass associated with Abell 2146-B is more extended. Bootstrapped noise mass reconstructions show the mass peak in Abell 2146-A to be consistently centred on the BCG. Previous work showed that BCG-A appears to lag behind an X-ray cool core; although the peak of the mass reconstruction is centred on the BCG, it is also consistent with the X-ray peak given the resolution of the weak lensing mass map. The best-fitting mass model with two components centred on the BCGs yields M200 = 1.1^{+0.3}_{-0.4} × 10^{15} and 3^{+1}_{-2} × 10^{14} M⊙ for Abell 2146-A and Abell 2146-B, respectively, assuming a mass concentration parameter of c = 3.5 for each cluster. From the weak lensing analysis, Abell 2146-A is the primary halo component, and the origin of the apparent discrepancy with the X-ray analysis where Abell 2146-B is the primary halo is being assessed using simulations of the merger.
NASA Astrophysics Data System (ADS)
Fang, Jun; Zhang, Lizao; Duan, Huiping; Huang, Lei; Li, Hongbin
2016-05-01
The application of sparse representation to SAR/ISAR imaging has attracted much attention over the past few years. This new class of sparse-representation-based imaging methods presents a number of unique advantages over conventional range-Doppler methods; the basic idea behind these works is to formulate SAR/ISAR imaging as a sparse signal recovery problem. In this paper, we propose a new two-dimensional pattern-coupled sparse Bayesian learning (SBL) method to capture the underlying cluster patterns of the ISAR target images. Based on this model, an expectation-maximization (EM) algorithm is developed to infer the maximum a posteriori (MAP) estimate of the hyperparameters, along with the posterior distribution of the sparse signal. Experimental results demonstrate that the proposed method is able to achieve a substantial performance improvement over existing algorithms, including the conventional SBL method.
NASA Astrophysics Data System (ADS)
Giudici, Mauro; Casabianca, Davide; Comunian, Alessandro
2015-04-01
The basic classical inverse problem of groundwater hydrology aims at determining aquifer transmissivity (T) from measurements of hydraulic head (h) and estimates or measurements of source terms, with the least possible knowledge of hydraulic transmissivity. The theory of inverse problems shows that this is an example of an ill-posed problem, for which non-uniqueness and instability (or at least ill-conditioning) might preclude the computation of a physically acceptable solution. One of the methods to reduce the problems with non-uniqueness, ill-conditioning and instability is a tomographic approach, i.e., the use of data corresponding to independent flow situations. The latter might correspond to different hydraulic stimulations of the aquifer, i.e., to different pumping schedules and flux rates. Three inverse methods have been analyzed and tested to profit from the use of multiple sets of data: the Differential System Method (DSM), the Comparison Model Method (CMM) and the Double Constraint Method (DCM). DSM and CMM need h all over the domain, and thus the first step in their application is the interpolation of measurements of h at sparse points. Moreover, they also need knowledge of the source terms (aquifer recharge, well pumping rates) all over the aquifer. DSM is intrinsically based on the use of multiple data sets, which permits writing a first-order partial differential equation for T, whereas CMM and DCM were originally proposed to invert a single data set and have been extended to work with multiple data sets in this work. CMM and DCM are based on Darcy's law, which is used to update an initial guess of the T field with formulas based on a comparison of different hydraulic gradients. In particular, the CMM algorithm corrects the T estimate with the ratio of the observed hydraulic gradient to that obtained with a comparison model which shares the same boundary conditions and source terms as the model to be calibrated, but a tentative T field. On the other hand
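A CMM-style multiplicative update can be sketched in a 1-D steady-flow toy problem. Under Darcy's law q = -T dh/dx with a known uniform flux, scaling the tentative T field by the ratio of the comparison-model gradient to the observed gradient recovers the true field in one step (all values below are illustrative; the paper's method operates on 2-D aquifer fields with interpolated heads):

```python
import numpy as np

# One Comparison-Model-Method-style update in a 1-D steady-flow toy problem.
x = np.linspace(0.0, 1000.0, 101)                         # m
T_true = 1e-3 * (1.0 + 0.5 * np.sin(2 * np.pi * x / 1000.0))  # m^2/s, "truth"
q = 1e-6                                                  # uniform flux, m^2/s

grad_obs = -q / T_true            # "observed" hydraulic gradient (Darcy)
T_guess = np.full_like(x, 5e-4)   # tentative T field
grad_cm = -q / T_guess            # gradient from the comparison model

# Update: scale T by the ratio of comparison-model to observed gradients
T_new = T_guess * np.abs(grad_cm) / np.abs(grad_obs)
```

In this idealized setting the update is exact; with noisy, interpolated heads the correction is applied iteratively, as in the abstract.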
GHRS observations of mass-loaded flows in Abell 78
NASA Technical Reports Server (NTRS)
Harrington, J. Patrick; Borkowski, Kazimierz J.; Tsvetanov, Zlatan
1995-01-01
Spectroscopic observations of the central star of the planetary nebula Abell 78 were obtained with the Goddard High Resolution Spectrograph (GHRS) onboard the Hubble Space Telescope (HST) in the vicinity of the C IV lambda 1548.2, 1550.8 doublet. We find a series of narrow absorption features superposed on the broad, P Cygni stellar wind profile. These features are seen in both components of the doublet at heliocentric radial velocities of -18, -71, -131, and -192 km/s. At higher velocities, individual components are no longer distinct but, rather, merge into a continuous absorption extending to approximately -385 km/s. This is among the highest velocities ever detected for gas in a planetary nebula. The -18 km/s feature originates in an outer envelope of normal composition, while the -71 km/s feature is produced in the wind-swept shell encircling an irregular wind-blown bubble in the planetary nebula center. The hydrogen-poor ejecta of Abell 78, consisting of dense knots with wind-blown tails, are located in the bubble's interior, in the vicinity of the stellar wind termination shock. The high-velocity C IV lambda 154 absorption features can be explained as due to parcels of ejecta being accelerated to high velocities as they are swept up by the stellar wind during its interaction with dense condensations of H-poor ejecta. As the ablated material is accelerated, it will partially mix with the stellar wind, creating a mass-loaded flow. The abundance anomalies seen at the rim of the bubble attest to the transport of H-poor knot material by such a flow.
NASA Astrophysics Data System (ADS)
Theobald, Mark R.; Crittenden, Peter D.; Tang, Y. Sim; Sutton, Mark A.
2013-12-01
Penguin colonies represent some of the most concentrated sources of ammonia emissions to the atmosphere in the world. The ammonia emitted into the atmosphere can have a large influence on the nitrogen cycling of ecosystems near the colonies. However, despite the ecological importance of the emissions, no measurements of ammonia emissions from penguin colonies have been made. The objective of this work was to determine the ammonia emission rate of a penguin colony using inverse-dispersion modelling and gradient methods. We measured meteorological variables and mean atmospheric concentrations of ammonia at seven locations near a colony of Adélie penguins in Antarctica to provide input data for inverse-dispersion modelling. Three different atmospheric dispersion models (ADMS, LADD and a Lagrangian stochastic model) were used to provide a robust emission estimate. The Lagrangian stochastic model was applied both in ‘forwards’ and ‘backwards’ mode to compare the difference between the two approaches. In addition, the aerodynamic gradient method was applied using vertical profiles of mean ammonia concentrations measured near the centre of the colony. The emission estimates derived from the simulations of the three dispersion models and the aerodynamic gradient method agreed quite well, giving a mean emission of 1.1 g ammonia per breeding pair per day (95% confidence interval: 0.4-2.5 g ammonia per breeding pair per day). This emission rate represents a volatilisation of 1.9% of the estimated nitrogen excretion of the penguins, which agrees well with that estimated from a temperature-dependent bioenergetics model. We found that, in this study, the Lagrangian stochastic model seemed to give more reliable emission estimates in ‘forwards’ mode than in ‘backwards’ mode due to the assumptions made.
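The forward-mode emission estimate reduces to a linear scaling: run the dispersion model with a unit source strength, then fit the source strength that best matches the measured concentrations. A minimal synthetic sketch (random sensitivities, not the Adélie-colony data or any of the named models):

```python
import numpy as np

# Forward inverse-dispersion in sketch form: concentrations are linear in
# the (single) source strength, so the emission estimate is a 1-parameter
# least-squares fit across the samplers. All numbers are synthetic.
rng = np.random.default_rng(1)

c_model_unit = rng.uniform(0.5, 2.0, 7)   # modelled conc. at 7 samplers for a
                                          # unit emission (ug m^-3 per g s^-1)
q_true = 3.0                              # "true" emission, g s^-1
c_measured = q_true * c_model_unit * rng.normal(1.0, 0.1, 7)  # noisy obs

# Least-squares emission estimate over all samplers
q_est = np.sum(c_measured * c_model_unit) / np.sum(c_model_unit**2)
```

Backward (receptor-oriented) mode computes the same sensitivities with reversed trajectories; the abstract notes the two need not agree under the assumptions made.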
E-coil: an inverse boundary element method for a quasi-static problem.
Sanchez, Clemente Cobos; Garcia, Salvador Gonzalez; Power, Henry
2010-06-07
Boundary element methods represent a valuable approach for designing gradient coils; these methods are based on meshing the current carrying surface into an array of boundary elements. The temporally varying magnetic fields produced by gradient coils induce electric currents in conducting tissues and so the exposure of human subjects to these magnetic fields has become a safety concern, especially with the increase in the strength of the field gradients used in magnetic resonance imaging. Here we present a boundary element method for the design of coils that minimize the electric field induced in prescribed conducting systems. This work also details some numerical examples of the application of this coil design method. The reduction of the electric field induced in a prescribed region inside the coils is also evaluated.
A new multistage groundwater transport inverse method: Presentation, evaluation, and implications
Anderman, E.R.; Hill, M.C.
1999-01-01
More computationally efficient methods of using concentration data are needed to estimate groundwater flow and transport parameters. This work introduces and evaluates a three-stage nonlinear-regression-based iterative procedure in which trial advective-front locations link decoupled flow and transport models. Method accuracy and efficiency are evaluated by comparing results to those obtained when flow- and transport-model parameters are estimated simultaneously. The new method is evaluated as conclusively as possible by using a simple test case that includes distinct flow and transport parameters, but does not include any approximations that are problem dependent. The test case is analytical; the only flow parameter is a constant velocity, and the transport parameters are longitudinal and transverse dispersivity. Any difficulties detected using the new method in this ideal situation are likely to be exacerbated in practical problems. Monte-Carlo analysis of observation error ensures that no specific error realization obscures the results. Results indicate that, while this, and probably other, multistage methods do not always produce optimal parameter estimates, the computational advantage may make them useful in some circumstances, perhaps as a precursor to using a simultaneous method.
NASA Astrophysics Data System (ADS)
Muto, Atsuhiro
The climate trend of the Antarctic interior remains unclear relative to the rest of the globe because of a lack of long-term weather records. Recent studies by other authors utilizing sparse available records, satellite data, and models have estimated a significant warming trend in the near-surface air temperature in West Antarctica and weak and poorly constrained warming trend in East Antarctica for the past 50 years. In this dissertation, firn thermal profiling was used to detect multi-decadal surface temperature trends in the interior of East Antarctica where few previous records of any kind exist. The surface temperature inversion from firn temperature profiles provides a climate reconstruction independent of firn chemistry, sparse weather data, satellite data, or ice cores, and therefore may be used in conjunction with these data sources for corroboration of climate trends over the large ice sheets. During the Norwegian-U.S. IPY Scientific Traverse of East Antarctica, in the austral summers of 2007--08 and 2008--09, thermal-profiling telemetry units were installed at five locations. Each unit consists of 16 PRTs (Platinum Resistance Thermometers) distributed in a back-filled borehole of 80 to 90 m deep. The accuracy of the temperature measurement is 0.03 K. Geophysical inverse methods (linearized and Monte Carlo inversion) were applied to one full year of data collected from three units installed near the ice divide in the Dome Fuji/Pole of Inaccessibility region and one on Recovery Lake B, situated >500 km south to south-west of and >1000 m lower in altitude than sites near the ice divide. Three sites near the ice divide indicate that the mean surface temperatures have increased approximately 1 to 1.5 K within the past ˜50 years although the onset and the duration of this warming vary by site. On the other hand, slight cooling to no change was detected at the Recovery Lake B site. Although uncertainties remain due to limitations of the method, these results
Dirken, J J; Vlaanderen, W
1994-01-01
Inversion of the uterus is a rare complication of childbirth. A primigravida aged 21 and a multigravida aged 32, hospitalized as emergency cases because of inversion of the uterus with major blood loss, were treated with infusion of liquids (to combat shock), repositioning of the uterus under anaesthesia and prevention of reinversion by uterine tonics. Inversion of the uterus should be part of the differential diagnosis in every case of fluxus post partum.
Jain, Pankaj C; Varadarajan, Raghavan
2014-03-15
With the development of deep sequencing methodologies, it has become important to construct site saturation mutant (SSM) libraries in which every nucleotide/codon in a gene is individually randomized. We describe methodologies for the rapid, efficient, and economical construction of such libraries using inverse polymerase chain reaction (PCR). We show that if the degenerate codon is in the middle of the mutagenic primer, there is an inherent PCR bias due to the thermodynamic mismatch penalty, which decreases the proportion of unique mutants. Introducing a nucleotide bias in the primer can alleviate the problem. Alternatively, if the degenerate codon is placed at the 5' end, there is no PCR bias, which results in a higher proportion of unique mutants. This also facilitates detection of deletion mutants resulting from errors during primer synthesis. This method can be used to rapidly generate SSM libraries for any gene or nucleotide sequence, which can subsequently be screened and analyzed by deep sequencing.
NASA Technical Reports Server (NTRS)
Larour, E.; Rignot, E.; Joughin, I.; Aubry, D.
2005-01-01
The Antarctic Ice Sheet is surrounded by large floating ice shelves that spread under their own weight into the ocean. Ice shelf rigidity depends on ice temperature and fabrics, and is influenced by ice flow and the delicate balance between bottom and surface accumulation. Here, we use an inverse control method to infer the rigidity of the Ronne Ice Shelf that best matches observations of ice velocity from satellite radar interferometry. Ice rigidity, or flow law parameter B, is shown to vary between 300 and 900 kPa a^{1/3}. Ice is softer along the side margins due to frictional heating, and harder along the outflow of large glaciers, which advect cold continental ice. Melting at the bottom surface of the ice shelf increases its rigidity, while freezing decreases it. Accurate numerical modelling of ice shelf flow must account for this spatial variability in mechanical characteristics.
NASA Astrophysics Data System (ADS)
Virieux, J.; Bretaudeau, F.; Metivier, L.; Brossier, R.
2013-12-01
Simultaneous inversion of seismic velocities and source parameters has been a long-standing challenge in seismology since the early developments of the 1970s (Aki et al. 1976, Aki and Lee 1976, Crosson 1976) and the first attempts to mitigate the trade-off between the very different parameters influencing travel-times (Spencer and Gubbins 1980, Pavlis and Booker 1980). There is a strong trade-off between earthquake source positions, initial times and velocities during the tomographic inversion: mitigating these trade-offs is usually carried out empirically (Lemeur et al. 1997). This procedure is not optimal and may lead to errors in the velocity reconstruction as well as in the source localization. For a better simultaneous estimation of such a multi-parametric reconstruction problem, one may benefit from improved local optimization such as a full Newton method, where the Hessian helps balance the different physical parameter quantities and improves the coverage at the point of reconstruction. Unfortunately, the full Hessian operator is not easily computed in large models and with large datasets. Truncated Newton (TCN) is an alternative optimization approach (Métivier et al. 2012) that allows resolution of the normal equation H Δm = -g using a matrix-free conjugate gradient algorithm. It only requires the ability to compute the gradient of the misfit function and Hessian-vector products. Traveltime maps can be computed in the whole domain by numerical modeling (Vidale 1998, Zhao 2004). The gradient and the Hessian-vector products for velocities can be computed without ray tracing using 1st- and 2nd-order adjoint-state methods, for the cost of 1 and 2 additional modeling steps (Plessix 2006, Métivier et al. 2012). Reciprocity allows one to compute accurately the gradient and the full Hessian for each coordinate of the sources and for their initial times. The resolution of the problem is then done through two nested loops. The model update Δm is
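The inner loop of the truncated-Newton scheme, solving H Δm = -g by conjugate gradients using only Hessian-vector products, can be sketched on a toy quadratic misfit. The 3×3 matrix below is a stand-in for the real tomography Hessian, which is never formed explicitly:

```python
import numpy as np

# Matrix-free truncated-Newton step: CG on H dm = -g using Hessian-vector
# products only. Toy quadratic misfit 0.5 m^T A m - b^T m (illustrative).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])     # SPD "Hessian"
b = np.array([1.0, -2.0, 0.5])

def grad(m):
    return A @ m - b                # gradient of the toy misfit

def hess_vec(v):
    return A @ v                    # in practice: adjoint-state product, no matrix

def truncated_newton_step(m, n_cg=25, tol=1e-12):
    """Inner CG loop: approximately solve hess_vec(dm) = -grad(m)."""
    dm = np.zeros_like(m)
    r = -grad(m)                    # residual of H dm + g = 0 at dm = 0
    p = r.copy()
    for _ in range(n_cg):
        Hp = hess_vec(p)
        alpha = (r @ r) / (p @ Hp)
        dm += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + (r_new @ r_new) / (r @ r) * p
        r = r_new
    return dm

m = np.zeros(3)
m += truncated_newton_step(m)       # one outer Newton update
```

For a quadratic misfit a single exact Newton step reaches the minimizer; in the nonlinear tomography problem the outer loop repeats with a line search, and the CG iteration is truncated early, which is what gives the method its name.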
NASA Astrophysics Data System (ADS)
Yang, Guoce; Bai, Benfeng; Liu, Wenqi; Wu, Xiaochun
2016-04-01
Metal nanoparticles (NPs) have wide applications in various fields due to their unique properties. The accurate and fast characterization of metal NP concentration is in high demand in the synthesis, metrology, and applications of NPs. The commonly used inductively coupled plasma mass spectrometry (ICP-MS) is a standard method for measuring the mass concentration (MC) of NPs, even though it is time-consuming, expensive, and destructive, while methods for characterizing the number concentration (NC) of NPs are less explored. Here, we present an improved optical extinction-scattering spectroscopic method for the fast, non-destructive, and simultaneous characterization of the MC and NC of a poly-disperse metal NP colloid. By measuring the extinction spectrum and the 90° scattering spectrum of the nanorod (NR) colloid, we can solve an inverse scattering problem to accurately retrieve the two-dimensional joint probability density function (2D-JPDF) with respect to the width and the aspect ratio of the NR sample, from which the NC and MC of the colloidal NPs can be calculated. This method is powerful for characterizing both the geometric parameters and the concentrations, including the MC and NC, of poly-disperse metal NPs simultaneously. It is very useful for the non-destructive, non-contact, and in-situ comprehensive measurement of colloidal NPs. This method also has the potential to characterize NPs of other shapes or made of other materials.
van Dijk, Kees J.; Janssen, Marcus L. F.; Zwartjes, Daphne G. M.; Temel, Yasin; Visser-Vandewalle, Veerle; Veltink, Peter H.; Benazzouz, Abdelhamid; Heida, Tjitske
2016-01-01
Objective: In this study we introduce the use of the current source density (CSD) method as a way to visualize the spatial organization of evoked responses in the rat subthalamic nucleus (STN) at fixed time stamps resulting from motor cortex stimulation. This method offers opportunities to visualize neuronal input and study the relation between the synaptic input and the neural output of neural populations. Approach: Motor cortex evoked local field potentials and unit activity were measured in the subthalamic region, with a 3D measurement grid consisting of 320 measurement points and high spatial resolution. This allowed us to visualize the evoked synaptic input by estimating the current source density (CSD) from the measured local field potentials, using the inverse CSD method. At the same time, the neuronal output of the cells within the grid is assessed by calculating post stimulus time histograms. Main results: The CSD method resulted in clear and distinguishable sources and sinks of the neuronal input activity in the STN after motor cortex stimulation. We showed that the center of the synaptic input of the STN from the motor cortex is located dorsal to the input from globus pallidus. Significance: For the first time we have performed CSD analysis on motor cortex stimulation evoked LFP responses in the rat STN as a proof of principle. Our results suggest that the CSD method can be used to gain new insights into the spatial extent of synaptic pathways in brain structures. PMID:27857684
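The core idea of inverse CSD is to posit an explicit forward model F mapping planar current sources to recorded potentials and then invert it. The sketch below uses a disk-source kernel in the spirit of the delta-source iCSD formulation (Pettersen et al., 2006); the conductivity, geometry, and source pattern are illustrative assumptions, not values from the rat STN study:

```python
import numpy as np

# Inverse CSD in sketch form: build a forward matrix F (sources -> potentials)
# from an assumed volume-conductor model, then solve F s = phi for the sources.
sigma = 0.3                     # extracellular conductivity, S/m (assumed)
R = 500e-6                      # radius of the planar disk sources, m (assumed)
z = np.arange(16) * 100e-6      # 16 contacts at 100-um spacing

# Potential at contact j from a unit planar source at z_i (disk-source kernel)
dz = np.abs(z[:, None] - z[None, :])
F = (np.sqrt(dz**2 + R**2) - dz) / (2.0 * sigma)

s_true = np.zeros(16)
s_true[7], s_true[8] = -1.0, 1.0     # an assumed sink-source pair
phi = F @ s_true                     # "measured" potentials

s_est = np.linalg.solve(F, phi)      # inverse CSD: invert the forward model
```

The standard second-difference CSD estimate is the special case of a simpler forward model; the matrix formulation above is what lets iCSD handle boundaries and finite source extent.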
Liang, Wei; Murakawa, Hidekazu
2014-01-01
Welding-induced deformation not only negatively affects dimensional accuracy but also degrades the performance of the product. If welding deformation can be accurately predicted beforehand, the predictions will be helpful for finding effective ways to improve manufacturing accuracy. To date, two kinds of finite element method (FEM) can be used to simulate welding deformation. One is the thermal elastic-plastic FEM and the other is the elastic FEM based on inherent strain theory. The former can only be used to calculate welding deformation for small- or medium-scale welded structures due to the limitation of computing speed. On the other hand, the latter is an effective method to estimate the total welding distortion for large and complex welded structures, even though it neglects the detailed welding process. When the elastic FEM is used to calculate the welding-induced deformation of a large structure, the inherent deformations of each typical joint should be obtained beforehand. In this paper, a new method based on inverse analysis is proposed to obtain the inherent deformations of weld joints. By introducing the inherent deformations obtained with the proposed method into the elastic FEM based on inherent strain theory, we predicted the welding deformation of a panel structure with two longitudinal stiffeners. In addition, experiments were carried out to verify the simulation results.
Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1988-01-01
Since the project is rapidly nearing conclusion, the status of the tasks outlined in the original proposal are briefly outlined. These tasks include: viscous interation and wake curvature effects; code optimization and design methodology studies; methods for the design of isolated regions; program improvement efforts; and validation, testing, and documentation.
Chromatid Paints: A New Method for Detecting Tumor-Specific Chromosomal Inversions
1999-10-01
A noise source identification technique using an inverse Helmholtz integral equation method
NASA Technical Reports Server (NTRS)
Gardner, B. K.; Bernhard, R. J.
1988-01-01
A technique is developed which utilizes numerical models and field pressure information to characterize acoustic fields and identify acoustic sources. The numerical models are based on boundary element numerical procedures. Either pressure, velocity, or passive boundary conditions, in the form of impedance boundary conditions, may be imposed on the numerical model. Alternatively, if no boundary information is known, a boundary condition can be left unspecified. Field pressure data may be specified to overdetermine the numerical problem. The problem is solved numerically for the complete sound field, from which the acoustic sources may be determined. The model can then be used to identify acoustic intensity paths in the field. The solution can be modified and the model used to evaluate design alternatives. In this investigation the method is tested analytically and verified. In addition, the sensitivity of the method to random and bias error in the input data is demonstrated.
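The overdetermination step can be sketched generically: append field-pressure rows to the boundary-element system and solve the taller system in the least-squares sense. The matrices below are synthetic placeholders, not an actual Helmholtz BEM assembly:

```python
import numpy as np

# Overdetermined boundary-element system in sketch form: boundary equations
# plus extra field-point equations, solved by least squares.
rng = np.random.default_rng(2)
n_boundary, n_field, n_unknown = 30, 20, 30

A_boundary = rng.normal(size=(n_boundary, n_unknown))  # "BEM influence" rows
A_field = rng.normal(size=(n_field, n_unknown))        # field-pressure rows
x_true = rng.normal(size=n_unknown)                    # true boundary solution

A = np.vstack([A_boundary, A_field])
b = A @ x_true + rng.normal(0.0, 0.01, n_boundary + n_field)  # noisy data

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The extra field rows are what make the inverse problem tolerant of the unspecified boundary conditions mentioned in the abstract: they trade exact satisfaction of every equation for a stable least-squares fit.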
NASA Astrophysics Data System (ADS)
Bi, Hui; Zhang, Bingchen; Hong, Wen
2016-07-01
The elevation image quality of tomographic synthetic aperture radar (TomoSAR) data depends mainly on the elevation aperture size, the number of baselines, and the baseline distribution. In TomoSAR, due to the restricted number of baselines with irregular distributions, the elevation imaging quality is always unacceptable using the conventional spectral analysis approach. Therefore, for a given limited number of irregular baselines, the completion of data for the unobserved virtual uniform baseline distribution should be addressed to improve the spectral-analysis-based TomoSAR reconstruction quality. We propose an Lq (0 < q ≤ 1) minimization-based data completion method for TomoSAR, which uses the geometric imaging relationship between the observed and unobserved baseline distributions. In the proposed method, we first estimate the transformation matrix between the acquisitions and the data of the virtual uniform baseline distribution by solving an optimization problem, before calculating the data for the virtual baseline distribution based on the acquisitions and the transformation matrix. Finally, the elevation reflectivity function is recovered using the spectral analysis method based on the estimated data. Compared with the results reconstructed only from the limited irregular acquisitions, the image recovered using the dataset with a virtual uniform baseline distribution can improve the elevation image quality in an efficient manner.
NASA Astrophysics Data System (ADS)
Shonkwiler, K. B.; Ham, J. M.; Williams, C.
2012-12-01
Panorama of a weather station (left) utilizing micrometeorological methods to aid in estimating emissions of methane and ammonia from an anaerobic livestock lagoon (center) at a commercial dairy in Northern Colorado, USA.
Inverse Functions and their Derivatives.
ERIC Educational Resources Information Center
Snapper, Ernst
1990-01-01
Presented is a method of interchanging the x-axis and y-axis for viewing the graph of the inverse function. Discussed are the inverse function and the usual proofs that are used for the function. (KR)
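The axis-interchange idea summarized above can be sketched numerically: the graph of the inverse function consists of the points of the original graph with their coordinates swapped, i.e. a reflection across the line y = x. The snippet below is an illustrative sketch, not from the ERIC document; all function names are assumptions.

```python
import numpy as np

def graph(f, xs):
    """Points (x, f(x)) on the graph of f."""
    xs = np.asarray(xs, dtype=float)
    return np.column_stack([xs, f(xs)])

def inverse_graph(f, xs):
    """Graph of the inverse function: the same points with x and y swapped."""
    return graph(f, xs)[:, ::-1]

# Swapping the coordinates of exp's graph yields log's graph:
pts = inverse_graph(np.exp, np.linspace(0.0, 2.0, 5))
```

Each (y, x) pair taken from the graph of exp is an (x, y) pair on the graph of log, which is exactly the reflection the abstract describes.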
Reconstructing the Nucleon-Nucleon Potential by a New Coupled-Channel Inversion Method
Pupasov, Andrey; Samsonov, Boris F.; Sparenberg, Jean-Marc; Baye, Daniel
2011-04-15
A second-order supersymmetric transformation is presented, for the two-channel Schrödinger equation with equal thresholds. It adds a Breit-Wigner term to the mixing parameter, without modifying the eigenphase shifts, and modifies the potential matrix analytically. The iteration of a few such transformations allows a precise fit of realistic mixing parameters in terms of a Padé expansion of both the scattering matrix and the effective-range function. The method is applied to build an exactly solvable potential for the neutron-proton ³S₁-³D₁ case.
The GLAS physical inversion method for analysis of HIRS2/MSU sounding data
NASA Technical Reports Server (NTRS)
Susskind, J.; Rosenfield, J.; Reuter, D.; Chahine, M. T.
1982-01-01
Goddard Laboratory for Atmospheric Sciences has developed a method to derive atmospheric temperature profiles, sea or land surface temperatures, sea ice extent and snow cover, and cloud heights and fractional cloud, from HIRS2/MSU radiance data. Chapter 1 describes the physics used in the radiative transfer calculations and demonstrates the accuracy of the calculations. Chapter 2 describes the rapid transmittance algorithm used and demonstrates its accuracy. Chapter 3 describes the theory and application of the techniques used to analyze the satellite data. Chapter 4 shows results obtained for January 1979.
Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V
2006-12-31
Based on the digital image analysis and inverse Monte-Carlo method, the proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)
NASA Astrophysics Data System (ADS)
Pujos, Cyril; Regnier, Nicolas; Mousseau, Pierre; Defaye, Guy; Jarny, Yvon
2007-05-01
Simulation quality is determined by the knowledge of the model parameters. Yet rheological models for polymers are often not very accurate, since viscosity measurements are made under approximations such as a homogeneous temperature assumption and empirical corrections such as the Bagley correction. Furthermore, rheological behavior is often described by mathematical laws such as the Cross or Carreau-Yasuda models, whose parameters are fitted to viscosity values obtained from corrected experimental data and are not tailored to each polymer. To correct these shortcomings, a table-like rheological model is proposed. This choice simplifies the estimation of the model parameters, since each parameter has the same order of magnitude. As the mathematical shape of the model is not imposed, the estimation process is appropriate for each polymer. The proposed method consists in minimizing the quadratic norm of the difference between calculated variables and measured data. In this study an extrusion die is simulated, in order to provide temperature along the extrusion channel, pressure and flow references. These data allow thermal transfer and flow phenomena, in which the viscosity is involved, to be characterized. Furthermore, the different natures of the data allow viscosity to be estimated over a large range of shear rates. The estimated rheological model improves the agreement between measurements and simulation: for numerical cases, the error on the flow becomes less than 0.1% for a non-Newtonian rheology. This method, which couples measurements and simulation, constitutes a very accurate means of rheology determination and improves the predictive ability of the model.
NASA Astrophysics Data System (ADS)
Jackson, Mike; Bowles, Julie A.; Lascu, Ioan; Solheid, Peat
2010-07-01
We explore the effects of sampling density, signal/noise ratios, and position-dependent measurement errors on deconvolution calculations for u channel magnetometer data, using a combination of experimental and numerical approaches. Experiments involve a synthetic sample set made by setting hydraulic cement in a 30-cm u channel and slicing the hardened material into ~2-cm lengths, and a natural lake sediment u channel sample. The cement segments can be magnetized and measured individually, and reassembled for continuous u channel measurement and deconvolution; the lake sediment channel was first measured continuously and then sliced into discrete samples for individual measurement. Each continuous data set was deconvolved using the ABIC minimization code of Oda and Shibuya (1996) and two new approaches that we have developed, using singular-value decomposition and regularized least squares. These involve somewhat different methods to stabilize the inverse calculations and different criteria for identifying the optimum solution, but we find in all of our experiments that the three methods converge to essentially identical solutions. Repeat scans in several experiments show that measurement errors are not distributed with position-independent variance; errors in setting/determining the u channel position (standard deviation ~0.2 mm) translate in regions of strong gradients into measurement uncertainties much larger than those due to instrument noise and drift. When we incorporate these depth-dependent measurement uncertainties into the deconvolution calculations, the resulting models show decreased stability and accuracy compared to inversions assuming depth-independent measurement errors. The cement experiments involved varying directions and uniform intensities downcore, and very good accuracy was obtained using all of the methods when the signal/noise ratio was greater than a few hundred and the sampling interval no larger than half the length scale of
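The regularized-least-squares flavor of deconvolution mentioned above can be sketched as a closed-form Tikhonov solve. The toy details below (a Gaussian sensor response, noise-free data, the regularization weight) are assumptions for illustration; this is neither the authors' code nor the ABIC method.

```python
import numpy as np

def deconvolve(m, G, lam):
    """Tikhonov-regularized least squares: minimize ||G s - m||^2 + lam ||s||^2."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ m)

# Toy forward model: a Gaussian sensor response (half-width ~2 samples)
# smears a two-spike magnetization along the core.
z = np.arange(50)
G = np.exp(-0.5 * ((z[:, None] - z[None, :]) / 2.0) ** 2)
s_true = np.zeros(50)
s_true[[15, 35]] = 1.0
m = G @ s_true                      # noise-free continuous measurement
s_est = deconvolve(m, G, lam=1e-4)  # spikes recovered near z = 15 and 35
```

The regularization weight `lam` plays the stabilizing role discussed in the abstract: too small and noise is amplified through the near-singular modes of G, too large and the recovered spikes are over-smoothed.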
da Silva, Wilton Pereira; E Silva, Cleide M D P S
2014-09-01
Cooling of fruits and vegetables, immediately after the harvest, has been a widely used method for maximizing post-harvest life. In this paper, an optimization algorithm and a numerical solution are used to determine simultaneously the convective heat transfer coefficient, h, and the thermal diffusivity, α, for an individual solid with cylindrical shape, using experimental data obtained during its cooling. To this end, the one-dimensional diffusion equation in cylindrical coordinates is discretized and numerically solved through the finite volume method, with a fully implicit formulation. This solution is coupled to an optimizer based on the inverse method, in which the chi-square referring to the fit of the numerical simulation to the experimental data is used as the objective function. The optimizer coupled to the numerical solution was applied to experimental data relative to the cooling of a cucumber. The obtained results for α and h were coherent with the values available in the literature. With the results obtained in the optimization process, the cooling kinetics of cucumbers was described in detail.
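The coupling of a forward model to a chi-square-minimizing optimizer can be illustrated with a much simpler stand-in forward model. The lumped Newton-cooling law, the grid search, and all parameter values below are assumptions for illustration, not the paper's finite-volume solution.

```python
import numpy as np

def forward(k, t, T0=25.0, T_inf=2.0):
    """Stand-in forward model: lumped Newton cooling T(t) = T_inf + (T0 - T_inf) e^{-kt}."""
    return T_inf + (T0 - T_inf) * np.exp(-k * t)

def fit_k(t, T_meas, sigma=0.1):
    """Inverse method: scan the unknown parameter k, minimizing chi-square."""
    ks = np.linspace(0.01, 1.0, 1000)
    chi2 = [np.sum(((forward(k, t) - T_meas) / sigma) ** 2) for k in ks]
    return ks[int(np.argmin(chi2))]

t = np.linspace(0.0, 10.0, 21)
T_synthetic = forward(0.3, t)        # synthetic "experiment" with k = 0.3
k_est = fit_k(t, T_synthetic)        # recovers k within the grid spacing
```

In the paper the same chi-square objective is minimized over two parameters (h and α) with the finite-volume solver as the forward model; only the forward model and search strategy differ from this sketch.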
Light curve inversion of asteroid (585) Bilkis with Lommel-Seeliger ellipsoid method
NASA Astrophysics Data System (ADS)
Wang, Ao; Wang, Xiao-Bin; Muinonen, Karri; Han, Xianming L.; Wang, Yi-Bo
2016-12-01
The basic physical parameters of asteroids, such as spin parameters, shape and scattering parameters, can provide us with information on the formation and evolution of both the asteroids themselves and the entire solar system. In a majority of asteroids, the disk-integrated photometry measurement constitutes the primary source of the above knowledge. In the present paper, newly observed photometric data and existing data on (585) Bilkis are analyzed based on a Lommel-Seeliger ellipsoid model. With a Markov chain Monte Carlo (MCMC) method, we have determined the spin parameters (period, pole orientation) and shape (b/a, c/a) of (585) Bilkis and their uncertainties. As a result, we obtained a rotational period of 8.5738209 h with an uncertainty of 9×10⁻⁷ h, and derived a pole of (136.46°, 29.0°) in the ecliptic frame of J2000.0 with uncertainties of 0.67° and 1.1° in longitude and latitude respectively. We also derived triaxial ratios b/a and c/a of (585) Bilkis as 0.736 and 0.70 with uncertainties of 0.003 and 0.03 respectively.
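The MCMC parameter estimation described above can be sketched, for a toy one-parameter problem, with a plain Metropolis sampler. The sinusoidal lightcurve model, noise level, starting point, and proposal scale below are all assumed for illustration and are unrelated to the actual Bilkis dataset or pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 30.0, 120)
P_true = 8.57
y = np.sin(2 * np.pi * t / P_true) + 0.05 * rng.standard_normal(t.size)

def log_post(P, sigma=0.05):
    """Gaussian log-likelihood (flat prior) for the toy lightcurve model."""
    return -0.5 * np.sum((y - np.sin(2 * np.pi * t / P)) ** 2) / sigma**2

P, samples = 8.5, []
lp = log_post(P)
for _ in range(5000):
    P_new = P + 0.01 * rng.standard_normal()   # random-walk proposal
    lp_new = log_post(P_new)
    if np.log(rng.random()) < lp_new - lp:     # Metropolis acceptance rule
        P, lp = P_new, lp_new
    samples.append(P)
samples = np.array(samples[1000:])             # discard burn-in
```

The posterior mean of `samples` recovers the period, and its standard deviation plays the role of the period uncertainty quoted in the abstract.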
NASA Astrophysics Data System (ADS)
Belkebir, Kamal; Tijhuis, Anton G.
2001-12-01
This paper concerns the reconstruction of the complex relative permittivity of an inhomogeneous object from the measured scattered field. The parameter of interest is retrieved using iterative techniques. Four methods are considered, in which the permittivity is updated along the standard Polak-Ribière conjugate gradient directions of a cost functional. The difference lies in the update direction for the field, and the determination of the expansion coefficients. In the modified gradient method, the search direction is the conjugate gradient direction for the field, and the expansion coefficients for field and profile are determined simultaneously. In the Born method (BM) the field is considered as the fixed solution of the forward problem with the available estimate of the unknown permittivity, and only the profile coefficients are determined from the cost function. In the modified Born method, we use the same field direction as in the BM, but determine the coefficients for field and profile simultaneously. In the modified² gradient method, we use both field directions, and again update all coefficients simultaneously. Examples of the reconstruction of either metal or dielectric cylinders from experimental data are presented and the methods are compared for a range of frequencies.
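The Polak-Ribière update shared by the four methods above has a standard textbook form. The sketch below applies it, with exact line search, to a small quadratic problem; the quadratic and its coefficients are assumptions for illustration, and the paper's cost functional is not reproduced.

```python
import numpy as np

def polak_ribiere_direction(g_new, g_old, d_old):
    """Polak-Ribiere (PR+) conjugate-gradient search direction."""
    beta = g_new @ (g_new - g_old) / (g_old @ g_old)
    beta = max(beta, 0.0)              # restart safeguard
    return -g_new + beta * d_old

# Exact-line-search demo on a 2x2 quadratic f(x) = 0.5 x.T A x - b.T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = np.zeros(2)
g = A @ x - b                          # gradient of f
d = -g
for _ in range(2):                     # CG converges in n = 2 steps here
    alpha = -(g @ d) / (d @ A @ d)     # exact line search for a quadratic
    x = x + alpha * d
    g_new = A @ x - b
    d = polak_ribiere_direction(g_new, g, d)
    g = g_new
# x now satisfies A x = b
```

In the paper the same direction formula is applied to the permittivity (and, depending on the variant, the field) expansion coefficients rather than to a plain vector unknown.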
NASA Astrophysics Data System (ADS)
Ren, Cong
Nowadays, micro-tubular solid oxide fuel cells (MT-SOFCs), especially anode-supported MT-SOFCs, have been extensively developed for SOFC stack design, with potential applications as portable power sources and vehicle power supplies. To prepare MT-SOFCs with high electrochemical performance, one of the main strategies is to optimize the microstructure of the anode support. Recently, a novel phase inversion method has been applied to prepare anode supports with a unique asymmetrical microstructure, which can improve the electrochemical performance of MT-SOFCs. Since several process parameters of the phase inversion method can influence the pore formation mechanism and final microstructure, it is essential and necessary to systematically investigate the relationship between the phase inversion process parameters and the final microstructure of the anode supports. The objective of this study is to correlate the process parameters with the microstructure and to prepare MT-SOFCs with enhanced electrochemical performance. The non-solvent, which is used to trigger the phase separation process, can significantly influence the microstructure of anode supports fabricated by the phase inversion method. To investigate the mechanism by which the non-solvent affects the microstructure, water and ethanol/water mixtures were selected for the fabrication of NiO-YSZ anode supports. The presence of ethanol in the non-solvent can inhibit the growth of finger-like pores in the tubes. With increasing ethanol concentration in the non-solvent, a relatively dense layer can be observed both on the outside and on the inside of the tubes. The mechanism of pore growth and the morphology obtained using a non-solvent with a high ethanol concentration is explained by the inter-diffusivity between solvent and non-solvent: a solvent/non-solvent pair with a larger Dm value favors the growth of finger-like pores. Three cells with different anode geometries were
The planetary nebula Abell 48 and its [WN] nucleus
NASA Astrophysics Data System (ADS)
Frew, David J.; Bojičić, I. S.; Parker, Q. A.; Stupar, M.; Wachter, S.; DePew, K.; Danehkar, A.; Fitzgerald, M. T.; Douchin, D.
2014-05-01
We have conducted a detailed multi-wavelength study of the peculiar nebula Abell 48 and its central star. We classify the nucleus as a helium-rich, hydrogen-deficient star of type [WN4-5]. The evidence for either a massive WN or a low-mass [WN] interpretation is critically examined, and we firmly conclude that Abell 48 is a planetary nebula (PN) around an evolved low-mass star, rather than a Population I ejecta nebula. Importantly, the surrounding nebula has a morphology typical of PNe, and is not enriched in nitrogen, and thus not the `peeled atmosphere' of a massive star. We estimate a distance of 1.6 kpc and a reddening, E(B - V) = 1.90 mag, the latter value clearly showing the nebula lies on the near side of the Galactic bar, and cannot be a massive WN star. The ionized mass (~0.3 M⊙) and electron density (700 cm⁻³) are typical of middle-aged PNe. The observed stellar spectrum was compared to a grid of models from the Potsdam Wolf-Rayet (PoWR) grid. The best-fitting temperature is 71 kK, and the atmospheric composition is dominated by helium with an upper limit on the hydrogen abundance of 10 per cent. Our results are in very good agreement with the recent study of Todt et al., who determined a hydrogen fraction of 10 per cent and an unusually large nitrogen fraction of ~5 per cent. This fraction is higher than any other low-mass H-deficient star, and is not readily explained by current post-AGB models. We give a discussion of the implications of this discovery for the late-stage evolution of intermediate-mass stars. There is now tentative evidence for two distinct helium-dominated post-AGB lineages, separate to the helium- and carbon-dominated surface compositions produced by a late thermal pulse. Further theoretical work is needed to explain these recent discoveries.
Müller, David; Cattaneo, Stefano; Meier, Florian; Welz, Roland; de Vries, Tjerk; Portugal-Cohen, Meital; Antonio, Diana C; Cascio, Claudia; Calzolai, Luigi; Gilliland, Douglas; de Mello, Andrew
2016-04-01
We demonstrate the use of inverse supercritical carbon dioxide (scCO2) extraction as a novel method of sample preparation for the analysis of complex nanoparticle-containing samples, in our case a model sunscreen agent with titanium dioxide nanoparticles. The sample was prepared for analysis in a simplified process using a lab scale supercritical fluid extraction system. The residual material was easily dispersed in an aqueous solution and analyzed by Asymmetrical Flow Field-Flow Fractionation (AF4) hyphenated with UV- and Multi-Angle Light Scattering detection. The obtained results allowed an unambiguous determination of the presence of nanoparticles within the sample, with almost no background from the matrix itself, and showed that the size distribution of the nanoparticles is essentially maintained. These results are especially relevant in view of recently introduced regulatory requirements concerning the labeling of nanoparticle-containing products. The novel sample preparation method is potentially applicable to commercial sunscreens or other emulsion-based cosmetic products and has important ecological advantages over currently used sample preparation techniques involving organic solvents.
Zhou, Qiaoxuan; Cadwallader, Keith R
2004-10-06
An inverse gas chromatographic (IGC) method was developed to study the binding interactions between selected volatile flavor compounds and soy protein isolate (SPI) under controlled relative humidity (RH). Three volatile probes (hexane, 1-hexanol, and hexanal) at very low levels were used to evaluate and validate system performance. On the basis of the thermodynamic data and the isotherms measured at 0% RH, 1-hexanol and hexanal had higher binding affinities than hexane, which could be attributed to hydrogen-bonding interactions with SPI. At 30% RH, 1-hexanol and hexanal were retained less than at 0% RH, indicating possible competition for binding sites on the SPI surface between water and volatile probe molecules. Results showed that the thermodynamic data determined were comparable to the available literature values. Use of IGC allowed for the rapid and precise generation of sorption isotherms. Repeatability between replicate injections and reproducibility across columns were very good. IGC is a potentially high-throughput method for the sensitive, precise, and accurate measurement of flavor-ingredient interactions in low-moisture food systems.
Zhu, Bing; Chen, Yizhou; Zhao, Jian
2014-01-01
An integrated chassis control (ICC) system with active front steering (AFS) and yaw stability control (YSC) is introduced in this paper. The proposed ICC algorithm uses the improved Inverse Nyquist Array (INA) method based on a 2-degree-of-freedom (DOF) planar vehicle reference model to decouple the plant dynamics under different frequency bands, and changes of velocity and cornering stiffness were considered to calculate the analytical solution in the precompensator design, so that the INA-based algorithm runs well and fast on the nonlinear vehicle system. The stability of the system is guaranteed by the dynamic compensator together with a proposed PI feedback controller. After a response analysis of the system in the frequency and time domains, simulations under a step steering maneuver were carried out using a 2-DOF vehicle model and a 14-DOF vehicle model in Matlab/Simulink. The results show that the system is decoupled and the vehicle handling and stability performance are significantly improved by the proposed method.
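The 2-DOF planar ("bicycle") reference model named above can be written down directly as a linear state-space system. The parameter values in this sketch are illustrative assumptions, not values from the paper.

```python
import numpy as np

# 2-DOF planar ("bicycle") vehicle reference model; all numeric
# parameters below are illustrative assumptions.
m, Iz = 1500.0, 2500.0        # mass [kg], yaw inertia [kg m^2]
lf, lr = 1.2, 1.4             # CG-to-axle distances [m]
Cf, Cr = 8.0e4, 8.0e4         # front/rear cornering stiffness [N/rad]
v = 20.0                      # forward speed [m/s]

# States: sideslip angle and yaw rate; inputs: front steer angle
# (from AFS) and direct yaw moment (from YSC).
A = np.array([
    [-(Cf + Cr) / (m * v), -1.0 + (Cr * lr - Cf * lf) / (m * v**2)],
    [(Cr * lr - Cf * lf) / Iz, -(Cf * lf**2 + Cr * lr**2) / (Iz * v)],
])
B = np.array([
    [Cf / (m * v), 0.0],
    [Cf * lf / Iz, 1.0 / Iz],
])

poles = np.linalg.eigvals(A)  # open-loop plant poles at this speed
```

Because both A and B depend on the speed v (and on the cornering stiffnesses), the precompensator in the abstract must be recomputed as these quantities change, which is what the analytical solution mentioned there provides.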